22:59 GMT+3, 19 November 2018
    Google chief executive Sundar Pichai

    Fly in the Ointment: Google CEO and Other Tech Luminaries Highlight Risks of AI

    © Flickr/ Maurizio Pesce

    Though most of us increasingly hail AI advances and sometimes pin our hopes on their incredible life-changing potential, many technical specialists and scientists share a number of key concerns about AI ethics. Google CEO Sundar Pichai and Apple’s Tim Cook are the latest high-profile figures to do so.

    Google CEO Sundar Pichai, January 23, 2018

    In an interview with Recode and MSNBC aired on January 26, search giant head Sundar Pichai likened AI to fire, saying it is very useful but needs to be treated with caution.

    "AI is one of the most important things that humanity is working on," Pichai said. "It’s more profound than, I don’t know, electricity or fire. [While fire is good] it kills people, too. They learn to harness fire for the benefits of humanity, but we will have to overcome its downsides, too."

    Meanwhile, DeepMind, the Google-owned AI company, has announced plans to delve into AI ethics and try to shed light on the matter in its upcoming research projects.


    Pichai underscored potential advancements in AI technology like those that could be used to cure cancer, according to Android Authority. Humanity should, however, be cautious about progress in the field, Pichai said.

    So, let’s take a closer look at a rundown of others who have issued warnings about AI's potential dangers; Apple’s Tim Cook is perhaps the most notable.

    Apple Inc. CEO Tim Cook, December 3, 2017

    Apple CEO Tim Cook
    © AFP 2018 / Kimihiro Hoshino

    Taking the floor at the World Internet Conference in Wuzhen, China, Apple Inc. CEO Tim Cook said he isn't worried about artificial intelligence proper, but about people who think like machines, adding that technology will empower us greatly, provided it is in harmony with humanity at large.

    "I’m not worried about machines that think as people, I worry about people who think like machines. We need to work together to introduce technology to humanity," he said.

    Nevertheless, he went on to say that many aspects of everyday life will change soon, allowing us to prosper from AI advances:

    "Technologies can change the world for the better, if they are embedded in humanity. We believe that artificial intelligence will be able to embroider a person’s ability and help to make a breakthrough that transforms our lives in education, in access to health services and in countless other areas."

    Cook’s approach appeared to parallel Russian President Vladimir Putin’s earlier statements, in which he declared that artificial intelligence centered on human cognition is what the future holds for Russia and the whole world, with no one to be left "at the end of the line." The statement promptly drew a concerned reply from Tesla CEO Elon Musk.

    SpaceX, Tesla CEO Elon Musk, February — September 2017

    In this May 29, 2014 photo, Elon Musk, CEO and CTO of SpaceX, introduces the SpaceX Dragon V2 spaceship at the SpaceX headquarters in Hawthorne, Calif.
    © AP Photo / Jae C. Hong, file

    Speaking out publicly after Russian President Putin’s remarks on who, or rather what, will rule the world, Musk said artificial intelligence could be humanity’s greatest existential threat, in that it could start a third world war.

    Musk clarified that he was not just concerned about the prospect of a world leader starting the war, but also of an approach "that a [pre-emptive] strike is [the] most probable path to victory."

    He also said that AI is "vastly more risky" than North Korea’s growing nuclear ambitions.

    Musk’s fear of AI warfare has driven many of his public warnings. Earlier, for instance, he was one of some 100 signatories calling for a UN-initiated ban on lethal autonomous weapons.

    Separately, during his talk at the World Government Summit in Dubai in February 2017, Musk brought up the issue of "deep AI" which goes beyond driverless cars to what he dubbed "artificial general intelligence." He described the latter as AI that is "smarter than the smartest human on Earth" and called it a "dangerous situation."

    Physicist Stephen Hawking, November 6, 2017

    Stephen Hawking
    © AFP 2018 / NIKLAS HALLE'N

    During a talk at the Web Summit technology conference in Lisbon, Portugal, in early November, the renowned physicist asserted that the emergence of artificial intelligence could be the "worst event in the history of our civilization." He warned against the potential danger of computers and machines, saying they are capable of "emulating human intelligence and even exceeding it," CNBC reported.

    He admitted the future was shrouded in uncertainty, essentially with two possible outcomes:

    "Success in creating effective AI, could be the biggest event in the history of our civilization. Or the worst. We just don't know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it," Hawking said during the speech.

    "Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy."

    Hawking suggested that preventing the potentially negative effects would require competent management and effective skills – "best practice," as he referred to it.

    Microsoft Founder Bill Gates, February 2017

    Microsoft Corp. founder Bill Gates
    © AP Photo / Ted S. Warren

    Turning to the future of the labor market, Microsoft founder Bill Gates argued that since robots are expected to take over human jobs, they should also pay income tax.

    "If a human worker does $50,000 of work in a factory, that income is taxed," said Gates in a recent interview with Quartz editor-in-chief Kevin Delaney. "If a robot comes in to do the same thing, you'd think we'd tax the robot at a similar level."

    Citing the coming displacement of human jobs, along with a number of positions where humans remain irreplaceable, he argued that such tax revenue is needed to train and educate workers, as well as to make "new, irreplaceable jobs available to human workers."
