22:57 GMT+3, 18 October 2018

    Google Vows Not to Use AI for Weaponry, to Retain Ties With Military

    CC0 / Pixabay

    In the wake of a fierce debate over Google's involvement in a US military drone project, the IT titan has delivered on its promise to outline a set of principles governing its work with artificial intelligence (AI).

    The document, released under the name "Artificial Intelligence at Google: our Principles," notably details the company's firm intention not to embark on weapons-development projects but rather to pursue peaceful applications, and sets out its views on privacy, bias and other concerns.

    However, the new principles suggest that Google will continue to collaborate with the military "in many areas," excluding AI surveillance projects that may violate internationally recognized human rights principles.

    READ MORE: Google Drone Project Skirts US Regulations With a Little Help From NASA

    This means the company is committed to remaining accountable to humans and subject to their control, as well as to upholding "high standards of scientific excellence" and introducing privacy guarantees. In an interview with The Verge, a company representative notably stated that if Google had outlined the principles earlier, it would hardly have become involved in the US Department of Defense's drone program, which used AI to analyze surveillance data.

    Google CEO Sundar Pichai further reiterated the company's goals in a blog post, stating that its ultimate aim is to make IT products even more practical, citing spam-free email and a digital assistant that lends users a helping hand in everyday tasks.

    Although Google explicitly stated that it will honor its business ties with the Pentagon, it will end its involvement with the controversial Project Maven when the contract expires next year. However, according to a Google representative cited by The Verge, the company is set to continue pursuing such contracts where they do not violate its new principles. Under the project, Google was tasked with mapping low-resolution objects using AI, reportedly in a bid to win a promising Pentagon contract worth about $10 billion, which is also being sought after by top IT companies such as IBM and Microsoft.

    AI has repeatedly come under scrutiny in recent years amid a broad ethical debate over its basic principles and the grim, technology-dominated future it is feared to deliver. World-renowned scientists and entrepreneurs, including Elon Musk and Stephen Hawking, have warned on more than one occasion about the perils of AI: Musk, taking on the role of a doomsayer, called AI humanity's "biggest existential threat," with mankind unwittingly defeating itself while racing for something revolutionary. Separately, the late physicist and popularizer of science Stephen Hawking voiced concerns that artificial intelligence could one day reach a point where it becomes superior to humans, constituting "a new form of life."

    READ MORE: Cook Vs. Hawking, Musk: Apple CEO Fears Machine-Like Humans, Not AI

    Related:

    Facebook, Google Sued Over Failure to Disclose Data on Political Ads
    Killer Intelligence: Google Assistant Capable of Shooting You to Death
    George Osborne's Evening Standard Sells 'Favorable' Coverage to Google, Uber
    Google to End Drone Program With Pentagon in 2019 – Reports
    Limited Menu, Friendly Roommate, Would Visit Again: Google Reviews of UK Prisons
    Tags:
    AI, principles, humanity, robotics, technology, robots, Google, Stephen Hawking, Elon Musk, United States