09:19 GMT, 15 June 2021

    Members of an advisory board appointed by the US Department of Defense (DoD) recently proposed a set of ethical principles for artificial intelligence (AI) use during war at a Georgetown University conference.

    The members of the Defense Innovation Board (DIB) conducted numerous studies and consulted many experts in the AI field before crafting their list of ethical principles. 

    Under their proposed guidelines, the DIB suggests that AI use in warfare be equitable, traceable, reliable and governable. The DIB also noted that human beings should be responsible and “exercise appropriate levels of judgment” in the use of DoD AI systems. It is now up to the DoD to either accept, alter or outright reject the recommendations put forth by the DIB.

    “DoD should take deliberate steps to avoid unintended bias in the development and deployment of combat or noncombat AI systems that would inadvertently cause harm to persons,” one of the principles reads.

    The board members also urged the DoD to continue researching and developing AI systems, as they are still rapidly changing.

    "The valuable insights from the DIB are the product of 15 months of outreach to commercial industry, the government, academia and the American public," said Air Force Lt. Gen. John N.T. Shanahan, director of the Joint Artificial Intelligence Center (JAIC), according to a Friday DoD statement.

    "The DIB's recommendations will help enhance the DoD's commitment to upholding the highest ethical standards as outlined in the DoD AI Strategy, while embracing the US military's strong history of applying rigorous testing and fielding standards for technology innovations,” the statement adds.

    The DoD has been making strides in its use of AI through the JAIC. According to its website, the center’s mission is to “transform the DoD by accelerating the delivery and adoption of AI to achieve mission impact at scale.” 

    However, the DoD’s moves to implement AI have been met with criticism from some quarters. 

    In April 2018, more than 4,000 Google employees signed a letter to CEO Sundar Pichai, asking him to end the company’s involvement in an AI program run by the Pentagon. The program, established in April 2017 and dubbed Project Maven, involves developing AI software for analyzing drone footage collected by the DoD. After intense scrutiny, Google decided not to renew its Project Maven contract with the Pentagon, although it has since partnered with the Pentagon’s Defense Advanced Research Projects Agency (DARPA) to work on the AI Next program, a noncombat use of the technology.

    In August, the DoD expressed concern about falling behind China in terms of AI technology, particularly given US tech giants’ reservations about working with the Pentagon.

    “If we do not find a way to strengthen the bonds between the United States government and industry and academia, then I would say we do have the real risk of not moving as fast as China when it comes to [AI],” Shanahan said during a press briefing.

    Georgetown’s Center for Security and Emerging Technology has been collaborating with the University’s School of Foreign Service to advise policymakers on artificial intelligence and national security.

