The magazine Wired has quoted an unnamed spokesperson for the UK-based artificial intelligence (AI) research company DeepMind as urging society to discuss acceptable approaches to dealing with the issue of AI weapons.
"The establishment of shared norms around responsible use of AI is crucial. […] We take a thoughtful and responsible approach to what we publish", the spokesperson pointed out.
The remarks came after an AI-controlled F-16 fighter jet recently defeated a human F-16 pilot 5-0 in a virtual dogfight hosted by the US Defense Advanced Research Projects Agency (DARPA). Notably, the algorithm that flew the virtual warplane was earlier popularised by DeepMind.
Wired, in turn, underlined that the air battle indicated AI's potential "to take on mission-critical military tasks that were once exclusively done by humans".
At the same time, the dogfight showed that DeepMind is currently "caught between two conflicting desires", according to the magazine.
"The company doesn't want its technology used to kill people. On the other hand, publishing research and source code helps advance the field of AI and lets others build upon its results. But that also allows others to use and adapt the code for their own purposes", Wired argued.
In project-related papers released in August 2019, DARPA explained that developing such "cyber physical systems" (CPS) currently takes an enormous amount of time and resources, whereas AI would shrink the gap between a system's inception and its deployment "from years to months".
In 2015, DeepMind was one of the first companies to sign an open letter calling on the world's governments to ban work on AI weapons. The petitioners, among them Stephen Hawking, Elon Musk, and Jack Dorsey, specifically urged governments to agree on laws and regulations to effectively prohibit the development and construction of AI-based autonomous "killer robots".