The US military should be given more resources to invest in technology that can identify and disarm the potential “national security risk” posed by deepfakes, Pentagon Joint Artificial Intelligence Center director Lt. Gen. Jack Shanahan has said.
Speaking at a conference dedicated to AI at the Johns Hopkins Applied Physics Laboratory in Laurel, Maryland last week, Shanahan alleged that US adversaries had already used other ‘disinformation tools’ in previous elections to “cause friction and chaos,” with deepfakes becoming another tool in their arsenal.
“We saw strong indications of how this could play out in the 2016 election, and we have every expectation that – if left unchecked – it will happen to us again,” he said, his comments quoted by intelligence defence publication C4ISRNET.
“As a department, at least speaking for the Defence Department, we’re saying it’s a national security problem as well. We have to invest a lot in it. A lot of commercial companies are doing these every day. The level of sophistication seems to be exponential,” Shanahan added.
The Pentagon official pointed to the Defence Advanced Research Projects Agency’s Media Forensics program as one way that the military is already tackling the issue.
“It’s coming up with ways to tag and call out [disinformation],” Shanahan said. Once completed, the DARPA project is expected to allow the military to detect manipulation of images and video, and even to determine how the fakes were created.
DARPA isn’t the only agency working on technology to challenge deepfakes – a form of computer-assisted fakery that uses machine learning algorithms to create hyper-realistic fake content through face-swapping technology. Last month, Sputnik reported that researchers from the University of California at Riverside and tech R&D firm Mayachitra had teamed up to create a novel deep-learning architecture to detect content-changing manipulation based on tiny distortions invisible to the human eye.
It’s feared that in addition to their ability to change people’s perceptions ahead of elections, deepfakes can also influence target group behaviour for purposes of psychological warfare, with end goals ranging from provoking a financial panic to starting wars. However, it remains unclear how successful DARPA’s counter-deepfake efforts will be, and whether the DoD itself may try to use the technology for its own ends.
Deepfakes first came to prominence in 2017, raising grave concerns about the ability to use the manipulations to create fake news, fake pornographic videos featuring politicians or celebrities, and other malicious content.
In addition to worries about deepfakes, the Pentagon’s Joint Artificial Intelligence Center expressed concerns this past week about the power of China’s ‘military-civil fusion’ strategy – which promotes partnerships between the civilian technology sector and the military. The US, Shanahan said, has nothing similar in place, and risks falling behind China on AI in the long run.