Facebook Unveils Its Defence Against The Deepfake Threat

Many experts have been sounding the alarm that “deepfake” videos manipulating reality are becoming ever-more sophisticated, generating the potential for new kinds of misinformation, with devastating consequences.

Facebook announced on 5 September that it is gearing up to take on the challenge of videos doctored with artificial intelligence, often called "deepfakes", and is commissioning the creation of its own deepfake clips, which will be used to make a data set, reports CNN Business.

The company hopes the data set will ultimately be used for testing and benchmarking deepfake detection tools aimed at flagging AI-forged videos online and stopping them from spreading.
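
Neither Facebook nor CNN Business specifies how submissions would be scored against such a data set, but the basic idea of benchmarking a detector on labelled videos can be sketched roughly as follows. The scores, labels and threshold in this Python sketch are invented for illustration and do not reflect the actual data set or challenge rules.

```python
# Hypothetical sketch: scoring a deepfake detector against a labelled data set.
# The detector is assumed to output a per-video "probability of being fake";
# all scores, labels and the 0.5 threshold below are invented for illustration.

def benchmark(detector_scores, labels, threshold=0.5):
    """Compare detector scores with ground-truth labels (1 = deepfake, 0 = genuine)."""
    predictions = [1 if score >= threshold else 0 for score in detector_scores]
    pairs = list(zip(predictions, labels))
    true_positives = sum(p == 1 and y == 1 for p, y in pairs)
    false_positives = sum(p == 1 and y == 0 for p, y in pairs)
    false_negatives = sum(p == 0 and y == 1 for p, y in pairs)
    return {
        "accuracy": sum(p == y for p, y in pairs) / len(labels),
        "precision": true_positives / max(true_positives + false_positives, 1),
        "recall": true_positives / max(true_positives + false_negatives, 1),
    }

# Invented example: detector scores for four labelled clips.
scores = [0.92, 0.15, 0.61, 0.08]
labels = [1, 0, 1, 0]
print(benchmark(scores, labels))
```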

According to Facebook’s CTO, Mike Schroepfer, deepfakes are advancing rapidly, underscoring the urgency of devising effective tools to tackle them.

“We have not seen this as a huge problem on our platforms yet, but my assumption is if you increase access—make it cheaper, easier and faster to build these things—it clearly increases the risk that people will use this in some malicious fashion. I don’t want to be in a situation where this is a massive problem and we haven’t been investing massive amounts in R&D,” Schroepfer said on Thursday.

The project is part of a Facebook-sponsored competition dubbed the Deepfake Detection Challenge, which will offer grants and awards to spur participation from AI researchers.

The company is offering more than $10 million and collaborating with a number of organisations on the competition, including Microsoft, universities such as MIT and the University of California, Berkeley, and the Partnership on AI, a nonprofit research and policy organisation.

The videos will feature paid actors doing and saying routine things, according to Schroepfer, who spoke with reporters on 4 August.

Facebook plans to release the data set in December.

Schroepfer said the ultimate goal of the competition is to advance the creation of an AI system that could effectively determine which videos have been altered.

Currently, the problem is being studied by researchers and several startups, and a number of methods for flagging deepfakes already exist. These include examining a video for telltale signs such as out-of-place shadows and strange visual artefacts, but the fast-evolving technology is making this increasingly difficult.
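
As a rough illustration of what artefact-based flagging can look like in practice, the sketch below checks how much the fine-grained noise pattern of a clip changes between consecutive frames, a crude stand-in for the "strange visual artefacts" heuristic mentioned above. This is not Facebook's method or any production detector: the synthetic frames, threshold and scoring rule are invented for illustration, and the only assumed dependency is NumPy.

```python
import numpy as np

# Crude, illustrative heuristic: blended or synthesised face regions often carry
# a noise pattern that is inconsistent from frame to frame. We approximate each
# frame's "noise" as its difference from a box-blurred copy, then flag the clip
# if that residual changes abruptly between consecutive frames.

def noise_residual(frame):
    """High-frequency residual: the frame minus a simple 3x3 box-blurred copy."""
    padded = np.pad(frame.astype(float), 1, mode="edge")
    blurred = sum(
        padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    return frame - blurred

def artefact_score(frames):
    """Mean absolute change of the noise residual across consecutive frames."""
    residuals = [noise_residual(f) for f in frames]
    diffs = [np.abs(a - b).mean() for a, b in zip(residuals, residuals[1:])]
    return float(np.mean(diffs))

# Synthetic stand-ins for 64x64 greyscale video frames, for demonstration only.
rng = np.random.default_rng(0)
base = rng.uniform(0, 255, size=(64, 64))
consistent_clip = [base + rng.normal(0, 2, size=base.shape) for _ in range(10)]
erratic_clip = [base + rng.normal(0, 20, size=base.shape) for _ in range(10)]

THRESHOLD = 5.0  # invented cut-off for this toy example
for name, clip in [("consistent", consistent_clip), ("erratic", erratic_clip)]:
    score = artefact_score(clip)
    print(f"{name}: score={score:.2f} flagged={score > THRESHOLD}")
```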

"To set expectations, this is a really, really hard problem," Schroepfer said.

Deepfakes are becoming cheaper, faster, and easier to make, leaving experts increasingly alarmed.

Hany Farid, a professor at UC Berkeley and an image-forensics expert whose lab received a grant from Facebook linked to its deepfake detection research, believes the competition will be instrumental in tackling this important problem.

However, Farid warned Facebook against complacency once the tools have been developed, as any technological solution must change over time.

"It's always evolving because our adversaries are always evolving," he said.

The expert also underscored that Facebook would need to make decisions about its policies on false videos; Schroepfer said the company was "figuring out in parallel" what its rules on misinformation and deepfakes should be.

Deepfake, a portmanteau of "deep learning" and "fake", is an AI-based technology used to produce or alter video content so that it depicts something that did not actually happen.

The term derives from a Reddit user known as "deepfakes", who in December 2017 used deep learning technology to superimpose celebrities’ faces onto people in pornographic video clips.

Facebook has often come under fire for reportedly failing to stop the spread of misinformation and hate speech, with deepfakes now representing a new, albeit largely hypothetical, challenge.

While it has long been possible for movie studios to manipulate images and video with software and computers, the rise of deepfakes has been driven by recent advances in machine learning.

Although methods for spotting forged media exist, they often involve painstaking expert analysis, with tools for identifying deepfakes automatically only just emerging.
