Jordan Peterson, a prominent public intellectual and free speech champion, has issued an ominous warning about the perils of deepfake trickery after becoming yet another victim of the nascent technology.
In a post on his website on Thursday, the self-help guru spoke out about audio-spoofing websites and fake videos created by machine-learning algorithms, which can make it appear that a person has said or done something they never did.
“It’s hard to imagine a technology with more power to disrupt,” he wrote. “I’m already in the position (as many of you soon will be as well) where anyone can produce a believable audio and perhaps video of me saying absolutely anything they want me to say. How can that possibly be fought?”
“How are we going to trust anything electronically-mediated in the very near future (say, during the next Presidential election)?” he asked.
A Call for Action
Peterson has called on legislators to criminalise the production of deepfakes, making it illegal to use them for defamation or deception.
He continued: “And it seems to me that we should perhaps throw caution to the wind, and make this an exceptionally wide-ranging law. We need to seriously consider the idea that someone’s voice is an integral part of their identity, of their reality, of their person—and that stealing that voice is a genuinely criminal act, regardless (perhaps) of intent.”
A Trustless Future
If the problem of deepfakes is not addressed, he warned, the result would be a future in which direct personal contact is the only credible source of information – a shift that would mean the inevitable decline of mass media.
“I can’t imagine what the world will be like when we will truly be unable to distinguish the real from the unreal, or exercise any control whatsoever on what videos reveal about behaviours we never engaged in, or audio avatars broadcasting any opinion at all about anything at all,” he added.
“Wake up. The sanctity of your voice, and your image, is at serious risk. It’s hard to imagine a more serious challenge to the sense of shared, reliable reality that keeps us linked together in relative peace. The Deep Fake artists need to be stopped, using whatever legal means are necessary, as soon as possible.”
An AI Peterson
The charismatic professor has become the model for a deepfake voice simulator, notjordanpeterson.com, which converts any written message into an audio clip that sounds precisely like him. His numerous public appearances have also been used to train AI algorithms that make his digital model rap like Eminem and read feminist rants.
Notjordanpeterson.com was taken down after just one week of operation. According to its creator, US software engineer Chris Vigorito, the decision came in response to Peterson’s post and “out of respect” for him.
“I'm encouraged by the many accounts I received from people who were interested in using this technology in positive ways, and I'm cautiously optimistic that we will be able to use technologies like this for the better, while minimising their potential harm,” reads the statement on the AI voice website.
The deepfake problem drew international attention in June with the rise and fall of DeepNude, an app that could “undress” women in photos by superimposing the face of any woman onto an AI-generated naked body.
US lawmakers have been trying to take proactive steps on the issue, introducing a bipartisan bill in June that would require the Department of Homeland Security to conduct an annual study of deepfakes and similar content.
“Deepfakes can be used to manipulate reality and spread misinformation quickly. In an era where we have more information available at our fingertips than ever, we have to be vigilant about making sure that information is reliable and true in whichever form it takes,” said Senator Cory Gardner, one of the four co-sponsors of the bill.