Canadian clinical psychologist, best-selling author, and culture warrior Jordan Peterson has tweeted a YouTube video warning of the danger deepfakes pose to current and future generations, just days after an audio-spoofing website replicating his voice was discovered online.
The video, posted on the YouTube channel The Thinkery, addressed the "NotJordanPeterson" neural network, which can be made to say anything one wants in the professor's voice, and also examined some potentially disturbing consequences of the further development of such technology.
In the clip, titled "Deepfakes Will Destroy Our Information Ecosphere", the vlogger cited an article about "the world's top deepfake artist", who was working to solve a problem he had created in the first place, and suggested that the AI poses a real threat to Western democracies.
"The way our democracies function relies on the fact that we can be sure that a piece of media is reliable, as in this is a video recording or an audio recording, and that this is why it's so nefarious when media outlets clip a piece of audio or slice it together, or like change camera and cut out a certain segment of what is being said. But what happens when they don't need to do that and they can just literally f*cking make up your statements for you?" the content creator said.
While his point of view has found much support on Twitter, many of Peterson's followers believe that the technology could bring a positive change, as people would start questioning everything they see or hear, thereby developing critical thinking skills:
Actually this is great... in the future everything will be considered fake first... It is actually a sigh of relief... Only a live broadcast confirmed from a reliable source will matter. Or just tweets??? No voice no video...#deepfake— Everyday Patriot 🇺🇸 (@GlendalePatriot) August 20, 2019
It might be good, it could motivate ppl to start questioning what they're being shown & develop some critical thinking skills.... pic.twitter.com/Lxy0Exar8N— Partoftheproblem (@Problemspartof) August 20, 2019
One non-obvious benefit is that we might focus more on what is said rather than who is saying it.— C.S. Wright (@maximumchars) August 20, 2019
I think it should already be a rule to trust nothing on the internet. Believe nothing 100% unless you see it with your own eyes in real life - and even then, some skepticism is good.— Mostly-Metal (@Metal_Mick_) August 20, 2019
On the plus side, lots of us, who hear the wisdom in your words. Now can create our own tailor made wisdom & positivity boosts to our names & own targetted words of support but in the voice of JP..— Aarran Oliver (@AarranOliver) August 20, 2019
I find this completely disturbing and terrifying.— ɓℓσɠเɳƭɦεωɦεεℓ🌹ℓเƒε (@bloginthewheel) August 20, 2019
It feels like we're doomed.
How will we ever know what's true?
Others couldn't resist putting the neural network to use and made their own deepfakes, including one featuring a "never-before-heard" 2Pac-Peterson collab:
IDK I am pretty content with my new voice mail message of you saying I must be busy cleaning my room and will get back to them when I take care of my personal responsibilities— Kyle J Surber (@Whathuh15) August 19, 2019
I’m going to have JBP sing Despacito to me and no one can stop me.— Clayton Wilbury (@daybreakerrr) August 19, 2019
The US government has already expressed concern about how deepfakes could be used to spread convincing fake news ahead of the 2020 presidential election: earlier this month, the House Intelligence Committee asked big tech companies, including Facebook, Twitter, and Google, how they planned to tackle the threat of digital trickery. The companies said they were working on the problem but did not go into details.