Joe Rogan, the comedian turned podcaster, endorsing a coffee brand for men would not be entirely out of the ordinary.
But when a video featuring Rogan and his guest, Andrew Huberman, was recently shared on TikTok, some sharp-eyed viewers were shocked, including Huberman himself.
Huberman, upon seeing the ad, tweeted, “Yep that’s fake.” He had never endorsed the coffee, which the video claimed could increase testosterone levels.
The Power Of AI Deepfakes
The commercial was just one example of a spate of similar hoaxes being spread across social media platforms using AI-powered production tools. Rogan’s voice, according to experts, sounds like it was synthesized with the help of artificial intelligence tools designed to imitate celebrity voices. The quotes from Huberman were taken from an entirely different interview.
In the past, creating a convincingly phony video, now commonly called a “deepfake,” required sophisticated facial-morphing software. Now, however, ordinary people can get their hands on a wide variety of tools for making them, including smartphone apps, frequently at no cost.
TikTok and Twitter have been flooded with newly edited videos, primarily the work of meme-makers and marketers. Researchers have labeled this content “cheapfakes”: videos that rely on readily available techniques such as cloning celebrity voices, editing mouth movements to match alternative audio, and writing persuasive dialogue.
Some AI researchers are worried about the potential dangers posed by the videos, and they have raised new concerns over whether social media companies are prepared to moderate the growing prevalence of digital fakery.
Online fact-checkers are bracing for a flood of digital fakes that could mislead viewers or make it harder to tell the difference between fact and fiction.
The Term “Cheapfakes”
“What’s different is that everybody can do it now,” said Britt Paris, an assistant professor of library and information science at Rutgers University who helped coin the term “cheapfakes.” “It’s not just people who are well versed in computing and have access to high-end hardware and software. Instead, it’s a free mobile app.”
For years, mountains of skewed content have been shared on TikTok and elsewhere, typically created with low-tech methods like selective editing or swapped-out audio clips. In one TikTok video, a misleading edit made Vice President Kamala Harris appear to say that all patients hospitalized with Covid-19 were vaccinated. In fact, she had said the opposite: that those patients had not been immunized.
Graphika, a disinformation research company, discovered deepfakes of fictional news anchors being distributed by pro-China bot accounts at the end of last year. This was the first known instance of the technology being used in a state-aligned influence campaign.
The availability of similar technology to the general public through a number of new online tools, however, allows comedians and political activists to create their own convincing spoofs.
A hoax video circulated last month that purported to show President Biden calling for a national draft to help fight the war in Ukraine. The video was made by the team behind “Human Events Daily,” a podcast and livestream hosted by Jack Posobiec, who is known for promoting right-wing conspiracy theories.
In the video’s explanation, Posobiec said that his team had created it using AI. A conservative account, The Patriot Oasis, shared the video on Twitter with a breaking-news label, without mentioning that it was fake. The tweet was viewed more than eight million times.
The Startup Behind Many Synthetic Voices
Many of the videos featuring synthetic voices appeared to use technology from Eleven Labs, an American startup founded by a former Google engineer. In November, the firm introduced a speech-cloning tool that can be trained to imitate a voice in a matter of seconds.
Last month, Eleven Labs made headlines when the racist and conspiratorial message board 4chan started using the company’s tool to spread hate speech. In one instance, 4chan users recorded an anti-Semitic text read by a computer voice designed to sound like Emma Watson. Motherboard previously covered how 4chan was employing the audio system.
On Twitter, Eleven Labs announced that it would be implementing new security measures, such as restricting voice cloning to premium accounts and releasing a new AI detection tool. However, some 4chan users have claimed they will develop their own version of the voice-cloning technology using open-source code and have posted demos that are eerily similar to Eleven Labs’ output.
While the specific tool used is unclear, experts who study deepfake technology speculated that the fake ad featuring Rogan and Huberman was created with a voice-cloning program: a synthetic version of Rogan’s voice was edited into a genuine interview in which Huberman discussed testosterone.
The final product is not flawless. The underlying clip was culled from an interview with professional pool player Fedor Gorst that Rogan posted in December. The timing of Rogan’s mouth movements does not match the audio, and his voice sounds artificial at points. Though it is unclear whether the video persuaded TikTok users, it drew considerable attention after being flagged as a convincing fake.