The world is awash in deepfakes—video, audio, or pictures in which people appear to do or say things they did not or be somewhere they were not. Most deepfakes are explicit videos and images concocted by mapping the face of a celebrity onto the body of someone else. Some are used to scam consumers or to damage the reputations of politicians and other people in the public eye. Advances in artificial intelligence mean it takes just a few taps on a keyboard to conjure them up. Alarmed governments are looking for ways to fight back.
What is being done to combat deepfakes?
On February 8, the US Federal Communications Commission (FCC) made it illegal for companies to use AI-generated voices in robocalls. The ban came two days after the FCC issued a cease-and-desist order against the company responsible for an audio deepfake of US President Joe Biden. New Hampshire residents received a robocall before the state’s presidential primary that sounded like Biden urging them to stay home and “save your vote for the November election”. The voice even uttered one of Biden’s signature phrases: “What a bunch of malarkey.”
There is currently no US federal law banning deepfakes. Some states have implemented laws regarding deepfake pornography, but their application is inconsistent across the country, making it difficult for victims to hold the creators to account. The European Union’s proposed AI Act would require platforms to label deepfakes as such.
Where else have deepfakes been in the news?
Explicit deepfake images of pop star Taylor Swift were widely shared on social media sites in late January, drawing the ire of her legions of fans and expressions of concern from the White House. Earlier that month, Xochitl Gomez, a 17-year-old actress in the Marvel Cinematic Universe, spoke out about finding sexually explicit deepfakes with her face on social media and not succeeding in getting the material taken down, NBC News reported.
How are deepfake videos made?
They are often crafted using an AI algorithm that is trained to recognise patterns in real video recordings of a particular person, a process known as deep learning. It is then possible to swap an element of one video, such as the person’s face, into another piece of content without it looking like a crude montage. The manipulations are most misleading when used with voice-cloning technology, which breaks down an audio clip of someone speaking into half-syllable chunks that can be reassembled into new words that appear to be spoken by the person in the original recording.
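The original face-swap approach can be pictured as a shared encoder paired with one decoder per person: the encoder learns pose and expression common to both, while each decoder learns to render one identity. The sketch below illustrates that structure only, with random stand-in weights and a made-up frame size; a real system would train these networks on thousands of video frames.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared encoder weights (in practice learned from frames of both people;
# random here, since this is only a structural sketch).
W_enc = rng.standard_normal((64 * 64, 128))
W_dec_a = rng.standard_normal((128, 64 * 64))  # decoder trained on person A
W_dec_b = rng.standard_normal((128, 64 * 64))  # decoder trained on person B

def encode(frame):
    # Map a frame to identity-independent features (pose, expression).
    return np.maximum(frame.reshape(-1) @ W_enc, 0)

def decode(latent, W_dec):
    # Render those features as one specific person's face.
    return (latent @ W_dec).reshape(64, 64)

# The "swap": encode a frame of person A, then decode it with
# person B's decoder, yielding B's face in A's pose.
frame_of_a = rng.random((64, 64))   # stand-in for one real video frame
swapped = decode(encode(frame_of_a), W_dec_b)
```

Because the encoder is shared, whatever it captures from person A’s frame is expressed in person B’s likeness at decode time; that asymmetry is what makes the swap look seamless rather than like a crude montage.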
How did deepfake technology take off?
The technology was initially the domain of academics and researchers. However, Motherboard, a Vice publication, reported in 2017 that a Reddit user called “deepfakes” had devised an algorithm for making fake videos using open-source code. Reddit banned the user, but the practice spread.
Initially, deepfakes required video that already existed and a real vocal performance, along with savvy editing skills. Today’s generative AI systems allow users to produce convincing images and video from simple written prompts. Ask a computer to create a video putting words into someone’s mouth and it will appear. The digital forgeries have become harder to spot as AI companies apply the new tools to the vast body of material available on the web, from YouTube to stock image and video libraries.
What are some other examples of deepfakes?
Chinese trolls circulated manipulated images of the August 2023 wildfires on the Hawaiian island of Maui to support an assertion that they were caused by a secret “weather weapon” being tested by the US. In May 2023, US stocks dipped briefly after an image spread online appearing to show the Pentagon on fire. Experts said the fake picture had the hallmarks of being generated by AI. In February 2023, a manufactured audio clip emerged with what sounded like Nigerian presidential candidate Atiku Abubakar plotting to rig that month’s vote.
In 2022, a minute-long video published on social media appeared to show Ukrainian President Volodymyr Zelenskyy telling his soldiers to lay down their arms and surrender to Russia.
What is the danger here?
The fear is that deepfakes will eventually become so convincing that it will be impossible to distinguish what is real from what is fabricated. Imagine fraudsters manipulating stock prices by producing forged videos of chief executives issuing corporate updates, or falsified videos of soldiers committing war crimes. Politicians, business leaders, and celebrities are especially at risk, given how many recordings of them are available.
The technology makes so-called revenge porn possible even if no actual naked photo or video exists, with women typically targeted. Once a video goes viral on the internet, it is almost impossible to contain. An additional concern is that spreading awareness about deepfakes will make it easier for people who truly are caught on tape doing or saying objectionable or illegal things to claim that the evidence against them is bogus. Some people are already using a deepfake defence in court.
What else can be done to suppress deepfakes?
The kind of machine learning that produces deepfakes cannot easily be reversed to detect them. But a handful of startups such as Netherlands-based Sensity AI and Estonia-based Sentinel are developing detection technology, as are many big US tech companies. Intel Corp. launched its FakeCatcher product in November 2022, which it says can detect faked video with 96 per cent accuracy by observing the subtle colour changes on the subject’s skin caused by blood flow. Companies including Microsoft Corp. have pledged to embed digital watermarks in images created using their AI tools in order to distinguish them as fake.
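To make the watermarking idea concrete, here is a deliberately simple least-significant-bit scheme: a short tag is hidden in the lowest bit of the first few pixels, invisible to the eye but checkable by software. This is a toy illustration of the general principle, not the scheme Microsoft or any other vendor actually uses; real watermarks are designed to survive cropping and compression.

```python
import numpy as np

# A 16-bit tag to embed (hypothetical; real schemes carry richer metadata).
MARK = np.unpackbits(np.frombuffer(b"AI", dtype=np.uint8))

def embed(img: np.ndarray) -> np.ndarray:
    """Hide the tag in the least significant bits of the first pixels."""
    out = img.copy().reshape(-1)
    out[:MARK.size] = (out[:MARK.size] & 0xFE) | MARK
    return out.reshape(img.shape)

def is_marked(img: np.ndarray) -> bool:
    """Report whether the tag is present in the image."""
    return bool(np.array_equal(img.reshape(-1)[:MARK.size] & 1, MARK))

img = np.random.default_rng(1).integers(0, 256, (64, 64), dtype=np.uint8)
marked = embed(img)
```

Changing only the lowest bit shifts each affected pixel value by at most one out of 256 levels, which is why such marks are imperceptible; the trade-off is that naive schemes like this one are also easy to strip, which is why production watermarks are far more robust.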