How ‘cheapfakes’ are pushing misinformation in the Lok Sabha election

AI-driven deepfakes are still rare on Indian social media. Other forms of misleading videos and images are more common.

Published : May 31, 2024 18:19 IST - 5 MINS READ

Fake videos of politicians and celebrities have been circulating online since the beginning of the 2024 Lok Sabha election. However, experts believe that assuming all of them are AI deepfakes may be a red herring. | Photo Credit: Anupam Nath/AP

The 2024 general election was in full swing when hundreds of social media users shared a video that appeared to show Union Home Minister Amit Shah saying the ruling party wanted to scrap a quota system aimed at undoing centuries of caste discrimination. The controversial comments caused a brief furore before fact-checkers stepped in and declared the video a fake, made by doctoring old footage with basic editing tools—a so-called “cheapfake”.

As the election process continued, politicians and digital rights groups voiced concern that voters could be swayed by misinformation in AI-driven “deepfake” videos. But fact-checkers say most of the falsified pictures and videos posted online during the six-week election were made not with artificial intelligence (AI) but with relatively cheap and simple techniques, such as editing footage or mislabelling content to present it in a misleading context.

“Maybe 1 per cent of the content we have seen is AI-generated,” said Kiran Garimella, an assistant professor at Rutgers University who researches WhatsApp in India. “From what we can tell, it’s still a very small percentage of misinformation.”

Also Read | Editor’s Note: What should really worry us about AI

Whether cheapfakes or deepfakes, the result can be equally convincing, fact-checkers say, putting the onus on social media companies to do more to root out all forms of misinformation being spread on their platforms. “You can resurrect dead leaders using AI but people realise it’s propaganda... However, if you mislabel a video or clip it out of context, people are more likely to believe it,” said Pratik Sinha from Alt News, an Indian nonprofit fact-checking website. “Rather than getting into the binary of deepfakes and cheapfakes, there is a need for finding a way to tackle misinformation more effectively,” Sinha told the Thomson Reuters Foundation.

Both Meta Platforms Inc., which owns Facebook and Instagram, and X, formerly Twitter, introduced new policies to crack down on different forms of misinformation in a big year for global elections. Still, fact-checking groups say the results have been disappointing.

Updated guidelines

Responding to criticism from its oversight board, Meta updated its guidelines in April to add prominent labels to all forms of misinformation. Meta’s earlier policy applied only to content altered or created using AI. “We agree with the Oversight Board’s argument that our existing approach is too narrow since it only covers videos that are created or altered by AI to make a person appear to say something they didn’t say,” Monika Bickert, the company’s vice president of content policy, wrote in a blog post in April 2024.

Under the new approach, which took effect before the Lok Sabha election started on April 19, fact-checkers working with Meta review and rate posts, including ads, articles, photos, videos, reels, and audio on its social media network under six labels to provide more information to users. They can use the labels False, Partly False, Altered, Missing Context, Satire, and True.

Sinha questioned the policy’s effectiveness in tackling false and misleading digitally manipulated posts over the election period. “I’m not sure how effective Meta’s labelling has been,” he said, calling for the company to release data on its fact-checking programmes.

An analysis by the Thomson Reuters Foundation found that many fact-checked videos on Facebook had been labelled incorrectly, or carried no warning label at all. In one video doctored through editing, Prime Minister Narendra Modi appears to ask supporters to vote for a rival party. Rather than being labelled Altered, it is labelled Partly False—meaning it contains “some factual inaccuracies”.

X’s introduction in April 2024 of a new feature for Indian users to combat misinformation has also fallen short, said Karen Rebelo, deputy editor at fact-checking website Boom Live. According to X, its Community Notes feature is designed to combat misinformation by inviting users from diverse backgrounds to contribute as note authors to set the record straight.

But Rebelo said different note authors often contradict each other, creating further confusion when no clear consensus emerges on the integrity of the post in question. “A lot of misinformation has notes on it but it’s not surfacing because other contributors don’t agree with it. X needs to find a way to work this out because otherwise, it defeats the purpose of community notes,” she said.

The Thomson Reuters Foundation found a cheapfake video of Mallikarjun Kharge, president of the opposition Congress, still circulating on X with no notes on it, despite having been debunked by fact-checking websites. In the mislabelled footage, viewed 43,000 times, Kharge appears to say his party would distribute Hindus’ wealth to minority Muslims.

Broader threat

Even when doctored videos have been labelled as fakes by social media platforms, they often still spread unabated on messaging apps such as WhatsApp, Garimella said. “Forty per cent of the viral content being forwarded has already been fact-checked many times, but that hasn’t stopped it from spreading because there is no moderation as such on the messaging app,” Garimella said. “That tells us that people perhaps aren’t aware (it is fake),” he said, warning that without tough controls by platforms, the spread would likely continue.

Also Read | With the relentless rise of AI, journalists face tough choices

Ahead of the election, Meta, which owns WhatsApp, launched a fact-checking helpline on the app with the Misinformation Combat Alliance (MCA) to combat AI-generated misinformation. Most content flagged to the helpline had been manipulated using simple methods, not AI-driven tools, said Pamposh Raina, head of the MCA’s Deepfake Analysis Unit.

But the alarm about deepfakes may have distracted platforms from the broader threat of misinformation, Sinha said. “We’ve hardly seen any deepfake videos that spread misinformation... But (social media) put its money and resources into debunking deepfakes. It should have researched the market better,” he said.
