The Threat of AI-Driven News
Seconds after the attempted assassination of former President Trump on July 13, 2024, AI-generated content and conspiracy theories began rippling through social media. People on both sides of the political spectrum fell prey to doctored images of Secret Service agents, conspiracy theories about the motive, and much more. This incident and the shockwaves it sent across the internet are an ominous preview of what could lie ahead during the rest of the 2024 election cycle.
Fake news and doctored images have been part of American political discourse for nearly a decade. In 2016, fake images and documents were used to create news articles that grew to dominate headlines and social media. Misleading stories touting Clinton's email leaks, Trump's endorsement by the Pope, and a pizza restaurant's outrageous "scandal" deceived voters to such a degree that, in the four months leading up to Election Day, Facebook recorded higher engagement for fake news content than for verified reporting.
The first ‘AI Election’
At a July 12 Ethnic Media Services briefing, experts monitoring the rise of AI-empowered racialized messaging reported on how artificial intelligence is supercharging threats to the election system by stoking disinformation to confuse voters. They also discussed efforts to push new legislative controls to halt the spread of synthetic content before the 2024 elections.
Jonathan Mehta Stein, the Chair of the California Institute for Technology & Democracy (CITED), said, “The 2016 election cycle was merely a preview. With the rise of GenAI, we are only now entering the first ‘AI election’, meaning that AI deep fakes and AI disinformation campaigns now have more power than ever to confuse voters, inundate our political discourse, and undermine our trust in democracy.”
Stein said that AI-generated content is, at this very moment, being spread and shared widely on social media. The feared assault of AI-powered misinformation from online trolls, foreign agents, conspiracy theorists, and even candidates themselves is already here. He stressed that voters need to develop a radar for detecting fake news.
What is AI and the newer Generative AI?
Artificial Intelligence is a form of machine intelligence that learns from data and builds a model to solve a specified task. In the past decade, AI development has been turbocharged, leaping from research labs to our mobile phones. Its uses range from mundane instances like Netflix's recommendation system and Apple's FaceID to Google's wind pattern predictions and ChatGPT's ability to pass the bar exam.
Before Generative AI, specialized forms of AI were trained to tackle different kinds of inputs: images, audio, video, and text. But now, tools like OpenAI’s ChatGPT are so powerful that they can learn, adapt, and generate entirely new multimedia content based on a simple prompt.
These tools, thanks to their wild popularity, have also been made readily accessible to people around the world, and they can generate images, articles, and videos of such high quality that it is almost impossible for the human eye to tell them apart from the real thing.
Examples of GenAI
Popular instances of AI deep fakes include the viral image of Pope Francis in a puffer jacket, images of Donald Trump surrounded by Black voters to showcase minority support, videos of popular Bollywood actors Aamir Khan and Ranveer Singh mocking the BJP, and the face-swap exposé of Rahul Gandhi criticizing a member of his own coalition. Moreover, fake news websites posing as local news publications, such as the made-up Miami Chronicle, have been popping up over the past year to promote Russian propaganda.
Jonathan Stein adds that although deep fakes of national-level politicians can be corrosive to democracy, fake news about local elections, mayors, county officials, and state representatives could be even more damaging. This could result from several cascading factors: more watchdogs in the national sphere, the recent collapse of local news stations and lower number of journalists covering local news, and the proliferation of fake news in messaging apps like WhatsApp, Messenger, etc. that are encrypted and private, and hence, harder to track down.
How AI Fake News Spreads & Affects Communities of Color
Misinformation has been used for years to target and disenfranchise immigrant communities. This year, AI has the potential to create highly damaging and believable content to sway voters of color. India's 2024 election was awash with deep fakes, as candidates either readily employed GenAI or felt pressured to match their rivals' synthetic content.
Unlike fake content on mainstream media channels, which is easier to debunk, fake news spreading in regional languages is harder to quash. Communities of color have limited resources, fewer fact-checkers, and lower AI literacy rates; fake news tends to spread like wildfire through apps like WeChat, WhatsApp, and Telegram and is much harder to expose.
AI, Friends & Family
“The truth is that most conspiracy theories are shared by people you know – by family, friends, influencers, and public figures you trust. And this content will mostly be shared via private end-to-end messaging apps and may not get flagged until they get truly viral,” warned Jinxia Niu, program manager of Chinese Digital Engagement at Chinese for Affirmative Action (CAA), where she manages multiple WeChat accounts and the Chinese-language fact-checking website Piyaoba.
As Big Tech retreats from tackling misinformation, it falls to communities and the media to stem the flow of fake news. Helpful tactics include using fact-checking websites and deep fake-spotting tools; verifying news from WhatsApp and other platforms against trusted news sites before re-sharing; building a culture of declaring when posted images and content are AI-generated; and supporting bills and measures aimed at curbing the spread of fake news. For example, California's AB 3211 and the US Senate's COPIED Act are currently making their way through their respective legislatures. These bills seek to combat deep fakes by requiring AI companies to watermark synthetic content, by adding safeguards to ensure rightful credit goes to writers, journalists, and artists, and by creating a set of new transparency standards.
Image courtesy: Unsplash (visuals-2TS23o0-pUc-unsplash.jpeg) and India Currents