This was originally published by The Edge at the Park Center for Independent Media.
“We are so screwed, it’s beyond what most of us can imagine,” exclaimed Aviv Ovadya. “We were utterly screwed a year and a half ago, and we’re even more screwed now. And the further you look into the future, the worse it gets.” Ovadya, CEO of the AI & Democracy Foundation, was referring to advances in artificial intelligence (AI), which he warned would cause an “Infocalypse.” Ovadya first issued this warning in 2016, and nearly a decade later, the U.S. finds itself in a presidential election where AI-driven disinformation is a real factor. Much has been said about how the 2024 election is historically significant, with a former president and convicted felon running for office and a mixed-race woman within striking distance of the presidency. A less-discussed but equally important feature, however, is that this is the first artificial intelligence-driven election.
Much of Ovadya’s concern was rooted in the development of deepfakes: synthetic media that uses AI to create hyper-realistic videos, audio, and images.
Deepfakes existed in previous presidential cycles, as seen in 2020 when videos circulated showing then-Speaker of the House Nancy Pelosi appearing drunk. Those videos were obviously fabricated or altered, even to the naked eye. In the years since, however, AI has advanced significantly. Companies like OpenAI, Alphabet’s Google, and Microsoft have developed user-friendly AI tools that allow individuals with little to no technical skill to create false or misleading images, videos, and written content that appear legitimate. For example, shortly after President Joe Biden announced his decision to run for re-election in 2023 (a decision he has since reversed), the Republican Party created and distributed an AI-generated video response, stoking fear about his continued presidency.
It is no surprise that in an election year, these new tools are being used to influence voters. Trump supporters, for example, have circulated convincing AI-generated images showing Trump in a variety of heroic or divine roles, from praying under a heavenly light to rescuing people, gun in hand, during the 2024 hurricanes. They have also distributed images of Vice President Kamala Harris speaking in what appears to be a Soviet-era venue, positioning her as Joseph Stalin incarnate, and similarly themed deepfakes purport to document her communist credentials. Meanwhile, Elon Musk, a Trump supporter, shared on his X (formerly Twitter) platform a fake audio clip of Harris saying things she never actually said.
AI-generated images not only reflect partisan tactics but also expose the continued legacy of racism in the U.S. While AI-generated content featuring figures like Trump or Biden appears remarkably accurate, the same cannot be said for Harris: the images of her bear little resemblance to the real person. Scholars have noted that this discrepancy arises because AI models are largely trained on data drawn from a predominantly white population, reflecting Silicon Valley’s demographic makeup, which is disproportionately white and male. While these errors may be corrected in the future, they highlight the racial inequities embedded in AI technology and its development.
Fake news, such as AI-generated content, misleads the electorate. But the problem with AI content goes beyond its falsehood. Once people become aware that deepfakes exist, they can no longer take images, video, or audio at face value. This skepticism, though sometimes healthy, has been exploited by political campaigns. For example, Trump falsely claimed that a large crowd at a Harris rally was AI-generated. Journalists debunked the claim, but the fragmented media landscape and declining trust in journalism make it unlikely that the correction will reach or persuade the entire electorate.
The blending of truth, lies, and deepfakes can be used to gaslight the electorate, causing people to question reality when they can no longer distinguish fact from fiction. A notable example from the 2024 election was the Trump campaign’s release of deepfake images showing Taylor Swift fans — known as “Swifties” — wearing “Swifties for Trump” shirts, accompanied by social media posts claiming Swift had endorsed Trump. A few weeks later, the real Swift endorsed Harris, but that did not stop Trump supporters from continuing to spread AI-generated images suggesting otherwise. For voters who do not follow or trust the news, such mixed messages can be deeply confusing.
We are just beginning to understand the role and influence of AI in electoral politics. Voters can take steps to mitigate its harms: slowing down and investigating the source of content; not believing everything they see, hear, or read; corroborating media messages across multiple outlets; and declining to share content on social media until they are certain it is true. However, broader discussions at the governmental level are needed to address the challenges AI poses to democracy. These discussions should include revisiting copyright laws as they apply to people and brands, as well as exploring defamation, libel, and responsibility for content creation and dissemination. These challenges will likely grow more pronounced in the years ahead. The 2024 election is already giving us a glimpse of what’s to come, and it does not look good.
Nolan Higdon is an author, a lecturer at Merrill College and the Education Department at the University of California, Santa Cruz, a Project Censored National Judge, and a founding member of the Critical Media Literacy Conference of the Americas. Higdon’s areas of concentration include critical AI literacy, podcasting, digital culture, news media history and propaganda, and critical media literacy. All of Higdon’s work is available on Substack (https://nolanhigdon.substack.com/). He is the author of The Anatomy of Fake News: A Critical News Literacy Education (2020); Let’s Agree to Disagree: A Critical Thinking Guide to Communication, Conflict Management, and Critical Media Literacy (2022); The Media and Me: A Guide to Critical Media Literacy for Young People (2022); and Surveillance Education: Navigating the Conspicuous Absence of Privacy in Schools (Routledge). Higdon is a regular source of expertise for CBS, NBC, The New York Times, and The San Francisco Chronicle.