The Age of Generative Warfare
Why the 2026 Middle East Crisis Demands Critical AI Literacy
“Tel Aviv, stripped of illusion, as you have never witnessed it,” read the caption above a viral March 2026 video showing missiles hammering the Israeli city as explosions burst across the night sky. To the casual scroller, it appeared to be a harrowing document of modern conflict. The problem, however, was that the video was a deepfake.
Deepfakes are synthetic media edited or generated using Artificial Intelligence (AI). According to the New York Times, a “cascade of A.I. fakes about war with Iran” has proliferated across social media since the United States (U.S.) and Israel reignited military action against Iran on February 28, 2026. The digital landscape is increasingly saturated with synthetic fabrications, as false videos of boisterous celebrations, frantic airport evacuations, devastating bombings, and graphic casualties flood users’ feeds in a relentless stream of misinformation.
As these digital fabrications blur the line between reality and simulation, Critical Artificial Intelligence Literacy (CAIL) has shifted from an educational luxury to a vital requirement. We are currently navigating a landscape where the “fog of war” is no longer just a metaphor for confusion on the battlefield, but a literal description of an information environment choked by “AI slop.” Indeed, one study found that more than 20% of the content on YouTube is AI-generated. Without a robust, systemic effort to instill CAIL, the public remains defenseless against sophisticated psychological operations. We must understand not just how to use these tools, but the socio-political structures that own them and the inherent biases they encode.
From Trojan Horses to Tonkin
The deployment of false information is not a modern phenomenon; it has been a foundational staple of conflict since the ancient world. From the Greeks’ legendary construction of a hollow wooden horse to infiltrate Troy, to Genghis Khan’s Mongol cavalry utilizing feigned retreats to lure enemies into fatal disarray, strategic deception has always defined the battlefield.
In modern democracies like the U.S., leaders have frequently refined these tactics into “false news” designed to manufacture public consent for intervention. This pattern of deception is evident in the “phantom” attack in the Gulf of Tonkin used to escalate the Vietnam War and the infamous claims of “phantom” Weapons of Mass Destruction (WMDs) that prefaced the 2003 invasion of Iraq. Beyond initiating conflict, misinformation serves to artificially sustain public morale and project an illusion of progress. This was notoriously exemplified by the White House during the Vietnam War, where official reports continuously claimed the U.S. was winning even as internal assessments acknowledged a deepening quagmire. Similarly, President George W. Bush’s “Mission Accomplished” declaration, delivered from the deck of an aircraft carrier just weeks into the invasion, provided a false sense of finality to a war that would ultimately span decades.
The Architecture of Synthetic Media
While the intent to deceive is ancient, AI and social media have complicated the problem by allowing anyone to create slick, convincing content at scale. Even before the recent escalation, the Russia-Ukraine war and the geopolitical tensions between Israel and Iran were already inundated with AI-generated misinformation.
The proliferation of deepfakes does more than just spread lies; it erodes the very foundation of objective truth by fostering universal skepticism. This phenomenon allows genuine evidence of suffering to be dismissed as mere simulation. For instance, NBC News reported on a painstaking investigation confirming that a video of starving Gazans awaiting food in May 2025 was entirely authentic; nonetheless, scores of social media users reflexively dismissed the footage as a deepfake. When the public can no longer distinguish between a sophisticated fabrication and a documented reality, the truth becomes a matter of partisan convenience rather than empirical fact.
In high-stakes environments, the fog of war creates panic and visceral reactions; people feel their decisions are matters of life or death. If the information they consume is incorrect, it could be the difference between joining a peaceful protest and being radicalized toward violence.
For content creators and platform algorithms, the incentives are skewed toward chaos. Social media platforms are designed to amplify content that triggers intense emotional reactions. Because fake news is often more sensational than the nuanced truth, it spreads faster and wider.
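As a purely illustrative sketch (the scoring weights and post names below are hypothetical, not any platform's actual ranking code), the incentive structure described above can be reduced to a ranking rule in which predicted engagement, not accuracy, determines reach:

```python
# Toy illustration of engagement-weighted ranking.
# Hypothetical numbers and post titles; no real platform's algorithm is this simple.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_reactions: float  # expected likes/shares/comments per impression
    predicted_accuracy: float   # never consulted by the ranker; that is the point

def feed_score(post: Post) -> float:
    """Rank purely on expected engagement; accuracy does not enter the score."""
    return post.predicted_reactions

posts = [
    Post("Verified report: situation still unclear", predicted_reactions=0.02, predicted_accuracy=0.9),
    Post("SHOCKING footage of attack (unverified)", predicted_reactions=0.30, predicted_accuracy=0.1),
]

# The sensational, unverified post sorts to the top every time.
for post in sorted(posts, key=feed_score, reverse=True):
    print(f"{feed_score(post):.2f}  {post.title}")
```

In a ranker like this, the sensational fabrication wins placement over the cautious correction, which is all the paragraph above is claiming about the incentive structure.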
While the ideal response is for the public to wait and investigate before passing judgment, this is a tall order when individuals believe they are witnessing an active massacre. Some deepfakes can be debunked quickly, such as the video of Israeli Prime Minister Benjamin Netanyahu that showed him with six fingers. In many cases, however, verifying information takes time: one must geolocate footage, check metadata, and often accept the uncomfortable reality that there is not yet enough evidence to be certain. AI has made this truth-finding mission exponentially harder for the average citizen who lacks the resources for deep digital forensics.
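To make the “check metadata” step concrete, here is a minimal sketch, assuming Python with the Pillow library installed and a hypothetical local file named suspect_frame.jpg. It is only a first pass: most platforms strip metadata on upload, so an empty result proves nothing by itself.

```python
# Minimal metadata check for a single image (illustrative sketch, not a forensic tool).
# Assumes: pip install Pillow, and a hypothetical local file "suspect_frame.jpg".
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_exif(path: str) -> None:
    """Print any EXIF tags embedded in the image at `path`."""
    with Image.open(path) as img:
        exif = img.getexif()
        if not exif:
            print("No EXIF metadata found (common after social media re-uploads).")
            return
        for tag_id, value in exif.items():
            tag_name = TAGS.get(tag_id, tag_id)  # fall back to the numeric tag ID
            print(f"{tag_name}: {value}")

if __name__ == "__main__":
    inspect_exif("suspect_frame.jpg")  # hypothetical filename
```

Even when metadata survives, fields like capture time or device model only narrow the possibilities; geolocation still means matching visible landmarks against satellite or street-level imagery, which is exactly the slow work most users cannot do mid-scroll.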
Ironically, many people now rely on AI to tell them whether content is AI-generated. This reliance illustrates a profound lack of AI literacy. What we commonly call AI today is more accurately described as Large Language Models (LLMs). These are not “intelligent” in any human sense; they are pattern-recognition engines that memorize and predict sequences of data. They are only as good as the data fed into them, and as a result, they reflect human biases, often amplified to a dangerous degree.
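To see what “predicting sequences” means in the simplest possible terms, consider this toy sketch: a hypothetical two-word frequency model, nothing like a real LLM in scale, but built on the same basic idea of re-emitting patterns found in its training text.

```python
# Toy next-word predictor built from word-pair counts.
# A hypothetical, drastically simplified stand-in for an LLM; real models learn
# statistical patterns from vastly larger data, but the dependence on training text is the same.
from collections import Counter, defaultdict

training_text = (
    "the strike was reported the strike was denied "
    "the footage was fake the footage was real"
)

# Count which word follows which in the training data.
pairs = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    pairs[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen in training; the model 'knows' nothing else."""
    followers = pairs.get(word)
    return followers.most_common(1)[0][0] if followers else "<no data>"

print(predict_next("strike"))  # -> "was", the only pattern it has ever seen
print(predict_next("peace"))   # -> "<no data>", the word never appeared in training
```

The toy model cannot say anything it has not already seen, and whatever skew exists in the training text is exactly what it reproduces; scaled up billions of times, that is the bias problem the paragraph above describes.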
Studies consistently show that AI responses can be factually inaccurate about half the time. These models frequently “hallucinate,” fabricating information and citations that do not exist. Reporting by The Intercept highlighted this absurdity, showing how Google Gemini gave conflicting answers about whether a specific text was AI-generated, even when the text in question was something Gemini itself had produced. When news outlets cite AI detectors as definitive proof, they are often building their conclusions on a foundation of sand.
The CAIL Framework: Interrogating Power
This AI illiteracy compounds decades of neglected media literacy. While many nations have made media literacy a compulsory part of their national curriculum, the U.S. has largely left it to the discretion of local communities. Media literacy is the ability to access, analyze, evaluate, create, and act using all forms of communication, from print to digital media. Without this foundation, the public is ill-equipped to handle the nuances of the algorithmic age.
Critical AI Literacy is an evolving framework that goes beyond simply knowing how to prompt a chatbot. It teaches students to interrogate ownership: who owns the AI, and how does that ownership shape its bias, ideology, and purpose? If a corporation owns the model, will it prioritize profit over democratic stability?
A critical approach also examines representation. We must ask how AI-generated images reflect the biases of their training data, such as the white supremacist or extremist content occasionally surfaced by unmoderated models like Grok AI. Furthermore, it reminds us that the Big Tech industry is often fundamentally anti-human in its philosophy, viewing human beings as buggy systems that need to be fixed or optimized by code.
Choosing Our Reality: A Mandate for the Common Good
As researcher Gary Smith suggests, AI will only surpass human intelligence if humans continue to use it in ways that degrade our own cognitive abilities. Studies show that prolonged, uncritical reliance on AI and screens contributes to a decline in memory, focus, and other cognitive abilities. CAIL points out that humans are the smart ones; the platforms are merely tools.
In a time of war, the absence of this literacy has deadly consequences. If deepfakes and hallucinating bots are shaping our emotions and our interpretations of international conflict, we are living in a state of perpetual, manufactured crisis. We cannot afford to repeat the mistakes of previous decades, when we naively assumed that simply having access to technology would make the world more connected and smarter.
The goal of Critical AI Literacy is not to make us run from technology, but to understand it so it can be harnessed for the common good. We must decide whether AI will be a partner in automating meaningless tasks to improve the human condition, or an exploitative force that dictates the citizenry’s reality. That is a decision for an informed public to make, not for Big Tech executives. If the public remains AI illiterate, it will remain dependent on the very narratives designed to exploit it.
A Note to My Readers
I want to take a moment to express my sincere gratitude for your time and attention. My goal has always been to ensure that the latest news, critical studies, and deep-dive information remain as accessible as possible to everyone. However, maintaining this level of independent research and outreach necessitates the support of our community.
To help me continue bringing this essential content to readers, classrooms, and beyond, please consider becoming a paid subscriber for as little as $5 a month.
Your support directly funds:
New Video Content: I am currently working to develop more visual resources to make complex topics easier to digest.
Collaborations: Building bridges with other truth-tellers and dedicated supporters of education.
Independent Research: Ensuring that the information you receive is rigorous, timely, and free from corporate influence.
If you are in a position to join today, your contribution makes this work sustainable.
If you can’t swing a paid subscription right now, no worries at all. You can still play a vital role in this mission by sharing this post on social media or sending it to a friend. Engaging with the content helps us navigate the algorithms and get this information in front of the people who need it most.
Thank you for being part of this community and for your ongoing support.
🔥 Cut Through the Noise – The Disinfo Detox Podcast
Good news! The Disinfo Detox is now on Substack! Get sharp, no-BS takes on politics, AI, and higher education—stuff mainstream media won’t touch.
💥 Every click helps fight disinfo. Be part of the antidote—subscribe, comment, and share!
🔥 Just Ask The Question
Join host Brian Karem, Nolan Higdon, and Mark Zaid every week as they strip away the spin and break down the biggest headlines.
🔥 Recent Media Appearances
KTVU (Tune in every morning during the 9am hour & the TAKE2 podcast)
https://www.ktvu.com/news/bay-area-gas-prices-spike-following-iran-conflict







I think that we are pretty much helpless against AI-generated fake news, and your point is well taken, Nolan, as we will get to where we cannot believe anything. "Who you gonna believe, me or your lying eyes?" is no longer relevant.
On a more cheery note, all AI in this genre takes an immense amount of computing power as well as massive amounts of electricity and cooling water. This is very expensive, and just as the declining dollar will prevent overseas political manipulation and war-making, AI production will fare no better.
May I introduce the liar's dividend? Coined by legal scholars Bobby Chesney and Danielle Citron, it describes the second-order effect of deepfakes that your piece almost names but doesn't quite land on: the real strategic value of synthetic media in conflict is not that people believe the fake; it's that people lose the ability to trust the real.
The NBC/Gaza example you cite isn't a byproduct of confusion; it's the goal. Once a population is conditioned to cry deepfake at any inconvenient footage, atrocity documentation loses its political force whether or not it's authentic. That's not a literacy failure, that's the point.
Which brings me to a distinction I think is doing a lot of unexamined work in the piece: disinformation and ambient synthetic noise are not the same thing, and conflating them blunts the analysis. Disinformation has a directing hand with intent, coordination, a beneficiary. Ambient synthetic noise is the AI slop ecosystem: the flood of low-grade generated content that degrades epistemic infrastructure without necessarily having anyone steering it.
The reason this distinction matters is that state and state-adjacent actors in a conflict don't need to produce the definitive deepfake. They only need to seed enough ambient noise that the liar's dividend does the work for them. The confusion becomes the operation and no individual actor needs to claim credit for it. And we as writers must be careful about that!
Hi, I'm Phil, and I write a lot about this in my spare time. Hmm... shameless promotion? Yeah, why not. If you, the reader, got this far, why not.
https://substack.com/@thedisinformationobserver?utm_source=user-menu