The AI Illusion
Manipulation, Privacy, and the Erosion of Democracy
“Advances in artificial intelligence (AI) offer the prospect of manipulating beliefs and behaviors on a population-wide level,” warns a recent publication in Science. Indeed, the article notes that AI can generate convincing falsehoods faster than any previous technology. It even warns that “techniques meant to refine AI reasoning, such as chain-of-thought prompting, can be used to generate more convincing falsehoods.” Relatedly, another recent study found that “people often trust fake local news sites more than real ones.” Yet another study explained that “A multi-university team of researchers has found that chatting with a politically biased AI model was more effective than political advertisements at nudging both Democrats and Republicans to support presidential candidates of the opposing party.” Ironically, despite producing the research on these risks, many colleges and universities, the very institutions expected to lead in ethical implementation, have been found relying on AI to score admissions essays.
The threats AI poses to the establishment of anything resembling an agreed-upon understanding of truth are jarring, yet they are only the beginning. These tools operate within the framework of surveillance capitalism, an economic system centered on the commodification of personal data. In this model, every human behavior, emotion, or attitude is viewed as a “behavioral surplus” to be captured, processed, and packaged as data.
Within this economy, privacy is not a right but a blockade to a business model that thrives on total transparency of the individual. Companies utilize these data points to feed “prediction products” that anticipate what a user will do, buy, or believe next. By treating the private human experience as free raw material for hidden commercial practices of extraction, prediction, and sales, surveillance capitalism shifts the power dynamic from the individual to the algorithm. AI acts as the ultimate engine for this system, refining raw data into increasingly persuasive and manipulative tools of influence.
Digital McCarthyism and the End of Privacy
This week, more details emerged about the ways in which user privacy is being eviscerated. The Israeli company Paragon’s spyware, described as a cleaner version of Pegasus, was accidentally revealed on LinkedIn, illustrating the reach of the powerful tools known as “mercenary spyware.” Its products, such as Graphite, are reportedly among the most effective tools for allowing state actors to infiltrate mobile devices. Israel is hardly alone.
Governments including the U.S. are actively using such spyware to monitor domestic and foreign populations. In what seems to be an attempt at digital McCarthyism, Attorney General Pam Bondi admitted this week that the Department of Justice (DOJ) is keeping a list of “domestic terrorists,” a label it has applied to political opponents before. Relatedly, the DOJ is sending requests to tech companies for digital data and communications related to critics of U.S. Immigration and Customs Enforcement (ICE).
At the same time, Big Tech has further demonstrated its waning commitment to user privacy. This was revealed in the disappearance of Nancy Guthrie: her defunct Ring camera, which no longer had an active subscription, was still recording, and Amazon could go back and obtain the footage even though she had canceled the subscription. Essentially, the tools record at all times, and those recordings can be accessed whenever those in power see fit, regardless of the customer’s preferences. This is especially alarming because Google just paid $68 million over allegations its voice assistant eavesdropped on users. It shows that these settlements are treated as the price of doing business rather than a deterrent to the erasure of privacy.
Pardon the interruption! I just wanted to take a moment to thank all of my subscribers. A special shout-out to the paid members who help me bring this information to life every week. If you’d like to support the newsletter, you can become a paid subscriber for just $5 a month. If you can’t swing that right now, no worries! Liking, commenting, and sharing on social media helps just as much. Thanks for being here, and enjoy the rest of the read!
The AI Competency Gap: Hype vs. Heuristics
Part of the reason so-called AI tools are being adopted so rapidly is the relentless hype generated by Big Tech. However, research increasingly illustrates a significant gap between the claims made by these corporations and the actual performance of the technology. For example, AI does not possess superior intelligence; rather, it often fails to meet the basic standards of accuracy and reliability promised by its creators. One study found AI summaries of news content were inaccurate 45% of the time. AI systems are also known to fabricate information, a flaw known as hallucination, which has gotten users into trouble. Examples include the lawyer who used AI to write a brief full of fake case law and lost their license, or the education ethics committee that used it to draft a report only to find it cited fabricated studies. AI cannot even spot AI: Google’s AI recently flipped back and forth between deeming an image AI-generated and declaring it was not AI.
Despite these failures, patients are using AI for medical advice, a trend that has doctors voicing concern. About one in six adults have used chatbots to find health information, and studies show that these bots often provide misinformation masked as health advice, leading to all sorts of problems and complications. Sadly, medical professionals are also relying on AI, with similarly disastrous results. Medical facilities see AI as a cost-saving tool, but its use in the medical profession has led to misidentified body parts and botched surgeries.
Labor Dynamics and the “Efficiency” Myth
Rather than misinformation or privacy, public concern about AI has largely focused on the job market. Indeed, three out of four Americans believe AI will reduce employment opportunities. However, a recent study from Oxford Economics casts doubt on the volume of jobs actually being replaced by AI. It warns that “firms don’t appear to be replacing workers with AI on a significant scale,” suggesting instead that “companies may be using the technology as a cover for routine headcount reductions.” Similarly, a Yale University study found that “While the occupational mix is changing more quickly than it has in the past, it is not a large difference and predates the widespread introduction of AI in the workforce.” Looking at this, Fortune concluded that the “AI layoffs are looking more and more like corporate fiction that’s masking a darker reality.”
Indeed, this was supported by a Harvard Business Review study, which found that rather than replacing jobs, AI is resulting in employees taking on more responsibility for the same pay. It noted that “AI tools didn’t reduce work, they consistently intensified it. In an eight-month study of how generative AI changed work habits at a U.S.-based technology company with about 200 employees, we found that employees worked at a faster pace, took on a broader scope of tasks, and extended work into more hours of the day, often without being asked to do so. Importantly, the company did not mandate AI use (though it did offer enterprise subscriptions to commercially available AI tools). On their own initiative workers did more because AI made ‘doing more’ feel possible, accessible, and in many cases intrinsically rewarding.”
Some contend that the discrepancy between AI’s promised abilities and its actual performance has led to a decline in the industry. Indeed, OpenAI’s own forecast predicts a $14 billion loss in 2026, and the company recently started testing ads in ChatGPT. This follows a long-standing pattern in which Big Tech companies ramp up advertising to compensate for declining revenue as their products reach saturation or lose their initial appeal. Relatedly, Meta laid off thousands of VR workers as Zuckerberg’s vision for virtual reality goggles fails. (Note: I always hoped humans would not be dumb enough to let Silicon Valley achieve its dream of convincing you to glue a screen to your face; way to go, humans!) Meanwhile, Big Tech continues to distract the public from these economic realities with massive PR campaigns, including saturating the Super Bowl with advertisements framing AI as the inevitable future.
The Litigation Front: Addiction and Accountability
In addition to economic realities, legal imbroglios are threatening Big Tech’s business model. Elon Musk’s Twitter is facing multiple investigations in Britain and France over issues of free speech and privacy. In the U.S., a major trial is set against tech companies such as Alphabet (which owns Google and YouTube) and Meta (which owns Facebook and Instagram) over whether the design of their platforms is responsible for harmful outcomes for young people, including suicide. TikTok and Snapchat have already settled.
At the center of the case is the question of whether the tools are addictive. Academics who take corporate funding often dismiss this, while those who have not been bought by the industry have concluded that they are. A recent study found that “half of the research published in top journals has disclosable ties to industry in the form of prior funding, collaboration, or employment. However, the majority of these ties go undisclosed in the published research.”
Legislative Paralysis and Partisan Barriers
Beyond the results of this specific case, there is an urgent need for a legal framework that addresses how AI undermines democratic practices. A recent study titled “How AI Destroys Institutions” warns that AI systems are fundamentally designed in ways that “degrade and destroy” civic institutions like the free press and the rule of law. The threats posed by AI are hardly unknown to the people in Big Tech. This week, another AI researcher quit, this time from Anthropic, warning that the “world is in peril,” in large part due to AI advances. The researcher, Mrinank Sharma, said the safety team “constantly [faces] pressures to set aside what matters most,” such as bioterrorism and other risks.
Currently, the burden of regulation falls on judges; for instance, a recent court decision ruled that AI-created legal documents do not qualify for attorney-client privilege. But there are numerous other issues capturing national attention. For example, citizens are outraged that Grok AI allowed users to create nude images of people, including minors. Local communities have also expressed outrage about the ways in which data center development has driven steep increases in consumer energy prices while threatening the climate.
Several barriers keep our government from solving these high-profile tech problems. First, the party out of power often proposes popular policies, such as the Democratic proposal to ban surveillance pricing in grocery stores, knowing they cannot be enacted, thereby avoiding any real confrontation with the wealthy interests that oppose them. Second, widespread media illiteracy within the government remains a hurdle; for example, Trump’s acting cyber chief reportedly uploaded sensitive files to a public version of ChatGPT, apparently unaware that the tool offers no privacy. Indeed, how can lawmakers regulate what they do not understand? Finally, even when initiatives like data center regulation gain political traction, Big Tech allies in the media often suppress them. This was evident when Senator Bernie Sanders proposed a moratorium on data center development, only for the Washington Post, owned by Amazon founder Jeff Bezos, to publish an op-ed in direct opposition to the plan.
Rather than curbing the excesses of Big Tech, Trump’s second administration seems intent on making them worse. First, he has banned states from regulating AI. Instead, he is focused on ensuring his political allies’ control of Big Tech. For example, to allow TikTok to continue operating in the U.S., it was sold to a consortium of owners including his ally Larry Ellison. Almost immediately, users reported that content mentioning “Epstein” was temporarily blocked from the platform. Rather than regulate these tools, the Trump administration is using them to create AI videos sympathetic to immigration enforcement, including one of an activist sobbing, and racist videos depicting the Obamas as gorillas.
Reclaiming the Narrative: The Public’s Counter-Strike
The public is not without opportunities or recourse to rectify some of these abuses. For example, in April 2025, leaked State Department memos revealed that the case of Rümeysa Öztürk, who was detained for 45 days, was built entirely on an op-ed she co-wrote. The arrest violated Öztürk’s First Amendment rights, and a judge ultimately dismissed the case. Here, the government’s abusive collection of data was turned against it, as the leak played a major role in making the violation irrefutable.
In the absence of effective legislative oversight, the responsibility shifts to the public to navigate these digital threats. This involves a two-pronged approach: holding political candidates accountable for their failure to regulate tech, and employing personal defense strategies. While studies continue to show that media literacy remains a primary defense against misinformation, the concept of “critical ignoring” has emerged as a vital skill. By strategically filtering out manipulative content, users can reclaim their attention and protect their cognitive well-being from the pervasive influence of Big Tech. Democracy will not be saved by the algorithms that disrupted it, but by a public that refuses to be their raw material.
🔥 Cut Through the Noise – The Disinfo Detox Podcast
Good news! The Disinfo Detox is now on Substack! Get sharp, no-BS takes on politics, AI, and higher education—stuff mainstream media won’t touch.
Want to support the podcast? Here’s how:
👉 Like, subscribe, and hit the 🔔 on YouTube
👉 Subscribe to this Substack for just $5/month to spread media literacy, fight disinfo, and amplify truth-tellers
⚡ Your 3-click favor (takes 10 seconds, makes a huge difference):
1️⃣ Click any episode link below
2️⃣ Smash 👍 & drop a comment
3️⃣ Hit 🔔 to subscribe
💥 Every click helps fight disinfo. Be part of the antidote—subscribe, comment, and share!
MAGAcademy Episode 4: Students as Customers and Products — The Marketization of Learning (Watch every episode here)
The Persuasion Economy: AI, Cognitive Hacks, and the Future of Truth (W/Alessandra Di Lorenzo)
From Rodney King to ICE: The 5 Stages of a Government Cover-Up (W/Brian Martin)
Nolan’s Recent Media Appearances
KTVU: Weekly contributor to Mornings on 2 and the Take Two podcast.
Watch: https://www.ktvu.com/news/government-shutdown-immigration-demands-midterm-politics-collide-funding-fight-continues
Just Ask the Question: Weekly appearances alongside Brian Karem and Mark Zaid.
🎓What are Some Trustworthy Media Literacy Organizations and Resources?
For those who want more information and resources, the US is home to many thriving media literacy organizations.
Click here to access recommended media literacy organizations and resources from NolanHigdon.com.