Although it’s been competing for attention with countless other issues, events, and concerns, artificial intelligence (AI) has become a pivotal topic during President Donald Trump’s second term. In January 2025, Trump repealed President Joe Biden’s executive order designed to create guardrails around AI development. Trump then signed executive orders aimed at ensuring U.S. “dominance” in AI development and expanding AI education. Congress is currently considering Trump’s “One Big Beautiful Bill,” which includes a provision to block states from regulating AI for the next ten years.
To help unpack both the promises and perils of AI, and to cut through the growing web of hype and misinformation, I reached out to several leading AI scholars and researchers. Below, you’ll find compelling essays representing a range of perspectives, each offering expert insight into this pivotal moment for AI policy and public understanding.
AI in Schools: Innovation or Indoctrination?
Nolan Higdon, University of California, Santa Cruz
Trump’s AI agenda includes the AI Education Initiative, which seeks to “foster collaboration between government, academia, philanthropy, and industry to address national challenges with AI solutions.” The question of whether “industry” should be involved in education has divided the media literacy education movement for decades.
Critical media literacy scholars have long warned that industry involvement often transforms education into corporate indoctrination. They argue that rather than fostering critical thinking and civic engagement, industry-led education normalizes corporate media, reduces students to customers and data mines, silences alternative perspectives, and, to quote the late, great social critic George Carlin, produces “obedient workers—people just smart enough to run the machines and just dumb enough to passively accept their situation.”
Meanwhile, so-called non-critical media educators—many of whom receive foundation funding or enjoy corporate perks—insist that industry money does not shape their agendas, pedagogy, or research. The current crop of corporate media education enablers consists of many of the same people who ignore or dismiss mounting evidence of AI’s risks while aggressively promoting its rapid, widespread adoption. Critical concerns—such as the erosion of privacy and the intellectual limitations of AI—are brushed aside, drowned out by vapid cheerleading.
Of course, this PR for the AI industry ignores the growing chorus of scholars and thinkers who have warned that AI is nowhere near as objective or intelligent as its boosters claim. At best, these tools follow instructions. This was made painfully clear when Elon Musk’s xAI chatbot, Grok, without being prompted, perpetuated the debunked claim of a white genocide in South Africa. It was simply doing what its creator told it to do. Worse, a recent study from Apple concluded that AI cannot perform complex functions, adapt to change, or account for nuanced variables.
These limitations have been ignored or downplayed by many AI boosters—and the scholars, researchers, and influencers they fund (social media is currently awash with fallacious posts from AI boosters seeking to debunk the Apple study)—because there is too much profit to be made in selling the illusion of superhuman intelligence. And it’s working. The public has become so enchanted with AI that the technology is now being primed to set insurance premiums, evaluate job applicants, sentence criminals, offer therapy, act as a companion, and even educate students.
Today’s AI promoters feel like the spiritual successors of the tech evangelists of the late 20th and early 21st centuries—the ones who scoffed at legitimate concerns about the internet and smart devices (and who, in hindsight, look pretty ridiculous through 2025 eyes—perhaps that’s why they’ve recently scrubbed their decades-long corporate media affiliations from their websites). At the time, they acted as classroom public relations operatives for Big Tech titans such as Mark Zuckerberg, Elon Musk, and Peter Thiel. These boosters were wrong then, and we’re watching history repeat itself—only this time, the stakes are even higher.
There’s no denying that AI represents a remarkable step forward in human progress. It’s a powerful tool with real potential to benefit society—but we can’t afford to focus solely on those upsides while ignoring the serious risks. The public, not profit-seeking corporations, needs to create the framework for education in the age of AI.
Two Sides, One Failure: AI Education Without Students
Nick Potkalitsky, PhD, Founder and CEO of Pragmatic AI Solutions
The Trump administration’s recent push for K–12 AI education arrives with the usual fanfare: a White House task force, a Presidential AI Challenge, and a call to integrate AI literacy into every subject. But like much of the administration’s education policy, it’s performative. There’s no new funding, no meaningful teacher consultation, and no plan for sustained support. It’s a geopolitical gesture, not a pedagogical one.
Yet the most disappointing response hasn’t come from policymakers—it’s come from universities. Across the country, institutions that have long championed interdisciplinary thinking about media, technology, and knowledge are now issuing rigid prohibitions against AI use in the classroom. Faculty who assign work on distributed cognition, algorithmic culture, and digital authorship are simultaneously telling students that any AI-generated text is dishonest by default.
The contradiction isn’t just philosophical—it’s educational. Both sides are failing to ask what would best serve students.
The administration treats students as future economic assets in a global tech race. Higher ed treats them as would-be cheaters to be monitored and disciplined. Lost in both approaches is the opportunity to teach students how to engage critically and creatively with the tools that are already reshaping their intellectual lives.
What education needs isn’t more regulation or more cheerleading. We need an approach to AI in education rooted in trust, literacy, and adaptation. Students don’t need to be protected from technology, and they don’t need to be sacrificed to it. They need to be invited into real, structured conversations about how to write, think, and create in a world where intelligence is no longer exclusively human.
Dig deeper into Nick’s work:
Pragmatic AI Solutions
Educating AI Substack
Pragmatic AI Educator Newsletter
LinkedIn Profile
Technical Skills Over Equity: Trump’s AI Push Dismantles DEI
Sydney Sullivan, PhD, Lecturer
The Trump administration’s approach to artificial intelligence ignores the critical importance of having a diverse human population shaping, developing, monitoring, and regulating AI. Many of the individuals Trump has appointed to AI leadership positions have actively opposed diversity, equity, and inclusion (DEI) initiatives in science and education. What’s missing here is a serious reckoning with the lived consequences of AI systems on human beings, especially for marginalized communities who disproportionately experience surveillance, misclassification, and exclusion in automated systems.
Research underscores the importance of DEI in scientific innovation, and studies have shown that diverse research teams produce more novel and impactful scientific contributions. As the Greater Goods Editors write, “While certainly we are all still learning the best ways to co-exist and cooperate, evidence suggests that a wholesale rejection of diversity, equity, and inclusion would do enormous harm—to people of all races and ethnicities.”
The administration's AI education initiative further reflects this troubling trend. While promoting AI literacy is commendable, the initiative's emphasis on workforce readiness and national competitiveness, coupled with the dismantling of DEI efforts, risks creating an educational environment that prioritizes technical skills over critical thinking and ethical considerations. Educators are being encouraged to integrate AI into classrooms without adequate support for addressing the technology's societal implications, such as bias and privacy concerns.
AI education should empower students to question and challenge societal structures. By sidelining discussions on equity and ethics, the administration's policies risk producing a generation of technologists ill-equipped to navigate the complex social dimensions of AI. This will pose significant risks to both innovation and social equity. A more inclusive and ethically grounded approach is essential to ensure that AI serves the public good and reflects the diverse society it aims to benefit.
Dig deeper into Sydney’s work on social media, mental health, and education:
Sydney Sullivan, PhD | Substack
Sydney Sullivan PhD | Instagram, TikTok | Linktree
Educating for Empire: AI, Capitalism, and the New Patriotic Curriculum
Tyler Poisson, PhD Student in Communication at University of Massachusetts, Amherst
To make sense of the Trump Administration’s orientation toward artificial intelligence in schools, one must consider its directives in the context of various economic and political determinants. To begin with and above all else, the federal government – especially though by no means exclusively under current leadership – is motivated to outstrip China in the race to develop and apply artificial intelligence with the goal of securing monetary hegemony in anticipation of a new phase of global capitalism. This is stated rather explicitly in portions of the billionaires’ agenda known as “Project 2025.” Furthermore, the administration’s efforts to deregulate and accelerate the artificial intelligence industry are inseparable from the naked donations and lobbying efforts of individuals and corporations who are in the business of supplying the hardware, software, data, and algorithms on which the technology depends. In key respects the “Removing Barriers to American Leadership in Artificial Intelligence Act” parallels the “Unleashing American Energy Act,” where each is in its own sphere designed to spur “growth” and “innovation” without regard for attendant risks, however catastrophic they may be. We must also recognize that these two policies are interconnected, as AI demands and consumes ever more water and energy.
With all of this in mind we can analyze the Trump Administration’s approach to artificial intelligence as it relates to education. According to a White House fact sheet published in April of this year, Trump signed the “Advancing Artificial Intelligence Education for American Youth” executive order in an effort to foster “interest and expertise in artificial intelligence (AI) technology from an early age to maintain America’s global dominance in this technological revolution for future generations.” The blatant instrumentality of this order in the course of competing with China is obvious enough. Accordingly, it cannot be decoupled from the administration’s official promotion of “patriotic education”. As stated in the fact sheet, the “Advancing Artificial Intelligence Education” order directs relevant agencies and authorities to “prepare America’s students to be confident participants in the AI-assisted workforce” and to “establish public-private partnerships to provide resources for K-12 AI education”. Both of these mandates stand to ensure that private corporations and tech billionaires will (continue to) profit from the integration of AI in schools. These stakeholders will benefit on the one hand from a government subsidized market for AI-enhanced ed-tech products, and on the other from a workforce that will be inured to accept new patterns of behavior and forms of consciousness owing to AI-assisted work under capitalism, not to mention the downstream benefits should the U.S. achieve “global dominance” in AI.
Students’ tendency to use generative artificial intelligence to complete their work reflects the fact that they are uninterested in the process and products of schooling, suggesting that we need to radically rethink the purpose of education in the United States. As against this necessity, the federal government under Trump is committed to treating education in the age of AI as it has been treated historically, namely as a means to prepare students for various positions in the corporate economy. Simultaneously facilitating the use of AI in schools and deregulating the AI industry, as the Trump administration is doing, is likely to, among other things, disconnect students yet further from their work and, ultimately, from themselves as they are trained to offload potentially enriching linguistic and cognitive labor to AI in service of the owning class.
Conclusion: Ignoring Democracy While Fueling Disinformation
Nolan Higdon, University of California, Santa Cruz
Democracy is notably absent from President Donald Trump’s AI agenda. The word does not appear once in the White House fact sheets outlining his policies. That’s no accident. Key figures in Big Tech—like Marc Andreessen and Curtis Yarvin, both of whom have influenced Trump’s administration—have openly dismissed democracy. This anti-democratic mindset runs deep in Silicon Valley, where the prevailing belief is that human judgment is flawed—and that technology should take its place.
Too much of today’s political discourse is consumed by cult-like devotion to personalities, hyperpartisan bickering, and performative outrage. Meanwhile, democracy—when not practiced or protected—atrophies into irrelevance.
One area where AI and democratic decline clearly intersect is information. The founders of the U.S. understood that a functioning democracy depends on a well-informed public—one capable of making decisions, taking action, and holding power accountable. This was part of the rationale for the First Amendment to the U.S. Constitution. There have always been those who try to poison the information ecosystem with lies and propaganda. But AI has taken this threat to a new level. Today, it’s not just difficult to trust what you read—it’s becoming nearly impossible to trust what you see, hear, or even experience.
Just this month, during protests in Los Angeles over immigration and the clash between federal and state authority, a series of AI-generated videos went viral. These included fake video clips of protesters admitting they were paid to attend, people claiming the protests weren’t violent while chaos unfolded in the background, and a disturbingly realistic deepfake of Governor Gavin Newsom casually combing his hair amid the unrest. All of this divisive content was fabricated—but visually and emotionally convincing.
Weeks earlier, media critics warned that social media platforms and YouTube were being flooded with videos that look like polished network news—but are 100% fake. In this environment, truth becomes a casualty—and democracy quickly follows.
In addition to disinformation, AI poses other threats to essential components of democracy, such as citizens’ privacy. Without privacy, there can be no true freedom—and without freedom, democracy cannot survive. That’s why it’s alarming that Big Tech has spent decades embedding itself and its leaders within government, aiming to consolidate power and maximize profits—with dire consequences for the public. For example, recently, Trump tapped Peter Thiel’s Palantir to compile data on all U.S. citizens—Orwell would be ashamed, but not surprised.
A healthy society needs a free, independent press to expose the truth. But it also needs a free and educated public—one that understands AI, because it’s here whether we like it or not—and it’s the public’s responsibility to decide how, if at all, it can be used to strengthen democracy rather than undermine it. It is up to the public to ensure that it is the people and their commitment to democracy that guide the future of AI, not profit-seeking corporations and their paid henchmen in government, media, and education. If democracy dies in darkness, AI is flipping the switch.
NEW Episode of Disinfo Detox Podcast
The AI Revolution Is a Lie? Apple Drops Bombshell Study (W/ Gary Smith)
The Crackdown in LA: What the Media Didn’t Tell You (W/ Caroline Luce)
🔥Recent Media Appearance
Takedown of California’s senior U.S. Senator, Alex Padilla, expected to ratchet up interest in Saturday protests in Bay Area
Nolan Higdon discusses Padilla’s treatment with Ethan Baron.
🤝 Support This Work
If you find value in what I’m doing, consider becoming a monthly paid subscriber—it helps me keep the lights on and the content flowing. Can’t swing a subscription right now? No worries. A restack, like, comment, share, or free subscribe goes a long way in boosting visibility and helping this project grow. The more we spread media literacy, the stronger our democracy becomes.