February 2026
The views and opinions expressed in this blog are solely those of the author and do not necessarily reflect the official policy or position of Education Insights Center (EdInsights) or the California Education Policy Fellowship Program (EPFP).

Dr. Natalie V. Nagthall, Founder and CEO, CA EPFP alum
Artificial intelligence (AI) has moved from an abstract talking point to an everyday reality in higher education, reshaping how students learn, how educators work, and how institutions define expertise and intelligence. Yet many campus conversations remain narrowly focused on whether AI should be allowed, what constitutes cheating, or how to remain in compliance—debates that risk obscuring a deeper structural shift already underway. The central challenge is no longer whether AI will be adopted, but how its use is guided and governed through ethical, transparent, and consistently applied institutional frameworks.
In practice, AI adoption remains uneven, varying from course to course and instructor to instructor. This inconsistency creates “pedagogical whiplash,” where students must navigate conflicting rules that shift from one classroom to the next. Over time, this patchwork produces predictable winners and losers—often along familiar lines of race, income, disability, and language status—even when this inequity is not the intent. Research from Tyton Partners (2024) reveals a burgeoning “AI use gap,” where higher-income students are 20% more likely to access paid, high-reasoning models than their peers, effectively turning a technical tool into a wealth multiplier. But the ramifications go deeper than access. While an affluent student might use a paid AI “tutor” to polish an essay, a first-generation or ESL student navigating a restrictive “AI ban” faces a higher risk of false plagiarism accusations and disciplinary action. The result? A two-tiered system where some students use AI to accelerate and augment, while others are penalized for the mere suspicion of it.
The “AI Thermometer”: Alarmists, Pragmatists, Evangelists
Beneath this patchwork sit powerful, often unspoken beliefs about what AI represents and what it threatens. In conversations with educators, three broad orientations consistently emerge. AI alarmists foreground risk—cheating, surveillance, bias, and the erosion of critical thinking. AI evangelists emphasize opportunity, viewing AI as a solution to staffing shortages, inequities, and access gaps. Between them are AI pragmatists, who recognize both the risks and the possibilities and seek a cautious path forward.
None of these positions are inherently wrong. The problem is that these orientations quietly shape institutional decisions without being named. Alarmist leadership often defaults to restrictive bans, even as students continue using AI outside institutional view. Evangelist-driven campuses may move quickly into vendor partnerships without fully interrogating data protections or long-term consequences. When these assumptions remain implicit, policy decisions reflect power rather than deliberation.
For policymakers and institutional leaders, the task is not to choose a camp, but to surface these orientations and design governance processes that balance risk, opportunity, and equity. The debate is no longer about AI’s presence in education—it is already embedded—but about who has the authority to shape its use and toward what purposes. When decisions are left to individual classrooms, institutions reproduce inequity through inconsistency. When policies are imposed top-down, AI use is driven underground. What this moment demands is shared governance that treats AI not as a compliance problem, but as a political and educational one.
What You Must Consider Now
AI is already embedded in educational practice. The question is no longer whether institutions will respond, but how intentionally and equitably they will do so. While we cannot put the genie back in the bottle, we can create the conditions, policies, and governance structures that ensure AI works in service of learning—rather than undermining it through fear, inequity, or inaction. The responsibility for this shift sits squarely with Chief Academic Officers, Deans, and Faculty Senates, who must move from reactive policing to proactive leadership. I posit the following:
- Create the Conditions for Change – Institutions must invest in change readiness—psychological safety, time for experimentation, and shared sense-making—before enforcing rules.
- Treat AI as an Equity and Power Issue, Not Just a Technical One – AI policy determines who benefits, who is penalized, and whose knowledge counts. Uneven access, literacy, and enforcement mean that AI policies can quietly reproduce existing inequities.
- Bring Everyone to the Table—Especially Dissenting Voices – Alarmists, pragmatists, and evangelists all surface legitimate risks and opportunities. Excluding any group—or relying on top-down decision-making—fuels resistance and workarounds. Inclusive governance increases legitimacy and improves policy durability.
- Build Flexible Guardrails, Not Rigid Prohibitions – AI is evolving faster than institutional approval cycles. Governance that supports experimentation within clear ethical limits is more sustainable than blanket bans.
No single policy memo can untangle these tensions. They require ongoing governance work and a deeper rethinking of what “intelligence” means in a post-AI world. But as we redefine expertise, we must ask who we are leaving behind. The struggle over authority isn’t just academic; it is the frontline of a new digital divide.

