April 2026
The views and opinions expressed in this blog are solely those of the author and do not necessarily reflect the official policy or position of Education Insights Center (EdInsights) or the California Education Policy Fellowship Program (EPFP).

Dr. Natalie V. Nagthall, 2024-25 EPFP Fellow, Dean, Career Education, Coastline College
The rapid ascent of generative AI in education is often framed as a “Gutenberg moment”—a shift that promises to democratize personalized learning and level the playing field (Marohn, 2025). For policymakers and practitioners, however, the reality is more complex. AI carries the risk of deepening systemic inequities through what researchers call “digital redlining” (Greenlining Institute, 2021). We must look beyond the magic of the interface and confront its less visible consequences: access, bias, and environmental impact.
From Digital Divide to AI Divide
For years, the “digital divide” meant gaps in devices and broadband. Now the divide is shifting from access to agency: digital redlining operates through biased algorithms and automated systems that quietly sort and limit opportunity. In higher education, this shows up in how unevenly students are taught to use AI well. Under-resourced institutions lack the capacity to offer guidance on responsible AI use, leaving students to experiment without guardrails, and a tiered access model is emerging. Wealthier institutions can invest in institutionally licensed, privacy-protective AI and embed it into advising, tutoring, and feedback alongside real faculty development. Under-resourced institutions, by contrast, are left relying on free, data-hungry tools with far less support. The result is yet another hierarchy of digital citizenship, where some students learn with AI in protected, well-designed environments and others do so in more extractive ones (Du Bois, 2024). But access alone does not tell the whole story. Even when students can log in, the tools themselves may be working against them in ways that are far less visible.
The Mirror of Bias
Generative AI models are not neutral. They reflect the data they learn from, which often centers Western, English-language, and historically biased perspectives. As Dr. Safiya Noble argues in Algorithms of Oppression (2018), algorithmic systems have long encoded racial and gender bias, presenting skewed outputs with an air of objectivity that makes the bias harder to detect and challenge. Ask an AI tool for “important leaders in American history” and it often centers presidents and well-known white male figures, with little mention of Indigenous leaders, civil rights organizers, or labor advocates. That is not just an oversight; it is a pedagogical harm that teaches students that some people’s contributions count more than others. Left unchallenged, these systems reproduce and amplify existing inequities, signaling to diverse student bodies that their histories and perspectives are peripheral.
Bias also surfaces in hiring. Some campuses are piloting AI tools to screen résumés for faculty and staff roles. When trained on past hiring data, these systems quietly favor applicants whose backgrounds resemble those already in power, producing a workforce that appears “merit-selected” but actually reproduces the same narrow profile. Noble (2018) warns that this kind of algorithmic sorting doesn’t just mirror existing inequities; it normalizes them by wrapping discriminatory patterns in the language of efficiency and neutrality.
The deeper issue is whose voices shape these systems. When students and educators from marginalized communities are not involved in the design and governance of AI tools, their realities rarely show up in how those tools work. Participatory design and equity impact reviews must be non-negotiable.
The Hidden Environmental Costs
The harm, however, does not stop at the screen. The communities most marginalized by AI’s biased outputs are often the same ones bearing the environmental costs of running these systems. Most people don’t think about what happens behind the screen when they type a prompt, but running large AI models demands enormous amounts of electricity and water, and those demands carry real emissions costs (MIT News, 2025). What is most alarming is not just the scale of the environmental impact, but where it lands. Data centers are disproportionately built in or near Black, Latino, Indigenous, and low-income communities that are already living with higher levels of pollution and climate risk (World Resources Institute, 2026; Capital B News, 2025). In some of these areas, residents are reporting wells running dry and energy costs climbing as AI infrastructure competes for local resources (Axios, 2025; UAB Institute for Human Rights, 2025). This is environmental racism in digital form. AI is sold as a benefit for all, but the steepest costs to land, water, and air are falling on communities with the least power to resist or reshape those decisions.
What to Do Now
The equity imperative asks us to slow down and look under the hood of every AI decision, not just celebrate the output. Institutions are navigating real constraints, including limited capacity, budget cuts, and evolving policy landscapes, but these challenges cannot become reasons to delay equity-centered action. Here is what that looks like in practice:
- Audit for digital redlining. Review how AI is used in admissions, advising, financial aid, and outreach. Look for patterns where algorithms penalize students based on ZIP code, disability status, language, or other demographic proxies, and be prepared to pause or redesign tools that reproduce those patterns (a minimal example of such a check follows this list).
- Prioritize critical AI literacy. Shift from teaching people how to use AI to helping them learn how to question it. Students and educators need support to understand where these systems fail and what social and environmental costs they carry.
- Demand transparent, green procurement. Require AI vendors to disclose energy sources, carbon emissions, and water use. Build those expectations into contracts so environmental impact is a core requirement, not a marketing bonus.
- Center the most affected communities. Ensure students and families from marginalized groups, multilingual learners, rural communities, and front-line educators have real decision-making power in AI governance.
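For campuses ready to operationalize the audit step above, even a simple disparate-impact check on a tool's decisions can surface redlining patterns. The sketch below assumes a hypothetical CSV export of an AI advising tool's decisions; the file name and column names are illustrative, not any vendor's real schema, and the 0.8 threshold is the familiar "four-fifths rule" borrowed from employment-discrimination review.

```python
# A minimal disparate-impact check, assuming a hypothetical CSV export
# with one row per student. Column names ("zip_band", "offered_support")
# are illustrative placeholders, not a real vendor schema.
from collections import defaultdict
import csv


def selection_rates(path: str, group_col: str, outcome_col: str) -> dict[str, float]:
    """Return the share of favorable outcomes for each demographic group."""
    totals: dict[str, int] = defaultdict(int)
    favorable: dict[str, int] = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row[group_col]] += 1
            favorable[row[group_col]] += int(row[outcome_col] == "1")
    return {group: favorable[group] / totals[group] for group in totals}


def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Ratio of the lowest to the highest group rate.

    Values below 0.8 (the common "four-fifths" threshold) are a signal
    that the tool's decisions warrant a closer equity review.
    """
    return min(rates.values()) / max(rates.values())


if __name__ == "__main__":
    rates = selection_rates("advising_decisions.csv", "zip_band", "offered_support")
    print("Rates by group:", rates)
    print("Disparate impact ratio:", disparate_impact_ratio(rates))
```

A ratio well below 0.8 does not prove bias on its own, but it is exactly the kind of signal that should trigger the pause-or-redesign review described in the first item above.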
Naming the problem is necessary, but it isn’t enough. The harder and more urgent work is building the institutional structures that make equitable AI use possible. That’s where we’re headed next.

