UFT’s $23m Sell‑Out: Educators Betrayed in the AI Gold Rush
AFT and UFT announce $23m collaboration with Big Tech AI companies. Yet there is still no MOA with the City to address AI issues and ensure protections exist. Who is union leadership working for?
On June 10, 2025, the United Federation of Teachers (UFT), alongside the American Federation of Teachers (AFT), signed a sweetheart deal with OpenAI, Microsoft, and Anthropic to launch the so‑called National Academy for AI Instruction. What reads like a public‑spirited initiative designed to provide a national model for AI-integrated curriculum and training is nothing short of complicity in letting the Big Tech overlords sink their tentacles into our schools and our teacher union.
Curiously, the press release and launch came last week, on July 8th — nearly a month after the agreement.
Trading Teacher Trust for Corporate Clout
Let’s be clear: AFT/UFT’s agreement to create this “AI Academy” weaponizes educators as marketing assets for billionaire‑backed AI firms. With $12.5 million from Microsoft; $8 million in funding plus $2 million in “technical resources” from OpenAI; and $500,000 from Anthropic, along with sweet perks like “priority access,” tokens, API credits, and “technical support,” what this really buys is influence—Big Tech’s influence over curricula, classroom tools, teacher training, and ultimately, the pedagogical process itself. Teachers should be the bulwark against unfettered data‑mining and algorithmic bias—not sales reps for proprietary software.
Particularly concerning is the lack of any information about the governance structure of this new organization. There is no mention on the Academy’s rudimentary, bare-bones website nor in any of the AFT publicity materials of who will sit on its board except for Roy Bahat, head of Bloomberg Beta, a venture capital firm that invests in these companies. Why this doesn’t represent a profound conflict of interest is not explained.
There is also no information about whether any privacy experts, advocates or parents will be appointed to the Board of Directors, or the advisory board, which should be an absolute requirement.
The AFT’s Well-Meaning ‘Common Sense’ Guardrails
In June 2024, the American Federation of Teachers (AFT) introduced its “Commonsense Guardrails” on the use of artificial intelligence in schools. The document aimed to offer a thoughtful starting point for how AI should—and shouldn’t—be used in classrooms. It highlights the importance of teacher voice, student privacy, and transparency, and encourages school communities to stay in the driver’s seat when choosing new technologies. AFT leaders called the framework a much-needed step to protect students and educators from the growing influence of unregulated AI. It also provides some solid collective bargaining strategies its locals can use to address AI issues. Notably, however, the AFT’s largest local—the UFT in New York City—still doesn’t have a memorandum of agreement in place regarding AI use in city schools.
That said, many educators, advocates and researchers have raised concerns that the AFT’s proposal doesn’t go far enough. For instance, the guide recommends using tools like GPTZero to detect AI-written work, even though these tools have been shown to mislabel human writing as AI-generated.
More broadly, the framework places too much responsibility on individual schools and districts without pushing for broader state or federal oversight to help level the playing field. It also doesn’t fully address some of the deeper risks AI brings to education—like baked-in bias, student surveillance, or the long-term influence and entrenchment of private tech companies in public schools.
The core guidelines are largely aspirational—emphasizing shared accountability and democracy—but they don’t specify who enforces what or detail mechanisms for accountability. There’s minimal direction on ethics boards’ mandates, evaluation criteria for AI tools, or enforcement protocols.
Without stronger enforcement, clearer rules, and better protections, critics worry that these “guardrails” could end up being more of a suggestion than a safeguard.
History of Failure: No AI Safeguards in Last Contract
Despite educators’ growing anxieties, the last UFT contract did nothing to stem the tide of AI intrusion even as it encouraged the expansion of virtual learning in city schools. Not only was there no protection for educators’ intellectual property—like lesson plans or class assessments—but there was zero mention of key issues such as data ownership, algorithmic transparency, student and teacher privacy, job protection or ethical boundaries in AI’s classroom role.
Contrast that with SAG‑AFTRA: in 2023–24, they took to the picket lines demanding—and winning—clauses protecting performers from AI “digital replicas,” forced usage, and likeness appropriation. Educators? Nada. And yet we’re right on the cusp of a digital swirl. UFT’s failure here isn’t oversight—it’s union dereliction and negligence.
Why Educators Deserve a Memorandum of Agreement (MOA)
An MOA on AI isn’t optional—it’s an urgent necessity. Here’s why:
Ownership & Compensation: If AI mines a teacher’s lesson plan—or even paraphrases it—the educator should own it or be paid for it. AI companies are increasingly desperate for high-quality data – which is running out, as websites are barricading themselves against AI data-mining bots.
Consent & Transparency: Teachers must know what teacher or student data is fed into AI tools, who controls it, and how the product works.
Algorithmic Fairness: AI should not replicate systemic biases in grading or classroom support—teachers must retain final say in all decision-making.
Opt‑out Rights: Educators and students should have the legal right to reject AI tools without penalty. In fact, the state student privacy law specifically prohibits the use of student data to improve products – which nearly all AI programs already do.
Limitations on Use: Many studies show that frequent use of AI can cause cognitive decline in both teachers and students, along with the loss of critical thinking and creativity. AI has also been shown to produce hallucinations, exhibit biases, and, as we saw just this week with Grok, generate hate speech and racism. What specific protections will be maintained against this?
Protections against cheating: An article in the Wall Street Journal revealed that for nearly two years, OpenAI has had something like a watermark that can reliably detect when someone uses ChatGPT to write an essay or research paper. The company hasn’t released this tool, presumably because doing so would impair its corporate competitiveness.
There are also huge environmental costs to the use of Generative AI, which relies on vast amounts of energy, leading to the further degradation of our climate. How responsible is that?
Without these guarantees, what the AFT and UFT are rolling out essentially opens a data‑trove for Big Tech to exploit, to the potential detriment of our students, our jobs, and the planet itself.
Protecting Our Jobs: AI Cannot Replace Human Educators
There is a growing, unspoken fear among educators—and it’s not unfounded. AI isn’t just coming for lesson plans; it’s coming for our jobs as well. Roles like curriculum specialists, instructional coaches, data analysts, and even classroom teachers are being eyed by districts eager to cut costs and boost “efficiency.” Bill Gates has proclaimed that over the next decade, advances in AI will mean humans will no longer be needed “for most things,” including teaching. To add to these worries, AI is already being used to observe and evaluate teachers.
Yet nowhere in the AFT’s announcement is there specific language about a commitment to protect people from being replaced by software and algorithms. There is no language guaranteeing that AI will be used only to support—not supplant—educators. No collectively bargained guardrails to ensure that AI won’t be used to downsize staff, eliminate curriculum departments, or reduce instructional decision-making to a set of algorithms written in Silicon Valley.
If UFT is truly representing its members, it must fight to codify protections that guarantee AI tools will not result in the loss of educator jobs or diminish the professional autonomy and craft of teaching. Anything less is a green-light for corporate displacement masquerading as “innovation.”
Privacy Nightmare: Surveillance Disguised as “Support”
Microsoft and OpenAI are not charities—they’re voracious data‑mining enterprises poised to harvest student and teacher data via API integrations. This deal grants them “access to … school learning systems”. Do you really trust multi‑national corporations to respect FERPA or state and local privacy laws when profit is at stake?
Beyond simple privacy, there’s the chilling effect: classrooms become monitored ecosystems where every lesson, comment, and assignment is captured, analyzed, and potentially monetized. UFT has seemingly failed to secure any binding collectively bargained limits or to advocate for robust regulations to ensure data deletion, privacy protections, and third‑party privacy impact assessments and cybersecurity audits. That’s not just negligence—it’s a betrayal of trust.
Lest we forget, just a few weeks ago, OpenAI was pushing hard for a provision in the federal budget bill that would have blocked states and localities from regulating the use of AI for ten years. Luckily, because of widespread public outrage, it was stripped from the bill at the last minute by a 99-1 vote of the Senate; that wouldn't have happened if CEO Sam Altman had had his way.
Final Verdict: UFT Needs a Revolution, Not a Press Release
UFT’s foray into Big Tech alliances is less “empowering educators” and more “empowering Big Tech.” Without a binding memorandum of agreement that includes our employers—one that guarantees teacher ownership, data privacy, algorithmic ethics, compensation structures, job protections and opt‑out rights—this is nothing more than complicity in Big Tech’s takeover of public education in exchange for more UFT Teacher Center patronage jobs and payoffs to fund the leadership’s machine.
One needs to remember how badly the AFT and UFT’s alliance with Bill Gates and Big Tech ended with the Common Core, teacher and school evaluation based on test scores, and inBloom. This could be far worse. Geoffrey Hinton, the Nobel Prize–winning scientist often called the godfather of AI, has warned that without tough regulations, this technology could threaten the future of humanity itself.
Educators deserve better than glossy press announcements and “AI hubs.” They don’t just need training—they need legally enforceable protections. It’s time for the UFT to step up, demand a real MOA that involves rank and file cooperation, and hold Big Tech accountable—before every lesson becomes another line of code on someone else’s balance sheet and a notch on their belt.
And while you’re at it, Weingarten and Mulgrew: Let members read this actual agreement with Big Tech so we can know how to further protect ourselves.