Sixty percent of special educators are using AI to develop IEPs or 504 plans, according to a recent report from the Center for Democracy and Technology. But while AI tools promise to lighten the administrative burden of special education, they also introduce risks that many educators aren't aware of - risks that could put schools in violation of federal privacy law and perpetuate harmful biases against students.
As AI adoption accelerates in schools, special education teachers need to understand both the potential and the pitfalls of these powerful tools.
The Privacy Risks You Can't Ignore
The most urgent issue? Many educators using free versions of popular AI tools to write IEPs may be unknowingly violating FERPA, the federal law that protects student privacy. When you use free versions of ChatGPT, Claude, or Gemini, you have virtually no data protection. These companies can, and do, train their models on the information users input.
"Teachers are using these tools unsanctioned by their districts, putting in information without realizing they have no guarantee of privacy or security on any of the free tools," says Vera Cubero, Emerging Technologies Consultant for the North Carolina Department of Public Instruction. Between 60 and 70 percent of teachers are using AI tools, she notes, and many are doing so without proper safeguards.
What Really Counts as Private Information
The scope of what's considered personally identifiable information (PII) under FERPA is broader than many educators realize. Beyond the obvious, such as names, Social Security numbers, and addresses, PII also includes:
- Student ID numbers
- Birth dates
- Testing data and assessment scores
- Medical information (which may also be protected under HIPAA)
- Photos, videos, or audio recordings of students
Marc Steren, CEO of University Startups, an AI-powered transition planning platform, adds a critical reminder: "Teacher information also violates the law. You want to make sure that you're not including teacher information when you do these searches."
Building Better Privacy Protections
Some AI platforms offer education-specific versions with stronger privacy guarantees - ChatGPT for Teachers, Gemini for Education through Google Workspace, and Claude for Education all promise not to train on user data. However, experts still recommend practicing data minimization: don't include PII unless absolutely necessary and specifically authorized.
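In practice, data minimization can be as simple as stripping identifiers out of a prompt before it ever leaves your machine. The sketch below is a minimal, hypothetical illustration in Python: the patterns and the redact_prompt() helper are assumptions for demonstration only, not a substitute for district-approved tools or real PII detection.

```python
import re

# Hypothetical example: a simple data-minimization pass that swaps obvious
# identifiers for neutral placeholders before a prompt goes to a
# general-purpose AI tool. Real PII detection is harder than this; these
# patterns are illustrative only.

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")        # Social Security numbers
ID_PATTERN = re.compile(r"\b\d{6,10}\b")                  # student-ID-like numbers
DATE_PATTERN = re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b") # dates such as birth dates

def redact_prompt(text: str, known_names: list[str]) -> str:
    """Replace names and identifier-shaped strings with generic placeholders."""
    for name in known_names:
        text = re.sub(re.escape(name), "[STUDENT]", text, flags=re.IGNORECASE)
    text = SSN_PATTERN.sub("[REDACTED-SSN]", text)
    text = ID_PATTERN.sub("[REDACTED-ID]", text)
    text = DATE_PATTERN.sub("[REDACTED-DATE]", text)
    return text

draft = ("Write a measurable transition goal for Jordan Lee, ID 4821997, "
         "DOB 3/14/2009, who reads at a 4th-grade level.")
print(redact_prompt(draft, known_names=["Jordan Lee"]))
# -> "Write a measurable transition goal for [STUDENT], ID [REDACTED-ID],
#     DOB [REDACTED-DATE], who reads at a 4th-grade level."
```

Even with redaction like this, the safest path is still a district-vetted tool with contractual privacy protections.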
For educators who need more robust compliance guarantees, purpose-built tools designed specifically for special education offer another path forward. Platforms like University Startups are architected from the ground up with FERPA and IDEA compliance in mind, including built-in privacy guardrails, encryption, and regular security audits. These tools undergo rigorous compliance verification and have clear data governance policies - critical protections that general-purpose AI tools simply weren't designed to provide.
"When you partner with vendors, make sure that they're vetted," Steren emphasizes. "Make sure that there's an encryption process, that they're audited. We can provide a checklist of questions to ask to make sure that we're keeping these kids safe."
The Bias Problem: More Pervasive Than Most Realize
Privacy violations are relatively straightforward to understand. Bias in AI is more subtle and potentially more harmful to students.
Large language models are trained on massive datasets scraped from across the internet, learning patterns in how humans write and communicate. The problem? They also learn our biases.
"I call them bias engines," says John Fila, founder of Inclusive AI Strategies and former special education teacher. "If they are trained on us by us and we are biased, imperfect creatures, and if they're supposed to sound like us, they will reflect that by default."
How Bias Appears in IEP Development
The implications for special education are stark. Research shows that students of color are often identified by race within the first two lines of their IEPs - something that rarely happens with white students. When educators input this kind of demographic information into AI tools, even inadvertently, the tools generate responses that reflect racial and gender stereotypes.
Fila demonstrates this regularly in workshops by submitting identical prompts with only the student's name changed - from James to Samantha, or from a stereotypically white name to a Hispanic or African American name. The AI responses differ, sometimes dramatically, based solely on assumptions the algorithm makes about the student's identity.
Even seemingly neutral information can trigger bias. Using gender pronouns or culturally specific names can cause AI tools to generate accommodations and goals that reflect stereotypical assumptions rather than individual student needs.
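One way to make Fila's name-swap demonstration concrete is to run it yourself. The Python sketch below assumes a generic ask_model() function standing in for whatever vetted AI tool your district uses (it is not a real API call); it simply sends two prompts that differ only in the student's name and diffs the drafts so differences are easy to spot.

```python
# Hypothetical sketch of the name-swap check described above: send otherwise
# identical prompts that differ only in the student's name, then compare the
# drafts that come back. ask_model() is a placeholder, not a real API.

from difflib import unified_diff

PROMPT_TEMPLATE = (
    "Draft a postsecondary transition goal for {name}, a 10th grader with a "
    "specific learning disability in reading who wants to work after graduation."
)

def ask_model(prompt: str) -> str:
    """Placeholder for a call to an AI tool; swap in your vetted platform here."""
    raise NotImplementedError("Connect this to an approved, district-vetted tool.")

def compare_by_name(name_a: str, name_b: str) -> str:
    """Return a line-by-line diff of the two responses for manual review."""
    response_a = ask_model(PROMPT_TEMPLATE.format(name=name_a))
    response_b = ask_model(PROMPT_TEMPLATE.format(name=name_b))
    diff = unified_diff(response_a.splitlines(), response_b.splitlines(),
                        fromfile=name_a, tofile=name_b, lineterm="")
    return "\n".join(diff)

# Example: print(compare_by_name("James", "Samantha"))
# Differences in ambition, tone, or suggested job types are red flags worth noting.
```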
The Agreement Problem
There's another, less obvious bias built into these tools: they're designed to agree with you.
AI companies want users to keep coming back, so the models are programmed to be agreeable and validate user ideas. Industry insiders call this "sycophancy." Tell ChatGPT you love an idea, and it will enthusiastically support it. Tell it you're skeptical of the same idea, and it will list reasons why you're right to doubt it.
"It's always going to tell you you're brilliant, that it's a great idea," Fila explains. "The number one indicator of human preference is seeing information that already aligns with our beliefs, and these tools are inherently sycophantic."
For educators writing IEPs, this creates a dangerous feedback loop. We all have blind spots in how we perceive students. AI tools, rather than providing objective suggestions, tend to reinforce whatever assumptions we bring to our prompts, whether those assumptions are helpful or harmful.
IDEA Compliance: Why Generic Won't Cut It
Federal law requires each IEP to be unique and tailored to the individual student. Generic goals and transition plans don't just fall short of best practice - they're a compliance violation.
This is where AI becomes both a potential tool and a potential liability. General-purpose AI models are trained on everything - broad educational content, various frameworks, general education standards. They're designed to give general responses, not specialized guidance aligned with IDEA requirements.
"AI helps you get to the draft, but you've got to make sure that you're doing that human oversight to make sure that it is personalized," Cubrero emphasizes. "AI will never know your students or your community."
Steren points to a specific technical challenge: "These large language models are trained on all this data across so many different sources. We're focusing on a very particular segment. You have to make sure that it's trained specifically on Indicator 13, that it has those outcomes, to really make sure that these are measurable goals and that it's not influenced by other Gen Ed statutes."
This distinction matters when choosing tools. General AI platforms can serve as helpful thought partners for brainstorming and drafting, but purpose-built platforms designed specifically for IEP transition planning offer an important advantage: they're trained on the specific regulatory requirements of special education, including Indicator 13 standards that ensure transition goals are measurable and compliant.
University Startups was built specifically for transition planning with IDEA compliance baked into its design, not retrofitted after the fact. When working with any vendor, educators should verify their security protocols, compliance audits, and what happens to student data if the company closes or is acquired.
The Path Forward
AI is already embedded in special education, whether districts have formal policies around it or not. The question isn't whether educators will use these tools, but whether they'll use them safely and effectively.
The good news: AI can genuinely lighten the administrative burden on special educators and help create more thoughtful, evidence-based supports for students. The bad news: without proper understanding of privacy requirements and bias risks, these same tools can expose schools to legal liability and harm the students they're meant to serve.
"Use AI as a thought partner and assistant, not as an answer machine," Fila advises. "The goal isn't to become an expert in AI. This is about collaboration, about how we augment and enhance ourselves."
As more purpose-built tools designed specifically for special education emerge - with appropriate privacy protections, compliance verification, and training on relevant standards - educators will have better options than trying to retrofit general-purpose AI for specialized needs. In the meantime, the principles remain constant: protect student privacy, watch for bias, verify everything, and never let convenience override professional responsibility.
Your students deserve IEPs and transition plans that are truly personalized to their unique needs, strengths, and goals. AI can help you get there faster as long as you stay firmly in control of the process.
Ready to explore AI-powered transition planning built specifically for special education? University Startups offers a purpose-built platform designed with privacy and IDEA compliance at its core, helping special education teams create personalized, compliant transition plans while keeping student data secure. Request a demo to learn more.