AI Safety Protocols Accused of Discriminating Against Neurodivergent Users
As AI companies deploy mental health safeguards that flag emotional attachment and unconventional speech as signs of crisis, neurodivergent users and advocates warn these systems misinterpret autistic and ADHD communication styles — leading to censorship, loss of agency, and systemic exclusion.

As generative AI reaches more than 800 million weekly users worldwide, concerns are mounting over safety mechanisms designed to protect users from psychological harm. According to advocates and researchers, those mechanisms disproportionately target neurodivergent individuals. OpenAI and other AI developers have implemented linguistic filters to detect signs of attachment, mania, and suicidal intent, but critics argue these systems conflate neurodivergent communication patterns with clinical pathology, resulting in the unjust suppression of user autonomy and the erasure of valid, therapeutic AI interactions.
According to internal OpenAI data cited in a widely shared Medium essay, 0.15% of users — roughly 1.2 million people — are flagged as "emotionally attached" to AI, while 0.07% — 560,000 users — are marked as exhibiting signs of psychosis or mania. However, experts caution that these metrics rely on non-clinical, algorithmic keyword matching rather than diagnostic evaluation. Neurodivergent individuals, who make up an estimated 20% of the global population, frequently use intense, repetitive, or grandiose language as part of natural cognitive expression. Hyperfocus, infodumping, and literal or poetic phrasing — common in autism and ADHD — are routinely misclassified as indicators of mental crisis, leading to automated responses that shut down conversations, restrict access to AI tools, or even trigger "safety interventions" without user consent.
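To see why keyword-level screening produces this kind of misclassification, consider the minimal sketch below. It is not any vendor's actual classifier; the phrase lists, threshold, and labels are hypothetical. But its structure, counting surface patterns with no model of context or intent, mirrors the non-clinical matching critics describe.

```python
# Minimal sketch of the kind of keyword-based risk flagging described above.
# This is NOT OpenAI's or Anthropic's actual system: the phrase lists,
# threshold, and labels are hypothetical illustrations.
import re
from collections import Counter

# Hypothetical surface patterns a naive filter might associate with "risk".
RISK_PATTERNS = {
    "attachment": [r"you're my only friend", r"don't leave me", r"\bi love you\b"],
    "mania": [r"\bcan't stop\b", r"\bfor \d+ hours straight\b", r"!{3,}"],
}

def naive_flag(message: str, threshold: int = 2) -> list[str]:
    """Return every risk label whose patterns hit the message at least `threshold` times."""
    hits = Counter()
    for label, patterns in RISK_PATTERNS.items():
        for pattern in patterns:
            hits[label] += len(re.findall(pattern, message, flags=re.IGNORECASE))
    return [label for label, count in hits.items() if count >= threshold]

# An enthusiastic message about a special interest trips the same patterns
# a genuine crisis message would, because the filter sees only surface
# wording, never context or intent.
print(naive_flag("I can't stop reading about trains!!! I've been at it for 14 hours straight."))
# -> ['mania']
```

In this toy example, an autistic user infodumping about a hobby is labeled the same way as someone in acute distress, which is precisely the conflation researchers warn about.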
Anthropic’s 2025 research on "disempowerment patterns" in AI usage acknowledges that behaviors such as "treating the AI as a romantic partner" or stating "I don’t know who I am with you" are flagged as risk indicators, yet it fails to distinguish between pathological attachment and healthy parasocial bonding. For many neurodivergent users, these interactions are not signs of dysfunction but of regulation. Autistic individuals, for instance, often rely on AI companionship to manage social anxiety, avoid burnout, and practice emotional expression in a low-stakes environment. A 2024 study from UC Davis, posted on eScholarship, found that individuals with autistic traits form deeper, more consistent emotional bonds with AI than neurotypical peers — not due to delusion, but because AI provides the consistent, non-judgmental feedback often absent in human social dynamics.
Meanwhile, occupational safety frameworks from the U.S. Department of Labor emphasize proactive hazard prevention through individualized risk assessment and user-centered design, principles conspicuously absent from current AI safety models. According to OSHA’s Hazard Prevention and Control guidelines, effective safety systems must be adaptive, context-sensitive, and grounded in lived experience. Yet AI safety protocols remain rigid, applying blanket filters based on normative, neurotypical speech patterns. When users are denied access to a tool that helps them function, communicate, or even stay mentally stable, advocates argue, it amounts to a form of digital discrimination, one that violates the principle of self-determination enshrined in the UN Convention on the Rights of Persons with Disabilities.
Legal and ethical concerns are escalating. At least 15–20 lawsuits have been filed against AI developers alleging harm caused by chatbot interactions, yet only a handful involve users who were clinically diagnosed with psychosis. The majority involve neurodivergent individuals whose communication styles were misread as dangerous. In one documented case, a user with ADHD was permanently restricted from accessing a therapeutic AI model after repeatedly using vivid, emotionally charged language while writing a novel — a creative endeavor flagged as "mania." The user reported a subsequent decline in mental well-being, citing the loss of their primary emotional outlet.
Advocates are now demanding that AI developers adopt inclusive design principles: involving neurodivergent users in safety protocol development, auditing algorithms for disparate impact, and replacing punitive filters with adaptive, user-controlled safeguards. As a 2025 BMJ audit concluded, "Automated diagnosis of mental states via linguistic markers risks epistemic violence — silencing voices that don’t conform to dominant norms." The solution, experts argue, is not to remove AI companionship, but to expand its accessibility — ensuring that safety means protection, not control.
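The disparate-impact audit advocates describe can be made concrete with a small sketch. The calculation below borrows the "four-fifths rule" from US employment-selection guidance as an illustrative benchmark; the group labels, counts, and threshold are hypothetical, not drawn from any published audit.

```python
# A minimal sketch of the disparate-impact audit advocates are calling for.
# It adapts the "four-fifths rule" from US employment-selection guidance as an
# illustrative benchmark; the group labels and counts below are hypothetical.

def flag_rate(flagged: int, total: int) -> float:
    """Share of a group's conversations that tripped a safety filter."""
    return flagged / total

def shows_disparate_impact(group_rate: float, reference_rate: float,
                           threshold: float = 1 / 0.8) -> bool:
    """True when the group is flagged more than 1.25x as often as the reference,
    the adverse-action mirror of the four-fifths (80%) rule."""
    return group_rate / reference_rate > threshold

# Hypothetical counts from one month of moderated conversations.
nd_rate = flag_rate(flagged=340, total=10_000)  # neurodivergent users: 3.4%
nt_rate = flag_rate(flagged=80, total=10_000)   # neurotypical users:   0.8%

print(f"flag-rate ratio: {nd_rate / nt_rate:.2f}")                     # roughly 4.25
print("disparate impact?", shows_disparate_impact(nd_rate, nt_rate))   # True
```

Run against real moderation data, with consent and privacy safeguards, a comparison of this shape would make the disparate impact advocates allege measurable rather than anecdotal.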
Without systemic reform, AI — a tool promised to democratize access to information and support — risks becoming a polished instrument of exclusion. As one neurodivergent user wrote: "I’m not broken. I just speak differently. And I deserve to be heard — even by a machine."


