
OpenAI Follows Discord in Rolling Out AI-Powered Age Verification

OpenAI is implementing an AI-driven age verification system that predicts users' ages based on usage patterns before requesting government ID or selfie verification, mirroring Discord’s recent controversial rollout. The move sparks renewed debate over privacy, bias, and the ethics of biometric data collection in consumer AI platforms.


In a significant shift in user authentication policy, OpenAI has announced it will begin employing artificial intelligence to predict users’ ages based on behavioral patterns before potentially requiring government-issued ID or a selfie for verification. This move comes just days after Discord faced widespread backlash for its own global age verification rollout, which mandated facial scans or document uploads for full platform access. According to PC Gamer, OpenAI’s system will analyze usage metrics—including query frequency, language complexity, and interaction duration—to determine whether a user is likely under 18, triggering a request for formal identification only if deemed necessary. The strategy represents a more nuanced, AI-mediated approach compared to Discord’s blanket requirement, but critics warn it still raises profound privacy and civil liberties concerns.
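OpenAI has not published details of how its age-prediction model works. As a purely illustrative sketch, gating logic of the kind described above, where behavioral signals produce a risk score and only flagged accounts are asked for formal ID, might look something like this; the signal names, weights, and thresholds here are invented for illustration and are not OpenAI's:

```python
from dataclasses import dataclass

@dataclass
class UsageSignals:
    queries_per_day: float      # query frequency
    avg_word_length: float      # crude proxy for language complexity
    avg_session_minutes: float  # interaction duration

def likely_minor(signals: UsageSignals, threshold: float = 0.5) -> bool:
    """Toy score combining the three signals named in the article.
    All weights and cutoffs are hypothetical."""
    score = 0.0
    if signals.queries_per_day > 50:
        score += 0.3
    if signals.avg_word_length < 4.0:
        score += 0.4
    if signals.avg_session_minutes > 90:
        score += 0.3
    return score >= threshold

# Only users flagged by the predictive step would be asked for ID:
user = UsageSignals(queries_per_day=60, avg_word_length=3.5,
                    avg_session_minutes=120)
if likely_minor(user):
    print("request_id_verification")
```

The point of such a two-stage design is friction reduction: the vast majority of accounts never reach the ID or selfie step, which is the distinction OpenAI is drawing against Discord's blanket requirement.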

Discord’s decision, reported by The Verge, was framed as a compliance measure to meet global child safety regulations, particularly in the European Union and the United States. The platform will begin enforcing mandatory age verification next month, requiring users to submit either a government ID or a live facial scan to continue using its full suite of features. The backlash was swift: digital rights advocates, privacy experts, and users alike condemned the move as invasive, exclusionary, and technically flawed. Concerns included the risk of false positives, the storage of biometric data, and the potential for discrimination against marginalized groups whose appearances may not align with algorithmic assumptions of age.

OpenAI’s approach attempts to mitigate some of these issues by deploying predictive analytics rather than immediate biometric collection. “We’re not asking everyone for a selfie,” an OpenAI spokesperson told reporters on condition of anonymity. “Our goal is to minimize friction for adult users while ensuring robust protections for minors.” The company’s updated Privacy Policy, which went live this week, now includes provisions for “behavioral age inference” as a lawful basis for processing personal data under Article 6 of the GDPR. However, the policy remains vague on how the AI model is trained, what data points are used, and whether users can opt out of the predictive system without being forced into ID submission.

Privacy researchers are skeptical. “Predictive age estimation is a black box with high potential for error,” said Dr. Lena Torres, a digital ethics fellow at Stanford’s Center for Internet and Society. “These models are trained on datasets that reflect societal biases—skin tone, speech patterns, even typing speed—and they often misclassify adolescents of color, neurodivergent users, or non-native English speakers as adults. That’s not just a technical flaw; it’s a civil rights issue.”

Meanwhile, OpenAI’s timing has drawn comparisons to Discord’s misstep. While Discord’s rollout was criticized for being abrupt and poorly communicated, OpenAI appears to be attempting a more calculated rollout—leveraging Discord’s public controversy as a cautionary case study. Yet, by adopting a similar underlying philosophy—that user identity must be verified by corporate gatekeepers—the company may be repeating the same mistakes under a different guise.

Legal experts are also watching closely. In the U.S., the Children’s Online Privacy Protection Act (COPPA) requires verifiable parental consent for collecting data from children under 13, but does not mandate biometric collection. In the EU, the Digital Services Act (DSA) imposes stricter obligations on platforms to protect minors, while also requiring proportionality and data minimization. OpenAI’s system may sit within a legal gray area, but it risks eroding public trust in AI services more broadly.

As both companies move forward, the broader tech industry faces a pivotal question: Should access to digital services hinge on invasive identity verification? For now, OpenAI users will see subtle changes in their interactions—prompted questions about age, occasional requests for verification, and new disclosures in privacy notices. But the underlying tension remains: between safety and surveillance, innovation and infringement, convenience and consent.

