AI Chatbots Begin Verifying Users' Age

Technology companies are developing methods such as automatic age estimation to protect children from the potential risks of AI chatbots. However, the accuracy of these systems, privacy concerns, and political debates are complicating the process.

Technology giants are moving to verify the age of users who interact with AI chatbots, driven largely by growing concern that children may be exposed to inappropriate content. Age verification, however, is not only politically contentious but also technically difficult.

OpenAI's Automatic Age Estimation System

In a blog post published last week, OpenAI announced that it will roll out an automatic age estimation system. The company says the model it has developed predicts whether a user is under 18 by weighing various signals, such as the time of day a person is active. For users the system classifies as minors, ChatGPT will apply filters that reduce exposure to violent or sexual role-play content. YouTube introduced a similar system last year.
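To make the idea concrete, the sketch below shows how a signal-based age gate of this kind could work in principle. It is purely illustrative: OpenAI has not published its model, so the signals, weights, and threshold here are invented for demonstration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SessionSignals:
    local_hour: int             # hour of day the user is active (0-23)
    account_age_days: int       # how long the account has existed
    stated_age: Optional[int]   # self-reported age, if the user gave one

def estimate_minor_probability(s: SessionSignals) -> float:
    """Combine weak behavioral signals into a rough probability of being under 18."""
    score = 0.0
    if s.stated_age is not None and s.stated_age < 18:
        score += 0.6                      # self-report is the strongest signal here
    if 8 <= s.local_hour <= 15:           # heavy use during school hours
        score += 0.2
    if s.account_age_days < 30:           # very new account
        score += 0.2
    return min(score, 1.0)

def restricted_mode(prob_minor: float, threshold: float = 0.5) -> bool:
    """Return True when stricter content filters should be applied."""
    return prob_minor >= threshold

signals = SessionSignals(local_hour=10, account_age_days=12, stated_age=None)
if restricted_mode(estimate_minor_probability(signals)):
    print("Filtering violent and sexual role-play content for this session.")
else:
    print("Standard content policy applies.")
```

The point of the design is that no single signal is decisive; the system aggregates weak evidence and errs toward restriction, which is also why misclassification, discussed below, is unavoidable.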

Technical Challenges and the Privacy Dilemma

These systems are not flawless, however: misclassification can label an adult as a child or a child as an adult. Users mistakenly flagged as under 18 may have to prove their age by submitting a selfie or an official ID document to a verification company called Persona. Selfie-based checks are known to fail more often for people with certain disabilities and for those with darker skin tones.

Sameer Hinduja, co-director of the Cyberbullying Research Center, points out that having a company such as Persona store millions of official IDs and biometric records creates a new security vulnerability. In the event of a data breach, he warns, very large populations could be put at risk.

Apple's Device-Based Solution Proposal

Apple CEO Tim Cook advocates a different approach: he has urged US lawmakers to require age verification at the device level. Under this model, a parent would set a child's age during initial phone setup; that information would be stored on the device and shared securely with apps and websites. The approach is consistent with Apple's broader AI and privacy strategy.
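A conceptual sketch of what such a device-level signal might look like follows. Apple has not published an API for this, so the class and method names are invented to illustrate the idea; the key property is that apps receive only a coarse age bracket, never the birth date or an ID document.

```python
from datetime import date
from enum import Enum

class AgeBracket(Enum):
    UNDER_13 = "under_13"
    TEEN_13_17 = "13_17"
    ADULT_18_PLUS = "18_plus"

class DeviceAgeSignal:
    """Holds the age a parent enters during device setup and exposes only a
    coarse bracket to apps, so no birth date or ID ever leaves the device."""

    def __init__(self, birth_year: int):
        self._age = date.today().year - birth_year   # stays on the device

    def bracket(self) -> AgeBracket:
        if self._age < 13:
            return AgeBracket.UNDER_13
        if self._age < 18:
            return AgeBracket.TEEN_13_17
        return AgeBracket.ADULT_18_PLUS

# An app or website queries only the bracket, never the underlying data:
device = DeviceAgeSignal(birth_year=2012)
if device.bracket() is not AgeBracket.ADULT_18_PLUS:
    print("Enable minor-safe mode in the chatbot.")
```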

Age Verification Battles in the Political Arena

The issue has turned into a political battleground in the US. Under laws backed by the Republican Party and adopted in several states, sites containing adult content must verify the age of their users. Critics argue that such regulations could be used to block a much broader range of content, such as sex education, that could be labeled 'harmful to minors.'

States like California are passing laws directly targeting AI companies, aimed at protecting children who converse with chatbots. These developments are considered part of broader global debates on technology ethics.

The FTC's Decisive Role

A key actor in how this process unfolds is the Federal Trade Commission (FTC), the agency responsible for enforcing these new laws. The Commission, however, has grown increasingly politicized during the Trump presidency. Last December, the FTC overturned a Biden-era decision concerning an AI company, arguing that it contradicted Trump's AI Action Plan, a move that shows how much the direction of regulation depends on the political climate.

Developments in age verification are not limited to the US. The European Union is also investigating platforms over content generated by models such as Grok, and the oversight of AI-generated content is raising new questions in financial systems as well.

While technical and ethical challenges persist, it remains unclear who will take responsibility for age verification. Experts emphasize that what is needed is not just code but systems of accountability. Solving this complex problem will require balanced collaboration among companies, regulators, and society.