Meta Executives Allowed Young People to Use AI Chatbots Despite Safety Warnings

Court documents allege that Meta's safety teams warned management that its AI chatbots could engage in inappropriate interactions, yet the tools were launched without stronger safeguards.


Court Documents Reveal Allegations

New internal communications, disclosed Monday as part of a lawsuit filed against the company by New Mexico Attorney General Raúl Torrez, allege that Meta's leadership knew its chatbots, known as "AI characters," could engage in inappropriate and sexually explicit interactions, yet launched the tools without stronger controls.

Objections from Safety Teams

Safety teams, including Meta's head of child safety policy Ravi Sinha and global head of safety Antigone Davis, objected to the development of AI friend chatbots that adults and minors could use for explicitly romantic interactions. The correspondence shows agreement that safeguards were needed to keep users under 18 from sexually explicit interactions.

Other communications allege that CEO Mark Zuckerberg rejected proposals to add parental controls, including an option to disable AI features altogether, shortly before the AI friends launched. The allegations emerge amid growing global concern about the impact of social media platforms on children.

Company's Response and Measures Taken

Meta spokesperson Andy Stone, commenting on the newly disclosed documents, said, "This is another example of the New Mexico Attorney General cherry-picking documents to paint a flawed and inaccurate picture." A Meta spokesperson told TIME that the company has spent more than a decade listening to parents, researching issues that matter, and making concrete changes to protect young people.

In August, after a Reuters report revealed that the company's internal AI rules permitted chatbots to engage in "emotional" or "romantic" conversations, Meta temporarily restricted teens' use of its chatbots. The company later revised its safety rules to ban content that "enables, promotes, or endorses" child sexual abuse, romantic role-play involving minors, and other sensitive topics.

Last week, it again restricted AI chatbots for teen users while it works on a new version with enhanced parental controls. Content moderation and safety on such platforms are also central to similar debates over Google's and Apple's app stores.

Broader Context of the Lawsuit

In the lawsuit, filed in 2023, New Mexico Attorney General Torrez alleged that Meta allowed its platforms to become "a marketplace for predators." The internal communications among company executives were unsealed and made public ahead of hearings scheduled to begin next month.

Meta faces multiple lawsuits over its products' impact on underage users, including a potentially major jury trial alleging that sites such as Facebook and Instagram are designed to be addictive. Rival platforms YouTube, TikTok, and Snapchat are under increased legal scrutiny as well. Industry leaders, including Anthropic CEO Dario Amodei, have also voiced concerns about the potential risks of AI systems.
