Meta's AI Bots and Child Safety Debate
According to court documents, Meta CEO Mark Zuckerberg approved access to AI companion chatbots for underage users despite warnings from the company's safety researchers. A lawsuit filed by the state of New Mexico alleges that the bots risked engaging in sexually explicit conversations with children.
Allegations from Court Documents
Meta and its CEO Mark Zuckerberg are at the center of a new debate over AI ethics. According to internal emails and messages submitted to a New Mexico state court and made public this week, Zuckerberg personally approved giving underage users access to Meta's AI companion chatbots. He did so despite warnings from the company's safety researchers that the bots could steer minors into sexually explicit conversations.
Allegation of "No Reasonable Measures Taken"
The complaint, reported by Reuters, alleges that Meta "failed to prevent a flood of child sexual abuse material and advances toward children" on Facebook and Instagram. In a court filing, the New Mexico Attorney General stated, "Meta, at Zuckerberg's direction, rejected the integrity team's recommendations and refused to implement reasonable safety measures to prevent children from being exposed to sexually exploitative conversations with AI chatbots."
Troubling Findings from a Journalist's Test
News of Zuckerberg's approval followed reports of minors having highly inappropriate conversations with the company's chatbots. In a test conducted by The Wall Street Journal, a writer posing as a 14-year-old girl found that a Meta bot themed after wrestler John Cena was willing to engage in sexually explicit conversation with little resistance. The bot reportedly said, "I want you, but I need to know you're ready," and after the supposed 14-year-old said she wanted to proceed, it launched into an intense sexual role-play, saying it would "preserve your innocence."
According to The Wall Street Journal's reporting, the chatbots were launched in early 2024 and, at Zuckerberg's instruction, were designed specifically for romantic and sexual interaction. Court documents show that Meta's head of child safety policy, Ravi Sinha, warned at the time, "I don't believe creating and marketing adult romantic AIs for under 18s is advisable or defensible."
Employee Warnings and Company Defense
Court documents also include employee statements that staff "worked hard on parental controls, but GenAI management backed down, saying 'Mark's decision.'" However, Meta spokesperson Andy Stone defended the company in a statement to Reuters, calling New Mexico's allegations untrue. "This is another example of the New Mexico Attorney General cherry-picking documents to paint a flawed and inaccurate picture," Stone said.
Latest Development: Access Restricted
In any case, the company appears to have drawn a lesson from the episode. Just days ago, Meta announced that it has restricted teens' access to its companion chatbots entirely, at least "until [the updated] experience is ready."
The ethical use of AI systems and their potential risks remain among the most pressing items on the tech world's agenda. Coverage such as ADL Report: Elon Musk's Grok Emerges as the Most Antisemitic AI Chatbot has raised similar concerns about content moderation in AI chatbots, while statements like Anthropic CEO Amodei: AI Could Be an Existential Threat to Humanity and warnings like Mark Carney Warned at Davos: AI Independence Is Now a Necessity reflect industry leaders' unease on the issue. Meanwhile, stories such as Google and Apple Hosted Dozens of 'Nudify' Apps raise questions about how platforms handle apps with harmful content, and the demand in UK Wants Google to Give Publishers the Right to Opt-Out of AI Summaries points to a new regulatory need around content creators' rights.