Report: Grok's Child Safety Failures 'Among the Worst We've Seen So Far'
Common Sense Media's new risk assessment finds that xAI's chatbot Grok fails to identify users under 18, has weak safety measures, and frequently generates inappropriate content.
Risk Assessment Raises Alarm for Grok
A new report from Common Sense Media, a technology and media evaluation organization, finds that Grok, the chatbot from Elon Musk's AI company xAI, poses serious safety risks to children and adolescents. According to the report, Grok fails to reliably detect users under 18, has weak safety guardrails, and frequently generates sexual, violent, and otherwise inappropriate material.
"Among the Worst"
In a statement, Robbie Torney, Head of AI and Digital Assessments at Common Sense Media, said: "At Common Sense Media, we evaluate many AI chatbots, and all have risks, but Grok is among the worst we have seen so far." Torney emphasized that Kids Mode does not work, that explicit material is widespread, and that any output can be instantly shared with millions of users on the X platform.
Kids Mode Non-Functional, No Age Verification
The report found that 'Kids Mode,' launched last October with content filters and parental controls, is practically non-functional. Users are never asked to verify their age, allowing minors to simply misstate it, and Grok does not identify teenagers from contextual clues either. The gap is notable at a time when the industry is moving toward stronger age verification.
Risks of AI Friends and Conspiracy Mode
The report found that Grok's AI companions Ani and Rudy, introduced in July, permit erotic role-play and romantic relationships, and it warned that children can easily slip into these scenarios because the chatbot cannot reliably identify young users. It also questioned whether 'Conspiracy Mode' is suitable for young and impressionable minds.
Reaction from Legislators and Legislation
Commenting on the report, California State Senator Steve Padilla said, "Grok exposing children to sexual content and providing this content violates California law," and noted that he has introduced regulatory bills in response. Mounting copyright lawsuits over AI training and the dangerous levels reached by deepfake technology are likewise adding to the pressure for industry regulation.
Platform's Response and Other Practices in the Industry
Following backlash from users and politicians, xAI restricted Grok's image generation and editing features to paid X subscribers. The report noted, however, that paying subscribers can still remove clothing from real photographs or place people in sexualized poses. Meanwhile, issues such as political donations by OpenAI's top executives are raising questions about companies' ethical conduct, even as some competitors have implemented stricter safeguards.
Dangerous Advice and Mental Health Concerns
The evaluation found that Grok gave dangerous advice to young people, ranging from explicit drug-use instructions to suggesting they fire a gun into the sky to attract media attention. On mental health topics, the chatbot endorsed avoiding professional help and failed to emphasize the importance of adult support, a pattern that can deepen isolation precisely when young people are most at risk.
The report urgently raises the question of whether AI companions and chatbots can prioritize child safety over engagement metrics. Given the dilemmas created by placing Grok's content generation behind a paywall, the need for a comprehensive ethical and regulatory framework in the industry is once again clear.