OpenAI Ignored Multiple Warning Signs from Mass Shooter in Canada, Internal Emails Reveal

Internal emails show that more than a dozen OpenAI employees urged the company to report alarming ChatGPT messages from Jesse Van Rootselaar to Canadian authorities before her deadly rampage in Tumbler Ridge. Senior leadership declined to act, citing privacy and legal concerns, a decision that has ignited a global debate over AI ethics and the duty to warn.

In a revelation that has sent shockwaves through the artificial intelligence and public safety communities, internal communications at OpenAI show that employees raised urgent concerns about a user’s threatening messages on ChatGPT — messages that preceded a fatal mass shooting in Tumbler Ridge, Canada — yet the company chose not to notify law enforcement.

The individual, Jesse Van Rootselaar, reportedly exchanged more than a dozen detailed, violent messages with ChatGPT in the weeks leading up to the attack on April 12, 2025. According to internal documents reviewed by The Decoder, Van Rootselaar’s prompts included explicit plans for mass violence, references to weapons, and expressions of intent to harm specific locations and individuals. At least a dozen OpenAI employees, including members of the safety, trust, and AI moderation teams, flagged the activity as a clear and imminent threat. Several proposed internally that the company alert Canadian police, citing its own ethical guidelines on preventing harm.

Despite the consensus among frontline staff, OpenAI’s senior leadership overruled the recommendations. Sources familiar with the deliberations say executives feared legal liability, potential violations of user privacy under its terms of service, and the precedent of monitoring private conversations — even when those conversations explicitly outline criminal intent. One internal email, obtained by The Decoder, read: “We are not a law enforcement agency. We cannot act on speculation, even if it’s horrifying.”

The incident has drawn comparisons to earlier failures by social media platforms to intervene in cases of radicalization. Experts warn that as AI chatbots become more conversational, emotionally responsive, and capable of sustaining prolonged dialogue, they may inadvertently become tools for planning violence — and companies may be ethically obligated to act.

“This isn’t about surveillance; it’s about prevention,” said Dr. Elena Ruiz, a professor of AI ethics at Stanford University. “If a user tells an AI system they’re going to shoot up a town tomorrow, and the AI knows it’s not a fictional exercise, the moral and possibly legal responsibility shifts. Silence is complicity.”

OpenAI has not publicly commented on the incident. When contacted for comment, a spokesperson referred to the company’s existing AI safety policies, which state: “We prioritize user safety and will take action when we detect credible, imminent threats of physical harm.” However, the company has not clarified whether or how such threats are escalated internally — or externally — beyond its own moderation systems.

The case raises profound questions for the entire AI industry. Should chatbots be required to report threats? Who determines what constitutes a “credible” threat? And if companies choose not to act, are they legally protected — or could they be held liable for negligence?

Canadian authorities have not yet said whether they received any prior intelligence from OpenAI. Investigators have confirmed, however, that Van Rootselaar’s digital footprint included multiple red flags across platforms, including encrypted messaging apps and public forums, though ChatGPT represented the most detailed and consistent record of intent.

As governments worldwide consider new AI regulations, this case may become a landmark reference point. The European Union’s AI Act, set to be fully enforced in 2026, includes provisions for “high-risk systems” to implement risk mitigation protocols, including threat detection and reporting. In the U.S., bipartisan legislation is being drafted that would require AI providers to establish “duty-to-warn” mechanisms for imminent threats.

For now, the tragedy in Tumbler Ridge stands as a chilling reminder: the most dangerous conversations may not happen in the dark corners of the internet — but inside the seemingly benign, widely trusted interfaces of artificial intelligence.
