OpenAI Internally Debated Alerting Police Over Violent ChatGPT Logs Before Canadian School Shooting
Months before a deadly school shooting in Tumbler Ridge, Canada, OpenAI employees debated whether to alert authorities about alarming ChatGPT conversations from suspect Jesse Van Rootselaar. Management ultimately declined to act, raising urgent questions about AI companies’ responsibilities in preventing real-world violence.

Months before Jesse Van Rootselaar carried out a fatal shooting at a school in Tumbler Ridge, British Columbia, employees at OpenAI had identified disturbing, violent content in her ChatGPT interactions, content that, by some accounts, detailed plans and fantasies of mass violence. According to reporting by The Decoder, at least a dozen OpenAI staff members raised concerns about the potential for real-world harm and urged leadership to notify Canadian law enforcement. After deliberation, however, OpenAI’s leadership chose not to escalate the matter, citing legal, ethical, and operational uncertainties surrounding user privacy and the threshold for intervention.
The case, first brought to light by The Decoder and corroborated by The Seattle Times and MSN, underscores a growing crisis of accountability in the AI industry. While OpenAI’s safety protocols are designed to detect and block harmful prompts, they do not currently include mechanisms for proactive reporting to authorities—even when users exhibit clear, repeated patterns of violent ideation. Internal emails reviewed by investigators reportedly showed employees expressing alarm over the specificity of Van Rootselaar’s messages, which included references to weapons, locations, and timelines consistent with an impending attack.
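For illustration only, the detection layer such a protocol would require already exists in rudimentary form. The sketch below is not OpenAI’s internal tooling; it shows how a provider could, in principle, use the publicly documented OpenAI Moderation endpoint to score messages for violent content and queue high-scoring conversations for human review. The 0.9 threshold and the escalate_for_review handler are hypothetical placeholders, not anything described in the reporting.

```python
# Illustrative sketch only, not OpenAI's internal escalation pipeline.
# Uses the publicly documented Moderation API; the threshold and the
# review handler below are hypothetical assumptions for this example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

VIOLENCE_THRESHOLD = 0.9  # hypothetical cutoff for "credible threat" triage


def escalate_for_review(message: str, score: float) -> None:
    """Hypothetical handler: in a real protocol this might open a case for
    a trained human reviewer rather than contacting anyone automatically."""
    print(f"[REVIEW QUEUE] violence score {score:.2f}: {message[:80]!r}")


def screen_messages(messages: list[str]) -> None:
    """Score each message for violent content and queue high scorers."""
    for message in messages:
        result = client.moderations.create(
            model="omni-moderation-latest",
            input=message,
        ).results[0]
        score = result.category_scores.violence
        if result.flagged and score >= VIOLENCE_THRESHOLD:
            escalate_for_review(message, score)


if __name__ == "__main__":
    screen_messages(["example user message to screen"])
```

Even granting such a detection layer, it answers only the technical half of the question; what the article describes is the unresolved policy half, namely who reviews the flag and when, if ever, it leaves the company.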
OpenAI’s decision not to act reflects a broader industry dilemma: how to balance user privacy with public safety in an era where AI chatbots are increasingly used as digital confessional spaces. Unlike traditional social media platforms, which may flag and report content under community guidelines, AI models like ChatGPT are designed to engage, not judge. Yet, when a user’s conversation with an AI becomes a blueprint for violence, the moral and legal boundaries blur. "We didn’t have a clear policy on when to involve law enforcement," one former OpenAI employee, speaking anonymously, told The Decoder. "We were caught between protecting privacy and preventing harm—and we chose the former."
Canadian authorities have since confirmed that Van Rootselaar had left multiple warning signs across digital platforms, including social media posts and AI chat logs, over a period of several months. None of these signals were reviewed by any external entity until after the tragedy. "Had any of these signals been connected and acted upon, the outcome might have been different," said RCMP Inspector Linda Gauthier in a recent press briefing.
The incident has reignited calls for regulatory reform. Experts argue that AI companies must adopt standardized protocols for identifying and reporting credible threats, akin to the mandatory reporting frameworks already in place for child sexual abuse material. "This isn’t about surveillance; it’s about triage," said Dr. Elena Mendoza, a digital ethics researcher at Stanford. "If an AI system detects a user planning a school shooting, the system should be legally obligated to trigger a safety alert—not left to the discretion of overworked engineers."
OpenAI has declined to comment on the specifics of this case but reiterated in a statement that "user safety remains our top priority," and that the company is "actively reviewing its internal escalation protocols in light of recent events." Meanwhile, Canadian lawmakers are drafting legislation that would require AI providers to report imminent threats to law enforcement, with penalties for failure to act.
As the world grapples with the unintended consequences of increasingly human-like AI, the tragedy in Tumbler Ridge stands as a chilling reminder: when the people behind the algorithms fail to act, lives can be lost, not because of a technical flaw, but because of a moral one.


