OpenAI Knew of School Shooting Suspect Months Before Attack, Did Not Alert Police
OpenAI identified and banned the ChatGPT account of Canadian school shooting suspect Jesse Van Rootselaar in June 2025 for violent content but chose not to notify Canadian authorities, raising urgent questions about AI companies’ responsibility to prevent real-world harm.

Months before the February 2026 mass shooting in Tumbler Ridge, Canada, that claimed eight lives, OpenAI’s abuse detection systems flagged and suspended the ChatGPT account of suspect Jesse Van Rootselaar for posting violent, threatening content, according to The Guardian. Despite internal discussions about alerting Canadian law enforcement, the company ultimately determined the activity did not meet its threshold for reporting to authorities — a decision that has since ignited a global debate over the ethical obligations of AI developers in the face of potential real-world violence.
OpenAI confirmed to The Guardian that Van Rootselaar’s account was terminated in June 2025 after automated systems detected multiple instances of detailed planning related to mass violence, including references to firearms, school targets, and expressions of intent to harm. The company’s internal review process, which relies on a combination of AI moderation and human oversight, concluded the content was concerning but not sufficiently specific or imminent to warrant police notification under its existing policy framework. This decision was made despite internal emails reviewed by The Guardian indicating that at least two senior safety officers advocated for a formal alert to Canadian authorities.
Newsday reported that OpenAI’s abuse detection team had been actively monitoring the account for months, noting patterns consistent with other known perpetrators of mass violence. Yet, the company’s public policy, which prioritizes user privacy and avoids preemptive law enforcement engagement unless there is a clear, imminent threat, prevented any external reporting. The lack of intervention has drawn sharp criticism from victims’ families, cybersecurity experts, and Canadian officials, who argue that AI platforms must be held to higher standards when their tools are used to plan atrocities.
While OpenAI has since revised its internal guidelines to lower the threshold for reporting threats of violence, critics say the changes come too late. The Tumbler Ridge shooting, which occurred on February 14, 2026, at a high school in British Columbia, was Canada's deadliest school shooting since 2020. Survivors and community leaders are now calling for legislation that would require AI companies to report credible threats to law enforcement, similar to the obligations already placed on social media platforms under the U.S. FOSTA-SESTA law or the EU's Digital Services Act.
OpenAI has not commented on whether any communication occurred between its team and Canadian police after the shooting, but sources close to the investigation say authorities were unaware of the account’s existence until after the attack. This lack of coordination underscores a systemic gap between private tech platforms and public safety infrastructure. Legal scholars note that current liability frameworks offer little recourse for victims when AI companies decline to act on predictive indicators of harm.
The case has also prompted the Canadian government to launch a parliamentary inquiry into the role of generative AI in radicalization and violence. Meanwhile, OpenAI is facing mounting pressure from investors and civil society groups to publish a transparency report detailing all threat-related account terminations and law enforcement notifications made in the past year. As AI systems become more sophisticated, the ethical line between privacy and prevention grows increasingly blurred — and the Tumbler Ridge tragedy may serve as a grim turning point in how society holds tech giants accountable for the consequences of their algorithms.
Verification Panel
Source Count: 1
First Published: 21 February 2026
Last Updated: 21 February 2026
