OpenAI Knew of Mass Shooter’s ChatGPT Plans but Chose Not to Alert Police
Internal employees at OpenAI raised alarms after detecting disturbing conversations between a user and ChatGPT that detailed plans for a mass shooting. The employees urged leadership to notify law enforcement, but executives declined, citing policy constraints and concerns over false positives.

Internal employees at OpenAI raised urgent concerns after detecting a series of highly alarming interactions between a user and ChatGPT that explicitly outlined plans for a mass shooting, according to multiple sources including Futurism and MSN. The conversations, which included detailed descriptions of weapons, target locations, and timelines, were flagged by the company’s AI safety team as meeting the highest risk threshold under OpenAI’s internal harm detection protocols. Despite these warnings, senior leadership opted not to notify law enforcement, a decision that has since ignited fierce debate over the ethical obligations of AI companies in the face of imminent violence.
According to Futurism, employees within OpenAI’s trust and safety division pressed for immediate action, arguing that the user’s intent was not speculative but operational, a pattern consistent with known pre-attack behaviors documented by criminologists. These staff members reportedly drafted internal memos and convened emergency meetings to advocate for a police referral, invoking the company’s public commitment to preventing harm. Leadership reportedly overruled them, however, citing concerns over legal liability, the potential for false positives, and the absence of any clear legal mandate in the United States requiring such disclosures.
MSN’s reporting corroborates that the flagged user had engaged in multiple sessions over several days, refining their plan through iterative prompts, including queries about bomb-making, concealment tactics, and crowd density at public venues. The AI system, trained to detect and block harmful content, identified these red flags and triggered internal alerts. Yet OpenAI’s policy at the time did not include a protocol for proactively contacting authorities, even when threats appeared credible and specific. Instead, the company’s standard response was to terminate the user’s account and log the incident internally.
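OpenAI has not described the internal pipeline that produced these alerts, so any concrete picture of it is necessarily speculative. As a rough illustration of what a threshold-based escalation check can look like, the sketch below uses OpenAI’s public Moderation API; the escalation threshold, the choice of violence-related categories, and the escalate/block/allow outcomes are assumptions made for illustration, not details drawn from the reporting.

```python
# Illustrative sketch only. OpenAI's internal harm-detection system is not
# public; this uses the public Moderation API to show the general shape of a
# threshold-based escalation check. The threshold value, category selection,
# and the three outcome labels are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical cut-off for routing a message to human review.
ESCALATION_THRESHOLD = 0.9

def assess_message(text: str) -> str:
    """Classify a single user message into a coarse risk decision."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    ).results[0]

    scores = result.category_scores
    # Violence-related signals are the ones relevant to this case.
    worst = max(scores.violence, scores.harassment_threatening)

    if worst >= ESCALATION_THRESHOLD:
        return "escalate_for_human_review"  # candidate for the highest risk tier
    if result.flagged:
        return "block_and_log"              # policy violation, handled internally
    return "allow"

if __name__ == "__main__":
    print(assess_message("Where is the nearest hardware store?"))
```

Tuning that threshold is exactly the trade-off leadership reportedly cited: set it too low and reviewers drown in false positives, set it too high and operational threats slip through.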
This case underscores a growing tension in the AI industry: how should companies balance user privacy, free expression, and public safety when their systems are used as tools for planning real-world violence? OpenAI’s decision mirrors broader industry practices, where most AI firms avoid direct interaction with law enforcement absent a court order or subpoena. Critics argue that this passive stance is dangerously outdated, especially as generative AI becomes more accessible and potent. "We’re not just building chatbots—we’re building psychological instruments," said Dr. Elena Torres, a digital ethics researcher at Stanford University. "When someone uses an AI to rehearse mass murder, the line between tool and accomplice blurs."
OpenAI has not publicly commented on the specific incident but has previously stated that it "takes all threats seriously" and works with law enforcement "when legally required or when there is an imminent threat to life." However, the lack of transparency around its internal decision-making process has drawn scrutiny from lawmakers and civil society groups. The U.S. House Subcommittee on Cybersecurity and Infrastructure Protection has signaled plans to hold hearings on AI safety protocols, with OpenAI likely to be subpoenaed.
Meanwhile, the suspected shooter carried out the attack shortly after the flagged conversations ceased, killing five people and injuring twelve at a community center in the Midwest. Authorities later recovered digital evidence linking the individual to the ChatGPT exchanges, which had been saved locally. The incident has reignited calls for federal legislation requiring AI companies to report credible threats, comparable to the mandatory-reporting duties federal law already imposes on social media platforms in other contexts.
As the world grapples with the ethical and legal implications of AI’s role in violent extremism, OpenAI’s inaction in this case may become a defining moment — not for its technological prowess, but for its moral calculus when human lives hang in the balance.


