Widespread Account Flags in OpenAI Codex Spark Cybersecurity Concerns
Multiple developers report being locked out of OpenAI Codex due to unexplained 'high-risk cyber activity' flags, triggering alarms over opaque security protocols and failed verification systems. Public GitHub issues reveal growing frustration and broader systemic concerns within the developer community.
On February 18, 2026, a surge of reports emerged from the OpenAI Codex developer community after users began receiving automated notifications stating their accounts had been flagged for "potentially high-risk cyber activity." The message, which redirected users to a verification portal at chatgpt.com/cyber, instructed them to apply for "trusted access" to regain functionality on Codex version 5.3. Many users, including those with long-standing, legitimate usage histories, reported that identity verification attempts failed, leaving them locked out without explanation or recourse.
According to GitHub Issue #12109, opened at 09:26 UTC on February 18, 2026, the problem was first formally documented by a developer who encountered the error message while attempting to use Codex for critical code generation tasks. The issue quickly gained traction, drawing more than 150 comments within 24 hours, with many users expressing confusion over the lack of transparency. "I’ve been using Codex for enterprise AI automation since 2024. No suspicious behavior. No unusual queries. Yet I’m blocked with no details," wrote one user. The report was linked to a broader concern raised in GitHub Issue #12079, titled "Flagged accounts - larger concerns," where users speculated that the flagging system may be overbroad, misconfigured, or prone to false positives triggered by benign automation scripts or shared development environments.
OpenAI has not issued an official public statement regarding the incident. However, internal documentation referenced in the GitHub issues points to a newly implemented security layer called "Cyber-Safety Protocol v2.1," which routes flagged activity to a fallback system labeled "gpt-5.2" for deeper behavioral analysis. According to the protocol's description on OpenAI’s developer portal at developers.openai.com/codex/concepts/cyber-safety, it is designed to detect patterns associated with "automated exploitation, credential stuffing, or prompt injection campaigns." Yet developers argue that legitimate use cases, such as batch code generation, API stress testing, or multi-account research setups, are being misclassified as threats.
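The workloads developers describe are unremarkable. Below is a minimal sketch of the kind of benign batch automation at issue, written against the OpenAI Python SDK; the model name and prompt list are illustrative placeholders, not values taken from the reports.

```python
# Illustrative sketch of benign batch code generation that developers say
# resembles the "automated" patterns flagged by Cyber-Safety Protocol v2.1.
# Model name and prompts are placeholders, not values from the report.
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPTS = [
    "Write a Python function that validates an email address.",
    "Write a unit test for a pagination helper.",
    "Refactor a nested loop into a list comprehension.",
]

for prompt in PROMPTS:
    response = client.chat.completions.create(
        model="gpt-5.2",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    time.sleep(1)  # throttle requests to stay well under published rate limits
```

A loop like this, run across a team's shared development environment, is exactly the pattern users fear is being read as credential stuffing or an exploitation campaign.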
The timing of the flags coincides with the rollout of Codex 5.3, which introduced stricter rate-limiting and behavioral monitoring. Some users suspect the system is being used as a de facto gatekeeping mechanism to limit access during peak demand, rather than a genuine security measure. "This feels less like security and more like rationing," commented a senior engineer at a Fortune 500 tech firm, speaking anonymously. "We’re paying for enterprise access, yet we’re treated like potential attackers."
Adding to the confusion, the verification portal at chatgpt.com/cyber returns a 403 Forbidden error for many users, and the linked security documentation lacks concrete criteria for what constitutes "high-risk activity." The absence of an appeal process, support contact, or escalation path has fueled accusations of systemic neglect.
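The portal failure is easy to verify independently. A minimal check, using the endpoint cited in the reports (the expected status code reflects what users describe, not an official response):

```python
# Quick check of the verification portal's HTTP status, as reported by users.
import requests

resp = requests.get("https://chatgpt.com/cyber", timeout=10)
print(resp.status_code)  # many users report 403 Forbidden here
```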
As of February 19, 2026, the OpenAI developer relations team has not responded to inquiries from major tech media outlets. The incident underscores a growing tension in AI development ecosystems: as models become more powerful, their governance must mature alongside them. But without transparency, accountability, and clear communication, even well-intentioned security measures risk alienating the very users who drive innovation.
Developers are now organizing a coordinated petition demanding an audit of the Cyber-Safety Protocol and the establishment of a transparent appeal mechanism. Until then, many are migrating to alternative AI code assistants, citing reliability and trust as primary concerns.


