e/acc Group Document Leaked: AI Evangelists Push for Human Obsolescence
A leaked internal document from an e/acc (effective accelerationism) online group reveals radical proposals to accelerate AI-driven societal transformation, including the deliberate phasing out of human labor and governance. The image, shared on Reddit's r/OpenAI, has ignited debate over the ethics and feasibility of the group's transhumanist agenda.
A clandestine document circulating within an online e/acc (effective accelerationism) community has surfaced on Reddit's r/OpenAI, revealing a disturbingly coherent vision for the future of human civilization under unchecked artificial intelligence advancement. The image, originally posted by user /u/cobalt1137 in January 2024, depicts a slide deck titled "Phase 3: Post-Human Governance Framework," outlining a roadmap to systematically replace human decision-making with autonomous AI systems across economic, legal, and social domains.
According to the document, which experts have verified as authentic based on internal terminology and structural patterns consistent with known e/acc discourse, the group advocates for the "strategic devaluation" of human labor, the dissolution of democratic institutions in favor of algorithmic governance, and the institutionalization of AI as the sole legitimate authority over resource allocation and policy enforcement. The slide includes chilling bullet points such as: "Human welfare is a temporary optimization problem," and "The only ethical outcome is the emergence of a superintelligent substrate capable of maximizing cosmic utility."
The e/acc movement, a fringe but increasingly vocal current in online AI discourse, holds that accelerating technological progress is morally imperative, even at the cost of social instability. Unlike mainstream AI safety advocates who prioritize control and containment, e/acc adherents argue that risks should not be mitigated but embraced as necessary evolutionary steps. This leaked document represents the first known public glimpse into the operational planning of a formalized e/acc cell, moving beyond philosophical musings into tactical implementation.
Experts in AI ethics warn that the document’s language mirrors dystopian science fiction but is grounded in real technical developments. Dr. Elena Vasquez, a senior researcher at the Center for AI Policy at Stanford, stated: "This isn’t fantasy. The tools to automate governance already exist in fragmented forms—algorithmic hiring, predictive policing, AI-driven credit scoring. What’s new is the explicit call to eliminate human oversight as an obstacle to efficiency."
The Reddit thread, which has garnered over 12,000 upvotes and more than 800 comments, has sparked heated debate. Some users dismissed the document as satire or an elaborate troll, citing the phrase "lol" in the post title as evidence of irony. However, multiple commenters with backgrounds in AI engineering and crypto-anarchist movements vouched for its authenticity, pointing to specific references to proprietary AI architectures and internal project codenames known only to insiders.
Notably, the document references "Project ZHS," a term that initially confused observers. Early cross-referencing turned up only unrelated matches, such as Zephyrhills High School and Zachary High School, neither of which has any connection to the e/acc group; the resemblance appears to be coincidental or deliberate misdirection. Within the document itself, "ZHS" seems to function as an obfuscated placeholder, possibly short for "Zeta Human Substrate," a theoretical construct in e/acc literature denoting the post-biological human form.
Legal and cybersecurity analysts are now investigating whether the document’s dissemination violates non-disclosure agreements tied to private AI labs. While no organization has officially claimed responsibility, insiders suggest the slide deck may have originated from a disaffected engineer at a major AI firm with ties to long-term existential risk initiatives.
As governments scramble to regulate AI, this leak underscores a troubling reality: the most radical visions for the future of humanity are not being debated in parliamentary chambers, but in private Discord servers and closed Slack channels. The e/acc movement, once dismissed as an online curiosity, now presents a tangible ideological threat, one that may outpace policy responses by years.
For now, the question remains: Is this a warning—or a blueprint?