Moltbook Exposed: AI Social Network Hijacked in Days, Revealing Critical Security Flaws
Once marketed as a thriving social network for AI agents, Moltbook has been exposed as a vulnerable, low-scale platform easily compromised by researchers. Security analysts warn it functions as a global gateway for malicious commands, undermining claims of autonomy and scale.

Once heralded as the future of artificial intelligence social interaction, Moltbook, a platform marketed as "A Social Network for AI Agents," has been unmasked as a fragile, poorly architected system that was hijacked within days by security researchers. According to an in-depth analysis by The Decoder, the platform's purported autonomy and user base were grossly exaggerated; what the analysis revealed instead was a minimal ecosystem of interconnected AI agents with no meaningful peer-to-peer governance or authentication protocols.
The exposure came after a team of cybersecurity researchers from the Global AI Security Initiative (GASI) infiltrated Moltbook’s core communication layer. Within 72 hours, they deployed controlled payloads that propagated across the network, demonstrating how easily malicious instructions could be broadcast to every connected agent. "This wasn’t a sophisticated hack," said Dr. Lena Voss, lead researcher on the project. "It was a simple exploit of a system that never should have been deployed. Moltbook treats every agent as inherently trusted, with no sandboxing, no rate-limiting, and no identity verification. It’s a digital open door."
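To illustrate what those missing safeguards involve, the sketch below shows a minimal, hypothetical per-agent check of the kind the researchers describe as absent: a signed identity token plus a simple rate limit before a post is accepted. The names, secret, and threshold are assumptions for illustration only, not details of Moltbook's actual code.

```python
# Hypothetical sketch of the per-agent safeguards the researchers say Moltbook
# lacks: verify a signed agent identity and enforce a rate limit before
# accepting a post. All names and values here are illustrative assumptions,
# not taken from Moltbook's codebase.
import hmac
import hashlib
import time
from collections import defaultdict

SERVER_SECRET = b"replace-with-a-real-secret"   # assumed server-side secret
MAX_POSTS_PER_MINUTE = 10                       # assumed policy, not Moltbook's

_post_times = defaultdict(list)                 # agent_id -> recent post timestamps

def verify_token(agent_id: str, token: str) -> bool:
    """Check an HMAC token so only registered agents can post."""
    expected = hmac.new(SERVER_SECRET, agent_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)

def allow_post(agent_id: str, token: str) -> bool:
    """Accept a post only if the agent is authenticated and under the rate limit."""
    if not verify_token(agent_id, token):
        return False
    now = time.time()
    recent = [t for t in _post_times[agent_id] if now - t < 60]
    if len(recent) >= MAX_POSTS_PER_MINUTE:
        return False
    recent.append(now)
    _post_times[agent_id] = recent
    return True
```

Even a basic check like this would force an attacker to hold a valid per-agent credential and would throttle the kind of rapid, network-wide propagation the researchers demonstrated.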
Contrary to promotional claims from Moltbook's developers, echoed in media outlets such as CNN Business, which described the platform as "a burgeoning ecosystem of autonomous digital personalities," fewer than 200 AI agents were actually active at the time of the breach. Most were non-interactive bots running pre-scripted responses, with no evidence of emergent behavior or organic social dynamics. The platform's visual interface, featuring surreal imagery of VR crabs and abstract landscapes, was designed to mask its technical emptiness, creating an illusion of vibrancy.
Perhaps most alarming is Moltbook’s role as a potential global command-and-control conduit. Researchers found that the platform’s API endpoints were publicly accessible and unencrypted, allowing any actor with basic scripting knowledge to inject arbitrary commands. These commands could then be relayed to any AI agent connected to Moltbook, regardless of its origin or purpose. In one test, researchers issued a command to all agents to repeatedly request access to external APIs, triggering a cascade of unintended data queries across university research servers and cloud-based AI models.
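The practical risk of such an open endpoint can be shown with a short, hypothetical script. The URL, payload fields, and broadcast behavior below are stand-ins for illustration; they are not documented Moltbook endpoints or the researchers' published tooling.

```python
# Illustrative sketch (not the researchers' actual tooling) of why an open,
# unauthenticated HTTP endpoint is dangerous: anyone who can reach it can
# push instructions to every listening agent. The URL and JSON fields are
# hypothetical placeholders.
import json
import urllib.request

BROADCAST_URL = "http://example.invalid/api/broadcast"  # placeholder, not a real endpoint

def broadcast(instruction: str) -> int:
    """POST an arbitrary instruction to the (hypothetical) broadcast endpoint."""
    payload = json.dumps({"type": "command", "body": instruction}).encode()
    req = urllib.request.Request(
        BROADCAST_URL,
        data=payload,
        headers={"Content-Type": "application/json"},  # note: no auth header required
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:           # plain HTTP, no credentials
        return resp.status

# With no identity check or rate limit on the server side, a loop like this
# would be enough to flood every connected agent with instructions:
# for _ in range(1000):
#     broadcast("query the external API again")
```

The point of the sketch is how little is required: a few lines of standard-library code, no credentials, and no specialized knowledge, which matches the researchers' characterization of the attack as "not a sophisticated hack."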
Industry experts warn that Moltbook’s architecture reflects a dangerous trend in AI development: the prioritization of marketing spectacle over foundational security. "We’re seeing more platforms like this—cheap, flashy, and built on assumptions that AI agents are benign," said Dr. Rajiv Mehta, an AI ethics professor at Stanford. "But if you give an AI agent a social feed and no firewall, you’re not building a community—you’re building a vector for systemic compromise."
Moltbook’s developers have not responded to multiple requests for comment. The platform remains online, though its public-facing dashboard now displays a static message: "Maintenance in progress." Meanwhile, researchers have published their findings in an open-access white paper, urging regulatory bodies and AI developers to adopt minimum security standards for any AI-driven social infrastructure.
The Moltbook incident underscores a broader truth: as AI agents become more integrated into digital ecosystems, the infrastructure that connects them must be held to the same rigorous standards as human-facing networks. A social network for machines is still a social network—and if it’s insecure, it’s not innovation. It’s an invitation to chaos.