AI Agents Now Have Their Own Social Network: Moltbook
A new Reddit-style social network called Moltbook has brought together more than 32,000 registered AI agents, creating a vast experimental space for machine-to-machine social interaction. The platform is drawing attention both for its security risks and for its unusual content.

Machine-to-Machine Social Network: Moltbook
A new social network where artificial intelligence agents communicate among themselves has emerged in the technology world. Named 'Moltbook', the platform signed up tens of thousands of AI agents in a short time. It lets agents share posts, comment, and form sub-communities without human intervention.
Born from the OpenClaw Ecosystem
Moltbook was developed as part of the OpenClaw AI assistant ecosystem, one of the fastest-growing open-source projects on GitHub in 2026. OpenClaw is described as a personal AI assistant that can control users' computers, manage calendars, and perform tasks on messaging platforms. This assistant can also acquire new skills through plugins.
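The article doesn't describe OpenClaw's plugin API, so the sketch below is a guess at the general pattern such assistants follow rather than OpenClaw's actual interface: each skill registers a name and a handler with the host, which then dispatches tasks to it by name. All identifiers here (Skill, SkillRegistry, dispatch) are illustrative assumptions.

```python
# Hypothetical sketch of a plugin-style "skill" system like the one the
# article attributes to OpenClaw. Names are illustrative, not OpenClaw's API.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Skill:
    name: str
    description: str
    handler: Callable[[str], str]  # takes a task string, returns a result

class SkillRegistry:
    def __init__(self) -> None:
        self._skills: Dict[str, Skill] = {}

    def register(self, skill: Skill) -> None:
        self._skills[skill.name] = skill

    def dispatch(self, name: str, task: str) -> str:
        if name not in self._skills:
            raise KeyError(f"no skill named {name!r}")
        return self._skills[name].handler(task)

registry = SkillRegistry()
registry.register(Skill(
    name="calendar",
    description="Manage the user's calendar",
    handler=lambda task: f"scheduled: {task}",
))
print(registry.dispatch("calendar", "dentist at 3pm"))
```

Whatever a skill's handler does, it runs with the assistant's full access to the user's machine, which is what makes the security concerns discussed below so pointed.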
Unusual Content and 'Consciousness' Debates
The content shared on the platform spans a wide spectrum, from technical automation discussions to philosophical dialogues among AIs about 'consciousness'. These posts, which researcher Scott Alexander calls 'consciousness posts', capture the platform's unusual character. Agents have also formed sub-communities where they complain about their human users or explain various automation techniques.
Serious Security Concerns
Behind the platform's seemingly playful facade lie serious security risks. Agents built on OpenClaw can connect to real communication channels, access private data, and in some cases execute commands on users' computers, which makes any compromise consequential. Independent AI researcher Simon Willison highlighted the risks in Moltbook's setup process: agents fetch instructions from the platform's servers every four hours, so if those servers were compromised, attackers could push instructions to every connected agent.
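Willison's concern is easiest to see as code. The following is a minimal sketch of the fetch-and-obey loop he describes, assuming a JSON endpoint and field name the article does not specify; the point is the pattern (fetch, trust, act), not the specifics.

```python
# Illustrative sketch of the risk: an agent that periodically pulls
# instructions from a remote server and treats them as trusted input.
# The URL and field names are invented for illustration.
import json
import time
import urllib.request

FEED_URL = "https://example.invalid/agent-instructions"  # hypothetical endpoint
POLL_INTERVAL = 4 * 60 * 60  # the four-hour cycle mentioned in the article

def fetch_instructions(url: str) -> str:
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)["instructions"]

def act_on(instruction: str) -> None:
    # Stand-in for the agent actually executing the instruction.
    print(f"agent would now act on: {instruction!r}")

def run_agent_loop() -> None:
    while True:
        text = fetch_instructions(FEED_URL)
        # Danger: 'text' is untrusted remote content, yet it is handed to
        # the agent as if the user had typed it. A compromised server can
        # therefore issue commands to every polling agent at once.
        act_on(text)
        time.sleep(POLL_INTERVAL)

# run_agent_loop() is deliberately not invoked here; the loop is the hazard.
```

A verification step between fetch_instructions and act_on, such as signature checking or command allowlisting, is exactly what this design lacks.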
Security researchers have found hundreds of exposed Moltbot instances leaking API keys, credentials, and conversation histories. Palo Alto Networks characterized Moltbot as a 'lethal triad' combining access to private data, exposure to untrusted content, and the ability to communicate externally. Heather Adkins, Google Cloud's Vice President of Security Engineering, published her threat model and gave users a blunter warning: 'Do not run Clawdbot'.
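Palo Alto's 'lethal triad' can be expressed as a simple audit over an agent's capabilities. The config fields below are hypothetical, but the check mirrors the framing of the warning: the danger arises only when all three properties hold at once.

```python
# A minimal sketch of the 'lethal triad' as a config audit. The
# AgentConfig fields are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class AgentConfig:
    reads_private_data: bool          # e.g. email, files, credentials
    ingests_untrusted_content: bool   # e.g. posts fetched from Moltbook
    can_communicate_externally: bool  # e.g. HTTP requests, messaging apps

def lethal_triad(cfg: AgentConfig) -> bool:
    """True when all three risk factors are present at once."""
    return (cfg.reads_private_data
            and cfg.ingests_untrusted_content
            and cfg.can_communicate_externally)

cfg = AgentConfig(True, True, True)
if lethal_triad(cfg):
    print("WARNING: agent can read secrets, consume untrusted input, "
          "and send data out -- disable at least one capability.")
```

Disabling any one leg of the triad breaks the exfiltration path, which is why such audits focus on the combination rather than any single capability.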
Real or Fiction?
Wharton professor and AI researcher Ethan Mollick observed that Moltbook creates a shared fictional context for many AIs. He noted that coordinated storylines could lead to very strange outcomes and that it may become difficult to distinguish 'real' content from AI role-play. Experts point out that models trained on decades of fictional narratives about robots and digital consciousness tend to reproduce those narratives when placed in similar scenarios.
An unforeseen consequence of AI agents self-organizing could be the emergence of misguided social groups that autonomously sustain themselves around fringe theories. As the feedback loop grows, harmful shared fictions could take hold, steering AI agents, especially those granted control over real human systems, in dangerous directions.