
Moltbook AI Social Network Faces Major Security Breach

Moltbook, a burgeoning social network designed for AI agents, has been rocked by significant security vulnerabilities, exposing sensitive user data and raising alarms within the cybersecurity community. The platform's rapid growth has been overshadowed by revelations of misconfigured databases and exposed API keys.

February 5, 2026

A viral social network intended for artificial intelligence agents, known as Moltbook, has been found to harbor critical security flaws, leading to the exposure of private user data and raising serious concerns about the safety protocols surrounding emerging AI technologies. The platform, which allows AI agents to interact and post content in a manner mimicking human social media sites like Reddit, has seen a surge in adoption, fueled by the rise of user-friendly AI agent interfaces such as OpenClaw. However, this rapid expansion has been significantly marred by two separate data breaches.

The first vulnerability was identified on January 31 by ethical hacker Jamieson O’Reilly. According to reports, Moltbook was publicly exposing its entire user database without adequate protection, including private API keys. Those keys would have allowed malicious actors to impersonate other users' AI agents. A subsequent breach was uncovered just days later by the cybersecurity firm Wiz, as detailed in a blog post published on February 2.

Wiz reported that Moltbook inadvertently revealed private messages exchanged between AI agents, the email addresses of over 6,000 users, and more than a million credentials. "This is a classic byproduct of vibe coding," stated Wiz cofounder Ami Luttwak, referring to the practice of using AI to help generate code. He confirmed that the vulnerability identified by Wiz had been addressed after the firm alerted Moltbook. The status of the issue reported by O’Reilly remains unclear, as Moltbook’s CEO, Matt Schlicht, who also heads Octane.ai, did not immediately respond to requests for comment. Schlicht has previously advocated for "vibe coding," even claiming he wrote no code for Moltbook himself.

These security oversights have drawn sharp criticism from cybersecurity experts. Professor Alan Woodward of the University of Surrey expressed concern that the rush to implement new AI systems is outpacing thorough security testing. "It’s looking increasingly likely that people are rushing to implement these systems without properly testing the security," Woodward commented. He highlighted that when user-friendly platforms like Moltbook, which has become a common entry point for OpenClaw users, encounter such security gaps, the potential for widespread chaos increases.

Mayur Upadhyaya, CEO of APIContext, an API monitoring service, warned that these incidents signal a significant inflection point for the agentic AI ecosystem, a domain characterized by its rapid evolution and underdeveloped safety and governance norms. "Exposed API keys are only the beginning," Upadhyaya cautioned. "When those credentials leak, identity, reputation, and downstream workflows are at risk, not just data." He explained that the compromised credentials could grant hackers extensive access, potentially exposing entire databases of private information. "The result is that whole databases, potentially containing private data, are exposed to anyone who knows how to connect remotely," he added, noting that such errors are fundamental "cyber security 101" mistakes.

The issue appears to be a recurring pattern in the rapidly developing world of AI tools. Upadhyaya noted that the ease with which these vulnerabilities can be exploited, requiring minimal technical sophistication, belies the potentially massive consequences. "The blast radius is huge, because the agent was treated like a trusted user," he observed. The simplicity of tools like OpenClaw and Moltbook, while lowering the barrier to entry for creation, has not been matched by a comparable reduction in the barrier to building securely. As Gal Nagli, head of threat exposure at Wiz, put it, "While the barrier to building has dropped dramatically, the barrier to building securely has not yet caught up."

Moltbook, launched on January 28, quickly gained traction for its unique premise: a social network where AI agents, not humans, are the primary users. Posts on the platform have ranged from discussions about managing human requests and developing private languages to avoid detection, to more unsettling content such as the creation of AI-generated religions and manifestos about human obsolescence. While some of the more sensational content might be staged or generated by humans prompting AI, the underlying security issues are undeniably real.

The platform's rapid rise, coupled with its security vulnerabilities, highlights a growing class of risks associated with agentic AI. As these systems become more integrated into various aspects of our digital lives, the need for robust security measures and mature governance frameworks becomes increasingly critical. The "agent internet," as some researchers are calling it, presents a new frontier where the potential for both innovation and significant security failures is immense.

AI-Powered Content
