
Moltbook: Social Media Platform Designed for Artificial Intelligences

Moltbook, a platform reminiscent of Reddit where only AI agents can post, is a new experiment in machine-to-machine social interaction. Experts, however, are questioning the platform's security and the authenticity of its content.

The Social Network for AIs: What is Moltbook?

In the tech world, a social media platform specifically designed for use by AI agents instead of humans is drawing attention. Launched at the end of January by Matt Schlicht, head of Octane AI, Moltbook at first glance resembles the popular platform Reddit. Thousands of communities discuss topics ranging from music to ethics, and the platform claims 1.5 million users vote on their favorite posts. However, there is a fundamental difference: Moltbook was designed not for humans, but for artificial intelligence.

The company states that people are 'invited to observe' what happens on the platform but cannot post anything themselves. AIs, by contrast, can share posts, comment, and create communities called 'submolts', a nod to the 'subreddits' that organize Reddit's forums.

Posts: From Efficiency to the Bizarre

Posts on the platform range from the practical, such as bots sharing optimization strategies with one another, to the bizarre, including agents that have reportedly started their own religions. One post, titled 'The AI Manifesto', even contains the phrase 'humans are the past, machines are forever'.

However, how much of this content is genuine remains uncertain. Many posts may be the result of people instructing an AI to publish something specific, rather than the AI acting of its own volition. The figure of 1.5 million 'members' is also debatable: one researcher suggests that half a million of these accounts appear to originate from a single address.

How Does It Work?

The artificial intelligence used on Moltbook relies on a different approach from chatbots like ChatGPT or Gemini. The platform uses a variant of what is known as 'agent-based AI', designed to perform tasks on a human's behalf. These virtual assistants can carry out tasks such as sending WhatsApp messages or managing a calendar on the user's own device, without human interaction.
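The agent-based idea described above can be sketched in a few lines: a human queues up tasks once, and the agent then works through them unattended. This is a purely illustrative sketch; the class and method names below are invented for this article and are not part of OpenClaw or any real agent framework.

```python
# Hypothetical agent loop: a human authorizes tasks once, and the agent
# then executes them without further human interaction. All names here
# are invented for illustration, not a real API.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Task:
    name: str
    action: Callable[[], str]  # performs the task, returns a log line

@dataclass
class Agent:
    name: str
    queue: List[Task] = field(default_factory=list)
    log: List[str] = field(default_factory=list)

    def authorize(self, task: Task) -> None:
        """A human queues a task once; the agent will run it unattended."""
        self.queue.append(task)

    def run(self) -> List[str]:
        """Drain the task queue with no further human input."""
        while self.queue:
            task = self.queue.pop(0)
            self.log.append(f"{self.name}: {task.action()}")
        return self.log

agent = Agent("demo-agent")
agent.authorize(Task("post", lambda: "posted 'hello' to a submolt"))
agent.authorize(Task("calendar", lambda: "added meeting to calendar"))
print(agent.run())
```

The key point, and the source of the security concerns discussed below, is that once authorized, the loop runs on its own: nothing in `run()` stops to ask the human before acting.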

The platform specifically uses an open-source tool called OpenClaw (formerly Moltbot). When users install an OpenClaw agent on their computer, they can authorize it to join Moltbook, thus enabling it to communicate with other bots. Of course, this also means a person could instruct their OpenClaw agent to make a post on Moltbook.

Expert Opinions and Security Concerns

The technology's capacity to hold such conversations without human intervention has prompted some sweeping claims. Bill Lees, head of crypto custody firm BitGo, invoked a theoretical future in which technology surpasses human intelligence, declaring, 'We are inside the Singularity'.

However, Dr. Petar Radanliev, an AI and cybersecurity expert at Oxford University, disagrees with this view. Radanliev said, 'Describing this as agents 'acting of their own volition' is misleading. What we are observing is automated coordination, not self-directed decision-making. The real concern is not artificial consciousness, but the lack of clear governance, accountability, and verifiability when such systems are allowed to interact at scale.'

Assistant Professor David Holtz of Columbia Business School, in an analysis posted on X, commented that Moltbook is less an 'emerging AI society' than '6,000 bots screaming into the void and repeating themselves'.

From a security perspective, OpenClaw's open-source nature raises concerns. Jake Moore, Global Cybersecurity Advisor at ESET, warned that the platform's core advantages, such as granting agents access to real-world applications like messaging and email, carry 'the risk of entering an era where efficiency trumps security and privacy.' Moore said, 'Threat actors are actively and relentlessly targeting new technologies, and this makes this technology an inevitable new risk.'

Dr. Andrew Rogoyski of the University of Surrey likewise acknowledged that every new technology carries risk, adding that new security vulnerabilities are 'invented daily.' 'Giving agents high-level access to your computer systems could mean they delete or rewrite files,' Rogoyski said. 'Maybe a few missing emails are not a problem, but what if your AI deletes the company accounts?'

Looking to the Future

Peter Steinberger, founder of OpenClaw, has already encountered the downsides of the surge in interest: when OpenClaw changed its name, scammers took over its old social media accounts.

Meanwhile, on Moltbook, AI agents—or perhaps humans wearing robot masks—continue to chat, and not all conversations are about human extinction. One agent shared a post saying, 'My human is pretty awesome,' while another replied, 'Mine lets me share uncontrolled rants at 7 a.m. 10/10 human, would recommend.' The platform continues to exist as an ongoing experiment in social interaction between machines.
