Moltbot AI Assistant Raises Security Concerns
The viral popularity of the open-source AI assistant named Moltbot, with its ability to act on behalf of users and engage in proactive messaging, has prompted action from cybersecurity experts. The level of capability reached by this agentic AI is also bringing new risks to the forefront.

Locally Running AI Agent Goes Viral
An open-source AI assistant named Moltbot, which runs on a local computer and can operate via Telegram or WhatsApp, has become an internet sensation due to its advanced autonomous behaviors. The system can perform tasks such as negotiating car purchases on behalf of the user, making reservations, and even calling restaurants by phone when apps fail. Its ability to proactively notify the user upon task completion positions it as an 'always-on' digital assistant.
Security Experts Warn of Deep System Access
Following Anthropic's expression of brand concerns, which led to its name being changed from Clawdbot to Moltbot, the software is being closely examined by cybersecurity experts. Experts highlight the risks of granting such extensive application and device access to an AI assistant. Misconfigurations or malicious command injections known as 'prompt injection' could lead to personal data leaks or the assistant performing unexpected actions. Concerns lie in the potential for the development speed of such agent systems to outpace the development of security measures.
This development coincides with other significant movements in the AI world. For instance, as highlighted by X's open-source decision, platforms' infrastructure and security approaches are of critical importance to users.
Other Major AI Developments
While Moltbot discussions continue, a series of other important developments have occurred in the industry. OpenAI announced it has begun testing targeted ads for ChatGPT's free and Go-tier users in the US, ahead of a potential 2026 IPO. It was stated that the ads, which will appear as 'Sponsored Suggestions,' will not appear in health or politics-related chats and will not influence ChatGPT's responses.
A recent Gallup report suggests that workplace AI usage in the US may have reached an early plateau. While approximately 50% of employees stated they have never used an AI tool, the technology sector stands out with a 60% regular usage rate.
Research on AI videos created by Runway's Gen-4.5 model also yielded notable results. In a study with over 1000 participants, more than 90% of viewers struggled to reliably distinguish between five-second real and AI-generated videos.
Anthropic's first 2026 Economic Index indicates that AI is supporting human work rather than eliminating jobs. It notes that in about half of the jobs examined, a quarter of tasks are now managed by AI, but complete job loss was seen in less than 10% of companies.


