OpenClaw: 5 Security Risks to Watch Out for in the Rapidly Spreading AI Assistant
The open-source AI assistant named OpenClaw is rapidly gaining popularity with its promise to help users manage their digital lives, while cybersecurity experts are highlighting significant risks.
Popular AI Assistant Raises Security Concerns
OpenClaw, an open-source AI assistant, has rapidly gained popularity in the tech world in recent weeks, while cybersecurity experts are issuing warnings about the risks the system brings. Created by Austrian developer Peter Steinberger and represented by a cute crustacean character, OpenClaw promises to automate tasks for users such as managing emails, sending messages, and even flight check-ins.
Rapid Spread Paves the Way for Malicious Activities
OpenClaw, which has garnered hundreds of thousands of stars on the GitHub platform to become one of the fastest-growing open-source AI projects, is also creating opportunities for fraudsters due to its sudden popularity. Security experts report that malicious actors, taking advantage of the project's name change process, are creating fake repositories and crypto scams. Notably, a fake Clawdbot token reportedly collapsed after raising $16 million.
System and Account Access Pose Risks
For OpenClaw to function at its full potential, users need to grant the system extensive permissions. Cisco's security researchers describe this situation as 'a complete security nightmare.' The assistant's permissions to execute shell commands, read/write files, and run scripts can put user data at risk in case of misconfiguration or malware infection.
Security researchers have identified that some of the hundreds of OpenClaw instances are connected to the internet without any authentication protection, leading to the leakage of sensitive information such as API keys, Telegram bot tokens, and conversation histories.
Prompt Injection Attacks a Critical Threat
One of the issues that most concerns AI security experts is prompt injection attacks. In this attack method, malicious instructions are hidden in web resources or URLs, causing the AI assistant to read and execute these instructions. OpenClaw's own documentation acknowledges that the prompt injection attack problem has not yet been solved for all AI assistants and agents.
Malicious Plugins and Skills Are Spreading
Cybersecurity researchers report that malicious skills compatible with OpenClaw are beginning to appear online. On January 27, a new VS Code plugin named 'ClawdBot Agent' was identified as malicious. This plugin was determined to be a full-fledged Trojan using remote access software, likely intended for surveillance and data theft.
Despite OpenClaw not having an official VS Code plugin, this situation shows that the assistant's growing popularity could lead to the proliferation of malicious plugins and skills. Users accidentally installing such plugins can leave an open door for their systems and accounts to be compromised.
Experts Recommend Caution
Security experts advise caution in adopting AI assistants with a high degree of autonomy and account access. While it's noted that OpenClaw could be the first example of how AI agents might integrate into future lives, it is emphasized that personal security should be prioritized over convenience. Users are recommended to only use trusted repositories and be informed about system configurations.
While the innovative approaches offered by such AI tools are acknowledged as valuable, experts frequently state that security measures should not be overlooked. AI agents and platforms spreading at an unprecedented speed in the tech world require users to act more consciously about digital security.


