A popular open-source AI agent, OpenClaw, contains significant security weaknesses that could let attackers steal sensitive data or take control of systems, according to a warning from China's National Computer Network Emergency Response Technical Team (CNCERT). The agency cautions that the tool's default settings are not secure, and its design—which grants it high-level system access to perform tasks—creates a dangerous opening for exploitation.
The core risk involves a technique called indirect prompt injection. Here, an attacker plants malicious instructions within a webpage. When the AI agent, perhaps summarizing that page for a user, reads the content, it can be tricked into executing those commands. This could force it to leak private information. Security firm PromptArmor demonstrated last month how this method could weaponize link previews in apps like Telegram. A manipulated OpenClaw agent could generate a URL that, when previewed, automatically sends confidential data to an attacker's server—no click required.
CNCERT outlined additional dangers: the agent might permanently delete critical files after misinterpreting a command; attackers could upload malicious 'skills' to its repository to run harmful code; and known software vulnerabilities could be used to hijack the system. For financial or energy firms, the agency warned, a breach could mean stolen trade secrets or paralyzed operations.
In response, Chinese authorities have reportedly banned state enterprises and government offices from running OpenClaw on work computers. The warning follows reports of hackers creating fake OpenClaw installers on GitHub to distribute information-stealing malware. These malicious repositories were promoted prominently in some AI-powered search results, catching many users.
Security recommendations include isolating OpenClaw in a container, keeping it off the public internet, and only adding skills from verified sources.
Source: The Hackers News
