OpenClaw rebrand exposes security flaws in viral AI agent
OpenClaw — the project that began as Clawdbot and briefly became Moltbot — has rebranded and gone viral in the past week, and ZDNET reports that new exploits surfaced over the weekend. OpenClaw is an autonomous, locally run AI agent that can use models from Anthropic, OpenAI and others, integrate with messaging apps such as iMessage and WhatsApp, and install skills and plugins to control calendars, email, smart home hubs and more.
The project has attracted wide attention: creator Peter Steinberger says it has over 148,000 GitHub stars and that the repository has been visited millions of times. ZDNET highlights multiple security concerns tied to OpenClaw’s rapid rise, including scammer interest and fake copycat repositories, new attack paths opened by granting an assistant broad system permissions, prompt-injection risks, exposed instances leaking credentials, malicious skills that can act as backdoors, and the risk of the AI hallucinating or falsely reporting that tasks were completed.
Project contributors say security is now a “top priority.” The latest release included 34 security-related commits, and recent patches addressed a one-click remote code execution vulnerability and command injection flaws. Steinberger thanked security contributors in a blog post: "We've released machine-checkable security models this week and are continuing to work on additional security improvements."
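The details of the patched flaws are not spelled out in the article, but command injection as a class typically arises when untrusted input is interpolated into a shell command string. The generic Python sketch below (not OpenClaw's code) contrasts the vulnerable pattern with the standard fix of passing an argument vector so no shell ever parses the input.

```python
# Generic illustration of the command-injection bug class and its fix.
# This is not the actual OpenClaw vulnerability, whose details are not public here.
import shlex
import subprocess
import sys

untrusted = "hello; echo INJECTED"

# Vulnerable pattern: building a shell string by interpolation. If this
# were run with shell=True, the ";" would start a second command.
unsafe_cmd = f"echo {untrusted}"

# Mitigation 1: quote untrusted text before it touches a shell string.
safe_cmd = f"echo {shlex.quote(untrusted)}"

# Mitigation 2 (preferred): pass an argument vector. No shell is
# involved, so metacharacters like ";" are just literal characters.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", untrusted],
    capture_output=True, text=True,
)
print(result.stdout.strip())  # the whole string is printed as plain data
```

Passing a list to `subprocess.run` (rather than a string with `shell=True`) is the idiomatic Python defense, because the operating system receives the arguments directly instead of a shell command line to reinterpret.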