Viral AI agent Moltbot poses multiple security risks, researchers warn


Moltbot, formerly known as Clawdbot, is an open-source AI agent that went viral this week and is drawing security warnings, according to ZDNET. Created by Austrian developer Peter Steinberger, the assistant is designed to perform tasks such as handling email, sending messages and checking users in for services. It runs locally on a user's computer and is accessed through chat apps such as iMessage, WhatsApp and Telegram.

The project uses Anthropic's Claude and OpenAI's ChatGPT as its backend models and offers more than 50 integrations and skills, persistent memory, and both browser and full system control. ZDNET reports that the repository has attracted hundreds of contributors and roughly 100,000 stars on GitHub, and that the project was renamed from Clawdbot to Moltbot after Anthropic raised intellectual-property concerns.

Security researchers and companies have flagged several risks. The project's rapid popularity has spawned fake repositories and scams, including a fake Clawdbot AI token that reportedly raised $16 million before it crashed. Cisco's researchers called Moltbot an "absolute nightmare" for security, citing reported leaks of plaintext API keys and credentials and warning that its messaging-app integrations widen the attack surface.

Researchers including Jamieson O'Reilly found misconfigured instances exposed on the open web that leaked Anthropic API keys, Telegram bot tokens, Slack OAuth credentials, signing secrets and conversation histories.

