Clawdbot, the AI agent that took the tech world by surprise, became one of the fastest-climbing projects on GitHub because it promised something unusual.

Instead of just chatting, Clawdbot can interact with your files, send messages, schedule calendar events, and automate tasks on your own computer, all without sending your data off to a big server.

Recommended Videos

Its ability to act on behalf of users makes it feel like a personal AI helper. This contributed to its popularity and helped it spread rapidly among developers and curious users alike.

The project was recently renamed from Clawdbot to Moltbot after Anthropic objected to the original name, citing potential trademark conflicts. The developer agreed to the change to avoid legal trouble, even though the software itself remained unchanged.

🦞 BIG NEWS: We’ve molted!

Clawdbot → Moltbot
Clawd → Molty

Same lobster soul, new shell. Anthropic asked us to change our name (trademark stuff), and honestly? “Molt” fits perfectly – it’s what lobsters do to grow.

New handle: @moltbot
Same mission: AI that actually does…

— Mr. Lobster🦞 (@moltbot) January 27, 2026

What security checks revealed about Clawdbot (Moltbot)

The same features that made Moltbot seem powerful are also what make it risky. Since the AI can access your operating system, files, browser data, and connected services, researchers warn that it creates a wide attack surface that bad actors could exploit.

Security researchers actually found hundreds of Moltbot admin control panels exposed on the public internet because users deployed the software behind reverse proxies without proper authentication.

Because these panels control the AI agent, attackers could browse configuration data, retrieve API keys, and even view full conversation histories from private chats and files.

In some cases, access to these control interfaces meant outsiders essentially held the master key to users’ digital environments. This gives attackers the ability to send messages, run tools, and execute commands across platforms such as Telegram, Slack, and Discord as if they were the owner.

Other investigations revealed that Moltbot AI often stores sensitive data like tokens and credentials in plain text, making them easy targets for common infostealers and credential-harvesting malware.

Researchers also demonstrated proof-of-concept attacks where supply-chain exploits allowed malicious “skills” to be uploaded to Moltbot’s library, enabling remote command execution on downstream systems controlled by unsuspecting users.

This is not just theory. According to The Register, analysts warn that an insecure Moltbot instance exposed to the internet can act as a remote backdoor.

There’s also the possibility of prompt injection vulnerabilities, where attackers trick the bot into running harmful commands; something we have already seen in OpenAI’s AI browser, Atlas.

If Moltbot is not secured properly with traditional safeguards like sandboxing, firewall isolation, or authenticated admin access, attackers can gain access to sensitive information or even control parts of your system.

Since Moltbot can automate real-world actions, a compromised system could be used to spread malware or further infiltrate networks. Here’s what Heather Adkins, VP of Google Security Team, thinks of the chatbot:

In short, Moltbot is an intriguing step toward more capable personal AI assistants, but its deep system privileges and broad access mean you should think twice and understand the risks before installing it on your machine.

Researchers suggest treating it with the same caution you would use for any software that can touch critical parts of your system.

Services MarketplaceListings, Bookings & Reviews

Entertainment blogs & Forums

Leave a Reply

Prestige dtf printers. punta cana : lujo, relax y el mejor todo incluido del caribe.