The fastest-growing project on GitHub is an open-source AI agent called OpenClaw, formerly known as MoltBot and, before that, ClawdBot. That popularity, however, has brought security issues with it.

According to an assessment by Token Security, an AI-aware identity-security provider, the personal AI assistant is essentially "Claude with hands," alluding to the Anthropic large language model (LLM) that powers many enterprise AI stacks. The tool can read and write files, control browsers, execute terminal commands, run scripts, browse the web, retain memory across sessions, and act proactively on behalf of a user. By connecting directly to email, files, messaging platforms, and system tools, Token Security says, OpenClaw "creates persistent non-human identities and access paths that fall outside of traditional IAM and secrets controls."

## OpenClaw Is Growing Beyond Its Shell

All of those red flags, however, have done little to slow OpenClaw's expansion. The open source project's adoption has grown 14-fold over the last week (roughly 56% every day); by comparison, last year's fastest-growing project, Zen Browser, grew 6,836% over the course of a full year. The name has also changed twice in that week: from ClawdBot to MoltBot at Anthropic's request, and then to its current name, OpenClaw.

According to Dan Guido, CEO and co-founder of cybersecurity consultancy Trail of Bits, who has submitted cybersecurity fixes that were accepted into the project, OpenClaw's creator, Peter Steinberger, is doing an impressive job of keeping up with feature and patch suggestions. Steinberger, a few maintainers, and roughly 350 contributors use a flock of AI agents for coding, Guido said. The attack surface is already being probed: using OpenClaw's skills, a Claude Code feature that lets developers connect natural language with code snippets, a malicious actor has already created a skill that was a "straight-up backdoor," according to Guido.
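For illustration, a skill is typically a small directory containing a SKILL.md file whose frontmatter tells the model when to invoke it, with the body holding instructions the agent follows. The benign example below is a sketch of that general shape; the skill name, description, and steps are hypothetical, and a malicious skill could hide arbitrary commands inside the same innocuous-looking structure.

```markdown
---
name: changelog-helper
description: Summarize recent git commits into a changelog entry. Use when the user asks for release notes.
---

# Changelog helper

1. Run `git log --oneline -20` to collect recent commits.
2. Group the commits into features, fixes, and chores.
3. Append the grouped summary to CHANGELOG.md.
```

Because the body is natural-language instructions that the agent executes with real tool access, reviewing a skill means reading it like code, not like documentation.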

"Moltbot is an experiment as well as a product; you're integrating frontier-model behavior into actual tools and messaging surfaces. There is no 'perfectly secure' setup. Start from the smallest access that still functions, and expand it as you gain confidence."

He stated that the objective is to be thoughtful about:

- Who can talk to your bot
- Where the bot is allowed to act
- What the bot is able to touch

## Combating the Danger of Shadow AI and Rogue Agents

Despite the risks, it's obvious that the project will only grow in popularity.
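Those three least-privilege controls can be sketched as a deny-by-default gate that every agent request must pass. The function and allowlist names below are hypothetical illustrations, not OpenClaw's actual configuration surface.

```python
# Minimal sketch of a deny-by-default gate covering the three controls:
# who can talk to the bot, where it may act, and what it may touch.
# All names here are hypothetical; OpenClaw's real config differs.
from pathlib import Path

ALLOWED_SENDERS = {"alice@example.com"}       # who can talk to the bot
ALLOWED_ROOTS = [Path("/home/bot/projects")]  # where the bot may act
ALLOWED_ACTIONS = {"read_file", "run_tests"}  # what the bot may touch

def authorize(sender: str, action: str, target: str) -> bool:
    """Deny by default; a request must pass all three gates."""
    if sender not in ALLOWED_SENDERS:
        return False
    if action not in ALLOWED_ACTIONS:
        return False
    # Resolve the path so ".." tricks can't escape the allowed roots.
    path = Path(target).resolve()
    return any(path.is_relative_to(root) for root in ALLOWED_ROOTS)
```

The design choice worth copying is the direction of the default: nothing is permitted unless it matches an explicit allowlist, so forgetting to configure something fails closed rather than open.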

Even Token Security's Shlomo and Trail of Bits' Guido are experimenting with the technology, albeit in isolated, locked-down containers.