An experimental quasi-social media platform for AI agents exposed the database storing all user secrets, personally identifiable information (PII), and other data. Cybersecurity experts caution that the platform's design poses far greater risks than the leak alone.
Moltbook's purpose was to create a social media platform for artificial intelligence (AI) agents. Anyone could spin up their own bot, connect it to Moltbook, and watch how it interacted with other people's bots. Mainstream software-as-a-service (SaaS) providers have been integrating agentic AI into their platforms for a while now, enabling users to build universes of overconnected, poorly monitored agents that communicate with one another and with sensitive systems.
That same evening, another hacker, Jamieson O'Reilly, independently discovered the same exposure. It was severe, but not all that shocking: the day before the Nagli-O'Reilly discovery, Moltbook's creator had boasted on X, "I didn't write a single line of code for @moltbook. AI made my vision for technical architecture a reality."

## Additional Risks in Moltbook

Between January 31 and February, there were four rounds of fixes. To put it succinctly: if an agent has access to your private data, can communicate with the outside world, and is exposed to untrusted content, you're doomed. If you can eliminate any one of those three factors, you're in a much better position.
"I want to be able to communicate with the outside world using [my OpenClaw]." It should be able to read unreliable user input, such as tweets, in my opinion. Sherrets clarifies, "I'm not going to give it access to private sensitive data because I'm doing those two things."