Anthropic accidentally made highly sensitive internal documents public, revealing a powerful AI model that has not yet been released. The leak, which originated from an unsecured data cache that anyone could search, immediately set off alarms in the cybersecurity community.

After the leak, an Anthropic spokesperson confirmed the model's existence and called it "the most capable we've built to date." The leak also reportedly exposed details of a private event for CEOs, compounding the damage to the company's reputation.

What makes this incident stand out, even more than the disclosure of the model itself, is what the leaked documents say about it. The draft blog post states that Anthropic believes Claude Mythos poses unprecedented cybersecurity risks, a significant admission from a company that has consistently positioned safety as its first priority in AI development.

An accidental data leak involving a model the company itself describes as a cybersecurity risk is likely to strengthen calls for mandatory security audits of AI developers. It is not clear whether anyone beyond Fortune journalists accessed the exposed data, and the company has not said what steps it has taken to remediate the problem. For a company building frontier AI models with significant national-security implications, failing to apply basic data classification and access controls to pre-release materials is a serious gap in operational security.

This kind of misconfiguration, common in open AWS S3 buckets, Azure Blob Storage containers, and other cloud infrastructure, is a well-known and avoidable class of vulnerability, and one that standard tooling can catch, as the sketch below illustrates.
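As an illustration of how such exposures are typically prevented, here is a minimal sketch using AWS's boto3 SDK to audit and then enable S3 Block Public Access on a bucket. The bucket name is hypothetical, and nothing here is drawn from Anthropic's actual infrastructure; it simply shows the kind of baseline control the leak suggests was missing.

```python
import boto3
from botocore.exceptions import ClientError

# Hypothetical bucket name, used only for illustration.
BUCKET = "example-prerelease-materials"

s3 = boto3.client("s3")

def audit_public_access(bucket: str) -> None:
    """Report whether S3 Block Public Access is fully enabled on a bucket."""
    try:
        config = s3.get_public_access_block(Bucket=bucket)[
            "PublicAccessBlockConfiguration"
        ]
    except ClientError as err:
        # No configuration at all means the bucket may be publicly reachable.
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{bucket}: no public-access block configured -- potentially exposed")
            return
        raise
    if all(config.values()):
        print(f"{bucket}: public access fully blocked")
    else:
        print(f"{bucket}: partial configuration -- {config}")

def lock_down(bucket: str) -> None:
    """Enable all four S3 Block Public Access settings on the bucket."""
    s3.put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,        # reject new public ACLs
            "IgnorePublicAcls": True,       # ignore any existing public ACLs
            "BlockPublicPolicy": True,      # reject public bucket policies
            "RestrictPublicBuckets": True,  # limit access to AWS principals
        },
    )

if __name__ == "__main__":
    audit_public_access(BUCKET)
    lock_down(BUCKET)
    audit_public_access(BUCKET)
```

AWS also offers equivalent account-wide settings, and Azure Blob Storage has analogous public-access flags, so a check like this can run as a scheduled job across every storage container an organization owns.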

The Anthropic leak also comes at a pivotal moment: regulators, governments, and security researchers are putting increasing pressure on AI companies to demonstrate responsible practices not only in how they build their models, but also in how they handle the sensitive operational data surrounding them.