Traditional software ships with clear instructions for how the system behaves. Deploying an AI agent, by contrast, is closer to hiring a new employee, except that this worker has direct API access and can kick off programmatic workflows that run at machine speed.
The disciplines of traditional cybersecurity still apply to AI agents; the challenge is adapting them to a new type of entity. Every agent, connector, and service account needs its own unique, traceable credentials. Yet many businesses struggle even to compile a complete inventory of the AI agents operating in their environment.
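One way to picture that requirement is a central inventory that issues each agent, connector, and service account its own traceable credential. The sketch below is purely illustrative: the `AgentRegistry` and `AgentIdentity` names are assumptions, not any vendor's API.

```python
import uuid
from dataclasses import dataclass, field

# Illustrative sketch: every agent, connector, and service account gets a
# unique, traceable credential ID, recorded in one central inventory.
# AgentRegistry / AgentIdentity are hypothetical names, not a real product.

@dataclass
class AgentIdentity:
    name: str
    kind: str  # "agent", "connector", or "service_account"
    credential_id: str = field(default_factory=lambda: uuid.uuid4().hex)

class AgentRegistry:
    def __init__(self):
        self._by_credential = {}

    def register(self, name, kind):
        identity = AgentIdentity(name, kind)
        self._by_credential[identity.credential_id] = identity
        return identity

    def lookup(self, credential_id):
        """Trace any credential back to the identity that owns it."""
        return self._by_credential.get(credential_id)

registry = AgentRegistry()
bot = registry.register("billing-agent", "agent")
assert registry.lookup(bot.credential_id).name == "billing-agent"
```

Because every credential maps back to exactly one registered identity, an unknown credential showing up in logs is immediately visible as a gap in the inventory.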
Without that inventory, you cannot manage what is running in your environment. Over-broad platform permissions, such as overly powerful default service accounts in Google's Vertex AI, are a significant problem.
Automated access reviews and monitoring for anomalous agent behavior are the only sustainable ways to maintain least privilege over time. The next wave of AI security breaches will come not from outside hackers but from identities that hold high-level access rights without adequate monitoring. The authors of the whitepaper argue that the real danger in the age of AI is not malicious intent but technology outpacing regulation.
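The core of an automated access review can be stated simply: compare what each agent was granted against what it actually used during the review window, and flag the difference for revocation. The function and sample data below are a minimal sketch under that assumption, not a real system's output.

```python
# Hypothetical access-review sketch: grants that went unused in the review
# window are least-privilege violations and candidates for revocation.
# The agent names and permission strings are invented for illustration.

def access_review(granted, used):
    """Return {agent: unused permissions} for agents holding excess rights."""
    findings = {}
    for agent, perms in granted.items():
        unused = perms - used.get(agent, set())
        if unused:
            findings[agent] = unused
    return findings

granted = {
    "report-agent": {"read:db", "write:db", "delete:db"},
    "mailer-agent": {"send:email"},
}
used = {
    "report-agent": {"read:db"},
    "mailer-agent": {"send:email"},
}

# report-agent never exercised write:db or delete:db, so both grants
# surface as findings; mailer-agent is clean and is not reported.
findings = access_review(granted, used)
```

Running this review on a schedule, rather than once at provisioning time, is what keeps privileges from silently accumulating as an agent's role changes.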
As businesses lean on agents to control data access, system connections, and workflow triggers across environments, even small gaps in security protocols can compound rapidly at automation speed.
Treating AI like a real employee means subjecting it to the same oversight that keeps any employee's conduct in check. Managing identities, privileges, and governance is the heart of that risk management. Effective security leaders maintain clear visibility into who has access, what each identity can do, and how its permissions change over time.
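Answering "how did this agent's permissions change over time" requires recording changes, not just current state. A common pattern is an append-only audit log that can be replayed to reconstruct effective permissions at any point; the sketch below assumes that pattern, and the `PermissionAudit` class and agent names are hypothetical.

```python
import datetime

# Hypothetical sketch: an append-only audit trail of permission changes.
# Replaying the log answers both "what can this agent do now" and
# "how did its permissions change over time". Illustrative only.

class PermissionAudit:
    def __init__(self):
        self.events = []  # append-only: (timestamp, agent, action, permission)

    def grant(self, agent, permission):
        self._record(agent, "grant", permission)

    def revoke(self, agent, permission):
        self._record(agent, "revoke", permission)

    def _record(self, agent, action, permission):
        when = datetime.datetime.now(datetime.timezone.utc)
        self.events.append((when, agent, action, permission))

    def current(self, agent):
        """Replay the log to compute the agent's effective permissions."""
        perms = set()
        for _, who, action, permission in self.events:
            if who != agent:
                continue
            if action == "grant":
                perms.add(permission)
            else:
                perms.discard(permission)
        return perms

audit = PermissionAudit()
audit.grant("hr-agent", "read:payroll")
audit.grant("hr-agent", "write:payroll")
audit.revoke("hr-agent", "write:payroll")
assert audit.current("hr-agent") == {"read:payroll"}
```

Because the log is never rewritten, a reviewer can see not only that the write grant was removed, but when, which is exactly the historical visibility the text calls for.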
To learn more about how to use AI in your business, go to The AI Company's website or read their blog.












