Google Cloud's Vertex AI platform grants AI agents overly broad default access rights, according to research from Palo Alto Networks that shows how the risk manifests on the platform. The company recommends that businesses apply least-privilege access in their AI environments.

Google has updated its official documentation to clarify how Vertex AI agents use resources after Palo Alto Networks shared its findings with the search and cloud giant. Ian Swanson, the company's VP of AI security, says, "There can be no AI without strong security measures for AI. Agents mean a shift from productivity driven by AI to actions driven by AI." This shift brings new risks beyond data breaches; it now includes unauthorized agent activity, he says.

"This level of access is a significant security risk; it turns the AI agent from a useful tool into an insider threat," said Ofir Shaty, a researcher at Palo Alto Networks. Google says security teams need to know where agents are deployed in their business environments. Pointing to a recent documentation update, a Google spokesperson said the company wants businesses to understand what Vertex AI agents can do.

"Bring Your Own Service Account (BYOSA) is a critical best practice for securing Agent Engine and ensuring it runs with the least privilege necessary," the spokesperson said. "Using BYOSA lets users follow the principle of least privilege, granting the agent only the permissions it needs to work properly and lowering the risk of over-permissioning."
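The least-privilege idea the spokesperson describes can be sketched in a few lines: given the roles actually granted to an agent's service account and the minimal set it needs, anything extra is over-privilege worth flagging. This is an illustrative sketch only, not part of any Vertex AI API; the role names are example data, and in practice the granted set would come from an IAM policy lookup such as `gcloud projects get-iam-policy`.

```python
# Minimal least-privilege audit sketch for an agent's service account.
# Role names below are illustrative examples, not Google recommendations.

def excess_roles(granted: set[str], required: set[str]) -> set[str]:
    """Return roles granted to the agent beyond what it actually needs."""
    return granted - required

# Roles the agent's service account currently holds (example data).
granted = {
    "roles/editor",                 # broad, default-style grant
    "roles/aiplatform.user",
    "roles/storage.objectViewer",
}

# Roles the agent genuinely needs to run (example data).
required = {
    "roles/aiplatform.user",
    "roles/storage.objectViewer",
}

over_privileged = excess_roles(granted, required)
print(sorted(over_privileged))  # ['roles/editor'] -> candidate for removal
```

A dedicated service account trimmed this way is exactly what BYOSA enables: the agent runs as that account instead of a shared, broadly permissioned default.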

The full Palo Alto report is available at http://www.paloalto.com/news/features/2013/01/27/agent-engine-and-automation-features-revealed-in-palo-alto-report.html.