Google Cloud's Vertex AI platform lets companies build, deploy, and manage AI-powered apps. Palo Alto Networks says that overly broad default permissions let attackers abuse a deployed AI agent and use it to steal sensitive data. In response, Google has updated its official documentation to clarify how Vertex AI agents use service accounts and other resources.
The change came after Palo Alto disclosed its findings to the search and cloud giant. Ian Swanson, Palo Alto's VP of AI security, says the main takeaway for organizations is that they need to be aware of the security risks AI agents can unintentionally introduce. "Agents are a change in how AI works in businesses, from AI that talks to AI that does," he says.
"This level of access poses a serious security risk, turning the AI agent from a useful tool into an insider threat," wrote Ofir Shaty, a researcher at Palo Alto. He said, "The scopes that the Agent Engine sets by default could give access to more than just the GCP environment." It's not just about data leaks anymore; it's also about agents doing things they shouldn't.
Security teams need to be able to discover agents wherever they run across the enterprise. A Google spokesperson said the company's recent documentation update is one way it has tried to make businesses more aware of the permissions agents hold on Vertex AI.
"Bring Your Own Service Account (BYOSA) is a key best practice for securing Agent Engine and making sure that execution is done with the least amount of privilege," the Palo Alto report said. "With BYOSA, Agent Engine users can enforce the principle of least privilege by only giving the agent the permissions it needs to work," the spokesperson added. Google says the update is meant to help businesses better understand the risks agents pose and how to protect against them going forward.
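As a rough sketch of the BYOSA approach the report describes, a least-privilege setup means provisioning a dedicated service account and granting it only narrowly scoped roles, rather than relying on the broad defaults. The project, account, and bucket names below are placeholders, and the exact mechanism for attaching the account at agent deployment is covered in Google's Agent Engine documentation:

```shell
# Create a dedicated service account for the agent (BYOSA),
# instead of running under the broad default service account.
gcloud iam service-accounts create my-agent-sa \
  --project=my-project \
  --display-name="Least-privilege agent service account"

# Grant only the specific permissions the agent needs -- here,
# read-only access to a single Cloud Storage bucket -- rather
# than project-wide roles that widen the blast radius.
gcloud storage buckets add-iam-policy-binding gs://my-agent-bucket \
  --member="serviceAccount:my-agent-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"
```

If the agent is later compromised, the attacker inherits only this narrow scope, not the project-level access the default configuration could expose.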
The update can be found in Google's Vertex AI Agent Engine documentation.