Organizations' adoption of AI agents has significantly expanded their attack surface, exposing them to new classes of attacks. Yet software and cybersecurity vendors are only beginning to offer ways for organizations to govern agentic and autonomous activity. This week, Microsoft took several steps to secure agentic AI and to deploy agents of its own to improve security.
At the RSAC Conference, the company previewed a feature that lets businesses configure guardrails in Microsoft Foundry, its AI platform-as-a-service. It also announced agentic capabilities for its Security Copilot. Finally, it added agent identities to its Entra ID service, giving businesses tighter visibility into agents, control over their permissions, and a record of their activity.
Enterprise security shifted markedly in 2025 as AI agents gained mainstream adoption.
The company said it has extended its Security Triage Agent to use the new identity registry for AI agents and has built a new Security Analyst agent to conduct "deep, multi-step investigations" across infrastructure using telemetry and data from Microsoft Defender and Sentinel. Triaging security incidents and alerts is one of Security Copilot's earliest use cases, and the company continues to build out the platform, according to Oberoi.
The triage agent can run in the background and produce an easily readable list of alerts, he says, while the posture agent can assess the organization's data security posture and recommend improvements when it identifies a risk. "The space keeps changing," Oberoi says.
To manage risk going forward, Oberoi says, companies need the ability to set boundaries on agents, use identity to audit how agents are configured, and measure agents' impact on their security posture.