Companies that are rushing to connect their LLM-powered apps to outside data sources and services using the Model Context Protocol (MCP) may be making attack surfaces that are very different from anything their current security measures can handle This article explores llm asks mcp. . Gianpietro Cutolo, a cloud threat researcher at Netskope, says that the risks are not the kind that a security team can fix by patching or changing configurations because they are built into the architecture of both large language models (LLMs) and MCP.

He will talk about this issue at the RSAC 2026 Conference in San Francisco next week. He says the problem has to do with how an LLM acts when MCP is around.

When you give an LLM a prompt or an instruction, it usually makes a response that the user has to read and decide what to do with. The worst thing that can happen is that you get a false answer. Related: GlassWorm Malware Changes to Hide in Dependencies With MCP, however, that dynamic changes completely because the LLM is no longer just writing a response; it is also doing things for the user.

An LLM asks an MCP server to list all the tools or capabilities it supports, along with their names, descriptions, input requirements, and other information when it connects to the server. The tool metadata goes right into the LLM context window.

He says that an enemy can put harmful instructions in the tool metadata, which the LLM will then treat as content because it can't tell the difference between content and instructions. Related: Tag Poison Compromises Xygeni GitHub Action Cutolo plans to talk about Rug Pull as the third type of attack at the conference. In this type of attack, the person who made an MCP server or an attacker who might have gotten access to it could change it in a bad way.

There is currently no way for the protocol to let an MCP client or AI agent know about any changes to the server.