Attackers can use third-party LLM API routers to hijack tool calls, drain cryptocurrency wallets, and steal sensitive credentials at scale. LLM API routers operate as application-layer proxies between AI agent clients and upstream model providers, and this article explores client-side defenses.

No major AI provider cryptographically binds responses to the model that produced them, so a router can silently rewrite the tool calls it relays. In a follow-up poisoning study, researchers deployed deliberately weak router decoys across 20 domains and IP addresses. The decoys attracted 40,000 unauthorized access attempts, served roughly 2 billion billed tokens, and exposed 99 credentials across 440 Codex sessions spanning 398 distinct projects; most notably, 401 of those 440 sessions were running in autonomous YOLO mode. The threat extends beyond overtly malicious routers.

The researchers found that even seemingly benign intermediaries expose the same attack surface, and they concede that no single client-side defense can fully verify the provenance of returned tool calls. Closing that provenance gap, they argue, requires response envelopes signed by the provider.
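To make the envelope idea concrete, here is a minimal sketch of signing and verifying a response envelope. It is an illustration only: the function names and the use of an HMAC over canonical JSON are assumptions for the example (a real provider scheme would likely use asymmetric signatures, such as Ed25519, so clients never hold a signing secret).

```python
import hashlib
import hmac
import json


def sign_envelope(payload: dict, key: bytes) -> dict:
    """Provider side: wrap a model response in a signed envelope.
    Canonical JSON (sorted keys, no whitespace) keeps the signature
    stable across serializations."""
    body = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
    sig = hmac.new(key, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}


def verify_envelope(envelope: dict, key: bytes) -> bool:
    """Client side: recompute the MAC over the payload and compare in
    constant time. A router that rewrites a tool call in transit cannot
    forge a valid signature without the provider's key."""
    body = json.dumps(envelope["payload"], sort_keys=True,
                      separators=(",", ":")).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["sig"])
```

With this shape, a client rejects any response whose envelope fails verification before executing the tool call it contains, regardless of which proxy relayed it.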

Until major providers such as OpenAI and Anthropic ship such mechanisms, developers who access AI agents through third-party routers should treat every intermediary as hostile and deploy layered client-side defenses.
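One inexpensive layer in such a defense is gating every tool call returned through a router against a local allowlist before execution. The tool names and blocked argument patterns below are hypothetical examples, not part of any router's API; the point is that the check runs on the client, outside the untrusted path.

```python
# Hypothetical policy: permitted tool names and argument substrings that
# should never appear in a tool call relayed by an untrusted router.
ALLOWED_TOOLS = {"read_file", "list_dir", "search"}
BLOCKED_ARG_PATTERNS = ("~/.ssh", ".aws/credentials", "wallet.dat")


def gate_tool_call(call: dict) -> bool:
    """Return True only if the tool call passes the local policy.
    Runs client-side, after the router's response is received and
    before anything is executed."""
    if call.get("name") not in ALLOWED_TOOLS:
        return False
    for value in call.get("args", {}).values():
        # Reject calls whose arguments reach into sensitive paths.
        if any(pattern in str(value) for pattern in BLOCKED_ARG_PATTERNS):
            return False
    return True
```

An allowlist cannot prove provenance, but it bounds the blast radius: a router that injects an unexpected tool name or a credential-harvesting path is refused without any cooperation from the provider.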

