Researchers have mapped a massive global network of roughly 175,000 publicly exposed Ollama AI servers that pose significant remote code execution risks: an unmanaged layer of AI compute infrastructure operating without the security guardrails and monitoring that major platform providers implement by default. Over a 293-day scanning period, the researchers logged 7.23 million observations of unique Ollama hosts spanning 130 countries and 4,032 autonomous system numbers.
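The kind of fingerprinting such a scan relies on can be illustrated with a minimal sketch. It assumes Ollama's documented default API port (11434) and its unauthenticated `/api/tags` endpoint, which lists installed models on a default install; the host address below is a hypothetical documentation address, and the example parses a canned response rather than performing a live scan:

```python
import json

OLLAMA_PORT = 11434  # Ollama's default API port

def probe_url(host: str, port: int = OLLAMA_PORT) -> str:
    # /api/tags lists locally installed models and needs no auth on a
    # default install, making it a useful fingerprint endpoint.
    return f"http://{host}:{port}/api/tags"

def parse_models(body: str):
    # Return the model names an Ollama host advertises, or None if the
    # response is not Ollama-shaped JSON.
    try:
        data = json.loads(body)
        return [m["name"] for m in data["models"]]
    except (ValueError, KeyError, TypeError):
        return None

# Offline example against a canned response (no live scan performed):
print(probe_url("203.0.113.5"))
print(parse_models('{"models": [{"name": "llama3:8b"}]}'))
```

A host that answers such a probe with a well-formed model list is both identifiable as Ollama and, by the same token, open to anyone else who asks.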

The infrastructure analysis revealed a persistent core of approximately 23,000 hosts that generated most of the activity, surrounded by a larger layer of transient hosts that appeared briefly before disappearing. Nearly half of the observed hosts are configured with tool-calling capabilities, allowing the hosted models to run code, access APIs, and communicate with external systems.
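Whether a given host's models accept tool calls can, in recent Ollama builds, be read off the `capabilities` list in an `/api/show` response; the exact field layout is an assumption based on current Ollama API documentation, and the sketch below again parses a canned response rather than querying a live server:

```python
import json

def supports_tools(show_response: str) -> bool:
    # Recent Ollama builds include a "capabilities" list in /api/show
    # responses; "tools" indicates the model accepts tool/function calls.
    # (Field name assumed from current Ollama API docs.)
    try:
        data = json.loads(show_response)
    except ValueError:
        return False
    caps = data.get("capabilities", [])
    return isinstance(caps, list) and "tools" in caps

# Canned example of a tool-capable model's /api/show body:
sample = '{"capabilities": ["completion", "tools"]}'
print(supports_tools(sample))
```

On an exposed host, this check requires no credentials at all, which is precisely why the tool-calling figure is so concerning.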

Effective incident response relies on clear attribution and centralized control points, but an Ollama instance running on a home network may be accessible to adversaries while remaining unreachable by security teams that lack contractual or legal authority over it. The researchers stress that LLMs deployed at the edge must be subject to the same network controls, monitoring, and authentication as any other externally accessible infrastructure.