The spread of open-source artificial intelligence (AI) has produced a massive "unmanaged, publicly accessible layer of AI compute infrastructure" spanning 175,000 distinct Ollama hosts across 130 countries, according to a recent joint investigation by SentinelOne's SentinelLABS and Censys. These systems, distributed across both cloud and residential networks worldwide, operate outside the safeguards and monitoring that platform providers put in place by default.
China accounts for slightly more than 30% of the exposures, while the United States, Germany, France, South Korea, India, Russia, Singapore, Brazil, and the United Kingdom round out the countries with the largest infrastructure footprints.
According to researchers Gabriel Bernadett-Shapiro and Silas Cutler, "nearly half of observed hosts are configured with tool-calling capabilities that enable them to execute code, access APIs, and interact with external systems, demonstrating the increasing implementation of LLMs into larger system processes." In addition to creating new opportunities for prompt injections and proxying malicious traffic through victim infrastructure, the decentralized nature of the exposed Ollama ecosystem, which is dispersed across cloud and residential environments, also creates governance gaps. According to the companies, "the residential nature of much of the infrastructure complicates traditional governance and requires new approaches that distinguish between managed cloud deployments and distributed edge infrastructure."
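The tool-calling capability the researchers flag is exposed through Ollama's `/api/chat` endpoint, which accepts a `tools` array of function definitions alongside the chat messages. A minimal sketch of such a request payload follows; the tool name, model name, and prompt are illustrative, and the payload is only constructed here, not sent:

```python
import json

# A hypothetical tool definition in the OpenAI-style function schema
# that Ollama's /api/chat endpoint accepts in its "tools" field.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # illustrative tool name
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# Body for POST http://<host>:11434/api/chat -- on an exposed host,
# anyone on the internet can submit a request like this and have the
# model decide which tool calls downstream code should execute.
payload = {
    "model": "llama3.1",  # assumed model name
    "messages": [{"role": "user", "content": "Weather in Berlin?"}],
    "tools": [weather_tool],
    "stream": False,
}

print(json.dumps(payload, indent=2))
```

The risk the researchers describe lies in the glue code that receives the model's tool-call response and actually executes it: if that path is reachable from the open internet, a remote prompt effectively becomes a remote command.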
The most important lesson for defenders is that LLMs are increasingly being used at the edge to convert commands into actions. "As such, they must be treated with the same authentication, monitoring, and network controls as other externally accessible infrastructure," the researchers said.
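The simplest of those network controls is not exposing the API in the first place. A hardening sketch for a default Ollama install on a systemd-based Linux host, assuming the stock service name and Ollama's default port of 11434 (adjust for your distribution and firewall tooling):

```shell
# Bind Ollama to loopback only instead of all interfaces.
# OLLAMA_HOST is Ollama's documented environment variable for this.
sudo systemctl edit ollama.service
#   [Service]
#   Environment="OLLAMA_HOST=127.0.0.1:11434"
sudo systemctl restart ollama.service

# Verify nothing is listening on 0.0.0.0:11434 anymore.
ss -tlnp | grep 11434

# Belt and braces: block external traffic to the port at the firewall.
sudo ufw deny 11434/tcp
```

If remote access is genuinely needed, putting an authenticating reverse proxy or VPN in front of the port is preferable to exposing the unauthenticated API directly.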