Hackers are exploiting poorly configured OpenWebUI servers to deliver AI-generated payloads that steal credentials and mine cryptocurrency on both Linux and Windows systems, while using advanced evasion techniques to hide their activity. The campaign shows that AI tools left unprotected on the public internet can be abused for remote code execution and data theft.

Initial access and abuse of OpenWebUI

Sysdig's Threat Research Team investigated an incident in which a customer's OpenWebUI training system was accidentally exposed to the internet with administrative privileges and no authentication, allowing anyone to run commands remotely without logging in. The attacker abused OpenWebUI Tools, a feature that lets users upload Python scripts to extend the LLM, uploading a malicious script and executing it as a Tool without ever deviating from the normal UI flow.

Shodan shows more than 17,000 OpenWebUI instances reachable online, raising concern that many more misconfigured deployments exist, even though the exact number of vulnerable systems remains unknown.

extensions.json (Source: Sysdig)

Once the rogue Tool was registered, its Python payload executed in the trusted OpenWebUI context, turning the AI interface into a general-purpose malware launcher.

The uploaded Python Tool was heavily obfuscated with a technique dubbed "pyklump," which combined reversed Base64 encoding and zlib compression across 64 nested layers, making it resistant to static analysis. Sysdig analysts built a custom decoder to strip away the layers, revealing a main script that orchestrated cryptomining, persistence, Discord-based C2, and cross-platform logic for both Linux and Windows.
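Layering of this kind can be undone mechanically. The sketch below is not Sysdig's decoder; it assumes each layer is zlib-compress, then Base64-encode, then reverse the bytes, and simply peels layers until one no longer decodes cleanly:

```python
import base64
import zlib

def peel(blob: bytes, max_layers: int = 64) -> bytes:
    """Undo reversed-Base64 + zlib layers one at a time.

    Assumes each layer was built as: zlib.compress(inner) -> b64encode -> reverse.
    Stops when a layer fails to decode, i.e. the innermost payload is reached.
    """
    for _ in range(max_layers):
        try:
            decoded = base64.b64decode(blob[::-1], validate=True)
            blob = zlib.decompress(decoded)
        except Exception:
            break  # no more valid layers
    return blob

def wrap(payload: bytes, layers: int) -> bytes:
    """Build a test blob the same way, to verify the decoder round-trips."""
    for _ in range(layers):
        payload = base64.b64encode(zlib.compress(payload))[::-1]
    return payload
```

Running `peel(wrap(b"print('hi')", 64))` round-trips the payload through all 64 layers; a real sample would be fed straight into `peel`.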

The code structure, formatting, and cross-platform branches closely resembled LLM-style output; a code detector rated it 85-90% likely to be AI-generated or heavily AI-assisted, although some parts were clearly hand-written. The script exfiltrated host metadata, including the public IP address, GPU information, OS platform, current user, and the status of its stealth modules, through a Discord webhook.
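A rough reconstruction of that beacon is shown below. The field names and schema are illustrative assumptions, since the report does not publish the exact payload format:

```python
import json
import platform
import getpass

def build_beacon() -> dict:
    """Assemble the kind of host profile the malware reported.

    Field names are hypothetical; the real schema was not published.
    """
    return {
        "platform": platform.system(),   # e.g. "Linux" or "Windows"
        "user": getpass.getuser(),
        "public_ip": None,               # the malware fetched this from an external service
        "gpu": None,                     # queried via GPU tooling on the host
        "stealth_active": False,         # status of the LD_PRELOAD stealth modules
    }

def to_webhook_body(beacon: dict) -> str:
    """Discord webhooks accept a JSON body whose "content" field is the message."""
    return json.dumps({"content": json.dumps(beacon, indent=2)})
```

POSTing that body to a webhook URL is all it takes for each infected host to show up as a message in the attacker's channel.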

This effectively turned a chat channel into a lightweight C2 dashboard.

Cryptomining and stealth on Linux

On Linux, the payload first copied itself into the victim's hidden .config directory, then created a systemd service named "ptorch_updater" for persistence, masquerading as legitimate AI tooling.
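As a defensive illustration (a sketch of ours, not part of the Sysdig write-up), persistence of this shape can be hunted by flagging systemd unit files whose ExecStart launches a binary from a user's hidden .config directory:

```python
import re

# ExecStart pointing into a hidden ~/.config path is unusual for system services.
SUSPECT_PATH = re.compile(r"/home/[^/\s]+/\.config/")

def suspicious_execstart(unit_text: str) -> list[str]:
    """Return ExecStart lines in a systemd unit that launch binaries from a
    hidden ~/.config directory -- the pattern used by "ptorch_updater"."""
    hits = []
    for line in unit_text.splitlines():
        line = line.strip()
        if line.startswith("ExecStart=") and SUSPECT_PATH.search(line):
            hits.append(line)
    return hits
```

A sweep would feed every file under /etc/systemd/system and ~/.config/systemd/user through this check and review any hits by hand.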

It then used gh-proxy to download the T-Rex and XMRig miners, configured them to mine Ravencoin and Monero through known pools, and pointed them at attacker-controlled wallets that had already accumulated about $700. To conceal the mining activity, the script compiled two inline C programs into shared objects at runtime and injected them with LD_PRELOAD. The first, processhider, filters the miner's process name out of directory listings.

The second, argvhider, hooks glibc's startup path to erase command-line arguments from /proc while keeping them in process memory. Together these components defeat standard process- and cmdline-based detection, although YARA rules and runtime inspection of LD_PRELOAD activity can still expose them.
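Two of those runtime checks can be sketched in a few lines, assuming only standard /proc semantics: a readdir-hooking processhider cannot hide a PID from direct probes, and an injected library usually leaves LD_PRELOAD visible in the process environment. The helper names here are ours, not Sysdig's:

```python
import os

def find_hidden_pids(listed, exists, pid_max=32768):
    """Return PIDs that answer a direct probe but are missing from a directory
    listing -- the signature of a readdir()-hooking processhider.

    `listed` is the set of PIDs seen via os.listdir('/proc');
    `exists(pid)` probes a PID directly (e.g. os.path.isdir on /proc/<pid>).
    pid_max defaults to the classic 32768; check /proc/sys/kernel/pid_max live.
    """
    return sorted(pid for pid in range(1, pid_max + 1)
                  if pid not in listed and exists(pid))

def preload_env_of(pid):
    """Read LD_PRELOAD from a process's environment, or None if unset/unreadable."""
    try:
        with open(f"/proc/{pid}/environ", "rb") as f:
            env = f.read().split(b"\0")
    except OSError:
        return None
    for entry in env:
        if entry.startswith(b"LD_PRELOAD="):
            return entry.split(b"=", 1)[1].decode()
    return None
```

In practice you would call `find_hidden_pids({int(p) for p in os.listdir('/proc') if p.isdigit()}, lambda pid: os.path.isdir(f"/proc/{pid}"))` from a process that is not itself running under the attacker's LD_PRELOAD.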

The Windows branch used the same Python controller but switched to a Java-based loader chain, first downloading Microsoft's JDK and a malicious JAR (application-ref.jar) from 185.208.159[.]155. Sysdig says runtime monitoring caught the attack across several layers: YARA matches on the stealth libraries, detection of LD_PRELOAD-based library injection, suspicious Stratum mining traffic, code compilation inside containers, and DNS lookups for known miner and C2 infrastructure.

Threat detection (Source: Sysdig)

Key indicators of compromise include the malicious downloader IP 185.208.159[.]155, the Discord webhook URL, the T-Rex and XMRig download links, the Ravencoin and Monero wallet addresses, and the file hashes for application-ref.jar, INT_D.DAT, INT_J.DAT, and app_bound_decryptor.dll.

To reduce the risk, organizations should ensure that OpenWebUI and other AI interfaces are never exposed to the internet without strong authentication, restrict who can upload Tools, and monitor for unusual Tool registrations or script executions.