For the second time in less than a month, researchers have discovered serious flaws in n8n, a popular AI workflow automation platform that numerous businesses have adopted to incorporate LLMs into their operations. The two n8n vulnerabilities affect the platform's sandbox mechanism, allowing attackers to bypass its security measures and take over an organization's entire n8n service.

The vulnerabilities were identified by JFrog researchers, who rated one (CVE-2026-1470) critical, with a severity score of 9.9, and the other (CVE-2026-0863) high, with a score of 8.5.

Complete Takeover

In a blog post earlier this week, JFrog security researcher Nathan Nehorai stated that "attackers that are able to create n8n workflows can exploit these vulnerabilities and easily achieve full remote code execution on the host running the n8n service." He added: "Any self-hosted deployment of n8n that is running an unpatched version is still susceptible to the vulnerabilities that were present on n8n's cloud platform."

n8n is a well-known low-code platform that connects apps, services, and custom logic to enable businesses to automate workflows, including those behind sales transactions, HR onboarding procedures, and customer support ticketing.

That earlier bug, dubbed "Ni8mare" by the Cyera researchers who found it, affected an estimated 100,000 servers globally, though an attacker needed to meet a number of requirements in order to exploit it.

In lieu of any additional vendor directives, organizations currently running n8n services should follow the recommendations made in the wake of Ni8mare in early January: limit execution privileges, cut off n8n from the Internet, require robust authentication, and steer clear of static validation.

As companies rush to incorporate LLMs into their business workflows and processes, these problems underscore the growing security risks they face.
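For self-hosted deployments run under Docker, the first two recommendations, limiting execution privileges and keeping n8n off the Internet, can be sketched at the container level. This is a minimal, hypothetical fragment, not vendor guidance: the image name and port 5678 reflect n8n's published defaults, while the hardening options shown are generic Docker settings that an operator would need to test against their own deployment.

```yaml
# Hypothetical docker-compose.yml sketch for a hardened self-hosted n8n.
# Assumptions: Docker-based deployment, official n8nio/n8n image, default port 5678.
services:
  n8n:
    image: n8nio/n8n            # pin a patched version tag in practice
    ports:
      - "127.0.0.1:5678:5678"   # bind to localhost only -- no direct Internet exposure;
                                # reach it via VPN or an authenticating reverse proxy
    user: "1000:1000"           # run as an unprivileged user, not root
    security_opt:
      - no-new-privileges:true  # block privilege escalation inside the container
    volumes:
      - n8n_data:/home/node/.n8n
volumes:
  n8n_data:
```

Robust authentication (for example, n8n's built-in user management behind a TLS-terminating reverse proxy) would be layered on top; this fragment addresses only network exposure and execution privileges.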

These risks include prompt injection attacks, model manipulation and poisoning attacks, and software vulnerabilities.

Another issue is the increasing use of new standards, such as the Model Context Protocol (MCP), to link LLMs to outside tools and data sources.