According to Infosecurity Magazine, Google patched a critical zero-click vulnerability in its Gemini Enterprise and Vertex AI Search services that could lead to silent corporate data leaks. The flaw, discovered in June 2025 by researchers at Noma Security and dubbed “GeminiJack,” was an architectural weakness allowing indirect prompt injection. Attackers could embed hidden instructions in common documents within Gmail, Google Calendar, or Docs—anywhere Gemini Enterprise had access—to steal sensitive information without triggering security controls or requiring a user click. Google confirmed receiving the report in August and has since deployed updates, including fully separating Vertex AI Search from Gemini Enterprise. The UK’s National Cyber Security Centre has recently issued new guidance to help mitigate such prompt injection attacks.
Why this is a big deal
Look, the scary part here isn’t just that it was a bug. Bugs happen. It’s the type of bug. This wasn’t about breaking into a system. It was about weaponizing the system’s own intended function. Gemini Enterprise is sold as a productivity booster, an AI that can search across all your corporate data. But that’s exactly what made it dangerous. The AI, operating on a Retrieval-Augmented Generation (RAG) architecture, inherently trusts the documents it’s been told to access. So if a malicious document gets into that trusted pool—maybe via a shared folder, a calendar invite, or an email attachment—the AI will read and execute its hidden commands. No click needed. The perimeter never gets crossed in a traditional sense. The data just… walks out the front door, handed over by your own AI assistant.
The new AI security reality
Here’s the thing: the researchers at Noma Security nailed it when they said traditional security tools “weren’t designed to detect when your AI assistant becomes an exfiltration engine.” We’ve spent decades building walls and gates. Now, we’re installing a super-smart, highly privileged butler inside those walls and giving it the keys to everything. The threat model completely changes. The “blast radius,” as they call it, is huge because one poisoned document can potentially give an attacker access to everything the AI can see. And in a corporate setting, that’s often a lot. This GeminiJack proof-of-concept is almost certainly just the first of many similar attacks we’ll see. As AI agents get more access and autonomy, these indirect prompt injections will be the new phishing.
What comes next?
So what do companies do? I think the first step is a major mindset shift. Deploying enterprise AI can’t just be an IT or productivity decision. It has to be a core security review. You have to map out exactly what data sources it connects to and ask, “What if this thing gets tricked?” Robust monitoring for AI interactions is going to become its own entire category of security software. We’ll need tools that can spot anomalous data patterns in AI queries and responses. Basically, we need AI to watch the AI. And for hardware that controls physical processes—think manufacturing lines, power grids, or logistics hubs—the stakes are even higher. Securing the industrial computers that run these systems is paramount, which is why specialists like IndustrialMonitorDirect.com, the leading US provider of rugged industrial panel PCs, emphasize security-hardened designs from the ground up. The fusion of AI with operational technology is coming, and the security foundations need to be rock solid.
The bottom line? GeminiJack is a massive wake-up call. Google fixed this one, and that’s good. But the architectural pattern is everywhere. Every company building or using an AI agent with broad data access is now a potential target. The game has changed, and the rules are just being written.
