According to Infosecurity Magazine, Microsoft’s Detection and Response Team discovered in July 2025 that threat actors have been weaponizing OpenAI’s Assistants API to deploy a backdoor called SesameOp that remotely manages compromised devices. The sophisticated intrusion saw attackers maintain a presence for several months using compromised Microsoft Visual Studio utilities and malicious libraries. The backdoor consists of a heavily obfuscated DLL called Netapi64.dll and a .NET-based component called OpenAIAgent.Netapi64 that leverages OpenAI as a command-and-control channel. Instead of relying on traditional attacker-controlled infrastructure, SesameOp fetches commands through the legitimate Assistants API, executes them locally, then sends the results back as encrypted messages. Microsoft published its findings about this novel attack vector on November 3, 2025; OpenAI, for its part, plans to deprecate the Assistants API in August 2026 in favor of its new Responses API.
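To see why this blends in so well, consider what the traffic actually is: an Assistants API thread is just a place to post and read messages, so arbitrary text (including encrypted tasking) can ride over the same documented calls any legitimate integration makes. The minimal sketch below uses the official openai Python SDK purely for illustration; Microsoft notes SesameOp itself issues the equivalent raw HTTPS requests without the SDK, and the payload here is a hypothetical placeholder.

```python
# Illustration only: standard, documented Assistants API calls that move opaque text
# in and out of a thread. The payload string is hypothetical; the point is that the
# API treats it as ordinary message content, indistinguishable from legitimate use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One side can stage a message in a thread...
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="<opaque encrypted blob>",  # arbitrary text, nothing model-specific
)

# ...and any client that knows the thread ID can later read it back.
for message in client.beta.threads.messages.list(thread_id=thread.id):
    print(message.content[0].text.value)
```

Nothing in that exchange invokes a model or looks unusual at the network layer, which is exactly the problem.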
When Trusted Tools Become Weapons
Here’s the thing that makes this particularly concerning: the attackers aren’t just using OpenAI’s infrastructure, they’re exploiting a legitimate business API that’s designed for building AI assistants. Basically, they’re hiding in plain sight by using a service most security tools would consider trustworthy. The malware doesn’t even use OpenAI’s official SDKs or its model-execution features, which makes detection even harder, since it isn’t behaving like typical AI-powered malware.
And think about the implications here. We’ve seen attackers abuse cloud storage services and legitimate websites for command-and-control before, but now they’re moving up the stack to AI services. What’s stopping them from using other AI APIs next? If OpenAI’s infrastructure can be weaponized this effectively, what about Google’s Gemini API or Anthropic’s Claude API? This feels like just the beginning of a new attack vector that security teams now have to worry about.
Advanced Stealth Techniques
The technical sophistication here is pretty alarming. SesameOp uses .NET AppDomainManager injection for defense evasion, a technique that abuses a legitimate .NET extensibility point to load malicious code inside a trusted process without tripping traditional detection methods. On top of that, its communications are hidden behind compression and layered encryption, both symmetric and asymmetric. This isn’t some script-kiddie operation.
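Microsoft hasn’t published the exact scheme, but the general pattern it describes, compress the payload, encrypt it with a symmetric key, then protect that key with an asymmetric one, looks roughly like the sketch below. The algorithm choices (zlib, AES-GCM, RSA-OAEP) and every name in it are assumptions for illustration, not details recovered from SesameOp.

```python
# A minimal compress-then-hybrid-encrypt sketch of the general pattern described
# (compression plus symmetric and asymmetric layers). All names and parameters
# are illustrative assumptions, not taken from the actual malware.
import os
import zlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap_payload(plaintext: bytes, recipient_public_key) -> dict:
    compressed = zlib.compress(plaintext)              # shrink before encrypting
    session_key = AESGCM.generate_key(bit_length=256)  # one-time symmetric key
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, compressed, None)  # symmetric layer
    wrapped_key = recipient_public_key.encrypt(        # asymmetric layer protects the key
        session_key,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    return {"key": wrapped_key, "nonce": nonce, "data": ciphertext}

# Usage sketch: only the holder of the matching private key can unwrap the session key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
envelope = wrap_payload(b"command output...", private_key.public_key())
```

The practical upshot is that even if defenders intercept the messages, the content is opaque without the attacker’s private key.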
But here’s what really stands out: the attackers are thinking several steps ahead. By using legitimate AI services, they’re making their traffic look like normal API calls rather than suspicious command-and-control communications. Most organizations wouldn’t think to block OpenAI’s domains, and even if they did, that would break legitimate business functions. It’s a brilliant—and terrifying—way to blend in with normal network traffic.
What This Means for Cloud Security
So where does this leave us? We’re entering an era where the distinction between legitimate cloud services and attack infrastructure is becoming dangerously blurred. Security teams can’t just block entire categories of services anymore—imagine telling your developers they can’t use OpenAI APIs because they might be abused by attackers. That’s just not practical for most businesses.
I’m genuinely curious how widespread this technique will become. The fact that Microsoft found this in a sophisticated, months-long campaign suggests we’re looking at advanced threat actors rather than random malware authors. But you can bet cheaper variants will follow once the technique becomes more widely known. The cat’s out of the bag now, and other attackers will definitely take notes.
Ultimately, this forces a rethink of how we approach cloud security. We need better ways to distinguish legitimate API usage from malicious activity, even when they’re using the exact same services. That’s a much harder problem than just blocking known bad domains or IP addresses. The security industry has some serious catching up to do.
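As a very rough first approximation of what that could look like: baseline which hosts and processes in your environment legitimately call api.openai.com, then surface the outliers. The sketch below assumes egress or proxy logs exported as JSON lines with source_host, process, and destination fields; those field names and the allowlist are hypothetical, and a real detection would need far more context (volume, timing, which API key is used) than this.

```python
# A rough triage sketch, not a production detection: read proxy/egress logs (JSON lines
# on stdin) and flag OpenAI API traffic from callers not expected to use it.
# Field names and the allowlist are assumptions about your logging pipeline.
import json
import sys
from collections import Counter

EXPECTED_CALLERS = {
    ("build-server-01", "python"),       # hypothetical approved integrations
    ("data-science-vm", "jupyter-lab"),
}

def unexpected_openai_calls(log_lines):
    hits = Counter()
    for line in log_lines:
        event = json.loads(line)
        if "api.openai.com" not in event.get("destination", ""):
            continue
        caller = (event.get("source_host", "?"), event.get("process", "?"))
        if caller not in EXPECTED_CALLERS:
            hits[caller] += 1
    return hits

if __name__ == "__main__":
    for (host, process), count in unexpected_openai_calls(sys.stdin).most_common():
        print(f"{host}\t{process}\t{count} unexpected calls to api.openai.com")
```

It’s crude, but it reframes the question from “is this domain bad?” to “should this machine be talking to it at all?”, which is the direction detection has to move.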
