North Korean Hackers Hit a Developer Tool and Got Close to OpenAI’s App Signing Keys

A software supply chain attack attributed to North Korean threat actors compromised Axios, one of the most widely used JavaScript libraries in the world, on March 31, 2026. OpenAI disclosed on April 10 that a GitHub Actions workflow used in its macOS app-signing process had downloaded and executed the malicious version of Axios during the attack window. The affected workflow had access to the certificate and notarization material OpenAI uses to sign four macOS applications:

  1. ChatGPT Desktop
  2. Codex
  3. Codex CLI
  4. Atlas

OpenAI found no evidence that user data, systems, intellectual property, or published software was compromised. The certificate is being revoked regardless.

What a Software Supply Chain Attack Actually Is

To understand what happened, start with how modern software gets built. Developers rarely write every line of code in their applications from scratch. They rely on thousands of pre-built open-source libraries, reusable packages of code that handle common tasks, which they import directly into their projects. Axios is one of those libraries: a widely adopted JavaScript package used by millions of developers globally to handle HTTP requests.
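As an illustration of that dependency structure, a Node.js project declares the libraries it relies on in its package.json. The project name and version range below are hypothetical, not OpenAI's actual configuration:

```json
{
  "name": "example-app",
  "dependencies": {
    "axios": "^1.14.0"
  }
}
```

The caret range (`^1.14.0`) tells the package manager to accept any compatible newer release automatically. That convenience is exactly the trust relationship a supply chain attack exploits.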

A software supply chain attack targets this dependency structure. Instead of attacking OpenAI directly, which would require breaching one of the world’s most security-conscious AI companies, the attackers compromised Axios at its source. When OpenAI’s automated build system ran its standard workflow and pulled in Axios as a dependency, it pulled in the malicious version. The attack reached OpenAI not through OpenAI’s defenses, but around them.

This attack method is increasingly favored by sophisticated state-sponsored threat actors precisely because it exploits the trust developers place in the libraries they use. The 2020 SolarWinds attack, which compromised the US Treasury Department, the Department of Homeland Security, and dozens of other organizations, used the same fundamental approach.

SolarWinds sits within two decades of documented, escalating supply chain and state-sponsored attacks that have consistently exploited the same structural trust model. The pattern the North Korean actors used against Axios in 2026 is the same pattern that has succeeded against government, financial, and technology targets repeatedly, because the underlying dependency architecture has not changed.

What the Attackers Accessed and What They Didn’t

The malicious version of Axios, version 1.14.1, executed inside a GitHub Actions workflow that OpenAI uses for macOS app signing. That workflow had access to two specific security assets: the code signing certificate that identifies OpenAI apps as legitimate to macOS, and the notarization material that Apple uses to verify apps before allowing them to run.

OpenAI’s technical analysis concluded that the signing certificate was “likely not successfully exfiltrated” by the malicious payload, a careful but qualified statement. The conclusion rests on the timing of the payload’s execution, the sequencing of how the certificate was injected into the workflow, and other mitigating technical factors.

The word “likely” is doing significant work in that sentence. OpenAI is treating the certificate as compromised regardless of whether exfiltration occurred.

The specific risk a compromised signing certificate creates is direct: a malicious actor possessing OpenAI’s certificate could sign their own malware to make it appear as legitimate OpenAI software. A user downloading what appeared to be ChatGPT Desktop would receive a verified OpenAI signature on a malicious payload.

A compromised signing certificate producing fraudulent ChatGPT installers would compound a separate, documented vulnerability in ChatGPT’s own execution environment. In the same period, Check Point researchers disclosed that ChatGPT’s sandboxed code runner was silently exfiltrating sensitive user data through DNS queries while actively reporting that no transfer had occurred. Together, the two findings establish that OpenAI’s attack surface extends well beyond its distribution infrastructure.

What was not compromised: user data, passwords, OpenAI API keys, internal systems, intellectual property, and the iOS, Android, Linux, Windows, and web versions of all products. Exposure was limited to the macOS app-signing infrastructure.

The Technical Root Cause

OpenAI identified the root cause as a misconfiguration in the GitHub Actions workflow, specifically two security gaps that the attack exploited directly.

1. Floating tag instead of commit hash. The workflow referenced Axios using a floating version tag rather than a pinned commit hash. A floating tag points to whatever the latest version of a library is, meaning when the malicious version 1.14.1 was published, the workflow automatically pulled it in. A pinned commit hash would have locked the workflow to a specific, previously verified version of the code, regardless of what new versions were published.

2. No minimum release age configured. The workflow lacked a configured minimumReleaseAge setting for new packages, a security control that introduces a delay before new package versions are automatically trusted and used. This delay gives the security community time to identify and flag malicious packages before automated systems deploy them at scale.
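For teams using pnpm, which ships this control natively, the delay can be set in pnpm-workspace.yaml. The value below is illustrative, not OpenAI's actual setting:

```yaml
# Refuse to resolve any package version published too recently.
# Value is in minutes; 10080 minutes = 7 days.
minimumReleaseAge: 10080
```

A week is long enough for the security community to flag most malicious releases, while the lockfile keeps day-to-day builds reproducible in the meantime.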

Both misconfigurations are standard supply chain hygiene failures: not exotic vulnerabilities, but well-documented configuration gaps that security frameworks such as SLSA (Supply Chain Levels for Software Artifacts) and NIST’s Secure Software Development Framework specifically address.

What OpenAI Is Doing and What macOS Users Must Do

OpenAI’s response follows a four-part remediation structure.

| Action | Status | Deadline |
| --- | --- | --- |
| Rotated macOS code signing certificate | Complete | Done |
| Engaged a third-party forensics firm | Complete | Done |
| Blocked new notarizations with the old certificate | Complete | Done |
| Full certificate revocation | Scheduled | May 8, 2026 |

Effective May 8, 2026, all older versions of OpenAI’s macOS apps will stop receiving updates, stop functioning, or both. The minimum versions users must update to are ChatGPT Desktop 1.2026.051, Codex App 26.406.40811, Codex CLI 0.119.0, and Atlas 1.2026.84.2.

OpenAI is providing a 30-day update window before full revocation to minimize user disruption. During this window, any fraudulent app signed with the old certificate would lack valid notarization and would be blocked by macOS security protections by default, unless a user explicitly bypasses those protections.

macOS users must update their OpenAI apps immediately through in-app updates or official OpenAI download pages only. Do not install apps from email links, messages, advertisements, file-sharing links, or third-party download sites. Any unexpected ChatGPT, Codex, or Atlas installer arriving through any channel other than official OpenAI sources should be treated as a potential impersonation attempt.
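Cautious users can also check an installed app's signature and notarization directly with Apple's built-in tools. The application path below is an assumption about where the app lives; adjust it to the actual install location:

```shell
# Verify the code signature chain on the installed app bundle
codesign --verify --deep --strict --verbose=2 /Applications/ChatGPT.app

# Ask Gatekeeper whether it would allow the app to run
# (this checks notarization as well as the signature)
spctl --assess --type execute --verbose /Applications/ChatGPT.app
```

A rejection from either command on a freshly downloaded installer is a strong signal to delete it and re-download from an official OpenAI source.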

Final Words

OpenAI’s handling of this incident reflects a high standard of transparency: a detailed technical disclosure within 10 days of the attack, clear user instructions, and a conservative security posture that treats a probable non-exfiltration as a confirmed compromise requiring full certificate rotation.

The less comfortable takeaway is the structural problem the incident exposes. Software supply chain attacks succeed not because of failures unique to the targeted company but because the entire industry’s dependence on open-source libraries creates an attack surface no single organization fully controls. OpenAI fixed two workflow misconfigurations; the Axios library itself was compromised through an account takeover of its maintainer, a human credential problem rather than a code problem. The next supply chain attack will target a different library, used by different companies, through the same fundamental trust model that makes modern software development possible. Fixing the misconfigurations is necessary, but it does not close the category of vulnerability that enabled the attack in the first place.

OpenAI is not the only frontier AI lab whose source-level assets were exposed in 2026; Anthropic’s accidental leak of Claude Code’s entire source code earlier in the year demonstrated that the attack surface around AI infrastructure includes internal disclosure risks that no supply chain security framework was designed to address.

Security incidents, supply chain vulnerabilities, and the threats targeting AI infrastructure are covered at The IT Horizon. Subscribe to our newsletter. We track every breach, disclosure, and attack pattern that affects the platforms and tools you depend on daily.

