No More Meta-Mercor Relationships: Here’s Why

Meta has indefinitely suspended every project it runs with Mercor (a $10 billion AI data contracting startup) following a confirmed security breach that may have exposed proprietary training data belonging to some of the world’s most valuable AI companies. 

At the same time, OpenAI is investigating its own exposure. Anthropic has not yet commented. The contractors who were doing the work have been told they cannot log billable hours until further notice.

What Is Mercor and Why Does It Matter

The company

Mercor was founded in 2023 and is one of Silicon Valley’s fastest-growing startups. In October 2025, it raised $350 million in a Series C funding round led by Felicis Ventures and was valued at $10 billion, a figure that reflects how central it has become to the AI industry’s most sensitive operations.

What it actually does

Mercor sits at the intersection of AI training and specialised human labour. Major AI labs, including OpenAI, Anthropic, and Meta, do not generate all their training data internally. They outsource significant portions to contractors like Mercor, which recruits networks of specialists (doctors, scientists, lawyers, engineers) across global markets, including India, to generate bespoke, proprietary datasets. These datasets teach AI models how to reason, respond, and behave in domain-specific contexts.

Why this data is treated as secret

The datasets Mercor generates are not just content. They are competitive blueprints. The specific data selection criteria, labelling protocols, and model-training methodologies these companies use represent a competitive moat worth billions of dollars. AI labs guard this information because it reveals to competitors, including foreign AI labs, the precise ways they are building and improving their models. Mercor and its competitors, including Surge, Handshake, Turing, Labelbox, and Scale AI, are known for operating under extreme secrecy, using internal codenames for client projects and rarely speaking publicly about the specific work they do.

How the Meta relationship worked

Meta was one of Mercor’s largest clients. One project, codenamed Chordus, involved training Meta’s AI models to use multiple internet sources simultaneously to verify their responses to user queries. Contractors staffed on Chordus and other Meta-specific projects represent a significant portion of Mercor’s active workforce.

What Actually Happened: The Breach Explained

The attack vector: LiteLLM

The breach did not begin at Mercor. It began at LiteLLM, a widely used open-source tool that developers use to connect their applications to AI services from multiple providers. LiteLLM is used by millions of developers and integrated into the infrastructure of thousands of companies, including major AI labs and their contractors.

A hacking group called TeamPCP compromised LiteLLM’s CI/CD pipeline, the automated system used to build, test, and publish software updates, and published malicious versions of the library to PyPI, the standard public repository where Python developers download software packages. These tainted versions harvested API keys, cloud credentials, and other sensitive data from any system that installed them, before the malicious updates were identified and removed.
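The defensive response to a poisoned release is usually simple in principle: identify the tainted version numbers and refuse to run on any environment that has one installed. The sketch below illustrates that check under stated assumptions — the version strings are placeholders, not the actual compromised LiteLLM releases, which have not been enumerated in this article:

```python
# Illustrative sketch: fail fast when a known-compromised release of a
# dependency is present. The version strings are PLACEHOLDERS, not the
# actual tainted LiteLLM releases.
from importlib.metadata import PackageNotFoundError, version

KNOWN_BAD = {
    "litellm": {"9.9.0", "9.9.1"},  # hypothetical version numbers
}

def is_compromised(package: str, installed: str) -> bool:
    """True if the installed version appears on the known-bad list."""
    return installed in KNOWN_BAD.get(package, set())

def check_environment(package: str) -> bool:
    """True if a known-bad release of the package is installed here."""
    try:
        installed = version(package)
    except PackageNotFoundError:
        return False  # not installed, nothing to flag
    return is_compromised(package, installed)
```

In practice this logic is delivered through advisory databases and scanners (for example, tooling that consults PyPI security advisories) rather than hand-maintained lists, but the comparison they perform is the same.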

Mercor confirmed the attack in an email to staff on March 31, 2026: “There was a recent security incident that affected our systems along with thousands of other organisations worldwide.”

The second claim: Lapsus$

Separately, a group operating under the name Lapsus$, a hacking collective notorious for data extortion, claimed responsibility for breaching Mercor directly and offered to sell stolen data. The alleged stolen material included:

  • A 200+ GB database
  • Nearly 1 TB of source code
  • 3 TB of video and other contractor information
  • Slack data from internal workplace communications
  • Ticketing data
  • Two videos purportedly showing conversations between Mercor’s AI systems and contractors

Security researchers note that many cybercriminal groups now periodically adopt the Lapsus$ name, and Mercor’s own confirmation of the LiteLLM connection suggests the primary attacker is TeamPCP or a connected actor. 

Allan Liska, ransomware analyst at Recorded Future, stated directly: “There is absolutely nothing that connects this to the original Lapsus$.”

Who is TeamPCP

TeamPCP is a financially motivated hacking group that has gained rapid prominence through supply chain attacks: attacks that target widely used software libraries rather than individual companies, so that a single compromise cascades across thousands of victims simultaneously.

Beyond data extortion, TeamPCP has collaborated with ransomware groups, including a group known as Vect, and has distributed a data-wiping worm called CanisterWorm through vulnerable cloud instances with Farsi as their default language or clocks set to Iran’s time zone. Liska describes the group as “definitely financially motivated” while noting possible geopolitical dimensions that are difficult to confirm.

What the Breach May Have Exposed

The most sensitive question surrounding this incident is not what data was stolen. It is what that data reveals.

Mercor’s position at the centre of multiple AI companies’ data pipelines means the breach could have exposed three categories of competitive intelligence:

  1. Training data content: The actual datasets generated by human contractors, which reflect the specific domains and tasks each AI lab is prioritising
  2. Labelling protocols: The instructions given to contractors about how to evaluate, score, and annotate AI outputs, which directly reflect each company’s approach to alignment and safety
  3. Model-building methodologies: The selection criteria and task design choices that reveal how each lab is attempting to improve its models

These methods are harder to replicate than the datasets themselves and represent competitive advantages that took years and billions of dollars to develop. Whether the data exposed in this breach is complete enough to meaningfully help a competitor is currently unknown. Neither Meta, OpenAI, nor Anthropic has confirmed what specifically was accessed.

The Impact on Contractors

The human cost of this breach is immediate and direct. Contractors working on Meta-specific Mercor projects, including those staffed on Chordus, were told they cannot log billable hours until the projects resume, if they resume. Most were not told why. A project lead in the Chordus Slack channel told staff only that Mercor was “currently reassessing the project scope.”

Mercor says it is working to find alternative assignments for affected contractors. For many, particularly those in markets like India where Mercor recruits specialists, the sudden halt means being effectively unemployed with no confirmed timeline for resumption.

OpenAI and Anthropic: What Is Their Involvement?

Both OpenAI and Anthropic are Mercor clients, meaning their proprietary training data passed through the same systems that were compromised.

OpenAI confirmed it is investigating the breach to assess how its training data may have been exposed, but has not paused its Mercor projects. It confirmed that no OpenAI user data was affected.

Anthropic had not responded to requests for comment at the time of publication.

Neither company has confirmed what data of theirs was specifically accessed or whether a competitor could use it.

What Each Company Has Said

Mercor confirmed the breach and stated it “moved promptly” to contain the situation. A third-party forensics investigation has been launched. Spokesperson Heidi Hagburg said the privacy and security of customers and contractors is “foundational to everything we do at Mercor,” but declined to confirm whether customer or contractor data had been accessed or misused. Mercor is now facing a class-action lawsuit alleging inadequate cybersecurity protections.

Meta has paused all work with Mercor indefinitely while it investigates. No timeline for resumption has been provided.

OpenAI has not stopped current projects with Mercor but confirmed it is investigating the incident to assess how its proprietary training data may have been exposed. A spokesperson confirmed the incident does not affect OpenAI user data.

Anthropic had not responded to requests for comment at the time of publication.

Why This Incident Matters Beyond Mercor

The Mercor breach illustrates a structural vulnerability that the AI industry has not yet fully reckoned with: the most sensitive competitive assets of the world’s most valuable AI companies are not held exclusively by those companies. They flow through a network of third-party contractors, data vendors, and shared infrastructure tools, each of which represents a potential entry point for the same nation-state actors and cybercriminal groups that have spent years targeting the AI sector.

A supply chain attack on a single open-source library used by millions of developers can cascade into exposure at OpenAI, Anthropic, Meta, and dozens of other AI labs simultaneously, without any of those companies being directly attacked. The fact that LiteLLM, a tool used by millions, could be compromised through its own publishing pipeline and used to harvest credentials across the entire AI industry is a warning the sector will need to act on, not just investigate.
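One widely recommended guardrail against this class of attack is installing dependencies only from hash-pinned lock files, so that a republished, tampered artifact fails verification before it ever executes. The sketch below shows the core comparison involved; it is an illustration of the principle, not a replacement for existing tooling such as pip’s hash-checking mode:

```python
# Sketch of the check behind hash-pinned installs: compare a downloaded
# artifact's SHA-256 digest against the value recorded when the
# dependency was pinned, and reject any mismatch.
import hashlib

def sha256_of(path: str) -> str:
    """Hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: str, pinned_digest: str) -> bool:
    """True only if the artifact matches the digest pinned in the lock file."""
    return sha256_of(path) == pinned_digest
```

In practice, teams get this behaviour from standard tooling, for example `pip install --require-hashes` against a lock file generated with `pip-compile --generate-hashes`, rather than hand-rolled checks.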

Final Takeaway

Mercor’s breach is not just a startup security story. It is a demonstration of how the AI industry’s dependence on shared vendors, shared tools, and shared infrastructure creates shared risk. The training data that makes one company’s AI models better than a competitor’s is now confirmed to have passed through a compromised system. Whether any competitor (domestic or foreign) received anything useful from that data is the question investigators are now trying to answer. The answer will determine whether this incident is a costly wake-up call or something significantly worse.

From cybersecurity incidents to AI industry developments and the security risks shaping the technology sector, our newsletter covers every story worth knowing about. Subscribe and stay informed.

