OpenAI, Anthropic, and Google Just Teamed Up — Here's Why It Matters
Three of the biggest AI rivals in the world are now sharing intelligence to stop Chinese companies from cloning their models. It is the first real sign that the AI race has a copycat problem too big for any one company to solve alone.
By Troy Brown
OpenAI, Anthropic, and Google are fierce competitors. They fight over talent, customers, and headlines every single week. So when all three of them decide to work together on something, it is worth paying attention to.
This week, Bloomberg reported that the three companies have started sharing intelligence through the Frontier Model Forum, an industry group they co-founded with Microsoft back in 2023. The goal is to detect and stop Chinese AI companies from copying their models.
The issue is something called adversarial distillation. In plain English, it works like this: a company sets up thousands of fake accounts, sends millions of carefully crafted questions to a model like ChatGPT or Claude, collects all the answers, and then uses those answers to train a cheaper copycat model. Think of it as photocopying someone else's homework at industrial scale.
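For the technically curious, the harvesting step above can be sketched in a few lines of Python. This is an illustrative toy, not anyone's actual pipeline: the `teacher_model` function here is a stand-in for a real frontier model's API, and all names are hypothetical.

```python
# A minimal sketch of distillation-style harvesting, with a stubbed
# "teacher" in place of a real model API. Purely illustrative.

def teacher_model(prompt: str) -> str:
    # Stand-in for the expensive frontier model being queried.
    return f"Detailed answer to: {prompt}"

def harvest(prompts):
    # Step 1: send many crafted prompts, collect every response as a
    # (prompt, completion) training pair.
    return [{"prompt": p, "completion": teacher_model(p)} for p in prompts]

prompts = [f"Explain concept #{i}" for i in range(5)]
dataset = harvest(prompts)

# Step 2 (not shown): fine-tune a smaller "student" model on `dataset`,
# reproducing much of the teacher's behavior at a fraction of the cost.
print(len(dataset), dataset[0]["prompt"])
```

Scale that loop up to millions of prompts across thousands of accounts and you have the campaign Anthropic describes.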
Anthropic says it has already seen this happen. The company accused three Chinese firms — DeepSeek, Moonshot AI, and MiniMax — of running coordinated campaigns against its Claude model. The numbers are staggering: roughly 16 million exchanges from around 24,000 fraudulently created accounts, all designed to extract Claude's capabilities and bake them into cheaper alternatives.
OpenAI told U.S. lawmakers that DeepSeek tried to, in OpenAI's words, "free-ride" on the capabilities developed by OpenAI and other U.S. frontier labs. Google has reported similar patterns. These are not small-time operations. They are systematic, well-funded efforts to clone billions of dollars worth of research without paying for any of it.
That is why three companies that normally do not share anything are now pooling their detection signals. They are looking for patterns that no single company could catch alone — suspicious API usage, account behaviors that look like automated harvesting, and traffic signatures that suggest someone is systematically extracting model outputs at scale.
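To make the detection side concrete, here is one simplistic heuristic of the kind such systems might build on. The thresholds, field names, and logic are all invented for illustration; real detection presumably combines many weaker signals across companies.

```python
# A hypothetical harvesting heuristic: flag accounts whose usage looks
# scripted rather than human. Thresholds and names are illustrative only.

from dataclasses import dataclass

@dataclass
class AccountStats:
    account_id: str
    requests_per_day: int
    distinct_prompt_templates: int  # low value => repetitive, scripted prompts

def looks_like_harvesting(stats: AccountStats,
                          max_requests: int = 10_000,
                          min_templates: int = 50) -> bool:
    # Very high volume combined with highly repetitive prompts is a
    # signature of automated extraction, not normal use.
    return (stats.requests_per_day > max_requests
            and stats.distinct_prompt_templates < min_templates)

accounts = [
    AccountStats("user-1", requests_per_day=120, distinct_prompt_templates=90),
    AccountStats("bot-7", requests_per_day=50_000, distinct_prompt_templates=3),
]
flagged = [a.account_id for a in accounts if looks_like_harvesting(a)]
print(flagged)  # ['bot-7']
```

The point of pooling data is that a campaign spread thinly across three providers might stay under each company's thresholds individually while being obvious in the combined view.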
For anyone who is not in the AI industry, the simplest way to understand why this matters is money. Building a frontier AI model costs hundreds of millions of dollars in compute, years of research, and some of the most expensive talent on the planet. If a competitor can replicate most of that capability by simply querying your model for a few weeks, the economics of the entire industry start to break down.
It is a bit like spending ten years and a billion dollars developing a new drug, only to have someone reverse-engineer it from the pills on a pharmacy shelf. The original investment still happened, but the return on it just collapsed.
There is also a safety angle that matters. When a model gets distilled this way, the safety guardrails often get stripped out in the process. The copycat model might be able to do most of what the original can do, but without the filters that prevent it from helping with dangerous tasks. Anthropic has specifically warned that distilled models could end up being used for cyberattacks, disinformation, and surveillance by authoritarian governments.
U.S. officials estimate that unauthorized distillation costs Silicon Valley labs billions of dollars per year. That is not just a problem for shareholders. It is a problem for anyone who benefits from these companies having the resources to keep improving their products and investing in safety research.
Now, there is an obvious tension here that is worth naming. These same companies built their models by training on enormous amounts of publicly available data, including content created by writers, artists, and businesses who never agreed to it. The irony of AI labs complaining about their work being copied is not lost on a lot of people.
That does not make the distillation problem fake. But it does make the moral high ground a little less clear-cut than the press releases suggest. The companies asking for protection today are the same ones that took a very liberal view of fair use when it suited them. Both things can be true at the same time.
For small business owners and creators, the practical takeaway is this: the AI tools you rely on are expensive to build and maintain. If the companies behind them cannot protect their work, the quality and safety of those tools could start to erode. A healthy competitive market, where companies invest in genuine research rather than just copying each other, is better for everyone who uses these products.
It is also a reminder that AI is not just a technology story anymore. It is a geopolitics story, a trade story, and increasingly a security story. The tools you use every day to write emails, build websites, and run your business are sitting at the center of a global competition that is only getting more intense.
The alliance between OpenAI, Anthropic, and Google is a signal in itself. When rivals stop fighting each other long enough to fight a shared threat, the threat is real. Whether their response will actually work is an open question. Determined copycats are hard to stop, especially when the product you are protecting is, by design, meant to answer any question you ask it.
But the fact that it is happening at all tells you something important about where the AI industry is headed. The era of building in the open and hoping for the best is over. The next phase will involve harder boundaries, tighter access controls, and a lot more conversation about who gets to use these models, how, and at what price.
For now, the thing worth remembering is simple. The AI tools you use are not free to create. Someone is paying for the research, the compute, and the safety work behind them. When that investment gets undermined at scale, it does not just hurt the companies involved. It shapes what kind of AI the rest of us get to use.
Join The AI Signal for clear weekly notes on tools, workflows, and the handful of AI developments that are actually worth your attention.