Elon Musk confirmed in court this week that xAI uses OpenAI's models to train its own systems - a practice called distillation that directly contradicts his public positioning of xAI as a competitive alternative built from scratch.
The admission came during testimony in the landmark Musk v. Altman trial, where Musk is suing OpenAI's CEO over claims the company abandoned its founding mission. But the trial's first week delivered something more interesting than legal arguments: a rare look at how the world's richest man actually feels about being outmanoeuvred.
The Distillation Admission
Distillation is a common but rarely discussed practice in AI development. You take a large, capable model - say, GPT-4 - feed it millions of queries, capture its responses, then use that data to train a smaller, faster model that mimics the original's behaviour without needing the original's architecture or training data.
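For readers who want to see the mechanics, here is a minimal sketch of output-level distillation in Python. Everything in it is illustrative: the teacher model name, the prompt list, and the output file are assumptions, and it shows the general technique rather than how xAI actually builds Grok. It assumes the OpenAI Python SDK (v1+) is installed and an API key is set in the environment.

```python
# Minimal sketch of output-level distillation: collect a teacher model's
# responses to a set of prompts, then use those (prompt, answer) pairs as
# supervised fine-tuning data for a smaller "student" model.
# Assumptions: OPENAI_API_KEY is set; "gpt-4o" stands in for whatever
# teacher model is queried; prompts and the output path are illustrative.

import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "Explain gradient descent to a new engineer.",
    "Summarise the trade-offs between SQL and NoSQL databases.",
    # ... in practice, millions of prompts covering the target domain
]

records = []
for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o",  # the large, capable "teacher" model
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    answer = response.choices[0].message.content
    # Store the pair in a chat-style format most fine-tuning stacks accept.
    records.append({
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": answer},
        ]
    })

# Write the distilled dataset to disk; a smaller student model is then
# fine-tuned on these pairs with an ordinary supervised training pipeline.
with open("distilled_pairs.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```

The point of the exercise is that the teacher's behaviour, not its weights or training data, is what gets copied: the student only ever sees the outputs.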
It's legal. It's widespread. And Musk just admitted xAI does it with OpenAI's models.
The significance isn't the practice itself - everyone's doing it. The significance is that Musk has spent months positioning xAI as a pure competitor, an independent effort to build AGI the right way. The reality is messier: xAI's Grok models are learning from the very company Musk is now suing for betraying its mission.
The 2022 Text Message
Musk also revealed he texted Sam Altman in late 2022, shortly after Microsoft announced its multi-billion-dollar investment in OpenAI, calling the deal a "bait and switch."
The timeline matters here. OpenAI was founded in 2015 as a non-profit research lab, with Musk as co-chair and a major funder. Musk left the board in early 2018, though he continued donating for a time; in 2019 the lab restructured into a "capped-profit" model to raise serious capital - AI research turned out to be far more expensive than anyone had predicted.
Then came the Microsoft deal. Altman positioned it as necessary infrastructure investment. Musk saw it as a philosophical betrayal - OpenAI pivoting from open research to closed, for-profit product development backed by one of the world's largest tech companies.
The "bait and switch" accusation cuts both ways. Musk claims he was sold on a mission that no longer exists. OpenAI's defence is likely to be that the mission never changed - only the funding model evolved to match the reality of what AGI research actually costs.
What This Means for Builders
For developers building on AI platforms, the distillation admission is a reminder of how interconnected this ecosystem really is. The models you're choosing between - OpenAI, Anthropic, xAI, Google - aren't as independent as their marketing suggests. They're learning from each other, iterating on each other's outputs, competing and collaborating in ways that blur traditional competitive boundaries.
The practical takeaway: model choice matters less than most people think. If xAI is distilling OpenAI's models, Anthropic is training on public GPT outputs, and Google is doing the same, then the meaningful differences aren't in raw capability - they're in pricing, terms of service, and how each company handles your data.
That's the boring stuff. But it's the stuff that actually affects your business.
The Real Story Behind the Lawsuit
Strip away the courtroom drama and what you're watching is a founder who got out-executed. Musk co-founded OpenAI, left when it became clear he couldn't control it, watched Altman turn it into the most valuable AI company on earth, then launched xAI as a competitor - only to discover that his new venture is still learning from the thing he walked away from.
It's not a betrayal. It's a divergence in strategy that Musk lost. He wanted AGI research to stay non-profit and ideologically pure. Altman wanted to build something that could actually ship products and compete with Google. Altman won that argument.
Now Musk is in court, admitting his competing effort still relies on the models he's suing over. It's honest testimony. It's also quietly devastating to his case.
The trial continues. Week two will likely bring Altman's testimony and more detail on OpenAI's internal governance decisions. But the shape of the case is already clear: Musk feels betrayed by a shift he didn't see coming. The question is whether "I didn't like how you pivoted" constitutes a legal claim.
The answer, almost certainly, is no. But the process is giving us the kind of transparency into AI development that nobody would volunteer willingly. And that might be the most valuable thing to come out of this trial.