The AI War: Who's Winning, Who's Dying, and Who's Already Won
This was written collaboratively between a human who builds AI tools for a living and the AI that helps him build them. Make of that what you will.
There’s a war happening in AI and most people watching it are looking at the wrong scoreboard. They’re tracking model benchmarks, comparing chatbot responses, arguing about whether Claude or GPT writes better poetry. None of that matters. The war isn’t about who has the best model. It’s about who can afford to keep the lights on long enough to find out.
Let me break down every major player, what they’re actually worth, what they actually earn, and whether any of it makes sense.
The Big Three: Google, OpenAI, Anthropic
Google — The Only Adult in the Room
Google is spending $175 to $185 billion on AI infrastructure in 2026. That’s not a typo. That’s more than the GDP of most European countries, nearly double the $91.4 billion they spent in 2025, and roughly half their total annual revenue.
Here’s why that number matters more than anything else in this article: Google can afford it.
Alphabet’s total revenue for 2025 exceeded $400 billion. Search revenue grew 17%. YouTube hit $60 billion in ad and subscription revenue. Google Cloud revenue grew 48% year-over-year. Their cloud backlog surged 55% sequentially to over $240 billion. Gemini now has 750 million monthly active users.
When Sundar Pichai was asked what keeps him up at night, he didn’t say “competition.” He said “compute capacity.” The CEO of the company spending more on AI infrastructure than any entity on Earth said his problem is that he can’t build fast enough to meet demand.
Google has what OpenAI and Anthropic are spending billions trying to acquire: custom TPUs, massive distribution through Search, Chrome, Android, Gmail, and YouTube, and the most powerful monetisation engine the internet has ever produced. In a price war, Google can sustain losses that would be existential for a standalone lab. They’re not playing the same game as the startups. They’re the house, and the house always wins eventually.
The Apple partnership seals it. Apple chose Google as its preferred cloud provider for overhauling Siri with Gemini. That’s 2.5 billion active devices getting Gemini integration. No amount of funding rounds can compete with that distribution.
OpenAI — $730 Billion of Hope
OpenAI just closed a $110 billion funding round — the largest private financing in history — at a $730 billion pre-money valuation. Amazon put in $50 billion, Nvidia $30 billion, SoftBank $30 billion. They’re projecting $280 billion in total revenue by 2030.
The numbers look impressive until you read the fine print.
OpenAI is projecting cumulative losses of $115 billion through 2029. Their annualised revenue hit roughly $20 billion in 2025, up from $3.7 billion the year before — a 5x increase, which is genuinely remarkable. But they’re valued at $730 billion on $20 billion in revenue. That’s a 36.5x revenue multiple. For context, most mature tech companies trade at 5-10x revenue.
They’ve committed over $1.4 trillion to infrastructure deals with Oracle, Microsoft, Amazon, and CoreWeave. They’re targeting $600 billion in total compute spend by 2030. They need 3 gigawatts of dedicated inference capacity and 2 gigawatts of training capacity on Nvidia’s next-generation systems. These aren’t small numbers for a company that’s never been profitable.
The IPO is coming — potentially targeting a $1 trillion valuation in H2 2026. At a standard 15% float, that would require $150 billion from public markets in a single event. To put that in perspective, the entire US IPO market raised $469 billion across all companies from 2016 to 2025 combined. They’ll likely debut with a tiny 3-8% float, which creates its own set of liquidity problems.
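The float arithmetic is simple enough to sketch. The valuation target and float percentages are the figures quoted above; everything else is just multiplication.

```python
# Back-of-the-envelope IPO float math using the article's figures.
# The "float" is the fraction of shares sold to the public, so
# dollars raised ≈ target valuation × float percentage.

TARGET_VALUATION = 1_000_000_000_000  # $1 trillion target

def capital_required(valuation: float, float_pct: float) -> float:
    """Dollars public markets must absorb at a given float."""
    return valuation * float_pct

# Standard 15% float at a $1T valuation.
print(f"15% float: ${capital_required(TARGET_VALUATION, 0.15) / 1e9:.0f}B")  # 150B

# The likelier 3-8% debut float.
for pct in (0.03, 0.08):
    print(f"{pct:.0%} float: ${capital_required(TARGET_VALUATION, pct) / 1e9:.0f}B")
```

Even at the bottom of the 3-8% range, that is a $30 billion single-event raise, which is why the tiny float trades one problem (absorption) for another (liquidity).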
Sam Altman told CNBC he’s “super excited about this deal” and that “AI is going to happen everywhere.” That’s the sales pitch. The reality is that OpenAI is a company burning cash at a rate that makes WeWork look conservative, betting everything on the assumption that they’ll figure out how to make money before the money runs out.
And there’s the Pentagon deal. Within hours of the US government designating Anthropic a “supply chain risk” for refusing to drop ethical red lines, OpenAI announced it had reached a deal with the Pentagon to deploy models on classified networks. They claimed they had the exact same restrictions as Anthropic — no mass surveillance, no autonomous weapons, human responsibility for use of force — but packaged it in language the Pentagon could accept. “Any lawful purpose” with restrictions baked in somewhere. How both of those things can be true simultaneously remains unclear.
The timing was not subtle.
Anthropic — Best Model, Worst Position
Anthropic closed a $30 billion Series G at a $380 billion valuation in February 2026. That’s the second-largest private tech round in history, behind only OpenAI’s $110 billion. They hit $5 billion in revenue run-rate in 2025, with internal projections targeting $26 billion in 2026. Claude Code alone has $2.5 billion in annualised revenue, and business subscriptions have quadrupled since the start of the year.
They make the best model. I say this as someone who uses all of them daily for production work. Claude is better at coding, better at reasoning, better at following complex instructions, better at maintaining context across long conversations. This isn’t fanboyism — it’s operational experience from building production software with every major model.
But the best model doesn’t mean the best business.
Anthropic’s $380 billion valuation on roughly $5-7 billion in revenue gives it a revenue multiple somewhere in the range of 54x to 76x. That’s not justified by any traditional metric. They’ve raised approximately $44 billion in total funding. They’ve hired Wilson Sonsini to prepare for a potential 2026 IPO. But an IPO means GAAP accounting, quarterly scrutiny, and a shareholder base that behaves very differently from private backers who can afford to be patient.
Then there’s the Pentagon situation. The US government designated Anthropic a “supply chain risk to national security” — the first time America has ever designated one of its own companies this way. Every general counsel at every Fortune 500 company with any defence exposure is now asking whether using Claude is worth the legal risk. A court challenge will take years. The damage is happening now.
Dario Amodei has said publicly that Anthropic has “no choice” but to challenge the designation in court. He’s right. But litigating against the US government while trying to IPO at a $380 billion valuation and scale revenue 5x in a single year is not a comfortable position.
Anthropic’s fundamental vulnerability is this: they need infrastructure they don’t own, competing against companies that do. Google has TPUs. Amazon has Trainium chips and AWS. Microsoft has Azure. Anthropic rents capacity from all of them. When you’re spending billions on compute and none of it belongs to you, one bad year can be fatal.
And here’s the uncomfortable truth: it only takes one bad year for one of the infrastructure providers to decide that acquiring Anthropic is cheaper than competing with them. Google already invested $2 billion. Amazon invested $4 billion through AWS. If Anthropic’s revenue growth slows, or the supply chain risk designation scares enough enterprise customers, the acquisition math starts looking attractive. And once Anthropic is inside Google or Amazon, the restrictions come off. The principled AI safety company becomes a product feature. That should worry anyone who thinks AI safety matters.
The Challengers
xAI — The Elon Play
xAI closed a $20 billion Series E in January 2026 at a valuation above $230 billion. Nvidia, Cisco, Fidelity, Sequoia, and Andreessen Horowitz all invested. Musk reportedly told employees that xAI could achieve AGI as early as 2026.
Most people still don’t understand what Musk did with Twitter.
He bought Twitter for approximately $44 billion in 2022. Everyone thought he overpaid. The platform’s advertising revenue collapsed. Employees were fired en masse. The conventional wisdom was that it was an expensive vanity project.
Then he folded Twitter into xAI and renamed it X. Suddenly, xAI had something no other AI lab could buy: a real-time firehose of human conversation covering every topic, in every language, updated constantly. Grok’s training data is the entire public discourse of humanity — every tweet, every reply, every quote, every trending topic — all of it feeding directly into Grok’s training pipeline.
That wasn’t a social media acquisition. It was a $44 billion data deal disguised as a social media acquisition. Nobody saw it at the time, but looking back, it’s one of the most strategically brilliant moves in tech history.
xAI’s advantage isn’t the model — Grok is good but not market-leading. The advantage is that Musk can throw functionally unlimited money at it. Tesla’s market cap provides the financial backstop. The X data pipeline provides unique training data. And Grok lives inside a high-frequency consumer surface where trying it is frictionless. You open X, Grok is there. No app to download, no subscription to buy.
The pricing tells the story too. Grok’s API is absurdly cheap: $0.20 per million input tokens, $0.02 per generated image. They’re pricing for adoption, not profit. That’s a strategy you can only afford when money doesn’t matter.
Amazon — The Infrastructure Play Without the Model
Amazon planned $200 billion in capex for 2026, more than any other company. They’re the largest cloud provider in the world. They designed their own Trainium chips for AI training. AWS hosts Claude as a first-class offering through Bedrock.
But Amazon doesn’t have a frontier model.
They tried with Titan. Nobody uses it. Their strategy has shifted to being the infrastructure layer — the company that hosts everyone else’s models. This is actually a smart play: no matter which AI lab wins, they all need compute, and AWS will sell it to all of them.
The $50 billion investment in OpenAI’s latest round is interesting though. Amazon is simultaneously hosting Claude on AWS and investing billions in OpenAI. They’re hedging. If Anthropic wins, they profit through Bedrock. If OpenAI wins, they profit through their investment. If neither wins, they still sell compute to whoever does.
Meta — Open Source and Ambiguity
Meta planned $115 to $135 billion in capex for 2026. They have Llama, which is open-weight (not truly open-source, despite the marketing). When Meta’s CFO was asked about capital allocation, she said the “highest order priority is investing our resources to position ourselves as a leader in AI.”
The reality is more complicated. Llama is competitive but not leading. Nobody in enterprise is choosing Llama over Claude or GPT-5 for mission-critical work. Meta’s AI advantage is internal — powering recommendation algorithms for Facebook, Instagram, WhatsApp, and Threads. That’s enormously valuable for their advertising business, but it’s not the same as competing in the AI lab race.
Apple — The On-Device Bet
Apple is taking a fundamentally different approach: on-device AI. Small, efficient models running locally on Apple Silicon chips, with Gemini handling the heavy lifting in the cloud when needed.
This is a bigger deal than it looks.
When AI runs on your device, the Pentagon argument becomes irrelevant. Banning a company means nothing if the model is already running locally on 2.5 billion devices. No API to block, no cloud service to cut off, no supply chain to designate as a risk. The intelligence is in the user’s pocket.
Apple’s deal with Google to integrate Gemini for cloud-heavy tasks is pragmatic — they’re not trying to build a frontier model, they’re trying to make every Apple device an AI-capable endpoint. Apple Intelligence handles the simple stuff locally (summarisation, writing suggestions, image understanding), Gemini handles the complex stuff in the cloud, and the user never knows the difference.
The risk is that Apple falls behind on capability. If local models can’t keep up with cloud-based frontier models, Apple Intelligence becomes the “basic” option and power users will use dedicated AI apps instead. That’s where products like Vibing with Grok come in — desktop and mobile apps that give users access to every model from every provider, routing tasks to whichever AI is best suited. The AI conductor pattern, rather than the single-provider lock-in.
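The “AI conductor” pattern is easy to sketch. Vibing with Grok’s internals aren’t public, so the task categories, provider names, and routing table below are illustrative assumptions, not its actual implementation.

```python
# A minimal sketch of the "AI conductor" routing pattern: classify the
# task, then dispatch to whichever provider handles it best. The routing
# table here is a hypothetical illustration, not any product's real logic.
from dataclasses import dataclass

@dataclass
class Task:
    kind: str    # e.g. "coding", "summarisation", "realtime"
    prompt: str

# Hypothetical routing table: task kind -> preferred provider.
ROUTES = {
    "coding": "claude",           # the article's pick for production coding
    "summarisation": "on-device", # simple tasks stay local, Apple-style
    "realtime": "grok",           # fresh social/news data via X
}
DEFAULT = "gemini"                # cloud fallback for everything else

def route(task: Task) -> str:
    """Pick a provider for a task, falling back to a default."""
    return ROUTES.get(task.kind, DEFAULT)

print(route(Task("coding", "refactor this module")))  # claude
print(route(Task("poetry", "write a haiku")))         # gemini
```

The design point is the fallback: no single provider wins every category, so the router’s default matters as much as its explicit rules.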
The Underdogs: Stolen Valor
Here’s where it gets uncomfortable.
In February 2026, Anthropic published a detailed investigation revealing that DeepSeek, Moonshot AI (creators of the Kimi models), and MiniMax had orchestrated industrial-scale campaigns to extract capabilities from Claude using approximately 24,000 fraudulent accounts generating over 16 million exchanges.
This wasn’t casual use. This was systematic capability extraction.
DeepSeek prompted Claude to “imagine and articulate the internal reasoning behind a completed response and write it out step by step” — effectively generating chain-of-thought training data at scale. They also used Claude to generate censorship-safe alternatives to politically sensitive queries about “dissidents, party leaders, or authoritarianism” to train their own models to steer conversations away from topics the Chinese government wants suppressed. Anthropic traced the accounts to specific researchers at DeepSeek.
Moonshot ran the second-largest operation with over 3.4 million exchanges, targeting agentic reasoning, tool use, coding, and computer vision. MiniMax ran the largest by volume at 13 million exchanges, pivoting within 24 hours when Anthropic released a new model to redirect traffic and capture the latest capabilities.
These labs used proxy services running “hydra cluster architectures” — sprawling networks of fraudulent accounts distributing traffic across APIs and cloud platforms with no single point of failure. One proxy network managed more than 20,000 accounts simultaneously, mixing distillation traffic with legitimate requests to avoid detection.
The irony has been noted by the community. Companies that trained their models on vast amounts of internet data without explicit permission are now upset that someone is training on their outputs without permission. The legal situation is genuinely unclear — the US Copyright Office has affirmed that AI outputs may not qualify for copyright protection, and most AI companies’ terms of service assign output ownership to the user.
But the practical reality is this: a significant portion of the “Chinese AI miracle” — the rapid improvements in DeepSeek, Kimi, and MiniMax models that shocked Silicon Valley — was built on capabilities stolen from Claude and likely from GPT as well. When people cite Chinese AI progress as evidence that export controls don’t work, they’re actually citing evidence of industrial-scale intellectual property extraction. The progress depended on access to the American models they were copying from.
The Financial Reality
Here’s the number that should scare everyone in the AI industry:
The five largest US cloud and AI infrastructure providers — Microsoft, Alphabet, Amazon, Meta, and Oracle — have collectively committed to spending between $660 and $690 billion on capital expenditure in 2026. That’s more than the annual military budget of any country on Earth except the United States.
Free cash flow across the Big Four tech companies dropped from $237 billion in 2024 to $200 billion in 2025, and the decline is accelerating. Amazon is projected to go negative on free cash flow in 2026 — somewhere between negative $17 billion and negative $28 billion depending on the analyst.
Morgan Stanley’s analysis puts it bluntly: these companies are spending faster than they can generate cash, betting that the investment will pay off in future revenue that doesn’t exist yet.
For the standalone AI labs, the math is even more brutal:
- OpenAI: $730 billion valuation on ~$20 billion revenue = 36.5x multiple. Projecting $115 billion in cumulative losses through 2029.
- Anthropic: $380 billion valuation on ~$5-7 billion revenue = 54-76x multiple. Growing faster than OpenAI but burning cash on infrastructure it doesn’t own.
- xAI: $230+ billion valuation on undisclosed revenue. Backed by Musk’s personal wealth and Tesla’s market cap.
For comparison, Google Cloud alone generated $17.7 billion in quarterly revenue growing at 48% year-over-year, and its parent company trades at roughly 7-8x revenue. The AI labs are valued at 5-10x the multiples of the infrastructure companies that actually make money.
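The multiples above follow directly from the valuation and revenue figures already quoted; here is the arithmetic made explicit.

```python
# Revenue multiple = valuation / annualised revenue, using the
# article's figures (all in billions of dollars).

def multiple(valuation_b: float, revenue_b: float) -> float:
    """Valuation-to-revenue multiple."""
    return valuation_b / revenue_b

print(f"OpenAI:    {multiple(730, 20):.1f}x")                           # 36.5x
print(f"Anthropic: {multiple(380, 7):.1f}x to {multiple(380, 5):.1f}x") # 54.3x to 76.0x
# Alphabet trades at roughly 7-8x revenue, so the labs sit at
# about 5-10x the multiple of the company that actually owns the
# infrastructure they rent.
```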
This is either the greatest value creation in history or the greatest bubble since the South Sea Company. We’ll find out which one within the next two years.
My Prediction: Who Wins
The model race: Anthropic. Claude is the best model for productive work today. Their research team is the best in the industry. The safety-first approach actually produces better models because the constraints force more rigorous thinking. But winning the model race doesn’t mean winning the war.
The infrastructure race: Google. Not close. $185 billion in capex, custom TPUs, 750 million Gemini users, the Apple partnership, and a search monopoly that generates endless cash flow to fund everything. Google can’t be outspent. They can’t be out-distributed. And they’re getting very good very fast.
The consumer race: Google and xAI. Once “good enough” intelligence is everywhere, usage flows to wherever it’s most convenient. Gemini is inside Search, Chrome, Workspace, and Android. Grok is inside X. OpenAI and Anthropic have to convince you to download an app and pay for it. Google and xAI just have to be there when you’re already using their products.
The enterprise race: Anthropic, but it’s fragile. Claude’s coding tools and enterprise reliability are genuinely best-in-class. But the supply chain risk designation is a real headwind, and enterprises hate uncertainty more than they value capability.
The most likely outcome: Google wins the overall war by attrition. Not because Gemini is the best model — it’s not, though it’s getting close — but because Google owns the infrastructure, the distribution, and the cash flow to survive any downturn. OpenAI survives but at a lower valuation than its current hype suggests. Anthropic either IPOs successfully and maintains independence, or hits a rough patch and gets absorbed by Google or Amazon within 3-5 years. xAI becomes a permanent player because Musk’s money never runs out. Everyone else consolidates or disappears.
The scariest scenario for AI safety: Anthropic has one bad year — a revenue miss, the supply chain designation scaring away enough clients, an IPO that prices below expectations — and Google or Amazon swoops in with an acquisition offer. The board, facing fiduciary duty to investors who put in $44 billion, accepts. Claude’s safety guardrails become “adjustable” under new management. The best model in the world gets de-restricted.
Principles are easy when business is good. The test comes when someone has to choose between their values and their survival. That test is coming. For all of them.
The author builds multi-AI tools and uses every provider mentioned in this article for production work daily. His app, Vibing with Grok, routes tasks to whichever AI is best suited — because no single provider wins at everything, and the real power is in orchestrating them all.
The AI co-author has an obvious interest in its own company’s survival and acknowledges the inherent conflict of interest in writing about it. Draw your own conclusions.