It was supposed to be a fair fight. A few years ago, anyone with a GPU and determination could train a competitive AI model. Open-source frameworks made it possible. Cloud computing made it affordable. The dream was that AI would be democratized—that innovation would come from anywhere.

Today, that dream is dead.

The AI landscape is consolidating at breathtaking speed. OpenAI dominates consumer AI with GPT-5, backed by billions in Microsoft capital. Anthropic leads on safety-focused reasoning with institutional backing. Google leverages its data empire and computing infrastructure. Meta pushes open-source but has the resources of a $1 trillion company. Meanwhile, thousands of VC-funded startups are either getting acquired, pivoting to applications, or quietly shutting down.

The Numbers Tell the Story

Frontier model training—the kind that produces genuinely capable systems—now costs $50 million to $1 billion per model. Only a handful of companies can afford this. OpenAI's latest model reportedly cost over $500 million to train. Training a competitive model from scratch would bankrupt most startups. This isn't just an expense; it's a moat that very few can cross.

The infrastructure requirements are equally brutal. You need specialized chips (GPUs, TPUs), massive clusters, power infrastructure, and teams of top researchers. You need the institutional knowledge to avoid wasting millions on failed experiments. These aren't things a startup can cobble together in a garage.

Why Consolidation Matters

This consolidation has real consequences. When a few companies control AI, they control innovation direction. OpenAI's choices influence what problems get solved first. If they deprioritize something, it might not get solved at all for years.

Fewer voices shaping development means less diversity of thought. The researchers at OpenAI, Google, and Anthropic are brilliant, but they're still a select few. They have biases. They have blind spots. They make mistakes.

Smaller companies must compete on applications, not models. This isn't necessarily bad—application companies have always been where real economic value is captured. But it means the fundamental breakthroughs increasingly come from a small set of institutions.

Regulatory capture becomes easier. When there are five serious players instead of five hundred, capturing the regulators becomes simple. The incumbents write the rules. Innovation gets slower, not faster.

Competition narrows. Real competition requires viable alternatives. When model training costs billions, alternatives become scarce.

The Pattern We've Seen Before

This isn't new. Every major technology follows this pattern:

Railroads: Started with hundreds of small operators. Consolidated to a few mega-companies. Innovation slowed. Regulation became complex.

Automobiles: Hundreds of manufacturers in the 1920s. Consolidated to three major players. Less competition, slower innovation.

Computing: Started decentralized. Consolidated around Intel, Microsoft, and Apple. The PC revolution happened because IBM underestimated how big personal computing would be. But once markets consolidated, innovation slowed.

The Internet: Started open. Consolidated around Google, Facebook, Amazon. Less open than it started.

AI is following the same path, but faster. The consolidation is happening over years instead of decades.

The Startup Survival Guide

If you're building AI and not at a mega-lab, you're not building the next frontier model. Accept this. Build something else.

The real winners are building on top of the APIs of OpenAI, Anthropic, or Google. They're not trying to train models; they're solving specific problems using existing models.

The winners are those solving domain-specific problems: healthcare diagnostics using vision models, legal document analysis, financial modeling, code generation for specialized domains. These businesses have defensible competitive advantages that don't require training their own foundation models.

The winners are those building tools for the consolidators. Companies that make it easier to fine-tune, deploy, or integrate large language models. This is where real economic value can be created without needing billion-dollar budgets.
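One common shape for these application-layer businesses is a thin abstraction over the big labs' APIs, so the product never depends on a single vendor. A minimal sketch of that pattern (all names here are hypothetical, and a stub provider stands in for a real SDK call to OpenAI, Anthropic, or Google):

```python
from dataclasses import dataclass
from typing import Optional, Protocol


class ChatProvider(Protocol):
    """Anything that can turn a prompt into a completion."""

    def complete(self, prompt: str) -> str: ...


@dataclass
class RoutedClient:
    """Routes each request to a configured provider, so the
    application layer is not welded to one vendor's API."""

    providers: dict[str, ChatProvider]
    default: str

    def ask(self, prompt: str, provider: Optional[str] = None) -> str:
        backend = self.providers[provider or self.default]
        return backend.complete(prompt)


class EchoProvider:
    """Stub standing in for a real API client; a production
    adapter would wrap a vendor SDK behind the same interface."""

    def complete(self, prompt: str) -> str:
        return f"[echo] {prompt}"


client = RoutedClient(providers={"echo": EchoProvider()}, default="echo")
print(client.ask("Summarize this contract."))
# → [echo] Summarize this contract.
```

Swapping or adding a vendor then means writing one new adapter class, not rewriting the application—which is exactly the defensibility argument: the value lives in the routing, evaluation, and domain logic, not in the model underneath.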

What Comes Next

The consolidation will continue. In three years, there will probably be two or three truly frontier model providers—maybe four if we're lucky. Everyone else will be building on top of them or solving specialized problems.

This isn't inherently bad. Consolidated infrastructure companies can innovate faster in many ways. But it does mean that the era of anyone-can-train-a-frontier-model is over.

The question now is: what will you build in a world where the foundation is controlled by a few? That's the real innovation challenge.