Open source doesn't solve AI power concentration


Adam Jones

A common response to concerns about AI concentrating power is: "just open source it." The argument goes something like: if model weights are publicly available, no single entity controls the technology, competition flourishes, and power is distributed. While open source models have many benefits,1 solving the concentration-of-power problem is not one of them, because the bottleneck generally isn't model access: it's compute.

Power from AI comes from being able to use it at scale, not just from having the weights. Model weights can be thought of as a job description, and compute as the ability to actually hire people to do the work. Yes, it's valuable to know what work needs to be done. But the entity that can hire a million full-time employees has the power, regardless of whether the job description is public or not.

Using AI at scale requires compute — GPUs, datacenters, energy. And compute is extremely concentrated: a handful of companies manufacture the chips, a handful of countries have major datacenters, and ownership maps closely to existing wealth and power concentration. Open-sourcing the weights doesn't change any of this. It shifts the chokepoint from "model access" to "compute access" — but these are largely the same actors anyway.

The gap is staggering: in effective AI compute, a single consumer GPU stands to a major hyperscaler's fleet at roughly 1:75,000,000.2 And this gap is growing: hyperscaler AI infrastructure spending hit ~$700 billion in 2026, up 60% from 2025, with total available AI compute growing ~3.3x per year.

Yes, you can run smaller or distilled models on a laptop. You can run bigger models slowly. But this is like comparing a slow part-time assistant to a company or government hiring millions of faster, more capable staff: you're not going to be able to shift the balance of power or overthrow a tyrannical government.

What if many individuals pooled their compute?

Even this doesn't close the gap. There are roughly 150 million discrete consumer GPUs worldwide. At ~1/50th the effective AI performance of a datacenter GPU each, that's equivalent to about 3 million datacenter GPUs: around half of what the four major hyperscalers operate between them, at roughly 1.5 million datacenter GPUs each.
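For concreteness, the pooling arithmetic above can be checked in a couple of lines. Every input is one of the post's own rough estimates, not measured data:

```python
# Back-of-envelope check of the pooled-consumer-GPU estimate.
# Inputs are the post's rough figures, not measurements.
CONSUMER_GPUS = 150_000_000   # discrete consumer GPUs worldwide (estimate)
DC_GPU_PERF_RATIO = 50        # one datacenter GPU ~= 50 consumer GPUs (footnote 2)

datacenter_gpu_equivalents = CONSUMER_GPUS / DC_GPU_PERF_RATIO
print(f"{datacenter_gpu_equivalents:,.0f} datacenter-GPU equivalents")
# 3,000,000 datacenter-GPU equivalents
```

This is the raw ceiling, before any of the practical losses discussed next.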

In practice it's even worse. Most consumer GPUs are being used for gaming or other tasks, not sitting idle. Coordinating distributed inference across heterogeneous consumer hardware over the internet adds significant latency and communication overhead. And the people with incentive and ability to organize such a network are... probably large companies, bringing us back to where we started.

Open source is good for many things — transparency, safety research, privacy, competition in model development. It's just not a solution to the power concentration problem, which is determined by compute and weights, not weights alone.

Footnotes

  1. Open source AI models enable safety research, improve transparency, and can improve privacy by letting individuals and businesses run models on their own infrastructure. Most AI branded as "open source" isn't really open source — it's open weights (you get the model parameters but not the training data, training code, or ability to reproduce training), but even this shares many of these benefits.

  2. Rough estimate: a major hyperscaler like Microsoft or Google operates on the order of 1.5 million datacenter GPUs (based on ~$180B in GPU/accelerator spending across hyperscalers in 2026 at ~$30K per GPU, split across ~4 major players). Each datacenter GPU (e.g. an H100 or B200) is roughly 50x more capable for AI workloads than a typical consumer GPU, due to higher memory bandwidth, specialized tensor cores, and interconnect. So 1.5M × 50 ≈ 75 million consumer-GPU-equivalents. This ratio is growing: hyperscaler capex is increasing ~35-60% year-on-year, while consumer GPU performance improves much more slowly.
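The estimate in footnote 2 can likewise be reproduced step by step. Again, every input is a rough assumption stated in the footnote (spending, unit price, number of players, performance ratio), not a measurement:

```python
# Reproduce footnote 2's estimate of one hyperscaler's compute,
# expressed in consumer-GPU-equivalents. All inputs are rough figures.
TOTAL_GPU_SPEND_USD = 180e9     # combined hyperscaler GPU/accelerator spend, 2026
COST_PER_DC_GPU_USD = 30e3      # ~$30K per H100/B200-class datacenter GPU
NUM_HYPERSCALERS = 4            # major players splitting that spend
CONSUMER_EQUIV_PER_DC_GPU = 50  # one datacenter GPU ~= 50 consumer GPUs

gpus_per_hyperscaler = TOTAL_GPU_SPEND_USD / COST_PER_DC_GPU_USD / NUM_HYPERSCALERS
consumer_equivalents = gpus_per_hyperscaler * CONSUMER_EQUIV_PER_DC_GPU

print(f"{gpus_per_hyperscaler:,.0f} datacenter GPUs per hyperscaler")
# 1,500,000 datacenter GPUs per hyperscaler
print(f"{consumer_equivalents:,.0f} consumer-GPU-equivalents")
# 75,000,000 consumer-GPU-equivalents
```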