How a coalition of democracies solves AGI's biggest risks
AGI poses three major risks. If many people control it, this invites potentially catastrophic misuse (e.g. bioterrorism). If few people control it, this concentration of power could end democracy. And in any scenario with multiple competing actors, race dynamics encourage cutting corners on safety — making it more likely we lose control of AI systems entirely.
There's no sweet spot on the spectrum from "distributed" to "concentrated." But a coalition of stable democracies jointly governing AGI comes closest to addressing all three.
On power concentration: committees add friction
A single country controlling AGI has a single point of failure: one election, one coup, one leader sliding toward authoritarianism. A coalition of 15+ stable democracies doesn't eliminate this risk — but it massively increases the activation energy. Governance by committee is slower and messier, but that friction makes the slide into tyranny much harder to start and much easier to catch.
I often think of this like an energy landscape. Democracy is a local minimum — stable, but not permanently so. With AGI, tyranny may be a lower-energy state that power tends to fall toward. The question is how high the barrier is between them.
A coalition doesn't guarantee stability forever. But it creates a barrier high enough to buy time, resist shocks, and give citizens a chance to course-correct.
On misuse and misalignment: join or go it alone
Without a coalition, countries face a terrible choice: accept being "beaten" by whoever controls AGI (probably the US), or race to build your own. This intense incentive to race likely means cutting corners on safety — increasing both misuse risks¹ and the chance we lose control of AI systems entirely.

An open coalition changes the deal. Instead of "race or be left behind," countries can join and get real benefits — in exchange for not developing independent AGI systems and enforcing safeguards within their jurisdictions (input/output filtering, know-your-customer requirements, pattern detection, regular audits by the coalition). This buys time for the alignment research that racing precludes.
In return, members might get tiered access to AGI benefits:²
- Core members (US + key allies with compute/supply chain leverage): full governance rights, direct access to AGI inference, joint oversight
- Standard members: filtered/monitored access to AGI capabilities — effectively AI labour for their economies
- Associate members: downstream benefits — access to AGI-developed drugs, software, economic dividends, low-risk AI labour
Carrots for joining
- Access to AGI capabilities (tiered by trust/leverage)
- Governance rights and a say in AGI policy
- Economic dividends from AGI-driven growth
- Security guarantees from the coalition
Sticks for staying out
- Economic sanctions from coalition members
- Diplomatic isolation
- Sabotage of independent AGI development efforts
- In extreme cases, military response
The bigger the coalition gets, the more of the world's compute, supply chains, and talent it controls — making competing against it increasingly futile, sanctions more devastating, and joining increasingly attractive. At some point, staying out means competing against the global economy.
The US is in the lead right now. Why would it dilute its position by setting up a coalition? Because the alternative is worse. The US is not in a conciliatory mood at the moment, but incentives change — as China's AI capabilities improve, sharing governance with trusted democratic allies (and gaining their compute, supply chains, and legitimacy) starts looking better than trying to maintain sole control while a rival superpower closes the gap.
One structure, three problems
The same structure that distributes power (multiple democracies governing jointly) can also create the enforcement mechanism that prevents reckless racing and misuse (collective leverage over non-members). And by changing the choices available to other countries, it can massively reduce race dynamics — buying more time for the alignment research needed to keep AI systems under human control, and making it more likely that such research is actually applied well.
Middle powers — the UK, EU, Canada, Japan, Australia — should be building leverage now. I think a robust way to do this is through building datacenters, so they can negotiate governance rights before AGI arrives and the leverage shifts permanently to whoever controls it.
Footnotes
1. Misuse risk can increase when countries race to build their own AI systems: more governments holding powerful AI systems means a larger attack surface for cyberattacks aimed at stealing model weights. Racing countries may also cut corners on information security or anti-misuse safeguards, further increasing the chance these systems are misused.

2. These tiers are illustrative — the actual structure would need careful negotiation. See my detailed plan for more on how this could work in practice.