How AGI could kill US democracy
The Democratic Republic of Congo sits on an estimated $24 trillion in mineral wealth — coltan, cobalt, diamonds, gold. It is one of the most resource-rich nations on Earth. Its citizens are among the poorest people alive.
Equatorial Guinea has one of the highest GDPs per capita in Africa thanks to oil. It also has one of the worst human rights records.
This pattern is called the "resource curse": when wealth comes from the ground rather than from people, governments have no reason to treat people well.
AGI threatens something worse: not just removing governments' need for their citizens, but handing them the tools to surveil, manipulate, and control them — permanently. This article is about how AGI could kill democracy where it's most likely to be developed: in the US.1
Democracy rests on three pillars
Democracies treat people well because governments depend on their citizens. Citizens have two forms of leverage:
- Economic — governments depend on workers paying taxes or doing government jobs
- Physical — governments can be overthrown by citizens protesting, rioting, or revolting
These are reinforced by democratic institutions — checks, balances, rule of law — which help citizens exercise their leverage effectively, and increase the stability of this setup. But these only hold up because citizens with economic and physical power demand they're maintained. Remove the leverage, and institutions are just paper.
AGI could arrive in the next few years, and undermine all this.
Pillar 1: The economy won't need you
AGI could do all remote work, which represents a huge fraction of the economy. Much robotics engineering is itself remote work or could be accelerated by AI, so competent robots for non-remote work may follow soon after. (And many non-remote jobs might simply disappear: there's no need for an office cleaner if nobody is going into the office.)
The pattern across countries afflicted by the resource curse is consistent: when wealth comes from resources rather than people, governments have no incentive to invest in their citizens.
This means AGI could bring about the ultimate resource curse.2
Wealth will come from compute and AI, not from human labor. For the first time, an advanced economy won't need its citizens for anything economic. "But surely the government would still provide — UBI, social programs?" Maybe for a while. But UBI funded by AGI-generated wealth is structurally identical to oil-state handouts: a fundamentally unstable arrangement that governments have every incentive to withdraw.
But even if citizens have no economic value, they still have bodies. They can still march, protest, fight back. Right?
Pillar 2: You won't be able to fight back
Even without economic leverage, citizens have always had one fallback — physical resistance. But AGI gives governments a toolkit that makes resistance nearly impossible: a lethal combination of prevention, detection, and response capabilities that can crush any opposition.
Prevent. Personalized propaganda, AI companions that subtly shape your worldview, control over your entire information environment. This is 1984's telescreen, except it actually works — and you carry it in your pocket voluntarily. You don't need to suppress dissent if you can prevent it from forming.
Detect. Mass surveillance, already formidable and a live domestic concern, becomes total. AGI can aggregate every data point — purchases, location, messages, social graph — into a real-time model of every citizen. Dissidents get flagged before they organize.
Respond. If the government controls all economic services and handouts (see pillar 1), it can cut off dissidents' income, housing, and access to services. It can smear them with targeted disinformation. It can jail them on fabricated charges. It's hard to lead an uprising when you're homeless, imprisoned, and separated from everyone you know. And if none of that works — AI and robotics can control militaries and enable autonomous weapons. Military force without human soldiers — and without the human conscience that has historically refused unlawful orders.
So citizens have no economic leverage and far less physical leverage. That leaves institutions — the courts, the constitution, the free press. Surely those will hold?
Pillar 3: The safety net is already fraying
Institutions act as a brake on the slide toward tyranny. They raise the activation energy — a would-be authoritarian has to pack courts, discredit the press, and erode norms before they can consolidate power. That takes time, and it creates friction that gives citizens opportunities to push back.
The Supreme Court has become increasingly politicized. Local journalism is collapsing, and the press that remains faces escalating attacks, with the US now ranking 57th out of 180 countries on press freedom, its lowest position ever. Norms are being eroded: routine government shutdowns have normalized congressional dysfunction, recent gerrymandering has eliminated competitive districts, and filibuster norms have been steadily weakened by both parties.
This isn't partisan — both Democrats and Republicans believe there's a serious threat to democracy. Freedom House and others document the decline.
So AGI may arrive into a context where these checks and balances are already weakened, and where the citizen leverage that supports them is being removed, at precisely the moment these institutions are needed most.
The dictator who never dies
Humanity's final safety net against tyranny has been that dictatorships end. They have a 100% historical failure rate. Leaders die. Generals coup. Economies collapse. People rise up. Insiders defect. However, with AGI:
- Death. An AGI-powered regime doesn't depend on one human's lifespan. The system persists.
- Coups. No human generals left to stage them. Autonomous military systems just follow orders.
- Economic collapse. AGI is the economy. It doesn't need human workers or consumers to generate wealth.
- Popular uprising. Surveilled, manipulated, and outgunned. (See pillar 2.)
- Defection. In Romania in 1989, Ceaușescu ordered soldiers to fire on protesters. They refused. The army defected. The regime collapsed in a week. This is how dictatorships are supposed to end — humans in the chain of command have a crisis of conscience. AGI removes humans from the chain of command entirely.
For the first time in history, tyranny could be permanent. That is what makes the stakes so high, and this situation so different from previous concentration-of-power risks humanity has faced.
The window is closing
Once we get into this bad state, it's near impossible to recover. The window to act is before AGI, not after.
Some commonly proposed solutions don't address the underlying power problem:
- Open-source / open-weights models: Compute is the bottleneck, not model access. Open-sourcing models doesn't distribute power if the compute to run them is still concentrated.
- UBI: Doesn't fix the incentive structure. A government that provides UBI out of generosity can withdraw it just as easily.
- Laws and regulation: Only work if someone has the power to enforce them against the entity that controls AGI. That's exactly the power citizens are losing.
What might actually help is distributing power before AGI arrives — through international coalitions, distributed compute governance, and strengthening democratic institutions now. These don't fully solve the problem, but they buy time to figure out a robust long-term solution — ideally with the help of aligned AGI itself.
Footnotes
1. AGI will most likely be developed in the US — all major frontier labs (Anthropic, Google DeepMind, OpenAI) are US-based, and the US has the most compute infrastructure. China is next most likely, and faces similar or worse problems. I focus on the US throughout, but the arguments apply broadly. ↩
2. My friends and ex-colleagues Luke Drago and Rudolf Laine call this the "intelligence curse" — the resource curse applied to AI. The economic argument in this section summarizes their core idea. I'd recommend reading their piece for a more thorough treatment. ↩