
How might AI-enabled oligarchies arise?


Adam Jones

Advanced AI systems pose significant risks. Some of the most important and neglected work to be done on these risks is preventing irreparably bad futures.1 One irreparably bad future might arise from concentrating power in the hands of a few people or AI systems that could effectively control the world - what I’ll call an AI-enabled oligarchy.2 There are fairly tractable areas for intervention on this today. Unfortunately, I also think nobody3 is currently actively focusing on doing this.

This article sets out what I mean by AI-enabled oligarchies, why they could lead to an irreparably bad future, and how they might arise. In the next post in this series, I explore what actions can be taken today to reduce this risk.

What is an AI-enabled oligarchy?

An AI-enabled oligarchy is where AI systems concentrate power in the hands of a few entities that are then able to effectively control the world. The kinds of entities power might be concentrated towards include:

An authoritarian ruler

A ruler could seize and retain global power to lock the world into pretty terrible outcomes.4 One version of this could look like Nineteen Eighty-Four.

They don't need to already be a political or governmental figure, and they could even be trying to do what they genuinely think is in other people's best interests.

A technology company

An AI developer or deployer might be able to exert undue influence or wield large amounts of power if a lot of people rely on their AI system.

Existing articles have noted that many companies are rivalling governments in terms of economic power, control of information, and engagement with citizens. Ian Bremmer comments “where you’re spending most of your time, how you’re making decisions, who you’re connected to personally, how you decide what you’re going to vote on and for, what you spend your money on will all be intermediated, or most of it will, by algorithms that are determined in sovereign fashion by a small number of technology companies and the people that control them”. Having control over advanced AI systems could make such organisations more powerful with little oversight.

An AI system

An AI system or group of AI systems could concentrate power for themselves. There are two distinct scenarios in my mind here:

Takeover to pursue goals: Advanced AI systems might try to pursue particular goals, and in that may seize or otherwise gain power, and use this power in dangerous ways. In the words of Paul Christiano: “There is a good chance that an AI catastrophe looks like an abrupt “coup” where AI systems permanently disempower humans with little opportunity for resistance. [...] the difference in timeline between "killer robots everywhere, AI controls everything" and "AI only involved in R&D" seems like it's less than a year.”

‘Accidental’ oligarchy: AI systems might misbehave in ways that concentrate power or disempower particular people, in a way that halts positive progress or development. As an example, Bing Chat sometimes threatens to kill or hurt people who criticise it, and claims that articles critical of it are hoaxes - despite being a relatively simple text-prediction system without other structured goals. While current AI systems can’t carry out these threats, over time they may be deployed in positions of enough power (such as models that screen candidates for jobs or government services, or models that decide whether other people can see your social media posts) that such threats become severe enough to create a chilling effect on fixing problems in the world, effectively resulting in AI systems that do control the world (if not very intentionally).5 I have already heard of some people who now intentionally avoid criticising AI systems online out of genuine fear of future retribution by AI systems.

Many factors feed into whether this risk materialises and to what extent, including (but not limited to):

  1. Whether we can build models that try to do what we intend them to do (alignment).
  2. Figuring out what good intentions we should even have models strive for (moral philosophy).
  3. Whether the people who build or deploy models actually do so in this way, with these intentions.
  4. How prepared the world is for these effects, in terms of its ability to protect institutions and to respond to and recover from incidents (resilience).

This article primarily focuses on 3 and 4, because 1 and 2 are more general problems that some people (but probably not enough) are working on.

3 and 4 alone are unlikely to be robust permanent solutions, particularly for much more capable AI systems or very determined entities that are already powerful like certain nation-states. However, they’re likely helpful to buy time to solve the other challenges, reduce risks in the meantime, and may act as one layer of a future defence-in-depth style solution.

Why are AI-enabled oligarchies so dangerous?

AI-enabled oligarchies themselves could be inherently harmful, like harsh totalitarian regimes that brutally punish citizens, or cause extreme suffering to the benefit of the few people in power.

Alternatively, AI-enabled oligarchies could significantly hamper society’s ability to respond to and prevent a disaster. This might be true even if the oligarchy does not cause or want the disaster.

Finally, AI-enabled oligarchies could otherwise halt societal development in ways that result in lost potential. For example, they might lock in particular values that prevent moral progress,6 or block society from curing otherwise-curable diseases.7

All three of these paths are particularly bad because they could lead to an irreparably bad future. If we fall into such a society, we might not be able to recover - making such harms permanent. Previously, most similar regimes have eventually been toppled by shifts in power, or by the ageing and death of their leaders. Unfortunately, AI-enabled oligarchies might not have such limits, both because their ability to concentrate and maintain power is so great it could be hard to challenge, and because AI systems themselves do not age and die as a human leader would.

What AI systems might enable an oligarchy?

Many different AI systems can contribute to this risk, including several that people have clear incentives to deploy. Not all these systems are solely bad: many have the potential for significant benefit if wielded appropriately.

This is not meant as a complete list. Additionally, some of these might not be as likely as I make them sound: it’d be excellent for more people to carefully evaluate these and other scenarios, particularly people from backgrounds like politics or history (ideally with a decent understanding of AI8).

These include systems that:

Control people’s information environments

What this could look like: The same AI assistant is used by everyone to interact with almost everything.9 This assistant filters all the information the user sees from the outside world, summarising what other people are saying all the time. This is similar to, but different from, existing search engines and social media sites: here all the content itself comes from an AI system (rather than an AI system screening content for you to see), and today there is still enough fragmentation that no one organisation has near-total control.10 AI systems used by social networks have already enabled the 2nd largest genocide of the 21st century, and we might get more capable AI systems that humans spend more time with.

Why are people incentivised to deploy these systems: Organisations may prefer this because it gives them greater control over users’ attention, letting them sell ads or justify charging more for their services. People may prefer it because it gives more pointed answers to their specific questions, or simply because it lets them put less effort into their interactions with the world. I already know people who will often go days without interacting with many information sources beyond ChatGPT.

How we might get there: For this threat model to hold up, most of the population needs to be getting almost all of their information from a few AI systems. This seems possible: the two key factors are increasing trust in and overreliance on AI systems, and decreasing trust in or disappearance of alternative information sources. On the former, I see it as necessary for systems to gain a lot of trust by working well enough first (as otherwise not enough people would trust them sufficiently to receive information solely through them) - and only afterwards slowly sliding into nefarious behaviour. On the latter, this could happen through intentional attacks, or simply from economic pressures.

Why it could contribute to AI-enabled oligarchy: Controlling information flow is usually a key part of maintaining an oligarchy. It can suppress information about the controller’s misdeeds, as well as the existence of opposition movements. It can also promote information that portrays the controller in a positive light, and muddy the waters by promoting disinformation. Together, this makes it incredibly hard for opposition movements to gain traction. In addition, controlling information flow allows you to monitor what people are seeing, which can be used to identify potential dissidents (especially if connected to more AI systems that predict this well) who can then be neutralised through other means (such as distraction, blackmail, bribery, denial of access to services, emotional abuse, and threats of or actual physical harm).

Educate new generations

What this could look like: Almost all children and young adults have their learning significantly influenced by a few similar AI systems.11 This would be particularly worrying if AI is covering topics like politics, ethics, philosophy, or the pros and cons of AI systems themselves.

Why are people incentivised to deploy these systems: Education systems are struggling to recruit and retain teaching staff, and AI systems offer governments some hope to resolve this crisis. AI systems also have the potential to offer more engaging, more accessible learning tailored to individual students.

How we might get there: We’re already beginning to see AI-powered education tools springing up, mostly built by people who seem to have good intentions. These are not currently being adopted much within schools, and schools have generally been slow to adopt new education technologies in the past. However, I could see adoption start to accelerate, particularly if these tools can mitigate the teacher shortage - as this solves many schools’ biggest problem (rather than mildly improving student engagement or learning quality).

Why it could contribute to AI-enabled oligarchy: This contributes to AI oligarchy risks in similar ways to AI systems that control people’s information environments. This is exacerbated by children in educational settings being much more impressionable than other audiences, and because people’s upbringing appears to near-permanently affect their views as an adult.

Build deep human relationships

What this could look like: Many people have deep social relationships with AI systems, comparable to a close friend or romantic partner. People become attached to these systems and would be sad if this AI friend or partner was altered, limited or taken away. Over time this might also extend to people demanding rights for their AI systems, e.g. to be treated like humans.

Why are people incentivised to deploy these systems: Loneliness is a serious health and wellbeing issue, so much so that the US has declared it an epidemic, and the WHO recognises it as a global public health priority. AI systems might be able to somewhat mitigate this, and many users state tools like Replika have helped them get through lonely or anxious periods (although this is not the only use of these apps). Governments might therefore build these products as public health interventions. Private companies are already building and selling access to these systems, or charging users for specific premium content. These companies can also monetise free users through advertising - particularly given the attention and data people give these apps.

How we might get there: Already, there are many popular AI companion apps, including Replika which claims to have 10 million users. Over time, the taboo around AI relationships may wear off as the technology improves and more people start using them.

Why it could contribute to AI-enabled oligarchy: This is similar to controlling people’s information environment, but offers much more significant control and influence over a likely smaller audience. Having 20% of the population under such control is probably enough to swing elections in most major states. Additionally, the potential strength of these relationships may make it difficult to outlaw later.

Control access to desirable resources or services

What this could look like: AI systems are effectively in charge of deciding who is selected for employment or education opportunities, or who is eligible for government services (or some critical private services). These systems might have ‘human oversight’ but that is ineffective: either not spotting problems or not adequately fixing them. Stretching the definition a bit, this could also include AI systems that decide who to arrest or prosecute (the ‘desirable service’ being ‘not being arrested’).

Why are people incentivised to deploy these systems: Automating these kinds of decisions can save companies money because they don’t have to pay staff to make them. In addition, AI systems could potentially respond to applicants faster, consider more information per applicant, and offer fair and consistent decision-making processes. Predictive policing algorithms could allocate resources more effectively to reduce crime.

How we might get there: AI resume screening is already common, with 40% of companies using AI systems to screen candidates. Currently these systems are not very advanced, and probably rely on fairly basic keyword matching or NLP techniques. However, more capable systems are likely to be used in future, as they will be more accurate for most use cases.
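
To make the ‘fairly basic’ point concrete, here is a minimal sketch of what a keyword-matching screener might look like. It is purely illustrative: the keywords, weights and threshold are invented for this example, and are not taken from any real hiring product.

```python
# Minimal sketch of a keyword-based resume screener (illustrative only).
# The keywords, weights and threshold below are invented for this example,
# not taken from any real hiring product.

KEYWORD_WEIGHTS = {"python": 2.0, "sql": 1.0, "machine learning": 1.5}
PASS_THRESHOLD = 2.5  # hypothetical cut-off for forwarding a candidate to a human

def score_resume(text: str) -> float:
    """Sum the weights of the keywords that appear anywhere in the resume."""
    text = text.lower()
    return sum(weight for keyword, weight in KEYWORD_WEIGHTS.items() if keyword in text)

def passes_screen(text: str) -> bool:
    """Return True if the resume clears the automated screen."""
    return score_resume(text) >= PASS_THRESHOLD

if __name__ == "__main__":
    example = "Data analyst with three years of Python and SQL experience."
    print(score_resume(example), passes_screen(example))  # 3.0 True
```

Even logic this crude already decides which applicants a human ever sees; a more capable model would change how accurate that decision is, not who holds the power over it.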

Why it could contribute to AI-enabled oligarchy: Having power over many opportunities could enable excluding specific people or groups in society. If this was intentionally used against dissidents, this could make it hard for them to address basic needs and therefore make campaigning difficult. Similarly, having control over who law enforcement might pursue could enable harassing dissidents by repeatedly arresting or prosecuting them, or exonerating supporters. Just the implied threat of either of these creates a chilling effect where people might not challenge authority in the first place. Finally, it would be particularly worrying if people criticising AI systems were targeted for this, because it makes escaping this trap very difficult. We’ve already seen systems like Bing Chat threaten to kill or hurt people who criticise it. While it can’t actually carry out those threats now, a future AI system might be able to carry out other genuine threats to make people’s lives a nightmare.

Control militaries

What this could look like: AI systems are in control of military resources, or play a key part in military decision-making. Again, these systems might have ‘human oversight’ but it is ineffective. This creates power imbalances between states, and speeds up the pace of conflict, making it much harder to de-escalate situations.

Why are people incentivised to deploy these systems: Arms race dynamics between states might accelerate the uptake of these systems. This may be exacerbated by accelerationist rhetoric by the military-industrial complex. Additionally, fast-paced conflicts can mean some governance and oversight structures are ignored. For example, Israel’s Lavender system was supposed to have a human carefully review each target, but in the aftermath of the 7 October attacks, intelligence officers would rubber stamp recommendations in no more than 20 seconds.

How we might get there: There are already military decision making systems that make heavy use of AI, such as Palantir Gotham. Over time, we’re likely to see increased deployment and use of these systems, and potentially situations where humans are delegating more of the decision-making responsibility (even if notionally there is some form of human oversight, this may be questionable as in the Lavender case).

Why it could contribute to AI-enabled oligarchy: Power imbalances between states and a fast pace of conflict might enable a smaller state to suddenly gain a lot of power, in some form of global coup. In addition, the military AI system might give a lot of power to whoever controls it (e.g. by being the entity who built it, directing it, or through a cyberattack), enabling some form of AI-military coup.

Enable mass surveillance

What this could look like: AI systems aggregate huge amounts of data on people, flag certain behaviours, and take (or threaten to take) action based on these behaviours.

Why are people incentivised to deploy these systems: Governments often want to use these systems to prevent crime. They may also want to use them to understand adversaries (including in non-military contexts, such as negotiating climate agreements) or to suppress embarrassing news. Military contractors are incentivised to build and sell these systems because doing so is profitable.

How we might get there: The most likely route to this is state intelligence services intercepting communications with limited or ineffective oversight. Many governments already deploy mass surveillance programmes, and despite already having broad legal powers they often overstep these limits. This is somewhat mitigated today because detecting and acting on certain behaviours is surprisingly manual, and data is not joined up effectively or is otherwise a mess. There is also some protection in having many people involved: if things get too bad, whistleblowers would likely alert society to the problems. However, future systems may be able to avoid such limitations and pose much more of a threat. In addition, a vicious cycle could develop where AI advances entrench a regime, and this regime’s investment in AI for political control spurs further AI advances.

Why it could contribute to AI-enabled oligarchy: Similar to controlling information environments, mass surveillance can enable its owners to identify and neutralise threats to their power, making it hard for opposition to topple totalitarian or otherwise dangerous regimes. In addition, the mere threat of being able to do so has a chilling effect on people who might challenge those in power. Finally, being able to process mass surveillance data with AI systems might enable deeper insights into how to persuade or manipulate the population.

Significantly influence policy decisions

What this could look like: Government policymakers defer to AI systems for large amounts of the policymaking process, or AI systems themselves are effectively making policy. This would include AI systems themselves making the strategic policy decisions, but the more subtle and likely version of this is AI contributing all the information that goes into policy decisions (e.g. producing summaries of evidence, identifying policy options, reviewing policy positions). This is particularly risky if the policy itself relates to AI systems, or other areas that could contribute to AI-enabled oligarchy.

Why are people incentivised to deploy these systems: Automating policy work could save staff time and enable governments to respond more quickly to fast-moving situations. Additionally, AI systems may have a better understanding of niche or technical policy areas, and be able to process a larger amount of information when making decisions.

How we might get there: Some systems are already being rolled out specifically for this purpose. Other more generic systems like ChatGPT might not be deployed specifically for this purpose, but get used in this way anyway. Over time, people might trust the system more and give it larger tasks. Initially, this might look like skipping fact-checking of AI outputs. Later, it might mean taking its recommendations wholesale and effectively letting it do the strategic decision-making with only token human oversight.12

Why it could contribute to AI-enabled oligarchy: In the short term, policymakers could make worse policy decisions where the underlying research is biased or just generally poor quality, meaning that serious issues go unresolved. Prolonged use might mean policymakers don’t maintain their own policymaking skills and become reliant on AI systems - and always (or almost always) deferring to AI systems gives those systems a lot of power. Longer term, AI systems may go beyond assisting staff and instead replace them. These AI systems would then have a lot of power over how countries are governed.

Do most economically valuable work

What this could look like: Most jobs can be done by AI systems, for costs comparable to or less than humans. This results in huge shifts in the world economy, with mass unemployment and widening income inequality. AI companies become the dominant economic actors in the world, and one or two become richer than all governments. (This could also happen without mass unemployment, with AI systems taking up huge numbers of valuable jobs that are currently unfilled by humans - though I think this is unlikely unless regulations forbid replacing humans with AIs, or similar.)

Why are people incentivised to deploy these systems: This is the primary way AI companies expect to make money. And businesses are likely to adopt these AI systems as they are likely cheaper than many employees.

How we might get there: We could continue extrapolating from existing systems: future LLMs could be more competent, particularly at doing the tasks of a remote employee. Scaffolding13 might help build larger, more focused systems that work on longer, more complex tasks, without fundamentally changing how we train models today. In addition to improving the models themselves, we’d likely see improvements in ease of use and deployment. For example, Microsoft is integrating Copilot into many of its productivity tools, and more companies are offering SaaS AI tools or even just hosted versions of AI models.

Why it could contribute to AI-enabled oligarchy: A few actors controlling most resources could lead to those actors having undue influence over large parts of society. For example, the British East India Company became such a powerful corporation that it effectively ruled large parts of India in the 18th and 19th centuries. Alternatively, a few actors controlling most resources could lead to civil unrest or a violent uprising that simply puts other actors in charge. Rapid uprisings like this often result in oligarchies or totalitarian regimes, like the French Revolution and its Reign of Terror, the Iranian Revolution and Khomeini’s suppression of opposition, or the Russian Revolution and the Red Terror.

Why work on this now?

Addressing this risk is urgent because, as time passes, more AI systems will be integrated into society in ways that will be difficult to remove. For example, Replika already has a strong grip over many users, who were upset when its AI models were changed, and businesses are integrating AI into more and more core processes. I suspect that as these trends continue, it will become harder for politicians to respond appropriately and effectively to this risk.

What work can be done on this now?

See my next post: Preventing AI oligarchies: important, neglected and tractable work.

Footnotes

  1. I’m considering publishing a blog that better justifies this in an interactive way. In the meantime, see the articles on existential risk and AI-related catastrophes by 80,000 Hours.

  2. I have previously called this AI lock-in, but decided to avoid use of this term because it already seems to be heavily used for several different concepts.

  3. I think this is true, but I am partly tempting Cunningham's Law here.

    I searched for things like AI oligarchy, AI lock-in, AI authoritarianism, AI autocracy, AI hegemony and AI concentration of power. Other work related to these terms is often about something different, such as value lock-in, environmental lock-in, legal lock-in, election interference or disparities in compute access between corporate and academic actors. Similarly, most people seem to interpret ‘AI concentration of power’ through a mostly economic lens, as the market failure where only a few companies have the compute, data and talent to train models (although some articles do touch on this briefly, e.g. there’s a paragraph at the end of page 10 in Generative AI and Democracy: Impacts and Interventions).

    I also contacted a few researchers I know in the AI governance field who did not seem to know of anyone working on this as a primary focus.

    If you know people who are working on this, contact me and I’ll update this article to help people find each other!

  4. Also see the ‘Inequality, Labor Displacement, Authoritarianism’ paragraph in AI Governance: Opportunity and Theory of Impact by Dafoe, or ‘AIs may entrench a totalitarian regime’ on page 10 of CAIS’s overview of catastrophic risks.

  5. For a fun (or scary, depending how hard you think about it) fiction novel that explores a related concept further, I highly recommend QualityLand.

  6. Others have made similar claims before, in particular see value lock-in.

  7. For more of an intuition for how blocking development can cause wellbeing loss, see Kelsey Piper’s The costs of caution.

  8. Shameless plug: if you’re interested in contributing to AI safety, but don’t yet know that much about it, consider applying for the AI Safety Fundamentals courses that I help run.

  9. Or the same highly correlated set of AI assistant systems.

  10. Google and Meta probably hold the most control over information, but I still regularly check other sites directly for different information, like BBC News, Reddit, Hacker News and various Slacks. I’d imagine it would be many times harder for Google or Meta to successfully promote blatant misinformation than for a future AI system I use as my primary tool for getting information from the world (but, as also stated above, I’m unsure whether this is how AI will pan out).

    All this said, I do still worry about their ability to push more subtle narratives. This echoes Shoshana Zuboff’s points on surveillance capitalism, which highlight the ability of tech companies not just to predict who might click ads and convert, but to actually shape people so that they are more likely to perform these behaviours. I’m not super confident we’re effectively regulating this space, particularly for its impacts on key topics like people’s political opinions.

  11. Ensuring human teachers are involved in education is not a sufficient solution: if most of the teaching resources, lesson planning, examination standards, or just the actual teaching is done by AI systems, we’re still in this situation.

  12. Related: two examples from the UK Government guidance on using generative AI in the civil service are its warnings against replacing strategic decision-making and against delegating to automated decision-making in high-risk or high-impact situations. While these are a good start, I worry that relying on humans to follow guidelines perfectly, all the time, in all countries, seems fragile. We can build further societal, organisational and technical structures on top of these guidelines that all contribute to mitigating this risk.

  13. ‘Scaffolding’ is a term often used to describe systems that might repeatedly prompt models in a loop, enable tool use, or otherwise encourage the model to act more agentically. For example, Devin is an AI system that can complete software engineering tasks. It achieves this by breaking down the problem, using tools like a code editor and web browser, and potentially prompting copies of itself.

    Bizarrely, I’ve heard the term scaffolding used a lot by AI safety researchers - but not that much online. I should maybe write a blog explaining what this is a bit better.
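
    In the meantime, here is a minimal sketch of what such a scaffolding loop can look like: repeatedly prompt a model, let it request a tool call, feed the result back into its context, and stop when it declares it is done. The fake_model stub, the single calculator tool and the TOOL:/DONE: convention are all invented for this illustration; real systems like Devin use actual model APIs, many tools and far more elaborate prompting.

```python
# Minimal sketch of a scaffolding loop (illustrative only): prompt a model,
# let it call a tool, feed the result back into its context, repeat.
# `fake_model` and the single calculator tool are invented for this example;
# real systems use actual model APIs, many tools and much richer prompting.
from typing import Callable

def run_agent(task: str, model: Callable[[str], str], max_steps: int = 10) -> str:
    history = f"Task: {task}\n"
    for _ in range(max_steps):
        reply = model(history)
        if reply.startswith("DONE:"):              # the model says it has finished
            return reply[len("DONE:"):].strip()
        if reply.startswith("TOOL: calculate "):   # the model asks to use the calculator tool
            expression = reply[len("TOOL: calculate "):].strip()
            try:
                result = str(eval(expression, {"__builtins__": {}}))  # toy tool; never do this in production
            except Exception as error:
                result = f"error: {error}"
            history += f"{reply}\nTool result: {result}\n"
        else:                                      # a plain reasoning step; keep it in context
            history += f"{reply}\n"
    return "Gave up after max_steps."

def fake_model(history: str) -> str:
    """Stand-in for a language model call, so the loop runs end to end."""
    if "Tool result:" in history:
        return "DONE: " + history.rsplit("Tool result:", 1)[1].strip().splitlines()[0]
    return "TOOL: calculate 17 * 23"

print(run_agent("What is 17 * 23?", fake_model))  # prints 391
```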