Preventing AI oligarchies: important, neglected and tractable work
In a previous article, I set out what I mean by AI-enabled oligarchies, why they could lead to an irreparably bad future, and how they might arise. To recap, an AI-enabled oligarchy is one where AI systems concentrate power in the hands of a few entities that are then able to effectively control the world. This would be particularly hard to recover from because AI systems might enable a regime to create a power imbalance so large it’s hard to topple, and because AI systems don’t become old and die in the way humans do.
In this article, I explore what actions can be taken today to reduce this risk. This is urgent, important, and nobody is focusing on it.[1]
What interventions could be put in place now?
This is not meant as a complete list. More work is needed here - perhaps one item on this list should be ‘think of other things to add to this list, and help evaluate and prioritise actions to take’.
It's also a bit of an unsorted grab-bag of ideas: it’d be easier to parse, and would spur the generation of more ideas, if someone came up with a swanky framework for classifying these interventions.[2]
Many of these interventions are also ones that help with other AI risks, and similarly many interventions for other AI risks also somewhat help with preventing an AI-enabled oligarchy. For brevity, I’ve focused on interventions that initially seem particularly good for preventing AI-enabled oligarchy.
Improve understanding of risks
A better understanding of what related risks are most likely and how they arise would help us identify new interventions and evaluate existing ones.
Research you could do in this area includes:
- Flesh out existing threat models (like the ones in my last post). In particular, understanding the inputs and assumptions behind each threat model would improve our ability to manage these risks.
- Identify other threat models that haven’t yet been considered.
- Develop ways to continually monitor for new threats, or do other related horizon-scanning work.
- Make quantitative estimates of risks, in terms of impact and likelihood over time.
- Conduct empirical social, psychological or anthropological research to understand the impacts of different types of AI systems. For example, measure differences in people’s opinions about regulating AI after spending a week talking to something like Replika. Or just better understand the motivations and beliefs of people who already use such systems intensively.
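To make that last point concrete, here is a minimal sketch of how the pre/post opinion comparison could be analysed. The data, scale and effect are entirely made up; in practice you’d want a proper survey instrument, a control group and pre-registration.

```python
# Minimal sketch (hypothetical data): estimating how a week of heavy chatbot use
# shifts stated support for AI regulation, using a paired pre/post design and a
# bootstrap confidence interval on the mean change.
import random
from statistics import mean

# Each tuple: (support for AI regulation before, after), on a 1-7 Likert scale.
# These numbers are made up purely for illustration.
paired_scores = [(5, 4), (6, 6), (4, 3), (7, 5), (5, 5), (6, 4), (3, 3), (5, 4)]

changes = [after - before for before, after in paired_scores]

def bootstrap_ci(values, n_resamples=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean of `values`."""
    rng = random.Random(seed)
    means = sorted(
        mean(rng.choices(values, k=len(values))) for _ in range(n_resamples)
    )
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

low, high = bootstrap_ci(changes)
print(f"Mean change in support: {mean(changes):+.2f} (95% CI {low:+.2f} to {high:+.2f})")
```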
Building you could do in this area includes:
- Build safe demonstrations of ‘dangerous’ systems, similar to model organisms of misalignment. This would be useful for studying behaviours in an isolated safe way, to better inform our understanding of the risks.
- Build systems to help monitor risks over time. For example, OSINT monitoring to understand how risks of authoritarianism might be evolving in different countries by analysing social media data, or published news articles.
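As a very rough illustration of the OSINT idea above, here is a minimal sketch that turns already-collected article text into a per-country trend you could monitor over time. The keyword list and scoring are placeholders rather than a validated methodology, and a real system would need far more careful handling of sourcing, languages and bias.

```python
# Minimal sketch of an OSINT-style monitor: given article texts (however they were
# collected - RSS feeds, news APIs, etc.), track a crude "concentration-of-power
# concern index" per country per month. Terms and scoring are illustrative only.
from collections import defaultdict
from dataclasses import dataclass
from datetime import date

CONCERN_TERMS = [
    "emergency powers", "mass surveillance", "election postponed",
    "press crackdown", "opposition banned", "facial recognition rollout",
]

@dataclass
class Article:
    country: str       # as tagged by your collection pipeline
    published: date
    text: str

def concern_index(articles: list[Article]) -> dict[tuple[str, str], float]:
    """Return {(country, YYYY-MM): share of flagged articles} for trend plotting."""
    flagged = defaultdict(int)
    totals = defaultdict(int)
    for a in articles:
        key = (a.country, a.published.strftime("%Y-%m"))
        totals[key] += 1
        if any(term in a.text.lower() for term in CONCERN_TERMS):
            flagged[key] += 1
    return {key: flagged[key] / totals[key] for key in totals}

sample = [
    Article("Examplestan", date(2025, 1, 3), "Parliament votes to extend emergency powers..."),
    Article("Examplestan", date(2025, 1, 9), "New stadium opens in the capital..."),
]
print(concern_index(sample))  # {('Examplestan', '2025-01'): 0.5}
```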
Other work you could do in this area includes:
- Compile examples of AI systems being used in dangerous ways, ideally in a structured format that makes it easier to analyse. This might be in collaboration with similar projects like the AI incident database. This could help us better understand where risks are materialising, and give us data points for spotting patterns or trends and forecasting where we might be headed.
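For the structured format, even a shared record schema would go a long way. The sketch below is illustrative only: the field names are my own invention, and anyone contributing to an existing effort like the AI Incident Database should mirror its schema instead.

```python
# A sketch of one possible structured format for logging examples of AI systems
# being used in ways relevant to power concentration. Field names are illustrative.
import json
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class PowerConcentrationIncident:
    incident_id: str
    reported_on: date
    country: str
    actor: str                      # e.g. government agency, company, political party
    system_description: str         # what the AI system does
    mechanism: str                  # e.g. "mass persuasion", "surveillance", "automated enforcement"
    harm_summary: str
    sources: list[str] = field(default_factory=list)            # URLs or citations
    related_incident_db_ids: list[str] = field(default_factory=list)

example = PowerConcentrationIncident(
    incident_id="PCI-0001",
    reported_on=date(2025, 2, 1),
    country="Examplestan",
    actor="Ministry of Information",
    system_description="LLM-generated personalised political messaging at national scale",
    mechanism="mass persuasion",
    harm_summary="Opposition messaging drowned out ahead of a referendum",
    sources=["https://example.org/report"],
)

print(json.dumps(asdict(example), default=str, indent=2))
```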
Improve understanding of interventions
A better understanding of the landscape of available interventions, how they might work and how they might complement each other would help us coordinate to better address the risks.
Research you could do in this area includes:
- Clarify how existing interventions might work. This involves writing up theories of change, and explaining any assumptions, limitations, and dependencies between them.
- Identify new interventions. You might also want to develop some kind of framework for breaking down the space more systematically.[2]
- Make quantitative estimates of the impact of different bundles of interventions. This will probably build on the quantitative risk estimates above, and should ideally also include the negative impacts of these interventions, so that it forms a proper cost-benefit analysis.
Other work you could do in this area includes:
- Write up clear definitions of different interventions, so we can better communicate about and coordinate what we’re working on. This might also involve trying to gain some form of consensus from the community e.g. by interviewing policy experts.
(Temporarily) prohibit certain systems
Banning the systems that are most likely to lead to AI-enabled oligarchy seems a fairly tractable and reasonable way to prevent certain risks arising. This could be a temporary ban, until we understand enough about the systems to proceed safely.
Of course, this needs to be evidence-based. Hence the importance of understanding the risks above, and a need to link these risks to certain properties or use cases of AI models that are unacceptably risky.
In addition, a ban’s externalities need to be considered. For example, we might ban centralised AI education on ethics and politics. Our hope might be that this is covered in a fair and balanced way by a diverse set of human teachers. However, this could backfire if governments instead decide to eliminate these topics from education systems because human teachers are expensive.
Research you could do in this area includes:
- Understand what properties or use cases are unacceptably high risk, and why. This could involve thinking about which systems could plausibly contribute to AI-enabled oligarchy, ethnographic research into how systems are used in the world, or technical research to understand what behaviours arise from different training methods or datasets.
- Develop specific definitions of such systems so they can be regulated. A legal background would be helpful for proposing wording of draft legislation, and a policy background would be useful for writing regulator guidance on what to watch out for.
- Plan how to implement bans logistically. For example, what incentive structures or mechanisms might work to enforce them? Are there analogous bans of other technologies that went well or poorly to learn from?
- Evaluate ban effects. For example, how well are people likely to comply? What are the potential side effects of such bans? When might we have enough evidence of safety to lift a ban, and how would we gather this information?
Advocacy you could do in this area includes:
- Advance international treaties or other forms of global coordination. These could ban the development or deployment of systems that could be particularly dangerous (similar to the EU AI Act’s unacceptable risk level). In addition to the research above, this would need careful thought about international incentives and enforcement. Coalition building could help join this up with other campaigns for international AI treaties, as well as building strong support across multiple countries.
- Support domestic or state-level legislation or regulations. Similar to international treaties, but at a more local level. This likely involves identifying the relevant national policymakers, producing highly targeted and valuable resources for them, and communicating this to them. You could also try to generate public support for such policies.
- Promote sector-specific guidance, legislation or regulations. Again similar to the above, but even more narrowly scoped. This might involve working with a particular body to explain the risks of AI in a way that is relevant to their remit, and supporting them in mitigating these risks. That could mean them bringing in new guidance or regulations, using existing powers more effectively, or using their soft powers (e.g. these bodies reaching out to specific companies and highlighting where they have concerns). Many such potentially useful bodies exist in the UK.[3]
- Secure voluntary commitments with companies using AI. You could work with existing companies or industry bodies to prevent their products from contributing to lock-in risks. This might be through creating a code of practice that they sign on to, agreeing not to create certain systems.
- Secure voluntary commitments with upstream AI suppliers. Rather than targeting many companies using AI directly, you could target a few actors further up the AI supply chain, such as foundation model companies, cloud providers or AI chip providers. You might get them to adjust their terms of use or similar policies to ban deployments at an unacceptable risk level, as well as to try to identify and counter such use cases (similar to, and possibly using the same systems as, misuse monitoring).
Transparency requirements
Transparency helps by letting us better understand and further address this problem. It might also help identify problems directly, for example if data indicates that a system should actually be prohibited.
We could therefore mandate that certain information about high-risk AI systems be made available to different actors such as the public, domestic regulators, or international bodies. This is most relevant for systems that aren’t prohibited but still pose a significant risk: likely systems where there is a lot of value to be had if they’re used appropriately, which makes banning them costly.
This could involve adding systems to a register held by a public supervisory authority, increasing transparency about what systems are being deployed. This has analogies to things like financial services registers, healthcare service registers, data protection registers, registers of practising lawyers and doctors, or registers of politicians’ meetings.
Research you could do in this area includes:
- Understand what kind of information would be useful to share, and what the trade-offs are for sharing vs not sharing this information.
- Identify the necessary logistical work to enable such registers. For example, what the duties of the register would be, or the regulations or treaties needed to enable this.
- Review how to encourage compliance, for example understanding how to incentivise organisations to keep their entries accurate and up to date.
Building you could do in this area includes:
- Build a crowdsourced or voluntary AI systems register, or open-source tools that enable organisations or countries to build their own registers. Over time, this could be adopted by various states or at least give them a good template to help them build their own.
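As a sketch of what the minimum viable version of such a register might look like (all field names and the review interval below are assumptions on my part, not an existing scheme):

```python
# A minimal sketch of a voluntary AI systems register: entries describe deployed
# high-risk systems, and a staleness check supports the "keep entries accurate and
# up to date" problem mentioned above. All fields and thresholds are illustrative.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class RegisterEntry:
    organisation: str
    system_name: str
    purpose: str
    risk_tier: str                  # e.g. "high" / "limited", per whatever taxonomy applies
    deployment_regions: list[str]
    last_reviewed: date
    contact: str                    # where regulators or the public can direct questions

REVIEW_INTERVAL = timedelta(days=365)   # illustrative; a real scheme would set this in guidance

def stale_entries(register: list[RegisterEntry], today: date) -> list[RegisterEntry]:
    """Entries overdue for review - candidates for follow-up or public flagging."""
    return [e for e in register if today - e.last_reviewed > REVIEW_INTERVAL]

register = [
    RegisterEntry("ExampleCorp", "TutorBot", "Personalised civics education",
                  "high", ["UK"], date(2024, 1, 15), "compliance@example.org"),
]
for entry in stale_entries(register, date.today()):
    print(f"Review overdue: {entry.organisation} / {entry.system_name}")
```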
Advocacy you could do in this area includes:
- Similar to the work on prohibiting certain systems above.
Auditing or red-teaming schemes
Effective audits and red-teaming exercises can detect and report on problems. These could complement the transparency requirements above by providing information about the compliance of the system, giving advance notice of possible risks, and building appropriate trust with people so safe systems have a competitive advantage.
This could take inspiration from healthcare,[4] investigatory powers,[5] financial, and data protection audits. Usually, regulators or other third parties would get access to the internals of organisations to understand what the systems are doing, and rigorously validate any claims made about systems. Done poorly, this can create a false sense of security and miss ‘obvious’ failings: instead, audits should be carefully constructed so that they are effective.[6] I worry that a lot of ‘AI auditing’ today is very low quality, and provides unwarranted assurance that could lead to greater harm.
Research you could do in this area includes:
- Understand how to do auditing and red-teaming well. This seems surprisingly difficult: many people have gone down this path and created fairly useless AI auditing frameworks.
Building you could do in this area includes:
- Develop open-source tools that make auditing or red-teaming easier or more effective. These tools might automate or standardise parts of the audit or red-teaming process where it’s safe to do so (e.g. with interactive runbooks or checklists), or help the public understand and verify audit results. A minimal checklist sketch follows after this list.
- Evaluate existing systems you might have access to,[7] and publicise both the results and how you’ve gone about this, as an example of best practice that makes it easier for others to follow. This might also help identify good practices, as well as early signs of trouble.
- Found an organisation that does effective AI audits or red-teaming, or otherwise helps shape this ecosystem to be more effective.[8]
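Here is the checklist sketch mentioned above: audit criteria expressed as data, with recorded findings evaluated into a simple result. The criteria themselves are placeholders; real ones should come out of the research into what makes audits effective.

```python
# A sketch of the "interactive runbooks or checklists" idea: audit criteria as data,
# recorded findings evaluated into a simple report. The criteria are illustrative.
from dataclasses import dataclass

@dataclass
class Criterion:
    criterion_id: str
    question: str
    evidence_required: str
    severity: str   # unmet "critical" criteria alone fail the audit

CHECKLIST = [
    Criterion("GOV-01", "Is there a named owner accountable for the system's societal impact?",
              "Org chart or policy document", "critical"),
    Criterion("DATA-03", "Can individuals contest automated decisions that affect them?",
              "Published appeals process and response statistics", "critical"),
    Criterion("LOG-02", "Are prompts/outputs retained long enough to investigate misuse?",
              "Retention policy", "major"),
]

def evaluate(findings: dict[str, bool]) -> str:
    """findings maps criterion_id -> satisfied?; missing entries count as failures."""
    failures = [c for c in CHECKLIST if not findings.get(c.criterion_id, False)]
    if any(c.severity == "critical" for c in failures):
        return "FAIL: " + ", ".join(c.criterion_id for c in failures)
    return "PASS with notes" if failures else "PASS"

print(evaluate({"GOV-01": True, "DATA-03": True}))   # LOG-02 missing -> "PASS with notes"
```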
Advocacy you could do in this area includes:
- Similar to work on prohibiting certain systems above.
Whistleblowing requirements
Effectively detecting where we might be sliding into AI-enabled oligarchy could help nudge things back on track. In combination with auditing, whistleblowing can significantly improve the ability of regulators or other actors to respond to problems before they develop into something more serious.
Research you could do in this area includes:
- Understand what whistleblower protections exist currently, and create clear and accurate resources that help people understand their rights.
- Identify gaps in current whistleblower protection legislation and systems, and suggest ways to close these gaps (for example, asking ‘what would an excellent whistleblowing system look like, why are we not there, and how do we get from where we are to there?’).[9] In the UK, these gaps might include:
- Unclear legal protection for reporting non-criminalised serious risks to society under section 43B of the Public Interest Disclosure Act 1998. Such reports might fall under the health and safety exemption or the environment exemption, but both feel a bit iffy.
- Unclear who serious AI risks should be reported to: there’s no clear regulator or body for this.
- An assumption from many regulators that people should raise issues internally first. This means regulators have less oversight of issues that are self-resolved, so it’s harder to understand the entire landscape of what might be going wrong. In addition, it makes it more likely that particularly malicious actors are tipped off and work to cover up wrongdoing.
- The ability for organisations to put intentionally misleading or unenforceable language into contracts to deter people from whistleblowing. For example, NDAs that threaten serious legal action for reporting issues to regulators (even though this is unenforceable under section 43J).
- The lack of punitive damages in most cases, meaning bad behaviour may not be sufficiently disincentivised.
- The lack of protection for certain whistleblowing-related actions from workplace surveillance or retaliation. For example, it probably shouldn’t be legal for an employer to terminate someone for reading up on whistleblowing.
- Develop proposals for international whistleblowing treaties or bodies that can handle cases where states themselves are corrupted.
- Develop runbooks for organisations that might receive reports relating to AI systems in future, like the EU AI Office. This could involve reviewing best practices from other sectors, reviewing options for accepting reports in the first place, and designing processes for handling different types of reports.
Building you could do in this area includes:
- Found an organisation that helps people whistleblow.[9] This might involve educating people and organisations about whistleblowing, consulting on how to build strong internal whistleblowing cultures, and advocating for better whistleblower legislation and systems.
- Found an organisation that people whistleblow to. This probably wouldn’t have legal protections (at least in the UK, except maybe under section 43H). However, it still might be able to handle reports confidentially and effectively, and act as a stop-gap solution until more ‘official’ bodies are set up.
Competition requirements
Ensuring healthy competition[10] in foundation models could be valuable for preventing AI-enabled oligarchy. Healthy competition would make it unlikely that a single model or company serves almost all requests, and correspondingly builds up a lot of unbalanced power or wealth.

Having many actors create powerful AI systems poses other serious risks, though, so naively optimising for competition alone could be dangerous. For example, it would create more options for cyberattackers to steal powerful model weights, increasing catastrophic misuse risks. Similarly, more models being developed increases the chance of serious accidents leading to AI takeover. Additionally, more actors could lead to race dynamics between companies, resulting in more corners being cut on safety.
Research you could do in this area includes:
- Understand existing markets, including what underlying models are being used in systems that are most relevant to AI-enabled oligarchy.
Building you could do in this area includes:
- Develop less risky narrow systems that match or beat the performance of general models. This would create greater competition in some areas, while (hopefully) not substantially increasing the risk of other AI disasters.
- Design open standards for AI models so it’s easy to switch them in and out. This could decrease vendor lock-in, increasing the chance that services use a more diverse set of AI backends.
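To illustrate the open-standards bullet above, here is a sketch of what a provider-neutral interface could look like in practice: services code against a small common protocol, and the concrete backend is chosen by configuration. The interface, backend names and config format are all hypothetical, not an existing standard, and the echo backend stands in for real vendor adapters.

```python
# A sketch of a provider-neutral model interface, so the backend can be swapped
# via configuration rather than application changes. Everything here is illustrative.
from typing import Protocol

class TextModel(Protocol):
    def generate(self, prompt: str, max_tokens: int = 256) -> str:
        """Return a completion for `prompt`."""
        ...

class EchoModel:
    """Stand-in backend used for tests, or as a template for real vendor adapters."""
    def generate(self, prompt: str, max_tokens: int = 256) -> str:
        return f"[echo] {prompt[:max_tokens]}"

BACKENDS: dict[str, type] = {
    "echo": EchoModel,
    # "vendor_a": VendorAAdapter,   # each adapter would wrap one provider's SDK
    # "vendor_b": VendorBAdapter,
}

def load_model(config: dict) -> TextModel:
    """Swap providers by changing config, not application code."""
    return BACKENDS[config["backend"]]()

model = load_model({"backend": "echo"})
print(model.generate("Summarise today's council meeting"))
```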
Advocacy you could do in this area includes:
- Advocate for minimum competition requirements, where the same AI model or underlying AI company cannot be responsible for more than a certain percentage of the market in any domain that is high-risk for AI-enabled oligarchy.[11]
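As a worked illustration of how such a cap might be checked in one domain. The request counts, the 40% cap and the Herfindahl-Hirschman Index (HHI) threshold of 2,500 (a common antitrust rule of thumb for “highly concentrated” markets) are illustrative choices, not proposals.

```python
# Worked sketch: checking a hypothetical per-domain market-share cap and the HHI.
requests_served = {          # e.g. AI tutoring requests in one country, last quarter (made up)
    "Provider A": 6_200_000,
    "Provider B": 2_100_000,
    "Provider C": 1_700_000,
}

total = sum(requests_served.values())
shares = {name: count / total for name, count in requests_served.items()}

CAP = 0.40                                            # illustrative cap
over_cap = {name: s for name, s in shares.items() if s > CAP}

hhi = sum((s * 100) ** 2 for s in shares.values())    # shares in percentage points

print({name: f"{s:.0%}" for name, s in shares.items()})   # A: 62%, B: 21%, C: 17%
print(f"HHI = {hhi:.0f} (above 2500 counts as highly concentrated)")   # HHI = 4574
if over_cap:
    print("Above cap:", ", ".join(over_cap))
```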
Educate developers, deployers and users of high-risk systems
Relatively little has been written on the risks of AI-enabled oligarchy and similar lock-in (hence why I’m writing these articles!). This means that even people trying to learn more about these risks may struggle to easily understand them.
We also need engaging educational materials that help people mitigate these risks in practice, rather than theoretical takes on these risks like this piece. Good ethics education[12] would:
- Explain why or how certain things are harmful. For example, ethics training usually flags that privacy is important, and that various regulations enforce privacy rights. However, it’s rare for this training to explain why privacy is important, and how privacy breaches lead to real harms (beyond occasional hand-waving about identity theft).[13] This means that it’s often ignored in practice, as people don’t understand the harms they might be causing.
- Explore relevant trade-offs. It’s one thing to discuss abstract problems or say nice things about why being ethical is good. It’s another to practice making ethical decisions in situations people are likely to encounter, especially as transfer between different tasks is often much worse than people expect.
Examples of less helpful questions:
- Which year did <legislation> come into force?
- You want to send marketing emails to individuals in the EU and find someone selling a marketing list online. Can you send emails using this list?
A more helpful question:
- Imagine you’re running an online conference for LGBTQ+ people in tech. One guest has a hearing disability and has requested you record the conversations as a reasonable accommodation. What privacy concerns might arise from this request, and how would you evaluate this against the request for reasonable accommodation? What might you choose to do as a result?
- Be more practical. Training often helps people identify unlawful behaviour, but not what to do in these cases. Better training could simulate a workplace environment and have people practice challenging harmful or unlawful behaviour, or notifying internal compliance teams or external regulators.
- Be genuinely engaging. When most professionals think about ethics training, it’s often met with a groan and mentally placed in the category of annoying, irrelevant and janky e-learning modules that force you to click through a load of slides and then answer inane multiple-choice questions. It’s bizarre that society is happy to waste so much productive time conditioning people to view ethics as some boring tick-box exercise. Instead, we should be using science of learning principles to ensure people have an excellent experience that gets results.
In addition, many people will not actively look for resources on this. Part of making this intervention successful involves identifying important target audiences, and getting them to engage with high-quality resources.
Research you could do in this area includes:
- Identify outcomes we’d like to see from high-quality education in this area, and ways to measure these outcomes. This should inform research into the kinds of actors that would need to be educated, and about what.
- Experiment with different ways of educating different audiences, to understand what works. This is effectively making the above bullet point empirical.
Building you could do in this area includes:
- Create teaching materials that provide high-quality introductions to different subject areas. This likely overlaps with other actions such as identifying what resources are needed in the first place, or engaging with people creating courses to help them fill gaps in their curricula.
- Create courses that make it easy for people to engage with materials in a structured way, and often more deeply e.g. through exercises, discussion, assessment, and practical projects. This could be:
- An online course
- An in-person training bootcamp
- An employee training programme
- A series of lunch-and-learns at your company
- A student-led module at your university
- Develop a certification scheme that incentivises people to be properly educated about these risks.
Advocacy you could do in this area includes:
- Push relevant organisations to carry out high-quality internal education on these topics. This might also overlap with other related actions, like building high-quality materials so it’s easy for these companies to do this.
- Leverage industry bodies, regulators, and governments to encourage relevant training. For example, get them to recommend high-quality learning resources, or change national curricula (e.g. the requirements for accrediting computer science degree programmes).
Build safer alternatives
People are much less likely to consider deploying AI systems if there are reasonable alternatives that are less risky. The above interventions have looked a lot at detecting and understanding the risks from AI systems, but we could also improve the viability of less risky alternatives.
Research you could do in this area includes:
- Investigate how to redirect AI innovation towards lower-risk deployment scenarios, or towards safer kinds of systems. For example, designing incentive schemes like grant funding, subsidies, or taxes to promote people working on this over working in higher risk domains.
Building you could do in this area includes:
- Build alternative solutions where high-risk AI systems might otherwise be deployed. For example, you might be able to solve the problem by avoiding certain types of AI systems, or with a non-tech solution entirely (often, by analysing root causes, you realise the wider system can be fixed so a new tool isn’t necessary at all).
- Build high-risk AI systems in the safest practical way. I’m hesitant to recommend this, and think it should usually be a last resort. However, if people are already deploying dangerous AI systems it might be a plausible harm reduction strategy. You might also learn a lot about why it’s hard to build in a safe way, which could be useful for identifying new research problems people need to solve, as well as more credibly communicating the difficulties and risks to other key stakeholders like policymakers.
Advocacy you could do in this area includes:
- Encourage governments or large organisations to change tendering requirements to be friendlier to safer alternative solutions.
Generate public support
Many of the interventions considered above have trade-offs that some people will oppose. Generating support for these interventions, while being honest and upfront about these trade-offs, can help give politicians the confidence to implement such measures.
Research you could do in this area includes:
- Consult with the public to understand what concerns people have about the above interventions. Better understanding what people’s key disagreements are can help adjust policies to mitigate these problems.
- Conduct public polling about related topics on AI. For example, how the electorate feel about replacing human teachers with AI systems on issues like politics, or what they think about different interventions. This would help evaluate how any advocacy work is going, and provide insights on public sentiment that might encourage politicians to take action.
Advocacy you could do in this area includes:
- Launch direct outreach campaigns. This might include general awareness campaigns, or have specific goals like encouraging people to write letters of support to political representatives.
- Build coalitions. Many people form views by looking up to leaders or experts in the space. You could build a group of trusted voices that gives credibility to the movement, for example through an open letter.
- Create or influence popular media. Storytelling through media such as books, films, TV shows and video games can have powerful effects on public opinion. For example, WarGames and The Day After are credited with influencing policy to reduce risks from nuclear weapons. You could contribute to this by creating media directly, or by trying to influence media that is created.[14]
Interested in contributing?
If you’re interested in working on this, please do start working on it! There’s no need to ask my (or anyone else’s) permission.
That said, I’d be keen to hear from you if you’re pursuing this (or if you considered pursuing it, but decided not to). Get in touch via the contact details on my homepage.
Footnotes
1. For more on both these claims, see the previous article. ↩
2. Like this pretty diagram in The Policy Playbook by CSET (page 11). One idea for a breakdown:
    - Functions (~~stolen~~ adapted from the NIST Cybersecurity Framework):
        - Identify: Understand the risk. For example, figuring out whether anything I’ve written here actually holds up, improving threat models, identifying weak points (beware infohazards!).
        - Protect: Prevent AI-enabled oligarchies forming. For example, by educating people about the dangers of building certain systems.
        - Detect: Horizon scanning and early detection of potential oligarchies. For example, through effective whistleblowing schemes.
        - Respond: Preparing to take action should incidents arise. For example, galvanising the international community to address a rogue state that is quickly gaining power. Or planning how we might make it possible for humanity to escape an AI oligarchy.
        - Recover: Planning for how we deal with the aftermath of an incident we respond to successfully. For example, even if an initial dangerous coup is subdued, there may be significant civil unrest or other serious problems that led to this outcome and need addressing.
    - Policy levers: maybe the ‘styles of government action’
3. For example:
    - The Equality and Human Rights Commission
    - Investigatory Powers Commissioner's Office (oversight over intelligence services and mass surveillance systems)
    - The Foreign, Commonwealth and Development Office
    - The Department for Education
    - The Counter-Disinformation Unit (part of the Department for Science, Innovation and Technology)
    - The Defending Democracy Taskforce and the Joint Election Security Preparedness Unit (across a few different departments)
    - Ofcom (mainly due to their powers and obligations under the Online Safety Act)
    - The Information Commissioner's Office (fair data processing)
    - The Financial Conduct Authority (fair and equitable access to key financial services)
    - The National Cyber Security Centre (resilience to cyber attacks)
4. For example, see the CQC’s audits of healthcare services (example) ↩
5. For example, see the IPCO’s audits of investigatory powers (example), or their annual reports (example) ↩
6. There’s some writing on effective audits in healthcare, and a little in finance. It is surprisingly hard to find good evidence here given how large audits are in other industries - this might be because it’s drowned out by a lot of low quality nonsense thrown about by large firms. ↩
7. I.e. Search for “AI <recruiting / education / etc.> software”, sign up for a bunch of them, and get testing! ↩
8. If you’re exploring this, you might want to speak with Apollo Research who have been thinking about auditing and model evaluation. ↩
9. If you’re exploring this, you might want to speak with Protect who work on whistleblowing in the UK. ↩ ↩2
10. At time of writing, the UK’s CMA is exploring principles for good foundation model competition. ↩
11. An analogy that might be worth looking into here: The Bank of England and FCA are worried that too much of the UK’s critical financial infrastructure is concentrated on AWS. As such, they’ve been experimenting with different policies that encourage greater diversity in cloud services being used in financial services, as well as other critical third parties. ↩
12. Unfortunately, a lot of ethics education falls short of this - especially that which is shoehorned into computer science or related degrees, or employee onboarding programmes. ↩
13. I’d be interested in recommendations of resources that really nail this: I’ve struggled to find excellent articles on this in the past. ↩
14. I’m not sure how feasible this is, but I imagine there might be a few reasonably tractable things that could be tried here:
    - Setting up a film festival, game jam or similar around AI-enabled oligarchies as a particular theme (or sponsoring a particular prize in an existing competition etc.).
    - Reaching out to particular screenwriters with ideas, possibly offering in exchange to help them as a technical consultant to check for accuracy etc. This might be a terrible plan! Would like feedback if you have expertise here.