
Avoiding unhelpful work as a new AI governance researcher


Adam Jones

High-quality policy research would be really valuable for AI governance.

However, many newcomers start down an unproductive path. The results aren’t helpful to others, and newcomers get demotivated when nobody gets excited by their work.1

This article lists some common bad ideas I think newcomers to the field should avoid.

Before we begin, I should say:

  • Having these ideas is not a bad sign! In fact, many people I now consider excellent AI governance researchers started with these ideas too.
  • I'm not calling out any person or project in particular: I’ve seen multiple attempts at all these ideas.
  • Some of these ideas can be good under the right conditions. I’ll give guidance on what good variants look like.

Developing a new general AI governance framework

Frameworks from newcomers usually overlap heavily with existing work and don’t highlight new contributions - this makes it difficult for people to quickly understand why they’d use your framework over an existing one. Some frameworks can also have little action-relevant detail.

Working on a general framework is also unlikely to make you the best in the world at a specific subject area.

A good version of this might look like:

  • Critiquing and suggesting improvements to existing frameworks, ideally those that are popular or relatively recently published.
  • Developing a genuinely novel framework that is clear about why it’s better than previous work, and that can be used for making decisions about AI policy (i.e. it does more than just bucket policies neatly). Developing such a framework usually requires experience with other frameworks and an understanding of which of their real problems need fixing: this makes it hard for newcomers.

Building a database of all countries’ AI policies

Building a database of AI policies can feel productive because there’s a lot of work to do, but almost every time I’ve seen this attempted it has resulted in an incomplete snapshot that never gets maintained. What’s particularly difficult is:

  • Balancing the level of detail against being a useful summary.
  • Choosing an appropriate and consistent scope. For example, is data protection or copyright law relevant? What about voluntary commitments? Or general R&D schemes that could cover AI?
  • Understanding what’s going on in other countries, particularly because what is loudest or most public doesn’t always correspond to what is most important.
  • Keeping all of the above up to date.
  • Getting people to use your database rather than other databases, Google search, or simply staying uninformed.

I think it’s especially likely this project will not be useful if:

  • You haven’t identified a specific audience. By specific audience, I mean that your audience should be no larger than 10,000 people, and you should usually be able to find an actual person on LinkedIn who is in your audience.
  • You haven’t identified a reason it’ll be 10x better for that audience than the OECD’s AI policy observatory, the UN’s AI policy portal, or other existing resources they have access to.

A good version of this might look like a narrowly scoped resource aimed at a specific audience you’ve identified, with a clear reason it’ll be 10x better for them than existing resources, and a realistic plan for keeping it up to date.

Trying to analyse every AI risk in detail

Newcomers often feel compelled to create comprehensive analyses of all potential AI risks. This often seems motivated by stress about forming inside views.

However, trying to evaluate all risks is likely to result in you getting only a very shallow understanding of a few risks before getting fed up, and rarely produces actionable insights for you or others.

Thorough risk assessment is really valuable, and better understanding the greatest risks will help us prioritise work, develop better policies, and ensure any regulation appropriately balances potential risks and benefits.

To do this well, select a narrow subset of risks (or just one risk) that are neglected or that you already have some background in. Build out detailed threat models and start trying to quantify upper and lower bounds early. The book How to Measure Anything (key concepts) can be helpful here. You might also want to make it easier to measure this risk in future: for example, identify what evaluations should be developed to help track this risk over time.
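
To make ‘quantify upper and lower bounds’ a little more concrete, here’s a minimal sketch (with entirely made-up, illustrative numbers and a hypothetical risk decomposition) of the kind of Monte Carlo estimate How to Measure Anything describes: express each uncertain factor as a rough 90% interval, sample from each, and combine the samples into a distribution over the quantity you care about.

    import math
    import random

    # Illustrative numbers only - not real estimates of any actual AI risk.
    def sample_interval(low, high):
        # Crude simplification: sample log-uniformly between a 90% interval's bounds.
        return math.exp(random.uniform(math.log(low), math.log(high)))

    N = 100_000
    outcomes = []
    for _ in range(N):
        deployments = sample_interval(50, 5_000)                  # systems deployed per year (hypothetical)
        incidents_per_deployment = sample_interval(0.001, 0.05)   # serious incidents per deployment per year
        cost_per_incident = sample_interval(1e5, 1e8)             # expected damage per incident, in dollars
        outcomes.append(deployments * incidents_per_deployment * cost_per_incident)

    outcomes.sort()
    print("5th percentile: ", outcomes[int(0.05 * N)])
    print("Median:         ", outcomes[int(0.50 * N)])
    print("95th percentile:", outcomes[int(0.95 * N)])

Even a crude model like this forces you to write down your assumptions explicitly, and quickly shows which factor your answer is most sensitive to.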

Spreading ‘awareness’ about AI governance generally

This will almost certainly fail if it’s directed towards a poorly-defined or very generic audience, like ‘the public’ or ‘policymakers’. Another common failure is spreading awareness to the people closest to you, like ‘my friends’ or ‘my colleagues’.2 Often the fault here is forward chaining instead of backward chaining.

In general, I don’t think newcomers should focus on this as it rarely helps you develop novel skills that will make you more useful in future.

If you are intent on pursuing this, I recommend that you:

  • Identify a specific audience. By specific audience, I mean that your audience should be no larger than 10,000 people, and you should usually be able to find an actual person on LinkedIn who is in your audience.
  • Identify something concrete you want them to do, with strong reasoning for why this will help AI governance. This is usually easier if you’re not actually spreading awareness of AI governance generally, but instead focusing on a narrower topic or specific policy. For example, if your audience is ‘executives of companies larger than 250 employees using AI in London’, maybe you want them to publicly agree to some voluntary commitments.
  • Think through your outreach strategy carefully. I usually suggest newcomers ‘just do it’ instead of spending too long planning, but this is a rare exception.

Teaching other people about AI basics

The final common project AI governance newcomers jump to is teaching people about the basics of building AI systems. For example, teaching others how to build a basic MNIST classifier.

There are already many resources here - while many of these still aren’t perfect, most newcomers fail to significantly improve upon them. Even if you do make better resources, distribution is often a difficult problem because it’s a crowded space - Google returns about 1,130,000,000 results for “AI for beginners”.

Big caveat: This is fine if the reason you’re doing this is that it acts as a forcing function for you to better understand AI, and you think that understanding will help with your future work (learning by teaching is great!). Just don’t deceive yourself into thinking the impact comes from educating others, rather than from your own learning.

Also to be clear, this point is about AI basics. It is, however, very useful to write educational materials on more specific topics: there’s a surprising lack of high-quality content on many particular aspects of AI, especially AI safety. A good question to ask yourself is ‘will this be 10x better than anything that already exists on this topic?’ (and if nothing has been written on the topic yet, assume the answer is yes).

Finding a good project

Avoiding the above will hopefully save you some time getting started in AI governance. However, you probably now want to know what you should do.

The good news is that there are a huge number of valuable things to be doing. I've published a list of concrete AI governance policy areas, many of which seem to have nobody working on them and in which you could become the world expert in under a weekend.

You can also find some ideas for what to do in these articles I’ve written:

You might also enjoy reading: How to succeed as an early-stage researcher: the “lean startup” approach.

Footnotes

  1. Or potentially worse, they continue regardless and end up wasting even more time.

  2. In very rare cases, these audiences might be the right ones if you’re somewhere relevant: for example if you work in the government department responsible for AI policy. But even then you don’t want to ‘spread awareness’: you probably want to educate people about specific developments that are relevant to their jobs.