What early career policymakers can learn from product managers: understanding people’s actual problems is key to effective policy
I run the AI Safety Fundamentals courses, some of the largest and best-known online courses on AI safety. I’ve also worked in and alongside policy teams in the UK government at the Cabinet Office, Home Office, and DSIT.
I’ve therefore seen hundreds of people trying to enter the policy space. Most of them make the same mistake when getting started. This results in them wasting time on bad policy, which can limit their career growth and reduce their impact. It also causes many people to start the same unhelpful projects in AI governance.
The mistake is failing to properly understand people’s problems.
For example, AI discrimination is a problem people face. And there is currently no AI-specific legislation in the UK. But people who suffer from AI discrimination aren’t powerless: they might try pursuing a case using existing legislation. Understanding this enables better policy.
Not understanding the real problem is also a very common mistake made by tech startups! Luckily, product managers have a technique for tackling this issue that we can learn from: user interviews.
In this article I’ll flesh out an example, and show how this technique results in better policy.
Example of the mistake
We’ll continue with our example of a policy to reduce AI discrimination harms in the UK - but the idea here can apply to any policymaking goal.
A typical early-career policymaker might immediately jump to solutions. For example, banning AI discrimination with steep penalties for violations. They might spend weeks defining what characteristics are protected from discrimination, what counts as discrimination, and how this applies to key areas like employment and education.
Unfortunately, this doesn’t solve the problem. In fact, this is already banned under the Equality Act 2010. We’ve just wasted our time duplicating legislation! Importantly, our citizens are now no better off.
The error we made is that we didn’t understand the problem well enough.¹
Specifically, we didn’t understand the context of the problem: What other options did people facing the problem have? Why do these not solve the whole problem?
User interviews
Product managers have a great trick here: user interviews. This involves talking to people who have the problem. The goals are to understand:
- the problem
- how they’re trying to solve their problem today
- why that solution is not working
Important: do not ask them about new ideas for solutions!
You should ask them questions about their lives and specific events, then let them speak. They should be speaking for at least 80% of the interview. The classic mistakes are pushing your own solutions, or only collecting their generic opinions.
A great book for developing the skill of user interviewing is The Mom Test. (Another hot tip: record your user interviews, and watch how you conducted them. You’ll quickly realise how you can better apply these principles and improve rapidly.)
Example of applying user interviews
How does this work in practice? Let’s go back to our AI governance example.
First, we need to identify who to interview. Most of the time this will be the people who suffer because of the problem. (See the notes below if this isn’t appropriate.) In our example, you should therefore interview people who have suffered from AI discrimination.
Then, you’ll have to conduct the user interviews.
In the interviews, ask people what they did when they suffered AI discrimination. One person tells you they went to a lawyer, who advised them to sue under the Equality Act. But unfortunately they struggled to get clear evidence of the AI system being discriminatory, so the court case failed.
You dig into why they weren’t able to get this evidence. They tell you that they tried requesting details about the AI system through the courts' disclosure processes. But this request was refused because courts consider these details to be protected trade secrets.²
Repeat this process with many different people, to get a comprehensive view of the problem. There’s usually not just one alternative solution, and understanding the different paths people took can again strengthen your policy.
You've now learnt a lot more about where people are getting stuck, so you can begin building better policies.³
You might now develop a policy that updates disclosure rules to ensure enough transparency to evaluate whether a decision was discriminatory. This is a better policy than duplicating existing legislation: it tackles the actual problem, and is well-scoped to minimise regulatory burden.
Not all policies should be small incremental updates to existing rules - that just happened to be the case in this hypothetical example. User interviews can also surface broad unsolved problems that require sweeping new policy. But they do allow you to make sure that's the right choice.
Who should I interview when it’s not obvious?
In some cases, it won’t be obvious who you should interview. Commonly this is where:
- The benefit is very diffuse. For example, improving crisis response to protect people from future AI catastrophes.
- The benefit is to non-humans, for example with farmed animal welfare policy. I haven’t tried it myself, but I’d imagine it’s hard to user interview chickens!
In these cases, try interviewing:
- People who will use your outputs. For the AI crisis-preparedness example, you could interview crisis-response teams⁴ who would follow your guidance in a crisis. Even if they haven't experienced the same crisis before, you could ask them about the problems they faced responding to other crises.
- Experts in the area. For example, people at non-profits, research institutes or working in industry. (But beware hidden agendas, particularly from well-trained industry folk.) You should still focus on getting them to explain the problem first, before they jump into solutions. They may also be able to help you with empathetic role-playing.
Conclusion
Deeply understanding problems, including the context they’re in, will help you develop more effective policy. Ultimately, this will make you a more useful and unique contributor.
Footnotes
1. Another common error is to spend a lot of time on categorising the problem, or understanding its impact on people assuming they’re completely passive. While these can be helpful, what’s often still missed is how people are responding to the problem already.
2. See disclosure failures from the Horizon scandal. And also wider legal system failures.
3. In practice, user interviews usually dig up a bunch of different problems. In this user interview example you’d likely learn of the difficulties of finding a lawyer, financial and time barriers to pursuing the case, delays in the legal system, and the lack of privacy in the courts. Discovering all of these can help you develop better policies (or one overarching policy that tackles many of these issues)!
4. In the UK government, these will be people in the COBR Unit (a successor to the Civil Contingencies Secretariat) who support COBR.