
An easy win for UK AI safety: competition law safe harbour


Adam Jones

tldr: We could improve AI safety by creating a safe harbour from competition law for certain safety-relevant agreements. In the UK, the CMA and a Secretary of State could make this change without waiting for parliamentary approval.

Arms races

AI development is often characterised by companies racing to be first to market. This can be good for consumers: in an ideal world, it encourages companies to deliver better services sooner.

However, this dynamic can also lead companies to prioritise speed over safety, potentially resulting in the hasty deployment of insufficiently tested or secured AI systems. Both Microsoft’s Bing Chat and Google’s Bard are believed to have been released when the companies knew they weren’t ready, in an attempt to beat or catch up with the competition.

Because of this, some people have suggested that companies make agreements to mitigate these arms race dynamics. For example, agreements might require companies to delay model deployment until proper safety testing has been done, or to pause development of particularly risky capabilities. This approach could still enable innovation while reducing the incentive to rush an unsafe product to market.

Competition law

Disclaimer: this is commentary from somebody on the internet, not legal advice!

The Competition Act 1998 is a UK law designed to promote fair competition. It contains provisions that prohibit agreements which prevent, restrict or distort competition. In particular, section 2(2)(b) of the Act prohibits agreements which “limit or control production, markets, technical development or investment.”

These provisions may unintentionally impede collaboration on AI safety. For instance, the Act would restrict agreements between AI companies to:

  • only release AI models that meet certain safety standards (controlling production or markets)
  • increase investment in safety research, especially if this might take away investment elsewhere (controlling investment)
  • pause the development of certain advanced AI capabilities due to safety concerns (limiting technical development)
  • delay the release of potentially dangerous AI models until society has safeguards in place (controlling production or markets)

Safe harbours

The good news is that the Act already contains a mechanism for addressing this issue. Section 6 outlines a process for creating exemptions, known as “block exemption orders”.

The Competition and Markets Authority (CMA) first needs to check that these agreements meet the exemption requirements in section 9. One route is showing that an agreement “contributes to improving production or distribution”. Arguably, agreements to limit the distribution of potentially harmful models could improve consumer trust and reduce AI harms, leading to greater long-term benefits: the production of usable, safe models, and the ability to distribute them.

The CMA can then recommend that the Secretary of State make an order exempting a category of agreements. The order’s wording would need care to ensure it sufficiently covers safety use cases without enabling anti-competitive behaviour. Figuring out how to word this would be a useful project for a new AI governance researcher with some legal background!

The Secretary of State can then actually make the block exemption order.

(If a section 6 block exemption isn’t appropriate, an alternative route could be adding an exclusion via section 3 of the Act.)

Next steps

To implement this safe harbour for AI safety agreements:

  1. The CMA should review potential AI safety agreements that could benefit from exemption. This would likely involve consulting actors in the AI space.
  2. If there are agreements that would benefit from exemption, the CMA should recommend a block exemption order under section 6 of the Competition Act 1998.
  3. The relevant Secretary of State should then consider making the block exemption order. This is likely either the Secretary of State for Business and Trade, or for Science, Innovation and Technology.
  4. Once implemented, the effectiveness of this safe harbour should be monitored closely. The CMA and relevant government departments should establish a process for regular review and potential adjustments to the exemption as the AI landscape evolves.

This is an easy win for AI safety in the UK: it doesn’t require completely new legislation, just strategic use of existing powers. It also aligns well with the UK’s AI regulation strategy, supporting AI safety without getting in the way of helpful innovation.

If you enjoyed this article, you might also like An easy win for UK AI safety: supporting whistleblowers.