UK to take sector by sector approach to regulating artificial intelligence

30 Mar 2023 | News

Rather than create a new single regulator, the government will have existing oversight bodies tailor the rules to suit specific sectors. Five guiding principles will be applied to ensure safe and appropriate implementation of the technology

The UK government has set out proposals for regulating artificial intelligence in a way it says will avoid heavy-handed legislation which could stifle innovation.

Instead of giving responsibility for AI governance to a new single regulator, the plan is to empower existing regulators, such as the Health and Safety Executive, the Equality and Human Rights Commission and the Competition and Markets Authority, to come up with tailored, context-specific approaches that suit the way AI is actually being used in their sectors.

A white paper published this week sets down five principles, including safety, transparency and fairness, to guide the use of AI, as part of a new national blueprint for regulations to drive responsible innovation and maintain public trust in the technology.

The UK’s AI industry employs over 50,000 people and contributed £3.7 billion to the economy last year. According to the government, the UK is home to twice as many companies providing AI products and services as any other European country.

Adopting AI in more sectors could improve productivity, but questions have been raised about future risks. At present, organisations can be held back from using AI to its full potential because a patchwork of legal regimes causes confusion and imposes financial and administrative burdens on companies trying to comply with the rules.

The white paper outlines five principles that the sector regulators should consider in the industries they monitor. The principles are:

  • safety, security and robustness: applications of AI should function in a secure, safe and robust way where risks are carefully managed
  • transparency and explainability: organisations developing and deploying AI should be able to communicate when and how it is used, and to explain a system’s decision-making process at a level of detail that matches the risks posed by the use of AI
  • fairness: AI should be used in a way which complies with the UK’s existing laws, for example the Equality Act 2010 or UK GDPR, and must not discriminate against individuals or create unfair commercial outcomes
  • accountability and governance: measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes
  • contestability and redress: people need to have clear routes to dispute harmful outcomes or decisions generated by AI

Over the next 12 months, regulators will issue practical guidance to organisations, as well as other tools and resources like risk assessment templates, to set out how to implement these principles in their sectors.

Stakeholders have until June 21 to comment on the white paper.
