UK rejects EU approach to artificial intelligence in favour of ‘pro-innovation’ policy

19 Jul 2022 | News

Post-Brexit, the UK government has set out plans for AI that diverge markedly from the EU. Rather than a single regulator, oversight will be left to a multitude of regulators which will tailor rules for sectors ranging from broadcasting to healthcare

The UK government has unveiled details of how it will regulate artificial intelligence, explicitly rejecting the EU’s approach as too centralised, insufficiently flexible and likely to limit innovation.

Instead it wants to leave AI regulation up to existing agencies that focus on areas like broadcasting, financial services, healthcare and human rights.

A regulatory rift between Brussels and London on AI is significant because the EU hopes to steer global standards in the technology by introducing the world’s first major legislative package, the AI Act.

And yet the UK is Europe’s biggest hotspot for the technology, receiving more private investment than France and Germany put together in 2021, and hosting some of the world’s leading AI firms like Alphabet subsidiary DeepMind.

Releasing its plans yesterday, the UK government said it is “taking a less centralised approach than the EU.”

“Instead of giving responsibility for AI governance to a central regulatory body, as the EU is doing through its AI Act, the government’s proposals will allow different regulators to take a tailored approach to the use of AI in a range of settings,” according to the proposals published by the Department for Digital, Culture, Media & Sport.

The EU’s AI Act, proposed in April 2021, would establish sweeping new legislation to regulate the technology across the board.

Under the oversight of a European Artificial Intelligence Board, member states would each nominate their own bodies to make sure the rules are enforced, with violations punished by fines of up to €30 million or 6% of annual corporate turnover.

The AI Act would categorise uses of AI into risk levels that determine the degree of oversight. Some “unacceptable” uses would be banned.

Although these definitions are still being haggled over as the act is scrutinised by MEPs, the ban could cover AI that subliminally manipulates users and real-time facial recognition by law enforcement.

No laws, just guidance

The UK plans, by contrast, barely mention any uses of AI that would be prohibited.

Nor is the UK’s regulation going to be backed up by new laws, at least for now. “This is so that we can monitor, evaluate and if necessary update our approach,” the plans say.

Like the UK, the United States is also not planning AI legislation, at least at the federal level, instead hoping to encourage a safety-first culture change in tech firms.

The UK will leave regulation to existing organisations like Ofcom, which regulates broadcasting, and the Competition and Markets Authority, which tackles monopolies.

London argues that creating an EU-style list of risky AI uses would lack nuance: what is dangerous in one sector might be benign in another, and updating a central list of risky AI could be slow and cumbersome. “This could lead to unnecessary regulation and stifle innovation,” it says.

“The more context-specific approach to AI regulation, relying on existing regulators, has some advantages - particularly given AI is so broadly applicable, there's unlikely to be a one-size-fits-all approach to regulating it,” said Jess Whittlestone, head of AI policy at the London-based Centre for Long-Term Resilience, of the UK plans.

But the UK’s approach won’t be straightforward, she stressed. Regulating AI sector by sector risks letting important areas fall through the cracks, and some overarching body will need to look out for gaps, Whittlestone said.

The UK plan acknowledges the risk that sector by sector regulation could end up with a thicket of conflicting rules. “We will seek to ensure that organisations do not have to navigate multiple sets of guidance from multiple regulators,” it says.

It also warns that some existing regulators don’t have enough in-house AI expertise. The UK could create a “central pool of AI talent that regulators can draw on,” suggested Whittlestone.

Six principles

The UK government wants its regulators to follow six principles when devising AI rules.

One of them is to make sure AI is “appropriately transparent and explainable,” echoing an emphasis in the EU’s AI Act that, at least for high-risk systems, citizens need to understand why an AI agent has come to a particular decision.

This issue of explainability in AI is likely to be contentious, as some AI models produce outcomes without their reasoning being fully clear to humans.

“Presently, the logic and decision making in AI systems cannot always be meaningfully explained in an intelligible way,” the UK’s plan acknowledges, “although in most settings this poses no substantial risk.”

But in a high-risk setting, such as a tribunal where the accused has the right to challenge the charges against them, regulators might decide that AI should be prohibited altogether if it cannot explain its workings, the plan says.

In addition, AI-made decisions still need to be contestable by those they affect. AI should also be “safe” and “fair”, according to the plan – although what this means in practice is left to sector regulators to work out in detail.

Testbed

The UK’s new AI rules could allow the country to become a “testbed” for new applications “before they launch big across the Channel”, said Max Beverton-Palmer, a technology policy expert at the Tony Blair Institute in London.

The risk, though, is that the UK gets left behind as the world adopts the EU’s AI rules as standard, he said. “None of this will matter unless the UK moves sharpish,” he tweeted. “It's unbelievable that the UK has lost the march to the highly bureaucratic EU.”

What’s more, the UK’s rules risk lacking respect and clout unless new laws actually back them up, Beverton-Palmer said.

There will be a white paper in late 2022 setting out further details of the government’s plans.
