Unlike the EU’s proposed regulations, the UK’s new strategy is focused on the growth potential of the technology. There is little discussion of ethics or reining in abuses.
The UK looks set to shift away from the EU on the regulation of artificial intelligence (AI), promising in a new strategy to build the most “pro-innovation system” in the world.
In contrast to the EU’s proposed AI act, London’s strategy does not propose prohibiting controversial uses of AI, such as subliminal manipulation, and instead stresses the economic benefits of rolling out the technology across the private sector.
“This national AI Strategy will signal to the world our intention to build the most pro-innovation regulatory environment in the world,” says Nadine Dorries, the digital secretary, in a foreword to the document, released on 22 September.
AI is the latest area where the UK is looking to diverge from the EU post-Brexit. The government is already consulting on plans to make it easier to research and grow genetically engineered crops and animals, is planning changes to the rules governing clinical trials, and is consulting the industry on the regulation of medical devices.
“Having exited the EU, we have the opportunity to build on our world leading regulatory regime by setting out a pro-innovation approach, one that drives prosperity and builds trust in the use of AI,” the strategy says.
“The strategy is a product of politics and identity,” said Charlie Pownall, the founder of the lobby group AIAAIC, which campaigns for transparency and openness in AI.
“The emphasis on innovation and talk of Britain as a 'global AI superpower' fits the rhetoric of a newly independent, outward-facing country boasting real AI expertise looking to carve itself a role as a significant economic force on the world stage,” he said.
The strategy does have a section on “governing AI effectively,” but repeatedly stresses that rules will be relatively light touch.
While the EU’s draft legislation would ban AI that uses subliminal manipulation techniques, China-style social credit scoring systems, and largely prohibit real-time remote biometric identification, the UK’s plan does not mention any specific practices it would outlaw, acknowledging only “concerns” around the “fairness, bias and accountability” of AI systems.
“Ethics, transparency, and accountability for anything other than government appear to be second tier considerations,” said Pownall. “It is also surprisingly thin on detail for a document that has been long in the making,” he said of the 62-page strategy.
In contrast to the EU’s “risk focused” approach, and the “decentralised” attitude to AI in the US, the UK is “focused on innovation and growth,” said Sébastien Krier, technology policy researcher at Stanford University, and a former policy adviser to the UK’s Office for Artificial Intelligence.
Overture to the US
There is little in the strategy about how the UK’s approach would mesh with the regulatory framework being developed in Brussels.
UK researchers will participate in Horizon Europe, “enabling collaboration with other European researchers”, it says.
But there is only one mention of the EU’s AI Act, and the strategy stresses that the UK will work to shape global AI governance “in line with our values” and “prevent divergence and friction between partners”.
Conversely, there appear to be overtures to the US on AI. The strategy commits the UK to ratifying the “US-UK declaration on cooperation in AI R&D”, a statement signed last year promising to deepen research cooperation and drive “technological breakthroughs”.
Krier said the UK’s strong AI ecosystem gave it some clout in shaping global rules, and the strategy was a sign it might be more aligned with the US than EU in the discussions to come.
“There seems to be more disagreement between the EU and US, than the US and UK,” he said.
Thus far, federal guidance in the US on the use of AI had been relatively light-touch, Krier said. But individual government agencies had been working up their own regulatory regimes. “There’s some stuff going on [in the US], but it’s not as advanced as the EU. It’s a laissez faire approach for now.”
Still, the UK is lagging behind the EU in working up concrete regulatory proposals, he said. “Whether the UK will be able to set world standards is more unclear,” said Krier.
However, this latest strategy is not the final word on the UK’s regulatory regime. A white paper specifically on governance and regulation is due early next year.
The strategy also contains a few new initiatives for academic researchers. It promises to set up a national AI research and innovation programme, to help turn “fundamental scientific discoveries into real-world AI applications”.
There are also signals that the government believes compliance with the EU’s General Data Protection Regulation, which the UK has retained despite Brexit, might stop companies capitalising on AI.
“Navigating and applying relevant data protection provisions can be perceived as a complex or confusing exercise for an organisation looking to develop or deploy AI systems, possibly impeding uptake of AI technologies,” it says. The UK government is currently running a consultation on reforming the UK’s data protection framework.