Peter Kyle wants to work with the EU on artificial intelligence – but also steer a ‘unique pathway’ between Brussels and Washington
The UK’s new secretary of state for technology has talked up scientific and technological cooperation with the EU, in a change of tone from the previous government, which trumpeted the benefits of regulatory divergence from Brussels as a way to capitalise on Brexit.
Labour’s Peter Kyle told Science|Business that the previous 14 years of Conservative-led government had denied the UK the “opportunity to think and act globally”.
“There is clear benefit to working cooperatively, fulsomely and in a sustained way, with our key global partners and continental partners,” he said last week on a visit to the European Organisation for Nuclear Research (CERN) in Geneva.
“CERN could not have been achieved by Britain alone. It couldn't have been achieved by Switzerland alone,” he said. “Why shouldn't Britain aspire to be a leading part of those kind of partnerships?”
Kyle’s shift in tone mirrors a broader attempt by the UK’s new centre-left government to rebuild ties with the EU that were damaged by Britain’s messy exit from the bloc. Last month, the UK stressed its interest in joining FP10, the successor to the EU’s Horizon Europe research programme.
Under the previous Conservative administration, the government talked up the benefits of Brexit as a way to escape what it saw as cumbersome rules hampering some technologies.
In a move largely supported by scientists, the previous government unveiled plans to relax rules on research into genetically engineered crops, in a departure from the EU’s restrictive system. It also championed a wait-and-see, light-touch approach to artificial intelligence, just as Brussels constructed the world’s most comprehensive legislation in the form of its AI Act.
Since Labour took power in July, Kyle has focused heavily on AI, and in a break from the Conservatives, new legislation is expected – although it is unclear when it will be introduced.
The UK is one of the world’s leading AI research powers – hosting the Google-owned lab DeepMind – and Kyle said he wants close cooperation with Washington and Brussels on the technology.
“I believe that if we work closely and collaboratively with our key partners in the US and in the EU, then we can shape these technologies,” he said.
Working together on safety
The previous government, although it didn’t bring forward legislation, was nonetheless concerned about the more existential risks of the most powerful systems spinning out of human control.
Last year, it convened a global summit at Bletchley Park, site of Britain’s famous World War Two codebreaking operation, to hammer out a joint statement of concern that AI could cause “serious, even catastrophic, harm”. In a diplomatic breakthrough, China was one of the signatories.
The Conservatives also set up an AI Safety Institute to test the capabilities of the most powerful systems, checking whether they can manipulate humans, for example, or could be used to create novel biological weapons. The European Commission’s new AI Office will in part fulfil a similar function, also kicking the tyres of the leading, so-called general-purpose AI systems.
Kyle will continue this focus on AI safety, and wants the EU, UK, and other countries’ institutes to work together (the US has also set up its own AI safety institute).
“There's a global community of safety institutes that's evolving,” he said. “They are all networked, and credit to the previous government for the Bletchley Summit, which sparked off this global community.”
“I'm insistent that the British institute works collaboratively and as openly as possible,” he said. The next few months will see new initiatives on AI safety, he said, including tools that organisations deploying AI can use to make sure they do so safely.
A middle path
Although he talked up collaboration with Brussels on AI, the UK’s legislation will remain “very focused”, Kyle stressed – in other words, not going nearly as far as the EU’s AI Act.
The legislation will first turn a voluntary agreement signed by big AI companies earlier this year into a legal requirement. Among other things, the agreement commits leading AI companies to publicly show how they assess and test for risks.
Secondly, the UK’s legislation will establish the AI Safety Institute as an arm’s length body from government, “so that it has a long-term future, so that it is free to collaborate and build relationships around the world,” said Kyle.
Overall, the UK will try to steer a middle course between the EU’s comprehensive AI Act, and Washington’s approach, which has so far yielded executive orders on AI, but no legislation.
“We can make sure that safety is in at the outset from the start, and we can harness it for public good, but we are going to be doing so in a way that's not negatively disruptive to the regulatory landscape in the EU or the US, but we will find our unique pathway between them,” said Kyle.
The UK, EU and US have “different cultures” and “apply law in very different ways”, Kyle said, and this will influence London’s approach to AI.
But “nobody should be fooled into believing that there is a divergence between us in what [the EU, UK and US] end objectives are,” Kyle said. “Those shared objectives are what distinguishes us from other parts of the globe who have […] different intentions and uses for AI.”
Lagging behind?
But this middle path, as Kyle styles it, risks leaving the UK behind others when it comes to AI regulation, argue some experts.
“AI regulation is moving fast, and there’s a danger of the new government getting caught flat-footed if it just sticks with the Conservatives’ wait-and-see approach,” said Jack Stilgoe, a professor of science and technology policy at University College London.
Last month, California came close to passing a significant AI law – SB 1047 – but the bill was vetoed by Governor Gavin Newsom despite support from lawmakers, Stilgoe pointed out. “Their failure to do so creates an opportunity for the UK to show it can lead on AI regulation.”
The UK’s focus on the most powerful systems, and the very biggest risks of out-of-control AI, also risks neglecting more prosaic abuses.
The EU’s AI Act, for example, bans certain specific uses of AI, such as assessing how likely someone is to commit a crime, creating facial recognition databases, or deploying “subliminal, manipulative, or deceptive techniques” to influence behaviour. There’s no sign that the UK will follow this lead just yet.
Labour has kept some of the Conservatives’ most “tiggerish” AI enthusiasts in place as advisers, said Stilgoe. “These people are also the ones likely to talk about existential risks, which isn’t a good basis for solid regulation.”
‘Worst of both worlds’
What’s more, the sheer size of the EU market will mean that AI companies are likely to comply with Brussels’ rules, rather than any framework the UK creates, said Kieron Flanagan, a professor of science and technology policy at the University of Manchester.
“Regulatory power comes from the scale of the market that you can block the [AI] company from being part of,” he said. “The UK will have very little influence over those companies, if it doesn't act together with the EU. And I think that the current government probably understands that.”
In any case, US companies selling AI products into Europe are unlikely to create a less restricted version just for the UK, even if London opts for lighter-touch regulation, he cautioned. “We'll just get offered what's offered to the EU regardless.”
A middle path between the EU and US could end up being “the worst of both worlds, because we don't get the benefits of strong regulation [and] we don't get the benefits of extreme competition,” Flanagan said.