The Ecosystem: UK companies welcome pro-innovation pitch for new AI regulation

18 Apr 2023 | News

The UK is diverging from the EU in its proposed light-touch approach to AI regulation. The plan is going down well with companies, but much still depends on how it is implemented

Michelle Donelan, UK Secretary of State for Science, Innovation and Technology, says the government's new, light-touch approach to AI regulation is 'based on strong principles'. Photo: Michelle Donelan MP / @michelledonelan / Twitter

The UK is proposing a light-touch approach to regulating artificial intelligence (AI), and following the publication of a government white paper containing further detail on how the system will work, there is cautious optimism that it is on the right track.

“This is a very pragmatic approach, and as innovators we are pleasantly surprised,” said Brian Mullins, chief executive of Oxford AI spin-out Mind Foundry. “I don’t think it carries any weakness relative to trying to centralise the process.”

Lila Ibrahim, chief operating officer of Alphabet-owned DeepMind, is also positive. “The UK’s proposed context-driven approach will help regulation keep pace with the development of AI, support innovation and mitigate future risks,” she said.

Investors also voiced support. For Moray Wright, chief executive of AI investor Parkwalk Advisors, the ideal is to balance flexible regulation that enables innovation with measures that ensure sustainable and ethical technological development. “The AI white paper marks a proportionate step forward, and we hope the UK government uses this opportunity to encourage further investment in this thriving sector,” he said.

With the publication of the white paper it is now clear the UK and the EU are following significantly different paths on AI regulation. The EU is creating technology-specific legislation in the form of the EU AI Act, which envisages new regulatory bodies for AI and will set levels of scrutiny and control according to the perceived risk involved.

In the UK there will be no new legislation (as yet) and no new regulatory body for AI. Instead, responsibility will be passed to existing regulators in the sectors where AI is applied.

This approach is elaborated in the March white paper, in particular setting out the principles that regulators must take into consideration when handling AI. These cover safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.

For the moment these principles are advisory; the government will decide at a later date whether to legally oblige regulators to follow them.

Meanwhile, regulators will receive central support to help them understand AI, and in particular to assess risks. This ‘central risk function’ will also be responsible for catching AI applications that appear to be escaping scrutiny or that represent a particular danger.

The government’s decentralised approach makes sense to companies in the UK. “It's obviously still evolving, but I think the UK government is taking a very practical and business-centric approach,” said Robert Newry, co-founder and chief executive of Arctic Shores, a company using AI in recruitment services.

“The guidelines are very reasonable, and the principles they represent are very important,” said Manjinder Kainth, co-founder and chief executive of Graide, which has developed a system that uses AI to assist educators in grading student assignments and giving tailored feedback.

The approach is also seen as promising by venture capital companies active in AI. “I’m cautiously optimistic,” said Manjari Chandran-Ramesh, partner at Amadeus Capital Partners. For one thing, the principles are already familiar to the AI industry, so the regulations will be a useful step towards codifying this work. But passing oversight to existing regulators has its pros and cons.

On the plus side, issues specific to certain markets can be dealt with by the relevant regulator, while other sectors are not burdened with catch-all regulations. “Working on a case-by-case basis gives you flexibility in resolving problems and in interpretation,” Chandran-Ramesh said. “But I worry that some might fall between the cracks, or that the onus will fall on companies to decipher the signals coming from different regulators.”

Mullins is more positive about the sector-based approach. “When the stakes are high, people need to understand the impact [of using AI], and the only way to understand the impact is to understand the industry or the sector those decisions are being made in,” he said.

Mind Foundry was spun out of the University of Oxford in 2016 by Stephen Roberts and Michael Osborne, two professors of machine learning. Their goal was to apply AI to high-stakes problems in the public and private sectors, and the company has gone on to serve customers in insurance, defence and security, and government.

Mullins gives insurance as an example of a sector that should prove adept at handling AI. Its regulators already oversee models for statistical analysis of risk, so should have little problem appreciating the difference AI does or does not make. “In other sectors, the regulators will have further to go, but I think it is still the better decision to let the domain experts consider specific impacts,” he said.

Education and recruitment

A more challenging test case for the UK regulations will be AI in education. This is partly because the country has a poor history in this area, with an algorithmic approach to predicting exam results during the COVID-19 pandemic going badly wrong, and partly because there is no obvious regulator.

Possibilities for overseeing Graide’s AI teaching assistant include Ofsted, the government’s schools inspectorate, or the Office for Students, which oversees quality in higher education. While regulating AI systems would be a departure for both bodies, Kainth points out that they will also have to get to grips with related issues, from the use of AI by pupils and students, to how AI should feature on the curriculum.

However, he is confident that Graide will not have a problem answering the questions likely to be raised by this approach. “The issues outlined are issues we’ve been thinking about for years,” he said.

Graide was set up in 2019, building on the founders’ experience as teaching assistants at Birmingham University. Its system uses AI to learn how educators give feedback, and then makes suggestions based on previously graded answers. This speeds up the grading process, and makes it more consistent.
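Graide has not published its implementation, but the basic mechanism described above, matching a new answer against previously graded ones and reusing the educator's feedback, can be sketched in a few lines. The function names and the simple bag-of-words similarity below are illustrative assumptions, not Graide's actual method:

```python
from collections import Counter
from math import sqrt

def tokenize(text: str) -> Counter:
    # Represent an answer as a bag of lowercased words.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def suggest_feedback(new_answer: str, graded: list[tuple[str, str]]) -> str:
    # graded: list of (previous answer, feedback the educator gave it).
    # Suggest the feedback attached to the most similar previously graded answer.
    vec = tokenize(new_answer)
    best = max(graded, key=lambda pair: cosine(vec, tokenize(pair[0])))
    return best[1]

graded = [
    ("force equals mass times acceleration", "Correct - full marks."),
    ("force equals mass times velocity", "Close, but velocity should be acceleration."),
]
print(suggest_feedback("force is mass times acceleration", graded))
```

A production system would use learned text embeddings rather than word counts, but the principle is the same: the more a new answer resembles one an educator has already marked, the more confidently its feedback can be suggested for reuse.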

But while Graide has built transparency and explainability into its system already, these qualities may be harder to introduce retrospectively into an AI system. “Companies using black-box, neural network-based systems might find it much more of a barrier,” Kainth said.

The use of AI in recruitment is another area where the choice of regulator is unclear and the public is sensitive to possible negative impacts. Companies such as Arctic Shores are, in turn, concerned about being dragged into an over-reaching regulatory system.

The company designs behaviour-based assessments for recruitment and automated systems to deliver them, in particular to sift through large volumes of applicants. Some machine learning was used in the original design process when the company was set up in 2014, but delivery is based on plain algorithms. “From a technologist’s point of view, we don’t use AI, but from the point of view of the public and the EU rules, we do use AI,” Newry said.

The most advanced test case for regulation in this sector is New York City, which in 2021 announced a law prohibiting employers from using AI and algorithm-based technologies for recruiting, hiring or promotion unless these tools had been audited for bias. After two postponements, the law is due to come into effect on 6 May, with enforcement beginning on 5 July.

Given the blunt and all-encompassing nature of a legal approach, this seems heavy-handed to Newry, as does the EU AI Act. He prefers the UK approach, which rests on existing laws on equal opportunities, discrimination and human rights.

“We do need to ask if these laws can deal with this new form of technology, particularly where it may not be possible for an individual to determine whether bias has been introduced through the use of AI,” he said. “And that’s where the principle of transparency is really important.”

Innovation friendly

The UK government’s ambition to build a regulatory framework that encourages innovation in AI has also received a cautious welcome. “I don’t know if it will encourage more innovation, but it certainly won’t prevent innovation. Maybe that’s the best you can do with a document like this,” Mullins said.

To capture the full benefit, the government will need to follow up. “You start off with a framework that does not discourage innovation, and then look at how you support the businesses that are operating within this framework,” he said.

Kainth is also positive about the overall effect on innovation, although he does have qualms about the level of collaboration required by the planned system. “It’s very difficult to move quickly when you have large, complex organisations trying to work together, alongside a new set of regulations.”

But this risk is counterbalanced by the government’s clear backing for AI. “Provided we don’t get bogged down in the nuances, and the light touch outlined can be achieved, it should be a net positive,” he said.

Newry thinks that the UK approach should avoid the chilling effect of the EU system, which could stifle new companies before they get off the ground. “If you have to show that you comply with all these regulations, unless you are very well funded from the get-go, it will kill off some innovation,” he said.

For Chandran-Ramesh, the regulations should not have a negative impact on early-stage investment decisions, given that VC firms like Amadeus are used to managing both market uncertainty and technical risk. But the calculation might be different for later-stage investors, who may be reluctant to add AI regulation to their due diligence. “Later stage investors will want to find out how the company has managed so far, and how it intends to manage in future and work from there,” she said.

Overall, she is positive. “I don’t think that good companies will find it difficult to raise funds. It might take a little bit longer, but the macroeconomics mean that it is taking longer.”

The UK proposals are also being watched closely by companies based in the EU that work with UK customers, such as Berlin-based Apheris. Founded in 2019, it helps clients build and train large-scale AI models using sensitive and highly regulated data.

“When I first read [the UK white paper] I was positively surprised by the pro-innovation approach, which I think sends a very strong signal to young businesses,” said Robin Röhm, the company’s co-founder and chief executive.

However, he thinks that it could have provided more clarity about how the system will work in practice. In particular, its focus on AI models neglects the data they learn from and work with. “What I expect are clearer guidelines about how data should be governed, how it can be accessed, and which data can be used for which purposes,” he said. “I’d also like to see clearer guidelines around monitoring and quality assurance for AI systems.”

That said, if Röhm were starting an AI company today, he would prefer to do it in the UK than in the EU. “It’s better to have the right intentions than to propose a system that will ultimately hinder innovation,” he said.

Others are wary that implementation may undermine the UK’s positive philosophy. “You can have a great idea, but then the execution lets you down, and at the moment it is hard to say where the EU and UK will end up,” Kainth said.

Chandran-Ramesh is also cautious. “I’m not convinced that this gives the UK ecosystem an edge over the EU. They are just two different approaches,” she said.

Most companies will have to work with both the UK and EU systems, and it will take time before the relative benefits are clear. “I’m concerned about how the flexibility is going to be handled [in the UK system] and what tailor-made solutions will come forward to address emerging needs,” Chandran-Ramesh said. “That will tell us if it means longer timelines, or if the flexibility means shorter timelines.”

Elsewhere in the Ecosystem…

  • Following a pilot in 2022, the first full round of Horizon Europe’s WomenTechEU programme has selected 134 female-led deep-tech companies for support. Each company gets €75,000 to spend on innovation and growth, while their female founders are offered mentoring and coaching. Spain had the greatest success with 27 winners, while Portugal provided seven of the 20 winners from widening countries.
  • The European Commission is seeking views on its Technology Transfer Block Exemption Regulation and related guidelines, which are due to expire on 30 April 2026. The rules are intended to strengthen incentives for research and development, help spread technologies, and promote competition. Comments are invited by 24 July.
