What this week’s flurry of AI policymaking means for researchers

02 Nov 2023 | News

EU universities have been all but left out of the UK’s AI safety summit, while a ground-breaking new executive order from the US is arguably more important, as it begins to demand outside scrutiny of the potentially dangerous capabilities of AI models

The AI Summit at Bletchley Park. Photo: Kirsty O'Connor / No 10 Downing Street / Flickr

It’s been a whirlwind week for global artificial intelligence policymaking. Today is the final day of an AI safety summit convened by the UK, which followed a major executive order on AI from the White House earlier in the week, and a new code of conduct from the G7.

These developments have prompted an explosion of lobbying from all sides. Civil society groups worry they are being locked out of discussions in favour of big technology companies; AI luminaries fret that humans could lose control of their AI creations; and libertarian investors panic that all this new regulation will kill innovation.

But what does all this mean for academic researchers, who have been left in the dust by US and Chinese tech firms when it comes to building leading-edge AI models? And for the EU’s own AI Act, which is slowly making its way through EU institutions?

Probably the most eye-catching development this week has been the UK’s AI safety summit, held at Bletchley Park, once home to World War Two code-breakers, which managed to get western and Chinese officials in the same room, and to some extent on the same page, about AI.

The guest list was particularly fraught: there was controversy in the UK over whether to invite China, and several European leaders, including those of France and Germany, declined the invitation, although Commission president Ursula von der Leyen did attend.

“We need to nurture a community of outstanding, independent scientists, with access to resources to evaluate the risks of AI, and free to call out those risks,” she said during a speech at the summit.

EU universities were almost entirely excluded from the guest list. Of the 46 universities, think tanks and research centres invited, just one EU institution – University College Cork – made the cut, while Quebec alone had three institutions in attendance. Whether this is an indictment of the EU’s AI prowess, or a reflection of the UK’s post-Brexit worldview, is unclear.

The main result of the summit is the Bletchley Declaration, signed by all the world’s major powers (bar Russia, which was not invited).

It covers both types of risk people associate with AI: the real and present dangers of algorithmic discrimination, say, and the future risk of more powerful systems spinning out of our control and causing “serious, even catastrophic, harm.” 

It’s a kind of compromise between those – typically in the tech world – whose biggest worry is that a super-smart AI will lead to human extinction or subjugation, and others – often representing civil society – who think these kinds of apocalyptic scenarios distract from tackling more mundane, but real, problems like AI-generated disinformation right now.

Two camps compromise

“I think it does one very important thing, which is try to reframe the political debate on AI,” said Gautam Kamath, a senior adviser at the Centre on Regulation in Europe, a Brussels-based think tank. “It makes no sense to have these sort of two camps.”

The declaration openly states that AI risks are “best addressed through international cooperation”, and some attendees were glowing about the conversation between countries that occurred.

“The UK summit helps build momentum towards international AI governance,” said David Marti, AI programme manager at the Swiss think tank Pour Demain, who is currently in the UK for the summit. “It is a great effort by the UK government to bring all these states – including China – and other actors together to discuss AI Safety in a collaborative manner.”

Separately, Chinese and western AI experts issued a joint warning that leading edge AI systems “may soon be capable enough to help terrorists develop weapons of mass destruction.”

The UK’s safety summit isn’t just a one-off. South Korea will host another in six months, and France in a year.

But the Bletchley Declaration remains just that – a declaration. “We urgently need to move past position statements – there have been a lot of those in recent months – and into concrete proposals about what to do next,” said Gary Marcus, chief executive of the Center for the Advancement of Trustworthy AI, in a statement.

Washington acts on AI

Arguably far more significant in concrete terms is a lengthy executive order from the White House, released on October 30. “It’s pretty comprehensive,” said Samuel Kaski, an AI professor at Manchester and Aalto universities.

It enacts a huge range of measures, from fair AI in the justice system to citizens’ privacy, but one of the most significant – and controversial for industry – is a demand that tech giants share the safety test results of their AI models with the US government.

This represents a step – albeit a small one with potential loopholes – towards outside oversight of AI models to make sure they aren’t capable of anything dangerous when released into the wild.

The US National Institute of Standards and Technology, which is working on voluntary guidance for tech firms developing AI, will draw up guidelines on how companies need to test their AI systems for dangerous capabilities, so-called “red teaming”.

Kaski hopes that academic researchers are included in designing these tests. “It’s a fundamental research question,” he said.

A research paper released this week by Oxford-based scholars revealed how academics are frequently shut out from scrutinising the AI models of big tech firms, on grounds of commercial confidentiality or safety.

Research agendas trying to probe what leading-edge AI systems can do are “being dropped due to insufficient access to suitable models,” it found.

“Researcher access [to AI models] is neglected,” compared to safety testing and red-teaming, said Ben Bucknall, one of the authors. One step forward this week, however, is the launch of the US Artificial Intelligence Safety Institute, which should “provide testing environments for researchers to evaluate emerging AI risks and address known impacts.”

More action on research

The US executive order also contained a grab-bag of other research-related measures, including research to support privacy; the launch of a National AI Research Resource, to give researchers and students access to AI data and resources; and more grants for AI research in fields like healthcare and climate change.

Significantly for the EU’s AI sector, the US executive order also pledges to “expand the ability of highly skilled immigrants and nonimmigrants with expertise in critical areas to study, stay, and work in the United States by modernising and streamlining visa criteria, interviews, and reviews.”

Although the current partisan political stalemate in the US has prevented the passing of more comprehensive immigration legislation, a new push for global AI talent could denude the EU even further of its experts.

Democracies forge their own path

The third big development this week emerged from the G7 group of large democracies. The countries agreed on both guiding principles for organisations trying to create new AI models, and on an international code of conduct for advanced AI systems.

The guiding principles included a fresh call for more funding of AI safety research by governments, and for tech companies themselves to do more of this kind of work.

These declarations obviously don’t have any force in and of themselves, but they do help make sure individual governments come up with common approaches to tackling AI, said Kamath. “The G7 code of conduct is really vital,” he said. “Even though it's not legally binding, it is a necessary step because the next step is for governments to come up with different standards, common approaches or procedures.”

One question, though, is whether western democracies will forge their own coordinated path on AI regulation, leaving other countries – most notably China – out of the process.

The G7 route, known as the Hiroshima Process, naturally leaves Beijing out. Other organisations helping to shape global norms, such as the Organisation for Economic Cooperation and Development, and the Global Partnership on Artificial Intelligence, also do not have China as a member.

“We don't want to have a world where you have the west versus the rest,” Kamath said. “Or have the global south, which is not included in these discussions, being like an open playing field for authoritarian models [of AI] to come in. So it's very important to at least understand what's happening in China.”

Whither the AI Act?

Where does all this leave the biggest beast in the AI regulation jungle – the EU’s own AI Act?

Last week, Reuters reported that MEPs, the Commission and Council had struggled to reach consensus as they try to hash out a final version of the act in the so-called trilogue process.

Despite this slow progress, it’s still seen as the only measure globally with the legislative force to compel tech companies to take more action on AI safety. It explicitly sets out to ban certain uses of the technology, like real-time facial recognition in public, in a way the US executive order doesn’t touch.

“The UK summit and the US executive order are welcome and do an impressive job in terms of problem identification, but less on problem mitigation […] these do not equate to regulation,” said Marti. “The EU AI Act therefore remains the crucial piece of AI legislation if we want to set meaningful guardrails any time soon.”

In reality, neither the US nor the EU is going to “win” a race to set global standards for AI, said Kamath. “The companies are American […] the threat of legislation is Europe. So of course, they have to work together.”

The EU is still “ahead of the curve” in bringing actual legislation to the table, he said, and many countries will start to incorporate concepts from the Act into their own rules.

This article has been amended to clarify that David Marti is not part of the official Swiss delegation to the UK's AI safety summit.
