UK and US to work together to develop safety tests for advanced AI systems

02 Apr 2024 | News

Deal reflects growing government efforts to manage AI roll-outs safely. AI will also be on the agenda of EU-US Trade and Technology Council meeting later this week

UK technology secretary Michelle Donelan (right) and US commerce secretary Gina Raimondo signing the partnership on science of AI safety. Photo credits: Michelle Donelan / X

The UK and US signed a memorandum of understanding on 1 April, committing to work together to develop tests for the most advanced artificial intelligence models.

The UK and US AI Safety Institutes have agreed to build a common approach to AI safety testing and to share their capabilities, including looking into potential personnel exchanges between the two institutes. They will also perform at least one joint testing exercise on a publicly accessible model.

The UK-US deal is the latest governmental effort to assess and limit safety risks of the tech industry’s rapid roll-out of increasingly powerful AI systems. A top-level EU-US meeting later this week is expected to address similar issues.

As for the UK-US deal, British science minister Michelle Donelan affirmed in a statement that “this agreement represents a landmark moment, as the UK and the United States deepen our enduring special relationship to address the defining technology challenge of our generation…. Ensuring the safe development of AI is a shared global issue. Only by working together can we address the technology’s risks head on and harness its enormous potential to help us all live easier and healthier lives.”

Likewise in a statement, US commerce secretary Gina Raimondo said the partnership would accelerate work on both sides of the Atlantic to address risks to national security and to wider society. “Because of our collaboration, our Institutes will gain a better understanding of AI systems, conduct more robust evaluations, and issue more rigorous guidance.”

While the announcement of the US-UK deal received a warm welcome, some called for a more globally coordinated and collaborative agenda on AI regulation, one that specifically includes European voices.

“Failure to integrate the spectrum of stakeholders raises the risk of fragmented approaches taking hold across major regions,” said Anita Schjøll Abildgaard, CEO and co-founder of Iris.ai, a science assistant tool for research navigation and text understanding.

The partnership takes effect immediately, and follows commitments made at the AI Safety Summit at Bletchley Park in the UK last November, following which both countries established their AI Safety Institutes. The summit focused on the risks posed by so-called frontier AI, powerful general-purpose models which the UK fears could pose a threat to national security, for example by facilitating the creation of biological weapons.

A European Commission spokesperson told Science|Business the EU welcomes cooperation among democracies, and shares the objective of ensuring the responsible development of trustworthy AI. The EU will continue to work with the US and the UK on AI stewardship, bilaterally and within organisations such as the G7, the UN and the Council of Europe.

At the EU-US Trade and Technology Council (TTC) in Leuven on 4-5 April, the Commission will set out the cooperation between the EU AI Office and the US AI Safety Institute.

“This will deepen our collaboration, particularly to foster scientific information exchange among our respective entities and affiliates on topics such as benchmarks, potential risks and future technological trends,” the spokesperson said. “We are also in close contact with the UK on the topic of AI safety.”

This week’s meeting, the sixth such transatlantic trade policy gathering since 2021, will include EU commissioners Margrethe Vestager, Valdis Dombrovskis and Thierry Breton alongside Raimondo and Antony Blinken, the US secretary of state.

As most of the hottest recent AI technologies have originated in US industry, AI development and regulation has become a contentious issue in transatlantic trade relations. In March, the European Parliament passed the AI Act, a comprehensive regulation which aims to harmonise rules on AI across the 27 member states. The law has been praised by some as a much-needed measure to prevent reckless AI development, and criticised by others as an interventionist hindrance to free tech trade.

Meanwhile, in the US, the Biden administration has been gradually increasing its oversight of AI development – so far, in the absence of any legislation in the politically polarised US Congress.

In October 2023, US President Joe Biden signed an executive order establishing new standards for developers of AI systems, and introducing measures to guard against AI being used to create biothreats.

Over the past five years, 17 of the 50 states in the US have enacted legislation to regulate AI.

Last week, the Office of Management and Budget announced new guidelines for federal departments and agencies, requiring them to designate a Chief AI Officer and develop a strategy to advance the responsible use of artificial intelligence.

Editor’s note: This article was updated 3 April 2024 to include comment from industry and the European Commission.
