US-EU agreement on artificial intelligence seen as a swipe at China – but little else for now

05 Oct 2021 | News

There is a commitment to ‘mutual understanding’ but beyond the warm words little evidence of common ground on AI regulation

The US and EU are talking up the significance of their new pact on artificial intelligence, but a closer inspection indicates the two sides still have precious little in common when it comes to regulating the technology – except a desire to take the moral high ground against China.

The long-awaited agreement was reached when the Trade and Technology Council met for the first time on 29 September in Pittsburgh, with Brussels and Washington vowing to make sure AI systems are “innovative and trustworthy” and “respect universal human rights and shared democratic values”.

The EU and US will “seek to develop a mutual understanding on the principles underlining trustworthy and responsible AI,” the agreement says. But exactly what this means in practice remains to be fleshed out. While both sides said they have noted each other’s domestic regulatory proposals on AI, there is no mention of coordinating their approaches.

One of the most concrete areas of agreement was a condemnation of “rights-violating systems of social scoring.” This is widely seen as a swipe at China’s social credit system, a data sharing programme purporting to measure the trustworthiness of businesses and individuals.

“The European Union and the United States have significant concerns that authoritarian governments are piloting social scoring systems with an aim to implement social control at scale,” the joint statement says. The EU’s proposed AI Act already contains a prohibition on social scoring systems.

China is the “elephant in the room”, said Inga Ulnicane, senior research fellow and expert in technology governance at De Montfort University in the UK.

While the country is “never explicitly mentioned”, the “many wordings about democratic values, human rights, authoritarian governments and social scoring reminds of China,” she said. “One way of reading this joint EU-US statement is as an attempt to differentiate themselves from China.”

For Sébastien Krier, technology policy researcher at Stanford University, it is notable that the US has agreed to call out authoritarian social scoring systems. “That isn’t something the US government previously did so directly, at least at this level,” he said.

“I do view it as them trying to find common ground,” said Joanna Bryson, professor of ethics and technology at the Hertie School in Berlin. However, the actual agreement reached “seems minimal”, she said.

The wider picture, said Bryson, is one of US unease at the prospect of EU regulations on AI eroding the freedom of its biggest tech companies, with Washington fearful of a “Brussels effect,” whereby EU regulations proliferate outwards to set global standards. Some voices in the US see technology regulation almost as a front in a new “cold war” with China. “[They argue that] if you don’t accept the way the US is doing things, you’re empowering China,” Bryson said.

Banal system

Some see the condemnation of social scoring systems as an attack on a straw man. Western reporting of China’s social credit system, characterising it as having Orwellian potential to monitor and crush dissent, has been “overblown and incorrect”, said Daniel Leufer, Europe policy analyst at Access Now, an NGO that campaigns for digital rights.

“The actual system is, in most [respects], a relatively banal system for keeping track of administrative sanctions, and has nothing to do with AI,” said Leufer. “This applies to the [EU] AI Act’s prohibition on social scoring too: they are trying, badly, to prohibit a sci-fi application that doesn’t exist.”

The Trade and Technology Council declaration on AI is one of a number of areas, including investment screening and export controls, on which the US and EU agreed to cooperate as part of an attempt to revive transatlantic relations battered during the Trump era. 

While lacking in detail, it does contain a few other concrete commitments that are seen as useful steps forward. The EU and US “intend to discuss measurement and evaluation tools and activities to assess the technical requirements for trustworthy AI, concerning, for example, accuracy and bias mitigation,” the agreement says.

There is also a commitment to carry out joint research on the impact of AI on the workforce.

Ten working groups will be set up, looking at topics including technology standards; data governance and technology platforms; and misuse of technology threatening security and human rights.

“The future work of these groups seems to be crucial for if and what follows up from good intentions outlined in the statement,” said Ulnicane.

The Trade and Technology Council agreement further complicates an already tangled global landscape on AI regulation. It contains only a brief nod to the Global Partnership on Artificial Intelligence (GPAI), launched in 2020 under French and Canadian leadership, and incorporating 19 member states including the UK, India, South Korea and Australia, as well as the US and EU.

“Will it be complementary or will there be a competition with this EU-US forum emerging as a privileged partnership over more multilateral cooperation?” asked Ulnicane.

Also in the mix, in 2019 the OECD launched its own AI principles, which were adopted in a non-binding, adapted form by the G20 the same year. The latest EU-US declaration is more fulsome in its backing of the OECD principles, pledging to “uphold and implement” them.

On 4 October, at an event in Paris about putting the principles into practice, the OECD said that next year it would release a risk assessment framework to classify different AI systems, and a global “AI incidents” database to track failures and near failures.

At the event, Lynne Parker, director of the US’s National Artificial Intelligence Initiative Office, said the OECD is the “preferred venue” of the US to discuss AI governance.

The US wants to “lead the world in the development of trustworthy AI,” she said. The US National Institute of Standards and Technology is currently taking views from stakeholders as it works up its own AI risk management framework.

“I think there’s a lot going on, but I have no idea what venue will make a breakthrough,” said Bryson of the various fora now grappling with the creation of global rules. She herself sits on GPAI’s working group for responsible AI.

But with new AI regulatory ideas springing up everywhere from Singapore to the African Union, it will not be left just to the US and EU to hash out regulation of the technology. “I don’t think they will wrap it up as much as people think,” Bryson said.
