Shared values and principles did not stop the US and Europe from going down different paths in data protection legislation. There is a risk that, despite moves to promote harmonisation, the same thing will happen in AI.
Legislators need to learn lessons from the past and the problems caused by divergent approaches to data protection regulation, as they grapple with how to limit the risks associated with artificial intelligence.
Convergence is key, warns Dragoş Tudorache MEP, co-rapporteur on the EU’s AI Act, which is on track to be adopted in November. There must not be a repeat of what happened with data privacy, where the EU’s General Data Protection Regulation is at odds with the US approach, he said.
While Tudorache is confident that the US is committed to regulating AI, the question is what shape these rules will take. Senate majority leader Chuck Schumer has been vocal in wanting legislation, and senators Richard Blumenthal and Josh Hawley recently proposed a bipartisan framework to set up a licensing regime that would be administered by an independent oversight body.
“We have to accept that diversity of legislation is not necessarily a doomsday scenario,” Tudorache told a conference on the Regulation of AI, Internet & Data (RAID) in Brussels on Tuesday. “We are going to have two different approaches, but if we remain aligned politically at the level of values and principles, then work together on standards, we will be good,” he said.
As the EU AI Act makes its way through the legislative process, the G7 group of countries is working on a joint code of conduct, as agreed at their most recent meeting in Hiroshima, Japan. This has a particular focus on generative AI systems such as ChatGPT, and could be needed to fill the legislative void before the EU AI Act takes effect in two or three years.
There were plans for a transatlantic voluntary code of conduct within the EU-US Trade and Technology Council, which would then be put before G7 leaders. However, these appear to have been parked in favour of working directly at the G7 level.
In a separate initiative, India recently assumed the chair of the Global Partnership on AI, which has 29 member countries. Meanwhile, China is planning its own AI regulatory system.
In the US, 15 companies have agreed to the government’s voluntary commitments for the safe and transparent development of AI. The UK is also seeking to take a leading role in shaping international regulations, and in November will host a global AI safety summit.
In her keynote speech to the conference, Elizabeth Kelly, special assistant to the President at the White House National Economic Council, said the US and EU have “shared values and shared principles”, and by collaborating they could “lead the way on the development of a robust regulatory framework to govern the creation and use of AI worldwide.”
Far from seeing the numerous international initiatives as contradictory, Kelly expressed support for the leadership of Japan in the Hiroshima process, of India as chair of the Global Partnership on AI, and the UK with its upcoming summit. “These multilateral forums are critical to ensuring that global standards reflect the input of developing countries and that we leverage AI to support inclusive economic growth around the world,” Kelly said.
But shared principles now may not be enough to maintain a harmonised approach in the long term, noted Julie Brill, Microsoft’s vice president of regulatory affairs. “We should learn some lessons from the privacy experience, which began in 1980 when both the US and the EU adopted harmonised principles, but then went down different paths,” she said.
The importance of harmonisation will vary according to how AI is used, said Robert MacDougall, digital markets regulatory strategy director at Deloitte. “Perhaps for certain types of AI, we won’t tolerate international divergence, but for other types there’ll be a much higher tolerance level,” he said.
MacDougall suggested international organisations have a role to play, offering as a comparison the UN’s technical specifications for seatbelts, which came into force in 1970 and were followed by a wave of national laws imposing the use of seatbelts.
For now, the EU AI Act is the most comprehensive legislation in the world, and there are hopes this will create a so-called ‘Brussels effect’, serving as a basis for other legislators around the world.
The AI Act has already inspired other countries in their approaches, claimed Juha Heikkilä, AI adviser in the Commission’s technology directorate general, noting that the concept of legislating according to the level of risk different AI systems present is being adopted elsewhere.
“If I look at what’s been proposed in Canada, where an AI and Data Act is in the pipeline, it talks about an impact-based approach. But when you scratch the surface, it has similarities with our risk-based approach. Brazil also has a bill in the pipeline which is very similar to our approach,” Heikkilä said.
Earlier this year, a joint statement from the EU-US Trade and Technology Council reaffirmed a commitment to a risk-based approach to AI. This was also stated in a joint roadmap published in December 2022, which is meant to inform approaches to AI risk management.
Heikkilä said the Commission supports international discussions such as those taking place within the G7 to coordinate on certain guiding principles. “With likeminded countries it is easier to take a step forward first, then we can hope to gather momentum to move forward.”