Is global AI harmonisation actually achievable?

04 Apr 2022 | News

Experts detect the emergence of a patchy international convergence around the use of artificial intelligence

Amid rising geopolitical tensions and intensifying polarisation, building a global consensus around the use of artificial intelligence (AI) is likely to be tough. Yet experts at a recent Science|Business Data Rules workshop were cautiously optimistic that the necessary political will exists.

If they fail to achieve some form of coordination, all of the world's major powers will suffer, according to MEP Brando Benifei, one of the European Parliament's rapporteurs for the EU's AI Act, which could arrive on the statute books next year. “I think it would be a problem, not just for Europe, but for all the players involved because artificial intelligence will be a very pervasive technology,” he said. “Having two different contexts of application, standards and regulation will make it complicated to deal with all the activities that now interconnect the world. So I think we really need to put an effort into avoiding this situation.”

While contending that Europe is now leading the world in developing horizontal legislation for AI, Benifei said: “It's striking that China has, in the last year, partly caught up in the race to regulate… In fact, China has recently put privacy and cyber-security legislation in place and is now also regulating the use of algorithms. The United States, on the other hand, is apparently more reluctant to regulate artificial intelligence, seeing in Europe's choices the intention to penalise American companies.” He described such manoeuvrings as a “power struggle” with an uncertain outcome.

The EU still needs to achieve an internal consensus about the best way to regulate AI. Benifei said that the current draft of the AI Act may need to be amended following “reflections” on the distribution of responsibilities across the AI value chain, the governance structure and the criteria that determine whether an AI system is deemed high risk or low risk. He also expects Parliament to seek to strengthen the protection of democracy and democratic processes, as well as extending the list of banned activities to encompass so-called social scoring by private companies.

Different geographies, different drivers

While the EU is focused on ensuring AI can be trusted, Asian governments' primary objectives vary. Vidushi Marda, senior programme officer at ARTICLE 19, which campaigns worldwide for freedom of expression, suggested that both India and Singapore are focused on encouraging innovation, while in China, most of the interest and momentum around AI regulation is through the lenses of cyber security, consumer protection and national security, as opposed to considerations of individual rights and democracy. “So the guiding factor is, I think, a little bit different.” At the same time, Marda cautioned against making “overly simplistic” assumptions about Asian countries' motivations.

Based in India, Marda flagged a big appetite for advanced technology in her home country. “Different states in India want to have a 360 degree view of citizens; the AI market in India is booming.” Noting that many Asian countries don't yet have data protection laws, Marda pointed out that there are very low barriers to entry for start-ups developing AI-based applications for credit scoring or smart city applications, such as predictive policing and facial recognition. However, the Indian government is now looking to regulate data protection, even if the exceptions afforded to state actors are unnecessarily broad, Marda explained. “AI systems do feature roughly in the proposed data protection bill,” particularly in the sections focusing on biometric data, she added. Marda hopes that facial recognition, voice-based recognition and emotion recognition will be regulated down the line, although that isn’t on the government’s agenda at the moment.

Will the EU AI Act be a template for others?

Although India is a member of the Global Partnership on AI (GPAI), an international effort to build a common approach, Marda doesn't think India's policymakers will necessarily see the EU's AI Act as a blueprint. “I don't know if the template effect will happen” in a similar fashion to the GDPR, she said, where India takes the same legislation and changes a few things. In this case, the priorities and the guiding principles are different: India is focused on safety, security and business incentives, as opposed to a human rights-first approach, Marda explained.

The extent to which the EU Act does bring about some level of global harmonisation could vary by application. Whereas multinational companies are likely to ensure their websites abide by the EU Act, regardless of where they are accessed, they may not follow the legislation for other processes outside the EU, explained Alex Engler, a fellow in governance studies at The Brookings Institution. As the proposed EU Act would classify algorithmic hiring software, for example, as high-risk, such systems would have to comply with various transparency, technical and quality requirements. As a result, multinational companies may only make changes to their recruitment systems inside the EU, Engler suggested.

However, he pointed out there is likely to be a “Brussels effect” on global AI platforms – foundational software that underpins AI systems. “If the EU AI Act does ban social credit scoring, that probably will hit quite a few platforms, or at least services, that are doing data sharing,” Engler noted. In general, he believes greater international collaboration could lead to better rules and tools. “There's not as much investment as I'd like there to be in open source AI software for transparency, for bias detection, for evaluating data sets,” he added.

Greater political will in the U.S.

Although the U.S. government has become more actively involved in international efforts to build a consensus around AI under President Biden, it isn't attempting to create horizontal AI regulation akin to the EU's proposed AI Act. Moreover, the Biden administration has been slow to implement “an executive order [by the previous administration] that should have led to a broad systemic understanding of how the US has the current legal authority to regulate algorithms,” added Engler. “This isn't passing a new law, just seeing how current laws affect AI governance... that's a necessary step we haven't seen.”

Still, AI issues are being discussed in the new EU-U.S. Trade and Technology Council. “The European Commission is quite strongly engaged and involved in these discussions and it has top-level political backing,” noted Juha Heikkilä, adviser for AI at DG CNECT, European Commission. “The work is led by two executive vice presidents at the European Commission. So […] that shows you the kind of level of importance that it has on our side.”

Heikkilä also noted that the US began developing a Bill of AI Rights (the Bill of Rights for an Automated Society) in the autumn of 2021. “And so there are steps there, which from our perspective, perhaps go in the same direction as we've been moving things. So, I think that the political will is something that is available now more than it was perhaps previously.” Although the federal government in the U.S. isn't following the same regulatory approach as the EU, at the level of individual states, “there is regulation enacted and some of that actually goes very much in the same direction as the [EU] AI Act has outlined,” Heikkilä added. “There is more appetite now to regulate AI in a reasonable way. Not just in Europe, but also elsewhere.”

Other speakers agreed that there is a growing trans-Atlantic consensus around the governance of AI. Raja Chatila, professor emeritus at ISIR, Sorbonne University, pointed to signs that the EU's risk-based approach is gaining traction in the U.S. “Five days ago, NIST (the National Institute of Standards and Technology) in the United States issued an AI risk management framework,” he said. There is also progress on global standards and certification, Chatila added, describing them as a means for “soft governance”.

The devil is in the detail

One of the most fundamental challenges for proponents of international harmonisation is reaching an agreement on what constitutes AI and what is simply conventional software. Chatila suggested defining AI is difficult because it includes many techniques stemming from applied mathematics, computer science and robotics. “So it's very wide and all these are evolving.” However, he noted that since 2012, “when we started to speak about deep learning etc.,” the foundational methods for AI haven’t changed much. “We are speaking about the same systems as 10 years ago.” He added that the EU AI Act has mostly adopted the OECD definition of AI, which is also the basis for the GPAI.

A key driver behind international cooperation is the need for interoperability policies for trustworthy AI principles, AI systems classification, the AI system lifecycle, AI incidents and AI assurance mechanisms, explained Karine Perset, head of unit at the OECD's Artificial Intelligence Policy Observatory. “Governments need to focus on achieving this interoperability, using the same terms to mean the same thing, even if we set the red lines - what's okay and what's not okay - at different levels.”

The OECD is advocating a common approach to classifying specific AI applications and distinguishing them from foundational AI models, which need to be assessed and regulated differently. Perset's team and partner institutions are also developing an AI incidents tracker to inform this risk assessment work by “tracking AI risks that have materialised into incidents and monitoring this to help inform policy and regulatory choices.”

One of the OECD's goals is to help policymakers create a framework through which they can identify the key actors in each dimension of the AI system lifecycle. “This is really important for accountability and risk management measures,” Perset explained. “And we see some convergence in ISO (the global standards body) and NIST, for example, on leveraging the AI system lifecycle for risk management as the common framework and then conducting risk assessments on each phase of the AI system lifecycle.” But she acknowledged that reaching an international consensus on how to classify risk may take some time and political impetus.

Defining risk can be risky

Controversially, the draft EU AI Act only has two categories of risk (in addition to a banned category). For Evert Stamhuis, professor at Erasmus School of Law, Erasmus University Rotterdam, this approach is too crude. Noting the growing use of AI in the healthcare domain, Stamhuis said the “huge diversity in terms of risk” in this arena cannot be reflected in just two risk categories. “You cannot achieve any certainty if you have those simple categories.”

More broadly, the dynamic nature of AI makes developing a durable definition difficult. “A definition for a longer period of more than five years is going to be totally unviable,” Stamhuis contended. “One of the difficulties with European legislation is the process of getting it enacted is so complicated,” he added. As a result, there is “huge resistance” to quickly modifying it, “which usually brings the European institutions to allocating these kinds of flexibility to bylaws and the side mechanisms, but this is so fundamental” that it requires an open political debate, he said, cautioning against the use of a side mechanism in this case.

More fundamentally, Stamhuis called into question the need for dedicated regulation for AI. He also harbours doubts about the EU AI Act's reliance on certification of AI systems. “What are we actually certifying? Is it the models, is it the systems?” he asked. “Once an AI system or model has been certified, what happens if the process changes or the model improves itself?” Stamhuis added. “If the system is fluid, what is the value of a CE certification given at a certain moment in time?”
