G20 statement marks first international agreement on how artificial intelligence should be governed – but also shows deep divisions between US, EU, Japan, China and Canada on the way forward
Around the world, excitement about artificial intelligence (AI) is spurring new investment, companies, labs, products – and plenty of work for those trying to figure out how to regulate it internationally.
In recent months, a flurry of activity among international organisations has resulted in the first agreed inter-governmental statement on the ethics of AI, endorsed by the 20 largest industrialised nations that are members of the G20. There are calls to go further – with international treaties, oversight bodies or coordinated global programmes. And several ministerial conferences, task forces and study groups have been churning out a series of studies and policy statements – from Montreal, Brussels, Beijing and beyond.
Indeed, it seems like “every day there are three to five ethical principles for AI that come out,” observes one European Commission official. Far from policy busy-work, however, it all adds up to a global effort for humanity, as it gropes its way towards agreement on how to manage this new, hyper-disruptive technology.
For Mona Nemer, chief science adviser to Canadian prime minister Justin Trudeau, there is a “very rapid realisation that we need to be making a dialogue with society: how do we want to operate this new technology?” Just as with cloning and stem cell technologies in prior years, so with AI today, she said. Scientists and politicians are recognising that “AI is a disruptive technology, and just because we can do something, society may not wish to do it.”
At the same time, all the international action is highlighting the big ideological differences in how AI should be regulated – if at all. Among the five major powers in AI development, the US and China have made an odd couple in international meetings, both arguing the virtues of economic growth and national autonomy. Meanwhile the EU and Canada have been urging “AI for Good” policies and calling for international coordination to preserve privacy and social solidarity. Japan, the fifth big AI power as measured by patenting and investment – and chair of the G20 meeting – acted as a broker, according to participants.
Innovation versus disruption
The problem for all these governments is striking a balance between banning unwanted AI applications and letting the technology – and the wealth it generates – grow as markets dictate. “The technology moves very fast, but governments move very slowly,” says Nathalie de Marcellis-Warin, professor at Polytechnique Montréal and CEO of CIRANO, a scientific advice organisation based in Quebec. Politicians “don’t want to block innovation,” she says. “But at the same time, they don’t want to let it harm society.”
So far, the AI funding bandwagon is racing ahead, with some analysts forecasting it could add as much as $3 trillion to global output between 2019 and 2026. But for all the cool new things AI could do, from diagnosing illness faster and making roads safer to managing electric grids and running city services more smoothly, there are lots of scary possibilities, too. There are warnings of out-of-control drone warriors, accidental nuclear war, mounting unemployment, rampant online fraud, and the death of democracy. Or as the title of one recent book put it, AI could provide “weapons of math destruction.”
Even science fans have concerns. US Rep. Bill Foster, an Illinois Democrat and physicist who is heading a Congressional task force on AI in finance, predicts voice simulation technology will soon be so good that, “People will be getting calls from software synthesising my voice, and soon enough my personality.” How, under those circumstances, are people to protect themselves from online fraudsters, especially ones operating internationally? And if there is a problem, in which country’s court system will the case be heard? While a formal treaty may not be necessary, Foster says, “I think that international standards are needed.”
Ethical principles
The most significant action was on 29 June, when G20 leaders signed a statement for the first time endorsing a few basic ethical principles for AI. On a quick read, the text is about as bland as a glass of milk, calling for a “human-centred approach to AI”, and “the responsible development and use” of the technology. But the spectacle of leaders from Vladimir Putin to Donald Trump approving the same text about a vital new technology is, well, unusual.
The G20 statement cites work by the Organisation for Economic Co-operation and Development (OECD), which has emerged as a pivotal player in AI governance. Before the G20 meeting, the OECD gathered national experts for four meetings to write a set of AI principles. “Every word was scrutinised,” one US official says. As a small example, the G20 statement says it is “drawn from” the OECD work; an earlier draft said “based on.” The hyper-subtle change was to accommodate G20 members, including China and Russia, that aren’t also OECD members. They didn’t want to appear to be endorsing a statement from a club to which they don’t formally belong, even if, legally, neither the G20 nor the OECD statement has any power to compel countries to act on it.
“The (OECD) recommendations are non-binding,” says Andrew Wyckoff, director for science, technology and innovation at the OECD. “But they are very important. It does represent a political commitment, with moral suasion.”
‘Devil in the details’
But there is also a wealth of policy left deliberately unspoken. The OECD and G20 statements are “a good step forward,” says Rebecca Keiser, who heads the US National Science Foundation (NSF) Office of International Science and Engineering. “It’s important that we came to an agreement. But the devil is in the details.”
For instance, the OECD principles say, “AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.” They go on to say that AI organisations “should be held accountable.” That wording, some OECD meeting participants say, could apply to software companies that produce bug-ridden AI code. But others in the same meetings dispute that interpretation, saying if the idea was to make it easier to sue Google or Huawei over bad software the wording would have been different: “liable” instead of “accountable.” (Representatives of multinational companies participated in the OECD drafting sessions.)
Private word games aside, conflicting national approaches to, and histories with, AI have been on display.
In the US, the Trump administration has been rolling out a new national AI strategy that, while recognising risks, emphasises the economic benefits and the need to “promote and protect AI technology and innovation.” AI is an “industry of the future,” says a US State Department official. “That means a focus on economic growth and innovation. The driving thing for us is to speed deployment of the technology from the research lab to the market, to drive innovation and unlock the potential benefit.”
In the US view, any regulation should be national, not imposed through an international treaty of any kind. But that doesn’t rule out standards or cooperation. Indeed, the NSF says it is expanding its R&D collaboration on AI with partners including Japan, Israel and the EU. And on 2 July, the US National Institute of Standards and Technology issued a draft policy paper on the need for standardisation.
China has also been emphasising the economic importance of AI, launching a national AI investment strategy in 2017. But – particularly after the spectacular row last winter, when a Chinese scientist broke international norms to edit the genomes of two human babies – the government has started to think harder about ethics. On 18 June, an expert group for the Chinese Ministry of Science and Technology issued national recommendations on AI ethics, urging, amongst other things, that AI development observe traditional Chinese values, such as “harmony and friendliness.”
The EU, meanwhile, has been turning AI ethics into a competitive strategy. Or, as a recent report from the Centre for European Policy Studies put it, ethical guidelines are the EU’s “secret sauce” in global AI markets. The idea is that, with the US and China advocating a laissez-faire approach to AI as an engine for economic growth, the EU can stand out in the global marketplace as a leader in “safe” AI systems that are well controlled, don’t compromise privacy or security, and come with some degree of moral leadership coded in. A series of Commission reports, most recently from its ethics-advisory group and from a special high-level panel, have fleshed out the details of this strategy, which fed into the work of the OECD and G20. (Indeed, as meeting participants note, common Brussels code words, such as “inclusiveness”, appear often in the international statements.)
Impact on society
For some, Canada has been the surprise guest at the top table of AI policy. Partly through policy and partly by luck, Montreal and Toronto have become hot clusters of AI research and start-ups, and both federal and provincial governments moved fast to pump money into the universities and incubators developing the technology. With Canada’s EU-like focus on ethics, that quickly led to some of the earliest international “declarations” of AI ethical principles. The economics are important, but, asks Rémi Quirion, chief scientist of the Quebec government, “What about the impact on society?” As one example, “The divide between the haves and have-nots is getting wider. AI may make it worse. We can’t have people left behind.”
Japan has adopted a middle position. Its national AI strategy refers to Society 5.0, which a government summary describes as, “A grand concept to realise a human-centric society through the utilisation of STI [science, technology and innovation], especially AI technology.” It fuses “cyberspace and physical space” to connect people better, free humans of “burdensome” tasks, and overcome social problems, such as an ageing population with fewer workers than pensioners. In the G20 negotiations, senior Japanese officials criss-crossed the globe to line up support for the principles put forward by the OECD, which is viewed as a relatively neutral expert forum for discussing new technologies.
But what’s next? For the moment, the only step yet agreed is for the OECD to set up an AI policy observatory: a special unit to gather AI knowledge and compare policy developments around the world. Washington is a strong backer. “The OECD is our preferred venue” for AI follow-up, maintaining the focus on economic issues, says the US State Department official. “The OECD is an economic organisation, not [a] security [agency].” Through international cooperation, he says, countries can promote AI services and products that people trust, which will be vital for the technology’s growth. But as for a possible international treaty on AI, “We don’t see any need for that.”
That’s not enough for other G20 countries. Some – led by the Canadians and French – advocate moving quickly to form some kind of international structure to oversee AI development and sound the alarm about problems. A possible model is the United Nations’ Intergovernmental Panel on Climate Change, a group of international scientists widely praised for sifting imperfect climate data for policy makers and drawing public attention to the problem. The French government plans to raise the issue at the next G7 summit, taking place in Biarritz on 24-26 August.
The Saudi government, the next G20 chair, plans to continue the AI work, but with a focus on employment issues. Meanwhile, the United Nations has set up a task force on the topic, while UNESCO is planning a 2020 AI summit. Even the International Telecommunication Union is organising meetings.
“Everyone wants to be flagged as the enabler of these new forms of soft governance which are emerging,” says one EU official. “They will emerge anyway. The question is who will be involved? Can they steer this process the way they want to?” In the end, “It’s likely there will be a diversity of organisations” acting on different aspects of the problem.
And there are countless unofficial proposals circulating among AI experts. For instance, some have suggested the formation of a “CERN for AI”, an inter-governmental research organisation modelled on the one that runs the particle accelerator in Geneva. Or, as two ETH Zurich researchers recently proposed, a “politically neutral hub” for AI policy research in Switzerland. Of course, they aren’t the only ones. Oxford, McGill, the Sorbonne, MIT and several other universities are also rapidly expanding their international AI policy work, bidding for global leadership in proposing the next concrete steps in AI governance.
As Nemer, Trudeau’s science adviser, puts it, “I think agreeing on the (G20) principles is the first step. And then you have to agree on the implementation. That is where the rubber hits the road.”