Negotiations on a ‘Global Partnership’ on artificial intelligence would have the OECD, Montreal and Paris as starting points for policy discussions – but other organisations may also emerge
How will humanity manage the growth of artificial intelligence systems? To answer that, French and Canadian officials are drafting a blueprint for an expert council that they hope could be a prototype for global cooperation on AI policy.
The Global Partnership for AI (GPAI), advanced over the past year by French president Emmanuel Macron and Canadian prime minister Justin Trudeau, has started to take shape in a series of transatlantic negotiations in the past few months. While many details have yet to be resolved, negotiators hope for a general understanding by the end of this year, according to Malik Ghallab, director emeritus of a French state robotics lab in Toulouse, who is active in the planning process.
The idea is to create a standing forum – involving government, industry and academia – to monitor and debate the policy implications of AI globally. Other countries will be invited to join as the French and Canadian “core” develops the plan, Ghallab said. Under the blueprint discussed so far, the GPAI would have a ruling council that includes government ministers, overseeing public-private expert panels and supported by centres of expertise in Paris, Montreal and at the Organisation for Economic Co-operation and Development (OECD).
As several experts have been warning, AI can be both a boon and a threat to humanity – speeding medical diagnosis and helping to understand climate change, or invading privacy, reflecting in-built bias and worsening wars. More than 90 organisations around the world have proposed ethical principles for AI in the past few years. And leaders of the world’s biggest economies – in the G7 and G20 organisations – signed off on a set of extremely vague ethical guidelines earlier this year.
Meanwhile, industry is rushing into the field, with an estimated $37.5 billion in spending this year, according to market-research firm IDC.
But when it comes to agreeing on any form of international oversight or agreement, the geopolitical gulf is wide, with Washington, Beijing, Brussels and other major capitals taking wildly different approaches – free-market, social democratic, nationalistic, globalist or authoritarian.
The global discussion is “complicated,” said Lyse Langlois, director of a Quebec AI centre, who is involved in the Canadian planning. “We have many voices.” Now, she said, “We need a structure that will link these many voices.”
The goal of GPAI, Macron said on 30 October at a Paris conference, “is to foster debate and hopefully reach a consensus on key issues…. We will involve researchers and tech companies willing to provide responsible ways of handling this technology. It is impossible to leave the floor to private players, or to some governments that don’t share our common values. [GPAI] is a call for individual responsibility and a call for cooperation.”
The GPAI negotiations so far have mainly involved French and Canadian officials – so it isn’t clear yet how widely the idea will take off internationally. Indeed, the process is already puzzling some in Japan – which was instrumental in getting the show started as chair of the G20 this year.
In an interview, Koichi Akaishi, Japanese vice minister for innovation policy, said that his government isn’t participating in the GPAI negotiations at this point. “Not yet. Because the [GPAI] idea is not so clear for us at the moment. But if the idea becomes clear, maybe we might participate.”
Akaishi added that discussions have also been going on in other groups, such as the United Nations Educational, Scientific and Cultural Organisation (UNESCO), which recently published a white paper proposing that it begin drafting formal policy recommendations with its 193 member governments. “Maybe we can make the most of the functions of UNESCO, because the members of UNESCO are all over the world,” Akaishi said. “Let’s see what would happen.”
Whatever the forum, many governments see a need to do something about AI. The European Commission, at the prodding of its incoming president, is preparing a new AI policy for launch in March. Washington earlier this year launched an AI plan; and this month a national security panel advised Congress and the White House “to establish a network of like-minded nations dedicated to collectively building AI expertise and capacities.” China released its own AI policy earlier this year – and has been trying to woo other governments to join it in a coalition. Then there are efforts underway at the International Telecommunication Union, the World Health Organisation and other bodies.
Key issues include security and privacy. Akaishi said that his government would like to see a “privacy framework” agreed internationally. He said prime minister Shinzo Abe “wants to talk about security and privacy. Free flow of data with trust could be a basis for an AI Principles international convention.” Another issue, Akaishi said, is accountability. “If you have a very complicated algorithm, is there a good technology to explain the algorithm, and make it accountable? We don’t have that yet.”
OECD database in the works
An early leader in the policy scrum has been the OECD, the think tank for 36 developed countries – and drafter of the core AI ethical principles endorsed this year by G20 leaders.
At its Paris headquarters, the OECD is currently building a massive interactive database on AI policies and trends around the world, which it aims to launch next February. For that, it has surveyed 45 countries. It will include data on investment, patents, publications, news coverage, policy developments and more. The beta version of this “AI Policy Observatory” is already bristling with impressive interactive graphics showing how policy approaches and AI trends are differing around the world.
One aim, says Andrew Wyckoff, director of the OECD’s science, technology and innovation department, is to come up with an agreed international definition of what AI actually is. (As Akaishi put it, “deep learning is so deep that nobody knows what it is.”) The AI label was coined by American computer scientists in the 1950s, but in the past decade a torrent of new ideas and computing power has transformed the field from interesting to disruptive, for good and ill. That means policy and economic toolsets haven’t kept up. “We are trying to do the basic building blocks for the research community globally,” Wyckoff said.
A broader aim, he said, is eventually to be able to monitor how individual countries are handling AI – and ultimately, whether they are deviating from whatever international guidelines are one day adopted. Wyckoff likened it to the Nuclear Energy Agency’s role in the global energy industry: monitoring the safety and performance of power reactors across the developed world.
In the case of AI, Wyckoff said, the key quandary is when to regulate. Too soon, and governments will stifle innovation and potential benefits. Too late, and the technology may have taken a socially unacceptable turn or be so established that governments can’t change it. It’s a problem of what he called “anticipatory governance.”
That’s why, so far, organisers of the GPAI have been proceeding with care – first trying to build consensus on how the forum might work before getting into any specific policy proposals.
Under the GPAI plan, the OECD would be the group’s secretariat, organising agendas and work plans internationally. It would, in turn, support the to-be-created Canadian and French “centres of expertise” in Montreal and Paris.
In his speech, Macron said the French state computer-research organisation, INRIA, will “take the lead in coordinating the French partners” – a yet-to-be-confirmed set of research organisations. The government has already announced a five-year, €1.5 billion national plan for AI investment. It includes four international AI research centres, in Toulouse, Sophia Antipolis, Grenoble and Paris, as well as plans to fund 200 new AI research chairs. They will focus on AI in healthcare, cybersecurity, transport, logistics, agriculture and other fields.
The Canadian government has announced, with its provincial Quebec government, C$15 million (€10.3 million) to get a Montreal AI centre up and running. Quebec took an early lead in international AI policy, funding an International Observatory led by Langlois at Université Laval. There and in Montreal, Toronto, Edmonton and other tech centres, Canada has more than 800 AI start-up companies – and the University of Montreal led the first big international effort to draft ethical guidelines for AI. Trudeau’s government is betting that its early start on AI development and policy will pay off in trade and jobs in years to come.
Four working groups
In GPAI, the Paris and Montreal centres are supposed to support the work of a series of international expert working groups. So far, four are envisioned, on “innovation and commercialisation”, “responsible AI”, “data governance” and “future of work” – though Ghallab said the list could grow or change as other countries get involved. The four working groups reflect the most immediately touchy political issues: AI’s impact on economies and jobs, and on privacy and security.
The working groups, in turn, would be overseen by three committees: a ruling council that includes government ministers, a steering committee, and a “multistakeholders experts’ group plenary” that includes public and private experts. The whole organisation would meet annually – and the Canadians have offered to host an international conference late next year.
As for GPAI’s prospects, Langlois said: “They have a space to take action. They have the volition to do something. It’s young. It’s the first step, but many steps need to be clarified publicly.”