France, Canada and others are pushing for some international ‘rules of the road’ for artificial intelligence. But differing cultures, industries and policies complicate efforts to develop a global approach, say international experts at a Science|Business conference
Deciding how to regulate artificial intelligence is proving a tough task for lawmakers around the world. In fact, it may take a special AI system to figure it all out.
“We see the global community struggling to develop global solutions” for AI governance, said Andrea Renda, a researcher at the Centre for European Policy Studies, a Brussels think tank.
Indeed, more than 90 public and private groups around the world have published suggested guidelines for AI ethics – but there is no consensus yet on what, in concrete terms, governments should do about it. “It is not enough to focus on general principles,” said Anna Jobin, an ETH Zurich researcher who has analysed all the guidelines. Now, she said, “we need to work on implementation.”
A step towards action came on 30 October, when French President Emmanuel Macron announced that France and Canada will fund “centres of excellence” for international AI policy work in Paris and Montreal, working with the Organisation for Economic Co-operation and Development. He described it as an effort “to foster debate and hopefully reach a consensus on key issues, such as facial recognition.” But the European Union, China, the US, Canada, Russia and other big AI investors have yet to agree on exactly what to do – and such an agreement could take years to emerge, if it ever does.
The difficulties, discussed at a Science|Business conference on AI governance on 23 September, boil down to one overarching question: how to fashion policies that deliver the benefits of AI to humanity while avoiding its potential harms?
This special Science|Business report summarises the debate.