An analysis by ETH Zurich researchers finds that richer countries are so far dominating the global discussion over how to regulate artificial intelligence
There’s a growing international effort to develop ethical guidelines for artificial intelligence – but so far the effort itself is skewed towards richer countries, say researchers at ETH Zurich.
In a recently published study, the researchers counted 84 groups around the world that have suggested ethical principles for AI in the past few years – governments, multinationals, international organisations and others. But most come from Europe, the US and Japan, with few voices yet heard from the developing world.
“The global south is missing in the landscape” of organisations proposing AI ethics guidelines, said one of the researchers, Anna Jobin, in an interview. While some developing countries have been involved in international organisations drafting guidelines, only a few have yet added their own suggested ethical principles to the growing global debate over AI. That’s important because different cultures may have different attitudes about AI, she said.
Based on the research, “what we are saying is that if some regions have not yet put out [AI] principles, they might not be taken into account” as regulations or treaties start appearing. “How would you call it global if not everyone is involved? The discussion might be shaped by those issuing the principles, which would lead to a skewed principle,” she said. “We cannot just ask everyone to take it once it’s all decided.”
The study, published in Nature Machine Intelligence on 2 September, graphically demonstrates how in just the past few years an international policy industry has sprung up around the drafting of ethical guidelines for the use of AI. It is driven by fears of AI-powered robo-warriors, online fraud, privacy invasion and other unwanted side-effects.
Non-binding principles – so far
The guidelines published so far around the world are mostly vague – and a long way from any kind of binding international agreement. But a step forward happened last June at a meeting of the Group of 20 largest industrialised nations, where leaders agreed on a set of general, non-binding principles, based largely on work by the Organisation for Economic Cooperation and Development. As a next step, several governments are now pushing to set up an expert panel to monitor AI development – much like the United Nations’ Intergovernmental Panel on Climate Change.
The ETH Zurich study, by Jobin and colleagues Marcello Ienca and Effy Vayena, analysed 84 published AI guidelines as of April this year (and since then, Jobin says, the number has grown past 90). The largest number, 20, come from US-based organisations, such as the Association for Computing Machinery or multinationals such as Microsoft and Google. EU-based organisations come second, with 19 sets of guidelines, from the European Commission and Parliament, the UK House of Lords and Royal Society, and companies such as Telefonica, Deutsche Telekom and SAP. Many of the other published guidelines are from international organisations such as the World Economic Forum or Amnesty International.
Of course, there is some activity beyond the rich countries. Many of the international groups include developing countries. An Indian group published a report; Kenya has started work. And the skew towards richer countries is hardly surprising, given that most AI development is happening there.
But generally, the researchers said in an interview, they think the process of formulating AI guidelines should be more “bottom-up.” As a potential model to follow, Jobin pointed to an extended consultation process begun last year by the University of Montreal and the Quebec Research Funds. “They had a transparent process to design and discuss AI ethics,” she said.
Five main ideas
In ploughing through all the published documents, the researchers found wide agreement on five general ideas for how AI should work as it is deployed around the world. But there’s little agreement on the details of exactly how these ideas would be implemented. The five key areas:
- Transparency: Most groups agree that humans should be able to know how the AI systems they use are working, and so avoid potential problems. But whether that’s through publishing source code, the underlying databases or some other means isn’t clear.
- Justice, fairness and equity: The goal is to prevent unfair bias in the systems that could cost people their jobs or rights, but possible solutions range from strengthening rights to appeal or sue, to giving data protection offices extra power.
- Non-maleficence: Most of the groups are more worried about AI’s potential to do harm than its potential to do good. Proposals include building safeguards into the design of the software, and setting up new bodies to monitor the systems.
- Responsibility and accountability: Most agree that, when AI goes wrong, somebody should be held to account – but there’s no agreement on who or how.
- Privacy: Given the already-high sensitivity to data privacy around the world, it’s no surprise many of the guidelines call for better controls as AI systems roll out.
The researchers found other ideas in the guidelines, as well – but note that very few mention sustainability. That’s an important gap because AI systems could have both good and bad effects on the planet, potentially improving environmental management but also boosting global energy consumption.
Where this will all lead is as yet unknown. While the G20 agreed on some generic principles, its members sharply disagree over the details of what should be done in practice. Based on past international experience, Ienca said, that means the work is unlikely to lead to an over-arching international treaty on AI generally – but it could lead to agreements on individual issues. And he noted that much of the guideline-drafting has been done by committees working from general principles, rather than hard data about specific AI systems.
Their study, he said, “is an attempt to synthesise the work conducted so far on AI ethics; it’s not a conclusion.” But, he observed, “many people are quite eager to jump to conclusions about what the AI principles should be. We have to have a more evidence-based approach.”