With over 117 AI ethics initiatives springing up around the world, there’s a lot of talk of AI regulation, but where is it all heading?
Initiatives to regulate artificial intelligence (AI) have sprung up around the world, spearheaded by the likes of the OECD and UNESCO. It’s time to harmonise and consolidate, a conference on AI ethics held under the auspices of Slovenia’s presidency of the EU Council heard this week.
“We are clearly at a developmental point where you’ve got a lot of actors right now contributing to this movement from principles to practice, and we simply need to work together in a multistakeholder way to harmonise these approaches,” said David Leslie, of the Council of Europe’s Ad hoc Committee on Artificial Intelligence (CAHAI). Leslie leads on ethics at the UK’s Alan Turing Institute and is the author of the UK government’s official guidance on the responsible design and implementation of AI systems in the public sector.
Between 2015 and 2020, 117 bodies of varying standing published AI ethics principles, and the number keeps growing. In total, 91 came out of Europe and North America. “This high number shows there are hopes and concerns related to AI technologies, but also an interest in building some type of mechanism of consensus on AI governance,” said Marielza Oliveira, director for partnerships at the United Nations’ agency for education, science and culture (UNESCO).
All these initiatives are already moving towards harmonisation, she believes. The next big step will come in mid-September with the launch of the Globalpolicy.AI platform, enabling eight intergovernmental organisations, including the European Commission, the OECD, United Nations and the World Bank, to work together on defining principles for future AI applications.
AI holds the promise of changing more or less every industry, boosting productivity, improving forecasting and supporting moves to be more energy efficient. AI-based technologies can help predict crop yields, are already playing a role in drug discovery, are being applied to autonomous cars, and are automating administrative tasks.
In 2018, market analysts McKinsey estimated AI-based technologies could give the global economy a $13 trillion boost by 2030, amounting to 1.2% additional global GDP growth per year.
But there are evident risks. AI-powered systems can infringe privacy, while applications based on low quality data can lead to bias and discrimination. In 2016, an investigation by the US news organisation ProPublica showed an algorithm used by US law authorities to assess a criminal offender’s likelihood of reoffending was racially biased.
Rules for AI technologies are needed to limit the risks in areas such as health intervention, credit scoring and insurance ratings.
These principles are expected to eventually translate into binding and non-binding rules for AI governance, with institutionalised impact assessment, assurance frameworks and risk management. These are “all of the important mechanisms that we see developing in different places at the same time,” Leslie noted.
In a demonstration of the level of global consensus, in November, 193 countries are expected to sign UNESCO’s non-binding recommendations for human rights-based AI ethics. The countries have spent 100 hours negotiating the text. “Reaching consensus among a large number of member states is not easy. But the good thing is what we do have there, starting with the universal declaration of human rights,” said Oliveira.
Lead by example
The EU hopes to lead by example. Earlier this year, the European Commission proposed first-of-a-kind rules for AI, which would ban AI systems that present threats to livelihoods, introduce authorisation requirements for AI technologies in high-risk areas such as employment, education and law enforcement, and fine companies that violate the regulations.
EU member states and the European Parliament are now considering these rules. The six-month Slovenian presidency of the EU Council, which started its mandate earlier this month, hopes to advance the talks, but completing the legislative process is likely to take a couple of years, and at the end the proposal may look very different.
Despite this, it is most likely to be the first major rulebook for AI. “Europe can and must become a pioneer on this key digital policy and technological issue,” said Christian Kastrop, German state secretary at the ministry of justice.
But critics say strict rules could hamper innovation by putting up barriers to AI development, setting the EU back in the global AI playing field, where it is already lagging behind in terms of investment and innovation. The 27 EU countries and the UK were home to only 7% of companies patenting AI applications between 2009 and 2018, the same percentage as South Korea. China, meanwhile, took the lead accounting for almost 60% of the companies, with the US at 14%.
The Commission hopes to boost investment alongside regulation. The EU’s new €7.5 billion digital R&D programme will invest €2.5 billion in boosting research and development of AI applications in the public and private sectors by facilitating access to testing and experimentation facilities around the EU. This money will be invested on top of funding for AI technologies in the €95.5 billion research programme, Horizon Europe.
In addition, last month, the Commission launched an industrial partnership on AI, data and robotics that aims to boost the entire European value chain, from production to skills and application. All this will be done with ‘European values’ in mind.
EU officials and governments believe regulation will not stifle innovation. “To those who think regulation hinders innovation, I strongly disagree. In fact, better consumer protection and flourishing business models go hand in hand. Without consumer trust, there will be less innovation and less growth,” said Kastrop. “This will really be an international trademark.”