Artificial intelligence is powering ahead. But although there is plenty of discussion on the ethics of how it is applied – and much disquiet about issues such as automated decision-making and systemic bias – there is little serious research into the topic.
The ethical issues swirling around artificial intelligence (AI) are under-researched, with surprisingly little serious academic investigation into AI ethics, despite the huge amount of money pouring into the field and the rapid pace at which the technology is advancing.
“I had hoped to be here today and tell you about the wealth of scientific research that’s being done on ethics and AI. But I can’t. Because we found very little,” Maria de Kleijn-Lloyd, Elsevier’s senior vice president of analytical services told a Science|Business conference earlier this month.
“There is a lot of discourse on ethics in AI,” said de Kleijn-Lloyd. But in terms of academia’s two-thousand-year tradition of rigorous ethical inquiry, when it comes to AI, “we haven’t really joined the two up,” she said.
De Kleijn-Lloyd was presenting the results of an Elsevier study of trends in the publication and use of academic studies on AI. According to the report, published in December, only around 0.4 per cent of search keywords had any real bearing on ethics, and their use was limited to teaching materials and media sources.
A difficult and heated debate
There was a general view among panellists that the need for more AI ethics research should not be read as a need for more regulation. Elisabeth Ling, managing director of researcher products and research metrics at Elsevier, said that among members of the European Commission’s high-level expert group for AI – of which she is a member – the ethics debate is “hard and hot.” However, “There seems to be a consensus that jumping to regulation would not be the right thing to do,” she said. “We already have quite strong laws in place in Europe.”
It is important to distinguish between regulating algorithms and regulating the way they are used, said Nick Jennings, vice provost for research and enterprise at Imperial College London. In the former case, “I can’t think of a sensible way in which that would make sense,” he said. But, “when [algorithms have] been trained and have data inside them and they can make decisions and they’re being used in a given application, then it is a different business.”
More dialogue and international cooperation
While there are aspects of AI deployment, for example in defence, where national interests have to come first, panellists agreed that both AI research and ethics research would benefit from more international cooperation.
“We need to have a clearly agreed European approach,” said Signe Ratso, deputy director general of the European Commission’s DG for research and innovation.
Ling agreed. “I would say my answer would be more dialogue, more international cooperation in figuring these things out together,” she said.
Jennings said basic research in AI is being done in an international context. “As a community the AI community is very open; most of the most powerful machine learning toolkits are freely available on the web, there are lots of good online courses about how to use them and how to become trained in them, and the scientific conferences are absolutely international.”
For Jaak Aaviksoo, rector of the Tallinn University of Technology, a better future for Europe lies in “more joint research and in more cooperation” between European partners.
Europe needs to examine the role of regulation in AI research, Aaviksoo believes. “We have our GDPR [general data protection regulation] that is ethically – understandably – a very high value. But I think we have to look at how we regulate and to what extent [regulation] poses threats to free research, and also development of AI technologies that are based on big data,” he said.
“Big means really big. Big means even bigger than Germany, not to say bigger than Estonia. That means free movement of data. That is a very complicated issue in Europe.”
Aaviksoo was alluding to the free flow of data regulation, coming into effect in May, which seeks to prevent member states from obstructing the cross border transfer of non-personal data. Digital single market commissioner Andrus Ansip says free movement of this data is necessary to support the development of AI in Europe.
Meanwhile, GDPR applies more-or-less the same rule to personal data, with provisos around the use of algorithms in processing this class of data, as well as on the collection and processing of personal information in general — though it does make various allowances for non-commercial research.
AI is a victim of hype
Jennings, professor of artificial intelligence in Imperial College’s departments of computing and of electrical and electronic engineering, argued there are unrealistic expectations about AI that need to be dispelled.
“We continually read and hear stories about AI solving some narrowly defined tasks,” said Jennings. “I think the mistake – and the hype – is the extrapolation from being good at some narrowly defined task to thinking ‘wow, if it can play Go like this, or if it can schedule a factory like this, imagine in a few years’ time it’ll just be able to be super-intelligent’.”
In fact, the gap between performing a narrowly defined task and achieving artificial general intelligence is vast. “We just don’t know how to do it, and I don’t think we will anytime soon,” Jennings said.