No. But policy makers should consider this and other questions raised by AI, said participants at Vienna conference
The advent of artificial intelligence technology raises all kinds of unexpected difficulties that European policy makers need to think about – from the practical, like how to teach ethics to young engineers, to the more fanciful, such as how to handle an amorous robot.
These were among the implications of AI discussed at a 28 November conference on the social sciences and humanities, organised by the Austrian Presidency of the EU.
Already, participants noted, governments are pouring billions into computer-science research on AI. The European Commission this year announced €2.5 billion in new AI research funding, joining France, Germany, the US, China, Japan and other nations in an R&D race to be first in mastering the new technology. But despite those big budgets, some conference participants in Vienna questioned whether the AI engineers are doing enough thinking about the social dimensions.
For instance, one audience participant, Marieke Schoots of Tilburg University in the Netherlands, raised the startling question of whether robots can fall in love with humans.
“They cannot,” she said, “because they are totally unaware of context.” She said robots can be very intelligent when dealing with “small” issues, like driving a car in a Western city with well-enforced road traffic laws, but are less well-suited to driving in places like India, where making eye contact with one’s fellow motorists is arguably more important than the highway code.
Schoots said she raised this point because she felt the conversation focused too much on AI “as a technological possibility.” She said, “I think this question, ‘can a robot fall in love with us?’ is symbolic of the issues that need to be raised when discussing AI.” The question is not as straightforward as it sounds: it is connected to what philosophers call the “hard problem” of consciousness – figuring out how consciousness arises and how to identify it objectively.
Teaching AI ethics
Another hard problem is how to teach ethics. Harald Leitenmüller, chief technology officer at Microsoft Austria, complained of having had to travel to the US to take a course in AI ethics. “There are a lot of ethics courses” in Europe, “but I did not find one for artificial intelligence and ethics.” The US course was interesting, he said, but based on American law: “It was tough for me to translate it to the European system.”
Indeed, there appears to be strong student demand. Katja Mayer, who teaches joint courses in political science and computer science at the Technical University of Munich, said the undergraduate course dealing with issues of AI and ethics has just 30 places per semester, but receives as many as 250 applications. Although the course is oversubscribed, Mayer argued that “incentives are not in place for the students to take those courses. They’ll get their points for it, but it’s not in the normal curriculum; it’s extra, most of the time.”
Engineers find ethics hard
The scarcity of relevant courses isn’t the only problem. “For a technician, it’s difficult to start with ethics. It took me quite a while to get into it,” said Leitenmüller of Microsoft. “It’s very tough, because from school, from university—I studied electronic engineering—I had not an hour of education in that matter (ethics).”
Another audience member, Matthias Reiter-Pázmándy from Austria’s education ministry, argued that young engineers “are driven, of course, by their curiosity and eagerness to solve technological problems. If you come with ethical concerns, they are not so much interested in it. They want to solve software problems, or other problems.”
Leitenmüller said the solution is for schools to teach basic ethical concepts early, so that AI engineers will be better prepared to explore the more complex ethical implications of their field. “I heard several sessions before that we have to simplify things. For me, that’s totally the wrong approach. We have to generalise. We have to abstract, to make it relevant for more people,” he said, “but abstraction is difficult.” Concepts like dignity, he said, are abstract and difficult to understand, but tremendously important. “We should have a foundational understanding of these basic principles in school.”
Different countries, different ethics
Abstractions aside, participants also discussed geopolitical concerns. Anabela Gago, of the European Commission’s directorate-general for justice, said Europe is lagging far behind China in AI, and asked how different legal and ethical frameworks around the world might affect the pace of AI development.
Gago cited a European minister as saying, “the EU is not so much looking into the opportunities of artificial intelligence because they are so much concerned about the ethical aspects.” Gago argued that, “it is fundamental to look at the ethical aspects, of course. But the ethical aspects for countries like China and others are very different [. . .] the legal frameworks, the ethics, impose different constraints.”
Outi Keski-Äij of Business Finland responded, “I would say that we need to discuss, but we need to also act, so that we are not hijacked by China and Americans.”
Huawei’s Walter Weigel suggested that even if tougher ethical standards slow the pace of innovation, they may bring other benefits later. He said there was a “trade-off between ‘let’s be careful’ and on the other hand, ‘don’t hinder innovation.’”
“I think we will only invest and buy the product if we will trust it,” he argued, suggesting that products made to European standards might engender this trust: “Maybe it’s not the latest technology, but it’s one from Europe which you can trust.”