Sponsored by Elsevier
To distinguish itself from its competitors, Europe should strive for a distinctive approach to artificial intelligence, guided by specific ethical guidelines, argued a panel of experts.
Artificial intelligence has the potential to deliver great benefits to European researchers, but there are still many obstacles to overcome – including a lack of trust, experts say.
In a debate at a Science|Business conference about how European research and innovation can benefit from AI, Lucilla Sioli, the Commission’s director for AI and digital industry, said one difficulty is the “problematic uptake” among European SMEs. That is in part rooted in the ethical dilemmas surrounding AI, such as concerns about opaque or biased algorithms. Would a distinctive approach based on well-defined moral principles reinforce Europe’s position?
To address the challenge of limited uptake, the Commission set up a High-Level Expert Group (HLG) on AI in June last year. The group, composed of 52 experts from academia, industry and civil society, is tasked with identifying the principles by which developers and users should operate, thereby ensuring AI’s trustworthiness. “The objective is to change the mindset of companies that are developing artificial intelligence – make sure that they ask themselves some questions and they implement certain steps,” Sioli added.
Loubna Bouarfa, a member of the HLG and CEO of Okra Technologies, argued that clear ethical guidelines could encourage collaboration between stakeholders and unlock Europe’s potential. Okra develops technology to allow healthcare professionals to combine multiple complex data sets and generate evidence-based insights in real time.
However, following a “European way” of trustworthy AI, rooted in ethics and transparency, will have ramifications for the development of AI in Europe, noted Davide Bacciu, assistant professor of computer science at the University of Pisa. “Of course, it could limit the scope and possibly how fast we are. We need to know we are choosing that.”
Still, there was a consensus that researchers need a better understanding of AI, which Bacciu said can be regarded as “black magic”. Erik Schwartz, vice-president of product management, search and application services in research products at Elsevier, noted that demystifying AI should bolster European research and innovation.
The panel discussed the need for researchers to be trained to open their minds to the potential of AI and acquire the competences required to incorporate it in their work. The issue of limited AI uptake could also be linked to the relative weakness of the European computing industry. In the current geopolitical context, it is crucial for Europe to be self-sufficient and acquire “technological sovereignty”, Sioli argued.
Certainly, the avalanche of data, with over a million articles published every year, as well as the increasing cross-disciplinarity of science, pose challenges for researchers, who must extract the right information as fast as possible. AI can help researchers keep up with the data overflow, according to Elsevier’s Rose L’Huillier, vice-president of product management, reading, in research products, who presented two AI tools developed by Elsevier: one gives quick scientific overviews of certain subjects, and the other predicts whether a user is likely to be interested in an article, significantly reducing the time needed for searches.
Looking to the future, Geleyn Meijer, rector of the Amsterdam University of Applied Sciences, predicted that AI will transform many disciplines. Business studies will turn into “digital business studies”, social sciences into “digital social sciences” and so on. However, it is unclear how long this generational shift will take, Meijer underlined, as some researchers are still reluctant to present research results through an AI lens. He described the adoption of AI in Europe as experiencing a “mid-life crisis.”