Europe must not repeat the UK’s mistake by creating a monolithic central institute for artificial intelligence research
Since we published our advice on artificial intelligence (AI) in science last month, there has been some lively debate.
Interestingly, the research community speaks with one voice on much of it. We know AI has the potential to revolutionise discovery, accelerate progress and boost innovation – indeed, it has already begun to change how the work of science is done. And we recognise the challenges as well as the opportunities: the lack of transparency of the dominant commercial models; the emergence of AI as a geopolitical asset which defines which parts of the world will benefit and which will fall behind; and the urgent need for research into the philosophical, legal and ethical issues that accompany new technologies.
On much of this, the evidence is clear, even as the technology itself evolves with breathtaking speed. But when it comes to the details of European governance – which policies and institutions are needed at EU level to harness the benefits of AI and tackle its challenges – there is, perhaps unsurprisingly, more disagreement.
These are political considerations, of course, but evidence can be useful here too. That’s why EU executive vice president Margrethe Vestager asked the Scientific Advice Mechanism for specific recommendations on governance as well as content.
It’s clear that we need to give universities and research institutes across Europe equitable access to state-of-the-art AI facilities. We propose that a new European institute for AI in science could provide massive high-performance computing power, a sustainable cloud infrastructure, and AI training programmes for scientists.
Alongside this, a European AI in Science Council would offer dedicated funding for researchers in all disciplines to explore and adopt AI in their fields. These institutions would also ensure that AI in research aligns with EU core values.
These are not new ideas, of course – they are based on the best available evidence from the scientific community, brought together in a SAPEA evidence review written by experts drawn from academies across Europe. And while nobody is seriously disputing the need for a strong institutional setup, not everyone is on board with our specific proposal that this should be a distributed institute, which we call EDIRAS, or European Distributed Institute for AI in Science.
On the day we published our report, Holger Hoos of the Confederation of Laboratories for AI Research in Europe, itself a distributed organisation that has been campaigning for what it calls a 'CERN for AI', wrote in Science|Business expressing his doubts about a distributed model, arguing that "any moonshot effort needs a visible focal point".
In preparing our advice, we did not find any consensus on this issue among scientists. Quite apart from the political arguments, there are objective reasons to believe that a decentralised, distributed model would be better equipped to achieve its mission than a monolithic institute flying the AI-in-Europe flag.
One important reason is that we need a broad collaboration, open to all of Europe's research community, to realise the true potential of AI in all disciplines of research – in the physical and life sciences, social sciences, arts and humanities, and not just in AI research. That means allowing all our universities, institutes and individual scholars fair access to the significant new firepower that EDIRAS would represent, in addition to bringing together many existing initiatives.
There's also the fact that policymakers do not consider our recommendations piecemeal, but as part of an overall policy mix. Creating that "visible focal point" could draw our best people away from their universities and companies, in turn requiring other decentralising policies to compensate.
A third important piece of evidence is the sharp criticism faced by the UK's own centralised AI flagship, the Alan Turing Institute. A review panel found recently that the institute's centralised governance structure was "a hindrance" because it was "not representative of the whole community", and recommended significant changes, including "greater diversity in board membership representative of the wider ecosystem".
Europe must not make the same mistake. It's clear that the EU is taking the challenge of AI in research extremely seriously — that’s why the Commission asked us for advice in the first place.
Europe is moving quickly to ensure that not only scientists, but all of us, can flourish in a world where AI plays an increasingly important role. We must take equally seriously the need for the governance structures of tomorrow to be transparent, to be open to scholars from all areas and disciplines, and to work for the benefit of the entire community.
Nicole Grobert is chair of the group of chief scientific advisers to the European Commission and professor of nanomaterials at Oxford University.
Maarja Kruusmaa is a member of the group and professor of biorobotics at Tallinn University of Technology.
Alberto Melloni is a member of the group and professor of the history of Christianity at the University of Modena and Reggio Emilia.