A dedicated unit, a debate in the European Research Area Forum and pilot projects are in the works, as the Commission looks to set European guidelines for science’s AI revolution. Now member states need to ‘wake up’ to ensure rules don’t diverge
With generative artificial intelligence poised to take the practice of science by storm, the European Commission is laying the groundwork to ensure the risks are avoided and the benefits delivered.
Plans are in hand to set up a dedicated new unit at the Commission’s research directorate to lay down guidelines, and for a debate on how to handle the use of AI in science to be initiated as part of the European Research Area (ERA) policy agenda.
In July, the Commission’s science advisers published a scoping paper on the issues involved, pointing to a lack of “dedicated and systemic policy facilitating the uptake of AI in science.”
“We really cannot afford to sit idly on this,” says Liviu Știrbăț, who, as a member of the cabinet of the Commission’s digital chief Margrethe Vestager, is responsible for research and innovation, among other priorities. “It’s really a transformative opportunity that needs to be seized.”
Tackling the rise of AI will be a balancing act between the huge advances it can bring to science and its potential to compromise scientific integrity.
AI can supercharge research across all fields, but the danger lies in over-use of the tools, for example to write proposals, generating a flood of ‘noise’ applications that overwhelm already swamped evaluation systems.
The first move will be to look at how the uptake of AI-based tools affects proposal writing and evaluation, but Știrbăț hopes the Commission can eventually shift towards “using AI in research itself, to do virtual experiments, to drug discovery, to sifting through data, and all of those uses.”
The most exciting uses of these tools promise major advances in fields such as genomics, where machine learning can analyse vast amounts of biological data. In fusion research, AI tools are already making headway in aiding simulations, among other uses.
AI can be a great help in writing and evaluating research proposals, an increasingly bureaucratic task that eats into the time researchers could spend in the lab. Scientists are taking to these tools, but there is still a lot of hesitancy, says Victor Botev, chief technical officer and co-founder of Iris AI, which has developed an AI engine for interpreting scientific text.
A survey carried out by Iris AI found that while 55% of corporate researchers use AI tools in one form or another, trust is currently low, and that is hindering adoption. “They are sceptical of the quality of the results. That kind of prevents them from trusting and using the tools more often,” says Botev. “Some mistrust is good – you have to be critical – but also have some limit [to the scepticism] to allow you to get the benefits.”
Grand plan
Almost a year since ChatGPT made its debut, the dedicated team at the European Commission that will form the AI guidance unit at the research directorate is thinking both big and small, discussing policy implications as well as the nitty-gritty of how AI tools will change the application and evaluation processes of the €95.5 billion Horizon Europe research programme.
The goal is “to look for the first time not at how science helps other policies, but how science itself needs to be helped in this transition,” says Știrbăț. The aim is to run pilots testing different ideas over the next few years.
Știrbăț frames the advances in AI as an opportunity, but acknowledges there are dangers too. When it comes to bureaucracy, the fear is that removing the human component ends with machines writing applications on one side and reading them on the other. “That should really not be the case, because then you lose quality, you lose engagement,” says Știrbăț.
For now, it looks like some of the broad guidance will be set by the Commission-led European Research Area Forum, which brings together research and innovation stakeholders and policymakers. The plans are still at an early stage, but stakeholders expect the topic to be explored as a pilot initiative until 2025, at which point it is likely to get a dedicated ‘action’ under the 2025-2027 policy agenda.
Mattias Björnmalm, secretary general of the university association Cesaer, who sits on the Forum, expects the first high-level guidelines to be out as soon as the next few months.
Björnmalm notes that many research bodies already have their own guidelines for the use of AI in science and many universities have research offices that are highly knowledgeable on the subject.
For now, he sees the Commission’s role as consolidating what’s already there and setting common guidelines to ensure practices don’t differ too much across the EU. “It’s important that we make sure there are no unintended divergences in rules,” says Björnmalm. “This is why it’s important to start quickly – if member states start putting down regulations, the Commission doesn’t want to run behind and fix things.”
When it comes to Horizon Europe, he hopes the Commission’s guidelines will “clarify how it sees the use of generative AI in relation to their own funding programmes.”
As is the case with many EU initiatives, the process begins with guidelines before moving to policy. Știrbăț hints that the next EU research framework programme, FP10, due to start in 2028, will have a stronger AI component, building on the pilots to be carried out over the next few years.
Drafting a wish list
In the long run, Știrbăț hopes the Commission can help make AI work better for different fields of science, from biology to the humanities.
Here, he hopes the Commission could play a role in steering the development of custom-made AI tools to better suit researchers’ needs. Policymakers could contribute by drafting a wish list, “whether it be in drafting applications, or in improving the productivity and creativity of science,” says Știrbăț. “What the researchers are doing now is sort of picking up the crumbs off the table of corporate applications.”
This would be a route to building trust and boosting adoption. “In general, researchers are more excited about the tools when they use them for research,” says Botev. He expects big advances in the future as AI tools are developed specifically for different types of research, from personalised medicine to chemistry and astrophysics.
“My projection is that in the next months, or a year or two, things will change significantly, because there will be proof of success in [researchers’] day-to-day lives,” says Botev. Seeing other scientists successfully employ the tools will encourage others to try them out. “Peer pressure is the biggest driver there.”
Humans at the centre
The key is ensuring humans stay at the centre of any applications and policies, says Mathijs Vleugel, interim executive director at the European Federation of Academies of Sciences and Humanities (ALLEA).
“The tools are there and it won’t be possible to change them. There are great opportunities, so we should try to find a way to exploit them as much as possible,” says Vleugel. “That means we need proper regulations and we need ethical standards that need to be set by the research community.”
It’s also still unclear how intellectual property rules will apply in some cases. This needs to be clarified, Vleugel says.
One thing that everyone agrees on: researchers should be active players in the game.
ALLEA, whose code of conduct for research integrity is one of the guiding documents for Horizon Europe, updated the framework earlier this year to reflect the changes brought in by AI.
This wasn’t originally the plan. The code, first launched in 2017, was due for an update, and a draft was ready in September 2022 for stakeholders to review. But while this was happening, ChatGPT entered the stage, and the reviewers insisted AI should be covered by the code.
The code now highlights the importance of transparency in the use of AI tools for research. But there remain many uncertainties around the risks. “It’s something that needs to be discussed first within these communities before we make broad overarching recommendations on this,” says Vleugel.
At the heart of the matter, the responsibility to uphold research integrity rests with researchers themselves.
But assurances, or at least an acknowledgement of the changes, from policymakers are still much needed. Last spring, EU ministers adopted a joint framework on open science publishing. At the time, Cesaer advocated for the ministers to acknowledge that addressing the challenge may require resources.
Similarly, Cesaer is now calling on the Commission to propose a way of coordinating policy and AI tools at EU level. “The acknowledgement that this is a challenge is the type of guidance I would expect them to provide. It makes it easier for those working in the ministries to tackle,” says Björnmalm.
While the Commission has picked up the baton, member state engagement will be crucial to success, says Știrbăț. “We are trying to lead the way and also work with the OECD in the work that they do in applications of AI in science, but the member states need to wake up as well,” he said.