AI has a place in research, but not in evaluation of Horizon Europe proposals, Commission says

21 Mar 2024 | News

The European Commission’s new guidelines on how artificial intelligence is used in research aim to exploit the potential while preventing misuse

A ban on the use of AI in writing proposals would be “unpoliceable”, says Maria Leptin, president of the European Research Council. Photo credits: World Economic Forum / Valeriano de Domenico

The European Commission wants to encourage researchers to use generative AI, but it also warns of risks to the scientific process: “These tools could harm research integrity and raise questions about the ability of current models to combat deceptive scientific practices and misinformation.”

New guidelines published on Wednesday recommend researchers “refrain from using generative AI tools substantially in sensitive activities that could impact other researchers or organisations”, citing peer review and the evaluation of research proposals.

As well as limiting the risk of unfair assessment, for example due to bias in the data sets used to train the tools, this is intended to prevent unpublished work from being fed into AI models without the consent of its originator.

This stance differs from that of the European Research Council, which for now has decided not to police the use of AI by researchers or evaluators. A ban would be “unpoliceable”, ERC president Maria Leptin told Science|Business, adding, “[It] is the individual’s responsibility to sign what they submit and to take full responsibility for it.”

“We do not ban other methods of support, such as Google searches or asking your colleagues next door,” she said.

ERC referees do, however, sign confidentiality agreements that prevent them from “submitting traceable data to public platforms that analyse the text”, but this is not a black and white issue, Leptin said.

“Someone could still rephrase the question, and say ‘A researcher has proposed to do this and that: write out a summary of how original that is’. Then you're not really disclosing a piece of intellectual property from somebody else's research project.”

When it comes to the use of AI in proposals, Leptin said personal interviews will take on a larger role, as applicants will have to be able to explain their work.

The ERC will continue monitoring these issues, and the Commission’s recommendations are themselves presented as “living guidelines”, meaning they will be updated regularly as the technology develops.

Pace and effectiveness

The Commission developed the non-binding guidelines with European Research Area countries and stakeholders, with the aim of providing funding agencies, research organisations and researchers with recommendations on how to promote the responsible use of generative AI.

On the plus side, the guidelines highlight the potential of AI to improve the effectiveness and pace of research and verification processes, to produce texts in multiple languages, and to summarise and contextualise a wide range of knowledge sources.

While they are non-binding, research commissioner Iliana Ivanova called on the scientific community “to join us in turning these guidelines into the reference for European research,” in order “to uphold scientific integrity and preserve public trust in science amidst rapid technological advancements.”

Mattias Björnmalm, the secretary general of CESAER, an association of European universities of science and technology, which helped draft the document, said, “I don’t see it as calling for a ban [on generative AI in research evaluations], but for caution, as this is an area we know there could be substantial problems if it is used without taking these issues into account.”

Transparency

The guidelines call on researchers to be transparent about which generative AI tools have been used in their research processes, and remind them that they are accountable for scientific output generated by AI.

Research organisations, meanwhile, should provide or facilitate training on using AI, and monitor the use of AI systems within their organisations.

To ensure data protection and confidentiality, universities and other organisations should implement locally hosted or cloud-based generative AI tools that they govern themselves, whenever possible.

This does not mean each university is expected to have its own locally hosted system, said Björnmalm. “But we should work with our infrastructures and interconnected services, for example through the European Open Science Cloud, to contribute leadership globally.”

Overall, the guidelines should not be seen as prescriptive but as an “enabling framework” that connects researchers and funders, while avoiding divergences across Europe that would create additional complexity, said Björnmalm.

“It’s very clear [generative AI in research] is not something that will go away, and we don’t want it to go away. We want it to become useful and constructive,” he said.

Margrethe Vestager, executive vice president of the Commission, echoed this message in a speech at the Commission’s Research and Innovation Days conference in Brussels on Wednesday.

“I am deeply convinced that the application of AI in research will allow us to get more socially beneficial or valuable results,” she said, pointing out that the number of articles reporting science carried out with the help of AI has more than doubled in the last five years.

“There is vast potential to accelerate science through AI, and I think it's true for every scientific field, from medicine to anthropology,” she said.

The EU already invests around €1 billion per year in AI through the Horizon Europe and Digital Europe programmes, and the goal is to leverage additional funding from member states and the private sector. “The point is to reach an investment of more than €20 billion annually in AI over the course of the digital decade,” Vestager said.

The goal of the new guidelines is “not to scare you away”, she said. “It is to say here are things that we should discuss, bear in mind, work on, so that we get to trust the technology with the necessary scepticism.”

The Scientific Advice Mechanism, which provides the Commission with independent advice, is due to publish its own report on 15 April, with recommendations to policymakers on how to facilitate the uptake of AI in research and innovation.