Researchers who use AI in writing research proposals must take ‘full and sole authorship responsibilities’, says the ERC. Its ‘robust’ system can detect ‘text similarities’
The European Research Council (ERC) has fired a shot across the bows of applicants, warning them to maintain academic integrity if they use artificial intelligence (AI) tools to write research proposals.
In a position statement, the ERC Scientific Council said it “recognises” that many scientists use AI “to brainstorm or generate ideas, to search the literature, and to revise, translate or summarise text.”
But the Council said the use of AI in drawing up a research proposal “does not relieve the author from taking full and sole authorship responsibilities with regard to acknowledgements, plagiarism and the practice of good scientific and professional conduct.”
The ERC is “following the fast developments in the area, and will renew its policies as needed”, but as yet there are no plans to change its grant review system. The existing “robust system for proposal review” already includes “processes” that “are able to detect text similarities”, a spokesperson told Science|Business.
According to Harry Zekollari, a glaciologist at the Free University of Brussels and a recent winner of an ERC Starting Grant, it is almost certain that the ERC and other funding institutions will soon introduce stricter regulations on the use of AI in grant applications. “It’s just a matter of time before we have more stringent rules,” he said.
The ERC is not alone in recognising AI’s potential to disrupt research processes. In November, the European Commission’s DG for research set up a new unit to develop a policy on the use of AI in science and industry. Assessing AI’s impact on proposal writing will be one of the new unit’s top priorities.
A pervasive issue
There is growing evidence that AI – and especially generative AI software such as ChatGPT – is being used in writing research proposals.
Adéla Jiroudková, head of the research support office at Charles University in Prague, where research managers themselves use generative AI, said she is “very aware” that some consultancies use such software to write grant applications. This arguably constitutes “an even bigger problem” than universities using AI for the same purpose, she said. “They utilise your knowledge, notes, and drafts, inputting them into ChatGPT or similar platforms without the researcher’s consent.”
A recent survey of more than 1,600 researchers by the journal Nature found that one in six currently use generative AI to help write grant applications. The study also found that more than a quarter of scientists use AI to write research papers, while nearly a third rely on it to brainstorm ideas.
AI’s role in grant writing and in carrying out research will only become more prevalent, according to a survey the ERC conducted among more than a thousand of its grantees, asking about their current use of AI and their expectations for developments by 2030.
Among other findings, an overwhelming majority (81%) of respondents said that it was likely or highly likely that human-AI collaboration will become “widespread” by 2030.
An even greater majority (85%) forecast that generative AI will “take on repetitive or labour-intensive tasks”, including writing literature reviews, presentations, papers and grant proposals.
The ERC’s concerns about the impact of AI appear to be mirrored by researchers themselves, with 50% of respondents saying one risk is that AI might undermine research integrity.
Meanwhile, 23% said one of AI’s potential benefits is that it could “track abusive behaviour such as plagiarism.” The irony is that AI could be employed to detect plagiarism in research proposals that were generated in whole or in part by AI.
Hidden bias
For now, the ERC does not use “AI software for any review or initial screening tasks”, but it does not rule this out, the spokesperson said. “As AI technologies are rapidly developing, the ERC cannot and will not make any predictions about any potential future use of AI in this area.”
Jiroudková is concerned that using AI to assess research proposals might lead to hidden biases in the evaluation process and could also raise uncomfortable questions about who – or what – is ultimately responsible for a proposal’s rejection or acceptance.
At the same time, an over-reliance on AI could lead to research becoming more homogeneous, in which “only proposals that fit a certain mould or tick specific algorithmic boxes are approved,” Jiroudková said. “AI has the potential to assist in managing the increasing volume of research proposals [but] its role should be carefully considered and limited to a supportive capacity, rather than granting it authority in decision-making processes,” she said. “The human element, with its depth of understanding, ethical judgement and creative insight, remains irreplaceable.”