As AI becomes a daily helper for research managers, institutions are awaiting EU guidelines and learning from one another
Universities are slowly adopting generative AI in their work, but in-depth guidance is still missing at many institutions, with some waiting for the European Commission to set EU guidelines for safe use.
AI tools are becoming increasingly common in science, but few universities have come out with their own rules instructing academics, researchers and students on how the tools can be used. A UNESCO survey of around 450 educational institutions, published in June, found that only around 13% of universities have provided their staff and students with formal guidance.
The survey found that requirements vary a lot among the universities that have issued guidance. Only half have detailed instructions; the rest approve of AI but leave it to users to decide how generative AI apps are applied. In 40% of cases, the guidance is not written down but only communicated orally.
This is a big gap: the coming AI revolution in science and education cannot be avoided, with ChatGPT alone having 180 million users worldwide. Used correctly, it could enable groundbreaking research and free up researchers’ and academics’ time. But to avoid unforeseen consequences, uptake should be thought through first.
“Without institutional guidance of any sort, these technologies are likely to get welded into education systems in unplanned ways with uncertain implications and possible unintended consequences,” said Sobhi Tawil, the UNESCO Director for the Future of Learning and Innovation, commenting on the survey results. “We cannot simply ignore the short- and medium-term implications of these technologies for safety, knowledge diversity, equity and inclusion.”
Since June, some universities have moved to set rules. In the UK, the Russell Group of research universities set out five principles for AI in education. These include ensuring staff and students are AI-literate and adapting teaching and assessment to incorporate the ethical use of generative AI.
Universities around Europe published institutional guidelines over the summer, from KU Leuven in Belgium to the University of Ljubljana in Slovenia.
Next month, the Coimbra Group of 41 European universities is holding a conference on the use of generative AI in universities. The spur was a meeting this summer at which a group of research managers from the universities realised only one of them had started using generative AI. The rest were very curious.
The exception was Charles University in Czechia. Adéla Jiroudková, head of research support at the university, says her office was encouraged to take up these tools by the university’s rectors.
While the tools have been a great help since, there is still no firm institutional policy. “We’re waiting to see what will happen at the European level,” says Jiroudková. “From what I’ve heard, by the end of the year we should have precise guidelines from the EU. We will create institutional guidelines based on this.”
The European Commission is rushing to draw up the first guidelines in the next few months, setting up a dedicated new unit at its research directorate and preparing for a debate on how to handle the use of AI in science as part of the European Research Area (ERA) policy agenda.
Daily helpers
In the meantime, Charles University has been encouraging the use of AI tools to lift administrative burdens and, to ensure appropriate adoption, has held workshops on effective use.
At first, Jiroudková was hesitant, but with a green light from the rectors, she says, AI assistants have become daily helpers. Her office uses these tools to speed up administrative work, brainstorm and check for grammar and style errors when working in English.
As yet, AI is not being used to write proposals for research projects, an area where many see a lot of potential – and risk – for AI. Proposal writing is a highly bureaucratic task that eats into the time researchers spend in labs, but if AI is not applied appropriately, it could generate ‘noisy’ proposals and overwhelm already swamped evaluation systems.
For Jiroudková, the idea of machine-written proposals is problematic. She may use AI for inspiration or to brainstorm ideas, but a good proposal must have a personal touch, she argues.
There is anecdotal evidence that researchers are turning to ChatGPT and other tools to help with proposal writing, but this has yet to become a major issue.
Either way, Jiroudková says it’s time for the Commission to revisit the research proposal and evaluation system. AI means change is coming, and the system should change with it.
As things stand, many of the criteria used to evaluate proposals are too technical, and evaluations are often done by scientists who are not necessarily subject experts. These complications may make it difficult to assess which applications were drawn up by researchers themselves and which leaned heavily on AI tools.
“People may use it or not, but the Commission must somehow revisit how the whole evaluation is done,” says Jiroudková. “We’re not afraid of these tools. There will always be people cheating, with or without AI. It just reveals problems that were already in the system.”