While advances have produced breakthrough tools like AlphaFold, AI is unlikely to be revolutionary and may drown scientists in low-quality papers, conference hears

Artificial intelligence might be useful to scientists in some circumstances, but it’s unlikely to revolutionise the job, according to presentations at a conference held last month that discussed the technology’s impact on research.
At the Metascience 2025 conference, a major get-together in London to discuss how research is conducted, AI in science was one of the hottest topics, but overall expectations were tempered. Some even fear generative AI could create a wave of scientific spam, overwhelming researchers with even more papers and burying genuinely new findings.
The most recent wave of AI breakthroughs has brought some new tools, “but it's not a fundamental transformation of what it means to be a scientist, necessarily,” said Matt Clancy, an expert in science and innovation at the not-for-profit funder Open Philanthropy.
“Science has long integrated new tools that open up new fields to study and new kinds of data,” he told delegates.
The emergence of large language models (LLMs), the basis for popular tools such as ChatGPT, and more dedicated software like AlphaFold, which predicts protein structures from their amino acid sequences, has some hoping that AI could generate hypotheses, replicate findings in computational research, or summarise existing literature.
On August 19, the EU’s research commissioner, Ekaterina Zaharieva, said it was “impressive” how AI was “transforming research,” and promised an AI in science strategy “soon.” In 2023, the European Commission set up a dedicated AI in science unit and, earlier this year, published 15 case studies arguing AI was speeding up discovery in the life sciences.
However, the debate over AI in science comes amid growing doubts over its usefulness in business, which caused a sell-off in technology shares on August 20. A survey of companies by the Massachusetts Institute of Technology in July found that “95% of organisations are getting zero return” from generative AI.
Meanwhile, some AI company heads have made claims that critics see as absurdly overinflated. For example, Demis Hassabis, chief executive of Google DeepMind, the creator of AlphaFold, earlier this month suggested that “we can cure all disease with the help of AI [. . .] maybe within the next decade or so.”
AI is not new
Despite the excitement over new tools like LLMs and AlphaFold, the history of AI in science actually goes back more than half a century, Iulia Georgescu, science and innovation manager at the UK’s Institute of Physics, told the conference.
“Most people think it starts in the 2020s with AlphaGo and AlphaFold,” she said. But the potted history of AI in science that she presented to the conference traces it back to a 1956 tool for proving theorems.
Machine learning, synonymous with what is today defined as AI, was widely used in physics in the 1990s, she said, for tasks such as pattern recognition. It was also used to analyse the data that led to the discovery of the Higgs boson, which was experimentally confirmed in 2012, Georgescu said.
Accountability
Although Clancy doesn’t expect big transformations, he told the conference that AI tools might chip away at the more automatable tasks scientists currently have to do, such as searching and summarising the existing literature, and free them up for other parts of the job, such as explaining their work to policymakers.
However, academics still might be reluctant to hand over tasks to AI tools, because they remain accountable for their work. “I have to put my name on this paper,” said Clancy. “I have to really trust the results that come out of this machine.”
Indeed, the jury is still out on AI tools that claim to summarise the existing scientific literature.
The Columbia Journalism Review recently warned science journalists that the results from five literature review tools it tested were “underwhelming” and in some cases “alarming.” The tools pulled completely different papers from the literature, disagreed on the scientific consensus, and returned different results when asked the same question days later.
AlphaFold
AlphaFold is arguably the biggest scientific gain from recent advances in AI, with Hassabis winning a share of the Nobel Prize in chemistry last year for its invention.
Until its release, researchers had deciphered around 200,000 protein structures, Anna Koivuniemi, head of DeepMind’s impact accelerator, told the conference. “It was a very time-consuming process,” she said. But AlphaFold has managed to crack the structures of 200 million proteins, with more than three million researchers using these discoveries in their work so far, she said.
Koivuniemi acknowledged that AI was far from being able to help with all scientific problems. “I'm sure that you all have stories where an AI initiative didn't add so much,” she said.
Researchers need “good data to train your models,” she said, with AlphaFold reliant on the 200,000 protein structures previously deciphered by scientists. “The fact that we were able to develop AlphaFold was [due to] the work of all structural biologists over 50 plus years,” she said.
Eliminate the routine?
Researchers should be wary about offloading what seems like “routine” scientific work to AI, said Sabina Leonelli, a historian and philosopher of science and technology at the Technical University of Munich.
“What is seen as a routine activity becomes, in fact, a source of discovery, and vice versa,” she told the conference. Rosalind Franklin was the first person to imagine the structure of DNA while working on “supposedly very boring crystallography problems,” Leonelli said.
In academia, there is also a “tendency to constantly underestimate the costs, the significance and the very high demands of validating and maintaining AI models,” she added.
Not convinced
In India, only a small minority of scientists are using large language models, said Moumita Koley, a research analyst at the Indian Institute of Science in Bangalore, who presented a survey of researchers in the country.
“They're not really yet convinced that we are into an era where AI is driving the science,” she said.
One concern is cost. Although there are currently free versions of LLMs, “maybe these models will tomorrow become very expensive,” she said. “The pro versions of all these models [. . .] we cannot really afford.”
The one exception, she said, is that Indian researchers use LLMs extensively to polish their writing, which could be a huge benefit to academics whose first language isn’t English.
But journal policies prohibiting the use of AI in writing meant this was a “lost opportunity” to level the playing field with native English speakers, she said.
However, earlier this year, Chinese researchers conducted an analysis of how LLMs had changed academic writing, and concluded the tools had caused a “significant decrease in cohesion and readability” in preprint abstracts.
Scientific spam?
Finally, the conference heard fears that LLMs will be used to generate ballooning numbers of academic papers, either by helping academics write more, or by fraudulently generating fake articles.
The risk is that this could further overwhelm researchers already drowning in an exponentially growing number of articles. Some researchers have sounded the alarm that AI-generated fake papers could cause an “existential crisis” for research.
“There will be so much content that is of no value,” said Koley. “Probably these will crowd the space, and the good ideas will not be visible enough.”
“This week I found four papers on Google Scholar ‘written’ by me and my co-authors. Except we didn’t write them. They were AI-generated fake citations,” wrote Liudmila Zavolokina, a digital innovation professor at the University of Lausanne, on LinkedIn earlier this month.