Breaking ranks: Europe’s research assessment reforms come head-to-head with university league tables

30 May 2023 | News

Universities want to reshape research assessment, giving less prominence to metrics such as citations and the number of journal papers published and putting more focus on quality. But will the international rankings add their considerable heft and follow suit?

Since July last year, universities across Europe have been busy revamping how they evaluate their research, bringing assessments into line with an agreement spearheaded by the European University Association (EUA) and funding agencies represented by Science Europe that puts less emphasis on metrics such as the number of papers published and more on the impact of research.

This reform clashes with the methods used by external university ranking bodies, which rely mostly on clear-cut numbers such as citations, impact factors and staff ratios. Universities have for decades criticised these rankings as reflecting only a fraction of what makes a quality university, while at the same time cherry-picking the flattering statistics for self-promotion and ignoring the unflattering ones.

All of which raises the question: as assessment reforms move Europe’s research institutes towards more qualitative systems, will the world ranking systems follow suit?

For the world’s major ranking systems, numbers are still king. Nearly all use weighted criteria to build an overall picture of academic success, with those criteria evaluated using academic reputation surveys, universities’ personnel data, and research publication records.

For example, the UK’s Quacquarelli Symonds (QS) World University Rankings uses six general indicators: academic reputation, employer reputation, faculty-to-student ratio, citations per faculty, and the proportions of international students and international faculty.

Meanwhile, the UK’s Times Higher Education (THE) rankings use thirteen indicators of their own, and the Shanghai Ranking Consultancy in China uses six.

While some metrics are common to all three, their weights and methods of evaluation vary. For example, the QS ranking gives research citations per faculty member (normalised by subject area) 20% of its overall score, THE gives citations 30%, and the Shanghai ranking assigns 20% to its count of highly cited researchers.
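In essence, a composite ranking score of this kind is a weighted sum of normalised indicator scores, so the same university can land in different places depending on the weights a provider chooses. The sketch below illustrates the general idea with invented indicator values and weighting schemes; it is not QS’s, THE’s or Shanghai’s actual formula.

```python
# Minimal sketch of a weighted composite ranking score.
# Indicator names, values and weights are hypothetical examples,
# not any ranking provider's actual methodology.

def composite_score(indicators: dict, weights: dict) -> float:
    """Combine normalised indicator scores (0-100) into one overall score."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(weights[name] * indicators[name] for name in weights)

# The same (made-up) university scored under two hypothetical weighting schemes.
university = {
    "academic_reputation": 82.0,
    "citations_per_faculty": 91.0,   # already normalised by subject area
    "faculty_student_ratio": 65.0,
    "international_mix": 70.0,
}

scheme_a = {"academic_reputation": 0.40, "citations_per_faculty": 0.20,
            "faculty_student_ratio": 0.20, "international_mix": 0.20}
scheme_b = {"academic_reputation": 0.30, "citations_per_faculty": 0.30,
            "faculty_student_ratio": 0.25, "international_mix": 0.15}

print(composite_score(university, scheme_a))  # roughly 78.0
print(composite_score(university, scheme_b))  # roughly 78.65
```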

The organisations behind these methodologies stand by their work. “Academics, departments, faculties, and institutions are all evaluated on the quality of the research they produce, and to date, the most reliable metric for this has been on paper and citation outputs,” said Andrew MacFarlane, rankings manager at QS.

“THE’s World University Rankings were originally designed in 2003 primarily as a research assessment tool to help us better understand the shifting geopolitics of knowledge creation and international research competition,” said a THE spokesperson. “This research-focus will remain for the world rankings.” (The Shanghai Ranking Consultancy did not respond to a request for comment.)

Changing methodologies

University rankings have changed their methodologies in the past decade in a bid to make comparisons between universities fairer. For example, in 2016 QS changed its methodology to weight citations by their field, to compensate for humanities papers being cited less than those in life sciences.
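The general idea behind field weighting is to express a paper’s citations relative to the average for its subject area, so a humanities paper and a life-sciences paper with very different raw counts can register the same relative impact. The sketch below uses invented field baselines to show that idea; it is not QS’s actual normalisation procedure.

```python
# Illustrative sketch of field-normalised citation impact, using invented
# field baselines; not QS's actual normalisation procedure.

# Hypothetical world-average citations per paper in each subject area.
FIELD_BASELINE = {"life_sciences": 12.0, "humanities": 2.0}

def normalised_impact(citations: int, field: str) -> float:
    """Citations relative to the average paper in the same field (1.0 = average)."""
    return citations / FIELD_BASELINE[field]

# A humanities paper with 4 citations beats its field average by the same
# factor as a life-sciences paper with 24 citations.
print(normalised_impact(24, "life_sciences"))  # 2.0
print(normalised_impact(4, "humanities"))      # 2.0
```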

THE’s methodology has been tweaked several times since it launched 18 years ago, though it has only been altered substantially twice: first in 2011 and again in April this year. The latest update was prompted by increased participation, which has grown from 400 universities in 2011 to over 1,700 today, a THE representative told Science|Business.

This led to the binning of a field-weighted citation metric and the adoption of three new measures, each designed to stop exceptionally highly cited papers from skewing the results. Other changes include adjusting the international metrics to factor in a country’s size and diversity, and adding a new metric measuring how much research is cited in patents.

Overall quality

Despite these updates, ranking systems cannot easily shake long-running criticism that their league tables are skewed heavily towards research. A 2020 EUA study of eight ranking systems found that few of their indicators were linked to teaching, instead focusing on research outputs, meaning research quality becomes a proxy for a university’s overall quality.

EUA members are pushing for change and adopting a common approach to how they deploy data and interact with ranking systems. EUA president Michael Murphy has also announced that the EUA will create its own guidelines, which are due to be released this autumn.

Top-placed universities have substantial soft power when it comes to funding, which in turn affects the funds available to less prestigious universities. A study last year found that a university gaining one place in the rankings can expect a 3.6% rise in its surplus-to-income ratio. Conversely, a fall in the rankings leads to a narrower ratio, damaging a university’s financial stability.

Academics have hit out at rankings before. In 2013 the German Sociological Association announced that it would boycott university rankings, saying they were driving ‘academic capitalism’, defined as “an academic routine biased towards quantitative performance indicators (research funding, number of doctorates and graduates) and a neglect of qualitative criteria.”

In the US, rankings of top law and medical schools by US News & World Report were delayed by several weeks due to boycotts by universities such as Yale and Columbia. Several Indian Institutes of Technology announced a boycott of the 2023 THE World University Rankings over transparency concerns. (Over the past two years THE’s ranking process has been brought in-house, which it says allows more direct scrutiny of the data.)

The EU tried to cut through university ranking orthodoxy by launching its own league table, U-Multirank, in 2013. Instead of focusing on research outputs, it ranks institutions using five broad, more qualitative factors: reputation for research; quality of teaching and learning; international orientation; success in knowledge transfer; and contribution to regional growth. Users can build their own custom rankings based on the indicators they are most interested in.

But U-Multirank has failed to make much of an impression among universities or students. “Whereas the collected data allows a diversified view on performances of universities, the collection process seems to be unnecessarily elaborate and time-consuming. This might be one reason for a certain reticence,” says Bernd Scholz-Reiter, acting president of the German Rectors’ Conference.

Rewarding originality

U-Multirank has not brought about the change that universities wanted. This and other factors prompted EUA and Science Europe to launch the Agreement on Reforming Research Assessment with support from the European Commission. It shifts the focus to assessing research quality, rather than quantity of papers published.

The agreement’s principles for how universities should reform their research assessment processes include rewarding originality, recognising how research affects science, technology, the economy or society, and taking gender equality into account, among others.

Signatories of the agreement also need to come up with an action plan, with defined milestones, setting out how they are reviewing or developing assessment criteria.

“The idea behind this commitment is to decouple research assessment from university rankings,” says EUA secretary general, Amanda Crowfoot. “It will help avoid metrics used by international rankings, which are inappropriate for assessing researchers, trickling down to research and researcher assessment.”

So far the agreement has over 500 signatories, and in December last year the Coalition for Advancing Research Assessment (CoARA) was set up to help them collaborate. In March CoARA launched its first call to propose working groups and there are now plans to set up at least six CoARA national chapters that will discuss CoARA-relevant issues specific to a country’s organisations.  

CoARA still has its critics, not least the international ranking systems. “The commitment’s statement to help avoid that metrics used by international rankings, which are inappropriate for assessing researchers is, essentially, saying that the practices of many countries and academic institutions alike are inappropriate – which seems counter-intuitive if unhelpful,” says MacFarlane.   

Elsewhere, the German Rectors' Conference has not signed the CoARA agreement, nor does it plan to until its criticisms have been resolved, says Scholz-Reiter. The rectors view research excellence as being both gained in competition and defined by the research community.

“The CoARA agreement, however, confuses the prerequisites for excellent scientific performance (diversity, cooperation, openness) with criteria for assessing excellent science,” he said. During negotiations for the agreement, the German rectors also expressed concern about what they saw as the agreement’s politics.

But the agreement has nonetheless caused the ranking systems to take notice. “We are watching these developments carefully, and with interest,” said the THE spokesperson. “We always welcome open conversations on the appropriate use of metrics and rankings.”

If enough universities reform their research assessment, ranking systems are open to change. “It has always been our view that we reflect the collective intelligence and consensus of the sector,” says MacFarlane. “In this respect, we will of course consider reflecting changing research dynamics in our metrics when they become collectively adopted.”

Those behind the agreement are realistic about its implementation. “Systemic reform does not come overnight; it takes time to change the research culture,” says Crowfoot. “Interoperability is indeed a key aspect here, together with diversity. There will be no one-size-fits-all model, and there is no need for such an approach.”

“Will this encourage university ranking reforms? That remains to be seen,” Crowfoot said.
