Measure what matters: Ranking universities in the age of pandemic

04 Mar 2021 | News

Universities are caught in the crossfire of the pandemic. While the value of their research has never been more evident, the quality of online learning is being questioned. How will university rankings respond?

These are the best of times and the worst of times for universities. On one hand, their researchers have been lionised for rapid vaccine development. On the other, many students are unhappy with campus lockdowns and online teaching. So, when the COVID-19 dust settles, what marks will each university get?

That depends partly on how university ranking systems end up weighing all this turmoil – and, while nobody knows the answer yet, how these self-appointed university graders manage the challenge will also affect their own future. For years, a small group of private ranking organisations from Shanghai to New York has compiled and published data on university performance. They have often been criticised, most often by universities that didn’t place highly. But they are never ignored, whether by university administrators themselves or by prospective funders, students or faculty.

So far, the ranking industry is playing it safe in handling the COVID-19 fallout. For instance, the UK-based QS World University Rankings, one of the major ranking organisations, which released its 2021 table this week, has not yet altered its methodology to reflect the changes in research and education, a spokesman confirmed.

This is for two reasons: first, the data used to compile the rankings is collected over a five-year period, and second, the impact of the pandemic on higher education is still unclear. As a matter of policy, the company also abstains from rash changes to its methodology, as these could jeopardise the year-to-year comparability of its rankings.

But QS is open to change in the future. “Naturally we recognise that if accepted methods of conducting higher education change long-term as a result of this pandemic, then our methods of capturing university performance may need to evolve accordingly. However, it is too early to make those decisions,” the spokesman said.

The composition of the QS top ten is unchanged, with MIT ranked first for the ninth year in a row. There is some slight shuffling: University College London moves from 8th in 2020 to 10th, while Imperial College London rises one place, from 9th to 8th. Cambridge University and ETH Zurich maintain their positions at 7th and 6th respectively, while Oxford University drops from 4th to 5th.

Of the leading US universities, Stanford retains the number 2 spot and Harvard number 3. The California Institute of Technology is 4th, one place higher than in 2020.

Monitoring COVID-19 disruptions  

Across the Atlantic, US News, a leading ranking organisation, is monitoring higher education disruptions caused by the COVID-19 pandemic. While the company is not planning to share its methodology until its annual Best Global Universities rankings are out this autumn, it is specifically “monitoring to what degree, if any, faculty publications, citations and collaboration has been impacted by COVID,” said Robert Morse, chief data strategist at US News.

Back in the UK, the Times Higher Education (THE) ranking is looking to rework its methodology, with a special focus on updating its approach to bibliometric data, which has been heavily criticised. THE first announced plans for methodology reforms two years ago, but at the start of the pandemic the company decided to put off the remodelling until the storm settled, with plans to update the 2023 world university rankings, to be released in September 2022.

By then, THE hopes the impact of the crisis on universities around the world will be clearer, especially when it comes to the numbers of international students, staff, and collaborations, which make up 7.5% of the current methodology.

Problematic rankings

Even before the pandemic, university ranking organisations were seen by some as using outdated metrics and failing to keep up with the times.

“We are in the age of open science,” said Ellen Hazelkorn, joint managing partner at BH Associates, an education consultancy. “As we move increasingly away from those kind of very traditional metrics to ‘alt-metrics’ and open access publishing, that is not where the rankings are at.”

There is another problem: ranking bodies are self-appointed and answer to no one. In 2016, the International Network of Research Management Societies (INORMS), which brings together research management societies and associations from across the globe, set up a working group to look into how to make research evaluation “more meaningful, responsible and effective.”

“Independent oversight of the rankings is non-existent. What we wanted to do is basically develop a set of criteria by which the ranking agencies’ individual rankings could be assessed,” said Elizabeth Gadd, chair of the expert group, who is research policy manager at Loughborough University.

After long deliberation, the expert group set out four criteria against which to judge university ranking organisations: good governance, methodological rigour, transparency, and whether they measure what matters.

The initial assessment of six global university ranking organisations found that none was up to par: not one scored 100% on all four criteria.

For Gadd, the key criterion of the four is measuring what really matters. Many metrics say little about a university, such as how many Nobel Prize winners have walked through its doors, or how many international students it hosts.

Meanwhile, useful indicators, such as a university’s commitment to open science and to closing the gender pay gap, are often overlooked.

Current university rankings largely rely on two indicators: research performance and reputation. The former is based on bibliometric and citation data, the usefulness of which has been disputed. For example, the journal impact factor, which reflects how often a journal’s recent papers are cited on average, says little about the quality of any individual paper published in it.

Reputation, too, arguably says little about the quality of teaching or research. It rates a university’s popularity, ensuring that the biggest, best-known institutions sit at the top of the league tables each year. A good reputation boosts the rank, and a high rank boosts reputation.

Ranking organisations collect a variety of such data, of varying degrees of meaningfulness, boil it down to individual indicators, weight and combine those indicators, and produce a single rank for each university.
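
To make those mechanics concrete, here is a minimal sketch of how such a weighted composite score might be computed. The indicator names, weights and scores below are hypothetical illustrations, not any ranking organisation’s actual methodology or data.

```python
# Minimal illustration of a weighted composite ranking score.
# All indicator names, weights and scores are hypothetical; no real
# ranking organisation's methodology or data is reproduced here.

# Hypothetical weights for indicators, each already normalised to 0-100.
WEIGHTS = {
    "reputation_survey": 0.40,
    "citations_per_paper": 0.30,
    "staff_student_ratio": 0.20,
    "international_students": 0.10,
}

def composite_score(indicators):
    """Boil individual indicator scores down to a single weighted figure."""
    return sum(weight * indicators.get(name, 0.0)
               for name, weight in WEIGHTS.items())

# Two made-up universities with different strength profiles.
universities = {
    "University A": {"reputation_survey": 90, "citations_per_paper": 70,
                     "staff_student_ratio": 60, "international_students": 80},
    "University B": {"reputation_survey": 60, "citations_per_paper": 95,
                     "staff_student_ratio": 85, "international_students": 50},
}

# Rank by composite score, highest first: this single figure is what the tables publish.
for position, name in enumerate(
        sorted(universities, key=lambda u: composite_score(universities[u]),
               reverse=True), start=1):
    print(position, name, round(composite_score(universities[name]), 1))
```

The point of the sketch is that very different strength profiles collapse into one number, and the ordering depends entirely on the weights chosen – which is exactly the property critics such as Gadd question.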

U-Multirank, the ranking launched six years ago by the European Commission as the EU’s answer to the long-running arguments about league tables, scored best in the INORMS assessment. The CWTS Leiden Ranking, based on bibliometric indicators, was a close second.

CWTS Leiden also did best on rigour, by avoiding opinion surveys and by being open about the validity of its indicators.

None of the ranking bodies scored well for good governance and transparency, with QS World University Rankings taking a small lead in best governance practices.

In the end, none of the six ranking organisations met the standards outlined by INORMS. “I think that certain data about universities can be helpful if you are looking one thing at a time,” Gadd told Science|Business. “But the methods used to make those assessments need to be rigorous to ensure they are fair and meaningful. Otherwise, people are making all sorts of decisions based on this data, which are not the best decisions because they are not based on the best evidence.”

Concentration of excellence

Some of the rankings, such as THE, date back to the 1980s, but at that time the focus was on compiling national rankings. The move to rank universities globally really took off in 2003 when the Chinese government set out to measure the impact of its increasing investments in university research, launching the Shanghai Ranking.

They may be charged with many shortcomings, but there is no denying global league tables are influential.  

For example, the poor standing of France’s universities in global rankings was one of the spurs for higher education reform and restructuring from 2007 onwards. One upshot was the merger of several institutions to create Paris-Saclay University, which in June 2020 moved into 14th place in the world in the Shanghai Ranking.

Similarly, India recently introduced Institutes of Eminence, a title given to research institutions the country wants to turn into world leaders.  But the latest QS league table shows the programme is struggling to yield results, with no increase in representation for India’s public Institutes of Eminence recorded this year.

In recent years, governments have issued a flurry of such policies in an attempt to position their chosen universities better. As a result, Hazelkorn says, “We have a concentration of excellence in a few sets of institutions and very little attention being spent elsewhere.”

These policies, though flawed, perhaps made sense in a time of growing globalisation, but today we live in a very different environment. “The demands for what universities do and how they contribute and impact on society are now forefront,” said Hazelkorn.

Universities are central to addressing societal challenges. Yet, what rankings measure “in no way bears any resemblance to the kinds of issues we are asking universities to deal with today,” Hazelkorn said.

Despite this, university rankings are a powerful influence that cannot simply be ignored. Gadd observes that university leaders have an uneasy relationship with league tables. “To ignore the rankings is a financial and reputational suicide for institutions,” she said. “So, we have to engage with them.”

What about the pandemic?

The issues with university rankings have long been debated, and the research management literature on the topic keeps expanding. But last year the global pandemic shook up the landscape of higher education, forcing universities to adopt online teaching at speed while international student numbers fell. As the comments from THE indicate, rankers are not yet sure if, or how, to react.

One long-term change is likely to be the COVID-19-driven acceleration towards digital learning. Hazelkorn says this switch has been a long time coming, but pre-pandemic, universities were reluctant to move towards online teaching. “As a consequence, the pandemic is pushing everyone to look at teaching and the quality of what they are delivering. But there is a big difference between emergency online teaching and quality online teaching,” she said.

Reduced student mobility could take a toll on university finances. Here, the inequalities between universities will continue to play a role. Wealthy institutions such as Harvard or Oxford University may weather the storm, but smaller universities may not, potentially widening the gap. “No one is immune, but some of us are more immune than others,” Gadd says.

It’s too early to say what the long-term impact of the pandemic on higher education will be, let alone on the way universities are ranked. “What we are finding with all sorts of research evaluation issues is that the pandemic is not going to impact in the next 12 months. It’s going to be impacting over the next ten years,” said Gadd.

Too big to listen

Last November, Gadd wrote an op-ed for Nature outlining the findings of the group’s assessment of six top ranking organisations. Given the results were not particularly favourable to the rankers, she was concerned there would be a backlash against the study.

“We’ve been largely ignored, to be fair,” she told Science|Business. However, that in itself proves the point: the ranking organisations know they are powerful global players. “They don’t need to listen to grassroots organisations like us,” Gadd said.

University ranking organisations are businesses. They may say they do not make money from the rankings, but their accounts are hidden. “I have tried, and my colleagues have tried, to understand the financial modelling. [They] refused to produce it,” Hazelkorn told Science|Business.

A larger issue, she says, is that these businesses hold immense amounts of university data, which should be public. Although extremely valuable, this “evidence” is kept behind paywalls, as are the methods for assessing it. For example, the Times Higher Education impact ranking, based on the UN Sustainable Development Goals, is compiled internally, making it impossible for outsiders to verify or compare the results.

“If rankings did adhere to our criteria, they could provide some useful data to the community,” Gadd said.

Yet, if the rankings were fair and measured one thing at a time, using indicators that are a good proxy for what they seek to measure, end users would not find them as interesting. Most are interested in a single figure, a point of reference, and do not have the patience to judge which indicators matter to them.

“There are ways that rankings can be improved, but they would become less attractive to the end user that wants to use them as a lazy proxy for the quality of a university. But that might have to happen,” says Gadd.
