Not what you know, but whom you know? Study of ERC stirs old scientific controversy

29 Jun 2021 | News

A Dutch study of some 2014 European Research Council grantees claims to find ‘bias’ in their selection. But the agency disputes that, and an academic debate ensues

A new study of grants awarded to early-career researchers by Europe’s premier science agency is reviving an old controversy over the way governments decide which scientists get research money, and which do not.

In March two Dutch researchers published an online analysis of a 2014 ERC Starting grants funding round, suggesting that applicants from the same universities or institutions as the European Research Council’s grant jurors were on average 40% more likely to win the grants. The ERC challenged the findings, saying the agency goes to great lengths to avoid any bias in its decisions. “Indeed, the study makes use of a small sample of evaluated proposals from 2014 and claims to draw far-reaching conclusions,” a spokeswoman said.

And the agency isn’t without defenders. “I think the empirical strategy used by the authors does not allow inferring the existence of a bias,” Natalia Zinovyeva, economics professor at the University of Warwick, told Science|Business.

So, case closed? Not quite, as this statistical dispute hits a raw nerve in the scientific establishment around the world. For years, academics have debated whether science agencies are fairly judging grant applications – whether in Washington, London or Brussels.

To peer review… or not?

When a scientific grant applicant submits a proposal, it ends up in the hands of other scientists who evaluate it and decide whether the proposal is worth funding. The specifics differ by agency and country, but generally the reviewers are selected for their expertise and reputations. They may first do a preliminary screening to weed out weak applications, then interview the good applicants, and finally debate among themselves which are the strongest ones. With some exceptions, the government agencies usually accept the verdict of the expert panels.

But when success rates for applicants are low – in some ERC calls they sink as low as 8% – the whole process gets difficult. Many have suggested this may not be the most reliable way to discern which research is most deserving of funding, with a potential for bias and inconsistencies in a process that heavily relies on subjective judgement.

A 2016 study from South Korea examined the selection process for national R&D projects in which each proposal was evaluated by three panellists. It found that if none of the panellists came from the same institution as the applicant, the applicant success rate was 27.6%. If one of the evaluators was from the same institution, it rose to 34.3%. Two panellists from the same institution as the applicant increased their chances of securing funding to 39.7%. But there appears to be a limit: when all three evaluators came from the same institution as the applicant, the success rate dropped slightly to 39.3%.
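In analytical terms, the comparison boils down to grouping applications by how many panellists share the applicant’s institution and computing a success rate for each group. Below is a minimal Python sketch of that tabulation; the records are made up for illustration – the figures above are the study’s, not these.

```python
# Hypothetical sketch of a success-rate tabulation, in the spirit of the
# Korean study. All records below are invented for illustration.
from collections import defaultdict

# Each record: (number of panellists sharing the applicant's institution,
#               whether the application was funded)
applications = [
    (0, False), (0, True), (0, False),
    (1, True), (1, False),
    (2, True), (2, True), (2, False),
    (3, True), (3, False),
]

counts = defaultdict(lambda: [0, 0])  # overlap -> [funded, total]
for overlap, funded in applications:
    counts[overlap][0] += int(funded)
    counts[overlap][1] += 1

for overlap in sorted(counts):
    funded, total = counts[overlap]
    print(f"{overlap} shared-institution panellist(s): "
          f"success rate {funded / total:.1%} ({funded}/{total})")
```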

Similarly, a 2004 study analysing the grant peer-review procedure of the medicine division of the Swedish Research Council found that principal investigators who had an affiliation with the reviewer’s organisation scored 15% better than non-affiliated counterparts.

“The phenomenon that the (Dutch) authors observed is not unusual,” said Giovanni Abramo, technological research director at the Italian National Research Council, who has conducted similar studies scrutinising the funding programmes in Italy.

What is surprising is that the phenomenon could happen in well-respected international grant competitions like those of the ERC. While at the country level researchers who work in similar fields tend to know one another and help each other out, “I thought at the international level, there would be more social control of panellists coming from different countries,” Abramo told Science|Business.

It is also at odds with the ERC’s reputation for high standards of fairness, a quality observed by past reviewers and borne out by studies commissioned by the agency. The agency will spend €1.9 billion this year on about 1,000 grantees. Since its 2007 founding, the ERC has funded research producing more than 200,000 published research reports. Its grantees include seven Nobel Prize winners.

Unexpected results

The two authors of the ERC study, Peter van den Besselaar, a social scientist at the Free University of Amsterdam, and Charlie Mom of Dutch consultancy TMC Research, first reported on the prevalence of the nearby-panellist effect in the agency’s grant selection process in a 2018 conference paper.

They had already looked into possible gender bias in the ERC, finding that the selection process was largely gender neutral. That study was commissioned and paid for by the ERC, which provided the data on the 2014 Starting grants funding round used in the analysis.

The two researchers also thought it relevant to look for indirect causes of gender bias. “For example, it could be the case that men profit more from topical proximity, nepotism, or organisational proximity,” van den Besselaar told Science|Business. “And we indeed found that organisational interest representation does have a gender effect, which we also report in the paper.”

But the most controversial finding had nothing to do with gender. Scrutinising the 2014 data set for statistical correlations, the researchers found that applicants with an institutional connection to at least one of the panellists were on average 40% more likely to receive funding. In life sciences the chances were as much as 80% higher, while the effect was barely visible in physics and engineering.
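As a point of arithmetic, “40% more likely” describes a ratio of success rates, not a 40-percentage-point gap. A toy calculation makes this concrete; both baseline rates here are invented, since the paper reports only the relative effect.

```python
# Illustrative arithmetic only: what "40% more likely" means as a ratio
# of success rates. Both rates below are made up for the example.
p_unconnected = 0.10   # hypothetical success rate without an institutional link
p_connected = 0.14     # hypothetical rate with a shared-institution panellist

relative_increase = p_connected / p_unconnected - 1
print(f"Relative increase: {relative_increase:.0%}")  # -> 40%
```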

Whether intentional or not, the researchers reported, it looks like “bias and particularism” in grant decisions. “This grant is very important and these panel members to a large extent are also top scientists. We did not think the effect would be that strong,” said van den Besselaar.

When the results were reported in Nature, the ERC challenged the researchers’ methodology, suggesting they cannot reasonably draw such sweeping conclusions from a single, old and limited data set. While the ERC has commissioned several studies looking to improve its evaluation process, “this study is not a useful contribution due to its inherent methodological limits and analytical weaknesses,” the spokeswoman said.

The final judgement is in the hands of other scientists. The fourth draft of the article, published in March on the pre-print service ResearchGate, is now going through peer review, in which other academics will judge its contents and validity – the normal route for getting a paper into a journal. Pending acceptance, van den Besselaar declined to tell Science|Business which journal the paper has been submitted to.

Concentration of excellence or bias?

The key issue, others who have read the paper say, is whether it’s possible to conclude the existence of bias based on the correlations found – and here, the statistical weeds get deep. Organisational proximity is easy to measure, and “the phenomenon they observed cannot be criticised. The question is whether there is favouritism,” said Abramo.

Technically, ERC panel members cannot assess proposals hosted by their own institutions and are excluded from the panel when a person close to them has submitted a proposal. The ERC says it takes this seriously because a single incident could ruin its reputation. “These are key elements that need to be considered in any analysis of ERC evaluation,” said the spokeswoman.

Reinhilde Veugelers, an economics professor at KU Leuven and a former reviewer and member of the ERC’s governing Scientific Council, confirms that in her experience the no-conflict rule was rigorously enforced, even in cases where it made little sense. “There were really good experts that couldn’t be part of the panel,” she recalled. “Sometimes it was really silly.”

Still, could there be unintended biases?

One issue is whether good reviewers and good applicants simply come from the same institutions because those institutions are themselves so strong; in the US context, for instance, you would expect many Ivy League reviewers and applicants to end up in the same National Science Foundation grant competition. To check whether that was at play here, the authors rated the applicants’ overall academic performance: how good were they, really? They looked at the number of grants the applicants had secured, their citations in journals and their numbers of publications. The final scores did not point to better overall performance among applicants who shared an institution with a panellist; in fact, those applicants did worse than most other grantees.
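For illustration, a comparison of that sort can be sketched as a composite of standardised indicators. Everything in the snippet below – the applicants, the indicator values, the equal weighting and the z-score normalisation – is an assumption for the example; the paper’s actual scoring method may differ.

```python
# A minimal sketch of a bibliometric performance comparison, loosely in
# the spirit described above. All data and weights are hypothetical.
import statistics

def z_scores(values):
    """Standardise raw indicator values to zero mean, unit variance."""
    mean, stdev = statistics.mean(values), statistics.stdev(values)
    return [(v - mean) / stdev for v in values]

# Hypothetical applicants: (name, past grants, citations, publications,
# shares an institution with a panellist?)
applicants = [
    ("A", 2, 850, 30, False),
    ("B", 4, 1200, 45, False),
    ("C", 1, 400, 20, True),
    ("D", 3, 950, 38, True),
]

grants = z_scores([a[1] for a in applicants])
cites = z_scores([a[2] for a in applicants])
pubs = z_scores([a[3] for a in applicants])

for i, (name, *_rest, connected) in enumerate(applicants):
    composite = (grants[i] + cites[i] + pubs[i]) / 3  # equal weights assumed
    print(f"{name}: composite score {composite:+.2f} (connected: {connected})")
```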

But other researchers question that conclusion. Says Zinovyeva of Warwick: “To claim that the bias exists, one would need to compare connected and unconnected applications of essentially identical quality. In practice, the quality of applications varies a lot, and it is very difficult to observe, especially for outsiders who only have very rough bibliometric indicators at hand.”

Moreover, bibliometrics may not tell the whole story. While a social scientist may rely on an author’s “h-index”, which measures the productivity and citation impact of their publications, an experienced biologist on a review panel could detect excellent qualities that are not apparent to someone without the expertise, said Zinovyeva.
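For reference, the h-index Zinovyeva mentions has a precise definition: the largest h such that an author has at least h papers cited at least h times each. A short, self-contained sketch:

```python
# Compute the h-index from a list of per-paper citation counts: the
# largest h such that at least h papers have h or more citations each.
def h_index(citations):
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4: four papers with at least 4 citations
```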

Looking for solutions

In the end, pinpointing specific biases is difficult, and the best course of action may be to focus on improving how application quality is measured. “By accumulating evidence, we can gain a better understanding of the decision-making process,” said Zinovyeva. “In this sense, the paper makes a first valuable step, but it is very far from providing evidence that should start being worrisome for the public and the scientific community.”

For van den Besselaar, the next question is how these grants change the course of researchers’ careers. He hopes to investigate this in 2023, at the 10-year mark of the funding round in question.

“Increasingly the reputation of universities is important, not only for the university, but the people that work there. If you work in a top university your chances to get funded are much bigger,” van den Besselaar said. But winning grants in turn raises a researcher’s status, leading to more grants – and the nearby-panellist effect may be feeding that cycle. To tackle the issue, van den Besselaar suggests panels should reflect more on their final selection: if too many scientists from the same institution win grants, there should be a discussion of whether bias played a role.

Another idea would be to provide the panellists with very good bibliometric measures of the candidates, suggests Abramo. “What we showed in a study concerning Italy is that bibliometric assessment had a superior impact than the peer-review assessment on the quality of the publication,” he told Science|Business. “The panellist should at least be informed with the right indicators of performance.”

But Veugelers says bibliometric indicators are only useful in the first stages of the selection when the panellists are forced to weed out the massive pool of applicants. From then on, the evaluators are expected to rely on their expertise to find the best proposals. If the ERC wanted to rely on performance indicators, “they would say they could use an AI,” she said.
