How to avoid Horizon 2020 success being tarred by high failure rate

20 May 2015 | Viewpoint
Today massive demand, tomorrow resentment: if the European Commission fails to acknowledge the overheads of applying for grants and act on low success rates, the credibility of its R&D programme could begin to slide

Horizon 2020 is – and will probably remain – the biggest game in town for Europe’s researchers and a high-water mark in the history of EU-funded research.

But one year in, it is not flawless. The average success rate for the 45,000 Horizon 2020 grant applicants to date is 14 per cent. This is well below that of counterparts in the US, where the National Science Foundation success rate stands at 22-24 per cent and the National Institutes of Health at 18-21 per cent. Even in Australia, where public R&D funding has been cut, applicants to the National Health and Medical Research Council have a 21 per cent chance of success.

Only one in seven proposals is being selected – and several H2020 sub-programmes, such as the Marie Skłodowska-Curie actions and the SME Instrument, have even lower odds – which means around 38,700 applications were rejected.

As an innovation and grants consultancy with 20 years’ experience in EU funding, we know that a collaborative single-stage proposal costs between €70,000 and €100,000, on average, in time and effort to develop and write.

Assuming that half of all H2020 competitions are divided into two stages, and that 70 per cent of the total proposal time and effort goes into developing and writing the stage-one proposal, this implies between €2.5 billion and €3 billion was spent on failed applications. Were this to go on year after year, the waste would add up to more than 20 per cent of the total H2020 budget.
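As a rough back-of-envelope check on that figure (the per-proposal cost range, the 50/50 two-stage split and the 70 per cent stage-one share are the assumptions stated above; the weighting below is a sketch, not necessarily the exact model used):

   45,000 applications × 86% failure rate ≈ 38,700 rejected proposals
   38,700 × (0.5 × 100% + 0.5 × 70%) × €70,000–€100,000 each
   ≈ 38,700 × 0.85 × €70,000–€100,000
   ≈ €2.3 billion to €3.3 billion

which is broadly consistent with the €2.5-3 billion range quoted above.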

Sliding success rates, coupled with these huge financial outlays, are a worry: researchers could become more sceptical of the whole funding application process.

In a recent interview with Science|Business, the EU Commissioner for Research, Carlos Moedas, showed he clearly understood that the Horizon 2020 programme may be facing a credibility crisis. But what are his options for turning things around? The problems, as I see them, concern the cost-benefit of preparing proposals, the credibility of the evaluation process, and poorly managed expectations. Fixing these will require some practical interventions.

I don’t claim to have all the answers but I hope some of my ideas trigger further discussion.

Clear up the evaluation muddle

Some of Horizon 2020’s competitions are graded using a two-stage evaluation process. I would like to see the first stage evaluated in a way that makes it clear to applicants why their project is proceeding to stage two.

Evaluation reports with concrete suggestions should be given to the applicants who continue in the process. The selection of stage-two proposals can be made tougher, as long as the evaluators clearly outline their arguments and the success rate of stage-two proposals rises to at least one in three.

Overall, it is in the Commission’s interest to bring more transparency to the evaluation process. If the number of proposals continues to rise, or the success rate drops further, this will become especially important with regard to re-submissions.

Researchers are allowed to act on evaluators’ comments to improve their proposal and re-submit in a later call round. Currently, re-submissions do not do well – not even those that scored close to the cut-off threshold the first time. Here, the Commission should, as a standard measure, provide the new evaluation panel with the earlier evaluation summary report (ESR) and the earlier proposal.

As things stand, it is largely left to the discretion of someone at project officer level in the Commission to decide whether the new panel sees the old ESR. In future, the Commission should ensure the new evaluation is consistent with the old one, making it unlikely that the new score falls below the earlier one (this can happen, sadly).

No new legislation is needed. Rather it is a question of willingness on the Commission’s part to come up with a better regimen for evaluators.

Manage expectations

It is true of course that some projects are just not good enough for H2020.

That message should be given to applicants with more clarity – and sooner – than is currently the case. “Don’t submit (again), because it is not what we are looking for,” is a message that could be relayed by the Commission’s national contact points, whose role is to advise researchers on proposal submission.

Currently, ideas are screened, but prospective applicants are not told to ‘just drop it’ if the project concept is not up to standard. Instead, false hope is given and researchers are led to believe that, with sufficient fine-tuning, there is still a chance.

Proper project screening, combined with a well-founded opinion by the screening authority on whether to submit, could help reduce the deluge of proposals facing the Commission and the evaluators in future competition rounds.

Do not rely on being bailed out

One of Moedas’s ideas for dealing with the low success rate is to channel proposals that fall between the eligibility and cut-off thresholds to the national level, where they could be funded out of the much larger €100 billion European regional funding pot for innovation. As these failed Horizon 2020 proposals already carry a ‘seal of approval’ from Europe on their technical quality, Moedas suggested, why not let the regional money take care of business?

On the face of it, this may make perfect sense, but there would be huge complexity in using regional funding for international research collaborations. The management of the funding process would risk becoming hopelessly bureaucratic. The regional pot may be a good instrument, but the rules and procedures between countries for using it need to be clarified.

Empower researchers

Here’s an idea: what if the Commission periodically set out technology priorities for a limited number of domains and let the ‘domain communities’ define the topics and set up the evaluation structure? In other words, let them govern themselves.

This would promote buy-in from researchers within that community and encourage them to participate in the evaluation process.

The organisation and structures of the many and varied EU-funded public-private partnerships and technology platforms could be taken as a good starting point and model, were the Commission interested in going down this route. 

While not an answer for the low success rate problems of today, it could be part of the answer for the day after tomorrow.

Roy Pennings is managing consultant with PNO Consultants
