The pressure is rising on researchers to say what their projects will deliver – for the economy and society
What’s the impact of a research project? For the past few years, the European Commission has been pushing scientists and entrepreneurs to answer that question when applying for an EU grant – and, on current trends, the pressure is going to rise.
That matters to anyone trying to get EU money – and it also matters to politicians currently weighing whether putting more money into EU research is a good investment. A study last year for the Commission claimed an average payback of €11 to the economy and society for every euro spent in its research and innovation programmes. But that figure has been challenged by some – and it raises the question of how one should define impact in the first place.
Asking researchers to spell out the impact of their projects is entirely reasonable: the Commission wants to know that the research it pays for will pay off in the real world, says Jan Palmowski, secretary-general of The Guild, a university association. But, he adds, many researchers don’t like the requirement: “There are problems for many more theoretical, less applied sciences – it’s much harder for these to state what their impact will be than it is for many applied sciences.”
Yet government interest in evaluating impact is growing. Applicants to the UK research councils must spell out their impact plans in their proposals. The Swiss National Science Foundation added a section on broad impact to its application forms in 2011. And last year, the European Research Council began grading the success of the projects it funds.
But while the emphasis on impact is rising, the Commission has suggested that it may take a broader view on impact in the next research programme – or a “more sophisticated approach”, as EU Research Commissioner Carlos Moedas has called it.
“We can have a culture that, on the one hand, promotes the measurement of the impact of research, while on the other hand, understanding, intellectually, that not all research will have a concrete and immediate impact,” the Commissioner has said.
‘Blah blah’ no longer
Nearly four years ago, when Brussels launched its monstrously popular research programme, Horizon 2020, it began putting extra emphasis on the section of the application forms requiring a researcher to define the potential impact. Before 2014, satisfying the “impact criteria” in EU research proposals “used to be the blah blah part,” recalls Konstantinos Chilidis, a senior adviser in the faculty of social sciences at the University of Oslo. “Researchers would concentrate on the science section and think of the other parts as being a bit inferior.”
No longer: the Commission has warned researchers they need to make a compelling case for how their research will help the economy and society. Adding new knowledge to the world is nice, but it’ll no longer suffice: Horizon 2020 evaluators are just as concerned about impact as they are about scientific excellence and implementation.
And with the odds of winning an EU grant at a record low, scoring poorly on impact can tip an application into the reject pile: overall, the programme’s success rate is just 12 to 14 per cent, because the volume of applications far exceeds the money available.
The impact requirement has tripped up many applicants to Horizon 2020. “For the first calls in 2014, it was a slaughterhouse,” said Chilidis. “No one knew how to deal with it; nobody knew what was expected of them. I’m not even sure the Commission fully knew what it wanted.”
Impact divides
The Commission has three criteria on which Horizon 2020 evaluators assess the quality of research proposals: excellence, impact, and the quality of implementation.
Economic impact is important, but because it is nearly impossible to put a euro figure on the outcome of some projects, contributions to knowledge and society are also measured. The assessment helps the Commission decide how to distribute more than €10 billion in grants every year.
Supporters of the exercise say that it improves overall research quality. Researchers are human: when their work is evaluated and they are held accountable for it, they tend to do a better job. Politically speaking, it is also essential for a science or industry minister to have some hard numbers to hand about the value of what they do, when sitting around the Cabinet table arguing budget priorities against agriculture, defence, health, education and other services.
‘Superficial’ results?
But critics charge that it eats up time and money. The EU, like any funder, wants to support science that makes a difference, but there is no simple formula for identifying truly important research, says Kathrin Möslein, vice president of research at the Friedrich-Alexander University in Germany.
“It often leads to superficial deliverables instead of deeply rooted impact that stems from asking the right questions,” she adds.
Moreover, the emphasis on impact is associated in the mind of some researchers with a rise in ‘managerialism’, “where researchers feel they are measured within an inch of their lives,” says Mark Reed, professor of social innovation at Newcastle University, and founder of a company that helps researchers realise impact in their research.
For others, “It is hard to shake off the idea that it is a broader political move – a neoliberal agenda to monetise research,” Reed adds.
Not braggers
Part of researchers’ problem with impact is that they are not a group accustomed to bragging, says Heikki Kallasvaara, a senior adviser to Uusimaa Regional Council in Finland, and previously director of research and innovation services at the University of Helsinki.
“Especially the Scandinavians: when it comes to being assertive about their projects, they are way too humble,” he says.
Also, many scientists have a hard time imagining the future of their field, adds Chilidis.
“It’s not really their nature, because it’s not scientific. A protein could be the key to a new drug, but how do you know?” he says. “Researchers don’t want to make predictions for five years down the line.”
How to pack a punch
For Horizon 2020, the Commission provides plenty of guidance on the topic – and, impact notwithstanding, the science or technology involved in a project still has to be top-class. But applicants must be sure that their expected impact is “clearly defined” and rigorous. In a 2015 interview with Science|Business, Brendan Hawdon, an adviser on policy development and coordination in the Commission’s research directorate, said an applicant should say clearly: “Here’s what we want to come out of the project.”
The specifics depend on the topic. For an innovation project, for instance, increasing the world’s knowledge normally wouldn’t count as a concrete impact. By contrast, in a transport project, creating safety devices for cars that could halve the number of lives lost on the road would be a better example. Other impacts might be on technical standards, or the economy, he said.
Some researchers now bring economists into their projects to help map out the future value of an idea. And many universities have brought in professional grant-writing and coaching services.
“I would imagine that nearly half the winning proposals in Finland have had consultancy work,” said Kallasvaara.
Puny feedback
Not surprisingly, there is no shortage of advice to the Commission about how to improve its impact assessments.
Give more feedback, so applicants can learn how to make the next application better. For innovation projects, add more business people as evaluators. Be more flexible about defining impact. Consider impact on education and other factors beyond the economic. Focus more on mission-driven, or challenge-driven research – in which impact is part of the definition of the project.
The Commission is considering all of these issues. For instance, it may decide to add more mission-driven grants to its next multi-year programme, starting in 2021. It has somewhat increased the feedback evaluators provide to applicants. And it is seeking a broader pool of evaluators – though one problem it has encountered is that evaluators from the business world tend to cancel their participation at the last minute more frequently than academic evaluators do.
But the controversy won’t stop anytime soon. Katrien Maes, chief policy officer at the League of European Research Universities, an association of 23 universities, argues against thinking of impact in terms of “a narrow economic act” or something that can follow in a year or two after a project.
“The Nobel Prize is a great example of the long game,” Maes says. “So let’s stay realistic and not drive researchers to the point where they feel they must promise the moon.”