Viewpoint: The latest EU innovation index is out. It’s flawed.

22 Jun 2018 | Viewpoint

Two researchers critique the methodology the Commission uses to compile its annual innovation rankings – and urge a different approach

Charles Edquist and Jon Mikel Zabala-Iturriagagoitia. (Photo: Universidad Autónoma de Madrid)

On 22 June, the latest edition of the European Innovation Scoreboard (EIS) was published by the European Commission. It is a headline-grabbing effort to compare how countries are doing at innovation. Considerable resources are invested in producing these annual reports – the 2017 edition was available in 23 languages. This reflects the increasing attention given to innovation policy issues both at the EU level and within its member states.

A major constituent of the EIS is the Summary Innovation Index (SII), the headline number. It claims to provide a “comparative assessment” and ranking of “innovation performance” in the EU member states. The EIS has the explicit objective of having a real impact on innovation policies. We argue, however, that the interpretation of innovation performance which the SII provides is not useful for policy makers and politicians.

The SII is currently based on 27 individual indicators, all of which relate to innovation in some way: some reflect innovation outputs, some innovation inputs, and others the determinants or consequences of innovation. The SII is calculated as the arithmetic average of these 27 indicators, with every indicator given the same weight. The higher the SII value, the better the innovation performance is claimed to be. Because of this construction, a country’s SII score rises whenever it puts more input resources into its innovation system. A worrisome property of the index is that its value increases even if the innovation output resulting from those additional inputs is zero. This is a very unusual way to interpret what “performance” means.
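A rough sketch of how such an equal-weight composite behaves is shown below. The indicator names and values are illustrative only; the real EIS normalises its 27 indicators to a common scale before averaging.

```python
# A toy composite built the same way as the SII: an equal-weight arithmetic
# average of normalised indicator scores. Indicator names and values are
# invented for illustration (the real EIS averages 27 normalised indicators).

def composite_score(indicators):
    """Equal-weight arithmetic mean of normalised indicator scores."""
    return sum(indicators.values()) / len(indicators)

country = {
    "rd_expenditure": 0.80,         # input-side indicator
    "venture_capital": 0.60,        # input-side indicator
    "product_innovators": 0.40,     # output-side indicator
    "sales_of_new_products": 0.30,  # output-side indicator
}

baseline = composite_score(country)      # about 0.52

# Raise one input indicator; leave every output indicator unchanged.
country["rd_expenditure"] = 0.95
more_input = composite_score(country)    # about 0.56

# The composite "performance" score rises even though outputs did not move.
print(round(baseline, 3), "->", round(more_input, 3))
```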

Digging behind the numbers

We recently published a scientific article in Research Evaluation entitled “On the meaning of innovation performance: is the synthetic indicator of the Innovation Union Scoreboard flawed?”. What follows is a summary of some of the arguments in that article, excluding its methodological and statistical parts. The analysis is based on the 2017 report, but the basic issues we raise remain unaddressed in the latest, 2018 edition as well.

“Performance” is normally a question of achieving (or producing) something. In this case it is a matter of achieving innovations, i.e. innovation output. If we want to measure the performance of an innovation system, we have to relate outputs to inputs. Remarkably, the SII does not do this.

As an illustrative example, let us imagine that two countries are trying to send a rocket to the moon, and both succeed. The first country used $100 billion to achieve the goal while the second used only $1 billion. If only outputs are considered, both countries have achieved the same level of “performance”. Common sense, however, says that the second country has performed much better. 
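To make the ratio explicit, here is the same toy example in numbers (the figures are simply those used in the illustration above):

```python
# The rocket example in numbers: the same output, very different inputs, so
# only the output-to-input ratio reveals the difference in performance.

missions = {"country_1": 1, "country_2": 1}           # successful moon landings
spending_bn = {"country_1": 100.0, "country_2": 1.0}  # inputs, in $ billions

for c in missions:
    efficiency = missions[c] / spending_bn[c]         # landings per $ billion spent
    print(c, efficiency)
# country_1 achieves 0.01 landings per $ billion; country_2 achieves 1.0
```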

Analogously, the performance of an innovation system must be measured by relating the innovation outputs of the system to the input resources needed to produce them. It is methodologically flawed to claim to measure innovation performance, as the SII does, while mixing inputs and outputs into a simple average. The SII does not measure innovation performance in a meaningful way and may therefore mislead researchers, policy makers and politicians, as well as the general public.

In our article we develop an efficiency-based measure of innovation performance using non-parametric Data Envelopment Analysis (DEA) techniques. We single out some of the indicators provided by the EIS, classifying each as either an input or an output, and we use exactly the same data as the SII. Our results, however, are very different from those reported by the EIS.
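For readers curious about the technique, the sketch below shows a minimal input-oriented, constant-returns-to-scale DEA model (the classic CCR formulation) solved as a linear programme. The country data are invented for illustration, and this is not necessarily the exact specification used in the Research Evaluation article.

```python
# Minimal input-oriented CCR DEA sketch: each unit's efficiency is the smallest
# factor theta by which its inputs could be scaled down while a convex-cone
# combination of the other units still matches its outputs.

import numpy as np
from scipy.optimize import linprog

# rows = countries (decision-making units), columns = indicators (toy numbers)
inputs = np.array([
    [3.5, 2.0],   # country A: e.g. R&D spending, researchers per capita
    [1.0, 0.8],   # country B
    [2.0, 1.5],   # country C
])
outputs = np.array([
    [4.0],        # country A: e.g. an innovation-output indicator
    [2.5],        # country B
    [2.2],        # country C
])

def dea_efficiency(inputs, outputs, o):
    """Efficiency score of unit `o`; 1.0 means it lies on the efficient frontier."""
    n, m = inputs.shape          # n units, m inputs
    s = outputs.shape[1]         # s outputs
    # decision variables: [theta, lambda_1 ... lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0                   # minimise theta
    A_ub, b_ub = [], []
    for i in range(m):           # sum_j lambda_j * x_ij <= theta * x_io
        A_ub.append(np.concatenate(([-inputs[o, i]], inputs[:, i])))
        b_ub.append(0.0)
    for r in range(s):           # sum_j lambda_j * y_rj >= y_ro
        A_ub.append(np.concatenate(([0.0], -outputs[:, r])))
        b_ub.append(-outputs[o, r])
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

for o, name in enumerate(["A", "B", "C"]):
    print(name, round(dea_efficiency(inputs, outputs, o), 3))
```

In this toy data, country B produces the most output per unit of input and therefore sits on the frontier, while the others score below 1.0 despite spending more.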

Is Sweden really No. 1?

Let us take Sweden as an example. According to the SII, Sweden has for many years been ranked number one in the EU in terms of innovation performance. This partly reflects the fact that Sweden also ranks first on inputs under our methodology. With regard to outputs, however, our analysis ranks Sweden only tenth. Sweden thus invests heavily in inputs but is not able to transform them efficiently into outputs. That is not high performance. Such inefficiencies should be the point of departure for the design of innovation policy in Sweden – not the widespread belief that Sweden is an innovation leader in the EU.

Another result of our method is to put the performance of the newer EU member states into better perspective. For instance, Sweden invested 7.35 times more than Bulgaria in innovation inputs, yet obtained only 2.77 times as much output. By our measure, Bulgaria and other eastern European countries may actually show greater efficiency in their innovation systems: they manage their fewer resources more productively.
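A back-of-the-envelope reading of those two multiples makes the point; this simple ratio is only an illustration, not the DEA calculation used in the article.

```python
# Relate the output multiple to the input multiple quoted above.
input_multiple = 7.35    # Sweden's innovation inputs relative to Bulgaria
output_multiple = 2.77   # Sweden's innovation outputs relative to Bulgaria

relative_efficiency = output_multiple / input_multiple
print(round(relative_efficiency, 2))  # about 0.38: per unit of input, Sweden
                                      # gets well under half of Bulgaria's output
```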

Policy makers should be able to identify problems in their innovation systems and then try to select policy instruments that might solve them. Analyzing individual, disaggregated innovation indicators in a comparative way, relating outputs to inputs, is therefore a more useful basis for policy design than mechanically aggregating large sets of data into simplistic rankings – even if the media and politicians like them.

Our approach allows policy makers to identify the sources of inefficiency and act accordingly, reshaping their innovation systems. For example, if we know that Sweden is much weaker on the output side than on the input side, policy makers can see that they need to concentrate on using existing inputs more efficiently rather than simply increasing their volume. The latter is how innovation policy is usually pursued, i.e. with a supply-side bias. Policy makers could instead put more emphasis on demand-side innovation policy instruments, such as functional public procurement that enhances innovation, and thereby move towards a more holistic innovation policy.

Charles Edquist is professor of innovation studies at CIRCLE, Lund University, and a member of the Swedish National Innovation Council. Jon Mikel Zabala-Iturriagagoitia is a lecturer at Deusto Business School.

Editor's note: 

At a conference on 25 June, Commission officials defended their methodology. Slawomir Tokarski, a director in the Commission’s industry department, said Edquist’s proposed alternative methodology would produce an index with the “counter-intuitive” result that many eastern European countries would rank at the top of the list, rather than the Nordics or other developed innovation economies. “It’s very difficult to see this” as making sense, he said. “We welcome any sort of criticism,” he added, but “there are some flaws” in the critique. Officials said they are not planning any major changes to the index methodology in the near future, as they wish to maintain consistency in the data from one year to the next.
