The Mismeasure of Innovation
The National Science Foundation (NSF) recently released its 2020 Science and Engineering Indicators report. The finding making headlines is China’s expanding research funding, publications, and technical workforce. Per NSF Director France Córdova, “[the US] may be the innovation leader today, but other countries are rapidly gaining ground.”
Should we be worried? “Innovation” is the reason for public investments in science, but is research and development (R&D) sufficient for innovation? Is it even necessary? The received wisdom is, “of course it is,” but how do we know for sure? What else might be needed? To find out, innovation policy — like innovation itself — must rely on evidence.
NSF collects, reports, and analyzes a vast amount of data on the R&D enterprise. How much money is spent on energy research? How much is spent by government vs business? How does R&D spending compare to GDP? How does the R&D:GDP ratio of the US compare to China? How many STEM graduates were there in the US and China this year? Last year?
All these data are gathered in the interest of informing policy to realize better innovation outcomes. We collect them — in the words of Franklin Roosevelt in a letter to his science advisor, Vannevar Bush — “for the improvement of the national health, the creation of new enterprises bringing new jobs, and the betterment of the national standard of living.”
The types of data we have frame our policy reality. Godin (2006) notes that the linear model of innovation (Research → Development → Innovation), widely used to advocate for public support of science yet regularly critiqued by academics and practitioners alike, has survived for decades because of the statistics we keep for counting resources and allocating money:
“Having become entrenched… and standardized under the auspices of the Organization for Economic Cooperation and Development (OECD) and its methodological manuals, the linear model functioned as a social fact. Rival models, because of their lack of statistical foundations, could not become substitutes easily.”
Consider a recent change to our national R&D accounting, made in response to the 2015 update of the OECD Frascati Manual, which sets international standards for R&D data collection. The “D” in R&D was re-scoped to include only “experimental” development, and reported US R&D spending appeared to drop by $20 billion a year. The innovation did not stop; it simply disappeared from the statistics.
What does this change really mean? In short, the activities that develop the infrastructure surrounding a new technology (non-experimental development) are now off the books. Prototyping and testing a technology count as R&D, but creating the new tools and facilities that make it fully operational does not (see Figure 1 for an example).
To be fair, this change is reasonable. Data are now consistent across agencies, countries, and sectors. Apples to apples. However, we now have fewer data on innovation activities that are not R&D. Public decisions about how to support innovation will not consider these “last-mile” activities. So, what else do accounting standards prevent us from seeing?
If the purpose of collecting these statistics is to understand our system of innovation, we should broaden the types of evidence we consider. By focusing on “R&D,” and by narrowing what it means, we fall victim to the streetlight effect — only looking where the light shines. Worse, we reify the myth of innovation as the product only of science and engineering.
Working at SRI, I learned the innovation mantra, “NABC.” Innovation requires that a need (N) be met, that it be met by a novel technical approach (A), that the benefits (B) of meeting the need be measurable, and that those benefits beat the competition (C). A good approach is not enough; success requires a sustainable business model. This perspective is not unique to SRI.
Consider a second set of OECD guidelines. While the Frascati Manual deals in “R&D,” the Oslo Manual deals in “innovation.” Per Oslo, “R&D, as formally defined [by Frascati], is neither a sufficient nor necessary condition for either innovation activity or innovation to occur” (OECD 2018). R&D is only one of eight types of innovation activity listed. The others are:
· engineering, design, and other creative work;
· marketing and brand equity;
· IP-related activities;
· employee training;
· software development and databases;
· acquisition and lease of tangible assets; and
· innovation management.
What impact do these have on innovation, relative to R&D? How much do we spend on them? Unfortunately, few data are gathered on these factors, so we do not know for sure. We do know from studies of individual firms, such as those conducted by O’Connor (2019), that “investing in innovation pays off, but not if it is limited to R&D.”
Arora et al. (2019) found evidence that non-R&D activities are lacking. Despite the growth of scientific knowledge over the past decades, economic productivity has stagnated. R&D at universities is not being translated into commercial innovation. Non-R&D activities could span these boundaries, or prevent them from arising in the first place.
One activity that warrants special attention is design: not mere aesthetic embellishment, but the discipline of aligning function to need through rapid, iterative, empathic prototyping. The OECD, in an attempt to measure the impact of design, recognized it as both a factor in innovation and an entirely different mode of innovating:
“[Design is] a user-centred, creative development activity driving innovation, with… potential overlaps with the Frascati definition of R&D. To some extent, this represents an inversion of the linear model of innovation, where usage considerations drive the creative efforts to ensure the implementation of ideas as potentially radical innovations” (OECD 2015).
OECD’s studies show that design correlates with innovation in firms across sectors, and that the more integrated design is into a firm’s work, the more likely the firm is to innovate. Steve Jobs’s direction at Apple was emblematic of this. The first iPod, for example, was technically similar to existing MP3 players of its time, but offered a far better user experience.
The innovation policy community has a decision to make. If we use R&D statistics to advocate for R&D as an inherently worthwhile endeavor — for the sake of discovery — we need not do anything. Many would agree that there is public value simply in learning how systems work, expanding perceptions, and experiencing the wonder of the cosmos.
However, if we also use R&D statistics to make policy for purposeful, practical applications to improve public welfare, then we must collect, report, and analyze new types of data. With data on a wider array of factors, we can test their respective impacts on innovation (productivity, employment, health outcomes, etc.) and make wiser innovation policy choices.