In the 2016 presidential campaign, Donald Trump promised to “make America great again.” Putting aside the racial subtext of the slogan, it harkens back to an era when America reaped the economic and technological benefits inherent in the nation’s status as the world’s preeminent industrial power. During the Cold War competition with the Soviet Bloc, innovation and technological advantage were priorities coequal with economic output for the United States. However, the economic, technical, social, and political factors that produced unusually high economic growth and global industrial dominance were an anomaly, created by conditions that are virtually impossible to re-create through current economic, technology, and trade policies.
The postwar period in the United States is often remembered as a golden era of American business prosperity and rapid economic growth. The public funding of R&D by the federal government both during and after World War II is frequently credited with providing the impetus for innovation and growth. Largely missing from the historical analysis, however, has been a consideration of the effect of financial regulations enacted during the Great Depression on private investment, as well as the effect of the infusion of intellectual capital provided by those fleeing Nazi Germany and the turmoil of the war. The purpose of this article is to synthesize and analyze the effect of these three trends—public R&D funding, changes in financial structures, and highly skilled immigration—on the postwar innovation ecosystem.
In analyzing the rise of the postwar industrial system, its concurrent drive for innovation, and its subsequent evolution, the article is organized into five sections: first, the new mandate from the 1930s onward for the public sector to support innovation and create stability through risk-mitigating regulations and financial intervention in the economy; second, wartime industrial mobilization and the postwar innovation ecosystem; third, the influx of knowledge workers fleeing the rise of totalitarianism in 1930s Europe; fourth, private finance and risk management; and finally, private financing of innovation. In addition to summarizing our findings, we conclude with some thoughts about the road ahead for policymakers seeking to sustain the innovation ecosystem despite the changes that have substantially degraded the industrial landscape.
Government Intervention and Support for the Economy
The Innovation Ecosystem
Innovation systems within industrial economies are complex, adaptive, self-organizing, constantly changing systems. Their interactions and feedbacks create a dynamic flow of ideas, people, and resources that, like a natural ecosystem, is impossible to fully track. It is possible to decompose an ecosystem into discrete elements and study each individually; this, however, provides a limited understanding of the complete system. Likewise, in studying industrial development, an assessment of historical trends and cases can be useful, though necessarily imperfect. We can nevertheless gain numerous insights from a careful study of industrial development within its ecosystem. Our historical analysis of the U.S. industrial system and the innovation it fostered helps provide a more complete picture of the pattern of its unique evolution, current situation, and future prospects. As this work shows, during the decades immediately following the end of World War II, the United States benefited from a unique interplay of policy, financial, and human factors that facilitated comparative industrial supremacy and the concurrent development of innovative technologies within that system. It is now impossible to duplicate that combination of factors, but lessons can be drawn from the period to inform policy choices today. Although it is unrealistic to expect the United States to return to a period of industrial dominance, there is ongoing hope that the nation will remain a leader in innovative technologies, which continue to be an important factor in economic prosperity.
The current innovation system in the United States rests on two fundamental pillars. First, the U.S. government plays a foundational role in risk management and innovation in the economy. This role is expected and relied upon, but frequently forgotten and discounted. Second, the private sector’s voluntary participation in innovation is essential and irreplaceable. Unlike the public sector, the private sector demands much greater returns (both in terms of overall rewards and in shorter timeframes) before it is willing to invest its capital. Both of these elements took their current forms by the late 1940s, growing out of developments, experiences, and attitudes shaped by the Great Depression and World War II. Heretofore, scholarship that holistically considers the U.S. innovation ecosystem has been limited, largely because the governmental role is the purview of political scientists along with science and technology policy scholars, while the role of private corporations, private equity, and entrepreneurs falls under the purview of business and economic history. In this article, we consider the interactions between both of these elements.
For our purposes, we use the definition of technological innovation that the Organization for Economic Cooperation and Development (OECD) proposed in 1991: innovation is the commercialization of a technological invention.Footnote 1 That is, innovation is a technological development based on invention brought to a commercial market. There are two overarching factors that often drive toward, but occasionally hinder, innovation: problem solving and risk mitigation.
First and foremost, innovation is about solving problems or answering some question.Footnote 2 Often solving problems requires looking at things in new ways, investigating some unexplained aspect of the problem, or designing and building new tools. Thus, in the process of solving some problems, new knowledge and understanding are often created. Though the innovation ecosystem is frequently divided into the categories of basic research, applied research, development, demonstration, and deployment, these are merely conveniences for scholars, managers, and policymakers. Most innovators, whether scientists, engineers, or entrepreneurs, are trying to address some issue that confronts them in their work. These problems can be as simple as “why do I observe this?” or “how do I measure this?” or as complicated as “how can we build and safely operate a nuclear power station?”
The second important driver for innovation is the assessment and mitigation of risk and uncertainty. Actors in the innovation ecosystem have to balance the expected rewards of any type of work or use of resources with the expected risks. Organizations must consider the different possible uses of their cash and other resources and the potential returns relative to the projected costs.Footnote 3 Most firms and individuals will not act unless they expect that the benefits will outweigh the costs. This is not to say that all actors in the innovation process are doing careful cost-benefit analyses. Rather, the innovation process involves constant trade-offs that must be considered by anyone participating in it.
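This trade-off can be made concrete with the standard discounted cash-flow test from finance (a stylized textbook illustration, not a claim about how every actor formally decides): a project is worth undertaking only if its net present value is positive,

$$\mathrm{NPV} = -C_0 + \sum_{t=1}^{T} \frac{E[CF_t]}{(1+r)^t} > 0,$$

where $C_0$ is the up-front cost, $E[CF_t]$ is the expected cash flow in year $t$, and $r$ is a discount rate that rises with the project’s perceived risk. The formula makes the central tension of innovation visible: because returns on research arrive late and uncertainly, a high $r$ or a long horizon $T$ shrinks the present value of even large payoffs, which is why risky, long-horizon projects require either unusually large expected rewards or a public actor willing to absorb part of the risk.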
Though the risks and uncertainties are ever-present, evaluations of the innovation system often discount the role of government in the management and mitigation of these risks and uncertainties. Public policies, regulations, and funding are essential for creating an environment where entrepreneurship and innovation can occur. Many of the important innovations that have developed in the postwar period were funded by the federal government, even while most of the direct returns accrued to private companies.Footnote 4 As importantly, the role of government in ensuring economic stability and employment resulted in an era in which companies and financial institutions took on risks and reaped substantial rewards, without having to worry about potentially fatal consequences to the financial and innovation systems, or the larger economy. Despite many changes, that ecosystem largely remains in place.
The Mandate for the Public Sector to Create Stability
Franklin Delano Roosevelt (FDR) was elected President of the United States in the midst of the worst economic crisis of the twentieth century—the Great Depression. During this crisis, the unemployment rate climbed to 25 percent, economic growth was stagnant, and bank failures were common. In the 1930s, there were approximately 9,000 bank failures.Footnote 5 Businesses were suffering and unwilling to expand or invest. When FDR came to office, he saw that a significant part of his mandate was to create economic stability.
The severity of the situation in the winter of 1932–1933 would be hard to overstate. Between December 1932 and March 1933, 447 banks failed or were absorbed by other financial institutions.Footnote 6 On March 6, 1933, FDR ordered the immediate suspension of all banking activities and transactions. After one week, only two-thirds of the 17,796 financial institutions had reopened.Footnote 7 Depositors were able to access only 5 percent of their total deposits,Footnote 8 and deposits fell by one-sixth.Footnote 9 In the three subsequent months, 2,352 of the unlicensed banks were permanently closed, resulting in a loss of $2.3 billion in deposits for individuals and businesses.Footnote 10
The federal government under FDR’s leadership enacted a flood of policies and programs, known collectively as the New Deal, to address the social and economic problems confronting the nation.Footnote 11 In doing so, the Great Depression marked a significant turning point for the United States and arguably for the global community. The role of government shifted from being responsive to economic conditions to trying to control them. Under Keynesian economic ideology and the practice of government intervention, the government took on responsibility for stabilizing the volatility of the business cycle (that is, in enacting countercyclical spending policies) and for ensuring employment.
The New Deal sought to reduce the risks and uncertainties that citizens experienced, both in their personal and working lives. The policies, ideologies, and institutional framework of the New Deal established the right of Americans to have lifelong economic security.Footnote 12 The state became responsible for bearing the risks and insecurities that individuals had previously lived with.
The stock market crash of 1929 is generally accepted as the precipitating event of the Great Depression. One of the priorities of the FDR administration was to stabilize the economic situation and to reduce the risks inherent in the existing system. Regulations were designed to increase transparency and reduce risk-taking by financial institutions.
Numerous laws passed during FDR’s first administration aimed at curtailing financial speculation and increasing oversight of the financial system. The Glass-Steagall Act of 1933 effectively separated commercial and investment banking: commercial banks were deposit-taking institutions, while investment banks dealt in equities and securities. The Glass-Steagall Act prohibited deposit-taking institutions from dealing in or underwriting securities.Footnote 13 The act also increased the safety of deposits through the establishment of the Federal Deposit Insurance Corporation (FDIC), a government-sponsored corporation funded by banks that guaranteed the deposits (or accounts) of individual customers up to a pre-set level.Footnote 14 Regulation Q, also enacted in 1933, capped the interest rates that could be paid on savings accounts. Coupled with the McFadden Act of 1927, which prohibited banks from branching across state lines (i.e., interstate branching), these regulations and restrictions created regional and state banking monopolies and reduced competition.Footnote 15
Financial transparency and disclosure were also important areas of reform during this period. In 1934, the Securities and Exchange Commission (SEC) was established to oversee securities issuance and trading. The SEC was mandated to establish rules and regulations for businesses and financial institutions that would restore public confidence. It was also given control over accounting standards and practices in the United States.Footnote 16 Before the economic crisis of the 1930s, there had been no consistency in accounting practices and financial statements, and no legal requirement for financial statements to be audited by an independent auditor.Footnote 17 The American Institute of Accountants established Generally Accepted Accounting Principles (GAAP), which were to ensure consistency and quality in business accounting.Footnote 18
The business community was initially very concerned about the scope of the proposed regulatory oversight. FDR appointed Joseph P. Kennedy, a successful businessman (and father of the future President John F. Kennedy), as the first chairman of the SEC, an appointment indicating that the SEC would take the concerns of business into account.Footnote 19 Business leaders continued to be skeptical about FDR’s attitude toward business, and they were reluctant to cooperate with the administration.Footnote 20 Notwithstanding their concerns, the regulations developed by the SEC allowed considerable flexibility for businesses.Footnote 21 Nevertheless, these policies created significant structure around financial statements and established government oversight over practices that had previously been wholly in the purview of the private sector.
In addition to policies to reduce risk-taking by businesses and financial institutions, other policies also enhanced individual security. In 1935, Congress passed the Social Security Act, which provided for a secure pension income once workers reached a set retirement age. This alleviated concerns that people would be destitute once they were no longer physically able to work. Although relatively few people actually qualified for Social Security, it did establish the precedent that government would provide for a baseline of economic security for workers.
The mandate to provide economic stability continued into World War II, despite the rise in employment because of the wartime mobilization. Polls and surveys of people indicated that secure employment was still of utmost concern.Footnote 22 One survey in 1943 showed that 84 percent of respondents believed that “government, business, and labor should get together now and make plans to try to do away with unemployment after the war.”Footnote 23
In 1946, just after the war had ended and soldiers were returning home, Congress passed the Employment Act, acknowledging the obligation of the federal government to support employment, business production, and consumer purchasing power.Footnote 24 The act explicitly made employment and economic prosperity a mandate of the federal government. It was signed into law by President Harry S. Truman on February 20, 1946:
The Congress hereby declares that it is the continuing policy and responsibility of the Federal Government to use all practical means . . . [to create and support] conditions under which there will be afforded useful employment opportunities, including self-employment, for those able, willing, and seeking to work, and to promote maximum employment, production, and purchasing power.Footnote 25
In his signing statement that accompanied the legislation, Truman pointed out that “democratic government has the responsibility to use all its resources to create and maintain conditions under which free competitive enterprise can operate effectively—conditions under which there is an abundance of employment opportunity for those who are able, willing, and seeking to work.” He clarified that the government’s role was not “to supplant the efforts of private enterprise to find markets, or of individuals to find jobs.” Instead, he saw the role of government “to create and maintain conditions in which the individual businessman and the individual job seeker have a chance to succeed by their own efforts.” Truman concluded that the act “is not the end of the road, but rather the beginning. It is a commitment by the Government to the people—a commitment to take any and all of the measures necessary for a healthy economy, one that provides opportunities for those able, willing, and seeking to work.”Footnote 26 The Employment Act also established the Council of Economic Advisers (CEA), whose purpose was to provide guidance and advice to the president on policies that would promote economic prosperity and employment. The CEA was composed of business CEOs, acknowledging the essential role that business played in the economy. Policymakers understood that the government could not achieve its objectives without the voluntary support and engagement of the business community.
The U.S. government also pursued policies to encourage international financial and economic stability. The Bretton Woods Agreement of 1944 established a worldwide financial system, committing the participants to limited flexibility in their trade and financial policies. The price of gold was fixed at $35 per ounce, convertible on demand, and all other currencies were pegged to the U.S. dollar. In addition, Bretton Woods established the International Monetary Fund (IMF) and the World Bank. The purpose of the IMF was to stabilize international economies and markets by financing short-term deficits when needed and by acting as the “lender of last resort” in times of economic distress.Footnote 27
This policy of financial stabilization, with a lender of last resort, was crucial for postwar economic development. The restrictive financial system was supported because of the widespread belief that stability was needed.Footnote 28 Though markets were liberalized and international trade grew in the 1950s and 1960s, the underlying expectation that economic stability was the goal of policymakers became deeply ingrained in the financial and business communities. This is not to say that businesses and financial institutions were unaware of risk and uncertainty, nor that they disregarded risk. However, governments and the international financial system placed a floor under the downside of risk. This floor allowed businesses to take advantage of opportunities without concerns about systemic risks.
The Postwar Innovation-Industrial Ecosystem
The Great Depression, followed by the World War II mobilization, made for particularly stressful and unsettling economic times. This extended period did show, however, that important productivity and employment objectives could be achieved when government, business, and labor worked together.Footnote 29 However, the nature and scope of business changed during this period. Self-employment and small businesses were much more common before World War II than they were by the 1970s (see Table 1). In 1948, 12.05 percent of the population was self-employed in nonagricultural industries.Footnote 30 By 1960, this had dropped to 10.45 percent, and by 1970 it was down to 6.94 percent.Footnote 31
Table 1. Self-employment in nonagricultural industries, as a percentage of the U.S. population: 12.05 (1948), 10.45 (1960), 6.94 (1970). Source: Bureau of Labor Statistics.
For individuals, working for a company (as opposed to being self-employed) removes many of the risks associated with employment. Large American corporations were generally very prosperous in the 1950s, largely due to the almost monopolistic position that they held both domestically and internationally and the large investments and expenditures that were being made by the federal government to fight the Cold War. Military needs dominated the research agenda and provided a secure source of revenues for defense contractors.
The innovation system had been transformed as well. The system of the 1950s was far more complex, diverse, and extensive than it had been in the 1920s.Footnote 32 In the 1920s and early 1930s, the research community was relatively small. Companies supported their own research, and research universities relied on private philanthropic foundations for funding.Footnote 33 However, the Great Depression put tremendous stress on research funding. During this period, there was significant debate about what role the federal government could and should play in supporting research. Karl Compton, the president of the Massachusetts Institute of Technology (MIT), proposed that the federal government should increase its funding to research universities. Compton argued that new technologies developed in universities could help to reduce unemployment and increase business productivity.Footnote 34 Though no conclusions were reached during the Depression, World War II provided the justification for significant research expenditures and showed the potential benefits of such investments.Footnote 35
By the end of the war, political leaders had reached a general consensus that scientific research could provide an important foundation for innovation and economic growth.Footnote 36 Vannevar Bush’s 1945 report to President Truman, Science, the Endless Frontier, laid out a plan for funding scientific research in universities and supporting innovation. Massive increases in federal expenditures on R&D occurred for practical rather than ideological reasons: the military needs of the Cold War made spending on research and innovation a top priority. Much of this funding went to companies, as well as academic institutions.Footnote 37 This allowed firms and universities to substantially expand their research capabilities and facilities, with assurances of continued funding.
In the 1930s, the federal government had already recognized its responsibility in providing some social welfare programs and financial regulations. In the 1940s, this commitment extended to providing employment and economic growth. Continued economic growth and employment became the key components of the federal government’s economic policy.Footnote 38 The federal government took responsibility for ensuring economic prosperity and health, financial stability, and funding innovation. By necessity, the government also took much greater responsibility for mitigating any risks associated with the economy, employment, and production for both businesses and individuals.
The Influx of Knowledge Workers
Innovation depends on more than merely financial resources. There must be a capacity to use these resources effectively and an environment in which new knowledge can be created and problems identified and addressed. One of the most important, and often overlooked, components of the innovation system was the influx of human capital into American universities and national laboratories between 1930 and 1950. These individuals typically immigrated to the United States either because they were fleeing the threat of persecution posed by Nazi Germany or because they were specifically recruited after the war for their scientific skills and knowledge. Refugee scientists and engineers brought new knowledge, theories, and methods to the U.S. innovation ecosystem. And they did so without the cost or time that would have been required to generate this capacity domestically.
Almost immediately after Adolf Hitler came to power in January 1933 in Germany, efforts began to rescue German academics at risk.Footnote 39 By 1935, approximately 1,600 scholars, or 32 percent of the academic community, had been dismissed for political or racial reasons. By 1938, approximately 39 percent of the academics in Germany and Austria had lost their positions.Footnote 40 Between January 1933 and December 1941, more than 7,500 German and Austrian refugee scholars came to America, in addition to another 1,500 artists, journalists, or other intellectuals.Footnote 41
With the defeat of Germany in 1945 and the start of the Cold War with the Soviet Union, hundreds of scientists, including many accused of participating in Nazi war crimes, were brought to the United States under a secret intelligence program named “Operation Paperclip.” The purpose of this program was to acquire German scientific skills and knowledge in order to both improve the U.S. military innovation system and to prevent the Soviets from getting them.Footnote 42 Operation Paperclip brought another 2,000 scientific and research specialists, as well as their families, to the United States. Many of these individuals went to work in national laboratories and scientific agencies (such as NASA), while others worked for universities, private companies, defense contractors, or intelligence agencies.Footnote 43
The influx of so many scholars and intellectuals into the American innovation system had a profound effect. Many of the advances in fields such as energy, aerospace, physics, mathematics, and electronics would not have been possible without the expertise and understanding provided by German scholars.Footnote 44 In addition, these benefits were received without the need to invest in the scholars’ education and training, or to wait for students to mature into scholars. Essentially, when the United States decided to invest heavily in the country’s R&D, the human resources were there to utilize these investments. If these immigrants had not come to the United States, these resources might not have been used as effectively or yielded the innovations that they did. Arguably, American scholars and companies would have eventually developed the human capital internally, but this would have delayed progress while programs and students developed to meet the needs.
At the same time, the federal government made substantial investments in research and production facilities. For example, during World War II, private industry provided only 11 percent of the capital for new airplane facilities. Thus, at the end of the war, the government owned the vast majority of aircraft production capacity. For its postwar profitability, the aviation industry relied on the government divesting itself of $4.6 billion worth of facilities and equipment as surplus war property at significantly discounted prices, as well as on continued military contracts.Footnote 45
Private Finance and Risk Management
Although the federal government’s new position as a major funder of research, development, and innovation, along with the influx of human resources from Europe, was essential to the U.S. innovation system, this system was (and is) absolutely dependent on the private sector. Private-sector investments and financial intermediation are crucial components of the innovation system. Between the 1930s and the 1950s, the changes in the private-sector funding of the innovation system were every bit as profound as the changes in public-sector funding.
During the 1930s and 1940s, many companies and banks became very conservative with respect to the risks they were willing to take. Between 1935 and 1961, fewer than 2,100 new commercial banks opened, leaving fewer banks than had existed before the Depression.Footnote 46 New corporate stock offerings were relatively rare during this period, with most stock issuances coming from existing companies.Footnote 47 Private-sector investments in innovation came largely from internal funds (i.e., from revenues) and were generally concentrated in large corporations that could afford the risks and uncertain returns associated with longer-term investments.
Although new products and technologies were developed, investors in postwar America were generally cautious about new businesses competing with existing corporations and were reluctant to invest.Footnote 48 The economic prosperity of the 1950s was generally limited to large existing corporations.Footnote 49 This was due to investors being unwilling to finance new companies and new technologies and to company practices that promoted economies of scale and the suppression of competition. Corporate monopolies and dominant market players were commonplace.
Risk and uncertainty were generally dealt with by avoiding them. American business had grown successful and large during World War II and the immediate postwar period. There was a widespread feeling that skillful American management had led to the successes—ignoring the contingent circumstance of the war, which had severely degraded the productive capacity of most of the other leading industrial nations around the world. Business schools and writers touted the critical rise of professional managers.Footnote 50 Previously, managers had come from the rank and file and were promoted internally into management. Starting in the 1950s, managers were hired externally. Typically these new managers were college educated and trained in business, rather than in production or engineering.Footnote 51 They thus had little familiarity with production methods or employment norms, focusing instead on economies of scale, consolidation in business, and managerial control.Footnote 52 However, the financial success of businesses masked an underlying problem with quality and production.
In the 1950s, American companies were unconcerned about competition or quality. Product demand and production shortages meant that companies could sell everything they produced without worrying about innovation or quality. Instead, professional managers focused on what they understood: meeting schedules and maximizing profits. Managers received financial reports, but generally not on their operations, customers, or quality measures.Footnote 53 Companies were encouraged by consultants such as the Boston Consulting Group (BCG) to maximize their profits by focusing on getting all that they could out of successful production facilities and getting rid of less profitable ones (rather than investing in improving them).Footnote 54 This focus on financial results came at the expense of the declining competitive position of American industry as companies shifted their focus from production to financial success.Footnote 55 It also came at the expense of neglecting product and process improvements (i.e., innovations) in many large, successful industries.
The steel industry is a good example of a successful postwar industry that chose to focus on economies of scale rather than technological innovation. According to historian Paul Koistinen, the steel industry focused on expanding its operations using existing (nineteenth-century) technologies rather than updating them. Managers believed that it was more cost-effective to use existing (and proven) technologies than to take on the risks of new innovations and technologies. Steel manufacturers neglected investments in production facilities and quality improvements in favor of maximizing profits and dividends in order to keep stock prices high.Footnote 56 Unfortunately, by the time it was clear that this strategy was not sustainable, international competition from more advanced production facilities had put considerable pressure on American steel companies, which then sought financial incentives and trade protection from the federal government in order to preserve their dominant positions in the domestic marketplace.
From 1952 to 1973, a revolution occurred in both the understanding of risk and ideological attitudes toward risk.Footnote 57 In 1952, Harry Markowitz, then a graduate student in economics at the University of Chicago, applied the principles of statistics and mathematics to the question of portfolio management. Markowitz demonstrated mathematically that diversification allows investment portfolios to be optimized by reducing risk. He also explicitly connected rates of return with the risks that portfolios carried. His work revolutionized the way that business leaders viewed risk and investments.Footnote 58 Corporations, traders, and businesses began to alter their practices.
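Markowitz’s core result can be stated compactly in modern textbook notation (a standard formulation, not the notation of his 1952 paper). For a portfolio with weights $w_i$ in assets whose returns $R_i$ have covariances $\sigma_{ij}$, the expected return and variance are

$$E[R_p] = \sum_i w_i E[R_i], \qquad \sigma_p^2 = \sum_i \sum_j w_i w_j \sigma_{ij}.$$

For $n$ equally weighted assets that are uncorrelated and share a common variance $\sigma^2$, the portfolio variance collapses to $\sigma^2/n$. Spreading capital across imperfectly correlated holdings therefore reduces risk without necessarily reducing expected return, which is the mathematical heart of diversification.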
The 1960s brought two more important advances in the understanding of risk management and finance. In 1964, William Sharpe published a paper in the Journal of Finance, titled “Capital Asset Prices: A Theory of Market Equilibrium Under Conditions of Risk.” The paper describes the Capital Asset Pricing Model, or CAPM, which can be used to predict the returns on stocks.Footnote 59 Sharpe separated risk into systematic and unsystematic components. Systematic risk was the risk that an investor undertook by being in the market; unsystematic risk was that of an individual stock. Thus, while unsystematic risk could be diversified away, systematic risk could not.Footnote 60 In 1965, Eugene Fama proposed the Efficient Market Hypothesis, in which he argued that markets were efficient at processing information, and therefore the market price of a stock always reflected available information; that is, stocks were always correctly priced. Thus, it was virtually impossible to earn returns consistently higher than the market average (i.e., to beat the market).Footnote 61 This work laid the intellectual and mathematical foundations for managing financial risks.
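In its now-standard textbook form (again, modern notation rather than Sharpe’s original presentation), the CAPM prices an asset’s expected return as

$$E[R_i] = R_f + \beta_i \left( E[R_m] - R_f \right), \qquad \beta_i = \frac{\operatorname{Cov}(R_i, R_m)}{\operatorname{Var}(R_m)},$$

where $R_f$ is the risk-free rate and $R_m$ is the return on the market portfolio. The coefficient $\beta_i$ captures the asset’s systematic risk, the only risk the market compensates with a premium; the unsystematic remainder earns no premium precisely because, as Markowitz had shown, it can be diversified away.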
During the 1950s and 1960s, there was a shift from risk avoidance to risk management.Footnote 62 Investors, managers, and bank officers who had come of age during the Depression were replaced by those not as vividly aware of the crisis and its associated risks.Footnote 63 At the same time, new understandings of risk, economics, and finance emerged. Risk diversification, the CAPM, and the Efficient Market Hypothesis were coupled with the rise of mathematical economics and econometric models.
The postwar agreements on finance and business began to break down in the 1970s, as did financial and economic stability, and volatility increased substantially. In 1971, President Richard Nixon took the United States off the gold standard.Footnote 69 In 1973, the economic shocks caused by the Oil Crisis disrupted the energy stability that had existed during the postwar period. Federal government expenditures driven by both the Vietnam War and the Great Society programs resulted in rapid inflation. Meanwhile, the development of computer technologies and the microprocessor allowed for new data-processing applications and technologies.Footnote 64
There was an ideological shift away from believing that government was needed to provide economic prosperity and stability toward believing that free markets and private business could provide them.Footnote 65 However, this did not change the practical need for stability and investments in innovation. Thus, whenever the economy has stumbled or businesses have struggled, business leaders have almost always turned to policymakers with the expectation that government will promote and support job growth through investments in private business, especially major corporate players.
Starting in the 1970s and accelerating in the 1980s, the changing regulatory environment, growing international competition, new ideologies and technologies, and increased economic volatility spurred financial innovation.Footnote 66 This innovation was aimed at shifting risks, which allowed companies, particularly financial companies, to extract as much short-term profit as possible.
Businesses became increasingly focused on short-term financial returns and less willing to undertake longer-term investments. They came to concentrate more and more on shareholder value and the need to demonstrate short-term financial prosperity and growth.Footnote 67 This process of shifting concern from productive capital development to financial wealth is called “financialization,” and it has two primary causes. First, the rise of large institutional investors, including pension funds, mutual funds, and university endowments, meant that such investors began to have a much larger influence on any individual company. Small investors were, and are, much less likely to select and hold individual company stocks for the longer term. Companies thus came under increasing pressure to satisfy the return expectations of large institutional investors, which were not loyal to a particular company and could significantly move share prices through their buying and selling decisions, rather than to respond to concerns for the longer-term health and profitability of the firm. Rather than using their investments to influence operating decisions, institutional investors focused on share prices and financial returns. In other words, institutional investors signaled that they were unwilling to take on the long-term financial risks of investing in a company’s development and innovations; they would support a company only so long as the expected short-term financial rewards were sufficiently high to entice their capital.
The second cause of financialization was the rise of the theory of Maximizing Shareholder Value (MSV), which took root in the 1980s with the goal of aligning corporate executive compensation with shareholder interests, leading to the coupling of executive compensation with stock prices. MSV established an incentive for managers to focus on increasing stock prices in the short term in order to increase their compensation, even if this had long-term adverse effects.Footnote 68 This incentive was further reinforced by the dramatic increase in stock options as a component of executive compensation during the 1990s: stock options went from 27 percent of total CEO compensation in 1992 to 51 percent in 2000.Footnote 69
These two changes spurred businesses to focus on shareholder value and stock prices. This necessarily resulted in investing less in long-term invention, scientific discovery, and innovation. Traditional business strategy concentrated on innovative products and product markets.Footnote 70 However, business strategy was now shaped less by product markets and more by capital markets and the need for financial returns and economic growth.
Businesses are most commonly interested in the latter stages of the innovation process, particularly in the deployment of technology, since this is the period in which returns can be realized on previous investments. Businesses have been reducing their support for, and involvement in, more basic research in recent decades, focusing instead on activities that utilize externally produced scientific discoveries and technologies.Footnote 71 Policymakers, university administrators, and the public now expect tangible returns to society from public investments in research. Scientific discovery and innovation have become increasingly privatized, with policymakers, university administrators, and business leaders trying to capitalize on potentially profitable inventions. The Bayh-Dole Act of 1980, sponsored by Senators Birch Bayh (D-Ind.) and Robert Dole (R-Kans.), gave universities in the United States the right to claim and protect ownership of intellectual property arising from federally funded research, rather than allowing this knowledge to automatically enter the public domain. Thus, there has been increasing interaction between industry and academia to create, transfer, and utilize knowledge and inventions that may have commercial value.
The development of information and computer technologies greatly expanded the possibilities for financial mobility and innovation. Financial capital used to be governed by personalized relationships between commercial bankers and companies.Footnote 72 This is no longer the case. The globalization of finance and financial markets has shifted financial relationships from relationship-based to transaction-based.Footnote 73 These innovations have allowed financial institutions to off-load risks from their own balance sheets to capital markets. Providers of financial capital are no longer necessarily investing for the long term, but are looking for short-term (3–5 years), high-yield returns.Footnote 74 These components of the new financial paradigm destabilize the economic system and can lead to greater financial crises.Footnote 75
One of the great challenges of this shift toward emphasis on financial wealth is that there has been a decoupling of finance and production.Footnote 76 Investments in innovation are necessarily investments in production capacity and knowledge. Thus, production is tied to innovation in a way that financial resources are not. While financial resources are essential to innovation, they are able to move more freely toward or away from different innovations. This means that innovators need to demonstrate potential benefits to financial investors quickly or else financiers choose more lucrative investments elsewhere.
These financial innovations and technologies altered how firms understood and dealt with risk and uncertainty. Companies and financial institutions began to lobby for fewer restrictions and regulations in the 1970s. They also relied heavily on the mathematical models and economic theories on which their risk-management programs were built. The New Deal programs and policies were based on government taking on the risks that might threaten economic stability and prosperity; businesses began to push back, confident that they could now manage the risks and reap the full rewards. Ironically (or perhaps correspondingly), with the rise of tools and technologies to manage risk, the financial markets and economy became more volatile and risky.Footnote 77 Financial institutions and companies demanded greater, and more immediate, returns on their investments.
Budget deficits have been a long-standing political problem in the United States. The push for smaller government was primarily a reaction to the massive expansion of government size and responsibilities from the 1930s through the 1960s. Governments have come under increasing pressure to justify expenditures and balance their budgets.
Public budgeting is the process of allocating scarce public resources among alternative activities and programs.Footnote 78 Numerous budgetary reforms have attempted to evaluate the effectiveness of resource allocations through the public budgeting process.Footnote 79 Though no performance-evaluation system has proven wholly effective and satisfactory, there is increasing pressure on public administrators and policymakers to justify public investments. For example, the National Science Foundation (NSF) funds the Science of Science and Innovation Policy (SciSIP) program, which was designed to improve the allocation and effectiveness of investments in research and development in science and technology. Funding restrictions also necessarily bring selective funding that targets government priorities: the National Nanotechnology Initiative, the National Institute on Aging (NIA), the National Institutes of Health (NIH), and NASA are all examples of targeted funding.
Scientific research at universities is primarily funded by the federal government. Federal expenditures funded 52.8 percent of all R&D expenditures at academic institutions. An additional 25.8 percent of R&D expenditures came from the academic institutions themselves, while state and local governments funded 5.7 percent and private corporations another 6.1 percent, with the balance coming from nonprofit organizations and other sources.Footnote 80
The need to justify these public investments, along with increasing revenue pressures, has pushed policymakers and university administrators to look for more immediate, tangible benefits from R&D expenditures and shorter pathways through the innovation process. In recent years, academic institutions have embraced greater participation in the commercialization of technologies.Footnote 81 Universities gained greater incentives to push technology transfer and innovation after passage of the 1980 Bayh-Dole Act.Footnote 82 At the same time, state funding for higher education began to decline, and universities began to look for ways to offset revenue losses.Footnote 83 Universities now invest in the development of intellectual property and fund start-up ventures.Footnote 84 Universities are not simply engaged in basic research; they are actively leveraging and exploiting their intellectual property.
Private Financing of Innovation
The uncertainties and risks for the innovator are high. Thus, the development and commercialization of a new innovation are rare, and the potential rewards must be high in order to entice investors to risk their capital.Footnote 85 Debt financing for innovation is difficult, both because debt must be repaid regardless of the success of the investment and because banks are typically hesitant to make higher-risk loans.Footnote 86 In the United States, private-sector investment in innovation has generally been self-funded or financed through equity investments.
Investments in emerging technologies were relatively static during the decades just after World War II, with the bulk of the investments being made by the federal government or, to a lesser extent, by large firms. Approximately 75 percent of all research and development done by defense contractors in the 1950s was paid for by the Department of Defense.Footnote 87 Many of these firms had monopoly positions, as well as large contracts with the military, which ensured their ability to extract rents from any investments in their innovations. With the development of the electronics industry and information and communication technologies (ICT), investors became more interested in the potential returns on these investments.Footnote 88 Moreover, these new technologies had a ready customer in the United States military. Investors were willing to speculate that the federal government’s demand for these technologies would increase, particularly after the Soviet Union launched Sputnik in 1957—creating the perception that the Soviets held an edge in technology.Footnote 89
Investments in research and technology are generally assessed on the risk-reward terms proposed by Markowitz. More often than not, they are evaluated once the technology exists, rather than in the early stages of research and knowledge creation. Since proposed ventures need funding, a lack of private capital often becomes the main obstacle to development and commercialization.Footnote 90 Therefore, business scholarship on innovation and entrepreneurship typically focuses on private finance and frequently neglects the role of the public sector.
Undergirding the innovation ecosystem is the implicit assumption that the public sector will bear the risk of the discovery process and typically share the cost of commercialization. The government becomes the mitigator of systemic risks through policies designed to ensure financial and employment stability, as well as through its role as the lender of last resort. Limited-liability statutes, bankruptcy laws, unemployment insurance, social security, workers’ compensation, and other regulatory constraints are all designed to limit the risks borne by private-sector companies and to ensure that individuals have some protection against economic volatility.Footnote 91
Private finance is inherently conservative. It will not participate in the markets or fund risky ventures without assurances that the potential reward makes it worthwhile. The Great Depression and World War II created an environment in which much of the risk shifted to government, and more recently back to individual workers. Since businesses and financial institutions do not bear the full cost of the risks—that is, because they bear only the individual and not the systemic risk—these firms have been willing to take greater risks, even while telling themselves that they were managing the risks through risk-management programs and mathematical models.
Extracting high financial rewards has been necessary for two reasons. First, private finance has required assurance of these rewards before it is willing to invest and act. It may seem counterintuitive that companies are willing to take on less risk even as the public sector underwrites so much of the insurance against that risk, but the reluctance of businesses and private finance to make longer-term investments reveals that the economic incentives have to be substantial in today’s innovation system. The second reason is that the role of government as the risk bearer has become enshrined in our society, even while our economic ideology claims it is unnecessary. Financial institutions were convinced that they had solved the problems of risk and uncertainty: risk could be insured and parsed away, passed on to those willing to bear it. It was almost an article of faith that systemic risk had been eliminated. However, the Great Recession of 2008–9 and the subsequent bailouts of the financial system demonstrated that the government continues to be the guarantor of the system.
Conclusions
The management and mitigation of risk is an essential role of government.Footnote 92 Americans came out of World War II with vivid memories of fifteen years of economic hardship and sacrifice. Government became responsible for ensuring economic stability and prosperity. In the innovation economy that arose during and after World War II, government also became responsible for fostering innovation and ensuring economic competitiveness. In the United States, that meant that government had to work through private companies and nonprofit organizations to achieve policy goals. Most federal R&D funds go to private business, particularly through the Department of Defense, the Department of Energy, and the National Aeronautics and Space Administration (NASA).Footnote 93 With the exception of the National Science Foundation, most federal R&D is mission-oriented, focused on addressing specific goals and objectives of the funding agency.Footnote 94
From the beginning of the Great Depression through the 1960s, the expectations of government and business with respect to the economy and risk changed substantially. Government became responsible for ensuring that the economy was successful and stable and that individuals and businesses were protected from systemic risks and uncertainties. Public policies and government grew to reflect this new role. These policies shaped business strategies and technological innovation.Footnote 95 The federal government is expected to fund R&D and to ensure that innovation within industry is supported so that companies can compete internationally.
At the same time, businesses are under increasing pressure to show financial profitability and to increase revenues and stock prices, making it more difficult to invest in longer-term endeavors. Companies are confronted with incentives to avoid investing in intangible assets, such as process and quality improvements or innovations, training, and stronger supplier and customer relations, because the returns on these are difficult to estimate and take longer to realize.Footnote 96 This is not to imply that there have been no private-sector investments in innovation in the past seventy years. That is clearly untrue. However, these innovations must be understood within the context of the risk environment and the roles that the public sector is expected to play in the innovation ecosystem.
Increasingly since the 1970s, there has been a shift of risk onto the public sector and onto individuals. Companies have worked to rid themselves of all obligations and expenditures that are not directly related to short-term financial returns. Companies have also eliminated many of their internal R&D functions, relying instead on corporate venture-capital funds aimed at identifying new technologies and companies that can be acquired once most of the early-stage risks have been mitigated. They have also pushed the risks of production onto their suppliers.Footnote 97 However, these risks still exist and must be borne by some entity, and this burden has fallen to the public sector and individuals.
With the shift in focus to shorter-term profits and financial returns, private-sector companies have sought ever-increasing rewards for their investments; private capital is unwilling to commit unless the expected returns are sufficiently high. Professional managers and risk-management theories, coupled with the technologies to run complicated mathematical models, made companies and financial institutions think that they had mastered the risks and uncertainties of the marketplace. Thus, they were willing to take on greater risks while implicitly relying on public-sector guarantees as the systemic risk manager.
The postwar period saw a public and private consensus that the government was responsible for supporting economic prosperity and growth while also mitigating the downside risks.Footnote 98 However, this was at a time when there was little global competition for American industrial production and a willingness to endure the costs of heavy public-sector investments. For instance, in the 1950s the top marginal tax rate for an individual was 91–92 percent,Footnote 99 and for corporations it was 52 percentFootnote 100 (compared to 39.6 percent and 35 percent, respectively, in 2017). Though the consensus for public responsibility for risk mitigation and economic stability continues, the willingness to bear those costs through private-sector contributions has disintegrated.
Manufacturing employed 27 percent of the workforce in 1957, but only 11 percent in 2009.Footnote 101 Between 1998 and 2010, six million U.S. manufacturing jobs disappeared.Footnote 102 However, economists and business leaders have debated whether this decline is concerning. While some argue (and score political points) that the decline is the result of companies off-shoring jobs to lower-wage countries, others contend that increases in worker productivity and automation account for the majority of the decline.Footnote 103
Policymakers are confronted with the issue of how to ensure economic prosperity and employment. A declining manufacturing base can have detrimental effects on a region. Workers who lose manufacturing jobs are rarely able to replace their incomes.Footnote 104 Retaining manufacturing jobs also corresponds with growth in nonmanufacturing jobs.Footnote 105 Thus, a strong industrial base seems to provide a foundation for regional employment and encourages growth of both industrial and service firms.
Though economists and academics argue about specific intervention strategies and their effectiveness, policymakers find it difficult to be passive during economic decline and job losses. In fact, the activist policy strategy is expected and presumed, as outlined in this article. However, attempting to recapture America’s golden era of industrial dominance, middle-class manufacturing jobs, and a coal-based economy by enacting trade barriers, lowering environmental standards, lowering corporate tax rates, and relaxing financial regulations fundamentally denies the unique conditions that made postwar American economic success possible. Public policies can have significant influence on the innovation ecosystem and the economy, but they cannot return the United States to the postwar economy.