It is hard to imagine a better year for this special issue on standards to appear than 2022. It follows, after all, an alarming period of convergence between such global discontinuities as the COVID-19 pandemic, with its sudden, massive effects on healthcare delivery and the nature of work; the rollout of 5G in the United States, with its threat to aircraft safety and consequent disruption of air travel worldwide; accelerated climate change, requiring revision of everything from insurance maps limiting development to the size of culverts and the depth and strength of holding ponds; public disagreement over cybersecurity standards between software engineers favoring open systems and intelligence experts opposing them; and altered international trade patterns, such as Brexit, with its new U.K. Conformity Assessed (UKCA) mark, and the replacement of the North American Free Trade Agreement (NAFTA) by the U.S.–Mexico–Canada Agreement (USMCA). In January 2022 alone, the lack of appropriate standards in all these areas was headline news in the mainstream media.[1] Suddenly, matters that have been practically invisible to the general public, and of interest only to experts, have moved from backstage into the spotlight.
For nearly fifty years after World War II, when the prevailing international system for voluntary standards setting was worked out, major standards regimes, once established, seemed to function more or less like settled law.[2] In recent decades that regime has been gradually eroded, although never completely eliminated. In December 2021, for instance, a new public reversal, widely ridiculed by incredulous critics, aptly symbolized the end of a long, mostly settled era for standards. In January 2021 Britain completed its exit from the European Union (EU) and proposed to create the UKCA mark, the Brexit era's U.K. alternative to the CE mark. While Brexiters had long promoted leaving the EU as an escape from regulatory red tape, most British businesses now had to anticipate a big expense and a tricky transition to a new standards regime. But in his year-end assessment of Brexit, Prime Minister Boris Johnson proudly highlighted a standards reversal as one of Brexit's signal achievements. Because the British government had rolled back its prohibition against using feet, inches, pints, and quarts, the imperial units that were the centuries-old predecessor of the metric system, it was once again possible to produce Winston Churchill's beloved pint-sized bottle of champagne. From the politician's point of view, Brexit was about reclaiming national identity. Most British businesses were not amused: only 8 percent said they would give up the CE mark. Over time the UKCA might in fact offer a chance to improve on some standards, but meanwhile British businesses faced a costly effort of standards revision that could very well require them to observe two standards regimes if they continued exporting to the EU. The UKCA, some warned, might even provoke the kind of resistance that stalled the adoption of the metric system in Britain until its entry into the EU and that led to a hybrid system of measurement in the United States. In another case of U.S. exceptionalism gone awry, the rollout of the 5G telecommunications standard by the telecom companies AT&T and Verizon became a cause célèbre for international airlines. It was reported that 5G was being implemented differently in the United States than in thirty-one European countries and as such could pose a threat to international airlines flying aircraft dependent on varying types of altimeters, for which the Federal Aviation Administration had required no common standard in the United States. Such failures to give standards due attention in a timely manner could be found around every one of the discontinuities cited. This raises a critical question: does the current approach to standards work, or is a reboot needed?
The articles in this special issue of the Business History Review, arranged in roughly chronological order, as well as the extensive introductory overview, convey the impression that standards regimes have evolved from government mandates, imposed or rejected, to public-private partnerships under which standards are observed voluntarily, and that this has been a positive trend. These papers, like much of the historical literature on standards, foreground the front end of the standards process, standards setting, when standards are negotiated or adopted and put in place, or not. But reading between the lines, we can see a more complicated picture of what has determined the effectiveness of standards regimes across time and geography. The complete standards cycle extends beyond negotiation and formal adoption to implementation, maintenance, revision, and renewal. As such it requires support from underlying institutions and mechanisms that may or may not already exist; enforcement by authorities or voluntary compliance from producers; attention by knowledgeable experts; and regular review and revision in the face of changing circumstances, both inside companies and outside in professional societies, universities, and government agencies. One question these articles help us think about is what roles governments have played, and why. Historians are notoriously reluctant to treat their findings as applicable to the future. But this collection of articles brings up some ideas that deserve consideration as both the private and public sectors internationally face the task of shoring up or replacing existing standards regimes.
The Role of Governments: Timing
As all these articles show, standards have been essential components of industrialization, providing coherent frameworks for the rate and direction of change and offering essential tools for implementation. The timing of adoption relative to each wave of industrialization has been a key factor in determining the specific roles standards have played at the national level. We learn in Anne Hanley's paper, “Men of Science and Standards: Introducing the Metric System in Nineteenth-Century Brazil,” that for countries like Brazil that adopted metric measurement prior to industrialization, adopting standard weights and measures based on the metric system was a way to shape industrialization from the top down and to fit it for purpose as a driver of international trade. Government mandated and government funded, Brazil's modified metric system was imposed on the country's private sector with relatively little pushback because major investments in fixed capital had yet to be made. In the case of metric measurement, relative timing was critical. Elsewhere the metric system was resisted or repeatedly rejected: especially in the United States, where private industry, seeking to make its investments more productive, generated its own industrial standards from its traditional measurement system as a way of harnessing the forces of industrialization already underway; but also in the United Kingdom; and even in France, where the system originated. Another example of problems with timing, also in the United States, involved the unusual mandatory standards for food identity issued and enforced by the Food and Drug Administration (FDA) on the authority of congressional legislation. In his paper, “Making Food Standard: The U.S. Food and Drug Administration's Food Standards of Identity, 1930s–1960s,” Xaq Frohlich suggests that these food identity standards might have been more acceptable to both industry and consumers if the producers had not already issued their own voluntary grading standards and then chosen to educate their consumers through a powerful branding process that enjoyed strong industry appeal and participation. By the late 1930s the process of setting standards, especially international standards, was becoming both deliberative and deliberate, with meetings occurring up to four years apart and standards issued only with the assent of a deliberately diverse group of people, mostly related to the engineering profession.
In the period after World War II, the international standards-setting process of the International Organization for Standardization (ISO), alluded to above, was almost universally accepted. The voluntary, consensus-seeking nature of the process, sustained by experts and organizations that benefited from the deliberate pace of standard setting, suited the conditions of the time. Superpowers and powerful oligopolies were in charge, and they controlled the pace of technological change. In the latter part of the twentieth century, however, as multiple powerful new technologies emerged at once, and as lead times shortened and international competition became more intense, both relative timing and speed grew more important. As Andrew Russell, James Pelkey, and Loring Robbins demonstrate in “The Business of Internetworking: Standards, Start-Ups, and Network Effects,” getting in on the ground floor and setting the key standards for an emerging technology created ever-bigger opportunities for private investment along with ever more serious issues for governments. The acceptance of Transmission Control Protocol/Internet Protocol (TCP/IP) as a de facto standard for internetworking showed that the formal standards process could be an impediment to the next wave of industrialization, while circumventing the process offered advantages for governments and private investors alike. Either the formal process needed to be accelerated or some other process had to be devised.
Mandatory or Voluntary
As several of the contributors to this special issue have pointed out, from the early twentieth century on, governments have stayed out of imposing standards except in clear cases of public necessity or market failure. The exceptions, Frohlich tells us, have been safety and health, matters especially relevant to his subject, food: an inherently perishable good, easily adulterated, that is difficult to transport long distances and to trade, especially across boundaries. One major reason that government's role in setting standards diminished over time was that governments lacked the expertise, the information, or the other resources required to mandate standards. In Stephen Mihm's article, “Inching Toward Modernity: Industrial Standards and the Fate of the Metric System in the United States,” for instance, the proponents of the metric system are depicted as naifs of several kinds: prominent academics and government officials who lacked hands-on experience, possibly an appreciation of the likely cost of transition, and the painful shared memory of the conflict that imposing science-based standards on the shop floor had already engendered in the previous century.
There was another compelling reason for governments to avoid mandates: they could lead to resistance, sometimes violent resistance. Reading between the lines, we can see that standards that replaced local systems, concentrating wealth, power, and control in the hands of elites, as they were often intended to do, could lead to serious unrest. Recognizing this threat, Brazil took the time and did the planning to minimize the violence that France and Portugal, both significant trading partners, had encountered. Even when the stated intention was to promote the public good, as in the case of the New Deal FDA's attempts to protect consumers from fraud and contamination in the expanding packaged-food industry, the mandatory system tended ultimately to be resolved in favor of producers rather than the consuming public. On the other hand, the tendency to avoid mandated standards did not necessarily mean that governments, even those that deemphasized regulation, could stay out of the standards process altogether. As Grace Ballor's “CE Marking, Business, and European Market Integration” demonstrates, the CE mark, for instance, was an EU-led initiative that took several decades to come to fruition. After a painful process of trial and error, it finally functioned well when even companies located in non-EU countries could see the value of meeting the simplified overarching standards of a large and growing unified European market. During the Cold War, public-private systems that produced international standards often tapped into significant amounts of government funding, with every intention of rewarding private interests. In the case of internetworking, as Russell, Pelkey, and Robbins reveal, the U.S. Department of Defense (DOD) took an interest in both leading standards—one formal, one de facto—well before the digital revolution became a reality, pouring massive amounts of funding into the development stage to ensure that standards were achieved, much as it had done a generation earlier in providing a developmental market for the transistor.
The question arises: could governments ever have imposed standards that did not benefit the companies that had to provide the expertise and the manpower to implement them? Ballor's account of the CE mark shows how this story plays out. Early attempts at an EC mark failed because the institutions needed to support it were weak or nonexistent, the methods for certifying were missing or hard to use, and the standards proposed bogged down because the necessary level of detail required too many resources to be achieved in a timely fashion. Timing, again, was an important consideration. Only when the benefits of a large European market could be observed in reality did adopting the CE mark come to be about achieving access rather than incurring needless expense. Brussels was able to embrace the mark because it had become an obvious benefit. Where the budgets of government departments were not as ample as that of the DOD, their role was more often to monitor and warn of possible market failures.
Nuts and Bolts: The Need for Many Types of Expertise
The mental image of a standards-setting meeting is of a group of cosmopolitan engineers, scientists, and other expert functionaries negotiating over a set of specifications, aided by blueprints and reams of data. But the actual nuts and bolts of standards, as portrayed in most of the articles in this issue, involve less well trained people trying to use various precise measurement devices to produce and deliver the exact measures of quality specified in orders received, or, later, trying to procure and install the equipment or services needed to certify compliance with standards designed to serve large and powerful interests. In Brazil the challenge would have to be met by a small-town official who had received a heavy box of sometimes rusty reference weights and measures, toted on the back of a mule through difficult country in wet conditions. Even the government's public works engineers, who were required to use the metric system to set an example, would have had difficulty sourcing the necessary equipment and measurement devices to follow through.
In the industrial United States of the late nineteenth and early twentieth centuries, compliance with standards involved shop-floor personnel in many different factories, large and small, facing off with engineers sent from headquarters to test their factory's output for flaws and defects before it shipped. Experimental equipment had to be employed, and achieving standard output meant rejecting a lot of semifinished material, increasing costs, and slowing down production on lines where frustrated workers were often paid by piece rate.[3] In Andrew Carnegie's steelworks, or the foundries at military arsenals, or glass factories wanting to employ unskilled labor, achieving standard output almost always meant wresting control from craftsmen who controlled the (secret and closely held) composition of a melt, the rate of a run, or the setting of a drill or a machine, and assigning the responsibility to supervising engineers, who often had to appeal to higher-ups to support their efforts to withhold clearance from nonstandard products. If it was difficult to formulate and apply company-wide standards in glass and metalworks, it was even harder to promulgate industry standards, which might lead to increased competition and might even be viewed as diminishing the desirable distinctive qualities of a particular product. The fight to achieve basic minimum standards for quality, productivity, and interconnectivity was never-ending; it heated up with each new generation of product, from metals to plastics, glass to ceramics, adding machines to computers, glass tubes to semiconductors, punch cards to software, each ever more sophisticated, ever more extreme in size, whether large or small, and ever more demanding of specialized equipment.
At the turn of the twentieth century the custodians of standards were engineers, often self-taught or apprenticed in the early years and then increasingly formally and scientifically educated. When these professionals won their shop-floor contests, it was because they could call on science as a backup, because customers demanded it, or because European competitors did it better.[4] Setting a new standard, especially in areas of strong public need, could give a company a big commercial advantage. Such was the need for standard signal-light colors in 1911, when thousands of deaths on American railroads were occurring because electrification caused random white or yellow electric light to spread through the countryside.[5] Even the company that set and controlled the new standard still had to train and retrain its factory personnel, equip and maintain the facilities needed to achieve it, and dedicate scientific personnel to monitor it, attend frequent standards meetings, and supply other producers with technical information and support.[6] It is hardly surprising, then, that having weathered decades of conflict with and between factories over established standards, American and British engineers resisted adopting an entirely new set of measurements that would change every other standard based on measurement, along with much of the equipment that achieved or validated its accuracy.
As we see in Ballor's example of the CE mark, even at the national or regional level it was vital not just to specify a standard but also to certify that it had been met, a task requiring a consistent set of methods and measurements that had to be installed and run by a new set of institutions, or by departments annexed to old ones. The original EC mark was stymied by weak preparation or follow-through on just such matters. When we consider this aspect of implementing standards, mandatory or voluntary, we can see why, even when it came to digital technology, something like the TCP/IP de facto standard had an advantage over the internetworking standard being developed, with all due deliberate consultation, in the formal, unrushed process of the ISO. According to Russell, Pelkey, and Robbins, TCP/IP was a working set of methods and protocols already in use in the marketplace and gaining the benefit of feedback collected by one consistent person and his answering machine. Not only did it enjoy a tremendous timing advantage; it was far superior to a prototype that might or might not remain stable. It was a working product in use, around which systems could be built immediately, and it had a group of ingenious, ambitious engineer entrepreneurs pushing its use and supporting its adopters. By contrast, the ISO product was the dutiful output of committee work, put together by committees dominated by teams from Honeywell, IBM, AT&T, and other large companies that intended to benefit by slowing the progress of internetworking to keep it compatible with, and favorable to, their own existing proprietary systems. We can conclude from the various references to the “nuts and bolts” in these articles that while standards were intended to stabilize new markets through interoperability and to afford increased economies of scale, the expertise and the motivation of the people involved in their implementation and maintenance were vital factors in their effectiveness.
Experts and Elites: The Motivation to Serve
To be effective, standards have depended on experts to be their custodians or gatekeepers. As Hanley describes, in the case of Brazil these experts often represented, or were sponsored by, more traditional elites. To serve as an expert involved with standard setting was a way to achieve elite status without inheriting it. In the articles gathered in this issue, such experts have included government officials seeking to raise the desirability of their countries as trading partners or recipients of foreign investment; engineers defending their homegrown standard systems against international competitors; applied scientists like the food chemists seeking to characterize food integrity in the United States; and computer engineer entrepreneurs seeking the resources to turn their knowledge into working systems. In the century or so when the hallmark of any profession was its contribution to the public good, associating the relatively new occupation of engineer with setting and maintaining standards, especially international standards, was akin to requiring academic degrees for credentialing. When the standards process was most effective, it provided a way for the engineering profession in particular to demonstrate its benefit to the national interest. Of course, most of the professionals involved in standard setting were also full-time paid employees of some institution—society, corporation, government agency, or university. Nevertheless, a large part of the work was essentially voluntary. Had the process of international standard setting not involved large intrinsic rewards, the amount of expertise and painstaking attention required over periods of years, if not over entire careers, would have been impossible to compensate. As it was, much of the compensation was the social capital that accrued to both the professions and their members.
What cannot be overemphasized, then, is that the history of industrialization, well known for featuring many outsized individual contributors, is also indebted to the less well known standardizers who were happy with the intrinsic and collective rewards gained from collaborating. Although many scholars have portrayed the rejection of the metric system by U.S. industrialists as a regressive campaign waged by know-nothings, Mihm's alternative account emphasizes the fervor of shared commitment among several generations of American industrial pioneers to their uniquely American approach to industrialization. One wonders, in fact, if the recurring threats from the metric system provided the impetus for renewal. Far from being a matter of craft versus science, the conflict raged between two groups with different visions of modernity: one derived from the hereditary elites' embrace of international knowledge-sharing among the well connected and well educated, and the other from the engineering idealism of people like Frederick Winslow Taylor's associates and followers, who believed in finding the “one best way” of putting knowledge to work based on their local experience. This second group, captains of critical industries—machine tools, metalworking, railroads, and textiles—while fiercely competitive among themselves, were united in opposing a mandatory and exclusive metric system that would nullify their huge prior investment based on the standard inch. How could this self-made elite not feel contempt for opponents so impractical or so unaware that they could treat as unimportant the huge gains in productivity derived from industrial standards and scientific management? How could they dismiss as trivial the enormous and certain cost of destroying several generations of investment in tooling, machine building, and skill in favor of laboratory science?
The engineer entrepreneurs who ushered in the Fourth Industrial Revolution in the 1980s by producing de facto internetworking standards were a similar breed to the industrial pioneers of an earlier era. They too were attracted less by the prospect of financial rewards, which were at that time unimaginable, than by the intrinsic appeal of the work and the vision they pursued collaboratively. Instigated by a demanding and deep-pocketed funder, the Defense Advanced Research Projects Agency (DARPA), they were energized by the prospect of a new way of sharing information globally and instantaneously, outside the walls of the communications giants of the day. Most of them considered a commercial internet unthinkable. Like the systematizers of the previous industrial revolution, they were willing to work exceptional hours and take big risks to embody their knowledge in devices and software and to put them to work. Russell, Pelkey, and Robbins show that while the networking research, and the costly equipment that supported it, was generously funded by the DOD, the entrepreneurs needed venture capital from private sources to develop and sell products. Venture capitalists expected short-term results, and one way to speed up the development of products and to bring them to market rapidly was to bypass the conventional standard-setting process, even though that process was also being supported by the DOD. By creating what became the de facto TCP/IP standard, they also succeeded in bypassing the leading communications and electronics companies of the day, which were well represented in the traditional standard-setting process and which could use that process to control the pace of change.
It is important to emphasize what a crucial part motivation played in both of these accounts of standard setting and execution. Both Mihm and Russell, Pelkey, and Robbins capture in their accounts the personal qualities of the standardizers who were the revolutionaries of their era. These were highly individualistic people, fervently united in their belief in uniform voluntary standards as a way of harnessing new knowledge to economic growth, and more motivated by this pursuit than by the prospect of financial rewards. In both cases the financial rewards turned out to be major.
Ironically, the eye-watering size of the potential payoffs would ultimately undermine the effectiveness of the international standards-setting bodies. As both the internetworking (Russell, Pelkey, and Robbins) and the CE marking (Ballor) stories demonstrate, by the 1980s it was often the loosening of certain standards, or the speeding up of the process of setting them, that created the bigger financial reward. Such actions not only released floods of government funding but also, with the growth of venture capital, attracted ever-increasing amounts of private investment. As the introduction to this issue revealed, other aspects of the standards-setting process would change as well: the size and risks of the problems needing to be addressed, the ability to attract standardizers with sufficient motivation and continuity, and the resources available to support the process in its current form. All these shifts point back to the question raised at the outset: can the voluntary international standard-setting process still in place address the kinds of issues related to, if not directly dependent on, standards going forward?
Conclusion
The articles in this issue offer few solutions, but they do raise some points to consider and identify gaps in the historical research that might be addressed. As the headlined issues at the beginning of this commentary make frighteningly clear, the last few decades of globalization have left us in existential peril on several fronts. Standards, both national and international, could play a critical role in helping to address these matters, just as they have in four waves of industrialization. It is hard to imagine, however, given what this issue says about historical trends in standard setting, that the current mostly voluntary system can be effective in addressing accelerating climate change, the likely recurrence of global pandemics, or the related health and safety concerns that affect entire populations. Existential problems require mandatory rules.
As Brazil figured out almost two centuries ago, governments are in unique positions to mobilize resources, to overcome resistance, to demonstrate and highlight what can work, and, as they seem to have little trouble doing in wartime, to get the private sector to go along. Brazil's example shows that elites, recognizing the possibility of violent resistance to their plans, were prepared to follow through with all parts of the standards process, even if it took longer than the planned decade. The proponents of the ineffective EC mark, later changed to the much more successful CE mark, had to learn this lesson the hard way. Only when the necessary supporting institutions were lined up, the means of certification were deployed, and the advantages of bearing the mark could be observed did it become an effective tool for international trade. We need more research on the implementation, revision, and maintenance of standards.
To beat a dead horse: no amount of rhetoric coming out of climate accords about net-zero goal setting for 2035 or 2050 will matter if it is not acknowledged that the full standards cycle involves supportive institutions, large amounts of up-front resources, trained and motivated workforces, and consistent, continuous follow-through. No amount of warning by the WHO or other international organizations will bring about a life-saving response to viruses if healthcare standards do not represent the health and well-being of all stakeholders: healthcare staff and patients, not just the interests of private investors or even private foundations. We could use more research by business historians into subjects like the Civilian Conservation Corps, the role of the Army Corps of Engineers, or the training of huge numbers of draftees in advanced mathematics, physics, and electronics during World War II, to show how in many countries intensive efforts to set meaningful standards, follow through with pragmatic institutions, and develop committed and educated workforces made it possible to implement major successful public and private projects for decades after the wars ended. And if such efforts failed, why did they? It would also be good to have a better understanding of where and how mandatory rules have worked, and why they sometimes have not.
Finally, these articles tell us that there is nothing boring or bloodless about standards when they are understood as an entire process, one that has required so much ingenuity, determination, and zeal and that involves many unsung heroes. Mihm says it best: “We live in the world they built.” And, one might add, we might not have survived earlier existential crises had they not done what they did.