The United States experienced a manufacturing crisis in the 1970s and during much of the 1980s. The competitiveness of American plants declined significantly vis-à-vis factories located in Japan. The crisis was so acute that, in many sectors, U.S. corporations were forced to close a significant fraction of their production facilities. Losing market share to Japanese competitors, integrated steel manufacturers shut down more than half of their mills and laid off 250,000 workers in the 1970s and early 1980s. Massive plant closures occurred in the automotive industry. Entire industries disappeared, notably machine tools and consumer electronics. The troubles of U.S. industrial firms were partially self-inflicted. Convinced of their technological superiority, many corporations had disregarded signs of growing overseas competitiveness. They had underinvested in production capacity and had thereby created an opening for foreign competitors in domestic and international markets. They had emphasized product innovation, giving low priority to the development and management of manufacturing processes. In several industries, they had also given away their technology, including production technology, by licensing it to foreign corporations.Footnote 1
But the manufacturing crisis can also be explained by the superior performance of Japanese industry. Japanese factories were in the main more productive and yielded higher-quality products than their American counterparts. In fact, the firms and industries that survived the Japanese onslaught were those that adopted Japanese manufacturing practices. For example, steelmakers introduced Japanese manufacturing technologies and quality control techniques in their plants. Automotive manufacturers dispatched study teams and set up joint ventures with Japanese corporations in order to learn Japanese production methods. In the process, they adopted many innovations made at Toyota and Nissan: quality circles, just-in-time techniques, and lean production. This pattern can also be found in high-tech industries, where Japan emerged as a major competitor in the second half of the 1970s and the first half of the 1980s. For instance, in order to compete with Canon and Minolta in the office copier business, Xerox followed the Japanese model. It improved the reliability and quality of its products. It also significantly reduced its manufacturing costs.Footnote 2
Another high-tech firm, Intel Corporation, provides an interesting window into the manufacturing crisis and the response of American management to it. In 1982, Intel was the leading maker of integrated circuits in Silicon Valley. It was the eighth-largest semiconductor corporation in the world and a major force in its most dynamic market segments: memories and microprocessors. Intel, however, like many other U.S. semiconductor firms, experienced a major financial and production crisis in the mid-1980s. The crisis at Intel was triggered by poor manufacturing performance. The corporation fabricated its memory products, DRAMs and EPROMs, at much higher costs than its Japanese competitors—a weakness that these firms exploited by slashing prices to penetrate Intel's markets. Intel's predicament was also caused by its decision to grant Japanese corporations the right to produce its microprocessors. As a result of the price war in memory chips and growing competition in microprocessors, Intel seemed close to bankruptcy in 1985 and 1986. Within the next few years, however, the company staged a dramatic comeback. It grew very quickly and even surpassed its main Japanese competitors. By 1992, Intel was the largest semiconductor manufacturer in the world. Its growth accounted for much of the resurgence of the U.S. microelectronics industry as a whole during this period.Footnote 3
Intel's observers and the biographers of its main executives (Gordon Moore, Andrew Grove, and Robert Noyce) interpreted the company's comeback as the result of several factors. Critical was the choice to abandon the DRAM market and refocus the corporation on microprocessors. This enabled the firm to tap into the rapidly growing demand for these chips, as the personal computer market boomed in the late 1980s and the first half of the 1990s. Much has been written about this decision. Robert Burgelman, for example, claims that by allocating manufacturing capacity to microprocessors instead of DRAMs, mid-level managers at Intel de facto made the choice of leaving the DRAM market in 1983 and 1984. According to Burgelman, Moore and Grove later endorsed this decision and reoriented the corporation's engineering resources toward microprocessors.Footnote 4
In their biography of Moore, Arnold Thackray, David C. Brock, and Rachel Jones emphasized another important aspect of Intel's reorientation toward microprocessors: the decision made in 1985 by Moore and Grove, the firm's main executives, to sole-source the 386 microprocessor. Up until that time, chipmakers granted manufacturing rights for their products to at least one other corporation in order to have their integrated circuits accepted in the marketplace. With Intel's first 32-bit microprocessor, the 386, Moore and Grove decided to break with this practice. From then on, Intel became the sole supplier of its microprocessors. This decision enabled the firm to control pricing and, later, to greatly benefit from the boom in the demand for its chips.Footnote 5
Other authors point out that Intel's executives sought to change the rules of the game in order to protect their market position. For example, Leslie Berlin argues that Intel's leaders partially shaped the U.S. government's response to the Japanese challenge in semiconductors. She shows that Noyce became the semiconductor industry's main lobbyist in Washington, D.C. As Sematech's CEO, he later oriented the new organization toward the strengthening of the American manufacturing equipment industry, which was then severely outcompeted by the Japanese.Footnote 6
These accounts significantly enrich our understanding of Intel's resurgence. But they largely ignore manufacturing, the primary source of the firm's difficulties and the main focus of executive attention and concern in the second half of the 1980s. Starting in the mid-1980s, manufacturing at Intel experienced a revival and the firm was gradually transformed into a production powerhouse. This thorough transformation was not the sole cause of Intel's resurgence, but it was a necessary condition for its comeback. It was also essential for the firm's extraordinary expansion for much of the 1990s. This article examines the sources of Intel's production problems. It also investigates the revival of its manufacturing operations and the circumstances that made this renewal possible.
Like other chipmakers based in Silicon Valley, in the 1970s Intel built manufacturing operations characterized by flexibility, labor intensiveness, and the primacy given to process development. This approach to chip fabrication proved increasingly inadequate, however, when Intel faced growing competition from Japanese corporations. Intel's deteriorating competitiveness in manufacturing was not immediately apparent to the corporation's upper management. It took Moore, Grove, and other upper managers much longer than most U.S. semiconductor executives to understand the full extent of the production crisis they faced. They responded to the firm's declining manufacturing position in two phases. At first, in the late 1970s and early 1980s, they focused on reaching Japanese quality levels. They brought in “total quality management” techniques from Japan. They also concentrated on improving yields (yield, a fundamental metric in microelectronics, is the percentage of good chips coming out of the manufacturing line). The results were mixed. The new techniques enabled the production of microchips with fewer defects and higher yields. But the reliability and productivity of Intel's factories remained very low, much lower than those of Japanese plants.
Drastic reductions in the price of Japanese chips in 1985 and the deep crisis that ensued at Intel brought the corporation's weakness in manufacturing to the fore. To keep the firm afloat, Craig Barrett, the new head of manufacturing, initiated a fundamental reform of production. He demanded that Intel broadly adopt Japanese manufacturing technologies and operating procedures. But, under his direction, Intel's engineers also made innovations of their own in order to achieve Japanese-style manufacturing performance. They put production on a scientific footing. They developed a new methodology for rapidly increasing production volumes, the “copy exactly” methodology. Relying on a new articulation between process development and manufacturing and the complete standardization of fabrication units, this methodology augmented overall yields and significantly accelerated the rate at which Intel increased production volumes. By the early 1990s, Intel's plants were as productive and efficient as the factories of its main Japanese competitors.
Manufacturing the Silicon Valley Way
Intel, formed by Moore and Noyce in 1968, quickly established itself as a technology leader in the microelectronics industry. Its process engineers developed the silicon gate manufacturing process that enabled the production of very stable microchips with thousands of transistors. Silicon gate became the dominant process for making integrated circuits in the 1970s. Intel's engineers also originated entirely new product categories: DRAMs, memory chips that needed to be constantly refreshed in order to hold their information; EPROMs, integrated circuits that retained information after the power had been switched off; and microprocessors, microchips that integrated a computer's central processing unit onto a single piece of silicon. It was around these chip categories that Intel built its product line and that very large markets emerged in the 1970s.Footnote 7
Intel manufactured these innovative integrated circuits in ways that closely resembled those of other semiconductor corporations in Silicon Valley. In the 1960s and 1970s, a distinctive style of microchip manufacturing emerged in the region, which differed significantly from those of other semiconductor-producing areas such as Texas, New York, and the mid-Atlantic states. The first characteristic of Silicon Valley's production style was the primacy of process development. Most Silicon Valley firms competed on the basis of their production processes rather than through manufacturing execution. They emphasized flexibility in production. As a result, unlike Motorola and Texas Instruments, they did not automate the processing of integrated circuits. Another important characteristic of the Silicon Valley style of production was its unpredictability. It was not uncommon for the “fabs”—the fabrication units—to experience “yield crashes.” In other words, no salable product came out of the manufacturing line. It took weeks to resume the production of marketable integrated circuits. Overall, in the 1970s the yields of Silicon Valley–based firms were lower than those obtained by IBM, Motorola, and Texas Instruments, sometimes by a factor of two. The final characteristic of the Silicon Valley style was virulent opposition to unions. Every time a union sought to organize a microchip plant, management mounted vigorous defenses. As a result, Silicon Valley's fabs remained nonunionized. This contrasted sharply with semiconductor factories in the East that had union representation, notably at Western Electric.Footnote 8
Intel's production operations exhibited many of the characteristics of the Silicon Valley style of manufacturing. Its fabs were labor intensive. They were nonunionized. Their output was unpredictable. In fact, production at Intel represented an extreme form of Silicon Valley manufacturing. Moore and Noyce's primary strategy, like that of other Silicon Valley firms such as Hewlett-Packard, was to stay at the leading edge and charge high prices for the performance of their products. They were the first to market with new and highly complex microchips. They sold these chips at very high margins (because Intel was their sole producer) and reinvested these profits into the further development of new products and processes. Once other firms copied the firm's microchips and lowered the prices for them, Intel discontinued their fabrication and introduced the next generation of products to the market. Because of their focus on being first to market with complex devices, Intel's upper managers gave a very high priority to the development of advanced processes. For much of the 1970s, Intel held a significant lead in manufacturing techniques. It could make circuits that no other corporation could fabricate.Footnote 9
Another characteristic of production at Intel was its unique articulation of process development with manufacturing. In most large Silicon Valley firms in the 1970s, R&D engineers developed new processes in the laboratory and these processes then moved to the fabrication units. Because the equipment used in the laboratory and that employed in the plant were often different, the production engineers had to re-engineer the processes developed in the laboratory and adapt them to their own machines. As a result, the transfer of new processes from the lab to the fab was time consuming. It was also fraught with uncertainty. To avoid this problem at Intel, Moore and Noyce demanded that development engineers create their processes directly on the manufacturing lines. These engineers employed the same equipment as the production engineers and worked alongside them. This new arrangement abolished the gap between development and production and facilitated the fast transfer of new processes to manufacturing. This capability was critical for the firm, as its business model depended on speed.Footnote 10
In short, manufacturing was neither efficient nor productive at Intel in the 1970s. Production costs were high. Yields were low, even by Silicon Valley standards—routinely in the 10 to 20 percent range. The utilization rate of production equipment was on the order of 20 percent. In other words, workers operated manufacturing equipment only a fifth of the time. This was due to scheduling issues and frequent equipment breakdowns. But in spite of their shortcomings, the production operations enabled Intel's remarkable commercial and financial success. They fabricated complex microchips that were in great demand and could be sold at a high price. Responding to the burgeoning demand for DRAMs, EPROMs, and microprocessors, Intel expanded very quickly. Its sales grew from $4.2 million in 1970 to $854.6 million ten years later. Its workforce increased from 200 to 15,900 during the same period. No other U.S. semiconductor firm expanded as rapidly in the 1970s. None was as profitable. Unsurprisingly, Dun's Review named Intel one of the five best-managed companies in America in 1980.Footnote 11
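Since yield and equipment utilization recur throughout this account, a minimal sketch may help fix the two metrics. The figures below are invented to fall in the rough ranges just cited for 1970s Silicon Valley fabs; they are illustrative assumptions, not actual Intel data.

```python
# Illustrative computation of the two manufacturing metrics discussed in
# this article. All numbers are hypothetical, chosen to match the rough
# ranges reported for 1970s Silicon Valley fabs; they are not Intel data.

def die_yield(good_dice: int, total_dice: int) -> float:
    """Yield: the fraction of salable chips coming out of the line."""
    return good_dice / total_dice

def equipment_utilization(productive_hours: float, available_hours: float) -> float:
    """Share of time a tool actually processes wafers rather than sitting idle or broken."""
    return productive_hours / available_hours

# A hypothetical production lot: 4,000 dice processed, 600 of them good.
print(f"yield: {die_yield(600, 4_000):.0%}")                 # 15%, within the 10-20% range
# A hypothetical week: a tool runs 34 of the 168 hours available.
print(f"utilization: {equipment_utilization(34, 168):.0%}")  # roughly 20%
```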
Improving Quality, Increasing Yield
Starting in the late 1970s, changes in the competitive environment increasingly called into question the ways in which Intel approached the manufacture of microchips. Japanese chipmakers became a major force in the microelectronics industry, with much of their strength coming from manufacturing. Japanese corporations fabricated higher-quality microchips than their U.S. counterparts. They also produced integrated circuits at much lower cost. Japanese fabs were automated and their managers gave greater attention to cleanliness than their U.S. counterparts. As a result, Japanese plants had higher yields. In 1986, the average yield of Japanese microelectronics firms was on the order of 75 percent. This was fifteen points higher than the average American yield for similar chips at the same time. Japanese firms could sell their integrated circuits at significantly lower prices than American corporations and still make a profit. Finally, Japanese manufacturers increased their production volumes (“ramping” in semiconductor parlance) much faster than U.S. firms, which gave them a significant commercial advantage. They could respond much more quickly to customer demand. In contrast, American corporations did not have this capability. When the global demand for integrated circuits surged in 1980, 1983, and the first half of 1984, they could not fill the customers’ orders. Sometimes, they had to redirect their clients toward Japanese suppliers, as Intel did for DRAMs in 1980.Footnote 12
Japanese strength in production was not immediately apparent to Moore, Grove, Noyce, and other Intel executives. In the late 1970s, and even in the early 1980s, they explained Japanese successes mostly through external factors: government subsidies for research and development, easy access to capital, and the closure of the Japanese home market to foreign competitors. It was only gradually that Moore, Grove, and Noyce came to the realization that Japanese firms had a competitive edge in manufacturing. They generally did so later than most American semiconductor executives. For example, Charles Sporck and Floyd Kvamme at nearby National Semiconductor understood as early as 1977 that the Japanese represented a major competitive threat. It also became clear to them within the next few years that superior manufacturing efficiency and productivity were the primary source of Japanese competitiveness. In contrast, at Intel, Moore, Grove, and Noyce focused mostly on issues of quality and yield around the same time. In 1978, they discovered that Japanese corporations produced higher-quality components than they did. This was a rude awakening, as Moore later recalled: “I remember Bob Noyce coming back from Japan where he had visited several customers over there and the thing that he brought back was the fact that there was interest in higher quality than the industry had been delivering. The Japanese were delivering products with many fewer defects than we had been historically. I remember he expressed considerable concern that this was something that we had to pay attention to.”Footnote 13 In Moore and Noyce's view, Japanese corporations had changed the rules regarding quality. It was imperative that their firm reach Japanese quality levels.
In the summer of 1980, Moore heard rumors that Intel significantly lagged behind U.S. and Japanese competitors on another important metric: manufacturing yield. These rumors were later confirmed by hard data supplied by IBM. As Moore later recalled, “IBM was telling us that our yields were way below what they ought to be and way below what theirs were. They generally would not share any data with us, but when I finally got to see some, I was shocked at how much better IBM was doing than we were.”Footnote 14 In the following years, Moore and Grove received more reports about Japanese prowess in microchip fabrication. An important source of information was Intel Japan, the firm's sales office in Tokyo. The American manager of Intel Japan was fully aware of Japanese production capabilities. He campaigned tirelessly within Intel, making the point that Japanese manufacturing lines had higher yields than American ones and that Intel had to close the gap in production. But his reports were received with considerable skepticism within Intel's manufacturing organization. “Nobody believed him, including me,” the head of component production at Intel, Eugene Flath, later reported.Footnote 15
This internal debate shifted when, starting in the fall of 1983, Intel engineers visited Japanese semiconductor plants. This was a time of growing demand for Intel's products, but the corporation did not have enough manufacturing capacity to meet this surge in orders. With Moore and Grove's support, Flath and Willard Kauffman, the head of fab operations, decided to devote Intel's manufacturing capacity to the production of the most advanced and profitable microchips. They chose to outsource the fabrication of their oldest products to Japanese chipmakers. Several Japanese corporations, especially second-tier firms such as Oki and Sanyo, were interested in this business. They opened their plants to Intel's engineers charged with investigating their capabilities. What these engineers found was astonishing. Yields were in the 90 percent range. The factories also produced chips at much lower cost than Intel's plants.Footnote 16
Moore and Grove responded to mounting evidence that Intel was behind Japanese producers and some American corporations in microchip fabrication by instructing Flath and Kauffman to increase production yields and effect changes in microchip testing. Flath and Kauffman strengthened testing groups by hiring more engineers and purchasing advanced testers. These testing groups upgraded their own procedures in order to identify more malfunctioning circuits. They also built trust in Intel's testing techniques, by working closely with the customers’ test engineers and harmonizing their own procedures with theirs. At Moore's request, Flath and Kauffman also initiated several crash programs to increase yields. For example, in mid-1980, they launched a yield improvement program across all fabs. This program identified factors of yield loss such as particle counts, wafer scratches, and thermal stresses. It also developed new techniques to address these issues. For example, engineers processed wafers at lower temperatures in order to avoid thermal stresses and dislocations in the crystals. Advances at one fab were then transferred to the other plants.Footnote 17
Flath and Kauffman also adopted the Japanese philosophy of quality, namely, the idea that product quality derives from manufacturing quality and that the manufacture of quality products requires high yields and the operation of reliable factories. Starting in 1981, they brought in “total quality management” techniques from Japan. They gave more autonomy to workers in order to spot problems earlier in the many steps needed to fabricate microchips. They initiated programs to automate integrated circuit assembly and aspects of wafer processing. They introduced just-in-time techniques to speed up production and reduce inventory. They also brought statistical process control (SPC) techniques—which rely on statistical analysis to monitor and control manufacturing processes—to several fabs. To improve equipment utilization, they required that technicians be better trained and maintain each piece of equipment more carefully in order to avoid expensive and time-consuming equipment breakdowns. Following the Japanese model, they also developed closer relations with equipment and materials suppliers.Footnote 18
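For readers unfamiliar with SPC, the sketch below illustrates its core mechanism: control limits derived from baseline measurements of an in-control process, beyond which a new measurement triggers investigation. The monitored parameter and all values are hypothetical; this illustrates the general technique, not Intel's actual implementation.

```python
# Minimal illustration of statistical process control (SPC): derive control
# limits from baseline measurements of an in-control process, then flag any
# later measurement falling outside mean +/- 3 standard deviations.
# The monitored parameter and all numbers are hypothetical.
import statistics

baseline = [1.02, 0.98, 1.00, 1.01, 0.99, 1.03, 0.97, 1.00]  # e.g., a film thickness, normalized
center = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
ucl, lcl = center + 3 * sigma, center - 3 * sigma  # upper and lower control limits

for run, x in enumerate([1.01, 0.99, 1.08, 1.00]):
    status = "in control" if lcl <= x <= ucl else "OUT OF CONTROL: stop and investigate"
    print(f"run {run}: {x:.2f} -> {status}")
```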
The results were mixed. The corporation's manufacturing engineers succeeded in significantly improving product quality. The number of defects per million went down from 8.5 in 1980 to 1 in 1984. Convinced that Intel had significantly tightened up testing and increased the quality of its chips, some customers skipped incoming inspection for its integrated circuits. Intel's engineers also succeeded in doubling manufacturing yields. By mid-1984, yields had reached 50 percent on average. But the productivity of the fabs remained stagnant. In spite of significant training efforts and the introduction of preventative maintenance, the rate of equipment utilization did not change; it was still in the 20 percent range. Intel also encountered significant problems with the fabrication of EPROMs, the firm's most profitable line of products, in its new fab in Albuquerque. Because of its production troubles, the corporation did not manufacture enough EPROMs to meet the growing demand for these chips during the boom times of 1983 and the first half of 1984. It was NEC, Hitachi, and Mitsubishi that filled the customers’ orders.Footnote 19
The Crisis of 1985 and 1986
To take over the market for EPROMs, starting in the fall of 1984 Japanese corporations drastically reduced their prices for these chips. They slashed EPROM prices by 90 percent over a nine-month period. To keep market share, Intel was forced to match these prices. At the beginning of the price war, it sold its most advanced EPROM for thirty dollars apiece. A year later, the same product was in the three-dollar range. A chip that had been highly profitable now sold at a price well below its manufacturing cost. The price offensive in EPROMs was compounded by deteriorating market conditions for Intel's other products. This was especially the case for microprocessors, which represented about a third of Intel's total sales. The market for personal computers, the main outlet for these chips, contracted significantly in 1985 and 1986. Moreover, because Intel had not been able to meet the demand for its microprocessors in 1983 and 1984, it had permitted other firms, including NEC, to second-source its products. In 1985 and 1986, NEC sold its microprocessors at low prices in order to win business away from Intel.Footnote 20
The price offensive in EPROMs and microprocessors and depressed demand for its other products devastated the corporation. Sales went down 22 percent in 1985 and 1986: from $1.62 billion in 1984, they fell to $1.26 billion in 1986. More importantly, as EPROMs—its main moneymaker—became unprofitable, Intel went into the red. The corporation lost money for eighteen consecutive months in the second half of 1985 and for all of 1986. In 1986 alone, Intel lost $200 million. No other U.S. semiconductor firm lost as much money during the downturn. None was unprofitable for so long. Would Intel survive or would it eventually go bankrupt as Synertek, another Silicon Valley corporation, did at the time? This question was front and center in the minds of Intel's upper managers and employees for much of 1985 and 1986. Intel was not alone in this crisis; the entire U.S. semiconductor industry was in shambles. In 1985 alone, it lost a billion dollars and 54,000 jobs.Footnote 21
To safeguard their ailing business from further Japanese encroachments, Moore, Grove, and their staff took aggressive measures. Informed by the collapse of other U.S. industries, they understood how critical it was to act quickly and decisively. They turned to the courts and the federal government. Their main defensive measure was a petition against Japanese corporations for dumping in EPROMs. In May 1985, an Intel employee based in Denver discovered a flyer from the local Hitachi sales office enjoining the firm's salespeople to “find Intel sockets, quote 10% below their prices. If they requote,” the flyer added, “go 10% again. Don't quit until you win!”Footnote 22 The discovery of this flyer declaring an all-out price war on Intel led to studies by Intel's accounting department showing that the Japanese sold EPROMs at significantly lower prices than their manufacturing costs. On the basis of this evidence, in October 1985 Intel, along with several other corporations, filed a petition with the United States International Trade Commission and the Commerce Department against eight Japanese corporations, including Hitachi, Fujitsu, NEC, and Toshiba. It argued that these firms dumped their EPROMs on the U.S. market and requested that the Commerce Department impose a heavy duty on Japanese EPROMs.Footnote 23
The dumping petition persuaded Japanese manufacturers to raise their prices and reduce their shipments of EPROMs to the United States. It brought longer-term benefits as well. Because the petition had been favorably received by the International Trade Commission and the Reagan administration was about to impose antidumping duties on Japanese EPROMs, the Japanese government signed a trade agreement with the United States in July 1986. This agreement set a price floor for Japanese semiconductors and opened 20 percent of the Japanese market to American chips, thus putting an end to the Japanese price offensive in EPROMs and other circuits. Another factor favoring Intel was the increase in the value of the yen forced on Japan by the U.S. government in 1985. By the end of 1986, Intel retained a sizable share of the world market for EPROMs, but the business was barely profitable and the Japanese were now the dominant players in this market segment.Footnote 24
In the case of microprocessors, Moore and Grove resorted to intellectual property law to protect market share. NEC represented the main Japanese threat for these chips. It had recently introduced a family of microprocessors, the V series, to the market. These microchips used a variant of Intel's microcode, the software that translates a processor's instructions into circuit-level operations, but they were faster than Intel's products. To defend its dominant position in microprocessors, and taking advantage of growing protection for intellectual property in the United States, in early 1985 Intel filed a lawsuit against NEC contending that the Japanese manufacturer had stolen its microcode. It also asked that NEC be prevented from selling V-series microprocessors in the United States. Intel's objective was threefold: to establish microcode as a form of software protected under federal copyright law; to forbid NEC and other firms to use its microcode; and to create enough legal uncertainty regarding NEC's chips to convince American system corporations to buy Intel-made microprocessors. Like the dumping petition, these legal proceedings affected the market. They dissuaded system vendors from purchasing microprocessors from NEC. The legal case was later decided in Intel's favor and made it difficult for NEC and other Japanese manufacturers to establish themselves as significant suppliers of microprocessors in the U.S. market.Footnote 25
Rebuilding Manufacturing
The dumping petition, the rising value of the yen, and the lawsuit for intellectual property infringement gave a significant reprieve to Intel—and Moore and Grove made the most of it. In the second half of the 1980s, they reinvented their business and revitalized their manufacturing operations. Their first step was to exit from DRAMs, a technology and market they had pioneered in the early 1970s but in which Intel was now a negligible player. By 1984, DRAMs represented only 5 percent of the corporation's total sales. They consistently sold at a loss. Taking stock of their defeat in DRAMs, in November 1984 Moore and Grove decided not to bring a recently engineered one-megabit DRAM chip to production. They stopped investing in this technology entirely. One year later, they announced that Intel would stop supplying DRAMs. They were not the only semiconductor executives to make this decision. In 1985, most American microelectronics firms left the DRAM market, abandoning it to the Japanese. Moore and Grove also de-emphasized EPROMs. They refocused the firm's technical resources on microprocessors.Footnote 26
To reinforce their position in microprocessors, Moore and Grove sole-sourced their new chip, the 386 (introduced to the market in October 1985). Because it had widely second-sourced its previous processors, Intel had, as Moore admitted in the Management Report of the 1986 annual report, “lost control over a generation of our products and created our own competition.”Footnote 27 To reverse the situation and control pricing for the new chip, Moore and Grove made the bold move of refusing to second-source it to other semiconductor firms. This decision went against all accepted practice in the microelectronics industry and encountered considerable disbelief among customers. Sole-sourcing won acceptance in part because the firms buying the 386, such as Compaq, were relatively small (IBM was very late in adopting the 386). These customers were highly dependent on the chip for their own products and, as a result, had little leverage over Intel.Footnote 28
To withstand the Japanese offensive, Moore and Grove also reformed production. By then, they fully understood that they were severely outcompeted by the Japanese in terms of manufacturing. Profound changes in production were required for their firm to survive over the long term. Moore and Grove also needed to greatly improve the yields and reliability of their fabs to convince customers to buy microprocessors, for which Intel was the only source. To transform Intel into a reliable and low-cost producer of integrated circuits, Moore and Grove turned to new management. In late 1984, they demoted Flath, who had headed component manufacturing, and reassigned him to Intel Japan, where he spent the last three years of his career. Kauffman, who directed die production (i.e., all wafer fabs), was asked to quit his position and manage the supporting functions. He soon left the corporation. In their place, Moore and Grove installed Craig Barrett. A former faculty member in materials science at Stanford University, Barrett had joined the corporation in the mid-1970s, heading reliability engineering and quality assurance. He had also managed the firm's worldwide assembly operations and significantly increased the productivity of the assembly plants. In his new post, starting in January 1985, Barrett managed all the fabs. Six months later, he became Intel's manufacturing czar, overseeing die production, die contracting, assembly and test, process development, and the supporting functions.Footnote 29
Barrett demanded that Intel's manufacturing groups make “some very serious readjustments.” He later remembered that “Intel was going to have to be an efficient manufacturing company to survive. We were unpredictable. We were not cost competitive. We were not manufacturing competitive and the realization was that we needed to do things differently.”Footnote 30 Barrett conveyed this message in a forceful way to production engineers and managers. One of his direct reports later recollected that “Craig Barrett basically took a baseball [bat] to manufacturing and said: ‘Damn it! We are not going to get beaten by the Japanese!’”Footnote 31 Making the plants much more efficient and productive became the corporation's primary focus in the second half of the 1980s.Footnote 32
Barrett's first task was to cut manufacturing costs. Moore and Grove gave him the mandate of reducing these costs by half in 1985 and by another half in 1986. To meet these objectives, Barrett shut down the oldest and least productive factories. First to go were the assembly plant in Santa Clara, the testing operation in Santa Cruz, and two fabs located in Silicon Valley and in Aloha, Oregon. In 1986, Barrett discontinued large assembly plants in Puerto Rico and Barbados. In conjunction with these plant closures, Barrett ordered mass layoffs. He let go nearly five thousand operators, more than a fifth of Intel's total workforce (the fact that the firm was not unionized enabled it to pare down the labor force). The sacking of thousands of workers and the shutting down of eight factories reduced costs considerably.Footnote 33
Once Barrett and his staff had closed down outmoded factories and laid off thousands of workers, they focused on increasing the productivity, reliability, and predictability of the remaining plants. To do so, they set about making microchip production more “scientific,” as the leader of this effort, Eugene Meieran, a PhD in materials science from MIT, later put it. In Barrett and Meieran's view, the fabrication of integrated circuits at Intel remained too empirical. Their goal was to bring the scientific method to the solution of manufacturing problems. “The thing that we really focused on,” Barrett later recalled, “was to start to use statistics, statistical process control and complex design of experiments to make data-based decisions and to drive the organization in a data-based fashion.”Footnote 34 In the early 1980s, his predecessors had introduced SPC in a piecemeal fashion. Some fabs had adopted it, while others had not. In 1986 and 1987, Barrett demanded that these techniques be deployed in all the plants. This necessitated a major training program: technicians, production engineers, and process development engineers all took in-depth classes on the subject. They were told to utilize these techniques intensively on the job. Later in the decade, under Meieran's direction, engineers automated the collection and management of manufacturing data to better control processes and manufacturing lines on a real-time basis. This large-scale program led to a much tighter control of manufacturing processes, higher quality, greater yields, and more reliable and predictable wafer fabs.Footnote 35
At Barrett's urging, manufacturing groups also adopted Japanese practices and technologies on a large scale. In so doing, Intel followed the example of National Semiconductor and other Silicon Valley corporations that had massively adopted Japanese manufacturing approaches in the early 1980s. “Craig Barrett,” a senior executive later recollected, “dragged all factory managers over to Japan on a couple trips to go visit the Japanese factories and said: ‘This is how you are supposed to do it. You do it basically.’”Footnote 36 This required an intimate understanding of Japanese production methods. In 1985 and 1986, Intel engineers and mid-level managers made extended stays at the factories of Oki and Sanyo, firms that produced chips for Intel, in order to observe their production lines, equipment, and operating procedures. The corporation derived much of its production technology from this detailed examination. It was also through the close study of Japanese factories that the firm's executives came to understand that cleanliness and automation were essential for increasing yields. Other important sources of information on Japanese approaches were visits to Japanese universities, especially Tohoku University, and technology-sharing agreements with Fujitsu, Mitsubishi, and Matsushita. Intel also created its own intelligence unit to gather information about the Japanese semiconductor industry. This unit combed the Japanese press to learn more about Japanese technological capabilities and surmise the future technical and commercial orientations of Japanese chipmakers.Footnote 37
On the basis of the knowledge and know-how coming from Japan, Barrett and his troops made three fundamental changes in the fabs. First, they systematically removed all sources of contamination. This was a “cultural” revolution, Moore later remembered.Footnote 38 In the 1970s and early 1980s, the firm's production engineers had “adopted an approach which essentially said that they would find a defect and prove that defect created a yield problem, and then when they proved that, they would go back and try to eliminate the source of that kind of defect. Japan took a different approach. The Japanese said, ‘Cleanliness is good. We will clean up everything.’ So they cleaned up their chemicals. They took a broad view at cleaning up the facility, cleaning up all the gas lines. The Japanese eliminated all the defects.”Footnote 39 In the mid-1980s, Intel adopted the Japanese way of reducing contamination. The systematic “cleaning” of the process required substantial investments. For example, in 1986 and 1987 Intel upgraded the cleanrooms of its fabs located in Oregon, Arizona, and New Mexico. Great attention was also given to the elimination of contamination coming from people and equipment. To improve the purity of incoming chemicals and materials, Intel's managers reduced the number of suppliers. The firm also worked with them to purify their products. A growing proportion of these suppliers were based in Japan.Footnote 40
Second, Barrett and his staff focused on increasing the utilization of manufacturing equipment. This was an area where Intel had made no progress since the late 1970s. The utilization rate had stubbornly remained in the 20 percent range. To reach Japanese rates, Intel's managers purchased more reliable manufacturing tools. This often meant buying Japanese equipment. They also put tremendous pressure on American suppliers to improve the dependability of their machines. Starting in 1987, Intel's leadership also acted on the supplier base through Sematech, the semiconductor manufacturing consortium funded by the Department of Defense and the U.S. microelectronics industry. Headed by Noyce, Sematech focused on assisting the American semiconductor equipment industry. It strengthened channels of communication between equipment suppliers and chipmakers. It financed the development of new fabrication tools. It thoroughly tested them and fed the information back to their designers so that they would improve their performance and reliability. Sematech also introduced Japanese “total quality management” techniques in the equipment industry.Footnote 41
But the procurement of more dependable manufacturing equipment was not sufficient to greatly increase equipment utilization. Production tools also had to be better maintained. Following the Japanese model, the group around Barrett asked equipment providers to service their machines themselves (this was a tactic that other Silicon Valley firms had borrowed a few years earlier from Japan). “We enlisted our equipment vendors,” Moore later recalled. “Where previously we had tried to do the maintenance of the equipment ourselves, we went back to the equipment vendors and said: ‘We want you to maintain the equipment.’ They know more about the equipment than we do. ‘Put your people in our plant, maintain the equipment. We will pay you a bonus if the equipment is working more than a certain amount of time, and we will pay you less if it is not working that much.’ We gave them some real incentives to get the equipment going, so it worked either at lower defect levels or more of the time.”Footnote 42 This new arrangement strongly encouraged the suppliers’ service crews to keep the equipment operating properly for long periods.Footnote 43
Third, Barrett and his crew automated the fabs, like their counterparts at IBM and Texas Instruments and at other Silicon Valley firms. Up until that time, all or nearly all process steps in Intel's fabs had depended on the manual work of female operators. Under Barrett, automation became a major priority. In the second half of the 1980s, Intel invested hundreds of millions of dollars in automation. This represented a large share of Intel's total investments in capital equipment during this period. The objectives of this automation program were threefold: to augment productivity; to eliminate errors made by operators; and to decrease microchip contamination. In the mid-1980s, Intel's engineers focused on automating the transfer of partially processed wafers from workstation to workstation. This involved the development of robots and automatic vehicles. Later, they mechanized the introduction of wafers into manufacturing tools. By the early 1990s, the fabrication of integrated circuits had become a highly automated endeavor at Intel.Footnote 44
Copy Exactly
Another important component of Barrett's reform program was a fundamental change in the articulation of process development with production. Since Intel's formation, development engineers had created their processes on the manufacturing lines. This methodology had enabled the fast transfer of new processes to volume production. But starting in the late 1970s and early 1980s, this approach became problematic. Relations between process and manufacturing engineers were increasingly conflictual, as both groups fought for access to the same manufacturing tools. Tensions became especially acute in 1983, when the demand for Intel's microchips expanded very quickly. The production groups needed all the manufacturing capacity they could muster to fill the customers’ orders. As a result, development engineers had limited access to production tools and it took them longer to create new processes than before. Because Intel was under significant pressure to introduce new products to the market, they felt compelled to transfer their processes to production even though these processes were not manufacturing-ready and, therefore, had low yields.Footnote 45
The solution to these problems came from the development organization. When Intel left the DRAM business and closed down its fab in Oregon, Gerhard Parker, the head of development, and Richard Pashley, a mid-level manager in charge of development for memories, proposed to transform the plant into a development module for processes used in the manufacture of microprocessors. But this facility would not be a traditional R&D laboratory. It would be a unit doing “large-scale prototype manufacturing.”Footnote 46 Barrett, Grove, and Moore endorsed this proposal. They saw it as a way of speeding up process engineering and ensuring that processes were fully engineered before they reached production. They requested that the module develop these processes up to the point where they were under good-quality process control and reached high yields. As a result, the job of development engineers changed significantly. They not only came up with new processes but continued working on these processes until they were fit for high-volume production. As the development module in Oregon showed definite promise, Barrett created another module in Santa Clara, close to corporate headquarters. Since these modules had to engineer the most advanced and “cleanest” processes, Barrett and his managerial team invested in the construction of state-of-the-art cleanrooms outfitted with the most reliable, highest-performance equipment. The new facilities were completed by 1989.Footnote 47
The formation of development modules radically changed the terms of the transfer problem. Indeed, the transfer would no longer occur from pure “development” to manufacturing, but from pilot manufacturing to high-volume production. In these circumstances, Barrett and Parker decided that the best approach was to “copy exactly” the module's process in the factories and standardize all production tools used in the module and the fabs. This necessitated extensive documentation of the process and the manufacturing equipment. In the late 1980s, the “copy exactly” methodology became even more exacting. With the transfer of the process for the 486 microprocessor from the Oregon module to the factory in Chandler, Arizona, “copy exactly” meant fully replicating the plant and its equipment as a whole. “Fab 9,” an engineering manager later declared, “had to buy all of the same equipment, but also the same options, the same physical dimensions of the plant, even specifying the length of hookups for every intake valve, every water pipe. We did everything we could to make Fab 9 as close to an exact copy of the Oregon fab and process as possible.”Footnote 48 In brief, the fab was a reproduction of the development module: same building, same cleanroom, same organizational setup. Even the safety rules were exactly the same. The only difference was the workforce.Footnote 49
To ensure that the processes used in the module and the plants did not diverge over time, the manufacturing leadership instituted very strict procedures for making process changes. No factory manager or his engineering staff had the authority to change the process on their own. They had to submit the changes they wished to make to the “Process Change Control Board,” a firm-level committee composed of senior managers, engineers, and technicians. The committee examined the proposed changes. It looked at their justifications and decided which ones it would adopt. Once the committee had approved the changes and carefully documented them, the fabs integrated them into their process simultaneously and precisely in the same way. In addition, Intel sent auditing groups of development and production engineers to the factories and development modules. These groups tracked unintended variations in the process. After each audit, they produced a report listing all the differences they had found and proposed a course of action for eliminating them. Standardization was a long-term endeavor.Footnote 50
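The control discipline described above can be summarized schematically. The sketch below models only its logic, on the assumption of central approval followed by simultaneous, identical rollout; all class and field names are invented for illustration and do not reflect Intel's actual system.

```python
# Schematic model of the change-control discipline described above:
# no fab changes its process unilaterally; an approved change is applied
# to every fab at once and in the same way. All names are invented.
from dataclasses import dataclass

@dataclass
class ProcessChange:
    description: str
    justification: str
    approved: bool = False

@dataclass
class Fab:
    name: str
    process_revision: int = 0

class ProcessChangeControlBoard:
    """Firm-level committee: the only path by which the process may change."""
    def __init__(self, fabs: list[Fab]):
        self.fabs = fabs

    def review(self, change: ProcessChange) -> bool:
        # Stand-in for the board's examination of the proposed change
        # and its justification.
        change.approved = bool(change.justification)
        return change.approved

    def roll_out(self, change: ProcessChange) -> None:
        if not change.approved:
            raise ValueError("unapproved changes may not reach any fab")
        for fab in self.fabs:            # integrated simultaneously,
            fab.process_revision += 1    # and identically, in every fab

fabs = [Fab("development module"), Fab("volume fab A"), Fab("volume fab B")]
board = ProcessChangeControlBoard(fabs)
change = ProcessChange("lower a diffusion temperature", "reduces thermal stress")
if board.review(change):
    board.roll_out(change)
print([(f.name, f.process_revision) for f in fabs])  # every fab now at revision 1
```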
This was a radically new way of transferring and managing manufacturing processes. No other corporation, American or Japanese, standardized processes and production facilities as Intel did. For example, National Semiconductor centralized the development of new manufacturing processes and decision-making regarding process changes in the late 1980s, but it did not create identical production facilities and the fabrication equipment remained different in the various fabs. In Japanese firms, each plant had its own manufacturing processes. Given the revolutionary nature of the “copy exactly” methodology, it is not surprising that its introduction met significant opposition within Intel, especially among production engineers. “Copy exactly” changed their work and self-identity in a deep way. Up until then, their job—and their pride—had been to raise manufacturing yields. Now, they were asked to exactly reproduce a process conceived by development engineers. They would be judged on replication, not on improvement. And they complained loudly. “It was a huge cultural issue,” Meieran later remembered. “Engineers would say: ‘I am an engineer. I want to make changes to the process. Why should I get through this bureaucratic morass?’”Footnote 51 Some engineers were so incensed that they resigned from the corporation.Footnote 52
Their complaints fell on deaf ears. In Barrett and Parker's view, the “copy exactly” methodology solved several problems at the same time. The plants receiving the process from the development module fabricated microprocessors at the same yield as the module itself, almost instantaneously (in contrast with the two years it took previously to reach this point). As a result, the firm's overall yield increased significantly. Intel could also augment production volumes more quickly. Building a new plant and bringing it online would immediately double the number of chips the firm could ship to customers. In other words, the corporation could “ramp” production very rapidly, almost as fast as the Japanese (Japanese corporations achieved the same result through rapid growth in manufacturing yields). Another major advantage of the new methodology, from a marketing perspective, was that Intel could tell its customers that second-sourcing was not necessary to ensure product availability. It had several internal sources for its microprocessors. In other words, the development of the “copy exactly” methodology reinforced the firm's hand in its negotiations with customers and, along with great strides in reliability and productivity, enabled it to sole-source its microprocessors.Footnote 53
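The quantitative advantage claimed here is easy to see with a back-of-the-envelope calculation. The numbers below are illustrative assumptions (an 80 percent module yield, a linear two-year ramp for the traditional approach), not figures drawn from the article's sources.

```python
# Back-of-the-envelope comparison of the two transfer models described above:
# a "copy exactly" fab that starts at the development module's yield versus
# a traditionally ramped fab whose yield climbs to that level over two years.
# All figures are illustrative assumptions, not historical data.
wafer_starts_per_quarter = 10_000
dice_per_wafer = 100
module_yield = 0.80
quarters = 8  # two years

copy_exactly = [module_yield] * quarters                                    # full yield at once
traditional = [module_yield * (q + 1) / quarters for q in range(quarters)]  # linear ramp

def good_dice(yield_by_quarter):
    return sum(y * wafer_starts_per_quarter * dice_per_wafer for y in yield_by_quarter)

print(f"copy exactly: {good_dice(copy_exactly):,.0f} good dice over two years")  # 6,400,000
print(f"traditional:  {good_dice(traditional):,.0f}")                            # 3,600,000
# Under these assumptions, the cloned fab ships nearly 1.8x as many good
# dice during the ramp period.
```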
Within a six-year span, Barrett's reform of manufacturing and his reorganization of process transfer bore fruit. Intel became a low-cost and highly efficient chipmaker, on par with Japanese firms such as NEC, Hitachi, and Fujitsu. The productivity of its plants quadrupled between 1985 and 1989. The utilization rate of manufacturing equipment jumped from 20 percent in 1984 to 60 percent in the early 1990s. Yields grew from roughly 50 percent in 1985 to more than 80 percent in 1992. Another indicator of Intel's surge in productivity was the evolution of the ratio of sales per employee, which more than tripled from $69,000 in 1984 to $232,000 in 1992. Finally, Intel could ramp the production of its microprocessors very quickly. This newfound manufacturing prowess eliminated the main competitive advantage of Japanese firms: their ability to produce complex chips at very low cost in a predictable way and to expand production very fast to meet customer demand. Intel's manufacturing resurgence enabled the firm to reinforce its position in logic circuits and to benefit from the enormous expansion in the demand for 32-bit chips brought about by the craze in personal computing. Intel's sales grew explosively, from $1.9 billion in 1987 to $5.8 billion in 1992. By that time, it was the largest semiconductor firm in the world. In recognition of the role Barrett had played in the firm's renewal, Grove and Moore promoted him to the position of chief operating officer and gave him a seat on the board of directors in January 1993. In effect, they anointed him as their successor. Barrett became the firm's CEO five years later.Footnote 54
Conclusion
It took nearly fifteen years for Intel to reform its production operations. Between 1978 and 1992, the firm switched from Silicon Valley–type manufacturing to a highly efficient, scientific, and automated form of microchip production. This transformation took place in two phases. At first, construing the Japanese threat in terms of quality and yield, Moore and Grove instructed their production group to address quality issues and increase manufacturing yields. Manufacturing engineers and managers strengthened testing. They adopted “total quality management” techniques, and they established crash yield improvement programs. But Intel did not reach manufacturing parity with Japanese corporations, as the crisis of 1985 and 1986 painfully showed. Intel's dire situation forced the realization on Moore and Grove that excellence in manufacturing was an absolute must if their firm was to stay in business in the long run. Starting in 1985, they reformed production in a profound way to increase productivity and lower manufacturing costs. Barrett, the new head of production, closed down the least efficient factories. The remaining plants adopted Japanese technologies, practices, and procedures extensively. They incorporated SPC methods. Barrett and his crew also innovated by reorganizing process development and transfer and by standardizing the firm's fabrication units.
Why did Intel, long locked into the Silicon Valley style of manufacturing, finally succeed in developing a radically different approach to microchip production? This success can be partially explained by Moore and Grove's commitment to manufacturing reform. In 1985, Moore and Grove recast corporate objectives, making the revival of production the firm's highest priority. They gave specific responsibility to Barrett to transform Intel into a world-class manufacturer. To do so, Barrett set ambitious goals. He conveyed the urgency of meeting the Japanese manufacturing challenge and enlisted the support of the production groups. He established internal education programs to suffuse manufacturing with SPC and put microchip fabrication on a scientific footing. With Parker and other development managers, he devised the “copy exactly” methodology.
These endeavors succeeded in part because they encountered little internal opposition. By pushing Flath and Kauffman aside, Moore and Grove had shown every Intel employee that there was no alternative to the course of action championed by Barrett. In addition, since Intel was an open shop, there could be no organized opposition to reform. The only resistance Barrett encountered came from production engineers objecting to the changing nature of their jobs. But this opposition came relatively late, in the late 1980s. It also subsided as it became clear that the “copy exactly” methodology brought about significant improvements in yields and ramping times.
Another factor explaining Intel's success was the window the corporation gained into Japanese manufacturing through die contracting. During the crisis years of 1985 and 1986, Intel engineers made the most of these relationships by observing the plants and manufacturing procedures of Japanese subcontractors. Out of this detailed examination they gained a solid understanding of the sources of Japanese manufacturing competitiveness. They also became familiar with Japanese production tools, techniques, and operating practices. Techniques and procedures pioneered in Japan were then resolutely integrated into Intel's production fabric.
Maybe more importantly, Intel was favored by luck. It benefited from significant changes in the political economy. The Reagan administration pressured the Japanese government into increasing the value of the yen, which made microchip imports significantly more expensive. Also, with some prodding from Intel and other firms, it negotiated a trade agreement that instituted a price floor for Japanese integrated circuits. These measures protected Intel's markets and halted the steep decline of its profit margins, thereby preserving the financial resources that were required to revamp the production operations. More direct government help for manufacturing revitalization came through Sematech. Also critical was the rapid growth of the U.S. market for laptops and personal computers beginning in 1987. The expanding demand for microprocessors, especially 32-bit chips, strengthened Intel's financial position and helped it invest significant resources in automation and the construction of state-of-the-art development modules at the end of the decade.
By 1992, Intel had very efficient manufacturing operations. The firm kept refining its manufacturing machine for much of the decade. At Grove and Barrett's behest, manufacturing engineers and managers continued increasing production efficiencies, especially equipment utilization. Perfecting the “copy exactly” methodology, they achieved increasingly fast ramping times. Yields reached the 90 percent range. In the 1990s, Intel's manufacturing empire also expanded very substantially, with the building of factories in Ireland, Oregon, Arizona, and New Mexico. Between 1992 and 1998, the firm invested $24 billion in new plants and manufacturing equipment. These massive investments enabled Intel to meet the fast-expanding demand for microprocessors and thereby helped sustain its monopoly on these chips. They also supported the corporation's extraordinary growth for much of the 1990s. Sales nearly sextupled from $5.8 billion in 1992 to $33.7 billion in 1998. Thus, a firm on the brink of bankruptcy and characterized by its low productivity became a production powerhouse dominating the world's semiconductor industry. In retrospect, the Japanese challenge in manufacturing may have been a blessing in disguise for Intel.Footnote 55