Introduction
The Business Policy field was founded at the start of the 20th century (Rumelt, Schendel, & Teece, 1994; Hambrick & Chen, 2008) and strategic management was formally born in the 1960s (Amitabh, 2010), when Chandler (1962), Ansoff (1965) and Learned, Christensen, Andrews, & Guth (1965) published their pioneering books. Since then, strategic management has gone through several stages (Ansoff, Declerck, & Hayes, 1976; O’Shannassy, 2001), taken many forms (Mintzberg, Ahlstrand, & Lampel, 1998) and changed profoundly. One of the most challenging and unresolved problems in this area is the ‘apparently high’ percentage of organisational strategies that fail, with some authors estimating a rate of failure between 50 and 90% (e.g., Kiechel, 1982, 1984; Gray, 1986; Nutt, 1999; Kaplan & Norton, 2001; Sirkin, Keenan, & Jackson, 2005). By failure we mean either that a new strategy was formulated but not implemented, or that it was implemented but with poor results. This is a simple definition but still consistent with the three features of a successful implementation as defined by Miller (1997): (1) completion of everything intended to be implemented within the expected time period; (2) achievement of the intended performance; and (3) acceptability of the method of implementation and of its outcomes within the organisation. It is also consistent with both the planned and the emergent strategy modes: in either mode, a strategy may or may not be completed, may achieve different degrees of performance and may vary in its acceptability.
The difficulty of successfully implementing new business strategies has long been recognised in the literature (e.g., Alexander, 1985; Wernham, 1985; Ansoff & McDonnell, 1990), and a 1989 Booz Allen study (cited by Zairi, 1995) concluded that most managers believe that the difficulty of implementing strategy surpasses that of formulating it. Specifically, the study found that 73% of managers believed that implementation is more difficult than formulation; 72% that it takes more time; and 82% that it is the part of the strategic planning process over which managers have least control.
In order to understand the reasons behind failure and improve the success rate of implementation, several researchers have provided comprehensive sets of implementation difficulties (Alexander, 1985; Wernham, 1985; Ansoff & McDonnell, 1990; O’Toole, 1995; Beer & Eisenstat, 2000; Cândido & Morris, 2000; Hafsi, 2001; Miller, Wilson, & Hickson, 2004; Sirkin, Keenan, & Jackson, 2005; Hrebiniak, 2006; Gandolfi & Hansson, 2010, 2011; Cândido & Santos, 2011). Many researchers – some of them following on from the inspiring work of Lewin (1947/1952) – have also proposed integrated frameworks for strategy formulation and successful implementation (e.g., Ansoff & McDonnell, 1990; Gioia & Chittipeddi, 1991; Baden-Fuller & Stopford, 1994; Kotter, 1995; Hussey, 1996; Galpin, 1997; Johnson & Scholes, 1999; Calori, Baden-Fuller, & Hunt, 2000; Cândido & Morris, 2001). Others have adopted a different approach and decided to empirically test the impact of these frameworks and of their success factors (e.g., Pinto & Prescott, 1990; Miller, 1997; Bauer, Falshaw, & Oakland, 2005; Bockmühl, König, Enders, Hungenberg, & Puck, 2011).
Several major debates in the literature (Eisenhardt & Zbaracki, 1992) have also contributed to the advancement of possible solutions to the implementation problem, namely those around the rationality of the strategy formation process (Fredrickson & Mitchell, 1984; Fredrickson & Iaquinto, 1989; Dean & Sharfman, 1993; Papadakis, Lioukas, & Chambers, 1998); the accidental, evolutionary or natural selection approaches to strategy (Alchian, 1950; Cohen, March, & Olsen, 1972; Nelson & Winter, 1974; Hannan & Freeman, 1977; Aldrich, 1979; March, 1981; Van de Ven & Poole, 1995); the rate, rhythm or pattern of organisational change (Dunphy & Stace, 1988; Weick & Quinn, 1999); the incremental or emergent additions to intended strategy (Mintzberg & Waters, 1985; Mintzberg, 1987; Quinn, 1989); the idiosyncratic nature of each individual strategic decision (Mintzberg, Raisinghani, & Théorêt, 1976; French, Kouzmin, & Kelly, 2011); the impact of top management team composition and of the relationships between its members (Hambrick & Mason, 1984; Naranjo-Gil, Hartmann, & Maas, 2008; O’Shannassy, 2010); the alternative management styles and strategic change methods (Hart, 1992; Stace & Dunphy, 1996; Johnson & Scholes, 1999; Balogun & Hailey, 2008); the distinction and relationships between strategy process, content and context (Pettigrew, 1987; Barnett & Carroll, 1995); and also the ‘less rational’: political, cultural, behavioural, learned and even symbolic aspects of effective strategic change (Cyert & March, 1964; Carnall, 1986; DeGeus, 1988; Senge, 1990; Gioia & Chittipeddi, 1991; March, 1997; Nonaka, 2007; Goss, 2008).
Although remarkable progress has been made in the strategic management field, the problem of strategy implementation failure persists, and it is still an important and ongoing concern for researchers and practitioners (Mockler, 1995; Barney, 2001; Hickson, Miller, & Wilson, 2003).
One of the most important challenges in this area is probably to discover how to ensure successful implementation. A useful first step in this direction is to assess the real scale of the problem. This assessment is important for three main reasons. The first is that both researchers and practitioners currently seem to assume that the rates of failure are very high. Considering that some of the high estimates have been used to guide some of the research and practice on strategic management, an assessment of the extent to which they provide an accurate and up-to-date account of the problem of strategy implementation failure is required. This is particularly relevant as some of the estimates presented in the literature have played an important role in the adoption or abandonment of some management tools by practitioners, and in the choice of topics researched by academics. Therefore, a rigorous assessment of the extent of the problem can assist decision makers in making better informed decisions on the strategies to adopt and the topics to research.
A second reason in favour of this assessment is that it will make it possible to determine whether the failure rates estimated over the years show any particular pattern or trend. This can be an important finding, as it might indicate important changes in the way strategies have been implemented over the years, changes in the nature of the strategies or changes in the way implementation success has been measured. Therefore, the identification of clear patterns or trends in the results can open several avenues for research. In particular, it can be an important catalyst for research on the reasons behind the patterns observed.
The third main reason is that the percentage of strategies that fail is a controversial issue, as no one seems to know what the real rate of failure is. By reviewing and discussing the relevant literature, this research provides a clear and comprehensive understanding of the nature of this problem, so that the factors contributing to it can be identified and properly addressed. In doing so, this research exposes the need for, and lays the foundations of, a clear protocol to guide researchers in estimating strategy implementation failure rates more rigorously. The development of this protocol is fundamental to assist managers and researchers in making better judgments of the value of strategy types, implementation approaches and management instruments.
This paper therefore aims to contribute to the discussion on the estimation of strategy implementation failure rates. In particular, we aim to show that the current state of affairs in the field of strategic management does not allow a single robust estimate of the failure rate of strategy implementation to be provided. In line with this objective, we also suggest a template for a protocol that can help researchers develop better measures of strategy implementation failure rates. To this end, an extensive review of the literature on strategy implementation failure rates is presented and scrutinised.
In pursuit of this research agenda, the remainder of this paper is organised into several sections. It starts by discussing the research methodology and the process we have followed to address the objectives of this paper. It then addresses the issue of what the rate of strategy implementation failure is. A discussion of the literature dealing with this issue ensues and evidence is presented that supports the conclusions we have reached. The paper concludes by deriving implications for the literature and practice on strategy implementation.
Methodology: Search Strategy and Selection Criteria
With the objective of assessing the rate of business strategy implementation failure/success, we carried out an extensive review of the literature. First, we tried to identify all publications in scholarly journals in the EBSCO Host Research Databases that present estimates for this rate. Several search strings, including strateg* and fail*, strateg* and success*, strateg* and implement*, and transfor* and fail*, were applied to the keywords, titles and abstracts of the publications. In these search strings, the asterisk (*) represents wildcard characters. Second, within this first set, we identified all papers from business journals; publications that were not actually in the business area, even though they mentioned the search terms in the keywords, title or abstract, were omitted from further analysis. Third, we analysed the abstracts of all publications on this final list in order to assess their relevance for our research. We considered relevant only those studies that could present a percentage of failure (or of success) in business strategy execution consistent with the definition of failure presented previously. Fourth, for those publications considered to be relevant, we analysed the full text in order to determine whether an estimate of the failure rate was provided. Fifth, the bibliographic references in the selected papers were also used as a source to identify papers or other evidence not captured in our electronic database search. It is relevant to note that the studies dealing with this issue have been authored by academics and practitioners, including consulting companies, and that not all of these studies have been published in academic journals. A search strategy based exclusively on evidence documented in academic journals would therefore be incomplete. Consequently, our sixth step consisted of additional searches carried out on the Internet search engine Google, on the websites of major consulting companies and in several national library on-line catalogues (England, United States, Ireland, Scotland, Canada, Australia and Portugal), which allowed the identification of some additional relevant studies. Unfortunately, however, some of these studies were not available for consultation and it was not possible for us to gain access to either a hard or an electronic copy. Interestingly, many of these unavailable studies were authored by consulting companies (Arthur D. Little, 1992; A.T. Kearney, 1992; Prospectus Strategy Consultants, 1996) and were abundantly quoted, even by reputable academic researchers. Finally, we also contacted by e-mail the consulting companies, the individual authors of the reports (when their names were publicly available) and the authors who had quoted those studies. In total, more than 45 e-mails were sent. In spite of all the efforts made to obtain copies of the studies, most of these efforts proved unfruitful. Many of the companies and authors contacted replied, but we did not succeed in obtaining the required information, either because the studies were no longer available (e.g., A.T. Kearney, A.D.L., Prospectus) or because the companies were unable to assist individuals with specific research requests (e.g., B.C.G., McKinsey).
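To make the screening logic of the first two steps above concrete, the following minimal Python sketch illustrates how the wildcard search strings and the business-area filter could be applied to a bibliographic record. This is purely illustrative: the searches were actually run through the EBSCO interface, and the record fields, field names and example data shown here are hypothetical.

import re

# Minimal sketch of the screening logic; record structure and data are hypothetical.
# Each string lists the wildcard terms that must all occur (i.e., they are combined with 'and').
SEARCH_STRINGS = ["strateg* fail*", "strateg* success*", "strateg* implement*", "transfor* fail*"]

def wildcard_to_regex(term):
    # Translate a wildcard term such as 'strateg*' into a regular expression.
    return re.compile(r"\b" + re.escape(term).replace(r"\*", r"\w*") + r"\b", re.IGNORECASE)

def matches(record, search_string):
    # True if every wildcard term occurs in the record's keywords, title or abstract.
    text = " ".join(record.get(field, "") for field in ("keywords", "title", "abstract"))
    return all(wildcard_to_regex(term).search(text) for term in search_string.split())

# Hypothetical record illustrating the first two screening steps:
# step 1, at least one search string matches; step 2, the publication is in the business area.
record = {"title": "Why strategies fail", "abstract": "Strategy implementation outcomes...",
          "keywords": "strategy; failure", "subject_area": "business"}
relevant = record["subject_area"] == "business" and any(matches(record, s) for s in SEARCH_STRINGS)
print(relevant)  # True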
Therefore, the literature reviewed in this paper includes all the academic studies that have met the search criteria above and the consultancy studies that were relevant and available for consultation. The latter account for 45% of the studies analysed. The results of this search are presented and discussed in the next section.
Strategy Implementation Failure/Success Rates
Although the literature on the topic of strategy implementation failure/success is not in short supply, the existing studies are mixed in terms of their features (e.g., failure rate estimated, amount of effort involved in the estimation, complexity and quality of the methodology used, unit of analysis, criteria adopted to define success and research strategies adopted), and this requires special care in comparing their results. The most significant features of the studies considered for this research are summarised in Tables 1 and 2. Table 1 lists studies that have focused on general business strategies, and Table 2 lists studies that have focused on specific business strategies. Although the former table aims to be exhaustive, in the sense that it shows all the studies that our search strategy identified, the latter aims to be illustrative of the variability of the available estimates.
Table 1 Studies estimating general business strategy implementation failure rates
Notes.
a Study by a consulting firm or by authors associated with consulting companies.
b The study was not available on-line. We did not receive replies to our e-mails or the replies were negative.
na=information not available.
Table 2 Studies estimating specific business strategy implementation failure rates
Notes.
a Study by a consulting firm or by authors associated with consulting companies.
b The study was not available on-line. We did not receive replies to our e-mails or the replies were negative.
c In the same year, in a study by O’Brien and Voss (1992), the authors concluded that most British organisations were having problems developing TQM. However, they noted that most UK organisations were in the early stages of developing a total approach to quality, that is, in the beginning of implementation.
DCs=developed countries; IVJs=international joint ventures; LDCs=less-developed countries; na=information not available; TQM=Total Quality Management.
The information in these tables is organised into five columns. The first column indicates the author(s) and year of each study; studies are listed chronologically. The research method used to estimate the rates of failure/success and the variables against which such rates were assessed are described in the second and third columns, respectively. The fourth column indicates the estimated rate of failure presented by each study. Finally, the last column records some additional comments on each study.
The most appropriate conclusion that can be drawn from the analysis of Tables 1 and 2 is that it is difficult to provide accurate estimates of the rates of failure of strategy implementation. The studies carried out so far by researchers and management consulting firms have obtained mixed results regarding the success and failure rates of business strategy implementation. In fact, as can be seen from the fourth column of the tables, the range of variation of the estimates is remarkable. If we first analyse the studies that focus on business strategy implementation in general, we can see that the estimated rates of failure have ranged from 28 to 90%. When we turn to the studies that have focused on the implementation of specific business strategies, this range is even wider. While some studies have obtained rates of failure as low as 7–10% (e.g., Taylor, 1997; Walsh, Hughes, & Maddox, 2002), others have obtained rates of failure as high as 80–90% (e.g., Voss, 1988, 1992; A.T. Kearney, 1992). Therefore, although it can be claimed that up to 90% of strategic initiatives fail, as this is the upper limit of the estimates provided in the literature, this figure is likely to be an overestimation.
Two major reasons support this view. First, most of the higher estimates presented in the literature come from consulting firms (e.g., Kiechel, 1982, 1984; Judson, 1991; A.T. Kearney, 1992; Prospectus Strategy Consultants, 1996; Hackett Group, 2004a, 2004b; Dion, Allday, Lafforet, Derain, & Lahiri, 2007). Although we were unable to assess the scientific rigour of some of these studies, as it was not possible to obtain details regarding the robustness of the research methodologies used and the results achieved, it has long been recognised that some overestimation may have been committed by consulting firms (Powell, 1995). Overestimated failure rates can be used to the advantage of consulting firms, namely as a marketing strategy to convince customers of the importance of adopting their services (Powell, 1995). Second, the results in the tables seem to suggest a downward trend in the estimates of failure, indicating that the percentage of strategic initiatives that fail has decreased over time (see Figure 1), a likely result of the scientific progress made in this field over the past two decades and of its inclusion in business education programmes. In particular, the identification of obstacles to strategy implementation and a better understanding of the ways they interact with each other, achieved by both researchers and practitioners (e.g., Alexander, 1985; Ansoff & McDonnell, 1990; Kotter, 1995; Beer & Eisenstat, 2000; Kaplan & Norton, 2001), might have played an important role in reducing failure rates over the years. Therefore, although some of the higher estimates could have been appropriate and reflected the true dimension of the problem one or two decades ago, they are likely to be outdated nowadays. This is also the case because time since the adoption of a new strategy contributes to a better internalisation of the elements of that strategy and consequently to better performance (Powell, 1995; Prajogo & Brown, 2006). Considering that some strategies and some management tools have been in practice for a long time, it is likely that familiarity with these strategies and tools has increased, leading to the accumulation of knowledge and, consequently, to more successful implementations (Taylor & Wright, 2003). Several other explanations can be offered for the downward trend in the estimates of failure. For example, companies may follow successful early adopters, benefiting from their experience and thus improving failure rates. Companies may also have become more aware of the need to carefully customise new strategies or management tools to their own characteristics and to the contexts in which they operate, instead of blindly adopting general undifferentiated strategies and tools. Independently or in combination, these factors might help explain the apparent improvement in failure rates.
Figure 1 Business strategy implementation failure rates. Note: For this figure, we used the rates in Table 1. When two rates were given in any one study, we used the average for the figure.
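For reproducibility, a figure of this kind can be generated directly from the tabulated rates. The short Python sketch below (using matplotlib, and with placeholder values rather than the actual Table 1 data) illustrates the procedure described in the note: one point per study, averaging the two rates whenever a study reports a pair of values.

import matplotlib.pyplot as plt

# Placeholder (year, reported rates) pairs; these are not the actual Table 1 data.
studies = [(1982, [90]), (1991, [50, 80]), (1999, [50]), (2006, [28, 40])]
years = [year for year, _ in studies]
rates = [sum(r) / len(r) for _, r in studies]  # average when a study gives two rates

plt.scatter(years, rates)
plt.xlabel("Year of study")
plt.ylabel("Estimated failure rate (%)")
plt.title("Business strategy implementation failure rates")
plt.show()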
It seems therefore reasonable to assume that the current rates of failure are well below some of the estimates often quoted in the literature. However, if this is the case, what is then the real percentage of strategies that fail? Although there have been several studies on this issue in the past two decades, our view is that the current state of affairs does not allow a robust estimate to be provided. Several reasons can be advanced for this.
First, the studies discussing the success/failure rate of strategy implementation vary considerably in the amount of effort put into the estimation of the rate. In some of these studies, the estimation of the rate of failure/success was the main objective (e.g., Golembiewski, 1990; Park, 1991; Wilkinson, Redman, & Snape, 1994; Pautler, 2003; Makino, Chan, Isobe, & Beamish, 2007). In other studies, this objective was part of a broader research agenda (e.g., Beamish, 1985; Voss, 1988, 1992; Taylor, 1997; Nutt, 1999; Walsh, Hughes, & Maddox, 2002; Taylor & Wright, 2003; McKinsey, 2006), while in others the rates of success/failure were presented as complementary information in an introduction or as an aside (e.g., Gray, 1986; Harrigan, 1988a; Hall, Rosenthal, & Wade, 1993; Mohrman, Tenkasi, Lawler, & Ledford, 1995; Lewy & Mée, 1998a, 1998b; Sila, 2007). The effort put into the estimation in each study also has implications for the complexity of the computational method used. In some studies, the computation is very simple (e.g., Beamish, 1985; Harrigan, 1988a; Sila, 2007), while in others it is much more complex and demanding (e.g., Golembiewski, 1990; Park, 1991).
Second, these studies are not easily comparable because the criteria used to define success/failure are very distinct and can consequently account for some of the differences between estimates. It is possible to distinguish between ‘technical success’ and ‘competitive success’ (Voss, 1992), between ‘success as process ease’ and ‘success as process outcomes’ (Bauer, Falshaw, & Oakland, 2005) and, similarly, between ‘implementation success’ and ‘organisational success’ (Hussey, 1996; Mellahi & Wilkinson, 2004). The higher rates of failure estimated may reflect a stricter definition of success adopted by researchers.
Estimates of technical success and of success as process ease may be higher than estimates of success as process outcomes or of organisational competitive success in the marketplace, since more internal and external contingencies can affect the latter types of success. In Tables 1 and 2, we have reported mainly failure rates from a ‘competitive success’ or an ‘organisational success’ perspective. Even so, the studies in the tables are not easily comparable because in some cases researchers relied on management’s perceptions to derive an estimate of success/failure (e.g., Beamish, 1985; Gray, 1986; Voss, 1988, 1992; Taylor & Wright, 2003), whereas in others they used more objective measurements (e.g., Golembiewski, Proehl, & Sink, 1981, 1982; Golembiewski, 1990; Hall, Rosenthal, & Wade, 1993; Pautler, 2003; Makino et al., 2007). Furthermore, some studies have used a single criterion to define success/failure (e.g., Gray, 1986; Walsh, Hughes, & Maddox, 2002; Sila, 2007), whereas others have used multiple criteria (e.g., Golembiewski, 1990; Park, 1991; Wilkinson, Redman, & Snape, 1994; Mohrman et al., 1995).
Third, different studies have used different research strategies to estimate the rate of success/failure of strategy implementation. Some researchers have adopted a case study approach (e.g., Voss, 1988, 1992; Hall, Rosenthal, & Wade, 1993; Lewy & Mée, 1998a, 1998b; Nutt, 1999). Others have employed a survey method (e.g., Beamish, 1985; Wilkinson, Redman, & Snape, 1994; Mohrman et al., 1995; Walsh, Hughes, & Maddox, 2002; McKinsey, 2006; Makino et al., 2007; Sila, 2007), while still others have used a combination of methods (e.g., Gray, 1986; Harrigan, 1988a; Charan & Colvin, 1999; Taylor & Wright, 2003). It is well known that, while some research strategies allow statistical generalisations to be made, others, like case-based research, only allow analytical generalisations.
Fourth, the unit of analysis varies considerably from one study to another. Some researchers have considered as their unit of analysis a single project, such as developing a new product or launching quality circles, which may be seen as part of wider strategic initiatives (e.g., Nutt, 1987, 1999; Voss, 1988, 1992; Park, 1991; Lewy & Mée, 1998a, 1998b; Hackett Group, 2004a, 2004b; Lawson, Stratton, & Hatch, 2006; Lawson, Hatch, & Desroches, 2008). Other researchers have focused on business-wide strategic initiatives, which may in turn be decomposed into several smaller projects (e.g., Kiechel, 1982, 1984; Harrigan, 1988a, 1988b, 1988c; Mohrman et al., 1995; Walsh, Hughes, & Maddox, 2002; Pautler, 2003; McKinsey, 2006; Sila, 2007).
Fifth, some studies prove very difficult to obtain or access, in particular those undertaken by management consulting firms such as A.T. Kearney, Arthur D. Little, McKinsey, Prospectus and Booz Allen Hamilton. Therefore, any conclusions drawn from the estimates they have produced, without a proper understanding of the context, methodology and results obtained, might lack legitimacy and scientific rigour. In spite of this, it is common to find researchers (e.g., Holder & Walker, 1993; Mintzberg, 1994: 25, 284; Smith, Tranfield, Foster, & Whittle, 1994; Zairi, 1995; Dow, Samson, & Ford, 1999; Korukonda, Watson, & Rajkumar, 1999; Kaplan & Norton, 2001: 1; Walsh, Hughes, & Maddox, 2002; Sterling, 2003) who quote the results of these studies not because they have read the original work but because these estimates have been quoted by other researchers or in well-known outlets such as The Economist or The Wall Street Journal. Unfortunately, this has led some of these studies to be widely misquoted and misunderstood (Taylor, 1997).
Finally, it is not always easy to distinguish what is fact and what is fiction in some of the estimates offered in the literature. In particular, there seem to be no scientific grounds behind some of the estimates. For example, Mintzberg (1994: 25, 284), Kaplan & Norton (2001: 1), Burnes (2004, 2005), Raps (2005) and Sila (2007) quote several sources for the rates of failure they mention in their papers (e.g., Kiechel, 1982, 1984; Judson, 1991; Dooyoung, Kalinowski, & El-Enein, 1998; Beer & Nohria, 2000; Waclawski, 2002; Sirkin, Keenan, & Jackson, 2005). However, a detailed analysis of these sources shows that they did not carry out an estimation of the quoted rates of failure. They claim their estimates were based on ‘Interviews’, ‘Studies’, ‘Experience’, ‘The Literature’ or ‘Popular Management Press’, rather than on solid empirical evidence. On other occasions, the sources of the estimates are incorrectly interpreted (e.g., Kaplan & Norton, 2001, in interpreting the findings of Charan & Colvin, 1999). We also found evidence of studies incorrectly identifying their sources (e.g., Dyason & Kaye, 1997) and of studies not identifying their sources at all (e.g., Jantz & Kendall, 1991; Neely & Bourne, 2000; Becer, Hage, McKenna, & Wilczynski, 2007).
Unless these factors are accounted for, any attempts to present estimates for the real success/failure rates of strategy implementation are doomed to fail or are of little practical value.
Independently of the ‘real’ success/failure rate, and despite success rates that seem to have improved over time, it is reasonable to conclude that the number of strategic initiatives that fail is still considerably higher than would be desirable. This suggests that organisations either need better implementation guidelines or need to make better use of the existing ones. The need for better implementation processes has been widely acknowledged by researchers (e.g., Dean & Bowen, 1994; Mockler, 1995; Barney, 2001; Hickson, Miller, & Wilson, 2003) and research on how to avoid implementation obstacles and improve implementation has been underway for many years (e.g., Stanislao & Stanislao, 1983; Alexander, 1985; Ansoff & McDonnell, 1990; Kotter, 1995; Beer & Eisenstat, 2000; Miller, Wilson, & Hickson, 2004; Stadler & Hinterhuber, 2005). It is, therefore, imperative to assess the extent to which these guidelines account for some of the improvements achieved, as well as to understand the reasons why so many initiatives still fail.
Although efforts should be made to reduce failure rates, it is important to emphasise that failure can be seen as an important part of the strategic learning process within organisations (e.g., Mintzberg, 1987; Krogh & Vicari, 1993; Sitkin, Sutcliffe, & Schroeder, 1994; Edmondson, 2011). Unintended past mistakes and deliberate strategic experiments can both generate useful lessons (Wilkinson & Mellahi, 2005), which may prove highly advantageous in the marketplace (Krogh & Vicari, 1993).
Conclusion
Business strategy implementation has long attracted the interest of researchers and practitioners. Although it is often quoted that 50–90% of strategic initiatives fail (e.g., Mintzberg, 1994: 25, 284; Kaplan & Norton, 2001: 1), an exhaustive analysis of the literature on strategy formulation and implementation seems to suggest that some of the evidence supporting these figures is outdated, fragmentary, lacks scientific rigour or is simply absent. Much of the uncertainty relating to this issue is also due to the fact that different studies have obtained mixed results. These findings are important to the field of strategy and change management in two different ways. First, they add to the discussion of the appropriateness of the failure rates proposed by some studies, a discussion which has attracted interest in recent years but on which much remains to be done. As far as we know, there are only two studies that explicitly address this issue: a paper by Cândido and Santos (2011), which focuses on total quality management rates of failure, and a paper by Hughes (2011), which questions the assertion that ‘70 per cent of all organisational change initiatives really fail’. Our research presents, however, important departures from these previous studies. While the former study focused on the implementation of a specific business strategy (i.e., total quality management) and the latter focused its analysis exclusively on five selected papers, none of which presented evidence to support the claim they had made, our study is much broader in focus and more comprehensive in its analysis. We scrutinise the implementation of both general and specific business strategies and carry out an extensive review of all the studies that discuss strategy implementation failure rates. In so doing, we have found that the range of variation of the estimates is remarkable, spanning from a failure rate as low as 7% to one as high as 90%. Several factors can help explain such variation in the estimates produced, including possible overestimation, the exposure of organisations to different contextual and environmental factors and differences in the concepts used to define success/failure and in the samples and methodologies adopted. These differences can be attributed to several factors, one of the most important being the lack of a comprehensive review of the relevant literature by some of the studies. This has prevented the authors of these studies from becoming aware of the state of the art on the topic and, consequently, from adopting concepts and methods consistent with previous research. Another important explanation for the differences mentioned above relates to the fact that the research objectives vary considerably between studies. Some studies have established the estimation of the rate of failure as their main goal, whereas in others the estimation of this rate has assumed a less important role. This has had various implications for the greater or lesser sophistication of the methodology and of the criteria adopted for the calculations. A third factor explaining the differences in the criteria and methods used to estimate failure rates relates to the use that is intended for these rates.
While academic researchers are likely to be more interested in the study of a particular type of implementation approach/tactic, strategy, or even management instrument (such as the balanced scorecard or total quality management), practitioners are likely to be more interested in the promotion of a specific kind of consulting service. Finally, the fact that the literature does not offer a clear research protocol to be followed when the objective is to estimate the rate of failure in the implementation of strategic initiatives also plays a fundamental role in explaining the differences between studies. Given the exceptionally broad range of estimates produced as a result of the factors mentioned above, their quotation in generic terms may have little more than academic value. This conclusion should also be seen as a warning against the use of the current higher estimates of rates of failure (70–90%) to justify any course of action, whether in research or in management practice.
Another important contribution of this research to the literature is that it exposes the need for, and lays the foundations of, a protocol to guide researchers in the process of estimating strategy implementation failure rates. This is a feature that distinguishes it from the two studies previously discussed. In what follows, we propose a template for such a protocol, aimed at enhancing the comparability between estimates and increasing their predictive capability. This protocol should be regarded, however, as a starting point for discussion rather than as a complete proposal. As discussed below, when the objective is to estimate strategy implementation failure rates, the protocol comprises five principal aspects.
First, it is important to accurately characterise the context of the study. In particular, it is fundamental that relevant organisational factors (e.g., firm size, sector of operation, ownership, management style) and environmental variables (e.g., economic, social and cultural context) that might impact on the degree of success or failure of a strategy are clearly identified and discussed. It is well known that some contingency factors might impact on the success or failure of a strategic initiative, and therefore, knowing them is important to enhance comparability between estimates and to design tailor-made guidelines for implementation.
Second, once the context has been established, the actual types of the business strategies being assessed must be carefully detailed. Considering that the process of implementing different strategies can have quite different outcomes within the same organisational and/or environmental context, it is critical to clarify which type of strategy is being analysed. Besides this, it is important to clarify whether the study is assessing the success/failure of modifications of existing strategies or the implementation of whole new strategies and whether it is focused on transactional or transformational changes.
Third, it is fundamental to establish a clear and consistent definition of ‘failure’ or of ‘success’. Although a universally acceptable definition of strategy implementation failure is not compulsory, a clear definition is nonetheless important for methodological consistency, as it will ensure a common understanding of what is being assessed and enhance comparability between studies. As part of this definition, it is fundamental to specify the intended outcomes of the implementation process, the measurable indicators of these outcomes and the specific target levels to be attained for these indicators in order for an implementation to qualify as a success (or as a failure).
Fourth, the research methodology used to estimate the rate of failure has to be clearly discussed. Considering that a crucial aspect in estimating the degree of success/failure of strategy implementation is the ability to identify and quantify the outcomes of the process, and that different research strategies often have distinctly different methods for data collection and analysis, it is important that these methods and their assumptions are properly discussed. Information regarding the reliability and validity of the measurement instruments must also be provided to allow for an independent assessment of their methodological rigour and consistency.
Finally, as in any carefully done research, weaknesses of the analysis, which may limit the ability to generalise its conclusions to other contexts, must be identified and characterised. The identification of these weaknesses and the provision of suggestions on how to address them is a key step towards improvement.
Adherence to this protocol is imperative for a better understanding of the reasons behind the different estimates produced and for deriving more robust estimates of the rates of strategy implementation failure. Only in this way will we be able to identify the real scale of the problem and plan appropriate corrective actions. Unless it is properly understood whether there are strategic initiatives that are more difficult to implement than others, whether there are sectors or areas of activity where strategies are more difficult to implement, and whether there are cultural issues and other contextual factors that explain the differences in the estimates produced, the mere quotation of these estimates will be of little practical value.
It is important to acknowledge, however, that considerable progress has been made on this topic in the last two decades, and that the lower failure rates recently estimated by some researchers might be a consequence of these advances. Nevertheless, there are still a number of issues that need further understanding in order to better guide research and practice in this area.
First, it is important to understand whether the rates of failure are context-dependent. The fact that the estimates produced are so different might suggest context-dependence, indicating that implementation should be tailored to the characteristics of the organisations and/or of their environment.
Second, it is important to understand whether the apparent improvement in the rates of success is in fact a verified tendency and the extent to which each of the possible explanations advanced here has contributed to this improvement (e.g., scientific progress in the fields of strategy implementation and change management, better management education programmes, time since the adoption of a strategy and familiarity with it, accumulation of knowledge, particularly from the experience of early adopters, and customisation of general strategies). The identification of best practices resulting from this line of research might also play an important role in further promoting successful implementation.
Third, whenever strategy implementation initiatives culminate in failure, it is important to understand what the main causes of the failure were, in order to identify whether some causes are more important and frequent than others.
Finally, there seems to be, in some sectors of the literature reviewed, the view that some ‘types’ of strategic initiatives are easier to implement and more likely to succeed than others. While this view might be correct, it is important to bear in mind that it is not uncommon to find different studies comparing the success rates of the same types of strategic initiatives that derive considerably different estimates, suggesting that factors other than the type of strategy might play an important role in their success or failure.
Overall, while the real rate of strategy implementation failure might be difficult to determine with certainty, in-depth studies on these issues might shed some light on the matter, and help us understand why so many strategy implementation initiatives fail.
Acknowledgement
The authors thank two anonymous referees for the insightful comments and helpful suggestions. The authors are also pleased to acknowledge the financial support from Fundação para a Ciência e a Tecnologia (SFRH/BSAB/863/2008), FEDER/COMPETE (grant PEst-C/EGE/UI4007/2011), Faculdade de Economia, Universidade do Algarve, and Newport Business School, University of Wales.