
Strategy implementation: What is the failure rate?

Published online by Cambridge University Press:  14 January 2015

Carlos J F Cândido*
Affiliation:
CEFAGE, Évora, Portugal; and Faculty of Economics, University of Algarve, Faro, Portugal
Sérgio P Santos
Affiliation:
CEFAGE, Évora, Portugal; and Faculty of Economics, University of Algarve, Faro, Portugal
*Corresponding author: ccandido@ualg.pt

Abstract

It is often claimed that 50–90% of strategic initiatives fail. Although these claims have had a significant impact on management theory and practice, they are controversial. We aim to clarify why this is the case. Towards this end, an extensive review of the literature is presented, assessed, compared and discussed. We conclude that while it is widely acknowledged that the implementation of a new strategy can be a difficult task, the true rate of implementation failure remains to be determined. Most of the estimates presented in the literature are based on evidence that is outdated, fragmentary, fragile or just absent. Careful consideration is advised before using current estimates to justify changes in the theory and practice. A set of guiding principles is presented for assisting researchers to produce better estimates of the rates of failure.

Type: Research Article

Copyright © Cambridge University Press and Australian and New Zealand Academy of Management 2015

Introduction

The Business Policy field was founded at the start of the 20th century (Rumelt, Schendel, & Teece, 1994; Hambrick & Chen, 2008) and strategic management was formally born in the 1960s (Amitabh, 2010), when Chandler (1962), Ansoff (1965) and Learned, Christensen, Andrews, & Guth (1965) published their pioneering books. Since then, strategic management has gone through several stages (Ansoff, Declerck, & Hayes, 1976; O’Shannassy, 2001), taken many forms (Mintzberg, Ahlstrand, & Lampel, 1998) and changed profoundly. One of the most challenging and unresolved problems in this area is the ‘apparently high’ percentage of organisational strategies that fail, with some authors estimating a failure rate of between 50 and 90% (e.g., Kiechel, 1982, 1984; Gray, 1986; Nutt, 1999; Kaplan & Norton, 2001; Sirkin, Keenan, & Jackson, 2005). By failure we mean either that a new strategy was formulated but not implemented, or that it was implemented but with poor results. This definition is simple but consistent with the three features of a successful implementation as defined by Miller (1997): (1) completion of everything intended to be implemented within the expected time period; (2) achievement of the intended performance; and (3) acceptability of the method of implementation and outcomes within the organisation. It is also consistent with both the planned and the emergent strategy modes. In either mode, a strategy may or may not be completed, may achieve different degrees of performance and may vary in its acceptability.

The difficulty of successfully implementing new business strategies has long been recognised in the literature (e.g., Alexander, 1985; Wernham, 1985; Ansoff & McDonnell, 1990), and a 1989 Booz Allen study (cited by Zairi, 1995) concluded that most managers believe that implementing strategy is more difficult than formulating it. Specifically, the study found that 73% of managers believed that implementation is more difficult than formulation; 72% that it takes more time; and 82% that it is the part of the strategic planning process over which managers have least control.

In order to understand the reasons behind failure and improve the success rate of implementation, several researchers have provided comprehensive sets of implementation difficulties (Alexander, 1985; Wernham, 1985; Ansoff & McDonnell, 1990; O’Toole, 1995; Beer & Eisenstat, 2000; Cândido & Morris, 2000; Hafsi, 2001; Miller, Wilson, & Hickson, 2004; Sirkin, Keenan, & Jackson, 2005; Hrebiniak, 2006; Gandolfi & Hansson, 2010, 2011; Cândido & Santos, 2011). Many researchers – some of them following on from the inspiring work of Lewin (1947/1952) – have also proposed integrated frameworks for strategy formulation and successful implementation (e.g., Ansoff & McDonnell, 1990; Gioia & Chittipeddi, 1991; Baden-Fuller & Stopford, 1994; Kotter, 1995; Hussey, 1996; Galpin, 1997; Johnson & Scholes, 1999; Calori, Baden-Fuller, & Hunt, 2000; Cândido & Morris, 2001). Others have adopted a different approach and empirically tested the impact of these frameworks and of their success factors (e.g., Pinto & Prescott, 1990; Miller, 1997; Bauer, Falshaw, & Oakland, 2005; Bockmühl, König, Enders, Hungenberg, & Puck, 2011).

Several major debates in the literature (Eisenhardt & Zbaracki, 1992) have also contributed to the advancement of possible solutions to the implementation problem, namely those around the rationality of the strategy formation process (Fredrickson & Mitchell, 1984; Fredrickson & Iaquinto, 1989; Dean & Sharfman, 1993; Papadakis, Lioukas, & Chambers, 1998); the accidental, evolutionary or natural selection approaches to strategy (Alchian, 1950; Cohen, March, & Olsen, 1972; Nelson & Winter, 1974; Hannan & Freeman, 1977; Aldrich, 1979; March, 1981; Van de Ven & Poole, 1995); the rate, rhythm or pattern of organisational change (Dunphy & Stace, 1988; Weick & Quinn, 1999); the incremental or emergent additions to intended strategy (Mintzberg & Waters, 1985; Mintzberg, 1987; Quinn, 1989); the idiosyncratic nature of each individual strategic decision (Mintzberg, Raisinghani, & Théorêt, 1976; French, Kouzmin, & Kelly, 2011); the impact of top management team composition and of the relationships between its members (Hambrick & Mason, 1984; Naranjo-Gil, Hartmann, & Maas, 2008; O’Shannassy, 2010); the alternative management styles and strategic change methods (Hart, 1992; Stace & Dunphy, 1996; Johnson & Scholes, 1999; Balogun & Hailey, 2008); the distinctions and relationships between strategy process, content and context (Pettigrew, 1987; Barnett & Carroll, 1995); and also the ‘less rational’ political, cultural, behavioural, learned and even symbolic aspects of effective strategic change (Cyert & March, 1964; Carnall, 1986; DeGeus, 1988; Senge, 1990; Gioia & Chittipeddi, 1991; March, 1997; Nonaka, 2007; Goss, 2008).

Although remarkable progress has been made in the strategic management field, the problem of strategy implementation failure persists, and it is still an important and ongoing concern for researchers and practitioners (Mockler, 1995; Barney, 2001; Hickson, Miller, & Wilson, 2003).

Probably one of the most important challenges in this area is to discover how to ensure successful implementation. A useful first step in this direction is to assess the real scale of the problem. This assessment is important for three main reasons. First, both researchers and practitioners currently seem to assume that failure rates are very high. Given that some of the high estimates have been used to guide research and practice in strategic management, it is necessary to assess the extent to which they provide an accurate and up-to-date account of the problem of strategy implementation failure. This is particularly relevant because some of the estimates presented in the literature have played an important role in the adoption or abandonment of management tools by practitioners, and in the choice of topics researched by academics. A rigorous assessment of the extent of the problem can therefore assist decision makers in making better-informed decisions about which strategies to adopt and which topics to research.

A second reason in favour of this assessment is that it will make it possible to determine whether the failure rates estimated over the years show any particular pattern or trend. This could be an important finding, as it might indicate changes in the way strategies have been implemented over the years, changes in the nature of the strategies, or changes in the way implementation success has been measured. The identification of clear patterns or trends in the results can therefore open several avenues for research; in particular, it can be an important catalyst for research on the reasons behind the patterns observed.

The third main reason is that the percentage of strategies that fail is a controversial issue, as no one seems to know what the real rate of failure is. By reviewing and discussing the relevant literature, this research provides a clear and comprehensive understanding of the nature of this problem, so that the factors contributing to it can be identified and properly addressed. In doing so, it exposes the need for, and lays the foundations of, a clear protocol to guide researchers in estimating strategy implementation failure rates more rigorously. The development of such a protocol is fundamental to helping managers and researchers make better judgements of the value of strategy types, implementation approaches and management instruments.

This paper therefore aims to contribute to the discussion on the estimation of strategy implementation failure rates. In particular, we aim to show that the current state of affairs in the field of strategic management does not allow a single robust estimate of the strategy implementation failure rate to be provided. In line with this objective, we also suggest a template for a protocol that can help researchers develop better measures of strategy implementation failure rates. To this end, an extensive review of the literature on strategy implementation failure rates is presented and scrutinised.

In pursuit of this research agenda, the remainder of this paper is organised into several sections. It starts by discussing the research methodology and the process we have followed to address the objectives of this paper. It then addresses the issue of what the rate of strategy implementation failure is. A discussion of the literature dealing with this issue ensues and evidence is presented that supports the conclusions we have reached. The paper concludes by deriving implications for the literature and practice on strategy implementation.

Methodology: Search Strategy and Selection Criteria

With the objective of assessing the rate of business strategy implementation failure/success, we carried out an extensive review of the literature. First, we tried to identify all publications in scholarly journals in the EBSCOhost research databases that present estimates of this rate. Several search strings, including strateg* and fail*, strateg* and success*, strateg* and implement*, and transfor* and fail*, were applied to the keywords, titles and abstracts of the publications; the asterisk (*) is a wildcard representing any sequence of characters. Second, within this first set, we identified all papers from business journals; publications that were not in fact in the business area, even though they mentioned the search terms in the keywords, title or abstract, were omitted from further analysis. Third, we analysed the abstracts of all publications on this final list in order to assess their relevance to our research. We considered relevant only those studies that could present a percentage of failure (or of success) in business strategy execution consistent with the definition of failure presented previously. Fourth, for the publications considered relevant, we analysed the full text in order to determine whether an estimate of the failure rate was provided. Fifth, the bibliographic references of the selected papers were used as a source to identify papers or other evidence not captured in our electronic database search. It is relevant to note that the studies dealing with this issue have been authored by academics and practitioners, including consulting companies, and that not all of these studies have been published in academic journals. A search strategy based exclusively on evidence documented in academic journals would therefore be incomplete.
Consequently, our sixth step consisted of additional searches carried out on the Internet search engine Google, on the websites of major consulting companies and in several national library online catalogues (England, United States, Ireland, Scotland, Canada, Australia and Portugal), which allowed the identification of some additional relevant studies. Unfortunately, however, some of these studies were not available for consultation and it was not possible for us to gain access to either a hard or an electronic copy. Interestingly, many of these unavailable studies were authored by consulting companies (Arthur D. Little, 1992; A.T. Kearney, 1992; Prospectus Strategy Consultants, 1996) and were abundantly quoted, even by reputed academic researchers. Finally, we also contacted by e-mail the consulting companies, the individual authors of the reports (when their names were publicly available) and the authors who had quoted those studies. In total, more than 45 e-mails were sent. In spite of all the efforts made to obtain copies of the studies, most proved unfruitful. Many of the companies and authors contacted replied, but we did not succeed in obtaining the required information, either because the studies were no longer available (e.g., A.T. Kearney, A.D.L., Prospectus) or because the companies were unable to assist individuals with specific research requests (e.g., B.C.G., McKinsey).
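The first filtering step above, in which truncated search strings such as strateg* and fail* are matched against titles and abstracts, can be sketched in code. This is an illustrative reconstruction only, not the authors' actual tooling: the record data and helper names are hypothetical, and the sketch assumes the database wildcard behaves like word-stem truncation.

```python
import re

def wildcard_to_regex(term: str) -> re.Pattern:
    """Translate a truncation term (e.g. 'strateg*') into a
    case-insensitive regular expression matching the word stem."""
    # '*' is assumed to match any run of word characters after the stem
    pattern = re.escape(term).replace(r"\*", r"\w*")
    return re.compile(rf"\b{pattern}\b", re.IGNORECASE)

def matches_search_string(text: str, terms: list[str]) -> bool:
    """A record matches a search string only if every term occurs."""
    return all(wildcard_to_regex(t).search(text) for t in terms)

# Hypothetical records: (title, abstract) pairs
records = [
    ("Why strategies fail", "We study strategic failure rates."),
    ("Marketing mix basics", "An introduction to the 4 Ps."),
]

# Search strings from the methodology: each is an AND of two terms
search_strings = [["strateg*", "fail*"], ["strateg*", "implement*"]]

hits = [
    title
    for title, abstract in records
    if any(matches_search_string(f"{title} {abstract}", s)
           for s in search_strings)
]
print(hits)  # ['Why strategies fail']
```

The sketch shows why truncation casts a wide net: strateg* captures ‘strategy’, ‘strategies’ and ‘strategic’ alike, which is precisely why the later manual screening steps (business-journal filtering, abstract and full-text analysis) were needed.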

Therefore, the literature reviewed in this paper includes all the academic studies that have met the search criteria above and the consultancy studies that were relevant and available for consultation. The latter account for 45% of the studies analysed. The results of this search are presented and discussed in the next section.

Strategy Implementation Failure/Success Rates

Although the literature on the topic of strategy implementation failure/success is not in short supply, the existing studies are mixed in terms of their features (e.g., failure rate estimated, amount of effort involved in the estimation, complexity and quality of the methodology used, unit of analysis, criteria adopted to define success and research strategies adopted), and this requires special care when comparing their results. The most significant features of the studies considered for this research are summarised in Tables 1 and 2. Table 1 lists studies that have focused on general business strategies, and Table 2 lists studies that have focused on specific business strategies. Although the former aims to be exhaustive, in the sense that it shows all the studies that our search strategy identified, the latter aims to be illustrative of the variability of the available estimates.

Table 1 Studies estimating general business strategy implementation failure rates

Notes.

a Study by a consulting firm or by authors associated with consulting companies.

b The study was not available on-line. We did not receive replies to our e-mails or the replies were negative.

na=information not available.

Table 2 Studies estimating specific business strategy implementation failure rates

Notes.

a Study by a consulting firm or by authors associated with consulting companies.

b The study was not available on-line. We did not receive replies to our e-mails or the replies were negative.

c In the same year, in a study by O’Brien and Voss (1992), the authors concluded that most British organisations were having problems developing TQM. However, they noted that most UK organisations were in the early stages of developing a total approach to quality, that is, at the beginning of implementation.

DCs=developed countries; IVJs=international joint ventures; LDCs=less-developed countries; na=information not available; TQM=Total Quality Management.

The information in these tables is organised into five columns. The first column indicates the author(s) and year of each study; studies are listed chronologically. The research method used to estimate the rates of failure/success and the variables against which such rates were assessed are described in the second and third columns, respectively. The fourth column indicates the estimated rate of failure presented by each study. Finally, the last column records additional comments on each study.

The most appropriate conclusion that can be drawn from the analysis of Tables 1 and 2 is that it is difficult to provide accurate estimates of strategy implementation failure rates. The studies carried out so far by researchers and management consulting firms have obtained mixed results regarding the success and failure rates of business strategy implementation. In fact, as can be seen from the fourth column of the tables, the range of variation of the estimates is remarkable. If we first analyse the studies that focus on business strategy implementation in general, we can verify that the estimated rates of failure range from 28 to 90%. When we turn to the studies that have focused on the implementation of specific business strategies, this range is even wider. While some studies have obtained rates of failure as low as 7–10% (e.g., Taylor, 1997; Walsh, Hughes, & Maddox, 2002), others have obtained rates of failure as high as 80–90% (e.g., Voss, 1988, 1992; A.T. Kearney, 1992). Therefore, although it can be claimed that up to 90% of strategic initiatives fail, as this is the upper limit of the estimates provided in the literature, this is likely to be an overestimate.

Two major reasons support this view. First, most of the higher estimates presented in the literature come from consulting firms (e.g., Kiechel, 1982, 1984; Judson, 1991; A.T. Kearney, 1992; Prospectus Strategy Consultants, 1996; Hackett Group, 2004a, 2004b; Dion, Allday, Lafforet, Derain, & Lahiri, 2007). Although we were unable to assess the scientific rigour of some of these studies, as it was not possible to obtain details regarding the robustness of the research methodologies used and the results achieved, it has long been recognised that consulting firms may have overestimated failure rates (Powell, 1995). Overestimated failure rates can be used to the advantage of consulting firms, namely as a marketing strategy to convince customers of the importance of adopting their services (Powell, 1995). Second, the results in the tables seem to suggest a downward trend in the estimates of failure, indicating that the percentage of strategic initiatives that fail has decreased over time (see Figure 1), a likely result of the scientific progress made in this field over the past two decades and its inclusion in business education programmes. In particular, the identification of obstacles to strategy implementation, and a better understanding of the ways they interact with each other, by both researchers and practitioners (e.g., Alexander, 1985; Ansoff & McDonnell, 1990; Kotter, 1995; Beer & Eisenstat, 2000; Kaplan & Norton, 2001) might have played an important role in improving the rates of failure over the years.
Therefore, although some of the higher estimates may have been appropriate and reflected the true dimension of the problem one or two decades ago, they are likely to be outdated nowadays. This is also the case because time since the adoption of a new strategy contributes to a better internalisation of the elements of that strategy and, consequently, to better performance (Powell, 1995; Prajogo & Brown, 2006). Considering that some strategies and management tools have been in practice for a long time, it is likely that familiarity with them has increased, leading to the accumulation of knowledge and, consequently, to more successful implementations (Taylor & Wright, 2003). Several other explanations can be offered for the downward trend in the estimates of failure. For example, companies may follow successful early adopters, benefiting from their experience and thus improving failure rates. Companies may also have become more aware of the need to carefully customise new strategies or management tools to their own characteristics and to the contexts in which they operate, instead of blindly adopting general, undifferentiated strategies and tools. Independently or in combination, each of these factors might help explain the apparent improvement in failure rates.

Figure 1 Business strategy implementation failure rates. Note: For this figure, we used the rates in Table 1. When two rates were given in any one study, we used the average for the figure.

It seems therefore reasonable to assume that the current rates of failure are well below some of the estimates often quoted in the literature. However, if this is the case, what is then the real percentage of strategies that fail? Although there have been several studies on this issue in the past two decades, our view is that the current state of affairs does not allow a robust estimate to be provided. Several reasons can be advanced for this.

First, the studies discussing the success/failure rate of strategy implementation vary considerably in the amount of effort put into the estimation of the rate. In some of these studies, the estimation of the rate of failure/success was the main objective (e.g., Golembiewski, 1990; Park, 1991; Wilkinson, Redman, & Snape, 1994; Pautler, 2003; Makino, Chan, Isobe, & Beamish, 2007). In others, this objective was part of a broader research agenda (e.g., Beamish, 1985; Voss, 1988, 1992; Taylor, 1997; Nutt, 1999; Walsh, Hughes, & Maddox, 2002; Taylor & Wright, 2003; McKinsey, 2006), while in still others the rates of success/failure were presented as complementary information in an introduction or as an aside (e.g., Gray, 1986; Harrigan, 1988a; Hall, Rosenthal, & Wade, 1993; Mohrman, Tenkasi, Lawler, & Ledford, 1995; Lewy & Mée, 1998a, 1998b; Sila, 2007). One implication of the effort put into the estimation is its impact on the complexity of the computational method used. In some studies the computation is very simple (e.g., Beamish, 1985; Harrigan, 1988a; Sila, 2007), while in others it is much more complex and demanding (e.g., Golembiewski, 1990; Park, 1991).

Second, these studies are not easily comparable because the criteria used to define success/failure are very distinct and can, consequently, account for some of the differences between estimates. It is possible to distinguish between ‘technical success’ and ‘competitive success’ (Voss, 1992), between ‘success as process ease’ and ‘success as process outcomes’ (Bauer, Falshaw, & Oakland, 2005) and, similarly, between ‘implementation success’ and ‘organisational success’ (Hussey, 1996; Mellahi & Wilkinson, 2004). The higher estimated failure rates may reflect a stricter sense of success adopted by researchers.

Estimates of technical success and of success as process ease may be higher than estimates of success as process outcomes or of organisational competitive success in the marketplace, since more internal and external contingencies can affect the latter types of success. In Tables 1 and 2, we have reported mainly failure rates from a ‘competitive success’ or an ‘organisational success’ perspective. Even so, the studies in the tables are not easily comparable, because in some cases researchers relied on managers’ perceptions to derive an estimate of success/failure (e.g., Beamish, 1985; Gray, 1986; Voss, 1988, 1992; Taylor & Wright, 2003), whereas in others they used more objective measurements (e.g., Golembiewski, Proehl, & Sink, 1981, 1982; Golembiewski, 1990; Hall, Rosenthal, & Wade, 1993; Pautler, 2003; Makino et al., 2007). Furthermore, some studies have used a single criterion to define success/failure (e.g., Gray, 1986; Walsh, Hughes, & Maddox, 2002; Sila, 2007), whereas others have used multiple criteria (e.g., Golembiewski, 1990; Park, 1991; Wilkinson, Redman, & Snape, 1994; Mohrman et al., 1995).

Third, different studies have used different research strategies to estimate the rate of success/failure of strategy implementation. Some researchers have adopted a case study approach (e.g., Voss, 1988, 1992; Hall, Rosenthal, & Wade, 1993; Lewy & Mée, 1998a, 1998b; Nutt, 1999). Others have employed a survey method (e.g., Beamish, 1985; Wilkinson, Redman, & Snape, 1994; Mohrman et al., 1995; Walsh, Hughes, & Maddox, 2002; McKinsey, 2006; Makino et al., 2007; Sila, 2007), while still others have used a combination of methods (e.g., Gray, 1986; Harrigan, 1988a; Charan & Colvin, 1999; Taylor & Wright, 2003). It is well known that, while some research strategies allow statistical generalisations to be made, others, like case-based research, allow only analytical generalisations.

Fourth, the unit of analysis varies considerably from one study to another. Some researchers have considered as their unit of analysis a single project, such as developing a new product or launching quality circles, which may be seen as part of wider strategic initiatives (e.g., Nutt, 1987, 1999; Voss, 1988, 1992; Park, 1991; Lewy & Mée, 1998a, 1998b; Hackett Group, 2004a, 2004b; Lawson, Stratton, & Hatch, 2006; Lawson, Hatch, & Desroches, 2008). Other researchers have focused on business-wide strategic initiatives, which may in turn be decomposed into several smaller projects (e.g., Kiechel, 1982, 1984; Harrigan, 1988a, 1988b, 1988c; Mohrman et al., 1995; Walsh, Hughes, & Maddox, 2002; Pautler, 2003; McKinsey, 2006; Sila, 2007).

Fifth, some studies have proved very difficult to obtain or access, in particular those undertaken by management consulting firms such as A.T. Kearney, Arthur D. Little, McKinsey, Prospectus and Booz Allen Hamilton. Conclusions drawn from the estimates they have produced, without a proper understanding of the context, methodology and results obtained, may therefore lack legitimacy and scientific rigour. In spite of this, it is common to find researchers (e.g., Holder & Walker, 1993; Mintzberg, 1994: 25, 284; Smith, Tranfield, Foster, & Whittle, 1994; Zairi, 1995; Dow, Samson, & Ford, 1999; Korukonda, Watson, & Rajkumar, 1999; Kaplan & Norton, 2001: 1; Walsh, Hughes, & Maddox, 2002; Sterling, 2003) who quote the results of these studies not because they have read the original work but because the estimates have been quoted by other researchers or in well-known publications such as The Economist or The Wall Street Journal. Unfortunately, this has led some of these studies to be widely misquoted and misunderstood (Taylor, 1997).

Finally, it is not always easy to distinguish fact from fiction in some of the estimates offered in the literature. In particular, there appear to be no scientific grounds behind some of them. For example, Mintzberg (1994: 25, 284), Kaplan & Norton (2001: 1), Burnes (2004, 2005), Raps (2005) and Sila (2007) quote several sources for the failure rates they mention in their papers (e.g., Kiechel, 1982, 1984; Judson, 1991; Dooyoung, Kalinowski, & El-Enein, 1998; Beer & Nohria, 2000; Waclawski, 2002; Sirkin, Keenan, & Jackson, 2005). However, a detailed analysis of these sources shows that they did not carry out an estimation of the quoted failure rates. They claim their estimates were based on ‘interviews’, ‘studies’, ‘experience’, ‘the literature’ or the ‘popular management press’, rather than on solid empirical evidence. On other occasions, the sources of the estimates are incorrectly interpreted (e.g., Kaplan & Norton, 2001, in interpreting the findings of Charan & Colvin, 1999). We also found evidence of studies incorrectly identifying their sources (e.g., Dyason & Kaye, 1997) and of studies not identifying their sources at all (e.g., Jantz & Kendall, 1991; Neely & Bourne, 2000; Becer, Hage, McKenna, & Wilczynski, 2007).

Unless these factors are accounted for, any attempt to estimate the real success/failure rates of strategy implementation is doomed to fail or will be of little practical value.

Independently of the ‘real’ success/failure rate, and even though success rates seem to have improved over time, it is reasonable to conclude that the number of strategic initiatives that fail is still considerably higher than desirable. This suggests that organisations either need better implementation guidelines or need to make better use of existing ones. The need for better implementation processes has been widely acknowledged by researchers (e.g., Dean & Bowen, 1994; Mockler, 1995; Barney, 2001; Hickson, Miller, & Wilson, 2003), and research on how to avoid implementation obstacles and improve implementation has been underway for many years (e.g., Stanislao & Stanislao, 1983; Alexander, 1985; Ansoff & McDonnell, 1990; Kotter, 1995; Beer & Eisenstat, 2000; Miller, Wilson, & Hickson, 2004; Stadler & Hinterhuber, 2005). It is, therefore, imperative to assess the extent to which these guidelines account for some of the improvements achieved, and to understand why so many initiatives still fail.

Although efforts should be made to reduce failure rates, it is important to emphasise that failure can be an important part of the strategic learning process within organisations (e.g., Mintzberg, 1987; Krogh & Vicari, 1993; Sitkin, Sutcliffe, & Schroeder, 1994; Edmondson, 2011). Unintended past mistakes and deliberate strategic experiments can both generate useful lessons (Wilkinson & Mellahi, 2005), which may prove highly advantageous in the marketplace (Krogh & Vicari, 1993).

Conclusion

Business strategy implementation has long attracted the interest of researchers and practitioners. Although it is often claimed that 50–90% of strategic initiatives fail (e.g., Mintzberg, 1994: 25, 284; Kaplan & Norton, 2001: 1), an exhaustive analysis of the literature on strategy formulation and implementation suggests that some of the evidence supporting these figures is outdated, fragmentary, lacks scientific rigour or is simply absent. Much of the uncertainty surrounding this issue is also due to the fact that different studies have obtained mixed results. These findings are important to the field of strategy and change management in two ways. First, they add to the discussion of the appropriateness of the failure rates proposed by some studies, a discussion that has attracted interest in recent years but on which much remains to be done. As far as we know, only two studies explicitly address this issue: a paper by Cândido and Santos (2011), which focuses on total quality management failure rates, and a paper by Hughes (2011), which questions the assertion that ‘70 per cent of all organisational change initiatives really fail’. Our research presents, however, important departures from these previous studies. While the former focused on the implementation of a specific business strategy (i.e., total quality management) and the latter confined its analysis to five selected papers, none of which presented evidence to support the claim they made, our study is much broader in focus and more comprehensive in its analysis. We scrutinise the implementation of both general and specific business strategies and carry out an extensive review of all the studies that discuss strategy implementation failure rates.
In so doing, we have found that the range of variation in the estimates is remarkable, spanning from a failure rate as low as 7% to one as high as 90%. Several factors can help explain this variation, including possible overestimation, the exposure of organisations to different contextual and environmental factors, and differences in the concepts used to define success/failure and in the samples and methodologies adopted. These differences can be attributed to several factors, one of the most important being the lack of a comprehensive review of the relevant literature by some of the studies. This has prevented the authors of these studies from becoming aware of the state of the art on the topic and, consequently, from adopting concepts and methods consistent with previous research. Another important explanation relates to the fact that research objectives vary considerably between studies. Some studies have established the estimation of the rate of failure as their main goal, whereas in others this estimation has assumed a less important role, with implications for the sophistication of the methodology and of the criteria adopted for the calculations. A third factor explaining the differences in the criteria and methods used to estimate failure rates relates to the intended use of these rates. While academic researchers are likely to be more interested in the study of a particular type of implementation approach/tactic, strategy, or even management instrument (such as the balanced scorecard or total quality management), practitioners are likely to be more interested in the promotion of a specific kind of consulting service.
Finally, the fact that the literature does not offer a clear research protocol to be followed when the objective is to estimate the rate of failure of strategic initiatives also plays a fundamental role in explaining the differences between studies. Given the exceptionally broad range of estimates produced as a result of the factors mentioned above, quoting them in generic terms may have little more than academic value. This conclusion should also be seen as a warning against the use of the current higher estimates of failure rates (70–90%) to justify any course of action, whether in research or in management practice.

Another important contribution of this research is that it exposes the need for, and lays the foundations of, a protocol to guide researchers in estimating strategy implementation failure rates. This feature distinguishes it from the two studies previously discussed. In what follows, we propose a template for such a protocol, aimed at enhancing the comparability of estimates and increasing their predictive capability. This protocol should be regarded, however, as a starting point for discussion rather than as a complete proposal. It comprises five principal aspects.

First, it is important to accurately characterise the context of the study. In particular, it is fundamental that relevant organisational factors (e.g., firm size, sector of operation, ownership, management style) and environmental variables (e.g., economic, social and cultural context) that might affect the degree of success or failure of a strategy are clearly identified and discussed. It is well known that contingency factors can influence the success or failure of a strategic initiative; knowing them is therefore important to enhance comparability between estimates and to design tailor-made implementation guidelines.

Second, once the context has been established, the types of business strategies being assessed must be carefully detailed. Considering that implementing different strategies can have quite different outcomes within the same organisational and/or environmental context, it is critical to clarify which type of strategy is being analysed. It is also important to clarify whether the study is assessing the success/failure of modifications to existing strategies or the implementation of entirely new strategies, and whether it is focused on transactional or transformational changes.

Third, it is fundamental to establish a clear and consistent definition of ‘failure’ or ‘success’. Although a universally accepted definition of strategy implementation failure is not compulsory, a clear definition is nonetheless important for methodological consistency, as it ensures a common understanding of what is being assessed and enhances comparability between studies. As part of this definition, it is fundamental to specify the intended outcomes of the implementation process, the measurable indicators of these outcomes and the target levels these indicators must attain for an implementation to qualify as a success (or as a failure).

Fourth, the research methodology used to estimate the rate of failure has to be clearly discussed. A crucial aspect of estimating the degree of success/failure of strategy implementation is the ability to identify and quantify the outcomes of the process, and different research strategies often entail distinctly different methods of data collection and analysis; it is therefore important that these methods and their assumptions are properly discussed. Information regarding the reliability and validity of the measurement instruments must also be provided to allow an independent assessment of their methodological rigour and consistency.

Finally, as in any rigorous research, weaknesses of the analysis that may limit the generalisability of its conclusions to other contexts must be identified and characterised. Identifying these weaknesses and suggesting how to address them is a key step towards improvement.

Adherence to this protocol is imperative for a better understanding of the reasons behind the different estimates produced and for deriving more robust estimates of strategy implementation failure rates. Only in this way will we be able to identify the real scale of the problem and plan appropriate corrective actions. Unless it is properly understood whether some strategic initiatives are more difficult to implement than others, whether there are sectors or areas of activity where strategies are more difficult to implement, and whether cultural issues and other contextual factors explain the differences in the estimates produced, the mere quotation of these estimates will be of little practical value.

It is important to acknowledge, however, that considerable progress has been made on this topic in the last two decades, and that the lower failure rates recently estimated by some researchers might be a consequence of these advances. Nevertheless, a number of issues still require further investigation to better guide research and practice.

First, it is important to understand whether failure rates are context-dependent. The fact that the estimates produced are so different might suggest context-dependence, indicating that implementation should be tailored to the characteristics of the organisations and/or of their environment.

Second, it is important to understand whether the apparent improvement in success rates is in fact a verified tendency, and the extent to which each of the possible explanations advanced here has contributed to it (e.g., scientific progress in the fields of strategy implementation and change management, better management education programmes, time since the adoption of a strategy and familiarity with it, accumulation of knowledge, particularly from the experience of early adopters, and customisation of general strategies). The identification of best practices resulting from this line of research might also play an important role in further promoting successful implementation.

Third, whenever strategy implementation initiatives culminate in failure, it is important to understand the main causes of the failure in order to identify whether some causes are more important and more frequent than others.

Finally, in some sectors of the literature reviewed there seems to be a view that some ‘types’ of strategic initiatives are easier to implement successfully than others. While this view might be correct, it is important to bear in mind that it is not uncommon to find different studies comparing the success rates of the same types of strategic initiatives yet deriving considerably different estimates, suggesting that factors other than the type of strategy might play an important role in success or failure.

Overall, while the real rate of strategy implementation failure might be difficult to determine with certainty, in-depth studies on these issues might shed some light on the matter, and help us understand why so many strategy implementation initiatives fail.

Acknowledgement

The authors thank two anonymous referees for their insightful comments and helpful suggestions. The authors are also pleased to acknowledge the financial support from Fundação para a Ciência e a Tecnologia (SFRH/BSAB/863/2008), FEDER/COMPETE (grant PEst-C/EGE/UI4007/2011), Faculdade de Economia, Universidade do Algarve, and Newport Business School, University of Wales.

References

Alchian, A. A. (1950). Uncertainty, evolution, and economic theory. Journal of Political Economy, 58(3), 211221.Google Scholar
Aldrich, H. E. (1979). Organizations and environments. New Jersey: Prentice-Hall.Google Scholar
Alexander, L. D. (1985). Successfully implementing strategic decisions. Long Range Planning, 18(3), 9197.CrossRefGoogle Scholar
Amitabh, M. (2010). Research in strategy-structure-performance construct: Review of trends, paradigms and methodologies. Journal of Management and Organization, 16(5), 744763.Google Scholar
Ansoff, H. I. (1965). Corporate strategy. New York: McGraw Hill.Google Scholar
Ansoff, H. I., Declerck, R. P., & Hayes, R. L. (1976). From strategic planning to strategic management. New York: John Wiley & Sons.Google Scholar
Ansoff, H. I., & McDonnell, E. (1990). Implanting strategic management. New York: Prentice Hall International.Google Scholar
Arthur D. Little (1992). Executive caravan TQM survey summary. Cambridge, MA: Arthur D. Little Corporation.Google Scholar
Baden-Fuller, C., & Stopford, J. M. (1994). Rejuvenating the mature business: The competitive challenge. Boston, MA: Harvard Business School Press.Google Scholar
Balogun, J., & Hailey, V. H. (2008). Exploring strategic change. Harlow: Pearson.Google Scholar
Barnett, W. P., & Carroll, G. R. (1995). Modeling internal organizational change. Annual Review of Sociology, 21(1), 217236.Google Scholar
Barney, J. B. (2001). Is the resource-based ‘view’ a useful perspective for strategic management research? Yes. Academy of Management Review, 26(1), 4156.Google Scholar
Bauer, J., Falshaw, R., & Oakland, J. S. (2005). Implementing business excellence. Total Quality Management, 16(4), 543553.Google Scholar
Beamish, P. W. (1985). The characteristics of joint ventures in developed and developing countries. Columbia Journal of World Business, 20(2), 1319.Google Scholar
Becer, E., Hage, B., McKenna, M., & Wilczynski, H. (2007). Performance-improvement initiatives – Three best practices for project success. New York: Booz Allen Hamilton. Retrieved from www.boozallen.com.Google Scholar
Beer, M., & Eisenstat, R. A. (2000). The silent killers of strategy implementation and learning. Sloan Management Review, 41(4), 2940.Google Scholar
Beer, M., & Nohria, N. (2000). Cracking the code of change. Harvard Business Review, 78(3), 133141.Google ScholarPubMed
Bockmühl, S., König, A., Enders, A., Hungenberg, H., & Puck, J. (2011). Intensity, timeliness, and success of incumbent response to technological discontinuities: A synthesis and empirical investigation. Review of Managerial Science, 5(4), 265289.Google Scholar
Burnes, B. (2004). Kurt Lewin and the planned approach to change: A re-appraisal. Journal of Management Studies, 41(6), 9771002.CrossRefGoogle Scholar
Burnes, B. (2005). Complexity theories and organizational change. International Journal of Management Reviews, 7(2), 7390.Google Scholar
Calori, R., Baden-Fuller, C., & Hunt, B. (2000). Managing change at novotel: Back to the future. Long Range Planning, 33(6), 779804.CrossRefGoogle Scholar
Cândido, C. J. F., & Morris, D. S. (2000). Charting service quality gaps. Total Quality Management, 11(4–6), 463472.Google Scholar
Cândido, C. J. F., & Morris, D. S. (2001). The implications of service quality gaps for strategy implementation. Total Quality Management, 12(7/8), 825833.Google Scholar
Cândido, C. J. F., & Santos, S. P. (2011). Is TQM more difficult to implement than other transformational strategies? Total Quality Management, 22(11), 11391164.Google Scholar
Carnall, C. A. (1986). Managing strategic change: An integrated approach. Long Range Planning, 19(6), 105115.CrossRefGoogle Scholar
Chandler, A. D. (1962). Strategy and structure: Chapters in the history of the American industrial enterprise. Cambridge: The MIT Press.Google Scholar
Charan, R., & Colvin, G. (1999). Why CEOs fail. Fortune, 139(12), 6878.Google Scholar
Cohen, M. D., March, J. G., & Olsen, J. P. (1972). A garbage can model of organisational choice. Administrative Science Quarterly, 17(1), 125.Google Scholar
Corboy, M., & Corrbui, D. (1999). The seven deadly sins of strategy implementation. Management Accounting, 77(10), 2930Google Scholar
Cyert, R. M., & March, J. G. (1964). The behavioral theory of the firm: A behavioral science – Economics amalgam. In W. W. Cooper, H. J. Leavitt, & M. W. Shelly (Ed.), New perspectives in organizational research. New York: John Wiley & Sons, 289384.Google Scholar
Dean, J. W., & Bowen, D. E. (1994). Management theory and total quality management: Improving research and practice through theory development. Academy of Management Review, 19(3), 392418.Google Scholar
Dean, J. W., & Sharfman, M. P. (1993). Procedural rationality in the strategic decision-making process. Journal of Management Studies, 30(4), 587610.Google Scholar
DeGeus, A. P. (1988). Planning as learning. Harvard Business Review, 66(2), 7074.Google Scholar
Dion, C., Allday, D., Lafforet, C., Derain, D., & Lahiri, G. (2007). Dangerous liaisons, mergers and acquisitions: The integration game, Report by Hay Group, Philadelphia, USA, pp. 1–16. Retrieved from www.haygroup.com.Google Scholar
Dooyoung, S., Kalinowski, J. G., & El-Enein, G. A. (1998). Critical implementation issues in total quality management. Advanced Management Journal, 63(1), 1014.Google Scholar
Dow, D., Samson, D., & Ford, S. (1999). Exploding the myth: Do all quality management practices contribute to superior quality performance? Production and Operations Management, 8(1), 127.Google Scholar
Doyle, M., Claydon, T., & Buchanan, D. (2000). Mixed results, lousy process: The management experience of organizational change. British Journal of Management, 11(3), S59S80.Google Scholar
Dunphy, D. C., & Stace, D. A. (1988). Transformational and coercive strategies for planned organizational change: Beyond the O. D. model. Organization Studies, 9(3), 317334.Google Scholar
Dyason, M. D., & Kaye, M. M. (1997). Achieving real business advantage through the simultaneous development of managers and business excellence. Total Quality Management, 8(2/3), 145151.Google Scholar
Economist Intelligence Unit (2013). Why good strategies fail: Lessons from the C-Suite. London: Economist Intelligence Unit Limited.Google Scholar
The Economist (1992). The cracks in quality. The Economist, 18, 6970.Google Scholar
Edmondson, A. C. (2011). Strategies for learning from failure. Harvard Business Review, 89(4), 4855.Google ScholarPubMed
Eisenhardt, K. M., & Zbaracki, M. J. (1992). Strategic decision making. Strategic Management Journal, 13(8), 1737.Google Scholar
Franken, A., Edwards, C., & Lambert, R. (2009). Executing strategic change: Understanding the critical management elements that lead to success. California Management Review, 51(3), 4973.Google Scholar
Fredrickson, J. W., & Iaquinto, A. L. (1989). Inertia and creeping rationality in strategic decision processes. Academy of Management Journal, 32(3), 516542.Google Scholar
Fredrickson, J. W., & Mitchell, T. R. (1984). Strategic decision processes: Comprehensiveness and performance in an industry with an unstable environment. Academy of Management Journal, 27(2), 399423.Google Scholar
French, S. N. J., Kouzmin, A., & Kelly, S. J. (2011). Questioning the epistemic virtue of strategy: The emperor has no clothes. Journal of Management and Organization, 17(4), 434447.Google Scholar
Galpin, T. J. (1997). Making strategy work – Building sustainable growth capability. San Francisco: Jossey-Bass Publishers.Google Scholar
Gandolfi, F., & Hansson, M. (2010). Reduction-in-force (RIF) – New developments and a brief historical analysis of a business strategy. Journal of Management and Organization, 16(5), 727743.Google Scholar
Gandolfi, F., & Hansson, M. (2011). Causes and consequences of downsizing: Towards an integrative framework. Journal of Management and Organization, 17(4), 498521.Google Scholar
Gioia, D. A., & Chittipeddi, K. (1991). Sensemaking and sensegiving in strategic change initiation. Strategic Management Journal, 12(6), 433448.Google Scholar
Golembiewski, R. T. (1990). The irony of ironies: Silence about success rates. In R. T. Golembiewski (Ed.), Ironies in organizational development. NJ, USA: Transaction Publications, 1129.Google Scholar
Golembiewski, R. T., Proehl, C. W., & Sink, D. (1981). Success of OD applications in the public sector: Toting up the score for a decade, more or less. Public Administration Review, 41(6), 679682.Google Scholar
Golembiewski, R. T., Proehl, C. W., & Sink, D. (1982). Estimating the success of OD applications. Training and Development Journal, 36(4), 8695.Google Scholar
Goss, D. (2008). Enterprise ritual: A theory of entrepreneurial emotion and exchange. British Journal of Management, 19(2), 120137.Google Scholar
Gray, D. H. (1986). Uses and misuses of strategic planning. Harvard Business Review, 64(1), 8997.Google Scholar
Hackett Group (2004a). Balanced scorecards: Are their 15 minutes of fame over?. Miami: The Hackett Group. Retrieved from www.thehackettgroup.com.Google Scholar
Hackett Group (2004b). Most executives are unable to take balanced scorecards from concept to reality, press release, The Hackett Group, Miami, October, pp. 1–4.Google Scholar
Hafsi, T. (2001). Fundamental dynamics in complex organizational change: A longitudinal inquiry into Hydro-Québec’s management. Long Range Planning, 34(5), 557583.Google Scholar
Hall, G., Rosenthal, J., & Wade, J. (1993). How to make reengineering really work. Harvard Business Review, 71(6), 119131.Google Scholar
Hambrick, D. C., & Chen, M. (2008). New academic fields as admittance-seeking social movements: The case of strategic management. Academy of Management Review, 33(1), 3254.Google Scholar
Hambrick, D. C., & Mason, P. A. (1984). Upper echelons: The organization as a reflection of its top managers. Academy of Management Review, 9(2), 193206.Google Scholar
Hannan, M. T., & Freeman, J. (1977). The population ecology of organizations. American Journal of Sociology, 82(5), 929964.CrossRefGoogle Scholar
Harrigan, K. R. (1988a). Strategic alliances and partner asymmetries. Management International Review, 28(4), 5372.Google Scholar
Harrigan, K. R. (1988b). Joint ventures and competitive strategy. Strategic Management Journal, 9(2), 141158.Google Scholar
Harrigan, K. R. (1988c). Joint ventures: A mechanism for creating strategic change. In A. M. Pettigrew (Ed.), The management of strategic change. New York: Basil Blackwell, 195230.Google Scholar
Hart, S. L. (1992). An integrative framework for strategy-making processes. Academy of Management Review, 17(2), 327351.Google Scholar
Hickson, D. J., Miller, S. J., & Wilson, D. C. (2003). Planned or prioritized? Two options for managing the implementation of strategic decisions? Journal of Management Studies, 40(7), 18031836.Google Scholar
Holder, T., & Walker, L. (1993). TQM implementation. Journal of European Industrial Training, 17(7), 1821.Google Scholar
Hrebiniak, L. G. (2006). Obstacles to effective strategy implementation. Organizational Dynamics, 35(1), 1231.Google Scholar
Hughes, M. (2011). Do 70 per cent of all organizational change initiatives really fail? Journal of Change Management, 11(4), 451464.Google Scholar
Hussey, D. (1996). A framework for implementation. In D. Hussey (Ed.), The implementation challenge. Chichester, England: John Wiley & Sons, 114.Google Scholar
Jantz, C. J., & Kendall, D. A. (1991). Consumer-driven innovative product development. Prism, 1, 2429. Retrieved from www.adlittle.com.Google Scholar
Jørgensen, H. H., Owen, L., & Neus, A. (2008). Making change work. Somers: IBM.Google Scholar
Johnson, G., & Scholes, K. (1999). Exploring corporate strategy: Text and cases. New York: Prentice Hall.Google Scholar
Judson, A. S. (1991). Invest in a high-yield strategic plan. The Journal of Business Strategy, 12(4), 3439.Google Scholar
Kaplan, R. S., & Norton, D. P. (2001). The strategy-focused organization – How balanced scorecard companies thrive in the new business environment. Boston, MA: Harvard Business School Press.Google Scholar
Kearney, A. T. (1992). Total quality: Time to take off the rose tinted spectacles. Kempston: IFS Publications.Google Scholar
Kiechel, W. (1982). Corporate strategists under fire. Fortune, 106(13), 3439.Google Scholar
Kiechel, W. (1984). Sniping at strategic planning. Planning Review, 811.Google Scholar
Korukonda, A. R., Watson, J. G., & Rajkumar, T. M. (1999). Beyond teams and empowerment: A counterpoint to two common precepts in TQM. Advanced Management Journal, 64(1), 2936.Google Scholar
Kotter, J. P. (1995). Leading change: Why transformation efforts fail. Harvard Business Review, 73(2), 5967.Google Scholar
Krogh, G., & Vicari, S. (1993). An autopoiesis approach to experimental strategic learning. In P. Lorange, B. Chakravarthy, J. Roos, & A. Van de Ven (Eds.), Implementing strategic processes, change learning & co-operation. Cambridge: Basil Blackwell, 394410.Google Scholar
Lawson, R., Hatch, T., & Desroches, D. (2008). Scorecard best practices: Design, implementation, and evaluation. NJ, USA: John Wiley & Sons.Google Scholar
Lawson, R., Stratton, W., & Hatch, T. (2006). Scorecarding goes global – Companies around the world are deriving benefits from performance management tools. Strategic Finance, 87(9), 3541.Google Scholar
Learned, E. P., Christensen, C. R., Andrews, K. R., & Guth, W. D. (1965). Business policy – Text and cases. Illinois: Irwin.Google Scholar
Lewin, K. (1947 (1952)). Frontiers in group dynamics. In K. Lewin (Ed.), Field theory in social science – Selected theoretical papers. London: Tavistock Publications, 188237.Google Scholar
Lewy, C. P., & Mée, A. F. (1998a). In de kaart laten kijken, de tien geboden bij BSC-implementaties, versie 1.0. Management Control and Accounting, 2, 3237.Google Scholar
Lewy, C. P., & Mée, A. F. (1998b). Balanced scorecard – Implementing the ten commandments. London: KPMG Consulting.Google Scholar
Makino, S., Chan, C. M., Isobe, T., & Beamish, P. W. (2007). Intended and unintended termination of international joint ventures. Strategic Management Journal, 28(11), 11131132.Google Scholar
Mankins, M. C., & Steele, R. (2005). Turning great strategy into great performance. Harvard Business Review, 83(7/8), 6572.Google Scholar
March, J. G. (1981). Footnotes to organizational change. Administrative Science Quarterly, 26(4), 563577.Google Scholar
March, J. G. (1997). The technology of foolishness. In D. S. Pugh (Ed.), Organisation theory – Selected readings. London: Penguin Books, 339352.Google Scholar
McCunn, P. (1998). The balanced scorecard: The eleventh commandment. Management Accounting, 76(11), 3436.Google Scholar
McKinsey, (2006). Improving strategic planning: A McKinsey survey. The McKinsey Quarterly, 111. Retrieved from www.mckinseyquarterly.comGoogle Scholar
McKinsey, (2008). Creating organizational transformations. The McKinsey Quarterly, 17. Retrieved from www.mckinseyquarterly.com.Google Scholar
Mellahi, K., & Wilkinson, A. (2004). Organizational failure: A critique of recent research and a proposed integrative framework. International Journal of Management Reviews, 5/6(1), 2141.Google Scholar
Miller, S. (1997). Implementing strategic decisions: Four key factors. Organisation Studies, 18(4), 577602.Google Scholar
Miller, S., Wilson, D., & Hickson, D. (2004). Beyond planning strategies for successfully implementing strategic decisions. Long Range Planning, 37(3), 201218.Google Scholar
Mintzberg, H. (1987). Crafting strategy. Harvard Business Review, 65(4), 6675.Google Scholar
Mintzberg, H. (1994). The rise and fall of strategic planning. New York: Prentice Hall.Google Scholar
Mintzberg, H., Ahlstrand, B., & Lampel, J. (1998). Strategy safari – A guided tour through the wilds of strategic management. London: Prentice Hall.Google Scholar
Mintzberg, H., Raisinghani, D., & Théorêt, A. (1976). The structure of ‘unstructured’ decision processes. Administrative Science Quarterly, 21(2), 246275.Google Scholar
Mintzberg, H., & Waters, J. A. (1985). Of strategies, deliberate and emergent. Strategic Management Journal, 6(3), 257272.Google Scholar
Mockler, R. J. (1995). Strategic management: The beginning of a new era. In D. E. Hussey (Ed.), Rethinking strategic management. Chichester: John Wiley & Sons, 141.Google Scholar
Mohrman, S. A., Tenkasi, R. V., Lawler, E. E. III, & Ledford, G. E. Jr. (1995). Total quality management: Practice and outcomes in the largest US firms. Employee Relations, 17(3), 26–41.
Morisawa, T., & Kurosaki, H. (2003). Using the balanced scorecard in reforming corporate management systems. Nomura Research Institute Papers, 71, 1–15.
Naranjo-Gil, D., Hartmann, F., & Maas, V. S. (2008). Top management team heterogeneity, strategic change and operational performance. British Journal of Management, 19(3), 222–234.
Neely, A., & Bourne, M. (2000). Why measurement initiatives fail. Measuring Business Excellence, 4(4), 3–6.
Nelson, R. R., & Winter, S. G. (1974). Neoclassical vs. evolutionary theories of economic growth: Critique and prospectus. The Economic Journal, 84(336), 886–905.
Nonaka, I. (2007). The knowledge-creating company. Harvard Business Review, 85(7/8), 162–171.
Nutt, P. C. (1987). Identifying and appraising how managers install strategy. Strategic Management Journal, 8(1), 1–14.
Nutt, P. C. (1999). Surprising but true: Half the decisions in organizations fail. Academy of Management Executive, 13(4), 75–90.
O’Brien, C., & Voss, C. A. (1992). In search of quality – An assessment of 42 British organizations using the criteria of the Baldrige Award. Operations Management Paper 92/02, London: London Business School.
O’Shannassy, T. (2001). Lessons from the evolution of the strategy paradigm. Journal of Management and Organization, 7(1), 25–37.
O’Shannassy, T. (2010). Board and CEO practice in modern strategy-making: How is strategy developed, who is the boss and in what circumstances. Journal of Management and Organization, 16(2), 280–298.
O’Toole, J. (1995). Leading change: Overcoming the ideology of comfort and the tyranny of custom. San Francisco: Jossey-Bass.
Papadakis, V. M., Lioukas, S., & Chambers, D. (1998). Strategic decision-making process: The role of management and context. Strategic Management Journal, 19(2), 115–147.
Park, S. (1991). Estimating success rates of quality circle programs: Public and private experiences. Public Administration Quarterly, 15(1), 133–146.
Pautler, P. A. (2003). The effects of mergers and post-merger integration: A review of the business consulting literature, draft paper, Federal Trade Commission, Bureau of Economics, pp. 1–41. Retrieved from www.ftc.gov/be/rt/businesreviewpaper.pdf.
Pettigrew, A. M. (1987). Context and action in the transformation of the firm. Journal of Management Studies, 24(6), 649–670.
Pinto, J. K., & Prescott, J. E. (1990). Planning and tactical factors in the project implementation process. Journal of Management Studies, 27(3), 305–327.
Powell, T. C. (1995). Total quality management as competitive advantage: A review and empirical study. Strategic Management Journal, 16(1), 15–37.
Prajogo, D. I., & Brown, A. (2006). Approaches to adopting quality in SMEs and the impact on quality management practices and performance. Total Quality Management, 17(5), 555–566.
Project Management Institute (2014). The high cost of low performance. Newtown Square: Project Management Institute.
Prospectus Strategy Consultants (1996). Profiting from increased consumer sophistication – A survey of retail financial services in Ireland and Great Britain. Dublin, Ireland: Prospectus Strategy Consultants.
Quinn, J. B. (1989). Strategic change: ‘Logical incrementalism’. Sloan Management Review, 30(4), 45–60.
Raps, A. (2005). Strategy implementation – An insurmountable obstacle? Handbook of Business Strategy, 141–146.
Rumelt, R. P., Schendel, D. E., & Teece, D. J. (1994). Fundamental issues in strategy. In R. P. Rumelt, D. E. Schendel, & D. J. Teece (Eds.), Fundamental issues in strategy – A research agenda. Boston: Harvard Business School Press, 9–47.
Senge, P. (1990). The leader’s new work: Building learning organizations. Sloan Management Review, 32(1), 7–23.
Sila, I. (2007). Examining the effects of contextual factors on TQM and performance through the lens of organisational theories: An empirical study. Journal of Operations Management, 25(1), 83–109.
Sirkin, H. L., Keenan, P., & Jackson, A. (2005). The hard side of change management. Harvard Business Review, 83(10), 109–118.
Sitkin, S. B., Sutcliffe, K. M., & Schroeder, R. G. (1994). Distinguishing control from learning in total quality management: A contingency perspective. Academy of Management Review, 19(3), 537–564.
Smith, M. (2005). The balanced scorecard. Financial Management, 27–28.
Smith, S., Tranfield, D., Foster, M., & Whittle, S. (1994). Strategies for managing the TQ agenda. International Journal of Operations & Production Management, 14(1), 75–88.
Soltani, E., Lai, P., & Gharneh, N. S. (2005). Breaking through barriers to TQM effectiveness: Lack of commitment of upper-level management. Total Quality Management, 16(8/9), 1009–1021.
Stace, D. A., & Dunphy, D. C. (1996). Translating business strategies into action: Managing strategic change. In D. Hussey (Ed.), The implementation challenge. Chichester: John Wiley & Sons, 69–86.
Stanislao, J., & Stanislao, B. C. (1983). Dealing with resistance to change. Business Horizons, 26(4), 74–78.
Stadler, C., & Hinterhuber, H. H. (2005). Shell, Siemens and DaimlerChrysler: Leading change in companies with strong values. Long Range Planning, 38, 467–484.
Sterling, J. (2003). Translating strategy into effective implementation: Dispelling the myths and highlighting what works. Strategy & Leadership, 31(3), 27–34.
Taylor, W. A. (1997). Leadership challenges for smaller organisations: Self-perceptions of TQM implementation. Omega – The International Journal of Management Science, 25(5), 567–579.
Taylor, W. A., & Wright, G. H. (2003). A longitudinal study of TQM implementation: Factors influencing success and failure. Omega – The International Journal of Management Science, 31(2), 97–111.
Van de Ven, A. H., & Poole, M. S. (1995). Explaining development and change in organizations. Academy of Management Review, 20(3), 510–540.
Voss, C. A. (1988). Success and failure in advanced manufacturing technology. International Journal of Technology Management, 3(3), 285–297.
Voss, C. A. (1992). Successful innovation and implementation of new processes. Business Strategy Review, 3(1), 29–44.
Waclawski, J. (2002). Large-scale organizational change and performance: An empirical examination. Human Resource Development Quarterly, 13(3), 289–305.
Walsh, A., Hughes, H., & Maddox, D. P. (2002). Total quality management continuous improvement: Is the philosophy a reality? Journal of European Industrial Training, 26(6), 299–307.
Weick, K. E., & Quinn, R. E. (1999). Organizational change and development. Annual Review of Psychology, 50(1), 361–386.
Wernham, R. (1985). Obstacles to strategy implementation in a nationalized industry. Journal of Management Studies, 22(6), 632–648.
Wilkinson, A., & Mellahi, K. (2005). Organizational failure. Long Range Planning, 38(3), 233–238.
Wilkinson, A., Redman, T., & Snape, E. (1994). The problems with quality management – The view of managers: Findings from an Institute of Management survey. Total Quality Management, 5(6), 397–406.
Woodley, P. M. (2006). Culture management through the balanced scorecard: A case study, unpublished PhD thesis. Defence College of Management and Technology, Cranfield University.
Zairi, M. (1995). Strategic planning through quality policy deployment: A benchmarking approach. In G. K. Kanji (Ed.), Total quality management: Proceedings of the first world congress. London: Chapman & Hall, 207–215.
Table 1 Studies estimating general business strategy implementation failure rates

Table 2 Studies estimating specific business strategy implementation failure rates

Figure 1 Business strategy implementation failure rates. Note: For this figure, we used the rates in Table 1. When two rates were given in any one study, we used the average for the figure.