Cucina, Walmsley, Gast, Martin, and Curtin (2017) raise an important issue in evaluating whether our current approaches for key driver analysis of employee opinion survey data are indeed best practices. As has been argued elsewhere (Putka & Oswald, 2016; Scherbaum, Putka, Naidoo, & Youssefnia, 2010), there can be, and often is, misalignment between current and best practices. We agree with Cucina et al. that our field should engage in a larger discussion of these issues. That discussion is critical, as industrial and organizational (I-O) psychologists are competing with those outside our field who have either little knowledge of best practices in data analysis (but who have been empowered by technology that automates the analysis) or little knowledge of psychology (but a great deal of knowledge of big data analytical techniques). I-O psychologists are in the vanguard of survey data analysis (Ducey et al., 2015), and we have a responsibility to maintain the standards of our field as well as to wield our influence to guide practitioners outside our field toward sound theoretical and analytical approaches.
In the spirit of that discussion, we want to raise a number of additional issues to consider when evaluating survey key driver analysis (SKDA), as well as note some alternative ideas on the conclusions and approach offered by Cucina et al. Our commentary focuses on three points: (a) current practices among I-O psychologists for conducting key driver analysis; (b) alternative perspectives on several of the issues Cucina et al. raise in their focal article; and (c) the need to view the analysis of survey data within the larger survey and organizational development efforts in which it is embedded.
Current State of Key Driver Analysis
We agree with Cucina et al. (2017) that more rigor is needed in the analysis of employee opinion survey data. The results of these analyses are often used to make consequential decisions about individuals and organizations. As previously argued (e.g., Scherbaum et al., 2010), there are a number of aspects of survey data analysis that could be improved. Although we agree that there is room for improvement, we come to different conclusions about who is primarily responsible for the continuation of bad practices (e.g., I-O psychologists or those outside our field) and what the current typical practices among I-O psychologists are. Cucina et al. conclude that I-O psychologists are primarily responsible for perpetuating bad practice and that practices widely seen as bad represent the typical practice among I-O psychologists. We come to the opposite conclusion in both instances.
Anyone currently working in the field of employee surveys knows it is a challenging time. This area of practice has become increasingly crowded with those outside of I-O psychology offering surveying services (e.g., MBAs, clinical psychologists, market researchers, data scientists). Given the size of the market for employee surveying and the appetite for new and novel surveying approaches, it is no surprise that established companies from outside the human capital space, as well as start-ups, are moving into this area. From what we have seen, there are plenty of examples of non-I-O psychologists doing solid work. However, there are also plenty of examples of practices that we would not recommend. Though we cannot control the practices and services of those outside our field, we can establish best practices, hold ourselves accountable to them, and encourage others to do the same. We believe that Cucina et al.’s focal article makes an important contribution to that effort by evaluating some aspects of our current practices.
Cucina et al.’s evaluation concludes that our current practices are flawed, in effect suggesting that what most in our field consider bad practice is our typical practice. As professionals who have worked in a variety of consulting firms offering survey analytics services, we have not found these bad practices to be the norm. In fact, many in our field are actively engaged in promoting best practices in analyzing survey data (e.g., the Mayflower Group, the IT Survey Group). The more problematic underlying issue in Cucina et al.’s perspective is that it essentially assumes that I-O psychologists as a whole know little about appropriately working with data, conducting regression analysis, using multivariate analyses, or the underlying constructs we are measuring. For example, Cucina et al. correctly describe the critical role that the standard deviation plays in survey key driver analyses. However, their argument goes on to assume that we tend to ignore descriptive statistics and would not consider a variable important if it had a very low mean and little variability. Running a regression analysis (or any analysis) without understanding the distributional properties of one’s data is unquestionably bad practice. To us, this is I-O psychology common sense and part of the basic training that I-O PhD programs provide (see the education and training guidelines of the Society for Industrial and Organizational Psychology [SIOP]). Our experience working with other I-O psychologists, as well as educating them, leads us to a more optimistic perspective on what those in our field tend to do in practice.
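To make this concrete, below is a minimal sketch in Python of the kind of descriptive screening we consider part of basic training before any key driver regression is run; the file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical item-level survey data on a 1-5 favorability scale.
df = pd.read_csv("survey_items.csv")  # file and column names are illustrative
drivers = ["leadership", "recognition", "growth", "pay"]
outcome = "engagement"

# Screen distributional properties first: a driver with near-zero variance
# cannot covary with the outcome, and an extreme mean signals ceiling or
# floor effects worth substantive attention regardless of any regression.
print(df[drivers + [outcome]].agg(["mean", "std", "min", "max", "skew"]).T)

# Only after the descriptives look sensible, fit the multivariate model.
X = sm.add_constant(df[drivers])
print(sm.OLS(df[outcome], X, missing="drop").fit().summary())
```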
Cucina et al. make a critical point about the need to avoid dustbowl empiricism when conducting survey key driver analyses and offering actions for organizations to take based on those results. We could not agree more that it is important to have our work guided by theory. Where we disagree is with the assumption that our field’s survey work is largely not guided by theory and that our field knows little about the causes of job satisfaction and employee engagement. Although there is much more to learn (e.g., Saari & Judge, 2004), I-O psychologists know a lot about the antecedents of the types of job attitudes that often serve as the dependent variables in key driver analyses. There are many meta-analyses and countless (experimental and nonexperimental) primary studies on job attitudes such as engagement, job satisfaction, and organizational commitment. There is also an extensive literature on attitudes more generally, including their structure, antecedents, and how they form and change. I-O psychologists are well versed in these literatures. A well-designed employee opinion survey incorporates findings from this literature by including theoretically grounded and practically important dimensions and questions. Not knowing which of a set of theoretically selected variables will be most strongly related to an outcome in a given situation is not dustbowl empiricism (Putka & Oswald, 2016). It is normal research.
Although we agree that dustbowl empiricism is happening outside of our field, we strongly reject Cucina et al.’s assumption about the normative practices of those within it. As King, Tonidandel, Cortina, and Fink (2016) note, I-O psychologists are uniquely positioned to address issues of dustbowl empiricism in analytics. Cucina et al. are right to remind us of the need to avoid bad practices, such as ignoring the distribution of our data, using stepwise regression, using univariate analyses when multivariate analyses should be used, entering more variables into a regression model than the sample size can support, or running analyses on subgroups with insufficient sample sizes.
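As an illustration of how a few of these checks can be made routine, consider the following sketch; the thresholds (roughly 15 observations per predictor and a reporting floor of 30 respondents per subgroup) are rules of thumb we assume for illustration, not standards prescribed by Cucina et al. or SIOP.

```python
def regression_guardrails(n_obs: int, n_predictors: int,
                          min_ratio: float = 15.0, min_group_n: int = 30) -> list[str]:
    """Flag two of the sample-size pitfalls noted above. Thresholds are
    illustrative rules of thumb, not fixed standards."""
    warnings = []
    if n_predictors and n_obs / n_predictors < min_ratio:
        warnings.append(
            f"{n_obs / n_predictors:.1f} observations per predictor; "
            "trim the model or gather more responses."
        )
    if n_obs < min_group_n:
        warnings.append(f"Subgroup n = {n_obs} is below the floor of {min_group_n}.")
    return warnings

# Example: 120 respondents and 12 candidate drivers trips the first check.
print(regression_guardrails(n_obs=120, n_predictors=12))
```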
Alternative Perspectives
Cucina et al. raise a number of issues about the methodological limitations of survey key driver analysis. We agree with several of their points, such as the principle that data from nonexperimental research should not be used to infer causality, the importance of understanding the standard deviation, and the observation that survey items and dimensions are often highly correlated. On other points, however, we think the “drive” toward the psychometric perspective leads to a narrow position, one that is inconsistent with basic theory and organizational reality.
The first area where we see an alternative perspective concerns the question of whether key drivers should change over time. To understand whether the drivers should change, one needs to consider what makes a variable a key driver in the first place. For a variable to be labeled a key driver, there must be variation on that variable, and that variation must covary with variation on the outcome variable. Put simply, some employees need to hold negative perceptions on the drivers and the outcome, and other employees need to hold positive perceptions on the drivers and the outcome. Typically, the goal of organizational development efforts based on survey results is to change the opinions of employees with negative perceptions. In other words, the organization is hoping to increase the mean and reduce the standard deviation. Thus, if organizational development efforts are successful, the variability in the target variable should shrink, and in turn its relationship with the outcome variable should weaken (i.e., the key drivers should change over time if organizational development efforts are successful). At least conceptually, drivers can and should change.
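A small simulation illustrates this logic. It assumes bivariate normal data and, as a simplification, models a fully successful intervention as leaving only employees with favorable (above-average) perceptions of the driver:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5_000

# A driver and an outcome with a true correlation of about .50.
driver = rng.normal(0.0, 1.0, n)
outcome = 0.5 * driver + np.sqrt(1 - 0.5 ** 2) * rng.normal(0.0, 1.0, n)

# Model a fully successful intervention as truncation: only employees with
# favorable driver perceptions remain in the distribution.
mask = driver > 0

print(f"driver SD:  full {driver.std():.2f} -> restricted {driver[mask].std():.2f}")
print(f"driver-outcome r: full {np.corrcoef(driver, outcome)[0, 1]:.2f} "
      f"-> restricted {np.corrcoef(driver[mask], outcome[mask])[0, 1]:.2f}")
```

With these settings the restricted correlation drops from about .50 to roughly .33 (the exact value varies with the random seed) purely because the driver’s variability shrank, which is exactly why a successfully addressed driver should fade from the “key” list.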
If empirical data show that the key drivers are not changing from year to year, two primary explanations are possible: (a) attitudes and perceptions do not change, or (b) the efforts to change them are unsuccessful. The first explanation is not consistent with basic theory on attitudes and attitude change (e.g., Petty & Cacioppo, 2012; Wood, 2000). The second is consistent with the organizational development literature, which shows that change is difficult and that change efforts often fail (Porras & Robertson, 1993). In the context of employee opinion surveying, this is not surprising, given that the targets of our interventions are often leader behaviors, perceptions of interpersonal treatment, and organizational culture. These variables are notoriously difficult to change, both because of the targets of the interventions (e.g., leaders) and because policies and procedures that maintain the variability in the distribution (e.g., pay and promotion practices) are often left out of organizational change efforts. Thus, Cucina et al.’s (2017) finding that key drivers are stable in a large organization that is historically resistant to change (i.e., the government!) makes sense. However, this does not mean that drivers cannot and do not change.
The second area where we think there are alternative perspectives concerns the notion that different organizations should not see different key drivers (i.e., there is no situation specificity). This position is supported by the application of the psychometric individual differences conceptualization to employee attitudes and perceptions. The variables that have been at the core of this conceptualization (e.g., intelligence) are believed to be stable, and their relationships with outcome variables are not moderated by contextual or situational variables (e.g., Schmidt & Hunter, 1977).¹ However, this perspective is difficult to apply to employee attitudes for a number of reasons. We agree that there may be a set of constructs related to the job attitudes often used as outcomes in survey key driver analysis. This is consistent with existing research and meta-analyses on these types of variables. However, this is not the question being asked in a typical survey key driver analysis.
The question being asked is, “What are the possible areas an organization should consider acting on at the present moment?” What those areas are, in a given organization, depends on the current distribution of the relevant variables in that organization. For example, previous research has demonstrated that leadership is related to job satisfaction (e.g., Gerstner & Day, 1997). In one organization, there may be a mix of effective and ineffective leaders. As long as there is variability in job satisfaction, we could expect, based on theory, that leadership would be a “key driver” in this organization. In another organization, the leaders may be highly effective, with little variability in leadership effectiveness. In this organization, we would not expect leadership to be a key driver. It is a classic restriction of range problem. Such a finding does not mean that leadership is not an important antecedent of job satisfaction. It simply means that leadership is not an area this organization should focus on to improve job satisfaction at the present time. Organizations differ, and as a result, the areas they need to focus on to improve job attitudes will differ (i.e., different key drivers).
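The attenuation at work in the leadership example can be expressed with the classic direct range restriction formula (Thorndike’s Case II); the specific numbers below are our own illustration, not from Cucina et al.:

```latex
r_{\text{restricted}} \;=\; \frac{r\,u}{\sqrt{1 - r^{2} + r^{2}u^{2}}},
\qquad u = \frac{s_{\text{restricted}}}{s_{\text{unrestricted}}}
```

With an unrestricted correlation of r = .50 and the leadership standard deviation cut in half (u = .50), the observed correlation falls to roughly .28, even though nothing about the underlying importance of leadership has changed.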
For us, the fundamental limitation of the psychometric approach advocated by Cucina et al. is that it answers an important question, but a different one from what organizations conducting survey key driver analysis are asking. Their approach attempts to build on the existing literature to develop a set of variables that are the key drivers. If this is the purpose, existing practices may or may not be helpful. However, the question that survey key driver analysis was intended to answer is, “What is the set of variables that may be useful to take action on in a given organization at a given point in time to improve a given outcome?” We agree that not all of the analytical approaches currently in use answer this question well (e.g., univariate analyses, stepwise regression), but others can generate insights about actions an organization can take to improve a particular outcome (e.g., model-averaging approaches; see Scherbaum et al., 2010, or Oswald & Putka, 2016).
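For readers less familiar with the model-averaging idea, here is one minimal variant, all-subsets regression with Akaike weights. The column names are hypothetical, the driver set is assumed to be small (the number of models grows exponentially), and this is an illustration rather than the exact procedure the cited authors describe:

```python
from itertools import combinations

import numpy as np
import pandas as pd
import statsmodels.api as sm

def akaike_weighted_importance(df: pd.DataFrame, drivers: list[str],
                               outcome: str) -> pd.Series:
    """Rank drivers by summed Akaike weights across all subset models.

    Every subset of drivers is fit as an OLS model; each model gets a
    weight proportional to exp(-0.5 * delta-AIC), and a driver's
    importance is the total weight of the models that include it.
    Illustrative only -- feasible for small driver sets (2^k models).
    """
    fits = []
    for k in range(1, len(drivers) + 1):
        for subset in combinations(drivers, k):
            X = sm.add_constant(df[list(subset)])
            fits.append((subset, sm.OLS(df[outcome], X, missing="drop").fit().aic))

    aics = np.array([aic for _, aic in fits])
    weights = np.exp(-0.5 * (aics - aics.min()))
    weights /= weights.sum()

    importance = pd.Series(0.0, index=drivers)
    for (subset, _), w in zip(fits, weights):
        importance[list(subset)] += w
    return importance.sort_values(ascending=False)

# e.g., akaike_weighted_importance(df, ["leadership", "recognition", "pay"], "engagement")
```

Unlike a single stepwise solution, this approach reflects how consistently a driver earns its place across many plausible models, which is closer in spirit to the question organizations are actually asking.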
Analysis of Survey Data in Service of Larger Organizational Development Efforts
We strongly agree with Cucina et al. (2017) that using survey key driver analysis as a purely statistical judgment tool is flawed. We have not seen it used that way. As part of a well-designed organizational development effort, properly conducted survey key driver analysis can help leaders make sounder judgments with the benefit of better data. These leaders are responsible for the health and success of their teams and sometimes of entire organizations. They make rational, data-informed decisions every day, and we have found most of them to be sophisticated data users.
However, not all leaders are sophisticated data users. This is why it is important to think about theoretical and practical relevance in the earliest design stages of any employee research program. A well-designed program will produce insights that leaders find clear, important, credible, and actionable. Producing such insights requires designing survey instruments that measure the most relevant issues at a particular point in time. This includes reducing the length and improving the timeliness of our measures. For example, what “drives” engagement (i.e., what needs are most salient to employees) in a seasonal business can vary at different times of the year.
Companies have only recently begun to measure engagement on a quarterly basis. We have observed over 100 such quarterly programs. One early emerging pattern is that employee survey statistics, when presented in the same cadence as other business data, come to be seen as one additional data point that helps improve people-investment decisions, rather than as the metric of leadership effectiveness. Survey key driver analysis then becomes part of a decision-making framework that generally proceeds as follows: (a) What do we need to achieve as a team in the next 3–6 months in order to be successful in the long term? (b) Based on the statistics (e.g., mean score, standard deviation, mean score vs. relevant comparisons, impact on the outcome, patterns among the themes from open-ended comments), what barriers might be getting in the way of hitting those goals? (c) Based on what I know about our team and the business, which of those barriers is most actionable and most important to address first?
The variance in key drivers that we have observed across time and teams is further evidence that key driver analysis results can be a valuable input to a business-centric decision-making process. Key drivers indeed shift over time when teams experience a meaningful change (e.g., restructuring, a change in operational or brand strategy, significant growth or decline). Key drivers also shift when leaders bring attention to an issue. For example, at a large financial institution, recognition ranked sixth as a key driver of engagement at the first measurement point. Given its low score and its relevance to a new strategic focus, leaders began talking about recognition and modeling better behavior. Three months later, recognition ranked third. Nine months later, it ranked first.
The business-centric interpretation model can be applied successfully to small teams within an organization. With a short instrument, stable and statistically significant key driver models can be produced for teams as small as 30 people. There is significant variance in key drivers across those teams. There is also significant, and sometimes substantial, variance across very large business lines within an organization. One example was a large retailer with multiple brands operating as a single company (one leadership team, shared corporate functions). The entire factor structure of the instrument differed across many of the brands, as did the key drivers. We regularly see the same pattern even in organizations that do not operate multiple brands (e.g., the key driver ranking for the sales team often differs from that of the engineering team).
Where Do We Go From Here?
We agree with Cucina et al. (2017) that it is time to reflect on our current survey key driver practices. Although we disagree with some of their conclusions and assumptions, we agree that there is room to improve. Cucina et al. offer one approach that could be useful to supplement and expand our existing body of knowledge of job attitudes. However, this approach does not eliminate the need to help a given organization identify the areas with the greatest potential to improve job attitudes at a given point in time. Survey key driver analysis is still needed for this, and depending on the specific analytical approach used, it could be useful.
In addition to the psychometric approach, we think our field also has a responsibility—and a great opportunity—to embrace methods from other disciplines in order to give our clients a more accurate assessment of their organizations. We consistently find nonlinear effects when correlating engagement measures with business outcomes such as regrettable attrition, sales, and customer satisfaction. Yet we rarely, if ever, see I-O psychologists describing key drivers in terms of both impact and inflection point. For example, measures of work–life balance typically need to reach “merely not unfavorable” levels to reduce attrition probabilities to below-baseline levels. Machine learning techniques have allowed us to surface patterns in minutes that would have taken days of cross-tabulations and analyses of variance (ANOVAs) to uncover (Oswald & Putka, 2016). These patterns can then be explored, evaluated, and discussed by rational organizational actors. Natural language processing techniques make it possible to distill tens of thousands of open-ended comments into a single page of visualized results. Yet we have not observed a significant move in the field toward using qualitative inputs as large-scale predictors of engagement and performance.
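As a sketch of the inflection-point framing, the following smooths a hypothetical unit-level relationship between work–life balance favorability and attrition nonparametrically and flags where attrition first drops below its baseline; the data file, column names, and smoothing fraction are all assumptions for illustration:

```python
import pandas as pd
from statsmodels.nonparametric.smoothers_lowess import lowess

# Hypothetical unit-level data: percent favorable on work-life balance items
# and each unit's subsequent 12-month attrition rate.
df = pd.read_csv("unit_outcomes.csv")  # file and column names are illustrative

# Smooth the relationship without assuming linearity; returns (x, y) pairs
# sorted by favorability.
curve = lowess(df["attrition_rate"], df["wlb_favorability"], frac=0.4)

# Report where the smoothed curve first drops below baseline attrition --
# a rough inflection point for prioritizing action.
baseline = df["attrition_rate"].mean()
below = curve[curve[:, 1] < baseline]
if below.size:
    print(f"Attrition falls below baseline near {below[0, 0]:.0f}% favorable.")
else:
    print("Smoothed attrition never falls below baseline in this sample.")
```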
We have observed a consistent trend across the thousands of leaders we have helped use survey data to improve their businesses: Those who view engagement data as continuous feedback and continuous improvement input are better at improving engagement and performance than those who view the data as a “test” or “evaluation.” If we treat engagement surveys like tests, our leaders will too. Instead, we advise treating employee attitude survey scores as one of many data points used to make sound business decisions.