
I've Found It, but What Does It Mean? On the Importance of Theory in Identifying Dominant General Factors

Published online by Cambridge University Press:  23 March 2016

Zhenyu Yuan*
Affiliation:
Department of Management and Organizations, University of Iowa
*
Correspondence concerning this article should be addressed to Zhenyu Yuan, Department of Management and Organizations, University of Iowa, 108 John Pappajohn Business Building, Iowa City, IA 52242. E-mail: zhenyu-yuan@uiowa.edu

Type: Commentaries
Copyright: © Society for Industrial and Organizational Psychology 2016

In their focal article, Ree, Carretta, and Teachout (2015) based their definition of a dominant general factor (DGF) on two criteria: (a) a DGF should be the largest source of reliable variance, and (b) it should influence every variable measuring the construct. Although detailed attention has been paid to the statistical properties of a DGF, I believe another criterion of equal if not greater importance is the theoretical justification to expect a DGF in the measurement of a construct. In this commentary, I highlight the importance of theory as an additional criterion for determining the meaningfulness and usefulness of DGFs, discuss the risks of creating a DGF without any theoretical guidance, and elaborate on the complexities surrounding job performance as a detailed example of why theory must come before extracting a DGF from performance ratings.

Our theory of a construct largely determines how we actually measure and model it (Hinkin, 1998). The first risk of ignoring theory in identifying a DGF lies in the ambiguity of its substantive meaning. Assuming that one can find a DGF that affects every variable and explains the largest amount of variance, can the researcher know for sure that the DGF is a theoretically meaningful common factor? The answer is no unless a theory points to the substantive meaning of the common factor. The two criteria given by the authors of the focal article are based solely on statistical results, with no reference to theory. Although they identified possible methodological¹ and theoretical reasons for the emergence of DGFs, this is a post hoc analysis. A purely data-driven process of identifying DGFs is contrary to good psychological measurement and might even be misleading in some cases. For example, if a researcher runs a factor analysis on family satisfaction, job satisfaction, and career satisfaction items, chances are there will be a DGF. But what is the theoretical underpinning of that DGF? Is it a higher-order construct such as general life satisfaction, or is it also tapping into personality traits such as positive affectivity (PA)? Without a priori theoretical guidance, it is difficult to specify the substantive meaning of the DGF. What we intend to measure and what we know about the construct should drive the measurement process, not vice versa. With a theoretical grasp of the construct beforehand, researchers can also design the study to control for nuisances that might add ambiguity to the meaning of the DGF. In the previous example, researchers could measure PA as a covariate and examine whether the DGF has any bearing on PA.
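
To make the ambiguity concrete, here is a minimal numerical sketch of the satisfaction example. It is not from the original commentary; all variable names and coefficients are hypothetical. The simulation lets PA color three satisfaction measures, shows that a factor satisfying both statistical criteria emerges, and then partials out a measured PA covariate as suggested above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical data: positive affectivity (PA) colors responses to
# family, job, and career satisfaction alike (loadings are made up).
pa = rng.normal(size=n)
items = np.column_stack([
    0.6 * pa + rng.normal(scale=0.8, size=n),  # family satisfaction
    0.6 * pa + rng.normal(scale=0.8, size=n),  # job satisfaction
    0.6 * pa + rng.normal(scale=0.8, size=n),  # career satisfaction
])

# Criterion check: largest eigenvalue = largest source of variance;
# same-signed weights = the factor touches every variable.
corr = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)        # ascending order
first = eigvecs[:, -1]
print(f"first factor: {eigvals[-1] / eigvals.sum():.0%} of variance, "
      f"same-signed loadings: {bool(np.all(first * first[0] > 0))}")

# The statistics cannot say whether this factor is 'general life
# satisfaction' or PA. Partialling out a measured PA covariate helps:
beta = items.T @ pa / (pa @ pa)                # per-item regression slopes
resid = items - np.outer(pa, beta)             # PA removed from each item
corr_resid = np.corrcoef(resid, rowvar=False)
print(f"after removing PA: {np.linalg.eigvalsh(corr_resid)[-1] / 3:.0%}")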

The second drawback of overlooking theory is the risk of erroneously specifying the measurement model. By simply subjecting items to different types of statistical analysis, researchers are, consciously or unconsciously, imposing a measurement model on the data. For example, if the causal direction is specified from the indicators to the construct, the construct is being treated as formative; otherwise, it is treated as reflective (Edwards & Bagozzi, 2000). If the researcher uses principal component analysis (PCA), he or she is performing data reduction, whereas exploratory factor analysis (EFA) is appropriate for identifying common factors (Fabrigar, Wegener, MacCallum, & Strahan, 1999). Generally, researchers will be able to model the construct of interest correctly provided that they have a good theoretical grasp of the construct (and of the statistical analysis, of course). A problem arises, however, if one attends only to the two statistical criteria while turning a blind eye to the nature of the construct. The two criteria given in the focal article assume that common factors underlie the covariance matrix in the first place, and this assumption clearly does not apply to every construct. To illustrate, the model will be misspecified if a researcher conducts EFA on items measuring socioeconomic status, a formative construct.
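
To illustrate how the analytic choice itself embodies a model, the sketch below (hypothetical data; scikit-learn assumed available) fits PCA and EFA to the same standardized items. PCA summarizes total variance, so its weights absorb each item's unique variance; EFA models only the variance the items share. The two sets of numbers therefore diverge even though the input is identical.

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(1)
n = 1000
# One common factor plus item-specific noise (coefficients hypothetical)
f = rng.normal(size=n)
X = np.column_stack([0.7 * f + rng.normal(scale=s, size=n)
                     for s in (0.4, 0.6, 0.8, 1.0)])
X = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize the items

# PCA rescaled to the loading metric: summarizes *total* variance
pca = PCA(n_components=1).fit(X)
pca_load = pca.components_.ravel() * np.sqrt(pca.explained_variance_[0])

# EFA: models only the common variance shared by the items
efa_load = FactorAnalysis(n_components=1).fit(X).components_.ravel()

# PCA weights typically exceed the factor loadings because they absorb
# unique variance; data reduction and factor identification are
# different questions asked of the same correlation matrix.
print("PCA:", np.round(np.abs(pca_load), 2))
print("EFA:", np.round(np.abs(efa_load), 2))
```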

Further, in industrial–organizational psychology research, the exact nature of some constructs is more complex than simply being reflective or formative. One such construct is job performance, which was also used as an example in the focal article. Before discussing this construct in detail, however, it is worth pointing out that among the three sources the authors used to support the existence of a DGF in performance ratings, Lance, Teachout, and Donnelly (1992) actually found that “performance is not a unitary construct” (p. 448). In their study, models that included a general performance factor fit significantly worse than the model with multiple performance dimensions. In another cited study (Carretta, Perry, & Ree, 1996), PCA was conducted on supervisor and peer ratings, and the researchers identified a component (i.e., a weighted linear composite of ratings from different sources) that explained 92.5% of the variance. The primary objective of PCA is data reduction, which is different from the identification of latent constructs (Fabrigar et al., 1999). Citing this study to argue for the existence of general factors in job performance ratings could itself be considered an incorrectly specified measurement model.

Nonetheless, Viswesvaran, Schmidt, and Ones (2005), in their comprehensive meta-analysis of job performance ratings, found that a general factor accounted for more than half of the variance independent of halo and other measurement errors. The authors attributed the general factor to organizational citizenship behaviors (OCBs) clouding performance ratings on all dimensions and to basic individual-difference variables such as conscientiousness. As such, the antecedents of the general factor among job performance dimensions involve at least both personal attributes (e.g., conscientiousness) and situational variables (e.g., raters influenced by ratees’ citizenship behaviors; one can further argue this is a person–situation interaction). It thus suffices to say that the DGF in job performance ratings is more complex than those in the ability and personality domains, which mainly involve individual characteristics (assuming that methodological artifacts are already controlled for, as in Viswesvaran et al.’s study). This is probably one reason that Viswesvaran and colleagues called for theory development in job performance research to address this general factor. It therefore seems too early to discuss the implications of DGFs before researchers gain a solid theoretical understanding of DGFs in job performance research.

In fact, Edwards (2001) classified job performance as an aggregate construct, which is produced when all of its dimensions are added together (Law, Wong, & Mobley, 1998; Murphy & Shiarella, 1997). This is congruent with Campbell, McCloy, Oppler, and Sager’s (1993) emphasis on the distinct dimensions of job performance. As Viswesvaran and colleagues (2005) aptly pointed out, this approach does not run counter to the general factor in job performance ratings.² Either approach (focusing on the general factor vs. specific factors) has its value (for a detailed discussion and examples, see Viswesvaran et al., 2005). Furthermore, it is equally important to correctly model the relationships among performance dimensions even in the absence of a general factor. For example, Bergeron, Shipp, Rosen, and Furst (2013) found that time spent on OCB could be negatively related to task performance, given the limited time an individual can invest in different types of work behaviors, and Wallace and Chen (2006) found safety performance, a specific aspect of work behavior, to be negatively related to production performance. As such, both the existence of DGFs and the lack thereof may have complex theoretical explanations; indeed, a negative relationship between dimensions can rule out a DGF altogether, as the sketch below illustrates. Simply chasing DGFs in performance ratings without any consideration of their theoretical underpinnings runs the risk of missing the whole picture.
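
The following minimal simulation, with my own hypothetical numbers in the spirit of Bergeron et al.’s time-budget argument, shows why. When hours given to OCB come out of task work, the largest component contrasts the two dimensions rather than running through both, so criterion (b) for a DGF fails by construction.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
# Hypothetical time budget: OCB hours are taken away from task work
workday = rng.normal(8.0, 1.0, size=n)
ocb_hours = rng.uniform(0.0, 2.0, size=n)
task_perf = (workday - ocb_hours) + rng.normal(scale=0.5, size=n)
ocb_perf = ocb_hours + rng.normal(scale=0.5, size=n)

ratings = np.column_stack([task_perf, ocb_perf])
corr = np.corrcoef(ratings, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)

# The dominant component has mixed-sign weights (roughly +.71 and -.71):
# it is a contrast between the dimensions, not a general factor that
# influences every variable, so no DGF can be extracted here.
print(np.round(corr, 2))
print(np.round(eigvecs[:, -1], 2))
```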

The discussion thus far is not intended to nullify the value of DGFs but rather to highlight the importance of theory as a prerequisite for them. Without a priori theoretical understanding, a DGF can at best be interpreted as a general factor among the variables, its substantive meaning unknown. Post hoc explanations are possible but not as helpful. The delicacy of this issue is further exemplified in the case of job performance. Again, the central thesis is that theory should be an important factor in determining the meaningfulness and usefulness of DGFs (assuming methodological artifacts are already accounted for). Although I have repeatedly referred to theory as the crux of the matter, it is by no means an easy task to have a clearly spelled-out theory for every construct. However, this should not be taken as an excuse to shy away from theory. If we abandon theory, we might get lost chasing DGFs from random sets of variables in a purely data-driven manner.

To conclude this commentary, I would like to use the same example from the focal article. In validating the construct of core self-evaluations (CSE), Judge, Locke, and Durham (1997) went to great lengths to find theoretical guidance from clinical and social psychology (e.g., Packer, 1985a, 1985b) so that they could establish the theoretical legitimacy of CSE. This attention to theory prior to the identification of a DGF (in this case, CSE) is equally important as, if not more important than, the quantitative validation that was cited (e.g., Judge, Erez, Bono, & Thoresen, 2003). Although discovering a DGF is tempting, it will be nothing more than an ambiguous general factor unless we have a good theoretical background from which to derive its substantive meaning.

Footnotes

¹ Although the authors identified some methodological/statistical reasons for DGFs, I choose to focus on the importance of theory in this commentary because a DGF due solely to methodological/statistical reasons is best considered an artifact. Without any substantive meaning, it is difficult to envision a situation in which such a DGF could have any theoretical or practical implications. Furthermore, attributing DGFs to reliance on coefficient alpha might be misleading, given that alpha does not speak to the dimensionality of a construct (Cortina, 1993). The possibility that common method variance can produce a DGF might also be overstated, although it may inflate relationships between study variables (Spector, 2006).
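
Cortina’s point is easy to verify numerically. The sketch below, with hypothetical correlations, computes standardized alpha for a six-item scale built from two distinct three-item factors; alpha comes out near .86 even though the scale is plainly two-dimensional.

```python
import numpy as np

# Hypothetical 6-item scale: two separable 3-item factors, with items
# correlating .8 within a factor but only .3 across factors.
k = 6
corr = np.full((k, k), 0.3)
corr[:3, :3] = 0.8
corr[3:, 3:] = 0.8
np.fill_diagonal(corr, 1.0)

# Standardized coefficient alpha from the correlation matrix
alpha = (k / (k - 1)) * (1 - np.trace(corr) / corr.sum())
print(f"alpha = {alpha:.2f}")                  # ~.86 despite two factors

# Two eigenvalues well above 1 betray the two-factor structure that a
# high alpha alone would never reveal.
print(np.round(np.linalg.eigvalsh(corr)[::-1], 2))
```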

² However, according to Edwards (2001), job performance should be modeled as an aggregate construct with indicators jointly determining the latent construct (i.e., the arrows run from the indicators to the construct). This appears contrary to the confirmatory factor analysis model in Viswesvaran et al. (2005). These different approaches to performance measurement further illustrate the theoretical complexity (and controversy) of job performance as a construct.

References

Bergeron, D. M., Shipp, A. J., Rosen, B., & Furst, S. A. (2013). Organizational citizenship behavior and career outcomes: The cost of being a good citizen. Journal of Management, 39, 958–984.
Campbell, J. P., McCloy, R. A., Oppler, S. H., & Sager, C. E. (1993). A theory of performance. In N. Schmitt & W. C. Borman (Eds.), Personnel selection in organizations (pp. 35–70). San Francisco, CA: Jossey-Bass.
Carretta, T. R., Perry, D. C., Jr., & Ree, M. J. (1996). Prediction of situational awareness in F-15 pilots. The International Journal of Aviation Psychology, 6, 21–41.
Cortina, J. M. (1993). What is coefficient alpha? An examination of theory and applications. Journal of Applied Psychology, 78, 98–104.
Edwards, J. R. (2001). Multidimensional constructs in organizational behavior research: An integrative analytical framework. Organizational Research Methods, 4, 144–192.
Edwards, J. R., & Bagozzi, R. P. (2000). On the nature and direction of relationships between constructs and measures. Psychological Methods, 5, 155–174.
Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4, 272–299.
Hinkin, T. R. (1998). A brief tutorial on the development of measures for use in survey questionnaires. Organizational Research Methods, 1, 104–121.
Judge, T. A., Erez, A., Bono, J. E., & Thoresen, C. J. (2003). The core self-evaluations scale: Development of a measure. Personnel Psychology, 56, 303–331.
Judge, T. A., Locke, E. A., & Durham, C. C. (1997). The dispositional causes of job satisfaction: A core evaluations approach. Research in Organizational Behavior, 19, 151–188.
Lance, C. E., Teachout, M. S., & Donnelly, T. M. (1992). Specification of the criterion construct space: An application of hierarchical confirmatory factor analysis. Journal of Applied Psychology, 77, 437–452.
Law, K. S., Wong, C. S., & Mobley, W. H. (1998). Toward a taxonomy of multidimensional constructs. Academy of Management Review, 23, 741–755.
Murphy, K. R., & Shiarella, A. H. (1997). Implications of the multidimensional nature of job performance for the validity of selection tests: Multivariate frameworks for studying test validity. Personnel Psychology, 50, 823–854.
Packer, E. (1985a). Understanding the subconscious (I). The Objectivist Forum, 6(1), 1–10.
Packer, E. (1985b). Understanding the subconscious (II). The Objectivist Forum, 6(2), 8–15.
Ree, M. J., Carretta, T. R., & Teachout, M. S. (2015). Pervasiveness of dominant general factors in organizational measurement. Industrial and Organizational Psychology: Perspectives on Science and Practice, 8, 409–427.
Spector, P. E. (2006). Method variance in organizational research: Truth or urban legend? Organizational Research Methods, 9, 221–232.
Viswesvaran, C., Schmidt, F. L., & Ones, D. S. (2005). Is there a general factor in ratings of job performance? A meta-analytic framework for disentangling substantive and error influences. Journal of Applied Psychology, 90, 108–131.
Wallace, C., & Chen, G. (2006). A multilevel integration of personality, climate, self-regulation, and performance. Personnel Psychology, 59, 529–557.