
All General Factors Are Not Alike

Published online by Cambridge University Press: 02 October 2015

John P. Campbell
Department of Psychology, University of Minnesota

Correspondence concerning this article should be addressed to John P. Campbell, Department of Psychology, University of Minnesota, 75 East River Road, Minneapolis, MN 55455. E-mail: campb006@umn.edu

Copyright © Society for Industrial and Organizational Psychology 2015

In their focal article, Ree, Carretta, and Teachout (2015) argue that a large dominant general factor (DGF), defined as the first component of an unrotated principal components solution, is characteristic of many different domains. In their view, ignoring the DGF in assessment and prediction in industrial and organizational (I-O) psychology is counterproductive. They readily acknowledge that the existence of a DGF does not preclude the existence of distinguishable specific factors. Their message is simply that the general factor (unrotated) frequently accounts for over half the reliable variance, and rather than ignore it, the reasons for it and the usefulness of it should be investigated. Further, the general factor is a construct, and all constructs must be supported by the various kinds of evidence that demonstrate construct validity. The DGF is no exception.
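To make the operationalization concrete, the following is a minimal sketch (mine, not Ree et al.'s; the correlation matrix is fabricated purely for illustration) of extracting the first unrotated principal component from a correlation matrix and computing the proportion of variance it accounts for, which is the sense in which a general factor is "dominant."

```python
import numpy as np

# Hypothetical intercorrelations among five observed measures.
R = np.array([
    [1.00, 0.55, 0.50, 0.45, 0.40],
    [0.55, 1.00, 0.50, 0.45, 0.40],
    [0.50, 0.50, 1.00, 0.45, 0.40],
    [0.45, 0.45, 0.45, 1.00, 0.40],
    [0.40, 0.40, 0.40, 0.40, 1.00],
])

# For a correlation matrix, the unrotated principal components are its
# eigenvectors, and each eigenvalue is the variance its component accounts for.
eigvals, eigvecs = np.linalg.eigh(R)
first = eigvals.argmax()
loadings = eigvecs[:, first] * np.sqrt(eigvals[first])
if loadings.sum() < 0:        # an eigenvector's sign is arbitrary; flip for readability
    loadings = -loadings

share = eigvals[first] / eigvals.sum()  # the trace of R equals the number of measures
print("First-component loadings:", np.round(loadings, 2))
print(f"Variance accounted for: {share:.0%}")  # over half would make it "dominant"
```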

I think virtually everyone would agree with the statements made in the above paragraph. The difficulties arise because all general factors are not alike, and assessment of the specific components can be done for various measurement purposes. For example, Borsboom, Mellenbergh, and van Heerden (2003) and others (e.g., Diamantopoulos, Riefler, & Roth, 2008; Edwards, 2001) make a distinction between two kinds of general factors. The DGF of the first kind is intended to represent an actual latent variable that determines (i.e., "causes") individual differences on any number of observed measures. This kind of DGF can be modeled with a bifactor model (e.g., see Wiernik, Wilmot, & Kostal, 2015) that may include latent specific factors that also play a causal role in determining individual differences on observed measures.

The DGF of the second kind posits that individual differences on the general factor are caused by specific latent factors that produce additive effects. With this model, the DGF is simply a sum score of individual components and does not represent a single latent variable. If individual differences on several specific measures are produced, at least in part, by a similar set of specific latent variables, a principal components analysis of their intercorrelations will produce a general factor of some magnitude, but the general factor itself is not a latent variable. Borsboom et al. (2003) refer to these two kinds of general factors as a true latent variable versus a sum score. Edwards (2001) uses the terms superordinate factors versus aggregate factors. Diamantopoulos et al. (2008) refer to them as reflective and formative factors.
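The distinction matters because the first unrotated component by itself cannot tell the two kinds apart. The simulation sketch below (all loadings and sample sizes are hypothetical) generates data under a reflective model, in which a true latent general factor causes every measure, and under a formative model, in which only overlapping specific latents exist; both data sets yield a large first unrotated component.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # simulated respondents

def first_pc_share(X):
    """Proportion of variance accounted for by the first unrotated component."""
    eig = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
    return eig.max() / eig.sum()

# Kind 1 (reflective): a single latent g causes all six observed measures.
g = rng.standard_normal(n)
X_reflective = np.column_stack(
    [0.7 * g + 0.7 * rng.standard_normal(n) for _ in range(6)]
)

# Kind 2 (formative): no latent g exists. Each measure is driven by an
# overlapping subset of three independent specific latents; any "general
# factor" here is just their additive pooling.
s = rng.standard_normal((n, 3))
loadings = np.array([   # hypothetical overlapping loading pattern
    [0.6, 0.4, 0.0],
    [0.5, 0.5, 0.0],
    [0.0, 0.6, 0.4],
    [0.4, 0.0, 0.5],
    [0.5, 0.0, 0.4],
    [0.0, 0.5, 0.5],
])
X_formative = s @ loadings.T + 0.6 * rng.standard_normal((n, 6))

print(f"Reflective model, first component share: {first_pc_share(X_reflective):.0%}")
print(f"Formative model, first component share:  {first_pc_share(X_formative):.0%}")
```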

General Factors of the First Kind

Several of the examples described in the focal article do seem to represent a general factor of the first kind. For example, although its specification is not made explicit by Ree et al., general mental ability (GMA) most likely constitutes a single latent variable that has several distinguishable subfactors (Johnson, Nijenhuis, & Bouchard, 2008). That is, it fits a bifactor model, and the general factor is indeed large. The focal article implies that the DGF for mental ability is not verbal ability, quantitative ability, or spatial ability. The implication is that it is something else that explains most of the variance in all three. What then is it? If the DGF is modeled as a single latent variable, then it is necessary to produce a substantive specification for what the general factor is. Without any substantive specification, DGF construct validation is difficult. Much of mainstream intelligence research characterizes the DGF as a general capacity to reason and comprehend novel complex ideas, regardless of their specific content (Gottfredson, 1997). A summary of the existing explanations for GMA as a factor of the first kind is given by Reeve, Scherbaum, and Goldstein (2015), who also make a strong case that the specific latent variables, controlling for GMA, can be very useful in many different prediction situations. In their judgment, Ree et al. take too narrow a view of the criterion space, and they argue that sometimes there are important differential predictions that reflect more than g. A reasonable expectation is that future research will produce a neuroscience-based characterization of the latent variable(s).

Job attitudes in general, and job satisfaction in particular, could be modeled either as a single, general latent variable plus specific factors corresponding to satisfaction with compensation, supervision, coworkers, the work itself, and so forth or as a set of specific factors that sum to an overall satisfaction score (Judge, Hulin, & Dalal, 2012). Currently, the former seems more likely (Dalal & Credé, 2013), and the general factor can be defined simply as how positive or negative you feel about your "job." This does not preclude the existence of specific latent factors as well (see Wiernik et al., 2015).

It might also be the case that a general latent variable (the "Big One") exists in descriptions of personality. However, this example has certain complexities. Currently, the definitive confirmatory factor analysis work, based on the most recent and most comprehensive meta-analyses, is reported by Davies, Connelly, Ones, and Birkland (2015). When method variance (in this case attributable to the specific inventory used) is at least partially controlled by comparing DGFs obtained from factoring the Big Five intercorrelations within specific inventories with DGFs obtained from factoring intercorrelations computed between inventories, the variance accounted for by the general factor drops from approximately 50% to approximately 25%. That is, when controlling for specific inventory (i.e., method) effects, the general factor is no longer dominant. Substantively, the resultant general factor seems to represent an individual's general self-evaluation of whether they are a "good" person. These general evaluations appear to be specific to each rater. The general factors obtained from self-ratings and from other ratings (i.e., from a single observer) of the same items are essentially uncorrelated, and the variance accounted for by the observer general factor is smaller. If item intercorrelations are based on different observers, the general factor essentially disappears (Chang, Connelly, & Geeza, 2012). This is consistent with saying that the general factor of personality is of the first kind and represents an individual's overall self-evaluation. Interestingly, observer ratings are somewhat more predictive of external variables than are self-ratings (Connelly & Ones, 2010).
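The logic of the within- versus between-inventory comparison can be illustrated with a small simulation. In the sketch below, all effect sizes are fabricated, and treating the symmetrized between-inventory block as a correlation matrix is a deliberately crude device; this is not the Davies et al. procedure, only its core idea: a method factor shared by all scales within an inventory inflates the within-inventory general factor, whereas correlations computed across inventories exclude the shared method component.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 100_000, 5  # respondents, Big Five traits

# True traits with a mild amount of genuine overlap.
traits = rng.standard_normal((n, k))
traits = traits + 0.3 * traits.mean(axis=1, keepdims=True)

def inventory():
    """Scale scores from one inventory: trait + shared method factor + error."""
    method = rng.standard_normal((n, 1))  # one method factor per inventory
    return 0.7 * traits + 0.5 * method + 0.5 * rng.standard_normal((n, k))

A, B = inventory(), inventory()  # two inventories, separate method factors

def first_pc_share(R):
    eig = np.linalg.eigvalsh(R)
    return eig.max() / eig.sum()

R_within = np.corrcoef(A, rowvar=False)  # scales within inventory A

# Between-inventory block: trait i from inventory A with trait j from B.
R_cross = np.corrcoef(np.hstack([A, B]), rowvar=False)[:k, k:]
R_between = (R_cross + R_cross.T) / 2    # symmetrize the cross block
np.fill_diagonal(R_between, 1.0)         # crude device: treat it as a correlation matrix

print(f"Within-inventory first component share:  {first_pc_share(R_within):.0%}")
print(f"Between-inventory first component share: {first_pc_share(R_between):.0%}")
```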

The General Factor in Job Performance Assessment

Ree et al. seem to have misconstrued the general factor in job performance measures as a general factor of the first kind, which it is not. Certainly, factor analyses of performance data will usually yield a general factor, particularly when supervisor or peer ratings are the measurement method. However, no one, including Ree et al., has produced a substantive content specification for the general factor. That is, if general job performance is a latent variable, what are the content specifications for this construct?

Job performance is not a trait. It is a "state" that has been defined by virtually everyone (see Campbell & Wiernik, 2015) as things that people do at work for the purpose of advancing the organization's goals. Performance itself (i.e., what we actually do in a work role, with varying levels of proficiency) should be distinguished from the determinants of performance (e.g., abilities, skills, motivation) and from the outcomes of performance (e.g., sales) if the outcome is substantially influenced by other factors. The things we do, in the name of performance, have been sorted into categories of similar content (e.g., technical tasks, peer leadership) by using various methods and by developing actual measures of many different performance facets (see Campbell & Knapp, 2001). Identifying the most meaningful categories of performance requirements is what modeling performance is all about. Our own best effort to synthesize all previous categorizations is recounted in Campbell and Wiernik (2015), who deal with modeling performance content, performance dynamics, performance assessment, performance goals, and performance adaptability, among other things. Many of these same issues are discussed in Campbell (2012) and Campbell (2013).

If general job performance is a latent variable and should be utilized as such, but no specifications for it exist, how then would the following issues be addressed?

  • How would training on the DGF be designed? What would be the substantive training goals? What would be the training content?

  • How would performance problems be diagnosed? Saying, “your DGF needs improvement,” when we can't specify what the DGF is, is counterproductive.

  • How could people be coached on how to improve their DGF?

  • What should be done with technically proficient scientists and engineers who become dysfunctional “leaders”?

  • What would be the substantive content of performance feedback?

  • What would be the procedure for identifying the performance requirements for jobs? That is, what would a job analysis designed to specify the DGF for a job's performance requirements look like? The focal article criticizes the Fleishman, Quaintance, and Broedling (1994) taxonomy for not dealing with a general factor. However, the Fleishman et al. taxonomy does not deal with performance itself. It deals with the knowledge, skill, and ability (KSA) determinants of performance. For many domains of KSAs, a general factor of the first kind may indeed be operative and may indeed have neurological or physiological substrates. Unfortunately, the I-O psychology literature is replete with confusions between performance itself and both its determinants and outcomes. For further discussion of this issue, see Campbell (2013).

Again, there is usually a large general factor when correlations among a set of performance measures are analyzed. However, given the way performance must be defined, it is simply not possible to specify the DGF as a general factor of the first kind. The question of what the specifications for the underlying latent variable are cannot be answered in any sensible way. It is a general factor of the second kind and must be dealt with as such. Overall performance (i.e., the general factor) can only be defined as the sum (weighted in some fashion) of performances on a specified number of facets or components. Consequently, the authors of the focal article took my assertion that there is no general job performance factor out of context and put it in a different (incorrect) context. There are also a number of incorrect citations in the focal article. For example, the Project A data produced five factors for first-term enlisted performance, not eight. Please see Campbell and Knapp (2001) and Campbell (2012) for a full account.
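To make the second-kind interpretation concrete, the sketch below defines overall performance as nothing more than an explicit weighted sum of facet scores. The facet names and weights are hypothetical; they are not taken from Project A or from the focal article.

```python
from typing import Dict

# Hypothetical facet weights, set (for example) by stakeholder judgment about
# the purpose of the personnel decision; they are defined to sum to 1 here.
WEIGHTS: Dict[str, float] = {
    "technical_performance": 0.40,
    "peer_leadership": 0.20,
    "effort_and_initiative": 0.20,
    "personal_discipline": 0.20,
}

def overall_performance(facet_scores: Dict[str, float]) -> float:
    """Aggregate standardized facet scores into one composite.

    The composite is a formative sum score: it is *defined* by the weights,
    not discovered as a latent variable underlying the facets.
    """
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(WEIGHTS[f] * facet_scores[f] for f in WEIGHTS)

# Example: standardized (z-score) facet ratings for one incumbent.
print(overall_performance({
    "technical_performance": 1.2,
    "peer_leadership": -0.3,
    "effort_and_initiative": 0.5,
    "personal_discipline": 0.0,
}))
```

Making the weights explicit, rather than leaving them to a rater's implicit theory, is precisely the composite-criterion issue addressed by Schmidt and Kaplan (1971), discussed below.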

If the DGF is a general factor of the second kind, what produces it? Some of the likely reasons, several of which the focal article discusses, are the following.

  • The individual performance measures that are intended to capture individual differences on different categories (i.e., factors) of performance requirements (e.g., technical performance, peer leadership performance) could have common determinants. The obvious suspects are GMA, certain personality factors, and the characteristic level of effort an individual usually invests. Such a set of common determinants could produce correlations between performance factors that have very different content but that yield a general factor of the second kind (see the simulation sketch following this list).

  • Common method variance in general (e.g., the use of the rating method) and the special case of using the same rater to estimate performance levels on each performance factor can produce substantial intercorrelations among variables.

  • Various measurement biases are potential artifacts that can inflate the intercorrelations among criterion variables: halo, leniency, overgeneralization of negative information from one dimension to other dimensions, the rater's relative liking of the ratee, impression management by the ratees, and an implicit performance model held by the rater that may not correspond to the model used to construct the rating scales.

  • Use of outcome measures as performance criteria, individual differences on which may be due to factors not under the individual's control (e.g., amount sold or even ratings of productivity or work quality), could produce spurious intercorrelations of various kinds. Nowhere is this more tragic than in the use of "value added" models of changes in students' achievement test scores to assess the performance of K–12 teachers (see Haertel, 2013).
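As promised in the first bullet above, a small simulation can make the common-determinants argument concrete. In the sketch below (every effect size is fabricated), four performance facets with entirely different content are each partly driven by the same common determinants, and a sizable general factor emerges even though no latent "general performance" variable was built into the data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Independent common determinants of performance.
gma = rng.standard_normal(n)
conscientiousness = rng.standard_normal(n)
effort = rng.standard_normal(n)

# Each facet = a different mix of the common determinants + a facet-specific part.
technical = 0.5 * gma + 0.2 * conscientiousness + 0.3 * effort + 0.7 * rng.standard_normal(n)
peer_leadership = 0.2 * gma + 0.4 * conscientiousness + 0.3 * effort + 0.8 * rng.standard_normal(n)
discipline = 0.1 * gma + 0.5 * conscientiousness + 0.2 * effort + 0.8 * rng.standard_normal(n)
communication = 0.4 * gma + 0.2 * conscientiousness + 0.3 * effort + 0.8 * rng.standard_normal(n)

X = np.column_stack([technical, peer_leadership, discipline, communication])
R = np.corrcoef(X, rowvar=False)
eig = np.linalg.eigvalsh(R)

print(np.round(R, 2))  # positive intercorrelations from shared determinants alone
print(f"First unrotated component accounts for {eig.max() / eig.sum():.0%} of the variance")
```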

Considerations such as the above can, and certainly do, produce a general factor. However, virtually by definition, an overall performance measure must be a sum or aggregate score (weighted or unweighted) of a set of individual measures of performance itself, not the determinants of performance. The aggregation can be done explicitly or left up to the implicit theory of the rater or judge. Aggregate scores, controlled for method variance and measurement artifacts, are in fact needed for many different kinds of personnel decisions (Schmidt & Kaplan, 1971), but this does not mean they reflect a substantive latent factor.

In general, I-O psychology has neglected research on the specification and measurement of our dependent variables (Campbell, 2013). It is our collective shortcoming. Ree et al. seem to share this view. Remedying this deficiency is our collective responsibility, particularly with regard to performance itself. The word "performance" is probably misused even more than the word "leadership," and the body politic is the worse for it. We simply need to know much more about what performance is, how we actually make performance judgments, and how performance can best be assessed.

References

Borsboom, D., Mellenbergh, G. J., & van Heerden, J. (2003). The theoretical status of latent variables. Psychological Review, 110, 203–219. http://doi.org/10.1037/0033-295X.110.2.203
Campbell, J. P. (2012). Behavior, performance, and effectiveness in the twenty-first century. In Kozlowski, S. W. J. (Ed.), The Oxford handbook of organizational psychology (Vol. 1, pp. 159–196). New York, NY: Oxford University Press. http://doi.org/10.1093/oxfordhb/9780199928309.013.0006
Campbell, J. P. (2013). Assessment in industrial and organizational psychology: An overview. In Geisinger, K. F., Bracken, B. A., Carlson, J. F., Hansen, J.-I. C., Kuncel, N. R., Reise, S. P., & Rodriguez, M. C. (Eds.), APA handbook of testing and assessment in psychology: Vol. 1. Test theory and testing and assessment in industrial and organizational psychology (pp. 355–395). Washington, DC: American Psychological Association. http://doi.org/10.1037/14047-022
Campbell, J. P., & Knapp, D. J. (Eds.). (2001). Exploring the limits in personnel selection and classification. Mahwah, NJ: Erlbaum.
Campbell, J. P., & Wiernik, B. M. (2015). The modeling and assessment of work performance. Annual Review of Organizational Psychology and Organizational Behavior. Advance online publication. http://doi.org/10.1146/annurev-orgpsych-032414-111427
Chang, L., Connelly, B. S., & Geeza, A. A. (2012). Separating method factors and higher order traits of the Big Five: A meta-analytic multitrait-multimethod approach. Journal of Personality and Social Psychology, 102, 408–426. http://doi.org/10.1037/a0025559
Connelly, B. S., & Ones, D. S. (2010). An other perspective on personality: Meta-analytic integration of observers' accuracy and predictive validity. Psychological Bulletin, 136, 1092–1122. http://doi.org/10.1037/a0021212
Dalal, R. S., & Credé, M. (2013). Job satisfaction and other job attitudes. In Geisinger, K. F., Bracken, B. A., Carlson, J. F., Hansen, J.-I. C., Kuncel, N. R., Reise, S. P., & Rodriguez, M. C. (Eds.), APA handbook of testing and assessment in psychology: Vol. 1. Test theory and testing and assessment in industrial and organizational psychology (pp. 355–395). Washington, DC: American Psychological Association. http://doi.org/10.1037/14047-037
Davies, S. E., Connelly, B. S., Ones, D. S., & Birkland, A. S. (2015). The general factor of personality: The "big one," a self-evaluative trait, or a methodological gnat that won't go away? Personality and Individual Differences. Advance online publication. http://doi.org/10.1016/j.paid.2015.01.006
Diamantopoulos, A., Riefler, P., & Roth, K. P. (2008). Advancing formative measurement models. Journal of Business Research, 61, 1203–1218. http://doi.org/10.1016/j.jbusres.2008.01.009
Edwards, J. R. (2001). Multidimensional constructs in organizational behavior research: An integrative analytical framework. Organizational Research Methods, 4, 144–192. http://doi.org/10.1177/109442810142004
Fleishman, E. A., Quaintance, M. K., & Broedling, L. A. (1994). Taxonomies of human performance: The description of human tasks. Orlando, FL: Academic Press.
Gottfredson, L. S. (1997). Mainstream science on intelligence: An editorial with 52 signatories, history, and bibliography. Intelligence, 24, 13–23. http://doi.org/10.1016/S0160-2896(97)90011-8
Haertel, E. H. (2013). Reliability and validity of inferences about teachers based on student test scores. Princeton, NJ: Educational Testing Service. Retrieved from https://www.ets.org/Media/Research/pdf/PICANG14.pdf
Johnson, W., Nijenhuis, J., & Bouchard, T. J., Jr. (2008). Still just 1 g: Consistent results from five test batteries. Intelligence, 36, 81–95. http://doi.org/10.1016/j.intell.2007.06.001
Judge, T. A., Hulin, C. L., & Dalal, R. S. (2012). Job satisfaction and job affect. In Kozlowski, S. W. J. (Ed.), The Oxford handbook of organizational psychology (Vol. 1, pp. 496–525). New York, NY: Oxford University Press. http://doi.org/10.1093/oxfordhb/9780199928309.013.0015
Ree, M. J., Carretta, T. R., & Teachout, M. S. (2015). Pervasiveness of dominant general factors in organizational measurement. Industrial and Organizational Psychology: Perspectives on Science and Practice, 8(3), 409–427.
Reeve, C. L., Scherbaum, C., & Goldstein, H. (2015). Manifestations of intelligence: Expanding the measurement space to reconsider specific cognitive abilities. Human Resource Management Review, 25, 28–37.
Schmidt, F. L., & Kaplan, L. B. (1971). Composite vs. multiple criteria: A review and resolution of the controversy. Personnel Psychology, 24, 419–434. http://doi.org/10.1111/j.1744-6570.1971.tb00365.x
Wiernik, B. M., Wilmot, M. P., & Kostal, J. W. (2015). How data analysis can dominate interpretations of dominant general factors. Industrial and Organizational Psychology: Perspectives on Science and Practice, 8(3).