
There Are More Things in Heaven and Earth, Horatio, Than DGF

Published online by Cambridge University Press:  02 October 2015

Paul J. Hanges*
Affiliation:
Department of Psychology, University of Maryland
Charles A. Scherbaum
Affiliation:
Department of Psychology, Baruch College, City University of New York
Charlie L. Reeve
Affiliation:
Department of Psychology, University of North Carolina at Charlotte
*Correspondence concerning this article should be addressed to Paul J. Hanges, Department of Psychology, University of Maryland, College Park, MD 20742. E-mail: phanges@umd.edu

Type: Commentaries
Copyright © Society for Industrial and Organizational Psychology 2015

In their article, Ree, Carretta, and Teachout (2015) argued that a dominant general factor (DGF) is present in most, if not all, psychological measures (e.g., personality, leadership, attitudes, skills). A DGF, according to Ree et al., is identified by two characteristics. First, the DGF accounts for the largest amount of a measure's systematic variance, and second, it influences every subdimension within the construct domain. They indicate that researchers ignore DGFs and pay inappropriate amounts of attention to the specific dimensions (DSs) even though the DGF provides most of the predictive power and the DSs add little predictive power.

In this article, we raise three issues with Ree et al.'s arguments. First, we argue that the evidence for the DGF, at least in one content area (i.e., personality), is not clear-cut. Second, we show that even if there is a DGF in personality measures, the contribution of specific personality dimensions becomes the more dominant source in the measure once monomethod/mono-operational biases are accounted for in the data. Third, we argue that, in contrast to Ree et al., there is growing evidence that DSs in the very domain that originated the DGF debate, cognitive ability, are useful and have predictive power.

Universal DGF? The Case of Personality

Is there really a personality DGF? Although there are publications supporting this claim, this idea is still actively debated (e.g., Ashton, Lee, Goldberg, & de Vries, 2009; Hopwood, Wright, & Donnellan, 2011), and there is counterevidence against a personality DGF. It is important to recognize these counterarguments to prevent a false impression that a DGF in the personality literature is widely accepted.

Examples of counterevidence to a DGF include Donnellan, Hopwood, and Wright's (2012) attempt to replicate the Rushton and Irwing (2008) study that supported a personality DGF. Using a new sample, Donnellan et al. (2012) failed to find support for a DGF model. Although failing to replicate findings may be a sign of our times, it is more disturbing that Donnellan et al. could not replicate the published DGF model when they recreated Rushton and Irwing's variance/covariance matrix from the published correlation matrix combined with the variables' standard deviations. Donnellan et al. report that they could recapture the DGF model's published degrees of freedom only by adding constraints not reported in the original article. Surprisingly, even with these constraints, the DGF model would not converge with Rushton and Irwing's variance/covariance matrix or with their own sample.

Rather than a DGF, many studies have found two higher order factors (e.g., DeYoung, Peterson, & Higgins, 2002; Digman, 1997). Agreeableness, Conscientiousness, and Emotional Stability loaded on one higher order factor called the alpha or stability metatrait; Extraversion and Openness to Experience loaded on another called the beta or plasticity metatrait (DeYoung et al., 2002; Digman, 1997). Using Ree et al.'s definitions, these metatrait factors are group factors, not DGFs, because each influences only some of the personality dimensions. The question now appears to be not whether the Big Five load onto a single DGF but whether the two aforementioned group factors are sufficiently correlated to justify a third-level DGF. The jury is still out because the estimated correlations between the two group factors have ranged from zero (DeYoung, 2006) to values between .18 and .48 (DeYoung et al., 2002; Musek, 2007). The smaller the correlation between the two group factors, the less likely it is that a DGF exists.

DeYoung (2006) found that smaller group factor correlations are obtained when the variance associated with different data sources (e.g., self-ratings, spouses, friends) is incorporated in the statistical analyses. Specifically, higher correlations between the alpha and beta group factors were obtained when he performed statistical analyses separately for each data source (i.e., one analysis using only self-ratings; another analysis using friend data). Interestingly, the two group factors were uncorrelated when he reanalyzed the data and incorporated all data sources explicitly in his analyses.

In summary, declaring that there is a DGF among personality dimensions is premature. This may also be true with the other domains covered in Ree et al.

Dominance of Specific Dimensions in Measures

Even if there is a personality DGF, the DSs can still be the dominant source of influence in personality measures. To illustrate how this could be true, we followed the psychometric and mathematical logic outlined by Kuncel and Sackett (2014) in their discussion of assessment center ratings. First, let us assume that we have a Big Five instrument and that we want to create a single personality score for each individual by adding the dimension scores together. This would be reasonable if one believed in a personality DGF.

As with any measurement instrument, there are three sources of variance that affect personality scores: (a) personality itself, (b) method bias, and (c) random error. The first two variance sources are systematic and can be split into a general and a specific variance portion. This means that we are assuming there is a general personality (i.e., DGF) variance source and a source of variance attributable to specific personality dimensions (DSs). Similarly, we assume that there is a general method variance portion (method general; MG) due to similar characteristics across all methods and a method specific (MS) variance portion that is unique to each method.

Following the findings of Rushton, Bons, and Hur (2008), we set the percentage of variance attributable to personality (both general and specific) at 58%. We used Mount, Barrick, Scullen, and Rounds's (2005) reliability-corrected correlations among the Big Five dimensions to estimate the average Big Five correlation at .29. According to psychometric theory (Ghiselli, Campbell, & Zedeck, 1981), the correlation between two measures represents the portion of shared variance between them. Thus, 29% of the .58 personality variance is attributable to a DGF, and the remainder (71%) is due to DSs. As shown in the first data row in Table 1, the DGF variance is estimated to be .17, and the DS variance is .41. The ratio of DGF variance to DS variance is 40.8%, which is in line with Rushton et al.'s (2009) finding that a DGF accounted for 37% of the personality source variance.
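The split just described is simple arithmetic; a minimal sketch reproducing it (using the rounded values quoted above) is:

```python
# Split the total personality variance (58%; Rushton, Bons, & Hur, 2008)
# into a general (DGF) and a specific (DS) portion, using the average
# reliability-corrected Big Five intercorrelation (.29) as the shared part.
total_personality_var = 0.58
avg_big5_r = 0.29

dgf_var = total_personality_var * avg_big5_r        # general portion
ds_var = total_personality_var * (1 - avg_big5_r)   # specific portion

print(round(dgf_var, 2))           # 0.17
print(round(ds_var, 2))            # 0.41
print(round(dgf_var / ds_var, 3))  # 0.408, i.e., the 40.8% ratio
```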

Table 1. Illustration of Consequences of Aggregating Over Multiple Data Sources

Note. DGF = dominant general factor; DS = specific dimension; MG = method general; MS = method specific.

We next computed the method variance by calculating the average method effect shown in Table 1 of Podsakoff, MacKenzie, and Podsakoff (2012). The average method variance was .25. Podsakoff et al. (2012) also report that the average correlation between methods is typically .47. Using these values, we estimated the percentages of variance attributable to general (MG) and specific (MS) method variance at .12 and .13, respectively. Finally, the error variance was estimated by subtracting the sum of the four systematic variance sources from 1. All of these variance estimates are shown in the first data row of Table 1. As seen in this first row, none of the variance sources fully dominates the measurement instrument. Kuncel and Sackett (2014) would say that the DS is moderately dominant in the personality measure.
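The remaining first-row entries follow the same logic; a short sketch (values taken from the text, with the personality portions carried over from the previous step) is:

```python
# Split the average method variance (.25; Podsakoff et al., 2012) into a
# general (MG) and a specific (MS) portion via the average between-method
# correlation (.47), then recover error variance as the residual.
dgf_var, ds_var = 0.17, 0.41       # personality portions from the text
total_method_var = 0.25
avg_method_r = 0.47

mg_var = total_method_var * avg_method_r        # shared across methods
ms_var = total_method_var * (1 - avg_method_r)  # unique to each method
error_var = 1 - (dgf_var + ds_var + mg_var + ms_var)

print(f"{mg_var:.4f}")     # 0.1175, reported as .12
print(f"{ms_var:.4f}")     # 0.1325, reported as .13
print(f"{error_var:.2f}")  # 0.17
```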

The subsequent rows in Table 1 indicate what happens as data from multiple sources are combined to yield our overall personality score. The second data row of Table 1 shows the portion of variance attributable to the DGF, DS, MG, MS, and error when information from two different sources (e.g., self-ratings, peer ratings) is combined. The portion of variance not shared by the informant sources (i.e., MS and Error) decreases as aggregation occurs. However, the portion of variance shared across informant sources (i.e., DGF, DS, MG) increases as aggregation occurs. The estimates shown in Table 1 are obtained by using the following equation:

(1)$$r_{F\sum_{1}^{k} x} = \frac{\sum r_{xF}}{\sqrt{k + k(k - 1)\,r_{xx}}}$$

In this equation, k is the number of informant sources being aggregated, and r_xF is the correlation between a particular source of variance and a single higher order latent factor. This correlation is estimated by taking the square root of the value in the first data row for that variance source. For example, the estimated correlation between the single latent factor and the portion of the average personality score attributable to a DGF is the square root of the DGF variance (.1682 before rounding to .17), or r_xF = .410. Equation 1's numerator when aggregating two informant sources is 2 times .410, or .820.

In the denominator of Equation 1, r_xx represents the correlation between the overall personality scores from the two informant sources. This correlation is estimated by adding the shared variance components (i.e., $\sigma^2_{DGF} + \sigma^2_{DS} + \sigma^2_{MG}$), or .70. Equation 1's denominator is therefore $\sqrt{2 + 2(1)(.70)}$, or approximately 1.84. The DGF entry in the second data row of Table 1 is .820 divided by 1.84, or .44. This estimate, .44, is the correlation between the total personality score aggregated over two informant sources and a single higher order latent factor. Squaring this correlation yields the variance estimate of .198, which rounds to .20 for the DGF. This logic is repeated for all shared systematic variance source entries. The computation for the nonshared sources of variance is specified in Kuncel and Sackett (2014).
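Equation 1 can be applied mechanically to each shared variance source; a minimal sketch (first-row variance values and the shared-variance estimate r_xx taken from the text) is:

```python
import math

# First-data-row variance portions shared across informant sources.
shared = {"DGF": 0.17, "DS": 0.41, "MG": 0.12}
r_xx = sum(shared.values())  # = .70, correlation between informants' totals

def aggregated_variance(v, k, r_xx):
    """Variance portion of a shared source after aggregating k informant
    sources, i.e., Equation 1 squared: (k*sqrt(v))^2 / (k + k*(k-1)*r_xx)."""
    r = k * math.sqrt(v) / math.sqrt(k + k * (k - 1) * r_xx)
    return r ** 2

# With two informant sources, the DGF portion grows from .17 to .20
# and the DS portion from .41 to .48, matching the second data row.
print(round(aggregated_variance(0.17, 2, r_xx), 2))  # 0.2
print(round(aggregated_variance(0.41, 2, r_xx), 2))  # 0.48
```

The nonshared components (MS and error) shrink under aggregation and are computed separately, following Kuncel and Sackett (2014).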

As shown in Table 1, the systematic sources of variance increase as more informant sources are combined. Table 1 shows that there is a DGF and that its total variance portion increases as more informant sources are combined. However, the DS has a much larger portion of variance, and after aggregating over three informant sources, it is the dominant source in the personality total score. Thus, specific personality dimensions play a bigger role in a measure, provided that information from multiple sources is collected and aggregated.

Do Specific Dimensions Add Value Over DGFs?

The final component of Ree et al.'s argument is that DGFs are really all that matter and the pursuit of DSs is futile. As they note, "analyses that fail to seek DGFs support the myth of the importance of narrow constructs compared with more general constructs" (p. 17). As has been argued elsewhere (e.g., Lievens & Reeve, 2012; Reeve, Scherbaum, & Goldstein, 2015), this position is not helpful to either science or practice aimed at understanding the manifestation of individual differences in the workplace. Indeed, this extreme DGF position is not consistent with recent empirical findings that focus on the interplay between general and specific factors. To demonstrate our point, we will switch to the literature domain in which the DGF versus DS debate originated: the cognitive ability literature.

In the cognitive ability literature, there is general agreement that the amount of variance accounted for by a DS is small in comparison with the variance accounted for by a general factor (GF; e.g., Gottfredson, 1997; Hunter, 1986; McHenry, Hough, Toquam, Hanson, & Ashworth, 1990; Ree & Earles, 1991; Ree, Earles, & Teachout, 1994; Youngstrom, Kogos, & Glutting, 1999). However, we believe that a critical evaluation of this research and its methodological issues (Reeve, 2004), along with a consideration of all relevant empirical evidence, casts doubt on Ree et al.'s claim that DSs do not matter.

As identified by Reeve (2004), there are several methodological issues in the previous research claiming that DSs do not matter. One of the primary issues is that the majority of the research examining this question used the observed test battery subscale scores as if they were construct-valid measures of DS constructs (e.g., Hunter, 1986). As Reeve (2004) argued, the variance in scores from subscales of ability tests that purportedly measure a DS is often confounded because multiple specific and general abilities influence each subscale. In other words, subscales are not necessarily construct-valid assessments of DSs because the DGF has influence in both general and specific measures. When using these measures of DSs to test the relative contribution of DSs and DGFs, one needs to remove the contribution of the DGF (see Gustafsson, 2002).

Although this point has been recognized previously in some studies, the methods used in these studies are still problematic. For example, Ree and Earles (1991) attempted to address the problem of confounded variance sources by using principal components to create atheoretical linear components. Ree and Earles noted that their obtained components do not necessarily reflect any specific ability construct, yet they drew conclusions about specific abilities as if they did. Thus, we question how useful the prior research is regarding the relative contribution of the DS and DGF.

Although not acknowledged by Ree et al., there is published research finding specific factors to be as important as, or even more important than, a DGF. For example, the research of Lang, Kersting, Hülsheger, and Lang (2010) finds that when one uses modern analytical techniques (e.g., relative weights analysis) specifically designed to address problems associated with overlapping predictors (i.e., overlap among general and specific ability dimensions as well as among specific ability dimensions, an issue raised by Ree et al.), the GF accounts for less variance, and the DSs are more useful than previously believed. Similarly, Wee, Newman, and Joseph (2014) find that DSs are valuable when modern analytical techniques are used. Studies such as Lang et al. and Wee et al. raise questions about the viability of extreme positions such as the claim that only DGFs matter. Such claims yield a false impression that this issue has been conclusively answered.

Moreover, the new cognitive ability literature raises questions about the appropriateness of focusing on competitive tests comparing the GF and DSs. For example, Lubinski and colleagues' (e.g., Park, Lubinski, & Benbow, 2008; Wai, 2013) research examining the role of specific abilities among high ability groups suggests that the GF and DSs work together and that both are necessary to understand intellectual behavior. In their longitudinal study of gifted adolescents, Park et al. analyzed the role that ability level and ability tilt play in professional accomplishments over a 25-year time span. Ability tilt refers to an asymmetry in DSs across different domains. Park et al. examined ability tilt between math and verbal ability; larger differences between math and verbal SAT scores are indicative of larger ability tilts. Park et al. found that the ability tilts identified at age 13 foreshadowed contrasting forms of professional accomplishment in middle age. Specifically, they showed that although the GF contributed to accomplishments, ability tilt was critical for predicting the domain in which these achievements occurred (e.g., securing a tenure-track position in the humanities vs. STEM fields; publishing a novel vs. securing a patent) over 25 years later. Lubinski and colleagues found similar results for spatial ability (Shea, Lubinski, & Benbow, 2001; Webb, Lubinski, & Benbow, 2007).

Theoretical and empirical work similar to that of Lubinski and colleagues strongly suggests that research seeking to paint a dichotomy between the GF and DSs is asking the wrong question. Recent cognitive ability models emphasize the interplay between general and specific abilities both within and between domains. Essentially, this new theoretical work emphasizes the constellation of interacting DSs that combine to explain individual differences in behavior. Snow's comprehensive theory of aptitude (Corno et al., 2002; Snow, 1987, 1992), Ackerman's (1996) intelligence-as-process, personality, interests, and intelligence-as-knowledge (PPIK) theory, and Chamorro-Premuzic and Furnham's (2005) emerging model of intellectual competence are exemplars of modern thinking emphasizing the collaborative (as opposed to competitive) nature of general and specific abilities. Even modern psychometric models of intelligence (e.g., the Cattell-Horn-Carroll model) emphasize the importance of both DGFs and DSs (Schneider & McGrew, 2012; Schneider & Newman, 2015). Finally, even though Ree et al. dismiss the work of van der Maas et al. (2006), these researchers reach a similar conclusion: Specific factors are important, and dichotomous questions contrasting the GF with DSs are simply not helpful for understanding human behavior and development.

Conclusions

We provided counterarguments against three major points raised in Ree et al.'s article. First, using the personality literature as an example case, we showed that the presence of a DGF is still actively debated and that the empirical evidence supporting a DGF is not as clean as Ree et al. implied. Second, using psychometric theory, we illustrated how the influence of a DS can grow until it is the dominant source of variance in a measure when multiple sources of data about a target individual are combined. This illustration is consistent with DeYoung's published findings demonstrating that support for a DGF is eliminated when data from multiple informant sources are combined in the statistical model used to test for a DGF. Finally, although debates about the relative value of broad versus specific factors are common in the organizational sciences and can be helpful for scientific progress (Judge & Kammeyer-Mueller, 2012), we argue, as have others (Hogan & Roberts, 1996; Judge & Kammeyer-Mueller, 2012; Reeve & Hakel, 2002), that extreme positions such as those expressed by Ree et al. are not scientifically justified, nor are they helpful for promoting science and practice. The false all-or-nothing dichotomies expressed in these debates are not productive and ask the wrong questions. Given that DSs are the focus of many workplace applications and processes, the extreme stance against DSs puts us further out of touch with the pressing needs of practitioners and may not help increase our understanding of the manifestations of individual differences in the workplace.

References

Ackerman, P. L. (1996). A theory of adult intellectual development: Process, personality, interests, and knowledge. Intelligence, 22(2), 227–257.
Ashton, M. C., Lee, K., Goldberg, L. R., & de Vries, R. E. (2009). Higher order factors of personality: Do they exist? Personality and Social Psychology Review, 13, 79–91.
Chamorro-Premuzic, T., & Furnham, A. (2005). Personality and intellectual competence. Mahwah, NJ: Erlbaum.
Corno, L., Cronbach, L. J., Kupermintz, H., Lohman, D. F., Mandinach, E. B., . . . Talbert, J. E. (2002). Remaking the concept of aptitude: Extending the legacy of Richard E. Snow. Mahwah, NJ: Erlbaum.
DeYoung, C. G. (2006). Higher-order factors of the Big Five in a multi-informant sample. Journal of Personality and Social Psychology, 91, 1138–1151.
DeYoung, C. G., Peterson, J. B., & Higgins, D. M. (2002). Higher-order factors of the Big Five predict conformity: Are there neuroses of health? Personality and Individual Differences, 33, 533–552.
Digman, J. M. (1997). Higher-order factors of the Big Five. Journal of Personality and Social Psychology, 73, 1246–1256.
Donnellan, M. B., Hopwood, C. J., & Wright, A. G. C. (2012). Reevaluating the evidence for the general factor of personality in the Multidimensional Personality Questionnaire: Concerns about Rushton and Irwing (2009). Personality and Individual Differences, 52, 285–289.
Ghiselli, E. E., Campbell, J. P., & Zedeck, S. (1981). Measurement theory for the behavioral sciences. San Francisco, CA: Freeman.
Gottfredson, L. (1997). Why g matters: The complexity of everyday life. Intelligence, 24, 79–132.
Gustafsson, J.-E. (2002). Measurement from a hierarchical point of view. In Braun, H. L., Jackson, D. G., & Wiley, D. E. (Eds.), The role of constructs in psychological and educational measurement (pp. 73–95). Mahwah, NJ: Erlbaum.
Hogan, J., & Roberts, B. W. (1996). Issues and non-issues in the fidelity-bandwidth trade-off. Journal of Organizational Behavior, 17, 627–637.
Hopwood, C. J., Wright, A. G. C., & Donnellan, M. B. (2011). Evaluating the evidence for the general factor of personality across multiple inventories. Journal of Research in Personality, 45, 468–478.
Hunter, J. E. (1986). Cognitive ability, cognitive aptitudes, job knowledge, and job performance. Journal of Vocational Behavior, 29, 340–362.
Judge, T. A., & Kammeyer-Mueller, J. D. (2012). General and specific measures in organizational behavior research: Considerations, examples, and recommendations for researchers. Journal of Organizational Behavior, 33, 161–174. doi:10.1002/job.764
Kuncel, N. R., & Sackett, P. R. (2014). Resolving the assessment center construct validity problem (as we know it). Journal of Applied Psychology, 99, 38–47.
Lang, J. W., Kersting, M., Hülsheger, U. R., & Lang, J. (2010). General mental ability, narrower cognitive abilities, and job performance: The perspective of the nested-factors model of cognitive abilities. Personnel Psychology, 63, 595–640.
Lievens, F., & Reeve, C. L. (2012). Where I-O psychology should really (re)start its investigation of intelligence constructs and their measurement. Industrial and Organizational Psychology: Perspectives on Science and Practice, 5, 153–158.
McHenry, J. J., Hough, L. M., Toquam, J. L., Hanson, M. A., & Ashworth, S. (1990). Project A validity results: The relationships between predictor and criterion domains. Personnel Psychology, 43, 335–354.
Mount, M. K., Barrick, M. R., Scullen, S. M., & Rounds, J. (2005). Higher-order dimensions of the Big Five personality traits and the Big Six vocational interest types. Personnel Psychology, 58, 447–478.
Musek, J. (2007). A general factor of personality: Evidence for the Big One in the five-factor model. Journal of Research in Personality, 41, 1213–1233.
Park, G., Lubinski, D., & Benbow, C. P. (2008). Ability differences among people who have commensurate degrees matter for scientific creativity. Psychological Science, 19, 957–961.
Podsakoff, P. M., MacKenzie, S. B., & Podsakoff, N. P. (2012). Sources of method bias in social science research and recommendations on how to control it. Annual Review of Psychology, 63, 539–569.
Ree, M. J., Carretta, T. R., & Teachout, M. S. (2015). Pervasiveness of dominant general factors in organizational measurement. Industrial and Organizational Psychology: Perspectives on Science and Practice, 8(3), 409–427.
Ree, M. J., & Earles, J. A. (1991). Predicting training success: Not much more than g. Personnel Psychology, 44, 321–332.
Ree, M. J., Earles, J. A., & Teachout, M. S. (1994). Predicting job performance: Not much more than g. Journal of Applied Psychology, 79, 518–524.
Reeve, C. L. (2004). Differential ability antecedents of general and specific dimensions of declarative knowledge: More than g. Intelligence, 32, 621–652.
Reeve, C. L., & Hakel, M. D. (2002). Asking the right questions about g. Human Performance, 15, 47–74.
Reeve, C. L., Scherbaum, C. A., & Goldstein, H. W. (2015). Manifestations of intelligence: Expanding the measurement space to reconsider specific cognitive abilities. Human Resource Management Review, 25, 28–37.
Rushton, J. P., Bons, T. A., Ando, J., Hur, Y. M., Irwing, P., Vernon, P. A., . . . Barbaranelli, C. (2009). A general factor model of personality from multitrait-multimethod data and cross-national twins. Twin Research and Human Genetics, 12, 356–365.
Rushton, J. P., Bons, T. A., & Hur, Y. M. (2008). The genetics and evolution of the general factor of personality. Journal of Research in Personality, 42, 1173–1185.
Rushton, J. P., & Irwing, P. (2008). A general factor of personality (GFP) from two meta-analyses of the Big Five: Digman (1997) and Mount, Barrick, Scullen, and Rounds (2005). Personality and Individual Differences, 45, 679–683.
Schneider, W. J., & McGrew, K. S. (2012). The Cattell-Horn-Carroll model of intelligence. In Flanagan, D., & Harrison, P. (Eds.), Contemporary intellectual assessment: Theories, tests, and issues (3rd ed., pp. 99–144). New York, NY: Guilford Press.
Schneider, W. J., & Newman, D. A. (2015). Intelligence is multidimensional: Theoretical review and implications of specific cognitive abilities. Human Resource Management Review, 25, 12–27.
Shea, D. L., Lubinski, D., & Benbow, C. P. (2001). Importance of assessing spatial ability in intellectually talented young adolescents: A 20-year longitudinal study. Journal of Educational Psychology, 93, 604–614.
Snow, R. E. (1987). Aptitude complexes. Aptitude, Learning and Instruction, 3, 11–34.
Snow, R. E. (1992). Aptitude theory: Yesterday, today, and tomorrow. Educational Psychologist, 27(1), 5–32.
van der Maas, H. L., Dolan, C. V., Grasman, R., Wicherts, J. M., Huizenga, H. M., & Raijmakers, M. (2006). A dynamical model of general intelligence: The positive manifold of intelligence by mutualism. Psychological Review, 113, 842–861.
Wai, J. (2013). Experts are born, then made: Combining prospective and retrospective longitudinal data shows that cognitive ability matters. Intelligence. doi:10.1016/j.intell.2013.08.009
Webb, R. M., Lubinski, D., & Benbow, C. P. (2007). Spatial ability: A neglected dimension in talent searches for intellectually precocious youth. Journal of Educational Psychology, 99, 397–420.
Wee, S., Newman, D., & Joseph, D. (2014). More than g: Selection quality and adverse impact implications of considering second-stratum cognitive abilities. Journal of Applied Psychology, 99, 547–563.
Youngstrom, E. A., Kogos, J. L., & Glutting, J. J. (1999). Incremental efficacy of Differential Ability Scales factor scores in predicting individual achievement criteria. School Psychology Quarterly, 14, 26–39.