
g as Bridge Model

Published online by Cambridge University Press:  01 January 2022


Abstract

Psychometric g—a statistical factor capturing intercorrelations between scores on different IQ tests—is of theoretical interest despite being a low-fidelity model of both folk psychological intelligence and its cognitive/neural underpinnings. Psychometric g idealizes away from those aspects of cognitive/neural mechanisms that are not explanatory of the relevant variety of folk psychological intelligence, and it idealizes away from those varieties of folk psychological intelligence that are not generated by the relevant cognitive/neural substrate. In this manner, g constitutes a high-fidelity bridge model of the relationship between its two targets and, thereby, helps demystify the relationship between folk and scientific psychology.

Type: Cognitive Sciences
Copyright: 2021 by the Philosophy of Science Association. All rights reserved.

1. Introduction

Psychometric g is a statistical factor that captures the intercorrelations between all of any given individual’s scores on different IQ tests and subtests (probing verbal ability, mathematical ability, analogical reasoning, pattern matching, etc.). The first great finding of the IQ-testing tradition is that subjects who do better than most people on any given one of these subtests are also likely to do better than most people on any of the others (Mackintosh 2011). Consequently, g is commonly considered a statistical distillation of what all IQ subtests measure in common. The second great finding of the IQ-testing tradition is that g is predictively fecund—among psychological constructs, only conscientiousness competes with g as a predictor of educational attainment, job complexity, socioeconomic status, and other prominent measures of success in life (Gottfredson 1997). Nevertheless, experts are divided about its theoretical interest.
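To make the statistical construct concrete, the sketch below simulates a positive manifold and extracts a general factor from it. Everything in it is hypothetical: the subtest names, sample size, and loadings are invented for illustration, and the first principal component of the correlation matrix is used as a simple stand-in for a formal common-factor analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
subtests = ["vocabulary", "arithmetic", "analogies", "matrices", "digit span"]  # hypothetical
true_loadings = np.array([0.8, 0.7, 0.6, 0.75, 0.65])   # invented g-loadings
n_people = 1000

# Simulate a positive manifold: every subtest score reflects one shared factor plus noise.
g_true = rng.standard_normal(n_people)
noise = rng.standard_normal((n_people, len(subtests))) * np.sqrt(1 - true_loadings**2)
scores = g_true[:, None] * true_loadings + noise

# Intercorrelations between subtests are uniformly positive.
R = np.corrcoef(scores, rowvar=False)

# Extract a general factor: the first principal component of R is used here as a
# simple stand-in for a formal common-factor analysis.
eigvals, eigvecs = np.linalg.eigh(R)                  # eigenvalues in ascending order
v = eigvecs[:, -1] * np.sign(eigvecs[:, -1].sum())    # top eigenvector, sign fixed
est_loadings = v * np.sqrt(eigvals[-1])               # each subtest's estimated loading
z = (scores - scores.mean(0)) / scores.std(0)
g_hat = z @ v                                         # per-person factor score (up to scaling)

for name, est, true in zip(subtests, est_loadings, true_loadings):
    print(f"{name:>10}: estimated loading {est:.2f} (simulated {true:.2f})")
print("correlation of recovered factor with the simulated one:",
      round(float(np.corrcoef(g_hat, g_true)[0, 1]), 2))
```

The point is only this: whenever every pair of subtests correlates positively, a single factor of this sort soaks up a large share of the shared variance, and each subtest’s "g-loading" is its loading on that factor.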

Some skeptics deny that g measures anything more theoretically interesting than the ability to do well on IQ tests, but most intelligence researchers assume that g is a very good model of something of theoretical interest, which they variously refer to as ‘the g-factor’, ‘general intelligence’, or ‘the positive manifold’. Nonskeptics use these terms to pick out one of two target systems: g is purported to be a model of either folk psychological intelligence—the personal-level capacity that ordinary folks invoke when they call somebody “smart” or “stupid”—or the cognitive (or neural) substrate of that capacity.

In this article, I propose that g is of theoretical interest despite being a low-fidelity model of each of these targets. In section 2, I sketch reasons for denying that g is a very good measure of folk psychological intelligence. In section 3, I argue that it is not a very good measure of what is going on in the brains or cognitive systems of (un)intelligent people, either. Finally, in section 4, I suggest that g is nevertheless explanatorily interesting insofar as it idealizes away from those aspects of the relevant neural/cognitive substrates that are not explanatory of the relevant variety of folk psychological intelligence and idealizes away from those aspects of the relevant variety of folk psychological intelligence that are not generated by the relevant neural/cognitive substrates. In that manner, g constitutes a high-fidelity bridge model of the relationship between its two distinct targets and, thereby, helps demystify the relationship between folk psychology and scientific psychology.

2. g Is Not a Very Good Measure of Folk Psychological Intelligence

Elsewhere (Curry 2020), I have argued for an interpretivist account of intelligence: to be intelligent, in the sense invoked in folk psychological practices, is to be comparatively good at solving intellectual problems that an interpreter deems worth solving. In short, you are intelligent if you behave (in ways that folks deem smart) more successfully than other people, and you are unintelligent if you behave (in ways that folks deem smart) less successfully than other people. Since the extant empirical evidence indicates that different lay interpreters deem different intellectual problems worth solving (and, indeed, deem different problems intellectual), it follows from my definition that what it is to be intelligent varies alongside the lay interpreters in question.

As Sternberg and Grigorenko (2004) and their collaborators in cross-cultural psychology have documented, g tracks some—but not all—of the varieties of intelligence that have emerged in relation to folk psychological practices around the globe. Psychometric g is plausibly a decent model of varieties of intelligence that became salient in the dominant folk psychological discourses of some WEIRD—Western, educated, industrialized, rich, democratic (Henrich, Heine, and Norenzayan 2010)—contexts in the twentieth century. But WEIRD intelligences are much less salient in other cultural contexts. Moreover, skeptical philosophers and psychologists have provided serious reasons to doubt that g is a very good measure even of the varieties of folk psychological intelligence that have emerged, alongside IQ testing itself (Hacking 1999, 73), within WEIRD contexts (Block and Dworkin 1974; Alfano, Holden, and Conway 2016).

So I henceforth assume that g is not a very good measure of what folks are talking about when they talk about intelligence in everyday life: it does not straightforwardly measure intelligence as conceptualized in IQ-test-influenced settings, and it fails to measure intelligence as conceptualized in many other settings.¹ Nevertheless, my account of folk psychological intelligence leaves open the possibility that g is a great measure of the neural or cognitive underpinnings of what folks are talking about when they talk about intelligence.

3. g Is Not a Very Good Measure of Cognitive or Neural Functioning

Some psychologists and cognitive neuroscientists are increasingly optimistic about unearthing a (set of) neural or cognitive mechanism(s) that is fully responsible for the comparatively superior (or inferior) capacity measured by g and, thereby, discovering intelligence squarely in the brain or cognitive system. I think their optimism about reduction is misplaced. To substantiate my pessimism, I examine a few prominent recent attempts to reduce intelligence to its neural or cognitive substrates.

3.1. Neural Correlates

Jensen (2006, ix), in a refinement of Spearman’s (1927, 117) original speculation that g measures a kind of “mental energy,” interprets g as an indirect “measurement of cognitive speed,” which could be more directly measured via reaction time paradigms that correlate strongly with g. Because of this correlation, Jensen was convinced that “intelligence is the periodicity of neural oscillation in the action potentials of the brain and central nervous system” (2011, 173). In other words, intelligence is nothing more than the frequency of brainwaves, and IQ testing provides a reliable (if indirect) measure of this physical feature of the brain.

Jensen’s simple reductionist theory has not held up in the light of PET and fMRI research. Cognitive neuroscientists have demonstrated that a higher frequency of brainwaves is not straightforwardly correlated with greater neural processing power—nor is any other particular pattern in the frequency of brainwaves (Haier 2017). Despite Jensen’s best efforts, Spearman’s notion of mental energy has no neural referent. Nevertheless, more empirically adequate neurological theories of intelligence have risen in Jensen’s theory’s stead.

The best developed among them—Jung and Haier’s Parieto-Frontal Integration Theory—goes a long way toward identifying the neural correlates of the cognitive processes recruited when people take IQ tests. There is something to Jung and Haier’s suggestion that the efficient integrated operation of a parieto-frontal sense-remember-judge-act network underlies varieties of intelligence purportedly measured by g. (It plausibly partially underlies other varieties of folk psychological intelligence as well.) But Jung and Haier have no proposal as to the cause of this efficiency, which could stem from a wide variety of sources, only some of which could be plausibly construed as the incarnation of intelligence in the brain. Indeed, in responding to critics, Jung and Haier back off of the claim to have provided a reductionist theory of the positive manifold modeled by g and instead insist only that “in our view, it is still too early to rule out a neural basis for a general factor of intelligence independent of a neural basis for specific cognitive abilities” (2007, 176). In other words, Jung and Haier insist it is possible that the neural efficiency underlying successful IQ test taking is generated by intelligence qua mechanism in the brain. They claim to have located that mechanism in a reasonably delimited parieto-frontal network. But they make no claim to have identified the mechanism itself.

Localization is not nearly enough to ground reduction. If researchers hope to reduce intelligence to a neural—or, failing that, cognitive—state or process, then they must first identify a candidate mechanism that produces that state or carries out that process. The most promising candidate currently on offer is working memory capacity.

3.2. Working Memory

The term ‘working memory’ refers to “a domain-general resource that enables representations to be actively sustained, rehearsed, and manipulated for purposes of reasoning and problem solving” (Carruthers 2015, 12). When you rehearse a phone number in your head while looking for a piece of paper to scribble it down on, you are using your working memory. Working memory capacity is a common measure both of how much information can be maintained in working memory and of how well that information can be processed. Many cognitive scientists take working memory capacity to be a critical component of most complex cognition and have thus become increasingly interested in the hypothesis that intelligence can be explained largely in terms thereof—perhaps even reduced thereto.

This hypothesis makes some intuitive sense: solving intellectual problems usually involves actively sustaining and manipulating information. And, at first glance, the evidence in favor of reducing intelligence to working memory capacity is impressive. When you give somebody both an IQ test and a test of working memory capacity, the two resulting scores correlate positively. In particular, working memory capacity correlates with ‘fluid g’—the factor capturing performance on IQ tests designed to probe pure reasoning, as opposed to reasoning that makes use of what the reasoner knows—somewhere between .6 and .8 (Carruthers 2015). Moreover, much of the parieto-frontal network that Jung and Haier identify as the neural correlate of g has also been shown to be active in working memory (Deary, Penke, and Johnson 2010). Finally, there is some evidence that increases in working memory capacity yield increases in fluid g (Jaušovec and Jaušovec 2012).

Yet, there is evidence that cuts against reduction. Working memory capacity, while quite domain-general, is nevertheless more domain-specific than fluid g: it correlates more with tests of verbal ability than with tests of spatial ability, for instance. And working memory’s contribution to performance on tests of fluid g seems to be independent of the respective contributions of associative learning and information processing speed (Mackintosh 2011, 154–55). So there is reason to doubt that working memory is the sole cognitive underpinning of fluid g. Moreover, there is some reason to doubt that working memory is a cognitive underpinning of intelligence at all: some of the researchers responsible for discovering the correlations between fluid g and working memory capacity have argued that the two are explanatorily distinct phenomena that are nevertheless strongly correlated because they share a common underpinning (Shipstead and Engle 2018).

For present purposes, these complex questions about the weight and interpretation of the extant evidence can be set aside. My argument against reducing intelligence to working memory capacity instead rests on the premise that the attempted reduction would add nothing to, and subtract something from, scientists’ understanding. In particular, an attempt at reduction would hinder scientists’ understanding of intelligence while adding nothing to their understanding of how cognitive systems work.

With regard to the latter: working memory capacity is already a well-defined construct that measures the operations of a central and reasonably well-delimited (albeit complex and distributed) cognitive subsystem and, thereby, plays a reasonably clear explanatory role in cognitive science (cf. Gomez-Lavin 2021). Stipulating that this construct is a measure of intelligence—without making any suggestions for how that stipulation should change scientists’ understanding of working memory or the functioning of cognitive systems more generally—does nothing to enhance its explanatory power. Thus, an attempt at reduction is justified in this case only if it sheds light on the phenomenon being reduced.

But reduction to working memory capacity can only obfuscate intelligence. Even granting that IQ tests measure intelligence well, any attempted reduction of intelligence to working memory capacity hinders scientists’ understanding of intelligence in at least two respects.

First, working memory capacity is impressively highly correlated, not with g but only with one of its component factors, fluid g, which is derived from a minority subset of IQ tests. Most IQ tests also measure other component factors, including most prominently ‘crystallized g’—the factor capturing how well people do on IQ tests that are designed to focus on reasoning that makes use of what the reasoner knows. The calling card of plain old undifferentiated g is that there are strong intercorrelations between how well people do on all IQ tests, including relatively pure tests of fluid g, relatively pure tests of crystallized g, and a wide range of hybrids. The heterogeneous nature of the positive manifold should be telling when it comes to constructing a theory of intelligence: the fact that both fluid g and crystallized g are statistical components of undifferentiated g intriguingly mirrors the fact that folk psychological conceptions of intelligence across cultures tend to invoke both fluid reasoning and the use of crystallized knowledge (Sternberg and Grigorenko 2004). Meanwhile, the correlation of crystallized intelligence and working memory capacity, like the correlation of undifferentiated g and working memory capacity, is somewhere between .3 and .6 (Mackintosh 2011)—the two are clearly related, but it is equally clear that a direct reduction of one to the other will not be in the offing.

It is possible that fluid g captures the essence of g (and, by extension, of folk psychological intelligence) and that crystallized g is more noise than signal. The IQ tests with the highest g-loadings—that correlate most strongly with g itself—tend to be tests of fluid intelligence (like Raven’s Progressive Matrices). But there is a case to be made that even fluid g measures personality, motivation, temperament, or worldliness to a large degree—it may measure ambition, patience, or test wiseness as well as pure reasoning capacity (Block and Dworkin 1974)—and these characteristics are not plausibly reduced to working memory capacity. Indeed, my account of folk psychological intelligence suggests that these character traits are (sometimes) rightly taken to be part and parcel of intelligence: intelligence is the capacity to solve intellectual problems comparatively well, and solving problems better than one’s peers takes grit as well as wit (Morton and Paul 2019).

Scientists bent on reduction would likely be undeterred by this first pitfall. They could retort that, by shedding inessential character traits, working memory capacity distills the essence of fluid g, which itself, by shedding crystallized knowledge, distills the essence of undifferentiated g. Regardless, a second pitfall awaits the attempt to reduce fluid g to working memory capacity (and indeed any attempted reduction of a psychometric kind to the workings of a cognitive mechanism).

Even if working memory capacity is the single essential cognitive underpinning of intelligence, g is not a very good model thereof. That is because the g-factor is, by its very nature, comparative—it is an inter- (rather than intra-) individual construct that measures how people do on IQ tests relative to other people in their age cohort. It does not measure how smart people are on a ratio scale; it measures only how much better or worse they perform than the average IQ test taker. Thus, g cannot directly measure an intrinsic characteristic of any individual’s mind, whereas we already have reliable ways of measuring working memory capacity within a single individual on a ratio scale. (To my mind, this is a salutary fact about g, since on my definition folk psychological intelligence is also constitutively comparative.) As Borsboom and colleagues (2009, 79) have pointed out, absent a theory of how to bridge differential and cognitive psychology, “intelligence dimensions like the g-factor cannot be understood on the basis of between-subject data as denoting mental ability qua within-subject attribute.” Fluid g could not be comprehensibly reduced to working memory capacity absent a unifying theory of how constitutively comparative capacities relate to cognitive mechanisms.
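The contrast can be put concretely with a toy example (all numbers and cohort norms below are invented for illustration): a span count is interpretable for a single person in isolation, whereas a standardized, IQ-style score exists only relative to a norming cohort.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ratio-scale, within-subject measure: a (hypothetical) digit-span count.
# "7 items" is meaningful for this person alone; zero items means zero capacity.
my_span_items = 7
print(f"span: {my_span_items} items (interpretable without reference to anyone else)")

# Comparative, between-subject construct: an IQ-style standardized score exists
# only relative to a norming cohort (here, a simulated cohort of raw test scores).
cohort = rng.normal(loc=50, scale=10, size=10_000)   # invented cohort norms
my_raw_score = 62

z = (my_raw_score - cohort.mean()) / cohort.std()
iq_style = 100 + 15 * z                              # conventional IQ metric: mean 100, SD 15
percentile = (cohort < my_raw_score).mean() * 100
print(f"standardized score: {iq_style:.0f}, percentile: {percentile:.0f} "
      "(meaningless without the cohort)")

# The same raw performance earns a lower score against a stronger cohort.
stronger_cohort = cohort + 8
z2 = (my_raw_score - stronger_cohort.mean()) / stronger_cohort.std()
print(f"same raw performance, stronger cohort: {100 + 15 * z2:.0f}")
```

The final line makes the point: the same raw performance receives a lower standardized score against a stronger cohort, which is exactly the sense in which g-style scores measure relative, not intrinsic, standing.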

In contrast, it bears repeating that cognitive psychologists already have a decent theoretical understanding of the mechanics of working memory capacity in its own right, not to mention reliable instruments that measure it on a ratio scale. And theorists can give working memory capacity due emphasis as a cognitive underpinning of intelligence without making an attempt at reduction. Thus, in attempting reduction, nothing new is learned, some of the plausibly explanatorily salient dimensions of both folk psychological intelligence and g are elided, and an important distinction—between the intrapersonality of the cognitive mechanism of working memory and the constitutive interpersonality of intelligence—is obscured. So long as there is a viable nonreductive account of intelligence on the table, reduction carries no explanatory benefits and falls into at least two significant explanatory pitfalls.

And there are viable nonreductive accounts on the table. Rather than measuring a cognitive mechanism itself, g plausibly measures an effect of the interactions of several mechanisms, rendering the positive manifold “an emergent property of anatomically distinct cognitive systems, each of which has its own capacity” (Hampshire et al. 2012, 1225). At its extreme, this approach leads to the conclusion that “g is ‘not a thing’ but instead is a summary statistic” and thus that “the search for the neural basis of g is meaningless” (Conway and Kovacs 2018, 59). If viable, this approach would avoid both pitfalls of reducing intelligence to working memory: it would not exclude features of the positive manifold on an ad hoc basis, and it would have the flexibility to countenance the constitutively comparative nature of the positive manifold.

3.3. Mutualism

In that spirit, van der Maas and colleagues (2006) have vigorously argued that the intercorrelations between individuals’ IQ test scores can be explained by reference to the dynamic interplay of specialized cognitive mechanisms. They analogize g to the results of predator-prey dynamics in ecology. According to the Lotka-Volterra model, high correlations between predator and prey populations need not be caused by a single underlying factor (e.g., shared food source) that bolsters both populations. Instead, the correlation can be caused by dynamic interactions between the two populations. The size of the prey population increases when the size of the predator population is small (because breeding outpaces being eaten) and decreases when the predator population is large (because being eaten outpaces breeding). At the same time, the predator population grows when the prey population is large (because eating enables breeding) and decreases when the prey population is small (because there is not enough food to go around). These dynamics ensure that a strong correlation between the size of the populations emerges over time, without requiring any underlying factor to influence both populations.

Analogously, van der Maas and colleagues have demonstrated that high correlations between the performance of distinct cognitive mechanisms, which each undergird performance on some IQ subtest or other, need not be caused by a particular underlying factor that fuels each performance. Instead, the correlations are plausibly caused by dynamic interactions between the distinct cognitive mechanisms. Research in cognitive psychology reveals that such relationships exist. Short-term memory improves the development of cognitive strategies, and cognitive strategies improve the efficiency of short-term memory (Siegler and Alibali 2005). Language production and reasoning are similarly mutually beneficial: if you can think through it, then you can put it into words better, and if you can put it into words better, then that helps you think through it better (Fisher et al. 1994). And so on. These sorts of dynamic interactions between distinct cognitive mechanisms generate positive feedback loops, ensuring that strong correlations emerge over time between how well mechanisms function across the cognitive system.
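A minimal simulation in the spirit of van der Maas and colleagues’ mutualism model illustrates the point; the parameterization below is my own toy version, not their published specification. Each simulated person has several abilities that grow logistically toward independently drawn limits, with a uniform positive coupling standing in for the mutually beneficial interactions just described.

```python
import numpy as np

rng = np.random.default_rng(2)
n_people, n_abilities = 500, 6
steps, dt = 2000, 0.01
coupling = 0.12                              # strength of mutualistic interactions (toy value)

# Growth rates and capacity limits are drawn independently for every person and
# ability, so no common cause is built into the simulation.
a = rng.uniform(0.5, 1.5, size=(n_people, n_abilities))
K = rng.uniform(1.0, 3.0, size=(n_people, n_abilities))

M = np.full((n_abilities, n_abilities), coupling)
np.fill_diagonal(M, 0.0)                     # abilities boost one another, never themselves

x = np.full((n_people, n_abilities), 0.05)   # small initial ability levels
for _ in range(steps):
    # Logistic growth toward K plus a boost proportional to the other abilities
    # (a van der Maas-style mutualism term), integrated with a simple Euler step.
    growth = a * x * (1 - x / K)
    boost = a * x * (x @ M) / K
    x = x + dt * (growth + boost)

# Intercorrelations of final ability levels across simulated individuals: a
# positive manifold emerges; rerun with coupling = 0.0 and it largely disappears.
R = np.corrcoef(x, rowvar=False)
off_diag_mean = (R.sum() - n_abilities) / (n_abilities * (n_abilities - 1))
print("mean intercorrelation between abilities:", round(float(off_diag_mean), 2))
```

Setting the coupling to zero leaves the final ability levels essentially uncorrelated across individuals; with positive coupling, all pairwise correlations come out positive even though no common factor was built into the simulation.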

The g-factor is an explanandum, not the explanans, of the mutualistic functioning of cognitive mechanisms. If theorists force g into the role of explanans, then they will find that it is, at best, a low-fidelity model of that functioning: it idealizes away from all of the independently interesting, messy, and complex mechanistic details. Van der Maas, Kan, and Borsboom (2014) go on to infer that g is of theoretical interest only as something to be explained; it is a predictively powerful construct, but it does not itself do any interesting explanatory work.

4. g as Bridge Model

Although I accept most of van der Maas’s story, I think this last inference is mistaken. Researchers interested in the mind tend to investigate one or the other of two broad categories of phenomena: they investigate the objects of folk psychological interpretations (in what Sellars [1963, 4–5] termed “the ‘manifest’ image of man-in-the-world”), or they investigate the wiring and connections constituting cognitive mechanisms (in the ‘scientific’ image). As Godfrey-Smith has argued, echoing Sellars, “one of the roles for philosophy … is to describe the coordination between the facts about interpretations and the facts about wirings-and-connections” (Godfrey-Smith 2004, 149). On my view, g is a theoretically interesting explanans when cast in precisely that role. Van der Maas and his fellow mutualists should embrace the idea that g does explanatory work, not as a model of mechanisms but as a bridge model that illuminates the relationship between folk psychological intelligence and the functioning (and neurophysiological realization) of cognitive systems.

On Weisberg’s (2013) influential account, models are (concrete, mathematical, or computational) structures plus construals—scientists’ interpretations of those structures as descriptions of target systems. Bridge models, as I am coining the term, are structures that scientists construe as describing the relationship between two or more target systems. Bridge models are particularly useful as guides to the relationships between systems targeted at different levels (or otherwise partially incommensurate varieties) of scientific explanation.² Most explanatorily powerful models idealize away many irrelevant features of their target systems. In the case of bridge models, this means ignoring many (if not all) of the features of each of the target phenomena that are not directly related to the other target phenomena.

My positive proposal is that the same idealizations and abstractions that render g a low-fidelity model of both folk psychological intelligence and its cognitive underpinnings also render it a high-fidelity bridge model. By distilling the common core of IQ-test-taking ability, g idealizes away from the mechanistic details of cognitive functioning other than the fact that cognitive systems produce a positive manifold. At the same time, g idealizes away from the aspects (indeed, whole varieties) of folk psychological intelligence that are not tracked by performance on IQ subtests. Nevertheless, under the proper respective construals, g serves as a low-fidelity model of each of these phenomena. In so doing, it does not allow researchers to get a very firm grasp on either the folk psychology or the cognitive psychology of intelligence. But, properly construed, it could allow theorists to get a firmer grasp on the relationship between these two varieties of psychological explanation. In Sellarsian jargon: g, construed as a bridge model, could help fuse the manifest and scientific images of intelligence into one synoptic vision.

As construed by van der Maas, g does not provide a mechanistic explanation, but it does capture the fact that cognitive mechanisms dynamically work together to form a general substrate for the constitutively comparative problem-solving capacities that constitute the relevant variety of folk psychological intelligence. Taken from the other direction, g is, at best, a low-fidelity model of folk psychological intelligence: it idealizes away from the multifarious cross-cultural differences between folks’ conceptions of intelligence and from many of the messy and complex details within conceptions. Nevertheless, g is a high-fidelity model of those aspects of folk psychological intelligence that are realized by the mutualistic network of cognitive mechanisms that subserves IQ-test-taking ability.

When properly construed as a bridge model, g thereby helps reveal why one variety of lay intelligence attribution is genuinely powerfully predictive (and in some senses explanatory) of human behavior. An idealization of the attributed suite of constitutively comparative problem-solving capacities maps onto a predictively fecund idealization of the dynamic interactions between cognitive mechanisms.

By the same token, treating g as a bridge model is explanatory of its own high correlation with certain measures of success in life. It is not a great measure of any particular aspect of cognitive functioning. Nor is it a great measure of any particular folk conception of intelligence. But it does help researchers zero in on those aspects of cognitive functioning—the relevant mechanisms and their interactions—that undergird core features of some culturally salient folk conceptions of intelligence. In other words, it is a good isolator of the features of cognitive functioning that many people value when they value intelligence—and thus of the aspects of cognitive functioning that lead to certain kinds of success in a society partly structured by people’s values.

Researchers make a mistake when they infer that g must be a great measure of cognitive functioning, since it is so predictive of success. On the contrary, we should expect g qua bridge model to correlate with success better than any great direct measure of cognitive functioning. After all, most folks (and their social institutions) do not care a whit about rewarding cognitive functioning per se—they care about rewarding those people whose cognitive functioning has put them in a position to accomplish valued goals. At the same time, we should also expect g qua bridge model to correlate with success better than any great direct measure of intelligence as it emerges in relation to any given folk conception, since it zeroes in on aspects of folk psychological intelligence that are actually undergirded by more or less efficient and effective cognitive functioning.

I conclude by drawing a lateral philosophical lesson. Psychofunctionalists sometimes claim that belief attribution must literally describe cognitive functioning, since it is predictively fecund (Fodor 1987). There is something to this thought: the predictive power of belief attribution suggests that folk psychological beliefs must be reliably undergirded by patterns of cognitive functioning. Nevertheless, construing g as a bridge model highlights how intelligence attribution manages to be similarly predictively fecund without literally describing cognitive functioning. The predictive fecundity of belief attribution shows, at most, that if scientists were to construct the relevant bridge model, then they would find a reliable relationship between some aspects of folk psychological belief and some cognitive underpinnings that are responsible for behaviors that can be predicted via belief attribution. It cannot show that folk psychological belief is reducible to those cognitive underpinnings: intelligence attribution is similarly predictively powerful despite being irreducible.

Of course, this lesson does not disprove psychofunctionalism about belief. Some reductions of folk psychological phenomena to cognitive phenomena are well founded. But I have argued that, intrapersonally speaking, human cognitive architectures do not feature anything well-labeled ‘intelligence’. It is still an open question, which will not be settled by appeals to the predictive power of belief attribution, whether they feature anything well-labeled ‘beliefs’.

Footnotes

Thanks to Dan Dennett, Nabeel Hamid, Bill Lee, Angela Potochnik, Jordan Rodu, Sharon Ryan, and Robert Sternberg for comments and conversation; to Sam Curtis, Daniel Hoek, Karen Kovaka, Deborah Mayo, Kelly Trogdon, and other members of my audience at Virginia Tech in October 2020 for a stimulating Q&A; and to two anonymous reviewers for invitations to say more.

1. The central argument of this article relies neither on the details of my account of folk psychological intelligence nor on my view that g does not straightforwardly measure any variety of intelligence. Indeed, g would serve as a more explanatorily powerful bridge model if it were a great measure of (a variety of) folk psychological intelligence.

2. For example, Daniel Dennett (personal communication) has suggested that the Psychopathy Checklist might be fruitfully construed as a bridge model spanning psychopathy (qua personality disorder) and its neural underpinnings.

References

Alfano, Mark, LaTasha Holden, and Andrew Conway. 2016. “Intelligence, Race, and Psychological Testing.” In The Oxford Handbook of Philosophy and Race, ed. Naomi Zack, 474–86. New York: Oxford University Press.
Block, Ned, and Gerald Dworkin. 1974. “IQ, Heritability and Inequality.” Pt. 1. Philosophy and Public Affairs 3 (4): 331–409.
Borsboom, Denny, Rogier Kievit, Daniel Cervone, and S. Brian Hood. 2009. “The Two Disciplines of Scientific Psychology; or, The Disunity of Psychology as a Working Hypothesis.” In Dynamic Process Methodology in the Social and Developmental Sciences, ed. Jaan Valsiner, Peter Molenaar, Maria Lyra, and Nandita Chaudhary, 67–97. New York: Springer.
Carruthers, Peter. 2015. The Centered Mind: What the Science of Working Memory Shows Us about the Nature of Human Thought. Oxford: Oxford University Press.
Conway, Andrew, and Kristof Kovacs. 2018. “The Nature of the General Factor of Intelligence.” In The Nature of Human Intelligence, ed. Robert Sternberg, 49–63. Cambridge: Cambridge University Press.
Curry, Devin Sanchez. 2020. “Street Smarts.” Synthese, forthcoming. https://doi.org/10.1007/s11229-020-02641-z.
Deary, Ian, Lars Penke, and Wendy Johnson. 2010. “The Neuroscience of Human Intelligence Differences.” Nature Reviews: Neuroscience 11:201–11.
Fisher, Cynthia, D. Geoffrey Hall, Susan Rakowitz, and Lila Gleitman. 1994. “When It Is Better to Receive than to Give: Syntactic and Conceptual Constraints on Vocabulary Growth.” Lingua 92:333–75.
Fodor, Jerry. 1987. Psychosemantics: The Problem of Meaning in the Philosophy of Mind. Cambridge, MA: MIT Press.
Godfrey-Smith, Peter. 2004. “On Folk Psychology and Mental Representation.” In Representation in Mind: New Approaches to Mental Representation, ed. Hugh Clapin, Phillip Staines, and Peter Slezak, 147–62. Oxford: Elsevier.
Gomez-Lavin, Javier. 2021. “Working Memory Is Not a Natural Kind and Cannot Explain Central Cognition.” Review of Philosophy and Psychology 12:199–225.
Gottfredson, Linda. 1997. “Why g Matters: The Complexity of Everyday Life.” Intelligence 24 (1): 79–132.
Hacking, Ian. 1999. The Social Construction of What? Cambridge, MA: Harvard University Press.
Haier, Richard. 2017. The Neuroscience of Intelligence. Cambridge: Cambridge University Press.
Hampshire, Adam, Roger Highfield, Beth Parkin, and Adrian Owen. 2012. “Fractionating Human Intelligence.” Neuron 76 (6): 1225–37.
Henrich, Joseph, Stephen Heine, and Ara Norenzayan. 2010. “The Weirdest People in the World?” Behavioral and Brain Sciences 33 (2–3): 61–135.
Jaušovec, Norbert, and Ksenija Jaušovec. 2012. “Working Memory Training: Improving Intelligence—Changing Brain Activity.” Brain and Cognition 79:96–106.
Jensen, Arthur. 2006. Clocking the Mind. Oxford: Elsevier.
Jensen, Arthur. 2011. “The Theory of Intelligence and Its Measurement.” Intelligence 39 (4): 171–77.
Jung, Rex, and Richard Haier. 2007. “The Parieto-Frontal Integration Theory (P-FIT) of Intelligence.” Behavioral and Brain Sciences 30 (2): 135–87.
Mackintosh, Nicholas. 2011. IQ and Human Intelligence. 2nd ed. Oxford: Oxford University Press.
Morton, Jennifer, and Sarah Paul. 2019. “Grit.” Ethics 129:175–203.
Sellars, Wilfrid. 1963. Science, Perception and Reality. Atascadero: Ridgeview.
Shipstead, Zach, and Randall Engle. 2018. “Mechanisms of Working Memory Capacity and Fluid Intelligence and Their Common Dependence on Executive Attention.” In The Nature of Human Intelligence, ed. Robert Sternberg, 287–307. Cambridge: Cambridge University Press.
Siegler, Robert, and Martha Alibali. 2005. Children’s Thinking. 4th ed. Hoboken, NJ: Prentice-Hall.
Spearman, Charles. 1927. The Abilities of Man: Their Nature and Measurement. London: Macmillan.
Sternberg, Robert, and Elena Grigorenko. 2004. “Intelligence and Culture: How Cultures Shape What Intelligence Means, and the Implications for a Science of Well-Being.” Philosophical Transactions of the Royal Society of London B 359:1427–34.
van der Maas, Han, Conor Dolan, Raoul Grasman, Jelte Wicherts, Hilde Huizenga, and Maartje Raijmakers. 2006. “A Dynamical Model of General Intelligence: The Positive Manifold of Intelligence by Mutualism.” Psychological Review 113:842–61.
van der Maas, Han, Kees-Jan Kan, and Denny Borsboom. 2014. “Intelligence Is What the Intelligence Test Measures: Seriously.” Journal of Intelligence 2 (1): 12–15.
Weisberg, Michael. 2013. Simulation and Similarity: Using Models to Understand the World. Oxford: Oxford University Press.