In the target article, Burkart et al. highlight the importance of identifying variations in domain-general intelligence across species. However, with the exception of mice and possibly primates, there remains little evidence that variations in domain-general intelligence (g) underlie intraspecific variations in cognitive performance in nonhuman animals. Moreover, such an attribution remains debatable because procedural differences in test battery design may confound interpretations of the underlying mechanism. Our concern is heightened because, where support for a g factor does exist, it is sparse and comes predominantly from studies that test subjects in the wild. For example, the mechanisms underlying success on test batteries designed to assess the performances of birds in the wild (Isden et al. 2013; Keagy et al. 2011; Shaw et al. 2015) bear little resemblance to those effective in tasks presented to non-avian species tested in captivity (Herrmann et al. 2010b).
To address accurately whether it is meaningful to talk about domain-general intelligence in animals, it is important that the inherent design of the items within a cognitive test battery accurately captures domain-specific cognitive abilities, independent of procedural factors, and that relevant testing paradigms are used to assess the cognitive performances of subjects in the wild as well as in captivity. Direct comparisons between species are unavoidably difficult because different animals possess different adaptive specialisations; for example, a human cognitive test battery may assess verbal skills, whereas nonhuman test batteries cannot. Test batteries, therefore, also need to consider the inherent differences in cognitive processes between species.
Nonhuman cognitive test batteries, particularly those presented to subjects in the wild, require individuals to first interact with a novel apparatus before experiencing its affordances. Accordingly, such test batteries often use tasks that involve trial-and-error learning to quantify subjects' performances and assess their ability to learn to attend to cues based on reward contingencies. For example, subjects may be presented with tasks that assess how quickly they can learn to differentiate rewarded from unrewarded colours, or learn about the spatial location of concealed rewards. Although performances on such tasks are considered to capture domain-specific abilities, success will inevitably also be mediated by fundamental processes of learning that are common to the inherent design of these problems. As a result, an individual may perform well when learning both colour and spatial discrimination problems, not because this individual excels in anything we would want to call intelligence but because it is a relatively rapid learner of all kinds of association, including those involved in the two novel problems. Hence, what seems to be evidence for domain-general intelligence may reflect individual consistency in speed of associative learning, rather than individual consistency in cognition across different domains.
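This confound can be illustrated with a toy simulation (our sketch, not from the target article; the sample size, noise levels, and task labels are all hypothetical). Individuals differ only in a single associative-learning speed, yet their performances on two nominally distinct discrimination tasks correlate strongly, and the battery's correlation matrix yields a large apparent general factor:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical sample of individuals

# Each individual has a single latent associative-learning speed; no
# domain-general intelligence (g) is modelled at all.
speed = rng.normal(0.0, 1.0, n)

# Performance (e.g., negative trials-to-criterion) on two nominally
# distinct tasks, both driven by the same speed plus task-specific noise.
colour = speed + rng.normal(0.0, 0.6, n)
spatial = speed + rng.normal(0.0, 0.6, n)

# The tasks correlate strongly even though no g was modelled.
r = np.corrcoef(colour, spatial)[0, 1]
print(f"correlation between task performances: {r:.2f}")

# Largest eigenvalue of the battery's correlation matrix: an apparent
# "general factor" that is really just associative-learning speed.
eigvals = np.linalg.eigvalsh(np.corrcoef(np.vstack([colour, spatial])))
print(f"variance explained by first component: {eigvals[-1] / eigvals.sum():.2f}")
```

By construction, the apparent factor here is nothing more than associative-learning speed, which is precisely why batteries built solely from trial-and-error learning tasks cannot distinguish g from general learning ability.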
Between-species comparisons may be further confounded because associative learning ability plays a greater role in task performance in animals than it does in humans, and may play a greater role in some nonhuman species than others. Such differences may be particularly pronounced between evolutionarily disparate species such as primates and birds. Pigeons consistently show purely associative solutions to problems that humans, and to some extent nonhuman primates, tend to solve by the use of rules (e.g., Lea & Wills 2008; Lea et al. 2009; Maes et al. 2015; Meier et al. 2016; Smith et al. 2011; 2012; Wills et al. 2009). In humans, preferential attention to rules may expedite performances on rule-based tasks (Danforth et al. 1990), but may also impair responses to experienced contingencies (Fingerman & Levine 1974; Hayes et al. 1986). Consequently, as different cognitive processes govern the performances of different species on psychometric test batteries, analogous performances between human and nonhuman animals may be difficult to capture.
To overcome these issues, we highlight the importance of differentiating performances on tasks that require subjects to “learn” to solve a problem from performances on tasks that assess whether subjects “know” the solution to a problem. We therefore advocate the use not only of associative tasks, such as discrimination learning of colour cues, that require trial-and-error experience to solve, but also of tasks that require subjects to be trained beforehand to a particular learning criterion, so that their performance on a subsequent novel test or “generalisation” condition can be assessed. Such conditions provide a controlled version of the tests of “insightful” or “spontaneous” problem solving that, from the time of Köhler (1925) on, have often been considered critical in assessing animal intelligence.
Learning tasks are well suited to assessing individual differences in associative performance and may be especially relevant when investigating the cognitive performances of nonhuman animals. Binary discriminations involving spatial or colour cues can be presented to subjects and their rates of learning quantified across these different cognitive domains. Although rates of associative learning may differ across domains (Seligman 1970), individual differences in such tasks may still be correlated, leading to a general factor reflecting associative learning ability (hereafter “a”). However, for reliable comparisons, it also remains important to show that subjects' performances are consistent within domains.
Knowing tasks, by contrast, can be designed to assess the more flexible cognitive processes associated with rule-based learning or generalisation and may be more relevant when assessing cognition in humans. Such tasks require training subjects to a predetermined criterion of success to standardise their understanding of the problem, and then presenting subjects with a single test trial using novel cues. Importantly, performances on knowing tasks may highlight whether the mechanism underlying g in humans resembles that which may be found in nonhuman animals.
By incorporating both learning and knowing tasks into cognitive test batteries, we can address whether a general factor of cognitive performance in human and nonhuman animals is better represented by g or a. Distinguishing learning and knowing problems, therefore, provides a measure of individual variation in both domain-specific and domain-general abilities that do not just reflect speed of associative learning, and so can be used to assess whether variation in nonhuman cognitive performance reflects a dimension of general intelligence of the same kind as is thought to underlie human variation.
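A minimal sketch of this logic (again hypothetical: the latent abilities, loadings, and noise are invented for illustration): if learning tasks load on an associative factor a while knowing tasks load on a distinct factor g, the correlation matrix of a mixed battery separates the two, whereas a battery of learning tasks alone could not.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500  # hypothetical sample of individuals

# Independent latent abilities: associative-learning speed ("a") and a
# rule-based/generalisation ability (a stand-in for "g").
a = rng.normal(size=n)
g = rng.normal(size=n)

# Two "learning" tasks load on a; two "knowing" tasks load on g.
learn1 = a + rng.normal(0, 0.5, n)
learn2 = a + rng.normal(0, 0.5, n)
know1 = g + rng.normal(0, 0.5, n)
know2 = g + rng.normal(0, 0.5, n)

R = np.corrcoef([learn1, learn2, know1, know2])
within_learning = R[0, 1]
within_knowing = R[2, 3]
across = R[:2, 2:].mean()
print(f"within learning tasks: {within_learning:.2f}")
print(f"within knowing tasks:  {within_knowing:.2f}")
print(f"across task types:     {across:.2f}")
```

High within-type but negligible cross-type correlations would indicate that a, not g, drives performance on the learning tasks; substantial cross-type correlations would instead be consistent with a shared domain-general factor.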
ACKNOWLEDGMENT
JvH was funded by an ERC Consolidator grant awarded to Joah Madden (616474).