Stibbard-Hawkes’ analysis of the taphonomic signature of the material culture of contemporary foragers is an innovative and useful contribution to the debate about “behavioral modernity” and the evolution of hominin cognition and culture. We are sympathetic with the author's basic contention that taphonomic and environmental factors mean that paleoarcheological absence of various kinds of artifacts should not be taken automatically as evidence for the “primitive null,” that is, lower cognitive sophistication in groups for whom the record of symbolic material culture is smaller or nonexistent, but rather as reflecting extra-genetic factors, such as demographic patterns, choice of materials, and so forth. This provides another strong argument against “single factor” genetic approaches (such as the one endorsed by Klein, Reference Klein2008).
While sympathetic to Stibbard-Hawkes’ approach, we disagree with his conclusion that we should switch from a primitive null to a “derived” or “cognitively modern” null that assigns every species of the genus Homo modern cognitive capacities “until proven otherwise.” We think it is too early to endorse such a position. While we agree that late members of our genus, such as H. neanderthalensis, probably had a level of cognitive sophistication similar to ours, we think this is not the case when we take the earliest hominins into consideration.
First, what is the evidence in favor of the assertion that the earliest representatives of Homo were cognitively modern? Why place the boundary of Homo-sapiens-level cognition at that precise point rather than another? It is true that absence of evidence is not evidence of absence, but this doesn't mean that every case of absence of evidence is the result of taphonomic biases or other phenomena. Sometimes, there is no evidence of a certain behavior in the fossil and archeological records because that species was not capable of the behavior.
Even after considering taphonomic and other sampling biases, there seems to be a clear signal in the fossil and archeological record between the Plio-Pleistocene transition (between 3.5 and 1.5 mya), when Homo emerged, and the present. Our analysis of the record suggests steadily increasing cranial capacity correlating with expanding behavioral repertoire, including more tool diversity, expansion into new habitats, and so forth, rather than the late Pleistocene takeoff that Stibbard-Hawkes also challenges. This pattern continues markedly even after the Plio-Pleistocene transition. But in contrast to Stibbard-Hawkes, we think that an evolution from the “primitive” character of the earliest human species to modern humans seems undeniable, and to state otherwise is to deny a correlation between increased cranial capacity and expanded cognitive abilities in our lineage.
The view that cognitive capacity and brain size are not correlated seems to be gaining momentum in certain areas of paleoanthropology. For example, when describing their excavations of Homo naledi fossils, and given the purported evidence for funerary behavior by this small-brained hominin, Fuentes et al. (Reference Fuentes, Kissel, Spikins, Molopyane, Hawks and Berger2023, p. 4) write, “increases in brain size/EQ may not be a necessary precursor for the appearance of meaning-making behavior in the hominins.” Endorsing this kind of position remains problematic, however, as the evidence in favor of complex cultural behavior by small-brained hominins is sparse and weak (see the public reviews of the three papers published by the H. naledi team, Berger et al., Reference Berger, Makhubela, Molopyane, Krüger, Randolph-Quinney, Elliott and Hawks2023a, Reference Berger, Hawks, Fuentes, van Rooyen, Tsikoane, Ramalepa and Molopyane2023b; Fuentes et al., Reference Fuentes, Kissel, Spikins, Molopyane, Hawks and Berger2023) and conflicts with the existing evidence that cranial capacity and cognitive ability are somehow related in human evolution, as mentioned above.
This dispute echoes the debate about setting the correct null hypothesis in comparative cognition. Morgan's canon (Morgan, Reference Morgan1894) has often been used to justify attributing “lower” cognitive abilities to animals. Similarly, Dennett (Reference Dennett1983) placed hypotheses in animal cognition research on a continuum going from “romantic” to “killjoy” – romantic hypotheses ascribe high cognitive sophistication to non-human animals, while killjoys ascribe low cognitive sophistication – and he took “killjoy” to be the default null. Others have argued against this default (Andrews & Huss, Reference Andrews and Huss2014; Mikhalevich, Reference Mikhalevich2015). These attitudes affect the kinds of experiments that are pursued in comparative cognition and the cognitive abilities attributed by scientists to the animals they study.
We expect that the debate arising from this target article, together with ongoing discussion about Homo naledi and other small-brained hominins, will continue to echo the debate in the philosophy of animal cognition. Some researchers will tend to minimize the abilities of other representatives of our genus (e.g., Klein's view on Neanderthals’ cognitive capacities, Reference Klein2000) while others will lean in the opposite direction. Stibbard-Hawkes leans toward the romantics, given his rejection of the “primitive null” for species belonging to the genus Homo. But the wholesale extension of modern cognitive capabilities to “all members of at least our genus” is, in our opinion, unwarranted.
This short commentary is not the ideal place to investigate parallels and differences between these two debates. Nevertheless, there is a useful lesson to be extracted from the philosophy of comparative cognition – and this should not be surprising, since we and our ancestors are animals. If it is necessary to establish a null hypothesis about the cognitive abilities of our ancestors, the best strategy is to do so in ways that are specific to the evidence rather than generic across different species. Several in the field of animal cognition have come to the same conclusion (Fitzpatrick, Reference Fitzpatrick2008; Mikhalevich, Reference Mikhalevich2015; Sober, Reference Sober, Daston and Mitman2005). Clearly the kind of evidence available in the case of extinct hominins is very different from that available to those who study animal cognition, and the biggest challenge for paleoanthropology is to determine what evidence needs to be mobilized and how. Addressing this challenge is preferable to setting the null hypothesis a priori.
In summary, we think that rather than debating about setting the right null hypothesis, the more fruitful task for the science of human cognitive evolution is to find ways to separate empirically the different contributions of genetic and extra-genetic factors. Stibbard-Hawkes’ ethnographic investigations helpfully point us in the right direction, with only minor course corrections required.
Financial support
This commentary was written with no current funding. Some of the ideas were previously developed by Colin Allen under grant #52935 from the John Templeton Foundation.
Competing interest
None.