1. Introduction
Research on how people understand each other's minds tends to focus in particular on how people attribute beliefs (e.g., Baron-Cohen, 1997; Call & Tomasello, 2008; Dennett, 1989; Nichols & Stich, 2003; Onishi & Baillargeon, 2005; Saxe & Kanwisher, 2003). However, people also have other ways of understanding each other's minds, including attributing knowledge. That is, instead of asking "What does this person believe?," we can ask "What does this person know?" Knowledge attribution has received far less attention than belief attribution. A simple Google Scholar search, for example, turns up roughly an order of magnitude more papers on tests for representations of belief than on representations of knowledge in theory of mind.
The reasons for this focus on belief are both methodological and historical. Because beliefs can be false, they provide a convenient method for testing whether an agent's mind is being represented independently from one's own representation of the external world. Moreover, historically, beliefs have been taken to be among the most conceptually basic mental states. People use many different concepts to make sense of the way other people understand the world, including the concepts of guessing, assuming, suspecting, presupposing, fully anticipating, and so forth. But the concept of belief may be more fundamental than any of these, and people's use of all of these other concepts may depend on their ability to represent beliefs. Knowledge may well be in the same camp as these other mental states. That is, people's ability to represent knowledge may ultimately depend on a more fundamental capacity to represent belief. This way of understanding the relationship between knowledge and belief has been widespread in the philosophical literature (for a review, see Ichikawa & Steup, 2017) and may further justify the focus on belief in research on theory of mind.
Surprisingly, empirical research offers little support for this way of understanding the relationship between knowledge and belief. Instead, most of the empirical evidence to date points in the opposite direction: the capacity to attribute knowledge is more basic than the capacity to attribute belief. We review the evidence for this conclusion from a wide range of fields using a diversity of methodological approaches: comparative psychology, developmental psychology, social psychology, clinical psychology, and experimental philosophy. This evidence indicates that nonhuman primates attribute knowledge but not belief (Section 4.1), that the capacity to attribute knowledge arises earlier in human development than the capacity to attribute belief does (Section 4.2), that implicit knowledge attributions are made more automatically than belief attributions (Section 4.3), that the capacity to represent knowledge may remain intact in patient populations, even when belief representation is disrupted (Section 4.4), and that explicit knowledge attributions do not depend on belief attributions (Sections 5.1 and 5.2) and are made more quickly than belief attributions (Section 5.3). Together these converging lines of evidence indicate that knowledge, rather than belief, is the more basic mental state used to represent other minds.
This abundance of evidence naturally gives rise to a further question: why do representations of knowledge play such a basic role in the way we understand others' minds? We argue that the features specific to knowledge representations suggest a promising answer: a primary function of knowledge representations is to guide learning from others about the external world. This presents a new view of how to think about theory of mind – one that is focused on understanding others' minds in relation to the actual world, rather than independently of it.
2. How should we understand knowledge?
Given our aim, an important initial question concerns what we mean by knowledge. At the broadest level, our proposal will be to start by treating knowledge as the ordinary thing meant when people talk about what others do or do not "know." While there is some disagreement about how to best make sense of the ordinary meaning (Ichikawa & Steup, 2017), there is also near-universal agreement on a number of essential features that knowledge has, and we take those as the signature markers of the kind of representation we are interested in.
Specifically, we focus on four features that are essential to knowledge:
2.1. Knowledge is factive
You can believe whatever you like, but you can only know things that are true. Also, when you represent others' knowledge, you can only represent them as knowing things that you take to be true. If you do not think it is true that the moon landing was faked, you cannot rightly say, "Bob knows the moon landing was faked." You would instead have to say, "Bob believes the moon landing was faked." Representations of knowledge can only be employed when you take the content of the mental state to be true (Kiparsky & Kiparsky, 1970; Williamson, 2000).
2.2. Knowledge is not just true belief
The capacity for attributing knowledge to others is not the same as a capacity for attributing true belief. Many cases of true belief are not knowledge. In the most widely discussed kinds of cases, someone's belief ends up being true by coincidence. For example, suppose that John believes that a key to his house is in his pocket. Unfortunately, that key fell out of his jacket as soon as he put it in. However, another house key, which he forgot about, is in the other pocket of the jacket he is wearing. In such a case, John has a belief that there is a house key in his pocket, and that belief happens to be true. Intuitively though, it seems wrong to describe John as knowing that there is a house key in his pocket (Gettier, 1963; Machery et al., 2017; Starmans & Friedman, 2012).
2.3. Others can know things you do not
While you can't represent others as knowing things that are false, you can represent others as knowing things you do not know. You can say, for example, "Suzy knows where to buy an Italian newspaper" even if you do not know where to buy an Italian newspaper (Karttunen, 1977; Phillips & George, 2018). Accordingly, the capacity to represent knowledge involves the capacity to represent two different kinds of ignorance. On the one hand, you can represent yourself as knowing more than others do ("altercentric ignorance"), but at the same time, you can also represent others as knowing more than you do ("egocentric ignorance") (Nagel, 2017; Phillips & Norby, 2019).
2.4. Knowledge is not modality-specific
While knowledge may be gained through different forms of perception (seeing, hearing, feeling, and so on), representations of knowledge are not tied to any particular sense modality; knowledge is more general than that. Moreover, knowledge can be gained without perception, for example, by logical inference. So attributing knowledge goes beyond merely representing what others see, hear, feel, and so on.
These four essential features of knowledge helpfully distinguish it from other kinds of mental-state representation. For example, representations of belief lack the feature of being factive, whereas representations of seeing or hearing, unlike knowledge, are modality-specific. Hence, our plan throughout will be to focus on instances of mental-state representations that have these four signature features of knowledge. We will then ask how such a capacity for knowledge representation relates to the ability to represent others' beliefs.
3. Two views about knowledge and belief
We will begin by setting out two broad views about how to understand the relationship between belief attribution and knowledge attribution. We start by considering these views at a purely theoretical level, without looking at empirical data or at prior investigations of these questions. The remainder of the paper then turns to existing empirical and theoretical studies to determine which of these two views is better supported by the evidence.
When considering the role of knowledge and belief in theory of mind, it will be important to distinguish between two closely related questions. The first asks whether one of these ways of representing others' minds is more basic than the other (e.g., whether one preceded the other in evolutionary history, or is present earlier in human development, or is computed more quickly). The second asks more specifically whether the less basic attribution depends in some way on the more basic one. The answers to these two questions will obviously be connected in an important way, since it would be difficult to see how the more basic representation could depend on some less basic one. At the same time, though, the less basic representation might not depend in any way on the more basic one; they could be entirely independent of each other. With these two questions in mind, let us consider two ways of understanding the role of belief and knowledge in theory of mind.
3.1. View 1: Belief attribution is more basic
A view familiar to philosophers holds both that belief is the more basic mental state, and that representations of others' knowledge depend in a critical way on representations of their beliefs. Applying this picture to psychology, people may make knowledge attributions by (a) making a simpler belief attribution and (b) also going through certain further cognitive processes that give rise to a representation that goes beyond merely attributing a belief.
If we are wondering about John's mental states regarding where his keys are, we would default to representing John's mind in terms of where John believes his keys are (for specific proposals on how we might do this, see, e.g., Baker, Jara-Ettinger, Saxe, & Tenenbaum, 2017; Goldman, 2006; Gopnik & Wellman, 1992; Gordon, 1986; Leslie, Friedman, & German, 2004). Then, to determine whether or not John knows that the keys are in his pocket, we would additionally need to determine whether his belief also meets the additional criteria required for knowledge.
For a more specific proposal about these criteria, one might turn to the rich philosophical literature, which has explored the ways in which the concept of knowledge goes beyond the concept of belief (Armstrong, 1973; Dretske, 1981; Goldman, 1967; Lewis, 1996; Nozick, 1981; Sosa, 1999). In what follows, we will not be concerned with those details. Our concern is rather with the broad idea that belief is the more basic way in which we represent others' minds and that representations of knowledge depend on representations of belief. If this idea is right, then we would expect there to be a set of processes that produce comparatively simple representations of what others believe (see, e.g., Rose, 2015). This view aligns well with the proposal that the capacity for belief attribution may in fact be a part of core cognition or otherwise innate, and present extremely early in human development (see, e.g., Kovács, Téglás, & Endress, 2010; Leslie et al., 2004; Onishi & Baillargeon, 2005; Stich, 2013).
On this view, belief is more basic than knowledge in both senses. It is more basic in the sense that we would expect, for example, representations of belief to be computed more quickly than representations of knowledge. Further, belief is more basic than knowledge in that knowledge representations may depend on representations of belief.
3.2. View 2: Knowledge attribution is more basic
An alternative view is that knowledge is the more basic way in which we make sense of others' minds. Rather than representing what others know by first representing what they believe, people may have a separate set of processes that give rise to some comparatively simple representation of what others know. Such a representation clearly would not involve calculations of belief.
The signature features of knowledge illustrate ways in which knowledge representations are substantially more limited than belief representations. For example, knowledge is factive, so if you do not think that it is true that John's keys are in his pocket, then you certainly cannot represent him as knowing that his keys are in his pocket. Moreover, even if you do think it is true that John's keys are in his pocket, but John's belief about the location of his keys only happens to be right as a matter of coincidence, then once again you cannot represent him as knowing that his keys are in his pocket.
In contrast, belief representations are more flexible. You can easily represent John as believing that his keys are in his pocket, or in any other location: his car, his shoe, or Timbuktu. In fact, representations of belief do not seem to be restricted in any way: in principle, you could represent John as believing completely arbitrary propositions. Hence, the set of things you could represent someone as knowing is necessarily smaller than the set of things you could represent them as believing (Phillips & Norby, 2019). These differences between knowledge and belief representation suggest that there may be some relatively simple representation of what others know and some comparatively complex representation of others' beliefs.
While the knowledge-as-more-basic view denies that knowledge representations depend on representations of belief, it does not make any commitment as to whether the converse holds. Even if knowledge representations are simpler than belief representations, the capacity to represent someone as believing something need not depend on the capacity to represent them as knowing something. These two ways of understanding others' minds may simply be independent.
3.3. Shifting focus
The origin of the focus on beliefs in theory of mind research is easily traceable to Premack and Woodruff's paper, "Does the chimpanzee have a theory of mind?" in Behavioral and Brain Sciences (1978). In the commentaries to that target article, a number of philosophers argued that for both theoretical and empirical reasons, better evidence for a capacity for theory of mind would show that chimpanzees represented beliefs, and in particular, false beliefs (Bennett, 1978; Dennett, 1978; Harman, 1978).
The idea that belief may be among the most basic mental representations is common within philosophy, and can be clearly observed in epistemological discussion of whether knowledge may be understood as some augmented version of justified true belief (e.g., Armstrong, 1973; Chisholm, 1977; Dretske, 1981). On this picture, belief is understood as the basic concept and knowledge is taken to be comparatively complex, that is, belief that has various other properties (being true, justified, and so on). However, philosophers have long questioned whether belief should actually be understood as more basic than knowledge, and recently, have become increasingly excited about the possibility that knowledge is the more basic notion (Nagel, 2013; 2017; Williamson, 2000; see Carter, Gordon, & Jarvis, 2017, for a collection of recent approaches; see Locke, 1689/1975; Plato, 380 BCE/1997, for earlier related discussions). On this approach, knowledge should not be analyzed in terms of belief or other concepts but should be understood as basic and unanalyzable.
Just as many philosophers have rethought the commitment to belief being more basic than knowledge, we think it is time for cognitive scientists to revisit their continued emphasis on belief as the more central or basic theory of mind representation. In fact, we think that the empirical research from cognitive science provides overwhelming evidence that it is knowledge attribution, rather than belief attribution, that is the more basic theory of mind representation.
In the sections that follow, we ask what cognitive science research reveals about this issue. We first review how the tools of cognitive science allow one to test claims about which types of representations are basic. We then examine what each of these empirical methods reveals about the basicness of belief and knowledge representations. The first group of methods we examine is primarily nonlinguistic and does not involve specifically asking subjects what they think about what someone "knows" or "believes." Rather, these methods involve operationalizing knowledge and belief within an experimental protocol and then investigating whether subjects' behavior demonstrates sensitivity to either kind of representation. The second group of methods actually employs the terms "knowledge" and "belief" and asks what people's use of these words can tell us about the basicness of the corresponding concepts. As detailed in the following sections, one finds a strikingly similar story across all of these highly varied methods.
4. Is belief understanding more basic than knowledge understanding?
A central task in cognitive science is to uncover and describe the most basic functions of human cognition – what “core” parts of the mind make up the foundation that we use to develop more complex, experience-dependent ways of thinking about the world. To answer this question, scientists have marshaled a set of empirical tools that give insight into these more basic aspects of human cognition.
First, researchers have tested which aspects of human cognition are more basic by examining the evolutionary history of different kinds of capacities, testing which aspects of human understanding are shared with closely related primate species and thus are likely to be evolutionarily foundational. Second, researchers have investigated which capacities emerge earliest in human ontogeny, and thus do not require much experience. Third, researchers have examined which cognitive capacities in adult humans operate automatically, and thus occur independently of conscious initiation. (The underlying logic here is not that if a capacity is automatic, then it is basic since there are capacities that are automatic but not basic, such as reading. Rather, the idea is that if a capacity were a very basic one, it would be surprising if it did not operate automatically.) Finally, researchers have examined which capacities are more basic by testing special populations that are unique either in terms of their experiences or in terms of neural variation (e.g., brain lesions, autism spectrum disorder [ASD], and so on). Capacities that are more basic tend to be conserved across populations despite radical differences in experiences or deficits in other cognitive processes. Taken together, this set of tools can provide important interdisciplinary insights into the question of which aspects of the mind are the most basic and form the foundation for other more complicated abilities.
To see these empirical tools at work, consider another domain in which scholars have also wondered which sorts of capacities are the more basic: the domain of human numerical understanding. Our ability to think about complex numerical concepts requires a host of sophisticated capacities, many of which require the kinds of complex computational abilities that only adult humans possess. But which parts of this numerical understanding are basic, and which require additional complex processing? Using the combined empirical approaches described above, researchers have determined that at least two aspects of our adult human numerical understanding – the capacity to represent large sets of objects approximately and the capacity to represent small numbers of individual objects exactly – appear to be basic (see reviews in Carey, 2009; Feigenson, Dehaene, & Spelke, 2004; Spelke, 2004). First, closely related primates appear to have both the capacity to exactly represent the number of objects in a small set (e.g., Santos, Barnes, & Mahajan, 2005) and the capacity to make approximate guesses about large numbers of objects (e.g., Jordan & Brannon, 2006). Second, human infants also appear to begin life with both of these capacities: they can track small numbers of objects (e.g., Wynn, 1992) and make quick approximate guesses about large numbers of objects (e.g., Xu & Spelke, 2000). Third, adult humans appear to perform both of these sorts of tasks automatically: we automatically subitize small numbers of objects exactly (e.g., Trick & Pylyshyn, 1994) and seem to automatically guess which of two large sets has approximately more objects (e.g., Barth, Kanwisher, & Spelke, 2003).
Finally, researchers have identified special populations in which participants' understanding of approximate numbers is preserved despite radical differences in experience (e.g., the Munduruku, an indigenous group of people who live in the Amazon River basin; Dehaene, Izard, Spelke, & Pica, 2008). These combined developmental, comparative, automaticity, and special population findings make a convincing case that the ability to enumerate objects approximately and track small numbers of objects exactly is more basic than other aspects of human number cognition (see review in Carey, 2009). For more discussion of the analogy between number cognition and theory of mind, see Apperly and Butterfill (2009), Spelke and Kinzler (2007), and Phillips and Norby (2019).
With this analogy in mind, let us return to the first question we posed about the relationship between knowledge and belief: Is belief attribution or knowledge attribution more basic in the sense described above?
In the following sections, we review what these same four empirical tools – comparative cognition research, developmental studies with infants and young children, studies of automatic processing in adult humans, and deficits in special populations – reveal about the foundational aspects of our understanding of other minds. What emerges from examining research with each of these empirical tools is a picture that is just as consistent as the one observed for number representation. All four of these different empirical tools suggest that representations of what others know are more basic than representations of what others believe.
4.1. Knowledge and belief in nonhuman primates
The first empirical tool that we marshal is comparative studies examining what nonhuman primates understand about others' minds. Much research over the past few decades has investigated how a number of primate species think about the minds of others, and this body of work has given us a relatively clear picture of what different primates understand about others' mental states (see reviews in Call & Santos, 2012; Drayton & Santos, 2016). Considering this now large body of research, we can ask whether we see an understanding of others' beliefs emerging as a more phylogenetically foundational aspect of mental-state reasoning in primates.
First off, do any nonhuman primates actually represent others' beliefs? Looking to our closest primate relatives, the great apes, one finds mixed evidence for an understanding of beliefs. Three recent sets of studies support the conclusion that chimpanzees and some other great apes can represent others' false beliefs (Buttelmann, Buttelmann, Carpenter, Call, & Tomasello, 2017; Kano, Krupenye, Hirata, Tomonaga, & Call, 2019; Krupenye, Kano, Hirata, Call, & Tomasello, 2016). However, there is also some reason for caution when interpreting these results. Researchers have continued to debate whether these findings are better explained by lower-level processing (Heyes, 2017; Kano, Krupenye, Hirata, Call, & Tomasello, 2017; Kano et al., 2019; Krupenye, Kano, Hirata, Call, & Tomasello, 2017), simpler representations that do not involve false belief (Buttelmann et al., 2017; Tomasello, 2018), and even whether the anticipatory-looking paradigms used in this research have reliably demonstrated theory of mind in humans (Dörrenberg, Rakoczy, & Liszkowski, 2018; Kulke, Wübker, & Rakoczy, 2018; Schuwerk, Priewasser, Sodian, & Perner, 2018). In brief, these studies provide some initial evidence that great apes can represent false beliefs, but additional research continues to be warranted (Martin, 2019).
At the same time, many other published studies suggest that apes fail to represent others' beliefs across a range of tasks (Call & Tomasello, 1999; Kaminski, Call, & Tomasello, 2008; Krachun, Carpenter, Call, & Tomasello, 2009; O'Connell & Dunbar, 2003). In one study, for example, Kaminski et al. (2008) explored whether chimpanzees could understand that a competitor had a false belief about the location of a hidden food item. Kaminski et al. used a design in which subject chimpanzees competed with a conspecific for access to contested foods (for the first of such tasks, see Hare, Call, Agnetta, & Tomasello, 2000). Chimpanzees did not distinguish between a condition where the competitor had a false belief and a condition where the competitor was ignorant (the food was taken out and then replaced in the same container in the competitor's absence). They were more likely to go for high-quality food in both of these conditions than in a knowledge condition where the competitor had seen where the food ended up. These results suggest that chimpanzees fail to account for false beliefs in competitive tasks, but they have no trouble distinguishing knowledge from ignorance (for similar results, see Call & Tomasello, 2008; Krachun et al., 2009).
In comparison to the mixed evidence one finds for representations of belief in great apes, the picture is clear when it comes to great apes' representations of knowledge: great apes have shown robust success in representing what others know and acting in accordance with those representations (Bräuer, Call, & Tomasello, 2007; Hare et al., 2000; Hare, Call, & Tomasello, 2001; Hare, Call, & Tomasello, 2006; Kaminski et al., 2008; Karg, Schmelz, Call, & Tomasello, 2015; Krachun et al., 2009; Melis, Call, & Tomasello, 2006; Whiten, 2013). Collectively, these studies suggest that great apes can track what others know in competitive tasks, even though they often fail to track others' beliefs in those same tasks (see, e.g., Call & Tomasello, 2008; MacLean & Hare, 2012).
Importantly, research on nonhuman primates has also investigated more distantly related primates, like monkeys, which provides insight into which capacities may have evolved even longer ago. The evidence regarding monkeys is even more unequivocal. To date, there is no evidence that monkeys understand other individuals' beliefs, even when tested on tasks that human infants have passed (Marticorena, Ruiz, Mukerji, Goddu, & Santos, 2011; Martin & Santos, 2016). Research on mental-state understanding in human infants often uses looking-time measures (e.g., Kovács et al., 2010; Onishi & Baillargeon, 2005). When these same techniques are applied to monkeys, they do not show evidence of representing false beliefs (Martin & Santos, 2014) or using them to predict behavior (Marticorena et al., 2011). In one study, monkeys watched an event in which a person saw an object moved into one of two boxes and then looked away as the object moved from the first box into the second box. Once the person had a false belief about the location of the object, monkeys appeared to make no prediction about where she would look; they looked equally long when the person reached for either of the two locations (Marticorena et al., 2011; see also Martin & Santos, 2014, for similar results on a different task). These findings suggest that primates more distantly related to humans than great apes fail to represent beliefs, indicating that the human ability to represent beliefs may actually be phylogenetically recent.
In spite of monkeys' difficulty in tracking others' beliefs, there is a large body of work demonstrating that monkeys can understand what others know (Drayton & Santos, 2016; Martin & Santos, 2016). Rhesus monkeys, for example, understand that they can steal food from a person who cannot see the food (Flombaum & Santos, 2005) or who cannot hear their approach toward the food (Santos, Nissen, & Ferrugia, 2006). Moreover, when monkeys' understanding of others' knowledge states is tested using looking-time measures, researchers again observe a dissociation in monkeys' understanding of knowledge and belief. For example, when rhesus monkeys see a person watching an object going into one of two locations, they look longer when that person reaches for the incorrect location than the correct location, suggesting that they expect people to search correctly when they know where an object is (Marticorena et al., 2011). These findings suggest that more phylogenetically distant monkey species succeed in tracking others' knowledge states even though they fail to understand others' beliefs.
4.1.1. Do nonhuman primates actually represent knowledge?
An essential further question is whether the research on nonhuman primate theory of mind actually provides evidence regarding knowledge representations specifically, rather than something else, such as a representation of perceptual access. To answer this further question, we need to ask whether there is evidence that the theory of mind representations observed in nonhuman primates carry the signature features that are unique to knowledge. A number of studies provide evidence that this is the case (see Nagel, Reference Nagel2017, for a complementary perspective).
First, there is evidence that nonhuman primates can represent egocentric ignorance; that is, they can represent someone else as knowing something they do not know. For example, in a competitive task involving obtaining food from one of two containers, chimps and bonobos were placed in a position such that they could see that their human competitor could see which container the food was placed in, but they could not see where the food was placed (Krachun et al., Reference Krachun, Carpenter, Call and Tomasello2009). The positions of the two containers were then switched in clear sight of both the subject and their human competitor. When subjects searched for food, they demonstrated a marked preference for taking food from the container their competitor reached for, suggesting that they represented the competitor as knowing where the food was even if they did not.Footnote 2 Critically, in a minimally different false-belief condition in which the containers switched positions while the competitor was not watching, chimps and bonobos (unlike 5-year-old children) were unable to recognize that they should search in the container the competitor was not reaching for (Krachun et al., Reference Krachun, Carpenter, Call and Tomasello2009).
Second, there is evidence that apes and monkeys fail to represent others' true beliefs in cases where they have no trouble representing others' knowledge (Horschler, Santos, & MacLean, Reference Horschler, Santos and MacLean2019; Kaminski et al., Reference Kaminski, Call and Tomasello2008). Specifically, these studies included conditions where food was placed in one of two opaque containers in clear sight of both the experimenter and the nonhuman primate. Then, after the experimenter's line of sight was occluded, the food was removed from the container but then put directly back in the same container where it was originally placed. Under such conditions, the experimenter should have a true belief about the location of the food (since it did not actually change locations), but not knowledge (Gettier, Reference Gettier1963). Strikingly, under these conditions, nonhuman primates failed to predict that the experimenter would act on the basis of the true belief. In contrast, they had little trouble making the correct predictions in matched conditions where the experimenter could be represented as having knowledge because she saw the removal and replacement of the food (Horschler et al., Reference Horschler, Santos and MacLean2019; Kaminski et al., Reference Kaminski, Call and Tomasello2008).
Finally, there is evidence that the knowledge representations found in nonhuman primates are not modality-specific. For example, both chimpanzees and rhesus macaques make the same inferences about others' knowledge based on auditory and visual information (Melis et al., Reference Melis, Call and Tomasello2006; Santos et al., Reference Santos, Nissen and Ferrugia2006). Moreover, recent studies also suggest that both chimpanzees and macaques can attribute inferential knowledge to others that cannot be based solely on perceptual access (Drayton & Santos, Reference Drayton and Santos2018; Schmelz, Call, & Tomasello, Reference Schmelz, Call and Tomasello2011).
Taken as a whole, the lesson from comparative research is that the capacity to represent others' beliefs may be evolutionarily newer than the capacity to represent others' knowledge. There is mixed evidence that great apes track others' beliefs. Nonetheless, there is clear evidence that great apes are able to track others' knowledge. Going yet a further step across the evolutionary tree, there is clear evidence that monkeys can track others' knowledge in a variety of contexts but not others' beliefs. In short, primate research to date suggests that the capacity to think about others' knowledge predates the ability to represent others' beliefs.
4.2. Knowledge and belief in human development
4.2.1. Knowledge and belief in infancy
Just as studies of nonhuman primates can provide evidence about which cognitive capacities are evolutionarily more foundational, so too can studies of preverbal infants demonstrate which cognitive capacities are developmentally prior. In the last two decades, a growing body of work using non-verbal methods has provided evidence that preverbal infants have the capacity to represent the mental states of others.
The current evidence of non-verbal belief representation in early infancy is, at this point, decidedly mixed. A number of studies have suggested that infants reason about an agent's actions in terms of her beliefs by 15 months of age or earlier (Buttelmann, Carpenter, & Tomasello, Reference Buttelmann, Carpenter and Tomasello2009; Kovács et al., Reference Kovács, Téglás and Endress2010; Onishi & Baillargeon, Reference Onishi and Baillargeon2005; Surian, Caldi, & Sperber, Reference Surian, Caldi and Sperber2007; Träuble, Marinović, & Pauen, Reference Träuble, Marinović and Pauen2010). At the same time, there have been a number of plausible proposals for how the key behavioral patterns can be explained without a genuine ability for belief representation (Burge, Reference Burge2018; Butterfill & Apperly, Reference Butterfill and Apperly2013; Heyes, Reference Heyes2014a; Priewasser, Rafetseder, Gargitter, & Perner, Reference Priewasser, Rafetseder, Gargitter and Perner2018). Further, other researchers have argued that some of these looking-time patterns may actually reflect representations of knowledge rather than belief (Wellman, Reference Wellman2014).
More concerning, though, recent attempts to replicate or extend the key empirical findings have been unsuccessful (e.g., Dörrenberg et al., Reference Dörrenberg, Rakoczy and Liszkowski2018; Grosse Wiesmann et al., Reference Grosse Wiesmann, Friederici, Disla, Steinbeis and Singer2018; Kammermeier & Paulus, Reference Kammermeier and Paulus2018; Powell, Hobbs, Bardis, Carey, & Saxe, Reference Powell, Hobbs, Bardis, Carey and Saxe2018). At this point, the field is largely in disagreement about whether there is good evidence for a capacity for belief representation in human infants. Rather than taking a side in this debate, however, we simply want to point out that whichever way it turns out, the ability to represent knowledge reliably precedes the ability to represent beliefs.
There is uncontroversial evidence that infants can appreciate how others' knowledge shapes their actions from at least six months of age. First, six-month-old infants are sensitive to the role of an agent's current or prior perceptual knowledge in constraining that agent's actions toward objects. For example, six-month-old infants usually assume that an agent who reaches for object A over B prefers object A and will continue to reach for that object in the future. However, if the agent's view of object B is occluded during the initial reaching demonstration, infants do not infer this preference; indeed, they make no prediction about the agent's future behavior when the agent has not seen both options. In this way, infants recognize that an agent's knowledge of her surroundings affects that agent's future behavior (Luo & Johnson, Reference Luo and Johnson2009). Six-month-olds also make similar inferences based on what an agent has seen previously. For example, after observing an interaction between a "communicator" who prefers object A over B and a naive "addressee," infants do not expect the addressee to provide the communicator's preferred object. However, infants at this age do expect the addressee to provide the communicator's preferred object when the addressee was present and watching during the communicator's initial preference display, or when the communicator uses an informative vocalization during the interaction (a speech sound). Hence, six-month-olds seem to recognize some of the conditions under which an individual will become knowledgeable about information (Vouloumanos, Martin, & Onishi, Reference Vouloumanos, Martin and Onishi2014).
These two examples and many others (e.g., Hamlin, Ullman, Tenenbaum, Goodman, & Baker, Reference Hamlin, Ullman, Tenenbaum, Goodman and Baker2013; Luo, Reference Luo2011; Luo & Baillargeon, Reference Luo and Baillargeon2007; Meristo & Surian, Reference Meristo and Surian2013) suggest that within the first year of life infants reason about agents' actions in terms of what those agents know and do not know and how others' knowledge states shape their actions. Yet there is comparatively little evidence that infants before the second year of life have an ability to represent others' beliefs (see Kovács et al., Reference Kovács, Téglás and Endress2010).
An important aspect of these studies (as well as the nonhuman primate research) is that they rely on tasks involving solely nonlinguistic responses. To complement this work, we next turn to studies that directly ask young children about what others "know" or "believe" and consider the developmental trajectory of knowledge and belief in these tasks.
4.2.2. Knowledge and belief in young children
Research on preverbal infants' understanding of others' epistemic states provides evidence about which cognitive capacities emerge earliest in life and may serve as the foundation for other later-emerging capacities demonstrated in verbal reports. While some uncertainty remains about the relationship between these two sets of capacities (see, e.g., Apperly, Reference Apperly2010; Baillargeon, Scott, & He, Reference Baillargeon, Scott and He2010; Carruthers, Reference Carruthers2013, Reference Carruthers2016), research suggests that the developmental sequence observed in preverbal infants bears a striking similarity to the sequence of development found by researchers studying verbal reports in preschool-aged children. Once again, the capacity for identifying and employing representations of knowledge precedes that for representations of belief.
One simple way to track the emergence of the concepts of knowledge and belief in childhood is to consider children's naturally occurring language production and comprehension. Studies of children's early language use suggest that children typically grasp factive mental-state terms first, and more specifically understand the mental-state verb “know” before “think.” Toddlers, for example, use “know” in their own utterances before they use “think” (e.g., Bartsch & Wellman, Reference Bartsch and Wellman1995; Bloom, Rispoli, Gartner, & Hafitz, Reference Bloom, Rispoli, Gartner and Hafitz1989; Shatz, Wellman, & Silber, Reference Shatz, Wellman and Silber1983; Tardif & Wellman, Reference Tardif and Wellman2000). While there remains some debate about how children understand these terms (Dudley, Reference Dudley2018), there is good evidence that preschoolers grasp the relative certainty conveyed by “know” before they grasp this for “think” (Moore, Bryant, & Furrow, Reference Moore, Bryant and Furrow1989). Moreover, children make systematic errors when using nonfactive mental-state terms like “think,” which suggest that they may first interpret these terms as factive (e.g., misinterpreting “think” as “know”) (de Villiers, Reference de Villiers, MacLaughlin and McEwen1995; de Villiers & de Villiers, Reference de Villiers, de Villiers, Mitchell and Riggs2000; de Villiers & Pyers, Reference de Villiers and Pyers1997; Johnson & Maratsos, Reference Johnson and Maratsos1977; Lewis, Hacquard, & Lidz, Reference Lewis, Hacquard and Lidz2012; Sowalsky, Hacquard, & Roeper, Reference Sowalsky, Hacquard and Roeper2009; though see Dudley, Orita, Hacquard, & Lidz, Reference Dudley, Orita, Hacquard and Lidz2015). To illustrate, one error that young children often make is to deny belief ascriptions whenever the agent's belief does not meet the standards of knowledge, for example, the belief is false. 
That is, when children are asked whether an agent "thinks" something, their patterns of answers indicate that they are actually answering a question about whether the agent "knows" something. Finally, corpus analyses of toddlers' uses of the term "know" also reveal an early-emerging understanding of knowledge: toddlers use the term both to signal their own ignorance and to request that knowledgeable others fill in gaps in their understanding (Harris, Bartz, & Rowe, Reference Harris, Bartz and Rowe2017a; Harris, Ronfard, & Bartz, Reference Harris, Ronfard and Bartz2017b; Harris, Yang, & Cui, Reference Harris, Yang and Cui2017c).
Another large body of research has experimentally varied the information an agent has acquired and then asked children to make inferences about what the agent “knows” or “thinks.” For instance, in a typical task assessing inferences of knowledge, an agent is either shown the contents of a closed container, or is not shown, and children are then asked whether the agent knows what is in the container. Here, success requires attributing knowledge when the agent saw the contents and attributing ignorance when the agent did not. Similarly, in a typical false-belief task, an agent sees that an object is in one location but does not see it get moved to another location, and children are asked where the agent thinks the object is. To succeed here, children must indicate that the agent thinks the object is in the first location, even though children themselves know it is in the second location.
Findings from studies using these verbal measures suggest that children succeed in inferring knowledge states before they successfully infer belief states. Whereas successful attribution of knowledge states often emerges when children are aged 3 (e.g., Pillow, Reference Pillow1989; Pratt & Bryant, Reference Pratt and Bryant1990; Woolley & Wellman, Reference Woolley and Wellman1993), successful attribution of false belief typically occurs only when they are 4 or older (Grosse Wiesmann, Friederici, Disla, Steinbeis, & Singer, Reference Grosse Wiesmann, Friederici, Disla, Steinbeis and Singer2017; see Wellman, Cross, & Watson, Reference Wellman, Cross and Watson2001 for a meta-analysis of findings from false-belief tasks). Particularly compelling evidence for this pattern comes from studies that have used a battery of theory of mind tasks developed by Wellman and Liu (Reference Wellman and Liu2004). These studies show that most children succeed in inferring knowledge states before they succeed in attributing false belief (Mar, Tackett, & Moore, Reference Mar, Tackett and Moore2010; Tahiroglu et al., Reference Tahiroglu, Moses, Carlson, Mahy, Olofson and Sabbagh2014) and that this developmental pattern is stable across a variety of populations, including deaf children and children with autism (Peterson, Wellman, & Liu, Reference Peterson, Wellman and Liu2005), and children from non-Western cultures (Shahaeian, Nielsen, Peterson, & Slaughter, Reference Shahaeian, Nielsen, Peterson and Slaughter2014; Shahaeian, Peterson, Slaughter, & Wellman, Reference Shahaeian, Peterson, Slaughter and Wellman2011; Wellman, Fang, Liu, Zhu, & Liu, Reference Wellman, Fang, Liu, Zhu and Liu2006).
Considering young children's capacity for making explicit, verbal judgments of knowledge and belief, one sees a familiar pattern emerge. Much like in non-verbal tasks, young children succeed in verbal tasks that require facility with representations of knowledge before they succeed in tasks that require facility with representations of belief.
4.2.3. Is this really the development of knowledge representations?
Again, a critical question is whether the research we have reviewed on theory of mind in human development actually provides evidence that infants and young children are representing knowledge rather than something else, such as perceptual access or simply true belief. That is, do we see the signature features of a genuine capacity for representing knowledge?
One important piece of evidence comes from studies asking whether infants have a capacity for egocentric ignorance: are they able to represent that others know something that they do not? Behne, Liszkowski, Carpenter, and Tomasello (Reference Behne, Liszkowski, Carpenter and Tomasello2012) had an experimenter hide an object in one of two boxes in a way that ensured that infants could not infer which box the object was in. When the experimenter then pointed to one of the two boxes, infants searched for the object in the location pointed to, suggesting that they understood that the experimenter knew something they did not (Behne et al., Reference Behne, Liszkowski, Carpenter and Tomasello2012). Along similar lines, Kovács, Tauzin, Téglás, Gergely, and Csibra (Reference Kovács, Tauzin, Téglás, Gergely and Csibra2014) provided evidence that 12-month-old infants use pointing to query others whom they take to be more knowledgeable than they are (Kovács et al., Reference Kovács, Tauzin, Téglás, Gergely and Csibra2014). Specifically, they found that infants exhibited a tendency to point in cases where the experimenter was likely to provide knowledge that the infant did not have (compared to a case where the experimenter was likely to share information that the infant already knew). Moreover, Begus and Southgate (Reference Begus and Southgate2012) demonstrated that 16-month-old infants' interrogative pointing is sensitive to the previously demonstrated competence of potential informants, suggesting that this pointing reflects a genuine desire to learn what others know, rather than merely believe (see Stenberg, Reference Stenberg2013, for convergent evidence with 12-month-old infants). Collectively, these studies provide evidence that infants have an early-emerging capacity to represent others as knowing something that they do not know – a signature property of knowledge representation.
Continuing later into development, this capacity is also evident in 3-year-olds' explicit attributions of knowledge (e.g., Birch & Bloom, Reference Birch and Bloom2003; Pillow, Reference Pillow1989; Pratt & Bryant, Reference Pratt and Bryant1990; Woolley & Wellman, Reference Woolley and Wellman1993) and their decisions about who to ask for help (Sodian, Thoermer, & Dietrich, Reference Sodian, Thoermer and Dietrich2006). Indeed, children's ability to represent others as knowing more than themselves can be seen as underwriting the important and well-studied development of young children's trust in testimony (see Harris, Koenig, Corriveau, & Jaswal, Reference Harris, Koenig, Corriveau and Jaswal2018, for a recent review). From the perspective of a human infant seeking to learn from others, being able to represent others as knowing something you do not know is critical for understanding who to learn from.
Second, studies show that young children fail to correctly attribute true beliefs if they fall short of knowledge (Fabricius, Boyer, Weimer, & Carroll, Reference Fabricius, Boyer, Weimer and Carroll2010; Fabricius & Imbens-Bailey, Reference Fabricius, Imbens-Bailey, Mitchell and Riggs2000; Oktay-Gür & Rakoczy, Reference Oktay-Gür and Rakoczy2017; Perner, Huemer, & Leahy, Reference Perner, Huemer and Leahy2015). These studies have employed scenarios that are similar to “Gettier” cases within epistemology. To illustrate, children in one study were told about a boy named Maxi who knows that his mother placed his chocolate in the red cupboard, but then while Maxi is gone, his sister takes the chocolate out of the cupboard and after eating some, considers putting it in the green cupboard. However, in the end, she decides to just put it back in the red cupboard (Fabricius et al., Reference Fabricius, Boyer, Weimer and Carroll2010). In this situation, Maxi has a justified true belief about his chocolate being in the red cupboard, but he is not properly described as knowing that his chocolate is in the red cupboard (Gettier, Reference Gettier1963). The striking finding is that even at an age where they can clearly represent knowledge (4- to 6-year-olds), children fail to correctly predict where Maxi will look for the chocolate when his true belief falls short of genuine knowledge. In contrast, when the paradigm is minimally changed such that it no longer involves a “Gettier” case but can be solved with genuine knowledge representations, young children no longer have any difficulty correctly predicting where the agent will look (Oktay-Gür & Rakoczy, Reference Oktay-Gür and Rakoczy2017). In short, children fail to correctly predict others' behavior when their true beliefs fall short of knowledge.
Third, there is good evidence that mental-state representation in infants is not completely explained by modality-specific perceptual access relations such as seeing-that or hearing-that. Infants make the same inferences about what others know based on both auditory and visual information, suggesting that there is some common, modality-independent representation of what others know (Martin, Onishi, & Vouloumanos, Reference Martin, Onishi and Vouloumanos2012; Moll, Carpenter, & Tomasello, Reference Moll, Carpenter and Tomasello2014). Additionally, infants attribute knowledge based on nonperceptual inferences that the agent should make. For example, Träuble et al. (Reference Träuble, Marinović and Pauen2010) showed 15-month-old infants an agent who is either facing the display (and thus has perceptual access to a ball changing locations) or is not facing the display but manually adjusts a ramp causing the ball to change locations (and thus can make a physics-based inference about the ball's changed location). In both cases, infants regarded the agent as having knowledge of the changed location of the ball and distinguished these cases from ones where the agent did not have reason to infer that the ball changed locations (Träuble et al., Reference Träuble, Marinović and Pauen2010). In fact, there is striking evidence that young children (3- to 5-year-olds) are actually surprisingly bad at tracking the modality through which agents gain knowledge of an object (O'Neill, Astington, & Flavell, Reference O'Neill, Astington and Flavell1992; Papafragou, Li, Choi, & Han, Reference Papafragou, Li, Choi and Han2007). Young children will, for example, assume that agents who have gained knowledge of an object through only one sense modality (e.g., touch) also have gained knowledge typically acquired through other sense modalities (e.g., what the object's color is). 
This kind of error suggests that children are relying primarily on a general capacity for representing others as simply knowing (or not knowing) things about the world rather than modality-specific perceptual access, such as seeing-that or feeling-that.
In sum, we find remarkably good evidence that the early-emerging theory of mind capacity has the signature features of genuine knowledge representation.
4.3. Automatic theory of mind in human adults
A third empirical way to test which cognitive capacities are more basic is to ask which processes operate automatically in human adults – that is, which capacities operate even when you do not want them to and continue to function even when the representation being computed is completely irrelevant (or even counterproductive) to the task at hand. To return to the previous example of number cognition, consider the difference between seeing 27 dots appear on a screen and seeing three dots appear on the same screen. When 27 dots appear, whether or not you represent the exact number of dots on the screen depends on whether you intentionally decide to engage in the controlled process of counting the dots. You could spontaneously decide to count the number of dots, but you could just as easily decide not to. The capacity giving rise to representations of 27 dots is not automatic. Representations of three dots work differently. The processes involved in representing three dots operate automatically: you could not fail to see that there are three dots, even if you wanted to (Dehaene & Cohen, Reference Dehaene and Cohen1994; Kaufman, Lord, Reese, & Volkmann, Reference Kaufman, Lord, Reese and Volkmann1949; Trick & Pylyshyn, Reference Trick and Pylyshyn1994). A growing body of literature has investigated the question of whether representations of knowledge and belief are automatic.
4.3.1. Current evidence for automatic belief representation
First, consider the evidence for whether people automatically compute belief representations. The most common approach has been to ask participants to make a judgment in response only to the information that they themselves have seen, while systematically varying the information that was presented to another (irrelevant) agent in the experiment (Apperly & Butterfill, Reference Apperly and Butterfill2009; Kovács et al., Reference Kovács, Téglás and Endress2010; Low & Watts, Reference Low and Watts2013; Samson, Apperly, Braithwaite, Andrews, & Bodley Scott, Reference Samson, Apperly, Braithwaite, Andrews and Bodley Scott2010; Surtees, Apperly, & Samson, Reference Surtees, Apperly and Samson2016a; Surtees, Samson, & Apperly, Reference Surtees, Samson and Apperly2016b). Researchers could then ask whether participants were automatically (i.e., mandatorily) computing the mental states of the irrelevant agent by testing whether participants' responses were influenced by the information available to that agent.
The evidence uncovered by research using this kind of paradigm has provided a relatively clear answer: there is little evidence that human adults automatically represent others' beliefs and some positive evidence that they do not. One prominent study has largely served as the primary piece of evidence for automatic belief representation (Kovács et al., Reference Kovács, Téglás and Endress2010). Importantly, however, further research demonstrated that the paradigm used in these studies suffered from subtle confounds in the timing of a critical attention check, and once these confounds were controlled for, or simply removed, the results no longer suggested that participants automatically computed others' beliefs (Phillips et al., Reference Phillips, Ong, Surtees, Xin, Williams, Saxe and Frank2015). Apart from this prominent piece of evidence, a few other studies have argued in support of automatic belief representation (Bardi, Desmet, & Brass, Reference Bardi, Desmet and Brass2018; El Kaddouri, Bardi, De Bremaeker, Brass, & Wiersema, Reference El Kaddouri, Bardi, De Bremaeker, Brass and Wiersema2019; van der Wel, Sebanz, & Knoblich, Reference van der Wel, Sebanz and Knoblich2014), but there is also considerable evidence that belief representation is not automatic (Apperly, Riggs, Simpson, Chiavarino, & Samson, Reference Apperly, Riggs, Simpson, Chiavarino and Samson2006; Kulke et al., Reference Kulke, Johannsen and Rakoczy2019; Low & Edwards, Reference Low and Edwards2018; Surtees, Butterfill, & Apperly, Reference Surtees, Butterfill and Apperly2012; Surtees et al., Reference Surtees, Apperly and Samson2016a, Reference Surtees, Samson and Apperly2016b).
Complementary evidence comes from studies asking whether representations of others' beliefs are computed when attentional resources or executive functions are taxed. This body of work provides clear evidence that representing what others believe requires deliberative attention and executive function (Apperly, Back, Samson, & France, Reference Apperly, Back, Samson and France2008; Apperly, Samson, & Humphreys, Reference Apperly, Samson and Humphreys2009; Dungan & Saxe, Reference Dungan and Saxe2012; Qureshi, Apperly, & Samson, Reference Qureshi, Apperly and Samson2010; Schneider, Lam, Bayliss, & Dux, Reference Schneider, Lam, Bayliss and Dux2012). To illustrate with one example, Dungan and Saxe (Reference Dungan and Saxe2012) had participants view videos in which an agent either formed knowledge of the location of an object or instead formed a false belief about the location of the object. They then asked participants to predict where the agent would look for the object when they were under various forms of cognitive load, using both verbal and non-verbal shadowing. When participants needed to represent the agent as having a false belief about the object's location, they made systematic errors in their predictions of where the agent would look. No such errors were observed when they could simply represent the agent as knowing the object's location.
In summary, the current state of the evidence suggests that not only are belief representations not automatic, but they rely on domain-general executive resources. Representations of beliefs work much more like representations of 27 dots than representations of three dots.
4.3.2. Current evidence for automatic knowledge representation
In contrast, one finds intriguing evidence that human adults may automatically represent what others know. In one study, Samson et al. (Reference Samson, Apperly, Braithwaite, Andrews and Bodley Scott2010) showed that participants took into account what others knew, even in cases where it was counterproductive to the task they were completing. Participants viewed a room with various numbers of dots on two opposing walls, and their task was to indicate the number of dots they saw on the walls. Critically, however, there was also an avatar standing in the middle of the room, facing only one of the walls, such that on some trials, the participant saw more dots than the avatar did. On trials where the number of dots seen by the avatar and participant conflicted, participants tended to make errors in a way that suggested they were automatically encoding the number of dots the avatar saw, and that this representation conflicted with their representation of the number of dots on the walls (despite the fact that the avatar was completely irrelevant to the task they were performing on these trials). This research, along with a number of subsequent studies (Surtees & Apperly, Reference Surtees and Apperly2012; Surtees et al., Reference Surtees, Apperly and Samson2016a, Reference Surtees, Samson and Apperly2016b), collectively suggests that when we automatically encode others' mental states, we represent the things that they know through clear perceptual access (sometimes referred to as "Level-1" perspective taking; see Flavell, Reference Flavell and Keasey1978, Reference Flavell, Beilin and Pufall1992).
At the same time, however, there is ongoing debate about whether these findings reflect genuine theory of mind representations or simply lower-level processing required by the task (Heyes, Reference Heyes2014b), with some researchers providing evidence for attentional confounds (Conway, Lee, Ojaghi, Catmur, & Bird, Reference Conway, Lee, Ojaghi, Catmur and Bird2017; Santiesteban, Catmur, Hopkins, Bird, & Heyes, Reference Santiesteban, Catmur, Hopkins, Bird and Heyes2014), and others providing empirical evidence against the proposed confounds (Furlanetto, Becchio, Samson, & Apperly, Reference Furlanetto, Becchio, Samson and Apperly2016; Gardner, Bileviciute, & Edmonds, Reference Gardner, Bileviciute and Edmonds2018; Marshall, Gollwitzer, & Santos, Reference Marshall, Gollwitzer and Santos2018).
Rather than attempting to adjudicate this debate, we instead want to step back and consider what this approach to investigating knowledge and belief has uncovered. There are two possibilities. One is that existing evidence from these paradigms shows that humans are capable of automatically attributing knowledge but are not capable of automatically attributing beliefs. The other is that, despite the initial evidence, the experimental paradigms that have been employed so far are not well-suited to demonstrate the existence of an underlying capacity for automatic theory of mind in general, and new paradigms need to be developed.
4.3.3. New horizons for the automatic theory of mind
The research reviewed in the previous sections on nonhuman primates and human cognitive development provides reason to expect that the capacity for knowledge representation is more cognitively basic than that for belief. Moreover, many basic cognitive capacities – those shared with close nonhuman primate relatives and early emerging in human development – also tend to operate automatically in humans. Hence, there is some reason to expect that adult humans may indeed have the capacity to automatically represent others' knowledge. If this is right, we should further expect such an automatically functioning capacity for knowledge representation to exhibit the same set of signature features of knowledge representations. Specifically, we would expect this capacity to (i) only support factive representations, (ii) not support representations of true belief when they fall short of knowledge, (iii) allow one to represent someone else as knowing something one does not, and (iv) not be tied to any particular sense modality. To the best of our knowledge, none of these four features has been directly examined in research on automatic theory of mind, and they thus point to exciting new avenues for future study as work on this topic presses forward.
4.4. Evidence from patient populations
The other tool that cognitive scientists use to determine which capacities are more basic is to ask which capacities are preserved in people who suffer from various cognitive impairments. The underlying rationale is that the more basic capacities tend to be preserved in patient populations. While there has not yet been a great deal of research that has specifically investigated which theory of mind capacities may be preserved across different patient populations, it is worth considering what the existing evidence may reveal about the capacities for representing knowledge and belief.
The most well-studied patient population in theory of mind research is people with autism spectrum disorder (ASD). Research on theory of mind in people with ASD has found that they often have difficulties in correctly representing others' beliefs (Baron-Cohen, 1997; Baron-Cohen, Leslie, & Frith, 1985; Frith, 2001; Moran et al., 2011; Schneider, Slaughter, Bayliss, & Dux, 2013; Senju, Southgate, White, & Frith, 2009). While there is also some evidence that young children with ASD have some difficulty with inferences about knowledge, representations of knowledge fare better than representations of belief when directly compared (Baron-Cohen & Goodhart, 1994; Leslie & Frith, 1988; Perner, Frith, Leslie, & Leekam, 1989; Pratt & Bryant, 1990). For example, in studies that tracked participants' eye movements during true- and false-belief tasks, researchers found that the eye movements of people with ASD differ from those of controls when the agent has a false belief, but do not differ when the agent simply has knowledge (Schneider et al., 2013).
Similarly, a number of studies have investigated how people with ASD differ from typically developing people in the ability for "Level-1" and "Level-2" theory of mind. In general, studies using Level-1 tasks (involving calculations of whether or not someone has perceptual or epistemic access to something) have found that people with ASD often perform just as well as typically developing controls (Baron-Cohen, 1989; Hobson, 1984; Leekam, Baron-Cohen, Perrett, Milders, & Brown, 1997; Reed & Peterson, 1990; Tan & Harris, 1991). In contrast, studies involving Level-2 tasks (involving taking someone's perspective even though it contradicts one's own) have shown that people with ASD often perform much worse than typically developing controls (Hamilton, Brindley, & Frith, 2009; Leslie & Frith, 1988; Reed & Peterson, 1990; Yirmiya, Sigman, & Zacks, 1994).
Taken together, this research suggests that the capacity for representing others' beliefs is disrupted in patient populations, whereas the capacity to represent what other people see or know remains comparatively preserved. This differential disruption provides evidence that the capacity for knowledge representation is more basic than belief representation. Not only is knowledge representation more basic in the sense of being simpler, but it clearly does not depend on belief representation, since representations of knowledge are preserved despite disruptions of belief representations.
4.5. Summary
The tools that cognitive scientists often appeal to when investigating which aspects of our minds are the most basic all suggest that it is the capacity to represent knowledge – not belief – that is the more basic component of theory of mind. Primate research indicates that our ancestors may have begun representing knowledge states before evolving the capacity to think about beliefs. Studies of human infant theory of mind find that infants begin to track what others know before tracking what others believe, and young children can talk about and make predictions using knowledge representations long before belief representations. Tests of automatic theory of mind in adult humans suggest that representations of knowledge may occur more automatically and effortlessly than representations of belief. Finally, evidence from patient populations demonstrates that the ability to represent beliefs can be disrupted while knowledge attributions remain comparatively preserved.
5. Do attributions of knowledge depend on belief?
Thus far, we have been reviewing evidence for the existence of a comparatively basic theory of mind representation that shares some of the signature features of knowledge. However, many of the studies on which we've focused have not specifically employed the concepts of knowledge and belief. Going forward, we will focus more specifically on the explicit representation of the concepts of knowledge and belief.
Much of the relevant research has been conducted in the field of experimental philosophy. While we review the evidence in detail below, the lesson that emerges should sound familiar at this point: knowledge representations seem to be more basic than belief representations, and representations of knowledge do not depend on representations of belief. This conclusion is supported by the fact that response times for knowledge assessments are faster than response times for belief assessments (Section 5.1), that knowledge attributions sometimes occur when belief attributions do not (Section 5.2), that in the best-fitting causal models of the process of mental-state attribution, the ascription of belief does not cause the ascription of knowledge (Section 5.3), and that knowledge attributions are better predictors of behavior than attributions of belief (Section 5.4). We take up each piece of evidence in turn.
5.1 Response times
Consider again the claim that people attribute knowledge by first determining that someone has a belief and then also checking to ensure that this belief has certain further properties (as outlined in Section 3.1). One way of investigating whether this claim is correct is to examine how quickly people are able to make knowledge and belief attributions. If attributing belief is a necessary step in attributing knowledge, then attributions of belief should be faster than attributions of knowledge. Recent research tested this prediction (Phillips et al., 2018).
In one study, participants read about agents who either (a) had a true belief about some proposition p, (b) were ignorant and thus had no belief regarding p, or (c) believed some other proposition q that was inconsistent with p (Phillips et al., 2018). After reading about agents in these states, participants were asked whether the agent "knows" that p, or instead whether the agent "thinks" that p. They were instructed to answer as quickly and accurately as they possibly could.
Participants were systematically faster in both attributing and denying knowledge than in attributing or denying belief – precisely the opposite of the predictions of the belief-as-more-basic view. This pattern was found to extend cross-linguistically, as well, even for a language where the term “think” is more frequent than the term “know”: French participants were faster to correctly decide what an agent knows (“sait”), than what an agent thinks (“pense”). This provides clear evidence that people's attribution of knowledge cannot depend on a prior attribution of belief.
5.2 Patterns of knowledge attribution versus belief attribution
Still, it might be thought that one prediction of the belief-as-more-basic view is clearly right: all cases in which people are willing to say that someone has knowledge should also be cases in which they are willing to say that the person has the corresponding belief. After all, how could it possibly turn out that a person knows something if she does not even believe it?
Surprisingly, however, results from experimental philosophy provide reason to doubt that even this prediction is correct. In an important study, researchers tested this claim (Myers-Schulz & Schwitzgebel, 2013; see also Radford, 1966, and Murray, Sytsma, & Livengood, 2012). In one of the scenarios used in this study, participants read about an "unconfident examinee," Kate, who studied very hard for an exam on English history. The exam's final question asked about the date Queen Elizabeth died. Kate had studied this fact many times, but she was not confident that she recalled the answer, so she decided to "just guess" and wrote down "1603," which is in fact the correct answer. The vast majority of participants agreed that Kate knew that Queen Elizabeth died in 1603, but only a small minority agreed that she believed it. A similar pattern emerged across the other scenarios.
5.3 Does belief attribution predict knowledge attribution?
Thus far, we have been asking whether there are cases in which people are inclined to say that an agent does know something but does not believe it. However, testing whether knowledge attributions are accompanied by belief attributions provides limited information about how these judgments are processed. Even if people strongly tend to attribute knowledge only if they will also attribute belief, it could still be that belief attribution is not central to the psychological process of knowledge attribution.
A series of recent studies suggests that people do not base their knowledge attributions on belief attributions. In one set of studies (Turri & Buckwalter, 2017), researchers asked participants to read simple stories about agents and then record several judgments. These judgments included whether the agent knows a particular proposition, along with judgments about several factors that many theorists associate with knowledge, including whether the relevant proposition is true, whether the agent thinks that it is true, whether the agent has good evidence for thinking that it is true, and whether the agent should base a decision on it. In a multiple linear regression model used to predict knowledge attributions, the strongest predictors were judgments about whether the proposition was true and whether the agent should make a decision based on it. Belief attributions did not predict knowledge attributions even when a large number of other relevant factors were controlled (Turri & Buckwalter, 2017). In another set of studies using a similar paradigm, researchers instead used a causal search algorithm to study the relationships among the judgments. In the best-fitting causal model, no kind of belief attribution was found to cause knowledge attributions (Turri, Buckwalter, & Rose, 2016). The upshot of this research is that, even if it turns out that there is some form of belief that is entailed by knowledge, there is currently no evidence that even this minimal form of belief consistently plays a role in the formation of knowledge representations.
5.4 Knowledge and belief in action prediction
A dominant perspective in cognitive science is that our ordinary predictions of others' behavior rely primarily on the beliefs we attribute (e.g., Baker, Saxe, & Tenenbaum, 2009; Rakoczy, 2009; Tomasello, Call, & Hare, 2003). For example, we predict that a traveler will take his umbrella because we attribute to him the belief that it will rain and the desire to stay dry. With the belief and desire attributions in place, there is little additional work left for attributions of knowledge to do in predicting action.
A recent study tested this possibility using a simple paradigm. Participants read brief vignettes about an agent in various situations, which varied the information the agent had access to (e.g., an agent who was following another person and who could or could not see where they turned). After reading the vignette, participants made a belief attribution, a knowledge attribution, and a behavioral prediction. The key question was whether the behavioral prediction would be more strongly predicted by the belief attribution or the knowledge attribution. Knowledge attributions consistently predicted behavioral predictions more strongly than belief attributions did. Moreover, a causal search algorithm suggested that knowledge attributions caused behavioral predictions in contexts where belief attributions did not (Turri, 2016a). Whereas previous research has demonstrated a unique role for knowledge attributions in evaluating how people should behave (e.g., Turri, 2015a, 2015b; Turri, Friedman, & Keefner, 2017), these findings suggest a previously unrecognized role for knowledge attributions in predicting how people will behave.
6. Stepping back
The goal of this paper has been to explore the evidence for two competing views of the basic way in which we make sense of others' minds. We saw that every tool used to date to test which kind of representation is more basic – comparative research (Section 4.1), developmental research (Section 4.2), automatic processing in human adults (Section 4.3), and research with patient populations (Section 4.4) – converges on a clear picture: knowledge attribution appears to be a more basic capacity than belief attribution. Moreover, research from experimental philosophy provided independent evidence that knowledge attribution does not depend on belief attribution (Section 5). Critically, we have also illustrated that the theory of mind capacity revealed by these various methods exhibits a set of signature features that are specific to knowledge (Section 2): (i) it is factive, (ii) it is not just true belief, (iii) it allows for egocentric ignorance, and (iv) it is not modality-specific.
6.1. Why knowledge?
A natural question that remains unanswered is why such a capacity for knowledge representation would have ended up being one that is cognitively basic. While there is some good evidence that knowledge representations are used for action prediction (Section 5.4), there are also many instances in which the kind of capacity we have provided evidence for will be poorly suited to predicting other agents' behavior. For example, because it only supports factive representations, this capacity for knowledge representation would never allow you to predict others' behavior when they happen to disagree with you about the way things are. It is similarly useless when others do agree with you but do so for the wrong reasons – that is, when they have a true belief that falls short of knowledge. This is odd: understanding why someone believes what they do seems entirely unnecessary for predicting their behavior; to do that, you only need to know what they believe. Moreover, knowledge representations allow you to represent others as knowing things that you do not know. It is easy to see why this is not particularly useful for action prediction. Imagine a ball was placed in one of two boxes, but you do not know which one. Knowing that someone else knows where the ball is will do you little good in predicting where they will look for it. The question before us is this: given that our more basic ability to represent knowledge has signature features that seem oddly ill-designed for predicting others' behavior, what else might it be for? A promising alternative picture is that the basic capacity for knowledge representation evolved for learning from others.
It is not hard to see how representations of knowledge are well-suited for learning about the extra-mental world (Craig, 1990). Return to the example of a ball being in one of two boxes, but imagine that you simply want to know where the ball is. From this perspective, the features that are unique to knowledge begin to make perfect sense. For example, you likely do not want your understanding of the ball's location to be informed by what someone else merely believes about it, especially when those beliefs fall short of genuine knowledge – either because they are in conflict with your understanding of the world or because the way the person came to form them is deviant in some way (Gettier, 1963). In either case, the other person's beliefs will not be a reliable guide to the way the world actually is, and thus, if you want to learn about the world, it would be better simply to ignore others' beliefs under such conditions. Moreover, the rather sophisticated capacity to represent others as knowing more than you do makes perfect sense here too. While it is clearly not useful for predicting which box the other person will look in, it is incredibly useful for determining who can accurately inform you about where to look. This feature of knowledge (and its contrast with belief) is even reflected in the language we use when talking about others' mental states. We can felicitously talk about others as knowing where something is, knowing how something is done, or even knowing who did something. But we cannot similarly talk about others as believing where something is, believing how something is done, or believing who did something (Egré, 2008; Hintikka, 1975). It is knowledge, but not belief, that allows us to represent others as reliable guides to the actual world.
In short, a promising answer to the question "Why knowledge?" is that knowledge representations are fundamental because they allow us to learn from others about the world. Obviously, this does not mean that we could never use knowledge representations to predict others' actions; in fact, we have provided clear evidence that we can and do (Section 5.4). Rather, the suggestion is that the capacity for knowledge representation is clearly better designed for learning about the extra-mental world than for predicting others' actions, and thus is more likely to have originated for the former, even if it can also be usefully employed for the latter.
6.2. Learning from knowledge
If this hypothesis is correct, the literature on learning from others might offer us valuable clues about the nature of knowledge representations and their pervasive role in cognition. To take one example, the evidence we have reviewed on nonhuman primates suggests that they can use their capacity for knowledge representation to learn, for example, about the location of food based on what others know (Krachun et al., 2009). More generally, a large body of research has demonstrated that nonhuman primates, such as chimpanzees, have an impressive ability to learn from conspecifics, whether in gaining knowledge of how to forage for food (e.g., Rapaport & Brown, 2008) or how to solve novel problems (Call, Carpenter, & Tomasello, 2005; Call & Tomasello, 1995; Myowa-Yamakoshi & Matsuzawa, 2000). Indeed, even capuchin monkeys have an ability to learn foraging techniques from others in a way that is notably sensitive to instances in which others are comparatively more knowledgeable, for example, demonstrations involving novel techniques or unfamiliar foods (Perry, 2011). Importantly, there is good reason to think that this ability for learning from others relies on representations of others' knowledge rather than their beliefs. Not only are knowledge representations generally better suited for learning about the extra-mental world, but there is little evidence for an ability for belief representation in chimpanzees, and even less in the case of monkeys (Section 4.1). Of course, the point here is not that every instance of learning from others is an instance of knowledge representation – there are all kinds of strategies one can use to learn from others (Heyes, 2016).
Our point is just that if you have the capacity for knowledge representation, which we think nonhuman primates do (Section 4.1), then you have a capacity that is well-suited for helping you learn from others.
Similarly, the comparatively basic nature of knowledge representations fits nicely with the literature on learning from others in humans (e.g., Buckwalter & Turri, 2014; Koenig & McMyler, 2019; Mills, 2013; Sobel & Kushnir, 2013). As recently reviewed by Harris et al. (2018), an impressive body of research has documented the capacity to understand others as sources of unknown information from early in human infancy. For example, when young children do not know something themselves, they often request that information from others who know more than they do, even within the first year of life (Kovács et al., 2014), and then selectively learn from others who are knowledgeable rather than not (Hermes, Behne, Bich, Thielert, & Rakoczy, 2018; Moses, Baldwin, Rosicky, & Tidball, 2001). They also seek new knowledge from others who are more likely to understand the relevant part of the world – for example, looking to an experimenter rather than their mother for information about a novel toy (Kim & Kwak, 2011) or attending selectively to information from others who have demonstrated expertise in a given topic (Stenberg, 2013).
Moreover, infants as young as eight months will specifically ignore information from others who have demonstrated themselves to be unreliable (Begus & Southgate, 2012; Brooker & Poulin-Dubois, 2013; Harris & Corriveau, 2014; Koenig & Harris, 2005; Poulin-Dubois & Brosseau-Liard, 2016; Tummeltshammer, Wu, Sobel, & Kirkham, 2014; Zmyj, Buttelmann, Carpenter, & Daum, 2010). As reviewed above (Section 4.2), the current best evidence suggests that children must be relying on representations of knowledge rather than belief in determining from whom to learn.
From one perspective, this kind of proposal may seem surprising or counterintuitive. From another perspective, however, it seems obvious: when parents teach their children facts about the world, they are not primarily teaching their children to better understand what they, the parents, think about the world; they are primarily teaching their children to better understand the way the world actually is. Put more simply, we teach others (and expect them to learn) about what we know, not what we believe.
6.2.1. Learning from others, cultural evolution, and what is special about humans
A capacity for reliably learning from others is critically important not only within a single lifespan, but also across lifespans – at the level of human societies. Indeed, this capacity to reliably learn from others has been argued to be essential for humans' unique success in the accumulation and transmission of cultural knowledge (e.g., Henrich, 2015; Heyes, 2018). Perhaps unsurprisingly, the argument we have made about the primary role of knowledge representations in cognition fits nicely with this broad view of why humans have been so successful: the capacity for reliably learning from others is likely supported by our comparatively basic theory of mind representations.
At the same time, this suggestion cuts against another common proposal about which ability underwrites the wide array of ways in which humans have been uniquely successful, namely the ability to represent others' beliefs (Baron-Cohen, 1999; Call & Tomasello, 2008; Pagel, 2012; Povinelli & Preuss, 1995; Tomasello, 1999; Tomasello, Kruger, & Ratner, 1993). While the ability to represent others' beliefs may indeed turn out to be unique to humans and critically important for some purposes, it does not seem to underwrite humans' capacity for the accumulation of cultural knowledge. After all, precisely at the time in human development when the vast majority of critical learning occurs (infancy and early childhood), we find robust evidence for a capacity for knowledge rather than belief representation (Section 4.2).
6.3. A call to arms
Since the 1970s, research has explored belief attribution in a way that brings together numerous areas of cognitive science. Our understanding of belief representation has benefitted from a huge set of interdisciplinary discoveries from developmental studies, cognitive neuroscience, primate cognition, experimental philosophy, and beyond. The result of this empirical ferment has been extraordinary, yielding deep insight into the nature of belief representation.
We hope this paper serves as a call to arms for cognitive scientists to join researchers who have already begun to do the same for knowledge representation. Our hope is that we can marshal the same set of tools and use them to get a deeper understanding of the nature of knowledge. In doing so, we may gain better insight into the kind of representation that may – at an even more fundamental level – allow us to make sense of others' minds.
Acknowledgments
We would like to thank Jorie Koster-Hale, Joe Henrich, Brent Strickland, Angelo Turri, Evan Westra, and Timothy Williamson.
Financial support
This research was supported in part by funding to JT by the Social Sciences and Humanities Research Council of Canada (SSHRC: 435-2015-0598) and the Canada Research Chairs Program (CRC: 950-231217).
Conflict of interest
None.