The ability to represent others' mental states has been regarded as social cognition's primary building block for over 40 years (Apperly, Reference Apperly2010; Premack & Woodruff, Reference Premack and Woodruff1978). Phillips et al. take this one step further in their impressive piece of interdisciplinary scholarship: They argue that knowledge representation is a fundamentally different and more basic process than belief representation. This claim is based, in large part, on the observation that nonhuman primates (sect. 4.1 in Phillips et al.) as well as developmental (sect. 4.2) and clinical (sect. 4.4) populations show atypical behavior in false-belief tasks, despite their intact performance in tasks that may tap into the representation of knowledge.
A basic premise underlying the authors' argument is that atypical performance in a false-belief task arises from a lack of ability to represent another's belief (Dennett, Reference Dennett1978). But is this necessarily the case? We recently pointed out in our newly developed relational mentalizing framework (Deschrijver & Palmer, Reference Deschrijver and Palmer2020) that although passing a false-belief task is sufficient to conclude that a belief-representation ability is present, failing one does not yield evidence that it is absent. False-belief tasks are social conflict designs: The other typically holds a mental state that is manipulated to be irreconcilable with the participant's own (e.g., they may think that the ball is in the basket, whereas you think it is in the box). To resolve the mental conflict that arises from having to represent both mental states, a neural mechanism may need to inhibit one of the competing representations before focusing on the other (i.e., mental conflict monitoring; Deschrijver & Palmer, Reference Deschrijver and Palmer2020). In a young child, for example, a failure of this mechanism to inhibit the child's own misaligned mental state, despite an intact ability to represent both states, may manifest as an inability to verbalize or show sensitivity to the other's false belief. Even neurotypical adults, who undoubtedly represent others' mental states, can make errors in a false-belief task if they fail to suppress their own mental representation, suggesting that the mechanism indeed exists (Keysar, Lin, & Barr, Reference Keysar, Lin and Barr2003; for other evidence, see Deschrijver & Palmer, Reference Deschrijver and Palmer2020). Ineffective mental conflict monitoring may also be reflected in greater interference from the other's belief when the task requires one to inhibit the representation of the other's mental state to focus on one's own, as reported in some adults on the spectrum (Deschrijver, Bardi, Wiersema, & Brass, Reference Deschrijver, Bardi, Wiersema and Brass2016). Developmental (e.g., Kovács et al., Reference Kovács, Téglás and Endress2010), nonhuman primate (e.g., Martin & Santos, Reference Martin and Santos2014), and autistic (e.g., Deschrijver et al., Reference Deschrijver, Bardi, Wiersema and Brass2016) populations may thus show atypical results in false-belief tasks even while representing the other's belief.
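To make this logic concrete, the sketch below is a minimal illustration in Python; the function, variable names, and binary "inhibition" switch are our own simplification for exposition, not a published computational model. It shows how an agent that fully represents both mental states can still fail a change-of-location false-belief task when inhibition of its own representation fails.

```python
# Minimal, purely illustrative sketch of the mental conflict monitoring
# account (Deschrijver & Palmer, 2020). Names and structure are our own
# simplification, not a published computational model.

def false_belief_response(own_belief: str, other_belief: str,
                          inhibition_succeeds: bool) -> str:
    """Predict the answer to 'Where will the other look for the ball?'.

    Both mental states are represented; the outcome depends only on
    whether the agent's own (misaligned) state is successfully inhibited.
    """
    if own_belief == other_belief:
        return other_belief  # no conflict: nothing to inhibit
    # Conflicting states: mental conflict monitoring must suppress one
    # representation before the other can guide the response.
    return other_belief if inhibition_succeeds else own_belief


# The other saw the ball in the basket; it was then moved to the box unseen.
print(false_belief_response("box", "basket", inhibition_succeeds=True))
# -> "basket": task passed
print(false_belief_response("box", "basket", inhibition_succeeds=False))
# -> "box": task failed, despite the other's belief being fully represented
```

The point of the sketch is that the failing branch never touches the representation of the other's belief: on this account, an error reflects a breakdown in conflict resolution, not in belief representation itself.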
The tasks identified by Phillips et al. as showing intact “knowledge” representation in young, autistic, and nonhuman primate populations broadly follow two methodological designs: First, they assess the understanding of another's ignorance (e.g., Flombaum & Santos, Reference Flombaum and Santos2005; Luo & Johnson, Reference Luo and Johnson2009; Santos, Nissen, & Ferrugia, Reference Santos, Nissen and Ferrugia2006), meaning that the other does not have knowledge about the object of interest. Second, they investigate whether one understands that the other has knowledge (a mental state that is true) about the object's location, with the observer either having or not having access to this knowledge (e.g., Behne, Liszkowski, Carpenter, & Tomasello, Reference Behne, Liszkowski, Carpenter and Tomasello2012; Luo & Johnson, Reference Luo and Johnson2009). To solve these tasks successfully, there is no need for the brain to engage in mental conflict monitoring regarding the object's location: Either there is no other-related mental state to be represented (and, therefore, it cannot clash with one's own), or the other's mental state aligns with one's own. Populations that do represent others' mental states, but are unable to deal with mental conflict, should hence not experience any difficulties. Consistent with this, difficulties arise when the other's (unknown) knowledge starts misaligning with one's own understanding of the world (e.g., Krachun, Carpenter, Call, & Tomasello, Reference Krachun, Carpenter, Call and Tomasello2009), and when the design temporarily involves a manipulation of conflict between the other's and the participant's understanding of reality (e.g., where the object is conspicuously relocated without the other seeing it, but then put back; Fabricius, Boyer, Weimer, & Carroll, Reference Fabricius, Boyer, Weimer and Carroll2010; Gettier, Reference Gettier1963; Horschler, Santos, & MacLean, Reference Horschler, Santos and MacLean2019; Kaminski, Call, & Tomasello, Reference Kaminski, Call and Tomasello2008). What Phillips et al. consider to be a true-belief design, with representations that do not qualify as “knowledge,” thus involves a short-term manipulation of mental conflict as well. This means that atypical results in such “true belief” tasks may be attributable to mental conflict monitoring rather than belief representation issues in these populations, too. To show that the distinction between representing another's belief versus knowledge consists of more than semantics, the field may thus need to show that populations such as young children, nonhuman primates, and individuals on the spectrum continue to perform well in “knowledge” tasks that do contain mental conflict (e.g., Samson et al., Reference Samson, Apperly, Braithwaite, Andrews and Bodley Scott2010; see Table 3 in Deschrijver & Palmer, Reference Deschrijver and Palmer2020, for the optimal dependent measures), and fail in (true and false) “belief” tasks that don't. Mental conflict monitoring is also likely effortful, using cognitive resources and showing relationships with executive functions (Carlson, Reference Carlson2010; Carlson, Mandell, & Williams, Reference Carlson, Mandell and Williams2004a; Carlson & Moses, Reference Carlson and Moses2001; Carlson, Moses, & Breton, Reference Carlson, Moses and Breton2002; Carlson, Moses, & Claxton, Reference Carlson, Moses and Claxton2004b).
This may result in seemingly less automatic or slower responses in false-belief designs versus those knowledge designs that do not involve mental conflict (see sects. 4.3 and 5.3 in Phillips et al.). From all these arguments, it follows that the differential performance of autistic, developmental, and nonhuman primate populations in “belief” versus “knowledge” tasks could be seen not as a consequence of these tasks tapping into two different types of representations that are differentially affected (i.e., “knowledge” versus “beliefs”), but rather as an indication that these populations have issues with resolving mental conflict rather than with attributing any representation to others per se.
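Put schematically, this account predicts that failure tracks the presence of mental conflict, not the “knowledge” versus “belief” label of a task. The toy enumeration below is illustrative only: the task labels are simplified stand-ins for the designs cited above, and the binary conflict flag is our own coding. It makes the predicted pattern explicit for populations with weak mental conflict monitoring.

```python
# Illustrative predictions only: each entry pairs a simplified task label
# with whether the design involves mental conflict. Under the conflict
# monitoring account, populations with weak conflict monitoring should
# struggle with exactly the conflict tasks, regardless of whether a task
# carries the "knowledge" or the "belief" label.
tasks = [
    ("ignorance attribution ('knowledge')",              False),
    ("aligned knowledge attribution ('knowledge')",      False),
    ("Gettier-style relocate-and-return ('knowledge')",  True),
    ("true-belief with temporary conflict ('belief')",   True),
    ("false-belief ('belief')",                          True),
]

for label, conflict in tasks:
    prediction = "difficulty expected" if conflict else "intact performance"
    print(f"{label:50s} -> {prediction}")
```

Note that the predicted split cuts across the knowledge/belief boundary: two “knowledge” designs fall on the intact side and one on the difficulty side, which is the dissociation the proposed experiments would need to test.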
Representing another's belief versus knowledge is thus not as easily dissociable as the authors portray: If the two representation mechanisms were truly distinct, shouldn't one always be able to assess the truthfulness of another's mental state before representing it? This seems challenging in the real world, where others' mental states are more complex than those typically used in the mentalizing domain. Regardless of which party holds the facts, however, mental conflict may arise after representing another's position if it is misaligned with one's own. In sum, when taking mental conflict monitoring into account, the idea that the representation of beliefs and knowledge are two fundamentally different things, with one being more basic than the other, may stand on shaky ground.
Financial support
The author received funding from the Research Foundation Flanders – FWO (postdoctoral fellowship).
Conflict of interest
None.