In their target article, Veissière et al. set out to close the divide between different accounts of mindreading by proposing a model framed in terms of the recently popular free-energy framework (Friston 2009; 2012). As they explain, they aim for "a compromise position between internalist, brain-based approaches (e.g., simulation and theory–theory theories), which emphasize the neural machinery in individual humans' brains that is necessary to read other minds, and externalist approaches (e.g., radical enactive and cultural evolutionary theory)" (sect. 1.3.3, para. 7). In this way, the authors seem to follow recently popular "pluralist" approaches, which allow for more than one strategy for understanding other minds (Fiebich & Coltheart 2015; Newen 2015; Zahavi 2014), while also being strongly committed to the unificatory force of the free-energy formulation. Although we applaud the article's core proposal of establishing a common formal foundation for bringing opposing accounts into a useful dialog, we think that Veissière et al. fail to appreciate important differences in the scope and explanatory aims of these accounts. We want to clarify these differences and point out that the competing positions are cast at different levels of analysis (Marr 1982). This means that although the competing models of mindreading do fit Veissière et al.'s formalization of the target phenomenon, their position counts only as a first step toward a formal analysis of the explanandum and does not allow for disambiguating between different proposals regarding how it comes about.
There are two main sources of disagreement in the literature on mindreading. The first point of contention, as Veissière et al. correctly point out, is the way in which the target phenomenon should be defined. Supporters of theory-theoretic and simulationist accounts of mindreading construe the explanandum as an internal, computational process involving the manipulation of representations. Defenders of externalist accounts, on the other hand, claim that the phenomenon in question is something that happens between people, not just inside their skulls. What is crucial here is that this debate can be understood as taking place at what David Marr called the computational level of analysis, the level concerned with defining what problem the cognitive system solves. This can be brought to light using the example of Gallagher's (2008) enactive view, which offers an entirely phenomenological model that purposefully sets complicated issues of neural processing aside. Gallagher disagrees with proponents of the internalist accounts by challenging the idea that mindreading should be characterized in terms of "prediction and explanation" of others' behavior. Instead, he takes the target phenomenon to be more akin to "something like evaluative understanding" (Gallagher 2001, p. 94). Authors such as Gopnik and Wellman (2012) or Carruthers (2015) can happily acknowledge that mindreading often feels like this kind of understanding of others, but it is not the issue they want to address (see below). It is in the context of this debate about the nature of the explanandum that the authors' proposal seems most promising. As they point out, the free-energy framework offers a formal toolkit that does not allow for a "strict distinction between dynamics (as emphasized by externalists) and inference (the focus of internalist models)" (sect. 1.3.3, para. 7). Thus, it may not only provide a common platform for formulating and comparing different models of mindreading, but also promote the formation of new models integrating insights from both sides of the debate.
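The unificatory point can be made precise with the standard decomposition of the variational free-energy functional (following, e.g., Friston 2009); the notation below is the generic textbook form, not Veissière et al.'s own model:

\[
F[q] \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
\;=\; -\ln p(o) \;+\; D_{\mathrm{KL}}\big[\,q(s) \,\|\, p(s \mid o)\,\big],
\qquad
\dot{\mu} \;=\; -\,\partial_{\mu} F .
\]

Read one way, minimizing \(F\) over \(q\) is approximate Bayesian inference about hidden states \(s\) given observations \(o\), which is the internalist gloss; read the other way, the gradient flow \(\dot{\mu} = -\partial_{\mu} F\) is simply the dynamics of a coupled agent–environment system, which is the externalist gloss. One functional licenses both descriptions, which is exactly why the framework blurs the dynamics/inference divide.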
However, the target paper does not go beyond redefining the explanandum in terms of free energy, as it does not touch on the second important issue in the mindreading literature: how to adequately characterize the neural processing that underlies the capacity in question. This debate, waged predominantly between proponents of the two dominant internalist paradigms (though some anti-representationalists are also involved; see, e.g., Hutto & Myin 2017), is concerned with what Marr called the algorithmic level of analysis. In other words, the issue at the core of this disagreement is not what the brain is doing, but how it is doing it: the nature of the representational vehicles and neural algorithms that make mindreading possible. Admittedly, the authors seem to acknowledge this much when they state that their proposal "would be difficult to test (due to its generality)" (sect. 5.1, para. 4), but they hope that it can help "derive specific integrative models" (sect. 5.1, para. 4). However, it seems to us that Veissière et al.'s account cannot offer serious insights at the algorithmic level, because both theory- and simulation-theorists already employ probabilistic computational models compatible with the free-energy formulation to support their claims (causal Bayesian graphs in the case of the former, Gopnik & Wellman 2012; probabilistic forward models in the case of the latter, Gallese 2003, p. 521). Following Pickering and Clark (2014), we think that the only way to make progress in this debate is not to integrate different models under one computational description, but to identify the specific constraints and empirical predictions these models place on the physical mechanisms that could implement them.
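To see why the formalism alone cannot disambiguate, note that at the computational level both camps target the same quantity: the posterior over a hidden mental state given observed behavior. The symbols \(m\), \(b\), \(f\), and \(\Sigma\) below are our illustrative shorthand, not notation drawn from any of the cited models:

\[
p(m \mid b) \;\propto\; p(b \mid m)\, p(m),
\qquad
p(b \mid m) \;\approx\;
\begin{cases}
\text{a learned causal Bayesian graph over } m \text{ and } b & \text{(theory-theory)}\\[2pt]
\mathcal{N}\big(b;\, f(m),\, \Sigma\big), \text{ where } f \text{ is one's own forward model} & \text{(simulation theory)}
\end{cases}
\]

Both ways of realizing the likelihood \(p(b \mid m)\) minimize the same free energy and thereby satisfy the same computational-level description; what separates them is how that likelihood is represented and computed, that is, their algorithmic commitments. Conformity with the free-energy formulation is therefore no evidence for either.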