In this commentary, I discuss the fundamental reason why evolutionary psychology and other attempts to design computational accounts of the mind have not been successful, and suggest that the architectural changes considered in After Phrenology (Anderson Reference Anderson2014) would likely be insufficient for computational modeling of the mind. Anderson appropriately challenges some of the accepted wisdom, but the challenges, as well as the suggested modifications, are insufficient. The three suggested principles – “mixing and matching” neural elements, “procedural and behavioral reuse” (emphasis in original), and that “not every cognitive achievement … need be supported by a specific targeted adaptation” (sect. 1, para. 3) – are useful for overcoming some misconceptions about the working of the mind, but they are not sufficient.
Recent cognitive-mathematical results have connected Gödel's breakthrough concerning the incompleteness of logic (Gödel Reference Gödel, Feferman, Dawson and Kleene2001) to the difficulties of computational modeling of the mind. Specifically, whereas Gödel demonstrated the insufficiency of logic in the 1930s, to this very day logical models of the mind and of artificial intelligence dominate computational accounts of the mind and psychological explanations. The logical bias affects even mathematical approaches specifically designed to overcome logical difficulties, such as “neural networks” and “fuzzy logic” (Perlovsky Reference Perlovsky1998; Reference Perlovsky2001; Reference Perlovsky2006). Whereas every computational paradigm has its own set of mathematical “reasons” for the failings of the resulting algorithms (Perlovsky Reference Perlovsky2001), a more general reason is the reliance on logic at some algorithmic steps. For example, every learning algorithm has to use an operation that is mathematically equivalent to the logical statement “this is food.” The Gödelian incompleteness of logic was shown to be equivalent to the combinatorial complexity of algorithms, which has plagued artificial intelligence and neural networks since the 1960s (Perlovsky Reference Perlovsky2001; Perlovsky et al. Reference Perlovsky, Deming and Ilin2011).
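To make the scale of this difficulty concrete, consider an illustrative calculation (the particular numbers are mine, chosen only for illustration, and are not taken from the target article): if each of N bottom-up signals must be matched to one of M candidate models, then testing the alternatives by exhaustive logical evaluation requires considering on the order of M^N combinations; for a modest M = 10 models and N = 100 signals this is 10^100 combinations, far beyond any conceivable computational resource. This exponential growth of logical alternatives is what “combinatorial complexity” refers to above.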
For a very long time, logic was considered the best way of deducing scientific truths. But this is a misconception. Aristotle taught his students not to use logic for explaining the mind (Barnes Reference Barnes1984; Perlovsky Reference Perlovsky2007b). According to Aristotle, the fundamental operation of the mind is a process in which illogical forms as potentialities meet matter and become logical forms as actualities. Today we describe this process as the interaction of top-down with bottom-up signals. A mathematical description of the Aristotelian process, in which vague-fuzzy representations interact with bottom-up signals and turn into crisp representations, is given by dynamic logic, or DL (Perlovsky Reference Perlovsky2006; Vityaev et al. Reference Vityaev, Perlovsky, Kovalerchuk and Speransky2013). Experimental confirmation that this process is an adequate model of perception in the brain was given by Bar and colleagues (Reference Bar, Kassam, Ghuman, Boshyan, Schmid, Dale, Hämäläinen, Marinkovic, Schacter and Rosen2006). That study also demonstrated that illogical states and processes in the brain-mind are not accessible to consciousness. This point explains why thinking is biased toward logic: Subjective consciousness accesses only logical states in the brain. This selective access to logical states has fundamental consequences for cognitive science, psychology, and artificial intelligence: The intuitions of cognitive scientists, psychologists, and mathematicians modeling the mind are mostly about logical states and processes in the mind. Correspondingly, cognitive models mostly describe logical mechanisms.
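A schematic rendering of the core DL step may be useful here (simplified from Perlovsky Reference Perlovsky2006; the notation is mine): each bottom-up signal X(n) is associated with each top-down model m not by a logical yes/no decision but by a graded association weight f(m|n) = r(m) l(X(n)|m) / Σm′ r(m′) l(X(n)|m′), where l(X(n)|m) measures the similarity between the signal and the model and r(m) is the model's prior weight. Model parameters are then updated using these graded associations while the models' initial vagueness (for example, large similarity widths) is gradually reduced, so that the associations evolve “from vague to crisp” without ever enumerating the combinatorial set of logical alternatives.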
The challenges in After Phrenology to widely accepted views about the workings of the mind assail some logical foundations of the accepted models (among which modularity is a most obvious consequence of the logical bias) and hence are highly desirable. But these challenges and the proposed modifications are not sufficient: The logical bias is too strongly ingrained in our thinking, and overcoming it would require conscious examination of the unconscious fundamentals of existing models, of analyses of experimental data, and of approaches to designing experiments. Mathematics is likely to help in these analyses and would be highly desirable. In this commentary, I suggest several reasons for the insufficiency of the current ideas and briefly discuss future directions.
Let me repeat: The suggested principles are useful for understanding the working of the mind, but they are not sufficient. For example, mixing and matching the same neural elements in new ways is mathematically similar to the problem of tracking multiple targets with multiple models. In this problem, models have to be mixed and matched to signals; it faces Gödelian (combinatorial) complexity and is unsolvable unless DL – the Aristotelian process “from vague to crisp” – is used (Perlovsky et al. Reference Perlovsky, Deming and Ilin2011); but using the DL-Aristotelian process requires a new type of intuition about the mechanisms of the mind. For the same reason, Hebbian adaptation faces Gödelian complexity: An algorithm would have to augment Hebbian adaptation by searching for the synapses belonging to specific “sub-networks.” Similar problems would be faced by “neural search” (sect. 2.4) as well as by mechanisms for perceiving “external symbols” (p. 190) as different from mere objects. The logic-based system ACT-R cannot serve as a computable cognitive model even with modifications.
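To illustrate the contrast, here is a minimal toy sketch of the vague-to-crisp idea applied to associating signals with multiple candidate models. This is my own illustrative code, not Perlovsky's implementation and not part of the target article; the data, parameters, and annealing schedule are arbitrary choices. Instead of logically enumerating the M^N possible assignments of N signals to M models, the associations are kept graded and gradually sharpened as the models' vagueness is reduced:

```python
# Illustrative toy only: vague-to-crisp association of N signals with M
# candidate models (in the spirit of dynamic logic; not Perlovsky's code).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: N scalar signals generated by three unknown sources.
true_centers = np.array([-4.0, 0.0, 5.0])
signals = np.concatenate([c + 0.3 * rng.standard_normal(50) for c in true_centers])

M = 3                                                   # candidate models
centers = np.linspace(signals.min(), signals.max(), M)  # crude initial parameters
sigma = signals.std()                                   # large width = vague models

for _ in range(60):
    # Graded association f(m|n): how strongly signal n supports model m
    # (no hard, logical assignment of signals to models is ever made).
    d2 = (signals[None, :] - centers[:, None]) ** 2
    w = np.exp(-d2 / (2.0 * sigma**2))
    f = w / w.sum(axis=0, keepdims=True)

    # Update each model's parameter from its softly associated signals.
    centers = (f * signals[None, :]).sum(axis=1) / f.sum(axis=1)

    # Reduce vagueness: the associations gradually become crisp.
    sigma = max(0.3, 0.9 * sigma)

print(np.sort(centers))  # with this toy data, the crisp models end up near -4, 0, 5
```

The point of the sketch is only that the costly combinatorial search over hard, logical assignments is replaced by an iterative process whose per-step effort grows roughly linearly with the number of signals and models.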
I would like to mention a few topics, missing from After Phrenology, that are closely related to logical complexity and are important for a computable theory of the mind (Perlovsky Reference Perlovsky2006; Reference Perlovsky, Gudwin and Queiroz2007a; Reference Perlovsky2007b; Reference Perlovsky2009; Reference Perlovsky2010a; Reference Perlovsky2010b; Reference Perlovsky2013a; Reference Perlovsky2013b; Reference Perlovsky2013c; Reference Perlovsky2014; Reference Perlovsky2015):
1. The role of emotions as fundamental to cognition;
2. Actions, specifically human actions, mostly occur inside the brain-mind;
3. Differences between symbols and signs; because symbol is a loaded word in psychology, computational cognition must differentiate between signs as conventional marks and symbols as cognitive processes connecting concepts, emotions, and conscious and unconscious processes (like “simulators,” Barsalou Reference Barsalou1999). It was the conflation of symbols and signs that destroyed the attempt to construct “symbolic AI” in the 1960s;
4. The difference between higher cognition and perception;
5. Interaction of language and cognition;
6. Grammar is important for language emotionality; it is not an “inessential” system of logical rules in a textbook;
7. Cultural affordance is a beautiful idea, but it is inexorably logical and therefore would likely be incomputable.
I appreciate After Phrenology's emphasis that the componential computational theory of mind often misleads psychological intuition; yet adding environmental and bodily structures will not be sufficient to overcome its deficiencies.