Whether intended or not, there is an illuminating ambiguity in the subtitle of the target article. The phrase “toward a science of intentional change” can be interpreted in at least two ways. First, there is the science that studies changes in human intentional or representational systems, such as language and culture. This science would investigate the ways in which the human capacity to represent the world has evolved, perhaps drawing on insights from the evolution of representational and communicative systems found in other species. This leads to a theoretical question about the nature of human cognition, posed through the lens of the theory of evolution. How can human intentional systems be adapted to solve certain cognitive problems and yet be flexible enough to occupy a variety of cognitive niches?
Second, there is the science that would attempt to achieve changes in human society intentionally. This science would be not merely explanatory but applied, akin to engineering, deliberately aiming to modify human social arrangements in order to achieve certain outcomes. This leads to a more practical question. How can the human environment be purposely altered in order to encourage cooperation and eliminate destructive behavior? There is also perhaps a third reading of the title, which straddles the first two and concerns the science that would seek to alter human society by purposely changing our intentional or representational systems. How can human intentional systems be modified in such a way as to re-engineer our social arrangements for the sake of better outcomes? In this commentary, I will raise concerns about the answers that Wilson et al. give to each of the first two questions in turn, concerns that also pertain to the third question.
It is tempting to answer the first question in a glib fashion, simply by saying something about striking a balance between adaptiveness and flexibility. Indeed, the authors themselves, in using the analogy of the immune system, acknowledge that there is no reason adaptiveness and flexibility cannot coexist. It is clearly a matter of achieving the right combination of innate responses (so as not to have to reinvent the proverbial wheel for every variant on a familiar situation) and learning (so as not to produce an inappropriate programmed response to a situation bearing a merely superficial resemblance to a previously experienced one).
A variety of answers to this question have been given by cognitive scientists working within a broadly evolutionary framework (see, e.g., Buller 2005; Carey & Spelke 1994; Cummins & Cummins 1999; Mallon & Stich 2000). There is a great deal more work to be done on this topic with respect to specific human cognitive capacities, as the balance is likely to differ from one human ability to another. However, any attempt of this kind seems incompatible with what has been called the “massive modularity hypothesis,” which posits “hundreds or thousands” of cognitive modules (Tooby & Cosmides 1995), each specifically designed for a narrowly defined cognitive task. On such an evolutionary model, there is little room for a compromise between adaptation and flexibility, simply because the model emphasizes adaptive cognitive modules to the exclusion of cognitive plasticity. Wilson et al. do not seem to acknowledge that this version of evolutionary psychology is incompatible with what we know about the flexible behavior of human beings.
When it comes to the second question, I have two concerns, one theoretical and the other ethical, both of which I think deserve more attention from the authors. The theoretical concern has to do with the feasibility of predicting human behavior reliably enough to warrant constructing a science of social change. One of the lessons of the cognitive revolution is that human behavior cannot always be predicted, though it can often be successfully explained in hindsight. Not only is the prediction of human behavior infeasible when one restricts oneself to citing environmental variables; even positing internal cognitive states does not always enable one to predict behavior (Andrews 2012). The unreliability of prediction when it comes to complex natural systems, whether meteorological systems, biological ecosystems, or human societies, means that it is risky to intervene in them to produce desirable outcomes. The practice of cloud-seeding in meteorology is just one example of the way in which attempts to interfere in the workings of a complex natural system can have unforeseen consequences. Similar considerations apply to biological ecosystems: It would be dicey to alter a population's environment in order to get a lineage to evolve in a certain direction. Likewise, an applied science of intentional social change is liable to be on shaky ground, as the specificities of each human community and social context are likely to render prediction quite unreliable.
Given the precariousness of predicting the effects of social interventions, the moral hazards of such attempts at social engineering loom especially large. There have no doubt been various successes in, say, modifying classroom settings so as to improve learning outcomes; but generalizing from these success stories to human society at large is a risky endeavor. The advantages of enhancing human cooperative behaviors, reducing violence, and achieving other desirable outcomes need to be weighed seriously against the ethical costs of interventions involving social control that may have unforeseen consequences. Among the principles that the authors endorse for the modification of human behavior is that of “consensus decision making,” which holds that people prefer “to do what we want, not what they want.” But if so, then attempts to become “wise managers” of social behaviors are unlikely to be welcomed in general, and are liable to backfire.