1. Introduction
Neuroscientist Donald Pfaff claims that altruistic behaviour is hard-wired into our brains.Footnote 1 Neurobiological mechanisms draw us out of self-interest and into the sphere of others by blurring the lines of identity. Cognitive psychologist Steven Pinker attributes the decline of violence over the centuries to a particular conception of morality, which ‘is a consequence of the interchangeability of perspectives and the opportunity the world provides for positive-sum games’.Footnote 2 The ability to consider different perspectives is a function of our capacity to reason. Yet something has clearly gone awry in our altruistic brains and our rationality, given the magnitude of harm humans inflict on one another. Another comment from Pinker provides some insight into the apparent loss of human pro-sociality:
The indispensability of reason does not imply that individual people are always rational or are unswayed by passion and illusion. It only means that people are capable of reason, and that a community of people who choose this faculty and exercise it openly and fairly can collectively reason their way to sound conclusions in the long run.Footnote 3
Pinker's second comment suggests that reason itself has motivating power and that ‘passion’ and ‘illusion’ prevent people from acting on it. The tendency of certain emotions and irrational beliefs to interfere with reason may be part of an explanation for why individuals and groups harm others at local and global levels.
Interpersonal harm is not so much the result of a failure to recognise moral reasons as a failure to be motivated to act on them. Since the question at issue is how to reduce harm by improving moral behaviour, one would expect moral philosophers to offer guidance and provide the impetus for the action necessary to achieve this end. With arguably a few exceptions, however, they have not been successful in this regard.Footnote 4 Philosophical arguments are generally ineffective in moving people and governments to act morally in respecting the rights, needs, and interests of all people. Some of these arguments rely on objective moral truths that presumably exist independently of our beliefs and attitudes.Footnote 5 Many moral philosophers focus primarily on showing why one moral theory is superior to others in purportedly establishing these truths and often use implausible thought-experiments to defend their positions.Footnote 6 The overemphasis on theoretical aspects of morality leaves their arguments motivationally inert.
Educational and political institutions have also failed to instil the motivation for moral behaviour. Noting this failure, Ingmar Persson and Julian Savulescu emphasise the need to enhance our moral sensitivity and agency through biological means:
Our knowledge of human biology, in particular of genetics and neurobiology, is now beginning to supply us with the means of directly affecting the biological or physiological bases of human motivation […]. We shall suggest that there are in principle no philosophical or moral objections to the use of such biomedical means of moral enhancement – moral bioenhancement, as we shall call it – and that the current predicament of humankind is so serious that it is imperative that scientific research explore every possibility of developing effective means of moral bioenhancement, as a complement to traditional means.Footnote 7
As the last part of this passage indicates, Persson and Savulescu do not believe that moral bioenhancement (henceforth MB) alone will move us to engage in the type of moral behaviour necessary to meet the challenges of the ‘predicament of humankind’. Still, they suggest that MB would be the most critical intervention in addressing and ideally resolving this predicament. While they focus mainly on the threat from weapons of mass destruction (WMDs) and the effects of climate change, the problem also includes the harm resulting from genocide, civil wars, economic inequality, and smaller-scale criminal behaviour.
The concept of MB is fraught with scientific, ethical, and political challenges. I discuss MB as a set of collective action problems. Psychoactive drugs or other forms of neuromodulation designed to enhance moral sensitivity would have to produce the same or similar effects in the brains of a majority of people. This is a questionable hypothesis, given structural and functional differences in people's brains and differences in how their neural networks mediate behaviour. A significant number of healthy subjects would have to participate in clinical trials testing the safety and efficacy of the drugs, which may expose them to unreasonable risk. Concerns about the ethical justification of the research alone may prevent such a project from getting off the ground. Even if research demonstrated that the drugs were safe and effective in improving moral motivation, a majority of the population would have to co-operate in a moral enhancement programme for such a project to succeed. This goal could be thwarted if enough people opted out and refused to enhance. It could also be thwarted if a minority who cause much of the world's harm not only refused to enhance but also took advantage of the co-operation of others. To avoid either of these outcomes, Persson and Savulescu argue that moral enhancement should be compulsory rather than voluntary. But the collective interest in harm reduction through compulsory enhancement would come at the cost of a loss of individual freedom.
After analysing and discussing these issues, I conclude on a sceptical note. Although there is an urgent need to promote moral behaviour and reduce harm among humans, there would be too many problems with implementing and enforcing a programme of moral enhancement on a grand scale. Traditional indirect methods of moral enhancement, such as education, parenting, and political institutions, have fallen far short of this goal and at best can only approximate it. Yet even if these methods are limited in what they can achieve, they may be more viable and defensible alternatives to voluntary or compulsory alteration of thought and behaviour by manipulating the brain through pharmacological or other means.
2. The Neurobiology of Moral Decision-Making
The goal of any programme of moral enhancement would be to provide the motivation to act and facilitate the execution of this motivation in action. Although John Martin Fischer and Mark Ravizza develop and defend their theory of reasons-responsiveness to account for moral responsibility, it can be applied to moral enhancement. Reasons-responsiveness consists in the capacity to recognise reasons for or against actions and the capacity to react to these reasons in morally appropriate behaviour.Footnote 8 Reasons-responsiveness involves not only the cognitive capacity to recognise moral reasons but also the emotional and volitional capacity to translate these reasons into right actions. These actions ideally would balance deontological considerations of treating individuals as ends in themselves and not merely as meansFootnote 9 with consequentialist considerations of bringing about outcomes that reduce or prevent harm and increase human welfare.Footnote 10 On this view, however, unlike the Kantian one, the crucial issue is not whether one acts from duty or from inclination in treating others as ends in themselves, but that one does treat them in this way. It is the action itself that matters. The moral worth of the action depends more on how it affects others than on the mental states behind it.
There has been disagreement in the debate on moral enhancement as to whether its goal would more likely be achieved by focussing on cognitive or on emotional processing. In a seminal paper on this topic, Thomas Douglas identifies what he calls ‘counter-moral emotions’ as targets for neuromodulation and a more general programme of moral enhancement.Footnote 11 These emotions include ‘a strong aversion to certain racial groups’ and ‘the impulse towards violent aggression’.Footnote 12 Douglas argues that the most promising way to correct these tendencies would be ‘an enhancement that will expectably leave the enhanced person with morally better motives than she had previously’.Footnote 13 Emphasising the emotional aspect of behaviour, Douglas says that ‘the distinctive feature of emotional moral enhancement is that, once the enhancement has been initiated, there is no further need for cognition: emotions are modified directly’.Footnote 14 In contrast, John Harris argues that moral enhancement is an extension of cognitive enhancement and that ‘emotional moral enhancement is simply ethically otiose’.Footnote 15 Harris claims that ‘it is to rationality and its evolutionary origins, rather than to the emotions that we should look’.Footnote 16 He further claims that ‘it is human imagination that is one of the most potent of cognitive faculties that enables us to put ourselves into, if not the shoes of relevant others, at least into an understanding of the nature and consequences of our acts and decisions, and of the effects of those decisions on others and the world’.Footnote 17
Douglas’ and Harris’ analyses of moral behaviour in terms of separate cognitive and emotional faculties are symptomatic of a discredited dualistic model of explaining this behaviour. This model offers at best an incomplete explanation of moral reasoning and action and thus fails to include all the capacities that need to be targeted for moral enhancement. Increased imagination and decreased counter-moral emotions would strengthen the recognitional component of reasons-responsiveness. But they would not automatically strengthen the reactive component of responsiveness and move one to perform the right actions in different circumstances.
Cognition and emotion are not segregated but integrated processes whose interaction enables moral sensitivity and rational and moral decision-making.Footnote 18 These normative capacities are mediated by a distributed network of neural circuits. The prefrontal cortex (PFC) is particularly important in regulating the capacity to reason. It also has a critical role in emotion regulation. This is not only because of its projections to and connectivity with the amygdala and other limbic structures such as the cingulate cortex, but also because reason and emotion are both partly processed within the PFC itself. The complex function of the PFC shows that it is oversimplified and inaccurate to divide the brain into distinct regions mediating separate cognitive and emotional processing at the mental level.
Psychopathy is instructive in this regard. This is a behavioural disorder characterised by impaired capacity for empathy, impaired responsiveness to fear-inducing stimuli, and failure to conform to social norms. Because of these impairments, many psychopaths cause a substantial amount of harm to others. They have difficulty representing the outcomes of actions, which can weaken or preclude their moral judgement by weakening or precluding their ability to foresee how their actions can adversely affect others.Footnote 19 Functional imaging studies of psychopaths’ brains have shown dysregulation in PFC-amygdala pathways, and this finding is part of an explanation of their behaviour. Their behaviour results not only from dysfunction in cognitive or emotional processes but from dysfunctional interaction between them. Any improvement in a psychopath's behaviour would require enhancing his cognitive-emotional processing by enhancing how his neural circuits generate and sustain this processing. Still, it is not clear to what extent psychopaths’ brain abnormalities influence their wrongful and harmful behaviour. One must be cautious in drawing inferences from their brain abnormalities to their impaired moral reasoning and immoral behaviour because the first is not necessarily the cause of the second. Claims about the power of structural MRI and functional PET and fMRI scans to validate these inferences warrant caution as well. Scans of people's brains are only indirect measures of brain activity. They are visualisations of statistical analyses based on averaging large numbers of images. Brain scans are more appropriately described as scientific constructs than direct “real-time” pictures of the brain.
Empathy is a critical component of moral sensitivity. Harris’ comment about moral enhancement depending on the cognitive capacity for imagination in ‘putting ourselves into the shoes of relevant others’ reflects a narrow conception of empathy. While imagination may promote concern for others that extends spatially and temporally beyond local interpersonal bonds held together by emotion, empathy is not a purely cognitive disposition. Nor is empathy separable from reason; rather, it can inform reason in guiding action. Jean Decety and Jason Cowell explain:
Empathy […] is not always a direct avenue to moral behavior. Indeed, at times it can interfere with morality by introducing partiality, for instance by favoring in-group members. But empathy can provide the motivational fire and push toward seeing a victim's suffering end, irrespective of group membership and culturally determined dominance hierarchies, preventing rationalization of injustice and derogation.Footnote 20
Decety and Cowell further state that empathy consists of cognitive, affective, and motivational components. The cognitive component enables one ‘to consciously put oneself into the mind of another and imagine what that person is thinking or feeling’.Footnote 21 The affective component enables one to become emotionally aroused by others’ condition. The motivational component involves the urge to care for another's welfare. These mental capacities and their neural underpinning are not domain-specific but domain-general and involve an interconnected network of circuits distributed throughout cortical and limbic brain regions. This network comprises the ‘moral brain’.Footnote 22 It is similar to what Andrea Glenn, Adrian Raine, and Robert Schug describe as the ‘moral neural circuit’, which they point out is dysfunctional in psychopaths and may at least partly explain their failure to align their behaviour with moral and legal norms.Footnote 23 This circuit would have to be modulated in order to generate or improve moral sensitivity in the psychopath or others who lack this disposition. MB involving neuromodulation would have to influence not just one but all or most of the components in the moral neural circuit and how they interact with each other in regulating behaviour. Still, what Decety and Cowell describe as the ‘urge’ to care for another's welfare is not by itself sufficient to move one to act in a way that exemplifies this care. One must translate this urge into morally appropriate behaviour. Any form of moral bioenhancement would have to facilitate the exercise of this motivational capacity in action.
Three hypothetical drug interventions have been proposed as possible ways of modulating neural networks associated with moral reasoning. Theoretically, they would enhance the capacity to translate the urge to care for others into action and thereby enhance moral behaviour. But there are neurobiological, psychological, and social reasons to be sceptical of the idea that taking a psychotropic drug targeting these networks would have this effect.
One study conducted by Molly Crockett and co-investigators showed that the selective serotonin reuptake inhibitor (SSRI) citalopram increased harm aversion in healthy subjects.Footnote 24 They claimed that ‘these findings have implications for the use of serotonergic agents in the treatment of antisocial and aggressive behaviour’ in promoting pro-social behaviour.Footnote 25 Crockett is more circumspect about the behaviour-modifying potential of these drugs in a more recent contribution to a symposium on moral enhancement:
Most neurotransmitters serve multiple functions and are found in many different brain regions […] serotonin plays a role in a variety of other processes [than harm aversion] including (but not limited to) learning, emotion, vision, sexual behavior, sleep, pain and memory, and there are at least 17 different types of serotonin receptors that produce distinct effects on neurotransmission. Thus, interventions […] may have undesirable side effects, and these should be considered when weighing the costs and benefits of the intervention.Footnote 26
Because neurotransmitters influence the activity of neural circuits in different ways across individuals, raising their levels would likely have different effects on the behaviour of healthy people taking SSRIs, including no effect at all. It is unclear whether or how these drugs would influence cognitive-emotional processing. Variable responses to psychotropic drugs could mean that there would not be the uniformity of effects on neural circuitry necessary to reach a threshold of enhanced moral behaviour among a majority of the population. Moreover, while neurotransmitters have a critical role in regulating the activity of these circuits and the mental processes they sustain, circuit activity and mental processes are not a function of these substances alone but also of genetic, endocrine, immune, and environmental influences on neural circuitry. The capacity for moral sensitivity and moral reasoning, and impairments in these capacities, depend on more than the function or dysfunction of a particular neurotransmitter. It is not known how increasing levels of serotonin would affect all of the serotonergic receptors in the distributed neural network that partly regulates moral reasoning. So it is unclear whether or to what extent increasing levels of serotonin would make one more responsive to moral reasons when acting. Increasing harm aversion would not imply a corresponding increase in moral sensitivity, nor would it make one more attentive to the needs of others or necessarily enhance the motivation to act in response to these needs. A small number of people with enhanced moral behaviour could have some positive effects on interpersonal relations; but unless a majority were enhanced, the effects on reducing collective harm and increasing collective welfare would likely be negligible. Because no two people's brains are alike and probably would not respond to the drugs in the same way, it is unlikely that broad use of an SSRI such as citalopram would move people to act morally and reduce harm.
The beta-adrenergic receptor antagonist propranolol has been used as an indirect means of cognitive enhancement. By blocking beta-adrenergic receptors, the drug blunts the effects of the stress hormones adrenaline and noradrenaline released in response to perceived threatening stimuli. This in turn dampens the autonomic response to stress in the form of increased heart rate, palpitations, sweating, and other symptoms. By dampening this response, the drug may enable one to avoid being distracted by these symptoms and remain focussed on a demanding cognitive task such as surgery, musical performance, or public speaking. The drug could also be used as one component of moral bioenhancement. One study has shown that propranolol can reduce implicit negative racial bias.Footnote 27 As with drugs used to raise levels of serotonin, however, one cannot infer that reducing fearful or negative perceptions of other groups, or increasing harm aversion, will result in an increase in morally justifiable actions. Reducing bias would not necessarily translate into pro-social behaviour. The cognitive and affective capacities necessary for co-operation require much more than harm aversion or an absence of bias. Racial bias detected in laboratory experiments is not a reliable predictor of immoral behaviour in real life. In addition, as with other pharmacological agents proposed for moral enhancement, trade-offs between the positive and negative effects of propranolol in the body and brain would have to be considered.
Some neuroscientists have claimed that the neuropeptide oxytocin, given its neuromodulating effects, may have the greatest potential as a morality-enhancing agent. Oxytocin plays a critical role in social cognition.Footnote 28 Its highest levels are found in the hypothalamus, and it influences activity in the hypothalamic-pituitary-adrenal (HPA) axis by inhibiting the fear response to social stimuli in the amygdala. Increasing levels of oxytocin in the brain through intra-nasal administration could reduce fear and increase trust and social co-operation. Yet any positive social effects of oxytocin may be more local than global and limited to particular groups, with negative effects more likely to occur outside the compass of these groups. Studies show that this neuropeptide facilitates social bonding but also produces effects that are not pro-social and may have evolved to promote offspring survival.Footnote 29 Oxytocin may promote antisocial rather than pro-social behaviour on a broad scale by strengthening a person's bonding and identification with an in-group and the perception of those in out-groups as competitors or threats. It could promote rather than prevent or reduce aggression between groups and individuals.Footnote 30
It would be an oversimplification to claim that increasing the level of a psychoactive substance would make one less self-regarding and more other-regarding. Moral behaviour is a function of multiple biological, psychological, and environmental factors, not just neural chemicals and circuits. It is a disposition that depends on factors both inside and outside of the brain. The idea that pharmacological neuromodulation alone could enhance moral behaviour fails to appreciate the complexity of human moral psychology.Footnote 31
Research in the form of randomised placebo-controlled clinical trials would be the only empirically verifiable way to determine the effects of psychotropic drugs or other brain- and mind-altering interventions on the neural networks mediating moral reasoning. These trials would be necessary to determine whether or to what extent the drugs could enhance moral behaviour. The trials would be necessary to establish a ratio of potential benefits to risks for research subjects taking these drugs. But there would be methodological and ethical problems with conducting this research.
3. Research Challenges
There are two questions about research into MB that are especially ethically fraught. The first question is whether any risk to which human subjects participating in clinical trials testing the drugs would be exposed could be ethically acceptable. Would the potential of the drugs to produce toxic levels of the relevant neurotransmitters or cause other adverse effects influence the permissibility of conducting these trials? The second question is how the outcome would be assessed with regard to the research question driving the trial. How would we know whether a drug actually enhanced moral behaviour?
In medical research, clinical trials are necessary to determine the safety and efficacy of any potential drug therapy. The aim of a Phase I trial for a drug is to determine its safety by determining its toxicity in human subjects following tests in animal models. Toxicity is measured in terms of the highest dose a human can tolerate without serious side effects. This is necessary in order to move to Phase II and III trials testing the efficacy of a drug or procedure in treating a disease or condition. In an MB trial, the main concern about safety would arise when researchers increased normal levels of a neurotransmitter with a drug to determine whether this could increase moral sensitivity. What the research would have to determine is how much of an increase in the relevant neurotransmitter would be optimal for achieving this effect. Again, though, not all people's brains are alike. Different people have different optimal levels of serotonin and other substances in their brains regulating mood, motivation, and behaviour. When researchers tried to determine the dose of a drug necessary to increase harm aversion, some subjects could be harmed by an excess of the neurotransmitter in their brains, which could rise to a toxic level. For example, increasing normal levels of serotonin in the brain has resulted in the serotonin syndrome, which can be caused by the adverse interaction of two or more drugs, one of which is an SSRI. This syndrome occurs in approximately 14–16% of persons who overdose on a drug in this class.Footnote 32 Its symptoms may include euphoria, rapid muscle contractions, hyperthermia, and in severe cases coma and death. Similarly, in an attempt to determine the optimal level of oxytocin to promote trust and pro-social behaviour, it is possible that some researchers would unwittingly administer a dose of this neuropeptide that would raise its level above what the brain could tolerate. Like the serotonin syndrome, this could result in deleterious effects in the brain.
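To make the escalation logic concrete, the following is a minimal sketch, in Python, of the conventional ‘3+3’ dose-escalation rule commonly used in Phase I trials. The dose levels and per-dose toxicity probabilities are invented for illustration and are not drawn from any actual or proposed MB study.

```python
import random

# Hypothetical illustration only: the dose levels and per-dose probabilities of a
# dose-limiting toxicity (DLT) are invented for this sketch, not taken from any study.
DOSE_LEVELS = [5, 10, 20, 40, 80]                 # arbitrary units of a hypothetical agent
TOXICITY_PROB = [0.02, 0.05, 0.12, 0.30, 0.55]    # assumed probability of a DLT per subject

def dlt_count(dose_index, n, rng):
    """Number of dose-limiting toxicities observed in a cohort of n subjects."""
    return sum(rng.random() < TOXICITY_PROB[dose_index] for _ in range(n))

def three_plus_three(rng):
    """Run one simulated 3+3 escalation; return the index of the declared MTD, or None."""
    mtd = None
    for i in range(len(DOSE_LEVELS)):
        toxicities = dlt_count(i, 3, rng)
        if toxicities == 1:                       # 1/3 DLTs: expand the cohort to six
            toxicities += dlt_count(i, 3, rng)
        if toxicities >= 2:                       # too toxic: stop; previous dose is the MTD
            return mtd
        mtd = i                                   # dose tolerated; try the next level
    return mtd

if __name__ == "__main__":
    rng = random.Random(42)
    results = [three_plus_three(rng) for _ in range(1000)]
    for i, dose in enumerate(DOSE_LEVELS):
        print(f"dose {dose}: declared MTD in {results.count(i) / len(results):.1%} of runs")
    print(f"no tolerated dose found: {results.count(None) / len(results):.1%} of runs")
```

Even in this toy model, some simulated cohorts receive doses above the level ultimately declared tolerable; this is precisely the kind of exposure to risk that healthy subjects in an MB trial would have to bear while researchers searched for an ‘optimal’ dose.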
In response to the concern about risk of drug toxicity in an MB trial, some might point out that healthy subjects participating in drug trials for medical treatments are also exposed to risk. But trials of that kind are necessary to test drugs for treating disease. Healthy subjects in an MB trial would be exposed to risk in testing a drug for a condition that is not a disease in the generic medical sense of mental or physical dysfunction. The medical or non-medical purpose of the trial could influence the acceptability of risk. There would be no grounds for medicalising immoral or amoral behaviour because such behaviour is not simply a manifestation of dysfunction in the brain, mind, or body. Immorality is not a disease in the medical sense relevant to the ethical criteria necessary to conduct clinical trials. More precisely, while some types of neurophysiological dysfunction may correlate with some types of immoral behaviour, many factors in addition to processes in the body and brain are needed to explain this behaviour. Others might object that it would be paternalistic to deny healthy subjects the opportunity to participate in an experiment testing the effects of drugs intended to improve behaviour. But investigators conducting the trial would have a duty of nonmaleficence not to expose healthy subjects to unreasonable risk. The potential for positive behaviour modification would probably not be enough to justify exposing these subjects to any risk of neurobiological sequelae. These considerations of risk underscore the difficulty investigators would have in obtaining ethics approval from the appropriate regulatory body to conduct the trials.Footnote 33 The categorical difference between the physiological risk and potential behavioural benefit would make it difficult to calculate a risk-benefit ratio and judge that the risk was justifiable. Even if the risk of adverse neuropsychiatric outcomes from drugs increasing neurotransmitter levels were low, it would still be difficult to justify exposing subjects to any risk in testing the effects of a drug used for a condition that was not a disease.
Many people would argue that any improvement in moral behaviour would not offset the risk of neuropsychiatric sequelae from psychoactive drugs used in an MB trial. This would limit voluntary enrolment. Altruistic individuals might agree to be included in the earliest phase of the trial testing drug safety. They would already have a high level of moral sensitivity and would not need moral enhancement. In these respects, they would be exposing themselves to some risk of harm without any compensatory benefit. Self-interested individuals with less moral sensitivity might agree to be included in a later phase of a trial testing drug efficacy. They might benefit from an increase in their moral disposition if the drugs were deemed effective. This could generate unfair inequality between altruistic and self-interested subjects in the distribution of risks to benefits. Such an unequal distribution of potential negative and positive effects from the drugs is an additional reason why research necessary to make MB available to all would have difficulty meeting ethics standards.
Pharmaceutical companies have scaled back the development of new antidepressant and antipsychotic drugs and the funding of the research necessary to bring them to market.Footnote 34 The estimated profit from developing and testing these drugs may not be enough of a return on investment to fund the studies. This may limit the development of new therapies that could treat psychiatric diseases more effectively. These companies would be even more reluctant to underwrite drug trials for moral enhancement because the outcomes of the trials could not be measured by the standard quantitative and qualitative methods used in research designed to control or prevent diseases. Lack of funding from pharmaceutical companies or other private sources would mean that much if not all of the funding for MB research would have to come from publicly funded health care institutions such as the US National Institute of Mental Health (NIMH). In addition to the challenge of recruiting enough subjects for enough trials to determine the drugs’ safety and efficacy, the research would be costly. With limited public resources to conduct research, this would raise the question of whether research into drugs that could enhance moral behaviour should be treated on a par with research aimed at developing safer and more effective treatments for and prevention of neuropsychiatric and other disorders. Some might argue that the harm resulting from behaviour symptomatic of our limited moral outlook is as significant as the harm resulting from disease. But the salience and global burden of neurological, psychiatric, and other diseases, as documented by empirical data, give priority to treating or preventing these diseases over trying to improve moral behaviour.
In her discussion of the ‘urgency of moral enhancement’, Elizabeth Fenton points out that ‘if we do not continue scientific research into enhancement, if we halt it out of concern for the consequences, then we have no hope of achieving the great moral progress that will ensure the survival of our species’.Footnote 35 This comment raises three issues. First, it is questionable whether scientifically sound clinical trials testing drugs for moral bioenhancement have begun or could begin, much less whether they should continue. These trials would be fundamentally different from studies of harm aversion and racial bias because the research question driving them would be different. Second, if the consequences of the research included significant neurological and psychiatric risks, then these risks could make the research ethically unacceptable. Third, we cannot assume that any evidence of the drugs enhancing moral behaviour would be evidence of moral progress.
One problem in trying to compare research into treating disease with research into moral enhancement is that outcomes of the first type are measured empirically while outcomes of the second type would be measured normatively. The outcome of an MB study would be assessed in terms of social expectations and whether people's actions met these expectations and displayed respect for others. This is part of a broader problem of defining what constitutes moral and immoral behaviour and how public consensus on a definition of “moral behaviour” could be reached. Research into MB would involve very crude measures that may not tell us much or anything about whether it actually increased or improved people's moral sensitivity. Most medical research relies on quantitative measures of outcomes, such as the five-year survival rate of subjects receiving a new oncology drug. Some of the research relies on qualitative measures, such as reports from subjects about symptom relief in a psychiatric disorder or chronic pain. Whether a drug improves people's moral behaviour cannot be measured by these models. The lack of empirical measures of outcomes of a prospective MB study would leave uncertainty about how to assess the efficacy of the intervention. It is not clear how we could judge the success or failure of research other than by observing people's behaviour after they had taken a drug for moral enhancement. But which criteria would we use to judge what we observe and whether the behaviour displayed greater moral sensitivity than before and led to a reduction in harm?
One proposed way of confirming whether the drugs enhanced moral behaviour would be to present hypothetical cases or thought experiments of moral conflict to subjects and assess their responses to them. The Trolley Problem is the most well-known of these methods. Philosopher Philippa Foot introduced this problem as a way of distinguishing what we owe to people in the form of aid from what we owe to them in the form of non-interference.Footnote 36 Psychologists and cognitive neuroscientists have adopted this problem in attempting to gain a better understanding of the neurobiological basis of moral judgement.Footnote 37 In a number of studies, subjects have been presented with two scenarios. In the first scenario, an out-of-control trolley is headed toward five workers on a track. The driver could turn the trolley onto another track on which there is one worker. Turning the trolley would save the five on the first track but would kill the one on the second. Subjects are asked whether it would be permissible to turn the trolley and kill the one worker rather than the five. In a variant of this case, subjects are asked if it would be permissible to push a fat man off a bridge and onto the track to stop the trolley. This action would stop the trolley from killing the five workers but would kill the fat man. Whereas most subjects responded that it would be permissible to turn the trolley in the first case, there was disagreement about whether it would be permissible to push the fat man off the bridge to stop the trolley in the second case. Some investigators have claimed that the impermissibility of pushing the man reflects a deontological judgement, while the permissibility of pushing him reflects a consequentialist judgement.Footnote 38 Some of these subjects have been put through fMRI scanners when presented with the question about what to do with the trolley or fat man. Recorded differences in neural activity in different brain regions during their responses presumably correspond to deontological or consequentialist judgements about the right action in these hypothetical cases.
These judgements are not accurate reflections of moral reasoning because they involve only quick intuitive responses that fall short of the consideration and argumentation necessary to justify an action or policy.Footnote 39 This, combined with the fact that fMRI images are scientific constructs rather than actual pictures of brain activity, provides a good reason to be sceptical of combining brain imaging and thought-experiments to assess moral reasoning and moral judgement after taking a psychotropic drug. “Trolley cases” involve unavoidable harm and the question of whether it is permissible to cause a lesser harm to prevent a greater harm. Most of the harm committed by humans is not the result of actions they perform or fail to perform in times of moral conflict but of the more general failure to recognise and react to reasons for performing certain actions. Thought-experiments like trolley cases are generated in artificial settings and tell us nothing about how people act in the real world. There is often a gap between moral judgement and action. One may judge that A is the right thing to do in a situation but choose B when actually in that situation. Whether a drug effectively enhanced moral behaviour could not be determined by asking questions about what one would do in hypothetical cases but by observing people's actual behaviour.
Unlike the outcome of a clinical trial in medical research, the question of whether a psychotropic drug enhanced moral sensitivity would not be amenable to assessment by quantitative or qualitative methods. The end-point of an MB trial would be open to varying interpretations of whether people displayed greater attention to and respect for others after taking a drug. There would be considerable variability in the behaviour of subjects after taking a brain-altering substance because of the complex interaction between and among biological, psychological, and environmental factors and the unique effects these factors have in each person's brain. It is thus unlikely that there could be a definitive answer to the research question of whether a psychopharmacological intervention enhanced moral behaviour. Any measure of the outcome would likely be too crude to satisfy any scientific model of research.
4. Co-operate or Defect?
Although particular philosophical arguments fail to provide the motivation to act morally, we need a general moral theory to serve as a framework in which to spell out the goal of moral enhancement and the process by which it might be achieved. This would require public consensus on which theory to adopt, a theory that ideally would accurately reflect human moral psychology. It would be overly simplistic to expect one theory to be sufficient for this purpose. Insofar as the goal of moral enhancement is to reduce harm, some form of consequentialism would be needed. Some form of deontology would also be necessary to recognise people's rights, especially the negative right to non-interference. Still, given that most people are mainly self-interested, something more than recognition of others’ rights and the value of preventing or reducing harmful outcomes of actions or omissions would be needed to move people to act appropriately. An additional theory would have to complement consequentialism and deontology to ground the requisite motivational force. Most people are not naturally altruistic and do not typically act from other-regarding reasons. Some of us act altruistically at times, but not often enough and not to the extent necessary to have sustained beneficial effects on other people and their lives. The most realistic theory would be one consistent with a lower common behavioural denominator – rational self-interest. Accordingly, some version of social contract theory would be the most plausible moral complement to consequentialism and deontology as a theoretical basis for a moral enhancement programme.Footnote 40 It would be based on rational choice and promote social co-operation for mutual benefit. Each person would give up some of her self-interest by co-operating; but all would be better off by doing this. Such a model would be imperfect. In particular, future people who would be more adversely affected by the consequences of our present policies and actions or omissions could not participate in such a contract. A more demanding moral theory would be necessary to respect their claims. Given our limited moral compass and tendency to discount the future, though, a social contract model of moral enhancement, agreed upon by the present generation, would probably have the best chance of achieving a reasonably modest goal.
A social contract model of moral bioenhancement to promote co-operation would involve another collective action problem. A critical number of people would have to agree to take psychotropic drugs enhancing their moral disposition to produce the collective effect of reducing harm and promoting general welfare. Many would rely on this reasoning and assume that others would reason in the same way in deciding to enhance. They would calculate that the threshold effect of a large number of individuals co-operating in an enhancement programme would make co-operating and sacrificing some self-interest mutually beneficial and better for them than if they chose not to enhance. The psychology of this reasoning is similar in some respects to what is involved in the classic prisoner's dilemma. Each of two prisoners accused of a crime must decide whether to co-operate with the other by remaining silent or to defect by confessing and testifying against the other. Defecting would be the dominant and thus individually most rational decision for each, independently of how the other reasons and decides what to do. But each knows that the outcome is a function of what both decide, and that both would be worse off if both defected than if both co-operated. Each chooses the individually second-best but all-things-considered most rational option of co-operating.Footnote 41
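The structure of the dilemma can be made explicit with a small payoff table. In the following sketch the numerical payoffs are illustrative assumptions only, chosen to exhibit the standard structure in which defection dominates for each player taken individually, yet mutual defection leaves both worse off than mutual co-operation.

```python
# Illustrative payoffs for a one-shot prisoner's dilemma (higher is better for the
# player). The numbers are assumptions chosen only to exhibit the structure described
# in the text: defection dominates, yet mutual defection is worse for both players
# than mutual co-operation.
PAYOFFS = {
    ("co-operate", "co-operate"): (3, 3),
    ("co-operate", "defect"):     (0, 5),
    ("defect",     "co-operate"): (5, 0),
    ("defect",     "defect"):     (1, 1),
}

def best_reply(opponent_move):
    """The move that maximises one's own payoff against a fixed move by the other."""
    return max(("co-operate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

for opponent_move in ("co-operate", "defect"):
    print(f"If the other prisoner chooses to {opponent_move}, "
          f"the individually best reply is to {best_reply(opponent_move)}")

print("Payoffs under mutual co-operation:", PAYOFFS[("co-operate", "co-operate")])
print("Payoffs under mutual defection:   ", PAYOFFS[("defect", "defect")])
```

Running the sketch shows that defecting is the best reply to either move by the other prisoner, even though the mutual-defection outcome is worse for both than mutual co-operation; this is the tension the text describes between individual and collective rationality.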
This game-theoretic strategy would be of limited value in a programme of moral enhancement, however. The number of players is too large to be accommodated by these strategies, which involve co-operation ‘in cozy little local enclaves’.Footnote 42 While these enclaves may evolve into larger entities, they would still be local or relatively small in scale. The collective action problem of MB would be more difficult to resolve because it would be global and involve a very large number of people. The main problems in these scenarios would be having enough people enhancing to produce a positive collective effect and preventing some from free riding on the co-operation of others.Footnote 43 If less than a critical mass of people decided to co-operate, then the goal of reducing harm would not be achieved. Even if a drug could safely and effectively enhance moral behaviour, some would refuse to enhance. They might not trust others to voluntarily enhance and might not want to sacrifice any self-interest for fear of becoming worse off than they would be if they pursued self-interest in an unconstrained way. In a scenario where many people enhanced and a threshold of co-operation had been reached, some would calculate that they could refuse to enhance and benefit from the co-operation of others without sacrificing any self-interest. A few of these who engaged in the most harmful behaviour and needed MB the most would not only refuse to co-operate but would also take advantage of the co-operation of others for their own malevolent ends. In these scenarios, defectors would be making a rational choice. Given the sheer number of people involved, it would be difficult to prevent free riding or manipulation and implement and enforce measures that would punish people who did this.
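The same point about thresholds and free riding can be illustrated with a toy model. The population size, participation threshold, cost of enhancing, and collective benefit in the following sketch are hypothetical values chosen only to display the structure of the problem, not estimates for any real programme.

```python
# A toy threshold model of voluntary moral enhancement. The population size,
# participation threshold, cost of enhancing, and collective benefit are hypothetical
# assumptions used only to exhibit the free-rider structure described above.
POPULATION = 1000
THRESHOLD = 0.75           # fraction that must enhance for the collective benefit to arise
COST_OF_ENHANCING = 1.0    # sacrifice of self-interest borne only by those who enhance
COLLECTIVE_BENEFIT = 5.0   # harm reduction enjoyed by everyone if the threshold is met

def payoffs(n_enhancers):
    """Return (payoff to each enhancer, payoff to each non-enhancer) for n_enhancers."""
    benefit = COLLECTIVE_BENEFIT if n_enhancers / POPULATION >= THRESHOLD else 0.0
    return benefit - COST_OF_ENHANCING, benefit

for n in (500, 740, 750, 999):
    enhancer_payoff, free_rider_payoff = payoffs(n)
    print(f"{n:4d} enhancers: enhancer payoff = {enhancer_payoff:+.1f}, "
          f"non-enhancer payoff = {free_rider_payoff:+.1f}")
```

Below the threshold, those who enhance bear the cost for no collective gain; above it, those who refuse to enhance enjoy the benefit without the cost. This is exactly the free-rider structure that would undermine a voluntary programme.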
The number of people, and the kinds of behaviour, that would thwart the realisation of the goals of MB may depend on the moral problem on which one focusses. Climate change would involve the largest number of players, and the harmful outcome would result from many individual omissions as much as from many individual actions. Civil wars and ethnic conflicts typically involve a smaller number of decision-makers and their more numerous followers adversely impacting the lives of others. The threat of WMDs lies primarily with a few agents or even one individual. An irrational decision by what Martin Rees calls one ‘village idiot’ controlling these weapons could have catastrophic consequences.Footnote 44 In the second and third scenarios, even if a majority of people chose MB, a small number or even one person who refused to co-operate in a project to prevent or reduce the threat of harm could be enough to doom it. To prevent the idiot's irrational decision and its consequences in Rees’ hypothetical scenario, Harris would argue that what we need is not moral but cognitive enhancement aimed at improving rationality. Yet because reasons to prevent outcomes adversely affecting other people are moral reasons, and because moral reasoning involves cognitive and emotional processing, preventing catastrophic outcomes would depend not on enhanced cognition alone but on enhanced, integrated cognitive and emotional capacities.
Harris says that ‘[e]thics is for bad guys! The good don't need ethics’.Footnote 45 Further, he claims that ‘ethics is for those occasions in which compassion, altruism, and basic decency fail’.Footnote 46 MB is Persson and Savulescu's way of providing the ethics we need. But too many of the bad guys among us would choose not to enhance. In light of this and the likely general failure of voluntary MB to achieve its normative goal, Persson and Savulescu state: ‘[i]f safe moral enhancements are ever developed, there are strong reasons to believe that their use should be obligatory […]. That is, safe, effective moral enhancement would be compulsory’.Footnote 47 The argument for compulsory MB should not be aimed just at getting the bad guys, on Harris’ interpretation, to improve their behaviour. It should be aimed not only at those whose actions and omissions incrementally contribute to climate change, for example, but also and indeed more so at the “worst guys”, those whose actions directly cause substantial harm to others. On an ideal reading of the prisoner's dilemma and the standard collective action problem, “nice guys finish first” if they reason that co-operating is in their all-things-considered best interests and do in fact co-operate.Footnote 48 Yet the likely scenario of morally deficient persons taking advantage of those who voluntarily enhanced their moral sensitivity would mean that the worst guys who produced most of the world's harm would “finish first” in achieving their own self-serving ends. Incentives to nudge those with the greatest need of moral enhancement would do little to motivate them to act in accord with moral norms.Footnote 49 Because of the extent of their moral deficiency, changing their behaviour would take much more than incentives or nudges. The refusal of the worst perpetrators of wrongdoing to enhance voluntarily could make state-sponsored compulsory enhancement necessary to reduce interpersonal harm. Could this be justified?
5. Compulsory Moral Enhancement: The Cost to Freedom
Compulsory MB would depend on research demonstrating the safety and efficacy of the drugs used to enhance. If MB were compulsory, then would participation in this research also be compulsory? Taking a drug for MB could in principle be obligatory only if it had been proven safe and effective, which is precisely what the research itself is designed to determine. Since safety and efficacy could not be established prior to the research, compelling people to participate in it would mean compelling them to take a drug whose safety and efficacy had not yet been demonstrated; compulsory participation in the research itself would therefore be unjustifiable. This would be significantly different from military conscription or jury duty, which are compulsory but based on established social and legal institutions that protect people's rights and liberties. It would also be different from requiring young adults to purchase health insurance to ensure a fair distribution of risk across populations.
Harris has argued elsewhere that we have a moral duty to participate in medical research.Footnote 50 This obligation is generated by the fact that people in the present generation have benefited from medical treatments resulting from the sacrifices of people in the past who participated in medical research. These subjects exposed themselves to some risk without any direct therapeutic benefit. We in the present generation have benefited from the actions of those in the past in the form of better treatment and prevention of many diseases. Because of this, we have an obligation to participate in medical research that could benefit future people. This argument about a fair intergenerational distribution of burdens and benefits does not apply to MB, for three reasons. First, morally deficient behaviour is not a disease in the sense of physical or mental dysfunction and thus involves a different assessment of risk of psychotropic drugs intended to improve it. Second, none of us in the present generation has benefited from the results of previous research into MB, since there was no such research. Third, having a duty to participate in research is different from being compelled to participate in it. One can voluntarily fail to discharge a duty, which is not an option in compulsory action. The potential harm from being exposed to any risk of tinkering with neurotransmitters, hormones, or neuropeptides in the brain for the purpose of possibly improving moral behaviour undermines any claim for an obligation to participate in moral bioenhancement research. Harris does not extend his argument about an obligation to participate in medical research to research into MB because of his rejection of the idea of compulsory MB.
Even if one believed that safe and effective MB should be compulsory for all citizens in a liberal democratic society, there would remain the daunting task of enforcing compliance. This would require different levels of co-ordinated social and political action. It would also assume the moral integrity and public acceptance of those empowered with overseeing these tasks. This may assume too much. Harris appropriately asks, ‘[w]ho guards the guardians?’.Footnote 51 More fundamentally, even if compulsory MB significantly reduced harm, it could come at an unacceptable cost: it would leave no space for freedom.Footnote 52 For some, the magnitude of the actual and potential harm resulting from voluntary action might be significant enough to justify imposing limits on action. For others, no amount of harm from our actions could justify such limits. Compulsory MB would undermine the free choice necessary for moral responsibility, praise, blame, and other normative concepts and practices on which our social and political institutions are based and which define us as human agents. By ensuring that everyone always acted in a certain way, MB would eliminate the capacity to choose between different courses of action – good and bad, beneficial and harmful.Footnote 53 Without this capacity, we would no longer be moral agents. As Harris puts it: ‘[a]gents are quintessentially actors: to be an agent is to be capable of action. Without agency in this sense, decision-making is […] morally and indeed practically barren’.Footnote 54 The collective interest in preventing or reducing harm could never justify violating what is inviolable. Harris adds, ‘I, like so many others, would not wish to sacrifice freedom for survival’.Footnote 55
What he calls the ‘freedom to fall’ has its limits, however.Footnote 56 Free actions often have consequences that adversely affect other people. We should have the freedom to fall provided that we do not cause others to fall along with us. This is consistent with John Stuart Mill's principle of liberty. Mill states that ‘over himself, over his own body and mind, the individual is sovereign’.Footnote 57 He qualifies this statement in saying that ‘the only purpose for which power can be rightfully exercised over any member of a civilized society, against his will, is to prevent harm to others’.Footnote 58 The question for MB is whether and in which circumstances the freedom to fall could be overridden by the collective interest in preventing harm. At one end of the harm spectrum, it would take only one “village idiot” to cause or initiate a process resulting in the extinction of the human species through the use of nuclear weapons. At the other end of the harm spectrum, extinction could also eventually be our fate from the collective effects of individual actions and omissions causing climate change. If none of us survived, then freedom to fall would have no value because there would no longer be any human agents, free or unfree.
Some might equate moral progress with moral enhancement insofar as enhancement resulted in a reduction of harm among humankind. But harm reduction alone would not be sufficient for a robust conception of moral progress. Indeed, depriving people of their autonomous agency through compulsory MB would eliminate the “moral” in moral progress. If being a moral agent presupposes the ability to choose between alternative courses of action, and if there were no choice under compulsory MB, then there would be no moral agents. Our actions would not be our own but the products of brain-altering interventions. There is something paradoxical about the idea of moral progress achieved through the elimination of or limitations on moral agency. Any plausible concept of moral progress presupposes responsibility, praise, blame, and other normative concepts, as well as the associated psychological properties of conscientiousness, remorse, and regret. These depend on the capacity to decide and act on our own considered desires, beliefs and reasons, to overcome weakness of will through our own efforts and to learn from our mistakes. These capacities are incompatible with being forced to take or undergo a neuromodulating drug or technique to prevent us from performing actions with potentially harmful consequences. Still, the value of reducing harm in promoting the survival of the human species cannot be minimised. Having choice is necessary for moral agency and responsibility. At the same time, reducing or preventing harm resulting from bad or evil choices is necessary for us and especially future generations to have the space for agency. There may be an intractable conflict between individual liberty and collective interest, between the freedom to fall and compulsory MB to reduce or prevent harm.
6. Conclusion
Moral bioenhancement would be a way of increasing people's moral sensitivity and thereby motivating them to act in ways that respected the rights, needs, and interests of others. The aim would be to reduce actual harm and prevent future harm caused by humans at individual and collective levels. Persson and Savulescu's argument for MB as a response to the magnitude of global harm is well-motivated. But it is fraught with scientific, ethical, social, and political problems that would probably doom it as a viable normative project. Voluntary MB would likely fail to reach the critical level of uptake needed to have a positive collective effect because a significant number of people would refuse to enhance. Among those refusing to enhance would be individuals who contributed most to global harm and with respect to whom there would be the most compelling reasons for enhancement. Yet making it compulsory would threaten to undermine freedom of choice and cognitive liberty. While there is an urgent need to protect the collective interest in avoiding harm and promoting the survival of the species, it is doubtful that compulsory MB eliminating free choice would be justifiable. Questions about voluntary or compulsory MB also depend on the scientific question of whether research could determine that drugs designed to enhance moral behaviour were safe and effective. Yet clinical trials designed to test these drugs for this purpose would likely fail to meet standard ethical criteria for an acceptable risk-benefit ratio in medical research. There are also questions about whether drugs intended to modulate brain circuits and networks mediating moral reasoning and decision-making alone would strengthen moral motivation and enhance moral behaviour.
Allen Buchanan claims that ‘biomedical intervention might be one aspect of a multifaceted effort to extend concern and respect to all human beings, not just those who are like us’.Footnote 59 This is consistent with what Persson and Savulescu say about MB ‘as a complement to traditional means’.Footnote 60 The other aspects of the effort Buchanan describes would be psychological and social. Yet even if biomedical interventions were proven to be safe and effective, it is unclear how all three aspects of a bio-psycho-social model would be integrated and how such a model would be implemented and enforced. Acknowledging the hypothetical nature of MB and the obstacles that such a programme would face, Persson and Savulescu state that the development and perfection of moral enhancement is ‘not likely to be possible in the near future’.Footnote 61 Harris makes a stronger claim against MB, stating ‘I believe it will never be possible to the extent the Persson/Savulescu thesis requires, or indeed that Tom Douglas believes […] because moral enhancement has little prospect of preventing idiocy – but of course I could be wrong’.Footnote 62 Perhaps the strongest claim among sceptics of moral bioenhancement is from Harris Wiseman, who says that ‘an explicit project of state-sponsored moral improvement of the general public is unthinkable in liberal states’.Footnote 63 This is in reference to what he calls ‘hard, fine-grained, moral enhancement’ of the compulsory type defended by Persson and Savulescu.Footnote 64 Yet what Wiseman calls a ‘soft, coarse-grained, moral enhancement’ project consisting of nudging and incentives would also likely fail.Footnote 65 These strategies would not provide the necessary motivational force for enough people to co-operate in a moral enhancement project. Nor would they prevent some from taking advantage of this co-operation for their own selfish ends. Education aimed at improving moral behaviour might reduce interpersonal harm to some extent but would not eradicate it. Moreover, not all states and their guardians would be interested in such a plan, much less implement and enforce it.
Derek Parfit expresses a positive view on the possibility of moral progress: ‘[l]ife can be wonderful as well as terrible, and we shall increasingly have the power to make life good. Since human history may be only just beginning, we can expect that future humans, or supra-humans, may achieve some great goods that we cannot now even imagine’.Footnote 66 The goods of which Parfit writes cannot be separated from or achieved without an increase in the level of our moral sensitivity. They would be contingent on preventing or reducing harm. Moral progress requires enhanced moral sensitivity. The key issue is whether this sensitivity and progress can be made through any means. There are indeed many theoretical and practical reasons for scepticism about the concept and goal of moral enhancement. If there is no viable alternative project that could move people to act morally and resolve the most pressing issues of current and future generations, then a bad end may be in store for us all.Footnote *