
Thick Concepts and the Moral Brain

Published online by Cambridge University Press:  03 June 2011

Gabriel Abend*
Department of Sociology, New York University, New York [abend@nyu.edu].

Abstract

Drawing on Williams’ distinction between thin and thick ethical concepts, I argue that current moral neuroscience and psychology unwarrantedly restrict their researches to thin morality only. Experiments typically investigate subjects’ judgments about rightness, appropriateness, or permissibility, that is, thin concepts. The nature and workings of thick concepts – e.g., dignity, integrity, humanness, cruelty, pettiness, exploitation, or fanaticism – have not been empirically investigated; hence, they are absent from recent theories about morality. This may seem like a minor oversight, which some additional research can redress. I argue that the fix is not that simple: thick concepts challenge one of the theoretical backbones of much moral psychology and neuroscience; they challenge the conception of a hardwired and universal moral capacity in a way that thin concepts do not. In the conclusion I argue that the burgeoning science of morality should include both thin and thick, and that it should include the contributions of psychologists and neuroscientists as well as those of anthropologists, historians, and sociologists.


Research Articles
Copyright © A.E.S. 2011

Introduction

In 1893 Émile Durkheim began the preface of The Division of Labor in Society with these words:

This book is above all an attempt to treat the facts of moral life according to the methods of the positive sciences. [...] We do not wish to deduce morality from science, but to constitute the science of morality [science de la morale], which is very different. Moral facts are phenomena like any others. They consist of rules for action that are recognisable by certain distinctive characteristics. It should thus be possible to observe, describe and classify them, as well as to seek out the laws that explain them (Durkheim [1893] 1984, p. xxv).

As he argued elsewhere (Durkheim [1920] 1979, p. 92), the “science de la morale” or “science des faits moraux” should deal with “moral phenomena, with moral reality, as it appears to observation, whether in the present or in the past, just as physics or physiology deal with the facts they study”. Unfortunately, Durkheim’s death in 1917 cut short his work on the science of morality, on which he was writing a book titled La morale. Fortunately, almost one hundred years later the science of morality is at long last a reality. Articles on morality regularly appear in the most prestigious scientific journals, including Science and Nature. Research and funding organizations on both sides of the Atlantic offer increasing support to advance scientific knowledge about morality. Both scientists and popular science writers speak of a “new science of morality”.

These two sciences of morality – that of Durkheim in the early twentieth century and that of today – agree on the objective of studying morality “according to the methods of the positive sciences”, “just as physics or physiology deal with the facts they study”. But they differ in many fundamental ways. Perhaps the most obvious difference is that today’s scientists of morality tend to have backgrounds in and work in departments of psychology and neuroscience, which in turn implies basic methodological and theoretical differences.

Today’s scientists of morality are typically interested in the search for the neural “correlates”, “basis”, “foundations”, or “substrates” of morality – what some have called the “moral brain” (Tancredi 2005, pp. 34-45; Verplaetse et al. 2009, pp. 11-12), the “neuromoral network” (Mendez 2009), or how “morality is grounded in the brain” (Moll et al. 2003, p. 299). Thus, some researchers have set out to discover the brain’s moral “modules”, “faculty”, “organ” or “center”, and have made analogies with vision and the Chomskyan language faculty (e.g., Dwyer et al. 2009). Some others have set out to figure out the biochemistry of the moral brain, e.g., the role of serotonin and oxytocin (Crockett et al. 2010; Zak et al. 2004). Some scientists have also suggested that this kind of research may have clinical implications, not only pharmacological (Tost and Meyer-Lindenberg 2010, p. 17072), but also “neurosurgical treatments for moral dysfunctions”, including “transcranial magnetic stimulation (TMS), transcranial direct current stimulation (TDCS) and implanted electrodes” (De Ridder et al. 2009, p. 167).[1]

A further claim that neuroscientists and psychologists generally agree on is that morality is a universal capacity, which the human species has as a product of natural selection. According to one version of this claim, humans have a “universal moral grammar”, which is “a signature of the species” (Hauser 2006, p. 53; cf. Mikhail 2007; but see Ambady and Bharucha 2009). Accordingly, renewed attention has been given to the old question of the evolution of morality, which attracted so much interest in the late nineteenth and early twentieth centuries.[2]

Methodologically, today’s scientists of morality rely on laboratory experiments, which often (but not always) involve neuroimaging techniques, such as functional magnetic resonance imaging (fMRI) or positron emission tomography (PET). The object of inquiry and unit of analysis of most experiments is an individual’s moral judgment; indeed, judgment seems to be taken for granted as the object of inquiry of the science of morality (e.g., Dupoux and Jacob 2007, p. 373; Nado et al. 2009). For instance, in one widespread approach subjects are presented with a situation in which two courses of action are possible; they have to judge what is “right”, “okay”, or “permissible” to do, while their brains are being scanned. Subjects make their moral judgment – e.g., “In this situation it would be wrong for so-and-so to do such-and-such action” – by pressing a button. This sort of methodological approach does seem to tell one something about morality, or something about some part or aspect of morality. But just what does it tell one? And about just what part or aspect of morality?
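To make the structure of these experimental tasks concrete, the following toy sketch models a single trial of the kind just described: a vignette, a stimulus question phrased with a thin concept, and a button press that records the judgment. It is purely illustrative; the vignette wording, labels, and function names are my own placeholders, not any laboratory’s actual materials or code.

from dataclasses import dataclass

@dataclass
class Trial:
    vignette: str   # the dilemma text shown to the subject
    question: str   # the stimulus question, phrased with a thin concept
    options: tuple  # the response set offered to the subject

# Hypothetical trial, modeled on the trolley problem quoted below.
trolley_trial = Trial(
    vignette=("A runaway trolley will kill five workmen unless you divert "
              "it onto a side track, where it will kill one workman."),
    question="Is it appropriate for you to turn the trolley?",
    options=("appropriate", "inappropriate"),  # thin concepts only
)

def record_judgment(trial: Trial, button_pressed: int) -> str:
    """Map a subject's button press to the thin judgment it encodes."""
    return trial.options[button_pressed]

# record_judgment(trolley_trial, 0) -> "appropriate"

Note what the trial structure itself guarantees: whatever the subject thinks or feels, the only judgment the experiment can register is one expressed in the thin vocabulary the options provide.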

In this paper I argue that the contemporary science of morality is not a science of morality, but of thin morality only. Drawing on Williams’ (1985) distinction between thin and thick ethical concepts, I argue that psychologists’ and neuroscientists’ experiments typically investigate subjects’ judgments about rightness, appropriateness, or permissibility, that is, thin concepts. The nature and workings of thick concepts – e.g., dignity, integrity, humanness, cruelty, pettiness, exploitation, or fanaticism – have not been empirically investigated; hence, they are absent from recent theories about morality. This may seem like a minor, inconsequential oversight, which some additional research can easily redress. I argue that the fix is not that simple: thick concepts challenge one of the theoretical backbones of much moral psychology and neuroscience; they challenge the conception of a hardwired and universal moral capacity in a way that thin concepts do not. This is due to two key features of thick concepts, which I discuss at some length below. First, they simultaneously describe and evaluate, yet description and evaluation cannot be separated out. Second, they presuppose or are ontologically dependent on institutional and cultural facts.

The paper is organized as follows. First, I spell out the distinction between thin and thick moral concepts, and I argue that moral psychologists and neuroscientists have typically focused on the former and neglected the latter. Then I show how the conception of morality as a hardwired and universal capacity is threatened by the incorporation of thick concepts into the theoretical picture. In the conclusion I argue that the burgeoning science of morality should include both thin and thick, and that it should include the contributions of psychologists and neuroscientists as well as those of anthropologists, historians, and sociologists. I end with a word of warning regarding the diffusion and reception of moral psychology and neuroscience outside the academy.

The neglect of thickness

The methodological approach of the contemporary science of morality is typically individualistic. Naturally, neuroscientists and psychologists tend to concentrate on the individual level of analysis and individual-level phenomena – that is what these disciplines concentrate on and are good at. In the case of morality, the focus is on individuals’ moral judgments, or, more precisely, one individual’s moral judgment at a time. That is fine as far as it goes. But sociologists and other social scientists will probably want to ask: Are there not any social-level phenomena that a science of morality may need to take into consideration in order to obtain a satisfactory understanding of morality? And what unique contributions can the study of social-level phenomena make to a scientific understanding of morality? Once the social level is brought into view, four possible arguments suggest themselves as to why a wholly individualistic approach might be problematic or insufficient:

(1) The best predictors of many or most of an individual’s moral judgments are social variables, such as religion, age, gender, social class, etc.

(2) The best predictors of many or most of an individual’s moral actions are, in addition to (1), the legitimacy of reasons for action in the relevant social context (not her actual intuitive moral judgments).

(3) The best predictors of how any one society’s moral matters work out are institutional and cultural facts (not individuals’ moral judgments).

(4) The analytical fiction of an individual who in isolation makes a moral judgment is either (a) misguided as an empirical approach, or (b) downright unintelligible.

I will not analyze and assess the value of these arguments here, since each of them would require careful treatment.[3] Instead, I wish to consider a fifth line of argument, which has not been properly considered up to now:

(5) At least some of an individual’s moral judgments have institutional and cultural presuppositions; at least some moral concepts and properties – thick ones – are ontologically dependent on institutional and cultural facts.

My point is not that institutional and cultural facts shape, structure, or influence morality (though that is undoubtedly true), but that they literally make it possible. As we will see, argument (5) entails some special troubles for the science of morality. Unlike arguments (1), (2), and (3), argument (5) is incompatible with the conception of morality that prevails in the literature in psychology and neuroscience.

Some 25 years ago Bernard Williams (1985) distinguished two kinds of ethical concepts: thin and thick – related ideas were already present in earlier work, e.g., by Anscombe (1958), Foot (1958, 1958-1959), and Murdoch (1956, 1970).[4] Let me start with some examples. Prototypical thin concepts are right and wrong, good and bad, permissible and impermissible, appropriate and inappropriate, and ought and ought not. Some examples of thick concepts are integrity, decency, brutality, cruelty, moderation, humanness, exploitation, materialism, and gentlemanliness. What is the difference between these two kinds of concepts? Thin concepts are not “world-guided”, that is, the empirical world does not guide their application. As far as semantics is concerned, they can be applied to any object. For instance, if you say, “Augusto Pinochet was a good man”, or “One ought never to keep one’s promises”, you might be making significant moral errors, but not semantic or conceptual ones. Differently put, if you say that this action is permissible, or that that action is wrong, you are not providing any further information about these actions (other than their being according to you permissible and wrong, respectively).

By contrast, thick concepts evaluate an object, but also simultaneously describe it, or tell you something about the nature of that object; they are “at the same time world-guided and action-guiding” (Williams 1985, p. 141). On the one hand, they describe a thing or state of affairs in the world. Unlike permissible and good, the application of the concepts of brutality or cruelty can be mistaken on semantic grounds. Different persons’ uses of the word “brutality” certainly differ, but a basic core of empirical conditions needs to obtain for it to be correctly applied – otherwise, the speaker would seem not to understand the meaning of the word. On the other hand, to say that someone acted in a brutal or in a cruel fashion is, at the same time, to make a moral judgment about it, to evaluate it negatively, etc. As we will see below, much turns on the expression “at the same time”.

The uses of thin and thick

Psychologists and neuroscientists have done much experimental research using thought experiments, about which subjects have to make a moral judgment. Two frequently used ones are the so-called “trolley problem” and its “fat man” variant.

Some years ago, Philippa Foot drew attention to an extraordinarily interesting problem. Suppose you are the driver of a trolley. The trolley rounds a bend, and there come into view ahead five track workmen, who have been repairing the track. The track goes through a bit of a valley at that point, and the sides are steep, so you must stop the trolley if you are to avoid running the five men down. You step on the brakes, but alas they don't work. Now you suddenly see a spur of track leading off to the right. You can turn the trolley onto it, and thus save the five men on the straight track ahead. Unfortunately, Mrs. Foot has arranged that there is one track workman on that spur of track. He can no more get off the track in time than the five can, so you will kill him if you turn the trolley onto him. Is it morally permissible for you to turn the trolley? Everybody to whom I have put this hypothetical case says, Yes, it is. (Thomson 1985, p. 1395).

For its part, the fat man variant of the trolley problem goes like this:

Being an expert on trolleys, you know of one certain way to stop an out-of-control trolley: Drop a really heavy weight in its path. But where to find one? It just so happens that standing next to you on the footbridge is a fat man, a really fat man. He is leaning over the railing, watching the trolley; all you have to do is to give him a little shove, and over the railing he will go, onto the track in the path of the trolley. Would it be permissible for you to do this? (Thomson 1985, p. 1409).

These are moral philosophers’ thought experiments, which psychologists and neuroscientists have often used in their experimental tasks and vignettes. They are not the only ones, of course.[5] Yet, while the specific situations researchers use may vary, they are similar in one respect: subjects are asked a question and given a set of possible answers, all of which contain only thin concepts. For example, Greene and colleagues’ stimulus questions use the words “appropriate” and “inappropriate” (Greene et al. 2001, p. 2108; 2008, p. 1148). So do Heekeren and colleagues (2003, p. 1216; 2005, p. 889), though in German; and so do Ciaramelli and colleagues (2007), though in Italian (Ciaramelli and her colleagues actually translated Greene’s dilemmas). More recently, Greene switched to “morally acceptable” (Greene et al. 2009, p. 366). Hauser and colleagues have used the words “permissible” or “morally permissible” (Cushman et al. 2006, pp. 1083-1084; Hauser et al. 2007, pp. 18-19). Their online “moral sense test” uses a 7-point scale, from forbidden (1), through permissible (4), to obligatory (7). By contrast, Young and Saxe’s (2008, p. 1914) 3-point scale does not comprise obligation: “from completely forbidden (1) to completely permissible (3).” Haidt and colleagues have asked their subjects questions such as: “Was it OK for them [two siblings] to make love?”; “Is it very wrong, a little wrong, or is it perfectly OK for [act specified]?”; “How wrong is it for Frank to eat his dead dog for dinner?” (Haidt 2001, p. 814; Haidt et al. 1993, p. 617; Schnall et al. 2008, pp. 1107-1108). Waldmann and Dieterich (2007, p. 249) simply use the word “should” (also in German): their subjects’ “task was to read descriptions of several situations and to indicate in each case whether a person in the scene should choose to take a proposed action or should refrain from acting”.
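Schematically, and assuming only what the passages above report, these response formats can be laid side by side; the study labels and any unreported negative poles below are my own shorthand, not the researchers’. The sketch makes one point only: whatever the scale, every admissible answer is a thin concept.

# Response scales as reported above; dictionary keys are my own shorthand.
response_scales = {
    "greene_2001": ["appropriate", "inappropriate"],
    "greene_2009": ["morally acceptable",
                    "not morally acceptable"],  # negative pole assumed
    "moral_sense_test": list(range(1, 8)),  # forbidden (1) ... obligatory (7)
    "young_saxe_2008": list(range(1, 4)),   # completely forbidden (1) ...
                                            # completely permissible (3)
}

# Thick concepts never appear among the options subjects can choose from.
THICK_EXAMPLES = {"cruel", "noble", "exploitative", "materialistic"}

def offers_thick_options(scale):
    """Return True if any answer option is one of the thick examples."""
    return any(str(option) in THICK_EXAMPLES for option in scale)

assert not any(offers_thick_options(s) for s in response_scales.values())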

An alternative (yet conceptually related) methodological choice is to dispense with moral words altogether. Bartels’s (2008, pp. 411-413) “ethical dilemmas” ask: “In this situation, would you push him?”; “In this situation, would you smother the baby?”; “In this situation, would you flip the switch?”; and so on. For their part, Borg and colleagues (2006, p. 806) combine the two types of questions: “The second and third screens posed the questions ‘Is it wrong to (action appropriate to the scenario)?’ and ‘Would you (action appropriate to the scenario)?’, which were presented in randomized order”.

Now, all of these questions try to get at some undoubtedly relevant moral things, such as appropriateness, permissibility, moral permissibility, moral acceptability, wrongness, and “okay-ness”.[6] But why should thick concepts not be considered and investigated (for a partial exception, see Zahn et al. 2009)? No good reason – indeed, as far as I am aware no reason at all – is given in the literature. While this is an empirical question, it is probably uncontroversial that thick concepts appear in some or much of people’s moral lives. For example, in many contemporary Western societies: dignity, decency, integrity, piety, responsibility, tolerance, moderation, fanaticism, extremism, despotism, chauvinism, rudeness, uptightness, misery, exploitation, oppression, humanness, hospitality, courage, cruelty, chastity, perversion, obscenity, lewdness, and so on and so forth.[7] In comparison, rightness, wrongness, permissibility, impermissibility, acceptability, and their thin relatives might be less prominent. Perhaps they are more common, useful, and natural in academic research and argument than in ordinary people’s lives. If empirical research confirms that this is indeed so, then a theory of morality exclusively based on research about thin moral judgments would be inadequate, or at least incomplete.

Why have moral psychologists and neuroscientists restricted themselves to thin concepts only? I will not speculate on whether this is a conscious choice or not. What is clear, though, is that thin concepts are methodologically more tractable and theoretically more docile than thick ones. They fit seamlessly with many of psychologists’ and neuroscientists’ methodological and theoretical assumptions and practices. But there might be another kind of explanation for this neglect, which is related to the philosophical sources they have drawn on to construct their object of inquiry.

Contemporary moral psychologists and neuroscientists generally conceive of morality after the conception of morality that consequentialist and deontological moral philosophers share. According to this conception, the most important and interesting moral questions have the following form. A person must decide what action to perform given a set of initial conditions. What (morally) ought she to do? What is the (morally) right thing to do? On what principle or principles ought her decision to be based? Now, while consequentialism and deontology have been very influential schools, at least in certain quarters, they are certainly not the only game in town. Indeed, their conception of morality, interests, and emphases are very peculiar ones, which other ethical traditions do not share, and sometimes regard as fruitless, if not downright silly (e.g., Pincoffs 1971, 1986; Murdoch 1956, 1970). These other traditions include pragmatism, particularism, existentialism, communitarianism, virtue ethics, Levinasian ethics, situation ethics, Buddhist ethics, Confucian ethics, and many others. In other words, moral psychologists and neuroscientists use a frame or lens drawn from consequentialist and deontological ethics, as though it were a neutral one. But it is not. In fact, contemporary moral psychologists and neuroscientists generally conceive of morality after a more specific and peculiar model: a disagreement between act consequentialism (rather than rule consequentialism), and the Kant of the Groundwork ([1785] 1998) and the categorical imperative’s first formulation (as opposed to the Kant of dignity and respect, and of the Metaphysics of Morals ([1797] 1991) and Anthropology from a Pragmatic Point of View ([1798] 2006)). Moreover, while psychologists and neuroscientists sometimes mention Kant and Mill, few seem familiar with the strong objections raised against this way of framing the task of ethics, at least since Anscombe’s (1958) “Modern Moral Philosophy”.[8] More generally, the science of morality’s reading of the ethics literature seems to me incomplete and superficial.

All in all, my central point in this section is a straightforward one. I think the science of morality’s neglect of thick morality is unreasonable and unjustifiable. It would be a good thing if researchers started to investigate thick morality as well. It would be a good thing, because only empirical evidence can tell us how thick concepts work, how people use them, and to what extent (if any) and in which ways (if any) thick judgments resemble thin ones. This broadening of the object of inquiry would be an empirical step forward. It would help develop a science of morality, not just of thin moral judgment. However, it would also create serious trouble for the theoretical framework of much recent science of morality – specifically, for the conception of morality as a hardwired and universal capacity.

The trouble with thickness

What theoretical troubles are brought about by the incorporation of thick morality into the picture? In order to show what they are, I wish to analyze the conception of morality that underlies most current work in psychology and neuroscience. Let us consider, then, some examples of thin moral judgments, the sort of judgments that moral psychology and neuroscience generally investigate: “Eating people is wrong”, “Setting a cat on fire is not okay”, “Cheating on your taxes is not acceptable”, or “Switching the train to the side track is permissible”.

Let us first note that these judgments make reference to objects that contingently exist in some societies but not in others. For example, the subject of the last sentence, “switching the train to the side track”, refers to trains, tracks, and switches. However, these differences across societies are not moral ones. It is just that not all societies are familiar with the same objects: think of our trolleys, nation states, property rights, veganism, and ADHD epidemic. Thus, if you were interested in investigating the moral judgments of the Hadza of Tanzania in trolley-problem situations, you would soon realize that they are not familiar with trolleys. But the solution is relatively unproblematic: substituting herds of stampeding elephants, jeeps, and trees for runaway trolleys, switches, and footbridges (Hauser et al. 2008, pp. 135-136; cf. Abarbanell and Hauser 2010, p. 212).[9]

Let us then look at the predicates of those sentences: “[be] wrong”, “[be] okay”, “[be] acceptable”, and “[be] permissible”. It is these predicates that are responsible for the specifically moral work. The subject picks out an object for evaluation – setting a cat on fire, or cheating on one’s taxes – and the predicate actually does the evaluation – not acceptable, or not okay. In this regard, I would like to argue that there is one remarkable similarity between the conceptions of morality of recent moral psychologists and neuroscientists on the one hand, and the philosophical schools of emotivism, expressivism, prescriptivism, and other metaethical noncognitivisms on the other.[10] The similarity is that both understand a person’s moral judgment – e.g., “Shoving the fat man onto the trolley tracks is morally wrong” – as her having a “con-attitude” toward doing that; or as her saying, or meaning, or thinking, or feeling:

(i) Shoving the fat man – don’t do it!

(ii) Shoving the fat man – avoid!

(iii) Shoving the fat man – boo!

(iv) Shoving the fat man – yuck! eww! or ugh![11]

These are four different variants, yet, despite their differences, they have some common implications. Unlike the sentence’s subject involving trolleys, the predicate – e.g., “is wrong” – seems to be free from contingent social facts, because it is understood as analogous to “don’t do it!”, “avoid!”, “boo!”, or “yuck!”. According to this view, “avoid!” and “yuck!” are universal responses, which have a long evolutionary history and of which many animal species are capable. They require no language or concepts. Further, they can be attached to any act, practice, person, situation, or thing whatsoever – e.g., you can have an avoid!-reaction or a yuck!-reaction to rattlesnakes, feces, cannibalism, polygamy, cows, eating sentient beings, homosexuals, conservatives, or Cuban and Palestinian politicians. Con-attitudes have no intrinsic content; in themselves they do not represent anything. Moreover, they necessarily occur as responses to stimuli, like perception and the emotions. The fact that certain air vibrations reach your eardrum, or your realizing that there is a rattlesnake or a mouse under your bed, are causal antecedents of your hearing a particular sound, or of your experiencing fear or disgust. Finally, like perception and the emotions, they are fast and automatic (or “hot”) responses to stimuli, not slow and reasoned (or “cold”) ones. Needless to add, all of these claims about “con-attitudes” also apply to their positively-valenced counterparts or “pro-attitudes”: “do it!,” “approach!,” “yay!,” “ah!,” etc.

The science of thin morality then takes a further theoretical step. Morality is conceived of as the capacity, or set of capacities, to produce moral judgments of that kind. That is, a capacity, or set of capacities, to produce responses such as “do it/don’t do it!”, “approach/avoid!”, “yay!/boo!”, or “ah!/yuck!” in reaction to certain stimuli. Furthermore, this capacity, just like the emotions, is believed to be hardwired, universal, and the product of natural selection. The argument here is that morality may have been evolutionarily advantageous. Some aspects of it – or, at least, some of its “building blocks” – humans may share with other species, from chimpanzees to locusts (Anstey et al. 2009; de Waal 1996, 2006).

The sometimes explicit and sometimes implicit analogy with the emotions is especially consequential. For it puts the primary emotions (e.g., fear, disgust), the moral or social emotions (e.g., shame, guilt, sympathy, empathy), and morality on one theoretical plane, or perhaps continuum. If, as some scholars claim, the primary emotions are “natural kinds within the mammalian brain” (Panksepp 2000), and if “moral emotions such as shame and guilt are hardwired capacities, forged into hominid neuroanatomy by natural selection” (Turner and Stets 2006, p. 547), perhaps morality is but an evolutionary more recent and more complex capacity, which still shares their essential features. The primary emotions are said to be independent from language; they are automatic sympathetic and endocrine responses triggered by stimuli – e.g., epinephrine and norepinephrine secretion, or faster heartbeat.[12] They are associated with distinct facial expressions, which are claimed to be universal, and which facial electromyography (EMG) and FACS (Facial Action Coding System) are claimed to objectively measure (Ekman and Rosenberg 2005; Keltner and Buswell 1996; Keltner et al. 2003). Analogously, the moral concept of fairness might be associated with facial motor activity – specifically, “violations of the norm of fairness” are associated with activation of the levator labii muscle region (Chapman et al. 2009, p. 1123).

This conception of morality, which I have spelled out in the last few paragraphs, naturally leads to a particular kind of research program. This research program focuses on the predicates of moral judgments – “is wrong”, “is appropriate”, “is forbidden”, “is okay” – and tries to discover their neural correlates. Consider again the moral judgment, “Shoving the fat man onto the trolley tracks is morally wrong”. This research program does not focus on this sentence’s subject, which refers to culturally and historically variable institutions, practices, and technologies. The predicates are the actual objects of inquiry, and they are believed to have four fundamental characteristics: 1) they are analytically detachable from the judgment as a whole; 2) they are the products of the universal and hardwired capacity (i.e., they are what this capacity produces); 3) they are conceptually analogous to “do it/don’t do it!”, “approach/avoid!”, “yay!/boo!”, or “ah!/yuck!”; and 4) they have specific neural correlates.

Certainly, no scientist of morality denies that there is much cultural and historical variation in how morality manifests itself, which responses are triggered by which stimuli, and the actual content of moral judgments. For example, nobody denies that, given one object, person, or action (say, slavery or gay marriage), some people’s reaction will be “yay!” and some other people’s reaction will be “boo!” Nor is it denied that the concrete empirical manifestations of “yay!” and “boo!” vary considerably, i.e., the many ways in which people experience and express pro-attitudes and con-attitudes. But the hardwired, neural foundation, whatever it is, remains the same. Thus, on the basis of this conception of morality and moral judgment – features (1) to (4) above – a burgeoning research program is underway, whose aim is to progressively unveil patterns of brain activity and eventually the nature and workings of the “moral brain”.

Presuppositions

At this point, a foe of psychologists’ and neuroscientists’ conception and use of thin morality may object that, while “avoid!” and “yuck!” might be language- and concept-free, “[be] forbidden” and “[be] wrong” are not. If so, the whole theoretical edifice would seem to be at risk. I myself find this objection sensible, even if not necessarily a death blow. At any rate, I do not want to pursue this line of criticism here. Instead, I wish to show why things go astray in a different and more fundamental way in the case of thick morality.

Consider some simple thick moral judgments: “That was cruel laughter”, “That was noble of her”, “She is a materialistic person”, “She is a person of integrity”, “He acted as a chauvinist”, “Voting in the election is the responsible thing to do”, “Immigrant workers are being exploited”; “His behavior is gentlemanly”, “His attitudes are feminine”, “Giving is humane”, “Giving is pious”. For my purposes there is one crucial difference between these judgments and thin ones. Unlike thin predicates, thick predicates have institutional and cultural preconditions or presuppositions. For example, statements that contain the predicates “[be] materialistic” or “[be] exploitative” presuppose a complex web of institutions, ideas, and practices. Roughly speaking, these include a society’s having something like property, profit, a certain kind of organization of productive activities, certain rights, the idea of a right, the idea of proper measure or reasonableness (as opposed to excessiveness), and so on. If a society or human group does not have these, then “She is a materialistic person” can make no sense to its members.

What do I mean by “presuppositions” and “to presuppose”? It is not just a matter of presuppositions as logical relations between statements, as in Frege and his followers (cf. Strawson 1952, pp. 170-194). Nor are these speakers’ pragmatic presuppositions, as in Stalnaker (1999) and linguistic pragmatics. What is going on here is that the moral concepts and properties expressed by those predicates – e.g., the concepts and properties of humanness, gentlemanliness, piousness – are partly constituted by institutional and cultural facts. Or, to use a more metaphysically loaded vocabulary, they are “ontologically dependent” on them. That is, they simply could not exist if certain contingent empirical facts did not happen to obtain.[13]

Compare this with thin concepts and properties such as wrongness or “okay-ness”. There are various types of relationships between thin concepts and cultural and institutional facts, but these relationships are not of presupposition or of ontological dependence. For example, there are no specific institutions that the concept of wrongness presupposes, or that are built into wrongness, the way the thick concepts of materialism and exploitation presuppose certain economic institutions. Empirically, it seems easy to find examples of societies or groups where “[be] noble” and “[be] gentlemanly” do not or did not exist, but perhaps not so easy to find examples where “[be] wrong” does not or did not (or at least something like it). But my main claim here is not this empirical one; my main claim is stronger. “[Be] noble” and “[be] gentlemanly” cannot possibly exist in certain societies, due to their ontological dependence on the institutions that make nobleness and gentlemanliness possible. By contrast, if certain societies lack a concept of wrongness, that may not be a matter of impossibility. They could have one, even if they do not.

The class of thick concepts is a large and heterogeneous one. Crucially, there are degrees of thickness, or degrees to which concepts presuppose institutional and cultural facts.[14] Some thick concepts presuppose a very large and complex web of such facts. Examples of this are arguably humanness, nobleness, gentlemanliness, objectification, commodification, and materialism. By contrast, some other thick concepts have fewer and simpler cultural and institutional presuppositions. Examples of this are arguably cruelty, kindness, and courage. If we make the reasonable assumption that complex institutional and cultural configurations are less common than simpler ones, then comparatively thinner concepts (say, cruelty) can exist in more societies than comparatively thicker concepts (say, materialism). However, this is only an argument about possibility, not actuality, let alone necessity. Where any one concept actually exists and has actually existed is an empirical question, which may be taken up by a historian, anthropologist, or sociologist. Differently put, knowing how widespread a concept’s presuppositions are or have been does not settle the question of how widespread the concept itself is or has been. In any case, what I wish to highlight here is that each individual thick moral concept has its own, distinct presuppositions (there is no such thing as the presuppositions of thick concepts, in general). Each one is ontologically dependent on cultural and institutional facts in a different way and to a different extent. They must be analyzed on a case-by-case basis.
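The logical structure of this claim can be made explicit with a toy sketch. The concept-to-presupposition mappings below are deliberately crude placeholders of my own, not a serious analysis; the sketch only illustrates that possibility is a matter of a concept’s presuppositions being included among a society’s institutional facts, and that possibility does not entail actuality.

# Crude placeholder mappings; each thick concept has its own, distinct
# institutional and cultural presuppositions.
THICK_PRESUPPOSITIONS = {
    "cruelty": {"recognition_of_suffering"},  # comparatively thin
    "gentlemanliness": {"gender_codes", "class_hierarchy", "codes_of_honor"},
    "materialism": {"property", "profit", "rights", "idea_of_proper_measure"},
}

def concept_possible(concept: str, societal_facts: set) -> bool:
    """A thick concept can exist in a society only if all of its
    presuppositions obtain there (subset check)."""
    return THICK_PRESUPPOSITIONS[concept] <= societal_facts

# Possibility, not actuality: a society with the right institutions may
# still happen to lack the concept; one without them cannot have it.
market_society = {"property", "profit", "rights", "idea_of_proper_measure",
                  "recognition_of_suffering"}
assert concept_possible("materialism", market_society)
assert not concept_possible("gentlemanliness", market_society)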

What is the trouble with thickness, then? It stems from thick concepts’ ontological dependence on cultural and institutional facts – and their differing from thin concepts in this respect. As I said above, much recent work tries to show that morality is hardwired or “grounded in the brain”, as Moll and colleagues (2003, p. 299) put it. These arguments, which are based on research about thin moral judgments only, may or may not turn out to be empirically true, and may or may not have logical flaws. But they surely make sense. Researchers have articulated a question, and are trying to offer an answer to it. By contrast, I think the literature has not even begun to consider how thick morality might be grounded in the brain. Not even the question has been raised. Given my previous arguments, this issue is trickier than it looks at first glance.

The problem is what, exactly, would be the (thick) counterparts to (thin) emotional or affective reactions such as “do it!” or “avoid!” and their neural correlates. Whether it is true or not, it is at least conceivable that “avoid!” may turn out to have consistent neural correlates or bases of some sort. If that indeed turns out to be the case, and if the prevalent conception of morality is accepted, then those facts about the brain can be used to support arguments about acceptability, wrongness, or impermissibility. But there seem to be no suitable analogues in the concepts of dignity, nobleness, humanness, and materialism. What would one be looking for in the brain? What would the neural activity pattern be a correlate of?[15] Likewise, what, exactly, has evolution endowed human beings with, which allows us to make judgments about integrity, uptightness, humanness, and materialism? (Of course, this must be an endowment that is specific to morality; not a general capacity that, in an obvious way, makes them possible.) Thus, thick concepts seem to challenge the foundations of much current research on morality in psychology and neuroscience, which were developed with thin concepts in mind. For it is not just that we do not presently have any empirical knowledge about the relationship between thick moral concepts and the brain. I do not think we know what such knowledge would be like, or what it would be like to have knowledge about that. I do not think we know, either, how to go about investigating whether there is such a relationship (and if there is one what its nature is), nor what conceptual framework and categories would be needed to be able to do so. What seems to me certain is that much more thought is needed to begin to address these issues.

Furthermore, there seems to be a long theoretical road from (a) our having automatic avoid!-reactions vis-à-vis certain objects, to (b) our having and using thick moral concepts. The former, (a), does tell us something about human psychology. But whatever it tells us, that seems a far cry from what people’s moral lives consist of, look like, and feel like. Even if they are relevant, facts about (a) will not take you very far in your understanding of how morality works. Indeed, it is not clear what makes those automatic reactions count as moral ones in the first place. How can one tell a moral from a non-moral automatic flash of affect? Similarly, certain emotional responses and animal behaviors are said to show what the “building blocks”, “origins”, or “roots” of morality are (e.g., de Waal 1996, 2006). Unfortunately, these expressions are generally used in too vague a fashion to help clarify the connections between (a) and (b) (Kitcher 2006, pp. 123-124; Railton 2000, pp. 55-60). At any rate, I think a scientific understanding of morality would be unsatisfactory if it did not comprise (b) as well. Yet, thick morality does not seem to be conceptually friendly to the neural correlates or basis framework, nor to the evolutionary framework, nor to analogies with emotion and perception, in the way that thin morality might be.

The disentangling manoeuvre

The psychologist or neuroscientist of thin morality may seem to have one response to deflate the challenge that thick morality poses to her endeavor. And that is to argue as follows. Thick moral judgments – and, more precisely, the thick concepts in them – have in reality two distinct components, which can be disentangled. On the one hand, there is the factual, value-free description. On the other hand, there is a positively- or negatively-valenced evaluation of the description. The idea would be that you can objectively describe, for instance, what it is for an act or person to be materialistic, cruel, inconsiderate, uptight, or exploitative. Then, you may go on to add the negative evaluation: this object is so-and-so and that is not good (cf. Blackburn 1992, p. 289; Taylor 1959, pp. 128-130). So, just as noncognitivists have argued in the metaethical debate, an empirical scientist of morality could argue that thick concepts can be factored into two. Moral neuroscience and psychology could study the neural correlates of the evaluative component (say, “boo!” or “yuck!”), just like it does in thin cases. The issue of ontological dependence on cultural and institutional facts would be confined to the descriptive component, which would be the task of historians, anthropologists, and sociologists to account for. Therefore, thick morality would not really be a challenge for theories about hardwired morality and its evolutionary history.

This response would result in a neat division of labor within the science of morality. Unfortunately, its soundness is doubtful. Specifically, there are some good arguments as to why the proposed factorization or “disentangling manoeuvre” (McDowell 1998, p. 201) necessarily fails. As Williams, Putnam, Murdoch, McDowell, and others have argued, the description and the evaluation are not separable. In Putnam’s words:

Murdoch (and later, and in a more spelled-out way, McDowell) argued that there is no way of saying what the “descriptive component” of the meaning of a word like cruel or inconsiderate is without using a word of the same kind; as McDowell put the argument, a word has to be connected to a certain set of “evaluative interests” in order to function in the way such a thick ethical word functions [...] The attempt of noncognitivists to split thick ethical concepts into a “descriptive meaning component” and a “prescriptive meaning component” founders on the impossibility of saying what the “descriptive meaning” of, say, “cruel” is without using the word “cruel” or a synonym. For example, it certainly is not the case that the extension of “cruel” (setting the evaluation aside, as it were) is simply “causing deep suffering”, nor, as [R.M.] Hare himself should have noticed, is “causes deep suffering” itself free of evaluative force. “Suffering” does not just mean “pain”, nor does “deep” just mean “a lot of” (Putnam 1990, p. 166; 1992, p. 86; 2002, p. 38).[16]

In other words, there exists no value-free description of what it is to be cruel or inconsiderate, upon which you can stick a negative “prescriptive flag” (Williams 1985, p. 141) or “gold star” (Korsgaard 1996, p. 71). The description simultaneously is the evaluation. Or, as McDowell (1998, p. 201) and Charles Taylor (2003, p. 306) put it, the evaluation cannot be “peeled off”. Therefore, the question remains unresolved. If the evaluation is not detachable, if there is no (as it were) purely moral part, then in which way, exactly, can thick morality be grounded in the brain? I do not know whether there can be a convincing answer to this question. But I do know that it has been completely neglected; as far as I know, it has not even been identified as a challenge.

Conclusion

These days there is a great deal of talk about the scientific investigation of morality in both scientific and public forums. Both the scientific community and the public seem to be enthusiastic and hopeful. As has often been the case in the history of Western science, a common narrative is that new, sophisticated scientific methods are rapidly superseding old, armchair philosophical cul-de-sacs.

According to Randolph Nesse (2009, p. 201), for example, “[t]rying to understand morality has been a central human preoccupation for as far back as human history extends, and for very good reasons”.

So, for several thousands of years, philosophers have tried to find general moral principles. [...] Thousands of books chronicle the human quest for moral knowledge.

Now, in a mere eye blink of history, the scene has changed. Completely new kinds of knowledge are being brought to bear. Neuroscience is investigating the brain mechanisms involved in moral decisions, moral actions, and responding to moral and immoral actions by self and others. Evolutionary biology is investigating why those brain mechanisms exist, how they give a selective advantage, and why there is genetic variation that influences moral tendencies. This is an exciting time for those of us curious about morality.

Thus, a scientific revolution is said to be underway. I have argued that, whatever other merits and flaws it might have, this new science of morality is a science of one part of morality only. Neuroscientists and psychologists widely assume that the study of morality is equivalent to the study of what people judge as right and wrong, good and bad, permissible and impermissible, or appropriate and inappropriate. The whole world of thick morality is thereby left out of the picture. What is more, it is left out of the picture without giving reasons as to why it ought to be left out. It follows that claims about morality – that is, morality tout court – are not warranted by the numerous empirical findings that have been published.

I have also argued that the neglect of thick morality is not theoretically innocuous. Thick moral concepts are not more of the same; research about thick cannot be simply added to the existing research about thin. That is because thick concepts have two peculiar characteristics, which make them qualitatively different from thin ones. First, they simultaneously describe and evaluate an object, yet description and evaluation are inseparable. Second, for a thick concept to be possible at all in a society, certain cultural and institutional facts must obtain there; that is, each thick concept has distinct cultural and institutional presuppositions.

These peculiar characteristics of thick concepts challenge the prevalent conception of a hardwired and universal moral capacity in a specific and acute way, which thin concepts do not (thin concepts may challenge it in another way, but that is another matter). Whatever brain imaging research can tell us about the nature and uses of right and wrong, it is completely unclear at this point what (if anything) it can tell us about the nature and uses of thick concepts. For thick concepts are incompatible with the conceptual framework that underlies claims about the neural correlates of thin ones – based on approach!- and avoid!-reactions, which are analogous to the emotions, the product of evolution, and so on. Thus, it is unclear how a neuroscientist should go about investigating and understanding thick morality at all.

If my arguments about thick concepts are correct, an important question for future empirical research on morality is the relative prominence of thick and thin concepts in people’s everyday lives in different societies and social contexts. Which ones are actually more used? What for? Where, when, and by whom? The more thick concepts turn out to be empirically prominent in real lives, the less possible it will be to brush them aside theoretically. Future empirical research should also investigate how thick concepts work in ordinary contexts, how they historically emerge, and how cultural and institutional elements get built into them. Last but not least, there is the question of how thick concepts vary across societies – in particular, the fact that a concept may exist in some societies but not in others (needless to say, a science of morality should not gather its data in one society only, even if the methods used are experimental – cf. Arnett 2008; Heine and Norenzayan 2006; Henrich et al. 2004, 2010; Henry 2008; Sears 1986). All of these are questions that sociologists of morality are in a privileged position, methodologically and epistemologically, to investigate. Indeed, I believe they are some of the most important questions that they should investigate. In so doing they would be in keeping with a long tradition of sociological research on morality.[17]

The promise of the science of morality

This paper has been critical of psychologists’ and neuroscientists’ approach, and, in particular, of their conception of morality. However, I think their overall objective – the empirical investigation of morality – is a most important and timely one, which they share with sociologists, historians, and anthropologists, among others. It would be odd if a scientific understanding of morality did not pay any attention at all to human biology and evolutionary history – if, for instance, the nature of the creatures we were talking about, whether human beings, Martians, fairies, zombies, or bonobos, made no difference. Moreover, these empirical inquiries might be of help to philosophical ethics (Doris and Stich 2005; Flanagan 1991; Greene 2008; Knobe and Nichols 2008; R. Joyce 2008), and thus build more concrete bridges between philosophy and the empirical sciences than abstract talk of “continuities” and “naturalization” (e.g., Quine 1969, pp. 126-127; May et al. 1996). Last but not least, it would be a hasty and implausible argument that brain research cannot make any contribution to the understanding of morality at all, even if at present the theoretical meaning of its findings is quite unclear. The Luddite is a recurrent character in the history of science. Yet, at least most times, scientific Luddism seems to me to be a bad idea, shaped more by anxiety than by rational thought and reasonable argument.

In brief, I believe the empirical investigation of morality is a promising project, to which many disciplines – from neuroscience and psychology to anthropology and history – can and should try to contribute. They should contribute on an equal footing, however. Contemporary science rewards the use of novel methods and techniques, so it is not surprising that more and more young scientists are turning to brain imaging research. However, if the history of science is any indication, methodological novelty can give rise to epistemologically imperialistic programs. In other words, I do not believe we should aim at a “single, general theory of human behavior”, nor that “[i]n this enterprise, the method and the standard set by neuroscience is the final goal” (Glimcher and Rustichini 2004, pp. 447, 452).

Now, precisely because of its promise, the science of morality must be extremely careful regarding what claims about morality, moral action, and moral life are and are not warranted given the data and methods used. A fortiori, it must be extremely careful regarding its diffusion and reception outside the ivory tower – e.g., in courtrooms, education ministries, funding agencies, and the media. Research on morality is unlike research on, say, synesthesia or contemporary Basque poetry, in that the former intuitively strikes people as relevant to vital social and political issues – e.g., how to stop unethical and criminal behavior, or how to create the good society. Consequently, questions and discussions about applications and policy implications have quickly arisen (Cohen 2005; Goodenough and Tucker 2010; Hauser 2006; Salvador and Folger 2009; Zeki and Goodenough 2006). Moreover, the media is extremely fond of brain scanning machines and colorful brain images, especially when they are said to be able to solve some old moral questions, which philosophers have been unable to solve for 2,500 years or so. Indeed, magnetic resonance imaging may have become a “cultural icon” (K. Joyce 2008).

Because of these two conditions, along with the general scientism of contemporary Western societies, scientists of morality have a special moral responsibility to be circumspect in their public statements. Further, it is their responsibility to be aware that journalists are likely to simplify and exaggerate their claims (Fine 2010; Racine et al. 2005). Findings about one part of morality should not be presented as findings about morality tout court. The modal verb “may” should not be used to shield speculations, because the word “may” may end up being overlooked. In my opinion, enthusiastic talk about potentials, rapid progress, and imminent discoveries is better avoided, as is speculation. Clarity about what exactly scientists of morality have and have not found using the new methods is most welcome. So are calls for caution, reasonableness, conceptual clarity, and use of one’s head (Bennett and Hacker 2003; Cacioppo et al. 2003; Choudhury, Nagel and Slaby 2009; Lavazza and De Caro 2010; Logothetis 2008; Miller 2008; Rose 2005; Weisberg et al. 2008). True, this may result in less public attention and excitement, less of a collective sense of being on the verge of a momentous revolution, and probably fewer grants and students as well. But I believe it is nonetheless the most responsible course of action for the scientist of morality.

Acknowledgments

I wish to thank Patrik Aspers, Claudio Benzecry, Max Besbris, Craig Calhoun, Clarisa Fernández, William FitzPatrick, David Garland, Carol Heimer, Elif Kale, Joshua Knobe, Steven Lukes, Jeff Manza, Gerald Marwell, Olivia Nicol, Douglas Porpora, Regina Rini, Edward Sanders, Michael Sauder, Jan Slaby, Arthur Stinchcombe, Iddo Tavory, Devin Terhune, and Florencia Torche for their feedback. I also wish to thank the Max-Planck-Institut für Gesellschaftsforschung and the Department of Sociology at New York University for their support, and Sophie Gudin at the European Journal of Sociology office for her assistance. The usual disclaimers apply.

Footnotes

1 De Ridder et al.’s (2009, p. 161, p. 167) examples of the “dysfunctional moral brain” are pedophilia and psychopathy, which “neurobiological, functional neuroimaging, and neuropsychological data all converge to demonstrate [...] are nothing more than clinical expressions of specific brain circuit malfunctions”. In this respect, the techniques may be new, but the conclusions are very old (e.g., Scull 1993, 2005).

3 For instance, argument 1) may or may not be true, but is in any case compatible with a methodological approach that focuses on individuals making moral judgments; it highlights differences instead of similarities. Argument 2) is an upshot of the vast literature on the reasons and accounts people give – or would be prepared to give – to relevant others and to themselves, including but not limited to the approaches inspired by Goffman and Garfinkel. From this perspective, people’s mere post-hoc rationalizations of their moral intuitions might not be so mere after all. Argument 3) challenges the relevance of individual-level findings for policies about moral issues (Healy 2006; Heimer and Staffen 1998). In this respect, it could also draw on psychological situationism (Ross and Nisbett 1991). Something like argument 4) might be put forward by interactionists and relationalists, for whom morality can only arise in interaction, “[t]he substratum of social life is interaction, not biological individuals who act”, etc. (Abbott 2007, p. 7; Blumer 1969; Mead 1934). Or else, it might be put forward by a communitarian, for whom the context is a condition of possibility; by a Wittgensteinian (Coulter 2008); by a psychological externalist (Burge 1979, 1986; Wilson 1995, 2004); or by a sociological holist, for whom morality is a sui generis social-level phenomenon (Durkheim 1968, 1975; Gilbert 1992).

4 The words “thin” and “thick” may make social scientists think of Clifford Geertz’s paper, “Thick Description” (which, in turn, borrows the expression from Gilbert Ryle). However, this is not the sense I am interested in here; Williams’ distinction tries to get at something else. Let me also note that my account of thick concepts will be short and rough, because that is enough for my purposes. However, giving a satisfactory account of their nature is not as easy as it may seem (Eklund 2011; Väyrynen 2009).

5 Besides, I will not consider here the extensive literature on cooperation, altruism, and “sociality,” which are often investigated using behavioral games. While this literature can be plausibly seen as contributing to the scientific understanding of morality, it has distinct goals, strategies, and problems, and so it should be analyzed separately.

6 I say “try to” to put aside the question of whether they actually get at it. Bartels’s question, “In this situation, would you flip the switch?”, does not get at the same thing as the question, “In this situation, would it be morally right, permissible, appropriate, etc. for you to flip the switch?” For instance, a subject might answer that in that situation she would not flip the switch, even though flipping the switch seemed to her the morally right thing to do (and vice versa). While second-person questions presumably shed more light on action than third-person ones do, the action in question would not necessarily be moral action. (Bartels’s Studies 2 and 3 (2008, pp. 399-400) use the words “approve” and “moral rule,” though their goals are different.) Likewise, the problem with Waldmann and Dieterich’s task is that there is a moral and a non-moral “should” – e.g., the prudential “you should brush your teeth twice a day.”

7 The set of thick ethical concepts is large. Gibbard’s (1992, p. 269) examples are “cruel, decent, nasty, lewd, petty, sleazy, and up tight [sic]”. Thick concepts overlap with virtues and vices. For example, virtue ethicist Rosalind Hursthouse (1999, p. 42; 2009) suggests that one might want to avoid “courses of action that would be irresponsible, feckless, lazy, inconsiderate, uncooperative, harsh, intolerant, selfish, mercenary, indiscreet, tactless, arrogant, unsympathetic, cold, incautious, unenterprising, pusillanimous, feeble, presumptuous, rude, hypocritical, self-indulgent, materialistic, grasping, short-sighted, vindictive, calculating, ungrateful, grudging, brutal, profligate, disloyal, and on and on”.

8 To be sure, deontology and consequentialism had been charged with emptiness, formalism, narrowness, futility, rigorism, and implausible implications since their very first days – as by Hegel, for instance. And contemporary “continental” philosophers were never much taken by their preoccupations to begin with. However, the final blow was arguably delivered from within, especially by Williams (1981, 1985), MacIntyre (1981), and Taylor (1985, 1989). Ever since, even mainstream analytic ethicists have had to broaden their horizons somewhat, ask new questions, examine new issues, and rethink what the point of their moral philosophy was. Thus, for example, the debate that pitted Kantian against utilitarian rules for action – abstract, universal, parsimonious rules, which unambiguously prescribe what one ought to do given certain initial conditions – seemed now a little simplistic and a little useless.

9 As Hauser et al. (2008, pp. 135-136) explain: “Under way is a study with Frank Marlowe designed to test whether the Hadza, a small and remote group of hunter-gatherers living in Tanzania, show similar patterns of responses as do our English-speaking, Internet-sophisticated, largely Westernized and industrialized subjects. This last project has forced us to extend the range of our dilemmas, especially since the Hadza, and most of the other small scale societies we hope to test, would be completely unfamiliar with trolleys. Instead of trolleys, therefore, we have mirrored the architecture of these problems but substituted herds of stampeding elephants as illustrated below [...] Though preliminary, [the] results provide further support for the universality of some of our moral intuitions”.

11 The argument of noncognitivists is that even if the grammar of moral statements “superficially” resembles that of factual statements, and whatever people think they are doing, in reality they are expressing a feeling or issuing a command. Of course, while noncognitivism is a metaethical theory, the science of morality is primarily an empirical project. Yet, something like this conception of moral judgment seems to me to underlie the empirical work. Further, note that items (i) to (iv) are not equivalent; each would have to be separately analyzed, including an analysis of its relations to the relevant emotions. This is beyond the scope of this paper, which accounts for the somewhat ambiguous verb I chose to use in the first clause, “to understand”, as well as for my then using four verbs – “to say”, “to mean”, “to think”, and “to feel” (cf. R. Joyce 2008, pp. 373-377). Finally, the actual and possible links between the new science of morality and the British moralists, especially Hutcheson and Hume, are even more intricate (cf. Prinz 2007). Hence, I avoid the words “approbation” and “disapprobation”, and phrases such as “sentiment of approbation”, so as to sidestep these exegetical knots.

12 It should be noted that this analogy is at a theoretical level, and is distinct from the empirical claim that the manipulation of subjects’ emotions causally affects their moral judgments (e.g., Wheatley and Haidt 2005). It should also be noted that it is quite debatable whether the emotions are natural kinds and are independent from language (Barrett 2006; Barrett et al. 2007; Lindquist et al. 2006; Lindquist and Barrett 2008). For my purposes it does not matter who is right about this; the analogy can be effective all the same.

13 Ontological dependence is an important concept in metaphysics, of which I make rough-and-ready use here. Metaphysicians’ typical cases include the ontological dependence of non-empty sets on their members, of holes on their hosts, and of Socrates’ life on Socrates (cf. Chisholm 1994; Fine 1995).

14 I thank Steven Lukes for raising this issue and for his thoughts about it.

15 I cannot discuss here the concept of neural correlates and what it does and does not show. However, note that the claim that, say, “do it!” or “avoid!” have neural correlates (“recruit” certain brain regions, or “are associated with” certain patterns of brain activity) does not address the ontological questions about morality in the brain. What are moral mental states and contents? How are they related to other mental states? Do they supervene on brain states? These questions would be the moral counterpart to the traditional ontological questions in cognitive science and the philosophy of mind.

16 For the “more spelled-out” argument, see McDowell ([1981] 1998). These analyses of thick concepts are not universally accepted, however (Elstein and Hurka 2009; Hare 1997, pp. 61-62).

17 The sociology of morality has an extensive history, which this is not the place to tell (see Abend 2008, 2010). In the introduction I mentioned Durkheim’s “science de la morale”. But he was hardly alone. Many of his contemporaries and even predecessors were also interested in studying morality empirically (as opposed to philosophically and normatively). Three among many possible examples are Martineau’s (1838) How to Observe Morals and Manners in Britain, Simmel’s two-volume Einleitung in die Moralwissenschaft (1892-1893) in Germany, and Lévy-Bruhl’s La Morale et la science des mœurs (1903) in France. Durkheim’s followers established a strong tradition of research on morality in France, whose leader eventually became Georges Gurvitch (Bayet 1905, 1925; Belot 1921; Bouglé 1922; Fauconnet 1920; Gurvitch 1937, 1960; Leroux 1930), and which has continued mutatis mutandis (and despite some ebbs and flows) to this day (Bateman-Novaes et al. 2000; Isambert et al. 1978; Ladrière 2001; Pharo 2004). In Germany, the influence of thinkers who work at the intersection of and draw freely from sociology, social and political theory, and philosophy (Jürgen Habermas, Axel Honneth, Hans Joas, etc.) has led to a special and central place for morality. This tendency manifests itself in empirical programs in diverse substantive areas. For example, recent work on markets at the Max-Planck-Institut für Gesellschaftsforschung has viewed them as inextricably intertwined with morals (e.g., Beckert 2005, 2006). In the United States, by contrast, only now is research on morality beginning to be identified as a distinct sociological subject and subfield (cf. Hitlin and Vaisey 2010).

References

Abarbanell, Linda and Hauser, Marc, 2010. “Mayan Morality: An Exploration of Permissible Harms”, Cognition, 115, pp. 207-224.
Abbott, Andrew, 2007. “Mechanisms and Relations”, Sociologica, 2/2007.
Abend, Gabriel, 2008. “Two Main Problems in the Sociology of Morality”, Theory and Society, 37 (2), pp. 87-125.
Abend, Gabriel, 2010. “What’s New and What’s Old about the New Sociology of Morality” in Hitlin, Steven and Vaisey, Stephen, eds., Handbook of the Sociology of Morality (New York, Springer, pp. 561-584).
Ambady, Nalini and Bharucha, Jamshed, 2009. “Culture and the Brain”, Current Directions in Psychological Science, 18 (6), pp. 342-345.
Anscombe, G.E.M., 1958. “Modern Moral Philosophy”, Philosophy, 33 (124), pp. 1-19.
Anstey, Michael L. et al., 2009. “Serotonin Mediates Behavioral Gregarization Underlying Swarm Formation in Desert Locusts”, Science, 323, pp. 627-630.
Arnett, Jeffrey, 2008. “The Neglected 95%: Why American Psychology Needs to Become Less American”, American Psychologist, 63 (7), pp. 602-614.
Ayer, Alfred J., [1936] 1952. Language, Truth, and Logic (New York, Dover).
Barrett, Lisa F., 2006. “Are Emotions Natural Kinds?”, Perspectives on Psychological Science, 1, pp. 28-58.
Barrett, Lisa F., Lindquist, Kristen A. and Gendron, Maria, 2007. “Language as a Context for Emotion Perception”, Trends in Cognitive Science, 11, pp. 327-332.
Bartels, Daniel M., 2008. “Principled Moral Sentiment and the Flexibility of Moral Judgment and Decision Making”, Cognition, 108, pp. 381-417.
Bateman-Novaes, Simone, Ogien, Ruwen and Pharo, Patrick, eds., 2000. Raison pratique et sociologie de l’éthique. Autour des travaux de Paul Ladrière (Paris, CNRS Éditions).
Bayet, Albert, 1905. La Morale scientifique (Paris, Félix Alcan).
Bayet, Albert, 1925. La Science des faits moraux (Paris, Félix Alcan).
Beckert, Jens, 2005. The Moral Embeddedness of Markets, MPIfG Discussion Paper 05/6 (Köln, Max-Planck-Institut für Gesellschaftsforschung).
Beckert, Jens, 2006. “The Ambivalent Role of Morality on Markets” in Stehr, Nico, Henning, Christoph and Weiler, Bernd, eds., The Moralization of the Markets (New Brunswick, Transaction Books, pp. 109-128).
Belot, Gustave, [1907] 1921. Études de morale positive, two volumes (Paris, Félix Alcan).
Bennett, Max R. and Hacker, Peter M.S., 2003. Philosophical Foundations of Neuroscience (Malden, Blackwell).
Blackburn, Simon, 1984. Spreading the Word: Groundings in the Philosophy of Language (Oxford/New York, Oxford University Press/Clarendon Press).
Blackburn, Simon, 1992. “Through Thick and Thin”, Proceedings of the Aristotelian Society, supplementary 66, pp. 285-299.
Blumer, Herbert, 1969. Symbolic Interactionism: Perspective and Method (Berkeley, University of California Press).
Borg, Jana Schaich et al., 2006. “Consequences, Action, and Intention as Factors in Moral Judgments: An fMRI Investigation”, Journal of Cognitive Neuroscience, 18 (5), pp. 803-817.
Bouglé, Célestin Charles Alfred, 1922. Leçons de sociologie sur l’évolution des valeurs (Paris, Armand Colin).
Burge, Tyler, 1979. “Individualism and the Mental”, Midwest Studies in Philosophy, 4, pp. 73-121.
Burge, Tyler, 1986. “Individualism and Psychology”, Philosophical Review, 95, pp. 3-45.
Cacioppo, John T. et al., 2003. “Just Because You’re Imaging the Brain Doesn’t Mean You Can Stop Using Your Head: A Primer and Set of First Principles”, Journal of Personality and Social Psychology, 85 (4), pp. 650-661.
Carnap, Rudolf, 1935. Philosophy and Logical Syntax (London, K. Paul, Trench, Trubner & Co).
Chapman, H.A. et al., 2009. “In Bad Taste: Evidence for the Oral Origins of Moral Disgust”, Science, 323 (5918), pp. 1222-1226.
Chisholm, Roderick, 1994. “Ontologically Dependent Entities”, Philosophy and Phenomenological Research, 54 (3), pp. 499-450.
Choudhury, Suparna, Nagel, Saskia Kathi and Slaby, Jan, 2009. “Critical Neuroscience: Linking Neuroscience and Society through Critical Practice”, BioSocieties, 4, pp. 61-77.
Ciaramelli, Elisa et al., 2007. “Selective Deficit in Personal Moral Judgment Following Damage to Ventromedial Prefrontal Cortex”, Social Cognitive and Affective Neuroscience, 2, pp. 84-92.
Cobbe, Frances, 1872. Darwinism in Morals, and Other Essays (London/Edinburgh, Williams and Norgate).
Cohen, Jonathan D., 2005. “The Vulcanization of the Human Brain: A Neural Perspective on Interactions Between Cognition and Emotion”, Journal of Economic Perspectives, 19 (4), pp. 3-24.
Coulter, Jeff, 2008. “Twenty-Five Theses against Cognitivism”, Theory, Culture & Society, 25, pp. 19-32.
Crockett, Molly J. et al., 2010. “Serotonin Selectively Influences Moral Judgment and Behavior through Effects on Harm Aversion”, PNAS, 107, pp. 17433-17438.
Cushman, Fiery, Young, Liane and Hauser, Marc, 2006. “The Role of Reasoning and Intuition in Moral Judgments: Testing Three Principles of Harm”, Psychological Science, 17 (12), pp. 1082-1089.
De Ridder, Dirk et al., 2009. “Moral Dysfunction: Theoretical Model and Potential Neurosurgical Treatments” in Verplaetse, Jan et al., eds., The Moral Brain: Essays on the Evolutionary and Neuroscientific Aspects of Morality (Dordrecht, Springer, pp. 155-183).
Dewey, John, 1898. “Evolution and Ethics”, The Monist, 8 (3), pp. 321-341.
Doris, John and Stich, Stephen, 2005. “As a Matter of Fact: Empirical Perspectives on Ethics” in Jackson, Frank and Smith, Michael, eds., The Oxford Handbook of Contemporary Philosophy (Oxford, Oxford University Press, pp. 114-152).
Dupoux, Emmanuel and Jacob, Pierre, 2007. “Universal Moral Grammar: A Critical Appraisal”, Trends in Cognitive Science, 11, pp. 373-378.
Durkheim, Émile, [1893] 1984. The Division of Labor in Society, translated by Halls, W. D., and with an introduction by Coser, Lewis A. (New York, Free Press).
Durkheim, Émile, [1895] 1968. Les Règles de la méthode sociologique (Paris, PUF).
Durkheim, Émile, [1920] 1979. “Introduction to Ethics” in Pickering, W.S.F., ed., Durkheim: Essays on Morals and Education, translated by Sutcliffe, H. L. (London, Routledge & Kegan Paul, pp. 79-96).
Durkheim, Émile, 1975. Textes 1. Éléments d’une théorie sociale (Paris, Minuit).
Dwyer, Susan, Huebner, Bryce and Hauser, Marc D., 2009. “The Linguistic Analogy: Motivations, Results, and Speculations”, Topics in Cognitive Science, 2009, pp. 1-25.
Eklund, Matti, 2011. “What are Thick Concepts?”, Canadian Journal of Philosophy, 41 (1), pp. 25-50.
Ekman, Paul and Rosenberg, Erika L., eds., 2005. What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS) (New York, Oxford University Press).
Elstein, Daniel and Hurka, Thomas, 2009. “From Thick to Thin: Two Moral Reduction Plans”, Canadian Journal of Philosophy, 39 (4), pp. 515-536.
Fauconnet, Paul, 1920. La Responsabilité: étude de sociologie (Paris, Félix Alcan).
Fine, Cordelia, 2010. “From Scanner to Sound Bite”, Current Directions in Psychological Science, 19, pp. 280-283.
Fine, Kit, 1995. “Ontological Dependence”, Proceedings of the Aristotelian Society, 95, pp. 269-290.
Flanagan, Owen, 1991. Varieties of Moral Personality: Ethics and Psychological Realism (Cambridge, Harvard University Press).
Foot, Philippa, 1958. “Moral Arguments”, Mind, 67, pp. 502-513.
Foot, Philippa, 1958-1959. “Moral Beliefs”, Proceedings of the Aristotelian Society, 59, pp. 83-104.
Gibbard, Allan, 1990. Wise Choices, Apt Feelings: A Theory of Normative Judgment (Cambridge, Harvard University Press).
Gibbard, Allan, 1992. “Thick Concepts and Warrant for Feelings”, Proceedings of the Aristotelian Society, supplementary 66, pp. 267-283.
Gilbert, Margaret, 1992. On Social Facts (Princeton, Princeton University Press).
Glimcher, Paul W. and Rustichini, Aldo, 2004. “Neuroeconomics: The Consilience of Brain and Decision”, Science, 306, pp. 447-452.
Goodenough, Oliver R. and Tucker, Micaela, 2010. “Law and Cognitive Neuroscience”, Annual Review of Law and Social Science, 6, pp. 61-92.
Greene, Joshua D., 2008. “The Secret Joke of Kant’s Soul” in Sinnott-Armstrong, Walter, ed., Moral Psychology, Vol. 3 (Cambridge, MIT Press, pp. 35-79).
Greene, Joshua D. et al., 2001. “An fMRI Investigation of Emotional Engagement in Moral Judgment”, Science, 293, pp. 2105-2108.
Greene, Joshua D. et al., 2008. “Cognitive Load Selectively Interferes with Utilitarian Moral Judgment”, Cognition, 107 (3), pp. 1144-1154.
Greene, Joshua D. et al., 2009. “Pushing Moral Buttons: The Interaction between Personal Force and Intention in Moral Judgment”, Cognition, 111 (3), pp. 364-371.
Gurvitch, Georges, 1937. Morale théorique et science des mœurs (Paris, PUF).
Gurvitch, Georges, 1960. “Sociologie de la vie morale” in Gurvitch, Georges, ed., Traité de sociologie, tome second (Paris, PUF, pp. 137-172).
Haidt, Jonathan, 2001. “The Emotional Dog and its Rational Tail: A Social Intuitionist Approach to Moral Judgment”, Psychological Review, 108, pp. 814-834.
Haidt, Jonathan, Koller, Silvia H. and Dias, Maria G., 1993. “Affect, Culture, and Morality, Or Is It Wrong to Eat Your Dog?”, Journal of Personality and Social Psychology, 65, pp. 613-628.
Hare, Richard M., 1952. The Language of Morals (Oxford, Clarendon).
Hare, Richard M., 1997. Sorting out Ethics (Oxford/New York, Clarendon/Oxford University Press).
Hauser, Marc, 2006. Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong (New York, Ecco/HarperCollins).
Hauser, Marc et al., 2007. “A Dissociation between Moral Judgments and Justifications”, Mind and Language, 22 (1), pp. 1-21.
Hauser, Marc, Young, Liane and Cushman, Fiery, 2008. “Reviving Rawls’s Linguistic Analogy: Operative Principles and the Causal Structure of Moral Actions” in Sinnott-Armstrong, Walter, ed., Moral Psychology, Vol. 2 (Cambridge, MIT Press, pp. 107-143).
Healy, Kieran, 2006. Last Best Gifts: Altruism and the Market for Human Blood and Organs (Chicago, University of Chicago Press).
Heekeren, Hauke R. et al., 2003. “An fMRI Study of Simple Ethical Decision-Making”, NeuroReport, 14 (9), pp. 1215-1219.
Heekeren, Hauke R. et al., 2005. “Influence of Bodily Harm on Neural Correlates of Semantic and Moral Decision-Making”, Neuroimage, 24 (3), pp. 887-897.
Heimer, Carol A. and Staffen, Lisa R., 1998. For the Sake of the Children: The Social Organization of Responsibility in the Hospital and the Home (Chicago, University of Chicago Press).
Heine, Steven J. and Norenzayan, Ara, 2006. “Toward a Psychological Science for a Cultural Species”, Perspectives on Psychological Science, 1, pp. 251-269.
Henrich, Joseph et al., eds., 2004. Foundations of Human Sociality: Economic Experiments and Ethnographic Evidence from Fifteen Small-Scale Societies (Oxford/New York, Oxford University Press).
Henrich, Joseph, Heine, Steven J. and Norenzayan, Ara, 2010. “The Weirdest People in the World?”, Behavioral and Brain Sciences, 33, pp. 61-135.
Henry, P.J., 2008. “College Sophomores in the Laboratory Redux: Influences of a Narrow Data Base on Social Psychology’s View of the Nature of Prejudice”, Psychological Inquiry, 19 (2), pp. 49-71.
Hitlin, Steven and Vaisey, Stephen, eds., 2010. Handbook of the Sociology of Morality (New York, Springer).
Hobhouse, Leonard T., 1906. Morals in Evolution, 2 volumes (New York, Henry Holt & Co).
Hursthouse, Rosalind, 1999. On Virtue Ethics (Oxford/New York, Oxford University Press).
Hursthouse, Rosalind, 2009. “Virtue Ethics”, The Stanford Encyclopedia of Philosophy (Spring 2009 Edition), Zalta, Edward N., ed., URL = <http://plato.stanford.edu/archives/spr2009/entries/ethics-virtue/>.
Huxley, Thomas H., 1893. Evolution and Ethics (London/New York, Macmillan & Co).
Isambert, François-André, Ladrière, Paul and Terrenoire, Jean-Paul, 1978. “Pour une sociologie de l’éthique”, Revue française de sociologie, 19 (3), pp. 323-339.
Joyce, Kelly A., 2008. Magnetic Appeal: MRI and the Myth of Transparency (Ithaca, Cornell University Press).
Joyce, Richard, 2008. “What Neuroscience Can (and Cannot) Contribute to Metaethics” in Sinnott-Armstrong, Walter, ed., Moral Psychology, Vol. 3 (Cambridge, MIT Press, pp. 371-394).
Kant, Emmanuel, [1785] 1998. Groundwork of the Metaphysics of Morals, translated and edited by Gregor, Mary, with an introduction by Korsgaard, Christine M. (Cambridge/New York, Cambridge University Press).
Kant, Emmanuel, [1797] 1991. The Metaphysics of Morals, introduction, translation and notes by Gregor, Mary (Cambridge/New York, Cambridge University Press).
Kant, Emmanuel, [1798] 2006. Anthropology from a Pragmatic Point of View, translated and edited by Louden, Robert B., with an introduction by Kuehn, Manfred (Cambridge/New York, Cambridge University Press).
Keltner, Dacher and Buswell, Brenda N., 1996. “Evidence for the Distinctness of Embarrassment, Shame, and Guilt: A Study of Recalled Antecedents and Facial Expressions of Emotion”, Cognition and Emotion, 10, pp. 155-171.
Keltner, Dacher, Ekman, Paul, Gonzaga, Gian C. and Beer, Jennifer, 2003. “Facial Expression of Emotion” in Davidson, Richard, Scherer, Klaus R. and Goldsmith, Hill H., eds., Handbook of Affective Science (London, Oxford University Press, pp. 415-432).
Kitcher, Philip, 2006. “Ethics and Evolution: How to Get Here from There” in Ober, Josiah and Macedo, Stephen, eds., Primates and Philosophers: How Morality Evolved (Princeton, Princeton University Press, pp. 120-139).
Knobe, Joshua and Nichols, Shaun, eds., 2008. Experimental Philosophy (New York, Oxford University Press).
Korsgaard, Christine, 1996. The Sources of Normativity (Cambridge/New York, Cambridge University Press).
Ladrière, Paul, 2001. Pour une sociologie de l’éthique (Paris, PUF).
Lavazza, Andrea and De Caro, Mario, 2010. “Not So Fast: On Some Bold Neuroscientific Claims Concerning Human Agency”, Neuroethics, 3, pp. 23-41.
Leroux, Emmanuel, 1930. “Ethical Thought in France since the War”, International Journal of Ethics, 40 (2), pp. 145-178.
Letourneau, Charles, 1887. L’Évolution de la morale (Paris, A. Delahaye et É. Lecrosnier).
Lévy-Bruhl, Lucien, 1903. La Morale et la science des mœurs (Paris, Félix Alcan).
Lindquist, Kristen et al., 2006. “Language and the Perception of Emotion”, Emotion, 6, pp. 125-138.
Lindquist, Kristen and Barrett, Lisa F., 2008. “Constructing Emotion: The Experience of Fear as a Conceptual Act”, Psychological Science, 19, pp. 898-903.
Logothetis, Nikos K., 2008. “What We Can Do and What We Cannot Do with fMRI”, Nature, 453, pp. 869-878.
MacIntyre, Alasdair C., 1981. After Virtue: A Study in Moral Theory (Notre Dame, University of Notre Dame Press).
Martineau, Harriet, 1838. How to Observe: Morals and Manners (London, C. Knight and Co).
May, Larry, Friedman, Marilyn and Clark, Andy, eds., 1996. Mind and Morals: Essays on Cognitive Science and Ethics (Cambridge, MIT Press).
McDowell, John, [1981] 1998. “Non-Cognitivism and Rule-Following” in McDowell, John, Mind, Value, and Reality (Cambridge, Harvard University Press, pp. 198-218).
Mead, George H., 1934. Mind, Self and Society: From the Standpoint of a Social Behaviorist, edited with introduction by Morris, Charles W. (Chicago, University of Chicago Press).
Mendez, Mario F., 2009. “The Neurobiology of Moral Behavior: Review and Neuropsychiatric Implications”, CNS Spectrums, 14, pp. 608-620.
Mikhail, John, 2007. “Universal Moral Grammar: Theory, Evidence, and the Future”, Trends in Cognitive Sciences, 11 (4), pp. 143-152.
Miller, Greg, 2008. “Growing Pains for fMRI”, Science, 320, pp. 1412-1414.
Moll, Jorge, Oliveira-Souza, Ricardo de and Eslinger, Paul J., 2003. “Morals and the Human Brain: A Working Model”, NeuroReport, 14 (3), pp. 299-305.
Murdoch, Iris, 1956. “Symposium: Vision and Choice in Morality”, Proceedings of the Aristotelian Society, supplementary volume 30, pp. 32-58.
Murdoch, Iris, 1970. The Sovereignty of Good (London, Routledge & K. Paul).
Nado, Jennifer, Kelly, Daniel and Stich, Stephen, 2009. “Moral Judgment” in Symons, John and Calvo, Paco, eds., The Routledge Companion to Philosophy of Psychology (New York, Routledge, pp. 621-633).
Nesse, Randolph, 2009. “How Can Evolution and Neuroscience Help Us Understand Moral Capacities?” in Verplaetse, Jan et al., eds., The Moral Brain: Essays on the Evolutionary and Neuroscientific Aspects of Morality (Dordrecht, Springer, pp. 201-209).
Panksepp, Jaak, 2000. “Emotions as Natural Kinds within the Mammalian Brain” in Lewis, Michael and Haviland-Jones, Jeannette M., eds., Handbook of Emotions (New York, Guilford, pp. 87-107).
Pharo, Patrick, 2004. Morale et sociologie: le sens et les valeurs entre nature et culture (Paris, Gallimard).
Pincoffs, Edmund L., 1971. “Quandary Ethics”, Mind, 80, pp. 552-571.
Pincoffs, Edmund L., 1986. Quandaries and Virtues: Against Reductivism in Ethics (Lawrence, University Press of Kansas).
Prinz, Jesse, 2007. The Emotional Construction of Morals (Oxford/New York, Oxford University Press).
Putnam, Hilary, 1990. “Objectivity and the Science/Ethics Distinction” in Putnam, Hilary, Realism with a Human Face (Cambridge, Harvard University Press, pp. 163-178).
Putnam, Hilary, 1992. “Bernard Williams and the Absolute Conception of the World” in Putnam, Hilary, Renewing Philosophy (Cambridge, Harvard University Press, pp. 80-107).
Putnam, Hilary, 2002. The Collapse of the Fact/Value Dichotomy and Other Essays (Cambridge, Harvard University Press).
Putnam, Hilary, 2004. Ethics without Ontology (Cambridge, Harvard University Press).
Quine, Willard Van Orman, 1969. Ontological Relativity and Other Essays (New York, Columbia University Press).
Racine, Eric, Bar-Ilan, Ofek and Illes, Judy, 2005. “fMRI in the Public Eye”, Nature Reviews Neuroscience, 6, pp. 159-164.
Railton, Peter, 2000. “Darwinian Building Blocks” in Katz, Leonard D., ed., Evolutionary Origins of Morality: Cross-Disciplinary Perspectives (Thorverton/Bowling Green, Imprint Academic, pp. 55-60).
Rose, Steven, 2005. The Future of the Brain: The Promise and Perils of Tomorrow’s Neuroscience (Oxford/New York, Oxford University Press).
Ross, Lee and Nisbett, Richard, 1991. The Person and the Situation: Perspectives of Social Psychology (New York, McGraw-Hill).
Salvador, Rommel and Folger, Robert G., 2009. “Business Ethics and the Brain”, Business Ethics Quarterly, 19 (1), pp. 1-31.
Schnall, Simone, Haidt, Jonathan, Clore, Gerald L. and Jordan, Alexander H., 2008. “Disgust as Embodied Moral Judgment”, Personality and Social Psychology Bulletin, 34, pp. 1096-1109.
Schurman, Jacob Gould, 1887. The Ethical Import of Darwinism (New York, Charles Scribner’s Sons).
Scull, Andrew T., 1993. The Most Solitary of Afflictions: Madness and Society in Britain, 1700-1900 (New Haven, Yale University Press).
Scull, Andrew T., 2005. Madhouse: A Tragic Tale of Megalomania and Modern Medicine (New Haven, Yale University Press).
Sears, David O., 1986. “College Sophomores in the Laboratory: Influences of a Narrow Database on Social Psychology’s View of Human Nature”, Journal of Personality and Social Psychology, 51, pp. 515-530.
Sidgwick, Henry, 1876. “The Theory of Evolution in its Application to Practice”, Mind, 1 (1), pp. 52-67.
Sidgwick, Henry, 1880. “Mr. Spencer’s Ethical System”, Mind, 5 (18), pp. 216-226.
Sidgwick, Henry, 1899. “The Relation of Ethics to Sociology”, International Journal of Ethics, 10 (1), pp. 1-21.
Simmel, Georg, [1892-1893] 1989-1991. Einleitung in die Moralwissenschaft: Eine Kritik der ethischen Grundbegriffe (Frankfurt/Main, Suhrkamp).
Spencer, Herbert, 1879. The Data of Ethics (New York, D. Appleton and Company).
Stalnaker, Robert, 1999. Context and Content: Essays on Intentionality in Speech and Thought (Oxford/New York, Oxford University Press).
Stephen, Leslie, 1882. The Science of Ethics (New York, G. P. Putnam’s Sons).
Stevenson, Charles L., 1944. Ethics and Language (New Haven/London, Yale University Press).
Strawson, Peter F., 1952. Introduction to Logical Theory (London/New York, Methuen/Wiley).
Sutherland, Alexander, 1898. The Origin and Growth of the Moral Instinct (London/New York, Longmans, Green).
Tancredi, Laurence, 2005. Hardwired Behavior: What Neuroscience Reveals about Morality (Cambridge/New York, Cambridge University Press).
Taylor, Charles, 1959. “Ontology”, Philosophy, 34 (129), pp. 125-141.
Taylor, Charles, 1985. Philosophy and the Human Sciences (Cambridge/New York, Cambridge University Press).
Taylor, Charles, 1989. Sources of the Self: The Making of the Modern Identity (Cambridge, Harvard University Press).
Taylor, Charles, 2003. “Ethics and Ontology”, The Journal of Philosophy, 100 (6), pp. 305-320.
Thomson, Judith Jarvis, 1985. “The Trolley Problem”, The Yale Law Journal, 94 (6), pp. 1395-1415.
Tost, Heike and Meyer-Lindenberg, Andreas, 2010. “I Fear for You: A Role for Serotonin in Moral Behavior”, PNAS, 107, pp. 17071-17072.
Tufts, James Hayden, 1912. “Recent Discussions of Moral Evolution”, Harvard Theological Review, 5 (2), pp. 155-179.
Turner, Jonathan H. and Stets, Jan E., 2006. “Moral Emotions” in Stets, Jan E. and Turner, Jonathan H., eds., Handbook of the Sociology of Emotions (New York, Springer, pp. 544-566).
Väyrynen, Pekka, 2009. “Objectionable Thick Concepts in Denials”, Philosophical Perspectives, 23, pp. 439-469.
Verplaetse, Jan, Braeckman, Johan and De Schrijver, Jelle, 2009. “Introduction” in Verplaetse, Jan, De Schrijver, Jelle, Vanneste, Sven and Braeckman, Johan, eds., The Moral Brain: Essays on the Evolutionary and Neuroscientific Aspects of Morality (Dordrecht, Springer, pp. 1-43).
Waal, Frans de, 1996. Good Natured: The Origins of Right and Wrong in Humans and Other Animals (Cambridge, Harvard University Press).
Waal, Frans de, 2006. “The Tower of Morality” in Ober, Josiah and Macedo, Stephen, eds., Primates and Philosophers: How Morality Evolved (Princeton, Princeton University Press, pp. 161-181).
Waldmann, Michael R. and Dieterich, Jörn, 2007. “Throwing a Bomb on a Person Versus Throwing a Person on a Bomb: Intervention Myopia in Moral Intuitions”, Psychological Science, 18 (3), pp. 247-253.
Weisberg, Deena S. et al., 2008. “The Seductive Allure of Neuroscience Explanations”, Journal of Cognitive Neuroscience, 20 (3), pp. 470-477.
Westermarck, Edward, 1906-1908. The Origin and Development of the Moral Ideas (London/New York, Macmillan).
Wheatley, Thalia and Haidt, Jonathan, 2005. “Hypnotically Induced Disgust Makes Moral Judgments More Severe”, Psychological Science, 16, pp. 780-784.
Williams, Bernard A.O., 1981. Moral Luck: Philosophical Papers, 1973-1980 (Cambridge/New York, Cambridge University Press).
Williams, Bernard A.O., 1985. Ethics and the Limits of Philosophy (Cambridge, Harvard University Press).
Wilson, Robert A., 1995. Cartesian Psychology and Physical Minds: Individualism and the Sciences of the Mind (Cambridge, Cambridge University Press).
Wilson, Robert A., 2004. Boundaries of the Mind: The Individual in the Fragile Sciences (Cambridge/New York, Cambridge University Press).
Young, Liane and Saxe, Rebecca, 2008. “The Neural Basis of Belief Encoding and Integration in Moral Judgment”, NeuroImage, 40, pp. 1912-1920.
Zahn, Roland et al., 2009. “The Neural Basis of Human Social Values: Evidence from Functional MRI”, Cerebral Cortex, 19 (2), pp. 276-283.
Zak, Paul J., Kurzban, Robert and Matzner, William T., 2004. “The Neurobiology of Trust”, Annals of the New York Academy of Sciences, 1032, pp. 224-227.
Zeki, Semir and Goodenough, Oliver, eds., 2006. Law and the Brain (Oxford/New York, Oxford University Press).