The Hedgehog Review: Vol. 18 No. 3 (Fall 2016)

Where the New Science of Morality Goes Wrong

James Davison Hunter and Paul Nedelisky

Can science provide the foundation of morality?

The social implications of this question are enormous. We live in a time rife with disagreement, conflict, and violence—a time marked by clashes that are often rooted in competing conceptions of the good. How, then, do we settle disagreements among rival ethical systems? Surely in a day of cosmopolitan sophistication, there must be some rational method that might help arbitrate these disputes, some compelling logic that might provide a common foundation for moral belief and commitment. Does science offer the answer?

Many would reflexively say “no.” But such dismissals overlook what some of the brightest lights in Western philosophy and science have attempted to do during the last four centuries. And while that quest has had its ups and downs, it continues in the efforts of some of the most prominent philosophers and scientists at work today. Those efforts have already greatly influenced public discourse, not least through books and articles whose authors claim to show “how science can determine human values” or to disclose “the universal moral instincts caused by evolution.” Should we be surprised? The methods of science—observation, experimentation, theory building—have delivered a persuasive picture of the material world, producing a broad consensus from a wide range of physical and natural sciences. Perhaps science can do for morality what it has done for physics, chemistry, biology, astronomy, and mathematics, and the technologies upon which they are based.

Indeed, some believe that we are at the start of a new age, when the power of science will dispel myths surrounding morality and moral difference and establish a truly rational foundation for ethical truth.1 If so, this age will be based on a new moral synthesis that derives from the conceptual architecture of three main schools of Enlightenment thinking on this matter. The first is the psychologized sentimentalism of David Hume: the idea that the basis of moral judgment lies in human psychology, which can be studied empirically, like any other aspect of the physical universe. The second strand is the evolutionary account of the mind derived from the work of Charles Darwin. After all, something must explain why human moral psychology is the way it is—why we have the moral impulses and dispositions we do—and the answer is provided by research into the pressures and environments that shaped our evolutionary ancestors. The third school of thought is a utilitarian conception of the nature of morality, pioneered by Jeremy Bentham and John Stuart Mill. According to this conception, what is right and wrong is understood primarily in terms of beneficial or pleasant consequences: the greatest good for the greatest number.

To be sure, the general synthesis of these elements is as novel as the new technologies (such as brain scans) that aid its pursuit, but in terms of the conceptual tools that guide contemporary efforts to fix a scientific foundation for morality, the apparatus is well established. It is within this paradigm, then, that moral psychology and neuroscience operate and where experimental insight currently accumulates.

The hope that the new moral science will provide new answers to old questions runs strong, to say the least. Not all of the new moral scientists aim to directly, empirically discover moral truths.2 But all of them think that their scientific approaches can make progress on moral questions where traditional approaches have failed.

Modern science has indeed taught us much about so many things. Fine-grained observation, along with systemization, has permitted confirmation and disconfirmation of theories, revealing the deep levels of physical reality, the chemical building blocks of life, and the wider nature of the universe. This knowledge has brought previously unimaginable utility. Applied to the problems of human existence, science has bestowed immeasurable benefits for health, longevity, comfort, ease of living, and security. A central factor in the achievement of science is the indubitability of its method and findings. Yet how science proceeds on the front of human morality is not entirely clear. All the excitement notwithstanding, there is a fundamental dilemma facing any moral science. If a genuine science of morality is to be established—one capable of adjudicating moral differences—it must meet at least two challenges: the challenge of definition and the challenge of demonstration. As we will explain, the dilemma emerges because a science of morality can meet at most only one of these two challenges; grasping one horn requires releasing the other.

The Challenge of Definition

Science requires clarity and consensus about the phenomenon under study. Biology would not be much of a science if biologists could never agree on which things were cells, and chemistry would not be much of a science if chemists constantly wrangled over the elements that make up the periodic table. Similarly, it is essential to a science of morality that morality be conceptualized in a way that fits the underlying reality, with moral terms defined so that they convincingly and closely fit the target phenomena. That is, a conceptualization of morality must make clear that what it describes really is morality and not merely something that vaguely resembles, approximates, or accompanies morality—otherwise, it will be open to the charge that it isn’t really an account of morality at all. It will fail to provide sufficient intellectual consensus for scientific authority and incremental progress in the development of a body of scientific knowledge.

Scientific theories of morality that fail to meet the challenge of definition can fail by including things in their definitions of moral terms that don’t belong (e.g., whether an action is done on a Tuesday). They can also fail by excluding things that do belong (e.g., rights or duties). But they can also fail by using moral terms that are clearly defined but don’t fit the context in which they are used. The fact is, the term morality has several legitimate meanings, each of which has decisive implications for what exactly a scientific theory of morality in those terms is really showing us.

The Prescriptive, the Descriptive, and the Prudential

Consider three different types or senses of morality and their implications for scientific theories of morality.

First, morality can mean the realm of right and wrong, good and bad, whether those terms are grounded in fundamental moral laws or in the value of particular things and states of affairs. This is the sense of morality we mean when we say that, for instance, killing innocent people for fun is morally wrong or that racism is immoral. This morality is prescriptive, in that it is supposed to guide human action on a morally justifiable basis. This is the kind of morality we might variously call “genuine,” “real,” “prescriptive,” or “authoritative” morality.

Second, morality can mean the realm of social rules and practices by which groups of human beings constrain what they take to be forbidden and encourage what they take to be worth pursuing or promoting. This is the sense of morality we intend when we talk about a society’s moral code without meaning to say anything about whether that code “really is” right or wrong. We might call morality, in this sense, descriptive.

Third, morality can mean something more practical or instrumental. In this sense, morality concerns what one should and shouldn’t do, but the “should” isn’t a moral “should” in the genuine, prescriptive sense. That is, there’s a kind of “ought” that is practical without being ethical. It’s the sort of “ought” we mean when we say things like “Well, if you want to win the lottery, then you ought to buy some tickets.” In such cases we aren’t trying to say that anyone morally ought to buy lottery tickets, but simply that if someone’s goal were to win the lottery, then in order to achieve that goal he would have to buy some lottery tickets. This kind of normativity is sometimes called “instrumental” or “hypothetical.”

Recent books on the science of morality are full of various claims about what this or that neuroscientific discovery—or evolutionary developmental story—tells us about morality. But do such claims mean to tell us something new about what really is the case, morally speaking? Do they say something about what we should or shouldn’t do beyond just what certain people might happen to think we should do? Or do such claims merely describe certain human practices or impulses that are causally related to this or that neurological property or evolutionary event? That we can’t have this or that moral feeling without activity in a particular region of the brain? Which of these is meant matters a great deal for figuring out what is being shown.

Moving from “Ought” to “Is”—A Case from Neuroscience

As it turns out, much of the recent scientific study of morality makes little attempt to clarify which sense of morality is under discussion. In fact, those who advocate a science of morality commonly conflate the various meanings of the word, as though the differences didn’t exist or didn’t matter.

Consider, as a case in point, philosopher Patricia Churchland’s inquiry into morality—Braintrust: What Neuroscience Tells Us About Morality. Initially, it seems clear that Churchland intends to illuminate genuine, prescriptive morality by looking at empirical research. Early in her book, she describes how she felt that philosophy had little to offer in answer to philosophical questions like “What is it to be fair? How do we know what to count as fair?” She says that while it did seem plausible that Aristotle, Hume, and Darwin were right that humans are social by nature, “without relevant, real data from evolutionary biology, neuroscience, and genetics, I could not see how to tether ideas about ‘our nature’ to the hard and fast.” But by drawing on these empirical fields of study, along with a “philosophical framework consilient with those data,” she writes, “we can now meaningfully approach the question of where values come from.” She says her aim is “to explain what is probably true about our social nature, and what that involves in terms of the neural platform for moral behavior.” This will be helpful, she explains, because “a deeper understanding of what it is that makes humans and other animals social, and what it is that disposes us to care about others, may lead to greater understanding of how to cope with social problems.”3

The impression left on the casual reader is that philosophy can’t answer big moral questions, but that science can—and that Churchland intends to help answer these big moral questions. So it would seem that she is using moral terms in the genuine, prescriptive sense. But on a more careful reading, it is less clear that this is the case.

First, notice the shift that occurs. She begins with big moral questions—moral in the prescriptive sense: “What is it to be fair?” and so forth. Philosophy couldn’t help her answer these questions; she felt she needed “hard and fast” data. So she turned to the sciences, which gave her data on the “neural platform for moral behavior,” thereby helping her figure out “where values come from.” Churchland thinks that examination of human brains reveals the neurochemical nature of certain impulses that give rise to social behavior—these impulses she describes as “values”—and that we can figure out what is best to do given that we have these values:

The truth seems to be that the values rooted in the circuitry for caring—for well-being of self, offspring, mates, kin, and others—shape social reasoning about many issues: conflict resolution, keeping the peace, defense, trade, resource distribution, and many other aspects of social life in all its vast richness. Not only do these values and their material basis constrain social problem-solving, they are at the same time facts that give substance to the processes of figuring out what to do—facts such as that our children matter to us, and that we care about their well-being; that we care about our clan. Relative to these values, some solutions to social problems are better than others, as a matter of fact; relative to these values, practical policy decisions can be negotiated.4

But what would knowledge of the neural platform for moral behavior or knowledge of the chemical requirements for valuing something tell us about what it is to be fair? It is hard to tell. The “hard and fast” data Churchland seeks from science has something to do with morality, but does it illuminate prescriptive morality—what we should do—or just tell us something descriptively about the machinery that supports our ability to consider, make, and follow moral judgments? Churchland offers us little help toward understanding the putative relationship between the detailed scientific results she describes and the big, genuinely moral questions that originally animated her research.

There is another confusion. Churchland explicitly claims that morality is real. “It is as real as real can be,” she says, explaining that, in fact, “some social practices are better than others, some institutions are worse than others, and genuine assessments can be made against the standard of how well or poorly they serve human well-being.” At first blush, it seems as though she is referring to a prescriptive understanding of morality. At the same time, she claims that morality is best understood as a form of human social behavior. As she puts it, “Social behavior and moral behavior appear to be part of the same spectrum of actions, where those actions we consider ‘moral’ involve more serious outcomes than do merely social actions such as bringing a gift to a new mother.”5 The difference lies in the “seriousness” of the elements we call “moral.”6

But what is this seriousness that makes some social actions moral? Churchland doesn’t elaborate, and yet this is a crucial question. Either this seriousness is itself some valence of genuine, prescriptive morality, or it isn’t but, instead, is some kind of ultimately non-moral way of distinguishing the actions we take to be moral from those we don’t. If she means the first—that moral actions really are prescriptively guided—then we’re again left with no sense of how the contingent and manipulable presence of neurochemicals could be legitimately action-guiding. But if she means the second—that the “seriousness” that makes some social actions moral doesn’t bestow genuine action-guiding prescriptivity—then she’s no longer addressing the big, genuinely moral questions she originally claimed to be pursuing. In this case, she would merely be articulating a theory describing how the neurochemical components of our moral judgment arose and function, without telling us anything about what’s right and wrong, or where values come from.

In the end, Churchland can only assert, without evidence, that values are just those things humans care about: Given that we care about certain things, there are facts about what course of action would best achieve or promote these things. So the sense of morality that Churchland thinks science illuminates is instrumental or hypothetical—tied to what we happen to want—rather than a morality that might legitimately claim to show us how to live.

The Case of Altruism

Consider altruism as another example of the challenge of definition. In a recent book, Does Altruism Exist?, biologist David Sloan Wilson argues that science can demonstrate that altruism is real. Whether this is an interesting claim depends entirely on how Wilson defines altruism. He gets off to a promising start, beginning his book by defining it as “a concern for the welfare of others as an end in itself.”7 This definition places altruism firmly in the prescriptive realm, given, among other reasons, the inclusion of the ideas of the goodness of intention and a Kantian concern for human beings as ends in themselves. If Wilson could provide a scientific argument for the existence of this kind of altruism, it would be a major breakthrough.

But there are two different senses of altruism at play in Wilson’s argument. One is prescriptive and the other is descriptive; one is ethically intended and directed, the other biologically elucidated. In biology, when one organism increases another organism’s reproductive fitness at a cost to its own, that is altruism. This definition stands in contrast with an ethical understanding of altruism, in which the idea is framed in terms of acting with the intention of benefiting another, without regard for the cost to oneself.

Unfortunately, Wilson eventually migrates from plausible, ethically relevant understandings of altruism to a more empirically tractable behavioral definition. Toward the end of his book, Wilson is explicit about his act of redefinition:

Altruism exists. If by altruism we mean traits that evolve by virtue of benefitting whole groups, despite being selectively disadvantageous within groups, then altruism indubitably exists and accounts for group-level functional organization we see in nature.8

This account of altruism takes us far from a recognizably prescriptive definition of altruism. Wilson is aware of this, but is unconcerned:

Altruism is often defined as a particular psychological motive that leads to other-oriented behaviors, which needs to be distinguished from other kinds of motives. Once the existence of altruism hinges on distinctions among motives, it becomes difficult to study because motives are less transparent than actions.… But to the degree that different psychological motives result in the same actions, we shouldn’t care much about distinguishing among them, any more than we should care about being paid with cash or a check. It’s not right to privilege altruism as a psychological motive when other equivalent motives exist.9

So Wilson claims that satisfying the biological, behavioristic definition of altruism is all that really matters: “It doesn’t matter whether he gets paid in cash or by check.”

But of course this assertion is far from evident, and will certainly face much resistance. Do you only care that your spouse acts as though he loves you? That he says complimentary things to you, that he appears to enjoy conversation with you, that he pulls his weight on the household chores, that he contributes income to the family, that he says the things a loving spouse should say, that he appears to be sexually attracted to you, that he remembers your birthday? What if you discovered that he does all of these things without feeling anything for you—or worse: He does all these things while secretly detesting you? Wilson’s claim is that this is just a “cash or check” situation—just so long as he’s doing all the observable things he would do if he really did love you, then the underlying motives, intentions, and desires are irrelevant to whether he’s acting altruistically. It is difficult to imagine that Wilson himself would be indifferent to these motives and intentions. Such a relationship would be functional, but loveless—indeed, missing precisely the element that makes acts genuinely altruistic.

Wilson’s definition of the key moral term in his study ultimately renders his account incoherent, and possibly even irrelevant to a science of altruism. This is a common outcome for scientific accounts that falter on the challenge of definition.

Definition and Specificity

Questions of method and measurement are critical to science, but they are irrelevant if researchers are unable to clearly mark out the object of inquiry in a credible, consistent, and persuasive way. Conceptual clarity, if not precision, is essential, yet this is often missing in the new moral science. What unfolds looks something like a shell game. Scholarship presents itself as addressing questions of prescriptive morality, but through a sleight of hand it puts descriptive and instrumental definitions of morality into play in ways that conflate the meanings of the terms. This is confusing, to say the least.

Yet there is another fundamental problem attending the absence of conceptual clarity, and it bears on what is understood to be moral reality for scientific purposes. Invariably, the science of morality is directed toward unearthing and understanding universally shared moral principles. These are ethical generalities that take shape as moral-philosophical abstractions. The evidence used to address this stratum of moral reality is presumed to be species-wide, whether it is drawn from neurochemistry, the evolutionary record, or public-opinion surveys.

This presumption is fine as far as it goes, but it barely scratches the surface of morality as it exists empirically in the lives of individuals, groups, communities, and nations. While it may be possible to speak of universal moral principles, nearly all of what we actually know and experience of morality exists only in its particularity, in a bewildering array of complex, contradictory, and, more often than not, conflicting moral traditions, stories, and ideals—made all the more complex by virtue of race, ethnicity, gender, regionality, political economy, and history. In this empirical complexity, the new moral science shows little interest or curiosity. It is as if the best way to address empirical difference is to ignore it altogether.

But any intellectual inquiry that disregards empirical specificity, especially in its messiness, fails to meet the most rudimentary requirements of a science. More basic still is that without the rigors of an inductive method working upward through empirical complexity, there can be no confidence that the concept of morality that is used will bear any resemblance to morality as it exists in individual and social life. Such inquiry can produce only vague generalities and broad speculations but nothing like conclusions rooted in scientific rigor. The challenge of definition, then, is a formidable challenge. But there is another.

The Challenge of Demonstration

Early in the quest to find a scientific foundation for morality, the seventeenth-century Dutch scholar Hugo Grotius thought that postulating rights would help establish a moral basis for laws that “if you rightly consider, are manifest and self-evident, almost after the same Manner as those Things are that we perceive with our outward Senses.”10 Did he succeed in this ambition? Given the continued disagreement over morality in the centuries since, it’s clear that he did not. But why? A number of answers could be given here, but perhaps one of the main reasons is that rights—if there are such things—are not evident to the senses in the way Grotius thought. Joshua Greene, a Harvard University psychologist and neuroscientist, explains the problem this way:

Appeals to “rights” function as an intellectual free pass, a trump card that renders evidence irrelevant. Whatever you and your fellow tribes-people feel, you can always posit the existence of a right that corresponds to your feelings.… Rights and duties are the modern moralist’s weapons of choice, allowing us to present our feelings as nonnegotiable facts. By appealing to rights, we excuse ourselves from the hard work of providing real, non-question-begging justifications for what we want.… We have, at present, no non-question-begging way to figure out who has which rights.11

In short, rights, even if they exist, can’t be demonstrated.

The problem here isn’t that rights are irrelevant to morality or that they couldn’t potentially help explain morality. The problem is that insofar as morality is explained in terms of rights, to that degree it cannot be empirically demonstrated. Perhaps this isn’t a problem for most of us, or even for most moral theories. But this is an ineradicable problem for any theory of morality that claims to be scientific, that is, that intends its claims about morality to be empirically observable or demonstrable in some way.

This is the challenge of demonstration: For a theory of morality to be scientific, it must tie its claims about the nature of morality to observable reality strongly enough to demonstrate that it is getting the nature of morality right. Put more sharply: A science of morality must be able to demonstrate empirically that its claims about morality are true.

The Case of Well-Being

The new moral scientists sometimes offer examples that they think show that science has demonstrated (or can demonstrate) that certain moral claims are true or false. A favorite is the health or medical analogy. Neuroscientist and author Sam Harris, for example, employs the health analogy to argue that science can demonstrate moral value. A bit more circumspect than some who use the analogy, he recognizes that he’s assuming that certain observable properties are tied to certain moral values. Harris puts it this way:

Science cannot tell us why, scientifically, we should value health. But once we admit that health is the proper concern of medicine, we can then study and promote it through science…. I think our concern for well-being is even less in need of justification than our concern for health is.… And once we begin thinking seriously about human well-being, we will find that science can resolve specific questions about morality and human values.12

Harris makes two assumptions—first, that well-being is a moral good, and, second, that we know what the observable properties of well-being are. Yet he doesn’t see these assumptions as problematic for the scientific status of his argument. After all, he reasons, we make similar assumptions in medicine, and we can all recognize that medicine is still a science. What he fails to see is that this reasoning is fatal to his claim that science can determine moral values. To make the problem more vivid, compare his argument above with arguments that share the same logic and structure:

  • Science cannot tell us why, scientifically, we should value the enslavement of Africans. But once we admit that slavery is the proper concern of social science, we can then study and promote it through science. I think our concern for embracing slavery is even less in need of justification than our concern for health is. And once we begin thinking seriously about slavery, we will find that science can resolve specific questions about morality and human values.
  • Science cannot tell us why, scientifically, we should value the purging of Jews, gypsies, and the mentally disabled from society. But once we admit that their eradication is the proper concern of social science, we can then study and promote it through science.
  • Science cannot tell us why, scientifically, we should value a prohibition on gay marriage. But once we admit that such a prohibition is the proper concern of social science, we can then study and promote it through science.

Although these parallel arguments are outlandish to our ears today, they all, in fact, have historical precedent—and from not so long ago. Most tellingly, these arguments rely on the same logic as Harris’s. But of course they have little hope of showing that we should approve slavery, prohibit gay marriage, and bring about the elimination of Jews, gypsies, and the mentally disabled. Why? Because these arguments merely assume that we should, then recommend the scientific study and promotion of these ends.

Harris is doing the same thing. When he applies this argument to health, it can seem more powerful than it really is. After all, most of Harris’s readers will already agree that health is good and, as such, can be illuminated via scientific study. But once we see that this line of thought can be applied just as readily to values we do not agree with, it becomes clear that the assumption of a particular value doesn’t make it scientifically demonstrable. If assuming that gay marriage should be prohibited or that slavery should be encouraged undermines the scientific status of any attempt to support those positions empirically, then Harris’s assumptions about the value and nature of well-being undermine the scientific status of his project in just the same way.

The problem with Harris’s argument, as with the parody arguments, is that not just any value assumption, when pursued empirically, can thereby be sanctified as science. Harris is right that even paradigmatic instances of science rely on value assumptions, for example, that the truth should be pursued, or that claims should be verifiable insofar as possible. But valuing truth is required for any genuine inquiry, and valuing empirical verification is helpful for finding out which aspects of the world can be demonstrated in a strong way. This is why these values are so widely and easily assumed, across so many cultures. The same cannot be said of just any value assumption at all, not least Harris’s.

What Harris presents, then, isn’t science, but is, rather, science-plus-a-controversial-assumption. And it’s an assumption that cannot be demonstrated empirically. This is fine so far as it goes: The world should have room for confessional pursuit of answers given assumed starting points. But such confessional, “science-plus” approaches must not be confused with the sort of science that has given us contemporary chemistry, physics, and biology. It also places Harris on the same field of play as anyone else who wants to add a controversial assumption—including those who oppose gay marriage or might support slavery. Harris presents himself as promising a legitimate, chemistry-worthy science of morality. But upon closer examination, all he can muster is a “science-plus” wherein the interesting moral claim—the “plus”—is assumed and asserted rather than demonstrated.

The Case of Happiness

The challenge of demonstration plays out differently in the proliferation of studies addressing the phenomenon of happiness or well-being in human life.

First, it is historically undeniable that happiness has been tightly coupled to morality. In the Western world, whether in ancient or medieval thought, happiness presupposed an objective moral order to which one would conform one’s life. For Aristotle, for example, happiness was not the effect of momentary pleasure, but rather the result of the cultivation of a life lived virtuously—as he said in the Nicomachean Ethics, “an activity of the soul in accordance with virtue.”

It really wasn’t until the Enlightenment that anything like the contemporary notion of happiness was given expression. As the French revolutionary Louis Antoine de Saint-Just put it in 1794, “Happiness is a new idea.” While far from true, his observation called attention to the fact that the meaning of happiness had shifted to something we might today call “positive emotion” or “psychological well-being” and was then taken as an end in itself. This was new.

There is some variation today. Most define happiness narrowly, in the tradition of Bentham, as a subjective state of buoyancy, positive feeling, pleasurable emotions, merriment, and the like.13 Others attempt to define it more broadly, in the Aristotelian tradition, as flourishing—a conception that includes valuable activity.14 But even in the latter effort, the virtues often are not understood as intrinsic and mind-independent goods, but rather are redefined into functional capacities oriented toward generating positive emotions. In the end, the road still leads back to Bentham.

What is taken as “happiness” today is, then, far from a universal feature of the human condition; rather, it is an artifact of modern and late-modern history. What is more, the empirical studies that seek to understand happiness reflect a further bias: they are grounded overwhelmingly in subjects who are “Western, educated, industrialized, rich, and democratic” (hence the acronym WEIRD)—that is, subjects who are poor representatives of the broader world.15

The historically and culturally tendentious way in which happiness is defined in the new moral science is a problem for those making claims about the universality of human nature. It is compounded by a demonstration problem because the actual “science” of happiness takes form through self-reported levels of subjective positive emotion that tend to specify, along a single metric, how satisfied respondents are with their lives.16

Let us leave aside the dubious idea that happiness is one-dimensional; one might nevertheless think that at least the definition of happiness is clear. But the demonstration problem arises here in that there is no objective natural category which we can empirically identify as happiness.17 What people actually mean by happiness or satisfaction is not stable or universal, and therefore is difficult to meaningfully compare from one person to another or from one time to another. Certainly, in the present-day West, in the wake of Enlightenment thought, the ethical value of subjective positive feelings seems self-evident. But the support for this belief isn’t empirical. As the religious studies scholar James Pawelski has put it, “Perspectives shift dramatically. What is taken for granted about happiness in one cultural context seems foreign in other cultural contexts.… Because of these dramatic shifts, we must avoid the mistake of thinking that our current views on happiness necessarily hold true for cultural contexts different than our own.”18

The challenge of demonstration looms larger still. Suppose, for the sake of argument, that we could measure or empirically detect happiness in a meaningful way. Suppose we could calculate the presence and degree of people’s subjective positive feelings, the depth of their engagement, how meaningful they take their life to be, the positivity of their relationships, and their accomplishments. What would this tell us about morality? Nothing, by itself. For these data to have relevance to questions of morality, we would have to know that subjective positive feelings, deep engagement, and the like are ethically good or are worth pursuing. But no empirical technique yet known can uncover this. In other words, such surveys may tell us who has what level of positive emotion or life satisfaction, but the ethical relevance of these properties is beyond the scope of the studies. As the philosopher Martha Nussbaum put it,

Pleasure is only as good as the thing one takes pleasure in: if one takes pleasure in harming others, that good-feeling emotion is very negative; even if one takes pleasure in shirking one’s duty and lazing around, that is also quite negative. If one feels hope, that emotion is good only if it is based on accurate evaluations of the worth of what one hopes for and true beliefs about what is likely.19

As we have said, there is no way to scientifically demonstrate the goodness or value of the ends to which happiness is directed.

The Dilemma Endures

For over four centuries, approaches to the science of morality have moved between the two ends of a spectrum of possible definitions of morality. At one end are definitions that involve genuine “goods” and “oughts” with some real prescriptive authority over human behavior. At the other end are definitions that permit empirical assessment.

What a closer look at the historical record and at the leading conceptual logics today suggests is that a moral theory can approach one end of this spectrum only by distancing itself from the other. If a theory of morality is understood to involve genuine value, absolute prohibitions, and the like, it will stand a better chance of being recognized as a genuine candidate for being a theory of morality. But in defining morality toward this end of the spectrum, moral theorists have ruled out possible scientific demonstration of their theory, since value, duty, rights, and the like cannot be empirically detected. On the other hand, scientific approaches sometimes attempt to reimagine morality in empirical terms, thereby approaching the empirical end of the spectrum. But in so doing, they stray from an understanding of morality that is adequate to its lived experience—straying as they must from including rights, duties, value, etc. The consequence is that such scientific approaches fail to persuade people that their empirical conclusions are really about morality. The hope of resolving moral disagreement by appealing to scientific research therefore faces an internal barrier, since moral disagreements appear not to turn on issues that admit of empirical resolution.

Endnotes

  1. See, for example, Sam Harris, The Moral Landscape: How Science Can Determine Human Values (New York, NY: Free Press, 2010); Michael S. Gazzaniga, The Ethical Brain: The Science of Our Moral Dilemmas (New York, NY: Harper Perennial, 2006); Fiery Cushman, “Morality: Don’t Be Afraid—Science Can Make Us Better,” New Scientist, October 2010, https://www.newscientist.com/article/mg20827821-700-morality-dont-be-afraid-science-can-make-us-better/.
  2. Although Sam Harris does (Moral Landscape).
  3. Patricia Churchland, Braintrust: What Neuroscience Tells Us about Morality (Princeton, NJ: Princeton University Press, 2012), 2–4.
  4. Ibid., 8–9.
  5. Ibid., 200, 59.
  6. Ibid., 10.
  7. David Sloan Wilson, Does Altruism Exist? Culture, Genes, and the Welfare of Others (New Haven, CT: Yale University Press, 2015), 3.
  8. Ibid., 141.
  9. Ibid., 142.
  10. Hugo Grotius, The Rights of War and Peace (Indianapolis, IN: Liberty Fund, 2005), Preliminary Discourse, XL, 110–11. Original work published 1625.
  11. Joshua Greene, Moral Tribes: Emotion, Reason, and the Gap between Us and Them (New York, NY: Penguin, 2013), 302, 304, 305.
  12. Harris, The Moral Landscape, 37.
  13. Martin Seligman, Flourish: A Visionary New Understanding of Happiness and Well-Being (New York, NY: Free Press, 2011), 10.
  14. Ibid., 16–20. Seligman prefers to use the term “happiness” for subjective positive emotion, but his broader account acknowledges a richer view traditionally considered to be a candidate for what happiness is. He notes this on page 11: “‘Happiness’ historically is not closely tied to such hedonics.”
  15. See Joseph Henrich, Steven Heine, and Ara Norenzayan, “Most People are not WEIRD,” Nature 466, no. 1 (2010), 29; Henrich, Heine, and Norenzayan, “The Weirdest People in the World,” Behavioral and Brain Sciences 33, nos. 2–3 (2010): 61–83; Jeffrey Arnett, “The Neglected 95%: Why American Psychology Needs to Become Less American,” American Psychologist 63, no. 7 (2008): 602–14. What makes the problem even worse (or weirder) is that the subjects are overwhelmingly drawn from undergraduate student populations.
  16. The question favored by Daniel Kahneman and others is: “Taking all things together, how satisfied are you with your life as a whole these days?” See Daniel Kahneman and Alan B. Krueger, “Developments in the Measurement of Subjective Well-Being,” Journal of Economic Perspectives 20, no. 1 (2006): 3–24. Emphasis in the original.
  17. This helps to explain why happiness scholarship has turned to self-reporting. But the problem here, among other things, is that it is tough to tell whether the same concept is being identified as “happiness” across different subjects.
  18. Pawelski here is summarizing an argument made by Darrin McMahon in “The Pursuit of Happiness in History,” The Oxford Handbook of Happiness, eds. Susan A. David, Ilona Boniwell, and Amanda Conley Ayers (Oxford, England: Oxford University Press, 2013): 252–62. Pawelski’s comment comes from the same volume, “Introduction to Philosophical Approaches to Happiness,” 248.
  19. Martha Nussbaum, “Who Is the Happy Warrior? Philosophy Poses Questions to Psychology,” Journal of Legal Studies 37, no. S2 (2008), S93.

James Davison Hunter is the founder and executive director of the Institute for Advanced Studies in Culture and LaBrosse-Levinson Distinguished Professor of Religion, Culture, and Social Theory at the University of Virginia. His many books include Culture Wars: The Struggle to Define America and The Death of Character: Moral Education without Good or Evil. Paul Nedelisky, a postdoctoral research scholar at the Institute for Advanced Studies in Culture, received his PhD in philosophy from the University of Virginia. This essay is drawn from Hunter and Nedelisky’s forthcoming book on the new science of morality, to be published by Yale University Press.

