The Hedgehog Review: Vol. 15, No. 1 (Spring 2013)

Threats to Reason in Moral Judgment1

John F. Kihlstrom

Reprinted from The Hedgehog Review 15.1 (Spring 2013). This essay may not be resold, reprinted, or redistributed for compensation of any kind without prior written permission. Please contact The Hedgehog Review for further details.

How we make moral judgments has been a concern of philosophers and psychologists for a long time. In the West, this history begins with the Greeks and their debates over “the good life”: the Sophists and Plato, the Stoics and the Epicureans. There is the Judeo-Christian tradition, with the Ten Commandments, Jesus’s summary of the Law, the Sermon on the Mount, and his “new commandment” that we love one another. The medieval period gave us Aquinas’s marriage of Aristotelian and Christian thought. The Enlightenment brought us Hobbes’s ethical naturalism, Hume’s utilitarianism, and Kant’s categorical imperative.

The twentieth century saw the rise of meta-ethics, concerned with the nature of moral judgment, rather than with questions of right and wrong per se, and that is where the psychology of moral judgment begins as well. For example, Jean Piaget distinguished between the heteronomous stage of moral development, involving rigid adherence to rules and obedience to authority, and an autonomous stage, in which, through interactions with others, children begin to reason about issues of fairness. Lawrence Kohlberg offered a neo-Piagetian stage theory of moral development, describing the transitions from pre-conventional to conventional to post-conventional reasoning. More recently, Carol Gilligan differentiated between rational moral judgments based on justice and relational judgments based on compassion, and Elliot Turiel distinguished moral judgments, which involve questions of harm, welfare, and fairness, from mere social conventions, which do not.2

The Challenge to Reason

This conception of moral judgment as based on reason dominated psychology textbooks for a long time—not just because it was virtually the only game in town, but also because it was consistent with the cognitive revolution in psychology of the 1960s and 1970s. Beginning in the 1980s, however, there arose a number of challenges to the view that people reason their way to moral judgments. First, the field embraced a distinction between automatic and controlled processes.3 Controlled processes are performed consciously and deliberately and require cognitive effort. By contrast, automatic processes are unconscious and involuntary and consume few or no cognitive resources. In a related development, Daniel Kahneman and Amos Tversky, among others, analyzed errors in reasoning to reveal subjects’ systematic departures from the principles of normative logic—our tendency to rely on heuristic shortcuts, rules of thumb that allow for judgment under conditions of uncertainty but also increase the probability of judgmental error.4 Psychologists also identified a number of biases that lead our reasoning astray, such as the tendency to seek evidence that confirms our hypotheses and to deflect personal responsibility for negative outcomes. Taken together with automaticity, the heuristics-and-biases approach strongly suggested that we think neither very deeply nor very well—even when rendering something as important as a moral judgment.

The new cognitive psychologists, focused as they were on problems of knowledge acquisition, representation, and use, paid little attention to emotion and motivation—the other parts of Kant’s trilogy of mind.5 Many viewed emotion simply as a cognitive construction—a belief about one’s feelings that is a product of a more or less rational analysis of the situation in which one finds oneself physiologically aroused. Beginning in the 1980s, however, the hegemony of cognition was challenged by an affective counterrevolution, exemplified by a debate between Robert Zajonc and Richard Lazarus. Zajonc argued that emotion was at least independent of cognition, if not actually primary: “preferences need no inferences.”6 Paul Ekman proposed a set of reflex-like basic emotions that were part of our phylogenetic heritage, and a number of neuroscientists proposed that emotional reactions are controlled by brain structures that are different from those involved in cognitive processing.7 The implication of the primacy of affect is that certain social judgments may be influenced more by emotion than by reason.

Automaticity, heuristics and biases, and affect come together in a critique of reason in moral judgment that has become quite popular. A salient case in point is David Brooks’s recent book, The Social Animal (2011). Brooks is probably the foremost interpreter of psychological research and theory to the general public, and in this book he refers constantly to Hume’s dictum that reason “is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.”8 For Brooks, and the psychologists whose work he relies on, thought and action are dominated by unconscious processes of emotion, intuition, and automaticity.

Also on point is a series of essays on moral judgment commissioned by the John Templeton Foundation as part of its “Big Questions” series. The Big Question for 2010 was: “Does moral action depend on reasoning?” Among other authorities, five psychologists were asked to respond, and four of them said, essentially, “No.”

  • Michael Gazzaniga, a distinguished cognitive neuroscientist, led off by asserting that “all decision processes…are carried out before one becomes consciously aware of them.”
  • Joshua Greene, a social psychologist, wrote that “moral judgment depends critically on both automatic settings and manual mode.”
  • Jonah Lehrer (not a psychologist, strictly speaking, but one of the foremost interpreters of psychology to the general public and the immediate source for much of Brooks’s book) asserted that “moral decisions often depend on…moral emotions” that “are beyond the reach of reason.”
  • Antonio Damasio (a cognitive neurologist, if not exactly a psychologist) wrote that “morality is based on social emotions that have their origins in ‘prerational’ emotional brain systems, neuromodulator molecules…and genes which have ‘early evolutionary vintage.’”9

Critique of Moral Intuitionism

Each of these writers, in his own way, reflects a point of view proposed by Joshua Greene and Jonathan Haidt known as moral intuitionism.10 Greene and Haidt suggest that morality serves two important functions: at the micro level, it guides our social interaction, while at the macro level, it binds groups together. But where does morality come from? Greene and Haidt argue for the primacy of intuition and emotion in moral judgments. Far from reflecting the operation of human reasoning, these judgments are the product of evolved brain modules that generate what has been called the yuck factor—an intuitive, emotional “gut feeling” that certain things are, well, just plain wrong. When we are asked to justify our moral judgments, the reasons we give are neither necessary nor sufficient; rather, they are more like post-hoc rationalizations.

Although moral intuitionism is relatively new as a psychological theory, the general idea is old enough to have been critiqued by John Stuart Mill in his 1843 treatise A System of Logic. When we rely on intuitions, Mill wrote, there is no need to question prevalent moral judgments, nor any need to explain how our intuitions came to be what they are; nor do we have any means of resolving conflicts between different individuals’ intuitions. They just are what they are. Mill agreed that intuition played an important role in some fields, such as mathematics, but he thought that a reliance on intuition should not extend to ethics and politics, because it “sanctifies” traditional opinions and provides an intellectual buttress to conservatism.

Indeed, moral intuitionism can be seen as a threat to democracy. How do you debate, how do you compromise, with someone whose moral judgments rely on intuitions? In this respect, I was put in mind of a quotation from Heinrich Himmler, head of the SS in Nazi Germany, who in a 1936 speech to the Committee for Police Law said that “in my work for the Führer and the nation I do what my conscience tells me is right and what is common sense.”11 Of course, it does not matter if moral intuitionism is a threat to democracy if in fact it is a valid scientific theory about how moral judgments are made. Accordingly, it is important to examine the evidentiary base for moral intuitionism to determine the extent to which it is actually supported by empirical evidence.

As far as I can tell, the reference experiment for moral intuitionism is a philosophical conundrum known as the Trolley Problem, originally devised by Philippa Foot and popularized by Judith Jarvis Thomson, among others. Imagine a trolley speeding toward a group of five people on the tracks; the collision will kill them all, but you can throw a switch that will divert the trolley onto another track, where it will strike and kill only one person. There are actually several versions of the problem, to which people respond quite differently. For example, many more people think that it is morally justifiable to switch a trolley from one track to another, sacrificing one life to save five, than think it is morally justifiable to push a fat man off a bridge onto the trolley tracks, killing him but saving those same five lives. The trick, of course, is that both versions of the Trolley Problem involve the same expected outcome—one life lost, five lives saved. The implication is that rational choice cannot account for people’s moral judgments. Something else must be involved, and that something else consists of emotional intuitions—the “yuck factor” generated by a specialized brain module that became part of our phylogenetic equipment over the course of evolutionary time.
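
To make the rational equivalence concrete, here is a minimal sketch in Python (my own illustration, not anything drawn from the experimental literature) showing that a bare expected-outcome calculation cannot distinguish the two variants:

```python
# A minimal sketch (illustrative only, not from the studies discussed here):
# under a bare expected-outcome analysis, the switch and footbridge variants
# of the Trolley Problem are indistinguishable.

def lives_lost(intervene: bool) -> int:
    """Lives lost in either variant: intervening kills 1, doing nothing kills 5."""
    return 1 if intervene else 5

for variant in ("switch", "footbridge"):
    for intervene in (True, False):
        print(f"{variant:10} intervene={intervene}: {lives_lost(intervene)} lives lost")

# Both variants yield identical numbers, so expected outcomes alone cannot
# explain why most people approve the switch but not the push.
```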

But it turns out that there are problems with the Trolley Problem. In the first place, it strikes me that the Trolley Problem lacks ecological validity.12 It is not at all clear that the Trolley Problem is representative of the kinds of moral dilemmas that confront us in the ordinary course of everyday living. When was the last time you were on a bridge, next to a fat man, with a trolley racing along the tracks below you toward five people tied to the track by some Snidely Whiplash? But, of course, that just may be my intuition, and there’s no arguing with intuitions.

More important, note that, in the Trolley Problem, reason is ruled out by experimental fiat. That is, the Trolley Problem has been constructed such that all outcomes are rationally equivalent, and subjects cannot make a choice based on expected outcomes or utilities. They have to do something else. Perhaps, under such circumstances, people do rely on their moral intuitions, or on some other basis for judgment. But it hardly seems correct to conclude, from their responses in this highly constrained situation, that emotion supplants reason in moral judgment. Nor is there any comparison of effect size. What we would really like to see, in an experiment such as this, is an experimental manipulation of both emotional and rational factors, so we can determine whether emotion indeed dominates reason, under what circumstances, and by how much.
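
In outline, such a design might cross an “emotional” factor with a “rational” one and compare their effect sizes. The sketch below is purely hypothetical; the factors, levels, and analysis are my own assumptions, not an existing study:

```python
# Hypothetical factorial design (my own assumptions, not an existing study):
# cross an emotional factor (personal vs. impersonal harm) with a rational
# factor (number of lives saved), then compare effect sizes rather than
# assuming that one factor dominates the other.

import itertools

emotional_factor = ["impersonal (switch)", "personal (push)"]
rational_factor = [2, 5, 20]  # lives saved by intervening

for harm, saved in itertools.product(emotional_factor, rational_factor):
    print(f"condition: harm={harm:20} lives_saved={saved}")

# Permissibility ratings collected in each cell could feed a two-way ANOVA;
# the relative effect sizes of the two factors would indicate whether, under
# what circumstances, and by how much emotion outweighs reason.
```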

Finally, there is no consideration of a “cognitive” alternative. In fact, a cognitive alternative to moral intuitionism is available. Inspired by Noam Chomsky’s notion of a Universal Grammar underlying human language, John Mikhail has offered a universal moral grammar that analyzes various versions of the Trolley Problem in purely cognitive terms—it is a grammar, after all—without invoking emotions or intuitions.13 Mikhail begins, like any good cognitive psychologist, by invoking what he calls “the poverty of the moral stimulus”—that the situations that demand moral judgment usually do not contain enough information to enable us to make that judgment. People form a mental representation of the situation and then apply a moral grammar to render a moral judgment. It is all very cognitive—all very rational. And, in fact, Mikhail’s moral grammar gives a pretty good account of the empirical findings from various versions of the Trolley Problem—all of which are rationally equivalent.
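
To see how a judgment could be computed without any emotional input, consider the following toy version of a moral grammar. It is a drastic simplification of my own devising (Mikhail’s actual formalism is far richer), using a doctrine-of-double-effect style rule of the kind his theory invokes:

```python
# A toy "moral grammar" (my own simplification; Mikhail's formalism is far
# richer): the judgment is computed from a structured representation of the
# act, with no appeal to emotion or intuition.

from dataclasses import dataclass

@dataclass
class ActRepresentation:
    saves: int            # lives saved by acting
    kills: int            # lives lost by acting
    harm_is_means: bool   # is the victim's harm the means to the good end?

def judge(act: ActRepresentation) -> str:
    # Doctrine-of-double-effect style rule: harm employed as a means
    # to the good outcome is impermissible.
    if act.harm_is_means:
        return "impermissible"
    # Otherwise, permit acts whose good effects outweigh the bad.
    return "permissible" if act.saves > act.kills else "impermissible"

# In the switch case the one death is a side effect; on the footbridge,
# the man's death is the very means of stopping the trolley.
print(judge(ActRepresentation(saves=5, kills=1, harm_is_means=False)))  # permissible
print(judge(ActRepresentation(saves=5, kills=1, harm_is_means=True)))   # impermissible
```

A purely structural rule of this kind reproduces the asymmetry between the two trolley variants that a bare utility calculation cannot.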

The bottom line is that there is no good empirical reason to think that emotion and intuition rule moral judgment. Maybe, as in the Trolley Problem, affect and intuition act as a sort of tie-breaker in those circumstances when rational choice does not suffice. Maybe reason serves to challenge and correct our moral intuitions. Or maybe affect serves as information for cognition. In any case, neither cognition nor emotion dominates the other. Rather, it seems that in moral judgment, as in other aspects of mental life, cognitive, emotional, and motivational processes work together, and the balance between them varies depending on the situation.

Notice that, in this formulation, emotion is more than a cognitive construction. As a cognitive psychologist, I have always distrusted the idea that emotions are merely cognitive constructions—that we don’t really feel anything, we just think we do. I have long preferred the formulation by Kant, who asserted that “there are three absolutely irreducible faculties of mind: knowledge, feeling, and desire.”14 What Kant meant was that none of these faculties could be reduced to the others, as in the cognitive-constructivist account of emotion. Emotion, by this argument, has an existence that is independent of cognition. But just because emotion is not reducible to cognition does not mean that cognition and emotion cannot interact. We know that emotion can color perception, memory, and thought, and we know that thinking can generate, and regulate, emotion. We can dispense with arguments about the primacy of either cognition or affect and get on with the business of discovering how they work, separately and together, and how they each play a role in matters such as moral judgment.

Critique of the Critique of Conscious Will

Recently yet another threat has emerged to reason in moral psychology—namely, a critique of the concept of conscious will. After all, the very concept of moral judgment depends on the freedom of the will, legitimizing causal attributions to the actor’s internal mental states—what the lawyers call mens rea. Neither concept applies in the natural world, where events are completely determined by events that went before. Moral judgment only applies when the actor who is the target of the judgment has a real choice, the freedom to choose among alternatives, and when his or her choices make a difference to his or her behavior. The problem of free will, of course, is that we understand that we are physical entities: specifically, the brain is the physical basis of mind, and the brain, as a physical system, is not exempt from the physical laws that determine everything else that goes on in the universe; neither are our thoughts and actions exempt. So the problem of free will is simply this: how do we reconcile our conscious experience of freedom of the will with the sheer and simple fact that we are physical entities existing in a universe that consists of particles acting in fields of force?

Philosophers have debated this problem for a long time—at least since materialism began to challenge Cartesian dualism. Compatibilists argue that free will is compatible with physical determinism, while incompatibilists argue that it is not, and that we must reconcile ourselves to the fact that we are not, in fact, free to choose what to do and what to think. Those incompatibilists who have read a little physics may make a further distinction between the clockwork determinism of classical Newtonian physics and the pinball determinism of quantum theory, perhaps invoking Heisenberg’s observer effect and uncertainty principle (they are, apparently, not the same thing) as well. But injecting randomness and uncertainty into a physical system is not the same as giving it free will, so the problem remains where it was.

Psychologists, too, have entered the fray: those of a certain age will remember the debate between Carl Rogers and B. F. Skinner over the control of human behavior.15 These days, many psychologists appear to come down on the side of incompatibilism, arguing essentially that free will is an illusion—a necessary illusion, if we are to live in a society governed by laws, but an illusion nonetheless. As a case in point, consider The Illusion of Conscious Will, in which Daniel Wegner invokes the concept of automaticity and asserts that “the real causal mechanisms underlying behavior are never present in consciousness.”16 Just to make his meaning clear, he presents the reader with a diagram contrasting the “apparent causal path” between thought and action with the “actual causal path” connecting action to an “unconscious cause of action.” More recently, Michael Gazzaniga has picked up on the theme, writing that the “illusion” of free will is so powerful that “we all believe we are agents…acting willfully and with purpose,” when in fact “we are evolved entities that work like a Swiss clock”—no pinball determinism for him!17 To illustrate his point, Gazzaniga recounts an instance in which, while walking in the desert, he jumped in fright at a rattlesnake: he “did not make a conscious decision to jump and then consciously execute it”—that was a confabulation, “a fictitious account of a past event.” Rather, “the real reason I jumped was an automatic nonconscious reaction to the fear response set into play by the amygdala.”18

This argument extends beyond the scientific world. In its March 23, 2012, issue, The Chronicle of Higher Education published a forum entitled “Free Will Is an Illusion,” with a contribution by Gazzaniga; the May 13, 2012, issue of The New York Times carried an op-ed piece by James Atlas entitled “The Amygdala Made Me Do It”; and the May-June 2012 issue of Scientific American Mind featured a cover story by Christoph Koch detailing “How Physics and Biology Dictate Your ‘Free’ Will.” These are not the only examples, so something is happening here. What we might call psychological incompatibilism is beginning to creep into popular culture—which, like moral intuitionism, is okay if it is true. The question is: is it true?

Both Wegner and Gazzaniga are inspired, in part, by a famous experiment performed by the late Benjamin Libet, a neurophysiologist.19 When someone makes a voluntary movement, an event-related potential appears in the EEG about 600 milliseconds beforehand: this is known as the readiness potential. In Libet’s experiment, subjects viewed a light that revolved around a circle approximately once every 2.5 seconds; they were instructed to move their fingers anytime they wanted, but to use the clock to note the time of their first awareness of the wish to act. Libet discovered that the awareness of the wish preceded the act by about 200 msec—not much of a surprise there. But he also discovered that the readiness potential preceded the awareness of the wish by about 350 msec (200 + 350 = c. 600 msec). The readiness potential thus begins well before the conscious wish, an interval Libet characterized as a predecisional negative shift. Libet concluded that the brain decides to move before the person is aware of the decision, which manifests itself as a conscious wish to move. Put another way, behavior is instigated unconsciously (Wegner’s “unconscious cause of action”); conscious awareness occurs later, as a sort of afterthought, and conscious control serves only as a veto over something that is already happening. In other words, conscious will really is an illusion, and we are nothing more than particles acting in fields of force after all.
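
Taking the moment of movement as the zero point, the round numbers quoted above line up as follows (a schematic summary using the approximate averages in the text; Libet’s actual values varied considerably across subjects and trials):

```latex
% Schematic timeline of Libet's averages (approximate values from the text)
\begin{align*}
t_{\text{movement}} &= 0 \\
t_{W} &\approx -200~\text{ms} \quad \text{(first awareness of the wish to move)} \\
t_{\text{RP}} &\approx t_{W} - 350~\text{ms} \approx -550~\text{ms} \quad \text{(onset of the readiness potential)}
\end{align*}
```

On this arithmetic the readiness potential leads the act by roughly 550 to 600 msec, while the conscious wish arrives only in the last 200 msec; that gap is the entire basis for the claim that the brain “decides” first.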

Libet’s observation of a predecisional negative shift has been replicated in other laboratories, but that does not mean that his experiment is immune to criticism or that his conclusions are correct.20 In the first place, there is considerable variability around those means, and the time intervals are such that the gap between the onset of the readiness potential and the awareness of the conscious wish could be closer to zero. And there are many sources of error, including error in determining the onset of the readiness potential and error in determining the onset of the conscious wish (as for the latter, think about keeping track of a light that is rotating around a clock face once every 2.5 seconds). Still, that difference is unlikely to be exactly zero, and so the problem does not go away.

At a different level, Libet’s experiment has been criticized on the grounds of ecological validity. The action involved, moving one’s finger, is completely inconsequential and shouldn’t be glibly equated with choosing where to go to college, or whom to marry, or even whether to buy Cheerios or Product 19—much less whether to throw a fat man off a bridge to stop a runaway trolley careening toward five innocents. The way the experiment is set up, the important decision has already been made—that is, to participate in an experiment in which one is to raise one’s finger while watching a clock. And that decision has been made out of view of the EEG apparatus. I find this argument fairly persuasive. But still, there remains the nagging possibility that, if we recorded the EEG all the time, in vivo, we would observe the same predecisional negative shift before that decision was made, too.

More recently, though, Jeff Miller and his colleagues found a way to address this critique.21 They noted that the subjects’ movements are not truly spontaneous, for the simple reason that they must also watch the clock while making them. They compared the readiness potential under two conditions. In one, the standard Libet paradigm, subjects were instructed to watch the clock while moving their fingers and report their decision time. In the other, they were instructed to ignore the clock and not asked for any reports. Subjects in both conditions still made the “spontaneous” decision whether, and when, to move their fingers. But Miller and his colleagues observed the predecisional negative shift only when subjects also had to watch the clock and report their decision time. If Miller is right, Libet’s predecisional negative shift is wholly an artifact of the attention paid to the clock. It does not indicate the unconscious initiation of ostensibly “voluntary” behavior, nor does it show that “conscious will” is illusory. Maybe it is, but the Libet experiment does not show it.

Miller’s experiment is important enough that I would like to see it replicated in another laboratory, though I want to stress that there is no reason to think that there is anything wrong with his original study. When Miller did what Libet did, he got what Libet got. When he altered the instructions, but retained voluntary movements, Libet’s effect disappeared completely—not just a little, but completely. The ramifications are pretty clear.

This does not mean that the problem of free will has been resolved in favor of compatibilism, though it does suggest that compatibilism deserves serious consideration. Personally, I like the implication of a paper by the philosopher John Searle, titled “Free Will as a Problem in Neurobiology.”22 We all experience free will, and there is no reason, in the Libet experiment or any other study, to think that this is an illusion. Free will may well be a problem for neurobiology, and if so it is a problem for neurobiologists to solve. I do not lose any sleep over it. But if free will is not an illusion, and we really do have a meaningful degree of voluntary control over our experience, thought, and action, then moral judgment is secure from this threat as well. We should be willing to make moral judgments, using all the information—rational and intuitive—that we have available to us.

Free Will, Within Limits

Culturally, we seem to be in the midst of a retreat from, or perhaps even an assault on, reason in everyday life. Some of this is politically motivated, but some is aided and abetted by psychologists who, for whatever motive, seek to emphasize emotion over cognition, the unconscious over the conscious, the automatic over the controlled, brain modules over general intelligence, and the situation over the person.

Moral intuitionism represents a fusion of automaticity and emotion, and like the literature that makes up the “automaticity juggernaut,” it relies mostly on demonstration experiments showing that gut feelings can play a role in moral judgments.23 There is no reason to generalize these findings to what people do in the ordinary course of everyday living.

Human experience, thought, and action are constrained by a variety of factors, including evolution, written law and cultural custom, overt social influences, and a range of more subtle social cues. But within those limits we are free to do what we want, and especially to think what we want, and we are able to reason our way to moral judgments and action. This freedom of the will justifies moral judgment. It is easy for social psychologists and “experimental philosophers” to contrive experimental situations in which moral reasoning seems to fail us. When this happens, we must rely on our intuitions, emotional responses, or some other basis for judgment. But that does not imply that we do not reason about moral issues in the ordinary course of everyday living—or that we reason poorly, relying excessively on heuristic shortcuts, vulnerable to various errors and biases. It only means that moral reasoning entails more than a calculation of comparative utilities, not least because it typically occurs under conditions of uncertainty where there are no algorithms available. (If a judgment takes place under conditions of certainty, it is probably not a moral judgment to begin with.)

As I concluded my own Templeton essay:

If you believe in God, then human rationality is a gift from God, and it would be a sin not to use it as the basis for moral judgment and behavior. If you do not believe in God, then human rationality is a gift of evolution, and not to use it would be a crime against nature.24

Endnotes

  1. This article is based on a presentation to the Moral Psychology Group at the University of California, Berkeley. I thank Audun Dahl, Elliot Turiel, and Kevin Uttich for their comments.
  2. For a comprehensive review of moral psychology, see Elliot Turiel, “The Development of Morality,” Handbook of Child Psychology: Social, Emotional, and Personality Development, vol. 3, ed. Nancy Eisenberg, William Damon, and Richard M. Lerner (Hoboken: Wiley, 2006) 789–857.
  3. John F. Kihlstrom, “The Automaticity Juggernaut—or, Are We Automatons After All,” Are We Free?: Psychology and Free Will, ed. John Baer, James C. Kaufman, and Roy F. Baumeister (New York: Oxford University Press, 2008) 155–80.
  4. Daniel Kahneman, Paul Slovic, and Amos Tversky, Judgment Under Uncertainty: Heuristics and Biases (Cambridge: Cambridge University Press, 1982).
  5. Ernest R. Hilgard, “The Trilogy of Mind: Cognition, Affection, and Conation,” Journal for the History of the Behavioral Sciences 16 (1980): 107–17.
  6. Richard S. Lazarus, “A Cognitivist’s Reply to Zajonc on Emotion and Cognition,” American Psychologist 36.2 (1981): 222–23; Richard S. Lazarus, “On the Primacy of Cognition,” American Psychologist 39.2 (1984): 124–29; Robert B. Zajonc, “Feeling and Thinking: Preferences Need No Inferences,” American Psychologist 35 (1980): 151–75; and Robert B. Zajonc, “On the Primacy of Affect,” American Psychologist 39 (1984): 117–23.
  7. Jaak Panksepp, “Affective Neuroscience: A Paradigm to Study the Animate Circuits for Human Emotions,” Emotion: Interdisciplinary Perspectives, ed. Robert D. Kavanaugh, Betty Zimmerberg, and Steven Fein (Mahwah: Lawrence Erlbaum, 1996) 29–60.
  8. David Hume, A Treatise of Human Nature, 2.3.3.4.
  9. John Templeton Foundation, “Does Moral Action Depend on Reasoning?” (2010): <http://www.templeton.org/reason/>.
  10. Joshua D. Greene and Jonathan Haidt, “How (and Where) Does Moral Judgment Work?” Trends in Cognitive Sciences 6 (2002): 517–23; and Jonathan Haidt, “The New Synthesis in Moral Psychology,” Science 316.5827 (2007): 998–1001.
  11. Peter Longerich, Heinrich Himmler: A Life (Oxford: Oxford University Press, 2012) 205.
  12. Martin T. Orne, “On the Social Psychology of the Psychological Experiment: With Particular Reference to Demand Characteristics and Their Implications,” American Psychologist 17 (1962): 776–83.
  13. John Mikhail, “Universal Moral Grammar: Theory, Evidence and the Future,” Trends in Cognitive Sciences 11.4 (2007): 143–52.
  14. Immanuel Kant, The Critique of Judgment (1790), as paraphrased in John M. Watson, The Philosophy of Kant: As Contained in Extracts from His Own Writings (New York: Macmillan, 1888).
  15. Carl R. Rogers and B. F. Skinner, “Some Issues Concerning the Control of Human Behavior,” Science 124 (1956): 1057–66; and Trenton W. Wann, ed., Behaviorism and Phenomenology: Contrasting Bases for Modern Psychology (Chicago: University of Chicago Press, 1964).
  16. Daniel Wegner, The Illusion of Conscious Will (Cambridge, MA: MIT Press, 2002) 97.
  17. Michael S. Gazzaniga, Who’s In Charge? Free Will and the Science of the Brain (New York: Ecco, 2011) 105–106.
  18. Gazzaniga 76ff.
  19. Benjamin Libet, Curtis A. Gleason, Elwood W. Wright, and Dennis K. Pearl, “Time of Conscious Intention to Act in Relation to Onset of Cerebral Activity (Readiness-Potential): The Unconscious Initiation of a Freely Voluntary Act,” Brain 106 (1983): 623–42.
  20. For extended discussions of Libet’s work, including replies and rejoinders, see William P. Banks and Susan Pockett, “Benjamin Libet’s Work on the Neuroscience of Free Will,” The Blackwell Companion to Consciousness, ed. Max Velmans and Susan Schneider (Malden: Blackwell, 2007) 657–70; and Benjamin Libet, “The Timing of Brain Events: Reply to the ‘Special Section’ in this Journal of September 2004, edited by Susan Pockett,” Consciousness and Cognition 15.3 (September 2006): 540–47.
  21. Jeff Miller, Peter Shepherdson, and Judy Trevena, “Effects of Clock Monitoring on Electroencephalographic Activity: Is Unconscious Movement Initiation an Artifact of the Clock?” Psychological Science 22.1 (January 2011): 103–109.
  22. John R. Searle, “Free Will as a Problem in Neurobiology,” Philosophy 76.4 (October 2001): 491–514.
  23. See Kihlstrom, “The Automaticity Juggernaut,” 155–80.
  24. John Templeton Foundation, “Does Moral Action Depend on Reasoning?” (2010): <http://www.templeton.org/reason/>.

John F. Kihlstrom is Professor in the Department of Psychology, University of California, Berkeley. A cognitive social psychologist with clinical training, he previously held faculty positions at Harvard, Wisconsin, Arizona, and Yale. His research interests include hypnosis, unconscious mental life, memory, and the self.
