MARC D. HAUSER, an evolutionary psychologist and biologist, is Harvard College Professor, Professor in the Department of Psychology and the Program in Neurosciences, and Director of the Primate Cognitive Neuroscience Laboratory. He is the author of The Evolution of Communication; Wild Minds: What Animals Think; and Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong. He told us where morality lives in the brain, how to coax it out, and what lies ahead for the future of moral science.
Futurist: What have you been doing to discover the basis of moral reasoning?
Hauser: We've been using a variety of techniques. The question of the source of our moral judgments is one that has to be hit from a variety of different directions. For example, several years ago, some students and I built a Web site called the Moral Sense Test. That Web site, up and running for more than three years, has attracted some 300,000 subjects. When people log on, they provide information about who they are in terms of their nationality, their background, their education, gender, and so forth. They then proceed to respond to a series of moral and non-moral dilemmas by delivering a judgment. That Web site provides a really powerful engine for looking at very large data sets, with some cultural variation, to see what people make of these different types of moral dilemmas. Sometimes they're familiar cases; sometimes they're very unfamiliar, made-up cases. Each question targets some kind of psychological distinction. For example, we're very interested in the distinction between action and omission when both lead to the same consequence. It's an interesting distinction because it plays out in many areas of biomedical technology and practice. Most countries reject the idea that doctors should be allowed to give a patient who is in critical care and in pain, with no cure available, an overdose injection and end that person's life, but it is legally permissible to allow that same patient to end his or her own life in the same way.
Futurist: What significant conclusions have you drawn?
Hauser: Even though there has been a very long philosophical and scientific discussion about moral psychology, what's happened in the last ten years is that there's been a lot of excitement about the revival of the question, in part because of new technologies and new theoretical perspectives. Two of my grad students, Liane Young and Michael Koenigs, looked at a patient population for one study. They looked at individuals who, in adulthood, suffered brain damage bilaterally--in both hemispheres--to an area in the frontal lobe, particularly an area called the ventromedial prefrontal cortex. This area, in many previous studies, had been implicated as the crucial area for connecting our emotional experiences with our higher-level social decision making. So when I make a decision about how to interact with somebody, or what to do when I'm interacting with somebody--where my own welfare and someone else's welfare are at stake--that area will be active, critically linking the decision with our emotional experiences. Much of the work that had been done with these patients suggested that when that area is damaged, the patients lost the ability to make moral decisions. We decided to take another look at these patients, because much of the earlier work had looked at the patients' capacity to justify moral judgments. One of the critical ways in which our work has been able to change that is to make a distinction between the intuitions, often unconscious, that may drive our moral judgments and the factors that determine how we behave in a particular moral situation.
So to give a quick example that you may be familiar with: about a year ago Wesley Autrey, a man standing on the platform of a subway station in New York with his two daughters, leapt onto the track to save a man who had fallen in front of a train and easily could have been killed. While the behavior is rare--most people won't do it--if you ask people, 'Is it permissible to jump onto the track like that?' they'll say of course it's permissible. But if you ask whether it would be obligatory, or forbidden, people will say no to both. The judgment provides one kind of angle on our moral knowledge. We looked at that, went back to the patients, and created a whole bunch of dilemmas. What we found was a very interesting pattern. For the non-moral dilemmas--social decisions carrying no moral weight--these patients were no different from healthy people. Secondly, within the class of moral dilemmas, there were some that we called impersonal, meaning the dilemma involved an action by one individual that did not involve contacting anyone; it didn't involve hurting or pushing anyone; it involved, maybe, flipping a switch on a trolley track to let the trolley go somewhere. Those cases, which were emotional and moral, were nonetheless judged by these patients in exactly the same way as by healthy subjects. That was a very important result, because even though these patients had brain damage that basically knocked out their social emotions, they were nonetheless judging these cases as though they had a perfectly intact moral brain. So even though emotion may play some role in our moral psychology, it doesn't seem to be causally necessary for these kinds of judgments.
There was a set of dilemmas where the patients did show a difference, specifically when the action itself was personal and involved actually hurting somebody--hurting them where the consequence was saving the lives of many. Here's where the brain-damaged patients, in contrast to healthy subjects, went for the greater good. They said, 'This action is worth it because I'm saving many people'--willfully hurting one person to save many. Healthy subjects went in the opposite direction: 'Using someone as a means to the greater good is not okay; therefore I say no.' Here was a case where the lack of emotional insight was causing a difference....
Futurist: ...Makes one more available for the presidency of the United States, one might argue....
Hauser: That's an interpretation. Some people argue that utilitarianism is the right way to think about the moral world--that it's when our emotions get in the way, and we don't think about the utilitarian outcomes, that we fail. The 9/11 case is interesting. It was a decision by the United States government that it would be okay to shoot down a plane under terrorist control to serve the greater good; if that was an option, that's what the government would do. Interestingly, the German government, which decided the same exact case after 9/11, decided against it. They said it would not be permissible to do that. Their reasoning really fell along the lines of the structure of German law, which is strictly anti-utilitarian--to a large extent because, of course, the Nazi period was one in which people justified bad behavior on utilitarian grounds. So here we have cases where two legal systems have diverged, and one of the things we're very interested in is the extent to which explicit laws actually impact intuitive psychology. Our assumption is that they will not--that the law will give people very local rules for very specific cases, but when you move people away from those specific cases they won't show any pattern different from anywhere else in the world.
Futurist: Explain to me this idea of moral grammar.
Hauser: There's a strong and a weak version of the idea. The strong version is that morality really works like language, in the sense that you have a very encapsulated system in the brain that basically traffics only in moral situations. The anatomical features that are specific to the moral domain don't overlap with other areas of thought. The principles and rules that underlie our moral knowledge are unconscious and inaccessible: when we make moral judgments, we're unaware of the principles that are driving those judgments. Damage to certain parts of the brain would take out the moral system and leave everything else intact, and so forth. It really does seem to work like language, with clear universal rules. The variation that we see in the moral domain comes not from differences in what people know about morality but from how a particular culture puts emphasis on a particular way morality can be instantiated in that culture--just as a child who speaks English would have spoken Spanish had he been born in Spain.
What the moral grammar does is give us a toolkit for building our own moral systems, and these vary by culture in the same way languages and lexicons vary by culture. That's the radical hypothesis, and we're just starting. The less radical hypothesis is that we use our understanding of language--the questions raised by Chomsky in the fifties and carried forward by many people--to ask the same questions about morality. It doesn't work just like language, but the crucial questions are the same. For example, is there a critical period in development for acquiring our moral system? Once you acquire your first moral system, is acquiring a second one like acquiring a second language--is it hard, whereas the first acquisition is more natural? Those are the kinds of questions you would ask about morality that really have not been asked. That's what I find exciting about this: these questions, regardless of what the answers are, will be interesting to understand.
Futurist: What sort of reaction have you received from people who adhere to a more conventional moral code?
Hauser: It varies. I've had some interesting responses from students and from certain people at public lectures. It's a mixed bag. Some people see this work as artificial: what morality is really about, they say, is how we behave; therefore the judgments--this research--are irrelevant. That's one form of disagreement. If that were true, the entire analogy with modern linguistics, with Chomsky, would have to be thrown out, because it's all about the nature of judgments and intuition. There are some people who have expressed anxiousness. Of course, if you're religious, your moral view of the world is very different, and on that level maybe what we do winds up being different, because the devils and angels on our shoulders are different. So there's an anxiousness, in part because one possibility--and again, we're really in the early days--is what much of the work we've done suggests: that a religious background doesn't have an effect on these intuitive judgments.
The hypothesis that we're tracking goes something like this--and this is independent of the benefits that people obtain from being associated with religion; I have nothing to say about that, to each his own--does having a religious background really change the nature of these intuitive judgments? The evidence we've accumulated suggests no. If you look at the variety of moral dilemmas we've presented to people, with fairly large sample sizes, and you simply make a contrast between people who claim to be religious and people who claim to be atheists--you take the extremes--and you ask whether the pattern of judgment is different, the answer is no.
Now, this is for cases that are not familiar. If I ask people whether abortion is right or wrong, of course I'll get a different response. What's interesting nowadays about stem cell research and the ethics that surround that debate: if you walk down the street and ask most people whether they think stem cell research is morally good or morally bad, many people will say bad. But then ask what a stem cell is, and most people won't have a clue. What they've often done is map 'stem cell research' onto 'killing a baby.' If killing a baby is bad, then stem cell research is bad. That's a matter of using a moral problem one is familiar with to judge a new case, one is not familiar with. We do that all the time.
The question becomes: to what extent is the resemblance between those two cases reasonable? What science should be doing is trying to educate--to say, look, the blastocyst that stem cell research focuses on is a cluster of cells, a cluster of cells from which we're gaining the power to form new organs, and it is nothing like a baby. It has the potential--with lots of change and development--to become a baby, okay. But it's not a baby. There's an onus on researchers to educate; in the absence of education, what people do is examine moral cases in terms of what they're familiar with.
Futurist: What about bioethics? One criticism of your book is that this research--reducing morality to the sum of its physical parts--has a way of devaluing ethics in the decision-making process. I'm speaking specifically of Richard Rorty's review of your book in the New York Times. He says this fascination with morality that expresses itself through surveys, through answering questions, sidesteps the role of ethics in morality and all of the murkier moral questions that can't simply be answered in a yes-or-no kind of way.
Hauser: When people read things about the biology of x--and x could be attractiveness, morality, language--they do one of two things. First, they often assume that the biology of something implies fixedness, a predetermined outcome. That's a misunderstanding of what biology is. Second, Rorty, in his review--and this has been true of other people as well--missed the distinction I belabored between how people behave and how they judge. The book is about the science of judgment. The fact that people do things we often consider morally outrageous, like clitoridectomies--really, really, really horrible things--is not what the science is trying to explain. Of course there's going to be that kind of cultural variation. But what the science is trying to say is: look, could the variation we observe today be illusory? Could there be real regularity--universals fundamental to how the brain works--underpinning that variation? That's sort of the second response to the Rorty criticisms. The third response: there's no doubt that there are a lot of issues we just don't have these flashes of intuition about, because we're confronted so often by moral dilemmas with which we aren't familiar.
We also encounter situations all the time where we may experience a flash of intuition about what's right and wrong, but that intuition is ill-formed because, again, we're looking at the situation in the context of a moral decision we've already made; we're making it resemble something we're already familiar with. There are two things to keep in mind. First, of course, you can't really have a fully formed intuition about certain things. Second, just as John Rawls proposed--and this runs through many of the ideas I'm pursuing--you have these intuitions, but ultimately what we want to do is take these intuitions, think about them, and place them in a context to determine whether or not they are reasonable.
Futurist: Put that way, it sounds like what you're doing is presenting new tools that people can use in their decision-making processes, as opposed to something Orwellian--a new way to rewire your moral system in order to arrive at some new "evolved" state of moral decision making. Understanding that the science is in its infancy, do you think there are possible future policy ramifications for this research? What would social policies that more effectively take these findings into account look like?
Hauser: It's premature to say. I think the goal here is more general. The goal in some sense is to provide a rich, descriptive set of information about how people come to their moral judgments. What are the psychological distinctions? How do they break down after brain damage? How does imaging reveal which circuits are physically different? How does that then play out in terms of what is often described as the prescriptive side of decision making--what we ought to do? At this point, the best I can say is that a prescriptive morality, of the sort that institutions traffic in, would be better informed by an awareness of the kinds of intuitions that people are going to bring to bear on particular moral cases. So, for example, we already know that how you frame something--the words you use--can greatly affect the judgments people end up with. A jury could be greatly biased depending on whether you frame something as an action or an omission. In some of the work we're now exploring, we're very interested in the question of whether the details of a story are more memorable when they're described as actions as opposed to omissions, even when the consequences are the same. There's a lot of work ahead, but at this point in time our aim is really to showcase the psychology that's brought to bear on people's moral judgments, and our hope is that this will inform how law is carried out--how one might think about the power of any particular doctrine in terms of how it affects people's behavior, before enacting a doctrine or law.
Futurist: Thinking about the work that lies ahead, what's the big breakthrough that happens in this research in the next ten years that's going to really change the way we think about how we make decisions?
Hauser: There are some open questions that the behavioral sciences alone are unlikely to answer. For example, there's a real question we're focused on right now: we know that emotion plays a role in our moral psychology in general. The question is, does emotion follow from the moral judgment, or is it the inspirational source of the moral judgment? Take people who have been caught and convicted of serious crimes that involve harm to others. The classic clinical diagnosis is that these are people who have very limited emotional development--they don't feel guilt, shame, or remorse--and that, because of those deficiencies, they just don't know what's right or wrong. That may be what's going on, but here's an alternative: they know what's going on; they just don't care. This brings us back to that distinction between the intuitive systems that allow us to make judgments and those that govern certain kinds of behavior. We're now in the process of testing this alternative: when you test psychopaths on a wide variety of moral dilemmas, our prediction is that they'll make judgments very much like normal, non-psychopathic individuals, but when it comes to behavior, they will do the wrong thing. Emotion failed to check the behavior but did not affect their moral knowledge. That has some very serious implications for how the law works. This is a case where the richness of the philosophical discussion that's been going on for hundreds of years, married with new technologies in the neural sciences, will greatly enrich how we understand how the brain makes moral judgments.
END