A magazine of forecasts, trends, and ideas about the future

January-February 2009 Volume 43, No. 1

Reinventing Morality


The Moral Hardware

In keeping with our computer–brain analogy, some aspects of the moral decision-making process are fixed; namely, the platform on which this process occurs. You might call this the hardware, the physical brain itself. We all process moral decisions based on different assumptions or beliefs, but the process happens in the same place for each of us, an area in the front of the brain called the ventromedial prefrontal cortex. This is where our emotional experiences — religious, traumatic, joyous — connect with our higher-level social decision making to give us a sense of good or bad.

So now that science has found the region involved in moral decisions, how long before some Silicon Valley start-up gives us a machine to read good or ill intentions, a veritable evil detector?

Not anytime in the foreseeable future. The human brain is an object of unfathomable complexity. To imagine that it might suddenly be rendered as transparent and simple as the items in an Excel spreadsheet is an act of hubris. This is why David Poeppel of the University of Maryland likes to keep expectations realistic. He studies language in the brain: just as Hauser is focused on the language of morality, Poeppel is focused on how vibrations in the ear become abstractions. It’s next to impossible, he says, to see how a brain formulates big abstractions, like the ideas in Locke’s Second Treatise of Government. He hopes one day to understand the neural processing of words like dog or cat.

Poeppel’s current work involves magnetoencephalography (MEG), an imaging technique that measures the brain’s electrical signals in real time. He was kind enough to invite THE FUTURIST to watch an experiment in progress. We found him in a lab with some of his brightest doctoral students, several gallons of liquid nitrogen, a $4 million MEG machine, and a girl named Elizabeth — who was having her brain activity, her innermost thoughts, displayed on a big bank of monitors.

It looked like squiggles.

“What we’re looking at are the electrical signals her brain is giving off as she responds to certain stimuli,” Poeppel told me. In the case of Elizabeth, the stimuli were blips on a monitor and ping noises. The spikes and squiggles on the graph indicated that she was “seeing” the blips, without her having to make any other signal.

Poeppel doesn’t believe we’ll ever be able to hook people to a machine and get a complete transcript of their thinking. “We aren’t capable of that kind of granularity,” he says. But what his — and his students’ — experiments with MEG do show is the brain reacting to stimuli in real time, which can later reveal which parts of the brain react to which stimuli and how much electricity those regions throw off.

The way the brain reads little blips may not seem to be correlated with morality, but it is. Returning to the brain–computer analogy, Poeppel says that the moral rules we follow, the impulses that tell us when to push the button and divert the trolley and when not to, are set in a sort of default position when we’re born, just like the default settings on your PC. “Those are constant, immutable. They form the basis of morality. And then the switches are set to particular values as a function of experience. There’s a close interaction between the universality (meaning the brain hardware) and cultural specificity (the software).”

One day, MEG research, trolley surveys, and other aspects of moral science will reveal the key aspects of that correlation.

Amazingly, even though neuroscience is still in its infancy, it’s already yielding insights into moral issues, such as race bias. According to Poeppel, studies have shown that “people make decisions that reflect race biases even when they’re aware of what they’re doing.” Race bias is a reaction that arises from lived experience. What MEG, fMRI, and other neuroimaging techniques give us is a picture of how those experiences change the physical brain and how the physical brain recreates, reimagines, and recomputes them all the time.

“Does this reflect very deeply embedded mechanisms of decision making? If you’re aware of it, can you neutralize it, can you override it and reeducate the system? Of course you can. The brain is plastic. It changes all the time. That’s what learning is. But we still don’t have a real explanatory theory for how that works.” He adds, “It’s an area where we will see progress in the years ahead.”

Where the mysteries of morality are concerned, that progress will likely take the form of more questions than answers.

 
