The Future of Human Enhancement


Is it ethical to put money and resources into trying to develop technological enhancements for human capabilities, when there are so many alternative well-tested mechanisms available to address pressing problems such as social injustice, poverty, poor sanitation, and endemic disease? Is that a failure of priority? Why make a strenuous effort in the hope of allowing an elite few individuals to become “better than well”, courtesy of new technology, when so many people are currently so “less than well”?

These were questions raised by Professor Anne Kerr at a public debate earlier this week at the London School of Economics: The Ethics of Human Enhancement.

The event was described as follows on the LSE website:

This dialogue will consider how issues related to human enhancement fit into the bigger picture of humanity’s future, including the risks and opportunities that will be created by future technological advances. It will question the individualistic logic of human enhancement and consider the social conditions and consequences of enhancement technologies, both real and imagined.

From the stage, Professor Kerr made a number of criticisms of “individualistic logic” (to use the same phrase as in the description of the event). Any human enhancements provided by technology, she suggested, would likely only benefit a minority of individuals, potentially making existing social inequalities even worse than at present.

She had a lot of worries about technology amplifying existing human flaws:

  • Imagine what might happen if various clever people could take some pill to make themselves even cleverer? It’s well known that clever people often make poor decisions. Their cleverness allows them to construct beguiling sophistry to justify the actions they already want to take. More cleverness could mean even more beguiling sophistry.
  • Or imagine if rapacious bankers could take drugs to boost their workplace stamina and self-serving brainpower – how much more effective they would become at siphoning off public money to their own pockets!
  • Might these risks be addressed by public policy makers, in a way that would allow benefits of new technology, without falling foul of the potential downsides? Again, Professor Kerr was doubtful. In the real world, she said, policy makers cannot operate at that level. They are constrained by shorter-term thinking.

For such reasons, Professor Kerr was opposed to these kinds of technology-driven human enhancements.

When the time for audience Q&A arrived, I felt bound to ask from the floor:

Professor Kerr, would you be in favour of the following examples of human enhancement, assuming they worked?

  1. An enhancement that made bankers more socially attuned, with more empathy, and more likely to use their personal wealth in support of philanthropic projects?
  2. An enhancement that made policy makers less parochial, less politically driven, and more able to consider longer-term implications in an objective manner?
  3. And an enhancement that made clever people less likely to be blind to their own personal cognitive biases, and more likely to genuinely consider counters to their views?

In short, would you support enhancements that would make people wiser as well as smarter, and kinder as well as stronger?

The answer came quickly:

No. They would not work. And there are other means of achieving the same effects, including progress of democratisation and education.

I countered: These other methods don’t seem to be working well enough. If I had thought more quickly, I would have raised examples such as society’s collective failure to address the risk of runaway climate change.

Groundwork for this discussion had already been well laid by the other main speaker at the event, Professor Nick Bostrom. You can hear what Professor Bostrom had to say – as well as the full content of the debate – in an audio recording of the event that is available here.

(Small print: I’ve not yet taken the time to review the contents of this recording. My description in this blogpost of some of the verbal exchanges inevitably paraphrases and extrapolates what was actually said. I apologise in advance for any misrepresentation, but I believe my summary to be faithful to the spirit of the discussion, if not to the actual words used.)

Professor Bostrom started the debate by mentioning that the question of human enhancement is a big subject. It can be approached from a shorter-term policy perspective: what rules should governments set, to constrain the development and application of technological enhancements, such as genetic engineering, neuro-engineering, smart drugs, synthetic biology, nanotechnology, and artificial general intelligence? It can also be approached from the angle of envisioning larger human potential that would enable the best possible future for human civilisation. Sadly, much of the discussion at the LSE got bogged down in the shorter-term question, and lost sight of the grander accomplishments that human enhancements could bring.

Professor Bostrom had an explanation for this lack of sustained interest in these larger possibilities: the technologies for human enhancement that are currently available do not work that well:

  • Some drugs give cyclists or sprinters an incremental advantage over their competitors, but the people who take these drugs still need to train exceptionally hard to reach the pinnacle of their performance.
  • Other drugs seem to allow students to concentrate better over periods of time, but their effects aren’t particularly outstanding, and it’s possible that methods such as good diet, adequate rest, and meditation have results that are at least as significant.
  • Genetic selection can reduce the risk of implanted embryos developing various diseases that have strong genetic links, but so far, there is no clear evidence that genetic selection can result in babies with abilities higher than the general human range.

This lack of evidence of strong tangible results is one reason why Professor Kerr was able to reply so quickly to my suggestion about the three kinds of technological enhancements, saying these enhancements would not work.

However, I would still like to press the question: what if they did work? Would we want to encourage them in that case?

A recent article in Philosophy Now magazine takes the argument one step further. The article was co-authored by Professors Julian Savulescu and Ingmar Persson, and draws on material from their book “Unfit for the Future: The Need for Moral Enhancement”.

To quote from the Philosophy Now article:

For the vast majority of our 150,000 years or so on the planet, we lived in small, close-knit groups, working hard with primitive tools to scratch sufficient food and shelter from the land. Sometimes we competed with other small groups for limited resources. Thanks to evolution, we are supremely well adapted to that world, not only physically, but psychologically, socially and through our moral dispositions.

But this is no longer the world in which we live. The rapid advances of science and technology have radically altered our circumstances over just a few centuries. The population has increased a thousand times since the agricultural revolution eight thousand years ago. Human societies consist of millions of people. Where our ancestors’ tools shaped the few acres on which they lived, the technologies we use today have effects across the world, and across time, with the hangovers of climate change and nuclear disaster stretching far into the future. The pace of scientific change is exponential. But has our moral psychology kept up?…

Our moral shortcomings are preventing our political institutions from acting effectively. Enhancing our moral motivation would enable us to act better for distant people, future generations, and non-human animals. One method to achieve this enhancement is already practised in all societies: moral education. Al Gore, Friends of the Earth and Oxfam have already had success with campaigns vividly representing the problems our selfish actions are creating for others – others around the world and in the future. But there is another possibility emerging. Our knowledge of human biology – in particular of genetics and neurobiology – is beginning to enable us to directly affect the biological or physiological bases of human motivation, either through drugs, or through genetic selection or engineering, or by using external devices that affect the brain or the learning process. We could use these techniques to overcome the moral and psychological shortcomings that imperil the human species.

We are at the early stages of such research, but there are few cogent philosophical or moral objections to the use of specifically biomedical moral enhancement – or moral bioenhancement. In fact, the risks we face are so serious that it is imperative we explore every possibility of developing moral bioenhancement technologies – not to replace traditional moral education, but to complement it. We simply can’t afford to miss opportunities…

In short, the argument of Professors Savulescu and Persson is not just that we should allow the development of technology that can enhance human reasoning and moral awareness, but that we must strongly encourage it. Failure to do so would be to commit a grave error of omission.

These arguments about moral imperative – what technologies should we allow to be developed, or indeed encourage to be developed – are in turn strongly influenced by our beliefs about what technologies are possible. It’s clear to me that many people in positions of authority in society – including academics as well as politicians – are woefully unaware about realistic technology possibilities. People are familiar with various ideas as a result of science fiction novels and movies, but it’s a different matter to be able to distinguish between “this is an interesting work of fiction” and “this is a credible future that might arise within the next generation”.

What’s more, when it comes to people forecasting the likely progress of technological possibilities, I see a lot of evidence in favour of the observation made by Roy Amara, long-time president of the Institute for the Future:

We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.

What about the technologies mentioned by Professors Savulescu and Persson? What impact will be possible from smart drugs, genetic selection and engineering, and the use of external devices that affect the brain or the learning process? In the short term, probably less than many of us hope; in the longer term, probably more than most of us expect.

In this context, what is the “longer term”? That’s the harder question!

But the quest to address this kind of question, and then to share the answers widely, is the reason I have been keen to support the growth of the London Futurist meetup, by organising a series of discussion meetings with well-informed futurist speakers. Happily, membership has been on the up-and-up, reaching nearly 900 by the end of October.

The next London Futurist event – on the afternoon of Saturday 3rd November – picks up the theme of enhancing our mental abilities. The title is “Hacking our wetware: smart drugs and beyond – with Andrew Vladimirov”:

What are the most promising methods to enhance human mental and intellectual abilities significantly beyond the so-called physiological norm? Which specific brain mechanisms should be targeted, and how? Which aspects of wetware hacking are likely to grow in prominence in the not-too-distant future?

By reviewing a variety of fascinating experimental findings, this talk will explore:

  • various pharmacological methods, taking into account fundamental differences in Eastern and Western approaches to the development and use of nootropics
  • the potential of non-invasive neuro-stimulation using CES (Cranial Electrotherapy Stimulation) and TMS (Transcranial Magnetic Stimulation)
  • data suggesting the possibility to “awaken” savant-like skills in healthy humans without paying the price of autism
  • apparent means to stimulate seemingly paranormal abilities and transcendental experiences
  • potential genetic engineering perspectives, aiming towards human cognition enhancement.

The number of advance positive RSVPs for this talk, as recorded on the London Futurist meetup site, has reached 129 at the time of writing – already a record.

(From my observations, I have developed the rule of thumb that the number of people who actually turn up for a meeting is something like 60%-75% of the number of positive RSVPs.)
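(If that rule of thumb holds here, the current 129 positive RSVPs would suggest an attendance of very roughly 77 to 97 people – that is, 60% to 75% of 129.)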

I’ll finish by returning to the question posed at the beginning of my posting:

  • Are these technological enhancements likely to increase human inequality (by benefiting only a small number of users),
  • Or are they instead likely to drop in price and grow in availability (as happened, for example, with smartphones, Internet access, and many other items of technology)?

My answer – which I believe is shared by Professor Bostrom – is that things could still go either way. That’s why we need to think hard about their development and application, ahead of time. That way, we’ll become better informed to help influence the outcome.

About the author

David Wood is a futurist based in the U.K. This essay was reposted with permission from his blog Dw2blog.com.

Comments

"We"

I think that few of the technologies under consideration or discussion here are even remotely proximate enough for legislators, policy makers, or even gonzo investor types to enter into serious deliberation about them on their own terms, and frankly I think that few of the phenomena (consciousness, intelligence, flourishing, wisdom) into which these techniques are presumably supposed to be intervening are even remotely well understood enough to provide the basis for confident assessments.

That is to say, I am presumably one of those poor benighted souls who are "woefully unaware about realistic technology possibilities" like "smart drugs, genetic selection and engineering, and the use of external devices that affect the brain or the learning process." Although my ignorance is attributed by Mr. Wood to a scientific illiteracy shaped by watching too much science fiction, it occurs to me that, quite to the contrary, hyperbolic claims about total rapid transformations of the human condition through the intervention of fantastically efficacious techniques and devices are in fact to be found more in science fiction than in actual science practice or science policy (and let me say that when I refer to science fiction here, I am including an enormous amount of advertising imagery and the promotional discourse one finds Very Serious Futurologists indulging in with PowerPoint presentations in think-tank infused/enthused conference settings). In this regard, it isn't exactly confidence inspiring to hear a breathless reference to record-breaking RSVPs for a talk proffered as a sign of… well, who knows what exactly? Even if Very Serious "transhumanists" like Nick Bostrom manage to get a million facebook "likes" for their pitch this is not, you will forgive me, a reason to think "means to stimulate seemingly paranormal abilities and transcendental experiences" are indeed, as Mr. Wood suggests, "apparent." I must say I do not agree with the article's conclusion that there is any proper connection between indulging in wish fulfillment fantasizing and being "better informed."

Nevertheless, I still think it is important to take articles like this one seriously because they have impacts in altogether different domains than the ones they say they mean to shape. It is crucial to recognize that whenever one speaks about "enhancement," that term is freighted with unstated questions -- enhancement for precisely whom? according to what values? in the service of what end? at the cost of what end?

There simply is no such thing as a neutral "enhancement" that benefits everybody equally without costs, let alone unintended consequences. What is interesting about this sort of discussion is that it pretends all of the stakes are aligned, all the relevant facts are known, all the values are already shared, when of course none of that is the least bit true. "Enhancement" discourse evacuates inextricably political debates of their political substance, inevitably in the service of the implementation of a particular ideology, a particular agenda, a particular constellation of norms (always uninterrogated, often even unconscious). Again, while few of the techniques under discussion here are actually either real or emerging, they function as symptoms of the underlying politics they disavow, but they also function as frames that would refigure and rewrite humanity (in the present, not in "The Future" at all, mind you) in terms more congenial to those underlying politics. That is to say, the apparently technical, apparently neutral, apparently universal, apparently apolitical language of "enhancement" seeks to do political work in the most efficacious imaginable mode, the mode of not doing politics at all.

To get a better sense of what I mean here, notice the exchange of views highlighted in the piece between a critic of this techno-utopian moral-engineering eugenicism, Anne Kerr, and the author. In Mr. Wood's summary of her views Professor Kerr pointed out that "enhancements provided by technology… would likely only benefit a minority of individuals, potentially making existing social inequalities even worse than at present." The upshot of this observation is that it is inapt to use the word "enhancement" in the first place to describe these sorts of little futurological allegories. She presumably went on to illustrate her point with a few imaginary examples: "Imagine what might happen if various clever people could take some pill to make themselves even cleverer? It’s well known that clever people often make poor decisions. Their cleverness allows them to construct beguiling sophistry to justify the actions they already want to take… Or imagine if rapacious bankers could take drugs to boost their workplace stamina and self-serving brainpower -- how much more effective they would become at siphoning off public money to their own pockets!"

Hearing Kerr's concerns, Mr. Wood declares he felt "bound" to respond: "would you be in favour of the following examples of human enhancement, assuming they worked? An enhancement that made bankers more socially attuned, with more empathy, and more likely to use their personal wealth in support of philanthropic projects? An enhancement that made policy makers less parochial, less politically driven, and more able to consider longer-term implications in an objective manner? And an enhancement that made clever people less likely to be blind to their own personal cognitive biases, and more likely to genuinely consider counters to their views? In short, would you support enhancements that would make people wiser as well as smarter, and kinder as well as stronger?"

Of course, to assume in advance that such "enhancements" worked is precisely the issue under discussion so it seems a rather flabbergasting concession to demand in advance, but for me the greater difficulty is the way such a discussion has already been framed by Mr. Wood's response as one in which what we mean when we say a device is "working" is the relevant vocabulary to deploy when what we are discussing is moral development or political reconciliation or human flourishing. In pointing out that clever people often behave foolishly, part of what Kerr is calling into question is whether or not we are quite right to value clever people as clever or right to pretend we mean the same things when we speak of cleverness at all. Mr. Wood seems in his cleverness to have missed that point, predictably enough. Why should readers concede, as his response to Kerr demands we do, that we all know and share a sense of what he means when he speaks of a banker being enhanced into "social attunement"? How does one square enhancement with attunement even in principle? Attunement to what, when, how long, how often, exactly? Would it be right to describe as "philanthropic" a person re-engineered to reflect some person's idiosyncratic image of what a philanthropist acts like? Was Kerr even bemoaning a lack of philanthropy when she expressed worries about the recklessness and fraudulence of too many bankers? Who is to say in advance what the relevant "cognitive biases" are that frustrate good outcomes? Aren't both the biases and goods in question here at least partially a matter of personal perspective, a matter of personal preference? Why is it assumed that parochialism always favors the shorter term over the longer-term? When Keynes reminded us that "in the long run we are all dead" he was not recommending short term thinking in general, but pointing out that sometimes avoidable massive suffering in the short term demands risks (stimulative public deficit spending) that long-term prudence would otherwise disdain.

Wisdom is a tricky business -- if I may condense several thousand years of literature into a chestnut -- and it scarcely seems sensible to fling questions around like "would you support enhancements that would make people wiser as well as smarter, and kinder as well as stronger?" when there are so many vital questions at the heart of what we mean when we speak of wisdom, smartness, kindness, strength in the first place. Not to put too fine a point on it, it seems to me that whatever the answers to the questions Mr. Wood is posing here, everybody engaging in this conversation on these terms looks to me to be made rather more dumb than I think we need be. Is that what Mr. Wood means by "working"?

Unsurprisingly, Professor Kerr apparently responded to Mr. Wood's challenge by rejecting it, and proposing instead that we focus on processes of education and political democratization. Wood countered by complaining, "These other methods don’t seem to be working well enough." He writes that he wishes he had elaborated the example of the failure of our political processes to be equal to environmental problems as an example of what he means -- I suspect Mr. Wood would also be a booster for "geo-engineering" then, angels and ministers of grace defend us! Of course, I wonder what it might mean to say democracy isn't "working," exactly? Does that mean the outcomes Mr. Wood would prefer have not yet prevailed? Does it mean he thinks desirable outcomes that are failing now must then always fail? If he wants to circumvent these failed processes with "technology," does he discount the political processes through which "technology" ends up being funded, regulated, implemented, maintained, its effects distributed and understood, and so on?

When Mr. Wood glowingly quotes Julian Savulescu and Ingmar Persson about how we are "unfit" for "The Future," and how "there are few cogent philosophical or moral objections to… moral bioenhancement. In fact, the risks we face are so serious that it is imperative we explore every possibility… We simply can’t afford to miss opportunities…" I find myself wondering just who this "we" he and they are talking about consists of. Who is included in, and excluded from, this "we"? Who is deciding what a "cogent objection" to this line of what looks to me like incoherent hyperbolic bs consists of? Who is deciding what "opportunities" can't be missed by whom? Whose pet vision of "The Future" exactly are you talking about here? Given that the democratic "we" has already been bagged for disposal in this chirpy little number, I think the answers to these questions take on a certain urgency.

Human moral enhancement . . .

Trials of human moral enhancement are already underway. They are open to all interested parties. The only prerequisite is the ability to question the limits of human nature itself. Full details are available at http://www.energon.org.uk

Belated reply to Dale Carrico

Dale, I'm very sorry that I didn't notice your long reply to my blog-posting until earlier today. (My article was kindly copied here by an editor of The Futurist.) Belatedly, let me offer a few words of response now.

Please let no-one interpret my lack of response up till now as acceptance of Dale's criticism, nor as a sign that I think his criticisms unworthy of reply. He makes several interesting points, though he also leads himself into undue vexation by jumping to wrong conclusions several times. (He seems to be a very self-confident individual.)

To try to move to the essence of the discussion: Dale seems to want to deny that it makes sense to talk about moral bioenhancements "working". That black-and-white word seems to offend his appreciation of the messiness of real-world biology, sociology, economics, and so on.

Well, let me try a simple analogy. Do diets ever "work", in terms of improving someone's health? It's a messy topic. What works for one person may fail to work for another. Different dietary ideas (low-carb, low-calorie...) can interfere with each other in unexpected ways. The impacts of dietary change also depend on someone's exercise regime, and the level of support from their family and colleagues. And a diet might change someone's appearance so drastically that friends no longer like how the person looks. Their loss of weight can prove embarrassing to friends, disrupting existing social relations. Etc.

But so what? Do we give up on all attempts to figure out better diets? Of course not.

Yes, moral bioenhancement is going to be messy as well. Much messier than diets. But that's no reason to say we shouldn't explore what might be possible.

Is Dale really telling us that no changes in brain nutrition, epigenetic expression, or whatever, can ever alter moral capacity for the good? I see such a claim as just as implausible as saying that no change in diet can ever alter health for the good.

As for who I mean by "we", of course I mean all of us, collectively contributing to the clarification of the pros and cons of various courses of action, with an open mind as to which of our cherished hypotheses might sink or swim.

// David W.

Re geo-engineering

Reply to Dale, footnote.

I'll put this bit separately, as it's a tangential piece of crossfire from Dale.

Not for the only time, he jumps to an incorrect conclusion in this part of his comment (before going on to work himself into more of a frenzy, all without reason):

He ["Mr. Wood"] writes that he wishes he had elaborated the example of the failure of our political processes to be equal to environmental problems as an example of what he means -- I suspect Mr. Wood would also be a booster for "geo-engineering" then, angels and ministers of grace defend us!
My views on geo-engineering match what many other observers of the climate change issue have written:
  1. We should be investigating geo-engineering as a Plan B.
  2. We should carry out that investigation well aware of the potential risks involved with it.
  3. We may well find that some forms of geo-engineering are more hazardous than others - e.g. spraying additional particles into the sky is likely to be more hazardous than removing CO2 from the atmosphere via CCS (carbon capture and storage).
  4. But in strong preference, we should be pursuing a Plan A of encouraging the rapid development and deployment of greener energy sources, whether solar, some forms of nuclear, geothermal, or others.

Now, that may well count as being a "booster" for geo-engineering. But why should we need "angels and ministers of grace" to defend us against that particular viewpoint?

// David W.