
A magazine of forecasts, trends, and ideas about the future



The Singularity, Explored

Michael Vassar is one of the brightest young minds in foresight and future technology. He is also the president of the Singularity Institute, whose fourth annual Singularity Summit takes place in New York October 3-4. Speakers at this year's event include PayPal founder Peter Thiel, longevity author Aubrey de Grey, and inventor and futurist Ray Kurzweil, among others.

We asked Vassar about the summit, the Singularity, and the technological breakthroughs of tomorrow.

Futurist: Why establish a summit to discuss technological developments that have not yet occurred? How do you think your view of the future of technology differs from that of most people?

Vassar: One can argue over whether the track record for proactive discussion of speculative technology has been successful enough to justify further efforts of this kind, but to me the self-censorship practiced by the atomic scientists of the Manhattan Project, which helped prevent the apocalyptic scenario of Nazis with nukes, more than adequately demonstrates the value of such efforts.

My view of future technology is not strikingly different from that of most people who seriously try to build as detailed and accurate a model of future technology as possible. Naturally, there are not many such people. The financial rewards of knowing which neighborhoods have trendy real estate, which cities have low costs of living relative to quality of life, or which college degrees have higher income potential are much greater than those of having a realistic model of the long-term future. Someone who makes accurate predictions 8 years out may be able to increase their investment five- or ten-fold over those 8 years. Someone who makes accurate predictions 40 years out can make a five- or ten-fold return over 40 years, but they could also do that by investing in an index fund. My views on the long-term future are consensus views in the same sense as my views on the prospects of long-term peace in Afghanistan: regarding emerging technology they are the expert consensus, and in military forecasting they are the vague consensus of the generally educated but not particularly informed.
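The incentive arithmetic behind this comparison can be made concrete. A minimal sketch, using the multiples Vassar cites and the standard compound-growth formula (the code and variable names are illustrative, not from the interview):

```python
# Annualized (compound) growth rate implied by a total multiple over n years:
#   rate = multiple ** (1 / years) - 1
def annualized_rate(multiple: float, years: float) -> float:
    return multiple ** (1.0 / years) - 1.0

short_horizon = annualized_rate(5, 8)   # 5x return over 8 years
long_horizon = annualized_rate(5, 40)   # the same 5x spread over 40 years

print(f"5x over  8 years: {short_horizon:.1%} per year")  # roughly 22% per year
print(f"5x over 40 years: {long_horizon:.1%} per year")   # roughly 4% per year
```

The 40-year forecaster's implied annual return is in the same range as a broad index fund's long-run performance, which is the point: accurate long-range foresight earns almost no financial premium.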

The group of serious long-term technology forecasters is, as noted, fairly small. It's more comparable in size to metamaterial physics than to physics as a whole. Further, the group is inconspicuous compared to two much larger groups, namely technophiles and science fiction enthusiasts. The main forecasting method used by the former group is to assume that in "The Future", whether 5 years away or 50, every technology currently under development at the angel-investment stage or beyond will be completely developed and ubiquitous, while ignoring the interactions among those technologies and the human limits on fully utilizing a new technology's potential. The forecasting method of the latter group is to play with ideas, usually one at a time, considering technologies far beyond anything currently in a lab but without much concern for non-arbitrary timelines or estimates of relative complexity.

One of the things that I want to communicate through this conference is that the Singularity is not a fringe idea among people who take the future very seriously. It's only a fringe position among people who don't think about the future and among the subculture of technophiles who think only about the next few years and about the technologies currently under development. For whatever reason, such people reliably fail to extrapolate even linearly. They never imagine a 2050 that is twice as different from today as their 2030, and they react negatively to the suggestion that this is a realistic thing to expect.

Futurist: Many newcomers to the Singularity concept find it to be an extremely optimistic view of the future. Part of the mission of the Singularity Institute and the Singularity Summit is to point out the potential risks and challenges associated with rapidly developing technology, artificial general intelligence in particular. What are some of those risks and why should people pursue the development of artificial general intelligence in spite of them?

Vassar: The major risks associated with AI are difficult to talk about because people are very prone to invoking inappropriate metaphorical schemata. The word 'robot' was taken directly from the Czech for forced labor, and early stories about 'robots' were thinly veiled metaphors for workers' rebellion. This haunts casual discussion of AI dangers to this day, and it leads scientists who understand AI well enough to recognize the absurdity of that metaphor to be dismissive of risk in general.

Why pursue AI? Well, the simplest reason is that all sorts of AI applications hold practical short-term promise. Most of this sort of AI doesn't contribute anything to the accumulation of technique necessary to ultimately build a general AI. I definitely don't think that people should try to develop general AI without all due care. In this case, all due care means much more scrupulous caution than would be necessary for dealing with Ebola or plutonium. Likewise, any such efforts should be done in careful collaboration with critics who can scrutinize one's work and pay careful attention to one's unnoticed assumptions, which largely will come from bad analogies to narrow AI, from overt or subtle anthropomorphism, and from naive philosophy. 

Futurist: Many people encountering transhumanist ideas for the first time compare them to religious prophecies; they seem to promise freedom from death, strife, super-abilities, and abundance of all earthly goods. What do you think of the comparison of transhumanism to religion?

Vassar: People have a terribly narrow conception of religion. When you use the same word to refer to a nun kneeling in church, a dervish spinning into battle or a rice farmer chanting traditional songs while picking rice... well, a category is going to resemble a lot of things if the members of that category don't resemble one another. It therefore depends on what you mean by religion. 

In general though, both religion and transhumanism promise what people want—all of what they want rather than just some of it. When you're talking about religion, you're talking about controlling the most powerful thing in the universe that the religion posits to exist—God, mind, whatever. Transhumanism posits that humans can gain control of the one thing that science believes in, namely matter. Both agree that there is an ultimate reality, that it can be controlled, at least in principle, and, as a logical consequence, that such control would enable you to satisfy your preferences, so long as you are able to adequately specify what you want. There are real similarities in that sense. A core difference is that religion starts with an assumption of what ultimate reality is and how it can be manipulated, while science starts with the assumption that one has to figure such things out.

Futurist: On a related note, how does the media sometimes misrepresent or misinform the public about these ideas and concepts? How can they help the public understand these ideas a bit better?

Vassar: My honest position, and a lot of people are going to disagree with me on this, is that the media does a very good job on technology and scientific issues. The quality of presentation is a lot higher in many newspapers than in many scientific papers. Most scientists don’t write for lay people at all. It’s noteworthy that the New York Times employed Paul Krugman before he won the Nobel Prize.

I honestly think that scientists complain unfairly. Most put very little effort into communicating with the public, possibly even negative effort. It's easy to get stuck in the attitude that if what you are saying is intelligible it can't be all that deep, and so to signal depth by being unintelligible. It wouldn't be that difficult for many scientists to put out understandable versions of the articles they publish. It wouldn't take much effort to find English majors to do it for them if this were included as a standard expense covered by grants. But we don't do that: as a society we have chosen, rightly or wrongly, to put very few resources into communicating science to the public. As a result, one science journalist has to cover a lot of scientists. Naturally they get some things wrong, but so would most scientists if they were working in so many fields at once. In general I have an extremely high opinion of the media; they get a lot of unfair complaints.

Futurist: You’ve said that, in this century, small connected groups will wield considerably more power and influence than they do today, and that important progress will have to come from these small groups of individuals rather than from large national or corporate endeavors. What are the signs you see of this happening today?

Vassar: The idea that progress is more likely to come from small groups than from large ones finds its inspiration in modern startup culture. Most companies can't innovate except by acquiring a startup. There are prominent exceptions to this: Toyota, GE, Apple, Procter & Gamble, Google, and Costco are all good examples of innovative large-scale private enterprise. That said, for every such company it is easy to name several equally large companies that are much less effective at innovation. It has clearly been a trend over the last few decades that smaller groups have become more powerful and responsible for a larger fraction of innovation, and I would expect that to continue.

One reason for this trend is that computers are now very cheap. You can do better work with one desktop and 10 excellent programmers than with a hundred-million-dollar supercomputer and ten thousand commodity programmers. Start-up culture is also more adaptive because the programmers are in communication with one another. We know this intuitively. You can’t communicate effectively among a thousand people. Did you know every single person in your high school? 
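The communication intuition here is often formalized as the number of pairwise channels in a group, which grows quadratically with group size (this model is an illustrative gloss on Vassar's point, not something he cites):

```python
# Pairwise communication channels in a group of n people: n * (n - 1) / 2.
# This is the quadratic blow-up behind the intuition that small teams
# coordinate well and thousand-person organizations cannot.
def channels(n: int) -> int:
    return n * (n - 1) // 2

print(channels(10))    # a 10-person startup team -> 45 channels
print(channels(1000))  # a thousand-person organization -> 499500 channels
```

Ten programmers can plausibly maintain all 45 of their mutual channels; no one maintains half a million.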

Futurist: You’ve said…
“The development of molecular nanotechnology (MNT) promises to lead rapidly to cheap superior replacements for a large majority of durable goods, a substantial fraction of all non-durable goods, all existing utilities, and some services. For this reason and due to the relatively low expected cost of developing nanofactories, MNT represents the largest commercial opportunity of all time…. MNT also has the potential to impact the timeframes and severities of a number of major global risks such as those of terrorism, emergent disease, global warming, war, etc.” What specifically can governments or citizens do to better prepare for sweeping changes of the sort that you’ve laid out?

Vassar: When people ask what 'should' be done, they are prone to be very vague about what they mean. Governments aren't people and can't, in any very useful sense, be thought of as deciding to do things. Different citizens, in different situations and with different values, should do different things. One good general rule is probably to invest more effort in real interpersonal relationships that manifest as a readiness to provide actual aid. In an unstable world, relying on the government or on personal savings means relying on the same source of stability as millions of others; you are likely to put demands on your source of support at the same time as they do, which is just when it is most likely to fail. Social networks with real norms of mutual aid are one type of real diversification in a world where efficient-market theorists have created correlations between assets that greatly reduce the value of traditional forms of diversification. Whether things go well or poorly, tighter community bonds are worth having intrinsically. That intrinsic value is, of course, a manifestation in your evolved psychology of their extrinsic value for survival, but unlike sugar and fat, this is a case where evolution is steering you in a direction appropriate to the modern world.

Futurist: What do you think is the best-case scenario for technological development over the course of the next twenty years, what is the worst-case scenario?

Vassar: What you probably want to hear about is the best case and the worst case for worlds very similar to our own: a summary of what is fairly certain in technology over a fairly short time frame, and what we might hope to gain from it. I could go on about that, but for a sampler: totally realistic virtual-reality scenarios and immensely more powerful computers revitalize entertainment to such a degree as to seriously threaten the survival of most other forms of activity. If technology develops well, we will have computers all over the place that are more powerful than the human brain, though in 20 years we will probably still lack the knowledge to build an AI, or even a more efficient economy. In the best-case scenario, we get everything offered by the positive 1960s vision of the future, except stupid things like jetpacks, plus everything offered by the positive 1990s vision of the future, like very good virtual reality. People are more respectful of intellectual thought and, though this is unlikely, it is not implausible that they put together more sensible economic policies.

Realistic worst case scenarios? These mostly involve positive feedback between bad economics and bad policy leading to substantial erosion of freedom and innovation. 

Obviously neither of these is truly a "best case" or "worst case," because in the best case full AI exists and pursues goals humans would, upon reflection, care about, while in the worst case it would simply eliminate the humans.

Futurist: What’s the most important thing the average person can do today to improve the odds of survival of the human race in this century?

Vassar: What is good global citizenship? It depends on who you are and what you want. The best thing to do if you are a starving person in Ethiopia is very different from the best thing if you are a dot-com billionaire. Very simply put, if you are any sort of billionaire, or even any sort of person with tens of millions of dollars, and you seriously want to improve the well-being of mankind, you should very seriously consider discussing it with me personally. My contact information is fairly public, and it's worth my time to respond specifically to your interests and values.



This interview was conducted by Patrick Tucker and Rick Docksai. Amended 9.22.09



THE SINGULARITY SUMMIT 2009—THINKING ABOUT THINKING: The Singularity Summit will be held October 3-4, 2009, at the 92nd Street Y, 1395 Lexington Avenue, New York, New York.

 The Summit is an annual event to further understanding and discussion about the Singularity concept and the future of human technological progress. It was founded in 2006 as a venue for leading thinkers—whether scientist, enthusiast, or skeptic—to explore the subject.

 Register at http://www.singularitysummit.com/

 The World Future Society is a proud media partner for the Singularity Summit.



Join WFS for $59 per year and receive THE FUTURIST, Futurist Update, and many other benefits.

COPYRIGHT 2009 WORLD FUTURE SOCIETY, 7910 Woodmont Avenue, Suite 450, Bethesda, Maryland 20814. Tel. 301-656-8274. E-mail info@wfs.org. Web site http://www.wfs.org. All rights reserved.