The Futurist Interviews Ray Kurzweil

World Future Review: What does it mean to build “new and improved” human intelligence? And where are we in terms of bringing this to reality?

Kurzweil: There are two components that must be achieved to create a human-level artificial intelligence.

First, matching the hardware capacity of the human brain and, second, emulating the brain’s own software techniques.

There are a number of different ways to analyze what the hardware requirements are. If you take the most conservative analysis, which is 10^16 calculations per second [10 million billion calculations per second or 10 billion MIPS], we’ll actually have that by next year in a supercomputer and we’ll have it for about $1000 by 2020. By 2029, that level of computation will be very inexpensive....

But the goal is not just to create a simulation. The actual goal is to understand how the brain works, to understand its basic principles. That’s the software. We can engineer systems that don’t have the restrictions of a human brain, which, for example, has to fit into a skull of less than one cubic foot, runs on a chemical substrate that sends messages at a few hundred feet per second (a million times slower than electronics), computes at a mere 200 calculations per second, and so on. We won’t be limited to a billion pattern recognizers in the cerebral cortex—we could have a trillion. And if we understand the basic principles by which the brain creates intelligent behavior, we can focus and leverage them and create much more powerful systems.

I’m actually writing a new book to amplify that case, called How the Mind Works—and How to Build One, which will talk about the tremendous progress in this reverse-engineering project since The Singularity Is Near came out in 2005. Human-level intelligence in machines is not going to displace us or compete with us; it’s not an invasion coming from Mars. These are tools we’re creating to expand ourselves, to expand who we are. And that’s what we’ve done with tools ever since we’ve had tools. Ever since we picked up a stick to reach a higher branch, we’ve used tools to extend our reach, to do the things we couldn’t otherwise do: first physically and now mentally.

World Future Review: What are the most pressing environmental issues that we should be concerned about as we move forward? And in a world where nanoengineered photovoltaic panels have eliminated fossil fuels, what will our obligation to the environment be?

Kurzweil: The first industrial revolution technologies were a compromise: they are harmful to the environment. Fossil fuels, for example. We are running out of energy if we limit ourselves to 19th-century technologies like fossil fuels, but obviously we don’t need to do that.

We have the opportunity to move away from fossil fuels. Solar has the most headroom but there are others … [for example,] there’s also a tremendous amount of geothermal energy. There are many different renewable, decentralized, environmentally-friendly technologies that ultimately will be extremely inexpensive. There’s a 50% deflation rate to information technology (an implication of the law of accelerating returns). It’s actually about 25% in the case of solar energy—a 25% deflation rate each year—but that means that it ultimately will be very inexpensive—much less expensive than comparable fossil fuels—and it has the added advantages of being environmentally-friendly and decentralized, unlike today’s supertankers and nuclear power plants, which are centralized and therefore vulnerable to catastrophic centralized destruction. New technologies in general are decentralized, and that makes them safer. The Internet is decentralized—if a piece of it goes down, the information just routes around it.
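The deflation rates quoted here are simple compound decay. As a back-of-the-envelope sketch (the 50% and 25% annual rates are the figures from the interview; the $100 baseline price and ten-year horizon are arbitrary illustrative assumptions, not Kurzweil’s numbers):

```python
# Compound price decline at a fixed annual deflation rate.
# The rates (50% for information technology, 25% for solar) are the figures
# quoted above; the $100 starting price and 10-year horizon are arbitrary.
def price_after(start_price, annual_deflation, years):
    """Price after compounding an annual percentage decline."""
    return start_price * (1 - annual_deflation) ** years

it_price = price_after(100.0, 0.50, 10)     # information technology
solar_price = price_after(100.0, 0.25, 10)  # solar energy

print(f"IT after 10 years:    ${it_price:.2f}")
print(f"Solar after 10 years: ${solar_price:.2f}")
```

Even at the slower 25% rate, a decade of compounding cuts the price by roughly 95%, which is the sense in which solar "ultimately will be very inexpensive."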

Over the next one or two decades, there will be another food revolution. We’ll go from horizontal agriculture, which has dominated humanity for the last several thousand years, to vertical farming—basically, computer-controlled factories creating hydroponic plants for fruits and vegetables and in vitro-cloned meat, which could be engineered to be much healthier. [For example,] you could have beef with Omega 3 fats rather than saturated fat.

Same thing for housing. There’s an emerging industry of three-dimensional printing. Right now, the key features are at the microscale, but within 20 years, it will be at the nanoscale and we’ll be able to print out three-dimensional objects of extreme complexity. Today, we can print out modules to build inexpensive housing that’s very sturdy, earthquake proof, and basically snap them together Lego-style. These little modules have all the pipes and communication lines built in. One of the projects at Singularity University was to use three-dimensional printing to create low-cost housing for the developing world. We can house people very comfortably if we convert resources in the right way. Ultimately, with nanotechnology being able to produce inexpensive modules for houses as well as everything else we need, we’ll be able to do that at very low cost.

World Future Review: You recently said in an interview with H+ Magazine, “whereas we can articulate technical solutions to the dangers of biotech, there’s no purely technical solution to a so-called unfriendly AI. We can’t just say, ‘We’ll just put this little software code sub-routine in our AIs, and that’ll keep them safe.’ I mean, it really comes down to what the goals and intentions of that artificial intelligence are. We face daunting challenges.” In THE FUTURIST in 2006, you acknowledged that unlike nanotechnology, “superintelligence by its nature cannot be controlled.” Can you elaborate a little more on the risks and dangers? Also, given those risks and dangers, if there’s no real way to safeguard things from a dystopian scenario, why is strong AI desirable?

Ray Kurzweil: I don’t think we should envision it with a model of someone creating this Strong AI in a laboratory and unleashing it on the world. That’s not the way it’s going to happen. We have hundreds of examples today of Narrow AI—programs doing tasks that used to be done by human intelligence but doing them better and less expensively—and the narrowness is gradually getting less narrow. And this intelligence is deeply integrated with our own already, even if, for the most part, it’s not yet in our bodies and brains. There’s going to be a continuous exponential progression of computers getting more powerful and smaller, and we’re going to become more and more integrated with them. And they’ve already made us smarter, and I don’t just mean as measured by IQ tests. I mean as measured by the intellectual capability of our civilization, which includes all of the things that we can do with biological and non-biological intelligence working together.

That integration is going to become more and more intimate. In 2035, you’re not going to be able to walk into a room and say, “humans on the right side, machines on the left.” It’s going to be all mixed up and integrated—one complex, dynamic, chaotic human/machine civilization. Gradually over time, the nonbiological portion of humanity’s intelligence is going to grow exponentially. The biological portion is fixed. It’s really not going to change—not to any significant degree. So, over time, nonbiological technology will predominate. But it’s still going to be one civilization with people having different philosophies and arguing about values.

I would maintain we actually have much more consensus on human values than might appear. People focus on our differences and talk about culture wars, and yes, there are certain issues, but what we all agree on is actually much more pervasive than what we disagree on. This includes a belief in progress. The idea of progress is a fairly recent concept in human history. People didn’t think in terms of progress a thousand years ago. There actually was progress, but it was so slow as to be unnoticeable.

World Future Review: There are billions of devoutly religious people around the globe. How do you sell the idea of super-intelligence, technological human enhancement, and virtual immortality to a global populace who would have to give up their core religious beliefs to embrace such a future? And would traditional religious beliefs be compatible with a world governed by technology?

Ray Kurzweil: First, I think we should recognize that the major religions emerged in pre-scientific times, and we need to update our philosophies based on what we’ve learned in the thousand years or so that we’ve had science. However, such ideas are not necessarily inconsistent with religious beliefs. In fact, the major religions have embraced technology, technological progress, and the idea of human beings applying tools to overcome human suffering and extend life here on earth. The major religions tend to be very pro-life and clearly support medical and scientific progress to expand human longevity. While they may not necessarily talk about radical life extension, [such concepts] are just natural extensions of the idea of human progress, which the major religions do endorse. Even the pope has endorsed the idea of using science to overcome disease.

World Future Review: Speaking of e-commerce, you point to a future economic boom based on the exponentially increasing capability of computer power, coupled with decreasing cost, through the fulfillment of Moore's law. Can you tell us a little about the explosion of wealth that will follow the explosion of technology?

Ray Kurzweil: We have economic growth every year. If there’s a very slight downturn one year, we consider that a disaster and call it a recession. But there is economic growth in almost every year, and all of that comes from information technology. The information industries grow 18% in constant dollars each year, despite the fact that you can get twice as much each year for the same price, because as price performance reaches a certain level, whole new applications explode. People didn’t buy iPods for $15,000 each 15 years ago, which is what they would have cost. Social networks weren’t feasible six or seven years ago. And as new applications become feasible, they suddenly take off. E-books are now taking off because all the enabling factors are in place.
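The arithmetic behind "18% growth despite halving prices" can be made explicit. A minimal sketch, assuming (purely for illustration) that the 18% constant-dollar revenue growth and the annual price halving quoted above both apply over the same year:

```python
# Back-of-the-envelope: if constant-dollar revenue grows 18% per year while
# the price of a unit of computing halves each year (the figures quoted
# above), the quantity of computing actually delivered compounds at both
# rates combined, since units delivered = revenue / unit price.
revenue_growth = 1.18  # constant-dollar revenue multiplier per year
price_ratio = 0.5      # this year's unit price as a fraction of last year's

units_growth = revenue_growth / price_ratio
print(f"{units_growth:.2f}x more computing delivered each year")
```

In other words, dollar revenue growing 18% while each dollar buys twice as much means the delivered capability more than doubles annually, which is why new applications keep crossing feasibility thresholds.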

Every industry is gradually transforming into an information industry. Health and medicine is making that transformation now. Most of the economy will be information technology in the 2020s. … This is what’s providing economic growth. The non-information technology industries are shrinking.

World Future Review: I want to talk about something a little different, and that’s the role of creativity in a post-Singularity world. You’re the author of some of the first computer programs that compose poetry and music. What place is there in a post-Singularity world for those classic works of art and literature produced by non-enhanced humans—Shakespeare and da Vinci, for example—and how will we redefine creativity and the creative process in general? What will be lost if we give up these processes to software programs?

Also, is there room in the digital future for analog processes? There’s no linear progression when it comes to artistic tools—but there are constellations of widely-varying processes that are different from—but not superior to—the others. Movies didn’t render plays obsolete, for example. What will be lost if we give up these processes in our haste to embrace a fully-immersive technological future?

Ray Kurzweil: Well, first of all, digital technology has already revolutionized the creation of art in every field, including graphic arts and music. Perhaps less so in language—although even there, research tools and other online tools are certainly helpful. But I was recently at the National Association of Music Merchants show, which I’ve gone to since 1983, and aside from the elaborately dressed musicians and the cacophony of musical sounds that you hear on the trade show floor, it really looks and feels like a computer conference. I mean, there are some acoustic instruments, but for the most part, the instruments are very sophisticated from a technological perspective, and the users are speaking in very sophisticated terms of signal processing and other computer paradigms. Same thing at a graphic arts conference. Graphic artists are using very sophisticated tools. Almost all commercial music—at least popular music—is done with synthesizers. The digital world is doing a better and better job of emulating specific art forms that have evolved using real-world methods. It’s really just one aspect of virtual reality. I’ve been very involved with that in the musical field.

The ability of the digital world to emulate the real world is advancing and getting more and more subtle. Virtual reality today is cartoon-like, but if you look at Second Life, over the last 18 months, it’s become much more realistic. You can see where it’s headed to being very realistic and three-dimensional and full-immersion. That is the goal of the digital world: to emulate the natural world.

There are still many things that we can’t do in the digital world. You can simulate brush strokes and so on with digital tools, but you can’t yet really achieve the three-dimensional effect of an oil painting. But that’s the direction we’re headed in.

This interview was conducted by Aaron M. Cohen for World Future Review. Patrick Tucker contributed to this interview.

About the Interviewee:
Ray Kurzweil was the principal developer of the first CCD flat-bed scanner, the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition software.

His many books include The Age of Spiritual Machines: When Computers Exceed Human Intelligence (Viking, 1999), The Singularity is Near (Viking, 2005), and his most recent, co-authored with Terry Grossman, Transcend: Nine Steps to Living Well Forever (Rodale, 2009). He is the co-founder (along with X Prize Foundation chairman and CEO Peter Diamandis) of Singularity University. He was also a keynote speaker at WorldFuture 2006 and WorldFuture 2010, the annual conference of the World Future Society.


Theory of Mind in machine intelligence

In human intelligence, the feature that possibly overrides all the logic and computation functions that we are so proud of as human beings is "Theory of Mind": a thinking ability that enables us to have an idea of what other people may be thinking while we are interacting with them. This capability can be, and often is, imperfect, yet it represents an incredible economy in human interaction, one that was most likely very adaptive from an evolutionary point of view. Very few animals other than human beings have that capability, and even then it remains very limited compared to that of human beings. It is extremely important in social relations, and one of the most productive hypotheses about the difficulties people with autism have in developing social relations came from research in this domain, in particular the research of Uta Frith at the Medical Research Council in London, UK.

Now count the times you have been mad at a computer program for not really doing what you expected, just because you forgot to specify something in your request... So far, the best AI systems aren't capable of having a theory of mind about the user who is interfacing with the program...

I have a son with autism; he is in the "high-functioning" category. I have written many articles about special talents in autism, some of which would be hard to reproduce in machine intelligence, while others would, on the contrary, be extremely easy to reproduce...

Feel free to contact me for more on this specific question: "Will machine thinking include theory of mind in the programs that will be developed?" I'll send another message on Isaac Asimov's view of machine thinking, which I find one of the most brilliant analyses ever produced in simple language.

Isaac Asimov and machine intelligence (second part of previous message)

As I said in my previous message, I think this is one of the best analyses I have been able to read among the hundreds of books I have read on AI. It is about what differentiates human thinking from machine thinking. Although extracted from a science fiction novel, the depth of analysis that Asimov put into that dialogue between two robots, trying to understand why human beings were capable of solving problems with much less memory and computing power at their disposal than the robots had, is remarkable.

Human thinking vs machine thinking

Isaac Asimov
Robots and Empire
Ballantine Books, NY, 1985
Page 54

Throughout the series appear one human detective, Elijah Baley, and two high-performing humanoid robots who serve as support to the detective…

The two robots in this fiction are named Giskard and Daneel. Giskard, who is trying to understand human thinking, tells Daneel: “Human beings have ways of thinking about human beings that we have not.” Giskard is searching for the “laws of humanics,” which he assumes regulate human thinking just as Asimov’s famous Three Laws of Robotics completely regulate robots’ thinking and actions.
To that end, Giskard says he has searched whole libraries trying to discover whether such laws governing human behaviour ever existed, or whether they could be deduced from analysis of past human behaviour.

Giskard continues: “Every generalisation that I try to make, however broad and simple, has its numerous exceptions. Yet if such laws existed, and if I could find them, I could understand human beings and be more confident that I am obeying the Three Laws in better fashion.”

Giskard goes on: “Since detective Elijah understood human beings, he must have had some knowledge of the laws of humanics.”
Daneel answers: “Presumably. But he knew through something that human beings call intuition, a word I don’t understand, signifying a concept I know nothing of. Presumably it lies beyond reason at my command.”

Giskard again: “That, and [robot] memory! Memory that doesn’t work after human fashion, of course. It lacked the imperfect recall, the fuzziness, the additions and subtractions dictated by wishful thinking and self-interest, to say nothing of the lingerings and lacunae and backtracking that can turn memory into hour-long day-dreaming. It was robotic memory, ticking off the events exactly as they had happened, but in vastly hastened fashion. The seconds reeled off in nanoseconds…”

Amazing, isn't it?
It is perhaps some of our imperfections, and our capacity to rebound from those imperfections, that make us different from machine intelligence. I have developed that idea with regard to the strange creativity demonstrated by some people with autism. Temple Grandin, one of the most famous people with autism, likes to end her talks with: "If autism had been eradicated very early on in prehistory, human beings would probably still be socialising around a camp fire at the entrance of a cave..."
Another subject on which I have written several papers...