Futurist: What would you say is the principal obstacle before general AI?
Brooks: It's nice to think of it in terms of being a single technical hurdle, but I don't believe that's the case. There's a whole raft of things we don't understand yet. We don't understand how to organize it, we don't understand what its purpose is, we don't understand how to connect it to perception, we don't understand how to connect it to action. We've made lots of progress in AI. We've got lots of AI systems out there that affect our everyday lives all the time. But general AI, it's early days. Early, early days.
Futurist: Looking ten to twenty years in the future, give me a headline. What's the big AI development that's going to change the way people think about the field?
Brooks: If you went back to 1985 and you told people they would have lots of computers in their houses, they would have thought that was crazy. Where would we put those big boxes with the spinning discs? That was the conception of computers at that time. I think what people are going to see in their houses in fifteen or twenty years are robots, many robots. There are already more than two million people in the U.S. who have cleaning robots....
Futurist: Well, thanks to you....
Brooks: They're simple little things. I think we're going to see lots, lots more robots. There are so many big companies involved in robots now that, twenty years from now, we'll look back and say, oh, yeah, we've got robots.
Futurist: So you think the most visible and obvious manifestation of AI will be in the form of beings that operate in physical space?
Brooks: I think the biggest manifestation that people will associate with AI is going to be robots, but the reality is that people who use Google are using a big AI system all the time. You book an airline flight online, and the airline's routing is written by an AI lab; an AI lab at MIT writes such things. So, already, in people's everyday lives, they're using AI systems all the time, but they don't think of them as AI systems; they think of them as a web application, a cell phone, a map. But they're AI systems.
Futurist: Do you see any negative consequences in the way we use AI right now? You mentioned Google, many people use Google to access information, but you could make the argument that it has a negative effect on research skills, on critical thinking ability....
Brooks: When I was a boy in elementary school, there was a big fuss about using ballpoint pens, even fountain pens. We had to know how to use a nib and ink because, they said, if we lost that skill, later in life we would not be able to get along. People keep saying we're losing this skill or that, but we're gaining other skills and we're adapting to modern life. I just don't buy it. People can become fantastic at using Google and getting information. Maybe a different set of people were fantastic at using other skills, but it's a set of survival skills, and people who are better at it will prosper.
Futurist: What sort of stock do you put in the notion of runaway AI?
Brooks: I don't think we're going to have runaway AI in any sort of intentional form. I think there may well be accidents along the way where systems fail in horrible ways because of a virus, bug or something, but I don't believe that AI with malicious intent makes sense. People using systems as a vehicle may have malicious intent, but I don't think malicious intent from the AI itself is something that I'm going to lose sleep over in my lifetime. Now, 500 years from now, maybe. But I don't think in my lifetime it's going to be anything like an issue.
Futurist: Obviously we would hope that any artificially intelligent entity would reflect our values, but we're still in a process of deciding those values. Isn't that a key issue?
Brooks: We are the ones who are going to be building these systems so we are unlikely to be building ones we don't like. We could build really dangerous trains, but we don't. We just don't do it.
Futurist: How would you frame this issue for a general audience? What do you think is the big message that a lot of people aren't getting?
Brooks: You have to understand that technology will change the world around you; it will change your life. Every so often, I go to the World Economic Forum in Davos in January. The industry and government leaders bring some of us technology leaders along as entertainment, on the side. My argument is, 'we're the ones who are going to change the world you're going to have to deal with. You're struggling with digital rights management and copyright; that's because of technology.' But they don't want to hear about the technology, even though technology is going to change the world.
Futurist: If you could give any advice to the young people who are going to be living in this world that you're creating, what would you tell them? What advice, similarly, would you give to people who aren't so used to change?
Brooks: Well, the first thing I would say to the young people is that there's been a very unfortunate impression cast that jobs in IT are being exported overseas. In fact, we're facing a tremendous shortage of skilled information technology workers. The smartest thing you can do is major in computer science in college, and you are guaranteed employment for life. I keep having parents come up to me saying, 'I heard all the jobs are going to India.' Not true. So, young people, go into computer science; you will be well served. The second thing is, one can be scared of technology changes, or one can think of change as opportunity. I like to think of change as opportunity. How can I do things that are more interesting? How can I do them better?
Futurist: What are some books or texts that influenced your thinking?
Voss: That's kind of hard to pinpoint, because there's really a very broad spectrum of things that I read. I studied lots of philosophies, but one philosophy I got very involved with was Objectivism. There are a lot of things I disagree with Objectivism on, but there were some really important insights in terms of how important concept formation and context are, and really a theory of knowledge I got out of that about how knowledge needs to be acquired, integrated, and represented, and that it isn't just a closed, logical system. I couldn't really pinpoint one source of inspiration. I have a long list on my Web site.
Futurist: As you know, there's a debate in the field right now as to whether an AGI will be built or evolve of its own accord. Where do you fall on that? Do you think it's possible to build an AGI?
Voss: I think there are two paths. One is that we just continue developing narrow AI, and the systems will become generally competent; it will become obvious how to do that. When that will happen, I don't know, or how it will come about, whether through sim bots or some DARPA challenge or something; it would be a combination of those kinds of things. But that goes fairly well out in terms of my time frame. The other approach is to try to specifically engineer a system that can learn and think. That's the approach that we're taking. Absolutely I think that's possible, and I think it's closer than most people think.
Voss: I would say less than five years.
Futurist: This brings us to the most important aspect of these trends in terms of our readers: how will the advent of this technology affect human civilization? Obviously, it's impossible to tell for certain because it's the future, but can you give me a best case and a worst case scenario for what you think the advent of AGI might mean for people?
Voss: I believe it will make us better people overall. It will improve our morality, health, and wealth, and dramatically our longevity as well, if we choose to live longer. We don't know that for sure, but I think that there's a very good chance that it will be a positive outcome. Clearly, not everyone's going to be happy with change. There are a lot of people now who don't like mobile phones, iPods, television, or whatever; if you feel threatened by it, that's a problem. Overall, I think it will just make us more competent: better human beings, better society. There are risks, clearly; we can't know that it won't end in tears. For one, it could be used incorrectly in some ways; there could be some kind of escalating warfare using AI. I would argue against that likelihood. I'm not saying it's impossible, but I think machines will inherently make us more rational, and through that rationality, more moral. I mean, if people had more foresight, if they could think things through more clearly, would they go to Iraq, for instance? You could maybe argue that they would have gone in with more force, or been more competent at it, but I would argue that people would better foresee the consequences of their actions. I think that much of what passes for immorality is actually irrationality, and AI can help us think better and make better decisions.
Futurist: Is it possible that by outsourcing more human activities to artificially intelligent entities we're paving the way for our own obsolescence? For instance, could AI render the written word as we know it a functionally obsolete technology, and through that, an entire generation of people less informed, less cultured, possibly with an entirely different and arguably inferior value system than people who were extremely literate? You could say that Plato and Socrates saw the same thing with the advent of writing; Socrates, of course, said that the advent of writing would mean that people didn't have to remember as much. But you could also say that the converse is true and that human knowledge in general ballooned with the advent of writing. Thinking about the potential consequences of these vast amounts of mechanical intelligence and how they might affect human intelligence, human ability, and human culture, do you think that AI might render something like literary culture obsolete?
Voss: Well, I think that it will radically transform it. It's hard to make a case that we would be better off without writing. Yes, maybe people don't speak as well anymore, and maybe speakers aren't valued as they were, but people can use new abilities in different ways, so other skills will become important. One of the things we haven't spoken about is how we will upgrade ourselves through AGI. If we suddenly develop photographic memory through our AI companion, with access to any fact that we want, any text that is written anywhere, anything that's accessible on any computer network, that will allow us to do a whole bunch of other things we can't do now. So yes, the current skills that we value will fall by the wayside as technology progresses, but I think it will allow us to do greater things.
Futurist: Tell me a little about yourself and your various companies.
Omohundro: My company is called Self-Aware Systems, and I started it a couple of years ago to help search engines search better. We built a system for reading lips and a system for controlling robots. The particular angle that I started working on is systems that write their own programs. Human programmers today are a disaster; Microsoft Windows, for instance, crashes all the time. Computers should be much better at that task, so we develop systems that I call self-improving artificial intelligence: AI systems that understand their own operation, watch themselves work, envision what changes to themselves might be improvements, and then change themselves.
Futurist: Sounds like Niles Barcellini; he made the point that computers should be much better at writing programs than humans could ever be, because it's their own language. So, thinking about artificial intelligence and where it may go in the next five to ten years, what do you envision?
Omohundro: Well, it's really difficult to use precise time-scales, because many of us feel there are a few key ideas that AI needs. AI has been a field that has over-promised and under-delivered for fifty years, so it's very dangerous to say, 'oh, by this date, such and such will happen.' There are still some key concepts underlying intelligence that we don't yet understand. But when you look at technological power (I don't know if you've seen Ray Kurzweil's book The Singularity Is Near), he's analyzed trends in computational power and also trends in our ability to model the human brain, and in the next few decades we will most likely have the power to simulate brains on inexpensive computer hardware. That gives a base point that says, hey, we're probably going to get something on the level of the intelligence of people in the next few decades. What the consequences of that are is the main point of this conference.
Futurist: There are two schools, one that says you can build a human-level AI and another that says that one will evolve naturally. Your work seems to suggest that it has to be B, because a computer can write its own code better than can a person, much like people can speak better than reptiles. Are you an advocate of B?
Omohundro: It's a complicated issue. The history of AI started with a dichotomy between the neats and the scruffies: the neats wanted a clear, mathematical description that used theorem proving, while the scruffies were in favor of throwing circuits together like artificial neurons. Bayesian nets are a good example of systems that have this ability to learn, but the approach is rational and conceptually oriented. That's the direction I'm going in; it's a merger of the two schools. The kinds of systems I build are very carefully thought out and have a powerful rational basis to them, but much of the knowledge and structure comes from learning: their experience of the world and their experience of their own operation.
Futurist: Do you think that they learn? How do you chart their learning?
Omohundro: There are a ton of systems today that use learning. The very best speech recognition systems are an example. You have a training phase: you speak to it using some known passages, and it adapts its internal model to the particular characteristics of your voice. Systems that don't do that, that have one model that fits all people, tend to be very rigid and don't work very well.
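The adaptation idea Omohundro describes can be sketched in a few lines. This is a toy illustration of my own, not code from any real recognizer: the "model" here is a single 1-D mean per sound, where real systems adapt whole Gaussian mixture models (e.g., via MAP adaptation), but the blending logic is the same in spirit.

```python
# Toy sketch of speaker adaptation: a generic model is nudged toward a
# particular speaker using a small amount of enrollment data.
# The `weight` parameter (a made-up name for this sketch) controls how
# strongly we trust the generic model: with little speaker data the
# result stays near the generic mean; with lots of data it moves
# toward the speaker's own average.

def map_adapt(generic_mean, speaker_samples, weight=10.0):
    """Blend a generic model mean with per-speaker sample data."""
    n = len(speaker_samples)
    if n == 0:
        return generic_mean
    speaker_mean = sum(speaker_samples) / n
    return (weight * generic_mean + n * speaker_mean) / (weight + n)

# A speaker whose vowel measurement sits higher than the population average:
generic = 1.0
few = map_adapt(generic, [2.0, 2.1])        # little data: stays near 1.0
many = map_adapt(generic, [2.0, 2.1] * 50)  # lots of data: moves toward ~2.05
assert abs(few - generic) < abs(many - generic)
```

The point of the rigid "one model fits all" failure Omohundro mentions is visible here: with `weight` effectively infinite, the model never moves off `generic_mean` no matter how the speaker actually sounds.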
Futurist: What do you think the growth in AI capability might mean for humans?
Omohundro: These changes are so momentous and so big, we have to focus full attention on what's going on. There are two components to what we need. One, we have to understand the logic of this technology: we have to know, if we build a certain type of structure, how it is likely to behave. Secondly, we need to introspect. We need to explore the depths of our human preferences and figure out what kind of a world we want. We have to create a vision that captures the things that make us most human, so the technology doesn't just go off in some direction that we're not happy with.
Futurist: There's a theory that we'll build safeguards into the systems as we develop them and, as a result of that, they will necessarily contain or exhibit our values. But we, as a species, have a lot of work to do to determine those values. What do you think is a best-case and worst-case scenario for a fully powerful AI? What might go horribly wrong; what might go fantastically right?
Omohundro: I think the worst case would be an AI that takes off on its own momentum, on some very narrow task, and works to basically convert the world economy and whatever matter it controls to focus on that very narrow task, so that, in the process, it squeezes out much of what we care most about as humans. Love, compassion, art, peace: the grand visions of humanity could be lost in that bad scenario. In the best scenario, many of the problems that we have today, like hunger, disease, and the fact that people have to work at jobs that aren't necessarily fulfilling, could all be taken care of by machines, ushering in a new age in which people could do what people do best, and the best of human values could flourish and be embodied in this technology.
Futurist: You say the advent of AI could allow us to push aside a lot of the tasks that we sometimes don't have the patience for, tasks that are too rigorous or too arduous. Do you think there might be something ennobling in doing some of those tasks, something that we might miss out on by virtue of not having to do those things? There are some activities, of course, that could truly be thrown to the wayside, but I'm not sure I'm qualified to know which is which, and I'm not sure I know who is. Might we lose something in the transition to--not the worst-case scenario--but the best?
Omohundro: I absolutely agree with you that that's an example of one of many, many moral dilemmas and decisions that we're going to have to make, for which it's not quite clear to us what is most important to humanity. I think we need an inquiry to establish some answers to questions like: is chopping wood something that strengthens you and brings you to nature, or is it a boring task that doesn't ennoble you? How do you make those distinctions, and who makes them?
Futurist: Is AI going to help us with that at all, or is that something we have to figure out on our own?
Omohundro: Well, AIs will certainly build good models of human behavior, and so, at the behavioral level, I think AIs will be very helpful in terms of raising children, for example. I think AIs will make very patient nannies; they'll understand what a child needs in order to grow in a certain direction. At that level they can certainly help us. At the core, though, there are fundamental questions about what it is we most care about, and those questions we don't want to delegate to AIs.
Futurist: Do you think there's a possibility that out of sheer laziness we might increasingly wind up doing so?
Omohundro: A tremendous danger. I've heard that the average American spends six hours a day watching television, which is the delegation of conversation and story generation: instead of being actively involved in your entertainment, you become a passive consumer of it, and I think that's a huge danger. It's a scenario where AI becomes the source of entertainment 24 hours a day and we lose some of the essence of what we currently most value in humans.
Futurist: What do you see as the big obstacle to reaching this level of AI in the future? You had mentioned 3D brain scanning resolution; what else?
Omohundro: I don't prefer the brain scan idea as a route to AI. I don't think we want to build machines that are copies of human brains. The advantage of that scenario is that we can see roughly what it takes to do it, so we can predict pretty accurately when that's going to be possible. The direction I'm actually pursuing could potentially produce much more powerful systems, based on theorem proving. They can mathematically prove that a program doesn't have bugs in it and isn't subject to certain kinds of security flaws. But theorem proving, for example, is very hard; no one has been able to do it. I believe I have some new ideas on how to do that; it's going to take some trying them out and running them. Back in the early sixties, people thought that something like machine vision would be a summer project for a master's student. Today's machine vision systems are certainly better than they were in the 60s, but no machine vision system today can reliably tell the difference between a dog and a cat, something small children have no problem doing. Because it's so easy for us, it's hard for us to even understand why that's a challenging task. Whereas things like chess, which people thought would be difficult (it's hard for people; people have to study to become a grand master), turned out to be pretty easy for machines. So it turns it on its head: what we thought was easy turned out to be hard, and what we thought was hard turned out to be easy.
Futurist: What's your time horizon for bringing a conversational AI to market?
Pell: There's two pieces: one is what we're doing at Powerset, and one is what is happening in the industry in terms of natural language. Powerset is building a new search engine based on natural language understanding. Search engines today are built on a concept of keywords. They don't really understand the documents that you search; they don't really understand the user's query. Instead, they treat the document as a bag of keywords, and they take your query as a bag of words, and they try to match keywords to keywords. The result is that the human user has to try to figure out what words would appear in the documents they want, words that would work with this kind of search engine. Some people are very good at that game, using very advanced syntax and features, and they get a better search experience, whereas the rest are missing something. That's not good enough, because search is now our way of getting all of our information from the Internet. Part of the reason some people are better than others is that some people are better at working with computers. But it's not like we don't know how to express our intent. We do it every day. Every one of us is very good at expressing intent. The problem is that computers are not yet able to work with us on our own level. We feel inferior, like we're missing out on something. The time is coming when people will be able to use their own natural built-in power to say what they want, just in English for example, and have computers rise to the level where they are able to work with the meaning and the expression and match that against the meaning of the documents and give you a whole different search experience. Not just matching meaning to meaning like keyword to keyword, but also presenting the results in a way that shows that the system understands your question, helps you focus on the parts that are interesting, and helps you follow up with more of a dialogue to clarify your intention and guide you to something that's good.
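The bag-of-keywords matching Pell criticizes can be sketched in a few lines. This is my own minimal illustration (not Powerset's or any real engine's code): both the document and the query are reduced to word multisets and scored by overlap, so word order and meaning are discarded entirely.

```python
# Minimal bag-of-keywords matcher: score a document by how many of the
# query's words it contains, ignoring order and meaning.
from collections import Counter

def keyword_score(query, document):
    """Count query keywords present in the document (multiset overlap)."""
    q = Counter(query.lower().split())
    d = Counter(document.lower().split())
    return sum(min(count, d[word]) for word, count in q.items())

docs = [
    "IBM will acquire the software company",
    "the software company will acquire IBM",
]
# Both documents contain the same bag of words, so for this query they
# score identically even though they state opposite facts, which is
# exactly the failure a meaning-based engine is meant to avoid:
query = "ibm acquire company"
scores = [keyword_score(query, doc) for doc in docs]
assert scores[0] == scores[1]
```

A user fluent in "keywordese" learns to compensate for this blindness by guessing which literal words will appear in the documents they want; Pell's point is that the engine, not the user, should bridge that gap.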
Futurist: Where do you see the entire industry going in the next five to ten years? I know Wikipedia and Google are working on similar software tracks right now.
Pell: I think this is an inevitable future. I think natural language is the inevitable destiny of search. It's going to become the center of the whole industry. I make the prediction that within ten years, people are going to expect routinely that all of the electronic devices that we interact with have some level of linguistic processing ability.
That doesn't mean that every single device will have very high language processing capability, or that the lowest end of every category of electronics will have very large language processing capability, but it does mean that our expectation for every category of things will be some level of language capability. Within search, it will be over the next five years that people will begin to expect to be able to use natural language as their query syntax and have that actually help, as opposed to right now, where it actually hurts you.
Futurist: On a personal note, what got you interested in AI?
Pell: Actually I've been interested in AI my whole life. I loved games from an early age. I always wondered how it is that we think and get better in these types of games. I realized that the reasoning ability of intelligence was central to being human. When I was an undergrad, I went to Stanford and I initially declared my major in symbolic systems, studying AI and cognitive science. I started working in natural language while I was an undergrad. My entire career afterward has been about AI research and then developing applications and commercialization aimed at changing the world. I'm interested in how building other systems that could be intelligent reflects on our mind, which I think is infinitely fascinating, and also the transformative power to create useful systems to transform people's lives in fundamental ways. AI has both of those aspects for me.
Futurist: Can you describe a moment where you were working on your current project and you realized the future was unfolding in front of your eyes?
Pell: When I first started Powerset, I went to evaluate a bunch of different companies based on the technology that I thought was going to be required. We found technology at PARC that looked like it would be a good basis for what we were doing. Before we licensed the technology, we worked very closely with PARC to create a prototype. At first, there were a lot of bugs. It was hard to get anything to work at all. It was all very strange. Then there was a change in January 2006, with the second version of the prototype. My goal previously was to have something work at all, getting the system to recognize any type of meaning. In this new version, the system was doing so many of those types of things so well that now I was complaining because the tense was wrong, or I asked 'who did IBM acquire,' and it told me who IBM might acquire. That you could even be that picky about a system meant that a new threshold had been crossed.
Futurist: You're saying that by working on this invention, it really forced you to focus on the words you used at an almost Proustian level, to learn language almost entirely again?
Pell: When you interact with a natural language based system, it actually draws out our expectations of what a language should be and makes you more aware. One fun thing about these systems is that they find new interpretations of words and phrases that humans gloss over because of our rich knowledge, getting an interpretation that we never heard of.
Futurist: Is there a particular example that comes to mind?
Pell: We were looking at the kinds of concepts the system had extracted from Wikipedia, sorting them by which concepts and relationships occurred most frequently. We found that there was a really high proportion of sexual acts happening in the Wikipedia data. This puzzled everyone. Obviously there's some sexual content in Wikipedia, but could it really be that much? We looked deeper at the sentences the system thought were examples of people having sex. They included things like 'John took his dog to the park.' It turns out that there's a sense of 'take' and 'have' and 'do' that's sort of biblical. This was before our system was doing proper word sense disambiguation, so all those different meanings were equally likely for the system. It was basically finding double entendres and innuendo in everything.
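The counting bug Pell describes can be modeled in miniature. This toy is my own construction (not Powerset's pipeline, and the sense inventory is invented for illustration): if a system has no word sense disambiguation and treats every sense of an ambiguous verb as equally likely, a sense that never actually occurs still accumulates fractional credit on every occurrence of the verb.

```python
# Hypothetical sense inventory for this sketch; real systems would use
# something like WordNet, and "take" has far more senses than three.
SENSES = {
    "take": ["transport", "grasp", "sexual"],
    "walk": ["move_on_foot"],
}

def naive_sense_counts(verb_occurrences):
    """Without disambiguation, spread each occurrence evenly over all senses."""
    counts = {}
    for verb in verb_occurrences:
        share = 1.0 / len(SENSES[verb])
        for sense in SENSES[verb]:
            counts[sense] = counts.get(sense, 0.0) + share
    return counts

# 300 ordinary uses of "take" ("John took his dog to the park", etc.):
corpus = ["take"] * 300 + ["walk"] * 100
counts = naive_sense_counts(corpus)
# The sexual sense is credited ~100 times despite appearing nowhere:
assert abs(counts["sexual"] - 100.0) < 1e-6
```

Once the frequency table is sorted, those phantom fractions dominate, which is how "a really high proportion of sexual acts" surfaced from innocuous sentences.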
Futurist: Give me a headline: ten years from now, AI does such and such, and it changes the way people think about AI on a very fundamental level.
Pell: Natural Language Queries Replace Keywordese. People are already tracking the length of the average query, and it's been steadily increasing from two words to three words, steadily approaching four words. There'll be a crossover point where the proportion of queries expressed in regular English will exceed the proportion using keywords. It's a concrete metric we can track. I'm going to call that in five years. Search is something people use every day. Once that point is reached, companies will start pouring more money into natural language technology, AI, conversational interfaces, and semantics. The pace will pick up, and it will take people by surprise.
Futurist: Do you envision something like a HAL 9000 for every home?
Pell: Absolutely. I think we will definitely get to the point where you will expect to engage your household systems in conversation. We're a long way from that. Over the next decade, we'll expect to be using voice rather than typing to interface with all our systems. Voice in, voice and data out. Voice will become a first-class citizen in terms of the way we interact with computers.
Futurist: If there was one key breakthrough in AI that would radically change the way people think about it, printed on the front page of The New York Times, what would the headline read?
Norvig: That's a hard one. It would have to be something that people care about. It could be winning a prize competition; there are various Turing-test prizes. It could be a product. I guess that's why they call it the Singularity, because you don't know what it's going to be.
Futurist: You do a lot of work with language, how has your experience as director of research at Google affected your appreciation of how technology changes the way people think, speak and write?
Norvig: I certainly believe language is critical to the way we think. Not necessarily in the morphian* way, but in the way we can form abstractions and think more carefully. The brain was meant primarily for doing visual processing; a large portion of the cortex is devoted to that. It wasn't meant for doing abstract reasoning. The fact that we can do abstract reasoning is an amazing trick; we're able to do it because of language. We invent concepts and give them names, and that lets us do more with the concept because we can move it around on paper. Language drives all our thinking. How is that changing as a result of search engine technology? We now have access to so much more. We now have an expectation that if you have a question, it's resolvable in an amount of time that's worth it. It used to be, you would have a question and you would have to consider, 'Gee, I have to go to the library; it's going to take me half an hour to get there, another half an hour to go through the stacks. Is answering this question worth it?' For most questions the answer is no. For the important things you still go through that process. Now it's 'it's going to take me ten minutes to do this search.' So there's a much lower barrier, and more things you're willing to find out. There's a change in what you have to memorize vs. what you know you can get on demand.
Futurist: There's a downside to this, embodied in Christine Rosen's term "ego-casting," which is the narrow pursuit of one's personal interests online. A lot of people say that search-engine technology and the Internet enable more people to do this, the fault of course being not with the technology but with the whims of the people who use that technology. How do you see future breakthroughs mitigating or ameliorating that, or is that even possible, because it's a flaw that's unique to humans?
Norvig: I do see a trend in that way. In many cultures, people have elected to have practices that tie things together. So, you have your traditions, stories, and myths that everybody learns. In some cultures you have a universal school curriculum, where everybody learns the same thing. In other cultures you have hit TV shows that everybody watches. So there's a shared sense of "I know this," and "I'm attracted to people who know the same thing." The Internet allows you to go broader. But I don't necessarily see that as isolating or egotistical, because you're still connecting to other people. By definition, if you saw it on the Internet, it's because someone else wrote it. It's just that you're connecting with a smaller group, not necessarily physically close by.
Futurist: Considering that 50% of high school seniors can't tell the difference between an objective Web site and a biased source, what avenue is there to preserve and empower critical thinking skills in the wake of this extremely convenient and efficient technology that does, in many ways, a lot of the thinking for you?
Norvig: I think that is important. I don't know the statistics, but I agree that that's a problem. Kids have good skills in finding answers but poor skills in telling the difference between The New York Times and The Onion. What can you do about that? I think part of it is education. We're used to teaching reading, writing, and arithmetic; now we should be teaching these evaluation skills in school, and so on. Some of it could be just-in-time education. Search engines themselves should be providing clues for this.
Futurist: What do you see as the key breakthrough that has to occur in order for AGI to be fulfilled in some recognizable way?
Norvig: I don't think we know enough. If we knew that answer, we would be working on that problem. I think there are lots of possibilities. In the meantime, we should let lots of different groups work on different things.
* As transcribed. Possibly "whorfian." Amended 2.8.08
The above interviews were conducted by FUTURIST senior editor Patrick Tucker and are featured as a special supplement to the March-April 2008 edition of THE FUTURIST.