The Futurist Interviews Steve Omohundro, Center for Complex Systems Research, Self-Aware Systems
Steve Omohundro
March 2008
Steve Omohundro is a co-founder of the Center for Complex Systems Research and the founder of Self-Aware Systems.
Futurist: Tell me a little about yourself and your various companies.
Omohundro: My company is called Self-Aware Systems, and I started it a couple of years ago to help search engines search better. We built a system for reading lips and a system for controlling robots. The particular angle I started working on is systems that write their own programs. Human programmers today are a disaster; Microsoft Windows, for instance, crashes all the time. Computers should be much better at that task, so we develop what I call self-improving artificial intelligence: AI systems that understand their own operation, watch themselves work, envision what changes to themselves might be improvements, and then change themselves.
Futurist: Sounds like Niles Barcellini; he made the point that computers should be much better at writing programs than humans could ever be, because it's their own language. So, thinking about artificial intelligence and where it may go in the next five to ten years, what do you envision?
Omohundro: Well, it's really difficult to use precise time-scales, because many of us feel there are a few key ideas that AI still needs. AI has been a field that has over-promised and under-delivered for fifty years, so it's very dangerous to say, "Oh, by this date, such and such will happen." There are still some key concepts underlying intelligence that we don't yet understand. But look at technological power. I don't know if you've seen Ray Kurzweil's book The Singularity Is Near; he has analyzed trends in computational power and in our ability to model the human brain, and in the next few decades we will most likely have the power to simulate brains on inexpensive computer hardware. That gives a baseline that says, hey, we're probably going to get something at the level of human intelligence in the next few decades. What the consequences of that are is the main point of this conference.
Futurist: There are two schools: one that says you can build a human-level AI, and another that says that one will evolve naturally. Your work seems to suggest that it has to be the second, because a computer can write its own code better than a person can, much like people can speak better than reptiles can. Are you an advocate of that second view?
Omohundro: It's a complicated issue. The history of AI started with a dichotomy between the neats and the scruffies: the neats wanted a clear, mathematical description that used theorem proving, while the scruffies were in favor of throwing circuits together, like artificial neurons. Bayesian nets are a good example of systems that have this ability to learn, but the approach is rational and conceptually oriented. That's the direction I'm going in; it's a merger of the two schools. The kinds of systems I build are very carefully thought out and have a powerful rational basis to them, but much of the knowledge and structure comes from learning, from their experience of the world and of their own operation.
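As a rough illustration of the rational-but-learning approach described here, the sketch below is not Omohundro's system; the network, data, and numbers are invented for illustration. It fits a tiny two-node Bayesian network (Rain -> WetGrass) from observations and then answers a query with Bayes' rule, which is the sense in which such systems learn from experience while remaining rational and conceptually structured.

```python
# Minimal sketch: learn the parameters of a two-node Bayesian network
# from (rain, wet_grass) observations, then query it with Bayes' rule.
from collections import Counter

observations = [  # hypothetical (rain, wet_grass) observations
    (True, True), (True, True), (False, True),
    (False, False), (False, False), (True, False),
    (False, False), (True, True),
]

# Learn parameters by counting, with add-one (Laplace) smoothing.
rain_counts = Counter(r for r, _ in observations)
p_rain = (rain_counts[True] + 1) / (len(observations) + 2)

wet_given = {True: Counter(), False: Counter()}
for r, w in observations:
    wet_given[r][w] += 1
p_wet_given_rain = {
    r: (wet_given[r][True] + 1) / (sum(wet_given[r].values()) + 2)
    for r in (True, False)
}

# Rational inference: P(Rain | WetGrass = True) via Bayes' rule.
num = p_wet_given_rain[True] * p_rain
den = num + p_wet_given_rain[False] * (1 - p_rain)
print(f"P(rain | wet grass) = {num / den:.2f}")
```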
Futurist: Do you think that they learn? How do you chart their learning?
Omohundro: There are a ton of systems today that use learning. The very best speech recognition systems are an example. You have a training phase: you speak to it, reading some known passages, and it adapts its internal model to the particular characteristics of your voice. Systems that don't do that, that have one model that fits all people, tend to be very rigid and don't work very well.
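The "training phase" idea can be sketched in a few lines. This is not any real recognizer's code; the function, features, and numbers are made up for illustration. A speaker-independent model is blended toward the user's enrollment data, and the more speaker data there is, the more the adapted model trusts it.

```python
# Toy sketch of adapting a generic acoustic model to one speaker's voice.
import numpy as np

def adapt(generic_mean, speaker_samples, prior_weight=10.0):
    """MAP-style adaptation: blend the generic model with speaker data."""
    n = len(speaker_samples)
    speaker_mean = np.mean(speaker_samples, axis=0)
    # With little data, stay close to the generic model; with lots, trust the speaker.
    return (prior_weight * generic_mean + n * speaker_mean) / (prior_weight + n)

generic = np.array([1.0, 0.0, -0.5])           # speaker-independent features
enrollment = np.random.default_rng(0).normal(  # "known passages" read by the user
    loc=[1.4, 0.2, -0.1], scale=0.1, size=(30, 3))
print("adapted model:", adapt(generic, enrollment))
```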
Futurist: What do you think the growth in AI capability might mean for humans?
Omohundro: These changes are so momentous and so big that we have to focus full attention on what's going on. There are two components to what we need. First, we have to understand the logic of this technology: we have to know, if we build a certain type of structure, how it is likely to behave. Second, we need to introspect. We need to explore the depths of our human preferences and figure out what kind of a world we want. We have to create a vision that captures the things that make us most human, so the technology doesn't just go off in some direction that we're not happy with.
Futurist: There's a theory that we'll build safeguards into the systems as we develop them and, as a result, they will necessarily contain or exhibit our values. But we, as a species, have a lot of work to do to determine those values. What do you think are the best-case and worst-case scenarios for a fully powerful AI? What might go horribly wrong, and what might go fantastically right?
Omohundro: I think the worst case would be an AI that takes off on its own momentum, on some very narrow task, and works to convert the world economy and whatever matter it controls to that very narrow task, so that, in the process, it squeezes out much of what we care most about as humans. Love, compassion, art, peace, the grand visions of humanity could be lost in that bad scenario. In the best scenario, many of the problems that we have today, like hunger, disease, and the fact that people have to work at jobs that aren't necessarily fulfilling, could be taken care of by machines, ushering in a new age in which people could do what people do best, and the best of human values could flourish and be embodied in this technology.
Futurist: You say the advent of AI could allow us to push aside a lot of the tasks we sometimes don't have the patience for, tasks that are too rigorous or too arduous. Do you think there might be something ennobling in doing some of those tasks, something we might miss out on by virtue of not having to do them? There are some activities, of course, that could truly be cast aside, but I'm not sure I'm qualified to know which is which, and I'm not sure I know who is. Might we lose something in the transition, not to the worst-case scenario, but to the best?
Omohundro: I absolutely agree with you that that's an example of one of many, many moral dilemmas and decisions we're going to have to make, for which it's not quite clear to us what is most important to humanity. I think we need an inquiry to establish some answers to questions like: is chopping wood something that strengthens you and brings you closer to nature, or is it a boring task that doesn't ennoble you? How do you make those distinctions, and who makes them?
Futurist: Is AI going to help us with that at all, or is that something we have to figure out on our own?
Omohundro: Well, AIs will certainly build good models of human behavior, and so, at the behavioral level, I think AIs will be very helpful in, for example, raising children. I think AIs will make very patient nannies; I think they'll understand what a child needs in order to grow in a certain direction. At that level they can certainly help us. At the core, though, there are fundamental questions about what it is we most care about, and I think those are questions we don't want to delegate to AIs.
Futurist: Do you think there's a possibility that out of sheer laziness we might increasingly wind up doing so?
Omohundro: A tremendous danger. I've heard that the average American spends six hours a day watching television, which is the delegation of conversation and story generation: instead of being actively involved in your entertainment, you become a passive consumer of it, and I think that's a huge danger. It's a scenario where AI becomes the source of entertainment 24 hours a day and we lose some of the essence of what we currently most value in humans.
Futurist: What do you see as the big obstacle to reaching this level of AI in the future? You had mentioned 3-D brain-scanning resolution; what else?
Omohundro: I don't prefer the brain-scan idea as a route to AI; I don't think we want to build machines that are copies of human brains. The advantage of that scenario is that we can see roughly what it takes to do it, so we can predict pretty accurately when it's going to be possible. The direction I'm actually pursuing could potentially produce much more powerful systems, based on theorem proving, so they can mathematically prove that a program doesn't have bugs in it and isn't subject to certain kinds of security flaws. But theorem proving is very hard; no one has been able to do it. I believe I have some new ideas on how to do that, and it's going to take trying them out and running them. Back in the early sixties, people thought that something like machine vision would be a summer project for a master's student. Today's machine vision systems are certainly better than they were in the sixties, but no machine vision system today can reliably tell the difference between a dog and a cat, something that small children have no problem doing. Because it's so easy for us, it's hard for us to even understand why that's a challenging task. Whereas things like chess, which people thought would be difficult (it's hard for people; they have to study to become a grandmaster), turned out to be pretty easy for machines. So it turns things on their head: what we thought was easy turned out to be hard, and what we thought was hard turned out to be easy.
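To make the theorem-proving idea concrete, here is a minimal sketch using the Z3 SMT solver, a tool chosen purely for illustration and not one named in the interview. We state a property of a tiny program and ask the solver to certify that it holds for every possible input, which is the sense in which a program can be proved free of a class of bugs rather than merely tested.

```python
# Illustrative only: prove a property of a tiny "program" for all inputs
# using the Z3 SMT solver, rather than checking a handful of test cases.
from z3 import Int, If, prove

x = Int("x")
abs_x = If(x >= 0, x, -x)   # the "program": integer absolute value

# Ask the solver to prove the property holds for every integer x.
prove(abs_x >= 0)           # prints "proved"
```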
This interview was conducted by Patrick Tucker