March 2008
Peter Voss is the founder of Adaptive A.I. Inc.
Futurist: What are some books or texts that influenced your thinking?
Voss: That's kind of hard to pinpoint because there's really a very broad spectrum of things that I read. I studied lots of philosophies, but one philosophy I got very involved with was objectivism. There are a lot of things I disagree with objectivism on, but there were some really important insights in terms of how important concept formation and context are, and really a theory of knowledge that I got out of it: how knowledge needs to be acquired, integrated and represented, and how it isn't just a closed, logical system. I couldn't really pinpoint one source of inspiration. I have a long list on my Web site.
Futurist: As you know, there's a debate in the field right now as to whether an AI will be built or evolve of its own accord. Where do you fall on that? Do you think it's possible to build an AI?
Voss: I think there are two paths. One is that we just continue developing narrow AI and the systems will become generally competent; it will become obvious how to do that. When that will happen, I don't know, or how it will come about, whether through sim bots or some DARPA challenge or something; it would be a combination of those kinds of things. But that path stretches fairly far out in terms of my time frame. The other approach is to try to specifically engineer a system that can learn and think. That's the approach that we're taking. Absolutely I think that's possible, and I think it's closer than most people think.
Futurist: When?
Voss: I would say less than five years.
Futurist: This brings us to the most important aspect of these trends in terms of our readers. How will the advent of this technology affect human civilization? Obviously, it's impossible to tell for certain because it's the future, but can you give me a best case and a worst case scenario for what you think the advent of AGI might mean for people?
Voss: I believe it will make us better people overall. It will improve our morality, health and wealth, and dramatically improve our longevity as well, if we choose to live longer. We don't know that for sure, but I think there's a very good chance that it will be a positive outcome. Clearly, not everyone's going to be happy with change. There are a lot of people now who don't like mobile phones, iPods, television or whatever. But if you see it as a threat, if you feel threatened by it, that's a problem. Overall, I think it will just make us more competent: better human beings, better society. There are risks, clearly; we can't know that it won't end in tears. For one, it could be used incorrectly in some ways; there could be some kind of escalating warfare using AI. I would argue against that likelihood. I'm not saying it's impossible, but I think machines will inherently make us more rational, and through that rationality, more moral. I mean, if people had more foresight, if they could think things through more clearly, would they go to Iraq, for instance? You could maybe argue that they would have gone in with more force, or been more competent at it, but I would argue that people would better foresee the consequences of their actions. I think that much of what passes for immorality is actually irrationality, and AI can help us think better and make better decisions.
Futurist: Is it possible that by outsourcing more human activities to artificially intelligent entities we're paving the way for our own obsolescence? For instance, could AI render the written word as we know it a functionally obsolete technology, and through that leave an entire generation of people less informed, less cultured, possibly with an entirely different and arguably inferior value system compared with people who were extremely literate? You could say that Plato and Socrates saw the same thing with the advent of writing; Socrates, of course, said that the advent of writing would mean that people didn't have to remember as much. But you could also say that the converse is true and that human knowledge in general ballooned with the advent of writing. Thinking about the potential consequences of these vast amounts of mechanical intelligence and how they might affect human intelligence, human ability, and human culture, do you think that AI might render something like literary culture obsolete?
Voss: Well, I think that it will radically transform it. It's hard to make a case that we would be better off without writing. Yes, maybe people don't speak as well anymore, and maybe speakers aren't valued as they were, but people can use new abilities in different ways, so other skills will become important. One of the things we haven't spoken about is how we will upgrade ourselves through AGI. If we suddenly develop photographic memory through our AI companion, and have access to any fact that we want, any text that is written anywhere, anything that's accessible on any computer network, that will allow us to do a whole bunch of other things we can't do now. So yes, the current skills that we value will fall by the wayside as technology progresses, but I think it will allow us to do greater things.
This interview was conducted by Patrick Tucker.