Towards the Singularity


This piece was originally written a year ago for ACM proceedings but got lost in their review process, so rather than waste it, here it is before it passes its use-by date. A recent PowerPoint presentation highlighting the potential of the singularity, but setting that against the danger that we may instead be dragged into a dark age, is here:

http://futurizon.com/articles/singularitydarkage.pdf

Anyway, here is my article:

Towards the singularity

About 25 years ago, inspired by the invention of field-programmable gate arrays (FPGAs), many engineers recognised that in principle these could be used as the basis of an evolving machine, using a biomimetic approach. Starting with an array of FPGA-like devices and evolutionary algorithms, the hardware would clearly be able to evolve towards its physical limits. It wasn't long before the first simple evolving software, and then hardware, was achieved. The early 90s saw an explosion in evolutionary development, with evolutionary software as the prime focus because of the limited range of reconfigurable circuitry then available. While evolutionary computing got bogged down in biomimetic integrity and genetic algorithms, those of us engineers with futurist mindsets looked towards the far end of the development wedge. We saw that positive feedback across the wider science and technology R&D system would eventually cause development to race ahead of Moore's Law, as smarter machines enabled faster development and faster discovery in every field. What we now call the singularity is a simple extrapolation of that ongoing positive feedback in technology development.

We know that evolution works in nature, and we have already proved that we don't have to fully understand something to develop it: just point it in vaguely the right direction and let it evolve and find its own way. Whether via evolution or design, computers will eventually surpass human intelligence and amplify positive feedback still further, and that will lead to extremely rapid invention, with the familiar, almost vertical development curve. That is inevitable. Even without evolutionary computing, the singularity will still come, but more slowly, since it would be limited by human knowledge, squandering the potential contribution of machine assistance.
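To make the "point it in roughly the right direction" idea concrete, here is a minimal, purely illustrative sketch of an evolutionary algorithm in Python. Nothing in it comes from any particular FPGA or research system; the target, population size and mutation rate are arbitrary assumptions. The designer only specifies a fitness goal, and selection, crossover and mutation find a solution without the designer ever specifying how.

    import random

    # Toy genetic algorithm: evolve 32-bit genomes towards all 1s.
    # All parameters are illustrative assumptions, not taken from any real system.
    TARGET = [1] * 32                  # the "direction" we point evolution in
    POP, GENS, MUT = 50, 200, 0.02     # population size, generations, mutation rate

    def fitness(genome):
        # How many bits already match the target.
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome):
        # Flip each bit with a small probability.
        return [1 - g if random.random() < MUT else g for g in genome]

    def crossover(a, b):
        # Single-point crossover of two parent genomes.
        cut = random.randrange(len(a))
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
    for gen in range(GENS):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break                      # goal reached without anyone designing the answer
        parents = population[: POP // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP - len(parents))]
        population = parents + children

    print(f"best fitness {fitness(population[0])}/{len(TARGET)} after {gen + 1} generations")

Real evolvable-hardware work replaces the bit string with a device configuration and the fitness function with a measurement of the circuit's behaviour, but the loop is essentially the same.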

At first the singularity is appealing, inspiring visions of a potential technotopia, and the potential would be real if mankind were ready to deal with it. But problems are starting to show through, and realisation of them, along with the actions that follow, will slow it down.

Firstly, invention is only the first stage of development, and there are limits on how fast physical development can take place, even with all the self-replicating machines we may expect, however smart they get. So at best, the singularity will manifest itself as a rapidly growing gap between creativity and realisation. It will be as if advanced ETs had landed and given us a manual on how to build all their technology. We still wouldn't be able to have it all instantly, and would have to decide on a priority list.

This isn't just a theoretical problem. We already have a large creativity gap (i.e., the pile of spare inventions that have been thought up but haven't yet been developed), and that indicates that the impact of the singularity will be restricted. If you go to the R&D department of any large technology company, you will find a huge pool of ideas backed by a relatively small pot of funding. Most engineers will be familiar with the frustration of brainstorms where most of the ideas they scribble on post-its get thrown away. Ideas are two a penny even today, but only so many can be developed. If the singularity is to have any real economic significance, it needs to be about more than just quantity of ideas. Even an infinite creativity gap isn't valuable per se; it needs to be about quality and purpose too. By focusing on the near-vertical invention curve, perhaps we miss the point. If you are offered anything you want this afternoon, you still need to ask yourself what it is you actually want, and that introduces another hurdle to jump. Clearly, while humans control the allocation of resources and the permission to build things, we will hold development back to the limits of human imagination and culture. The singularity could theoretically arrive around 2025, but its practical implications will arrive much more slowly.

Secondly, the decisions on what to build depend on our economic culture. In a pure capitalist system, if a new technology allows cheap automation, fewer employees are needed and wealth moves towards capital owners. While new jobs are created sufficiently quickly, this is just a retraining issue and the economy as a whole can grow, but when automation exceeds the rate at which new jobs can be created, it becomes a problem. If too few people have enough money to buy the output, demand falls and the economy spirals downwards. Consequently, many people are already looking at new designs for capitalism to make it economically and socially sustainable (environmental sustainability is moving quickly towards third place). We don't have to wait for the singularity; signs of this downward spiral are already starting to appear.

In a world eager for the next pad, it is easy to be enthused about future technology if your future income is secure. As technology catches up with human intelligence and even people in well-paid professional jobs start to be replaced, it is also easy to imagine a backlash building, especially if new technologies are used to increase government control of our lives, as they often are. That backlash would build until politicians are forced to deal with it, one way or another. Capitalism cannot properly exploit the singularity in its current form and will have to be redesigned. But how? It will take time to decide.

Thirdly, the singularity presents many existential threats, and thereby provides another reason to force powerful restrictions on the scope and rate of development. These could and may well force very different development paths and delay it very significantly, perhaps by decades. The military will likely push for powerful new weapons, but a singularity-based arms race could tip the balance rapidly and greatly increase the temptation for first-strike action. Laser and plasma rifles already exist, at least in experimental form (http://en.wikipedia.org/wiki/Shiva_Star). Terawatt solar wind deflector ray-guns and zombie viruses are within the scope of 2025 singularity technology (http://futurizon.com/articles/madscientists.pdf). Many more can be listed. Starting with only six known ways that life on earth could be wiped out back in 2000 (including a nearby supernova, a major solar storm, an asteroid or comet strike, a GM accident, and global nuclear war), my own studies suggest that the number increases exponentially to over 100 by 2050. If each optimistically has a 1 in 10,000 chance of occurring in a single year by accident or deliberate action, the probability of extinction rises to 1% per annum and continues to grow exponentially. Do the sums and you end up with an ETA for extinction of 2085, hardly the technotopian future promised by the singularity up front. To avoid such a result, we will be forced to intervene. But how? At the very least we need more time.
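For readers who want to check the flavour of those sums, here is a rough sketch in Python. It assumes the risk count grows exponentially from 6 in 2000 to about 100 by 2050 (the figures quoted above) and that each risk independently has a 1-in-10,000 chance of striking in any given year; the year it prints depends entirely on those assumptions and on where you draw the threshold, so treat it as an illustration of the method rather than a derivation of the 2085 figure.

    import math

    # Assumptions: 6 extinction-level risks in 2000, growing exponentially to ~100
    # by 2050; each risk independently has a 1-in-10,000 chance per year.
    START_YEAR, START_RISKS, RISKS_2050 = 2000, 6, 100
    P_PER_RISK = 1.0 / 10_000
    GROWTH = math.log(RISKS_2050 / START_RISKS) / (2050 - START_YEAR)

    survival = 1.0
    for year in range(START_YEAR, 2101):
        n_risks = START_RISKS * math.exp(GROWTH * (year - START_YEAR))
        annual_risk = min(1.0, n_risks * P_PER_RISK)   # roughly 1% per year by 2050
        survival *= 1.0 - annual_risk
        if survival <= 0.5:
            print(f"Cumulative extinction probability passes 50% around {year}")
            break

With these particular assumptions the script reports a year in the early 2070s; nudge the growth curve or the per-risk probability and the date shifts by decades, which is rather the point.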

Fourthly, we are becoming more and more vulnerable. In a world containing many people who wish to harm us, our dependence on highly complex technology systems is already a significant known military risk, as well as a social and economic one. Asymmetry is the key word here. But it isn't just deliberate harm we need to worry about. Recently, solar storms brought our dependency problem into sharp focus. We no longer have the old systems as a backup, nor even people who know how they worked. As we engineer in ever more complexity and systemic interdependence, we surely build our prosperity on sand. A failure of any part of our critical systems, for any reason, could quickly lead to cascade failures, and to riots over the last bottles of water. Before we rush to grab hold of the singularity, we first need to get a hold of failsafe design and the practice of keeping a backup, not just for our computers but for our whole life-support system. I don't worry about complexity or whether I understand how the system works. I worry about how I and my family will manage when it fails. But complexity isn't the only vulnerability.

One of the well-known scenarios that results from all of this is the Terminator scenario, and I am not at all convinced that we have solved this problem yet. (For the uninitiated, the Terminator scenario is so called after the Terminator series of films, in which the US military develops a powerful satellite-based computer system called Skynet to control its missiles so that it can respond faster to a threat; the system achieves consciousness, decides that humans are actually the threat, and sets about wiping out humanity.) Machines already do most of the design work on the next generation of machines. Human engineers still make some of the key decisions and mostly tell the machines what to design, but the proportion of human input is falling. Particularly when we use evolutionary design, the human understanding of the technology that results can be very low indeed. Imagine a scenario where a few smart students plan a prank and use an off-the-net virus pack to infect millions of machines with an algorithm. The algorithm is very crude but attempts to achieve elements of consciousness or thinking, just for fun, to see what happens, to see how far they can get. Some of the students are in IT, some from biotech and nanotech, some from neuroscience, and a few others. The algorithms are crude but designed as well as they can manage, using all their latest knowledge of how the neural networks in the brain work. And so they spawn them, on a million machines, each with 1% of the raw processing power of the human brain. And they use evolution in that huge aggregated processing pot to experiment with variants of the algorithm. Over time, the system accumulates a toolbox of different algorithms and circuits that achieve, to some degree, a wide variety of neural functions: key components of mind, consciousness or awareness. By experimenting with automatically linking these together in many combinations, the students hope to achieve larger and larger degrees of AI. And they might as well harness that AI to refine the evolutionary algorithms too, and to make the virus better at infecting even more machines, adapting better, and hiding better. All automatically. Can we be sure that such a prank would always fail? Or could it work, and achieve consciousness in a distributed machine, just like Skynet from Terminator?

But if you go to singularity timeframes, there are even further dangers. Some people already belong to hobbyist genetic engineering groups or play with 3D printing, and some of those experiment with printing electronics too. Circuits can harvest energy from changes in the environment or from passing radio waves, and so won't necessarily need batteries. People will try to push the boundaries via those routes too, and 2025 is a good way off, so lots of progress will occur in all these fields by then. With feedback among all these bio-nano-info-cogno technologies, it is not hard to imagine how students or a terrorist group could make good progress even without proper funding, while staying anonymous and based anywhere. As hidden net-based programs become smarter and more autonomous, they could notionally get to the point where they interact with genetic assemblers and printers and design biological and electronic devices in a feedback loop. When thinking of a grey goo scenario, forget little micro-mechanical machines. Think bacteria, think GM assemblers, think AI-led environmental adaptation, and think of a distributed organism that is part in the machine world and part in the ecosystem. Much of that is achievable long before we get the singularity, and the rest very soon after. Transhumanists forget that transbacteria may not allow them to proceed. Smart bacteria may link together into super-smart organisms that think of humans merely as competition for resources. We could be building the engines of our own destruction, even while aiming for technotopia.

I am no doom-monger, and I always manage to convince myself that we will muddle through. Sure, we'll do it badly and get half of the benefit at twice the price and twice the mess. We already know the problems above. They are being addressed in organisations such as the Lifeboat Foundation, and there are frequent conferences and symposia along singularity lines. Government is even starting to react. Studies covering NBIC (nano, bio, info, cogno) convergence issues were initiated by the EU before 2000. The US and Canadian governments have both run conferences debating ways that mad scientists could use future technologies to cause great harm. So the problems won't come unexpectedly. Where do we end up?

The problems above are possibilities, and even likely, if we take the default path of ongoing unfettered development. Positive feedback would deliver on some of the promises, and some of the problems would appear along the way. In the real world, it won't happen like that. Social and political feedback loops, educated by many ongoing debates such as this symposium, will ensure that regulation is implemented that slows it down, restricting what can legally be done, what can be developed, what can be bought, and by whom. It has to. What we can also be sure of is that much of the regulation will be reactive and badly thought out. So it will be a mess, and we will barely muddle through, but muddle through we will. What we can hope for is that it might be a relatively safe mess and that the reward at the end is worth it. But let's start by acknowledging that what we call the singularity is only a theoretical concept, and it can't be achieved in its pure form. The real-world development path will surely be very different, constrained and forced down different paths by physical, cultural and economic limits, and forced to comply with a wide range of legal precautions.

About the Author

Dr. Ian Pearson is a leading futurist, keynote speaker and after dinner speaker. All over the world, he has delivered over 1000 provocative talks about the future of many aspects of our daily lives - from work to leisure, fashion to climate change. He has written several books and appeared over 450 times on TV and radio.

This post originally appeared on his blog.

Comments

Singularity

The real risk is nanotechnology, not AI: "The small size, portability, and rapid potential for proliferation will make nano-built weaponry difficult to control and hard to keep out of the hands of terrorists. With robots, it is something we can see, so if it malfunctions, you can unplug it and shut it down. If you have billions of nano-particles, there is no way you can do the same thing." (Prof. Dautenhahn)
I fully agree.

Towards the Singularity

I very much like your article in that it invites people to think seriously about the pros and cons of the forthcoming singularity. In summary, I would like to add the following:
1) Scientists are a special kind of creature. They exist only to create. However, not all of their inventions will materialize. Not all seeds germinate and bear fruit.
2) Because of resource allocation, cheap imports and anticipated financial returns, not all inventions become marketable innovations. Yes, technological innovations create structural unemployment. That means corporate and national (re)training programmes are of crucial importance.
3) Indeed, the singularity presents many opportunities and serious threats. A new eupraxsophy is needed for the coming decades to ensure the survival of Homo sapiens.
4) Information makes choice possible. In this respect, your article has already made a substantial contribution.

You forget...

...that the singularity theory accounts for these discrepancies. Computer science, manufacturing and all the other technologies may not be in line, but given that they have historically grown exponentially, any problems that exist won't just linger; they will be overcome. I agree it poses existential threats, and there is no way to know what will come with it, but we shouldn't fear the future. It's coming whether we like it or not, and restricting development is a hopeless cause. It just gives more power to the government, which will try to retain values that the singularity will simply make outdated.