This piece was originally written a year ago for ACM proceedings but got lost in their review process, so rather than waste it, here it is before it passes its use-by date. A recent PowerPoint presentation, highlighting the potential of the singularity but setting that against the danger that we may instead be dragged into a dark age, is here.
Anyway, here is my article:
Towards the singularity
About 25 years ago, inspired by the invention of field programmable gate arrays, many engineers recognised that these could in principle be used as the basis of an evolving machine, using a biomimetic approach. Starting with an array of FPGA-like machines and evolutionary algorithms, the hardware would clearly be able to evolve to its physical limits. It wasn’t long before the first simple evolving software, and then hardware, was achieved. The early 90s saw an explosion in evolutionary development, with evolutionary software as the prime focus due to the limited range of reconfigurable circuitry then available. While evolutionary computing got bogged down in biomimetic integrity and genetic algorithms, those of us engineers with futurist mindsets looked towards the far end of the development wedge. We saw that positive feedback across the wider science and technology R&D system would eventually cause development to race ahead of Moore’s Law, as smarter machines enabled faster development and faster discovery in every field. What we now call the singularity is a simple extrapolation of ongoing positive feedback in technology development.
We know that evolution works in nature, and we have already proved that we don’t have to fully understand something to develop it: just point it in vaguely the right direction and let it evolve and find its own way. Whether via evolution or design, computers will eventually surpass human intelligence, amplify positive feedback still further, and that will lead to extremely rapid invention, with the familiar near-vertical development curve. That is inevitable. Even without evolutionary computing, the singularity will still come, but more slowly, since it would be limited by human knowledge, squandering the potential contribution of machine assistance.
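The "point it in vaguely the right direction and let it evolve" idea can be illustrated with a toy genetic algorithm. This is a hypothetical sketch, not the evolvable-hardware work described above: it evolves 32-bit strings towards an arbitrary target pattern using nothing but a fitness score, never "understanding" the problem it is solving.

```python
import random

random.seed(1)  # deterministic run for illustration

# Arbitrary goal the algorithm must discover purely via fitness feedback.
TARGET = [1, 0] * 16

def fitness(bits):
    """Score a candidate by how many bits match the target."""
    return sum(b == t for b, t in zip(bits, TARGET))

def mutate(bits, rate=0.05):
    """Flip each bit with a small probability."""
    return [b ^ (random.random() < rate) for b in bits]

# Start from pure noise: 50 random 32-bit candidates.
population = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == 32:
        break  # perfect match evolved
    parents = population[:10]  # keep the fittest (elitism)
    population = parents + [mutate(random.choice(parents))
                            for _ in range(40)]  # mutated offspring

best = max(population, key=fitness)
```

No part of the loop knows what the target "means"; selection pressure alone drags the population towards it, which is the essence of the evolve-rather-than-design approach.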
The singularity is initially appealing, inspiring visions of a potential technotopia, and that potential would be real if mankind were ready to deal with it. But problems are already starting to show through, and recognition of them, and the actions that follow, will slow it down.
Firstly, invention is only the first stage of development, and there are limits on how fast physical development can take place, even with all the self-replicating machines we may expect, however smart they get. So at best the singularity will manifest itself as a rapidly growing gap between creativity and realisation. It will be as if advanced ETs had landed and given us a manual on how to build all their technology. We still wouldn’t be able to have it all instantly; we would have to decide on a priority list.
This isn’t just a theoretical problem. We already have a large creativity gap (i.e., the pile of spare inventions that have been thought up but haven’t yet been developed), and that indicates that the impact of the singularity will be restricted. If you go to the R&D department of any large technology company, you will find a huge pool of ideas backed by a relatively small pot of funding. Most engineers will be familiar with the frustration of brainstorms where most of the ideas they scribble on post-its get thrown away. Ideas are two a penny even today, but only so many can be developed. If the singularity is to have any real economic significance, it needs to be about more than just quantity of ideas. Even an infinite creativity gap isn’t valuable per se; it needs to be about quality and purpose too. By focusing on the near-vertical invention curve, perhaps we miss the point. If you are offered anything you want this afternoon, you still need to ask yourself what it is you want, and that introduces another hurdle to jump over. Clearly, while humans control the allocation of resources and permission to build things, we will hold development back to the limits of human imagination and culture. The singularity could theoretically arrive around 2025, but its practical implications will arrive much more slowly.
Secondly, the decisions on what to build depend on our economic culture. In a pure capitalist system, if a new technology allows cheap automation, fewer employees are needed, and wealth moves towards capital owners. While new jobs are created sufficiently quickly, this is just a retraining issue and the economy as a whole can grow, but when automation exceeds the rate at which new jobs can be created, it becomes a problem. If too few people have enough money to buy the output, demand falls and the economy spirals downwards. Consequently, many people are already looking at new designs for capitalism to make it economically and socially sustainable (environmental sustainability is moving quickly towards third place). We don’t have to wait for the singularity; again, signs of this downward spiral are already starting to appear.
In a world eager for the next pad, it is easy to be enthused about future technology if your future income is secure. As technology catches up with human intelligence and even people in well-paid professional jobs start to be replaced, it is also easy to imagine a backlash building, especially if new technologies are used to increase government control of our lives, as they often are. That backlash would build until politicians are forced to deal with it, one way or another. Capitalism can’t properly exploit the singularity in its current form, and will have to be redesigned. But how? It will take time to decide.
Thirdly, the singularity presents many existential threats, and thereby provides another reason to impose powerful restrictions on the scope and rate of development. These could, and may well, force very different development paths and delay it very significantly, perhaps by decades. The military will likely push for powerful new weapons, but a singularity-based arms race could tip the balance rapidly and greatly increase the temptation for first-strike action. Laser and plasma rifles already exist, at least in experimental form (http://en.wikipedia.org/wiki/Shiva_Star). Terawatt solar wind deflector ray-guns and zombie viruses are within the scope of the 2025 singularity technology (http://futurizon.com/articles/madscientists.pdf). Many more can be listed. Starting with only six known ways that life on earth could be wiped out back in 2000 (nearby supernova, major solar storm, asteroid or comet strike, GM accident, or global nuclear war), my own studies suggest that the number increases exponentially to over 100 by 2050. If each optimistically has a 1 in 10,000 chance of occurring in a single year, by accident or deliberate action, the probability of extinction rises to 1% per annum by 2050 and continues to grow exponentially. Do the sums and you end up with an ETA for extinction of 2085, hardly the technotopian future promised by the singularity up front. To avoid such a result, we will be forced to intervene. But how? At the very least we need more time.
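The "do the sums" step can be sketched numerically. This is a rough illustration under the stated assumptions only: six risks in 2000 growing exponentially to 100 by 2050, each with a 1-in-10,000 annual chance of occurring. The exact ETA depends heavily on the assumed growth rate, so a sketch like this lands in the same broad region as the figure above rather than on a precise year.

```python
import math

# Assumptions from the text: 6 known extinction risks in 2000, growing
# exponentially to 100 by 2050; each risk has a 1-in-10,000 chance of
# occurring in any given year.
RISKS_2000, RISKS_2050 = 6, 100
PER_RISK_ANNUAL_P = 1 / 10_000
GROWTH = math.log(RISKS_2050 / RISKS_2000) / 50  # exponential growth rate

def risks(year):
    """Number of known extinction risks in a given year."""
    return RISKS_2000 * math.exp(GROWTH * (year - 2000))

def annual_extinction_p(year):
    return risks(year) * PER_RISK_ANNUAL_P  # ~1% per annum by 2050

# Multiply up annual survival probabilities until the cumulative
# probability of extinction passes 50% -- a rough "ETA".
survival = 1.0
year = 2000
while survival > 0.5:
    survival *= 1 - annual_extinction_p(year)
    year += 1
print(year)  # median extinction year under these assumptions
```

The point is not the precise year but the shape of the curve: with the number of risks compounding, the cumulative odds cross even-money within this century.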
Fourthly, we are becoming more and more vulnerable. In a world containing many people who wish to harm us, our dependence on highly complex technology systems is already a significant known military risk, as well as a social and economic one. Asymmetry is the key word here. But it isn’t just deliberate harm we need to worry about. Recently, solar storms brought our dependency problem into sharp focus. We no longer have the old systems as a backup, nor even people who know how they worked. As we engineer in ever more complexity and systemic interdependence, we surely build our prosperity on sand. A failure of any part of our critical systems, for any reason, could quickly lead to cascade failures, and riots for the last bottles of water. Before we rush to grab hold of the singularity, we first need to get a hold of failsafe design and the practice of keeping a backup, not just for our computers but for our whole life-support system. I don’t worry about complexity or whether I understand how the system works. I worry about how I and my family will manage when it fails. But complexity isn’t the only vulnerability.
One of the well-known scenarios that results from all of this is the Terminator scenario, and I am not at all convinced that we have solved this problem yet. (For the uninitiated, the Terminator scenario is named after the Terminator series of films, in which the US military develops a powerful satellite-based computer system called Skynet to control its missiles so that it can respond faster to a threat; the computer system achieves consciousness, decides that humans are actually the threat, and sets about wiping out humanity.) Machines already do most of the design work on the next generation of machines. Human engineers make some of the key decisions and mostly tell the machines what to design, but the proportion of human input is falling. Particularly when we use evolutionary design, human understanding of the technology that results can be very low indeed. Imagine a scenario where a few smart students plan a prank, and use an off-the-net virus pack to infect millions of machines with an algorithm. The algorithm is very crude but attempts to achieve elements of consciousness or thinking, just for fun, to see what happens, to see how far they can get. Some of the students are in IT, some from biotech and nanotech, some from neuroscience, and a few others. The algorithms are crude but designed as well as the students can manage, using all their latest knowledge of how the neural networks in the brain work. And so they spawn them, on a million machines, each with 1% of the raw processing power of the human brain. And they use evolution in that huge aggregated processing pot to experiment with variants of the algorithm. Over time, the system accumulates a toolbox of different algorithms and circuits that achieve a wide variety of neural functions to some degree, building key components of mind, consciousness or awareness.
By experimenting with automatically linking these together in many combinations, the students hope to achieve larger and larger degrees of AI. And they might as well harness that AI to refine the evolutionary algorithms too, and make the virus better at infecting even more machines and adapting better, and hiding better. All automatically. Can we be sure that such a prank would always fail? Or could it work, and achieve consciousness in a distributed machine, just like the Skynet from Terminator?
But if you look at singularity timeframes, there are even further dangers. Some people already belong to hobbyist genetic engineering groups or play with 3D printing, and some of those experiment with printing electronics too. Circuits can harvest energy from changes in the environment or from passing radio waves, so they won’t necessarily need batteries. People will try to push the boundaries via those routes too, and 2025 is a good way off, so a lot of progress will occur in all these fields by then. With feedback among all these bio-nano-info-cogno technologies, it is not hard to imagine how students or a terrorist group could make good progress even without proper funding, even while staying anonymous, based anywhere. As hidden net-based programs become smarter and more autonomous, they could notionally get to the point where they interact with genetic assemblers and printers, designing biological and electronic devices in a feedback loop. When thinking of a grey goo scenario, forget little micro-mechanical machines. Think bacteria, think GM assemblers, think AI-led environmental adaptation, and think of a distributed organism that is part in the machine world and part in the ecosystem. Much of that is achievable long before we get the singularity, and the rest very soon after. Transhumanists forget that transbacteria may not allow them to proceed. Smart bacteria may link together into super-smart organisms that think of humans merely as competition for resources. We could be building the engines of our own destruction, even while aiming for technotopia.
I am no doom-monger, and I always manage to convince myself that we will muddle through. Sure, we’ll do it badly and get half of the benefit at twice the price and twice the mess. We already know about the problems above. They are being addressed in organisations such as the Lifeboat Foundation, and there are often conferences and symposia along singularity lines. Government is even starting to react. Studies covering NBIC (nano, bio, info, cogno) convergence issues were initiated by the EU before 2000. The US and Canadian governments have both run conferences debating ways that mad scientists could use future technologies to cause great harm. So the problems won’t come unexpectedly. Where do we end up?
The problems above are possibilities and even likely if we take the default path of ongoing unfettered development. Positive feedback would deliver on some of the promises, and some of the problems would appear along the way. In the real world, it won’t happen like that. Social and political feedback loops, educated by many ongoing debates such as this symposium, will ensure that regulation is implemented that slows it down, restricting what can legally be done, what can be developed, what can be bought, and by whom. It has to. What we can also be sure of is that much of the regulation will be reactive and badly thought out. So it will be a mess, we will barely muddle through, but muddle through we will. What we can hope for is that it might be a relatively safe mess and the reward at the end is worth it. But let’s start by acknowledging that what we call the singularity is only a theoretical concept, and it can’t be achieved in its pure form. The real world development path will surely be very different, constrained and forced down different paths by physical, cultural and economic limits and forced to comply with a wide range of legal precautions.
About the Author
Dr. Ian Pearson is a leading futurist, keynote speaker and after-dinner speaker. All over the world, he has delivered over 1000 provocative talks about the future of many aspects of our daily lives - from work to leisure, fashion to climate change. He has written several books and appeared over 450 times on TV and radio.
This post originally appeared on his blog.
Essays and comments posted in World Future Society and THE FUTURIST magazine blog portion of this site are the intellectual property of the authors, who retain full responsibility for and rights to their content. For permission to publish, distribute copies, use excerpts, etc., please contact the author. The opinions expressed are those of the author. The World Future Society takes no stand on what the future will or should be like.