September-October 2011, Vol. 45, No. 5

  • The Coming Robot Evolution Race
  • Thank You Very Much, Mr. Roboto
  • The Accelerating Techno-Human Future
  • Exploring New Energy Alternatives
  • Five Principles of Futuring as Applied History

The Coming Robot Evolution Race

By Steven M. Shaker

Homo sapiens may have “won” the evolutionary race to perfect humankind, but artificial intelligence and robotics will evolve faster and farther. Rather than compete with them, we may do well to make them our allies and co-evolve, suggests a technology trend analyst.

Some people believe that humanity’s evolutionary advance into the future is driven by how our gene pool responds and adapts to climate change and to cultural and societal dynamics. These external factors shaped how we evolved in the past and became human. Extending that same evolutionary view forward by a few hundred million years, we arrive at a comic vision of our collective future: We’ll have become creatures with huge foreheads to house our expanded cranial capacity and small bodies atrophied by the absence of any manual labor.

Most futurists, however, realize that we now have the means to shape and influence our own evolution and to cause substantial change within periods spanning only hundreds or thousands of years. The interplay between our ability to map and manipulate our own DNA and our ability to integrate mechanical mechanisms into our own physiology is driving this evolutionary adaptation. We will adapt our DNA to more readily accept enhancements from nanotechnology and other bionic devices, and we’ll engineer those devices to sync up with our DNA modifications. As a result, humanity’s evolutionary momentum will spiral faster and faster. Fashion, self-image, and social bonding will influence the “look and feel” as much as utility will. So hopefully, humans won’t resemble the Borg of Star Trek, except for those of us who make an aesthetic choice to do so.

Writers such as Joel Garreau, author of Radical Evolution (Doubleday, 2005), have suggested that accelerating technology could lead to an evolutionary bifurcation between the haves and the have-nots. Economic, religious, philosophical, and cultural views may prevent some geographic or demographic groups from participating in actions that advance their own evolution.

The masses of humanity may not be able to afford such enhancements for themselves or their offspring. Those who can obtain genetic and artificial organ replacements may live longer and healthier lives, and thus will be more likely to survive and reproduce. It is possible that, over time (and in far shorter periods than natural evolution would allow), humans who augment and alter their genetic code will come to differ genetically from those who do not. The variance may grow great enough to prevent interbreeding, which would amount to the creation of a separate new species.

Now, a new competitor is also emerging on the scene. This one is all artificial, with no flesh or DNA. The arrival and evolution of humanoid robots competing against cyborgs and those humans who have resisted change may be reminiscent of the competition between Homo sapiens, Neanderthals, Homo erectus, and the “hobbit” people of the Indonesian island of Flores.

Competition in Robotic Evolution

Homo sapiens chauvinists like to think that we were the fittest for survival and simply outcompeted the other hominids. We did have some fine competitive traits, but our success also owed something to luck.

There were two points when Homo sapiens almost went extinct. Between 195,000 and 123,000 years ago, Earth was in the middle of a glacial phase, and the Homo sapiens population is estimated to have fallen from about 10,000 inhabitants to as few as 600 people. Approximately 70,000 years ago, drought may have shrunk the human population to just 2,000 individuals. That bottleneck, however, was soon followed by the “flight out of Africa,” which led to a rapid expansion of humankind in both geography and numbers. What an exciting and competitive ancient world Homo sapiens inhabited! Machine evolution will be both more exciting and far more rapid.

Certainly, machinery endowed with artificial intelligence does not have to be robotic; it may, like HAL in 2001: A Space Odyssey, reside within a computer’s memory core or be part of a networked set of computers. Robots do not need to be humanoid like the Asimo robot developed by Honda. They can be wheeled or tracked unmanned vehicles like Stanley, the self-driving car that completed the 2005 DARPA Grand Challenge race. They could have multiple legs, like Boston Dynamics’ famous BigDog robot.

There are far better forms for robots than “human,” depending on what the robot is designed to do. But robots that are designed to perform multiple chores previously done by humans—from throwing out the garbage to walking the dog to repairing a satellite—will likely be humanoid in nature. These humanoids would be our most immediate competitors.

Accelerating Robotic Evolution

Some scientists and science commentators have expressed skepticism that sentience could ever be created in a machine setting. Their skepticism is fed by the fact that humanlike AI has not yet been achieved, even though researchers have been aggressively pursuing artificial intelligence for decades.

Others disagree. Hans Moravec, the renowned roboticist at Carnegie Mellon University and author of Mind Children (Harvard, 1990), predicts that robots will surpass human intelligence by 2030, will develop humanlike consciousness, will be aware of the world and social interactions, and will gain the ability to replicate themselves and pace their own evolution. Physicist Michio Kaku, author of Physics of the Future (Doubleday, 2011), predicts that helpful robots performing the role of butlers and maids will be available by the year 2100. He is unsure how intelligent they will be, but they will have the capacity to mimic all sorts of human behavior.

Whether Moravec or Kaku is off by a decade or two, or even by several hundred years, is really insignificant compared with the glacial pace of natural evolution. In his 2000 paper “Robots, Re-Evolving Mind,” Moravec compares the evolution of intelligence in the natural world with the progress occurring in the field of information technology.

The evolution of natural intelligence starts with wormlike animals, possessing a few hundred neurons, that appeared more than 570 million years ago. Very primitive fish that appeared 470 million years ago had about 100,000 neurons. One hundred million years later, amphibians with a few million neurons emerged from the swamps. One hundred fifty million years after that, the first small mammals appeared, with brain capacities of several hundred million neurons. Their bigger co-inhabitants at the time, the dinosaurs, had brains with several billion neurons.

After the extinction of the dinosaurs 65 million years ago, mammalian brains also reached sizes of several billion neurons. The first hominids of about 30 million years ago had brains of 20 billion neurons. You and I, and our contemporary human colleagues, have brains operating with approximately 100 billion neurons.

Compare this to the artificial intelligence evolutionary track beginning with the first electromechanical computers built around 1940, which had a few hundred bits of telephone relay storage. By 1955, computers had acquired 100,000 bits of rotating magnetic memory. Ten years later, computers had millions of bits of magnetic core memory. By 1975, many computer core memories had exceeded 10 million bits, and by 1985, 100 million bits. By 1995, larger computer systems had reached several billion bits. By the year 2000, a few personal computer owners had configured their PCs with tens of billions of bits of RAM.

If one accepts Moravec’s comparison of computer bits to neurons, then computer evolution has accomplished in each decade roughly what it took Mother Nature a hundred million years to achieve. Moravec calculates that human engineering of artificial intelligence is occurring at 10 million times the speed of natural evolution.
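
To make the arithmetic concrete, here is a rough back-of-the-envelope rendering in Python of the milestones cited above. The figures are rounded for illustration; they paraphrase the timeline in this article and are not taken directly from Moravec’s paper.

```python
# Rough sketch of Moravec's comparison, using the approximate
# milestones cited above (rounded, illustrative values).

# Natural evolution: (years before present, neurons)
nature = [(570e6, 3e2), (470e6, 1e5), (370e6, 3e6),
          (220e6, 3e8), (30e6, 2e10), (0, 1e11)]

# Machine evolution: (calendar year, bits of memory)
machines = [(1940, 3e2), (1955, 1e5), (1965, 3e6),
            (1975, 1e7), (1985, 1e8), (1995, 3e9), (2000, 3e10)]

# Nature took ~570 million years to go from a few hundred neurons to
# ~100 billion; machines covered a comparable range of bits in ~60 years.
natural_span = nature[0][0] - nature[-1][0]      # ~570 million years
machine_span = machines[-1][0] - machines[0][0]  # ~60 years

print(f"speed-up factor: {natural_span / machine_span:,.0f}x")
# ~10,000,000x -- the "10 million times" figure Moravec cites
```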

An approach to AI called embodiment, or embodied embedded cognition, maintains that intelligent behavior emerges from the interplay among the brain, the body, and the world. Some philosophers, cognitive scientists, and AI researchers believe that the type of thinking done by the human brain is determined by certain aspects of the human body. Ideas, thoughts, concepts, and reasoning are shaped by our perceptual system—our ability to perceive, move, and interact with our world. Roboticists such as Moravec and Rodney Brooks (founder of iRobot Corp. and Heartland Robotics Inc.) maintain that, in order to achieve human-level intelligence, an AI-endowed system would have to deal with humanlike artifacts, and thus a humanoid would be the optimal robot for achieving this.

The new field of evolutionary robotics, like its namesake evolutionary biology, relies on the Darwinian principle of reproduction of the fittest. This view posits that autonomous robots will develop and evolve through interaction with the environment: The fittest robots will reproduce, incorporating mutations that increase their survivability, as judged by how well they perform in the world. A toy version of this selection-and-mutation loop is sketched below.
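
The article does not specify an algorithm, so the sketch below is purely illustrative: the “genome” is a short list of controller parameters, and the fitness function is invented as a stand-in for success in the environment.

```python
import random

# Toy evolutionary-robotics loop. Each "robot" is a list of controller
# parameters; fitness stands in for how well it survives interaction
# with its environment. Illustrative only.

def fitness(genome):
    # Hypothetical objective: parameters close to some ideal setting.
    ideal = [0.5, -0.2, 0.8]
    return -sum((g - i) ** 2 for g, i in zip(genome, ideal))

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, rate) for g in genome]

population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(20)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                      # the fittest "reproduce"
    population = [mutate(random.choice(survivors))  # offspring carry mutations
                  for _ in range(20)]

print("best controller:", max(population, key=fitness))
```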

Humans will be unable to match the rapid evolutionary jumps available to completely artificial beings, even with advances in cybernetics and genetic engineering. Robotic humanoids will be limited only by the laws of physics, not by those of biology, which even genetic engineering cannot fully escape. Hopefully, the sort of destructive competition that eliminated the rivals to Homo sapiens in the past—including such competitors as Homo erectus and the Neanderthals—will not be repeated in the next evolutionary stage.

In the best possible future, non-altered humans, humans with cybernetic implants, and robotic humanoids will learn from each other, borrow and share technology, and engage in friendly collaboration, cooperation, and competition to benefit all. In considering which robotic designs to support or, on the national level, to fund, that seems a good ideal to aim for.

About the Author

Steven M. Shaker is an executive in a market research and training firm. He is an authority on technology assessments, forecasting, and competitive intelligence. He is co-author, with Alan Wise, of War Without Men: Robots on the Future Battlefield (Pergamon-Brassey’s, 1988) and, with Mark Gembicki, of The WarRoom Guide to Competitive Intelligence (McGraw-Hill, 1998). E-mail steve.shaker@cox.net.

Thank You Very Much, Mr. Roboto

By Patrick Tucker

Japan’s unique research and development environment for robotics telegraphs how robots and humans will co-evolve.

I’m in a strangely lit subterranean room in Kyoto, Japan, and for the sake of the experiment in which I am participating, I’m pretending to be lost. A large “Mall Map” is mounted on a wall in front of me. I move toward it at a leisurely pace, in the manner of a man trying hard not to draw attention to himself. When I reach the map, I stop. A whirring sound of gears moving in a motor rises up behind me. I hear the rush of wheels passing quickly over carpet. Something mechanical this way comes.

“Konnichiwa,” says a cheery voice that sounds like it’s emerged from a Casio keyboard. I recognize the greeting: “Good afternoon.” I turn and see two enormous black eyes staring up at me. “My name is Robovie,” says the robot in Japanese. “I am here to help. Are you lost?”

“Yes,” I answer.

Robovie swivels on his omni-directional wheelbase, extends his motorized arm to the left corner of the room, and points to a large sheet of paper displaying the word “Shoes.”

“May I recommend the shoe store?” Robovie asks. “It’s right over there.”

“Dōmo arigatō,” I tell the robot. (“Thank you very much.”) It bows and wheels slowly backward. The experiment concludes successfully.

Welcome to Japan, which has been one of the world’s most important centers for robotics research since Ichiro Kato unveiled his WAP-1, a bipedal walking robot, at Waseda University in 1969. Fifteen years later, Kato presented a robot that could read sheet music and play an electric organ.

Robovie may seem like a step back compared with an assemblage of metal and wire that can sit down and coerce Kitaro’s “Silk Road” from a keyboard without missing a note. But Robovie, in fact, is far more human than his most impressive predecessors. In ways that are subtle but nonetheless significant, he represents an important turning point in the field. He’s also a moving, talking poster boy for all that is wonderful about Japanese robotic research. The future of human–machine interaction can be found in Robovie’s dark, watchful eyes.

Japan: Robot Central

MIT-trained robotics engineer Dylan Glas is one of Robovie’s many chaperones. He’s lived in Japan for eight years now, and this has given him a uniquely international perspective on robotics culture. He also represents a reverse brain drain: He holds multiple degrees from the most prestigious technical school in the United States, but he left his country of birth to pursue better research opportunities abroad. Glas says that the allure of Japan wasn’t financial. He had plenty of offers to design robots in the United States. The problem, as he explains it, was that he didn’t want to build war machines.

A participant in MIT’s Middle East Education Through Technology program, Glas taught Java programming to Israeli and Palestinian high-school students in Jerusalem in 2004. The experience was instructive.

“I saw how people who are parts of larger warring groups can form friendships,” says Glas. “So I came straight from trying to make peace to looking at building things that killed people.” He explored other research opportunities online and found a picture of Robovie (an earlier iteration) hugging a little girl. He knew at that moment he was moving to Japan.

The United States and Japan lead the world in robotics research. But the two countries are dramatically divided over what they’re building these bots to do. The United States, which spends more on its military than do the next 45 spenders combined, has devoted most of its national robotics research funding to putting machines, in place of humans, into dangerous battlefield situations deep behind enemy lines and over the mountains of Afghanistan and Pakistan. The goal is not so much to replace the human soldier as to automate the deadliest parts of the job so the soldier becomes more technician, less target. The iRobot Corporation, the most successful private robot manufacturer in the United States, didn’t get its start building Roomba vacuum cleaners but designing military machines like the PackBot.

Japan is looking to fill a very different need. Demographically, it’s the oldest country in the world; nearly 20% of the population is older than 65. In the rural countryside, the proportion is closer to 25%. Japan is also shrinking. The number of children under age 15 reached a record low of 16.9 million in 2010. Many of Japan’s best-known robotics research projects, such as Asimo, indirectly address the rising population of seniors and growing dearth of able-bodied workers.

Meeting Social Challenges

Many Japanese argue that the country could address its demographic challenges through policy, such as allowing more willing immigrants into the country. There’s evidence to suggest that a more relaxed immigration policy would benefit Japan economically. But immigrants here face social and even linguistic barriers to real integration. Japanese is a tough language to learn; rules and usage can vary tremendously from prefecture to prefecture, between superiors and subordinates, between waiters and restaurant goers, and even between men and women. Linguistic and social customs can be very important to older Japanese, even if hip and media-savvy kids in Tokyo don’t think much of these cultural norms.

Formality and routine are particularly important in work settings, as anyone who has lived in Japan can testify. The degree of professionalism, focus, and seriousness that people bring to even menial jobs is impressive. This is not a country where you encounter baristas texting while they’re making lattes. That emphasis on completing tasks in a very specific “right” way contributes to greater acceptance of automation, says Glas.

“At work, there is no deviation from the established best practice,” he notes. “When I go to the supermarket, they always say exactly the same thing and deliver customer service exactly the same way. So I think the idea of robots doing that sort of work is very natural.”

All of these factors—aging and decreasing population, lack of immigrant labor, electronics expertise, available research funding, and cultural openness to automation—make Japan the key destination for humanoid robotic research, the study of how humans and robots interact in casual, civilian settings.

The Intelligent Robotics and Communication Laboratories, where Glas works, puts people and robots together in interesting settings. Their field tests offer a snapshot of a future where humans and machines work and play side by side. One of Glas’s favorite experiments involved a teleoperated robot that served as a tour guide in the town of Nara, Japan’s imperial capital some 1,300 years ago and home to some of the most important Buddhist temples in the world.

Touring Nara is more fun with the right guide, someone who has spent a while learning the history and who knows a secret or two about the place (such as where to find the sacred herd of deer that eat directly from your hand). But the average age of a tour guide here is 74. Therein lies the problem. The walk from the train station to the best sites, like the famous giant bronze Buddha, can be challenging for young bodies, let alone someone in her 70s. Glas and his colleagues saw an opportunity to put a remote-controlled robot (as opposed to a fully autonomous one) in a unique setting to serve as the voice, eyes, and ears of a real person.

“Having this robot there helps [the guides] be there from home, so they can still talk and share their enthusiasm for Nara and the history of Nara,” Glas notes. “When I tell people this, a lot of Americans say, with a blasé shrug, ‘interesting.’ Japanese people light up and say, ‘Oh, we really need that!’ The perception of necessity is very different. That’s a cultural difference that guides the way people perceive how robots should be in society.”

My conversation with Robovie reenacts another field test, one that took place in an Osaka shopping mall in 2008 and 2009. The goal in that situation was not so much to empower people through telerobotics as to teach robots how to interact with humans. The setting was a strip of stores by the entrance to Universal Studios Japan. Robovie had a 20-meter stretch of turf, sandwiched between a collection of clothing and accessory boutiques and a balcony. The first challenge was learning to distinguish people who were passing through the area in a hurry from those who were just window-shopping or who were lost. The second group might be open to a little conversation; the first group represented a hazard.

The mall test is a classic example of the sort of pattern-recognition task that humans are great at, but robots just don’t do; there are too many open questions. How do you explain human states like “in a hurry,” “window-shopping,” and “lost” in binary code, a language that the robot can understand?

The researchers outfitted the area with six sensors (SICK LMS-200 laser range finders) and, over the course of one week, collected 11,063 samples of people walking, running, and window-shopping. They analyzed the data in terms of each mall goer’s velocity and direction, and isolated four distinct behaviors: fast walkers, idle walkers, wanderers, and people who were stopped or stuck looking at a map. These classifications helped Robovie learn how to recognize different types of people based on their behavior.
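
A greatly simplified sketch of that kind of behavior classification appears below. The speed thresholds and category rules are invented for illustration and are far cruder than the analysis the researchers actually performed on the range-finder trajectories.

```python
# Toy classifier for mall-goer behavior from a short position track.
# Thresholds are invented for illustration; the real system learned its
# categories from thousands of laser-range-finder trajectories.

def classify(track, dt=0.5):
    """track: list of (x, y) positions in meters, sampled every dt seconds."""
    steps = list(zip(track, track[1:]))
    speeds = [((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5 / dt
              for (x1, y1), (x2, y2) in steps]
    avg_speed = sum(speeds) / len(speeds)
    net = ((track[-1][0] - track[0][0]) ** 2 +
           (track[-1][1] - track[0][1]) ** 2) ** 0.5
    path = sum(s * dt for s in speeds)

    if avg_speed < 0.1:
        return "stopped (perhaps lost, studying a map)"
    if avg_speed > 1.2:
        return "fast walker -- do not approach"
    if net < 0.5 * path:            # meandering rather than heading somewhere
        return "wanderer / window-shopper"
    return "idle walker"

print(classify([(0, 0), (0.3, 0), (0.6, 0.1), (1.0, 0.1)]))  # idle walker
```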

Next, Robovie had to say something to the folks he chose to converse with, and the back-and-forth had to seem fluid and natural to the human participant. You would assume that teaching a robot to make chitchat would be a snap after all the time humans have spent over the last decade talking to computerized agents over the phone. But in a real-world setting, the interaction is a lot harder for the machine to handle gracefully. “People think [computerized] speech recognition is so great,” says Glas. “It is, if you have a mic next to your mouth. But if you have a mic that’s far away or on a noisy robot, or there’s music in the background and a group of three people is trying to talk to the robot at once, it’s not feasible.”

Robots have the same hearing problem that afflicts people with King–Kopetzky syndrome, also called auditory processing disorder. Picking up sound isn’t the issue. It’s distinguishing words and meaning from all the other background noises. The problem lies in the brain, which is where most of what we call hearing actually occurs.

To compensate for these deficiencies, the researchers made sure Robovie could call for backup. A human operator would monitor the exchanges from an isolated location, and if Robovie heard a word he didn’t recognize (or got himself lost in some corner of the mall), the operator could chime in and help the robot with the recognition problem. This cut down on the time it took the robot to respond to questions. Human–robotic interaction will likely proceed along these lines—partially human, partially robot—for the foreseeable future, says Glas.

“Even in big automated factories, you need a human. You always will,” Glas avers. “My goal is to increase the automation level, decrease the role of the operator, and work towards more automation. So instead of one person fully operating a telerobot, you have one person monitoring 400 robots; when one runs into a novel operation scenario, he calls the operator.”

The lab’s field tests have yielded a plethora of interesting and counterintuitive findings. For one thing, people trust robots that look like they just came out of the garage, with bolts and hinges exposed, more than they do bots concealed in slick plastic frames. Also, kids and adults interact with robots in very different ways. Adults kept asking Robovie for directions and treated him like a servant, while kids asked the robot’s age and favorite color.

These experiences are why Glas loves his job. They also reveal how the study of humanistic robots involves much more than sensors and hardware. It draws from psychology, anthropology, and a host of other so-called soft sciences. It makes use of intuition and observation in a way that formal robotics research under a military grant doesn’t. This, in part, explains why one of the most important figures in human–robotic interaction research is himself an artist.

The Oil Painter

In the myth of Pygmalion, a sculptor creates a female statue so convincing that the gods make her real. Japanese roboticist Hiroshi Ishiguro has never heard of Pygmalion, but shares the tragic hero’s obsession: creating a work of art so lifelike that—in the imaginations of those who behold her—she becomes real. Ishiguro is known internationally for his very particular robotic creations modeled after real people, including himself, his daughter, and various beautiful Japanese women. He’s also one of the senior fellows at the Intelligent Robotics and Communication Labs.

On a warm November day, I get to meet him at his office at Osaka University. My friend and I are shown into a large space decorated with modern furniture of plastic and glass. At the far end of the room, a man draped entirely in black and leather, and sporting an Elvis-like pompadour that extends from his forehead, is watching two different television monitors and smoking with a feverish intensity. He looks less like one of the most important figures in modern robotics than like a Los Angeles record producer circa 1985.

Ishiguro began his university studies with a dream of becoming a visual artist. Computer science was a backup. Eventually, he was forced to give up on oil painting as a career. “I couldn’t get the right shade of green,” he says. Ishiguro has put his arts training to good use. It’s his artistic sensibility that informs his unique approach to robotic design. “Oil painting is the same thing [as building robots]; the meaning of oil painting is to re-create the self on canvas.”

Ishiguro believes that some understanding of humanity (and some formal humanities training) is essential to success as an engineer. “We need to start with the question, What is human?” he says, “and then design and build from there. Our brain function is for recognizing humans, but in engineering robots, we don’t study the importance of the human appearance.… We need to establish a new type of engineering. Appearance is key, then movement, then perception and conversation.”

He takes us across the hall to show us his lab. A mannequin-like figure is sitting erect on a metal stool. I ask if I can investigate, and he nods. I step hesitantly forward and poke the android in the cheek. Its eyes open wide, and it turns to stare in my direction. The mouth hangs slightly open in an expression of surprise.

The demonstration is simultaneously amazing and unnerving. Ishiguro admits that his creations have secured a reputation for creepiness. When his daughter met her android doppelgänger, she burst into tears. Like a true artist, Ishiguro says he’s thankful for every honest response he gets.

“People are surprisingly sensitive of the humanlike appearance. The older people accept this type of robot. I exhibited one at the World Expo. Elderly people came up and asked ‘Where is the robot?’ Young people knew right away.”

Ishiguro has recently turned his attention to the theater. Two years ago, he and Japanese playwright Oriza Hirata began the Robot-Human Theater project, an ongoing collaboration placing flesh-and-blood actors next to tele-operated androids and other robots. Last year, on a trip to Tokyo, I caught a performance of Hataraku Watashi (I, Worker), a staid, 30-minute piece exploring themes of mortality, autonomy, and what it means to be human. Hataraku Watashi also serves as a live experiment in human–robotic interaction. Takeo and Momoko, the play’s robotic stars, faced the same challenges as their human co-stars: line delivery, timing, blocking, and conveying emotion and meaning.

Ishiguro and Hirata’s most recent piece, Sayonara, starred actress Bryerly Long and Geminoid F, a female tele-operated android. Following the play’s debut last November, Long told Reuters that she felt “a bit of a distance” between herself and her co-star. “It’s not some kind of human presence,” she said.

Ishiguro expressed confidence that future performances will get better. “We think we need humans for theater, but this is not so,” he told me. “The android can change the theater itself.”

This assertion raises a question that is either philosophical or semantic, depending on the answer: Can robots act?

The late Russian literary critic M. M. Bakhtin might say yes, insisting that the success of any piece of art, including a theatrical performance, rests entirely on the reaction it creates in the audience. By this view, Takeo, Momoko, and Geminoid F are already accomplished actors.

The late Lee Strasberg, founder of method acting, would argue otherwise. The robot has no internal memories, no painful or elating life episodes with which to breathe credibility into the performance. “The human being who acts is the human being who lives,” he said. The presence of life, ergo, is a necessary precondition to “acting.” The robot is a prop, or, in the case of a remotely controlled android, just a puppet. It’s an interesting gimmick but not a thespian. Real-life experience is a precursor to genuine acting, and a robot will never be able to experience life in the way that humans do.

Or will it?

The Pattern-Recognition Machine

Turn your attention back to Robovie for a moment. Picture him standing alone in his stretch of mall. A potential conversational partner enters his area of operation. Robovie has to make a decision: Is it safe to approach or isn’t it? The window for that decision is closing.

“You don’t want [the robot] to go super fast in crowded commercial spaces,” says Glas. “But people walk quickly. If you’re walking through that door, and Robovie wants to talk to you, he has to start early. We have to predict people’s behavior in the future, predict not only where they are, but what they’re going to be doing and where they’re going. The system gets a couple of seconds of data.”
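
The simplest version of such short-horizon prediction is constant-velocity extrapolation, sketched below with made-up numbers; the lab’s actual motion models are considerably richer.

```python
# Constant-velocity extrapolation: given ~2 seconds of tracked positions,
# guess where a person will be a second or two from now.
# Illustrative only; the real system uses learned motion models.

def predict(track, dt, horizon):
    """track: recent (x, y) positions sampled every dt seconds."""
    (x0, y0), (x1, y1) = track[0], track[-1]
    elapsed = dt * (len(track) - 1)
    vx, vy = (x1 - x0) / elapsed, (y1 - y0) / elapsed
    return (x1 + vx * horizon, y1 + vy * horizon)

# Two seconds of data sampled at 2 Hz, a person walking toward the door:
recent = [(0.0, 0.0), (0.5, 0.1), (1.0, 0.2), (1.5, 0.3), (2.0, 0.4)]
print(predict(recent, dt=0.5, horizon=1.5))   # -> roughly (3.5, 0.7)
```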

Herein lies the reason Robovie represents a great leap forward in artificial intelligence. He’ll never play chess as well as IBM’s Deep Blue played against Garry Kasparov. He won’t win on Jeopardy, and he can’t vacuum better than a Roomba. What Robovie does is learn about the people in his environment. He takes in information about his setting and the live actors in that setting and responds on the basis of a perceived pattern, moving toward reward and away from threat. This is incredibly human.

In his 2004 book On Intelligence: How a New Understanding of the Brain Will Lead to the Creation of Truly Intelligent Machines, neuroscientist and Palm Computing founder Jeff Hawkins argues that the neocortex evolved expressly for the purpose of turning sensory data, in the form of lived experiences, into predictions.

“The brain uses vast amounts of memory to create a model of the world,” Hawkins writes. “Everything you know and have learned is stored in this model. The brain uses this memory-based model to make continuous predictions of future events. It is the ability to make predictions about the future that is the crux of intelligence.”

This ability to anticipate is a function of the neocortex, which can be traced back to humankind’s reptilian ancestors of the Carboniferous Period. Over the course of millions of years, the neocortex increased in complexity and size and emerged as the central component in human cognitive intelligence—not because of what it is, which has changed materially, but because of what it does.

The process of prediction forms the very basis of what makes us human. We see, we gather data from our senses, we predict, therefore we are. By that metric, Robovie’s every interaction, his every encounter, every question he asks, and every response he picks up brings him a little bit closer to humanity.



About the Author

Patrick Tucker is the deputy editor of THE FUTURIST magazine and director of communications for the World Future Society. In 2010-2011, he spent five months in Japan researching and writing about the future. His previous article, “My First Meltdown: Lessons from Fukushima,” appeared in the July-August 2011 FUTURIST. E-mail ptucker@wfs.org.

Exploring New Energy Alternatives

By David J. LePoire

What is most likely to satisfy our energy needs in the future—wind farms and photovoltaic arrays, or something yet to be invented? Options for the world’s energy future may include surprises, thanks to innovative research under way around the world.

Much discussion about going beyond petroleum includes the development of wind farms, solar thermal concentrators, solar cells, and geothermal energy production. But will these satisfy our energy needs in the future? We hope that renewable sources will provide enough energy to supply the world’s future needs, but there are still many uncertainties.

How much will low-intensity sources of energy cost over their life spans, and what will their environmental impacts be? The answers depend on research and on the operational experience gained in deploying these technologies and their associated storage, transmission, and conversion systems.

Another area of uncertainty is the growth in world demand for energy. If everyone in the world used energy at the rate the United States does, world energy production would have to increase by a factor of four. In addition, energy use per person in the developed world might not stay flat; it might increase. Could renewable sources keep up with this demand?
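
The factor of four can be checked with a back-of-the-envelope calculation using rough circa-2010 per-capita figures. The values below are approximate estimates chosen for illustration, not numbers from this article.

```python
# Back-of-the-envelope check of the "factor of four" claim.
# Approximate circa-2010 primary energy use per person per year.
us_per_capita = 330     # gigajoules per person per year (rough estimate)
world_per_capita = 75   # gigajoules per person per year (rough estimate)

factor = us_per_capita / world_per_capita
print(f"If everyone used energy at the U.S. rate: ~{factor:.1f}x today's output")
# ~4.4x -- roughly the factor of four cited above
```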

The following is an overview of a few conventional renewable energy sources that may be expanded in the near future, as well as some more speculative potential “surprises.” As the time horizon increases, the uncertainties associated with the technologies, economics, and political scenarios increase.

Energy Today

Fossil fuels currently account for 83% of the U.S. energy supply and slightly less (80%) of the world’s energy supply, but energy conservation and efficiency since the oil crises of the 1970s have suppressed growth of energy demand. If energy use had grown as fast as the economy, the United States would be using an estimated 60% more energy than it does now. We’ve improved energy use in buildings, electrical appliances, cars, and industrial processes. These applications are often motivated by cost savings.

Attaining energy efficiency through conservation or improved technology allows us to extract more useful energy from the same amount of fuel. As a result, the economy has grown faster than energy use.

Current nuclear power plants extract remnant energy from ancient supernova explosions, stored in the heavy element uranium. Since these stellar explosions occurred billions of years ago, before the solar system formed, nuclear power is not renewable. However, far more energy remains stored in the heavy elements than is currently being utilized. Techniques are being explored to expand the possible fuel materials to include other isotopes of uranium as well as thorium.

Hydroelectric power is renewable but has some limitations: Though inexpensive, electricity generated from hydropower (for example, along the Tennessee, Colorado, and Columbia rivers) affects large tracts of land and is generally limited to a few select spots where the topography supports a good reservoir location. Global growth is limited because prime locations have already been developed.

Direct solar-energy technologies such as solar photovoltaic cells are being rapidly developed and deployed, and other technologies are also advancing our ability to efficiently convert wind, waves, ocean currents, and biofuels into usable energy.

Beyond Conventional Renewable Energy Sources

To hedge our energy bets and reduce future uncertainty, researchers are exploring new options for future energy sources, including ways to improve older ideas, such as fusion energy, space-based solar power satellites, Moon bases, and advanced nuclear fission options.

The strategy of maintaining a variety of energy options could be likened to the strategy of reducing risk in an investment portfolio. For example, our current energy technologies have costs, environmental impacts, and maturity levels that are relatively well known. Researchers are now testing newer renewable technologies, with the aim of cutting production expenses, minimizing negative environmental impacts, and enhancing scalability.

The hypothetical space-based, fusion, and advanced fission energy production systems introduce an extra level of uncertainty, because some technical aspects are not solved and because their relative costs depend on the construction of new infrastructure to support them.

Infrared Solar Technology

Nanotechnology offers tools that could help create designs that convert energy more efficiently. For example, nano-scale antennas could be built to capture infrared light from the Sun—light that we cannot see directly but that we experience as heat. A solar cell that could extract this infrared energy would be able to provide power both day and night (although less at night).

An antenna captures energy more efficiently and absorbs over a wider range of angles than conventional cells do, and it does not require exotic materials to make. However, an antenna has to be about the same size as the wavelength of the light it captures. For radios, this is about 1 meter. For cell phones, it is a few inches. For infrared light, the wavelength is about one twenty-fifth of the width of a human hair. A single antenna that small would not only be difficult to make; it would also produce very little energy.
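
The size comparison follows from the basic relation between wavelength and frequency (wavelength equals the speed of light divided by the frequency). The frequencies chosen below are illustrative, not values from the article.

```python
# Antenna size scales with wavelength: lambda = c / f.
# Frequencies below are illustrative choices.
c = 3.0e8  # speed of light, m/s

for label, f in [("FM/VHF radio", 300e6),         # 300 MHz
                 ("cell phone", 1.9e9),            # 1.9 GHz
                 ("mid-infrared light", 100e12)]:  # 100 THz
    wavelength = c / f
    print(f"{label:20s} {wavelength:.2e} m")

hair = 75e-6  # a human hair is roughly 75 micrometers across
print("hair width / infrared wavelength ~", round(hair / (c / 100e12)))  # ~25
```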

The challenge of mass-producing millions of these small antennas was successfully met at Idaho National Laboratory (a U.S. Department of Energy laboratory), along with other laboratories, in work that received a 2007 Nano50 award. The laboratories were able to “print” 250 million metal antennas on plastic about the size of a standard sheet of paper. However, the problem remains of converting the absorbed energy, which oscillates at infrared frequencies of many thousands of gigahertz, into useful electricity at 60 Hz.

Nuclear Fission

Nanotech could also improve the energy-conversion efficiency of fission technology by allowing the charged particles emitted by fissioning uranium atoms to be converted to electricity before they collide with other atoms and generate heat. This might be achieved by integrating the fuel and the electricity-extraction zones at the nano scale. When the charged particles pass through gas in the small pores, they strip electrons from the gas. The resulting separation of charges generates a voltage difference. This work is being pursued by a former Los Alamos National Laboratory scientist.

In a traveling wave reactor, only a small slice of a cylindrical core undergoes intense nuclear reactions with fast neutrons at any one time. The reactor needs an initial ignition with enriched uranium, but it then burns much like a candle. Its advantages include the ability to use unenriched fuels such as natural uranium, waste (depleted) uranium, thorium (much more plentiful than uranium), and spent nuclear fuel (considered a waste product of current nuclear power generation).

This design was originally proposed in the 1950s, but no actual reactor has been built. TerraPower has now developed designs for such a reactor, which were publicized in a 2010 TED (Technology, Entertainment, Design) presentation by Bill Gates. These reactors would use fuel more efficiently by burning more of the available uranium and thorium, and they would operate at higher temperatures, allowing higher thermal efficiency. The core would also be sealed, with fuel lasting for 60 years, and would generate much less waste because more of the material would be burned.

Nuclear Fusion

Fusion is the process of merging two small atomic nuclei into a larger one. If the resulting nucleus is lighter than iron, the reaction also releases energy. The difficulty lies in getting two electrically charged nuclei close enough for the merger, or fusion, to occur. For energy production, the nuclei need to be pushed together in a controllable, energy-efficient, and economical way.
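
The energy bookkeeping can be illustrated with the standard deuterium-tritium reaction, using textbook atomic masses; this example is added for clarity and is not drawn from the article.

```python
# Energy released in D + T -> He-4 + n, from the mass defect (E = mc^2).
# Atomic masses in unified atomic mass units (standard textbook values).
m_D, m_T = 2.014102, 3.016049
m_He4, m_n = 4.002602, 1.008665
MEV_PER_U = 931.494  # energy equivalent of 1 u, in MeV

mass_defect = (m_D + m_T) - (m_He4 + m_n)
print(f"energy released: {mass_defect * MEV_PER_U:.1f} MeV")  # ~17.6 MeV
```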

In nature, there is one kind of system—stars—that controllably and efficiently generates fusion energy. However, it is impossible to replicate the confinement mechanism that stars use, since it requires the gravitational pull of a mass as large as the Sun’s. Confining the plasma in some other way is therefore the central problem in generating controllable, energy-efficient, and economical fusion energy. Although the concept of nuclear fusion for energy generation was identified soon after World War II, its implementation has been frustrated by the various ways the plasma finds to escape confinement. It seems that fusion has been “about 30 years away” for the past 50 years!

One way to confine the plasma is through the inertial forces of an implosion. This is the technique used by the large facility at Lawrence Livermore National Laboratory—the National Ignition Facility—whose construction was completed in 2010 and which is now scheduled for experiments.

Another technique magnetically compresses the hydrogen long enough for fusion to take place by running a large current through an array of fine wires. The current vaporizes the wires into a plasma while simultaneously creating a large magnetic field that compresses the plasma and the hydrogen. The Z machine at Sandia National Laboratories has been experimenting with this concept for many years.

Artificial Photosynthesis

A 25-year quest for scalable solar energy solutions has drawn on biomimicry for inspiration. In its search for artificial photosynthesis, an MIT team led by Dan Nocera recently identified two natural biological techniques that had previously remained hidden. Nocera noticed that some life-forms use cobalt in photosynthesis. He then developed a long-lived cobalt-based catalyst that uses sunlight to split water into oxygen and hydrogen gas.

This work supports Nocera’s goal of finding a chemical process that is distributed (e.g., installed on individual houses) and robust (e.g., not prone to decay) and that converts sunlight into liquid chemicals (e.g., alcohols) that store the energy for later use, either in transportation as a gasoline substitute or for generating electricity with a fuel cell.

The MIT team’s recent discoveries have led to a startup company, Sun Catalytix, that is partially funded by the U.S. government’s Advanced Research Projects Agency-Energy (ARPA-E) program, which funds selected promising energy-related innovations. In the lab, the catalyst appears to work even in impure water, which could allow it to be used not only to generate and store solar energy but also to purify water.

Nate Lewis at Caltech is also pursuing artificial photosynthesis, by a different route: using nanotubes along with a membrane to generate hydrogen from light.

Space-Based Solar Technology

The idea of space-based solar energy collection has been around for decades. Obstacles include the high cost of sending reliable equipment into space and maintaining it there, as well as the uncertainties associated with transmitting the energy back to Earth.

Two locations are currently being explored: geosynchronous orbit and the surface of the Moon. The latter offers the advantages of using existing materials and providing a more conventional work environment.

A Japanese company, Shimizu, is exploring the use of semi-autonomous robots to do the primary conversion of materials and build the solar energy collection system. The idea is to create a continuous strip of land, perhaps going all around the Moon’s equator, of solar cell collectors built with lunar materials.

The resulting LUNA RING, a complete equatorial ring, would allow continuous energy collection. Any single spot on the Moon is in sunlight only half the time, but a complete ring guarantees that part of it is always illuminated; and because the same side of the Moon always faces the Earth, only a limited number of transmitters would be needed. [Editor’s note: For more on the LUNA RING concept, see “Solar Power from the Moon” by Patrick Tucker, THE FUTURIST, May-June 2011.]

To make this plan more feasible, space travel and the movement of materials need to become more economical. There have been several attempts to improve the space elevator concept, which was first proposed by Russian scientist Konstantin Tsiolkovsky in 1895. A major obstacle is the strength of the material needed for the spine of the elevator, which must reach more than 22,000 miles (about 36,000 kilometers) from geosynchronous orbit down to near Earth’s surface. Recently, NASA and physicist Brad Edwards have been updating the design on the basis of the idea that carbon nanotubes, which have the necessary strength, can be scaled up to provide enough material and consistency for the long cable.

Speculative Physics: Dark Energy, Muons, and Mini Black Holes

Still further in the future, and associated with far greater uncertainty, are speculations about using new potential physics discoveries. Although a surprise might arise from this area, the probability of any one technique being successful is small, and it would take a large amount of effort to develop it into an integrated energy production system.

History has shown that surprises can revolutionize energy generation. In the mid-twentieth century, nuclear fission power was able to go from the lab to the power station in about 40 years. There are still many natural mysteries that might point the way to new energy technologies.

Among these mysteries are dark matter and dark energy, which account for about 95% of the energy in the universe. Accelerators such as the CERN Large Hadron Collider might discover new particles, as predicted by a variety of competing theories. Or they might produce mini black holes, whose physics would be interesting to explore. Physicists have begun speculating about potential theories and about how new forms of matter and energy might be exploited to generate useful energy.

For example, heavy, negatively charged particles can catalyze fusion; this has been observed when muons come to rest in hydrogen-rich material such as water. Muons, heavy relatives of the electron, are produced by natural cosmic rays and by accelerators, and their catalytic effect has been known since the 1950s. Hydrogen nuclei are attracted to the heavy, negatively charged muon and form a tightly bound structure around it; because the muon orbits far closer to the nuclei than an electron would, it holds them much closer together. This is a form of confinement of the hydrogen nuclei. Eventually, the nuclei fuse, releasing energy, so no enormous temperature or containment facility is necessary. The muon is then released to catalyze more reactions.

However, muons are unstable and eventually decay. At present, the energy needed to produce the muons exceeds the energy generated by the limited number of fusions they catalyze. If a new, more stable, negatively charged particle were found, economical catalysis of fusion might become possible.
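
The mismatch can be illustrated with commonly quoted ballpark figures; the numbers below are rough, order-of-magnitude values assumed for this sketch, not figures from the article.

```python
# Ballpark energy balance for muon-catalyzed fusion.
# All values are rough, commonly quoted orders of magnitude.
cost_per_muon_gev = 5.0        # rough accelerator energy cost to make one muon
fusions_per_muon = 150         # order of magnitude catalyzed before decay/sticking
energy_per_fusion_mev = 17.6   # D + T -> He-4 + n

yield_gev = fusions_per_muon * energy_per_fusion_mev / 1000.0
print(f"energy out per muon: ~{yield_gev:.1f} GeV vs ~{cost_per_muon_gev} GeV to produce it")
# Output falls short of input -- hence the hope for a longer-lived catalyst.
```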

Another possibility is that mini black holes might one day be produced and controlled to extract energy from the material fed into them. As the material entered, some of the energy would radiate out. Very small theoretical black holes would be too unstable and radiate before control was established, but there might be a “sweet spot” of black-hole size that would radiate at a beneficial rate. Mini black holes have been proposed as an energy source for a spaceship in the far future.

Finally, there are aspects of quantum physics that are still very puzzling. Researchers are exploring the connection between quantum physics and gravity, as well as the fundamental aspects of quantum physics behavior, such as the way in which spin influences collective behavior. Another possibility is finding a way to extract energy from vacuum energy (zero point energy).

Diversity for the Energy Portfolio

These examples of potential new energy sources highlight an essential ingredient in the future of energy: the diversity of the organizations involved in developing it.

Some projects are government-based, such as those sponsored by DOE. Others are collaborations between a government and an industry, such as the Japanese Artemis group. Some projects are sponsored by individual philanthropists and investors, such as Bill Gates and Vinod Khosla. And some, such as the ITER fusion reactor, require international collaboration. The space elevator, for example, would probably require similar international agreements and cooperation.

Besides direct research funding, other ways to foster innovation include contests in which many different types of organization can participate. Successful contests include the X Prize for space travel and the Defense Advanced Research Projects Agency (DARPA) Grand Challenge for autonomous vehicle navigation.

Energy is a major determinant in economic development, not only with regard to heating, transportation, and entertainment, but also with regard to staples such as food, shelter, and health. The energy fuel types have periodically changed over the last 200 years, and our current dependence on fossil fuels may soon be at an end.

We have been applying energy-efficiency methods to curb energy demand, and we have been developing renewable energy sources, such as solar and wind power, to increase supply. However, these energy sources might not be able to meet all future energy needs because of their economics or environmental impacts.

Searching for more potential future sources of energy to prepare for the challenges ahead requires research. New tools that employ nanotechnology, supercomputers, and space technology enable such exploration. A balanced portfolio of energy options and organizational support can reduce uncertainty and minimize the potential for surprises.

About the Author

David J. LePoire is an environmental analyst at Argonne National Laboratory. E-mail dlepoire@anl.gov. This work was supported by the U.S. Department of Energy under Contract No. DE-AC02-06CH11357.

This article draws from his essay, “Beyond (Conventional) Renewable Energy?” in the World Future Society’s 2011 conference volume, Moving from Vision to Action, which may be ordered from www.wfs.org/wfsbooks.

The Accelerating Techno-Human Future

By Braden R. Allenby and Daniel Sarewitz

Technology and humanity are co-evolving in ways that past generations had never imagined possible, according to the authors of The Techno-Human Condition. This is not necessarily a good thing, they warn. With unprecedented levels of innovation come new societal tensions and cultural clashes. People everywhere are challenged to adapt to accelerating change.

We’re all used to thinking of cognition as something that happens within individuals. Now we are seeing augmented cognition built into weapons systems in Iraq, automated cognition built into our automobiles, and human memory collected in Google. We’re diffusing cognition across integrated human–technology networks. What you have is not just humans networking with other humans, but humans in integrated human–technology networks functioning differently than they did before. Cognition may be shifting away from the individual and toward a techno-human network function. This represents a profound change in our cognitive patterns. In the past, we had to carry more in our own brains.

That is not to say that we’re getting more intelligent as a culture. We have different kinds of intelligence. Acting in the face of uncertainty and disagreement seems to be as difficult as ever. Part of the problem is, on the one hand, excessive optimism that we can solve big challenges like climate change or terrorism with more analysis and more information, and, on the other hand, the belief that we’re completely screwed. The technological optimists are as much a problem as the Luddites, because they have a one-sided view that we can solve all our problems if we just apply enough technology and reason. This kind of black-and-white dichotomy is not helpful given the actual complexity of socio-technological evolution.

Too Much Information Running Through My (Distributed) Brain

Everyone is awash in information. There is this weird cognitive dissonance: You can participate in everything and are totally connected—comment on blogs, visit chat forums, etc.—but also there seem to be forces beyond anybody’s control, and an increasing inability to filter out what’s important and reliable.

We have the capacity to generate and process much more information, but there’s a lot more information that doesn’t get dealt with. It just sits out there: dark information, like the dark matter that dominates the universe. It’s pretty clear that there is a broadening and a networking of cognitive capacity. For example, we might use our e-mail as outsource memory for names because we can’t remember them all. But it’s countered by the fact that we’re overwhelmed and can’t remember all our e-mails.

Moreover, we’re seeing technological change across the entire frontier of technology. If you look at past clusters—steam, railroad, automobiles—those clusters tended to have one dominant technology. Other technologies also changed in response, but you could always identify the dominant technology of the era.

Now, you really can’t pinpoint a single dominant technology. Change is occurring across all frontiers of technology: biotech, nanotech, information, cognitive science, and so on. That means adaptation is that much harder, because there are no stable domains, no safe harbors. Moreover, the rate of change is more rapid than ever before and is still accelerating. The pure rate of technological evolution is unprecedented.

For those two reasons—the fact that change is across the entire technological frontier, and the fact that the rate of change is unprecedented and accelerating—our situation is different, and the issues we face are different. And just when we need more institutional agility, we are becoming more rigid.

Because it has to innovate, the private sector will generally adapt to change more rapidly. Companies aren’t happy about it, because it’s difficult. But it’s also an opportunity, and they have no choice. At the level of the firm, confusion and ossification are punished fairly severely in the market, so there’s some selective pressure for adaptability. If you don’t innovate as rapidly as possible, you’re going to lose market share to competitors. You have to innovate or you’ll fall off the rapidly changing edge of the technology. Even sectors that are slower to change have to innovate or risk being replaced. In construction, for instance, people are talking about machines that enable you to print out a house.

The same phenomenon is true for the military, only the stakes are even higher—not just economic survival, but survival, period. This is why so many new technological frontiers start out with military applications.

Individuals are having a hard time understanding, let alone managing, the rate of change, and it’s fair to say that society as a whole is having a hard time. Many people meet the uncertainty and discomfort that characterize rapid and accelerating change by retreating into relatively rigid belief systems or, less aggressively, into apathy. In general, therefore, the radical change has the perverse effect of driving dysfunctional behavior. This is a very dangerous trend, because what’s needed is more awareness and engagement, not less.

At a social level, part of a predictable response to the accelerating, unpredictable, and uncomfortable change in many domains is rejection of flexibility. So in many ways, we’re moving away from developing appropriate coping mechanisms. That appears to be the case in politics in many countries. We’re getting more and more entrenched politics on either side, instead of the kinds of discussions we need that can let us understand and adjust to continual socio-technological change.

The more that people feel knocked back on their heels by this change, the more they seem to be retreating into the worldviews that make them less able to understand and respond. Meanwhile, the private sector—which is the source of most of the innovation—gets better at accelerating the change, often by leveraging off of military efforts. It would be easy to offer pop-sociological interpretations of this phenomenon, but part of the problem is the expectation of control that comes from the Enlightenment commitment to rational action. That control impulse is now wired into our culture, if not our genes.

More Innovation and Inequality

It is reasonable to expect that social tensions have risen, and will continue to rise, as the rate of change accelerates. A certain level of conflict seems to correlate with (if not contribute to) cultural, economic, and technological development. Life expectancy in many developed countries hovers around 80 years; in developing countries in Africa, it can be in the low 40s.

Factors contributing to this gap include existing human-enhancement technologies, such as vaccines (which engineer the immune system to contribute to a much longer life than would occur naturally). Combine this with the ability of a developed society to provide for basic needs so as to facilitate knowledge workers—as opposed to requiring hard and continuing labor simply to exist—and we see huge competitive advantages for developed economies.

This gap is augmented by the fact that technology as a domain is self-perpetuating, especially as technological innovation integrates across categories (robotics and neuroscience, for instance). This momentum will favor societies with an integrated capability across the technological frontier as a whole.

Now, what if states aggressively pursue particular technologies? For instance, what if China goes from its “one child” policy to an “enhanced child” policy, whereby all schoolkids are required by the state to take some sort of cognitive enhancement? The outcomes of such a policy are difficult to anticipate, but may be quite disruptive. The outcome of the “one child” policy (combined with technologies that can promote gender selection) has included a gender imbalance that is potentially destabilizing to Chinese society. An “enhanced child” policy would surely have similarly unpredicted consequences.

Plenty of room exists for social tensions, though not necessarily due to old triggers such as immigration. The innovation hothouses of Silicon Valley and the Route 128 corridor wouldn’t exist except for Indian, Chinese, and other immigrants. Russian immigrants to the United Kingdom easily assimilate to their new society, and, indeed, are characteristic of a globalizing elite that migrates fairly effectively across national borders. And certainly much of the backlash against modernity in the United States is not from immigrants, but from domestic groups that find social and technological evolution challenging and distasteful.

The real question is whether we will overshoot productive conflict and get into destructive conflict, as some argue that American politics may have done. However, the older conflicts, such as natives versus migrants, may become obsolete with the globalization of economic, institutional, and technological systems.

Unless policies are enacted to reduce inequities, human-enhancement technology will probably magnify existing gaps between the capabilities and assets of different groups. We may have been seeing an early version of this in the United States over the past 40 years, where a commitment to an elite-driven education system, and the absence of a social or political commitment to redistribution policies, leave us pretty much unable to offset the consequences of Schumpeterian “creative destruction.” The result has been a de-skilling of the workforce and the elimination of the conventional manufacturing sector, among other problems.

It is hard to overstate how stressful life is becoming for many in developed countries as a result of technological evolution. For a digital immigrant, for example, trying to stay networked and multitasking can be very stressful, while, for a digital native, that level of information flow may be not only comfortable, but necessary to feel psychologically connected to others. The pressure to be more productive is indeed pervasive, though perhaps advanced information technologies themselves may not be the source of stress: It still takes decisions by people, and their institutions, to damage other people this way. Yet, individuals may have little choice but to conform to evolving technological demands if they want to remain enfranchised in society.

Technology and Generational Divides

We should not forget that stress is part of the human condition: Staying alive as a poor farmer in India, or surviving drought in Africa or Asia, is also stressful. Moreover, for many people across cultures, change can be very stressful, regardless of productivity demands, which adds a complicating factor to any analysis. In the longer run, human psychology may be increasingly subject to deliberate engineering, raising the interesting question of whether stress—in appropriate amounts depending on personality and genetics—is a necessary component of being human as we know it, and whether it could be tuned to more desirable levels. But we reemphasize that any such efforts are likely to lead to unforeseen consequences that may overwhelm the original intent of the actions.

Many middle-aged adult professionals may feel that the connectedness of the wired world adds enormously to their stress levels, but it is doubtful that their kids would look at it that way. This is not meant to dismiss the concern; adjusting successfully to the information flow—which to you may be unmanageable but to your kid is normal—will undoubtedly bring other problems. We might never be able to finely tune such attributes as stress levels for an entire population, and this is probably just as well: Psychological diversity is clearly an asset for the species. Yet, that may not keep us from trying.

It is reasonable to expect that a radical increase in longevity will exacerbate rather than mitigate generational conflict. After all, think about how mature human-resources workers differ in their view of informal social-networking photographs and materials compared with the young people who post them: Older folks tend to judge such material by their own experience, when much potentially questionable behavior simply never got recorded. Today, all your friends have cell phone cameras, and it may amuse them to post embarrassing pictures of you. It isn’t the behavior that has changed; it’s what technology allows to be classified as personal versus public.

Such technologically mediated misunderstandings tend to get worse as the pace of technological evolution accelerates. Moreover, it is also probable that accelerating technological evolution will result in increased social, institutional, and financial upheaval. Another consequence of life extension will be an older workforce that may resist giving way to younger job seekers.

In our book, we include a brief quote from Gulliver’s Travels on the issue of intergenerational conflict. Swift describes an island of people who live forever, and he foresees them as an alienated, isolated generation of sufferers. This of course is a satirical device, but his questioning, nearly 300 years ago, of the notion that longevity is automatically a boon to either the person or the society is still apt. Totally cut off from the younger generation, these aged beings are bitter and alone.

As we pursue radical life extension through technological means, Swift’s challenge may become ours. Why assume that the cumulative effect of lots of people living longer is going to be a boon to society? We’re definitely going to pursue it, but the objective ought to be more years of healthy life, not just more years. Unfortunately, when we see more years of life, we also see more years of unhealthy life. Anyone who is middle-aged is dealing with unhealthy parents and the slow decline that we all eventually go through. The flipside of living a lot longer is that dying takes a lot longer, too. Dealing with the continual challenges of demographic change is thus another aspect of the techno-human condition.

One must always remember how unpredictable the future is. It might well be that older, less-innovative brains perform important functions in integrated techno-human cognitive networks, and thus act as a source of balance and stability without impeding cognition. And there is a lot of work going on now with augmented-cognition technology (for example, with automobiles), which may well lend itself to developing cognitive tools that can compensate for aging adults. For example, we may see AI avatars with the experience and caution of adults and the playfulness and experimentalism of youth.

To the extent that human judgment will never be fully replaced by artificial intelligence, one might hope that a population of wise elders could indeed be a resource for society. In the case of science, there are some fields where the biggest contributions are made by kids, such as math and physics, and others where experience really does count, such as geology and engineering.

So we’ll see problems and benefits as technological development accelerates and as cognition becomes increasingly networked. We are very likely to see a set of problems that we haven’t figured out how to deal with, and that are historically unique. Technology affects how culture evolves, and culture affects how technology evolves. They’re not separate categories. You can’t understand one without understanding the other.

About the Authors

Braden R. Allenby is a professor of civil and environmental engineering at Arizona State University. E-mail braden.allenby@asu.edu.

Daniel Sarewitz is a professor of science and society at Arizona State University. E-mail daniel.sarewitz@asu.edu.

They are co-authors of The Techno-Human Condition (MIT Press, 2011).

This article was based on interviews with the authors by Rick Docksai.

Five Principles of Futuring as Applied History

By Stephen M. Millett

A historian and futurist offers a theoretical framework for developing more credible and useful forecasts. The goal is to help individuals and organizations improve long-term foresight and decision making.

When I was working on my doctorate in history, people would quip: “Why study history? There’s no future in it!”

On the contrary, there may be a great deal of history in the future. Throughout my four-decade career as a historian engaged in futuring, I have used the past to explore the future. Like the study of history, futuring is heavily based on facts, evidence, solid research, and sound logic—more science, less science fiction.

Futuring is an example of what I call “applied history,” or the use of historical knowledge and methods to solve problems in the present. It addresses the question “What happened and why?” in order to help answer the question “How might things be in the future and what are the potential implications?” Futuring, at least in a management context, combines applied history with other methods adapted from science, mathematics, and systems analysis to frame well-considered expectations for the future. This process will help us to make decisions in the present that will have positive long-term consequences. In the language of business, futuring is an aspect of due diligence and risk management.

History provides indications of the future. Identifying historical trends helps us see patterns and long-term consistencies in cultural behavior. History may not repeat itself, but certain behaviors within cultures do. We can spot patterns in persistent traditions, customs, laws, memes, and mores. Debating whether a historical event is unique or a manifestation of a long path of behavior is like arguing whether light is a particle or a wave—the answer depends entirely upon your perspective.

The past provides precedents for future behavior. When you understand how things happened in the past, you gain much foresight into the things that might happen in the future—not as literal reenactments, but rather as analogous repetitions of long-term behavior that vary in their details according to historical conditions.

Let me hasten to qualify my view of history by saying that I see no immutable forces in the flow of history, no invisible hands of predestination, fate, or economic determinism. Time may be like an arrow, in the words of Sir Arthur Eddington, but I very seriously doubt that it has a prescribed target. I am also skeptical of the concept of political or economic cycles recurring with regular periodicity. If there is any determinism or predictability whatsoever in human behavior, it lies in our evolutionary biology and cultures. Luck, randomness, and the idiosyncrasies of free will play important roles in determining the future as well.

While the study of history has been rich in philosophy, it has lacked theories such as those found in the natural and social sciences. Most historians have not pursued such theories, because they see each period of history as being unique and as having little or no practical application for problem solving today. Futuring as applied history, however, needs basic principles upon which to build forecasts that can be used for long-term decision making.

A Framework for Understanding the Future

The study of the future is very sparse in both philosophy and theory. Theories (which may also be seen as mental or analytical models) provide a framework for forecasts and give them a credibility that increases managers’ willingness to take calculated risks. In addition, they can help us utilize our knowledge of demonstrated trends, interactions, and causes to better anticipate the future. The theories do not have to be rigid, but they do need to provide an explicit framework that can be modified, expanded, and even rejected by experience.

To that end, I have been working on a set of theoretical principles for futuring from the point of view of an applied historian. I offer them now as working guidelines until others can offer better.

Futuring Principle 1>> The future will be some unknown combination of continuity and change.

After an event occurs, you can always find some evidence of the path that led up to it. Sometimes when viewed in hindsight, the path looks so linear that it is tempting to conclude that the outcome was inevitable all along. In reality, it is the historical research that is deterministic, not the events themselves.

No historical event has ever occurred without antecedents and long chains of cause-and-effect relationships. Nor was there ever a time when decision makers did not have choices, including the simple option to do nothing. Yet, in the present moment, one can never be certain which chains of events will play out. While there are continuities in the past and the present, there are also changes, many of which cannot be anticipated. Sometimes these changes are extreme, resulting from high-impact, low-probability events known as “wild cards.”

Thus, the future always has been and most likely always will be an unknown combination of both trend continuities and discontinuities. Figuring out the precise combination is extremely difficult. Therefore, we must study the trends but not blindly project them into the future—we have to consider historical trends, present conditions, and imagined changes, both great and small, over time. You might say that trend analysis is “necessary but not sufficient” for futuring; the same goes for imagined changes, too.

Futuring Principle 2>> Although the future cannot be predicted with precision, it can be anticipated with varying degrees of uncertainty, depending upon conditions. Forecasts and plans are expectations for the future, and they are always conditional.

As twentieth-century physicist Niels Bohr famously said, it is very hard to make predictions, especially about the future. Yet, we can and do form expectations about the future ranging from ridiculous to prescient. David Hume, Werner Heisenberg, and Karl Popper cautioned us to be wary of drawing inductive inferences about the unknown from the known. This caution applies as much to futuring as it does to science.

All events occur in the context of historical conditions; likewise, all events in the future will almost certainly occur within a set of conditions. Therefore, all forecasts are conditional.

We may not be able to anticipate specific events in the future, but we can form well-considered expectations of future outcomes by looking at specific conditions and scenarios. For example, “When will the United States experience again an annual GDP growth rate of 7% or higher?” is a much more elusive question to address than “Under what likely conditions would it be reasonable to expect the United States’ annual GDP growth rate to be 7% or higher in the future?”

Futuring Principle 3>> Futuring and visioning offer different perspectives of the future—and these perspectives must complement one another.

This principle draws a distinction between futuring and visioning. Futuring looks at what is most plausibly, even likely, to unfold, given trends, evolving conditions, and potentially disruptive changes. It emphasizes conditions that are partially if not largely out of your own control.

Visioning, on the other hand, involves formulating aspirational views of the future based on what you want to see happen—in other words, how you would like events to play out. Of course, just because you want a certain future to happen does not guarantee that it will.

Strategic planning is a manifestation of visioning. If an organization does not engage in forecasting with all the rigor of historical criticism and good science, strategic planning can be just so much wishful thinking. I find that wishful thinking is alive and well in many corporations and institutions. Both futuring and visioning are necessary and they go hand-in-hand—just be careful to correctly identify which you are doing and why.

Futuring Principle 4>> All forecasts and plans should be well-considered expectations for the future, grounded in rigorous analysis.

Futuring methods fall into three broad, fundamental categories: trend analysis, expert judgment, and scenarios (also known as multi-optional or alternative futures). Historical research methods and criticism play well in all three categories.

As a futurist, I have no data from the future to work with. I cannot know in the present whether a forecast of mine will turn out to be “right,” or “accurate,” or even “prescient,” but I know what I can and cannot convincingly defend as being well-considered expectations for the future.

In this regard, the soundness of philosophical premises and theories, along with familiarity with best research practices, will add much to your foresight credibility and to the usefulness of your futuring activities.

Futuring Principle 5>> There is no such thing as an immutable forecast or a plan for a future that is set in stone.

Forecasts and plans must be continuously monitored, evaluated, and revised according to new data and changing conditions in order to improve real-time frameworks for making long-term decisions and strategies.

A forecast is a well-considered expectation for the future; it is an informed speculation or a working hypothesis, and as such is always a work-in-progress. Forecasts, like historical research, can never be completed. There is always more to be said on the subject as time passes. We must continuously use new and better information to evaluate and modify our expectations for the future.

* * * *

Futurists, like historians, must examine events in a large and complex context. My challenge to futurists, forecasters, strategic planners, and decision makers is to apply a historian’s rigor to their futuring endeavors. Think through a foundational philosophy of the future and theories concerning why some futuring methods are more trustworthy than others.

When generating forecasts, rely upon well-tested theories and best practices to justify your methods and conclusions. Use the five futuring principles offered above to guide your formulation of forecasts as well-considered expectations for the future.

About the Author

Stephen M. Millett is the president of Futuring Associates LLC, a research and consulting company, and teaches in the MBA program at Franklin University. He received his doctorate in history from The Ohio State University. His career at the Battelle Memorial Institute spanned 27 years. He is a past contributor to THE FUTURIST and World Future Review and he was a keynote presenter at WorldFuture 2003 in San Francisco. He may be reached at smillett@futuringassociates.com.

A more thorough discussion of these principles and supporting case histories appear in his forthcoming book, Managing the Future: A Guide to Forecasting and Strategic Planning in the 21st Century, to be published by Triarchy Press, www.triarchypress.com/managing_the_future.

Thomas Bayes and Bruno de Finetti: On Forming Well-Considered Expectations of the Future

The theories of subjective probabilities advocated by eighteenth-century English mathematician and theologian Thomas Bayes and by twentieth-century Italian statistician Bruno de Finetti are very applicable today when we assign likelihood to any future conditions or outcomes.

Bayes (circa 1702-1761) used prior knowledge as a starting point for calculating the probabilities of events. “Prior knowledge” may mean a hunch or an educated guess made in the absence of hard facts. To illustrate this concept, Bayes’s one paper on the topic described how an initial estimate of the positions of balls on a billiard table may lead to more accurate calculations of where they are likely to land next. With increasing information, one may see patterns that both explain unknown causes and anticipate the future.

Bayes’s approach underlies the principle that expectations for the future must always be modified by new information. Yet, some critics contended that probability should rest on data-based statistics rather than on subjective judgment.
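
As a minimal illustration of this updating rule (the notation here is ours, not Bayes’s or de Finetti’s): let $H$ stand for a hypothesis about a future condition and $E$ for newly acquired evidence. The revised expectation is then

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)},$$

where $P(H)$ is the prior (the hunch or educated guess) and $P(E \mid H)$ measures how well the hypothesis accounts for the new evidence. Each revised expectation then serves as the prior for the next round of updating.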

About two centuries later, de Finetti (1906-1985) provided a proof that all probabilities, particularly those concerning the future, are subjective. He concluded that it is better to admit your subjective judgment than to hide it in apparent objectivity. One way to do this is to assign a future event an a priori probability: While it may or may not be prescient, it can reveal how likely you think a future event may be according to your own biases—information that can give you a sense of how objective your forecasting actually is.

Stephen M. Millett

Tomorrow in Brief

Virtual Therapy for Phobias

Simulating an environment or situation that evokes fear is one way that psychologists help treat patients with severe phobias. Now, therapists can deploy a range of virtual world simulations to help their patients.

In a virtual café or pub, for instance, individuals with social phobias can learn to deal with fears associated with being in public, such as being looked at or talked about, according to Delft University of Technology researcher Willem-Paul Brinkman.

While their patients are engaged in the simulation, therapists will be able to observe and record physiological reactions such as heart rate and perspiration, then encourage patients to test alternative behaviors in the simulation.

Source: Delft University of Technology, http://www.tudelft.nl.

Mobile Water and Power

Places without access to clean water and convenient power may soon have a solution to both problems.

Developed by Purdue University researchers, a new alloy of aluminum, gallium, indium, and tin can split salty or polluted water into hydrogen and oxygen. The hydrogen feeds a fuel cell housed in a relatively lightweight (under 100 pounds) portable unit to produce electricity, with steam as a byproduct. The steam purifies the water.

The technology may be used not only for poor, remote villages, but also for military operations, according to Jerry Woodall, distinguished professor of electrical and computer engineering.

Source: Purdue University, www.purdue.edu.

Space Junk Detector

A new European space surveillance system is being developed in the hope of keeping outer space tidy—and space traffic flowing smoothly.

Futurists have long warned that increased human activity in space would have one inescapable byproduct: increased orbiting junk. Space junk haulers, reclaimers, and recyclers were even listed among THE FUTURIST’s “70 Jobs for 2030” (January-February 2011).

Now, Fraunhofer Institute researchers are working with the European Space Agency to develop radar systems with sufficiently high resolution to track the estimated 20,000 orbiting pieces of debris that threaten to damage or destroy any satellite or vehicle they may encounter.

Source: Fraunhofer-Gesellschaft, www.fraunhofer.de.

The Internet of Bodies

As sensors and transmission technologies continue to shrink in size, they will enable us to monitor and manage our own bodies—and connect with others.

As with the so-called Internet of things, an Internet of bodies may soon be built, thanks to work under way in research labs such as the Department of Informatics at the University of Oslo.

Such a “bodnet” could allow frail elderly individuals to live independently at home, as well as improve public health monitoring and prediction systems, as data can be collected from widely distributed populations.

Source: University of Oslo, www.uio.no.

WordBuzz: Protopia

A proposed destination for a desirable future. Protopia, as defined by Wired senior maverick Kevin Kelly, would be a future that is better than today but would not attempt to be a utopia in the sense of a problem-free world.

Technology futurist Heather Schlegel would like to take the concept a bit further. Protopia, she argues, should represent a positive portrayal of the future. Protopians would actively tackle big problems and develop new tools, mind-sets, and paradigms for doing so.

Sources: Kevin Kelly’s blog The Technium, www.kk.org/thetechnium.

Heather Schlegel’s blog Heathervescent, www.heathervescent.com.

Future Active


Pros and Cons of the African Brain Drain

Africans are investing in higher education, but the lack of job opportunities for graduates is helping to drive a brain drain, argued University of California, Davis, economist Philip Martin in a recent online discussion hosted by the Population Reference Bureau. Martin is the author of the new PRB report, “Remittances and the Recession’s Effects on International Migration.”

In his PRB Discuss Online appearance on May 26, Martin pointed out that a lack of opportunities for university graduates with advanced degrees in their home countries gives them little choice but to seek employment elsewhere.

“Many African countries spend relatively more on higher education than on K-12 schooling, which leads to ‘too many’ university graduates who cannot find jobs, prompting them to emigrate,” he said. Martin projects that international migration of both educated and non-educated African workers will continue to increase.

The remittances that these workers send back to family members and loved ones provide a bit of a boost to their home countries’ overall economies, Martin observes in his report. These remittances can help create jobs and fund startup costs for small businesses in the migrant workers’ home countries. However, “sending workers abroad and receiving remittances cannot alone generate development,” Martin warns.

Although these monetary gifts may not counterbalance the loss of skilled (as well as so-called “unskilled”) workers, they make a significant impact. During the online discussion, Martin cited World Bank statistics: In 2010, remittances sent by workers from developing countries back home totaled around $325 billion. Projections for 2011 are even higher, and, according to the World Bank, that figure should increase by $50 billion in 2012. Total remittances are triple the amount of international aid money that developing countries receive.

There are other benefits and drawbacks to the brain drain, as well. “Migration can set in motion virtuous circles, as when sending Indian IT workers abroad leads to new industries and jobs in India, or set in motion vicious circles, as when the exit of professionals from Africa leads to less health care and too few managers to operate factories,” Martin explained during the Q&A session.

Martin recommends that policy makers in countries to which workers are migrating create legislation that protects migrant workers rather than trying to limit migration or restrict migrants’ rights.

Source: Population Reference Bureau, www.prb.org.

Envisioning the Museum of Tomorrow

A daylong workshop on futures thinking, forecasting methods, and strategic planning was offered to museum professionals attending the American Association of Museums’ annual meeting and expo in Houston in May 2011.

“Forecasting the Future of Museums: A How-to Workshop” was organized by the association’s Center for the Future of Museums (CFM). The workshop tied in nicely with the overall theme of the conference, “The Museum of Tomorrow.” Both the forecasting workshop and the general conference focused on ways that museums can evolve and adapt to the various shifts—political, economic, environmental, technological, and cultural—now taking place.

The workshop was led by CFM founding director Elizabeth Merritt; Peter Bishop, director of the Future Studies program at the University of Houston; and Garry Golden, lead futurist at the management consultancy futurethink. The workshop covered both the principles of foresight and museum futures specifically.

“We reviewed the basics of futures studies in the morning, explaining how trends and events can disrupt our path to the ‘expected’ future,” Merritt explains. The leaders also conducted an exercise: “Participants created cards for a forecasting deck in the course of these exercises, which we then used in the afternoon as they learned how to create scenarios to explore potential futures.”

Most of the afternoon was devoted to creating and exploring scenarios. Several wild cards were considered, including the possibility that museums could lose federal tax-exempt status and the occurrence of an event such as a pandemic or terrorist act that “might radically restrict travel or people’s willingness to congregate in public places,” Merritt says.

The workshop closed by looking at ways that museum directors can incorporate forecasting methods such as trend analysis, visioning, and scenario building into their strategic planning. According to the CFM, strategic long-term planning is essential for museum professionals, but short-term planning is currently more prevalent.

Those who couldn’t be at the conference in person had the opportunity to “attend” a virtual component taking place simultaneously. During the CFM’s online presentation, “Practical Futurism: Harnessing the Power of Forecasting for Your Institutional Planning,” several museum directors addressed the need to identify what Merritt describes as “the trends that challenge their local communities … and their museums’ own sustainability”—and to respond to them accordingly.

“The two other activities CFM specifically orchestrated were an ‘Ask a Futurist’ booth, staffed by faculty and students from the University of Houston, and an installation on the future of natural history museums by artist Tracy Hicks,” says Merritt. The art installation, titled Helix: Scaffolding #21211, also explored natural history museums’ projected influence on the Earth’s ecology.

Such events provide clear indication that museums—sometimes considered mere repositories of history—are orienting toward the future as well.

Sources: American Association of Museums, www.aam-us.org.

Center for the Future of Museums, www.futureofmuseums.org.

Futuring Goes to Town

From smart growth to traffic control measures, citizens of the Township of Delta in Michigan recently had the opportunity to voice their preferences on issues affecting their future.

In May 2011, the township’s community development department held a futuring session to gather information on issues surrounding the township’s growth and development. The meeting was part of an effort to review and update the township’s parks and recreation plan, non-motorized transportation plan, and comprehensive land use and infrastructure plan. Around 70 participants offered input to help community developers set objectives and goals for the future.

According to planning director Mark Graham, “participants were asked numerous questions pertaining to the future of the township in relation to urban sprawl, public transit, environmental protection, placemaking, recreational amenities, and the provision of public services.”

Those in attendance voted anonymously, via hand-held electronic devices, on 21 multiple-choice questions such as, “Which one of the following environmental issues do you feel will present the biggest challenge to the quality of life for township residents in the future?”

Afterwards, the results of the poll were tallied. Citizens participating in the exercise clearly saw “loss of open space” as a detriment to Delta Township’s future, followed by “high fuel prices [that] make suburban commuting less desirable.”

“The survey results from the futuring session will be one of the data sources used in compiling goals and policies for the updating of the township’s comprehensive plan,” Graham says.

An online version of the survey augmented the futuring session’s results and enabled those who could not attend to have a voice. The next step will be to schedule a public hearing to gather public feedback on a proposed draft of future plans.

Source: Delta Township Community Development Department, www.deltami.gov.

Future Scope


Accelerated Carbon Emission Rates

Carbon is now being released into the atmosphere nearly 10 times as fast as during a similar period of climate change nearly 56 million years ago, according to a team of scientists led by Lee R. Kump of Penn State.

The researchers examined rock cores from the Paleocene-Eocene Thermal Maximum (PETM) event that were collected in Spitsbergen, Norway. The samples contained a large amount of sediment, enabling the researchers to estimate how much greenhouse gas was released and how much warming would have resulted.

“We looked at the PETM because it is thought to be the best ancient analog for future climate change caused by fossil fuel burning,” Kump explains. The researchers believe that the Earth experienced a warming of 9°F to 16°F during the PETM, accompanied by an acidification event in the oceans.

During the PETM, ecosystems had 20,000 years to adapt to carbon release, but at current rates of emission, “it is possible that this is faster than ecosystems can adapt,” warns Kump.

Source: Penn State University, www.psu.edu.

Broadening the Definition of Arts Participation

Attendance at classical music concerts and art galleries has declined in the United States, but this is not a complete picture of people’s interest or participation in arts activities. The National Endowment for the Arts is broadening its traditional benchmarks for measuring arts participation.

Now, the NEA is looking at a wider variety of artistic genres and including people’s arts participation via electronic media, as well as involvement in personal arts creation. By this measure, some 75% of Americans are active arts participants.

Studying these trends will help promoters, managers, and curators become more engaged with their prospective audiences, such as through innovative arts education programs.

Source: National Endowment for the Arts, www.arts.endow.gov.

U.S. Hispanic Population Is Booming

The Hispanic population in the United States is growing at four times the rate of the total U.S. population. Hispanic numbers increased by 43% (15.2 million) between 2000 and 2010; the nation as a whole grew by 9.7% (27.3 million) during that time, according to the U.S. Census Bureau.

The fastest growth occurred among the largest subgroup: Hispanics of Mexican origin, who represented 63% of the total U.S. Hispanic population—up from 58% in 2000.

Source: U.S. Census Bureau, www.census.gov.

TV Is Going Off the Air

Free, over-the-air TV viewing has been declining steadily since 2005, according to the Consumer Electronics Association.

Sales of “rabbit ears” and rooftop antennas are thus falling, as viewers seem reconciled to paying for television content. Those who do “cut the cable” on pay TV are switching to the Internet rather than to free airwaves.

The digital broadcast transition has offered consumers many new viewing options, including Internet streaming services such as Hulu and Netflix. These options also allow for mobile TV viewing when delivered to smartphones and other devices, including video monitors in cars.

Source: Consumer Electronics Association, www.ce.org.

Agencies Are Unprepared for Climate Change

Floods, fires, tornadoes, and other catastrophes associated with climate change may increasingly result in water shortages, epidemic diseases, skyrocketing insurance costs, and other disruptions.

Federal agencies in the United States are ill-prepared to handle such threats, warns a report by Resources for the Future that draws from experts in economics, ecosystems, insurance markets, and risk management.

Whether or not climate change can be mitigated or reversed, agencies need to be flexible and informed, allowing local actors to respond to crises quickly. The report recommends policies that emphasize innovation in adaptation and suggests crafting legislation that creates synergies across multiple policy areas.

Source: “Reforming Institutions and Managing Extremes: U.S. Policy Approaches for Adapting to a Changing Climate,” Resources for the Future, www.rff.org/adaptation.

The Troubling Future of Internet Search

Data customization is giving rise to a private information universe at the expense of a free and fair flow of information, says the former executive director of MoveOn.org.

By Eli Pariser

Someday soon, Google hopes to make the search box obsolete. Searching will happen automatically.

“When I walk down the street, I want my smartphone to be doing searches constantly—‘did you know?’ ‘did you know?’ ‘did you know?’ ‘did you know?’ In other words, your phone should figure out what you would like to be searching for before you do,” says Google CEO Eric Schmidt.

This vision is well on the way to being realized. In 2009, Google began customizing its search results for all users. If you tend to use Google from a home or work computer or a smartphone—i.e., an IP address that can be traced back to a single user (you)—the search results you see incorporate data about what the system has learned about you and your preferences. The Google algorithm of 2011 not only answers questions, but it also seeks to divine your intent in asking and to give results based, in part, on how it perceives you.
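
A minimal, hypothetical sketch (in Python) of what that personalization can mean in practice; this is illustrative only, not Google’s actual system. A stored profile of a user’s apparent interests nudges the ranking, so two people issuing the same query may see different results first.

# Hypothetical illustration only: a toy re-ranker, not Google's algorithm.
# A stored "profile" of topics the system associates with a user nudges
# the ordering of otherwise identical search results.

def personalize(results, profile):
    """Sort results by generic relevance plus the user's affinity for each result's topics."""
    def score(result):
        affinity = sum(profile.get(topic, 0.0) for topic in result["topics"])
        return result["relevance"] + affinity
    return sorted(results, key=score, reverse=True)

results = [
    {"title": "Egypt travel deals", "relevance": 0.70, "topics": ["travel"]},
    {"title": "Protests in Egypt", "relevance": 0.68, "topics": ["news", "politics"]},
]

# Same query, two different histories, two different top results.
print(personalize(results, {"travel": 0.5})[0]["title"])                 # Egypt travel deals
print(personalize(results, {"news": 0.4, "politics": 0.3})[0]["title"])  # Protests in Egypt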

This shift speaks to a broader phenomenon. Increasingly, the Internet is the portal through which we view and gather information about the larger world. Every time we seek out some new bit of information, we leave a digital trail that reveals a lot about us: our interests, our politics, our level of education, our dietary preferences, our movie likes and dislikes, and even our dating interests or history. That data can help a company like Google deliver search-engine results in line with what it knows about you.

Other companies can use this data to design Web advertisements with special appeal. That customization changes the way we experience and search the Web. It alters the answers we receive when we ask questions. I call this the “filter bubble” and argue that it’s more dangerous than most of us realize.

In some cases, letting algorithms make decisions about what we see and what opportunities we’re offered gives us fairer results. A computer can be made blind to race and gender in ways that humans usually can’t. But that’s only if the relevant algorithms are designed with care and acuity. Otherwise, they’re likely to simply reflect the social mores of the culture they’re processing—a regression to the social norm.

The use of personal data to provide a customized search experience empowers the holders of data, particularly personal data, but not necessarily the seekers of it. Marketers are already exploring the gray area between what can be predicted and what predictions are fair. According to Charlie Stryker, a financial services executive who’s an old hand in the behavioral targeting industry, the U.S. Army has had terrific success using social-graph data to recruit for the military—after all, if six of your Facebook buddies have enlisted, it’s likely that you would consider doing so, too. Drawing inferences based on people like you or people linked to you is pretty good business.

And it’s not just the Army. Banks, too, are beginning to use social data to decide to whom to offer loans. If your friends don’t pay on time, it’s likely that you’ll be a deadbeat, too. “A decision is going to be made on creditworthiness based on the creditworthiness of your friends,” says Stryker.

If it seems unfair for banks to discriminate against you because your high-school buddy is bad at paying his bills or because you like something that a lot of loan defaulters like, well, it is. And it points to a basic problem with induction, the logical method by which algorithms use data to make predictions. When you model the weather and predict there’s a 70% chance of rain, it doesn’t affect the rain clouds. It either rains or it doesn’t. But when you predict that, because my friends are untrustworthy, there’s a 70% chance that I’ll default on my loan, there are consequences if you get me wrong. You’re discriminating.

One of the best critiques of algorithmic prediction comes, remarkably, from the nineteenth-century Russian novelist Fyodor Dostoevsky, whose Notes from Underground was a passionate critique of the utopian scientific rationalism of the day. Dostoevsky looked at the regimented, ordered human life that science promised and predicted a banal future. “All human actions,” the novel’s unnamed narrator grumbles, “will then, of course, be tabulated according to these laws, mathematically, like tables of logarithms up to 108,000, and entered in an index … in which everything will be so clearly calculated and explained that there will be no more incidents or adventures in the world.”

The world often follows predictable rules and falls into predictable patterns: Tides rise and fall, eclipses approach and pass; even the weather is more and more predictable. But when this way of thinking is applied to human behavior, it can be dangerous, for the simple reason that our best moments are often the most unpredictable ones. An entirely predictable life isn’t worth living. But algorithmic induction can lead to a kind of information determinism, in which our past clickstreams entirely decide our future. If we don’t erase our Web histories, in other words, we may be doomed to repeat them.

Exploding the Bubble

Eric Schmidt’s idea, a search engine that knows what we’re going to ask before we do, sounds great at first. We want the act of searching to get better and more efficient. But we don’t want to be taken advantage of, to be pigeon-holed, stereotyped, or discriminated against based on the way a computer program views us at any particular moment. The question becomes, how do you strike the right balance?

In 1973, the Department of Health, Education, and Welfare under Nixon recommended that regulation center on what it called Fair Information Practices:

  • You should know who has your personal data, what data they have, and how it’s used.
  • You should be able to prevent information collected about you for one purpose from being used for others.
  • You should be able to correct inaccurate information about you.
  • Your data should be secure.

Nearly forty years later, the principles are still basically right, and we’re still waiting for them to be enforced. We can’t wait much longer: In a society with an increasing number of knowledge workers, our personal data and “personal brand” are worth more than they ever have been. A bigger step would be putting in place an agency to oversee the use of personal information. The European Union and most other industrial nations have this kind of oversight, but the United States has lagged behind, scattering responsibilities for protecting personal information among the Federal Trade Commission, the Commerce Department, and other agencies. As we enter the second decade of the twenty-first century, it’s past time to take this concern seriously.

None of this is easy: Private data is a moving target, and the process of balancing consumers’ and citizens’ interests against those of the companies that hold the data will take a lot of fine-tuning. At worst, new laws could be more onerous than the practices they seek to prevent. But that’s an argument for doing this right and doing it soon, before the companies that profit from private information have even greater incentives to try to block such laws from passing.

Eli Pariser is the board president and former executive director of the 5-million-member organization MoveOn.org. This essay is excerpted from his book, The Filter Bubble: What the Internet Is Hiding From You. Reprinted by arrangement with The Penguin Press, a member of Penguin Group (USA), Inc. Copyright © 2011 by Eli Pariser.

Finding Connection And Meaning in Africa

A doctor discovers meaningfulness in a simpler, survival-oriented culture.

By Tom Murphy

As a radiologist, I went to Moshi, Tanzania, in June of 2007 to spend three and a half weeks teaching and working with radiology doctors in training at Kilimanjaro Christian Medical Center (KCMC) Hospital at the base of Mt. Kilimanjaro. As I said in my e-mails home, “Every minute was an adventure and every day a safari.”

The medical milieu was one in which we dealt with basic human existence. We encountered a spectrum of problems, from extreme and untreatable infectious disease to the new plague of “Western disease” (diabetes, early obesity, heart attack, and stroke), and the growing presence of cancer. There were also wild cards, such as the curse of inexpensive but toxic Chinese drugs, as well as infants with congenital and rheumatic heart disease who were on waiting lists for surgical repair in India. Disease crossed all ages, from babies to teens to a 26-year-old male with terminal parasitic Echinococcus filling his lungs to old men with testicular tumors the size of a grapefruit.

The hospital was an open-air, Christian, 500-bed facility and a major training center for nursing, anesthesia, radiology, dermatology, and other specialties. It was also a research center for Duke University (AIDS, dermatology, and medical students).

But I went there for another purpose. I had been working with the Millennium Project, a futures think tank in Washington, D.C., for which I had been studying global issues for seven years. Africa, and particularly sub-Saharan Africa, has been at the forefront of many issues—AIDS, poverty, corruption, and so on. I had heard so much about Africa that I was more intrigued by what I could learn than what I could teach.

What do the African people have to teach the rest of us about the future? Besides thousands of years of history and culture, there is the magical attraction of Africa, which is a palpable sense of connection—connection to the past, connection to the earth, and connection to each other. It is simply people expressing themselves honestly while living in a world where meeting the basic needs for food, shelter, clothing, and human kindness fill up the day.

The human kindness is broad. It encompasses the solidarity of survival of everyone and the spirit of the individual. These are a proud people in their demeanor, their voice, their language, and their respect for each other. They are self-confident enough for the young to say Shikamoo (“I place myself at your feet”) to the elders and mean it, and to welcome all into their homes to get to know people and their personalities. When I asked Korakola, one of the radiology residents, to review a talk I was going to give to the staff, she said, “Say whatever you wish and we will decide what to take from it.”

The hospital itself offered me another perspective: Health is not taken for granted here. There is a sense that anyone here could become seriously ill, that death is nearby. And I saw an unspoken acceptance of this reality in the pediatric wards on the faces of women with their ill children. Their thankfulness for any help is profound.

This presence of illness and death is a part of the culture. The AIDS prevalence rate in Tanzania is 8%, and, as in much of sub-Saharan Africa, waterborne infectious disease remains the major cause of death. The cultural side of this is a gratefulness for life. Many say they do not think of the future. Is this hopelessness, acceptance, or contentment?

Along with connection to each other is connection to the earth. There are billboards that say, “In Tanzania, there is no such thing as waste.” Small croplands of maize, beans, and bananas are interspersed with the prairie landscape, and grass cuttings are bicycled to the farmers and bartered for milk. Homes have few appliances, and clothes are washed in available cold-water sources (our maid used the shower). Water is treasured, and safe water mostly has to be created. Food production is local, unpackaged, unprocessed, bartered, shared, and completely consumed. Clothing is simple, beautiful, locally stitched and repaired. And all of this is a point of pride for them as they joked with me about the “fat” people of the West.

So what does this have to do with the future? This is a people tempered by thousands of years of culture, still living in contact with their own human needs and their earth. They nurture grateful, kind, affectionate children with fully developed personalities, and live in community—simply but not simplistically. They live in a time frame that allows for human interaction without rancor. There is simple joy here without the false anxieties of excess.

As I contemplate the future, I wonder if this is not what we are all looking for. Is there an endpoint to future studies? Is there a goal? Could this be it?

As I have written this, I have avoided a few topics: violence (which I did not see here, but it is hard to ignore the slaughters of nearby Darfur), tribalism, corruption, lack of infrastructure, polluted and infected water (and its medical consequences), and the overall environmental degradation that plagues the southern African countries. Anyone would know that these must be dealt with.

And there are other puzzlements. Would these gentle people, if given the opportunity, forsake this beauty of culture for the wealth and “ease” of Western countries? I know that they crave education for their children, and perhaps that is a long-term answer. It is up to the Tanzanians to choose how they will develop.

There is also a global issue here. If the world chooses to develop an ecologically and humanly healthy global footprint, are those of us in wealthy countries willing to do it consciously? In other words, would we choose to not consume, even though we have the money, in order to have a resource-secure Earth?

Perhaps we can solve it all with technology, but there was something grounded about the sense of living that I found in Africa. There, the real, everyday human needs of food, shelter, and community were profound: Everything had meaning.

Tom Murphy, MD, is a physician in Des Moines, Iowa, and has been an associate of the Millennium Project for 10 years, focusing on global health and epidemics (challenge number 8 of the 15 Challenges of the Future). E-mail temmmmm@aol.com.

The Sounds of Wellness

Music may have charms to suppress the savage gene.

One ancient therapy has been gaining increased currency among health practitioners in multiple fields of medicine: music. Doctors and nurses increasingly credit music with demonstrable healing powers and anticipate that it can play a major role in treating or preventing many health conditions.

“Sound was really overlooked as a healing modality for a long time. But more recently because of the amount of studies—and because it’s a low-cost intervention—we’re seeing it being used more in medical centers,” says Brenda Stockdale, director of mind–body medicine at RC Cancer Centers.

Stockdale’s cancer center incorporates music into a six-week program for patients who are recuperating from—or trying to prevent—heart disease, autoimmune conditions, cancer, diabetes, and other illnesses. The program dedicates one full week to therapies involving sound, with other weeks focusing on nutrition, physical therapy, and other traditional health areas. In Stockdale’s experience, patients who incorporate sound-based therapies and music into their health regimens attain the best results.

“Using a technology of sound can round out a wellness program,” says Stockdale.

Her facility also plans to replace televised news in the waiting room with spa-like music. This will soothe patients who might exhibit what Stockdale calls “white coat syndrome”—i.e., being nervous about visiting a doctor.

“That is a great way for medical facilities to start using sound from the moment a person comes in, creating a healthier atmosphere,” says Stockdale.

Music might even influence our genes, Stockdale adds: Her colleague Barry Bittman, medical director of Meadville Medical Center’s Mind-Body Wellness Center, leads sessions in which patients play spontaneous tunes and rhythms on musical instruments. Findings suggest that, following many sessions, the genes each patient carries for heart disease and other conditions are less likely to become active.

We cannot actually change our genes, but outside stresses and conditions may determine whether certain genes will be expressed, Stockdale explains. Music is a healthy influence, and when patients add it to their environs, they raise the odds that genes for health problems will not activate.

“We’re changing the cellular environment. We’re helping healthy genetic expression,” she says.

Stockdale cautions that this study is ongoing, and it is too early to draw hard conclusions from it. But if the preliminary results prove valid, then physicians might eventually design targeted music regimens to actively shape gene expression.

“We will have enough information of the genetic potential, and we will have enough information with all the markers available, to start using music intentionally,” she says.

In the meantime, music therapy is a growing treatment field. Its practitioners use music and sound sequences to help patients manage or relieve chronic pain, immune system disorders, brain damage, mental and emotional disorders, and some developmental disabilities.

Elena Mannes, documentary producer and author, recounts many applications of music therapy in her 2011 book, The Power of Music. For example, she notes that patients in England who had received anesthetics recovered more quickly and had fewer complications if they listened to classical music. Canadian patients who listened to soothing music at regular intervals needed half as much anesthetic as other patients. And at the Beth Abraham Hospital in New York City, patients whose speech was impaired due to a stroke regained some of their speaking abilities after undergoing music therapy.

“To see tears come to the eyes of a neuroscientist as music enables a stroke patient to speak is to witness a moment filled with promise. Science is opening doors to medical applications of music that were unimaginable a decade or so ago,” Mannes writes, adding that “scientists predict a future in which music will routinely be used as a prescription.”

Stockdale leads her patients through sessions using music recordings such as “auto-genics,” which feature acoustics tailored to help the listener’s brain wave rhythm slow down into a more relaxed state. When the brain registers ambient music, Stockdale explains, it secretes chemicals linked to many desirable health effects, including boosting immunity and slowing the heart rate.

“Years ago, we had a mechanistic view of the body. Now we know that the mind and body communicate seamlessly. It’s a constant conversation between mind and body,” Stockdale says. “It is mind and matter affecting each other.”—Rick Docksai

Sources: Brenda Stockdale (interview), RC Cancer Centers, www.brendastockdale.com.

Elena Mannes (interview), Mannes Productions, www.mannesproductions.com. See The Power of Music: Pioneering Discoveries in the New Science of Song by Elena Mannes (Walker, 2011).

Fast Fashion: Tale of Two Markets

Should retailers put the brakes on quick-response manufacturing?

Fashion trends are perpetually changing. A number of large clothing retailers have successfully capitalized on this via a streamlined system involving rapid design, production, distribution, and marketing—what’s known as the “quick-response method.” This method often involves hiring low-paid factory workers in developing countries to manufacture the apparel, keeping prices low for consumers in developed countries.

The quick-response method is an integral part of the “fast fashion” industry, which became prevalent around five years ago and continues to grow. Fast fashion is centered on relatively inexpensive, cheaply made designer knockoffs that go in and out of style faster than the traditional cycle of four fashion seasons.

Fast fashion has grabbed a large share of the apparel market in the United States, according to a study by Iowa State University assistant professor Elena Karpova and graduate student Juyoung (Jill) Lee. This may indicate what could be deemed a long-term fashion trend: Consumers in the United States are gravitating toward lower-priced attire over higher-quality, longer-lasting clothing. But the opposite holds true in the Japanese market.

In their study, Karpova and Lee compared U.S. and Japanese government data on the issue over a 10-year period.

“I think because U.S. consumers have been price conscious, they generated the whole trend in the industry called ‘fast fashion,’” says Karpova. “American consumers want styles to change quickly, and they want to see new merchandise in their favorite stores almost every week—and at affordable prices.”

Karpova and Lee report that, in general, American consumers frequently replace the inexpensive, lower-quality clothes they purchase. The U.S. apparel industry has attempted to compete with more successful overseas chains by manufacturing its own fast fashions, but this has backfired somewhat: Americans believe imported clothing to be of higher quality than clothing produced domestically. To put it another way, American consumers believe that cheap clothing from large European retailers is somehow superior to cheap clothing from large American retailers. (Clothing exports from the United States are not faring much better abroad, either.)

On the other hand, Japanese consumers are willing to pay more for domestically made products, which has resulted in fewer purchases of less-expensive imports. The Japanese apparel industry’s emphasis on more expensive, higher-quality goods distinguishes it from foreign competitors in a positive way.

Marketing efforts help drive these trends, Karpova and Lee note. Clothing stores in Japan target older consumers, who are likely to be more interested in long-lasting quality than in keeping up with the latest styles, while American advertising targets younger consumers interested in just the opposite.

While it’s likely that both trends will continue, there also exists the possibility that they could reverse course. Recently, Swedish “fast fashion” chain H&M debuted with a big splash in Tokyo’s trendy, style-conscious Harajuku and Shibuya districts and is quickly making inroads in Japan. Meanwhile, Parisian designers at high fashion house Hermès have begun emphasizing what is being dubbed “slow fashion.”

Regardless, fast fashion is not as socially or ecologically responsible as clothing that is well made, long-lasting, free of sweatshop labor, and capable of being appreciated for longer than a few weeks, Karpova and Lee conclude.—Aaron M. Cohen

Sources: Iowa State University, www.news.iastate.edu.

“The U.S. and Japanese Apparel Demand Conditions: Implications for Industry Competitiveness,” Journal of Fashion Marketing and Management (volume 15, issue 1), Emerald Group Publishing, www.emeraldinsight.com.

The Gamification of Education

Why online social games may be poised to replace textbooks in schools.

The world has entered a bright new technology-driven era, yet the education system remains rooted in a gray industrial past. At least, this is the argument that a growing number of education professionals are making.

One idea for reform that is steadily gaining popularity involves moving learning almost entirely online and declaring textbooks more or less obsolete. Some suggest taking Web-based learning one step further: Online social gaming may become the educational tool of choice.

While traditional education proponents may be quick to dismiss computer games as inconsequential, others argue that a strong precedent for independently motivated online game-based learning has already been established. Examples include PBS KIDS’s interactive whiteboard games, which teach basic subjects to very young children, and the Learning Company’s hugely popular historical learning game, The Oregon Trail.

Advocates for gaming in education also point to professional training situations where games are increasingly replacing lectures and presentations. Further afield, Jane McGonigal, the director of game research and development at the Institute for the Future, has designed award-winning games to help ignite real-world solutions to pressing social and environmental challenges, such as global food security and a shift to renewable energy.

In their book, A New Culture of Learning (CreateSpace, 2011), Douglas Thomas and John Seely Brown argue that curiosity, imagination, and a sense of play—three aspects integral to learning—are largely missing from the traditional textbook-and-test-based education system. What’s more, the authors point out, these are all present in massively multiplayer online role-playing games (MMORPGs) like World of Warcraft.

In Thomas and Brown’s view, such games “are almost perfect illustrations of a new learning environment.” In the social gaming world, “learning happens on a continuous basis because the participants are internally motivated to find, share, and filter new information on a near-constant basis,” they write. Unlike midterms and final exams, games associate learning with fun and allow for trial and error (basically, the freedom to make mistakes). They can also encourage exploration, collaboration, and the exchange of ideas while removing unwanted pressures that can interfere with students’ ability to learn.

Thomas and Brown further point out that players must do a great deal of reading and research (typically on blogs, wikis, and forums) in order to complete quests in MMORPGs. In other words, well-designed games can also motivate kids to read, the authors believe.

Already, one well-funded experimental New York City public charter school, Quest to Learn (Q2L), has practically eliminated textbook-based learning and largely replaced it with game-based learning. (A sister school, ChicagoQuest, is scheduled to open in September 2011.) Q2L describes itself as “a school that uses what researchers and educators know about how children learn and the principles of game design to create highly immersive, game-like learning experiences in the classroom.” There, basic classes such as math, science, languages, and social studies take place in virtual game worlds. There are bad guys and monsters to defeat along the way.

The school also utilizes game design as a teaching tool, with the goal of creating a solid game-based pedagogical model. “Games work as rule-based learning systems, creating worlds in which players actively participate, use strategic thinking to make choices, solve complex problems, seek content knowledge, receive constant feedback, and consider the point of view of others,” according to Q2L.

That being said, some subjects, such as math and science, are more easily “gamified” than others, such as discussion- and essay-based subjects in the humanities (it would be difficult to parse out the subtleties of, say, To Kill a Mockingbird by teaming up against bad guys in an MMORPG).

Other advantages and disadvantages need to be weighed. One potentially large drawback is that addiction to game play is engineered into the games themselves, according to Scot Osterweil, research director of MIT’s Education Arcade, which develops (and advocates for) educational games. Parents may want their children to study calculus every night, but they might become concerned if that practice became habit-forming, Osterweil noted during a panel on gaming and education at South by Southwest Interactive (SXSW) in March 2011.

Game addiction becomes a much more complex issue when studying and learning are involved, observes Alan Gershenfeld, who serves on the advisory board of the nonprofit organization Games for Change. At another 2011 SXSW Interactive panel, Gershenfeld noted one potential solution for game addiction that is being considered by designers: The characters in the game might be programmed to get tired and ask the kids to take a break.

Then there is what Gershenfeld termed the “chocolate and broccoli question”: How do you convince children to play games that are educational and thus less appealing and hip than games like World of Warcraft? It’s not easy, but it’s doable, he said.

Gamified learning is in the early experimental stage. The jury is still out on whether game mechanics are more effective than linear presentations of educational content with intermittent quizzes. The only thing that can be said with near certainty is that the number of such experiments is poised to increase.—Aaron M. Cohen

Sources: Quest to Learn, www.Q2L.org.

A New Culture of Learning by Douglas Thomas and John Seely Brown (CreateSpace, 2011).

Biomimicry to Fight Blindness

Doctors design neuron-compatible implants to restore lost eyesight.

The human eye and a digital camera are structurally very alike, according to University of Oregon physicist Richard Taylor. For that reason, he hopes to adapt computer chips into components that surgeons might use to restore blind patients’ eyesight.

“The ultimate thrill for me will be to go to a blind person and say, ‘We’re developing a chip that one day will help you see again,’” says Taylor. “For me, that is very different from my previous research, where I’ve been looking at electronics that go into computers.”

Taylor’s idea is to use nano-sized, flower-shaped electrodes topped with photodiodes, which collect incoming light. Each electrode would relay signals from its photodiode to the eye’s own nerve cells (neurons), which carry the signals along a pathway to the brain for processing into visual images.

The “nanoflowers” would be built in a specialized factory. Once they are made, a surgeon could place them in a patient’s eye using a scalpel and other basic surgical tools that are already found in operating rooms everywhere.

A healthy human eye has photodetectors and an optic nerve, explains Rick Montgomery, a doctoral student working with Taylor on the project. When light hits the eye’s surface, the photodetectors respond by sending signals to the optic nerve, which relays them to the brain. The brain then creates sight. Blindness results when neurons are damaged, preventing the signals from reaching the brain. The nanoflower components would bridge these breaks by receiving light through their own photodiodes and sending signals to still-functioning neurons.

Photodiodes are now a common fixture in solar panels. In Taylor’s electrode model, they generate working vision instead of electrical energy. “It’s like putting a panel of solar cells in the eye and using that energy generated by that cell to let the brain know what it’s seeing,” says Montgomery.

Medical labs have previously tried to jumpstart vision in impaired eyes by means of photodiodes, according to Montgomery. But computer chips and neurons don’t fit well together, since neurons are slender and branchlike and computer chips are square. This means that, no matter how finely crafted a computer chip may be, many of its outgoing signals will be misdirected and never reach the neurons; consequently, the person regains only limited vision.

Taylor and Montgomery get around this problem by giving their electrodes literal flower shapes that imitate the neurons’ branching geometry. The nanoflowers’ ends reach out far enough toward the neurons to deliver more of their signals.

“We’re mimicking biology,” says Montgomery. “We’re trying to use what evolution has come up with, with the complex geometry that the neuron has, and we’re mimicking that in our electrode.”

Montgomery began working this summer with Simon Brown at the University of Canterbury, in New Zealand, on experiments with various metals to grow the nanoflowers on implantable chips. The two researchers are refining the production techniques and determining which metals would be most compatible with patients’ bodies. The technology could be ready for testing on people in the next 10 years, Montgomery believes. Taylor and Brown will probably start a company that grows and sells the nanoflowers in conjunction with other nanotech companies.

Brown told THE FUTURIST that nanoflowers could achieve even more ambitious goals than curing blindness: If a nanoflower can interact with human neurons to generate eyesight, it might also work with neurons to restore mobility in a person suffering paralysis, vastly improve the functionality of prosthetic limbs, or undo effects of Alzheimer’s and Parkinson’s diseases.—Rick Docksai

Sources: Richard Taylor (interview), University of Oregon, www.uoregon.edu.

Rick Montgomery (interview), University of Oregon, www.uoregon.edu.

Simon Brown (interview), University of Canterbury, New Zealand, www.canterbury.ac.nz.

Futurists and Their Ideas: Marvin J. Cetron on Terrorism and Other Dangers

By Edward Cornish

To protect the United States against terrorists and other aggressors, Defense Department agencies often call on Marvin J. Cetron and his private consulting firm, Forecasting International.

Marvin J. Cetron, founder of Forecasting International Ltd., was born in Brooklyn, New York, in 1930. His father, an accountant, moved the family frequently during Marvin’s early years, but eventually settled in Lebanon, Pennsylvania.

Cetron attended Pennsylvania State University, where he majored in industrial engineering. After graduating, he got a job with the U.S. Navy Department. It was the start of a 20-year career with the Navy.

His first assignment was to the Naval Applied Science Laboratory in the old Brooklyn Navy Yard, where he specialized in planning and resource allocation—tasks that required a great deal of forecasting.

In addition to his day-to-day work, the Navy sent Cetron to Columbia University to earn a master’s degree in production management. “On my own time!” Cetron notes. “It took three years.”

In 1953, the Navy transferred him to Washington, D.C., where he testified before Congress on the need to raise the pay scale for government engineers. “We could not hire them because companies like Sperry Gyroscope were paying 50% more.” While in Washington, Cetron spent two hours briefing then-senators John Kennedy and Lyndon B. Johnson.

“But the proudest thing I ever did for the Navy Yard was setting up a program at New York’s Pratt Institute in which students could study for six months and then work for the Navy’s Applied Science Laboratory for six months. We hired 50 bright students who had not been able to go on to college. They got their degrees in five years and then worked for the Navy for at least three years. Every one of them graduated.”

In the 1960s, the Navy transferred Cetron first to the Marine Engineering Laboratory at Annapolis and then to the Navy’s Advanced Concepts Group in Washington. The Bureau of Ships had 19 laboratories at the time, and Cetron was in charge of forecasting for all of them.

“A lot of my work was resource allocation,” he says. “We would compare Navy missions with what was going on in science and applied research. Then we would allocate dollars according to the importance of the mission.”

During that time, Cetron planned and carried out one of the largest studies of American science and technology ever conducted. It was called QUEST, for Quantitative Utility Estimates for Science and Technology, and it attempted to anticipate new technologies and how they could be applied to naval and marine missions. The Marine Corps’s Harrier vertical take-off fighter jet and ground-effect landing craft both emerged from this work.

In this period, Cetron also toured NATO countries to explain what the U.S. Navy was doing in forecasting, in an attempt to get other governments to establish their own forecasting programs.

Meanwhile, he spent six years of his rare spare time earning a doctorate in research and development management at American University in Washington. He recalls that his most difficult challenge in those years came from then–Secretary of Defense Robert S. McNamara.

“McNamara was determined to cut government waste by combining duplicate functions,” Cetron reports. “The Army and Marines had to use the same tanks. The Navy and Air Force would use the same airplanes. And we had to combine the service laboratories whose functions overlapped. I was responsible for all the basic and applied research labs. Fortunately, my master’s and doctoral theses had been in the Program Evaluation and Review Technique, or PERT. I had first used it in what later became the Polaris program. For McNamara’s plan, it was just what we needed.”

After 20 years of government service, Cetron retired from the Navy and founded his own firm, Forecasting International Ltd., in Arlington, Virginia. The firm prospered immediately and has remained active ever since. Over the years, Forecasting International has carried out forecasts of the computer industry for Apple and IBM; the hospitality industry for Marriott and Best Western hotels; energy technologies for Siemens; and policy planning for the Indonesian Ministry of Economics, the Kenyan Ministry of Finance, and the Brazilian Ministry of Planning.

In 1977, two years before the fall of the Shah in Iran, Cetron advised his clients to pull their investments out of the country.

“The gap in income and wealth between the richest and poorest tenths of Iranian society was enormous, and that is always a warning of instability,” Cetron explains. “Then the Shah very hastily doubled the salaries of his imperial guard and top officers. He was obviously afraid. Once that happened, we knew the end was near.”

Forecasting Terrorist Activities

In 1994, the U.S. Defense Department selected Cetron and his colleagues to plan and manage its Fourth Annual Defense Worldwide Combating Terrorism Conference.

“The first three meetings of the conference were limited to specialists in the terrorism field,” Cetron reports. “But I had the idea of inviting some general forecasters as well. They might not know much about terrorism, but they understood how to look into the future.”

Cetron’s innovation worked out “spectacularly well,” he says. The meeting report, called Terror 2000, “anticipated virtually the entire course of global terrorism in the years ahead.

“The use of coordinated attacks on distant targets and the probability of a second, much more successful attack on the World Trade Center all appeared in that report,” Cetron notes. “We even predicted the use of hijacked aircraft to attack the White House or Pentagon, but this last forecast was later removed from the report at the request of the State Department, which feared giving terrorists ideas they might not have on their own.

“All of these insights came from the futurists,” Cetron continues. “The subject specialists rejected most of them, but we were sure enough about our forecasts to include them over their objections. I’m sorry we turned out to be right, but it was hard not to feel some satisfaction. When I first suggested consulting futurists, back in the early 1950s, the admiral in charge of the Bureau of Material told me that if I got involved with some of those ‘nuts’ I would lose my security clearance!”

Teamwork at Forecasting International

Cetron attributes much of the success of Forecasting International to his long-time colleague, Owen Davies.

“We met in 1985 when an editor suggested that Owen help me write a book. I had written a number of textbooks by then, but it was not until Owen and I began working together that my writing for the general public began to take off.

“It turned out that Owen is a very capable forecaster in his own right. Over the last 20 years, we have probably carried out 200 studies together, and he has participated fully in all of them. In many ways, he has become the air beneath my wings.

“In the year 2000, Forecasting International undertook a study of a large Asian nation for a government agency aligned with the intelligence community. Part-way through our work, it became clear that trends alone would not be enough for this forecast. We needed a set of scenarios to guide our analysis. Owen prepared them in an afternoon, and they shaped the remainder of our research. When the project was over and he wrote up the result, that effort formed nearly one-fourth of our report.”

Then early in 2004, Davies sent Cetron a brief note about the nature of Islam and the origins of extremist antipathy toward the West. Cetron encouraged him to expand his thoughts, and these eventually were supplemented by a survey of futurists, terrorism specialists, military officers, and industry executives whose companies were likely to be affected. The resulting report became required reading at all three of the major military graduate schools.

More recently, Davies has speculated that, if the United States loses access to Middle Eastern oil, the nation might rapidly develop its shale oil resources and thus become a world leader in oil production. “In the long run,” Davies suggested, “America would grow much richer.”

After the tsunami struck Japan in March 2011, Forecasting International received a request for a quick study of natural disasters that might devastate American cities. Davies’s research revealed that Honolulu has twice been the target of tsunamis vastly greater than the 30-foot wave that struck Japan. One of these waves was 255 feet high, and the other, more than 1,000 feet.

Davies also found that a subduction zone similar to the one responsible for Japan’s tsunami stands at the end of a narrow sea channel leading to Anchorage, Alaska, and that a wave resulting from an earthquake there would likely destroy parts of Oakland and San Francisco, California.

But the most endangered U.S. city, Davies reported, may be St. Louis, Missouri, which faces earthquakes from the New Madrid fault, flooding by the Mississippi River, tornadoes, and—less-natural disasters—massive environmental pollution, and the highest crime rate in the United States.

Cetron now worries a little less about Islamic terrorism but more about home-grown terrorism in the United States, as alienated Americans attack the people and institutions they are angry at. The need to protect America from its own citizens may lead to further intrusions on people’s privacy and to security measures suggestive of the world George Orwell described in his novel 1984.

About the Author

Edward Cornish is the founding editor of THE FUTURIST. E-mail ecornish@wfs.org.

As Blogged: Insights on the Futuring Profession

Futurist bloggers reflect on what it means—and what it takes—to be a futurist.

What does it mean to be a professional futurist? Or a student of futures studies? What are the most necessary skills, the most important attributes, the most integral responsibilities? Here are a few excerpts from bloggers weighing in on the subject on WFS.org.

When They Say You Cannot Know the Future, They Are Planning It For You

Posted by Eric Garland, Wednesday, June 6, 2011

In 15 years of work in the field of foresight, I have learned two things:

1. You can always know more about where your future is heading.

2. When somebody says it’s impossible to know the future, it is usually because they are planning yours for you, and theirs for them.

Notice at no time do I mention predicting the future. … This is a question of knowing who you are, where you are, and where you are going—as an individual, a group, a nation, a species. The people who say we cannot know more about our future through a simple understanding of large, powerful trends are not only wrong, they are doing harm.

… If somebody tells you that you cannot do this, that you should not do this, that it is impossible—ask yourself why they want you to keep thinking the way you are. I bet it is not so that you can be more innovative, flexible, or successful. Perhaps it is because they like the way things are just fine.

… Our world absolutely cries out for the ability to see over horizons, to anticipate the next shock and the golden opportunity.

Top 10 Attributes of FS [Futures Studies] Students

Posted by Alireza Hejazi, Thursday, April 28, 2011

… Become a skillful questioner. As a student of FS you need to ask good questions. Asking good questions at good times is an art, and one of the missions that FS students should accomplish is mastering this art. Different questions can be raised in different areas and times, but a good question is one that is targeted at a definite goal at the most appropriate time.

… Rationalize your expectations. … Our expectations should be based on logical and reasonable foundations. We are not going to solve all of the world’s problems in just one night.

… Learn to teach FS to others. … Lifelong education is needed for all of us, but our college years are [finite]. After some years of education you’ll graduate and perhaps find an opportunity to teach FS to others in both academic and informal ways.

… Develop your personal strategic plan. Not only as a futurist, but also as a [normal] person, you need to develop your personal strategic plan. You may be always asked to forecast for others, but firstly you should learn to [forecast] for yourself. After or during your college years, you should learn how to apply FS tools and techniques in your personal strategic planning. Designing such a plan gives you necessary direction and leads you through your life and your futuring endeavor.

An old saying, “If you are a physician, heal yourself first,” reminds us of the necessity of personal futuring before forecasting for other [people’s] lives and work.

Small Business Futures

Posted by Verne Wheelwright, Sunday, March 20, 2011

I strongly believe that everyone can best learn about futures tools and methods by starting with Personal Futures. This is not just because I am so invested in Personal Futures (which I acknowledge). It’s about learning systems. How do we learn?

We learn best and quickest from what we can experience, and Personal Futures is based on each individual’s life experience. This allows individuals to learn a totally new method or tool and relate that method or tool to personal experience. The result is instant learning, because the experience is already built in. This approach also appears to be effective in large organizations for leadership training in long-term thinking.

About the Authors

Eric Garland is the founder and managing partner of Competitive Futures Inc. and author of How to Predict the Future and WIN!!! (Competitive Futures, 2011).

Alireza Hejazi is founder and developer of the FuturesDiscovery Web site.

Verne Wheelwright is author of It’s YOUR Future … Make It a Good One! (Personal Futures Network, 2010).

Turbulence-Proofing Your Scenarios

By Rick Docksai

Investing in an effective scenario-planning exercise and using the experience wisely can have a big payoff for organizations.

Scenario Planning in Organizations: How to Create, Use, and Assess Scenarios by Thomas J. Chermack. Berrett-Koehler. 2011. 272 pages. Paperback. $34.95.

Plenty of scenario-planning books tell readers how to build scenarios, but some pieces are missing. Few books offer advice for implementing scenarios or for determining if one’s organization is achieving optimal results from them, according to Colorado State University scenario-planning professor Thomas Chermack.

“Pick up any of the popular scenario planning books and check the index for assessment, evaluation, or results. I predict that you will not find these entries,” Chermack writes in Scenario Planning in Organizations.

Trying to pick up where he thinks other books have left off, Chermack introduces “performance-based scenario planning.” In his view, the work does not end with building scenarios. He presents tools for first developing scenarios, then carrying them out and measuring their comparative worth.

Chermack teaches through a running narrative, following a real-life tech firm—he assigns it the alias “Technology Corporation”—that adopted scenario planning in order to better formulate mission strategies, manage team projects more efficiently, and carry out needed internal reforms. A team of the corporation’s staff met for six rounds of planning over eight weeks.

The team compiled copious amounts of data about their organization’s business model, the industry environment, and the critical forces at play. They identified and ranked by strategic importance the factors that were certain and those that were uncertain. As Chermack explains, “When truly uncertain forces have been isolated, energy can be spent trying to understand those forces and how they might play out across a range of possible futures.”

Then they developed sets of scenarios that explored the external environment and how their organization would likely respond to changes within it. In follow-up sessions, they speculated about how their scenarios might change if certain elements in the environment changed. Chermack calls this latter phase “wind tunneling,” in reference to the wind tunnels that aerodynamics researchers use to test new airplane models.

“Turbulence is an environmental characteristic that puts stress on the object in question, be it an airplane or an organization,” he writes.

Chermack lays out specifically how the team gathered data and analyzed it, and how they used Web sites, podcasts, and other digital media to communicate scenarios to each other and to the rest of their organization. He adds further suggestions for how any leader can effectively manage scenario projects and avoid many potential pitfalls.

Technology Corporation’s endeavor ends with the team members agreeing to expand from exclusive production of intellectual property to production of new, useful technology products. Toward that end, they formulate plans for more contracts with R&D partners, selling new technologies, and increasing cross-functional collaboration. Chermack observes that, as they worked, the participants noted approvingly that communication and understanding among the organization’s staff had improved markedly.

“Many expressed surprise that such a simple exercise could have such profound results,” he writes.

Finally, the team members assessed the results post-implementation. They completed short surveys about their satisfaction or dissatisfaction with the exercise and its degree of usefulness, and—more importantly, according to Chermack—what they learned: They detailed what they knew now that they did not know before, and how they and others would function differently following the scenario project. Chermack specifies several ideal survey formats and the kinds of questions that they should include.

“Learning is a prerequisite to change,” Chermack writes. “People cannot change their behaviors, have strategic insights, or create a novel way of seeing a situation if they have not learned.”

Chermack’s assessments also include performance questionnaires on the improved productivity, new ideas, and cost savings that participants expect will result from the scenario projects. Technology Corporation’s participants reported that a $100,000 investment in the exercise generated ideas they expected to bring in $250,000 in new revenue, for a net benefit of $150,000.

“While costs of scenario projects can seem high at first, consider the implications of saving from one major catastrophe or one major strategic insight,” writes Chermack.

Not all organization leaders are convinced that scenario activities hold merit. Chermack’s Scenario Planning in Organizations addresses their doubts head-on. The author acknowledges where prior scenario literature may have left unanswered questions, and he combines background theory and real-world strategizing to fill in the gaps. His book will be a useful addition to the libraries of organization leaders everywhere.

About the Reviewer

Rick Docksai is an assistant editor of THE FUTURIST. E-mail rdocksai@wfs.org.

The Uncertain Future of the English Language

By Edward Cornish

Parlez-vous “Globish”? If English is your only language, you’re probably doing okay now. But you might not be prepared for the future, suggest the authors of Globish and The Last Lingua Franca.

Globish: How the English Language Became the World’s Language by Robert McCrum. W.W. Norton. 2010. 331 pages. $26.95.

The Last Lingua Franca: English Until the Return of Babel by Nicholas Ostler. Walker & Company. 2011. 330 pages. $28.

When a Spaniard talks with a Chinese person, what language do they speak?

Chances are good that it isn’t either Spanish or Chinese. Instead, it’s English, the language they are most likely to have in common.

The rise of English as the leading language for international communications makes a fascinating story, and Robert McCrum, associate editor of the London Observer, tells it well in Globish: How the English Language Became the World’s Language.

McCrum begins with the humble origins of English among the Angles, or Anglii, a people living in what is now Denmark and northern Germany during the days of the Roman Empire.

During the Dark Ages, many Anglii migrated to England, where their German dialect gradually evolved into the English language of today. Along the way, English picked up words from French, Latin, and other languages.

Living on an island, the English people became intrepid seafarers, who carried their language around the world. Today, every continent has a substantial group of English speakers, and English has gained increasing importance as a lingua franca, a language used among people who do not share the same mother tongue. Other languages, such as Greek, Latin, and French, have served this purpose in the past, but English is now the most popular choice.

The need for a lingua franca has intensified in recent years with the growth of travel and international sports, as well as the globalization of the economy. To succeed in today’s world, individuals and governments alike recognize the value of knowing how to speak and read English.

To make things easier for people whose native language is not English, Jean-Paul Nerrière, a French-speaking former IBM executive, has developed a simplified version of English that he calls Globish.

“Globish,” reports McCrum, “starts from a utilitarian vocabulary of some 1,500 words, is designed for use by non-native speakers, and is currently popularized in two handbooks: Découvrez le Globish and Parlez Globish.”

Nerrière believes that Globish will not only improve global communications, but will also limit the spread of English. Many French people are horrified when English words like hot dog and jumbo jet infiltrate their beloved French language.

Globish is not the first attempt to simplify the English language. Back in 1930, the English linguist Charles K. Ogden invented what he called Basic English, which got much publicity after World War II. Basic English had an 850-word list for the beginner’s vocabulary.

Interest in Basic English later faded, but it went on to influence the creation of the Voice of America’s “Special English” for news broadcasting and the “Simplified English” designed for technical manuals.

Globish, Basic English, and other simplifications of English can help non-English speakers acquire a working knowledge of the language, but most people will need to go beyond a stripped-down vocabulary if they want to get the full benefit of the world’s vast English-language resources. So regular users will want easy access to a good dictionary.

Meanwhile, totally artificial languages continue to have advocates. Esperanto, a language developed by Polish scholar L. L. Zamenhof in the nineteenth century, has a vocabulary based on a variety of European languages, so it is more “neutral” than a language based solely on English. However, the world’s intellectual resources are largely in English; relatively little is written in Esperanto or any other invented language.

Surveying the Lingua Francas

In contrast to McCrum, Nicholas Ostler, chairman of the Foundation for Endangered Languages, takes a less triumphalist view of the English language in his recent book, The Last Lingua Franca: English Until the Return of Babel.

Ostler describes the rise and fall of lingua francas through the centuries. Greek, Persian, Latin, French, and many other languages have had their day in the sun but later declined as other languages came into favor.

So it will likely be with English, Ostler suggests in his concluding chapter, “Under an English Sun, the Shadows Lengthen.” However, Ostler admits that “the current status of English is unprecedented.”

He adds that, simultaneously, English “has a preeminent global role in science, commerce, politics, finance, tourism, sport, and even screen entertainment and popular music. With no challenger comparable to it, it seems almost untouchable. Even in China, the only country with a language that has more native speakers, every school child now studies English. And India, set to overtake China in population by 2050, is already trading on an expertise in English inherited from the British Empire and studiously preserved and fostered ever since.”

So, Ostler concludes, “two polar opposites define the extremes of what is possible. International English might grow to become Worldspeak, as a single fully global lingua-franca might be called, available as a universal auxiliary (or indeed primary) language to every educated adult. Or it might retreat as other powers advance, losing its global users and status until it is confined to the lands where it is still spoken as a mother tongue. A third, intermediate, option would see English retained as a world language, but developing on a separate standard from that used by native speakers.”

Ostler offers some intriguing explanations for the rise and fall of languages. When the Romans ruled much of the world, their language became popular with people who wanted to get ahead. When Roman power declined, Latin might have been expected to decline with it. But Latin found new strength as the official language of the Roman Catholic Church, and most books were written in Latin until Johannes Gutenberg developed movable type and books began to be printed in quantity.

McCrum explains in Globish that, before Gutenberg, books were costly, handmade, and rare. Gutenberg’s development of movable type allowed books to be published quickly and cheaply, so people of modest means could buy them. And buy them they did, but they preferred books published in their own languages—French, German, Italian, and so on—rather than in Latin, which most readers found difficult.

During the Enlightenment, Latin got another boost when it became, for a time, the language of science and scholarship. Physicist Isaac Newton had his Principia published in Latin in 1687, and many other scientists published in Latin well into the nineteenth century. But then the tide turned decisively against Latin, because most readers preferred to read texts in the vernacular (their mother tongues). For a time, German became the favored language for scientific publishing, but its popularity in science declined sharply after the Nazis took control in Germany.

Both McCrum and Ostler do well in outlining the history and current situation of English and its rivals, but they fail to tackle the policy issue: Would it be desirable for the world to have a single language, and, if so, should it be English?

From an economic standpoint, a single language might seem highly desirable: Business transactions would be easier, and considerable money could be saved by not having to hire translators. On the other hand, the initial cost of training millions of non-English speakers to be fluent in English would be enormous, and there would later be the problem of finding new employment for thousands of teachers who have long made a living teaching French, Spanish, German, Chinese, and other languages.

About the Reviewer

Edward Cornish is the founding editor of THE FUTURIST.

Books in Brief

Edited by Rick Docksai

Living Libraries

The Atlas of New Librarianship by R. David Lankes. MIT Press. 2011. 408 pages. Illustrated. $55.

A library in this century will be valuable not so much for its book collections as for its community space, argues library information sciences professor R. David Lankes in The Atlas of New Librarianship. He describes a new ethos of “participatory” librarianship taking hold in the profession: Librarians as dynamic facilitators of conversation and knowledge creation in their communities.

Lankes cites one survey in which a majority of teenagers said they wanted their local librarians to run blogs that would review and recommend books, with space for readers to comment. This would enable them not only to see book recommendations, but also to know who was recommending them.

Although librarians aren’t blogging en masse just yet, some are hosting faculty blogs and servers through which users can explore academics’ articles. Also, many are quickly adopting social-networking sites, such as Flickr and Facebook. Lankes further describes how library catalog systems are becoming more user-friendly; they may reach the point where, as with iPhones, users can tailor them for personal use by adding or removing custom apps.

Some libraries construct live social space, such as a café or a music performance center that has a stage with pianos on which musicians can practice. Lankes also discusses how libraries can encourage aspiring local entrepreneurs and cultivate civic awareness among their neighborhoods’ elementary- and secondary-school students.

Lankes wrote The Atlas of New Librarianship with librarians and scholars in mind, but the text covers such a vast array of pertinent subjects that almost any reader—parent, community leader, business professional, student, job seeker, etc.—may find a few topics of personal interest.

Neighborhood-Based Futuring

Collective Visioning: How Groups Can Work Together for a Just and Sustainable Future by Linda Stout. Berrett-Koehler. 2011. 198 pages. Paperback. $17.95.

You don’t have to be a prolific speaker, brilliant writer, or gifted organizational leader to bring about change in your community, says nonprofit director Linda Stout in Collective Visioning. What you need, in her view, is a collective vision that people can rally around and work together to achieve.

Stout’s principle is “collective visioning,” and it means focusing on an ideal of what you want your community to be, rather than on the particular problem that you want to solve. She shares stories of organizations, faith groups, and circles of neighbors and friends who successfully applied collective visioning. For example, residents of a low-income community in Louisiana prevailed on the state’s legislature to close down a juvenile prison that had been abusing its inmates, and then convert the property into a community college.

In another case, after Hurricane Katrina struck in 2005, a group of students in a dilapidated school in New Orleans took charge to plan repairs of its classrooms and buildings. They also implemented brand-new garden plots, outdoor meeting spaces, and energy-efficient architectural designs.

Stout guides readers on how they, too, can carry out collective visioning in their own communities. She explains how to bring together a diverse group of people and get them to interact in an atmosphere of equality and acceptance, and then, through session exercises and activities such as storytelling, inspire them, break down barriers of mistrust, and make sure that everyone is sufficiently heard.

Collective Visioning is a powerful depiction of the positive impacts a motivated group of people can have on their community. Community activists and all who want to improve their neighborhoods’ quality of life may find in it both inspiring examples and useful tips.

Arctic Ice in the Hot Seat

The Fate of Greenland: Lessons from Abrupt Climate Change by Philip Conkling, Richard Alley, Wallace Broecker, and George Denton. Photographs by Gary Comer. MIT Press. 2011. 216 pages. Illustrated. $29.95.

As Greenland’s climate goes, so may go the climate of the rest of the world, according to conservationist Philip Conkling, glaciologist Richard Alley, oceanographer Wallace Broecker, and geologist George Denton. In a firsthand account richly illustrated with dozens of photographs of Greenland’s landscapes and glaciers, they explain how researchers’ findings about the land mass’s geological past and present raise grave concerns about its future—and ours.

Researchers agree that Greenland experienced several major climate shifts in its past, and each one precipitated weather changes and sea-level rise across the globe. Greenland seems to be on the verge of yet another major shift as warming trends melt ever-larger quantities of its ice sheet. The world cannot afford not to pay attention.

Uncertainty lingers over exactly how much warming will take place. Some amount is inevitable, however, and it will surely be higher if humans persist with business as usual, the authors warn. As small amounts of ice continue to disappear from the ice sheet’s edges, the center will lower and warm up. Eventually, warming will imperil all of the remaining ice. The full process would take place over the next few centuries, but coastal cities everywhere could be in jeopardy from flooding within the next few decades. Meanwhile, the changing climate would bring desertification and destructive storm patterns that wreck economies and food supplies on every continent.

The Fate of Greenland beautifully presents the challenges of forecasting climate change and the care that researchers must put into getting it right. It also compellingly explains the serious harms that humanity stands to suffer if it mistakes forecasters’ uncertainty for an excuse to take no action on greenhouse gas emissions. Scientists and non-scientists from all walks of life will find this an eloquent and timely read.

Forward-Thinking Classrooms

The New Digital Shoreline: How Web 2.0 and Millennials Are Revolutionizing Higher Education by Roger McHaney. Stylus. 2011. 247 pages. Paperback. $29.95.

Web 2.0 is second nature to millennial-generation students, but it baffles many educators, notes management information systems professor Roger McHaney. He has good news for the grownups: If they learn to understand Web 2.0 and incorporate it into their classroom practices, they will stay relevant and their students will stay engaged.

McHaney profiles many virtual learning software programs, educational Web sites, and mobile apps, and explains how teachers can use each. He also identifies larger market trends, such as the replacement of printed textbooks by wikibooks and e-books.

He further describes how digital media influence millennials’ learning patterns—e.g., they are more inclined toward collaborating with peers, being creative, and processing multiple streams of information. Over time, he speculates, schools will adapt by basing more course material on projects from previous classes of students and by expanding access to video-editing software, recording facilities, and Internet interfaces. The most effective teachers, according to McHaney, will act less like instructors and more like facilitators, guiding the students as they take charge of their own learning experiences.

Also, mobile Web services will become components of classroom instruction. Students will consult search engines during class discussions and ask professors questions by texting them, while the professors podcast their own lectures for reuse by classrooms everywhere.

As McHaney makes clear, teachers have much to learn. But they have much to contribute, as well. Students need teachers’ help to separate valuable information from useless information, and to use digital technologies properly while avoiding the pitfalls of laziness, sloppy scholarship, and compliant thinking.

The New Digital Shoreline is a fascinating overview of where education is heading. Parents, teachers, and everyone else involved in learning would be well-advised to read this book.

New U.S. Leadership for a New World

The Next Decade: Where We’ve Been … and Where We’re Going by George Friedman. Doubleday. 2011. 243 pages. $27.95.

This century will challenge U.S. leaders to exercise wider foreign-policy vision than ever before, according to George Friedman, founder and CEO of geopolitical intelligence firm STRATFOR, in The Next Decade.

The United States has traditionally considered certain countries more strategically important than others, but in this century practically every country on earth will matter, Friedman argues. Leaders will need to develop a balanced global strategy that focuses not narrowly on combating terrorism but on myriad issues unfolding in all corners of the globe.

Friedman sees major shakeups ahead in U.S. foreign policy. For example, the United States will distance itself from Israel and strive to accommodate Iran; it will also attach far more importance to several countries now regarded as only somewhat important, such as Poland and Singapore.

Across the globe, alliances will shift, Friedman predicts. Germany will build closer economic ties with Russia, while Turkey and the Arab states increasingly eye Iran as a competitor and adversary. Europe will struggle with internal economic rivalries and fade as a global power center. Brazil might become a formidable economic and military influence in Africa.

As Friedman assesses each global region, he details how it will affect U.S. national interests and how leaders should respond. In general, he advises pragmatic policy focused on cultivating balances of power within each region, rather than building democracy or preserving historic alliances.

Friedman displays fresh thinking on many of the oldest, most complex diplomatic problems facing the United States and its allies. Foreign-policy enthusiasts may not all agree with every argument he presents in The Next Decade, but they will surely admire its depth of research and clarity of voice.