Fifteen years ago, Future Survey editor Michael Marien organized a debate at the World Future Society’s annual meeting to address the issue of whether the ongoing Information Technology Revolution would turn out to be all that good for us. In other words, is more information coming at us a good thing or a bad thing?
I’ve always felt that more information is better than less information, but now I’m not so sure. As Marien said, “having much more information is bad for our heads. … It produces infoglut, which may well be the greatest under-studied problem of our time.” (“Information Technology Revolution: Boon or Bane?” THE FUTURIST, January-February 1997, page 11.)
Among the consequences of infoglut that Marien forecast were the devaluation of information and increased stress and workloads. These problems have largely come to pass. So now the question becomes: What do we do about it?
In this issue of THE FUTURIST, business trend analyst Erica Orange offers a “data abacus” to help organizations to assess and leverage the increasingly digital lifestyles of consumers and citizens (“Augmented, Anonymous, Accountable: The Emerging Digital Lifestyle”). One aspect of that digital lifestyle involves our money, so David R. Warwick, an advocate for the cashless society, shows what governments need to do to advance digital transactions that improve society’s safety and security (“The Case Against Cash”).
As for dealing with the data itself, one major issue is keeping it secure. International IT security advisor William H. Saito advocates higher standards at the design level, with better self-regulation of the industry (“Our Naked Data”).
Also, in a special Web exclusive, Eli Pariser, the former executive director of MoveOn.org, warns that more-personalized Internet searching may have hidden side effects (“Escaping the Filter Bubble”). Pariser will be exploring this topic in greater depth for our September-October issue, so stay tuned.
And since the “data deluge” is likely to continue accelerating, analyst Richard Yonck offers insights on a variety of technologies, from nanotech to genetics to search engines, that will keep us from drowning in data (“Treading in the Sea of Data”).
* * *
While I admit to often feeling overloaded by the information coming at me at the World Future Society’s conferences, I always look forward to them because of the variety of perspectives that can be found nowhere else. The opportunity to sift through the data and to tap into the great energy store of new ideas is one not to be missed!
For information about WorldFuture 2011: Moving from Vision to Action, to be held July 8-10 in Vancouver, visit www.wfs.org/content/worldfuture-2011.
—Cynthia G. Wagner
Editor
A computer program that recognizes different levels of urgency in callers’ voices could help crisis centers respond more quickly to the most serious emergencies.
“Stress and negative emotions, in general, have a strong influence on voice characteristics,” according to researchers at Delft University of Technology and the Netherlands Defense Academy.
Rapid talking, variations in pitch (rising or falling intonation, for example), and changes in breathing rates are among the vocal cues that allow the program to gauge urgency and alert responders who may already be overwhelmed with calls during a major crisis. The system may also prove beneficial in military situations.
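The researchers’ actual model is not described here, but the general idea, combining a few acoustic cues into an urgency score that triggers prioritization, can be sketched roughly as follows. The feature names, thresholds, and weights below are illustrative assumptions, not the Delft/NLDA system itself.

# Rough sketch of rule-based urgency scoring from vocal cues.
# Thresholds and weights are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class CallFeatures:
    speech_rate: float       # syllables per second
    pitch_variation: float   # spread of rising/falling intonation, in Hz
    breath_rate: float       # breaths per minute

def urgency_score(f: CallFeatures) -> float:
    """Combine vocal cues into a 0-1 urgency score (assumed weights)."""
    score = 0.0
    if f.speech_rate > 5.0:        # rapid talking
        score += 0.4
    if f.pitch_variation > 40.0:   # strong pitch swings
        score += 0.3
    if f.breath_rate > 25.0:       # fast breathing
        score += 0.3
    return score

caller = CallFeatures(speech_rate=6.2, pitch_variation=55.0, breath_rate=30.0)
if urgency_score(caller) >= 0.7:
    print("High-urgency call: alert responders first")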
Source: International Journal of Intelligent Defence Support Systems (2011, Vol. 4, No. 2), Inderscience, www.inderscience.com.
A followee is someone you follow (or “fan” or “friend”) in an online social network such as Twitter. It is just one of many neologisms submitted by wordsmiths to Merriam-Webster’s Open Dictionary.
New words and clever coinages showcase the rapid and fluid movements of the English language, but they are not necessarily accepted as words in the official Merriam-Webster dictionary (which, incidentally, still spells e-mail with a hyphen, unlike just about everyone else).
More new words at Open Dictionary, http://nws.merriam-webster.com/opendictionary/.
Film lovers on the go may soon have the best of all worlds: 3-D movies delivered to their cell phones.
Combining new mobile radio standards with advanced video coding, researchers at the Fraunhofer Institute for Telecommunications in Berlin have developed a special data-compression technique that permits the transmission of the two video streams required for a 3-D effect.
The 3-D option promises to enhance the consumer experience as more people access YouTube and other favorite Internet sites via their smartphones. The technology also will increase the options for businesses, medical responders, and other communicators who need high-definition mobile imaging.
Source: Fraunhofer-Gesellschaft, www.fraunhofer.de/en/.
Bringing microscopic samples up to where researchers can see and “touch” them is the aim of a new touchscreen developed by researchers in Finland.
The innovation merges Web-based virtual microscopy with supersized (minimum of 46 inches) multitouch displays. The samples are digitized with microscopy scanners and stored on shared image servers. Researchers accessing the image can zoom in and move around it, much like using Google Maps.
The developers believe that the multitouch microscope will enhance interactive teaching as well as virtual research.
Sources: University of Helsinki, www.helsinki.fi. MultiTouch Ltd., www.multitouch.fi.
The creation of albums—collections of songs built around a theme or cohesive musical sound—could have become a lost art in the age of single-tune downloads. But thanks to application development for tablets like the iPad, artists and their record labels now have a new way to offer their fans an affordable augmented musical package.
In his blog for Forrester Research, consumer products analyst Mark Mulligan reports that music giant EMI has released an app version of the band Swedish House Mafia’s Until One album. The app includes the trio’s book and photos, along with lyrics, videos, interviews, discussion forums, games, social-networking feeds, and dynamically updated news content.
“This is a great innovation in music product,” says Mulligan, “but EMI needs to understand that that is exactly what this is: the start of the next generation of music products, not ‘just an app.’”
Source: “Finally a 21st Century Music Product, And It’s From EMI!” blog post by Mark Mulligan, March 24, 2011, Forrester Research Inc.
Graham T. T. Molitor, former vice president and legal counsel for the World Future Society, bases his forecasts on the assumption that successive waves of economic change are the primary forces shaping human life.
“The dominant economic activities around which humanity is centered constitute the single most influential force shaping humanity, its institutions and civilizations,” Molitor asserts.
So if you can anticipate a major economic shift, you can anticipate other changes in human life.
“Past eras have been molded by shifts from agriculture to manufacturing, then to services, and now (among the so-called modern nations) to information era enterprises,” Molitor says. “Now emerging to sustain strong economic growth through the next millennium are at least five economic activities, which are poised to dominate advanced nations. These five enterprises include leisure, hospitality, recreation, and entertainment; life sciences; meta-materials; a new atomic age; and a new space age.”
Each of these economic activities can become the leading wellspring of jobs and gross domestic product in country after country. However, previously dominant activities will not disappear, but will decline in relative importance. Major changes do not occur instantaneously, but rather develop in stages over time, so knowing these stages allows a forecaster to anticipate many things to come.
Molitor’s thinking about change draws on his long experience dealing with public policy issues as a lawyer, political scientist, and consultant for U.S. defense agencies, the White House, and politicians running for office, including presidential candidates Richard Nixon and Nelson Rockefeller.
Governmental policies, Molitor discovered, typically follow a series of steps, starting with people’s ideas about what should be done. Also, a significant change may occur in one nation or region long before it occurs in others.
This observation led him to the idea of forerunner jurisdictions—nations or areas that tend to be on the leading edge of change—as well as laggard jurisdictions, which typically cling to old ways. The jurisdiction may be a city, a province, or even an entire nation.
Molitor began thinking about forerunner nations while touring Europe. He noted that Sweden led the way in adopting social policies, so he made a special study of that nation. Why, he wondered, was Sweden moving ahead rapidly toward visionary goals of providing a better life for its people?
Molitor recognized that political leaders in most countries rarely have the resolve, tenure of office, and popular support to pursue distant future goals. So typically, political leadership succumbs to the pressures and problems of the moment. Commitments to immediate situations leave little room for addressing long-range visionary goals.
Sweden, along with a few other countries, has become adept at utilizing high level blue-ribbon gatherings of experts to come up with serious visionary propositions and targets.
“Sweden’s Royal Commission reports are exemplary,” Molitor asserts. “These undertakings are structured to reflect the viewpoints of experts representing almost every important sector of the nation that is materially affected. Opportunities for additional views also are structured into the rigorous review process. It comes as no surprise that most of the new and novel concepts put forward in these Royal Commission reports are swiftly passed into law.”
Britain’s Royal Commission reports are also exemplary, according to Molitor. In fact, he says these blue-ribbon reports are probably the most carefully thought out and technically proficient of any nation’s, and are always right on the mark intellectually.
“Unfortunately,” he adds, “while Britain’s scholarship is superb and frequently unsurpassed, there tends to be a reluctance to implement a new concept swiftly. As a result, a quick response is often lacking, and hopes may be dashed.”
Though change takes place faster in some nations than others, it is now faster everywhere than it was in times gone by.
The Stone Age dominated human activities for millions of years. Then came—in increasingly rapid succession—the Copper Age, the Bronze Age, the Iron Age, and the Steel Age.
Broadly characterizing contemporary times, it might be fair to identify an ongoing succession of materials technologies, such as Plastics (1900s) and Silicon (1950). Beginning in the current millennium, Molitor foresees at least five potential eras based on a succession of materials technologies: A Bio-materials Age (starting around 2100), followed by a Meta-materials Age (around 2200-2300), Atomic Matter (around 3000), and Anti-matter (beyond 3000).
These expectations for the future offer an opportunity for governments or private industry to plan their future and develop forward-looking strategies. Government policies and other factors can significantly delay, accelerate, or prevent the realization of these anticipated developments. What actually happens will depend heavily on the public policy decisions made by various governments.
Molitor has made a specialty of analyzing how government policy is shaped and has identified 22 stages in the formulation of a policy. At the beginning of the process is the generation and discussion of ideas about what should happen. The ideas lead to innovations, events, and issues (the emergence of topics about which there is a clear difference of opinion). Near the end of the process come governmental regulations, judicial reviews, and, finally, revisions of state or national constitutions.
The capstone of Molitor’s career has been the editing (in collaboration with George Thomas Kurian) of the two-volume, 1,115-page Encyclopedia of the Future (published by Simon & Schuster Macmillan, New York, 1996).
Molitor has now closed his consultancy, Public Policy Forecasting in Potomac, Maryland, and has retired, with his wife Anne, to western Pennsylvania. During the course of his career in forecasting, Molitor amassed a vast library covering a wide variety of subjects. In 2007, he donated some 13,000 of these books to Regent University Library in Virginia Beach, Virginia.
Edward Cornish is the founding editor of THE FUTURIST and author of Futuring: The Exploration of the Future (WFS, 2004).
Graham T. T. Molitor may be reached by e-mail at gmolitor@comcast.net.
For millennia, humans have suspected that life exists on other planets. Science writer Marc Kaufman thinks that it does—and that we might have to look no further than Mars.
FUTURIST staff editor Rick Docksai asks noted scientist Paul Davies for his views on our prospects for finding life elsewhere in the universe. Davies is an Arizona State University cosmologist and the author of The Eerie Silence (Houghton Mifflin Harcourt, 2010).
Rick Docksai: We know that there are billions of stars in our galaxy, and that quite a few of them have planets. We’ve seen a few of those planets. But—just referring to our galaxy—what are the odds of at least one of those planets not only being habitable, but actually being inhabited?
Paul Davies: The odds are not just unknown, but unknowable in our present state of ignorance. The reason is simple: We have no idea what is the probability that life will emerge on an Earthlike planet if you have one. Traditionally, that number was assumed to be very close to zero. Today, it is fashionable to assume it is near one. There is no justification for that fashion. If we knew the physical mechanism that turns non-life into life, we could have a go at estimating the odds. We have no idea what that mechanism was, so guessing how likely it is to happen is completely pointless.
Docksai: If another planet has given rise to microbial life, how likely or unlikely is it that microbial life would give rise over time to intelligent life, as happened on Earth?
Davies: That is a tractable question, because at least we know what the physical mechanism is that evolves simple cells into intelligent beings—variation and natural selection. Darwinism could be used in principle to estimate the odds for intelligence to emerge, but sadly in practice we have no idea how to do that calculation. There seem to be many very unlikely accidents of evolution that preceded the evolution of intelligence on Earth, so it’s easy to believe that the probability is very small. But it may not be.
Docksai: How much information can we currently obtain about planets in other star systems and said planets’ potential to sustain life?
Davies: This is getting better all the time. Kepler [NASA’s program searching for habitable planets] is compiling an impressive inventory of extrasolar planets. Unfortunately, Earthlike planets [i.e., habitable, roughly] are still hard to spot with current technology. But I am convinced there are lots of Earthlike planets out there.
Docksai: Our tools for studying other stars and other planets continue to evolve and enable us to learn more. Looking ahead, what new information do you hope we might learn about the planets in other star systems in the next 10–15 years? How might our capacities to study them expand?
Davies: The holy grail is to measure the composition of the atmospheres of Earthlike planets. We know how to do this, for example with fancy space-based spectrometers combined with systems to blot out the glare of the parent star. But it will be expensive. I don’t see NASA doing it in the next 15 years in the current financial climate.
Docksai: Given the tough economic climate worldwide, space agencies have seen their already-sparse revenues shrink further. How optimistic are you that astronomy will progress in years ahead despite thin funding streams?
Davies: I am cautiously optimistic about finding traces of life on Mars, as both NASA and ESA have good Mars exploration programs. If we find extant life, for my money it would just be Earthlike life that got there from here in impact ejecta (or vice versa).
For more information, see The Eerie Silence: Renewing Our Search for Alien Intelligence by Paul Davies (Houghton Mifflin Harcourt, 2010. 242 pages. $27). Davies’s Web site is http://cosmos.asu.edu.
“There is a very good chance that [a near-future Mars mission] will say definitively, ‘Here are organic compounds,’” says Kaufman, a Washington Post science reporter and the author of First Contact (Simon & Schuster, 2011).
Astronomers believe that, billions of years ago, Mars was warm, had liquid water, and could have supported life. Mars cooled over time, and its surface water disappeared, Kaufman explains. But there could be much more ice below the surface, and maybe some remaining microbes.
“Potentially there was life on Mars three or four billion years ago, when it was wetter and warmer. And as conditions changed, it might have migrated underground,” he says.
Kaufman cites Michael Mumma, a NASA astronomer who in 2009 discovered frequent methane emissions from several surface sites across Mars. This could indicate life, says Kaufman, because on Earth, more than 90% of methane comes from living organisms; the rest is produced by volcanoes and other geological phenomena.
Mumma told THE FUTURIST that, in the two years since his methane discovery, he and his research team gathered data on Mars’s atmosphere from 24 Earth-based observatories. NASA will publish the findings in late summer 2011. Mumma is hopeful that the information will help scientists judge whether Mars’s methane implies Martian life—or just Martian geological activity.
“We’re following up on the methane production to identify if the release repeats annually, but also to measure gases that might be key tests of biology versus volcanism,” says Mumma.
More insights may emerge from ExoMars, a pair of robotic missions that NASA and the European Space Agency (ESA) will co-launch to examine methane and other trace gases on Mars. According to Mumma, ExoMars may find important chemical clues. For example, if it is accompanied by sulfur dioxide, the methane is probably geological in origin; if hydrogen sulfide accompanies it, then the methane almost certainly came from something living.
“We are hopeful that, if we see a methane hotspot, we will look closely for these other gases to see what’s really happening,” says Mumma.
While finding life would be momentous, according to ESA ExoMars Project scientist Jorge Vago, so would finding signs of geological movement. Geological activity would suggest a warm inner core. This means that Mars might have enough heat in zones below its surface to sustain microbial life.
“In either case, it will mean the planet is not dead—either from a geological point of view or from a biological point of view,” says Vago.
The first ExoMars mission will launch in 2016 and, upon arriving at Mars in mid-2017, will deploy a rover that will land on the surface and spend four days analyzing the soil and air. The spacecraft above will continue orbiting and analyzing atmospheric gases for the next two years.
A follow-up ExoMars mission in 2018 will carry a rover with a drilling arm that will tunnel two meters into the soil to obtain dirt and rock samples. Vago says that this drilling will be the deepest that humans have ever pried into the planet.
“This is a 3-D rover,” says Vago. “For the first time, we will be able to look into Mars’s third dimension, that of depth.”
The rover will deposit the samples into a cache. Sometime after 2024, a third robotic mission may retrieve the samples and fly them back to Earth for astronomers to study in person.
“This mission in 2018, you can think of it as the first element of Mars sample return,” says Vago. “The long-term aim is Mars sample return.”
Viewing Mars’s inner core will probably not be possible any time in the near future, Mumma concludes, but he considers studying the gas emissions up close to be the next-best thing. They may reveal much about what is taking place under the Martian surface.
“Detecting these effluent gases would be one of the most direct clues. Other than that, we’re stuck with looking at surface land forms,” he says.—Rick Docksai
Sources: Marc Kaufman (interview), The Washington Post; author of First Contact: Scientific Breakthroughs in the Hunt for Life Beyond Earth (Simon & Schuster, 2011. 213 pages. $26).
Michael Mumma (interview), NASA, Goddard Space Flight Center, www.nasa.gov/goddard.
Jorge Vago (interview), European Space Agency, www.esa.int.
Telecommuters, freelancers, and others without a regular office to anchor their workday may suffer from loneliness or require a more professional environment than a local coffeehouse from which to conduct business. One solution, co-working, may offer some options to improve working lives and productivity.
With co-working, independent and freelance workers voluntarily share an office space, if not necessarily a common employer. Emphasizing cooperation over competition, the arrangement enables remote (i.e., otherwise placeless) workers to create a community, a support system, and a strong professional network among themselves. Co-workers report finding opportunities to collaborate, share skills, and subcontract to one another, and, perhaps not surprisingly, many find they are more productive in such an environment.
Regardless of the type of work that is performed, the co-working spaces themselves can be run on either a nonprofit or a for-profit basis; they typically charge a monthly membership fee (generally inexpensive), and the level of membership can vary depending on how much time one plans to spend at the office.
There were more than 700 such spaces around the world as of March 2011, according to online co-working magazine Deskmag.com. While that number may seem small, it is significant: It represents around twice as many facilities as there were just twelve months prior. The movement is clearly growing fast, but its direction is not entirely certain at this point.
To help gain a clearer picture of co-working’s possibilities, a scenario analysis workshop was conducted by Thomas Chermack, the director of Colorado State University’s Scenario Planning Institute, and Angel Kwiatkowski, the founder of Cohere, a co-working space in Fort Collins, Colorado. Projecting several years ahead, the group at Cohere developed a set of near-term scenarios.
After going through hundreds of sticky notes, the group pinned down what participants agreed were the two most important variables determining co-working’s future. The first was an internal game changer (will a given co-working group hold together?). The second was an external one (is the economy stable or unstable?). The group then created four scenarios based on these two variables:
1. Stable economy/stable community. In the best-case scenario, co-working has gone mainstream and its appeal has expanded. More and more companies recognize it as a viable way to increase efficiency, productivity, and employee satisfaction and well-being. As a result, employees are offered this option upon being hired. Most co-working spaces are staffed 24 hours a day to accommodate everyone’s schedule.
2. Stable economy/unstable community. As in the above situation, everyone is generally doing well career-wise. However, most of the advantages of co-working—including side benefits such as educational classes, guest speakers, social mixers, and other activities—have disappeared. According to this scenario’s authors, “new members often arrived to an empty or near empty space and received no orientation or details about their membership.” In the absence of any genuine leadership or investment in the community, those hired to run these facilities don’t really know what they’re doing or why they’re doing it (aside from the paycheck).
With no real sense of community and little to keep people there, members come and go. As trust and camaraderie evaporate, those who remain erect cubicle walls. There is very little in the way of communication (much less collaboration) in the space.
3. Unstable economy/stable community. Despite the ongoing recession, “fierce loyalty and tight networking bonds” among long-term members enable co-working spaces to flourish.
Meanwhile, corporate co-working franchises begin popping up. Cheaper to join, these offer “more lavish amenities” but lack the same sense of community as the smaller, less-profit-oriented spaces. As a result, they tend to attract a different, less tightly knit crowd, and the turnover rate is higher. Ultimately, the smaller model proves more sustainable, while the larger franchises struggle.
4. Unstable economy/unstable community. Upon arrival, co-workers (if you can even call them that) walk through a turnstile, slide their credit cards through a plexiglass partition, and then choose an empty stall in which to work in isolation. Everything is pay-per-minute. There is little to no interaction between people. There is also little trust and security (you’d be wise to take your valuables with you if you leave the room).
While this scenario may be more satirical than realistic, the point is clear: Without an emphasis on community building, co-working as it exists today will either take a turn for the worse or disappear altogether.
In the end, Kwiatkowski believes co-working will most likely evolve somewhat along the lines described in the third scenario. On the one hand, there will be co-working spaces “run by really passionate people who love what they’re doing,” she says. They may not believe in growth or have any interest in scaling their activity. However, they will be fully engaged and immersed in the communities they are creating.
“On the other side of the continuum,” Kwiatkowski notes, “you have the ‘chain restaurant’ [model] of co-working: people franchising and opening multiple spaces and hiring community managers.” With the support of corporations, partnerships, and sponsorships, these franchises may or may not be interconnected, and may ultimately be more accountable to their investors than their members.
“That’s the division I’m already starting to see,” she says. “We’re polarizing on opposite ends.”
So while many “officeless” workers may increasingly be tempted to join the ranks of co-working in the future, they may want to look for those office spaces that emphasize community over facilities. When it comes to co-working’s future, smaller may be better.—Aaron M. Cohen
Source: Angel Kwiatkowski (interview), Cohere LLC, www.coherecommunity.com.
A new Web-based game developed by researchers in the U.K. endeavors to help users quantify their level of confidence to improve decision making.
World of Uncertainty draws on mathematics, statistics, critical thinking, knowledge management, and educational psychology, and consists of a 10-question quiz designed to help people make better decisions. After answering a question (on religion, politics, general knowledge, and so on), the player is asked to indicate how confident she is in her answer.
This confidence rating determines the number of points awarded, in a manner similar to a gambling game. If the player is supremely confident in an answer, she can stake all of her points on being right, roughly doubling her score if she is correct but earning virtually nothing if she is wrong. If she is unconfident and bets nothing, she earns roughly the same number of points whether she is right or wrong, since some points are awarded simply for answering the question.
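The article does not spell out the game’s exact formula, but a minimal sketch of one confidence-weighted scoring scheme consistent with this description might look like the following; the point values, the linear wager, and the function name are illustrative assumptions, not the actual World of Uncertainty rules.

# Illustrative sketch of a confidence-weighted quiz score (assumed values,
# not World of Uncertainty's actual formula).

BASE_POINTS = 10   # awarded just for answering (assumption)
MAX_WAGER = 100    # points a fully confident player puts at stake (assumption)

def score_answer(correct: bool, confidence: float) -> int:
    """Score one answer; confidence runs from 0.0 (bet nothing) to 1.0 (bet everything)."""
    wager = confidence * MAX_WAGER
    if correct:
        return round(BASE_POINTS + 2 * wager)        # the wager pays double
    return round(BASE_POINTS * (1 - confidence))     # overconfidence earns almost nothing

print(score_answer(True, 1.0))   # 210: confident and right
print(score_answer(False, 1.0))  # 0: confident and wrong
print(score_answer(True, 0.0))   # 10: unconfident and right
print(score_answer(False, 0.0))  # 10: unconfident and wrong -- same either way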
At the end of the quiz, the player receives a score for the number of questions she answered correctly and—much more importantly—how her knowledge on the subject related to her confidence level. “The more quizzes you will try, the more accurate you will get in estimating and expressing your confidence,” reads the message at the end of the ordeal.
Later iterations of the game may involve graphics or more action-packed game play.
“Whether the choices facing us are simple or complex, a greater awareness of uncertainty and of our own biases can improve the quality of our decision making. We believe there’s real potential for people to acquire that awareness through computer games,” says David Newman of Queen’s University Belfast, one of the Web site creators.
In a paper that the team submitted as part of its funding request, the researchers outline their goals for the project and describe why video games are ideal tests for human reactions to uncertainty. In the game environment, “the play is not limited to following a pre-written story.… A player may explore, gather evidence, estimate risks, make decisions, and see the consequences of these decisions.” Faced with immediate effects of over- or underconfidence, such as a high or low score, players gain the ability to grasp their propensity to commit errors of false self-assurance.
“Our vision is of a society transformed from one in which most people prefer simple stories, and avoid discussing uncertainty, to one where a large proportion of the population has the skills of exploring uncertain evidence and can estimate uncertainty,” the researchers write.
—Patrick Tucker
Source: The Engineering and Physical Sciences Research Council, www.epsrc.ac.uk.
Recent events in Japan have sparked concerns about freshwater availability in many parts of the country. Fortunately for Japan, the nation is also the world’s leader in water filtration.
Japanese manufacturer Nitto Denko is currently marketing what it claims is the world’s most efficient desalination filter, the SWC6 MAX, a reverse-osmosis nanomembrane system released in 2010. According to the company, the filter can remove “salt and other minerals as well as bacteria and viruses from seawater, and lower the 3.5% of salt in seawater to 0.0075%”—lower than the salt content of freshwater. The SWC6 MAX was invented by Hisao Hachisuka and is currently in use in a water treatment facility in Australia.
At present, SWC6 MAX water is rather expensive: The cost of filtering an acre-foot is more than $650, because of the amount of energy required to push water through the filter. That price tag is beyond the means of the world’s poorest inhabitants but within reach for the Japanese. The company has not said that it will be using the technology in the areas affected by the March 2011 tsunami or radiation. However, numerous other technologies for effective wastewater filtration, including ozone injection and nanofiltration, could be used in Japan.
One of the more interesting water purification technologies to emerge recently is electro-filtration through silver nanowire fiber. The silver nanowire mesh, connected to a 20-volt power source, zaps bacteria and pathogens, making the water drinkable. This method, pioneered by Stanford University professor Yi Cui, has been shown to be more effective and less energy-intensive than other filtration methods that require large amounts of energy to push water through filters.
At present, silver nanowire filtration is also cost-prohibitive for the world’s poorest regions, due to the high cost of constructing silver nanowires. But in 2010, Taiwanese chemist Yi-Hsiuan Yu patented a process for mass production of silver nanowires. If this method is effective, it could greatly reduce the cost of production for these nanowires, making Yi Cui’s filter more practical for the world’s poor. Korean firm Toptec has patented the world’s first nanofiber mass production system.
The Global Water Recycling and Reuse System Association of Japan has a large, government-funded mandate to “develop [a] comprehensive water recycle system and expand the system, making the most use of Japanese technologies and know-how.” The Japanese government sees water filtration and green infrastructure as a key export area for the future.
The United Nations estimates that 2.8 billion people will live in a water-stressed environment by 2025. The world’s poorest people need access to cutting-edge desalination technologies, coupled with advanced filtration, to increase the availability of freshwater and to remove toxins from wastewater. Wastewater recycling on the community level is essential for water stability, many experts contend. According to the Japanese government, there will be a $1 trillion market for safe water reclamation and recycling by 2025, so the potential private client list is considerable.
—Patrick Tucker
Sources: Nitto Denko Corporation, www.nitto.com. Stanford University, http://stanford.edu. Personal interviews.
Over four days in March 2011, conference goers at StrategyNZ: Mapping our Future envisioned the most preferable long-term future for New Zealand and searched for innovative ways to meet future challenges. The conference, sponsored by the Sustainable Future Institute, was held in Wellington.
StrategyNZ participants discussed the past, present, and future of the country, looking ahead almost half a century to the year 2058. With an eye on policy, attendees explored a variety of issues, including health, education, technology, the environment, and the economy.
The event kicked off with a two-day futures-studies course conducted by Peter Bishop, associate professor of strategic foresight and coordinator of the graduate program in futures studies at the University of Houston. Bishop gave overviews of futuring and forecasting techniques as well as strategic planning methods.
Afterwards, during a workshop held over the next two days, participants broke into groups of 10 to create “strategy maps”—the strategic foresight tool from which the conference gets its name.
Strategy maps are diagrams that graphically depict a set of goals and strategies. These visual tools can help people see more clearly the ways in which objectives, resources, and various other facets of a given strategy interrelate with one another, providing a clearer sense of cause and effect and the best way to move forward with a plan of action.
At the conference’s conclusion, workshop groups had an opportunity to present their strategy maps to several members of New Zealand’s Parliament.
Coordinators plan to present the results of StrategyNZ at the World Future Society’s annual meeting, WorldFuture 2011: Moving from Vision to Action, in Vancouver, Canada. Co-presenter Wendy McGuinness, chief executive of the Sustainable Future Institute, chairs both the New Zealand chapter of the World Future Society and the Millennium Project’s New Zealand node.
Audio files and PowerPoint presentations from most of the speakers are available for free download on the Sustainable Future Institute’s Web site, as are the various workshop groups’ outputs.
Attendees were also involved in the preparation of an e-book: StrategyNZ: Mapping our Future Reflections. This followed the publication of two reports (also available online): StrategyNZ: Mapping our Future Workbook and StrategyNZ: Mapping our Future Strategy Maps. Wendy McGuinness hopes to publish a further book next year, which currently has the working title Exploring New Zealand’s Long-Term Future.
Sources: Strategy NZ, http://strategynzsite.info. Sustainable Future Institute, www.sustainablefuture.info.
The European Commission–funded iKnow (Interconnecting Knowledge) project is reaching out to those in the larger futuring and foresight community for help evaluating key wild cards and weak signals in its extensive, ever-expanding database. The project calls its online evaluation an “international Delphi 2.0 survey.”
In classical Delphi polling, groups of experts are surveyed individually and anonymously in a series of rounds. In each subsequent round, they are presented with a summary of the previous responses and work to narrow them down, for example by rating them. The process may continue until the experts reach consensus on a given issue, if that is the end goal.
In a Real-Time Delphi survey, the entire process takes place online and is opened up to anyone interested—it’s a method of crowdsourcing, in a way. The series of rounds is eliminated (but anonymity is preserved), and responses are available for viewing as soon as they are submitted.
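To make the mechanics concrete, here is a minimal sketch of how a Real-Time Delphi question might pool anonymous ratings and expose a running summary the moment responses arrive; the class, the statistics chosen, and the rating scale are assumptions for illustration, not iKnow’s actual platform.

# Minimal sketch of Real-Time Delphi aggregation: no rounds, anonymous
# ratings can be revised at any time, and the summary updates immediately.
# Structure and statistics are illustrative assumptions.

from statistics import median

class RealTimeDelphiQuestion:
    def __init__(self, text: str):
        self.text = text
        self.ratings = {}  # anonymous participant id -> latest rating

    def submit(self, participant_id: str, rating: int) -> None:
        """Record or revise a rating; there are no discrete rounds."""
        self.ratings[participant_id] = rating

    def summary(self) -> dict:
        """Running summary shown to participants as soon as they respond."""
        values = sorted(self.ratings.values())
        return {"n": len(values), "median": median(values), "range": (values[0], values[-1])}

q = RealTimeDelphiQuestion("Likelihood of wild card X by 2030 (1-10)?")
q.submit("expert-a", 3)
q.submit("expert-b", 7)
print(q.summary())            # {'n': 2, 'median': 5.0, 'range': (3, 7)}
q.submit("expert-a", 5)       # expert-a revises after seeing the summary
print(q.summary())            # {'n': 2, 'median': 6.0, 'range': (5, 7)}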
The iKnow Project’s database is intended to aid the practice of studying, understanding, and anticipating the wild cards and weak signals that are “potentially ‘shaping and shaking’ the future of science, technology, and innovation.”
Wild cards are widely understood as low-probability, high-impact events; iKnow divides them into three categories: intentional, unintentional, and nature-related “surprises.”
Weak signals, on the other hand, are trickier to define. The iKnow project classifies them as “unclear observables warning us about the probability of future events (including wild cards).” Examples the organization gives include current policies, past wild cards, and emerging issues, which reveals just how wide-ranging the term can be.
The database’s categories range from information and communication technologies to social sciences and the humanities.
Source: iKnow, www.iknowfutures.eu.
This past March, South By Southwest Interactive (SXSW) attendees in Austin, Texas, had the chance to attend Plutopia 2011: The Future of Play. Held at the Mexican American Cultural Center, the event showcased configurable, experiential, and interactive works—art installations, projections, demonstrations, games, live performances, and more. The works on display exemplified some of the ways that emerging technologies are being incorporated into the arts.
San Francisco–based game manufacturer Sifteo showed off its interactive gaming cubes, originally developed at the MIT Media Lab. (Co-founder David Merrill was a featured speaker at Plutopia 2011.) This electronic gaming system features small blocks with color LCD screens, built-in wireless communication, and motion sensors that enable them to respond to players as well as other Sifteo Cubes. According to Merrill, the company “aims to empower people to interact with information and media in physical, natural ways that approximate interactions with physical objects in our everyday lives.”
In the courtyard of the cultural center, French artists Grégory Lasserre and Anaïs met den Ancxt, who collaborate as Scenocosme, displayed a hanging garden of interactive musical plants. These digitally enhanced “hybrid” plants, which they call Akousmaflore, respond to motion and touch by making different sounds.
Nearby, the Edge of Imagination Station invited partygoers to create digital stop-motion animation sequences using various toys, props, and chalk drawings. Across the courtyard, the University of Texas Department of Computer Science showed off its robot soccer team. (Robot soccer was reported about in the January-February 2011 issue of THE FUTURIST.)
Other highlights included the improvisational group Text of Light (featuring Lee Ranaldo of seminal art-punk band Sonic Youth and artist/composer Christian Marclay, among others), who spontaneously composed moody soundscapes to accompany projections of experimental filmmaker Stan Brakhage’s abstract films. Futurist, design critic, and science-fiction author Bruce Sterling was also on hand to give the opening speech, as he has done at Plutopia’s previous SXSW parties.
Founded in 2007, Plutopia is a future-focused entertainment company that creates what it calls “sense events.” The company, whose name is a “mash-up” of pluralist utopias, looks ahead to a technologically enhanced, interconnected future that is worth getting excited about.
Source: Plutopia, www.plutopia.org.
Our Twitter followers responded to our quest to find robot- and alien-free scifi with a very diverse reading and movie-renting list of suggestions. And they reminded us that good science fiction also doesn’t have to be about the future, though of course that is our preference. ;)
@WorldFutureSoc: Can anyone name a good #scifi story (book, play, movie) that does not involve aliens or robots?
@ebonstorm: The Foundation Series—No aliens or robots are harmed in the making of these three books by Isaac Asimov. #scifi
@Geofutures: It has robots.
@heathervescent: I don’t think these have either: The Stars My Destination. Trouble on Triton.
@Geofutures: 11 of my top 20 #futurist #movies involve neither (Blade Runner more clones than robots) http://bit.ly/cr7Km1 #scifi
@WorldFutureSoc: @Geofutures Thx for link to www.Futuristmovies.com! Just bookmarked it.
@heathervescent: I believe you can substitute the Foundation series as God Emperor Leto’s unwritten 10000 yr rule. #scifi #mashup
@heathervescent: If that last tweet was too cryptic it was a Dune + Foundation Series mashup. :)
@WorldFutureSoc: Yeah, it kinda was! ;)
@heathervescent: Its a challenge to fit two major scifi series’ mashed up in a 140char tweet. :)
@jasonporath: Primer. The Fountain.
@jlindenger: I love “The Fountain”
@MattCCrampton: Easy, any book by William Gibson. Start with Neuromancer.
@JasonSwanson: OHHH good suggestion!
@jlindenger: Sharon Shinn’s Samaria series which starts a bit more fantasy and shifts to scifi as things are explained.
@jlindenger: Octavia Butler’s Parable of the Talents/Parable of the Sower are probably among the best in that category.
@jlindenger: I think she takes issue with labeling it SF, but... @margaretatwood ‘s “The Handmaid’s Tale”
@johnadamyates: What about giant bunny rabbits?- Donnie Darko
@heathervescent: Here’s the FTW submission: Clockwork Orange. (via Andrew on FB) Extra Credit: Solaris.
@jotaace: Those who mentioned Solaris: It’s a story about a BIG alien.
@Geofutures: And an unusually realistic one http://bit.ly/guBpEJ
@ahnie: No idea if they’re “good” but CSI (TV)-based should qualify as #scifi w/o aliens/robots
@WorldFutureSoc: @ahnie My mind-set is toward the future, so I sometimes forget that not all #scifi is set in the future.
@ahnie: I *think* Firefly/Serenity was w/o aliens except (d)evolved humans. Can’t remember if any robots. Clearly time to re-watch.
@jlindenger: bazinga!
@ahnie: had to check Urban Dictionary for “bazinga” Is it definition 2, 5, or 6? #TragicallyUnhip
@heathervescent: Just search for Sheldon and Big Bang Theory.
@ahnie: BBT & Sheldon are UD def #1. Sometimes being TV-less is el-sucko. #ThemeSongIsNowMyEarworn
@WorldFutureSoc: The #scifi chat took on a life of its own! #BBT is science, fiction, robotless except for Sheldon’s “virtual presence”
@OscarMopperkont: I believe 1984 by Orwell isn’t mentioned yet #scifi
@jlindenger: there’s a ton. Enjoy!
@WorldFutureSoc: Great list! Thx so much! (#scifi books/movies w/o robots, aliens)
@WorldFutureSoc: smacks forehead for forgetting great movies like Jurassic Park and WarGames.
Follow the World Future Society’s Twitter page and search all of our tweeps at http://twitter.com/WorldFutureSoc.
• Air by Geoff Ryman (St. Martin’s Griffin, 2004). @justinpickard
• The Chrysalids by John Wyndham (Michael Joseph, 1955). @OscarMopperkont
• A Clockwork Orange by Anthony Burgess (William Heinemann, 1962; also a film). @heathervescent
• Dune by Frank Herbert (Chilton Books, 1965; also a film). @JITHyperion
• The End of Eternity by Isaac Asimov (Doubleday, 1955). @jimmath
• Fahrenheit 451 by Ray Bradbury (Ballantine Books, 1953; also a film). @Whaatson
• Flashforward by Robert J. Sawyer (Tor Books, 1999). @Geofutures
• Glasshouse (Orbit [U.K.], Ace [U.S.], 2006) and Halting State (Orbit [U.K.], Ace [U.S.], 2007) by Charles Stross. @jlindenger
• The Handmaid’s Tale by Margaret Atwood (McClelland and Stewart, 1985; also a film). @jlindenger
• The Moon Is a Harsh Mistress (1966) and I Will Fear No Evil (1970) by Robert A. Heinlein (G. P. Putnam’s Sons). @Ouroboros4ever, @jlindenger
• Neuromancer by William Gibson (Ace Books, 1984). @MattCCrampton
• Nineteen Eighty-Four by George Orwell (Secker and Warburg, 1949). @OscarMopperkont
• Parable of the Sower (Four Walls Eight Windows, 1993) and Parable of the Talents (Seven Stories Press, 1998) by Octavia E. Butler. @jlindenger
• River of Gods by Ian McDonald (Simon & Schuster, 2004). @abranches
• Samaria series by Sharon Shinn (e.g., Archangel, Ace Books, 1997). @jlindenger
• Seeker by Jack McDevitt (Ace Books, 2005). @heathervescent
• Snow Crash (Bantam Books, 1992) and The Diamond Age (Bantam Spectra, 1995) by Neal Stephenson. @jlindenger
• The Stars My Destination by Alfred Bester (Sidgwick and Jackson, 1956). @heathervescent
• Trouble on Triton by Samuel R. Delany (Bantam Books, 1976). @heathervescent
• Blade Runner (directed by Ridley Scott, 1982; based on the novel Do Androids Dream of Electric Sheep? by Philip K. Dick). @Geofutures
• Brazil (1985) and Twelve Monkeys (1995), directed by Terry Gilliam. @ryonck
• Children of Men (directed by Alfonso Cuarón, 2006; based on the novel The Children of Men by P. D. James). @jotaace
• Code 46 (directed by Michael Winterbottom, 2003). @justinpickard
• Donnie Darko (directed by Richard Kelly, 2001). @johnadamyates
• Eternal Sunshine of the Spotless Mind (directed by Michel Gondry, 2004). @jasonporath
• The Fountain (directed by Darren Aronofsky, 2006). @jasonporath, @jlindenger
• Frau im Mond (Woman in the Moon) (directed by Fritz Lang, 1929). @jotaace
• Gattaca (directed by Andrew Niccol, 1997). @JasonSwanson, @justinpickard
• Jurassic Park (directed by Steven Spielberg, 1993; based on a novel by Michael Crichton). @WorldFutureSoc
• Just Imagine (directed by David Butler, 1930). @jotaace
• Kosmicheskiy Reys (Cosmic Rays) (directed by Vasili Zhuravlov, 1936). @jotaace
• Minority Report (directed by Steven Spielberg, 2002; based on the short story “The Minority Report” by Philip K. Dick). @ryonck, @WorldFutureSoc
• Primer (directed by Shane Carruth, 2004). @jasonporath
• Serenity (directed by Joss Whedon, 2005). @transhumanistic, @ahnie
• Stereo (1969), Crimes of the Future (1970), and eXistenZ (1999), directed by David Cronenberg. @jotaace
• Things to Come (directed by William Cameron Menzies, 1936; based on The Shape of Things to Come by H. G. Wells). @jotaace
• WarGames (directed by John Badham, 1983). @WorldFutureSoc
• Big Bang Theory (created by Chuck Lorre and Bill Prady, CBS, 2007). @jlindenger, @heathervescent, @ahnie, @WorldFutureSoc
• CSI (Crime Scene Investigation) (created by Anthony E. Zuiker, CBS, 2000). @ahnie
• Firefly (created by Joss Whedon, Fox, 2002). @transhumanistic, @ahnie
The Net Delusion: The Dark Side of Internet Freedom by Evgeny Morozov. PublicAffairs. 2011. 408 pages. $27.95.
In 2009, reports that dissidents in Iran were using Twitter prompted many Western commentators to proclaim that social media was fomenting a democratic Iranian revolution—only to be disappointed when the “revolution” fizzled and died. New America Foundation fellow Evgeny Morozov attributes the commentators’ misplaced hopes to cyber-utopianism, a widespread but naïve expectation that the Internet will empower oppressed peoples and advance democracy.
According to Morozov, cyber-utopians failed to anticipate that authoritarian regimes would also benefit from the Internet. In fact, such police states as Belarus and Iran pay bloggers to spread propaganda and frequent social-networking sites to monitor dissidents. Other states, such as Russia, disseminate crass entertainment through video-sharing sites to distract viewers from social and political issues.
Morozov debunks many widely held assumptions about how politically repressive states and their opposition both work. He follows with advice for democratic lawmakers who want to help the dissidents.
Pro-democracy lawmakers must engage with the Internet, he says, but they must observe how it impacts different countries in different ways and shape their policies accordingly: What works in Tunisia might not work in Burma. Also, they must never treat Web-based platforms as substitutes for diligent, committed human activists who mobilize people to action in real life.
The Net Delusion is a sobering assessment of the limits of Internet activism. It offers practical advice for policy makers and nonprofit activists across the globe.
—Rick Docksai
The only place where the West is still unabashedly eager to promote democracy is in cyberspace. Enthusiastic belief in the liberating power of technology, accompanied by the irresistible urge to enlist Silicon Valley start-ups in the global fight for freedom, is of growing appeal to many policy makers. In fact, many of them are as upbeat about the revolutionary potential of the Internet as their colleagues in the corporate sector were in the 1990s.
We shouldn’t give the Internet too much credit, however, and we should probably give it credit for some of the negative things that are happening. We shouldn’t be biased and just look at the brighter side. We should be more critical in thinking about its impacts.
The idea that the Internet favors the oppressed rather than the oppressor is marred by what I call cyber-utopianism: a naïve belief in the emancipatory nature of online communication that rests on a stubborn refusal to acknowledge its downside.
Cyber-utopians ambitiously set out to build a new and improved United Nations, only to end up with a digital Cirque du Soleil. Failing to anticipate how authoritarian governments would respond to the Internet, cyber-utopians did not predict how useful the Internet would prove for propaganda purposes, how masterfully dictators would use it for surveillance, and how sophisticated modern forms of Internet censorship would become.
Fidel Castro’s Twitter page has been around for a few years. But very few people in Cuba own computers, because the Cuban government restricted the sale of computers to its population, so most of them just don’t have the equipment to tweet. They don’t have Internet cafés. They do have a small blogging culture, a few bloggers who have to be very careful. The government modified the restrictions on computers only a short while ago, so I wouldn’t expect Facebook or Twitter to matter much in Cuba in the next five to ten years.
Take a closer look at the blogospheres in almost any authoritarian regime, and you are likely to discover that they are teeming with nationalism and xenophobia. Things don’t look particularly bright for the kind of flawless democratization that some expect from the Internet’s arrival.
Likewise, bloggers uncovering and publicizing corruption in local governments could be—and are—easily coopted by higher-ranking politicians and made part of the anti-corruption campaign. The overall impact on the strength of the regime in this case is hard to determine; the bloggers may be diminishing the power of local authorities but boosting the power of the federal government. Authoritarian regimes in Central Asia, for example, have been actively promoting a host of e-government initiatives.
Normally a regime that fights its own corruption has more legitimacy with its own people. From that perspective, I wouldn’t go so far as to say that the Internet is making the government more accountable, but I would say that it is making local officials more responsible.
The government may be eliminating corruption in the provinces, making the people happier, but that doesn’t mean that they’re eliminating corruption at the top. So the distribution of corruption might be changing. But I do think government might use the Internet to solicit more citizen input. That won’t undermine the government. It will bolster its legitimacy.
It’s not paradoxical. The fact that the government is soliciting their opinions does not mean that the government is listening to them. It wants to give the people the impression that it is listening to them. In some sense, it creates a semblance of democratic institutions. It’s all about creating a veneer of legitimacy.
Digital activists in the Middle East can boast quite a few accomplishments, particularly when it comes to documenting police brutality, but I don’t think the Internet will play much of a role in Middle Eastern democratic revolutions compared with other factors. The things to watch for are how the new leaders shape the new constitutions and how they deal with the elements of the previous regimes. All those things are far more important than what happens online. I wouldn’t bet that the Internet will be a great help.
As for the extent to which these new regimes become democracies—it’s a wild guess for anyone, me included. They have a chance, but outcomes will depend upon many factors, including internal policies and external conflicts. I don’t buy into the cultural notion of Arabs not being ready for democracy. Democracy in the Middle East may succeed. But it will depend on how they work with the existing challenges.
The revolts were driven by people who had economic grievances and were politically oppressed. They turned to the Internet to publicize their grievances and their resistance. The fact that new media and blogs were present probably set a different tempo for the revolts. If the Internet had not been around, the regime might have been tempted to crack down in a much more brutal way. The revolts themselves would have taken a different shape, and they might have happened three to six months later.
It’s hypothetical to say how today’s democratic revolutions would have happened without the Internet, but revolutions throughout history are driven by cultural factors. The events probably would have happened differently and probably would have turned out differently. We have to entertain the possibility that these events could have been much more violent and taken much more time if they hadn’t had the publicity that they had thanks to the Internet.
But ultimately, a regime’s response to a revolt depends on the regime, not on the Internet. Just because people can tweet and blog doesn’t stop the Libyan government from instituting a violent crackdown.
In all, it’s hard to generalize about the future of the Internet. We don’t have a one-size-fits-all approach to every country. We adapt our policies for each country. That’s how foreign policy works. But with the Internet, we have a tendency to generalize that this must be how it works everywhere, and that isn’t the case.
While civic activism—raising money for sick children and campaigning to curb police corruption—is highly visible on the Russian Internet, it’s still entertainment and social media that dominate. In this respect, Russia hardly differs from the United States or countries in western Europe. The most popular Internet searches on Russian search engines are not for “What is Democracy?” or “how to protect human rights,” but for “What is love?” and “how to lose weight.”
The Kremlin supports, directly or indirectly, a host of sites about politics, which are usually quick to denounce the opposition and welcome every government initiative, but increasingly branches out into apolitical entertainment. From the government’s perspective, it’s far better to keep young Russians away from politics altogether, having them consume funny videos on Russia’s own version of YouTube, RuTube (owned by Gazprom, the country’s state-owned energy behemoth), or on Russia.ru, where they might be exposed to a rare ideological message as well.
Many Russians are happy to comply, not least because of the high quality of such online distractions. The Russian authorities may be on to something here: The most effective system of Internet control is not the one that has the most sophisticated and draconian censorship, but the one that has no need for censorship whatsoever.
I don’t think there is anything unique about Russia per se. It’s just that the government is smarter than the Egyptian government was about how to use the Internet. The Egyptian government didn’t do anything online. It didn’t engage in propaganda, deploy bloggers, or launch cyberattacks. They missed the train.
I think the difference is that the people who built up the Russian Internet ended up working for the government. The Egyptian government’s approach to the Internet was very shallow, and it had to pay the price, eventually.
Giving everyone a blog will not by itself increase the health of modern-day democracy; in fact, the possible side effects—the disappearance of watchdogs, the end of serendipitous news discovery, the further polarization of society—may not be the price worth paying for the still unclear virtues of the blogging revolution. This does not mean, of course, that a smart set of policies—implemented by the government or private actors—won’t help to address those problems.
The people who were instrumental in making the Egyptian revolution happen weren’t new to politics. Almost all of them were part of existing political and social forces. They had received plenty of training and organizational support from various Western foundations and governments. I don’t think the view of this as a spontaneous revolution holds up. I myself have been to several democracy workshops in Egypt. I wouldn’t necessarily view these people as atomized individuals. They have been trained offline.
But of course, you wouldn’t have heard as much about it. Who’s paying for those workshops? It’s the U.S. government and U.S. foundations. In this sense, Facebook and Twitter are much better covers, because the uprisings they enabled appeared to be spontaneous. It would be very misleading to suggest that all the connections forged by these activists are virtual. Revolution is much more about building human networks.
In 1996, when a group of high-profile digerati took to the pages of Wired magazine and proclaimed that the “public square of the past” was being replaced by the Internet, a technology that “enables average citizens to participate in national discourse, publish a newspaper, distribute an electronic pamphlet to the world … while simultaneously protecting their privacy,” many historians must have giggled.
From the railways, which Karl Marx believed would dissolve India’s caste system, to television, that greatest “liberator” of the masses, there has hardly appeared a technology that wasn’t praised for its ability to raise the level of public debate, introduce more transparency into politics, reduce nationalism, and transport us to the mythical global village.
In virtually all cases, such high hopes were crushed by the brutal forces of politics, culture, and economics. Technologies tend to overpromise and underdeliver, at least on their initial promises.
Which of the forces unleashed by the Web will prevail in a particular social and political context is impossible to tell without first getting a thorough theoretical understanding of that context. Likewise, it is naïve to believe that such a sophisticated and multipurpose technology as the Internet could produce identical outcomes—whether good or bad—in countries as diverse as Belarus, Burma, Kazakhstan, and Tunisia. There is so much diversity across authoritarian regimes.
I wouldn’t have much hope in the Internet in North Korea. First, it’s a country with some of the fewest Internet connections in the world. And second, average North Koreans have been brainwashed to such an extent that you have serious psychological challenges that you can’t overcome just by using blogs and Twitter. It would be much harder than for a country like Belarus, for example, where one-third of the country is online. Mobile phones might play a role in getting more information out. But it’s unlikely that Facebook or Twitter will play much of a role.
Policy makers need to abandon both cyber-utopianism and Internet-centrism, if only because of how little these approaches have accomplished. What would take their place? What would an alternative, more down-to-earth approach to policy making in the digital age—let’s call it cyber-realism—look like?
Cyber-realists would struggle to find space for the Internet within the existing pillars of policy making. Instead of asking the highly general, abstract, and timeless question of “How do we think the Internet changes closed societies?,” they would ask “How do we think the Internet is affecting our existing policies on country X?” Instead of operating in the realm of the utopian and the ahistorical, impervious to the ways in which developments in domestic and foreign policies intersect, cyber-realists would be constantly searching for highly sensitive points of interaction between the two.
They wouldn’t label all Internet activism as either useful or harmful. Instead, they would evaluate the desirability of promoting such activism in accordance with their existing policy objectives.
Cyber-realists wouldn’t search for technological solutions to problems that are political in nature, and they wouldn’t pretend that such solutions are even possible. Nor would cyber-realists search for a silver bullet that could destroy authoritarianism—or even the next-to-silver bullet, for the utopian dreams that such a bullet can even exist would have no place in their conception of politics.
Instead, cyber-realists would focus on optimizing their own decision-making and learning processes, hoping that the right mix of bureaucratic checks and balances, combined with the appropriate incentive structure, would identify wicked problems before they are misdiagnosed as tame ones, as well as reveal how a particular solution to an Internet problem might disrupt solutions to other, non-Internet problems.
Most important, cyber-realists would accept that the Internet is poised to produce different policy outcomes in different environments and that a policy maker’s chief objective is not to produce a thorough philosophical account of the Internet’s impacts on society at large, but, rather, to make the Internet an ally in achieving specific policy objectives. For them, the promotion of democracy would be too important an activity to run out of a Silicon Valley lab.
Evgeny Morozov is a contributing editor to Foreign Policy, a visiting scholar at Stanford University, a Schwartz fellow at the New America Foundation, and the author of The Net Delusion: The Dark Side of Internet Freedom (Public Affairs, 2011). E-mail evgeny.morozov@gmail.com.
This article draws from his book as well as an interview with staff editor Rick Docksai, which may be read at wfs.org.
The signs that our civilization is in trouble are multiplying. During most of the 6,000 years since civilization began, we lived on the sustainable yield of the Earth’s natural systems. In recent decades, however, humanity has overshot the level that those systems can sustain.
We are liquidating the Earth’s natural assets to fuel our consumption. Half of us live in countries where water tables are falling and wells are going dry. Soil erosion exceeds soil formation on one-third of the world’s cropland, draining the land of its fertility. The world’s ever-growing herds of cattle, sheep, and goats are converting vast stretches of grassland to desert. Forests are shrinking by 13 million acres per year as we clear land for agriculture and cut trees for lumber and paper. Four-fifths of oceanic fisheries are being fished at capacity or overfished and headed for collapse. In system after system, demand is overshooting supply.
For past civilizations, it was sometimes a single environmental trend that was primarily responsible for their decline. Sometimes it was multiple trends. For ancient Sumer, decline could be attributed to rising salt concentrations in the soil as a result of an environmental flaw in the design of their otherwise extraordinary irrigation system. After a point, the salts accumulating in the soil led to a decline in wheat yields. The Sumerians then shifted to barley, a more salt-tolerant crop, but eventually barley yields also began to decline. The collapse of the civilization followed.
Although we live in a highly urbanized, technologically advanced society, we are as dependent on the Earth’s natural support systems as the Sumerians and Mayans were. If we continue with business as usual, civilizational collapse is no longer a matter of whether but when. We now have an economy that is destroying its natural support systems and has put us on a decline and collapse path. We are dangerously close to the edge. Among other actions, we need a worldwide effort to conserve soil, similar to the U.S. response to the Dust Bowl of the 1930s.
On March 20, 2010, a suffocating dust storm enveloped Beijing. The city’s weather bureau took the unusual step of describing the air quality as hazardous, urging people to stay inside or to cover their faces when they were outdoors. Visibility was low, forcing motorists to drive with their lights on in daytime.
Beijing was not the only area affected. This particular dust storm engulfed scores of cities in five provinces, directly affecting more than 250 million people. It was not an isolated incident. Every spring, residents of eastern Chinese cities, including Beijing and Tianjin, hunker down as the dust storms begin. Along with the difficulty in breathing and the stinging eyes, there is a constant struggle to keep dust out of homes and to clear doorways and sidewalks of dust and sand. The farmers and herders whose livelihoods are blowing away are paying an even higher price.
These annual dust storms affect not only China, but neighboring countries as well. The March 20 dust storm arrived in South Korea soon after leaving Beijing. It was described by the Korean Meteorological Administration as the worst dust storm on record. In a similar event in 2002, South Korea was engulfed by so much dust from China that people in Seoul were literally gasping for breath, reported Howard French for The New York Times. Schools were closed, airline flights were canceled, retail sales fell, and clinics were overrun with patients having difficulty breathing. Koreans have come to dread the arrival of what they call “the fifth season”—the dust storms of late winter and early spring.
While people living in China and South Korea are all too familiar with dust storms, the rest of the world typically learns about this fast-growing ecological catastrophe when the massive soil-laden storms leave the region. In April 2010, a National Aeronautics and Space Administration (NASA) satellite tracked a dust storm from China as it journeyed to the east coast of the United States. Originating in the Taklimakan and Gobi deserts, it ultimately covered an area stretching from North Carolina to Pennsylvania. Such huge dust storms carry off millions of tons of topsoil, a resource that will take centuries to replace.
The thin layer of topsoil that covers much of the planet’s land surface and is typically measured in inches is the foundation of civilization. Soil is “the skin of the earth—the frontier between geology and biology,” writes geomorphologist David Montgomery in Dirt: The Erosion of Civilizations. After the Earth was created, soil formed slowly over geological time from the weathering of rocks. This soil supported early plant life on land. As plant life spread, the plants protected the soil from wind and water erosion, permitting it to accumulate and to support even more vegetation. This relationship facilitated an accumulation of topsoil that could support a rich diversity of plant and animal life.
As long as soil erosion on cropland does not exceed new soil formation, all is well. But once it does, it leads to falling soil fertility and eventually to land abandonment. Sadly, soil formed on a geological time scale is being removed on a human time scale.
Soil erosion is “the silent global crisis,” observes journalist Stephen Leahy in Earth Island Journal. “It is akin to tire wear on your car—a gradual, unobserved process that has potentially catastrophic consequences if ignored for too long.”
Losing productive topsoil means losing both organic matter in the soil and vegetation on the land, thus releasing carbon into the atmosphere. The 2,500 billion tons of carbon stored in soils dwarfs the 760 billion tons in the atmosphere, according to soil scientist Rattan Lal of Ohio State University. The bottom line is that land degradation is helping drive climate change.
Soil erosion is not new. It is as old as the Earth itself. What is new is that it has gradually accelerated ever since agriculture began. At some point, probably during the nineteenth century, the loss of topsoil from erosion surpassed the new soil that is formed through natural processes.
Today, roughly a third of the world’s cropland is losing topsoil at an excessive rate, thereby reducing the land’s inherent productivity. An analysis of several studies on soil erosion’s effect on U.S. crop yields concluded that, for each inch of topsoil lost, wheat and corn yields declined by close to 6%.
In August 2010, the United Nations announced that desertification now affects 25% of the Earth’s land area, threatening the livelihoods of more than 1 billion people—the families of farmers and herders in roughly 100 countries.
China may face the biggest challenge of all. After the economic reforms in 1978 that shifted the responsibility for farming from large state-organized production teams to individual farm families, China’s cattle, sheep, and goat populations spiraled upward. The United States, a country with comparable grazing capacity, has 94 million cattle, a slightly larger herd than China’s 92 million. But when it comes to sheep and goats, the United States has a combined population of only 9 million, whereas China has 281 million. Concentrated in China’s western and northern provinces, these animals are stripping the land of its protective vegetation. The wind then does the rest, removing the soil and converting rangeland into desert.
Wang Tao, one of the world’s leading desert scholars, reports that, from 1950 to 1975, an average of 600 square miles of land turned to desert each year. Between 1975 and 1987, this climbed to 810 square miles a year. From then until the century’s end, it jumped to 1,390 square miles of land going to desert annually.
China is now at war. It is not invading armies that are claiming its territory, but expanding deserts. Old deserts are advancing and new ones are forming like guerrilla forces striking unexpectedly, forcing Beijing to fight on several fronts.
While major dust storms make the news when they affect cities, the heavy damage is in the area of origin. These regions are affected by storms of dust and sand combined. An intense 1993 sandstorm in Gansu Province in China’s northwest destroyed 430,000 acres of standing crops, damaged 40,000 trees, killed 67,000 cattle and sheep, blew away 67,000 acres of plastic greenhouses, injured 278 people, and killed 49 individuals. Forty-two passenger and freight trains were canceled, delayed, or simply parked to wait until the storm passed and the tracks were cleared of sand dunes.
While China is battling its expanding deserts, India, with scarcely 2% of the world’s land area, is struggling to support 17% of the world’s people and 18% of its cattle. According to a team of scientists at the Indian Space Research Organization, 24% of India’s land area is slowly turning into desert. It thus comes as no surprise that many of India’s cattle are emaciated and over 40% of its children are chronically hungry and underweight.
Africa, too, is suffering heavily from unsustainable demands on its croplands and grasslands. Soil scientist Rattan Lal made the first estimate of continental yield losses due to soil erosion. He concluded that soil erosion and other forms of land degradation have cost Africa 8 million tons of grain per year, or roughly 8% of its annual harvest. Lal expects the loss to climb to 16 million tons by 2020 if soil erosion continues unabated.
On the northern fringe of the Sahara, countries such as Algeria and Morocco are attempting to halt the desertification that is threatening their fertile croplands. Algeria is losing 100,000 acres of its most fertile lands to desertification each year, according to President Abdelaziz Bouteflika. For a country that has only 7 million acres of grainland, this is not a trivial loss. Among other measures, Algeria is planting its southernmost cropland in perennials, such as fruit orchards, olive orchards, and vineyards—crops that can help keep the soil in place.
Mounting population pressures are evident everywhere on this continent where the growth in livestock numbers closely tracks that in human numbers. In 1950, Africa was home to 227 million people and about 300 million livestock. By 2009, there were 1 billion people and 862 million livestock. With livestock demands now often exceeding grassland carrying capacity by half or more, grassland is turning into desert. In addition to overgrazing, parts of the Sahel are suffering from an extended drought, one that scientists link to climate change.
The incidence of Saharan dust storms—once rare—has increased 10-fold during the last half century, reports Andrew Goudie, professor of geography at Oxford University. Among the African countries most affected by soil loss from wind erosion are Niger, Chad, Mauritania, northern Nigeria, and Burkina Faso. In Mauritania, in Africa’s far west, the number of dust storms jumped from two a year in the early 1960s to 80 a year recently.
And the impacts are global. Dust storms leaving Africa travel westward across the Atlantic, depositing so much dust in the Caribbean that they cloud the water and damage coral reefs.
Nigeria, Africa’s most populous country, reports losing 867,000 acres of rangeland and cropland to desertification each year. While Nigeria’s human population was growing from 37 million in 1950 to 151 million in 2008, a fourfold expansion, its livestock population grew from 6 million to 104 million, a 17-fold jump. With the forage needs of Nigeria’s 16 million cattle and 88 million sheep and goats exceeding the sustainable yield of grasslands, the northern part of the country is slowly turning to desert. If Nigeria’s population keeps growing as projected, the associated land degradation will eventually undermine herding and farming.
In East Africa, Kenya is being squeezed by spreading deserts. Desertification affects up to a fourth of the country’s 39 million people. As elsewhere, the combination of overgrazing, overcutting, and overplowing is eroding soils, costing the country valuable productive land.
In Afghanistan, a UN Environment Programme (UNEP) team reports that in the Sistan region “up to 100 villages have been submerged by windblown dust and sand.” The Registan Desert is migrating westward, encroaching on agricultural areas. In the country’s northwest, sand dunes are moving onto agricultural land in the upper Amu Darya basin, their path cleared by the loss of stabilizing vegetation due to firewood gathering and overgrazing. The UNEP team observed sand dunes as high as a five-story building blocking roads, forcing residents to establish new routes.
An Afghan Ministry of Agriculture and Food report reads like an epitaph on a gravestone: “Soil fertility is declining,... water tables have dramatically fallen, de-vegetation is extensive and soil erosion by water and wind is widespread.” After nearly three decades of armed conflict and the related deprivation and devastation, Afghanistan’s forests are nearly gone. Seven southern provinces are losing cropland to encroaching sand dunes. And, like many failing states, Afghanistan would lack the law enforcement authority to implement appropriate environmental policies even if it had them.
Neighboring Iran illustrates the pressures facing the Middle East. With 8 million cattle and 79 million sheep and goats—the source of wool for its fabled Persian carpet-making industry—Iran’s rangelands are deteriorating from overstocking. In the southeastern province of Sistan-Balochistan, sandstorms have buried 124 villages, forcing their abandonment. Drifting sands have covered grazing areas, starving livestock and depriving villagers of their livelihood.
In Iraq, suffering from nearly a decade of war and recent drought, a new dust bowl appears to be forming. Chronically plagued by overgrazing and overplowing, Iraq is now losing irrigation water to its upstream riparian neighbors—Turkey, Syria, and Iran. The reduced river flow—combined with the drying up of marshlands, the deterioration of irrigation infrastructure, and the shrinking irrigated area—is drying out Iraq. The Fertile Crescent, the cradle of civilization, may be turning into a dust bowl.
Dust storms are occurring with increasing frequency in Iraq. In July 2009, a dust storm raged for several days in what was described as the worst such storm in Iraq’s history. As it traveled eastward into Iran, the authorities in Tehran closed government offices, private offices, schools, and factories. Although this new dust bowl is small compared with those centered in northwest China and central Africa, it is nonetheless an unsettling new development in this region.
One indicator that helps us assess grassland health is the change in the goat population relative to the sheep and cattle populations. As grasslands deteriorate, grass is typically replaced by desert shrubs. In such a degraded environment, cattle and sheep do not fare well, but goats—being particularly hardy ruminants—forage on the shrubs. Between 1970 and 2009, the world cattle population increased by 28% and the sheep population stayed relatively static, but the goat population more than doubled.
In some developing countries, the growth in the goat population is dramatic. While Pakistan’s cattle population doubled between 1961 and 2009, and the sheep population nearly tripled, the goat population grew more than sixfold and is now equal to that of the cattle and sheep populations combined.
As countries lose their topsoil, they eventually lose the capacity to feed themselves. Among those facing this problem are Lesotho, Haiti, Mongolia, and North Korea.
Lesotho—one of Africa’s smallest countries, with only 2 million people—is paying a heavy price for its soil losses. A UN team visited in 2002 to assess its food prospects. Its finding was straightforward: “Agriculture in Lesotho faces a catastrophic future; crop production is declining and could cease altogether over large tracts of country if steps are not taken to reverse soil erosion, degradation, and the decline in soil fertility.”
During the last 10 years, Lesotho’s grain harvest dropped by half as its soil fertility fell. Its collapsing agriculture has left the country heavily dependent on food imports. As Michael Grunwald reported in the Washington Post, nearly half of the children under five in Lesotho are stunted physically. “Many,” he wrote, “are too weak to walk to school.”
In the Western Hemisphere, Haiti—one of the early failing states—was largely self-sufficient in grain 40 years ago. Since then, it has lost nearly all its forests and much of its topsoil, forcing it to import over half of its grain. Lesotho and Haiti are both dependent on UN World Food Programme lifelines.
A similar situation exists in Mongolia, where over the last 20 years nearly three-fourths of the wheatland has been abandoned and wheat yields have started to fall, shrinking the harvest by four-fifths. Mongolia now imports nearly 70% of its wheat.
North Korea, largely deforested and suffering from flood-induced soil erosion and land degradation, has watched its yearly grain harvest fall from a peak of 5 million tons during the 1980s to scarcely 3.5 million tons during the first decade of this century.
Soil erosion is taking a human toll. Whether the degraded land is in Haiti, Lesotho, Mongolia, North Korea, or any of the many other countries losing their soil, the health of the people cannot be separated from the health of the land itself.
Restoring the Earth will take an enormous international effort, one far more demanding than the Marshall Plan that helped rebuild war-torn Europe and Japan after World War II. And such an initiative must be undertaken at wartime speed before environmental deterioration translates into economic decline, just as it did for the Sumerians, the Mayans, and many other early civilizations whose archaeological sites we study today.
Protecting the 10 billion acres of remaining forests on Earth and replanting many of those already lost, for example, are both essential for restoring the planet’s health. Since 2000, the Earth’s forest cover has shrunk by a net 13 million acres each year, with annual losses of 32 million acres far exceeding the regrowth of 19 million acres.
Thus, protecting the Earth’s soil warrants a worldwide ban on the clear-cutting of forests in favor of selective harvesting, simply because each successive clear-cut brings heavy soil loss and eventual forest degeneration. Restoring the Earth’s tree and grass cover, as well as practicing conservation agriculture, protects soil from erosion, reduces flooding, and sequesters carbon.
We also need a tree-planting effort to both conserve soil and sequester carbon. To achieve these goals, billions of trees need to be planted on millions of acres of degraded lands that have lost their tree cover and on marginal croplands and pasturelands that are no longer productive.
Planting trees is just one of many activities that will remove meaningful quantities of carbon from the atmosphere. Improved grazing and land management practices that increase the organic matter content in soil also sequester carbon.
The 1930s Dust Bowl that threatened to turn the U.S. Great Plains into a vast desert was a traumatic experience that led to revolutionary changes in American agricultural practices, including the planting of tree shelterbelts (rows of trees planted beside fields to slow wind and thus reduce wind erosion) and strip cropping (the planting of wheat on alternate strips with fallowed land each year). Strip cropping permits soil moisture to accumulate on the fallowed strips, while the alternating planted strips reduce wind speed and hence erosion on the idled land.
In 1985, the U.S. Department of Agriculture, with strong support from the environmental community, created the Conservation Reserve Program (CRP) to reduce soil erosion and control overproduction of basic commodities. By 1990, there were some 35 million acres of highly erodible land with permanent vegetative cover under 10-year contracts. Under this program, farmers were paid to plant fragile cropland in grass or trees. The retirement of those 35 million acres under the CRP, together with the use of conservation practices on 37% of all cropland, reduced annual U.S. soil erosion from 3.1 billion tons to 1.9 billion tons between 1982 and 1997. The U.S. approach offers a model for the rest of the world.
Another tool in the soil conservation toolkit is conservation tillage, which includes both no-till and minimum tillage. Instead of the traditional cultural practices of plowing land and discing or harrowing it to prepare the seedbed, and then using a mechanical cultivator to control weeds in row crops, farmers simply drill seeds directly through crop residues into undisturbed soil, controlling weeds with herbicides. The only soil disturbance is the narrow slit in the soil surface where the seeds are inserted, leaving the remainder of the soil covered with crop residue and thus resistant to both water and wind erosion. In addition to reducing erosion, this practice retains water, raises soil carbon content, and greatly reduces energy use for tillage.
In the United States, the no-till area went from 17 million acres in 1990 to 65 million acres in 2007. Now widely used in the production of corn and soybeans, no-till has spread rapidly, covering 63 million acres in Brazil and Argentina and 42 million in Australia. Canada, not far behind, rounds out the five leading no-till countries. Farming practices that reduce soil erosion and raise cropland productivity, such as minimum-till, no-till, and mixed crop–livestock farming, usually also lead to higher soil carbon content and soil moisture. In Kazakhstan, the 3 million acres in no-till seemed to fare better than land in conventional farming during the great Russian heat wave and drought of 2010.
In sub-Saharan Africa, where the Sahara is moving southward all across the Sahel, countries are concerned about the growing displacement of people as grasslands and croplands turn to desert. As a result, the African Union has launched the Green Wall Sahara Initiative. This plan, originally proposed in 2005 by Olusegun Obasanjo when he was president of Nigeria, calls for planting a 4,300-mile band of trees, nine miles wide, stretching across Africa from Senegal to Djibouti. Senegal, which is losing 124,000 acres of productive land each year and which would anchor the green wall on the western end, has planted 326 miles of the band. A $119-million grant from the Global Environment Facility in June 2010 gave the project a big boost. Senegal’s Environment Minister, Modou Fada Diagne, says, “Instead of waiting for the desert to come to us, we need to attack it.” One key to the success of this initiative is improving management practices, such as rotational grazing.
In the end, the only viable way to eliminate overgrazing on the two-fifths of the Earth’s land surface classified as rangelands is to reduce the size of flocks and herds. Not only do the excessive numbers of cattle, sheep, and goats remove the vegetation, but their hoofs pulverize the protective crust of soil that is formed by rainfall and that naturally checks wind erosion. In some situations, the preferred option is to keep the animals in restricted areas, bringing the forage to them. India, which has successfully adopted this practice to build the world’s largest dairy industry, is a model for other countries.
Conserving the Earth’s topsoil by reducing erosion to the rate of new soil formation or below has two parts. One is to retire the highly erodible land that cannot sustain cultivation—the estimated one-tenth of the world’s cropland that accounts for perhaps half of all excess erosion. For the United States, that has meant retiring nearly 35 million acres. The cost of keeping this land out of production is close to $50 per acre. In total, annual payments to farmers to plant this land in grass or trees under 10-year contracts approach $2 billion.
In expanding these estimates to cover the world, it is assumed that roughly 10% of the world’s cropland is highly erodible, as in the United States, and should be planted in grass or trees before the topsoil is lost and it becomes barren land. In both the United States and China, which together account for 40% of the world grain harvest, the official goal is to retire one-tenth of all cropland. For the world as a whole, converting 10% of cropland that is highly erodible to grass or trees seems like a reasonable goal. Since this costs roughly $2 billion in the United States, which has one-eighth of the world’s cropland, the total for the world would be $16 billion annually.
The second initiative on topsoil consists of adopting conservation practices on the remaining land that is subject to excessive erosion—that is, erosion that exceeds the natural rate of new soil formation. This initiative includes incentives to encourage farmers to adopt conservation practices such as contour farming, strip cropping, and, increasingly, minimum-till or no-till farming. These expenditures in the United States total roughly $1 billion per year. Assuming that the need for erosion control practices elsewhere is similar to that in the United States, we again multiply the U.S. expenditure by eight to get a total of $8 billion for the world as a whole. The two components together—$16 billion for retiring highly erodible land and $8 billion for adopting conservation practices—give an annual total for the world of $24 billion.
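The arithmetic behind these worldwide totals is simple enough to check. The sketch below in Python reproduces it from the figures given above; the only liberty taken is following the article’s own rounding of the roughly $1.75 billion U.S. retirement cost up to $2 billion before scaling, and the variable names are illustrative.

```python
# Back-of-envelope check of the worldwide soil-conservation budget described above.
# All figures come from the text; variable names are illustrative.

ACRES_RETIRED_US = 35_000_000          # highly erodible U.S. cropland retired (acres)
COST_PER_ACRE = 50                     # approximate annual payment per acre (dollars)
US_SHARE_OF_WORLD_CROPLAND = 1 / 8     # stated assumption: U.S. holds one-eighth of world cropland

us_retirement_cost = ACRES_RETIRED_US * COST_PER_ACRE      # about $1.75 billion
us_retirement_cost_rounded = 2_000_000_000                 # rounded to $2 billion, as in the text

world_retirement_cost = us_retirement_cost_rounded / US_SHARE_OF_WORLD_CROPLAND   # $16 billion
us_practices_cost = 1_000_000_000                          # U.S. spending on conservation practices
world_practices_cost = us_practices_cost / US_SHARE_OF_WORLD_CROPLAND             # $8 billion

total = world_retirement_cost + world_practices_cost       # $24 billion per year
print(f"Retiring erodible land worldwide: ~${world_retirement_cost / 1e9:.0f} billion per year")
print(f"Conservation practices worldwide: ~${world_practices_cost / 1e9:.0f} billion per year")
print(f"Total: ~${total / 1e9:.0f} billion per year")
```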
Altogether, then, restoring the economy’s natural support systems—reforesting the Earth, protecting topsoil, restoring rangelands and fisheries, stabilizing water tables, and protecting biological diversity—will require additional expenditures of just $110 billion per year. Many will ask, Can the world afford these investments? But the only appropriate question is, Can the world afford the consequences of not making these investments?
Lester R. Brown is president of Earth Policy Institute and author of World on the Edge: How to Prevent Environmental and Economic Collapse (W.W. Norton & Company, 2011), from which this article was adapted with permission. He may be contacted at Earth Policy Institute, 1350 Connecticut Avenue, N.W., Suite 403, Washington, D.C. 20036. Web site www.earth-policy.org; e-mail epi@earth-policy.org.
Data, endnotes, and additional resources can be found on Earth Policy’s Web site, at www.earth-policy.org. Also see:
• Dirt: The Erosion of Civilizations by David R. Montgomery (University of California Press, 2007). A geomorphologist argues that we are losing the soil needed to feed future populations, making a case for organic inputs and conservation.
• The Grapes of Wrath by John Steinbeck (Viking Penguin Inc., 1939) puts environmental damage into a human context.
• Food and Agriculture Organization (www.fao.org) provides information on soil and soil resources, conservation, desertification, land assessment, plant and crop nutrition, and more.
• NASA’s Earth Observatory site (http://earthobservatory.nasa.gov) offers satellite imagery showing the extent and impacts of dust storms, droughts, and more.
• U.S. Department of Agriculture Agricultural Research Service (www.ars.usda.gov) oversees the National Soil Erosion Research Laboratory, among many other programs promoting innovation in resource management.
Dust storms provide highly visible evidence of soil erosion and desertification. Once vegetation is removed either by overgrazing or overplowing, the wind begins to blow the small soil particles away. Because the particles are small, they can remain airborne over great distances. Once they are largely gone, leaving only larger particles, sandstorms begin. These are local phenomena, often resulting in dune formation and the abandonment of both farming and grazing. Sandstorms are the final phase in the desertification process.
In some situations, the threat to topsoil comes primarily from overplowing, as in the U.S. Dust Bowl, but in other situations, such as in northern China, the cause is primarily overgrazing. In either case, permanent vegetation is destroyed and soils become vulnerable to both wind and water erosion.
Giant dust bowls are historically new, confined to the last century or so. During the late nineteenth century, millions of Americans pushed westward, homesteading on the Great Plains, plowing vast areas of grassland to produce wheat. Much of this land—highly erodible when plowed—should have remained in grass. Exacerbated by a prolonged drought, this overexpansion culminated in the 1930s Dust Bowl, a traumatic period chronicled in John Steinbeck’s novel The Grapes of Wrath. In a crash program to save its soils, the United States returned large areas of eroded cropland to grass, adopted strip-cropping, and planted thousands of miles of tree shelterbelts.
Three decades later, history repeated itself in the Soviet Union. In an all-out effort to expand grain production in the late 1950s, the Soviets plowed an area of grassland roughly equal to the wheat area of Australia and Canada combined. The result, as Soviet agronomists had predicted, was an ecological disaster—another Dust Bowl.
Kazakhstan, which was at the center of this Soviet Virgin Lands Project, saw its grainland area peak at just over 25 million hectares in the mid-1980s. (One hectare equals 2.47 acres.) It then shrank to less than 11 million hectares in 1999. It is now slowly expanding, and grainland area is back up to 17 million hectares. Even on the remaining land, however, the average wheat yield is scarcely 1 ton per hectare, a far cry from the 7 tons per hectare that farmers get in France, western Europe’s leading wheat producer.
Today, two giant dust bowls are forming. One is in the Asian heartland in northern and western China, western Mongolia, and central Asia. The other is in central Africa in the Sahel—the savannah-like ecosystem that stretches across Africa, separating the Sahara Desert from the tropical rain forests to the south. Both are massive in scale, dwarfing anything the world has seen before. They are caused, in varying degrees, by overgrazing, overplowing, and deforestation.
—Lester R. Brown

Scene: The date is March 11, 2011. I am in my apartment in Kyoto, Japan, watching my first partial nuclear meltdown 335 miles away in Fukushima. Because the word “melt” suggests a visible and even transition between physical states, I always thought of a meltdown as a fast and fluid event. The experience does not conform to my expectations; it seems to proceed at a lurching pace.
Chief Cabinet Secretary Yukio Edano becomes a regular fixture on our televisions and laptops. Because he and the Kan administration are reliant on the plant’s operator, Tokyo Electric Power, for information, he can offer little more than reassurances that the situation is under control. These come in stark contrast to the ever more frightening scenes behind him. We watch as problems spread from one part of the facility to another. Hydrogen explosions literally blow the walls off several of the reactor buildings.
We turn to Twitter and Facebook. In the hours after the earthquake, 177 million tweets are sent and 572,000 new Twitter accounts are created. We discover that radiation levels have reached 8,217 microsieverts per hour near the front gate of the Fukushima Daiichi nuclear power station and that anyone in this kind of environment would be exposed to more than three years’ worth of naturally occurring radiation within 60 minutes. We also learn that the government is venting steam laced with cesium and iodine and that iodine exposure can result in thyroid cancer.
We begin to compulsively massage at our thyroid glands and text fellow American expats around Japan.
“We’re definitely getting out of Tokyo and coming to Kyoto,” says my friend, father of a two-year-old toddler.
“I’m definitely going home to New York,” says my neighbor. He’s on a flight 12 hours later. We make fun of him for overreacting. Within the day, we, too, begin to contemplate leaving the country. A family of three nuclear refugees, as they’ve come to be called, has taken up temporary residence in my bedroom. Emails from family back home are hysterical in tone. Emails from Japanese friends within the country politely suggest that the situation has been blown out of proportion.
In the days that follow, my wife and I begin to pursue contrasting avenues of research. I gravitate toward articles and sources that confirm the narrative to which I have already subscribed, that the mainstream Western media is dramatizing the situation at the power plant. I am encouraged and impressed by the enterprising young people in Tokyo who have taken it upon themselves to monitor the radiation from their homes and offices and tweet the results—which show that radiation levels are not dangerous.
“In a worst-case scenario, the alpha radiation would be contained to a relatively small area. The main threat is to the food supply and only the food from that prefecture,” I tell my wife.
“The USS Ronald Reagan was picking up radiation on deck, and it was a hundred miles offshore. They moved the entire fleet,” she says, emphasizing “fleet” as though this nuance in the story indicates that a truly remarkable naval maneuver has occurred.
My wife begins to follow a different line of research. She seizes on the movements taking place behind the official statements, the diplomatic breakdown between the U.S. and Japanese governments over the proposed size of the evacuation zone. Her faith in the Kan administration has completely evaporated. But she is not panicked. She is calm, collected, and open-eyed as she weighs various bits of information against the credibility of their respective sources.
In the days that follow, we learn that the French government has advised its citizens to evacuate Tokyo. The U.K. government chief scientist states that the French response is “not based on science.”
The discussion within our tiny apartment turns to the future. The evacuation area is steadily swelling, first to 10 then to 20 kilometers. The government continues to proclaim that radiation levels in Tokyo and beyond are not dangerous. Officials also anticipate that they may have to vent more steam. We know that we are probably safe where we are. But the amount of radiation detectable at the mouth of the plant seems to be rising roughly in tandem with the price of airfare out of Japan. Finally, in a calm and careful manner, like so many others, we purchase tickets to leave the country.
The week of the March 2011 earthquake saw a massive exodus of foreigners from Japan, a 16% drop in the Nikkei 225 stock market index (which has partially bounced back), and runs on bottled water and toilet paper in Tokyo. Could the government have handled the situation better? Tragedies like tsunamis can’t be prevented. Other disasters, like the Kan administration’s public relations response to the breakdown at the Fukushima Daiichi nuclear power plant, offer lessons for the future.
Upon my return to the United States, I contacted crisis communications expert Peter Sandman. When disasters strike, he says, most people have three questions for the government man or woman at the podium: What happened? What do you expect to happen? and What are you worried might happen?
“By far the biggest crisis communication error of the Japanese government [was] failure to answer the second and third questions satisfactorily: i.e., its failure to forewarn people about tomorrow’s and next week’s probable headlines, and its failure to guide people’s fears about worst-case scenarios,” says Sandman.
In the midst of the unfolding disaster, the Kan administration refused to speculate publicly about how bad the situation could get. The result was the bizarre scene I saw on my television: Yukio Edano, in his bright blue jumpsuit, issuing public reassurances and hastily revising them as the nuclear power plant exploded behind him. In some parts of the country, these press conferences were followed by public-service announcements calmly advising people to stay indoors, wear masks to limit exposure to radiation, and avoid tap water. Competing messages like these led rational people like my wife and me to conclude that the Kan administration and Tokyo Electric Power weren’t giving us the full story.
Sandman says that the Japanese government “failed to predict that there would probably be increasing radiation levels in local milk, vegetables, and seawater; that Tokyo’s drinking water would probably see a radiation spike as well; that plutonium would probably be found in the soil near the damaged plants; that the evidence of core melt would probably keep getting stronger… etc. After each of these events occurred, the government [said] they were predictable and not all that alarming. But it failed to predict them.”
This vicious cycle—public official downplays situation, situation worsens, repeat—is one that Sandman has seen before, when he served on the congressional investigation into the Three Mile Island nuclear incident. In that instance, the operating utility, Metropolitan Edison, quickly worked to paint an optimistic but not inaccurate portrait of what was going on inside the plant. When the picture worsened, the public was left to speculate that Metropolitan Edison was either lying about the risks or unaware of what they were.
Sandman’s first piece of advice to any government or company spokesman tasked with addressing the public during a crisis is, in a word, speculate. Do it gloomily, alarmingly, but above all else, do it loudly. He suspects that in the case of Fukushima, like Three Mile Island, the people in control only communicated what they knew for certain. Because they kept their worst fears private, people were left to invent their own worst-case scenarios.
“Talking about what’s likely and what’s possible is necessarily speculative. Some commentators and even some crisis communication professionals have argued that authorities shouldn’t speculate in a crisis. This is incredibly bad advice,” says Sandman.
If my wife and I and the many other people who fled the country had known the government’s worst-case scenario, we likely would not have left. But there’s another lesson to be learned from the Fukushima disaster, evinced by the many who stayed to volunteer in the areas most affected by the tsunami: Trust your people not to panic. They’re probably more steady than you think.
Patrick Tucker is senior editor of THE FUTURIST magazine and director of communications for the World Future Society. Contact him at ptucker@wfs.org. A longer version of the interview with Peter Sandman can be found at this link.
The desire for information is rooted deep within us, evolved into our genes. Essentially an outgrowth of food foraging behavior, information foraging provides similar neurological payoffs. In a now-famous 2009 study on monkeys, Ethan Bromberg-Martin and Okihide Hikosaka demonstrated that dopamine neurons treat information as a reward. In other words, looking for and finding information makes us feel good: The behavior reinforces itself and makes us want to do it again.
At the same time, the growing volume of information available to us makes us increasingly inclined to seek breadth of knowledge rather than depth. We delve into a task or subject just a bit before we’re drawn away to something else. Our attention is continually being pulled toward a different target than the one we’re currently semi-focused on. In the end, it’s a little like being at a smorgasbord buffet: There are so many dishes, we can’t properly savor any single one of them.
All this information and the technologies that accompany it have led to an ongoing dialogue about the pros and cons of our advances. In his most recent book, The Shallows (W.W. Norton, 2010), Nicholas Carr argues that information technology is changing our brains, making us less focused, less capable of deep thought. Others, such as technology writer Clay Shirky, futurist Jamais Cascio, and cognitive scientist Steven Pinker, have acknowledged that, while we are changing in response to all of our progress, this is a pattern that has occurred throughout human history. Time and again, we’ve adjusted our ways of thinking in response to our technological advances. As toolmakers, we’ve used our devices to change the world, and in turn they’ve changed us.
There’s no denying that this relentless inundation of information severely hampers our ability to concentrate. Interruptions and distractions abound, invading our mind, making focused thought far more difficult. A study by Microsoft Research found that, following even a minor interruption, it typically takes us 15 minutes to fully refocus on the subject at hand. The study’s authors reported that they were “surprised by how easily people were distracted and how long it took them to get back to the task.”
Data grows exponentially. According to market research and analysis firm IDC, the world’s digital output is doubling every one and a half years. For 2010, IDC estimated that the world would create and replicate a record 1.2 zettabytes of data. That’s over a trillion billion bytes, or a stack of DVDs reaching to the Moon and back. By 2020, IDC expects this number to grow to 35 zettabytes, or enough DVDs to reach halfway to Mars. But there are reasons to believe this estimate may fall woefully short.
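Working backward from the two endpoints IDC cites gives a sense of the compounding involved. A minimal sketch (the variable names are illustrative) shows that growing from 1.2 zettabytes to 35 zettabytes over a decade amounts to roughly a 29-fold increase, or a doubling time of about two years.

```python
import math

# Compounding implied by the figures cited above: 1.2 ZB in 2010 growing to 35 ZB by 2020.
start_zb, end_zb, years = 1.2, 35.0, 10

growth_factor = end_zb / start_zb             # roughly a 29-fold increase over the decade
doublings = math.log2(growth_factor)          # about 4.9 doublings
doubling_time = years / doublings             # about two years per doubling

print(f"{growth_factor:.0f}-fold growth, doubling roughly every {doubling_time:.1f} years")
```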
Right now, data only seems to be everywhere, but in the near future it really will be. High-speed wireless technologies will soon enable us to access information from almost any location at speeds approaching those of wired networks. At the same time, devices that generate that data will increasingly be distributed throughout our environment. Embedded networked processors and smart dust—sensor networks made up of billions, even trillions, of nodes—will be everywhere, providing real-time data streams about everything, all the time.
Lifelogging is another development that could exacerbate our data problem. As cameras, recording devices, and storage media continue to shrink, the ability to record every instant of our lives becomes not only feasible, but possibly even appealing. Used in conjunction with intelligent search methods, lifelogging could provide us with the equivalent of near total recall. Where was I on the night of the thirteenth? What was the name of that associate I met for a few seconds five years ago? And perhaps most importantly, where did I leave those darn keys? These kinds of questions could become trivial using such a system, but the storage and data processing involved would not.
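How non-trivial might that storage be? A rough sketch suggests the scale; the recording rate and hours below are purely hypothetical assumptions for illustration, not figures from the article or from any lifelogging project.

```python
# Back-of-envelope estimate of continuous lifelogging storage.
# The recording rate and hours per day are illustrative assumptions.

GB_PER_HOUR = 2          # assumed: compressed standard-definition video
HOURS_PER_DAY = 16       # assumed: waking hours recorded
DAYS_PER_YEAR = 365
YEARS_RECORDED = 80      # assumed: a full lifetime of recording

gb_per_year = GB_PER_HOUR * HOURS_PER_DAY * DAYS_PER_YEAR    # about 11,700 GB
tb_per_year = gb_per_year / 1_000                            # about 12 TB per year
pb_per_lifetime = tb_per_year * YEARS_RECORDED / 1_000       # close to a petabyte

print(f"~{tb_per_year:.0f} TB per year, ~{pb_per_lifetime:.1f} PB over {YEARS_RECORDED} years")
```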
Gordon Bell, formerly of DEC, now works for Microsoft Research, where he is the subject of the MyLifeBits lifelogging project. In his recent book, Total Recall (Dutton, 2009), he writes, “e-memory will become vital to our episodic memory. As you live your life, your personal devices will capture whatever you decide to record. Bio-memories fade, vanish, merge, and mutate with time, but your digital memories are unchanging.” Such technology will bring with it many benefits as well as many unintended consequences, not the least of which will be an explosion of additional digital information.
Then there’s the sheer volume of metadata that will be created by computers. The examination of primary data—whether it’s Web links, cell-phone usage, or demographic voting patterns—yields a tremendous amount of secondary or derivative information. Analysis of smartphone records can generate information about traffic flow and population movement. Tweets and search-engine queries can contribute data for analysis in epidemiological studies of infectious diseases. As each set of data is recombined and reanalyzed, it generates still more data.
This brings us to the Semantic Web. Conceived by Tim Berners-Lee, the father of the World Wide Web, the Semantic Web aims to take information that is currently only machine readable and make it machine understandable.
The Semantic Web alters the relationship between data and machine. It gives data meaning. Currently, computers treat most information on the Web merely as strings of letters and numbers, so that “the quick brown fox” has about as much meaning as “Sgd pthbj aqnvm enw,” at least at the machine level. But with the Semantic Web, “quick,” “brown,” and “fox” are all formally represented concepts with defined relationships to other concepts. The ontologies that define these concepts establish meaning that can be understood by our computers.
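To make the contrast concrete, here is a minimal, purely illustrative sketch in Python of the difference between a string a machine can only store and a set of triples it can reason over. The vocabulary (is_a, has_color, and so on) is invented for the example; real Semantic Web data would use formal ontologies built on standards such as RDF and OWL.

```python
# To a machine, a sentence is just characters; "quick", "brown", and "fox" carry no meaning.
sentence = "the quick brown fox"

# Triples make the same facts machine-understandable: each (subject, relation, object)
# links formally defined concepts. These relation names are invented for illustration.
triples = {
    ("fox", "is_a", "mammal"),
    ("fox", "has_color", "brown"),
    ("fox", "has_trait", "quick"),
    ("mammal", "is_a", "animal"),
}

def is_a(entity, category, facts):
    """Follow is_a links so the machine can infer, say, that a fox is an animal."""
    if (entity, "is_a", category) in facts:
        return True
    parents = [o for (s, p, o) in facts if s == entity and p == "is_a"]
    return any(is_a(parent, category, facts) for parent in parents)

print(is_a("fox", "animal", triples))   # True, inferred by way of "mammal"
```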
With these improvements, our computers will be able to readily compile information from a range of sources without human oversight and consolidate it into a format that best suits our needs. As information comes to be better structured and defined, all sorts of new ways of working with it will become possible. Existing information will be analyzed and recombined in ways we’ve never even thought of—all at the speed of our fastest computers.
Body area networks (BANs) will also be a source of new information. A set of wearable or implanted sensors that monitor body functions, our BAN would keep us and our health-care providers apprised of our well-being with continuous data streams. As sensor costs plummet, such monitoring holds the potential to drastically reduce health costs by alerting us at the earliest stages of an illness. But while such devices may have considerable benefit, they also threaten to add greatly to the world’s data load.
Under this onslaught of information, how will we function, much less use these resources effectively? We already use filtering technologies, like the ad-zappers used in digital video recorders that enable us to circumvent the commercials that checkerboard the television schedule. Similarly, ads, banners, and other commercial efforts might be filtered out by software that’s able to distinguish them from relevant content. But because the data collected from these efforts can provide useful information to advertisers, advertisers will find ways to disable such filters. In many ways, the battle for our attention will be a technological escalation between media and viewer.
All this brave new data will result in many changes as we try to adapt our behavior, improve the existing technologies, and develop better interfaces with them.
• Adapting ourselves. Changing our own behavior is both the simplest and the most difficult option. On the one hand, we can decide the how and when for ourselves, whether it’s checking e-mail or surfing the Web or watching TV. We can even choose to opt out entirely, cutting off all but the most basic forms of communication. On the other hand, such habits are often very difficult to break for the same reasons they can lead to compulsive behavior. And while there may be certain benefits to going completely “cold turkey,” such a decision could leave us increasingly cut off and at a disadvantage in society.
Despite the possible difficulties involved, setting aside regular time each day to shut down the information flow can yield benefits. Such a hiatus creates time to absorb, digest, and reflect on what’s been learned. Taken even further, incorporating regular meditation into one’s schedule can help to diminish the negative physiological and psychological effects of information overload. It can also contribute to further insight, as can REM sleep. But such methods can only take us so far, especially when the volume of data in our world continues to escalate.
• Adapting existing technologies. A number of possible strategies for dealing with information overload can be found within existing technologies. For instance, various software can already be used to direct, consolidate, and filter information, channeling only what is useful and relevant to our attention. We have “dashboards” that aggregate information streams such as RSS feeds, and “radars and filters” to manage what we get.
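As a minimal illustration of such a “radar,” the sketch below scores incoming items against a reader’s interests and surfaces only the relevant ones. The entries and keywords are invented for the example; a real dashboard would pull live items from RSS feeds rather than a hard-coded list.

```python
# A toy "radar": rank incoming items by how many of the reader's interest
# keywords appear in their titles, and surface only the items that match.
# The entries and keywords are invented for illustration.

entries = [
    {"title": "New study on soil erosion and crop yields"},
    {"title": "Celebrity gossip roundup"},
    {"title": "Semantic Web tools for structured data"},
]
interests = {"soil", "erosion", "semantic", "data", "climate"}

def relevance(entry, keywords):
    words = set(entry["title"].lower().split())
    return len(words & keywords)

ranked = sorted(entries, key=lambda e: relevance(e, interests), reverse=True)
for entry in ranked:
    if relevance(entry, interests) > 0:
        print(entry["title"])
```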
Advances in natural language processing of unstructured data will give us another means to better access data. A good example of this is IBM’s DeepQA Project, better known as Watson, which captured the public imagination in early 2011 on the popular quiz show Jeopardy. As this already impressive technology matures, it will find applications in many fields, including health care and business analytics, as well as in personal assistants.
A very different approach to processing and improving the way we access information can be found in the knowledge engine Wolfram|Alpha. The brainchild of Stephen Wolfram, the eponymously named program computes answers to queries based on structured data. Rather than returning lists of documents as in a Google search, Wolfram|Alpha consolidates the information into relevant answers and visualizations.
According to the project’s mission statement, “Wolfram|Alpha’s long-term goal is to make all systematic knowledge immediately computable and accessible to everyone.” While this may strike some as an extremely lofty objective, no one can accuse the creator of Mathematica and author of A New Kind of Science (Wolfram Media, 2002) of ever thinking small, his work in particle physics notwithstanding. Wolfram has stated that Wolfram|Alpha’s processing of structured data is very different from the way DeepQA works with unstructured data. He’s also suggested that, if there is ever a Watson 2.0, it could benefit from integrating the Wolfram|Alpha API.
Once the Semantic Web and knowledge engines become more widespread, one category of software that should develop rapidly is that of intelligent agents. These machine assistants are programs that will be able to perform routine tasks for us, whether it’s making appointments, locating supplies, handling inquiries, or planning a vacation. Over time, these agents will become increasingly intelligent, capable of learning our individual preferences. Eventually, they’ll become so good that they’ll almost be able to mirror our own thought processes.
At some point, these “virtual selves” might even be able to go into the world (or at least virtual worlds) as autonomous avatars, our representatives in the world at large. As technology advances further, it may become possible to reintegrate these virtual selves, acquiring their experiences with such fidelity that it would seem like we’d been there ourselves. Such tools could go a long way toward helping us deal with a world swimming in information.
• New interfaces. The development of new interfaces will change not only how we think about and visualize information, but also how we work with it. New large-scale multitouch screens and gesture interfaces already allow us to work with virtual 3-D models in ways that are far more like manipulating objects in the real world and therefore much more intuitive. As these develop further, Minority Report–like interfaces will give us the means to work with large amounts of complex information quickly and with ease.
Three-dimensional displays are another tool that will allow us to pull much more information from visual displays. Currently, these use special glasses, but high-quality 3-D displays that don’t require glasses will be available later this decade. This will allow for the use of complex spatial relationships in visualizing information.
• Augmented reality. Augmented reality applications are already available for our smartphones and are developing rapidly. Nevertheless, they are still very much in their infancy. Augmented reality superimposes digital information and artifacts over maps and real-life images to convey additional information to the user. The combination of features available in today's smartphones—mobility, camera, display, GPS, compass, accelerometer—makes them the medium of choice for these applications.
Already, augmented reality apps can direct you to a nearby bus stop or subway station, recommend a local restaurant, and act as a travel guide. In coming years, more-sophisticated applications will provide virtual devices and dashboards seemingly in mid-air; personalized, contextual datafeeds; and advertising customized to our individual preferences. While this technology will be responsible for still more information finding its way to us, it will also play a major role in compressing and consolidating information that will be almost instantly available for our needs.
Transforming our tools will only go so far in helping us keep our heads above the rising sea of data. In order to stay afloat, we may eventually find it necessary to transform ourselves. Such augmentation, generally called cognitive enhancement, will probably follow a number of parallel paths.
• Pharmacological enhancements. Caffeine and other stimulants have long been used as "productivity enhancers" to help us focus on tasks. More recently, pharmaceuticals such as Adderall, Modafinil, and Ritalin have grown in popularity, particularly among college students. But there is a good deal of anecdotal evidence that, while some abilities such as focus are improved, other functions related to creativity can suffer. Additionally, these drugs can be addictive and increase the potential for psychosis over time. Since this usage is off-label—meaning it isn't what the drugs were actually developed or prescribed for—it seems likely that improved versions could be created, ideally with fewer side effects. Other categories, such as vasodilators—ginkgo biloba, for example—are claimed to improve brain function by delivering more blood and oxygen to the brain. Here again are potential avenues for improving brain function.
True smart drugs, or nootropics, hold significant potential to improve learning and retention. Current research aimed at helping Alzheimer's and dementia patients may eventually lead to drugs with other uses, such as learning augmentation. Ampakines, for instance, are a new class of compounds that improve attention span and alertness and facilitate learning and memory. Ampakines have been studied by the Defense Advanced Research Projects Agency (DARPA) for use by the military.
• Genetic and biotechnology enhancements. Many genetic studies are being done to identify therapeutic strategies that promote neuroplasticity—the formation of new neural structures in the brain—and improve learning ability. A study at the European Neuroscience Institute published in 2010 found that memory and learning ability of elderly mice was restored to youthful levels when a cluster of genes was activated through the introduction of a single enzyme.
A number of stem-cell research studies offer hope not only for degenerative mental pathologies but also for restoring our ability to learn rapidly. In a 2009 study, older mice predisposed to develop the plaques associated with Alzheimer’s were treated with neural stem cells. These cells stimulated an enhancement of hippocampal synaptic density, which resulted in better performance on memory tests a month after receiving the cells. (The hippocampus is a region of the brain that plays important roles in long-term memory and spatial navigation. It is one of the first regions to suffer damage from Alzheimer’s.)
Another recent study of mice exposed to the natural soil bacterium Mycobacterium vaccae found that their learning rate and retention greatly improved. It’s been speculated that this was due to their brains’ immune response to the bacterium. As we learn more about the chemical and genetic processes our brains use in acquiring knowledge, it should eventually become possible to enhance them in very targeted ways.
• Brain–computer interfaces. While still some way off, technology may one day allow us to offload a small or large portion of our memory and processing to machines. To some, this may seem farfetched, but there is already considerable research taking place in this and related fields. Today, there are already interfaces that give quadriplegics and people with locked-in syndrome the ability to control computers and operate wheelchairs. There are even headsets available that allow users to operate computer games, all through the power of thought. Someday, we will no doubt look back on these as primitive devices, but in the meantime, they offer a glimpse of what may become commonplace.
The information-management potential of advanced brain–computer interfaces (BCIs) would be significant. We might have the ability to generate separate threads that take care of several tasks at once, transforming us into true multitaskers. We could gather information on a subject from a broad range of sources and have it condensed into just the format we needed. We could draw on immense external computer resources to rapidly resolve a problem that might take months for a team of present-day experts. We could learn at the speed of thought—only the speed of thought would be many orders of magnitude faster than it is today.
Futurist Jamais Cascio and others believe we will forgo BCI in favor of one of the other forms of cognitive enhancement, and they may be correct. The problem of being lumbered with last year’s BCI model as these technologies continue to develop could well dissuade many potential augmenters. But this presumes that the BCIs of tomorrow will be as permanently fixed as the computer hardware of yesteryear. Due to just this sort of concern, the neural equivalent of a firmware upgrade may be devised. Also, nanotechnology may offer a means for “rewiring” the interface in a straightforward manner as new advances are made. It’s far too early to say for sure, but the possibilities will (and should) continue to be explored.
Rapidly increasing amounts of data, improvements in technology, and augmentation of our own mental processes, combined with competitive pressures, are already creating a positive feedback loop. This is producing additional incentives for generating more information, leading to more and better technology to work with it, and giving us further motivation to make ourselves even more capable of accessing and utilizing it. The result of such a cycle will be an escalation of intelligence, both in our technology and ourselves.
Like so many technological trends, this one could potentially accelerate and continue up to the point when limiting factors bring it to a halt. However, because improved intelligence would give us better tools for discovering and creating new ways to manipulate the primary physical laws of the universe, this threshold may be a very distant one.
Some theorists have speculated that our computers will continue to shrink and improve until every particle of matter in a block of material could be utilized for computation. Quantum theorist Seth Lloyd has referred to this as the "ultimate laptop," and its upper bounds are defined by the fundamental limit on quantum computation and by the maximum information that can be stored in a finite region of space. Such a device would be 10³³ times faster than today's fastest supercomputer. (That's a billion trillion trillion times faster.) Moore's law asserts that computer performance doubles every one-and-a-half to two years. If this trend were maintained—and that's a big "if"—then this upper limit could be reached sometime in a little over two centuries.
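The time estimate is simple doubling arithmetic. As a minimal sketch in Python, using only the 10³³ speedup factor and the doubling rates quoted above, the calculation looks like this:

# How long until a 10^33-fold speedup, at Moore's-law doubling rates?
import math

speedup_needed = 1e33                     # Lloyd's "ultimate laptop" vs. today's fastest supercomputer
doublings = math.log2(speedup_needed)     # roughly 110 doublings

for years_per_doubling in (1.5, 2.0):
    print(f"{years_per_doubling} years per doubling: about {doublings * years_per_doubling:.0f} years")
# Prints roughly 164 and 219 years, hence "a little over two centuries."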
What would we do with so much computer processing power, so much data, and presumably so much intelligence? Would we spend our days pondering the remaining mysteries of the universe? Or would we become a world of navel-gazers, tweeting and friending at the speed of thought (or whatever it is we’ll be doing with Web 327.0)? In all likelihood, it will be something in between—something that appears utterly fantastical today and will seem quite mundane tomorrow. We may even still be arguing about how some new technology is going to render us less focused, less capable, or less human than our forebears—just as we always have when confronted with new information technologies.
In the face of all this, only one thing seems certain: Whether we’re swimming in the shallows or diving to the deepest depths, we’ll continue to work hard to stay afloat in an ever-growing sea of information.
Richard Yonck is a foresight analyst for Intelligent Future LLC. He writes a futures blog at Intelligent-Future.com and is the founder of FutureNovo.com, a site about emerging technologies. His previous article for THE FUTURIST, "The Age of the Interface," appeared in the May-June 2010 issue. E-mail: ryonck@intelligent-future.com.
This article draws from his paper in the World Future Society’s 2011 conference volume, Moving from Vision to Action, which may be preordered from www.wfs.org/wfsbooks.
We live in an age in which the digital and the real worlds comingle effortlessly. In a relatively short period of time, a variety of computing and communications devices have seamlessly incorporated themselves into our lives. New applications and tools for these devices compete to grab (and keep) people's attention. On the surface, what is happening may seem to have a certain frenetic mindlessness to it, but at its heart, a critical transformation is occurring that will be important for companies and organizations to understand, and even leverage, in the coming economy.
Digital technologies, especially the “social media” that facilitate connections, operate within a certain framework. Imagine this framework as an abacus, with each bead representing a different opportunity, a different facet or characteristic of the digital sphere. No bead is fixed. Each can slide easily back and forth, reconfiguring the entire landscape and presenting entirely new and untapped areas of exploration.
The easiest way to visualize each bead on this “data abacus” is to list them, with the understanding that any product, service, or brand can use any combination of beads to be better positioned in the digital space in the coming decade. Listed below are some key emerging characteristics of this new era.
Honest, unfiltered, and anonymous or pseudonymous feedback may become the new norm. Web sites such as BetterMe and SideTaker allow people to gather anonymous constructive criticism and objective opinions from friends on specific matters or disputes and to give feedback to others as well. Honestly (formerly Unvarnished) aims to create an open forum to rate professionals in the workplace. Honestly's founders express hope that their profiles and performance reviews could eventually function in the same capacity as job references, thus reshaping the employee recruitment process much as Facebook has done in recent years. While many employers have made it a practice to screen social media profiles for information about potential hires, the ability to discern reality from perception on those profiles will be an increasingly valuable skill.
Yet, anonymity is not always a good thing. While these sites tout built-in safeguards, whenever there is a lack of personal accountability there is also a good deal of risk. Even if full-blown "flame wars" never erupt on sites such as these, anyone with a slight grudge or dislike for a co-worker could potentially damage that person's career with a few thoughtless keystrokes.
The explosive growth of social networking is giving people more of a voice around the world. Activism on Facebook and other sites has successfully rallied citizens against oppressive governments and posed challenges to bureaucrats and government officials. Twitter, Facebook, and LinkedIn have all been officially blocked in China due to protests being organized via the sites. Currently, more than a dozen countries block Internet sites for political, social, and security reasons. Meanwhile, the Obama administration is permitting technology companies to export online services like Instant Messenger and photo sharing to Iran, Cuba, and Sudan as a way to make it more difficult for restrictive governments to clamp down on free speech. Businesses and corporations are increasingly being held to similar levels of accountability.
On the other hand, federal agencies in the United States and elsewhere are beginning to make statistical data and other information more widely available to the public, in an effort to boost government transparency, efficiency, and responsiveness. In addition, the amount of information available at the push of a “search” button is making it increasingly easier for citizens to fact-check statements coming through official channels.
Then there is the rise of e-government, or Government 2.0, in which public agencies utilize available technologies such as social media, cloud computing, and even mobile applications both to improve their own efficiency and to enable more open and direct participation in governance. Such crowdsourcing efforts engage more citizens in the governing process. In addition, social networking sites (Twitter in particular) have made it easier for voters and politicians to interact.
This is analogous to changes happening in the business world. Technology is redefining our relationship to time, space, and location. The private sector has seen many examples of this (over the past year especially) and now it is starting to spill over into the public sector as both citizens and government agencies have grown more comfortable with new technology.
It will become equally important for organizations to track the ways in which marketplace democratization is spreading, and to explore how to leverage it. Governments and companies alike will continue to grapple with issues about an increasingly vocal and connected public.
The amount of digital information in the world increases roughly tenfold every five years. As The Economist reported (February 25, 2010), data management and analytics are worth more than $100 billion and growing at almost 10% a year, roughly twice as fast as the software business as a whole. Roger Bohn of the University of California, San Diego, quoted in The Economist, says that "information created by machines and used by other machines will probably grow faster than anything else."
Known as "database-to-database" information, this machine-to-machine traffic removes people from the equation altogether, and it represents what I call the shift from Big Brother to Big Sister. What is emerging is a complex "system of systems" that has the ability to monitor and control everything from municipal power grids, integrated toll networks on major highways, and water distribution systems to employee communication, behavior, and productivity—all without human inclusion.
It can also benefit futurists. Google Ventures is investing in Recorded Future, a data analytics technology that could possibly be used to predict future events by tracking how frequently an entity or event is referred to in the news and around the Web over a period of time. This foresight technique, known as scanning, can be tedious and time consuming when conducted by individuals or groups. Fully automated scanning would likely lead to more (and more accurate) indications of what is to come.
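At its simplest, such scanning is just counting mentions over time. The hypothetical Python sketch below (the articles and the entity name are illustrative data, not output from Recorded Future or any real service) shows the core idea; a production system would layer entity extraction, source weighting, and event detection on top:

# Minimal sketch of automated scanning: count how often an entity is mentioned
# per month across a collection of dated articles (illustrative data only).
from collections import Counter

articles = [
    ("2011-03", "New gesture interface unveiled at trade show"),
    ("2011-04", "Startup raises funding for gesture interface glove"),
    ("2011-04", "Analysts debate gesture interface adoption"),
]

def mention_trend(entity, articles):
    counts = Counter()
    for month, text in articles:
        if entity.lower() in text.lower():
            counts[month] += 1
    return sorted(counts.items())

print(mention_trend("gesture interface", articles))
# [('2011-03', 1), ('2011-04', 2)]  a rising count flags a topic worth watching.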
The ability to communicate—and be communicated to—constantly, cheaply, and effortlessly is creating so much noise in the system that it is a wonder anyone can pay attention to anything for very long. Often, people are simultaneously immersed in the digital and the “real” world, and multitasking online and offline. All of this is leading toward what technology writer and consultant Linda Stone, a former senior executive at Microsoft, has termed “continuous partial attention.”
It has been found that cell phone conversations do not just interfere with driving; driving also interferes with the processing and recall of those conversations and messages, impeding your ability to relay information accurately and remember key details. Research shows that interruptions can decrease accuracy, judgment, creativity, and managerial effectiveness, making people far less productive.
Currently, a lot of attention is being paid to location-based applications that harness GPS-enabled mobile technology to let users broadcast their location. Such services as Foursquare, Gowalla, SCVNGR, and Geoloqi have enticed millions to digitally “check in,” and more sites are popping up each year. Facebook has been testing the waters with Facebook Places, signaling that the company recognizes this potentially profound shift in the way people interact online. These mobile applications encourage users to share which business locations they visit with selected friends. Sometimes there is a gaming component involved.
Location-based mobile check-in services do not simply offer free word-of-mouth advertising. They are also designed to give businesses a chance to tailor deals to patrons and forge relationships with them.
For traditional businesses, one of the advantages of these services is the ability to reach customers on-the-go. Shopkick identifies consumers via their smartphones the moment they enter a retail environment, and the mobile app automatically begins to accumulate “rewards” and exclusive product discounts from the stores they choose to visit. Users can also earn virtual currency, called “kickbucks,” which can be cashed in for gift cards at any of Shopkick’s retail partners.
Many small local businesses have had difficulty competing online, but location-based apps could help level the playing field a little. At the moment, Shopkick’s partners are limited to large retail chains, but other services tailor themselves to local businesses as well.
And there is yet another side to all of this data transferring. Eventually, businesses may know exactly who their customers are, where they are, and when they are nearby.
If that weren't enough, digital billboards will increasingly be able to track the age and gender of pedestrians who walk by them, providing advertisers with a more accurate reading of the potential audience. Clothing retailer Forever 21, for instance, recently unveiled a billboard in Times Square that interacts with the crowd on the street using high-tech surveillance equipment and computer vision technology. The software identifies and maps the people below, enabling the computer to build a composite image of them in real time.
As this shows, businesses are exploring completely new ways to reach consumers. Radio frequency identification technology and global positioning systems are being developed hand in hand with the imaginations of businesses (not to mention governments), which are creating new ways to use these tiny electronic sensors to monitor and track consumer behavior, as well as their own supply chains and product inventory—all in real time. However, the growth and interconnectedness of embedded systems will undoubtedly raise major challenges and privacy implications, especially as complex systems become increasingly self-adaptive.
Data systems are beginning to take on a life of their own, becoming more autonomous and more fully integrated into daily life. The Economist projects that, by 2017, there could be as many as 7 trillion wirelessly connected devices and objects, which translates to approximately 1,000 per person. Self-contained systems will increasingly pool their resources and capabilities to create new, more complex, and fully independent meta-systems that will offer more functionality, operability, and computing power.
As systems become vaster and more complex, interfacing with them will undoubtedly become more difficult. The average American already consumes about 34 gigabytes of data and information every day, and IDC's 2010 "Digital Universe" study projects that the amount of digital information will grow 44-fold over the next decade, making it ever harder to manage and control. Also, as we've seen through developments in reality mining, mobile phones can predict the patterns of our movements and whereabouts: Our locations can be anticipated with as much as 93% accuracy, no matter how far we travel.
Low-cost technology, available to just about everyone, is radically changing the world’s economic structure. The drastic reduction in global poverty in the last couple of decades is due in no small part to the declining cost and increasing availability of equalizing technologies such as the computer and the mobile phone. As these tools become even cheaper and even more available, there is hope that this trend will accelerate, particularly in the developing world.
We've seen for a while now how the Internet and mobile phones can be leveraged as tools for social development in Third World countries. Txteagle, for instance, distributes small jobs via text messaging to people in developing countries in return for small payments, which are transferred to a user's phone by a mobile money service. Tools like these, built on the power of the mobile phone, will increasingly be used to help educate and grow emerging economies.
There are a number of other “beads” we could add to this list, including adaptable, aesthetically pleasing, aware, authentic, algorithmic, artistic, actionable, and more. The point is that traditional brick-and-mortar establishments are merging with the online world, in-store experiences are merging with gaming platforms, and real-world brands are merging with location-based social media platforms. In the future, businesses will combine the real and the virtual in increasingly innovative and expansive ways.
In this budding age of digital entanglement, businesses will have more in-depth knowledge about their customers and clients—who they are, their comings and goings, and their purchasing patterns. As humans and data systems become more intertwined, we may also see a greater level of intimacy emerge between the business and the consumer (as well as other organizations and their stakeholders, including governments, schools, and even religious institutions).
Also, as more consumers voluntarily surrender more information, there is likely to be an accelerated trend shifting the focus from gathering information to processing it. Now that companies have the data, they aren’t sure how to best use it or sort through it all. Other third-party companies will likely emerge to handle the complex metrics. A host of powerful computational tools will mash up vast quantities of data from many sources, in many ways.
Yet, in certain ways, individuals will be more in control than ever before. Businesses and other organizations will be forced to engage in greater transparency and openness about their practices. They will have to interact with their customers on their customers’ terms, as well.
Small local businesses have traditionally found it hard to compete in the online marketplace against large, established brands. In this new environment, we are likely to see more companies, both large and small, operating effectively in both the traditional brick-and-mortar world and the digital realm—and merging the two in increasingly innovative and expansive ways. Thanks to smartphone apps and social media, local businesses have the chance to gain more of a foothold, rather than losing out to online retailers.
Yet, journalist and cultural critic Douglas Rushkoff offers a word of warning in Program or Be Programmed (OR Books, 2010), writing: “If the social urge online comes to be understood as something necessarily comingled with commercial exploitation, then this will become the new normative human behavior.” In other words, something that seems almost unfathomable at the moment—for example, voluntarily allowing companies to gather private information on you—may become accepted as simply a part of life.
Meanwhile, all of our systems, networks, structures, electronic devices, and virtual entities are becoming increasingly connected and dependent on each other—in many cases, without human inclusion. This process began several years ago, and is happening more and more, faster and faster.
For organizations, this represents and requires a monumental shift in management, especially as organizational energy input continues to migrate away from human labor. At the end of the day, the number of hours spent managing labor may pale in comparison to those spent managing systems.
In the meantime, the data abacus can serve as a visual tool for generating fresh insights, spotting new business trends, and unlocking new sources of economic value as companies work to determine how to gain a competitive advantage from this sea of digital data.
Erica Orange is vice president of Weiner, Edrich, Brown, Inc., a leading futurist consulting group in the United States. Her previous article for THE FUTURIST, “From Eco-Friendly to Eco-Intelligent,” was published in the September-October 2010 issue. Her address is 200 East 33rd Street, Suite 9I, New York, New York 10016. E-mail erica@weineredrichbrown.com.
Many of us find ourselves with multiple gadgets—in our pockets, our homes, our cars, our offices—and these gadgets are increasingly built to talk to each other, often automatically and invisibly. Camera phones upload straight to the Web and connect through WiFi and Bluetooth to unseen computer networks; the printer next to your desk can suddenly start printing documents sent from a branch office on the other side of the world; and our cars automatically pull down information from the sky on the latest traffic and weather conditions.
A 2010 survey by Unisys Corporation showed that most Americans are largely unaware of the threat posed by data vulnerability. For instance, while a majority (73%) of Americans said they regularly update the virus-detection software on their home computers, only a minority (37%) said they update their cell phone passwords regularly, and nearly as many (36%) said they do not update mobile passwords at all.
Even common documents (licenses, passports, payment cards) that we carry around with us contain RFID chips. All these sensors and transmitters are constantly busy, silently collecting and giving away our personal information to other devices, often without our knowledge. Every time such information is transmitted and received, there is a very real risk that the data may be intercepted by people other than those for whom it was originally intended, and tampered with or abused for criminal, terrorist, or other purposes.
Scientists may actually be more at risk than the general population, especially those in academic circles. For all the theoretical discussion of computer security, those inside the academic environment often do not take real security issues as seriously as do those in the business world. This indifference puts researchers' data at risk, especially for those involved in research with potential commercial applications.
Scientists working on politically controversial or emotionally charged projects have also famously found themselves targets for security attacks: In late 2009, the e-mail accounts of climate researchers at the University of East Anglia were hacked, and activists attempted to use the stolen private messages to discredit the researchers academically and professionally. The researchers were subsequently cleared of any wrongdoing or impropriety, but their exoneration received much less public attention than the initial scandal.
Numerous types of sensors were designed for our convenience, usually not with security in mind. By the end of 2010, almost 80% of cell phones had a built-in global positioning system (GPS) device, according to iSuppli. That’s up from about 50% in 2009. These devices can be used to send information on the user’s whereabouts to another place. For the most part, we see such technology as a welcome innovation, helping us find the nearest coffee shop when we are in a strange city, for example, or discover which of our friends is close at hand, thanks to social media applications.
We may have the option of allowing such information to be transmitted or of blocking it when we first start to use the application, but there are other ways of tracking phones (and people) without our consent or knowledge. The phone network is not the only system that provides information on our whereabouts; many digital cameras now also include GPS receivers, permitting the automatic geotagging of photos—i.e., instantly identifying the photographer’s real-time location. Most modern cars are equipped with satellite navigation systems, which also transmit location information.
Our computer systems at home and at work are obvious security targets, but the existence of "back doors"—methods for bypassing normal authentication—may not be that obvious. Networking over the air (WiFi) or over power lines and the use of Bluetooth gadgets help to reduce clutter and introduce flexibility, but they also introduce risk. "Free" wireless access points are sometimes set up to capture WiFi traffic, and it is now possible to spoof a Global System for Mobile Communications (GSM) cellular tower and capture all cellular telephone calls in a targeted area. Clearly, politicians and celebrities are not immune to hacking, as seen in the recent revelations that members of the British press were routinely listening in on the voice mails of private citizens, including the royals.
To prevent channels between devices from being compromised, it is possible to encrypt the traffic; however, such encryption can slow down and impede users, and many “secure” products are quite vulnerable since the protocols are not well implemented. Often, the security and encryption on these devices is so troublesome to set up that many users (including corporate IT departments) don’t bother, or set things up incorrectly, falsely assuming they are protected.
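As a concrete (if simplified) illustration of what encrypting such a channel involves, here is a minimal Python sketch using the open-source cryptography library; it assumes the two devices have already shared a key securely, which is exactly the step many products get wrong.

# Minimal sketch: protecting device-to-device traffic with a shared symmetric key.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # must be exchanged securely between the two devices
channel = Fernet(key)

ciphertext = channel.encrypt(b"meter reading: 42.7 kWh")
print(channel.decrypt(ciphertext))   # b'meter reading: 42.7 kWh'
# An eavesdropper who intercepts only the ciphertext sees opaque bytes; a flawed
# key exchange or a sloppy implementation, as noted above, undoes all of this.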
Even if you’re not using a wireless network or a Bluetooth keyboard, the electromagnetic emissions from the equipment you use can be monitored remotely, and in extreme cases may actually allow someone to read your screen through walls or from across the street.
You would think that most people by now would know something about the risks of viruses on their computers, yet many people happily download and install unknown applications from dubious sources, oblivious of the fact that their new software could hijack their PC’s camera and microphone and surreptitiously transmit audio and video to parties unknown. In fact, the simple microphones found in all laptops can be used to determine what keys are being typed on those keyboards.
Misusing computer peripherals is sometimes an officially sanctioned activity, as shown in the case of the Pennsylvania school district that distributed student laptops with what the district termed “tracking security” features (but could better be described as Big Brother “spyware”), taking photographs of unsuspecting students in their homes.
While the proliferation of USB devices over the past few years has been a boon for computer users, it has also increased opportunities for data hacking. Small USB keyloggers, similar in appearance to thumb drives or keyboard cable extenders, can remain undetected for months at a time, faithfully recording every password, confidential memo, and private thought before the device is retrieved (or the data automatically uploaded) and the contents analyzed, regardless of how tightly locked down your office’s network is. Even innocent-seeming devices such as USB flash drives and CD-ROMs distributed at trade fairs, etc., can be used to install back doors and “Trojan horses,” sending confidential data such as banking passwords back to base, just as a “free” game downloaded to a mobile phone can open that device up to unlimited abuse.
Nor, in case you’re wondering, is the written word any more secure. Many office printers, copiers, and faxes now incorporate hard disks or other memory devices to capture, store, and transmit the printed and scanned images (we don’t think of them as such, but modern copiers are actually sophisticated computers that can be easily compromised).
These memory devices are designed to be accessible for maintenance purposes: They can be removed and their contents read at leisure. The printouts and copies from many of these devices incorporate microscopic anticounterfeiting information, which can also be used for tracking purposes. And when you leave the building, all the smart cards and RFID chips that you carry around—the corporate entry cards, mass transit cards, passports, credit and debit cards, etc.—can also let people know who and where you are and what you’re up to.
We can regard many of these security and privacy violations as essentially harmless, if irritating. Vending machines in Japanese train stations, for example, can automatically recommend drinks to customers based on their age, sex, and other factors. Annoying text messages may pop up on your cell phone, inviting you to enjoy a discounted latte every time you come within 100 yards of a coffee shop that you've visited in the past. Far more frightening, however, is the prospect of criminals obtaining or abusing such information. The "safety blanket" supposedly provided by these RFID chips is an illusion, since the chips, together with their content, can be cloned, with all the attendant problems of identity theft.
Computer scientists both outside and inside the IT industry need to understand the essence of security and how the data that they collect will affect the overall system. The goal is to mitigate the risk of unintentional data leakage, which leads to other security issues. One way to do this is for researchers to find the flaws in the systems they use, but manufacturers seldom welcome these efforts.
A change in attitude needs to take place regarding the responsible disclosure of exploits by independent researchers: The discoveries need to be welcomed and acted upon, rather than seen as challenges to professional competence. Currently, there is a surprising lack of awareness of the risks posed by data breaches; the majority of technology companies are more concerned with (and devote large amounts of R&D to) business continuity than with security.
As sensors become cheaper and more commonplace, the IT industry needs to adopt a consistent approach to alerting consumers, at the user-interface level, about the privacy risks posed by different sensors and applications, along with a unified hardware and basic-software approach to security. The emphasis on continuity rather than security shows that companies and organizations, including research institutions, need to take active security and privacy protection measures within their domains. The industry as a whole should be aware of the risks that security breaches pose to their businesses, and should take the necessary steps to guard against them.
At least three steps are vitally necessary to head off what I see as a serious crisis developing—serious for individuals who will suffer as a result of abuses of their privacy and personal information, and also for the many companies and organizations that will suffer from all-too-predictable legislation enacted to protect citizens from the “evils” of technology being perverted by unscrupulous forces.
1. Inform the public of data products’ vulnerabilities. The makers of all devices that are capable of collecting and/or transmitting data should inform the public of any known vulnerabilities associated with their products. Whether or not this should be a legal duty is another matter, as it is probably impossible for a company to come up with an exhaustive list of every way in which its products could be abused.
Industry, too, needs to create standardized guidelines on the use of sensor data that contains personal information. There needs to be a cross-industry "best practices" standard governing the implementation of these sensors at the device level, one that can be explained to end users in a standardized format so that their use is consistent.
2. Make security a data design priority. Companies engaged in such designing and manufacturing must proactively incorporate security in their products and the design process. Designers must balance the accelerated demand for new features against a possible regulatory backlash that may occur if security becomes a populist consumer issue.
There are real-world examples of how security is already being taken seriously in areas that may seem surprising. Some copier manufacturers, such as SHARP, offer and promote encryption on the hard drives built into their copiers and printers. Such encryption significantly reduces the value of a stolen or illegally accessed hard drive. Many laptop manufacturers now offer the option to disable USB ports (this is standard operating procedure in many corporate Windows desktop builds), and several cell phone manufacturers promote models without cameras. Unfortunately, these solutions fail to address the root cause of the issue; they are merely “patches” for a few of the holes in what is a veritable Swiss cheese of data insecurity.
3. Set high standards and enforce them. Perhaps most important of all, industry players must collaborate and implement stringent self-regulation to better define the collection and use of data from the different sensors in our lives. Moreover, global business must work closely with government to strengthen the penalties for any interception of information containing personal data not intended for the person or organization reading it.
We stand at a crossroads in terms of dealing with data security, and both paths are, for different reasons, highly unattractive. Prompt, meaningful self-regulation to avoid a coming crisis seems just as impossibly difficult to some as suffering the painful, throw-out-the-babies-with-the-bathwater overreaction of technically unsophisticated, politically motivated government regulators.
I argue that self-regulation is far preferable to government control. I am aware that cross-industry cooperation, not to mention industry–government cooperation, is no easy matter, but the consequences of delaying could be catastrophic. It is essential to avert this crisis so that consumer choice isn’t restricted, manufacturers aren’t shackled, and researchers aren’t thwarted in their development work by a new wave of draconian personal data protection laws.
William H. Saito is an entrepreneur, venture capitalist, educator, and advisor on security issues worldwide. Currently, he serves as an advisor on innovation and entrepreneurship for several Japanese ministries and lectures at Keio University (SFC) and at Tokyo University of Agriculture and Technology (TUAT) in Japan. He was recently selected as a Young Global Leader (YGL) for 2011 by the World Economic Forum. Web site http://saitohome.com/.
Twenty years ago, Harvey F. Wachsman, a lawyer and medical doctor, wrote an op-ed piece published in the New York Times proposing that cash be abolished and replaced by government bank cards. His purpose was to reduce tax evasion, muggings, and other crimes. Many observers over the years have expressed the same idea. Yet, to this day, even rudimentary research on abolishing cash, let alone a cost-benefit study, is nonexistent.
Aside from local news of an occasional mugging or robbery—and the strong suspicion that workers who prefer cash do not report such earnings to the IRS—few people in the United States are aware of the extent and severity of cash crime and how heavily it weighs on the economy. Americans' reliance on cash is a contributor—albeit an indirect one—to the high U.S. incarceration rate and the bloody Mexican drug war.
Consider the fact that the nation’s prisons are overflowing with the highest inmate population in the world. With only 5% of the world’s population, the United States holds about one-quarter of its prisoners. At an average outlay of $25,000 per inmate, the United States spends more than $60 billion annually just to house prisoners.
Now consider that low-level drug offenses account for 80% of the rise in the federal prison population since 1985 (though those numbers have begun to go down in more recent years). Harsh penalties for nonviolent drug offenses are a central component of prison overcrowding. Whether or not the War on Drugs or minimum sentencing requirements are an appropriate or fair response to U.S. drug use, one thing is certain: The vast majority of those illegal transactions are cash-based.
Greenbacks are also the currency of choice for Mexican drug cartels, which funnel between $19 billion and $29 billion in profits out of the United States annually, according to the U.S. government. That money goes across the border by the truckload. More than half of it quickly disappears into Mexico's cash-based economy, where it goes toward off-the-record purchases of big-ticket items such as tracts of land, car dealerships, and even hotels, according to L.A. Times writer Tracy Wilkinson.
The good news is that, for law-abiding consumers in the United States, cash is disappearing. A prediction that economist Milton Friedman made 15 years ago is proving correct: Cash "will die a natural death." You would not get this impression from the amount of currency in circulation, which reached $829 billion in December 2010 and rises every year. However, about two-thirds of that currency is held abroad, and most of the rest may be hoarded by a small minority: A recent survey by the Boston Fed reveals that the average U.S. consumer carries only $79 in cash and keeps another $157 in cash elsewhere (home, car, office).
The best indicator of where cash is headed is the percentage of point-of-sale (POS) transactions in which it is used, compared with other types of payment. The Boston Fed survey reveals that Americans now pay in cash at the point of sale only 22.7% of the time. Cash is clearly trending downward. Indeed, the recent surge in debit card usage is not so much a switch from credit cards as it is a move away from cash and checks.
The post-cash era will see a reduction in those crimes in which cash is the typical payment medium. If it becomes more risky and difficult for thieves to sell stolen goods, then criminals will steal fewer goods in the first place, and crime rates for simple theft and burglary of goods will fall. Even identity theft and wire fraud will decline. Fraudulently acquired goods are typically sold for cash, and fraudulently wired funds are most often redeemed in cash in order to break audit trails. Cash also cloaks the links between thefts and subsequent sales of the stolen property in online auctions and at flea markets.
The greatest single benefit will be the elimination of cash robberies. Around 800 Americans are murdered every year in these crimes. A recent study by Iowa State University places the overall societal cost of a single armed robbery at $335,732 and that of a murder at $1.75 million. Armored car service alone is a $15 billion industry. Based on 2009 FBI statistics, this means that ending cash robberies (even allowing for noncash robberies) would save the United States about $144 billion per year.
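As a rough check on how a figure of that size can be assembled, the Python sketch below combines the per-incident costs and murder count given above with an assumed annual number of cash robberies; the robbery count is my own illustrative figure, chosen to be on the order of FBI totals for 2009, not a number taken from the study.

# Back-of-the-envelope estimate of the annual societal cost of cash robberies.
cost_per_robbery = 335_732        # Iowa State estimate cited above
cost_per_murder = 1_750_000       # Iowa State estimate cited above
murders_per_year = 800            # robbery-related murders, cited above

robberies_per_year = 425_000      # ASSUMPTION: illustrative count, roughly the scale of FBI 2009 data

total = robberies_per_year * cost_per_robbery + murders_per_year * cost_per_murder
print(f"about ${total / 1e9:.0f} billion per year")   # on the order of the $144 billion cited above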
Criminals may turn to cash alternatives, but all will prove to be poor substitutes. Credit and debit cards are easily traced. Barter is impractical. Prepaid cards or foreign currency may work in limited scenarios, but on analysis they have other snags. None of these comes close to cash in providing ease of use, secrecy, universal acceptability, and value-storage. Drug crime will continue, no doubt, but today’s freewheeling narcotrafficking empires are destined to disintegrate.
A further benefit of the demise of cash is that it will compel the unbanked—some 17 million Americans—to establish bank relationships. This will not only spare them the often exorbitant fees charged for payday loans, pawnshop loans, refund-anticipation loans, and rent-to-own financing, but it will also further the government's imperative to provide financial access to all citizens.
Privacy protections will evolve with the post-cash society. Subpoenas or consent will still be required to access payment data. Yet, current investigative methods, particularly "following the money," will yield greater crime-solving results. Most significantly, the much-increased chance of being caught will deter many crimes in the first place. Deterrence promises to be particularly effective against tax evasion: The widespread practice of failing to report and pay taxes on cash income costs honest taxpayers at least $300 billion per year. Cash is the essence of the underground economy—which the Cato Institute, a libertarian think tank, estimates at 13% of U.S. GDP.
Obviously, the Fed cannot simply pluck cash out of circulation without a replacement plan. Notwithstanding the terminal decline of cash, actively abolishing it is a formidable challenge: It cuts off cash's economic functions abruptly, rather than simply allowing cash to succumb to attrition from other payment media, a gradual process during which glitches tend to work themselves out. No country has ever abolished its currency.
Government could speed cash's descent, for example, by imposing a federal tax on cash withdrawals from ATMs and on receiving "cash back," which might well precipitate the final phase-out. Belgian economist Leo Van Hove suggests "nudging" cash out of circulation by means of such surcharges. The Fed could also transform cash into an electronic currency, which would preserve seigniorage and government control of the national unit of money, to name but two advantages.
Powerful and influential privacy advocates extol cash because of its anonymity; it leaves no trail and does not involve third parties. But for the public, cash-based crime is violent and real, and a more tangible threat than concerns about government “spying” on citizens. Bank cards are deemed safe and easy to use, and losses from such use are minimal and insured. Americans are making their choice clear: Nearly 80% of their point of sale purchases are made in trail-leaving transactions in which they freely give their names.
While government economists likely recognize most privacy concerns as hyperbole and are cognizant of the damage that cash facilitates, particularly in the underground economy, actually acting to abolish cash remains controversial. A cashless future is a policy labeled “radioactive” by some—too politically dangerous to discuss.
Enlightened economists around the globe are breaking away from such constraints. I would urge America’s policy makers to recognize the profound social and economic merit in the proposal, join ranks, roll up their sleeves, and get to work on it.
If they do not, cash will crash anyway. But that may take 20 or more years, and it will cost many thousands of lives, even more serious injuries, and trillions of wasted dollars.
David R. Warwick is a real-estate developer, investor, and former attorney. He is author of Ending Cash: The Public Benefits of Federal Electronic Currency (Quorum Books, 1998) and many articles in various publications. E-mail drwarwick@comcast.net.
Alone Together: Why We Expect More from Technology and Less from Ourselves by Sherry Turkle. Basic Books. 2011. 360 pages. $28.95.
Sherry Turkle, a clinical psychologist and professor at the Massachusetts Institute of Technology, has spent more than 15 years studying the relationship of humans with robots and other electronic technologies. Once a techno-cheerleader, she now questions whether our relationship with new electronic technologies will work out well over the long run.
“Technology now reshapes the landscape of our emotional lives,” Turkle writes in her new book, Alone Together, “but is it offering us the lives we want to lead? Many roboticists are enthusiastic about having robots tend to our children and aging parents, for instance. Are these psychologically, socially, and ethically acceptable propositions?… And are we comfortable with virtual environments [e.g., Second Life] that propose themselves not as places for recreation but as new worlds to live in?”
Overwhelmed by demands on our time, says Turkle, we turn to technologies that promise to help us but in fact make us more harried than ever, so we eagerly escape to the Web. “Then, we gradually come to see our life on the Web as life itself!”
Technologies like BlackBerrys and cell phones do connect us to friends all over the world, and robots can serve us as helpers, friends, lovers, and playthings. But such technologies also impose serious costs.
One problem is that these technologies distract us from attending to the people we are actually with. Coming home after a long day, a schoolteacher talks to a robot dog because her husband is too busy on his cell phone to listen to her.
Children longing for a bit of “quality time” with their parents find that mothers are online chatting with faraway friends while fathers text-message during Sunday dinner.
Meanwhile, in schools, students update their Facebook status instead of listening to teachers.
Deprived of meaningful contact with human friends, many people now turn to robot pets such as the Furby—a highly sociable, owl-like robot that plays games and seems to learn to speak English as its owner plays with it.
A robot owl or other pet that can speak soothingly at bedtime can be a comfort to many people, both old and young. So in a few years, airlines may routinely distribute Furbies, Tamagotchis, and other sociable robots to passengers wanting to get a little rest and relaxation on long overseas flights.
At the same time, we may be losing the real human touch of other people, even as we try to stay more connected: “I don’t use my phone for calls any more,” reports a college student. “I don’t have the time to just go on and on. I like texting, Twitter. Looking at someone’s Facebook wall, I learn what I need to know.”
Young people now expect to be continuously connected to their friends wherever they are and to be always “on.” Meanwhile, people have become increasingly impatient with dealing face to face with other people. Hardly anybody now seems to have time for a relaxed conversation; instead, people compulsively text-message while driving to work, despite the risk.
Alone Together offers a wealth of information about the numerous uses being made of new technologies, but Turkle does not offer a clear answer to the problems she describes. It seems highly unlikely that these technologies will disappear (unless they are replaced by even more powerful technologies) or that people will refrain from using them in ways that many of us will not always be happy with.
On the other hand, a glance at history reveals that major technological innovations have frequently alarmed the contemporary world when they first appeared but became completely accepted as time passed.
Edward Cornish is founding editor of THE FUTURIST and the World Future Society’s futurist-in-residence.
The Techno-Human Condition by Braden Allenby and Daniel Sarewitz. The MIT Press. 2011. 216 pages. $27.95.
What if two people plugged into an electronic brain–brain interface that transmitted all their thoughts to each other? In spring 2001, according to Braden Allenby and Daniel Sarewitz in The Techno-Human Condition, a panel at the National Science Foundation considered this very scenario. The participants, who included health researchers and IT executives, agreed unanimously that this interface would erase misunderstanding and usher in world peace.
It didn’t occur to them that humans who understand each other might still want to kill each other, or that good diplomats sometimes have to keep some information to themselves. Allenby, an Arizona State University engineer, and Sarewitz, an Arizona State science professor, cite this as one example of well-intentioned humans placing too much faith in technology.
People often fail to anticipate a new technology’s undesirable side effects, the authors argue. The twentieth-century physicists who discovered nuclear energy did not foresee the atom bomb, global arms races, or toxic fallout from defective nuclear power plants.
Blind faith in technology and failure to gauge new technology’s long-term consequences both receive scrutiny in The Techno-Human Condition. Allenby and Sarewitz stress that technology doesn’t exist in a vacuum: It interacts with larger human and natural systems in both helpful and harmful ways.
“Technology is best understood as an earth system—that is, a complex, constantly changing and adapting system in which human, built and natural systems interact,” the authors write.
Central to their discussion is the use of technology to enhance humans’ mental and physical performance. The transhumanist movement anticipates humans merging with machines, with consequently huge accelerations in people’s life spans, intelligence, and overall well-being. Allenby and Sarewitz agree that technology will transform life, but they call for caution to ensure the best results.
For instance, many people would opt for medical treatments that boost their cognitive skills. But cognitive enhancement does not make someone a better person, and a malicious person who undergoes cognitive enhancement might become even worse—by becoming more capable, he or she also becomes more dangerous.
“If a lot of jerks improved their concentration, the cumulative effect on the rest of us might well be unpleasant,” the authors write.
Western societies, in particular, have a track record of overly trusting technology, according to Allenby and Sarewitz. The Enlightenment era of the seventeenth and eighteenth centuries bred successful traditions of scientific inquiry but also left many Westerners assuming that reason and analysis would solve or manage all societal ills.
Allenby and Sarewitz hold that Enlightenment-era thinking has run its course, for our world increasingly defies understanding and management. Humanity will not make much further progress until it learns to embrace complexity and contradiction instead of instinctively trying to solve them.
The authors state that any technology exists on multiple levels. On one level, a human user operates it and benefits from it. On another, the technology interacts with society and generates long-term change. Social media exemplifies this. People first used Facebook, Twitter, and similar applications to interact with other users. Then resultant changes in social interaction, marketing, political activism, reading habits, and human thought patterns emerged.
Allenby and Sarewitz identify five technologies that are poised to rapidly evolve and generate massive societal change: nanotechnology, biotechnology, robotics, information and communications technology, and applied cognitive science. Societies will benefit more from them if, while creating them, they also create frameworks to guide their continuing evolution.
The frameworks will have to be adaptable. Technology will change quickly, so the rules governing its use will need to change along with it.
“The lessons of yesterday’s experience are not easily transferred and applied to today’s problems,” write Allenby and Sarewitz.
The authors place more hope in forums for exploring scenarios of innovations and their consequences. Critical analysis, experiential learning, forecasting, and emergency planning will all be vital to helping people navigate the often-bewildering paths of innovation.
“Intelligence must co-evolve with, and emerge from, experience,” they write.
The Techno-Human Condition is a thoughtful and analytical discussion of how humanity might continue to develop technology while preserving the best of human nature. The authors’ tone is philosophical and academic; it is not light reading. But readers who look forward to an informative debate will be highly satisfied.
Rick Docksai is a staff editor for THE FUTURIST and an assistant editor for World Future Review. E-mail rdocksai@wfs.org.
Crashes, Crises, and Calamities: How We Can Use Science to Read the Early-Warning Signs by Len Fisher. Basic. 2011. 233 pages. $23.99.
A stock market crash and a volcanic eruption have at least one thing in common, according to science writer Len Fisher: People can foresee them if they know what to look for. As Fisher explains, most of the worst natural and human disasters strike quickly but with some prior warning signs. In Crashes, Crises, and Calamities, he describes how new models of system processes and system breakdowns can help observers to anticipate hurricanes, tornadoes, economic crashes, mass riots, and many other deadly situations.
Fisher describes the four processes—positive feedbacks, acceleration past the point of no return, domino effects, and chain reactions—that are the primary causes of runaway disasters in the environment, economics, society, and even our personal lives. Each one, in its own way, pushes a system toward a “critical transition,” a point at which massive change takes place in a very short time.
Until recently, it was nearly impossible to spot a critical transition before it had already begun. Over the past decade, however, new computer models have vastly expanded researchers’ reach and revealed common patterns running through many natural and human system failures. It is now possible to foretell when many natural or human-made systems are about to collapse.
Fisher offers examples of research models and how researchers use them. For example, a “fold catastrophe” model helps ecologists understand ecosystem degradation, psychologists make sense of mood swings, and anthropologists chart the rise and fall of human civilizations. Another, the “cusp catastrophe” model, illustrates the thought processes that lead college students to engage in binge drinking. No matter what the subject area, the right models help researchers to make sense of data and, ultimately, help craft sound public policies.
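For readers who want a feel for the mathematics behind such models, the standard fold (saddle-node) catastrophe offers a minimal sketch of how a gradual change in one parameter can trigger an abrupt jump. This is the textbook formulation, given here for illustration only and not reproduced from Fisher’s book:

% Fold catastrophe: the system's state x settles into a minimum of the
% potential V, whose shape is controlled by a single parameter a.
\[
  V(x; a) = \tfrac{1}{3}x^{3} + a\,x,
  \qquad
  \frac{dV}{dx} = x^{2} + a = 0 .
\]
% For a < 0 there are two equilibria, x = \pm\sqrt{-a}: a stable minimum
% and an unstable maximum. As a rises toward 0 the two merge, and for
% a > 0 no nearby equilibrium remains; the state jumps abruptly to a
% distant one, which is the "critical transition" described above.

In the broader research literature on critical transitions, the flattening of that potential well as the parameter approaches its tipping value is what produces measurable early-warning signs, such as a system recovering ever more sluggishly from small disturbances just before the jump.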
Some models are more accurate than others, so Fisher advises readers to apply skepticism to the models themselves. He lays out sets of questions an observer can ask to confirm that the data, the model, the calculations, and the people conducting the tests are each reliable. A model that fails on any of these four points will probably not yield trustworthy forecasts.
A book that describes scientific research models could easily intimidate average readers. Fisher’s book does no such thing. The author lays out the theories and their uses in very direct, conversational language. Crashes, Crises, and Calamities is well-suited for all readers who want to think creatively about their future and the world’s future.
Earth: The Operators’ Manual by Richard B. Alley. W.W. Norton. 2011. 479 pages. $27.95.
Richard B. Alley is admittedly not the stereotypical environmental advocate: He worked many years for an oil company, and his political registration is right-of-center. But the Penn State geosciences professor, who is also an Intergovernmental Panel on Climate Change researcher, takes the greenhouse-gas threat seriously. In Earth: The Operators’ Manual, he explains why.
In simple—but data-rich—prose, he illuminates carbon dioxide’s role in warming Earth’s climate from prehistory onward. He lays out the conclusive evidence that dangerous warming is now under way, and that human-generated carbon dioxide is the culprit. Along the way, he rules out every other proposed cause of warming, from sunspots to volcanoes. As a corollary, he highlights the looming crisis in fossil-fuel supplies.
He also tackles many objections raised by global-warming skeptics: He explains why the apparent cooling of worldwide temperatures after 1998 does not disprove warming, why the 2009 “Climategate” incident does not discredit the evidence for climate change, and why some countries saw record snowfalls and cold spells in recent years even as the planet heats up.
Alley calls for a measured transition away from fossil fuels, with taxes on carbon emissions imposed to incentivize progress. Then he walks readers through a variety of renewable-energy systems. None can replace fossil fuels just yet, he says, but the creativity and initiative that their inventors demonstrate should give us great hope.
Much of Alley’s information will be old hat to environmental researchers and energy insiders, but general audiences may gain many new insights. He shows a firm understanding of both the business world and climate science, laying out the facts in a direct but well-balanced and ultimately hopeful voice of authoritative scholarship. Lingering skeptics—and those wishing to debate said skeptics—will find Earth: The Operators’ Manual a worthwhile read.
Exorbitant Privilege: The Rise and Fall of the Dollar and the Future of the International Monetary System by Barry Eichengreen. Oxford University Press. 2011. 215 pages. $27.95.
The U.S. dollar was the world’s currency of choice throughout most of the twentieth century and the first few years of the twenty-first, but within a few years it will be just one international currency among many, says Berkeley economist Barry Eichengreen.
Confidence in the dollar dropped precipitously in the wake of the 2008 financial crisis, he notes. Whereas international banks had once bought billions of dollars’ worth of U.S. securities every year, many investors now viewed U.S. assets as toxic.
Eichengreen believes that the recovered, post-recession global economy will hold a reduced role for the dollar. India and its trade partners will use the Indian rupee for more transactions; Brazil and its partners, the Brazilian real; etc.
The twenty-first-century world marketplace is distinctly multipolar: The U.S. economy thrives alongside those of Europe, Japan, and the surging BRIC countries. Also, mobile technology makes it easier for buyers and sellers in different countries to conduct transactions in different currencies.
But Eichengreen does not expect the dollar to go away. Not only are markets everywhere accustomed to using it, but some would suffer losses if they disposed of their dollars too quickly. Also, each rival currency has major problems of its own. Eichengreen concludes that the dollar will retain a dominant presence in international commerce for many years to come. However, the dollar’s stock will fall further if the United States does not rein in its annual budget deficits, he warns.
Some authors trumpet American exceptionalism, while others hold that the United States is on a path of irreversible decline. Eichengreen presents a well-reasoned middle path. Economists, public-policy specialists, and others curious about the future of the global marketplace will find Exorbitant Privilege to be an analytical and provocative read.
Fault Lines: How Hidden Fractures Still Threaten the World Economy by Raghuram G. Rajan. Princeton University Press. 2010. 260 pages. $26.95.
Economies may be recovering from the 2008 recession, but another meltdown may be inevitable unless major changes in worldwide financing take place, warns finance professor Raghuram G. Rajan. The “fault lines” that led to the last recession—income inequality, exorbitant debts, and high-risk capital—are still with us and must be fixed, he says.
The current system still rewards banks that issue high-risk loans and encourages financial institutions to buy delinquent debts en masse, according to Rajan. Traders assume that, even if their accounts fail, their governments will cover their losses.
Regulatory authorities are also to blame. Many do not demand enough information from financial institutions, and even when they do, they often fail to disclose it all to the public.
At the country level, the world’s most prosperous economies, such as Japan and the United States, cannot grow without borrowing from China and other lender nations. Developing economies, meanwhile, depend too heavily on exports.
Rajan encourages light, targeted regulations that reduce risks but hold financial institutions responsible for their own success or failure. He also calls for open information channels that keep investors, consumers, and regulators fully aware of risks.
Social equality is also a must, according to Rajan. When a country’s citizens are deeply unequal in health-care access and opportunities for education, they will be more prone to unemployment and high debts. A country that maximizes opportunities for all its citizens, however, will better withstand economic ups and downs.
Recession-weary readers may consider Fault Lines grim news. Rajan makes no secret that the existing market systems greatly worry him. But he also expresses hope that, if the world community learns the right lessons from the latest recession, a more stable and equitable world economy may be in our future.
How Asia Can Shape the World: From the Era of Plenty to the Era of Scarcities by Joergen Oerstroem Moeller. Institute of Southeast Asian Studies. 2011. 540 pages. $49.90.
A new global economic system will emerge this century, and Asia will probably be its center of gravity, argues Joergen Oerstroem Moeller, a senior fellow at the Institute of Southeast Asian Studies, in How Asia Can Shape the World.
Demographic and environmental challenges demand new economic models that use fewer materials and more manpower while balancing the drive to accumulate wealth with the realities of finite resources. The countries of Asia are well positioned to form that model, according to Moeller. Their millennia-old philosophical traditions prize harmony, sustainability, and social responsibility while discouraging materialism—and Asian businesses exemplify these principles by, in general, showing a greater sense of corporate social responsibility than their Western counterparts do.
Moeller also cites the growing size of Asia’s financial sectors and the increasing investments by Asian businesses in Africa, Latin America, and the Middle East. Also, Asia is becoming the primary market for the developing world’s raw-material exports. Over the next 25 years, China—not the United States—could be the most influential power in some developing regions.
Asian leadership of the global economy, however, is not a foregone conclusion. China, Japan, and their regional neighbors urgently need to modernize their educational systems to graduate enough qualified workers. They must also increase their domestic spending rates to compensate for shrinking workforces and aging populations. China faces the additional challenges of bringing more women into the workforce and creating new social-welfare programs to attend to massive future populations of retirees.
How Asia Can Shape the World is a global view of the twenty-first-century economy, how it may evolve, and the role that the nations of Asia may play in that evolution. Its title may appeal foremost to scholars of Asia, but economists and policy analysts in any corner of the world may find it relevant.
How to Cool the Planet: Geoengineering and the Audacious Quest to Fix Earth’s Climate by Jeff Goodell. Houghton Mifflin Harcourt. 2010. 262 pages. $26.
Could purposely deploying clouds of pollutants that blot out sunlight be our last, best hope for averting catastrophic climate change? Proponents of “geoengineering” think so. Jeff Goodell, a journalist who contributes to Rolling Stone and the New York Times, explores current research into how—and whether—geoengineering might work.
In the style of a career journalist, Goodell relays interviews with a range of climate experts who have studied geoengineering. Their viewpoints vary: Some ardently support geoengineering, while others consider it ludicrous and perhaps even dangerous.
Geoengineering would be foolhardy if we attempted it without fully understanding how Earth’s systems work, the critics tell Goodell. They contend that our knowledge is still very limited: Inaccurate weather forecasts and climate models that have underestimated the extent of planetary warming both attest to this, they argue. They fear that geoengineering might unleash many unintended side effects, such as altered weather patterns and disrupted ocean currents.
And while geoengineering might stop carbon-dioxide-induced warming, it would do nothing to halt carbon dioxide’s other harmful effects, such as the ocean acidification that destroys coral reefs. Only serious reductions in human emissions can achieve that.
Knowledge gaps do not faze the proponents, who remind Goodell that climate change is unfolding quickly and with intensifying force. They see little time left for humanity to mitigate it adequately. Geoengineering will be a winning solution, they argue, because it is faster-acting and less expensive than the alternatives.
How to Cool the Planet is a behind-the-scenes look at the debates and speculations surrounding geoengineering. Goodell gathers a range of voices, both pro and con. Environmentally aware readers who like a good debate will enjoy this book.
[Ed. note: See also “Dimming the Sun,” World Trends & Forecasts, May-June 2011.]
Walk Out Walk On by Margaret Wheatley and Deborah Frieze. Berrett-Koehler. 2011. 264 pages. Paperback. $24.95.
When the World Bank suspended its shipments of fertilizer and seeds to Zimbabwe, residents of Kufunda, Zimbabwe, feared that they would go hungry: Their farms had always depended on outside support. Then they conferred with their village’s elders and, under the elders’ guidance, created a new, localized agricultural system that could feed them year after year without government subsidies or NGO donations and with enough resilience to withstand droughts, pest infestations, and other hazards.
Kufunda is one of seven success stories told by Margaret Wheatley and Deborah Frieze, both former presidents of the nonprofit Berkana Institute, in Walk Out Walk On. The authors profile seven communities that implemented unique solutions to the problems that afflict cities and towns throughout the world.
The communities’ residents are “walk outs,” according to Wheatley and Frieze, because they exited from economic models, situations, and ideas that had confined them. Then they “walked on” to new concepts, practices, and opportunities by ceasing their reliance on outside help and direction. They drew upon their own initiative and ancestral wisdom to grow their own crops, produce their own energy, and create vibrant public spaces out of previously abandoned buildings and parking lots.
Although the communities’ development models are unique, they are guided by core principles that communities anywhere can emulate, the authors conclude: self-sufficiency, compassion, community wisdom, and listening to all of one’s neighbors, especially the poorest and most disadvantaged. Wheatley and Frieze hope that more communities will join them in walking out of resource-depleting and exploitative practices, and walking into empowerment of all people.
Community activists on every continent long for communities to become more sustainable and more self-reliant. Wheatley and Frieze’s Walk Out Walk On speaks directly to them by showing how some communities are making that transition in the here and now.