March-April 2012, Vol. 46, No. 2

  • Nuclear Power’s Unsettled Future
  • A World Wide Mind: The Coming Collective Telempathy
  • Thriving in the Automated Economy
  • Hard at Work in the Jobless Future
  • Rethinking “Return on Investment”
  • A Future of Fewer Words?
  • From the Three Rs to the Four Cs: Radically Redesigning K-12 Education

Tomorrow in Brief

Cars That Generate Power

illustration

Future car buyers may be quizzing the dealer not on how much fuel a vehicle consumes, but rather on how much energy it produces.

A scheme envisioned at Delft University of Technology proposes developing electricity plants in parking garages and other facilities. Not only could electric vehicles be easily charged there, but their fuel cells could also be used to convert biogas or hydrogen into more electricity while the cars are parked. As a bonus, car owners would be paid for the electricity that their vehicles produce.

illustration

Another project at the university is the Energy Wall, a motorway whose walls generate energy for roadside lighting and serve as a support for a people mover on top.

Source: Delft University of Technology, www.tudelft.nl.

Childhood Cancer Survivors’ Children

Aggressive treatment for cancer during childhood may not put the survivors’ future offspring at a greater risk of birth defects than the children of survivors who did not receive such treatment.

Radiotherapy and chemotherapy with alkylating agents may damage DNA, but it now appears that the damage may not be passed along to offspring, according to a large retrospective Childhood Cancer Survivor Study led by Lisa Signorello of Vanderbilt University.

“We hope this study will become part of the arsenal of information used by the physicians of childhood cancer survivors if reproductive worries arise,” says Signorello.

Source: Vanderbilt University Medical Center, www.mc.vanderbilt.edu. The study was published online December 12, 2011, in the Journal of Clinical Oncology.

illustration

Open-Source Robot Blueprints

Robot development may accelerate, thanks to a new open-source hardware-sharing system launched by Eindhoven University of Technology in the Netherlands.

The Robotic Open Platform allows participants to share their designs so that other developers can adapt or improve on them. For example, Eindhoven’s AMIGO caregiving robot would cost €300,000 to €400,000 to purchase, but because the designs are being made available, future researchers could build AMIGO’s successor for just €10,000.

Source: Eindhoven University of Technology, www.tue.nl.

Big Tobacco’s Future: Up in Smoke?

China accounts for 40% of the world’s production and consumption of cigarettes, but it may become the first country to bar their sale, predicts Stanford University historian Robert Proctor.

The cigarette industry will not die easily, as it is incredibly profitable—not just for the manufacturers, but also for governments relying on revenue from tobacco taxes, Proctor observes in his book, Golden Holocaust.

But smoking is also incredibly costly to societies, especially in terms of lost productivity. Proctor bets that China will be among the first to recognize these costs and to do something about them.

Source: Stanford University, www.stanford.edu.

WordBuzz: Mistweetment

The term mistweetment, referring to an ill-conceived, misdirected, erroneously attributed, or simply sloppy tweet (with comical or catastrophic impacts), is almost as old as Twitter itself.

In 2009, Shashi Tharoor, India’s minister of state for external affairs, botched his report on a meeting with an Australian minister, perhaps by leaving out the word no when suggesting that he left his guest “with doubt” about his stance on the issue under discussion.

Other opportunities for mistweetment come when groups inadvertently borrow other groups’ hashtags for their discussions, as happened recently when a group of futurists and a group of food service industry professionals were both chatting about #fsed (futures studies education and food service equipment distribution, respectively).

Source: Tharoor story reported by the Lowy Institute for International Policy, www.lowyinterpreter.org.

Follow THE FUTURIST magazine, @TheYear2030, and the World Future Society, @WorldFutureSoc.

Future Scope

  • Can Food Supply Meet Doubled Demand?
  • End-of-Life Indecision
  • Religious Awakening in China

Can Food Supply Meet Doubled Demand?

Global demand for food is expected to double by 2050, which will put more pressure on the world’s farmers to increase production. But these efforts could also increase carbon dioxide in the air and nitrogen in the soil and contribute to species extinction, warns a team of researchers in a study published in the Proceedings of the National Academy of Sciences.

Agricultural intensification on existing farmland through improved practices and technology transfer—rather than clearing more land—offers the most sustainable approach to increasing food supply and minimizing risks to human and environmental health, the researchers believe. They call on wealthier countries to develop these methods and then transfer the best practices to poorer nations.

“Our analyses show that we can save most of the Earth’s remaining ecosystems by helping the poorer nations of the world feed themselves,” says study leader David Tilman, resident fellow of the University of Minnesota’s Institute on the Environment.

Source: “Global Food Demand and the Sustainable Intensification of Agriculture” by David Tilman et al., Proceedings of the National Academy of Sciences (online edition, November 21, 2011), www.pnas.org.

End-of-Life Indecision

More than a third of patients with chronic illnesses may ultimately change their minds about life-saving emergency procedures. This suggests that doctors need to discuss these options with their patients more frequently.

A study in the Netherlands focused on 206 patients with chronic obstructive pulmonary disease, chronic heart failure, or chronic renal failure who were in stable condition at the start of the study. The patients were monitored every four months for a year to assess their preferences for resuscitation and mechanical ventilation in the event of cardiac arrest.

At the end of the year, 38% had altered their initial preference—and the changes of mind went both ways, for resuscitation and against, according to lead researcher Daisy Janssen of the Centre of Expertise for Chronic Organ Failure.

Factors contributing to the revised preferences included changes in health status, mobility, marital status, and symptoms of anxiety and depression. Janssen calls for reevaluation of health-care planning protocols, better communication between doctors and patients, and improved training for doctors and nurses in end-of-life care.

Source: CIRO+ Centre of Expertise for Chronic Organ Failure, www.ciro-horn.nl. The study was presented at the European Respiratory Society Annual Congress in Amsterdam (September 26, 2011). Details: European Lung Foundation, www.european-lung-foundation.org.

Religious Awakening in China

Religious practices and spirituality among the Chinese may get a boost, thanks to forthcoming mandatory changes in China’s central government when the 18th Congress of the Communist Party meets later in 2012.

Bans on certain religions and strict regulations of those that are allowed (Buddhism, Catholicism, Taoism, Islam, and Protestantism) have resulted in the creation of black and gray “markets” to fill spiritual needs illegally, such as practicing qigong (breathing techniques and exercises) or holding Sunday school classes for Christian children. An estimated 85% of Chinese citizens engage in supernatural beliefs or practices.

portrait of Fenggang Yang

As different local officials enforce the laws differently, spiritual life in China has become more ambiguous, according to Purdue University sociologist Fenggang Yang. “Ironically, the more restrictive and suppressive the country’s religious regulations, the larger the gray market grows,” he notes.

China may be viewed as a bellwether for shifts in other countries where Communism has historically encouraged atheism and suppressed religion, says Yang. Moreover, such shifts are likely to have long-term effects. “This is not really merely about China anymore, because what China becomes will affect the world in many spheres, such as economy, politics, and culture,” he concludes.

Source: Purdue University, www.purdue.edu. Fenggang Yang, professor of sociology, is author of Religion in China: Survival and Revival Under Communist Rule (Oxford University Press, 2011).

The Road Ahead for Gasoline-Free Cars

By 2025, one out of every two new cars sold could be a hybrid or electric.

By Jim Motavalli

portrait of Jim Motavalli

Until recently, most people experienced clean-energy cars at auto shows, in the pages of magazines, or as image advertising—they weren’t tangible. All that’s changed now: You can actually see electric and plug-in hybrid vehicles on the street, picking up groceries with early adopters at the wheel, taking the kids to Little League, and—lo and behold—even charging up at public stations.

The basic types of clean-energy cars are as follows:

  • Battery electrics. These cars have electric motors and battery packs, and no other means of propulsion. The range is generally 100 miles, but that’s not likely to remain the standard for long. The Tesla Roadster can deliver 245 miles on a charge.
  • Plug-in hybrids. The plug-in hybrid car acts like an electric car for the first 15 to 50 miles, but then can switch to an on-board internal-combustion engine that, in many cases, acts as a generator instead of directly driving the wheels. The Chevrolet Volt is an example of the plug-in hybrid, as is the Fisker Karma.
  • Hybrids. Hybrids either use their electric motors as assists for the gas engine, or allow short bursts of electric-only driving. The Toyota Prius and Ford Fusion hybrids are examples of this car type.
  • Hydrogen fuel-cell cars. The fuel cell, which produces electricity from hydrogen, replaces the battery pack. Hydrogen is the most abundant element in the universe; we’ll never run out of it. The main challenge is not having enough hydrogen filling stations.

Nearly every major auto maker is planning new clean-energy models. Ford, for instance, intends to roll out five new models in 2012. Roland Berger Strategy Consultants forecasts that 10% of new cars globally will be electric by 2025, and the larger category that includes hybrids and plug-in hybrids will have grabbed 40% of the market by then. That would mean that half of new cars heading into showrooms around the world would be at least partly electric, but it’s a pretty optimistic forecast—what ultimately rolls out depends to a great extent on what happens with gas prices.

Hydrogen fuel-cell cars should be ready for mass use in just a few more years. In addition, four car companies—Daimler, Toyota, Honda, and Hyundai—plan to roll out tens of thousands of hydrogen-powered cars by 2015.

The near-term challenge is the lack of a hydrogen infrastructure. There are currently fewer than a hundred hydrogen stations in all of the United States, and only a handful are public. Some entrepreneurs are attempting to change that. Tom Sullivan, the founder of Lumber Liquidators, has just started SunHydro, a private chain of hydrogen fueling stations along the U.S. east coast.

As it stands, though, the upcoming hydrogen-powered cars may end up being sold in Europe, South Korea, or Japan, where public commitments on hydrogen infrastructure are much stronger than in the United States. The U.S. government has had an on-again, off-again relationship with hydrogen-powered cars.

That’s not to say that American consumers don’t like electric cars. Demand is higher in the United States than anywhere else. But demand in China could surpass U.S. demand very quickly. China will likely become the world’s largest electric-car market: It has put in place some of the world’s best incentives for electric cars, and quite a few manufacturers are lining up to sell them to Chinese buyers.

Demographic trends might also help the electric car market as more people move to cities. Electrics will help fill the need for vehicles that take people short distances at the low speeds that urban traffic and pedestrians demand. The obstacle for electric vehicles as “city cars” is charging them: In cities like New York, we’re not likely to see on-street parking with charging units for electric vehicles.

What we will probably see are EV charging units in garages and buildings, but the rules and protocols have yet to be developed. Suppose you own a condo, and you want to install a charging station on the condo grounds. You have to bring in the condo association on it, and it’s going to slow things down. There need to be guidelines for apartment dwellers to charge electrics. Right now, that doesn’t exist.

But smart meters do. A smart meter, installed on the side of your house, lets you dial up software from your computer at work to see exactly how much juice each of your appliances is using and to shut some of them down remotely during times of peak power demand.

Smart meters are a huge advance and are fortunately going mainstream at the same time that electric cars are hitting the road. The two can work together closely. When it’s plugged in, your electric car is just another household load—and a pretty big one, sometimes doubling electricity consumption. If we get really smart about this, we can create home networks that empower consumers to manage and reduce their power needs—and save money in the process. The smart home is finally coming to America, and it’s making huge strides in Japan.
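That “doubling” figure is easy to sanity-check with back-of-envelope arithmetic. The efficiency, mileage, and household figures in this sketch are illustrative assumptions, not numbers from the article:

```python
# Back-of-envelope check of the claim that charging an electric car
# can double a household's electricity consumption.
# All three figures below are illustrative assumptions.

EV_KWH_PER_MILE = 0.33        # assumed EV efficiency
HOUSEHOLD_KWH_PER_DAY = 30.0  # assumed average daily household use

def added_load_fraction(miles_per_day):
    """Fraction by which daily EV charging increases household use."""
    return EV_KWH_PER_MILE * miles_per_day / HOUSEHOLD_KWH_PER_DAY

print(f"40 mi/day: +{added_load_fraction(40):.0%}")  # a sizable bump
print(f"90 mi/day: +{added_load_fraction(90):.0%}")  # roughly doubles use
```

Under these assumptions, a typical commute adds a large but manageable load, while a heavy driver’s overnight charging can indeed approach a doubling of household consumption.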

I visited Panasonic’s Eco Ideas House in downtown Tokyo, and there was a plug-in hybrid Toyota Prius in the driveway. As I learned, the car and the house form a singularly green home energy management system. The house combines a five-kilowatt solar panel on the roof and a one-kilowatt hydrogen fuel cell in the backyard to generate electricity, and a stationary five-kilowatt lithium-ion battery to store it. Holistic systems that use sophisticated power management electronics like this are all the rage in Japan, thanks to a combination of a growing green consciousness, corporate commitment, and financial support from the government.
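The house-plus-car setup can be sketched as a toy dispatch loop. The equipment sizes follow the article (the “five-kilowatt” lithium-ion battery is treated here as 5 kWh of storage capacity); the priority order and the numbers in the example are assumptions, not Panasonic’s actual control logic:

```python
# Toy hourly dispatch for a home system like the one described:
# rooftop solar, a small hydrogen fuel cell, and a stationary battery.
# Equipment sizes follow the article; the dispatch rules are illustrative.

SOLAR_PEAK_KW = 5.0   # rooftop array
FUEL_CELL_KW = 1.0    # backyard fuel cell
BATTERY_KWH = 5.0     # stationary battery, treated as 5 kWh of storage

def dispatch(demand_kw, solar_kw, soc_kwh):
    """Serve one hour of demand. Returns (grid_import_kw, new_soc_kwh).

    Priority: solar first, then battery, then fuel cell, then grid.
    Surplus solar charges the battery. Over a single hour, kW and
    kWh are numerically interchangeable.
    """
    solar_kw = min(solar_kw, SOLAR_PEAK_KW)
    net = demand_kw - solar_kw
    if net <= 0:                          # surplus solar charges battery
        return 0.0, min(BATTERY_KWH, soc_kwh - net)
    from_battery = min(net, soc_kwh)
    net -= from_battery
    soc_kwh -= from_battery
    net -= min(net, FUEL_CELL_KW)         # fuel cell covers up to 1 kW
    return net, soc_kwh                   # remainder imported from grid

# A sunny hour: demand fully covered, surplus charges the battery.
print(dispatch(demand_kw=2.5, solar_kw=4.0, soc_kwh=1.0))  # (0.0, 2.5)
```

An evening hour with no sun, dispatch(demand_kw=3.0, solar_kw=0.0, soc_kwh=0.5), drains the battery, runs the fuel cell at its full 1 kW, and still imports 1.5 kW from the grid.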

In Japan, Panasonic now sells home fuel cells that can supply 60% of a family’s power needs. General Electric, in cooperation with a company called Plug Power, had planned to sell its own home fuel cells to Americans in the early 2000s. But without federal subsidies, the economics weren’t there—the fuel cell would have produced electricity at a cost higher than that of grid power.

There are some good reasons to be optimistic about electric cars’ future. At first, a fairly small percentage of people will buy electric and plug-in hybrid cars solely because they expect to save money on them. Most will be motivated by environmental concerns, but oil prices could certainly affect the popularity of electrics.

It is true that the auto makers face major challenges to transitioning to electricity. But they are taking a chance with these new clean-energy cars. The revival of the electric car is now well under way, pushed forward by technological leaps, the imperatives of global warming, and the sobering prospect of peak oil. Electric cars are going to jumpstart our lives and do good things for the planet, too.

Jim Motavalli is an environmental writer and the author of High Voltage: The Fast Track to Plug In the Auto Industry (Rodale, 2011). Web site http://jimmotavalli.com

A Competition for Lunar Enterprise

A serial entrepreneur is aiming for the final frontier.

NASA’s original Apollo program, which put a human presence on the Moon, cost the U.S. government $145 billion in today’s dollars and took nine years to accomplish. Entrepreneur Naveen Jain is hoping to get back to the Moon at a cost of no more than $70 million and to do so within a three-year time frame.

illustration of the Moon Express lander

Jain is co-founder, with Bob Richards, of Moon Express, a Silicon Valley–based start-up. It’s one of 26 teams competing for the Google Lunar X Prize, which will award $20 million to the first privately funded team that lands on the Moon, before 2015, a robot capable of traveling at least 500 meters and broadcasting video back to Earth. (The award amount changes if a government lands a robot on the Moon first.)

Other teams competing for the prize include Odyssey Moon, founded by Rick Sanford, Cisco’s former chief operating officer for Internet routing in space, and Next Giant Leap, led by Jeffrey Alan Hoffman, a former astronaut and current faculty member at MIT.

In an interview with THE FUTURIST, Jain said he was undaunted by the competition: “We’ve built a great team, one of the best in the world to make this happen. Bob Richards was also part of the Mars Mission when I was in Canada. [We have] Tom Gardner, the mission manager for the Mars Mission. We have the entire Mars Rover team. When their NASA funding was cut, we hired the whole team.”

Jain’s previous ventures in Internet search and e-commerce, Infospace* and Intelius, made him a billionaire and put him on the Forbes 400 list during the last decade. Infospace also landed Jain in court on charges of insider trading (he paid $65 million without admitting wrongdoing). Intelius has been the subject of hundreds of complaints to the Better Business Bureau for its practices. Jain expects his new company to attract less controversy and return a profit in the tens of billions of dollars through the harvesting of rare minerals like platinum on the Moon’s surface.

He admits that, before he can begin harvesting minerals from the Moon, he has to find them. Spectrographically and topographically, the Moon has been more closely studied than any other body in space, says Jain. “But no one has ever said [that] the spectrographic data, the topographical data, suggests the existence of platinum here, or this mineral there. So the Moon has never been explored from the perspective of an entrepreneur.”

The amount of heavy metals like platinum on the Moon’s face is a matter of some dispute among scientists. Jain contends that, since these minerals are present in asteroids and since asteroids strike the Moon regularly (and since the asteroids don’t burn up prior to impact as they commonly do when encountering Earth’s atmosphere), the Moon should hold an abundance of valuable rock, especially near craters, which signify asteroid impacts. Other commercial applications for the Earth’s nearest neighbor include broadcasting messages and images, even wedding proposals.

“Once you build the platform, the only limit to the possibilities is the human imagination,” says Jain.

The company’s public investors include the Founders Fund (started by PayPal’s Peter Thiel) and Netopia founder Reese Jones, who likens the Moon Express effort to the building of the first transcontinental railroad and the development of the U.S. telecommunications infrastructure—ventures that once seemed overly ambitious to many, but that went on to “change our world and humanity in myriad beneficial ways. Moon Express has assembled a world class team and the technologies most likely to turn this concept into viable commercial reality.”

Jain claims: “We’re the only company with a real business model. When you tell people, ‘You can be part of a private enterprise aimed at Moon exploration,’ people get very excited. Some are skeptical that a private company can do it, so we’re trying to show them it can be done.”—Patrick Tucker

Source: Naveen Jain (interview). For further reading, see Moonrush: Improving Life on Earth with the Moon’s Resources by Dennis Wingo (Apogee Books, 2004).

Note: Venture capitalist and Moon Express founder Naveen Jain will be appearing along with angel investor Reese Jones at WorldFuture 2012: Dream. Design. Develop. Deliver.

*Originally reported as Infosys. Corrected on 2/4/2011.

Partnership for a Freer World

An alliance of established democracies helps newly emerging democracies take wing.

Lithuania and Mongolia successfully transitioned from authoritarian rule to democracy in the twentieth century, and they are working to help other developing nations do the same in the twenty-first. As leading members of the governing council of the Community of Democracies (CD), an association of nations committed to advancing global democracy, the two countries have been receiving acclaim for bringing coalitions of governments and nongovernmental organizations together to assist democracy movements and fledgling democratic governments everywhere.

“If you had to invent the perfect time for countries as enthusiastic and committed as Lithuania and Mongolia to assume the chairmanship of the Community of Democracies, you would have chosen this two-year period of time: the time of the Arab Spring and so much change and transition around the world,” said Samantha Power, U.S. special assistant to the president and senior director for multilateral affairs and human rights. She was speaking at a forum in November 2011 at the Carnegie Endowment for International Peace in Washington, D.C.

forum speakers at Carnegie Endowment for International Peace in Washington DC, November 2011

The Community of Democracies formed in 2000 in Warsaw with 106 signatory member nations. It suffered, however, from lack of clarity about what membership entailed and how the members were to work together to promote democracy, Power observed.

“The CD was slow to show real-world results, and as a result, attention, the level of participation from governments, the curiosity of those who are not actively part of the Community of Democracies about getting into the Community—all that faded pretty substantially,” she said.

Things took a huge turn for the better, however, after the chairmanship, which changes hands from one country to another every two years, passed to Lithuania in 2009. According to Evaldas Ignatavicius, vice minister of foreign affairs for Lithuania, his republic committed the CD as never before to assisting new democracies everywhere. It also prioritized bringing more governments into the organization and engaging more civil society organizations. Its efforts bore fruit over the next few years as Nigeria, Sweden, Costa Rica, and other nations joined.

audience at the forum

“The Community for Democracies is expanding. It’s no more a club of Western democracies. [It’s] becoming [an] ownership of different regions worldwide, and it’s really a good feeling,” he said.

Lithuania also oversaw the launch in June 2011 of the Global Partnership Challenge, a “race to the top” initiative that invites national governments to submit proposals detailing reforms that they have undertaken and areas in which they intend to enact reforms in the future. The CD will select two applicants each year and work directly with them to help them reach their desired reform goals.

The winners for 2011 were Moldova and Tunisia. Having selected them, the CD then appointed task forces, one to work directly with each country to develop action plans for it on the areas of needed reform that the country identified. For example, Moldova said that it was in serious need of judicial reform, since average citizens placed little trust in judges. So the task force began working with Moldova on programs to enforce ethics within the judicial system.

Mongolia gained the chairmanship in July 2011 and continued where Lithuania had left off. First, it co-launched with South Korea the Asian Pacific Partnership Initiative for Democracies, an alliance of all democracies in Asia. Mongolia and the Asian Pacific Partnership will organize a mission to Myanmar in 2012 to encourage its government to be more open.

Suren Badral, ambassador-at-large of Mongolia, told THE FUTURIST that he looks to Myanmar (also known as Burma) as a promising area of operations because its military-ruled government has been gradually allowing more political freedoms, such as permitting opposition political parties to form. With time and encouragement, he said, Myanmar could eventually progress toward being fully free, with representative governance.

“Myanmar is emerging as a possible target because it started to become more open. We would like to use this as an opportunity to speed up Myanmar’s transformation to democracy,” Badral said.

In 2012, Mongolia will host in its capital city of Ulaanbaatar an international seminar on education. Participants will draw up high-school-level and college-level course materials on democratic ideas and values.

“We need to educate people at a young age about democratic values,” Badral said.

Later in the year, his country will host a larger conference in India, with representatives of governments and the private sector, on how they can all work together to promote democracy. Other initiatives and events will follow until July 2013, when the chair passes to El Salvador.

“Mongolia happens to be at the right place at the right time,” Badral said in his speech. “We have been honored to chair the Community of Democracies when [it] is becoming a more vibrant, more live organization.”—Rick Docksai

Sources: “Is the Community of Democracies Coming of Age?” held November 17, 2011. Note: Event transcript is available from the Carnegie Endowment for International Peace, carnegieendowment.org.

Presentations and interviews: Samantha Power, White House, www.whitehouse.gov. Evaldas Ignatavicius, Ministry of Foreign Affairs of the Republic of Lithuania, www.urm.lt. Suren Badral, Community of Democracies, www.community-democracies.org.

Growing Pains Ahead For China and India

Demographic change will challenge the world’s two most populous countries.

China and India have been flourishing economically over the last decade, but they will have to tackle significant challenges of demography, infrastructure, and standards of living if they want to ensure steady prosperity in decades to follow, according to a RAND Corporation study, “China and India, 2025: A Comparative Assessment.” The report compared the two countries on population growth, economics, science and technology development, and defense, and it assessed where they might trend in each area between now and 2025.

“Each country’s role on the world stage will be affected by the progress that it makes and by the competition and cooperation that develop between them,” the study states.

India’s workforce is enviably young and growing: With its population increasing at twice the rate of China’s, India may surpass China as the world’s most populous country by 2028, the study predicts. And India’s population will continue to grow after 2050, while China’s slowly shrinks.

To capitalize on this population growth, however, India must improve its education system and expand career opportunities for women. It must also raise overall living standards and the quality of health care, so that skilled young professionals do not emigrate.

“Whether India’s demographic advantages will be a dividend or drag on future economic growth will depend on the extent to which productive employment opportunities emerge from an open, competitive, innovative, and entrepreneurial Indian economy,” the report states.

China has the edge technologically, and its workforce is considered to be better educated, according to the report, which forecasts that China’s GDP will continue to exceed India’s through 2025. But China’s elderly population is growing at an ominously faster rate; unlike most industrialized countries, China does not have an extensive social security or retirement pension system in place to help retirees support themselves in their later years. The country could eventually have too many dependent retirees for its working population to support.

This aging trend could also push China’s health-care costs to unsustainably high levels. The country’s per capita health expenditures already doubled between 2000 and 2006. India’s grew by a smaller but still significant 50%. Both nations’ health expenditures are expected to keep growing, but China’s will grow much more.

“China’s projected demographics are creating a challenge for its economic development—a potential economic drag—that may be more complex to manage compared with the situation of India,” the report states.

Julie DaVanzo, a RAND senior economist and co-author of the study, says that China should strive now to make it easier for working people to build up retirement savings.

“It needs to enable them to support themselves in old age so that this burden doesn’t fall to the state or fall to families in such a way that it impedes their economic opportunities—such as women dropping out of the labor force to care for older relatives,” DaVanzo told THE FUTURIST.

She also recommends that Chinese leaders encourage families to have more children. However, that will only help in the long term, not the near term, since babies born now won’t reach working age for another two decades.

Most industrialized countries have rapidly aging populations and anticipate some consequent fiscal strain. Fortunately for them, steady influxes of working-age immigrants partially offset the aging shifts. China cannot bank on immigrants, however, because of the comparatively low fertility rates of most of its neighbors, according to DaVanzo.

“China is so large that immigrants might be a drop in the bucket,” she says.

India has to worry about its young population, too. Study lead author Charles Wolf Jr., a RAND distinguished chair in international economics, says that both China’s and India’s working-age populations skew 65%–70% male. With such a gender imbalance, it will be all the harder for either country to keep skilled male professionals from leaving.

“There is a big question of whether that excess of the male cohort will lead to the best and the brightest emigrating,” he says.

According to Wolf, India has to create more high-paying job opportunities for its rising pool of young job seekers to pursue. China can weather its own demographic decline, Wolf adds, if it concentrates on advancing technology, productivity, and management so as to achieve the most possible, at the lowest costs, with a reduced workforce that has more retirees to support.

“The workers will have more equipment and technology to work with, and that compensates for the downward trend in the working-age population,” he says of China’s labor force.

Wolf also says that it is not clear that either country will have a complete advantage over the other. With each having its own share of difficulties, however, they could turn toward sharper economic and political rivalry, short of all-out war.

“There is more room for cooperation and there is more room for rivalry, and if they are prudent in managing the rivalrous aspects, then the cooperative ones will dominate,” he says.

RAND senior policy researcher Eric V. Larson, author of the study’s chapter on defense spending, finds hope for peaceful competition. Neither India nor China is likely to substantially increase its defense spending. The rate of growth of China’s defense expenditures fell from 15% to 7.5% between 2009 and 2010, and while it rose again in 2011, Larson says that internal difficulties will render large future increases unsustainable.

“China faces many more problems domestically, [such as] environmental problems, social stability, and other sorts of potential claimants. I think that China is less able to carry higher levels of defense spending,” he says.

India has raised its defense spending modestly and will likely continue to do so. Furthermore, the Indian government does not usually spend all the money that it allocates toward defense.

“They’ve got some undeveloped capacity,” Larson concludes.—Rick Docksai

Source: “China and India, 2025: A Comparative Assessment” by Charles Wolf et al., RAND, www.rand.org, and interviews with Wolf, Julie DaVanzo, and Eric V. Larson.

Dealing with “Warning Fatigue”

Given enough warning that a disaster is on its way—be it flood, fire, volcano, or storm—most people would heed the warning and take appropriate action. Or not.

Early warnings that are ambiguous and conflicting may raise skepticism and indecision instead of action, observes Matthew Cochrane of the International Federation of Red Cross and Red Crescent Societies.

“It’s a fine line between keeping people aware, and agitated to an extent, but not so overwhelmed or underwhelmed that you create confusion or prove yourself worthless,” he says.

Making a threat less abstract is one approach to overcoming warning fatigue (e.g., not that a flood may affect them, but that their pipes will probably burst), as is helping people know what specific actions they can take—and enabling them to do so.

Source: “The Risk of ‘Warning Fatigue’ in Disaster Preparedness,” IRIN News, UN Office for the Coordination of Humanitarian Affairs, www.irinnews.org.

Solving Renewables’ Storage Problems

BrightSource Energy shows that storage can make solar power more viable.

By Letha Tawney

It’s not unusual to hear about the challenges of batteries for electric vehicles, particularly the tradeoffs among weight, cost, and capacity. But energy storage is also critical to replacing fossil fuels in the power sector. In electric cars, the weight of the battery can have a major effect on performance, but, in the context of power plants, the size of the power storage system is less important. With no weight limitations, a broader array of interesting technologies is on the table.

photo of groundbreaking ceremony for solar thermal facility to be built on federal land

Storage is important in electricity because some renewable energy options are variable—they only have a fuel source when the sun shines and the wind blows. To incorporate a large percentage of variable resources, grid operators can lean on dynamic demand management; options here include remotely powering down nonessential systems in industrial facilities. They can also lean on widely interconnected grids that can ship power from overproducing areas to underproducing areas, fossil fuel backup generation (such as gas turbines), or energy storage.

Each of these solutions has pros and cons, but a significant advantage for large-scale storage is the way it uses the renewable resources more efficiently. From an economic perspective, it would be great to be able to use the wind that blows at night to earn more revenue, or to sell solar power for a high price at peak times like early evenings.

The existing solutions to this storage problem are pumped water and compressed air, which require dams or geologic storage. New options are coming to market, though, including distributed thermal storage, centralized thermal storage in molten salt, and large arrays of batteries, both distributed in electric vehicles and centralized at power stations. A couple of recent examples are particularly exciting.

In the Pacific Northwest, the Bonneville Power Administration is turning demand management on its head. When there is too much wind power for the grid to absorb, it remotely turns up homeowners’ hot water tanks and turns on heaters that warm up ceramic bricks. Homeowners are paid a bit for offering the service to the grid, and the extra wind power doesn’t go to waste.

In November 2011, BrightSource Energy signed the largest storage deal to date with Southern California Edison to add three salt thermal storage units to its planned concentrating solar plants. The three storage facilities will replace a previously planned additional plant, at a significant cost savings. They also allow BrightSource to operate more like a traditional power generator and take advantage of peak prices in the evening. The increase in efficiency means BrightSource expects to produce the same amount of power as in its original project design, but using about 1,250 fewer acres of land.

Wind and solar power have both seen dramatic growth in the last five years, and as they come closer to the cost of fossil fuel alternatives, that trend is likely to continue. Managing the variability that comes with them will become more and more critical to continuing their growth trajectories—and storage of all sorts is a particularly attractive option because of the way it allows us to wring that much more energy out of the wind and sun.

Letha Tawney has worked for the UN Foundation’s International Bioenergy Initiative and currently works for World Resources Institute in Washington, D.C.

Nuclear Power’s Unsettled Future

By Ozzie Zehner

A year after the Fukushima Daiichi disaster in Japan, prospects for the nuclear power industry worldwide are far from certain. An energy policy scholar assesses the key economic, environmental, political, and psychological hinges on which nuclear power’s future now swings.

On March 16, 1979, Hollywood released a run-of-the-mill film that might have been rather unremarkable had the fictional plot not played out in real life while the movie was still in theaters. The China Syndrome, starring Jane Fonda, Jack Lemmon, and Michael Douglas, features a reporter who witnesses a nuclear power plant incident that power company executives subsequently attempt to cover up. Many days pass before the full extent of the meltdown surfaces. Just 12 days after The China Syndrome premiered, operators at the Unit 2 nuclear reactor at Three Mile Island, outside Harrisburg, Pennsylvania, received abnormally high temperature readings from the containment building’s sensors.

They ignored them.

Many hours passed before the operators realized that the facility they were standing in had entered into partial core meltdown. Power company executives attempted to trivialize the incident and many days passed before the full extent of the meltdown surfaced.

The China Syndrome went viral. When star Michael Douglas appeared on NBC’s The Tonight Show, host Johnny Carson quipped, “Boy, you sure have one hell of a publicity agent!” The staged nuclear leak filmed in the back lots of Hollywood and the real nuclear leak on Three Mile Island became conjoined, feeding into one another, each event becoming more vividly salient in the eyes of the public than if they had occurred independently. The intense media and political fallout from the leak at Three Mile Island, perhaps more than the leak itself, marked the abrupt end of the short history of nuclear power development in the United States.

Nuclear industry officials regularly accuse their critics of unfairly brandishing the showmanship of disaster as if it were characteristic of the entire industry while downplaying the solid safety record of most nuclear facilities. Indeed, meltdowns like the ones at Three Mile Island, Chernobyl, and Fukushima don’t occur as frequently as oil spills. But then, the risks that people associate with nuclear leaks are inordinately more frightening. As with oil spills, industry officials frame meltdowns as accidents, almost without exception. Alternatively, we could choose to frame nuclear power activities as highly unstable undertakings that are bound to expel radioactive secretions into the surrounding communities and landscapes over time.

For some concerned citizens, nuclear power is an opportunity for low-carbon and independent energy generation, while for others it’s a guarantee of nuclear proliferation and fallout risks. Greens in Germany, for instance, rail against nuclear power. Meanwhile, environmentalists in Britain frequently support it. In Japan, nuclear energy risks remained conceptually separated from the fallout horrors of World War II until the March 2011 meltdowns at Fukushima folded those perceptions together into the nation’s history.

The fallout at Fukushima contaminated a large swath of Japan. However, the fallout incurred by the nuclear industry itself was not limited to the island nation. The Fukushima meltdowns prompted nuclear cancellations across the globe.

To capably assess possible nuclear futures following this moment of crisis, we must first interrogate nuclear power’s past. The successes and failures of modern nuclear power facilities have not hinged on the kind of technical limitations that surround alternative energy technologies such as solar, wind, and biofuels. Nor have they been beleaguered by the threat of eventual resource scarcity associated with oil, gas, and coal. (There’s plenty of uranium fuel on our planet, both in the ground and in ubiquitous seawater.) Rather, the coming generations of nuclear power will pivot on something equally foreboding: those same rusty hinges upon which the nuclear establishment has swung for decades.

Hinge 1: An Enduring Dilemma

Travel 200 miles off the northeast coast of Norway into the Arctic Ocean toward the shores of Novaya Zemlya Island and you’ll see seals, walrus, and aquatic birds, as well as numerous species of fish, such as herring, cod, and pollack, much as you’d expect. But some of them will be swimming around something less anticipated—a curious fabricated object rising above the dark sea floor like an ancient monument, identifiable only by the number, “421.” Inside the corroded steel carapace lies a nuclear reactor. Why, we might wonder, has someone installed a nuclear reactor under the sea so far from civilization?

It wasn’t built there. It was dumped there—along with at least 15 other unwanted nuclear cores previously involved in reactor calamities.

These cores lie off the coasts of Norway, Russia, China, and Japan, as reported by the Russian government in 1993. Many of the reactors still contain their fuel rods. Resurfacing them and processing them in a more accepted manner would be risky and expensive. But even disposing of the world’s existing nuclear reactors that haven’t been tossed in the ocean won’t be a straightforward proposition. The largest problem is, of course, what to do with the radioactive waste.

The U.S. Department of Energy started to construct a repository in Yucca Mountain, Nevada, to store the nation’s spent reactor fuel. It was to accept spent fuel starting in 1998, but management problems, funding issues, and fierce resistance by the state of Nevada pushed the expected completion date back to 2020. President Obama called off the construction indefinitely, slashing funding in 2009 and finally withdrawing all support in 2011. If completed, the Yucca Mountain crypt would cost about $100 billion, according to the U.S. Department of Energy. Even then, it’s designed to house just 63,000 tons of spent fuel. More than that is already scattered around the country today, reports Frank von Hippel in a study for the U.S. Army’s Strategic Studies Institute.

In the meantime, utility companies have been storing waste in open fields surrounding their plants. A large nuclear power reactor typically discharges 20 to 30 tons of 12- to 15-foot-long spent fuel rods every year, totaling about 2,150 tons for the entire U.S. commercial nuclear industry annually. Taxpayers will end up paying billions to temporarily store this waste, according to the Congressional Research Service, which brings us to the next hinge of nuclear power’s future.
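A quick back-of-the-envelope check shows how these figures hang together. The per-reactor and industry-wide tonnages below are the ones quoted above; the implied reactor count is an inference, not a number from the article:

```python
# Sanity check on the article's spent-fuel figures.
per_reactor_low, per_reactor_high = 20, 30  # tons of spent fuel per reactor per year
industry_total = 2150                       # tons per year, entire U.S. commercial fleet

# Implied number of operating reactors, depending on per-reactor discharge.
implied_low = industry_total / per_reactor_high   # if every reactor discharges 30 tons
implied_high = industry_total / per_reactor_low   # if every reactor discharges 20 tons

print(f"{implied_low:.0f} to {implied_high:.0f} reactors")
# prints "72 to 108 reactors" -- consistent with the roughly 104 reactors
# operating in the United States at the time
```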

Hinge 2: Costly Secrets

Every single nuclear plant in the United States was built with taxpayer help. It costs hundreds of millions of dollars to carefully assemble a nuclear power plant. And it costs hundreds of millions to carefully disassemble one, as well.

In addition to direct expenditures, the nuclear industry incurs substantial capital write-offs through bankruptcies and stranded costs. This leaves the burden of their debt on others—a hidden and formidable set of often overlooked expenses. To make matters worse, economies of scale don’t seem to apply to the nuclear industry. Just the opposite, in fact. Historically, as the United States added more nuclear energy capacity to its arsenal, the incremental costs of further expanding capacity didn’t go down, as might be expected, but rather went up, reports energy policy scholar Gregory F. Nemet.

If the costs to taxpayers are so high and the risks are so extreme, why do nations continue to subsidize the nuclear industry? It’s partly because so many of the subsidies are hidden. Subsidy watchdog Doug Koplow (founder of Earth Track) points out, “Although the industry frequently points to its low operating costs as evidence of its market competitiveness, this economic structure is an artifact of large subsidies to capital, historical write-offs of capital, and ongoing subsidies to operating costs.”

The nuclear industry often loops taxpayers or local residents into accepting a variety of the financial obligations and risks arising from the planning, construction, and decommissioning of nuclear facilities, such as:

  • Accepting the risk of debt default.
  • Paying for cost overruns due to regulatory requirements or construction delays.
  • Dropping the requirement of insurance for potential damage to surrounding neighborhoods.
  • Taking on the burden of managing and storing high-level radioactive waste.
Since these handouts are less tangible and comprehensible to the public than cash payments, the nuclear industry and its investors have found it relatively easy to establish and renew them.

These costs may be worth it, some say, since nuclear power generation produces less carbon dioxide than fossil-fuel alternatives. It therefore promises to mitigate the potentially far greater risks of catastrophic climate change. For solar, wind, and biofuel power generation, the projected costs to mitigate a ton of CO2 are very high. Does nuclear fare any better?

Not really.

Assuming the most favorable scenario for nuclear power, where nuclear power generation directly offsets coal-fired base-load power, avoiding a metric ton of CO2 costs about $120 ($80 of which is paid by taxpayers). This figure does not include the costs of spent-fuel containment and the risks of proliferation and radiation exposure, burdens that are especially difficult to quantify. This is far more expensive than boosting equipment efficiency, streamlining control system management, improving cropping techniques, and many other competing proposals to mitigate climate change. Why spend $120 on nuclear to avoid a single ton of CO2 when we could spend the same money elsewhere to mitigate five tons, or even ten, without the risks? Nuclear energy will become a more plausible CO2 mitigation strategy after we have exhausted these other options, but we have a long way to go before that occurs.
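The “five tons elsewhere” comparison is simple division. In the sketch below, the $120-per-ton nuclear figure and the $80 taxpayer share are the article’s; the $24-per-ton cost for a competing efficiency measure is a hypothetical value chosen only to make the five-to-one comparison concrete:

```python
# Cost-effectiveness of CO2 mitigation options, dollars per metric ton avoided.
nuclear_cost_per_ton = 120.0     # article's figure for nuclear displacing coal
taxpayer_share = 80.0            # portion of that $120 paid by taxpayers
efficiency_cost_per_ton = 24.0   # hypothetical competing measure (illustrative)

budget = 1_000_000.0             # spend the same dollars either way
tons_via_nuclear = budget / nuclear_cost_per_ton
tons_via_efficiency = budget / efficiency_cost_per_ton

print(f"nuclear: {tons_via_nuclear:,.0f} tons avoided "
      f"({taxpayer_share / nuclear_cost_per_ton:.0%} taxpayer-funded)")
print(f"efficiency: {tons_via_efficiency:,.0f} tons avoided, "
      f"{tons_via_efficiency / tons_via_nuclear:.1f}x more per dollar")
```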

Hinge 3: Boom!

In 2008, the Nuclear Suppliers Group, an organization of 45 nations that patrols nuclear material trading and technology, agreed to bend its rules. The cartel allowed India access to uranium imports for the first time. When it announced the waiver, political sparring arose between those who identified the move as a step toward nuclear armament proliferation in the region and others who argued the freely flowing uranium represented a peaceful development of power, which stood to benefit millions of Indians. So who’s right? Is nuclear power a way to produce electricity or a path toward building deadly weapons?

In reality, it’s both.

A large part of the problem comes back to storage. How can we keep spent fuel away from those who might craft it into dirty bombs, disperse it with conventional weapons, or otherwise compromise its stability?

Another factor arises from the main alternative to storage: recycling. Reprocessing used fuel rods is expensive and leaves behind separated plutonium. Since plutonium is ideal for making bombs, many countries, including the United States, consider reprocessing a proliferation risk. Meanwhile, the United Kingdom, France, Russia, Japan, India, Switzerland, and Belgium reprocess their spent rods. They have separated a combined 250 metric tons of plutonium to date, more than enough to fuel a second Cold War.

Alternatively, fast-neutron “burner” reactors can run directly on spent fuel. This presumably sidesteps the plutonium issue, though such plants may not be commercially feasible to build. And they run hot. As a result, relying on them may merely trade in proliferation risks for meltdown risks.

In short, the often-cited separation between civilian nuclear power and military nuclear weaponry is problematic for several reasons. First, countries often end up desiring a bit of both—a little civilian electricity and a little nuclear weaponry. Political desires rarely congeal into exclusively one form or the other.

Second, peacetime and wartime nuclear technologies are intermingled. The facilities, the expertise, and even the waste products can easily cross the imagined division between peacetime and wartime nuclear enterprise.

Third, nation-states are in constant flux—politically, economically, and culturally. The motivations of a country today cannot be assumed to hold in the future. Even the Department of Energy acknowledges in one report that we can’t assume the United States will remain a contiguous nation-state throughout the time frame required to see nuclear waste through its decomposition.

Hinge 4: The Psychology of Fear

The Colorado River flows through one of the largest natural concentrations of radioactive surface rock on the planet, containing about a billion tons of uranium in all. The levels of radiation are 20 times the proposed limit for Yucca Mountain. Unlike the glass-encapsulated balls used to store radioactive waste, Colorado’s uranium is free ranging and water soluble.

“If the Yucca Mountain facility were at full capacity and all the waste leaked out of its glass containment immediately and managed to reach groundwater, the danger would still be twenty times less than that currently posed by natural uranium leaching into the Colorado River,” claims Berkeley physicist Richard Muller, author of Physics for Future Presidents (W. W. Norton, 2008).

Does this mean Coloradans are exposed to more radiation than the rest of us? Yes—along with those in Los Angeles who regularly bathe and drink water piped in from the Colorado River. Yet, the residents of Colorado and California, together with those of the nearby states—South Dakota, Utah, and New Mexico—experience the lowest cancer incidence rates anywhere in the contiguous United States, according to the National Cancer Institute. This goes to show how tricky it is to assess complex radiation risks.

According to early documentation of the 1986 Chernobyl nuclear reactor meltdown, the catastrophe exposed 30,000 people living near the reactor to about 45 rem of radiation each—about the same radiation level experienced by the survivors of the Hiroshima bomb, Muller observes. According to a statistical scale developed by the National Academy of Sciences, 45 rem should have raised cancer deaths of residents near Chernobyl from the naturally occurring average of 20% to about 21.8%—or roughly 500 excess fatalities.
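The excess-fatality figure follows directly from those percentages; this sketch simply reproduces the arithmetic behind the estimate:

```python
# Arithmetic behind the "roughly 500 excess fatalities" estimate.
exposed_population = 30_000        # residents near Chernobyl, per the article
baseline_death_rate = 0.20         # naturally occurring cancer death rate
post_exposure_rate = 0.218         # rate after ~45 rem, per the NAS scale cited

excess_deaths = exposed_population * (post_exposure_rate - baseline_death_rate)
print(round(excess_deaths))  # 540, i.e. "roughly 500 excess fatalities"
```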

Nevertheless, deaths are only one of many measures we might choose to evaluate harm, and even then, what counts as a radiation fatality in the first place is not so clear and has changed over time.

In 2005, the United Nations put the Chernobyl death toll at 4,000. And in 2010, newly released documents indicated that millions more were affected by the fallout and cleanup than originally thought, which in turn led to tens of thousands of deaths as well as hundreds of thousands of sick children born long after the initial meltdown.

To make matters more complex, the concrete sarcophagus entombing the reactor is now beginning to crack—a reminder that it is far too early to complete a history of Chernobyl and its aftermath. We will have to wait equally long to assess the fallout at Fukushima Daiichi, which is now, long after the tsunami, still posing new challenges to our conceptions of acceptable radioactive risk.

Humans won’t be able to calculate nuclear risks as long as humans have nukes. Perhaps it is this very uncertainty that evokes particularly salient forms of nuclear unease. The emotive impulse that wells up in response to free radiation is a more visceral phenomenon than one bound to the shackles of calculation. Fossil-fuel executives should consider themselves lucky that the arguably more dangerous fallout from fossil-fuel use, which kills tens of thousands of people year after year, has not elicited a corresponding fear in the minds of the citizenry.

As a society, we begrudgingly tolerate the fossil fuel–related risks of poisoning, explosions, asthma, habitat destruction, and spills, which regularly spawn tangible harms. Yet, when it comes to nuclear power, we slide our heads back on our necks and purse our lips with added skepticism. Whether the degree of our collective skepticism toward nuclear power is appropriate, or even justified, doesn’t really seem to matter. The public doesn’t need experts to tell them when to be terrified.

As simple as fear, and as complex as fear, public angst will remain a nagging bête noire of the nuclear industry. Is it possible that taxpayers and investors could spend billions of dollars constructing a new generation of nuclear reactors just to have a hysterical public again shut the whole operation down following the next (inevitable) mishap? Absolutely. As taxpayers subsidizing the nuclear industry, we must worry not only about the risk of a hypothetical nuclear event with tangible consequences but also about an event with imagined consequences, especially if it should strike during a slow news week.

The Path Forward

Should concerned citizens make it their job to push for nuclear power? Proponents argue that nuclear yields less CO2 than coal or natural gas. But this might not matter in the contemporary American context. There is little precedent to assume that nuclear power will necessarily displace appreciable numbers of coal plants. In fact, historically, just the opposite has occurred. As subsidized nuclear power increased, electricity supply correspondingly increased, retail prices eased, and greater numbers of energy customers demanded more cheap energy—a demand that Americans ultimately met by building additional coal-fired power plants, not fewer.

Without first addressing the underlying social, economic, and political nature of our energy consumption, can we assume that nuclear power, or any alternative production mechanism for that matter, will automatically displace fossil-fuel use? Should we address these underlying conditions before cheering on nuclear energy schemes? Will the risks of nuclear energy forever outweigh the benefits? Or will the scarcity of traditional fossil fuels eventually leave us with no other option?

Whether governments, taxpayers, politicians, and investors are willing to increasingly place nuclear wagers will, more than technical feasibility, become the central nuclear question over coming decades. Then again, someday we may find that our choices on the matter have dwindled. The more nuclear plants we establish today, the less choice we’ll have about lugging around their protracted risks tomorrow.

Ultimately, those in favor of nuclear power should not underestimate its inescapable hazards. Those against nuclear power should not underestimate its inevitable allure. These four hinges of nuclear power’s future may not tell us which way nuclear will swing. But they do clarify its range of motion.

About the Author

Ozzie Zehner is the author of the forthcoming Green Illusions: The Dirty Secrets of Clean Energy and the Future of Environmentalism (University of Nebraska Press, June 1, 2012; www.GreenIllusions.org). He is a visiting scholar at the University of California, Berkeley, and serves as the editor of Critical Environmentalism. E-mail OzzieZehner@berkeley.edu; Web site http://berkeley.academia.edu/OzzieZehner or http://OzzieZehner.com.

A World Wide Mind: The Coming Collective Telempathy

By Michael Chorost

The Internet plus humanity equals hyperorganism, a merger of man and machine that may result in global mindfulness.

This article is only available in the printed edition of the March-April 2012 Futurist.

Thriving in the Automated Economy

By Erik Brynjolfsson and Andrew McAfee

Two management experts show why labor’s race against automation will only be won if we partner with our machines. They advise government regulators not to stand in the way of human–machine innovation.

The legend of John Henry became popular in the late nineteenth century as the effects of the steam-powered Industrial Revolution were felt in every industry and job that relied heavily on human strength. It’s the story of a contest between a steam drill and John Henry, a powerful railroad worker, to see which of the two could bore the longer hole into solid rock. Henry wins this race against the machine but loses his life; his exertions cause his heart to burst. Humans never directly challenged the steam drill again.

This legend reflected popular unease at the time about the potential for technology to make human labor obsolete. But this is not at all what happened as the Industrial Revolution progressed. As steam power advanced and spread throughout industry, more human workers were needed, not fewer. They were needed not for their raw physical strength (as was the case with John Henry), but instead for other human skills: physical ones like locomotion, dexterity, coordination, and perception, and mental ones like communication, pattern matching, and creativity.

Ever since the Industrial Revolution, economists have reassured workers and the public that new jobs would be created even as old ones were eliminated. For more than 200 years, the economists were right. Despite massive automation of millions of jobs, more Americans had jobs at the end of each decade up through the end of the twentieth century. However, this empirical fact conceals a dirty secret. There is no economic law that says that everyone, or even most people, automatically benefit from technological progress.

Around 1811, just as anxiety about the Industrial Revolution was leading to worker uprisings (the Luddite riots), economist David Ricardo—who initially thought that advances in technology would benefit all—developed an abstract model that showed the possibility of technological unemployment. The basic idea was that, at some point, the equilibrium wages for workers might fall below the level needed for subsistence. A rational human would see no point in taking a job at a wage that low, so the worker would go unemployed and the work would be done by a machine instead.
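Ricardo’s argument can be reduced to a toy model. All the numbers and names below (`SUBSISTENCE_WAGE`, `worker_employed`) are invented for illustration; this is a sketch of the logic, not anything from Ricardo’s own formulation:

```python
# Toy sketch of Ricardo's technological-unemployment argument: competition
# with machines caps the market wage at the machine's cost, and a rational
# worker accepts a job only if that wage covers subsistence.
SUBSISTENCE_WAGE = 10.0  # hypothetical minimum daily wage a worker can live on

def worker_employed(machine_cost_per_day: float) -> bool:
    """Return True if a human worker stays employed at the equilibrium wage."""
    equilibrium_wage = machine_cost_per_day  # the machine caps what employers pay
    return equilibrium_wage >= SUBSISTENCE_WAGE

# Early machines are expensive, so the wage stays above subsistence.
print(worker_employed(machine_cost_per_day=25.0))  # True: worker keeps the job
# Automation gets cheap; the clearing wage falls below subsistence.
print(worker_employed(machine_cost_per_day=6.0))   # False: the machine does the work
```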

Of course, this was only an abstract model. But in his book A Farewell to Alms (Princeton University Press, 2007), economist Gregory Clark gives an eerie real-world example of this phenomenon in action:

    There was a type of employee at the beginning of the Industrial Revolution whose job and livelihood largely vanished in the early twentieth century. This was the horse. The population of working horses actually peaked in England long after the Industrial Revolution, in 1901, when 3.25 million were at work. Though they had been replaced by rail for long-distance haulage and by steam engines for driving machinery, they still plowed fields, hauled wagons and carriages short distances, pulled boats on the canals, toiled in the pits, and carried armies into battle. But the arrival of the internal combustion engine in the late nineteenth century rapidly displaced these workers, so that by 1924 there were fewer than two million. There was always a wage at which all these horses could have remained employed. But that wage was so low that it did not pay for their feed.

As technology continues to take on jobs and tasks that used to belong only to human workers, one can imagine a time in the future when more and more jobs are more cheaply done by machines than humans. And indeed, the wages of unskilled workers have trended downward for more than 30 years, at least in the United States.

We also now understand that technological unemployment can occur even when wages are still well above subsistence if there are downward rigidities that prevent them from falling as quickly as advances in technology reduce the costs of automation. Minimum wage laws, unemployment insurance, health benefits, prevailing wage laws, and long-term contracts—not to mention custom and psychology—make it difficult to rapidly reduce wages. Furthermore, employers will often find wage cuts damaging to morale. As the efficiency wage literature notes, such cuts can make employees unmotivated and cause companies to lose their best people.

But complete wage flexibility would be no panacea, either. Ever-falling wages for significant shares of the workforce are not exactly an appealing solution to the threat of technological unemployment. Aside from the damage this does to the living standards of the affected workers, lower pay only postpones the day of reckoning. Moore’s law is not a one-time blip but an accelerating exponential trend. Either way, technological unemployment is emerging as a real and persistent threat to middle-class employment.

When significant numbers of people see their standards of living fall despite an ever-growing economic pie, it threatens the social contract of the economy and even the social fabric of society. One instinctual response is to simply redistribute income to those who have been hurt. While redistribution ameliorates the material costs of inequality, and that’s not a bad thing, it doesn’t address the root of the problems the economy is facing. By itself, redistribution does nothing to make unemployed workers productive again. Furthermore, the value of gainful work is far more than the money earned. There is also the psychological value that almost all people place on doing something useful. Forced idleness is not the same as voluntary leisure. Franklin D. Roosevelt put this most eloquently:

    No country, however rich, can afford the waste of its human resources. Demoralization caused by vast unemployment is our greatest extravagance. Morally, it is the greatest menace to our social order.

Fortunately, if we make the right decisions today, we can still secure the gains that come from technological progress without sacrificing broad prosperity or the social contract. Here are some ideas.

Racing with the Machine

The John Henry legend shows us that, in many contexts, humans will eventually lose the head-to-head race against the machine. But the broader lesson of the first Industrial Revolution is more like the Indy 500 than John Henry: Economic progress comes from constant innovation in which people race with machines. Human and machine collaborate together in a race to produce more, to capture markets, and to beat other teams of humans and machines.

This lesson remains valid and instructive today as machines are winning head-to-head mental contests, not just physical ones. We observe that things get really interesting once this contest is over and people start racing with machines instead of against them.

The game of chess provides a great example. In 1997, Garry Kasparov, humanity’s most brilliant chess master, lost to Deep Blue, a $10 million specialized supercomputer programmed by a team from IBM. That was big news when it happened, but then developments in the world of chess went back to being reported on and read mainly by chess geeks. As a result, it’s not well known that the best chess player on the planet today is not a computer. Nor is it a human. The best chess player is a team of humans using computers.

After head-to-head matches between humans and computers became uninteresting (because the computers always won), the action moved to “freestyle” competitions, allowing any combination of people and machines. The overall winner in a recent freestyle tournament had neither the best human players nor the most powerful computers. As Kasparov writes, it instead consisted of

    a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. … Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.

    This pattern is true not only in chess but throughout the economy. In medicine, law, finance, retailing, manufacturing, and even scientific discovery, the key to winning the race is not to compete against machines but to compete with machines. While computers win at routine processing, repetitive arithmetic, and error-free consistency and are quickly getting better at complex communication and pattern matching, they lack intuition and creativity and are lost when asked to work even a little outside a predefined domain. Fortunately, humans are strongest exactly where computers are weak, creating a potentially beautiful partnership.

    As this partnership advances, we’re not too worried about computers holding up their end of the bargain. Technologists are doing an amazing job of making them ever faster, smaller, more energy efficient, and cheaper over time. We are confident that these trends will continue even as we move deeper into the twenty-first century.

    Digital progress, in fact, is so rapid and relentless that people and organizations are having a hard time keeping up. We want to focus on recommendations in two areas: improving the rate and quality of organizational innovation, and increasing human capital—ensuring that people have the skills they need to participate in today’s economy, and tomorrow’s. Making progress in these two areas will be the best way to allow human workers and institutions to race with machines, not against them.

    Fostering Organizational Innovation

    How can we implement a “race with machines” strategy? The solution is organizational innovation: co-inventing new organizational structures, processes, and business models that leverage ever-advancing technology and human skills. Economist Joseph Schumpeter described this as a process of “creative destruction” and gave entrepreneurs the central role in the development and propagation of the necessary innovations. Entrepreneurs reap rich rewards because what they do, when they do it well, is both incredibly valuable and far too rare.

    To put it another way, the stagnation of median wages and polarization of job growth is an opportunity for creative entrepreneurs. They can develop new business models that combine the swelling numbers of mid-skilled workers with ever-cheaper technology to create value. There has never been a worse time to be competing against machines, but there has never been a better time to be a talented entrepreneur.

    Entrepreneurial energy in America’s tech sector drove the most visible reinvention of the economy. Google, Facebook, Apple, and Amazon, among others, have created hundreds of billions of dollars of shareholder value by creating whole new product categories, ecosystems, and even industries. New platforms leverage technology to create marketplaces that address the employment crisis by bringing together machines and human skills in new and unexpected ways:

    • eBay and Amazon Marketplace have spurred more than 600,000 people to earn their livings by dreaming up new, improved, or simply different or cheaper products for a worldwide customer base. This “long tail” of new products offers enormous consumer value and is a rapidly growing segment of the economy.
    • Apple’s App Store and Google’s Android Marketplace make it easy for people with ideas for mobile applications to create and distribute them.
    • Threadless lets people create and sell designs for T-shirts. Amazon’s Mechanical Turk makes it easy to find cheap labor to do a breathtaking array of simple, well-defined tasks. Kickstarter flips this model on its head and helps designers and creative artists find sponsors for their projects.
    • Heartland Robotics plans to provide cheap robots-in-a-box that make it possible for small-business owners to quickly set up their own highly automated factory, dramatically reducing the costs and increasing the flexibility of manufacturing.

    Collectively, these new businesses directly create millions of new jobs. Some also serve as platforms for thousands of other entrepreneurs. Few of those smaller ventures may ever become billion-dollar businesses themselves, but together they can do more to create jobs and wealth than even the most successful single venture.

    As technology makes it possible for more people to start enterprises on a national or even global scale, more people will be in the position to earn superstar compensation. While winner-take-all economics can lead to vastly disproportionate rewards to the top performer in each market, the key is that there is no automatic ceiling to the number of different markets that can be created. In principle, tens of millions of people could each be a leading performer—even the top expert—in tens of millions of distinct, value-creating fields. Think of them as micro-experts for macro-markets. Technology scholar Thomas Malone calls this the age of hyperspecialization. Digital technologies make it possible to scale that expertise so that we all benefit from those talents and creativity.

    The Limits to Organizational Innovation and Human Capital Investment

    We’re encouraged by the emerging opportunities to combine digital, organizational, and human capital to create wealth: Technology, entrepreneurship, and education are an extraordinarily powerful combination. But we want to stress that even this combination cannot solve all our problems.

    First, not everyone can or should be an entrepreneur, and not everyone can or should spend 16 or more years in school. Second, there are limits to the power of American entrepreneurship for job creation. A 2011 research report for the Kauffman Foundation by E. J. Reedy and Robert Litan found that, even though the total number of new businesses founded annually in the United States has remained largely steady, the number of people they employ at start-up has been declining in recent years. This could be because modern business technology lets a company start leaner and stay leaner as it grows.

    Third, and most importantly, even when humans race with machines instead of against them, there are still winners and losers. Some people, perhaps even many, may continue to see their incomes stagnate or shrink and their jobs vanish while overall growth continues.

    We focus our recommendations on creating ways for everyone to contribute productively to the economy. As technology continues to race ahead, it can widen the gaps between the swift and the slow on many dimensions. Organizational and institutional innovations can recombine human capital with machines to create broad-based productivity growth.

    Toward an Agenda for Action

    The following solutions involve accelerating organizational innovation and human capital creation to keep pace with technology. There are at least 19 specific steps we can take to these ends in the United States.

    Education

    1. Invest in education. Start by simply paying teachers more so that more of the best and the brightest sign up for this profession, as they do in many other nations. American teachers make 40% less than the average college graduate. Teachers are some of America’s most important wealth creators. Increasing the quantity and quality of skilled labor provides a double win by boosting economic growth and reducing income inequality.

    2. Hold teachers accountable for performance by, for example, eliminating tenure. This should be part of the bargain for higher pay.

    3. Separate student instruction from testing and certification. Focus schooling more on verifiable outcomes and measurable performance and less on signaling time, effort, or prestige.

    4. Keep K-12 students in classrooms for more hours. One reason American students lag behind international competitors is that they simply receive about one month less instruction per year.

    5. Increase the number of skilled workers in the United States by encouraging skilled immigrants. Offer green cards to foreign students when they complete advanced degrees, especially in science and engineering subjects at approved universities. Expand the H-1B visa program. Skilled workers in America often create more value when working with other skilled workers. Bringing them together can increase worldwide innovation and growth.

    Entrepreneurship

    6. Teach entrepreneurship as a skill not just in elite business schools but throughout higher education. Foster a broader class of mid-tech, middle-class entrepreneurs by training them in the fundamentals of business creation and management.

    7. Boost entrepreneurship in America by creating a category of founders’ visas for entrepreneurs, like those in Canada and other countries.

    8. Create clearinghouses and databases to facilitate the creation and dissemination of templates for new businesses. A set of standardized packages for start-ups can smooth the path for new entrepreneurs in many industries. These can range from franchise opportunities to digital “cookbooks” that provide the skeleton structure for an operation. Job training should be supplemented with entrepreneurship guidance as the nature of work evolves.

    9. Aggressively lower the governmental barriers to business creation. In too many industries, elaborate regulatory approvals are needed from multiple agencies at multiple levels of government. These too often have the implicit goal of preserving rents of existing business owners at the expense of new businesses and their employees.

    Investment

    10. Invest to upgrade the country’s communications and transportation infrastructure. The American Society of Civil Engineers gives a grade of D to the overall infrastructure in the United States at present. Improving it will bring productivity benefits by facilitating the flow and mixing of ideas, people, and technologies. It will also put many people to work directly. You don’t have to be an ardent Keynesian to believe that the best time to make these investments is when there is plenty of slack in the labor market.

    11. Increase funding for basic research and for preeminent government R&D institutions, including the National Science Foundation, the National Institutes of Health, and the Defense Advanced Research Projects Agency (DARPA), with a renewed focus on intangible assets and business innovation. Like other forms of basic research, these investments are often underfunded by private investors because of spillovers: benefits that accrue to people or companies far removed from the original innovator.

    Laws, Regulations, and Taxes

    12. Preserve the relative flexibility of American labor markets by resisting efforts to regulate hiring and firing. Banning layoffs paradoxically can lower employment by making it riskier for firms to hire in the first place, especially if they are experimenting with new products or business models.

    13. Make it comparatively more attractive to hire a person than to buy more technology through incentives, rather than regulation. This can be done by, among other things, decreasing employer payroll taxes and providing subsidies or tax breaks for employing people who have been out of work for a long time. Taxes on congestion and pollution can more than make up for the reduced labor taxes.

    14. Decouple benefits from jobs to increase flexibility and dynamism. Tying health care and other mandated benefits to jobs makes it harder for people to move to new jobs or to quit and start new businesses. For instance, many a potential entrepreneur has been blocked by the need to maintain health insurance. Denmark and the Netherlands have led the way here.

    15. Don’t rush to regulate new network businesses. Some observers feel that “crowdsourcing” businesses like Amazon’s Mechanical Turk, which allows a global pool of workers to bid online for temporary jobs or tasks, exploit their members, who should therefore be better protected. However, especially in this early, experimental period, the developers of these innovative platforms should be given maximum freedom to innovate and experiment, and their members’ freely made decisions to participate should be honored, not overturned.

    16. Eliminate or reduce the massive home mortgage subsidy. This costs more than $130 billion per year, which would do much more for growth if allocated to research or education. While home ownership has many laudable benefits, it likely reduces labor mobility and economic flexibility at a time when the economy increasingly needs both.

    17. Reduce the large implicit and explicit subsidies to financial services. This sector attracts a disproportionate number of the best and the brightest minds and technologies, in part because the government effectively guarantees “too big to fail” institutions.

    18. Reform the patent system. Not only does it take years to issue good patents due to the backlog and shortage of qualified examiners, but too many low-quality patents are issued, clogging our courts. As a result, patent trolls are chilling innovation rather than encouraging it.

    19. Shorten, rather than lengthen, copyright periods and increase the flexibility of fair use. Copyright covers too much digital content. Rather than encouraging innovation, as specified in the Constitution, excessive restrictions like the Sonny Bono Copyright Term Extension Act inhibit mixing and matching of content and using it creatively in new ways.

    These suggestions are only the tip of the iceberg of a broader transformation that we need to support, not only to mitigate technological unemployment and inequality, but also to fulfill the potential for new technologies to grow the economy and create broad-based value. We are not putting forth a complete blueprint for the next economy—that task is inherently impossible. Instead, we seek to initiate a conversation. That conversation will be successful if we accurately diagnose the mismatch between accelerating technologies and stagnant organizations and skills.

    Successful economies in the twenty-first century will be those that develop the best ways to foster organizational innovation and skill development, and we invite our readers to contribute to that agenda.

    About the Authors

    Erik Brynjolfsson is a professor at the MIT Sloan School of Management, director of the MIT Center for Digital Business, chairman of the Sloan Management Review, a research associate at the National Bureau of Economic Research, and co-author of Wired for Innovation: How IT Is Reshaping the Economy.

    Andrew McAfee is a principal research scientist and associate director at the MIT Center for Digital Business at the Sloan School of Management. He is the author of Enterprise 2.0: New Collaborative Tools for Your Organization’s Toughest Challenges.

    This article was excerpted with permission from their book Race Against the Machine: How the Digital Revolution Is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy (Digital Frontier Press, 2011).

    Hard at Work in the Jobless Future

    By James H. Lee

    Jobs are disappearing, but there’s still a future for work. An investment manager looks at how automation and information technology are changing the economic landscape and forcing workers to forge new career paths beyond outdated ideas about permanent employment.

    Futurists have long been following the impacts of automation on jobs—not just in manufacturing, but increasingly in white-collar work as well. Jobs in financial services, for example, are being lost to software algorithms and intelligent computers.*

    Terms used for this phenomenon include “off-peopling” and “othersourcing.” As Jared Weiner of Weiner, Edrich, Brown recently observed, “Those jobs are not going to return—they can be done more efficiently and error-free by intelligent software.”

    In the investment business (in which I work), we are seeing the replacement of financial analysts with quantitative analytic systems, and floor traders with trading algorithms. Mutual funds and traditional portfolio managers now compete against ETFs (exchange-traded funds), many of which offer completely automated strategies.

    Industries that undergo this transformation don’t disappear, but the number of jobs that they support changes drastically. Consider the business of farming, which employed half the population in the early 1900s but now provides just 3% of all jobs. The United States is still a huge exporter of food; it is simply a far more efficient food producer now in terms of total output per farm worker.

    In an ideal world, jobs would be plentiful, competitive, and well paying. Most job opportunities have two of these qualities but not all three. Medicine, law, and finance offer jobs that are both competitive and well paid. Retail, hospitality, and personal services are competitive but pay low wages. Unions often ensure that jobs pay well and are plentiful, only to later find that those jobs and related industries are no longer competitive.

    Since 1970, manufacturing jobs as a percentage of total employment have declined from a quarter of payrolls to less than 10%. Some of this decline is from outsourcing; some is a result of othersourcing. Those looking for a rebound in manufacturing jobs will likely be disappointed. These jobs will probably not be replaced—not in the United States and possibly not overseas, either.

    This is all a part of the transition toward a postindustrial economy.

    Jeff Dachis, Internet consulting legend and founder of Razorfish, coined the phrase “everything that can be digital, will be.” To the extent that the world becomes more digital, it will also become more global. To the extent that the economy remains physical, business may become more local.

    The question is, what is the future of work, and what can we do about it? Here are some ideas.

    The Future of Work: Emerging Trends

    Work will always be about finding what other people want and need, and then creating practical solutions to fulfill those desires. Our basic assumptions about how work gets done are what’s changing. It’s less about having a fixed location and schedule and more about thoughtful and engaged activity. Increasingly, this inspiration can happen anytime, anyplace.

    There is a blurring of distinctions among work, play, and professional development. The ways we measure productivity will focus less on time spent and more on the value of the ideas and the quality of the output. People are also going to have a much better awareness of when good work is being done.

    The old model of work provided an enormous level of predictability. In previous eras, people had a sense of job security and knew how much they would earn on a monthly basis. This gave people a certain sense of confidence in their ability to maintain large amounts of debt. The consumer economy thrived on this system for more than half a century. Location-based and formal jobs will continue to exist, of course, but these will become smaller slices of the overall economy.

    The new trends for the workplace have significantly less built-in certainty. We will all need to rethink, redefine, and broaden our sources of economic security. To the extent that we develop a broader range of skills, we will also become more resilient and better able to adapt to change.

    Finally, we can expect that people will redefine what they truly need in a physical sense and find better ways of fulfilling their needs. This involves sharing and making smarter use of the assets we already have. Businesses are doing the same. The outcome could be an economy that balances the needs between economic efficiency and human values.

    Multitasking Careers

    In Escape from Cubicle Nation (Berkley Trade, 2010), career coach Pamela Slim encourages corporate employees to start a “side hustle” to try out new business ideas. She also recommends having a side hustle as a backup plan in the event of job loss. This strategy is not just for corporate types; Slim says that “it can also be a great backup for small business owners affected by shifting markets and slow sales.”

    She says that an ideal side hustle is a money-making activity that is doable and enjoyable, can generate quick cash flow, and does not require significant investment. Examples she cites include businesses such as Web design, massage, tax preparation, photography, and personal training.

    The new norm is for people to maintain and develop skill sets in multiple simultaneous careers. In this environment, the ability to learn is something of a survival skill. Education never stops, and the line between working and learning becomes increasingly blurred.

    After getting her PhD in gastrointestinal medicine, Helen Samson Mullen spent years working for a pharmaceutical company—first as a medical researcher and then as an independent consultant. More recently, she has been getting certifications for her career transition as a life coach. Clinical project management is now her “side hustle” to bring in cash flow while she builds her coaching business. Meanwhile, she’s also writing a book and managing her own Web site. Even with so many things happening at once, Helen told me that “life is so much less crazy now than it was when I was consulting. I was always searching for life balance and now feel like I’m moving into harmony.” Her husband, Rob, is managing some interesting career shifts of his own, making a lateral move from a 22-year career in pharmaceuticals to starting his own insurance agency with State Farm.

    Fixed hours, fixed location, and fixed jobs are quickly becoming a thing of the past in many industries, as opportunities become more fluid and transient. The 40-hour workweek is becoming less relevant as we see more subcontractors, temps, freelancers, and self-employed workers. The U.S. Government Accountability Office estimates that these “contingent workers” now make up a third of the workforce. Uncertain economics make long-term employment contracts less realistic, while improvements in communications make it easier to subcontract even complex jobs to knowledge workers who log in from airports, home offices, and coffee shops.

    Results-Only Work Environments

    Imagine an office where meetings are optional. Nobody talks about how many hours they worked last week. People have an unlimited amount of vacation and paid time off. Work is done anytime and anywhere, based entirely on individual needs and preferences. Finally, employees at all levels are encouraged to stop doing anything that is a waste of their time, their customers’ time, or the company’s time.

    There is a catch: Quality work needs to be completed on schedule and within budget.

    Sound like a radical utopia? These are all basic principles of the Results-Only Work Environment (ROWE), pioneered by Cali Ressler and Jody Thompson while they were human resource managers at Best Buy.

    It’s “management by objective” taken to a whole new level, Ressler and Thompson write in their book, Why Work Sucks and How to Fix It (Portfolio, 2008).

    Best Buy’s headquarters was one of the first offices to implement ROWE a little over five years ago, according to Ressler and Thompson. The movement is small but growing: The Gap Outlet, Valspar, and a number of Minneapolis-based municipal departments have implemented the strategy, and some 10,000 employees now work in some form of ROWE.

    Employees don’t even know if they are working fewer hours (they no longer count them), but firms that have adopted the practice have often shown significant improvements in productivity.

    “Thanks to ROWE, people at Best Buy are happier with their lives and their work,” Ressler and Thompson write in their book. “The company has benefited, too, with increases in productivity averaging 35% and sharp decreases in voluntary turnover rates, as much as 90% in some divisions.”

    Interestingly enough, the process tends to reveal workers who do not produce results, causing involuntary terminations to creep upward. ROWE managers learn how to treat their employees like responsible grown-ups. There is no time tracking or micromanagement.

    “The funny thing is that once employees experience a ROWE they don’t want to work any other way,” they write. “So employees give back. They get smarter about their work because they want to make sure they get results. They know that if they can deliver results then in exchange they will get trust and control over their time.”

    Co-Working

    There are now more alternatives to either working at home alone or being part of a much larger office. Co-working spaces are shared work facilities where people can get together in an officelike environment while telecommuting or starting up new businesses.

    “We provide space and opportunity for people that don’t have it,” Wes Garnett, founder of The coIN Loft, a co-working space in Wilmington, Delaware, told me.

    Getting office space in the traditional sense can be an expensive proposition—with multiyear leases, renovation costs, and monthly utilities. “For $200 [a month], you can have access to presentation facilities, a conference room, and a dedicated place to work.” The coIN Loft also offers day rates for people with less-frequent space needs.

    According to Garnett, more people are going to co-working spaces as “community centers for people with ideas and entrepreneurial inclinations.” He explains that co-working spaces provide a physical proximity that allows people to develop natural networks and exchange ideas on projects.

    “We all know that we’re happier and more productive together, than alone” is the motto for nearby Independents Hall in Philadelphia.

    Co-working visas enable people to choose from among 200 locations across the United States and in three dozen other countries.

    Silicon Colleagues

    Expert systems such as IBM’s Watson are now “smarter” than real people—at least on the game show Jeopardy. It was a moment in television history when Watson decimated previous human champions Ken Jennings and Brad Rutter on trivia questions, which included categories such as “Chicks Dig Me.”

    IBM’s Watson is a software-based knowledge system with unusually robust natural-language processing. IBM has stated that its initial markets for the technology are health care, financial services, and customer relations. In the beginning, these systems will work side by side with human agents, whispering in their ears to prompt them with appropriate questions and answers that they might not have considered otherwise. In the next decade, they may replace people altogether in jobs that consist of handling simple requests for information.

    “It’s a way for America to get back its call centers,” futurist Garry Golden told me. He sees such expert systems reaching the workplace in the next two to three years.

    Opting Out

    A changing economy is causing people to rethink their priorities. In a recent survey by Ogilvy & Mather, 76% of respondents reported that they would rather spend more time with their families than make more money.

    Similarly, the Associated Press has reported that less than half of all Americans say they are happy with their jobs.

    Given the stresses of the modern workplace, it is not surprising that more people are simply “opting out” of the workforce. Since 1998, there has been a slight decline in the labor force participation rate—about 5% for men and 3% for women. This trend may accelerate once extensions to unemployment benefits expire. Some of these people are joining the DIY movement, and others are becoming homesteaders.

    A shift back toward one-income households can happen when the costs of taxes, commuting, and child care consume a large portion of earnings. People who opt out are not considered unemployed, as they are no longer actively looking for paid work. Their focus often reflects a shift in values toward other activities, such as raising kids, volunteer work, or living simply. This type of lifestyle is often precarious and carries risks, two factors that can be mitigated through public policy that extends the social safety net to better cover informal working as well as formal employment. But this way of life also carries rewards and is becoming a more and more attractive option for millions of people.

    The Future of Work, Personified

    Justin Caggiano is a laid-back rock-climbing guide whom my wife and I met during our last vacation in the red canyons of Moab, Utah. He’s also been guiding rafters, climbers, and hikers for the past six years.

    We watched Justin scramble up the side of a hundred-foot natural wall called The Ice Cream Parlor, a nearby climbing destination that earned its name by staying shaded and cool in the morning despite the surrounding desert heat. His wiry frame allowed him to navigate the canyon cliffs and set up the safety ropes in a fraction of the time that it took us to make the same climb later that day.

    Justin’s rock-climbing skills easily translated into work as an arborist during the off-season, climbing up trees and then cutting them from the top down to prevent damage to nearby buildings. Since graduating from college six years ago, he has also worked as an artisanal baker, a carpenter, and a house painter. This makes him something of a down-to-earth renaissance man.

    His advice is “to be as flexible as you can—and work your tail off.”

    It’s an itinerant lifestyle for Justin, who frequently changes his location based on the season, work, and nearby climbing opportunities. Rather than committing to a single employer, he pieces together jobs wherever he can find them. His easygoing personality enables him to connect with people and find new opportunities when they become available.

    In the winter, he planned to stay with a friend who is building a house, trading help with carpentry and wiring in exchange for free rent. He’s been living on a shoestring for a while now, putting away money every year. Longer term, he’d like to develop all of the skills that he needs to build his own home and then pay for land and materials entirely with savings from his bank account. He plans to grow fruit trees and become somewhat self-sufficient. After that time, he says, “I’ll work when I’m needed, and live the debt-free, low-cost lifestyle when I’m older.”

    Our concept of work is getting reworked. A career used to be a ladder of opportunities within a single company. For the postwar generation, the concept of “lifetime employment” was a realistic expectation. My father worked for 40 years at DuPont as a research scientist and spent almost all of that time at a sprawling complex called the Experimental Station. Most of my friends’ parents had similar careers. Over time, they were gradually promoted and moved up the corporate ladder. At best, it was a steady progression. At worst, they found their careers stuck in neutral.

    The baby boomers had a somewhat different career trajectory. They still managed to have a single career, but it more closely resembled a lattice than a ladder. After working for an employer for five to 10 years, they might find a better opportunity elsewhere and continue their climb. The successful ones cultivated networks at related businesses and continually found better opportunities for themselves.

    The career path for younger generations more closely resembles a patchwork quilt, as people attempt to stitch together multiple jobs into something that is flexible and works for them. In today’s environment, they sometimes can’t find a single job that is big enough to cover all of their expenses, so, like Justin, they find themselves working multiple jobs simultaneously. Some of these jobs might match and be complementary to existing skills, while others may be completely unrelated.

    The future of work is less secure and less stable than it was. For many of us, our notions of employment were formed by the labor environment of the later twentieth century. But the reality of jobless working may be more in line with our values. If we can build support systems that benefit workers wherever they are, formally employed or not, then we may be able to view the changes sweeping across society as opportunities to return to a fuller, more genuine, and more honest way of life.

    Justin’s lesson applies to all of us: There’s a difference between earning a living and making a life.

    About the Author

    James H. Lee is an investment manager in Wilmington, Delaware, and a blogger for THE FUTURIST magazine (www.wfs.org/blogs/james-lee). He’s currently writing a book, tentatively titled Resilience: An Upbeat Guide to the End of the World, based, in part, on the ideas described above. Contact him at lee.advisor@gmail.com.

    * The word "robotics" was removed from the printed edition. We were unable to find data to show job losses in the financial sector due to robotics.

    Rethinking "Return on Investment": What We Really Need to Invest In

    By Timothy C. Mack

    Innovation means more than inventing new products for the world’s growing populations to consume. Innovation also means solving the problems created by consumption. By investing in sustainable innovation and creativity now, we will enhance our future returns.

    A leading challenge for the twenty-first century is how to enhance innovation and creativity in the midst of a global recession. While this area of concern might seem to focus largely on technology and business issues, it is also tied to enhancing social development, academic vitality, political stability, and the standard of living worldwide—and doing so sustainably.

    By several measures, the health of global business has actually been quite robust in recent years, especially for the largest multinational corporations in areas like energy. Large companies have enough resources to weather the storms of economic, market, and even regulatory reversals. But smaller, more innovative enterprises around the globe are hit much harder by the downturns in the world economy and by diminishing returns on inputs.

    We see “diminishing returns” as global economic expansion generates its own dysfunctions. For example, more consumption leads to more waste products, which then lead to negative impacts like pollution and climate change. In another context, we see diminishing returns when we recognize that “working bigger” is not always working smarter, and many view the increasing acquisition of smaller firms by multinational entities as undermining productivity worldwide.

    Basic examples of diminishing returns include too much fertilizer on a single field, too much additional seed without more available land to plant it in, and too many added tools without enough added workers or vice versa. In other words, diminished returns result from increasing one factor of economic production without being able to change other parts of an economic system, to keep things in balance.

    The global problems we are wrestling with today are largely due to system imbalances of various types, and to the lack of a holistic systems approach overall. We must add creativity and innovation to the economic system, so as to enhance competitiveness system-wide. By innovation, I mean the ability to imagine, reconcile, and combine ideas that will improve economic health and prosperity throughout the world.

    The law of diminishing returns suggests that, when complexity and scale increase past a certain point, returns will ultimately plateau and then plummet. This dynamic is often masked by the fog of ever-more-complex partnerships or ever-increasing debt, which necessarily have built-in problems that also tend to build up. These problems in turn create numerous delays and feedback loops, which alter the ongoing operation of those systems—for better or worse.

    For example, one result of large-scale mergers and consolidations is to concentrate risk on a scale never possible before. The underside of this global interconnectedness is that the individual “dominoes” within that system become increasingly aligned. And as in a crowded forest, a single falling tree can bring down far too many others.

    Responses to this perceived problem often aim at classic sustainability solutions. I prefer to look at sustainable development in a broader context and to seek solutions not simply for the environment, but for social and political dilemmas, as well.

    The term “diminishing returns” does not always imply a negative assessment of past, present, or future return on investment (ROI) strategies. Diminishing returns can affect any investment that involves financial, intellectual, or industrial resources. In what may come to be recognized as a new normal, it also refers to strategies for the future that rethink the traditional concept of ROI and levels of adequacy—that is, rethinking ways to assess systemic balance. To put it more assertively, a total rethinking of return-on-investment strategies could be in order.

    An MIT study on innovation notes that, over the past 50 years, the vast majority of innovations have come from small organizations that actually receive little financial support from institutional investors. Accordingly, an increase in early-stage investment in smaller, innovative enterprises might buoy up the ailing global economy. However, due to the risk-averse nature of institutional investor groups, this is not likely to occur widely.

    So what are our options? I believe we need to focus on a range of initiatives to promote innovation, including innovation in education and training. This would include education/private sector strategic partnerships that promote creativity and cultivate “a taste for risk.”

    Local approaches that adjust for country-by-country variations are frequently more productive than one-size-fits-all policies. In France, local history favors the use of cooperatives (both manufacturing- and services-based) and a focus on improving local production, shortening channels between producer and consumer, and introducing innovation tax credits. This might be termed bringing economic prosperity through rebalancing the economic food chain.

    John Holland at the Santa Fe Institute defines the concept of emergence as “much coming from little.” Similarly, as we begin to rethink return on investment, there are three common strategies for attaining much from little. The first approach is to focus on increasing efficiency, ideally producing more with fewer resources. A good example of this is Moore’s law in electronics, where diminishing costs and increasing productivity have gone hand in hand.

    A second approach focuses on consistency, which concentrates on improving quality (versus just turning out more of the same product in the same manner) and emphasizes predictability and repeatability.

    The final approach is the path of sufficiency, which involves rethinking the elements involved and often results in less coming to be seen as more. This could include the march of the Green Movement, with its concentration on economy, ecology, and appropriate downsizing. It could also include the industrial ecology movement—a convergent multidisciplinary approach to building integrated and sustainable industrial systems. It includes reinterpreting former waste streams as “repurposed assets” that may be utilized as raw material for an entirely different industry. A recent example is waste carbon or plastic scraps being used in nanotechnology for construction of fullerene nanotubes.

    Still another example of Green approaches is resource decoupling. This involves using fewer resources per unit of economic output while also reducing the environmental impacts of resource use and other economic activities. The positive impact of resource decoupling stands in sharp contrast to the ecological degradation and resource scarcities that currently make the problems of failing financial markets and economic recession even worse.

    Looking back over the course of the twentieth century, we find that relative resource balances actually remained fairly equitable. For example, while world gross domestic product rose by a factor of 23 between 1901 and 2000, global resource use only rose by a factor of eight. This was partly the result of improved technologies, including those enabling increased energy efficiency. But a balance of this sort seems far less likely for the unfolding twenty-first century.
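    The scale of that twentieth-century decoupling is easy to work out from the two factors just cited. Here is a back-of-the-envelope sketch; the factors of 23 and 8 come from the text above, and the derived intensity figure is simple arithmetic offered only as illustration:

```python
# Resource intensity = resources consumed per unit of economic output.
# Over 1901-2000, world GDP grew by a factor of 23 while global resource
# use grew by a factor of 8 (figures cited in the text).
gdp_growth = 23.0
resource_growth = 8.0

# Resource intensity at the century's end, relative to 1901:
intensity_ratio = resource_growth / gdp_growth
print(f"Resource use per unit of GDP fell to {intensity_ratio:.0%} of its 1901 level")
```

    In other words, each unit of output at century’s end required roughly a third of the resources it did in 1901, which is the sense in which the balance remained “fairly equitable.”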

    Looking forward, the UN Environmental Program’s International Resource Panel projects that world consumption of natural resources could triple by 2050, far exceeding sustainable levels. In a 2011 report, the Panel called for the general realization that prosperity and well-being do not depend on consumption of ever-greater amounts of resources. Instead, we need to recognize that the trade-off between environmental negatives and economic positives can be avoided. In other words, low-carbon, resource-efficient approaches can stimulate economic growth, increase employment, and reduce poverty, while still keeping the human footprint within sustainable limits.

    Of course, successful decoupling will require significant changes in national government policies, corporate behavior worldwide, and the consumption patterns of the global public. It is also clear that any one-size-fits-all approach is unlikely to succeed, given the range of economic levels and diverse national cultures worldwide.

    One of the widest extremes between countries and among groups within countries remains the consumption of raw materials: The richest 20% of global population is responsible for 80% or more of consumption, while the poorest 20% consumes closer to 2% of the total. This puts the poorer groups at a disadvantage in the search for the energy resources to support creative activities. New resources can either be found or created, but in the lowest economic ranges, people lack the energy needed for effective search or creative activity. And so these imbalances continue or increase.

    Any process that reduces inputs and/or increases outputs will require changes in public policy and public opinion. For this to happen, the training and education needed must be both subtle and sophisticated. The goal is to bring improved resource productivity (e.g., through materials substitution) into balance with the demands of rising affluence. This can only be accomplished with political will and coordination among governments (national and supranational), nongovernmental organizations (with their growing influence), and private industry/corporations.

    At times, it seems too much to hope that political influence will ever be spread equitably—or even that enough private citizens will become involved in public policy to substantially affect outcomes. But the expansion of “Occupy ____” movements in the United States and growth of bilateral e-government capabilities worldwide all suggest at least that the number of active stakeholders is likely to continue to increase.

    Over the past decade, the concept of hybrid organizations has become more popular, with government, NGOs, and private-sector entities creating new configurations. But progress has been much slower than was initially expected. Those who assume leadership of such hybrid organizations will naturally guide and shape their own agendas, but the real challenge for any organization engaged in boundary spanning is how to get the now-wider range of stakeholders to cooperate in reality versus merely for public relations purposes. Authentic partnerships are long-lasting, because they provide tangible benefits for the majority of those involved.

    Enhancing Innovation

    Innovation arises from applying creative approaches to problems. This is true across the economic, technological, logistic, political, and social arenas. The most radical and revolutionary technology innovations tend to emerge from formalized R&D, while less-dramatic incremental and pragmatic innovations may emerge from day-to-day operations.

    Innovation can be seen as either supply-pushed (based on new technological possibilities) or demand-led (based on social needs and user requirements). But innovation also arises through a complex set of processes that link many different players together. This includes not only developers and users, but also consultancies, standards bodies, governments, and NGOs.

    It is tempting to view innovation—particularly technological innovation—as a panacea. This is not always the case. While technological development continues to produce solutions, new problems continue to emerge—global warming and related economic problems, resource depletion, unmanaged waste products of consumption, population growth, and so on.

    New technologies can only do so much to forge solutions or drive needed change: Building electric cars will not reduce pollution if no one buys them or if no network of convenient recharging stations exists to keep them running. Therefore, government regulation, marketplace dynamics, and public willingness to change their behaviors are also integral parts of the innovation formula. South Korea, for example, is currently building a smart grid that is expected to support 30,000 electric vehicle charging stations by 2030. Focused applications appear to be the most productive approach: Correctly identify a problem, then solve it.

    Much successful innovation occurs at the boundaries of organizations and industries, where legacy restrictions are fewer and the problems and needs of users can be linked with the potential of technologies in a creative process that challenges both. In such networks of innovation, communities of users can help further develop technologies and reinvent their social meaning with tools like open-source software.

    Innovation in Education and Learning

    The goal must be to develop changes that are both relevant and valuable to users. One example of this process in action is to develop learning/lesson plans that are custom-tailored to the abilities of individual students and responsive to stakeholder input. An articulate voice for individualization is Benjamin Bloom, who divides educational skill sets into cognitive, affective, and psychomotor skills: i.e., mind, heart, and body. Men, he says, learn better by doing; women, through dialogue.

    Will schools ever embrace these opportunities for learning and innovation? Classroom productivity has not always risen with increased online learning, but computer games do teach skills—especially analytical thinking, team building, multitasking, and problem solving under stress—which are not often learned in the classroom. There is general agreement that the social dimension of learning is beneficial because learning in a social context is usually faster, with longer retention. The challenge is how to build a working hybrid that solves problems without creating new ones.

    One move in this direction is to assess social network technology in light of clear quantitative and qualitative educational outcomes, rather than worrying about potential classroom disruption. Another is to support child-guided learning, where kids and adults work side by side as peers to solve (for example) a local real-world environmental problem. Using this problem-solution approach, many school-based community environmental programs are student run: Students choose projects and do most or all of the work. In such a setting, even mistakes become good opportunities to learn and to improve the process of finding a viable solution.

    Innovation and Learning in Communities

    Besides creative education techniques, another innovation-friendly concept is that of Living Systems, as described by James Grier Miller in the late 1970s. Countries, societies, and even supranational organizations such as the European Union can be much more organically interactive, given the opportunity. This principle can also be applied to mechanical systems—such as those that convert matter to energy and vice versa—as well as to information-transmission and exchange systems. In this context, information means “options to choose among” (such as signals, symbols, messages, or patterns) that can be transmitted or responded to.

    One major unresolved question in this approach involves responses to subjective phenomena; e.g., different interpretations of the same structures by subjective viewers. These kinds of value differentiations are common in systemic behavior (in politics, for instance), and the issues involved are anything but trivial. The goal is to build successful, harmonious systems, not conflict-ridden or disruptive ones. To quote Fritjof Capra, “In the end, aggressors always destroy themselves, making way for others who know how to cooperate and get along. It is much less a competitive struggle for survival than a triumph of cooperation and creativity.”

    Karen Hawley Miles, executive director and founder of Education Resource Strategies and author of The Strategic School, asserts the need for leading indicators of performance versus lagging ones in order to identify and act quickly to support and change failing undertakings. But the most critical question is what changes to make in order to produce positive differences. In terms of appropriate educational tools, a significant complication arises from the wide range of individual learning styles.

    Howard Gardner has identified seven distinct types of intelligence: linguistic, musical, logical-mathematical, spatial, bodily kinesthetic, interpersonal, and intrapersonal. Digital tools and gaming software can help make self-paced and self-styled learning in many of these areas possible, because gaming’s flexibility tends to enhance this range of styles rather than ignore or combat them. But the once almost universal “precision” learning approach, built upon rote memorization and tests based upon the premise that there is only one “right answer” to a given problem, remains dominant. A more organic understanding of the learning process and a broader acceptance of the idea that different solutions are appropriate in different situations will be needed if we are ever to achieve the ideal of “educating one student at a time.”

    Regulatory structures guiding education still lean toward the one-size-fits-all model, leaving many hopeful innovators trapped within networks of inflexible requirements. Continuing to focus on standard outputs rather than the quality of inputs tends to reward compliance more than success in process change.

    The marketplace—the ultimate customer for individual education—could help reform such measures of output by encouraging more customization to fit locale, resources, culture, and community needs. Techniques such as scenario building, which effectively illustrate the consequences of failure to change where change is needed, can have a powerful effect by building the relevant political will within the community in question.

    We have seen this approach to community innovation succeed: The Mont Fleur scenario-planning project (under Adam Kahane) in South Africa helped to end apartheid through a public win-win process, largely by illustrating the alternatives. This does not imply that scenarios are a magic technique that always works. As James Ogilvy says in Facing the Fold: Essays on Scenario Planning, “There are no guarantees. Contrary to the creationists, happy endings are not foreordained. The best of intentions can yield unintended consequences. For any single actor, tribe, species or company, there is always the distinct possibility of tragedy, defeat, extinction, or bankruptcy.”

    Innovation for Improving the World

    While emerging strategies and the wonders of technology always arouse intellectual interest, the more critical question is how technology actually changes our lives (for better or worse) and how we might better prepare for these changes. For example, much has been made of the impact of smart technologies on health, through such mechanisms as telemedicine, and on the lives of senior citizens, through concepts like aging in place. Both health maintenance and independent living would be enabled by wearable monitoring equipment and by enhanced automation of household tasks (such as cleaning and garbage disposal).

    In addition, household appliances will soon be designed with the ability to offer advice and protection to those who use them. For example:

    • Smart refrigerators will be able to aid in meal planning by keeping track of what specific foods are on hand, their nutritive value, and taste combinations. Accordingly, they will be able to suggest menus based on available raw materials.
    • Smart bathrooms, already undergoing medical testing/assessment, could feature chemical-sensing toilets and floors that measure weight, body mass, and skin temperature, as well as monitor for falls, etc.
    • Smart medicine dispensers with packaging that can “recognize” contents and know a patient’s medical needs could automatically sound an alarm to help guard against accidental overdose and/or prevent harmful drug interactions.

    Technology applications like these not only affect health, but also could significantly enhance the economic vitality of less-developed countries. Already, poor and developing countries are acquiring hand-held communications equipment at four times the rate of developed countries. Such devices support the growth of financial services without banks in countries like Ecuador and Kenya. Smart cash transfer also makes crowdsourcing for paid micro-tasks possible and can thus generate tracking data to reveal patterns of credit-card use.

    Consider the growth of wired smart cities like London, Singapore, and Stockholm, where smart tech is helping to address such challenges as traffic congestion, mass transit, water use grids, crime map networks, etc. It is projected that, by 2020, a global broadband network with sufficient levels of penetration will be in place to bring Paul David’s productivity paradox into play. David predicts that, once a certain level of adoption is achieved, a new technology begins to generate increases in its own productivity at an expanding rate.

    At that point, David believes, a sea change in the impact of smart technology will occur, one consequence of which will be the fully measured society. “The Internet of Things” will consist of an almost planet-wide sensor network constantly monitoring local changes in light, temperature, humidity, pathogens, pesticides, and many other aspects of society and the environment. Such constant sampling and testing could generate a host of health and behavior-changing knowledge.

    Also, look for the coming “bodnet,” or Internet of bodies, as bio platforms will surely be part of the network. Possibilities include storing data with your thumb, such as Sparsh (MIT Media Lab), and the use of biochips printed on plastic wrap utilizing blood and memristors (tiny two-terminal variable resistors that will be able to store data far more efficiently than today’s computer hard drives), thus further narrowing the gap between machines and humans.

    Even epidemiological behavior such as the spread of influenza may be identified and tracked, based on movement and communications patterns using smartphone info. In addition, social behavior trends such as obesity can become more predictable from mining data on travel and eating behavior available through smart technologies—especially since obesity often seems to behave like a communicable disease. Smartphone applications already provide analytical tools that make it possible to perform sonograms or analyze biochemical blood work from a remote location. Accordingly, public health, urban planning, and marketing strategies can all be informed and guided by smart tech’s use of information.

    This can even make it possible to track behavioral indicators of growing mental illness, or identify leading influencers in any social network. Thus, we could follow the spread of ideas, including political ones; we would monitor the spread of memes the way we monitor the spread of a disease today.
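    The epidemiological framing above can be made concrete with a toy model. This is my own illustrative sketch, not a method described in the article: it borrows the standard SIR (susceptible/infected/recovered) compartments from epidemiology and relabels “infection” as actively sharing a meme. The function name and parameter values are hypothetical.

```python
# Toy SIR-style model of meme spread: S have not yet seen the meme,
# I are actively sharing it, R have lost interest.
def simulate_meme(beta=0.3, gamma=0.1, days=100, population=1_000_000, seed_cases=10):
    s, i, r = population - seed_cases, seed_cases, 0
    peak_sharing = i
    for _ in range(days):
        new_sharers = beta * s * i / population  # contacts that pass the meme on
        new_dropouts = gamma * i                 # sharers who lose interest
        s -= new_sharers
        i += new_sharers - new_dropouts
        r += new_dropouts
        peak_sharing = max(peak_sharing, i)
    return peak_sharing

# With beta > gamma the meme "goes viral," peaking at roughly 30% of the
# population before burning out; with beta < gamma it never spreads at all.
print(f"Peak simultaneous sharers: {simulate_meme():,.0f}")
```

    Fitting the two rate parameters to real smartphone and social-network data is precisely the kind of analysis the smart-tech monitoring described above would enable.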

    The question here is how much information is too much to understand and whether that knowledge will be used wisely. Can the government or private sector be relied upon to make appropriate use of this highly personal information? Only time will tell, but we can hope that the beneficial aspects outweigh the detrimental ones.

    The bottom line is that, while there are many indicators of diminishing productivity through business consolidation and the reduction of innovation, there are just as many pointing to expanded technological problem solving and enhanced positive capabilities. The creative impulse is still strong in the human spirit, and we can expect to see problems solved and new mountains climbed far into the future.

    About the Author

    Timothy C. Mack is president of the World Future Society and executive editor of World Future Review. Email tmack@wfs.org.

    A Future of Fewer Words?: Five Trends Shaping the Future of Language

    By Lawrence Baines

    Natural selection is as much a phenomenon in human language as it is in natural ecosystems. An ongoing “survival of the fittest” may lead to continuing expansion of image-based communications and the extinction of more than half the world’s languages by this century’s end.

    Just after I moved to Oklahoma three years ago, I was invited to a meeting of the state’s Department of Education to discuss Native American languages. I learned that, of the 37 or so Native American languages represented in the state, 22 are already extinct. The last speakers of the Delaware and Mesquakie tongues had recently died; several other languages had only one or two speakers left.

    Vanishing languages are not unique to Oklahoma. K. David Harrison, author of When Languages Die (Oxford University Press, 2008), estimates that, of the 6,909 languages spoken on the planet, more than half are likely to become extinct over the next century. Today, 95% of people speak one of just 400 languages. The other 6,509 languages are unevenly distributed among the remaining 5%. Hundreds of languages, most with only a few speakers still living, are teetering on oblivion at this very moment.

    Why are the world’s languages disappearing? Like living organisms, languages morph over time in response to continuous evolutionary pressures. Any language is in serious trouble if it is spoken by few people or is confined to a remote geographic area. Many of the languages in northeastern Asia, for example, are in isolated, inhospitable regions where low birthrates and high morbidity rates have been facts of life for hundreds of years.

    Geography and Distribution of Languages and Speakers

    Geographic isolation is a problem that Oklahoma’s dying Native American languages have in common. For example, speakers of Ottawa, of which there may be only three still living in Oklahoma, live in the northeastern part of the state, a location that draws few tourists and little business. If the remaining speakers of Ottawa are still alive, there is a good chance that they are over age 70 and rarely travel outside of the community. Anyone who would want to learn the Ottawa language would have to journey down dirt roads and knock on some unfamiliar doors to find out where these speakers live. Once you arrived on their doorstep, they still might not talk to you, especially if you are not a member of the tribe.

    In New Guinea, an island that hosts a cauldron of language diversity, villagers on one side of a mountain often speak a completely different language from villagers who may live less than a kilometer away on the other side. If travel to a geographic location is difficult or interactions with speakers of other languages are restricted, then a language has no way to flourish. Like a plant that receives no pollination, a language without some kind of interaction eventually dies.

    A second factor contributing to a language’s health is its social desirability. In some parts of the United States, children of first-generation immigrants often grow up in English-speaking neighborhoods, go to English-speaking schools, and come to think of English as the language of acceptance and power. Some of my Texas friends whose parents emigrated from Mexico do not know how to speak, read, or write in Spanish. One friend told me that his parents actually forbade him from speaking Spanish when he was growing up because they considered mastery of English to be essential for success in America.

    According to the Global Language Monitor (www.languagemonitor.com, 2011), almost 2 billion people around the globe speak English as either a first or second language, making it the most widely spoken language in the history of the world. The closest runner-up is Mandarin Chinese, with roughly 1 billion speakers, the majority located in or around China. Spanish is the third most widely spoken language, with 500 million speakers, while speakers of Hindi and Arabic come in at fourth and fifth respectively, with between 450 million and 490 million speakers.

    French was the most popular language in the world in 1800, but today, Spanish speakers outnumber French speakers worldwide by more than a 2:1 margin. English speakers outnumber French speakers by 10:1.

    As the table below shows, 10 languages constitute a combined 82% of all content and traffic on the Internet. Six of them—English, Chinese, Spanish, Arabic, French, and Russian—also happen to be the six “official” languages of the United Nations. The ubiquity of these languages on the Internet, and in international relations and commerce, assures their advance for the foreseeable future.

    Languages Represented on the Internet, 2011 est.

    Language     Number of Users   Percent of Total Users   Percent Increase, 2000-2011
    English      565 million       27%                        301%
    Chinese      510 million       24%                      1,478%
    Spanish      165 million        8%                        807%
    Japanese      99 million        5%                        110%
    Portuguese    83 million        4%                        990%
    German        75 million        4%                        174%
    Arabic        65 million        3%                      2,501%
    French        60 million        3%                        398%
    Russian       60 million        3%                      1,825%
    Korean        39 million        2%                        107%

    Source: Internet World Stats, www.internetworldstats.com/stats7.htm
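    The rounded shares in the table can be tallied against the combined figure cited above; because each entry is rounded to the nearest percent, the column sums to 83 rather than the quoted 82, presumably a rounding artifact. A quick check:

```python
# Percent-of-total-users figures for the top 10 Internet languages (from the table).
shares = {
    "English": 27, "Chinese": 24, "Spanish": 8, "Japanese": 5, "Portuguese": 4,
    "German": 4, "Arabic": 3, "French": 3, "Russian": 3, "Korean": 2,
}
total = sum(shares.values())
print(f"Top-10 combined share of users: {total}%")
```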

    Trend 1: Images Are Subverting Words

    Not only is the world using fewer languages on a daily basis, but it is also using fewer words. Compare the rich vocabulary and complex sentence constructions in the extemporaneous arguments of politicians of earlier centuries with the slick, simplistic sound bites of contemporary times. No politician today speaks like Thomas Jefferson, whose 1801 inaugural address began with the following two sentences:

    Called upon to undertake the duties of the first Executive office of our country, I avail myself of the presence of that portion of my fellow citizens which is here assembled to express my grateful thanks for the favor with which they have been pleased to look towards me, to declare a sincere consciousness that the task is above my talents, and that I approach it with those anxious and awful presentiments which the greatness of the charge, and the weakness of my powers so justly inspire. A rising nation, spread over a wide and fruitful land, traversing all the seas with the rich productions of their industry, engaged in commerce with nations who feel power and forget right, advancing rapidly to destinies beyond the reach of mortal eye; when I contemplate these transcendent objects, and see the honour, the happiness, and the hopes of this beloved country committed to the issue and the auspices of this day, I shrink from the contemplation & humble myself before the magnitude of the undertaking.

    During the nineteenth century, Abraham Lincoln and Stephen Douglas debated for hours in open, public arenas across Illinois. An examination of their spontaneous verbal sparring reveals dexterous vocabulary and complex thought delivered with clarity and wit. In contrast, George W. Bush's logo for the 2004 presidential campaign was a simple W alongside an American flag, an essentially wordless communiqué.

    The move from language to image is perhaps most apparent in advertisements, which increasingly emphasize sound and image to the exclusion of language. Volkswagen won a 2010 CLIO award for a commercial that featured a series of rapid close-ups of a man and woman intimately dancing to rap music, followed in the last few seconds by a picture of a car and just two words: “Tough. Beautiful.”

    To help encourage communication among tribes who have been long-time rivals, organizations working in Tanzania, where 129 “official languages” exist, have turned to images, not words, to try to get the tribes to communicate with one another. As John Wesley Young reports in Totalitarian Language (University of Virginia, 1992), translators in these organizations have found that trying to find a common language was cumbersome and fraught with unexpected problems, such as the “loaded connotations” of words like comrade and enemy. To de-escalate tensions, translators try to establish communications using only images, which require no intermediary translation and are not as encumbered by pejoration.

    Trend 2: The Written Word Is Losing Authority

    In the Bible, John 1:1 begins, “In the beginning was the Word, and the Word was with God, and the Word was God.” In Isaiah 48:13, God says, “By my word, I have founded the earth.”

    In Christianity, as in most religions, holy words are assumed to have potency well beyond human comprehension, and the mere utterance of a holy word is assumed to have mystical power. J. K. Rowling borrowed this aspect of religious texts in writing the Harry Potter series of books, where her characters are often too fearful even to mention He-Who-Must-Not-Be-Named (Voldemort).

    To get a sense of the power of words in earlier times, it is instructive to read the literate stirrings of a sixteenth-century Italian peasant named Domenico Scandella and his attempt to understand the Bible on his own terms. Scandella's interpretation of the world as a ball of cheese infested with worms (angels) was considered blasphemous by the priests of the local Catholic diocese, and he was imprisoned several times over the course of his life and ultimately put to death by the Inquisition.

    The Church assumed that Scandella’s linguistic interpretation could influence other parishioners in nefarious ways, so it silenced him. Today, thousands of political dissidents around the world are imprisoned on the same principle—that a few well-chosen words have the potency to change society.

    The power of words is also substantiated by endless volumes of legal documents. In most countries, an agreement between individuals may be binding only if it is in writing and features the signatures of all involved. In courts of law, the presence of written documentation trumps oral agreements.

    With the proliferation of electronic documents, clicking “I ACCEPT” has become the equivalent of a written signature. The long legal agreements that flash momentarily on screen when software is downloaded from the Internet are, in actuality, legally binding documents. In this manner, “proof of click” is replacing the multi-page, hand-signed document in the legal system.

    At first blush, the popularity of texting might be construed as a sort of affirmation for writing. Upon closer inspection, text messages and e-mails have more in common with oral language than written language. Text messages are usually spontaneous, one-shot efforts, written with little to no revision, often in response to a previous communication. They may include pauses (communicated through additional spaces or …), facial expressions (communicated through emoticons such as ;D for a wink and a smile), simple vocabulary, and recursive, sometimes incoherent construction, all of which are characteristics of oral language. Not surprisingly, text messages are generated by a device originally designed for speaking—a telephone.

    Some texts are tweets, which are limited to 140 characters. Obviously, a 140-character limit restricts both linguistic complexity and sentence length. Few tweeters are likely to become the next William Faulkner, who commonly used more than 140 words (not characters) in a single sentence.

    The cell phone has become a ubiquitous, all-purpose communications tool. However, its small keyboard and tiny screen limit the complexity, type, and length of written messages. Because no sane person wants to read streams of six-point type on a three-inch screen, phones rely on menus of images right up to the point at which the message itself is displayed.

    Trend 3: Changing Environment for Words

    Most public libraries around the world are transforming from institutions focused on archives and research to centers for information and entertainment. The old conception of the library, with its mammoth, unabridged dictionary, ordered sets of reference books, and collections of bound materials, has become a relic. Now most libraries feature large open spaces with Wi-Fi access, plenty of computer terminals, and as many film DVDs and audio CDs as can be purchased on a dwindling budget. Most libraries today spend more on non-print media than on books and magazines. In my local, college-town library, the computer stations always have a line of patrons waiting to log on, and the DVD aisles are packed with browsers, while the book stacks are relatively deserted.

    Libraries are simply responding to changes in human behavior. In 1996, Americans spent more time reading than using the Internet. The following year, time spent on the Internet eclipsed reading, and the gap between reading and Internet usage has been expanding every year since. On a typical weekend, when individuals can choose how to fill their time, they read for about five minutes and they watch television, socialize, text, click around the Internet, and play video games for about five hours. In other words, the ratio of time spent on the Internet or video games to time spent reading has ballooned to 60:1.

    Research by the Kaiser Foundation found that adolescents, who are particularly heavy users of electronic media, pack a total of 10 hours and 45 minutes of media content into seven and a half hours of media interactions per day. In his book Everything Bad Is Good for You (Riverhead, 2005), Steven Johnson observes that electronic media have been shown to enhance student decision-making processes, to improve hand–eye coordination, and to promote collaborative thinking. However, most electronic media do not build vocabulary, enhance reading comprehension, or improve the quality of writing.

    Television shows, even critically acclaimed series, are notoriously simplistic in their use of language. Script analyses of popular television shows such as South Park, 24, CSI, American Idol, and Friday Night Lights all reveal a preponderance of monosyllabic words and short sentences.

    Language simplification is apparent in cinema, as well. Film scripts from Avatar, Planet of the Apes, Transformers, Lord of the Rings, and Star Wars are written at a second- or third-grade readability level. The basic unit of communication for film is the image, with music and special effects playing significant, supplementary roles. Words serve only as minor support.

    The move toward grander spectacle through computer-generated images moves film even more toward the visual and farther away from the linguistic. The complete dialogue for the first Terminator film, which served as a harbinger for a new era of special effects, is just 3,850 words—about as long as this magazine article.

    Trend 4: Effects of Neural Darwinism

    Nobel Prize-winning neuroscientist Gerald Edelman postulated that the brain constantly undergoes a “survival of the fittest” process, in which cells respond to environmental stimuli and, in turn, battle for dominance. Thus, avid readers build the parts of their brains associated with reading, while the parts associated with other tasks, such as hand–eye coordination (exercised during the playing of video games, for example), stabilize or atrophy. This “neural Darwinism,” the constant fight for dominance in the brain, is evident even in very young children. The stimuli that newborns choose to pay attention to strengthen the related circuits and synapses in the brain. If a part of the brain is not stimulated, it will not develop.

    More than 20 years ago, neuroscientist Marian Diamond noted that enriched environments increase the size of the cortex at any age. Incredibly, detectable increases in cortical development become apparent after only four days.

    The hypotheses of Edelman and Diamond have been confirmed in non-laboratory settings by sociologists Betty Hart and Todd Risley, who studied language use among parents and children in professional and welfare homes. They observed that, by age 3, children in professional homes had twice the vocabularies of children in welfare homes.

    To find out why, they recorded oral exchanges between parents and children in both environments and found that professional parents averaged 487 utterances per hour with their children, with a ratio of positive to negative comments of 6:1 (six positive comments for every negative comment). In welfare homes, by contrast, parents averaged only 178 utterances per hour, with a positive-to-negative ratio of 1:2 (one positive comment for every two negative comments).
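    The hourly figures compound dramatically over a childhood. A rough back-of-the-envelope extrapolation (the 14 waking hours per day is my simplifying assumption, not a figure from the Hart and Risley study):

```python
# Extrapolate the Hart & Risley utterances-per-hour figures cited above.
# The 487 and 178 rates come from the study; 14 waking hours/day and
# 365 days/year are simplifying assumptions for illustration only.

HOURS_PER_DAY = 14
DAYS = 365 * 3  # birth through age 3

def cumulative_utterances(per_hour):
    """Total parental utterances a child would hear by age 3."""
    return per_hour * HOURS_PER_DAY * DAYS

professional = cumulative_utterances(487)
welfare = cumulative_utterances(178)

print(f"Professional homes: {professional:,}")            # 7,465,710
print(f"Welfare homes:      {welfare:,}")                 # 2,728,740
print(f"Gap by age 3:       {professional - welfare:,}")  # 4,736,970
```

    Under these assumptions, a child in a professional home hears nearly three times as many parental utterances by age 3, a gap of several million exchanges.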

    By age 3, the average IQ of children of professional parents was 117; the average IQ of children of welfare parents was 79. Thus, much of the achievement gap may be attributable to impoverished environments in the early years.

    In a more recent Carnegie Mellon University study, psychologists Timothy A. Keller and Marcel Adam Just took positron emission tomography (PET) images of the brains of children who were poor readers, and then offered the children 100 hours of intensive “reading therapy” designed to improve reading effectiveness. Upon the conclusion of the therapy, the students showed significant improvements in their ability to read. When the children's brains were imaged again, their physical structure had changed to look more like the brains of avid readers.

    As the world recedes from the written word and becomes inundated with multisensory stimuli (images, sound, touch, taste, and smell), the part of the human brain associated with language will regress. While there are benefits to becoming more visually astute and more aurally discriminating, the areas of the brain associated with language are also associated with critical thinking and analysis. So, as the corpus of language shrinks, the human capacity for complex thinking may shrink with it.

    Trend 5: Translating Machines

    Imagine a hand-held device that can translate simple phrases into any of several foreign languages. You type a phrase in your native language and the machine instantly translates and pronounces the desired phrase in the target language. Actually, such a machine already exists and may be purchased for about $50.

    While today’s machine translators are not perfect, they are surprisingly functional. Most rely on translation algorithms that depend upon the most commonly occurring words in a language. Plain-language-in and plain-language-out enhances the probability that a word is contained in the database of the device and is understandable by the listener. An erudite translation could result in misunderstanding and confusion. That is, the machine is programmed explicitly for the most common words and phrases.
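    The “plain-language-in, plain-language-out” principle can be illustrated with a toy phrase-table translator. The phrase table and the greedy longest-match strategy below are illustrative assumptions on my part, not the algorithm of any actual device:

```python
# Toy phrase-table translator: greedily match the longest known phrase
# from a small table of common phrases. The table entries and the
# matching strategy are illustrative assumptions, not a real product.

PHRASE_TABLE = {  # hypothetical English -> Spanish entries
    "where is": "dónde está",
    "the train station": "la estación de tren",
    "thank you": "gracias",
    "how much": "cuánto cuesta",
}

def translate(sentence):
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        # Try the longest candidate phrase first, shrinking until a match.
        for j in range(len(words), i, -1):
            phrase = " ".join(words[i:j])
            if phrase in PHRASE_TABLE:
                out.append(PHRASE_TABLE[phrase])
                i = j
                break
        else:
            out.append(f"[{words[i]}?]")  # unknown word: flag it, don't guess
            i += 1
    return " ".join(out)

print(translate("Where is the train station"))
# -> dónde está la estación de tren
```

    A common phrase hits the table and comes out cleanly; an erudite word misses the table entirely, which is exactly why these devices reward plainspeak.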

    The inevitable proliferation and technological improvements of translating devices will mean more plainspeak, more monosyllabic words, and fewer polysyllabic words. As world commerce continues to expand and the need to communicate in several languages becomes a standard expectation, the emphasis will be on functionality—a few, useful words and durable phrases. Again, the universe of words seems destined to shrink.

    The Rise and Fall of Languages, and What Comes Next

    Foreign-language courses in K-12 schools and colleges used to focus as much upon culture as upon language. Students would study the government, religion, history, customs, foods, and etiquette of a country along with its language. Today, foreign-language teaching is moving away from cultural awareness and toward language as a transaction. If you own a factory in Norway and want to export your products to Vietnam, it is in your best interest to become competent in Vietnamese as quickly as you can. What level of competency do you need to achieve your goals? How long will it take to get there? When the enterprise in Vietnam dries up, the urgency to learn Vietnamese ceases with it. While the loss of cultural transmission is lamentable, the focus on functionality is understandable, especially in light of widening international trade.

    The reverberations of the shift from words as the dominant mode of communication to image-based media are becoming apparent. As we click more and write less, the retreat of polysyllabic words, particularly words with complex or subtle meanings, seems inevitable. The rich vocabulary in books occurs in the exposition, not the dialogue. When a book is adapted for film, a video game, or a television series, the exposition is translated into images, so the more complex language never reaches the ears of the audience. Media associated with the print world (books, magazines, newspapers) are the repositories of sophisticated language, so as individuals read less, they will have less exposure to sophisticated language.

    Losing polysyllabic words will mean a corresponding loss of eloquence and precision. Today, many of the most widely read texts emanate from blogs and social networking sites, such as Facebook. Authors of these sites may be non-readers who have little knowledge of effective writing and may have never developed an ear for language. Over the next century, a rise in “tone deaf” writing seems certain.

    Finally, more and more languages will disappear from the face of the planet, and world languages will coalesce into pidgin dialects as communication among cultures continues to accelerate. There will be an ongoing “survival of the fittest” battle among languages. If a language is not needed for commerce, identity, or communication, then it will shrink and possibly die.

    The French novelist Gustave Flaubert once wrote, “Human language is like a cracked kettle on which we beat out tunes for bears to dance to.” As images replace words, they will foster faster comprehension, enable easier communication, support stronger retention, and stimulate new ways of thinking. The possible consequences of the contraction of written communication are difficult to discern, but whether we are ready or not, the age of the image is upon us.

    About the Author

    Lawrence Baines is chair of Instructional Leadership and Academic Curriculum at the Jeannine Rainbolt College of Education, University of Oklahoma. His latest books are Going Bohemian (International Reading Association, 2010) and The Teachers We Need (Rowman & Littlefield, 2010). E-mail lbaines@ou.edu.

    From the Three Rs to the Four Cs: Radically Redesigning K-12 Education

    By William Crossman

    The battle against nonliteracy has focused on teaching everyone to read and write text. But new technologies that facilitate more holistic learning styles, engaging all of the learner’s senses, may open the locked stores of global knowledge for all. Instead of reading, ’riting, and ’rithmetic, we’ll move to critical thinking, creative thinking, “compspeak,” and calculators.

    From the moment that Jessica Everyperson was born, her brain, central nervous system, and all of her senses shifted into high gear to access and to try to understand the incredible new informational environment that surrounded her. She had to make sense of new sights, sounds, tastes, smells, tactile experiences, and even new body positions.

    Jessica approached her new world with all of her senses operating together at peak performance as she tried to make sense of it all. Her new reality was dynamic, constantly changing from millisecond to millisecond, and she immediately and instinctively began to interact with the new information that poured through her senses.

    Jessica’s cognitive ability to access new information interactively, and to use all of her senses at once to optimize her perception of that ever-changing information, is all about her hardwiring. Jessica, like all “everypersons” everywhere, was innately, biogenetically hardwired to access information in this way.

    For Jessica’s first four or five years, her all-sensory, interactive cognitive skills blossomed with amazing rapidity. Every moment provided her with new integrated-sensory learning experiences that helped to consolidate her “unity of consciousness,” as the ancient Greek philosophers called it. Because each learning experience was all-sensory, Jessica’s perception of reality was truly holistic. This meant that the ways she processed, interpreted, and understood her perceptions were also holistic. Jessica was therefore developing the ability to both perceive and understand the many sides of a situation—the cognitive skills that form the basis of critical thinking and lead to a broad and compassionate worldview.

    During those preschool years, she also became proficient in using the variety of information technologies (ITs) that continued to be introduced into her environment: radio, TV, movies, computers, video games, cell phones, iPods, etc. Early on, she stopped watching TV, which engaged only her eyes and ears, and switched to video games, which engaged her eyes, ears, and touch/tactility. Before she could even read a word, Jessica had become a multimodal multitasker, talking on her cell phone while listening to her iPod and playing a video game.

    At this point in her young life, Jessica was feeling very good about her ability to swim in the vast sea of information using the assortment of emerging ITs. Not surprisingly, she was also feeling very good about herself.

    Then, Jessica started school!

    The Brightness Dims: Hello K-12, Hello Three Rs (Reading, ’Riting, ’Rithmetic)

    On Jessica’s first day in kindergarten, her teacher was really nice, but the message that the school system communicated to Jessica and her schoolmates was harsh. Although none of the teachers or administrators ever stated it in such blatant terms, the message, as expressed via Jessica’s school’s mandated course curriculum and defined student learning outcomes (SLOs), was this: Reading/writing is the only acceptable way to access information. This is the way we do it in “modern” society. Text literacy is the foundation of all coherent and logical thinking, of all real learning and knowledge, and even of morality and personal responsibility. It is, in fact, the cornerstone of civilization itself.

    And the message continued: Since you don’t know how to read or write yet, Jessica, you really don’t know anything of value, you have no useful cognitive skills, and you have no real ways to process the experiences and/or the data that enter your brain through your senses. So, Jessica, from now on, through all of your years of schooling—through your entire K-12 education—you and we, your teachers, must focus all of our attention on your acquiring those reading and writing skills.

    The U.S. Department of Education holds every school system in the United States accountable for instilling reading skills, as well as math skills, in every one of its students, and it requires students to take a battery of standardized tests every year to see if both their reading scores and math scores are going up.

    If the test scores trend upward, the schools are rewarded. If they stay level or decline, the schools are punished with funding cuts and threatened with forced closure. Schools literally pin their long-term survival on just two variables: First, do the tests show that students can read and write, and second, do the tests show that students can do math?

    From that moment on, Jessica’s learning experience took a radical downward turn. Instead of accessing a dynamic, ever-changing reality, she was going to have to focus almost entirely on a static reality that just sat there on the page or computer screen: text. Instead of accessing information using all of her integrated senses simultaneously, she was going to have to use only her eyes. And instead of experiencing information interactively—as a two-way street that she could change by using her interactive technologies—she was going to have to experience information as a one-way street: by absorbing the text in front of her without being able to change it.

    Welcome, Jessica, to the three Rs, the essence of K-12 education. Of course, Jessica and her schoolmates, particularly in middle and high school, will take other courses: history, chemistry, political science, and so on. However, these other courses count for almost nothing when students go on to college, where they have to take these subjects all over again (history 101, chemistry 101, political science 101), or when they enter the vocational, business, and professional world, where they have to receive specialized training for their new jobs. College admissions directors and workplace employers really expect only one narrow set of SLOs from students who graduate with a high school diploma: that the students should have acquired a basic level of text literacy.

    Jessica, like almost all of her kindergarten schoolmates, struggled to adjust to this major cognitive shift. Actually, for the first year or so, Jessica was excited and motivated to learn to read and write by the special allure of written language itself. The alphabet, and putting the letters together to make words, was like a secret code that grown-ups used to store and retrieve information. The prospect of learning to read and write made Jessica feel that she was taking a step into the grown-up world.

    However, this initial novelty and excitement of decoding text soon wore off, and most of the children in Jessica's first-, second-, and third-grade classes, including Jessica herself, had a hard time keeping up. By the fourth grade, many students were falling further and further behind the stated text-literacy SLOs for their grade level. Their self-confidence was severely damaged, and they felt more and more alienated from school and from education itself. Not surprisingly, Jessica was no longer feeling very good about herself.

    Young People’s Rebellion against The Three Rs and Text Literacy

    What’s going on here with Jessica and young people in general? Our children are actually very intelligent. From the earliest age, their brains are like sponges, soaking up and interpreting the experiences and information that flood their senses. Almost all young children love to learn about everything, including the learning process itself. They’re continually asking “why?” in an effort to understand the world around them. It’s a survival mechanism that we humans have evolved over millennia, much like newborn fawns that can stand and run within minutes of birth.

    Young people’s failure to excel, or even to reach proficiency, in reading and writing in K-12 is reflected in school literacy rates that continue to fall or, at best, remain stagnant decade after decade. Look no further than the National Assessment of Educational Progress, a periodic test that most experts consider a fairly accurate gauge of reading achievement throughout the United States. The scores for 12th-graders declined from 292 in 1992 to 288 in 2009, while the scores of students in other grades improved only negligibly during the same period. This is despite the gargantuan amounts of time, resources, and hundreds of billions of dollars that school systems have poured into bringing scores up.

    Yet another reflection of young people’s dissatisfaction with reading is the tragic rise in dropout rates among middle-school and high-school students, particularly African American and Latino students. The question that parents and educators need to ask themselves is: Do children become less intelligent as they pass through the K-12 years?

    The answer is No! Studies consistently show that, although young people’s text-literacy rates are falling, their IQs (intelligence quotients) are rising at an average of three points every 10 years. Researchers have been noting this trend for decades and call it the “Flynn Effect,” after James Flynn, a New Zealand political science professor who first documented it.

    What’s going on here is that young people today are rebelling against reading, writing, and written language itself. They are actively rejecting text as their IT of choice for accessing information. They feel that it’s no longer necessary to become text literate—that it is no longer relevant to or for their lives.

    Instead, young people are choosing to access information using the full range of emerging ITs available to them, the ITs that utilize the fullness of their all-sensory, interactive cognitive powers. Because their K-12 education is all about learning to gather information via text, young people are rejecting the three Rs–based educational system, as well. Why, Jessica is asking, do I need to spend years learning to read Shakespeare’s Hamlet when I can listen to it as a download or an audiobook CD, watch it on film or DVD, or interact with it via an educational video game of the play?

    We may be tempted to point out to Jessica and her fellow text rejecters that, when they’re text messaging, they are in fact writing and reading. But it’s not really the writing and reading of any actual written language—and Jessica knows it. Texting uses a system of symbols that more closely resembles a pictographic or hieroglyphic written language than an alphabetic one. “♥2u” may be understandable as three symbols combined into a pictogram, but it’s not written English.

    In my opinion, “♥2u” exemplifies not a flourishing commitment to text literacy among young people, but rather the rejection of actual text literacy and a further step in the devolution of text/written language as a useful IT in electronically developed societies.

    Replacing Text in Schools—and Everywhere Else

    What is text/written language, anyway? It’s an ancient technology for storing and retrieving information. We store information by writing it, and we retrieve it by reading it. Between 6,000 and 10,000 years ago, many of our ancestors’ hunter-gatherer societies settled on the land and began what’s known as the “agricultural revolution.” That new land settlement led to private property and increased production and trade of goods, which generated a huge new influx of information. Unable to keep all this information in their memories, our ancestors created systems of written records that evolved over millennia into today’s written languages.

    But this ancient IT is already becoming obsolete. Text has run its historic course and is now rapidly getting replaced in every area of our lives by the ever-increasing array of emerging ITs driven by voice, video, and body movement/gesture/touch rather than the written word. In my view, this is a positive step forward in the evolution of human technology, and it carries great potential for a total positive redesign of K-12 education. Four “engines” are driving this shift away from text:

    First, evolutionarily and genetically, we humans are innately hardwired to access information and communicate by speaking, listening, and using all of our other senses. At age one, Jessica simply started speaking, while one-year-olds who are unable to speak and/or hear simply begin signing. It comes naturally to them, unlike reading and writing, which no one starts doing naturally and which require schooling.

    Second, technologically, we humans are driven to develop technologies that allow us to access information and communicate using all of our cognitive hardwiring and all of our senses. Also, we tend to replace older technologies with newer technologies that do the same job more quickly, efficiently, and universally. Taken together, this “engine” helps to explain why, since the late 1800s, we have been on an urgent mission to develop nontext-driven ITs—from Thomas Edison’s wax-cylinder phonograph to Nintendo’s Wii—whose purpose is to replace text-driven ITs.

    Third, as noted above, young people in the electronically developed countries are, by the millions, rejecting old text-driven ITs in favor of all-sensory, nontext ITs. This helps to explain why Jessica and her friends can’t wait until school is over so they can close their school books, hurry home, fire up their video-game consoles, talk on their cell phones, and text each other using their creative symbols and abbreviations.

    Fourth, based on my study and research, I’ve concluded that the great majority of the world’s people, from the youth to the elderly and everyone in between, are either nonliterate—unable to read or write at all—or functionally nonliterate. By “functionally nonliterate,” I mean that a person can perhaps recognize the letters of their alphabet, can perhaps write and read their name and a few other words, but cannot really use the written word to store, retrieve, and communicate information in their daily lives.

    Since the world’s storehouse of information is almost entirely in the form of written language, these billions of people have been left out of the information loop and the so-called “computer revolution.” If we gave a laptop computer to everyone in the world and said, “Here, fly into the world of information, access the Internet and the World Wide Web,” they would reply, “I’m sorry, but I can’t use this thing because I can’t read text off the screen and I can’t write words on the keyboard.”

    Because access to the information of our society and our world is necessary for survival, it is therefore a human right. So the billions of people who are being denied access to information because they can’t read or write are being denied their human rights. They are now demanding to be included in the “global conversation” without having to learn to read and write.

    Three great potential opportunities for K-12 education in the coming decades arise out of this shift away from text.

    Using nontext-driven ITs will finally enable the billions of nonliterate and functionally nonliterate people around the world to claim and exercise their right to enter, access, add to, and learn from the world’s storehouse of information via the Internet and World Wide Web.

    Voice-recognition technology’s instantaneous language-translation function will allow everyone to speak to everyone else using their own native languages, and so language barriers will melt away. Consider the rate of improvement in voice-recognition technology over the last decade. As David Pogue points out in a 2010 Scientific American article, “In the beginning, you had to train these programs by reading a 45-minute script into your microphone so that the program could learn your voice. As the technology improved over the years, that training session fell to 20 minutes, to 10, to five—and now you don’t have to train the software at all. You just start dictating, and you get (by my testing) 99.9 percent accuracy. That’s still one word wrong every couple of pages, but it’s impressive.”

    People whose disabilities prevent them from reading, writing, and/or signing will be able to select specific functions of their all-sensory ITs that enable them to access all information.
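    Pogue’s figures can be sanity-checked with a little arithmetic. A minimal sketch, assuming a typical page of roughly 450 words (a figure not in the article):

```python
# Sanity check on Pogue's dictation-accuracy claim.
# Assumption (mine, not the article's): ~450 words per page.
accuracy = 0.999
words_per_page = 450

errors_per_page = (1 - accuracy) * words_per_page
pages_per_error = 1 / errors_per_page

print(f"~{errors_per_page:.2f} errors per page")
print(f"~one error every {pages_per_error:.1f} pages")
```

    At 99.9 percent accuracy that works out to roughly one wrong word every two pages or so, consistent with Pogue’s “one word wrong every couple of pages.”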

    The Brightness Returns: Goodbye, Three Rs; Hello, Four Cs

    Every minute that Jessica and her friends spend getting information and communicating using video games, iPods, cell phones, and other nontext ITs, they’re developing new cognitive skills. Their new listening, speaking, visual, tactile, memory, interactive, multitasking, and multimodal skills allow them to access information and communicate faster and more efficiently than ever before. I believe that Jessica and her friends are developing the very skills that will be required for successful K-12 learning as we move into the coming age of postliterate K-12 education.

    Something good is also happening to Jessica’s brain and consciousness as she uses her all-sensory, interactive ITs. Jessica is retraining her brain, central nervous system, and senses. She is reconfiguring her consciousness so that it more closely resembles its original, unified, integrated, pre–three Rs state. Jessica’s worldview is broadening because she’s perceiving and understanding the world more holistically. And she’s feeling good about herself again.

    Jessica’s story—and there are millions of Jessicas struggling to succeed in our three Rs–based classrooms today—points the way to a new strategy for K-12 education in the twenty-first century. Basing K-12 education on the three Rs is a strategy for failure. We have the emerging ITs on which we can build a new K-12 strategy, one that has the potential to eliminate young people’s academic nonsuccess and sense of failure and replace it with academic success and self-confidence.

    Instead of the three Rs, we need to move on to the four Cs: critical thinking, creative thinking, compspeak (the skills needed to access information using all-sensory talking computers), and calculators (for basic applied math).

    As text/written language falls more and more out of use as society’s IT of choice for accessing information, so will the text-based three Rs. The trend is already under way. Videos as teaching–learning tools are surpassing textbooks in innumerable K-12 classrooms. Instructional interactive videos (we won’t be calling them video “games” anymore) are already entering our classrooms as the next big IIT—instructional information technology—because students want to interact with information.

    As the three Rs exit the K-12 scene, they’ll leave a huge gap to be filled. What better way to fill that gap than by helping young people to become better critical and creative thinkers—the most crucial cognitive skills they’ll need to help them build a more sustainable, peaceful, equitable, and just world? In order to store and retrieve the information they’ll need to develop and practice these thinking skills, they’ll also need to systematically acquire the all-sensory, interactive skills to access that information: the compspeak skills.

    These compspeak skills are the very same skills that Jessica and her classmates have been developing unsystematically by using their all-sensory ITs, but systematic training in listening, speaking, visuality, memory, and the other compspeak skills should be a central component of their post–three Rs education. It’s ironic, and definitely shortsighted, that, in a difficult economic and budget-cutting climate, classes that support these compspeak skills are the first to be cut: music (listening, visual, body movement, memory), art (visual, body movement), physical education and dance (body movement, memory), speech (speaking, listening, memory), and theater arts (all of the above).

    Over the next decades, we will continue to replace text-driven ITs with all-sensory-driven ITs and, by 2050, we will have recreated an oral culture in our electronically developed countries and K-12 classrooms. Our great-great-grandchildren won’t know how to read or write—and it won’t matter. They’ll be as competent accessing information using their nontext ITs as we highly text-literates are today using the written word.

    About the Author

    William Crossman is a philosopher, futurist, professor, human-rights activist, speaker, consultant, and composer/pianist. He is founder/director of the CompSpeak 2050 Institute for the Study of Talking Computers and Oral Cultures (www.compspeak2050.org). Email: willcross@aol.com.

    Some of the ideas discussed in this article are discussed in greater depth in the author’s book VIVO [Voice-In/Voice-Out]: The Coming Age of Talking Computers (Regent Press, 2004). This article is adapted from an earlier version in Creating the School You Want: Learning @ Tomorrow’s Edge (Rowman & Littlefield, 2010), edited by Arthur Shostak and used with his permission.

    Visions: Toward Better Space-Weather Forecasts

    By Cynthia G. Wagner

    Scientists hope to help avert devastating impacts of solar outbursts.

    Charged particles and magnetic fields streaming from the Sun at a rate of a million miles an hour can do an awful lot of damage to unprepared systems on Earth. They can make the data coming from the Global Positioning System (GPS) unreliable, thus putting a wide variety of users at risk: oil drillers and miners, airline operators, and any driver trying to get to an unfamiliar location while avoiding traffic or dangerous construction tie-ups.

    Like the weather on Earth, space weather can be modeled and, to a large extent, forecast. The National Oceanic and Atmospheric Administration (NOAA) is now developing models to improve its predictions of space-weather activity and its impacts. Combining the power of two previous models, the new WSA-Enlil model simulates conditions from the base of the Sun’s corona and the impacts of solar events as they evolve into storm systems out in space.

    The NOAA researchers are specifically looking for ways to minimize the effects of big plasma ejections that may temporarily interrupt vital electrical power grids and radio and satellite communications systems, such as GPS.

    Forecasts need to be not only more accurate but also faster, giving us enough time to act once they are issued.

    It may take up to four days for an ejection of charged particles and magnetic streams to produce magnetic storms on Earth, so more-accurate forecasts of the timing of these impacts could, for example, give airline operators the opportunity to reroute traffic and power companies time to work around potential outages or other problems.
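    The four-day lead time squares with the figures cited in this piece: at a million miles an hour, an ejection needs roughly 93 hours to cross the 93 million miles from the Sun to Earth. A quick sketch of the arithmetic (real ejections vary in speed, so actual arrival times differ):

```python
# Rough Sun-to-Earth transit time for a solar ejection,
# using the distance and speed figures cited in the article.
SUN_EARTH_MILES = 93_000_000
speed_mph = 1_000_000

hours = SUN_EARTH_MILES / speed_mph
days = hours / 24
print(f"{hours:.0f} hours = about {days:.1f} days")
```

    That back-of-the-envelope figure, just under four days, is why even modest gains in forecast speed translate into meaningful preparation time on the ground.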

    “This advanced model has strengthened forecasters’ understanding of what happens in the 93 million miles between Earth and the Sun following a solar disturbance,” says Tom Bogdan, director of NOAA’s Space Weather Prediction Center in Boulder, Colorado. “It will help power grid and communications technology managers know what to expect so they can protect infrastructure and the public.”

    Source: National Oceanic and Atmospheric Administration, www.noaa.gov.

    Future Active

    Custom Teaser: 
    • Atlas of European Values
    • Doomsday for the Arts?

    Atlas of European Values

    Many Europeans simply do not feel “European,” judging from the findings in the latest edition of the Atlas of European Values, released in December by Dutch academic publisher Brill. The Atlas covers the attitudes, concerns, and values of people in 45 countries.

    With a looming debt crisis threatening to break the European Union apart, the differences of opinions on religion, immigration, sexuality and gender issues, family, work, and morality may be more acute than ever. European values remain divided geographically, though central Europe is converging more with the western nations.

    “Interestingly, the book also provides a picture of the direction in which Europe seems to be heading,” writes European Council President Herman Van Rompuy in the book’s preface. “Modernisation and individualisation have gained ground, especially in the North-Western parts. However, traditional family values still dominate.”

    Among other findings, the Atlas reports:

    • Despite the high divorce rate, marriage remains popular. Europeans retain relatively traditional family values, and loyalty is seen as the key success factor for a marriage.
    • Single mothers are accepted, but Europeans think that it is better for a child to have a father.
    • In northern Europe, cheating in marriage is tolerated more than it is in the south.
    • Europeans are still religious, but religion is increasingly viewed as a personal matter rather than something linked to an institution.
    • The rich countries of Europe are the least willing to pay for a clean environment.
    • Confidence in the EU is highest in countries that are keen to receive funds from the EU and lowest in the countries that provide the funding.

    Source: Atlas of European Values, published by Brill, www.brill.nl/atlas-european-values-trends-and-traditions-turn-century.

    Doomsday for the Arts?

    The fine arts in Nigeria—a relatively young institution—may be on a path to a doomsday scenario if current trends continue, warns University of Nigeria art historian Ola Oloidi.

    In a recent lecture for the National Gallery of Art, Oloidi cited dwindling enrollment of students in university art programs. But perhaps an even more threatening trend is the merging of fine and applied arts into environmental sciences programs, along with architecture and urban and regional planning.

    “Though the National Universities Commission has justified this change by saying that art should not be taught to produce fine and applied artists alone ‘but also to influence developments’ in the above newly found neighbors, the professed philosophy of the new faculty that accommodates fine arts clearly shows that art has lost some natural values while acquiring new ones that are now derailing or upsetting visual arts as aesthetic, humanistic, and industrial tools,” he said in the lecture.

    The loss of art is also a loss for future creativity and innovativeness, Oloidi warned in a message directed toward academic policy makers.

    Sources: “A Brief History of Art” by Okechukwu Uwaezuoke, This Day (Nigeria), December 15, 2011, www.thisdaylive.com/articles/a-brief-history-of-art/105081/.

    National Gallery of Art, 10th Annual Distinguished Lecture, www.nga.gov.ng/2011annuallecture.html.

    Renewing Prospects for American Prosperity

    By Rick Docksai

    Reenergizing social activism could put American progress back on track.

    The Price of Civilization: Reawakening American Virtue and Prosperity by Jeffrey D. Sachs. Random House. 2011. 324 pages. $27.

    The United States has much to learn from the rest of the world, according to antipoverty advocate Jeffrey D. Sachs, director of Columbia University’s Earth Institute. In The Price of Civilization, he puts forth a detailed blueprint for how the world’s de facto superpower might, with a broader perspective and honest assessment of values, step up to the challenges that beset it and the globe as a whole.

    America’s elites—business leaders, academics, media, and government officials—have largely given up on social activism, Sachs asserts, and American society is suffering for it. Living standards, innovation, and educational attainment are declining, and public infrastructure everywhere is crumbling; meanwhile, consumer debt is climbing to unprecedented levels and corporate elites repeatedly breach the public trust without fear of punishment.

    “Despite America’s vast natural resource advantages, it has actually ended up with a lower average quality of life in many ways than in northern Europe,” he writes.

    Sachs traces the roots of America’s present-day crisis back decades. In the 1970s and 1980s, presidents and Congress drastically cut taxes on the wealthiest Americans and weakened many government programs and categories of federal regulations. Meanwhile, business interests gained unprecedented levels of influence over policy making.

    Government had led the United States through World War II and numerous other crises in the years prior, but following all this federal downsizing and corporate enabling, government lost its ability to lead effectively. Furthermore, it allowed for a disastrous shortchanging of America’s future generations, who will have to pay today’s skyrocketing debts in the form of reduced government benefits or higher interest rates and taxes.

    In Sachs’s view, the Obama administration falls far short, as well. Not only does it continue many of the policy mistakes of its predecessors, but it also, like them, exhibits an “anti-planning mentality”: no coherent plan on health care, education, science and technology, or other pivotal issues. Sachs urges government leaders to widen their perspectives. American society will not become a better place until its people begin to think systematically about the future, he warns.

    “To retake political power from the lobbies, we will need to take the long view,” he writes.

    He points to other countries for examples of more forward-thinking policies. For instance, European governments mitigate unemployment by investing more heavily in job retraining and career services that match workers and jobs. Germany reduces workers’ hours instead of laying them off.

    Also, countries across the globe are focusing on measuring their citizens’ overall well-being, and not just their national GDPs. Since 1972, Bhutan’s government has been taking official measures of “gross national happiness,” which takes into account such factors as community vitality, culture, health, education, and psychological well-being.

    U.S. citizens have actually been reporting lower levels of personal well-being, despite still having the world’s largest economy, Sachs notes. Citizens of Costa Rica, the Dominican Republic, and other less-affluent countries report being much happier.

    The U.S. government can enhance its citizens’ well-being if it, too, raises its expenditures on public services and infrastructure, Sachs argues. But to do that, it will have to reinstate higher taxes.

    The reward would be immense, according to Sachs: With increased budget revenues, the U.S. government could end extreme poverty once and for all in one generation.

    “With a fair tax structure and a just contribution of the rich to the rest of society, we can afford a truly civilized America,” Sachs writes.

    Raising taxes is a difficult feat in America’s political climate, and Sachs acknowledges as much. But, as he also points out, opinion polls indicate that 61% or more of Americans favor raising taxes on the wealthy and agree that lobbyists’ influence over government needs to be curbed. The will is there, if political leaders can only summon the resolve to marshal it into action.

    Sachs finds further cause for hope in demographics. The millennial generation is markedly more progressive on most political issues, and ethnic minorities such as Hispanic Americans and African Americans constitute increasingly large percentages of the U.S. voting population. These population shifts, combined with the reality of the U.S. fiscal crisis, may make for a historic, long-lasting progressive recalibration of the American political scene.

    Citizens themselves must reform, also, according to Sachs. He laments that Americans are too money-obsessed and too saturated by mass media. They consequently work excessive hours for more pay that does not translate to more life satisfaction, and they watch excessive television and other entertainment while knowing shockingly little about the world around them. Mass consumerism and media saturation must give way to contemplation and citizens’ reconnection with their communities and each other.

    “Pull back from hypercommercialism, unplug from the noisy media a bit, and learn more about and reflect on the current economic situation,” he writes.

    Sachs has gained acclaim globally for his repeated calls to wealthy nations to contribute larger shares of aid to the developing world. The “price of civilization” is a sharper focus on the United States and the responsibilities that its citizens hold to correct social ills within their nation’s borders. This book offers an insightful view that is sure to engage anyone concerned with what is going wrong in American society and what it will take to set it right.

    About the Reviewer

    Rick Docksai is an assistant editor of THE FUTURIST and of World Future Review.

    Books in Brief

    Edited by Rick Docksai

    Criminal Justice and Criminal Redemption

    After the Crime: The Power of Restorative Justice Dialogues between Victims and Violent Offenders by Susan L. Miller. New York University Press. 2011. 265 pages. Paperback. $25.

    The criminal justice system can arrest an offender and impose a sentence, but it cannot erase the pain and trauma that the victims suffer, says Susan Miller, a University of Delaware professor of sociology and criminal justice. New “therapeutic restorative justice” programs, however, are forming to help both the victims and their offenders to attain peace.

    As Miller describes in After the Crime, most of the programs host voluntary, carefully planned face-to-face meetings between the victim and the offender in which the two can discuss what had happened, share their feelings with each other, and achieve resolution. Program staff workers prepare both parties thoroughly before their meeting: The offender must accept responsibility for the crime, and the victim must overcome any initial anger or fear he or she might feel toward the offender.

    Miller spotlights one particular program, Victims’ Voices Heard (VVH), and profiles a number of victims who participated in it. They include a mother who reconciles with the drunk driver who had killed her son; a daughter who makes peace with her father who had sexually abused her; a woman who forgives the burglar who had broken into her house and raped her; and many others.

    Convicted offenders credit the program with forcing them to reevaluate their past behaviors and to feel empathy for the people that they had hurt. Victims say that they find healing and resolution.

    Readers who have been affected by a crime will find much in After the Crime to which they may relate. Other readers who have not experienced a crime will find much of value, as well: moving, powerful stories of redemption, and a profound view of the emotional aspect of criminal justice and what new groups such as VVH are doing to better attend to it.

    More Technology, More Problems?

    Techno-Fix by Michael Huesemann and Joyce Huesemann. New Society. 2011. 434 pages. Paperback. $24.95.

    Many people believe that most of the world’s problems will disappear if we just develop better technology and more efficient means of producing and delivering things: Genetically modified crops could eradicate hunger, renewable energy will make pollution obsolete, advancements in medicine will expand access to health care, etc. In Techno-Fix, environmental researchers Michael and Joyce Huesemann confront this view head-on and refute it.

    They point out that any new technology may combat one old problem but replace it with a new one. For example, ecological harms stem from genetic modification of crops, and overuse of antibiotics gives rise to deadlier bacteria that no antibiotic can kill.

    Also, meat-processing industries separate consumers from the rearing and killing of farm animals; hence, people may eat too much meat while remaining oblivious to the cruel living conditions often inflicted on animals raised for food. Likewise, technology also separates human laborers from their executive supervisors, and therefore perpetuates exploitative work conditions.

    Policy makers and politicians may be tempted to rely on technological innovation to fix societal problems. It is often easier to create new tools than to actually change people’s behavior. But they are mistaken, the Huesemanns write. It will take human thought and attention to resolve the unintended consequences that human-made technology creates. Unless we level off our population growth and give up on the consumerist lifestyle, there is little hope that any of our planet’s ecological ills will cease, no matter what new gadgetry we develop.

    Techno-Fix is an introspective, philosophical look at human beings and their relationships to the technologies that are integral to their lives. It’s research-intensive, but approachable and relevant enough for any conscientious reader to enjoy.

    Time Travel Ahead?

    Time Travel and Warp Drives by Allen Everett and Thomas Roman. University of Chicago Press. 2012. 259 pages. Paperback. $30.

    Forecasting the future—we all try to do that. But what if we could actually travel to the future? It’s quite scientifically possible, and engineers may someday achieve it, argue Allen Everett and Thomas Roman.

    The authors’ discussion is mostly conceptual—a necessity, since the nuts-and-bolts technological hardware that would achieve the task is vastly beyond our present-day capacity to build, or even to imagine. The authors do not set out to show how to build a time machine. What they do show is that it is indeed possible.

    Time Travel and Warp Drives is a riveting scenario for our scientific future.

    Eating for a Water-Secure World

    Virtual Water: Tackling the Threat to Our Planet’s Most Precious Resource by Tony Allan. I. B. Tauris. 2011. 351 pages. Paperback. $18.

    Did you know that 40 liters of water went into preparing the single slice of toast you had for breakfast? Or that when you drink a single cup of espresso you’ve consumed 140 liters of water? Tony Allan, a King’s College geography professor, opens readers’ eyes to the vast, unseen quantities of “virtual water” that humans unknowingly use every day.

    In Virtual Water, Allan warns that few of us realize just how much water we consume—but we should. Countries across the world are in the grip of water shortages, and unless we change our wasteful water habits, we will witness starvation on a never-before-seen scale.

    Diet is critical, according to Allan, since agriculture and food production constitute a larger share of virtual water usage than any other human activity. We need to educate ourselves about our water usage and learn how to live with less, he argues. By eating less meat and throwing away less food of any kind, industrialized countries could cut their water consumption by 40%.
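    Allan’s accounting can be illustrated with a toy tally built from the per-item figures cited above. The breakfast items and the 40% savings figure come from the review; applying the cut uniformly to one meal is an illustrative simplification of mine, not Allan’s own calculation:

```python
# Toy virtual-water tally for a simple breakfast, in liters per item.
# Per-item figures as cited in the review; the uniform 40% cut is
# an illustrative simplification, not Allan's own method.
breakfast_liters = {"slice of toast": 40, "cup of espresso": 140}

total = sum(breakfast_liters.values())
after_cuts = total * (1 - 0.40)  # the 40% reduction Allan projects

print(f"breakfast footprint: {total} L")
print(f"after a 40% cut: {after_cuts:.0f} L")
```

    Even this two-item breakfast carries a footprint of 180 liters, which is the kind of invisible consumption Allan wants readers to start seeing.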

    Brazil, China, and India, three of the four BRIC countries, have large roles to play, as well. The three countries produce more than one-third of the world’s food, and in coming decades they will be dominant virtual-water exporters.

    Brazil is undertaking major conservation initiatives now, according to Allan, who credits China with making the most progress out of any country on Earth. As China invests more heavily into business opportunities in Africa, he adds, it may export its conservation knowledge and bring on a new era of sustainable water practice internationally.

    Virtual Water is a provocative read and a surprising look at Earth’s present water situation and the actions necessary to ensure a healthy water future.
