March-April 2011, Vol. 45, No. 2

  • From Hospital to Healthspital: A Better Paradigm for Health Care
  • Health Insurance in America After the Reform
  • Could Medical Tourism Aid Health-Care Delivery?
  • Bike to the Future
  • Relationships, Community, and Identity in the New Virtual Society
  • Imagineers in Search of the Future
  • Understanding Technological Evolution and Diversity

From Hospital to Healthspital: A Better Paradigm for Health Care

By Frank W. Maletz

Hospitals should not simply be places where people go to get well (or, worse, where they go to die). Future hospitals could become wellness information centers and proactive partners in community well-being, says a practicing orthopedic surgeon.

Is health-care delivery in the United States so broken that it cannot be repaired, remediated, rejuvenated, reformed, or reorganized? Should all existing delivery mechanisms be torn down so we can start from scratch?

My unequivocal answer is no to creative destruction, but creative rethinking is imperative. Nowhere on the planet is there a “perfect delivery system” for health-care modeling. In the United States, what is currently called a “system” is certainly not one in the sense of an ecosystem—i.e., controlled, sustainable, natural, with known inputs and outputs, with precise and defined resources and resource management, and with holistic feedback loops. There should also be within the ecosystem a balanced and proportionate response to all perturbations. A health-delivery system requires open adjustability.

The current U.S. health-delivery system does have many strengths, among them deep expertise at universities and other research hubs. Its free-market structure for product development and dissemination is inventive and innovative. Its safety is ensured through oversight by the U.S. Food and Drug Administration and organizations such as the Joint Commission. The robust National Institutes of Health provides funding and research prioritization. We now also have the social networking tools (wikis, Facebook, Twitter, LinkedIn, and the like) to deploy seamless and remarkable change on the magnitude of a paradigm shift.

But the biggest asset of the current system is the network of 5,010 community hospitals that deliver care to unique individuals locally, one provider to one patient in need, day or night, weekend or holiday. Thus, the United States already has the fundamental building blocks for a strong, personalized health-care-delivery system. So what else is needed?

Goals for Health: Elements of a Redesigned Approach

According to the Institute of Medicine report “Crossing the Quality Chasm: A New Health System for the 21st Century,” the U.S. health-care system should strive to effect the following changes:

  • Redesign care processes.
  • Make effective use of information technology (IT).
  • Improve knowledge and skills management.
  • Develop effective teams.
  • Coordinate care across patient condition, service, and settings.
  • Use performance and outcome measurement for continuous quality improvement and accountability.

Reforming health care is a ubiquitous topic in the national dialogue because of the amount of resources that health consumes—16% of GDP. For all the ideas and opinions brought forth, however, all we seem to get is more GDP devoted to the problem, with partial solutions that get traction, then fizzle, doing little to improve quality or reduce the chaos in the system. Then the blame game begins: Rising costs are the “fault” of providers, or of insurers, attorneys, pharmaceutical and product companies, patient demands and expectations, for-profit hospitals, or government leaders who lack will.

It is time now for a true health renaissance, with constructive, holistic, integral, paradigm-shifting thinking and action. I believe that, until we can fix the delivery systems, we cannot begin to correct the reimbursement mechanisms.

What We Already Know about Health

First, we know that prevention is more cost-effective than treatment. Emergency-room visits are more expensive than routine maintenance. Chronic disorders such as diabetes, hypertension, heart disease, strokes, and renal failure consume an inordinate share of health-care dollars. Smoking cigarettes is bad. Obesity and nutritional deficiencies are epidemic. Futile care at the end of life consumes a large proportion of the Medicare allocation. Reckless behaviors are responsible for much loss of productive and functional young lives. Cure and precision diagnosis are much more desired than mere control, maintenance, or palliation.

We also know that waste and redundancy in a paper-based information system have extraordinary costs both in real dollars and in time that could be allocated much more productively. A systematized, constantly updated, searchable, linkable database available at each point of care would reduce waste, repetition, redundancy, and the tendency for hand-off errors. Care could then be coordinated among all providers.

On the positive side, we know that workers who are healthy function more productively. Jobs, income, and reliable, portable health-insurance benefits add to security and productivity. Happy, contented people live longer and better, and many people already spend huge amounts of money on a host of programs to improve their health and well-being.

We know that regular exercise, especially aerobic, improves clarity, mental functioning, and wellness. Having a meaningful, fulfilled, goal-directed life and trying to contribute to society also increase longevity. And meeting our basic needs, including shelter, nutrition, and clothing, and maintaining appropriate levels of stress, balance, and moderation, are essential ingredients for physical and mental well-being.

Thus, the goal is not simply to eliminate sickness or delay death. We must take a much more holistic and expansive view of health care that embraces wellness and enrichment, a view that is flexible and that adopts the best practice from moment to moment.

Hospitals Today

Hospitals and sanitaria were developed to house the sick and treat or quarantine the diseased, deformed, or demented. Today, care is usually delivered locally to one patient by one provider at a time. Community hospitals provide the vast majority of the contact visits. Patients are generally not fluent in health-related matters, and this lack of understanding leads to major compliance failures with best advice and recommendations.

Providers are not infallible. Patient problems are inherently complex, and there are many unknowns. Medicine itself is becoming more complex. Natural healing using biological, biochemical, and immunologic enhancing remedies will function more predictably than artificial implants, prosthetics, xenograft replacements, and the like, but we are on the verge of advanced treatments with nanotechnology, bioengineering, genomics, proteomics, metabolomics, stem cells, and immunomodulation. Such advances bring us closer to cures and disease elimination.

Hospitals could do more to experimentally model an integrated, holistic health-delivery system that effects a real shift. They would collate the best research and brightest ideas; incorporate the best of wellness, well-being, natural, and alternative options; improve oversight of chronic debilitating conditions; and mobilize and coordinate effective preventive strategies.

We have the tools to craft a better, more healthful future and enable more-productive lives for everyone on the planet. The first step does not require much more than a creative paradigm shift in thinking and approach—a paradigm I call Healthspital 2.0™.

Elements for Integration

The biggest need is for data and information management. The needs for privacy and confidentiality of health information have not disappeared in the age of social networking, even as people increasingly desire to be heard, noticed, and connected. Thus, rather than locking down all health information as a matter of privacy, we need to reconstruct laws regarding inappropriate use of data, such as the discriminatory use of genetic information.

Seamless availability and transfer of health information allows in-depth understanding of confounding variables, reduces redundancy, and potentially eliminates hand-off errors. A computerized health “passport” would serve as a template and allow interconnectivity, not just benefiting the patient, but also allowing broader public-health research to be performed. This systemization of information would be able to highlight best outcomes and best practices through true tracking and social networking.

The Healthspital model also requires more-effective use of expert systems. With integrated data management, experts could render opinions from afar on questions within the database. Doctors and other practitioners would have access to remote monitoring, enabling them to render remote advice. They could find answers to questions and discover best practices, as well as share their own discoveries, ideas, and best practices. Innovation could be instantly disseminated globally.

Patients and families would more easily engage with extensive information and support networks, and self-education would expand.

Healthspitals in the Community

Each Healthspital would appreciate the norms, mores, and expectations of the community it serves on issues such as end-of-life ministrations. Dialogue could begin in earnest regarding hospice services. Part of the Healthspital’s mission could be to celebrate each patient as a life well lived, honoring individual care preferences during life-and-death decision making. Throughout the community, such openness would reenergize relationships between younger and older generations and promote mutual caring, which would contribute to the curing function across the health-care continuum.

The Healthspitals’ integrated delivery system at the community level would allow a much truer triage at emergency departments. As these are often the places of first resort for patients with all levels of care needs, a system-wide approach to triage would help refer all patients to the appropriate (and often less expensive) level of care. This would lessen the issue of “dumping” and allow tracking of referral patterns to provide a feedback mechanism for improving triage throughout the system.

A Healthspital 2.0 approach could proactively intervene against negative health modulators such as smoking, impaired driving, and other reckless behaviors, and would promote behavioral change.

Healthspitals would also assess and promote healthy lifestyles, such as appropriate nutrition and exercise regimens. Personal monitoring devices worn on walks would let people compile and track their health in a database accessible to their physicians as well as to researchers following public-health trends.

Healthspital 2.0 would, by virtue of eliminating redundancy and improving health, allow huge savings from current health-care expenditures. These savings could be reinvested into promoting more healthful programs such as building walking trails, biking areas, parks, and local organic farms. Public health and wellness would thus become self-sustaining.

Once the Healthspital is fully functional, true reform of medical malpractice would be possible, as errors would decline and the overall health of the community would improve. The integration would also allow risk sharing across the system, which would require understanding the rights and responsibilities of all stakeholders, from patients to Healthspital personnel, all of whom are truly invested in providing and maintaining the health of the entire community—the health ecosystem.

Barriers to the Healthspital Paradigm

Professor Randy Pausch, in The Last Lecture (Hyperion, 2008), taught that barriers are put in front of us to see how much we want what is beyond them. Here are some of the challenges facing the Healthspital 2.0 paradigm.

  • Legal issues: Many modifications of current laws will be needed, especially in the areas of information use and availability at point of care. Issues that will need to be addressed include HIPAA (Health Insurance Portability and Accountability Act), patient dumping, conflict of interest, and discrimination. HIPAA, in particular, was originally enacted as a privacy guard. I contend that the American population is more comfortable sharing personal health-care information than current legislation indicates, so long as they have confidence that the information will be used responsibly. With 500 million people already utilizing Facebook, I believe the vast majority of people (therefore, patients) would make health-related data available to providers and researchers in the interests of preserving health.
  • Financial issues: Compared with building a new hospital, the Healthspital model offers potentially tremendous cost savings, but care must be taken that these savings are reinvested into more healthful projects rather than shifted to various nonhealth-related special interests.
  • Political issues: The creation of the Healthspital 2.0 concept will require substantial commitment, investment, and will on the part of politicians. The paradigm shift is monumental, so it is certainly appropriate to work at the experimental project level where results can be analyzed in terms of cost savings and improved health care. However, politicians with appropriate foresight would also be helpful in providing leadership and serving as champions for concepts such as this.
  • Educational issues: As with all major changes, the educational ramifications of a health-system paradigm shift are tremendous. Health awareness should be taught at the earliest levels, starting in preschool. Science and nutrition coursework throughout formal schooling is imperative, as is example setting. Patients currently receiving treatment in the older delivery model will need tools that the local community Healthspital will provide. Lifelong education could thus enable individuals to become more involved in their own health future, allowing them to assist responsibly in the delivery of care to themselves and family members.
  • Punitive and unconstructive programs: Bashing and the blame game must be eliminated throughout the health-delivery system. No one—individual or institution—functions well with a stick at the back. The current pay-for-performance model does not allow the raising of all boats toward improvement, but rather widens the gap between the great performers and the health programs and systems that are performing poorly.

Building a Healthspital Model

I currently work at Lawrence & Memorial Hospital in southeastern Connecticut, a 250-bed community hospital. We serve a number of employers and provide care across the continuum from birth to death, from neonatal intensive care to skilled nursing facilities, with a robust hospice presence. We care for patients in 10 counties, and our primary service area includes both the destitute and the wealthy. We are regional, and our facility would be a perfect venue for an experimental design incorporating any and all of the above suggestions.

How would this work? First and foremost, it would be an experiment requiring bright investigators to provide oversight and analysis of data. All elements of health care and wellness should be incorporated. Every member of the community in the 10 primary service areas should be enrolled, and a swipe-card passport developed so that information is standardized at any point of care. Any and all good ideas would be welcomed for inclusion in a central repository of ideas and best practices. Through instant messaging, such bright ideas would be disseminated throughout the system for consideration, helping to ensure equal access to the best available care.
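One way to picture the swipe-card passport described above is as a single standardized record that every point of care reads from and appends to. The sketch below is purely illustrative: the class, field names, and sample data are my assumptions, not part of any published Healthspital design.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of one standardized "health passport" record that a
# swipe card might reference at any point of care. All names here are
# illustrative assumptions, not a real specification.
@dataclass
class HealthPassport:
    patient_id: str                                  # identifier encoded on the card
    allergies: list = field(default_factory=list)
    medications: list = field(default_factory=list)
    chronic_conditions: list = field(default_factory=list)
    encounters: list = field(default_factory=list)   # visit summaries, newest last

    def add_encounter(self, provider: str, summary: str) -> None:
        """Append a visit summary so the next provider sees the full history."""
        self.encounters.append({"provider": provider, "summary": summary})

# Every point of care updates the same record, so hand-offs carry the
# complete history instead of forcing the patient to repeat it.
record = HealthPassport(patient_id="CT-000123", allergies=["penicillin"])
record.add_encounter("Emergency Dept.", "ankle fracture, splinted")
record.add_encounter("Orthopedics", "surgical fixation scheduled")
print(len(record.encounters))  # 2
```

The point of the sketch is the single shared, append-only history: redundancy and hand-off errors shrink because each provider writes to, and reads from, one record rather than a separate chart.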

No person requiring care or requesting information would get anything less than the best available. Funding sources could include venture capital, information-system vendors, federal pilot-project grants, or American Hospital Association new-investigator funds.

Pilot projects shown to work effectively would merge databases and coalesce into a national or even global health-delivery ecosystem, addressing the big five issues of waste and redundancy, expensive access, prevention, chronic-disease management, and fruitless ministrations at the end of life.

About the Author

Frank W. Maletz, MD, is an orthopedic surgeon specializing in spine and trauma at the Lawrence & Memorial Hospital in East Lyme, Connecticut. For more information about the Healthspital 2.0™ concept, contact the author by e-mail at malfam5@aol.com. He will also speak on this topic at WorldFuture 2011: Moving from Vision to Action in Vancouver.

Health Insurance in America After the Reform

By Jay Herson and David Pearce Snyder

If for-profit health insurers find that business is too unprofitable under the new law, where will Americans find affordable coverage? One solution may rise from the nonprofit sector led by credit unions, which have already demonstrated an ability to keep up with for-profit banks.

The primary objective of President Obama’s 2009 health care reform initiative was to provide health insurance for an estimated 46 million people who did not have it. The Act requires insurers not to reject coverage on the basis of preexisting health conditions, and it requires all citizens to purchase health insurance or pay a tax if they decline to do so.

Should the Reform Be Reformed?

Conventional post-election wisdom holds that, in spite of heavy rhetorical assault, the 2010 health insurance reforms will survive the new Congress largely intact. Since conventional political wisdom is right only half the time, this offers little assurance. However, the predictable demographic and economic realities underlying the coming decade will be sufficient, by themselves, to produce the sequence of developments summarized in this article—even if the Patient Protection and Affordable Care Act (PPACA) were to be overturned.

Without PPACA’s constraints, for-profit insurers can be expected to increase premiums in line with health-care providers’ costs, which are rising at two to four times the rate of the Consumer Price Index. At the same time, the United States will experience a 50% increase in the “high-maintenance” over-65 patient population—plus the retirement of the baby boomers, who represent one-third of the nation’s current caregivers—just as the nation passes through five to seven years of projected stagnant income growth, chronic high unemployment, fiscal deleveraging, and shrinking public-sector budgets.

Absent the PPACA reforms, with each passing year a growing percentage of U.S. households will simply be unable to afford the premiums set by for-profit insurers. Nonprofits would emerge naturally to fill the growing unmet marketplace need.

In short, PPACA will largely serve to facilitate and accelerate the adaptive free-market behavior that is almost certain to occur in the austere circumstances that will confront most Americans for the foreseeable future.—David Pearce Snyder

Omitted from the final version of the Patient Protection and Affordable Care Act (PPACA), passed in March 2010, was the so-called “public option,” a government-run health insurance program designed to compete with profit-making companies. Legislation notwithstanding, it has generally been marketplace forces—not government interventions—that have shaped the U.S. future, so we shall examine how these market forces will create a new source of competition for the health-insurance market: nonprofit organizations.

Health Insurance Forecast to 2030

Under the 2010 health care reform legislation, the health-insurance business is expected to become less attractive for investor-owned public insurance companies. This will especially be the case if courts decide that requiring citizens to purchase health insurance is unconstitutional.

More particularly, insurers’ inability to reject applicants or to cap the benefits (or even terminate the policies) of patients incurring serious and costly illnesses will make health insurance increasingly unattractive as a profit-making business. As for-profit insurers exit the affordable health insurance market, nonprofit institutions may step up to meet consumer demand.

There are already a number of nonprofit organizations that serve large pools of people, such as credit unions, which may offer their members health insurance. These programs would be administered by large data-processing organizations similar to those that currently hold service contracts with Social Security, Medicare, Medicaid, and other government programs.

There are now approximately 7,800 credit unions (CUs) in the United States, including federally insured, state insured, and self-insured institutions. These serve tens of millions of members and hold hundreds of billions of dollars in assets, which increased significantly during the recent banking crisis.

Credit unions should have little concern about competing with for-profit insurance companies since they have been competing with the for-profit banks for the past 75 years. Health insurance would be a logical extension of providing low-cost services to members, as well as an extension of their current offerings of health savings accounts.

Interstate cooperatives of CUs—already in existence—could serve a critical mass of insured, taking advantage of their existing institutional infrastructure such as data processing and electronic funds transfer. The CUs would initially offer low-cost health insurance primarily targeted at the uninsured. However, people insured with individual policies or group insurance might also choose CU health insurance as an alternative. In fact, employers could offer CU health insurance as a benefit. As insured pools increase, the number of providers (doctors, hospitals) accepting the insurance would increase and the insurance coverage would become more attractive to the public and employers.

The nonprofit organizations offering health insurance would by no means be limited to CUs. New health insurance companies can be created by all sorts of nonprofits banding together to represent a sufficiently large pool of insured. For example, public radio and TV stations could unite to form insurance groups, as could university alumni associations or retirement funds such as the Texas Teachers Retirement System and the California Public Employees’ Retirement System (CalPERS).

The 2010 health care reform act provides for subsidies to people who cannot afford to purchase health insurance. Presumably, these subsidies could be used to purchase the nonprofit health insurance described above. Federal subsidies, however, may be insufficient for some families to afford health insurance. Should this be the case, there will be pressure on the states to provide subsidies. Some states may be more progressive than others in helping citizens get the necessary coverage, and those that do not provide a path to health insurance may see a dwindling labor supply as workers and businesses move to more progressive states. Under the health reform act, state health insurance programs will become a tool of economic development policy.

Scenarios for Nonprofit Health Insurance

Much of the foregoing discussion is, admittedly, speculative. The following are four possible scenarios, plus a most-likely scenario, that could emerge if nonprofits begin providing health insurance as a consequence of the sweeping, congressionally mandated reform.

1. Business as Usual. Although a nonprofit initiative is widely discussed, actual health insurance policies issued by credit unions and other nonprofits never get off the ground as Congress comes to a stalemate over legislation that would enable it. Perhaps because of intense lobbying by for-profit health insurance companies, Congress eliminates some aspects of the Act. Proposals for state-run, high-risk insurance pools to merge over state lines and provide expanded coverage are also widely discussed, but fail to get the approval of state legislatures due to budget constraints and problems foreseen in governance. Meanwhile, health-care costs continue to rise rapidly, and, as premiums charged by for-profit health insurers soar, more people are forced to abandon their coverage; there are growing lines at public health clinics for minimal care.

2. At Least We Tried. Credit unions launch several health-insurance companies, but they fail to enroll enough people fast enough to sustain the enterprise. Although subsidies from state government and charities do materialize, interest dwindles because of failing CU health insurance initiatives and a declining sense of urgency, in spite of the fact that tens of millions of Americans remain uninsured.

3. Nonprofits Succeed. The demand is so great that credit-union-based health insurance takes off, and 30 states create funds to subsidize premiums for those who qualify. The success of the first CU groups creates the experience base for other groups to be quickly formed. Competition is healthy for all. By 2030, 93% of the U.S. population has some form of health insurance—40% from nonprofits and 53% from public company health insurance and government agencies.

4. Watch What You Wish For. After 10 years of success, CU health insurance becomes commonplace and a workplace standard. However, with the increased visibility that comes with success, fraud among providers and patients is making headlines. This causes a drop in governmental and charity subsidies for premiums. The existing CU insurance companies feel they need to grow more, and mergers begin taking place. This reduces the amount of competition and consumer choice. To compete, some of the remaining CU insurance companies decide that they can reduce costs and attract more members by actually becoming direct health-care providers, ultimately building (or buying) their own medical facilities. This leads some of the CU insurers to go public and, thus, cease to be nonprofit. By 2030, the medical insurance industry has begun to look the way it did in 2010.

5. Most-Likely Scenario. The most-likely scenario for the next 20 years lies somewhere between scenarios 3 and 4 above. By 2030, population demographics will make scenarios 1 and 2 politically unviable. Although scenario 4 is possible, the pendulum never swings completely back. While most Americans are likely to be covered by private and government health insurance in 2030, there will continue to be a need for the nonprofit alternatives described here. Still, barring further legislative intervention, it seems unlikely that more than 93% of the population will have health insurance in 2030.

Amtrak emerged when private railroads did not want to continue providing passenger service. Rural electric cooperatives emerged when it was not profitable for private industry to provide power to rural areas. Similarly, some form of cooperative health insurance is likely to emerge to fill the void created by omission of a public option in the health insurance reform.

It is difficult to forecast beyond the year 2030, but the information, communication, and health-care-management technologies that exist by 2050 should make a single-payer system easy to implement and the only logical way to provide quality health care to the U.S. population. Out of necessity, nonprofit organizations will pave the way to 2050.

About the Authors

Jay Herson is a senior associate at the Johns Hopkins Bloomberg School of Public Health and managing editor of FutureTakes. E-mail jay.herson@earthlink.net.

David Pearce Snyder is a consulting futurist and principal of The Snyder Family Enterprise and THE FUTURIST’s contributing editor for Lifestyles. E-mail david_snyder@verizon.net.

This article draws from and updates their essay in the World Future Society’s 2010 conference volume, Strategies and Technologies for a Sustainable Future (WFS, 2010, 450 pages), which may be ordered from www.wfs.org/wfsbooks for $29.95 ($24.95 for Society members).

Could Medical Tourism Aid Health-Care Delivery?

By Prema Nakra

Medical tourism—wherein patients seek more affordable or specialized treatment outside their home countries—represents a major challenge for health-care delivery in developed countries such as the United States. It also offers an opportunity to integrate and improve medical delivery globally.

Health care has long been one of the most local of all industries, but in today’s world, people, information, ideas, and technologies are increasingly crossing national borders. The move to “go global” is such a strong force that hardly any human activity is exempt from its impact.

Medical tourism, an outgrowth of the globalization of services, has emerged as an innovative, border-crossing industry, and many developing countries are poised to take advantage of this opportunity. But this opportunity also represents a challenge to health-care-delivery systems in developed countries such as the United States.

U.S. health-care costs, already an estimated $2 trillion a year, are predicted to double in the coming decade. By 2020, health-care spending is projected to consume 21% of U.S. GDP, compared with 16% of GDP in other developed countries.
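The projection that spending will double within a decade implies a steady annual growth rate that is easy to check. The sketch below is my back-of-the-envelope arithmetic, not a figure from the article or its sources:

```python
# If spending doubles in 10 years, the implied annual growth rate is
# 2**(1/10) - 1 (my arithmetic, illustrating the article's projection).
annual_growth = 2 ** (1 / 10) - 1
print(round(annual_growth * 100, 1))  # about 7.2 percent per year

# Compounding $2 trillion at that rate for ten years recovers the doubling.
spending = 2.0  # trillions of dollars, the article's current estimate
for _ in range(10):
    spending *= 1 + annual_growth
print(round(spending, 2))  # about 4.0 trillion
```

In other words, "doubling in a decade" is equivalent to roughly 7% compound growth per year, well above the economy's long-run growth, which is why the share of GDP consumed by health care keeps rising.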

Today, more than 40 governments are involved in supporting medical tourism, and the number is growing each year. The medical community in developed countries has started to recognize medical tourism as a real phenomenon with significant impacts on both practitioners and patients. Yet “medical tourism” is not a phrase that has come up openly in the U.S. debate on health-care reform.

Just after the Patient Protection and Affordable Care Act (PPACA) was passed in March 2010, President Obama signed into law the Health Care and Education Reconciliation Act, which made a number of significant changes to the PPACA. According to Chris Brandt and Michael Cohen of Deloitte Consulting, these reforms represent one of the most significant disruptive events for U.S. health-care providers in the last century. Key challenges that providers will face due to this reform include:

  • Estimating the potential impact of increased coverage and associated revenues on profit margins.
  • Reviewing operational capacity to ascertain whether providers can respond to the pent-up demand from the newly insured.
  • Handling the approximately 32 million people added to the list of those seeking primary medical care, typically provided by an internist or family-care physician.

A nationwide shortage of doctors—projected by the American Academy of Family Physicians to reach 40,000 primary-care physicians by 2020—may eventually mean long hours in the waiting rooms at busy clinics, less quality time available with doctors in examining rooms, and emergency rooms packed with patients who couldn’t find physicians elsewhere.

For the past 30 years, the United States has relied heavily on foreign-born and foreign-educated doctors to help meet the demand for health-care services. About a quarter of all physicians now practicing in the United States came from other countries. In 2007, more than 38% of U.S. family-medicine residents were international medical graduates, according to the American Academy of Family Physicians.

If medical tourism continues to grow at its current rate, recruiting foreign-born physicians and nursing staff to the United States will become more challenging. In 2006, the Association of American Medical Colleges recommended that medical schools increase their student enrollment 30% by 2015 in order to address the nation’s growing shortage of physicians.

No matter what shape the current health-care reform takes or how it is implemented, health-care costs in the United States will continue to increase and consume more of the public’s discretionary spending. By 2017, as many as 23 million Americans could be traveling internationally and spending almost $79 billion per year for medical/surgical care, according to a 2008 report from the Deloitte Center for Health Solutions.
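The two Deloitte figures together imply an average outlay per medical traveler. The division below is my own arithmetic from the article's numbers, not a figure the report itself states:

```python
# Rough per-patient spending implied by the projection: $79 billion
# spread across 23 million medical travelers per year (my calculation).
total_spend = 79e9   # dollars per year
travelers = 23e6     # people per year
per_traveler = total_spend / travelers
print(round(per_traveler))  # about 3435 dollars per traveler per year
```

An average on the order of a few thousand dollars per traveler is far below the U.S. price of most major procedures, which is consistent with the article's premise that patients go abroad precisely because the same care costs a fraction of the domestic price.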

Stated differently, if these predictions are correct, U.S. health-care providers stand to lose $79 billion per year to medical tourism. If the gap between the cost of major medical procedures performed in the United States and other countries continues to grow, low-cost providers will capture a larger share of the market for complex surgical procedures. Top U.S. health-services managers, policy makers, and physician and surgeon groups appear to be strategically unprepared for globalization in the health-care services industry and the resulting international competition.

When patients travel out of the country for surgical care and then return home, they need follow-up care. Their providers are then faced with such challenges as the unavailability of adequate medical records and the potential of complications after their patients’ overseas surgeries. The issue of adequately reimbursing the surgeons providing the follow-up care also remains unsolved.

Turning “Medical Tourism” into Globalized Health

The medical-tourism industry has introduced new business models to deal with global health-care challenges. These business models are bringing about significant changes in the way that governments around the world deal with financing hospitals, recruiting physicians, reimbursing health-care providers, and building adequate health-care systems for current and future generations.

It is time for policy makers in the United States and other developed countries to embrace medical tourism: It could save money by taking advantage of more-efficient health-care systems outside the country, while also enabling providers to learn from the best practices in this increasingly globalizing industry.

Globalization and medical tourism are changing the health-care landscape in industrialized and developing countries alike. A “globalized health system” of the future should include international networks of highly specialized, virtually connected providers, organized around mid-sized district hospitals that function as planning, management, and communication hubs to offer a variety of local, community-oriented, preventive, and curative services.

Medical tourism is largely a consumer-driven trend. In order to survive and thrive, the health-delivery industry must keep up with its consumers’ demands and needs.

About the Author

Prema Nakra is a professor of marketing at the School of Management, Marist College, Poughkeepsie, New York 12601. E-mail prema.nakra@marist.edu.

VISIONS: Imagineers in Search of the Future

By Gary Dehrer

In 1955, Walt Disney Imagineers achieved virtual reality with Disneyland. Eight Imagineering principles explain how they did it.

Here You Leave Today, and Enter the World of Yesterday, Tomorrow and Fantasy —Sign at the entrance to Disneyland

The opening of Disneyland in the middle of the twentieth century saw Walt Disney unleashing the forces of Imagineering to create a true “virtual reality” world of entertainment and adventure. When the first paying customers entered Disneyland on July 18, 1955, they walked through one of two tunnel passageways leading to Main Street, U.S.A. Many thought they were about to encounter an upgraded amusement park, but Walt Disney knew he had created something much more than that. From Town Square, guests looked down Main Street to see Sleeping Beauty Castle beckoning in the distance. This immediate first impression was designed to have guests feel like they were being absorbed into a cinematic experience, a sensation of knowing they had stepped from their everyday life into an extraordinary world.

Eight Principles of Imagineering

According to Disney historian Alex Wright and contributors to The Imagineering Field Guide to Disneyland, Imagineering consists of the following eight basic principles.

1. Area Development: “The interstitial spaces between the attractions, restaurants, and shops. This includes landscaping, architecture, propping, show elements, and special enhancements intended to expand the experience.”

2. Blue Sky: “The early stages in the idea-generation process when anything is possible. There are not yet any considerations taken into account that might rein in the creative process. At this point, the sky’s the limit!”

3. Brainstorm: “A gathering for the purpose of generating as many ideas as possible in the shortest time possible. We hold many brainstorming sessions at WDI [Walt Disney Imagineering], always looking for the best ideas.” The rules include remembering that there is no such thing as a bad idea and that nothing should stifle the flow of ideas.

4. Dark Ride: “A term often used to describe the charming Fantasyland attractions, among others, housed more or less completely inside a show building, which allows for greater isolation of show elements and light control, as needed.”

5. Elevation: “A drawing of a true frontal view of an object—usually a building—often drawn from multiple sides, eliminating the perspective that you would see in the real world, for clarity in the design and to lead construction activities.”

6. Kinetics: “Movement and motion in a scene that give it life and energy. This can come from moving vehicles, active signage, changes in lighting, special effects, or even hanging banners or flags that move as the wind blows.”

7. Plussing: “A word derived from Walt’s penchant for always trying to make an idea better. Imagineers are continually trying to plus work, even after it’s ‘finished.’”

8. Show: “Everything we put ‘onstage’ in a Disney park. Walt believed that everything we put out for the Guests in our parks was part of a big show, so much of our terminology originated in the show business world. With that in mind, ‘show’ becomes for us a very broad term that includes just about anything our Guests see, hear, smell, or come in contact with during their visit to any of our parks or resorts.”

Source: The Imagineering Field Guide to Disneyland by Alex Wright and the Imagineers (Disney Editions, 2008).

Virtual reality is most often defined as a simulated sensory experience made possible by computer software, creating a convincing, three-dimensional experience that—at its best—looks, feels, and sounds like the real thing. It can be likened to any virtual environment that someone can literally walk into and perceive as true to life. Another term for it is enhanced reality. While various applications of simulated virtual reality will be increasingly possible in the future, people actually experienced it at Disneyland in 1955, without the aid of computer-generated special effects or other advanced technology.

Ground was broken for the Disneyland Park in July 1954, with opening day set for only 12 months later. A frenzy of construction activity swept over the former Anaheim, California, orange grove. In just a few months, the outlines of now-familiar landmarks began to emerge, with Main Street, Sleeping Beauty Castle, the Jungle Rivers of the World, and the larger Rivers of America visible. The Tomorrowland site, which lagged behind in construction, lacked the clear identity of the other lands. The Imagineers, specialists using creativity and technical know-how, became frustrated and suggested that the Tomorrowland of 1986 be concealed behind an attractive fence until it was ready. Although Walt Disney agreed to this at first, he changed his mind, saying, “We’ll open the whole park.… Do the best you can with Tomorrowland, and we’ll fix it up after we open.”

Now Is the Time for the Future

At the entrance to the original 1955 Tomorrowland, the first attraction to come into view was a tall clock structure. This was the Clock of the World, which declared that now is the time for the future. This clock was intended to symbolize the incredible futuristic world about to be entered. Standing more than 17 feet tall, the clock looked much like a squeezed soda can topped with a gold-spiked, anodized-aluminum half-sphere sun and a stylized silver crescent Man in the Moon face. The blue tiles encircling its base depicted the vast universe.

Few passersby stopped to notice that the timepiece showed the time not only in Anaheim, California, but also around the world. Other than serving as a convenient place for parents to meet their kids, the clock rapidly faded into obscurity. The towering red-and-white TWA Rocket was a far more memorable symbol of Tomorrowland.

The Clock of the World is now gone; only some first-generation Disneylanders can recall it. It faithfully performed its timekeeping duties until 1966, when it was removed during the widespread demolition of the original 1955 Tomorrowland. The clock’s exit was captured in a photo showing the timepiece, minus its top ornamentation, being hauled away, the lower edge of its blue “universe” mosaic tiles broken off at the base.

Sometimes the future can be treated rather shabbily.—Gary Dehrer

Imagineering Realism And Fantasy

To realize his Disneyland vision, Walt Disney assembled a talented team of Imagineers, who would transform ideas and dreams into reality. Looking up at the second-floor windows along Disneyland’s Main Street, you can see painted signs with the names of people and their businesses. While the businesses are somewhat fictitious, the people are not. These are names of Imagineers—such as Harper Goff, Ken Anderson, Herb Ryman, and Sam McKim —and others who played significant roles in making Disneyland happen. Even Walt Disney’s father, Elias Disney, has a window with his name painted on it with “Contractor Est. 1895” listed.

Goff, with his background in designing movie sets, would lend a hand with Main Street and the Jungle Cruise ride. Anderson, trained as an architect and all-around designer, worked on many last-minute Disneyland projects. Ryman, a versatile artist who rendered the dazzling overview of Disneyland in 1953, would later help conceptualize New Orleans Square. McKim, a multitalented artist, rendered concept sketches for Disneyland and other Disney projects.

These and many other Imagineers to follow helped dream and bring Disneyland into existence.—Gary Dehrer

Imagineering Principles: How a Dream Is Built

Eight basic Imagineering principles were essential to the creation of Disneyland’s virtual reality: Area Development, Blue Sky, Brainstorm, Dark Ride, Elevation, Kinetics, Plussing, and Show [see sidebar, “Eight Principles of Imagineering”].

1. Area Development. The original 1955 master plan for Disneyland envisioned Main Street, U.S.A., as the initial experience funneling people to a central plaza hub and then drawing them into one of four adjoining lands: Adventureland, Frontierland, Fantasyland, and Tomorrowland. Creating an expansive and interactive 60-acre venue such as Disneyland was a monumental undertaking; with no other prior experience, the Imagineers were faced with a Herculean task.

In reviewing Walt Disney’s plan to have everyone enter Disneyland at Town Square, amusement-park experts questioned why there was only one entrance. They warned that this would create unnecessary congestion. They also questioned the expense of Town Square, especially since it was not going to produce any revenue.

Disney responded that this entry space was designed to create an essential first impression and special mood for his guests. All guests had to enter the Park the same way to share an identical illusion. Even the Main Street transportation, which included a fire wagon and horse-drawn trolleys, was not intended to make any money but to help add to the overall sensory experience. Town Square was to serve as the gateway to Disneyland’s virtual reality.

The dramatic, one-two punch of the Main Street environs with Sleeping Beauty Castle looming down the street convinced Disney that he was on the right track in lifting his guests to a higher entertainment experience.

Disney was able to use his experience in animation and films, especially his extraordinary storytelling skills, to add believability to his Park creation. He grasped the importance of quickly altering the perception and attitudes of guests entering Disneyland, thereby drawing them into a new reality. This is similar to what video-game designers would be doing decades later using an interactive electronic visual format.

2. Blue Sky. Disneyland was the first project for Walt Disney Imagineering (WDI), which began on December 16, 1952, as WED (Walter Elias Disney) Enterprises. Walt Disney, considered to be the foremost Imagineer of modern times, had built a major animation and film studio by the early 1950s. WED was to address all Disney activities outside the film studio, a mandate that would come to include Disney parks, resorts, special attractions at World’s Fairs, cruise ships, and other diverse entertainment activities. Disneyland offered the Imagineers an opportunity to demonstrate that anything is possible.

Early in the development of the Disneyland project, Walt Disney realized that creating his park illusion or “show” needed mechanical know-how as well as artistic expertise. He was building an organization that would bring people from across disciplines—engineering, animation, scriptwriting, and filmmaking—together to tackle specific projects. To make his “big dreams” a reality, he would have to enlist an army of Imagineers versed in an ever-widening range of disciplines. The Disneyland show needed not only people who could design and illustrate the dream, but also writers, architects, interior designers, engineers, lighting experts, graphic designers, set designers, craftsmen, sound technicians, landscapers, model makers, sculptors, special-effects technicians, master planners, researchers, managers, construction experts, and more.

Disneyland was first envisioned as a “place for people to find happiness and knowledge.” Here, people would not be watching a movie, but rather participating in it. They would be walking through a tunnel and emerging in another world. Even the landscaping and specially scaled architecture would add to the credibility of this dream place. Disney was intent on creating an illusion of time and space taking people away from their daily cares on a journey of imagination that was different from anything they had ever experienced before.

In explaining the secret of his success, Walt Disney had one word for it: curiosity. “There’s really no secret about our approach,” he said. “We keep moving forward—opening up new doors and doing new things—because we’re curious. And curiosity keeps leading us down new paths. We’re always exploring and experimenting.” And curiosity was forever wrapped in endless “Blue Sky” possibilities that begged to become realities.

3. Brainstorm. Brainstorming was used to shape and define the Park, as well as to solve practical problems. The collaborative-thinking process energized the designing of Disneyland as the Imagineers pursued ideas both good and bad. Brainstorming represents a continuous process where success is many times intermingled with failure, as evidenced by Disneyland’s 1955 opening. Two of Tomorrowland’s brightest ideas—the freeway Autopia and Rocket to the Moon—both experienced initial failure. Bob Gurr, a young Imagineer with a bachelor’s degree in industrial design but scant mechanical knowledge, was put in charge of the Autopia’s first fleet of cars. On opening day, the Autopia drew a good-sized crowd, but by closing time, half of the cars were disabled. By the end of the first week, only two cars were still moving.

Walt Disney came by to inspect the ravaged car fleet and said, “Well, we’ve got to do something.” Gurr responded that he didn’t have a place to repair the broken cars. The Park, by this point, was already built, so there was no place to construct a shed. Some outside-of-the-box thinking was in order. Half an hour later, a tractor showed up towing a small wooden shed. The driver asked Gurr, “Where do you want your damn garage?” An enhanced Autopia with its sporty cars and meandering freeway is still thriving in the twenty-first century.

4. Dark Ride. Of all the rides in Fantasyland, Walt Disney’s favorite was Peter Pan. He particularly appreciated its fly-through concept, with its tiny galleon cars suspended on ceiling cables allowing passengers to soar over landscapes. It was one ride that he rode over and over again. Peter Pan was an original 1955 dark ride housed completely inside of a building.

Dark rides formed the backbone of Fantasyland’s entertainment experience, as special effects could be used to further create illusion and magic. In 1965, John Hench, one of Disney’s first and longtime Imagineers, rendered a concept sketch that would evolve into Space Mountain, housing a dark-ride roller coaster. The Space Mountain ride was finally achieved in 1975 as Tomorrowland continued to be reworked. Hench said, “The ride is above all an experience of speed, enhanced by the controlled lighting and projected moving images. But it evokes such ideas as the mystery of outer space, the excitement of setting out on a journey, and the thrill of the unknown.”

The power of dark rides pulled guests deeper into the Park experience, whether it was riding with Mr. Toad or flying with Peter Pan. Guests would themselves pass through the live-action scenes and physically experience being part of the story. The rides and attractions were designed to work in harmony to produce a series of sensations. Arguably, the Park setting and attractions worked well to subliminally capture moods and influence attitudes that are so important in creating virtual reality. Fantasy would become real.

5. Elevation. Imagineering ushered in the concept of three-dimensional storytelling. Imagineers detailed the images and settings they felt important to telling stories through mood and sensation.

Even Main Street, U.S.A., had a story to tell. John Hench explains, “Mood is created mainly by the sensation of carefully orchestrated and intensified stimuli, of color, sound, form, and movement. Disneyland’s Main Street, U.S.A., which represents the main shopping street in an idealized American turn-of-the-century small town, is a good example of mood created by sensation that results in enhanced reality.”

Disney historian Jeff Kurtti notes, “While the first Imagineers had no formal training in urban design, the nature of the animator’s art made them natural systems architects. As storytellers, they ‘wrote’ the park, giving it consistency of narrative that is matched by few other public spaces.” As the architectural elevation drawings of Disneyland were made into real buildings, Walt Disney was achieving an unprecedented breakthrough in entertainment, causing people to directly experience and interact with a virtual world as stories and adventures came alive. The Imagineered elements of storytelling created a virtual-reality setting by placing Park guests in a fantasy, larger-than-life environment. Transferring imagination into blueprints and then into an actual park virtual experience was a singular achievement that foreshadowed a future world as yet unknown.

6. Kinetics. On an inspection tour of Disneyland when it was under construction, Walt Disney spent several hours riding around in a Jeep accompanied by several people, including Joe Fowler, his construction boss. Departing from Town Square, Disney and his small party drove over to Sleeping Beauty’s unfinished castle, where he described all of the attractions and how everything would look in full color. He was describing the kinetics of Fantasyland and how the carousel horses would be leaping.

Disney realized that transferring stories from film to real-life three dimensionality would be challenging but knew his guests could use their imaginations in the Park just as they did in movie theaters. Thus, the Park experience would become believable, allowing guests to trust and enjoy the attractions and illusions.

The Jeep visited all the lands, and everyone could feel the enthusiasm of Walt Disney. When the Jeep returned to the Park entrance, Disney looked back down an unpaved Main Street and remarked, “Don’t forget the biggest attraction isn’t here yet.” When asked what that was, he responded, “People. You fill this place with people, and you’ll really have a show.”

7. Plussing. Walt Disney said of Disneyland, “It’s something that will never be finished, something I can keep developing, keep ‘plussing’ and adding to. It’s alive.”

Disneyland has been compared to an animated movie, where main attractions are much like “key frames” in a film. Disney even went so far as to devise ways to fade from one Disneyland attraction and then focus guests into another, much as a film moves from scene to scene. John Hench said of Disney, “He would insist on changing the texture of the pavement at the threshold of each new land because, he said, ‘You can get information about a changing environment through the soles of your feet.’” Thus, through continuous plussing, the Disneyland experience would be both ordered and harmonious, not chaotic or confusing.

From opening day in 1955, Disneyland was meant to undergo continuous innovation and upgrading. Walt Disney and his Imagineers envisioned that Disneyland would embrace ongoing change and newly emerging technologies, while retaining its original footprint of a wondrous “magical kingdom.”

Imagineering plussing kept the park vision alive, with each “frame” being reedited to achieve the best real-life experience possible. Virtual reality is all about “plussing” an environment so that it is constantly being changed and improved.

8. Show. Crucial to the virtual-reality creation was its cast of characters. To further create his Disneyland illusion, Walt Disney instituted his Disneyland University, which would train Park personnel to not just do their jobs, but to perform as though they were onstage. Employees were expected to be happy and cheerful, further creating the feeling of an optimistic world. They would follow special protocols and a dress code to help guests feel comfortable about participating in the show.

Adding to this inclusive effect were Mickey and Minnie Mouse, along with other Disney cartoon characters, who would join guests in the Park. These costumed walk-around characters were meant to mingle with guests, posing for pictures but remaining silent. The physical impact of the walk-around characters enhanced the show and produced a convincing and compelling fantasy environment for adults and children alike.

Disneyland: A Living Virtual World And Portal into the Future

In 1955, Walt Disney had made Disneyland a living virtual reality. It would pull generations of people into Town Square to start altering their moods and sensations, and then down Main Street, U.S.A., and on into the Park, enabling them to escape into their imaginations through carefully Imagineered experiences, settings, stories, and adventures. Imagineering architecture, landscaping, and storytelling created not only a compelling “show,” but also a living virtual world.

Walt Disney, who died in 1966, had a family apartment over the Fire Station overlooking Town Square in Main Street, U.S.A., where he would sometimes stay overnight at the Park. Staff members knew that, when the front window lamp was on, their ever-watchful boss was on board. Few guests took notice of the apartment lamp, as there were many lights along Main Street. Today, if you look up to the second-floor Fire Station apartment, you realize that the lamp in the window behind the curtain is always on.

In assembling his team of Imagineers, Walt Disney had created an extension of himself that would pursue his dreams and the future long after he had died. Disneyland is a living virtual world that is a portal into an optimistic future. It is “another world” where everything is all right, people are innately good, and anything can be handled. In this sense, all of Disneyland is indeed a bright and hopeful Tomorrowland.

About the Author

Gary Dehrer is a retired principal of the San Bernardino City Unified School District (San Bernardino, California), a retired lieutenant colonel in the U.S. Army Reserves, author of Building a Championship Family (New Horizon Press, 2007), and a lifelong visitor to Disneyland. He resides in Yucaipa, California. E-mail gpdehrer@yahoo.com.

This article draws from his essay “Tomorrowland,” to be published in the 2011 World Future Society conference volume, Moving from Vision to Action.

The Disneyland Story: For Further Reading

Walt Disney: An American Original by Bob Thomas (Walt Disney Company, 1994). Thomas chronicles Disney’s keen attention to detail in perfecting an enhanced park experience, as with tree placement, the scale of the trains, and noise level of cars in his dark rides. He also observes that Walt Disney challenged those around him to go the extra mile in their work, but that this was not always well received. According to Thomas, Walt Disney viewed the Park as a living motion picture that could change and grow with its guests.

Walt Disney: The Triumph of the American Imagination by Neal Gabler (Vintage Books, 2006). Gabler’s candid assessment of Walt Disney offers an excellent companion to Bob Thomas’s insightful biography. Gabler feels that Disney saw the Park as an interlocking series of movie sets, whereby guests were to be absorbed as participants in a cinematic experience. He sees Disneyland as both transforming and therapeutic in helping people feel good about themselves and in love with life, and he sees Walt Disney, as the master animator, pulling his audience or guests into his own creation.

Walt Disney’s Imagineering Legends and the Genesis of the Disney Theme Park by Jeff Kurtti (Disney Editions, 2008). Kurtti’s book is an informative overview of the men and women who created the Disney theme-park concept. Beyond Disneyland’s “architecture of reassurance” is a carefully crafted encounter with virtual reality. Kurtti writes, “Nothing looks fake. Fabricated, yes; fake, no. Disneyland isn’t the mimicry of a thing. It’s a thing.” Once through the entry tunnels, you are quickly absorbed into Disney’s imagineered world of fantasy.

Designing Disney: Imagineering and the Art of the Show by John Hench (Disney Editions, 2008). Hench, a legendary Disney Imagineer, had a 65-year Disney career, from 1939 until his death at age 95 in 2004. In this book, he relates how the 1955 Disneyland was to be a venue for a succession of new attractions within the park’s original framework of Main Street and four lands. Hench suggests that enhanced simulated reality is achieved through carefully orchestrated and intensified color, sound, form, and movement.—Gary Dehrer

Tomorrow in Brief

The Broccoli Plan

Nutritionists tell us that broccoli is one of the healthiest foods for us, but this super veggie must be shipped from far away to reach markets where it isn’t so easily grown. For instance, 90% of broccoli sold on the U.S. Eastern Seaboard is shipped from California and Mexico—with less than desirable environmental impacts.

To solve this problem, researchers led by Cornell University horticulturalist Thomas Bjorkman are developing new strains of broccoli that can tolerate the more-humid East Coast climate. Once the right varieties have been developed, the project will also train local growers and marketers, organizing them into production networks.

With USDA support, the team aims to develop a $100 million broccoli industry on the East Coast over the next 10 years.

Source: Cornell University, www.cornell.edu.

Eye Exams via Smart Phones

Need an eye exam? There’s an app for that.

A $2 smart-phone application could tell you in minutes what prescription eyeglasses you need. Developed by the MIT Media Lab’s Camera Culture research group, the NETRA (Near-Eye Tool for Refractive Assessment) combines software with a small, lightweight plastic viewfinder that clips onto your smart phone.

Within minutes, NETRA can diagnose whether someone is nearsighted or farsighted, or suffers from astigmatism or the vision loss associated with aging. The researchers claim that NETRA is safe, fast, accurate, and easy to use.

Currently being field-tested, the device is intended primarily for use in poorer communities, such as those in the developing world, that lack access to proper eye care. While eyeglasses themselves can be inexpensive, the testing equipment up until now has been fairly cost-prohibitive, especially for those in underdeveloped areas.

Source: MIT Media Lab, www.media.mit.edu/press/netra.

Catching Up With the Stars

The Hubble Space Telescope has enormously accelerated astronomers’ ability to detect star movement, from 50 years with ground-based telescopes to just a few years.

It is Hubble’s razor-sharp visual acuity that enables the measurement of the stars’ motion, so predicting their future movement has likewise been sped up: Astronomers at the Space Telescope Science Institute in Baltimore have used Hubble images taken from 2002 to 2006 to simulate the stars’ projected migration over the next 10,000 years.

Source: Hubble Site, http://hubblesite.org.

Artificial Experimenter

Software that can take over the routine aspects of experimentation could help reduce its costs.

An “artificial experimenter” developed at Britain’s University of Southampton autonomously analyzes a project’s data, builds hypotheses, and chooses the experiments to perform, according to one of the developers, PhD student Chris Lovell of the School of Electronics and Computer Science. The program will also help detect anomalies in error-prone areas such as biological experimentation.

The next step is to join the AI software with automated platforms—labs on a chip—to perform the experiments requested by the artificial experimenter, using fewer resources in the process.
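The analyze-hypothesize-choose loop described above can be sketched in a few lines. This is purely an illustrative toy, not the Southampton software: the response function, the fixed candidate models, and the disagreement-based selection rule are all assumptions standing in for the real system, which fits hypotheses to the accumulated data.

```python
import random

def run_experiment(x):
    # Stand-in for a real (noisy) lab measurement.
    return x * x + random.gauss(0, 0.1)

def fit_hypotheses(data):
    # Toy: fixed candidate models of the response; a real artificial
    # experimenter would build and refit hypotheses from the data.
    return [lambda x: x * x, lambda x: 2 * x, lambda x: x + 1]

def choose_next(candidates, hypotheses):
    # Pick the input where the candidate models disagree most,
    # i.e., where an experiment is most informative.
    def disagreement(x):
        preds = [h(x) for h in hypotheses]
        mean = sum(preds) / len(preds)
        return sum((p - mean) ** 2 for p in preds)
    return max(candidates, key=disagreement)

data = []
candidates = [0.0, 0.5, 1.0, 1.5, 2.0]
for _ in range(3):  # budget of three experiments
    hypotheses = fit_hypotheses(data)
    x = choose_next(candidates, hypotheses)
    candidates.remove(x)
    data.append((x, run_experiment(x)))

print([x for x, _ in data])
```

Each iteration spends one experiment where the surviving hypotheses conflict most, which is how such systems squeeze more information out of fewer (and costly) lab runs.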

Source: University of Southampton, School of Electronics and Computer Science, www.ecs.soton.ac.uk.

WordBuzz: Weisure

Mobility, connectedness, and competitiveness have long been blurring the boundaries between activities performed in the workplace and everywhere else. Now, a term has been coined to define these omnitasking hours: weisure (work and leisure).

Attributed to Dalton Conley’s book Elsewhere, U.S.A. (Pantheon, 2009), the term was soon popularized by CNN in a story entitled “Welcome to the ‘weisure’ lifestyle.”

Comment: We are hoping someone can still come up with a less-unwieldy coinage (something less frighteningly similar to seizure). Please send your suggestions for renaming this concept of time-use-blurring to letters@wfs.org.

News from WFS: Renewal at THE FUTURIST Magazine

By Edward Cornish, Founding Editor

For 44 years, I have had the privilege of serving as Editor of THE FUTURIST magazine. I would like to thank all of you for your support during our journey along the frontiers of the future. It has been a thrilling ride, but the time has come for me to retire as Editor and assume a new role at THE FUTURIST.

So, starting with this issue, your editor will be Cynthia G. Wagner, who has served as Managing Editor of THE FUTURIST since 1992.

In my new role, I plan to act as a futurist-in-residence. After thinking and writing about the future for more than four decades, I believe I have learned some things about foresight and I would like to pass them on to readers of THE FUTURIST through the articles I plan to write.

The study of the future is a pioneering field that is still developing. The World Future Society today is, I believe, only a foreshadowing of what it could become in the future. As futurists, we can make major contributions to the improvement of human life around the world. This is an awe-inspiring challenge but one worthy of our best efforts.

Our New Editor

Cindy Wagner came to THE FUTURIST as an editorial assistant in 1981. She has a bachelor’s degree in English from the prestigious Grinnell College in Iowa and a master’s degree in communications, specializing in magazine journalism, from Syracuse University’s S. I. Newhouse School of Public Communications. Right from the start, she proved to be a highly capable editor and quickly developed into an outstanding one. When it came time to recommend a successor I could think of no one better qualified than Cindy to replace me as Editor.

Timothy C. Mack, president of the World Future Society, shares my enthusiasm for Cindy and has given her his full support.

Adding further to my confidence in the future of THE FUTURIST is the fact that we have in the last six years added three talented journalists to the staff. They are senior editor Patrick Tucker, who also serves as the Society’s director of communications, and staff editors Aaron M. Cohen and Rick Docksai, who also work diligently on the Society’s journal for professional members, World Future Review. In addition, we have on staff Lisa Mathias, a highly talented artist, as our Art Director.

All in all, THE FUTURIST has never had such a strong editorial staff, so I have never been more confident of the future of our magazine. We hope that you will continue to share our journey into the future.

THE FUTURIST versus the World Future Society

Some readers may wonder, “Which came first—THE FUTURIST or the World Future Society?” The fact is that they were born almost simultaneously and either one can claim priority.

President Ronald Reagan meeting futurists at the White House, February 1, 1985

Here’s why. Back in 1966, I prepared a six-page newsletter providing news about new scientific discoveries and the ideas that scientists and other thoughtful people were expressing about the future. I decided to call this newsletter THE FUTURIST and sent copies of it to people I thought might be interested. These people included comprehensive designer Buckminster Fuller, physicist Herman Kahn (author of On Thermonuclear War and other prescient works), science writer Arthur C. Clarke, science-fiction writer Isaac Asimov, and Glenn T. Seaborg, the Nobel Prize–winning discoverer of plutonium.

In my newsletter, I invited the recipients to join me in establishing an organization devoted to the study of the future. To my surprise and delight, a number of well-known people actually responded to my mailing with a keen interest in what I was doing.

Furthermore, a few of the respondents lived in the Washington, D.C., area where I lived, so I could easily invite them to lunch and try to enlist their support for the project. Happily, several people responded and one, Charles W. Williams, said he could arrange space for a meeting in his suite at the National Science Foundation. This was perfect: We would be born in one of the world’s most prestigious scientific organizations. That fact, I hoped, would counter the view that people interested in the future were exclusively science-fiction fans or perhaps something weird.

As our plans for the proposed World Future Society began to take shape, we started preparing for its official launch, but we immediately encountered a big problem: We needed money if we were going to do anything. So we decided we would have to ask members of our new Society to pay modest dues and also to pay for their own lunches at our first meeting. Fortunately, a number of attendees were willing to do so.

This policy made the Society economically viable, though money would remain, even to this day, a serious limitation on what the Society could do.

Slowly and erratically, we received membership applications and dues income while avoiding every possible expense by doing almost everything we could by ourselves. We pressed family members and colleagues into providing free labor for humble projects such as typing and stuffing envelopes. My wife, Sally, and our neighbors, friends, and children all were enlisted into doing Society chores.

So with a little money and lots of free labor, the newborn World Future Society—and its modest newsletter—could just barely manage to pay the bills. The Society’s membership gradually grew; though lack of money continued to dog us, we were able to survive and even grow.

To boost revenue, I decided we needed to offer members something more than just a crudely printed newsletter. So I decided to expand the newsletter, despite knowing nothing about typesetting, layout, art, and other skills needed for magazine publishing, and despite still having almost no money to pay suppliers for these services. However, I managed to recruit an unemployed friend who had had some experience in publishing, and with his help we produced the first issue of THE FUTURIST as a magazine (the March-April 1967 issue).

To our great joy, the response to this first issue was very encouraging and allowed us to persevere. We continued to improve the magazine and keep the World Future Society alive, but it was never easy.

Today, the members of the Society can take pride in what we have accomplished so far. We have come a long way, but I believe we have enormous opportunities to develop into a far stronger Society with an increasingly influential magazine that can help the people of the world toward a far better future than any known in the past.

To read more about the birth of WFS and THE FUTURIST, go to www.wfs.org/content/search-for-foresight.

The American Dream Moves Downtown

Revitalizing urban life with both nature and culture may benefit communities and citizens alike.

By Roger L. Kemp

In mid-twentieth-century America, the dream was to raise children in a single-family house with a yard, away from the traffic and noise in downtown areas. And the U.S. highway system stretched out to new residential subdivisions in the suburbs, as homes added more and more garages for everyone’s cars.

Downtown Trends

Major trends now under way in U.S. downtowns include:

  • Restoring and enhancing nature, such as ponds, parks, and even urban farms.
  • Integrating commercial and residential functions in multistory buildings.
  • Making public transit available, usually light-rail systems.
  • Restoring the public infrastructure to favor people over cars.
  • Combining landscaping with the restoration of all aspects of the public infrastructure.
  • Converting surface parking lots into parks, gardens, and open spaces.
  • Attracting culture, the arts, and entertainment facilities.
  • Attracting educational institutions and nonprofit organizations.
  • Attracting or keeping smaller specialized businesses downtown while bigger businesses relocate in malls or “big-box” sites.
  • Supporting ethnic and niche stores, such as markets, delicatessens, bakeries, and restaurants.
  • Providing a sense of “public place” in the core of downtowns, so that shared spaces feel genuinely open to all.

The mid-century flight to the suburbs is now reversing. The children born in the middle of the twentieth century are now grown, and older parents are relocating to more-convenient downtown areas. Young professionals focusing on their respective jobs, too, head toward inner-city areas, postponing the American dream of starting a family and moving to the suburbs until later in life. Another group of urban dwellers consists of those who would like to live without needing a vehicle. Hence, a new type of residential development has emerged around public transit stations, called Transit-Oriented Developments. The market for condominiums and townhouses located next to public light-rail transit systems has developed rapidly in recent decades.

the Riverwalk District in downtown Reno, Nevada

Now the challenge for communities is to make downtowns more attractive, more livable. Government planners at the state and local levels need to advocate for changes that will benefit downtown areas. One model is the Lower East Side of New York City a century ago, where individuals and families lived in multistory residential structures that featured an assortment of commercial businesses located on the ground floor. All of the restaurants, markets, and other types of commercial activity took place at street level.

Those ground-level commercial businesses also benefit from having a built-in market of customers living right above them. Rezoning downtowns to allow more residential units above ground-level businesses is the wave of the future. If you build them, people will come, especially if there’s public transit in the area.

In addition to such mixed-use zoning, blending the commercial and the residential, thriving communities should increasingly bring arts, entertainment, and culture back to downtown areas. Some cities have used libraries and museums as tools to stimulate economic development, while others are trying to lure educational institutions and nonprofit organizations back downtown.

There is also a big trend to preserve what’s left of nature in urban environments, restoring what’s been removed over the decades. Cities are expanding parks, wetlands, and waterways; they’re enhancing pedestrian access and movement by narrowing the streets and widening walkways, bikeways, plazas, and other public areas, reversing the car-centric planning of the previous century. This trend, too, has facilitated the movement of people back to downtown areas.

When successful, these efforts stimulate the local economy and attract the type of businesses, educational institutions, and nonprofit organizations that would benefit from revitalized downtown areas. Additional economic-development incentives would help attract desirable private, educational, and nonprofit institutions to downtowns, but selling local public officials on such incentives requires a clear demonstration of their reasonableness and long-term benefits to the taxpayers and all of the citizens within the community. A nice downtown should serve as a great public place not only for those who live there, but also for other citizens in the area who come to work, shop, eat, or participate in cultural attractions.

Prudent economic-development incentives that promote downtown renewal are a wise way to generate revenues without raising taxes and can assist in balancing a community’s budget. Most cities evolved piecemeal over the years and now need to be retrofitted and redesigned for the future.

Planning and zoning regulations should be in place to accommodate mixed land-uses, infill, and redevelopment projects. Call it New Urbanism, Sustainability, Pedestrian Cities, Healthy Cities, Inner-City Renewal, or the Green Cities Movement—these practices can be applied to projects of all sizes to promote livability in a single building, on a full block, in a neighborhood, or even an entire community.

Roger L. Kemp is an adjunct professor in the Public Administration Program, University of New Haven, and in the Urban Studies Program, Southern Connecticut State University. E-mail rlkbsr@snet.net.

Hackers of the World Unite

Crowd-sourced attacks on networks are increasingly destructive.

Computer networks have been on guard for decades against individuals trying to “hack” them. But networks now face a larger danger from mass attacks, warns IT security analyst Richard Stiennon.

“The new trend is to mobilize forces over the Internet to engage in the equivalent of mass online protests,” writes Stiennon in his latest book, Surviving Cyberwar.

Political groups, organized-crime syndicates, and some governments launch distributed denial of service (DDoS) attacks, which direct hundreds, thousands, or millions of computers to simultaneously strike a single Web site. The site’s servers overload and shut down.

In 2007, when Estonia enacted laws that some Russian-Estonians opposed, denial of service attacks from some 80,000 IP addresses based in Russia sabotaged the Web sites of Estonian government agencies, banks, and telecommunications companies.

Stiennon blames many attacks on Nashi, a 120,000-member Russian nationalist youth association. Some Nashi operatives distribute the attack instructions and encourage members to use them against designated targets.

“They share a political mind and have the computer skills to join a call for an attack,” Stiennon writes.

In an exclusive interview with THE FUTURIST, Nashi member Alexi Kanskakof claims that Russian DDoS attacks have caused major economic disruption in Ukraine and may have contributed to Moscow-favored candidate Viktor Yanukovych winning Ukraine’s presidential election in 2010. Also, during Russia’s 2008 war against Georgia, Russian hackers co-opted Georgian television stations to run pro-Russian broadcasts.

“From these examples, one can see just how effective Russian cyberattacks can be at blackmailing the citizens of other nations or causing economic chaos,” says Kanskakof.

He points out that DDoS attacks carry few risks for the perpetrators. A Nashi member could attack the Web site of a business in Ukraine, for example, without ever leaving Russia. “Even if the Ukrainian police forces found out it was you who did the cyberattack, there is really nothing they can do about it.”

Of course, Russians are not the only ones who may be using this weapon. Such attacks are believed to have been deployed to thwart WikiLeaks in its attempt to distribute “anonymously submitted” diplomatic cables embarrassing to the U.S. government and its global partners. And DDoS attacks were also allegedly launched by WikiLeaks supporters against its “enemies.”

Businesses and government agencies worldwide are at risk, according to Daniel Gonzalez, director of information systems for the Software & Information Industry Association. He says that, while some denial of service attacks are orchestrated by masses of volunteers, others are carried out by “botnets,” networks of computers that have been infected with malicious software and can be commandeered without their owners’ knowledge.

“With botnets, what they’re doing is building a network of all these infected computers that they can use for their own purposes,” says Gonzalez. He adds that many organized-crime groups create botnets and sell them to buyers on every continent.

Social-networking sites provide huge opportunities for botnets. These sites have few spam filters, according to Gonzalez, so hackers increasingly use them to distribute malware.

“Someone I know opened up a Facebook message. It looked like it was coming from one of their Facebook friends. It said, ‘Hey, I found this photo of you.’ It turned out it wasn’t a photo. It was installing a virus,” says Gonzalez.

The simplest protections are normal precautions that many people fail to take, such as keeping software up to date, notes Stiennon. He also urges Web sites to maintain independent platforms rather than share servers. That way, if one site suffers a DDoS attack, other sites won’t fail, too.—Rick Docksai

Sources: Richard Stiennon, author of Surviving Cyberwar (Government Institutes, 2010), IT-Harvest, www.it-harvest.com.

Alexi Kanskakof, member of Nashi, private communications.

Daniel Gonzalez, Software and Information Industry Association, www.siia.net.

Alarms Ring as Wedding Bells Do Not

Trends in postponed marriages and births spark debate on economy’s role.

Americans are waiting longer to marry, and household size declined between 2000 and 2010, according to the U.S. Census Bureau. Marriage is also declining among young people, the Bureau reports. The media have been quick to point to the 2008 recession as the key cause.

“The United States crossed an important marital threshold in 2009, with the number of young adults who have never married surpassing, for the first time in more than a century, the number who were married,” Erik Eckholm of The New York Times reported. “A long-term decline in marriage accelerated during the severe recession, according to new data from the Census Bureau, with more couples postponing marriage and often choosing to cohabit without tying the knot,” he concluded.

Meanwhile, the U.S. National Center for Health Statistics has reported a 2.7% drop in fertility from 2008 to 2009, leading Marilynn Marchione of the Associated Press to comment, “The U.S. birth rate has dropped for the second year in a row, and experts think the wrenching recession led many people to put off having children. The 2009 birth rate also set a record: lowest in a century.”

But, while the fertility drop is recent, it’s actually linked to a longer-term trend. The Pew Research Center reports that the share of American women ending their childbearing years without giving birth has doubled since the 1970s, from 1 in 10 to 1 in 5. Childlessness rose among women without a high school diploma, which could be attributable to a bad economy. Another plausible explanation is the success of public information campaigns urging people to delay childbirth until after high school.

Meanwhile, rates of childlessness declined by 32% for women with doctorate or professional degrees. But this group is still the least likely to have a child, according to Pew.

A few researchers have cautioned that, while the economy may have played a role in some people waiting longer to wed or bear children, it is still too early to extrapolate a clear causal link between the bad economic environment of 2008 and 2009 and the recent marriage and childbearing statistics. Census Director Robert Groves, writing on his blog, noted, “Many factors can affect the estimates of the number and proportion of people currently married. For example, declining numbers could reflect the passing of members of an older generation that had higher marriage rates.”

Pew recently reported that young adults (under 30) with a college degree had become more likely to marry than their peers without a degree, representing a reversal in favor of marriage among that group. Despite this, the overall marriage rate was still down among both degreed and non-degreed young adults.

Pew points to what researchers call a clear “marriage gap” along economic lines. “Those in this less-advantaged group are as likely as others to want to marry, but they place a higher premium on economic security as a condition for marriage.”

In sum, data from the last 10 years show that more Americans are now waiting to marry than a few years ago, though the delay is less pronounced among college-educated Americans than among those without degrees. A drop-off in fertility occurred in 2008–2009 and was more pronounced among non-college-educated women than among women with advanced degrees.

The state of the U.S. economy may have been a factor in the drop-off in fertility, and an income-based “marriage gap” may be emerging. However, these trends could turn out to be a blip. A longer-term decline in marriage is seen in decades-old trends of fewer weddings among twenty-somethings and rising cohabitation arrangements in lieu of tying the knot.—Patrick Tucker

Sources: The U.S. Census Bureau, www.census.gov.

Stephanie Ventura, the Centers for Disease Control and Prevention, www.cdc.gov.

Surviving the Great Recession’s Aftershocks

By Patrick Tucker

Too much wealth in the hands of too few will result in less for all, warns a former U.S. labor secretary, who offers a prescription for rebalancing wealth.

Aftershock: The Next Economy and America’s Future by Robert B. Reich. Knopf. 2010. 192 pages. $25.

The inequality of wealth in the United States will result in a stagnant economy and political turmoil by the year 2020, argues public-policy scholar and former U.S. Labor Secretary Robert B. Reich in Aftershock. Millions of deeply indebted Americans will embrace isolationism, reject both big government and big business, and sever America’s ties with the rest of the world, he predicts.

To illustrate the size and scope of this disaster, Reich sets up a credible and horrifying scenario: The year is 2020. The recently elected president, Margaret Jones of the Independence Party, is about to set forth on a legislative agenda reflecting the frustrations of the broad, outsider constituency that elected her. Her objectives: a freeze on legal immigration and the swift deportation of all illegal immigrants; increased tariffs on foreign goods; prohibition against foreign investment; withdrawal from the World Bank, the United Nations, and other international organizations; and a default on the U.S. debt to China.

The results are immediate.

“On November 4, the day after Election Day, the Dow Jones Industrial Average drops 50 percent in an unprecedented volume of trading,” writes Reich. “The dollar plummets 30 percent against a weighted average of other currencies. Wall Street is in a panic. Banks close. Business leaders predict economic calamity. Mainstream pollsters, pundits, and political consultants fill the airwaves with expressions of shock and horror. Over and over again, they ask: How could this have happened?”

This aftershock, says Reich, is a direct result of Americans failing to learn the lessons of the Great Depression, thus setting the country up on a course for yet another economic crisis. The most important of these lessons is that too much money resting in the hands of too few people cannot grow an economy. What’s needed is an orderly division of income spread across lower, middle, and upper classes, he argues. When income (hence, wealth) is too concentrated among elites, the economy atrophies and declines.

It’s a classic Keynesian argument that would ring shrill and tinny if we didn’t live in such Dickensian times. Consider that, prior to the Great Recession of 2008, income and wealth inequality in the United States were higher than at any time in the recent past other than just before the Great Depression, with the top 1%—those with incomes of more than $380,000 per year—taking in roughly 23% of the nation’s income. Median wages for workers have been stagnant since the 1970s, at about $45,000 a year, despite the fact that the economy itself is much larger than it was three decades ago. Those gains mostly went to those at the top.

This present situation is, of course, not without historic precedent. In the 1700s, wealth inequality in the American colonies was similar to that of the United States today. The climate was particularly wintry in Boston, where the top 5% of the population controlled 25% of the wealth in the 1720s (this would become 50% by 1770). Too often we forget that the decades leading up to the American Revolution were marked by the burning of rich merchants’ shops, occasional riots, and massive resentment over the issue of debt and wealth inequality, as chronicled by the late historian Howard Zinn in A People’s History of the United States: 1492-Present.

Today’s wealth inequality is a moral failing, says Reich, but it’s also an operational malfunction at the root of many of America’s other problems. An economy that is growing across all income levels encourages people to buy more things like new cars, consumer electronics, bachelor’s degrees, bigger houses, and the like. Instead, over the last two decades, a larger portion of the wealth went to a smaller group; as a result, Americans were forced to resort to a number of coping mechanisms to continue to consume at ever higher levels.

The first of these coping mechanisms was the two-income household. In the 1970s, the mass entry of women into the workforce increased household income, but only up to a point. Over the last decades, those economic gains have been eaten up by such things as the costs of child care.

The second coping mechanism that Americans employed to mitigate stagnant wages was longer working hours. This also worked well until, by the mid-2000s, Americans were putting in 500 more hours—about 12 more weeks—of paid work a year than they had in 1970.

Finally, Americans resorted to saving less and borrowing more in order to continue consuming at ever higher levels. Reich points out that average household debt was 138% of household income in 2007, up from a manageable 55% in the 1960s. This represents the largest gap since the Great Depression. Much of that debt was tied up in home loans that people would never be able to pay off.

The question becomes, Does voluminous spending by the well-funded few necessarily lead to reckless spending on the part of the many? Reich argues that it does. There is some recent independent research to back him up on this. In an October 2010 paper titled “Expenditure Cascades,” Robert H. Frank of Cornell University, Adam Seth Levine of Vanderbilt University, and Oege Dijk of the European University Institute show that “changes in one group’s spending shift the frame of reference that defines consumption standards for others just below them on the income scale....”

What of the gainers, the top 10% who saw unprecedented wealth and income increases? They didn’t fare as well as you might expect. With too much capital to ever spend efficiently, many of them invested in a series of asset bubbles through unscrupulous Wall Street intermediaries, with predictably lackluster results.

The battle against falling middle-class wages is one that Reich has been fighting for decades, since serving as labor secretary in the Clinton administration. He acknowledges that, even in those instances when he’s had the ear of the president (he also served briefly on the Obama administration team), he hasn’t had much success in implementing the sorts of structural changes that would set the nation’s distribution of income on a more equitable path.

“We in the Clinton administration tinkered. We raised the minimum wage.… We offered students from poor families access to college and expanded a refundable tax credit for low-income workers.… All these steps were helpful but frustratingly small in light of the larger backward lunge.”

Reich lays out several proposals—either reasonable or radical depending on your point of view—to correct the imbalance of wealth in the next decade:

  • A reverse income tax. The government would put extra money into the paychecks of low wage earners and cut taxes on middle class Americans (those earning less than $90,000 per year). The policy would be modeled after the Earned Income Tax Credit but would be more ambitious in reach. Reich speculates that the cost to the government would be about $600 billion per year.
  • A carbon tax collected against energy companies. Reich estimates that, if set at $35 per metric ton of CO2, this tax would raise about as much as the reverse income tax (wage supplement) would cost—around $600 billion.
  • A one-time “severance tax” levied against employers who lay off long-term workers, equal to 75% of a worker’s yearly salary.
  • Federal subsidization of less-profitable but socially valuable college majors. Public universities, under a Reich plan, would be free, and loans for private schools would be available at low cost. Upon graduating, a student who took such a loan would pay about 10% of his or her income on the loan for 10 years. After that, the loan would be considered fully paid. “This way,” says Reich, “graduates who pursue low-income occupations such as social work, teaching, or legal services would be subsidized by graduates who pursue high-income occupations including business, finance, and corporate law.”

The effect of these proposals, with the exception of the college funding one, would be to transfer investing power away from the private sector (rich people and their money advisors) and put it in the hands of the federal government, which would then distribute those funds to the people to buy consumer goods.

There’s a libertarian argument against this, but also a practical one. As Reich himself points out, a rising share of consumer spending now goes abroad, as more Americans purchase products made in other countries. Taxing U.S. energy companies—at a time when a larger than ever portion of the fuel the country uses comes from Canada, Mexico, and Saudi Arabia—in order to pay Americans to purchase electronics from Malaysia, toys from China, and wine from Spain seems unlikely to have a positive effect on national GDP.

A better use of such money might be infrastructure or public works, which would put more money in the hands of Americans. Reich acknowledges the dilapidated state of the country’s roads and bridges, but he doesn’t propose a single large-scale public infrastructure project. In fact, he derides the 1990s as a time when too much private investment capital resulted in “more miles of fiber-optic cable than could ever be profitable.” The 1990s telecom asset bubble was certainly severe, but Reich disregards or ignores the types of services that can be offered over the Internet once bandwidth limitations are removed. Perhaps, while serving in the White House, he never experienced the frustration of a slow download.

The idea of a company paying severance costs of 75% of a terminated employee’s yearly salary—in essence paying the “social costs” for outsourcing—is a radical one for the United States. Businesses would argue that such a measure would crimp their flexibility and that the ability to hire and fire freely helps keep companies lean, nimble, and competitive. They might say that, faced with a 75% severance requirement, firing anybody would be too difficult and American companies would come to resemble Japanese companies during the 1990s—the so-called “lost decade,” when every employee was guaranteed a high degree of job security regardless of whether or not he (it was mostly men) helped or hindered the overall corporation. The suggestion that companies be penalized for firing people reads like an open pander to labor interests, not a viable revenue-generating strategy. A straight tax hike on corporate entities, regardless of hiring or firing behavior, would seem to meet the same objective with fewer downsides.

The principal argument against Reich is that his proposals are politically untenable in an environment where any effort to raise taxes on any American, for any reason, meets with nearly insurmountable resistance from the Right and passionate charges of socialism on the floor of the House of Representatives. The 2010 election saw a number of Tea Party candidates rise to power in some very poor states like Kentucky—places that would benefit greatly from the wealth-redistributing policies that Reich proposes. How did these candidates win? They succeeded by promising to thwart any tax increase for the very wealthy, no matter what the cost; they promised to halt any remaining “bail-out” funds from being spent; and they vowed to undo the recently enacted health-care law and its provisions to expand health coverage to more Americans.

It’s one thing to argue that the country, running a record deficit, cannot afford such policies. It is another thing entirely to suggest that such policies are not in the interests of the growing poor. Yet, people in the first district of West Virginia and the first district of Arkansas voted against their own interests.

What does this show? Perhaps the worst enemy of the American middle class is not the most wealthy 1%, but the mistrustful and ever-angrier middle class itself, all of which adds to the timeliness and value of Reich’s achievement with this important book.

About the Reviewer

Patrick Tucker is the senior editor of THE FUTURIST magazine and the director of communications for the World Future Society.

What Hath Hawking Wrought?

By Edward Cornish

Scientists show how gravitational forces might create universes spontaneously, with no divine intervention required.

The Grand Design by Stephen Hawking and Leonard Mlodinow. Bantam Books. 2010. 119 pages. Color illustrations, including original art by Peter Bollinger. $28.

In their ambitious new book, The Grand Design, theoretical physicist Stephen Hawking and his collaborator, Caltech physicist Leonard Mlodinow, offer scientific explanations for many of the mysteries of the universe.

Why do we exist?

Why is there something rather than nothing?

Why do we live under this particular set of natural laws and not some other?

Philosophers have long struggled with such questions and typically ended by invoking God. But Hawking and Mlodinow insist on a strictly scientific view, commenting, “It is reasonable to ask who or what created the universe, but if the answer is God, then the question has merely been deflected to that of who created God. In this view it is accepted that some entity exists that needs no creator, and that entity is called God. This is known as the first-cause argument for the existence of God. We claim, however, that it is possible to answer these questions purely within the realm of science, and without invoking any divine beings.”

To warm up for Hawking’s expansive thinking, we might begin with his assertion that our universe is merely one of a vast assemblage of universes, the multiverse, an idea that follows from the theoretical framework he favors, M-theory.

“Our universe seems to be one of many, each with different laws,” Hawking and Mlodinow assert. “The Multiverse Theory is the only theory that has all the properties we think the final theory ought to have.”

According to the authors, a whole universe can be created out of nothing because gravity shapes space and time.

“Gravity allows space-time to be locally stable but globally unstable,” they write. “On the scale of the entire universe, the positive energy of matter can be balanced by the negative gravitational energy, and so there is no restriction on the creation of whole universes. Because there is a law like gravity, a universe can and will create itself from nothing. Spontaneous creation is the reason the universe exists, why we exist. It is not necessary to invoke God….”

Hawking and Mlodinow write in a friendly, engaging style, but the average reader may still struggle with their mind-blowing ideas. Never mind: It’s worth making the effort. Most of us don’t stretch our minds nearly enough.

Readers will certainly expand their thinking by reading The Grand Design, but they may have difficulty finding immediate practical use for it. But let us be patient: Practical uses may well come in the future. In science, theory tends to precede practical applications. Benjamin Franklin’s theorizing about electricity (along with his experiments) led eventually to the huge electric-power industry that we know today. So sometime in the future, Hawking’s ideas may well reshape the world economy and other aspects of our world that we have yet to imagine.

About the Reviewer

Edward Cornish is the founding editor of THE FUTURIST magazine and founder of the World Future Society.

Tools for Problem Solving

By Rick Docksai

In order to meet the challenges ahead, we’ll need less control, more distributed action, and less resistance to change.

2030: Technology That Will Change the World by Rutger van Santen, Djan Khoe, and Bram Vermeer. Oxford University Press. 2010. 295 pages. $29.95.

Technology could contribute to solving many of the world’s problems, ranging from resource shortages to financial crises, state three Dutch scientists in 2030: Technology That Will Change the World. Citing interviews with researchers from health, information technology, energy, foreign policy, and other fields, they identify an array of innovations that could improve life across the globe.

The authors—chemist Rutger van Santen and electro-optical communication professor Djan Khoe, both of Eindhoven University, with science journalist Bram Vermeer—and the experts they cite express agreement on several fundamentals: that the world’s global systems are growing more interconnected, that information systems must become more adept at gathering information from the ground level and rapidly responding to it, and that humans must overcome their reluctance to change.

“We need to pursue more flexible solutions so that technology can serve us more effectively in a fast-changing environment. And we must also come to grips with complexity itself,” the authors write.

Some promising research areas, according to the authors, include the following:

  • Water management. Droughts worldwide will worsen in the absence of new methods to reduce stress on water systems. Potential remedies include drought-resistant crops, improved irrigation, and water purification and desalination systems that operate at the neighborhood or household scale.
  • Energy efficiency. Humans already consume the earth’s resources more than 1.5 times faster than the planet can replenish them—and the deficit is widening. Some hope, however, lies in more energy-efficient buildings and household appliances.
    Prototype alternative-energy systems also show promise. Solar cells made with conducting polymers would be lighter, more flexible, and easier to manufacture than present-day solar cells, for example. Nuclear breeder reactors would provide massive amounts of energy with minimal nuclear waste. And hydrogen could be a practical fuel if combined with other, denser gases.
  • Medicine. In the future, medical scanning software will process more images in less time. Also, the scanners will analyze the images and advise physicians on follow-up tests and treatments.
    Cognitive decline may be inevitable for some people at advanced ages, but they may cope better by using technological aids such as cookers that turn themselves off or kettles that protect users against accidental burns.
  • Manufacturing. Microplants—whole factories the size of a computer chip—will construct devices “to a precision of a few micrometers,” all with much less energy and waste than traditional manufacturing processes. Computer chips cannot get much smaller, but they can become far more capable. “Smart” computer chips will be aware of their environments and act upon them. Applications could include brain-wave monitors for patients who have epilepsy. The monitor would recognize an oncoming seizure and avert it.
  • Communications. Numbers of radio stations, TV stations, and mobile-phone and satellite connections are increasing, but room on the electromagnetic spectrum is limited. Networks will operate better if regulations governing bandwidth are loosened and control delegated to local units, the authors argue. Communications will further benefit from new systems that broadcast with less spectrum and from software-defined radio sets whose components change frequencies and perform upgrades automatically in accordance with changing airwave transmissions.
  • Finances. Major economic crashes are often preventable if market observers spot market instabilities before they spiral out of control. Use of computer simulations and other new network science tools would enable economists to better understand market mechanisms. Computers could even perform trades for people: “Automated” trading would eliminate unnecessary trading and lower market risk.
  • Conflict resolution. Civil conflicts are more numerous, and nuclear weapons are an ever-present danger. Satellites and environmental sampling can help keep the peace, however, by enforcing disarmament agreements.

Foreign policies have to evolve, too, according to the authors. Governments need to pursue greater integration, economic cooperation, and interdependence. In addition, every measure that nations take to use less oil and electricity will engender a more peaceful world.

The world and the challenges it faces are both becoming increasingly complex, the authors acknowledge. They are hopeful, however, that if humans expand their capabilities to cooperatively gather information, analyze it, and act upon it, they will thrive.

“Protecting the future of our industry is not about securing the status quo but fostering the dynamics needed to adapt to changes as they arise,” they write.

In 2030, the authors have provided an incisive report about the upcoming frontiers of modern scientific research. Readers will find this book an approachable guide to the new applications that we might realistically see come into use in the decades ahead.

Books in Brief (March-April 2011)

Edited by Rick Docksai

Keeping Connected with the Joneses

The Abundant Community: Awakening the Power of Families and Neighborhoods by John McKnight and Peter Block. Berrett-Koehler. 2010. 173 pages. $26.95.

Avid consumerism became a societal trend in the early twentieth century, and since then “keeping up with the Joneses” has harmed life in many ways, according to social-policy professor John McKnight and workplace consultant Peter Block in The Abundant Community. They argue that the marketplace has essentially replaced the community in most people’s minds, and thus people’s neighborhoods no longer satisfy their emotional needs.

The incessant drive to buy and consume requires huge corporations, health-care infrastructures, and thousands of different types of specialists to feed it. People work nonstop and rely on specialists to look after their health, maintain their homes, keep their neighborhoods in order, and care for their children. Families spend less time together, neighbors scarcely know each other, and relationships become shallow and utilitarian.

Should consumerism persist, the health of communities everywhere will suffer greatly, the authors warn. No neighborhood can effectively prevent crime, educate its youth, create jobs, keep parks clean, and ensure that the elderly, the poor, and other people in need are cared for unless its residents work together to make all these things happen.

McKnight and Block hold out hope that communities everywhere will rediscover their own nonmaterial abundance and relearn how to create vibrant community life. They conclude by laying out the values a community must adopt to achieve this.

The Abundant Community is an in-depth evaluation of twenty-first-century society and the values that define it. Community activists, organizers, and leaders of all kinds will find it deeply meaningful.

Engines of Human Advancement

Acceleration: The Forces Driving Human Progress by Ronald Havelock. Prometheus. 2011. 363 pages. $28.

Humanity has much to look forward to in this century, argues technology consultant Ronald Havelock in Acceleration. He describes a sweeping transformation of human life by 2050: longer life spans, growing knowledge platforms, swelling ranks of scientists and engineers, exponentially more powerful computers, and the diffusion of a more inclusive human ethics.

Havelock identifies a powerful “Forward Function”—movement of societal and technological progress—that he says has been active throughout human history. Progress has been especially great over the last 60 years due to an array of new forces: expanded learning, increased information storage capacity, the evolution of social networking, a larger division of labor in the service of problem solving, more sophisticated problem-solving processes, and immensely enhanced power to distribute knowledge via media.

For the first time in human history, groups of researchers, producers, distributors, and consumers are all continuously connected. These ties of communication will bring all parties more closely into alignment and enable them to innovate together more rapidly and consistently.

Pessimism about the future still runs deep, Havelock notes. Vast numbers of people believe that the future will be grim. Havelock encourages a more positive outlook: Pessimism not only lowers quality of life, but it also slows the Forward Function. He remains confident that the Forward Function will stay on course for as long as there is a human species and will continue to improve human life.

Acceleration is an upbeat philosophical perspective on humanity’s past, present, and future. Audiences from all walks of life will find it thought-provoking and inspirational.

Foresight in a Flash?

Flash Foresight: How to See the Invisible and Do the Impossible by Daniel Burrus with John David Mann. HarperBusiness. 2011. 268 pages. $27.95.

We’ve all had moments of “flash foresight”—i.e., intuitive grasps of what is to come—says executive consultant Daniel Burrus in Flash Foresight, written with business journalist John David Mann. The challenge, Burrus adds, is to know when to act on it; sometimes this foresight is counterintuitive and requires doing the opposite of what everyone else is doing.

You exercise flash foresight when you look to the future and try to discern what you already know. Then, once you’ve established your certainties, you attempt to fill in the uncertainties. There is much about the future that we can predict in advance, Burrus says.

He describes real-life examples of people who exercised flash foresight to solve real problems. Apple Computer’s leadership used it to resurge from market failure to market domination. The phone company Mobile Telephone Networks used it to create burgeoning cell-phone markets throughout sub-Saharan Africa. And Burrus claims to have used it in the early 1980s to accurately predict the digital revolution, the explosive growth of fiber-optic cable networks, and the sequencing of the human genetic code by the year 2000.

Burrus also points out examples of people who failed to use it. They include the heads of General Motors, who had a hugely successful company in the mid-twentieth century but faced collapse and federal takeover in 2008.

Flash Foresight presents helpful case studies in how decision makers in any industry can more effectively shed light on their futures.

Islam’s Call to Sustainability

Green Deen: What Islam Teaches About Protecting the Planet by Ibrahim Abdul-Matin. Berrett-Koehler. 2010. 232 pages. Paperback. $16.95.

Conservation of the earth is integral to Islam, argues Muslim author and policy advisor Ibrahim Abdul-Matin. He presents multiple examples of what Muslims are doing and can do to improve human stewardship of the planet and its resources.

These include “green” mosques that incorporate sustainability into their architecture; urban and suburban food gardens that flourish in some Muslim neighborhoods; and Alpujarra, a Muslim community in Mexico that draws all of its energy from localized solar and wind generators.

There are also individual Muslims who are leading sustainability changes in their own communities, such as Adnan Durrani, an organic food pioneer, and Qaid Hassan, an entrepreneur who delivers fresh produce to low-income communities in Chicago. Also, the Inner-City Muslim Action Network, a Chicago nonprofit, operates a Green Reentry Project that helps recently incarcerated men transition into green jobs.

None of the examples above is an anomaly, Abdul-Matin asserts. He notes that Muhammad, Islam’s foremost prophet, once said that “The Earth is a Mosque, and everything on it is sacred.” Abdul-Matin points to many verses in the Koran pertaining to daily living and how each actually contributes to solving global problems of energy use, food distribution, water supplies, and waste. He further explains how these teachings can be useful and relevant to anyone, Muslim and non-Muslim alike, who is concerned about the environment’s long-term health.

Green Deen offers a new perspective on Islam—the world’s second-largest religion—and its potential as a force for positive worldwide change. Secular and religious audiences of all faith traditions may find it informative and enlightening.

Can Information Empires Be Free?

The Master Switch: The Rise and Fall of Information Empires by Tim Wu. Knopf. 2010. 366 pages. $27.95.

Since the invention of the telephone, every information technology has evolved along a similar trajectory, says Tim Wu, chairman of the media reform organization Free Press, in The Master Switch. He calls this trajectory “The Cycle.”

At first, the technology is an open system that is controlled by no one and subject to extensive innovation by many different developers. Over time, however, one corporation or entity gains exclusive control. Then the technology becomes a “closed system,” and innovation grinds to a halt.

He traces the Cycle as it played out during the twentieth century in film, telecommunications, and broadcast media. Key industry players took over each market, and the outcomes were blander media content, stifled individual expression, and fewer choices for consumers.

The Internet is still an open system, Wu adds. But there are signs that it, too, could fall under centralized control. The consequences would be staggering, given that information industries are integral to almost every aspect of our lives.

Wu advises against aggressive government regulation of information markets. At the same time, he insists that those who develop information, those who own the networks on which it travels, and those who control the tools of information access must all be kept separate from each other. Government must also remain vigilant against excessively large corporate mergers. These basic checks are vital, Wu argues, to prevent any one corporation from becoming the sole arbiter of what consumers see and hear online.

The Master Switch is a provocative thesis on where the Internet has come from and where it is headed. It will interest technology enthusiasts and all who value a vibrant media market.

Putting Our Minds to Morality

My Brain Made Me Do It: The Rise of Neuroscience and the Threat to Moral Responsibility by Eliezer Sternberg. Prometheus. 2010. 244 pages. Paperback. $21.

As neuroscientists learn more about the influences that the brain’s neurons and neurotransmitters have, difficult questions arise over how much control people really have over their lives, according to Tufts University medical student Eliezer Sternberg in My Brain Made Me Do It.

Some neurologists believe that human behavior is entirely predetermined by brain chemistry and that free will does not really exist. Many philosophers object strongly to this viewpoint, however. They hold that to deny free will is to reduce human beings to mindless machines without capacity for moral responsibility.

Sternberg presents both sides and then concludes with his own nuanced view: The brain influences behavior, but it does not determine it. Humans still have the capacity to make their own decisions. Referencing numerous studies of brain activity, brain hormones, and mental disorders, he maps out the complex process of human decision making and the multiple factors—emotional, hormonal, logical, and situational—that underlie it.

Sternberg recasts complex theories about the human brain and human behavior in simple terms that almost any audience will readily grasp. My Brain Made Me Do It will be an engaging read for scientists and lay readers alike.

Humanity’s Next Great Evolution in Values

Thriving in the Crosscurrent: Clarity and Hope in a Time of Cultural Sea Change by James Kenney. Quest. 2010. 253 pages. Paperback. $16.95.

A cultural sea change is under way across the globe, says interfaith activist James Kenney in Thriving in the Crosscurrent. Old beliefs and new beliefs are clashing, and the end result will be the prevalence of cultural values that are better attuned to current realities.

Ethnocentric values—sexism, racism, war, materialism, greed, and exploitation of the environment—are receding. And world-centered values—gender partnership, intercultural dialogue, religious pluralism, nonviolence, spiritual awareness, social justice, and environmental justice—are taking their place.

At least three such sea changes have taken place in human history: the rise of agriculture, the emergence of the major Eastern and Western religious traditions, and the Copernican realization that the Earth is not the center of the universe. Each one signified a profound shift in human understanding and an affirmation of interdependence and creative complexity.

Kenney points out concrete examples of the sea change in academia, the nonprofit world, contemporary politics, and other areas of life. He describes current reactionary forces opposing change, but argues that the new values will ultimately prevail.

Readers who worry about humanity’s future will find in Thriving in the Crosscurrent a compelling case for hope.

Working with Millennials

The 2020 Workplace: How Innovative Companies Attract, Develop, and Keep Tomorrow’s Employees Today by Jeanne C. Meister and Karie Willyerd. HarperCollins. 2011. 294 pages. $26.99.

The Millennial generation—all those born between 1977 and 1997—will constitute nearly half the world’s workforce by 2014, according to workplace consultant Jeanne Meister and Sun Microsystems vice president Karie Willyerd in The 2020 Workplace. They call on employers to plan now for a new paradigm in how and where people will work, the skills they will offer, and the technologies they will use to communicate.

Workforces will exhibit greater diversity in age, gender, and ethnicity, the authors forecast. Also, due to the proliferation of virtual communications, more offices will consist of employees who are dispersed across remote corners of the globe. Professionals everywhere will have far more options as to how, where, when, and for whom they work—provided that they produce results. Leadership will have to be more global, culturally aware, and skilled at building alliances and sharing authority.

The authors describe the unique values that will set the Millennial workforce apart—such as freedom, personal choice, collaboration, corporate integrity, and innovation—and how these priorities will influence their professional lives. They advise employers on how to best engage this new generation while still keeping their senior employees satisfied.

Workplace managers and leaders in practically any industry or sector may find The 2020 Workplace to be a helpful guide to how they can prepare their workplaces for success in the world of 2020.

Future Active

Edited by Aaron M. Cohen

Symposium Tackles Sustainable Transportation

University of Virginia professors from such diverse departments as business, nursing, urban planning, and architecture came together to discuss sustainable transportation at the symposium “The Car of the Future / Future of the Car.”

The event was conceived as a multidisciplinary exploration. “If you want to approach the subject properly, you need expertise that comes from many different disciplines,” said co-organizer Manuela Achilles, program director of UVa’s Center for German Studies.

Guest speakers included bestselling author and futurist Jeremy Rifkin, president of the Foundation on Economic Trends, who presented on “The Third Industrial Revolution and the Reinvention of the Automobile.” Christopher Borroni-Bird, GM’s director of advanced technology vehicle concepts and co-author of Reinventing the Automobile: Personal Urban Mobility for the 21st Century, spoke as well.

Daniel Sperling and Deborah Gordon, the co-authors of Two Billion Cars: Driving Toward Sustainability (Oxford University Press, 2009), also gave a presentation. Their book examines, among other things, the global emphasis on individual car ownership.

Most of the sessions were free and open to the public. University undergraduates also participated in “The Car and its Future,” a contest that gave them the option to either write an essay or design a project around the symposium’s theme.

Source: University of Virginia Center for German Studies, http://artsandsciences.virginia.edu/centerforgermanstudies.

Ten Likely Global Occurrences

“Much can happen in ten years—just review the past decade.” So begins the Copenhagen Institute for Futures Studies’ report “Ten Tendencies Towards 2020.” With this in mind, the CIFS analyzes 10 shifts that the organization believes are already well under way and examines how they could play out in the future, charting potential consequences.

The fact that things are already moving in these general directions, with some momentum behind them, is what distinguishes them as tendencies rather than trends. A panel of Danish executives from different industries rates the significance of each item on the list, on both industry-wide and global levels.

Some highlights from the report are as follows:

  • Due to a number of factors, ranging from aging populations to the financial crisis, companies will place increasingly greater value on employees’ talent and ability, to the point where talent will be “regarded as a company’s most important asset for future growth.” In order to retain talented employees, businesses will compete with each other to offer better work environments, larger salaries, and other benefits. Worth noting: “One method of identifying potential and existing employees’ undiscovered talents in the future could be brain scanning.”
  • The 10-year economic prognosis looks good for many countries in Africa—particularly the so-called African Lions, which include Botswana, Egypt, Libya, Mauritius, Morocco, South Africa, and Tunisia. Fueled in no small part by attention from investors in China and India, “the region’s economic capacity is one of the fastest growing in the world,” according to the report. However, there will continue to be substantial divides between haves and have-nots within these countries. Worth noting: Africa could become an increasingly popular vacation destination for Westerners.
  • In contrast, “the indications are that Europe’s glory days are coming to an end,” according to the CIFS. One (perhaps all-too-likely) scenario shows Europe and the United States experiencing zero economic growth. “On the other hand, it is possible to set up scenarios in which [basic] reforms are gradually implemented, and there is a return to growth, albeit at a lower level than we experienced during the decade from 2000-2010.” Worth noting: Spain, the EU’s fifth largest economy, recently made headlines when it reported zero growth for the third quarter of 2010.
  • A renewed and lasting interest in collectivity and community will benefit global society. Examples of such range from social networking sites to urban car-sharing programs. The CIFS writes, “The communities of the future will be based on co-creation.” In other words, rather than competing against each other, talented people will work together to find innovative solutions to overarching problems. Worth noting: The report suggests that digital media could be facilitating a kind of collective intelligence on a global level.
  • Mental doping is on the rise. Prescription medications such as Adderall, Ritalin, and beta blockers are being used (and abused) more and more as brain stimulants by students and workers looking to improve their mental performance. Yet cognitive-performance enhancement does not carry the same stigma as physical-performance enhancement. “Is this a development that gives cause for concern? Opinion on this is divided,” the report says. Whether such substances will be banned from schools and workplaces—or at least tacitly allowed—is a big question. Worth noting: This tendency overlaps with intensifying genetic research, personalized medicine, and the pioneering of such methods as in utero gene therapy.

The CIFS report is a follow-up to 2003’s “Ten Tendencies Towards 2010.”

Source: Copenhagen Institute for Futures Studies, www.cifs.dk.

Legendary Conservationists Share Award

President emeritus of the Missouri Botanical Garden Peter H. Raven and Harvard University entomology professor Edward O. Wilson were the co-recipients of the 2010 Linnaean Legacy Award. The award was presented to the two colleagues in recognition of their contributions to the field of biological classification by the Linnaean Society of London and the International Institute for Species Exploration at Arizona State University. The ceremony was held at the New York Academy of Sciences as part of the conference “Sustain What? The Mission to Explore and Conserve Biodiversity.”

During the conference, scientists also worked on developing an ambitious 50-year plan to discover and classify at least 90% of the Earth’s species. It is estimated that only 20% (1.9 million) of all species have been discovered and classified so far. What’s more, experts predict that around 30% of all species will become extinct during the twenty-first century. This massive extinction is “changing the entire character of life on Earth,” Raven told the crowd. Preserving the various species—the so-called living environment—is essential to protecting the physical environment, Wilson said during the joint keynote presentation.

Raven’s past articles for THE FUTURIST, including “A Time of Catastrophic Extinction: What We Must Do” (September-October 1995) and the cover story “Disappearing Species: A Global Tragedy” (October 1985), have also sounded this alarm. In “A Time of Catastrophic Extinction,” he suggests ways to prevent what he warns would be “an episode of species extinction greater than anything the world has experienced for the past 65 million years.”

Source: The International Institute for Species Exploration at Arizona State University, www.species.asu.edu.

Europe’s Blue Future: Offshore Energy

The Marine Board of the European Science Foundation presented a report at the EurOCEAN2010 conference that details how Europe could get half of its electricity from renewable marine resources by 2050. The plan entails researching and developing innovative ways to harness energy from offshore wind, tides, and ocean currents, as well as marine biofuels such as algae.

The report, entitled “Marine Renewable Energy: Research Challenges and Opportunities for a New Energy Era in Europe,” points to the fact that the EU currently imports more than half of its energy and that this amount is projected to increase if current trends are unchanged.

In making its case, the Marine Board highlights potential economic benefits, such as job creation and new business opportunities—which were dubbed “blue jobs” and “blue growth” at the conference. The Board’s projections show that “by 2050, the Renewable Ocean Energy sector could provide 470,000 jobs, which corresponds to ten to twelve jobs (direct and indirect) created per megawatt installed.”

Developing the technology means developing new bodies of knowledge in fields ranging from engineering to ecology. It also entails crafting innovative legislation to help facilitate it. “Marine renewable energy is in its infancy, but it has remarkable potential, so the target of 50% is ambitious, but achievable,” said Marine Board chair Lars Horn. “We just need research, industry and policy to come together.”

The report further recommends comprehensively assessing the available aquatic resources, and developing ways to properly monitor them, in order to keep track of the environmental impacts caused by large commercial-scale installations. Such issues could include electromagnetic disturbances and problems caused by altering water circulation patterns. The report states: “There is limited data or knowledge on the medium- and long-term environmental impacts of Marine Renewable Energy devices.” The Board advocates finding better ways to research, predict, and respond to potential cumulative impacts. To that end, it also advocates for the creation of an initial test site.

The Marine Board is a co-organizer of the EurOCEAN2010 conference, which was held in October 2010 in Belgium.

Sources: European Science Foundation, www.esf.org. EurOCEAN2010 Conference, www.eurocean2010.eu.