
Don’t be alarmed, but the next 10 years could be the most significant in the history of the human race. The unsolved problems of the last century have grown in size and urgency. Issues such as climate change, governmental fiscal imbalances, the demographic shift to older populations, depleting resources, and increasing technological complexity could cause major disruptions in the next decade as our species arrives at what futurist William Halal calls a “crisis of maturity.”
Some of the questions we will have to address in the next decade include:
* How do we deliver inexpensive and reliable health care to a rapidly aging population?
* How does a civilization maintain economic growth and prosperity in the wake of overdevelopment, misuse of wealth, and profligate exploitation of resources?
* Will the Internet bring democracy and freedom to the people of the world who live under authoritarian rule? Or will nondemocratic regimes appropriate the power of information technology to spy on their own citizens?
* What’s the best method for educating our children for an ever-more competitive and demanding economic environment?
In a series of essays to run in this magazine throughout 2010, we hope to bring you some answers. We will ask 20 individuals, each with a unique vision and a unique voice, to share with you their hopes, fears, and ideas for the next 10 years and beyond. Some of these voices offer a new approach to the problems that we’re facing today. Other voices highlight an issue or dilemma that will grow as a major concern. All of these individuals offer solutions, and all are highly independent.
Why is independence important? Look closely and you’ll see signs that a global shift is occurring. Technological breakthroughs and globalization are imbuing ordinary people with new powers, from the street activist in Beijing organizing a flash demonstration on his phone to the entrepreneur in Kenya who’s just made a biofuel breakthrough.
History has seen the transfer of power from mobs to empires and from empires to states and corporations. This most-recent transmission of control, from giant institutions to small groups and citizens, could be our last if we fail to wield power properly.
We have the opportunity to redefine “progress” for a new era. Technology and globalization are presenting us with opportunities to build entirely new futures from the ground up.
Let the visioning commence. — Patrick Tucker
In this first series of essays, we tackle health and education. Andrew Hessel showcases his vision for open-source drug manufacturing and noted nanoscientist Robert Freitas details the medical future of nanorobotics. Then two teachers — Janna Anderson and Mark Bauerlein — present two distinct visions for education in the twenty-first century.
The founder of the Pink Army Cooperative is bringing the open-source development model to breast cancer therapies.
By Andrew Hessel
If I were to tell you that volunteers working out of garages and bedrooms could play as big a role in the elimination of breast cancer by 2020 as a multibillion-dollar big pharmaceutical company, would you believe me?
I’m convinced it’s possible. That’s why I founded the Pink Army Cooperative. The Cooperative is not your average biotechnology startup. It’s an open-source biotechnology venture that is member-owned, member-operated, and not-for-profit. It’s working to create individualized therapies for breast cancer. The mission is to build a new drug development pipeline able to produce effective therapies faster and for less money, without compromising safety.
Big Drug Makers versus Co-Op: Why Small Is Better
About six years ago, I realized that the cooperative model could change the future of medicine. I’d just spent years working inside a well-funded scientific playhouse where R&D should have moved forward at breakneck speed, but somehow it hadn’t. Technologies are changing fast, and drugs frequently fail in development.
It costs hundreds of millions, or even billions, of dollars to bring a drug to market, and the costs are still growing faster than inflation. Even the largest pharmaceutical companies are struggling. The bottom line? Making a new drug is an adventure with no guarantee of success at any cost. The question I asked myself was, why hasn’t the pipeline been scrapped and replaced with something that can accommodate development done faster, better, and cheaper?
There is no public route for drug development; virtually all development is industry-backed. I wondered, if open-source software could effectively challenge multibillion-dollar software franchises, could scientists and drug developers work cooperatively to compete with a product from a big pharmaceutical company? To my mind, breast cancer therapies were the obvious choice, since many people already give time and money toward finding a cure.
Perhaps the single most powerful tool for accomplishing this goal is openness, which allows everyone, amateur or professional, anywhere, to peek under the hood of the company, understand what is being done, and add his or her ideas or comments. I personally believe it’s lack of transparency and inability to share information easily that has held back the biopharma industry compared to the IT industry.
Overall, as biology becomes more digital, there is potential for massive change. Open access will make it easier to share ideas, publish protocols and tools, verify results, firewall bad designs, communicate best practices, and more. Individualized medicine development will be built on this open foundation, which will only help developers be more successful and lower risk.
It also permits a novel funding model — i.e., directly approaching those who would benefit from any breakthrough. Whereas traditional funding models require attracting a few individuals or groups able to make large investments, for which they expect a financial return, we can deliver our message widely, asking people to invest $20 in a membership, in exchange for sharing our data with the community. Finding people to support us and running the cooperative itself is made easier because of social networking sites like Facebook and Twitter.
In the short term, I don’t see open-source drug development having a large effect on the U.S. health economy. The $2 trillion–plus system includes many products and services beyond just drugs. But there is room for a few examples to exist, make a real and measurable difference, and inspire others to experiment with nonprofit development. If Pink Army can treat even a single individual, I will consider the project a tremendous success, although I hope it will grow to treat millions of people with medicines that only get better and cheaper over time.
Personal Cures: From Individuals, For Individuals
The idea of cures or therapies that are unique to the individual is a critical component of the Pink Army Cooperative vision. A few years ago, the notion of cancer treatment that was specific to a person’s genome was seen as a fantasy. But thanks to rapidly moving technologies like synthetic biology, the prospects are very different today. Synthetic biology is a powerful new genetic engineering technology founded on DNA synthesis that amounts to writing software for cells. It’s the ideal technical foundation for open-source biotechnology. Moreover, synthetic biology drops the cost of doing bioengineering by several orders of magnitude. Small proteins, antibodies, and viruses are amenable to the technology and within reach of a startup.
Readers familiar with Wired editor Chris Anderson’s The Long Tail will recognize individualized medicine as the very end of the tail — a future of one product sold only to one person. I don’t think any company had seriously considered making these types of drugs before Pink Army. Most people accept that drugs cost hundreds of millions to make. Who could pay that much for a custom medicine, other than a few billionaires?
But individualized drugs could lower the cost of drug development across the entire spectrum of the development chain. Only very small-scale manufacturing capability is necessary. Lab testing is simplified. And clinical trials are reduced to a single person: No large phased trials are necessary, so there’s no ambiguity about who will be treated, and every patient can be rigorously profiled. This shaves money and years off development. Moreover, with the client fully informed and integral to all aspects of development and testing, the developer’s liability approaches the theoretical minimum.
My interest in breast cancer is personal and professional. Because it affects so many women — roughly 12% — almost everyone has been touched by breast cancer either personally or through someone they know. But cancer has always been central to my work as a genetic scientist, and I’m lucky to have been involved with several breast cancer–related projects during my time in biopharma. Curing cancer should be straightforward: It’s about making a better antibiotic, but the search for a cure seems to have stalled. It’s time to see if open-source drug development can reboot the process. That’s why Pink Army is important.
About the Author
Andrew Hessel is a geneticist and founder of the Pink Army Cooperative in Alberta, Canada. Web site www.pinkarmy.org.
The founder of the Nanofactory Collaboration is innovating medicine molecule by molecule.
By Robert A. Freitas Jr.
© 2009 Robert A. Freitas Jr. All Rights Reserved.
For countless centuries, physicians and their antecedents have sought to aid the human body in its efforts to heal and repair itself. Slowly at first, and later with gathering speed, new methods and instruments have been added to the physician’s toolkit – anesthesia and x-ray imaging, antibiotics for jamming the molecular machinery of unwanted bacteria, microsurgical techniques for physically removing pathological tissue and reconfiguring healthy tissue, and most recently biotechnology, molecular medicine, pharmacogenetics and whole-genome sequencing, and early efforts at gene therapies.
In most cases, however, physicians must chiefly rely on the body’s ability to repair itself. If this fails, external efforts may be useless. We cannot today place the component parts of human cells exactly where they should be, and restructure them as they should be, to ensure a healthy physiological state. There are no tools for working, precisely and with three-dimensional control, at the molecular level.
To obtain such tools, we need nanotechnology (nanomedicine.com/NMI/1.1.htm). Nanotechnology is the engineering of atomically precise structures and, ultimately, molecular machines. The prefix “nano-” refers to the scale of these constructions. A nanometer is one-billionth of a meter, the width of about five carbon atoms nestled side by side. Nanomedicine is the application of nanotechnology to medicine.
The ultimate tool of nanomedicine is the medical nanorobot (http://www.nanomedicine.com/index.htm#NanorobotAnalyses) – a robot the size of a bacterium, composed of many thousands of molecule-size mechanical parts perhaps resembling macroscale gears, bearings, and ratchets, possibly made of a strong diamond-like material. A nanorobot will need motors to make things move, and manipulator arms or mechanical legs for dexterity and mobility. It will have a power supply for energy, sensors to guide its actions, and an onboard computer to control its behavior. But unlike a regular robot, a nanorobot will be very small. A nanorobot that travels through the bloodstream must be smaller than the red cells in our blood – tiny enough to squeeze through even the narrowest capillaries in the human body. Medical nanorobotics holds the greatest promise for curing disease and extending the human health span. With diligent effort, the first fruits of this advanced nanomedicine could begin to appear in clinical treatment sometime during the 2020s.
For example, one medical nanorobot called a “microbivore” (http://www.jetpress.org/volume14/freitas.pdf) could act as an artificial mechanical white cell, seeking out and digesting unwanted pathogens including bacteria, viruses, or fungi in the bloodstream. A patient with a bloodborne infection might be injected with a dose of about 100 billion microbivores (about 1 cc). When a targeted bacterium bumps into a microbivore, the microbe sticks to the nanorobot’s surface like a fly caught on flypaper. Telescoping grapples emerge from the microbivore’s hull and transport the pathogen toward the front of the device, bucket-brigade style, and into the microbivore’s “mouth.” Once inside, the microbe is minced and digested into amino acids, mononucleotides, simple fatty acids, and sugars in just minutes. These basic molecules are then harmlessly discharged back into the bloodstream through an exhaust port at the rear of the device. A complete treatment might take a few hours, far faster than the days or weeks often needed for antibiotics to work, and no microbe can evolve multidrug resistance to these machines the way microbes can to antibiotics. When the nanorobotic treatment is finished, the doctor broadcasts an ultrasound signal and the nanorobots exit the body through the kidneys, to be excreted with the urine in due course. Related nanorobots could be programmed to quickly recognize and digest even the tiniest aggregates of early cancer cells.
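As a rough plausibility check on that dose (this is our own back-of-the-envelope arithmetic, not a figure taken from Freitas’s design papers, and it assumes a device volume on the order of 10 cubic microns for a bacterium-sized robot):

$$
10^{11}\ \text{devices} \times 10\ \mu\text{m}^{3}\ \text{per device} = 10^{12}\ \mu\text{m}^{3} = 1\ \text{cm}^{3} \approx 1\ \text{cc}.
$$

In other words, 100 billion robots a few microns across would indeed pack into roughly a cubic centimeter of fluid, and each would still be comfortably smaller than a red blood cell, which is about 7 to 8 microns in diameter.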
Medical nanorobots could also be used to perform surgery on individual cells. In one proposed procedure, a cell repair nanorobot called a “chromallocyte” (http://www.jetpress.org/v16/freitas.pdf), controlled by a physician, would extract all existing chromosomes from a diseased cell and insert fresh new ones in their place. This process is called chromosome replacement therapy. The replacement chromosomes are manufactured outside of the patient’s body using a desktop nanofactory optimized for organic molecules. The patient’s own individual genome serves as the blueprint to fabricate the new genetic material. Each chromallocyte is loaded with a single copy of a digitally corrected chromosome set. After injection, each device travels to its target tissue cell, enters the nucleus, replaces old worn-out genes with new chromosome copies, then exits the cell and is removed from the body. If the patient chooses, inherited defective genes could be replaced with non-defective base-pair sequences, permanently curing any genetic disease and even permitting cancerous cells to be reprogrammed to a healthy state. Perhaps most importantly, chromosome replacement therapy could correct the accumulating genetic damage and mutations that lead to aging in every one of our cells.
Right now, medical nanorobots are just theory. To actually build them, we need to create a new technology called molecular manufacturing. Molecular manufacturing is the production of complex atomically precise structures using positionally controlled fabrication and assembly of nanoparts inside a nanofactory, much like cars are manufactured on an assembly line. The first experimental proof that individual atoms could be manipulated was obtained by IBM scientists back in 1989, when they used a scanning tunneling microscope to precisely position 35 xenon atoms on a nickel surface to spell out the corporate logo “IBM”. Similarly, inside the nanofactory, simple feedstock molecules such as methane (natural gas), propane, or acetylene will be manipulated by massively parallel arrays of tiny probe tips to build the atomically precise structures needed for medical nanorobots. In 2006, Ralph Merkle and I founded the Nanofactory Collaboration (MolecularAssembler.com/Nanofactory) to coordinate a combined experimental and theoretical R&D program to design and build the first working diamondoid nanofactory that could build medical nanorobots.
How are these ideas being received in the medical community? Initial skepticism was anticipated, but over time people have begun taking the concept more seriously. (In late 1999 when my first book on “nanomedicine” came out, googling the word returned only 420 hits but this number rose fourfold in 2000 and fourfold again in 2001, finally exceeding 1 million hits by 2008.) Of course, most physicians cannot indulge themselves in exploring the future of medicine. This is not only understandable but quite reasonable for those who must treat patients today with the methods available today. The same is true of the medical researcher, diligently working to improve current pharmaceuticals, whose natural curiosity may be restrained by the knowledge that his or her success – no matter how dramatic – will eventually be superseded. In both cases, what can be done today, or next year, is the most appropriate professional focus.
But only a fraction of today’s physicians and researchers need look ahead for the entire field of medicine to benefit. Those practitioners who plan to continue their careers into the timeframe when nanomedical developments are expected to arrive – e.g., younger physicians and researchers, certainly those now in medical and graduate programs – can incrementally speed the development process, while simultaneously positioning their own work for best effect, if they have a solid idea of where the field of medicine is heading. Those farther along in their careers will be better able to direct research resources today if the goals of nanomedicine are better understood.
The potential impact of medical nanorobotics is enormous. Rather than using drugs that act statistically and have unwanted side effects, we can deploy therapeutic nanomachines that act with digital precision, have no side effects, and can report exactly what they did back to the physician. Test results, ranging from simple blood panels to full genomic sequencing, should be available to the doctor within minutes of sample collection from the patient. Continuous medical monitoring by embedded nanorobotic systems, as exemplified by the programmable dermal display (http://www.nanogirl.com/museumfuture/freitastalk.htm), can permit very early disease detection by patients or their physicians. Such monitoring will also provide automatic collection of long-baseline physiologic data, permitting detection of chronic conditions that may take years or decades to develop, such as obesity, diabetes, calcium loss, or Alzheimer’s.
Drug companies? Rather than brewing giant batches of single-action drug molecules, Big Pharma can shift to manufacturing large quantities of generic nanorobots of several basic types. These devices could later be customized to each patient’s unique genome and physiology, then programmed to address specific disease conditions, on site in the doctor’s office at the time of need. Could personal nanofactories (http://www.rfreitas.com/Nano/NoninflationaryPN.pdf) in patients’ homes eventually do some of this manufacturing? Yes, especially if creative designs for new devices or procedures are placed online as open-source information. But basic issues such as IP rights, quality control, legal liability, trustworthiness of design improvements and software upgrades, product branding, government regulation and the like should allow Big Pharma to retain a significant role in medical nanomachine manufacture even in an era of widespread at-home personal manufacturing.
Doctors and hospitals? For commonplace pathologies such as cuts or bruises, colds or flu, bacterial infections or cancers of many kinds, individuals might keep a batch of generic nanorobots at the ready in their home medical appliance, ready to be reprogrammed at need either remotely by their doctor or by some generically-available procedure, allowing patients to self-treat in the simplest of cases. Doctors in this situation will act in the role of consultants, advisors, or in some cases gatekeepers regarding a particular subset of regulated conventional treatments. This will free up physicians and hospitals to deal with the most difficult or complex cases, including acute physical trauma and emergency care. These practitioners can also concentrate on rare disease conditions; many diseases also have few symptoms and thus go unrecognized for a long time. Medical specialists will also be needed to plan and coordinate major body modifications such as cosmetic surgeries and genetic upgrades, as well as more comprehensive procedures such as whole-body rejuvenations that may involve cell repair of most of the tissue cells in the body and might require several days of continuous treatment in a specialized facility.
Cost containment? Costs can be held down because molecular manufacturing can have intrinsically cheap production costs (probably on the order of $1/kg for a mature molecular manufacturing system) and can be a “green” technology generating essentially zero waste products or pollution during the manufacturing process. Nanorobot life cycle costs can be very low because nanorobots, unlike drugs and other consumable pharmaceutical agents, are intended to be removed intact from the body after every use, then refurbished and recycled many times, possibly indefinitely. Even if the delivery of nanomedicine doesn’t reduce total health-care expenditures – which it should – it will likely free up billions of dollars that are now spent on premiums for private and public health-insurance programs.
Many are working to extend the bounds of conventional medicine, so here it is relatively difficult for one person to make a big difference. Few are given the opportunity (the perspective, the resources, and the willingness) to look a bit farther down the road, identifying an exciting long-term vision for medical technology and then planning the detailed steps necessary to achieve it. Planning and executing these steps toward the long-term vision has been my career and my passion for the last two decades. As the technologies I’m working on come more clearly into focus, more people will acknowledge them as realistic and their enhanced trust in the longer-term vision will help speed the development of medical nanorobotics.
About the Author
Robert A. Freitas Jr. is senior research fellow at the Institute for Molecular Manufacturing (IMM) in California, after serving as a research scientist at Zyvex Corp. in Texas during 2000-2004. He is the author of Nanomedicine (Landes Bioscience, 1999, 2003), the first technical book series on medical nanorobotics. Web site www.rfreitas.com. Freitas is the 2009 winner of the Feynman Prize in nanotechnology for theory.
Communications scholar Janna Anderson is charting a new path for education outside of the classroom.
The following interview was conducted by FUTURIST senior editor Patrick Tucker.
THE FUTURIST: You’ve talked about entrenched educational institutions of the industrial age, and how those will be replaced as computer interfaces improve. You’ve said that developments in materials science will make learning into a process that happens via computer and video game, and that this may even be a precursor to learning by computer implant by 2030 or 2040. My first question is: What role does the physical classroom have in the education of the future?
Janna Anderson: I do believe that a face-to-face setting is an important element of learning. The era of hyperconnectivity will require that most professionals weave their careers and personal lives into a blended mosaic of activity. Work and leisure will be interlaced throughout waking hours, every day of the week. We need to move away from the format of school time and non-school time, which is no longer necessary. It was invented to facilitate the agrarian and industrial economies.
Faculty, teachers, and principals could inform students that they expect them to learn outside of the classroom and beyond homework assignments. The Internet plays a key role in that. Rather than classrooms, one can see the possible emergence of learning centers where students with no Internet access at home can go online, but everyone will be working on a different project, not on the same lesson. You can also imagine students making use of mobile and wireless technology for purposes of learning.
More importantly, we need to teach kids to value self-directed learning, teach them how to learn on their own terms, and how to create an individual time schedule. We need to combine face time with learning online. And we can’t be afraid to use the popular platforms like text-messaging and social networks. As those tools become more immersive, students will feel empowered and motivated to learn on their own — more so than when they were stuck behind a desk.
THE FUTURIST: One thing you and many others have said is that neuroscience has the potential to radically change the way we teach. As we develop a more real and full understanding of the way the brain accumulates knowledge, what technology, aside from IT, could change education?
Anderson: It’s hard to predict which new technology could capture people’s imaginations. I think the combination of bioinformatics — biology and information technology — could have the biggest impact in the next couple of decades. If we continue to see the digitization of all information, which renders even our chemistry knowable, the ramifications for education could be immense and unfathomable. But the far future is the confluence of too many different factors to see.
THE FUTURIST: Right now, many educators perceive a digital divide between the members of different socioeconomic classes. You’ve talked about how scalability — technology becoming cheaper and more available in the future — could help solve that. But what if some people adopt the new technology faster than others? There are early adopters and late adopters. Being a late adopter is a small matter when you’re talking about the new iPhone, but as education becomes increasingly digitized, late adoption could have significant consequences in terms of the educational quality. Do you see any threat of an adopter divide?
Anderson: There’s no doubt that there are capacity differences. When we’re talking about the digital divide, we’re not talking just about access to equipment, but also the intellectual capacity, the training to use it, and the ability to understand the need for it, as well as its importance. There’s no doubt that cultural differences are also a huge factor. In areas that have been less developed, especially in the global south, a capacity gap in terms of adoption of a new technology may emerge because some societies are less able to adopt something new at this point in time.
THE FUTURIST: How can this cultural divide be overcome?
Anderson: This is why the effort to educate women is so important. In cultures where women are highly educated and tend to be heads of the family in terms of the upbringing of their children, there’s a higher likelihood that those children are going to show a more open cultural perspective and be more willing to take up new technologies.
THE FUTURIST: So, you still see an active role for actual physical teachers. In many ways, teachers will be more necessary than ever if they’re going to help people, especially in less-developed nations, to pick up these technologies to improve their own lives?
Anderson: There’s definitely a role for technology evangelists who can help people to understand how to use information technology no matter what level they happen to be at. But the traditional idea of the teacher may be much less valuable to the future, just like the traditional library will have much less value. We need to remove the old books that no one has opened in twenty years and put them in nearby storage. What we do need are places where people can gather — places that foster an atmosphere of intellectual expansion, where learners can pursue deeper meaning or consult specialists with access to deep knowledge resources. It’s all about people accessing networked knowledge, online, in person, and in databases. We need collective intelligence centers, and schools could be that way, too.
THE FUTURIST: The Internet is inherently disruptive to business models; the decimation of the newspaper industry is a case in point. One of the aspects of digital education that people don’t talk about much is how disruptive it could be to the career of teaching. On the one hand, really great teachers will be able to reach a broader audience than ever before, but younger educators — teachers who have not yet hit their stride — could be left out. What happens when the educational community one day realizes that they’re facing the same forces of creative destruction that newspapers are facing today?
Anderson: Today there’s actually an advantage for young teachers because they generally understand better than the oldest generation how to implement new digital tools. If we eventually are able to “patch in” to all of the knowledge ever generated with a cybernetic implant, or if we are able to program advanced human-like robots or 3-D holograms to deliver knowledge resources, “elders” will have more influence over the content delivered. Regarding forces of advancing technology and their influence on things such as the news industry, the story of the entrenched institutions fighting change is an old one. We have to overcome the tyranny of the status quo. Many media leaders understood in the 1990s that they had to prepare for a new day, but they had this great profit machine. They wouldn’t let go of it until the economics of the situation forced them to change. Economics is generally the force that pushes leaders of stagnating institutions to adopt new paradigms. It will be interesting to see how all of this develops over the next few years.
Maybe what we need is a new employment category, like future-guide, to help people prepare for the effects of disruptive technology in their chosen professions so they don’t find themselves, frankly, out of a job.
About the Interviewee
Janna Anderson is an associate professor in Elon University’s School of Communications and the lead author of the Future of the Internet book series published by Cambria Press. She is also the author of Imagining the Internet: Personalities, Predictions, Perspectives (Rowman & Littlefield, 2005). She will be speaking at the World Future Society’s 2010 conference in Boston.
Emory University professor Mark Bauerlein is fighting to preserve literary thought in an age of digital distraction.
By Mark Bauerlein
When the Boston Globe reported that an elite prep school in Massachusetts had set out to give away all its books and go 100% digital, most readers probably shrugged. This was just a sign of the times: Everyone now assumes a paperless future of learning through screens, not Norton anthologies and Penguin paperbacks. After all, the headmaster of the school told the Globe, “When I look at books, I see an outdated technology, like scrolls before books.” Who wouldn’t believe that every school a decade hence will display a marvelous, wondrous array of technology in every classroom, in the library, in study hall?
It won’t go that far, though, not in every square foot of the campus and every minute of the school day. In 2020, schools will indeed sport fabulous gadgets, devices, and interfaces of learning, but each school will also have one contrary space, a small preserve that has no devices or access, no connectivity at all. There, students will study basic subjects without screens or keyboards present — only pencils, books, old newspapers and magazines, blackboards and slide rules. Students will compose paragraphs by hand, do percentages by long division, and look up a fact by opening a book, not checking Wikipedia. When they get a research assignment, they’ll head to the stacks, the reference room, and the microfilm drawers.
It sounds like a Luddite fantasy, but even the most pro-technology folks will, in fact, welcome the non-digital space as a crucial part of the curriculum. That’s because over the next 10 years, educators will recognize that certain aspects of intelligence are best developed with a mixture of digital and nondigital tools. Some understandings and dispositions evolve best the slow way. Once they mature, yes, students will implement digital technology to the full. But to reach that point, the occasional slowdown and log-off is essential.
Take writing. Today, students write more words than ever before. They write them faster, too. What happens, though, when teenagers write fast? They select the first words that come to mind, words that they hear and read and speak all the time. They have an idea, a thought to express, and the vocabulary and sentence patterns they are most accustomed to spring to mind; with the keyboard at hand, phrases go right up on the screen, and the next thought proceeds. In other words, the common language of their experience ends up on the page, yielding a flat, blank, conventional idiom of social exchange. I see it all the time in freshman papers, prose that passes along information in featureless, bland words.
English teachers want more. They know that good writing is pointed, angular, vivid, and forceful. A sharp metaphor strikes home, an unusual word catches a perceptive meaning, a long periodic sentence that holds the pieces together in elegant balance draws readers along. These are the ingredients of style, the cultivation of a signature. It happens, though, only when writers step outside the customary flow of words, especially those that tumble forth like Yosemite Falls. Because writing is a deep habit, when students sit down and compose on a keyboard, they slide into the mode of writing they do most of the time on a keyboard — texting (2,272 messages per month on average, according to Nielsen), social networking (nine hours per week, according to the National School Boards Association), and blogging, commenting, IM, e-mail, and tweets.
It’s fast and easy, but good writing doesn’t happen that way. As more kids grow up writing in snatches and conforming to the conventional patter, problems will become impossible to overlook. Colleges will put more first-year students into remedial courses, and businesses will hire more writing coaches for their own employees. The trend is well under way, and educators will increasingly see the nondigital space as a way of countering it. For a small but critical part of the day, they will hand students a pencil, paper, dictionary, and thesaurus, and slow them down. Writing by hand, students will give more thought to the craft of composition. They will pause over a verb, review a transition, check sentence lengths, and say, “I can do better than that.”
The nondigital space will appear, then, not as an antitechnology reaction but as a nontechnology complement. Before the digital age, pen and paper were normal tools of writing, and students had no alternative to them. The personal computer and Web 2.0 have displaced these tools, creating a new technology and a whole new set of writing habits. This endows pen and paper with a new identity, a critical, even adversarial one. In the nondigital space, students learn to resist the pressures of conformity and custom, to think and write against the fast and faster modes of the Web. Disconnectivity, then, serves a crucial educational purpose, forcing students to recognize the technology everywhere around them and to see it from a critical distance.
This is but one aspect of the curriculum of the future. It allows a better balance of digital and nondigital outlooks. Yes, there will be tension between the nondigital space and the rest of the school, but it will be understood as a productive tension, not one to be overcome. The Web is, indeed, a force of empowerment and expression, but like all such forces, it also fosters conformity and stale behaviors. The nondigital space will stay the powers of convention and keep Web 2.0 (and 3.0 and 4.0) a fresh and illuminating medium.
About the Author
Mark Bauerlein is a professor of English at Emory University. He’s served as a director of the Office of Research and Analysis at the National Endowment for the Arts, where he oversaw studies about culture and American life. He’s published in the Wall Street Journal, The Weekly Standard, The Washington Post, and the Chronicle of Higher Education. His latest book, The Dumbest Generation: How the Digital Age Stupefies Young Americans and Jeopardizes Our Future; Or, Don’t Trust Anyone Under 30, was published in May 2008 by Penguin. Web site www.dumbestgeneration.com.
In the second installment of our 2020 Visionaries series, we look at media and spirituality in the next decade and beyond.
Media refers not only to books, movies, music, and journalism that we consume; it also speaks to the way we enjoy and create culture. Today, publishing houses, record companies, and movie studios face a future where every book, album, and movie is nothing more than a collection of ones and zeroes, downloadable anywhere, with no expensive packaging or backroom dealing technically necessary.
This means diminished profits and returns for media companies that rely on enormous and expensive distribution systems. In 2008, the sale of music on the Web rose significantly: More than a billion songs were downloaded, up from just 19 million in 2003. But the number of albums sold has dropped considerably, reflecting a change not only in the way music is sold, but also in the way it’s created and arranged. Similar trends are affecting newspapers, publishing houses, and movie studios.
Cory Ondrejka, co-founder of the online game Second Life and former vice president for digital marketing for EMI Music, and Andrew Keen, Internet entrepreneur and outspoken critic of Web 2.0, paint contrasting pictures of how the Internet will redefine culture in the next 10 years.
The Web is also changing the way we perceive the universe and our place in it. Scientific breakthroughs that challenge core religious beliefs — fossil data adding credence to evolution or new telescopic imagery showing the vast emptiness of space — are broadcast immediately, globally, and with increasing frequency. A cross-continental community is developing around the rejection of traditional religion, as evinced by the growing popularity of prominent atheists such as Richard Dawkins and Christopher Hitchens.
The twenty-first century, more than any other, will be governed by science. No wonder the number of Americans who self-identified as not being part of any organized religion roughly doubled from 8% of the population in 1990 to 15% in 2008. The percentage of the U.S. population who self-identified as Christian decreased from 86% of the population to 76% during the same time.
But the Internet is also allowing religious people to connect on an international scale and discuss the intersection of science and spirituality. The relationship need not be a hostile one, as a number of religious leaders are beginning to recognize. In his 2005 book The Universe in a Single Atom, the Dalai Lama remarked, “Today, in the first decade of the twenty-first century, science and spirituality have the potential to be closer than ever, and to embark upon a collaborative endeavor that has far-reaching potential to help humanity meet the challenges before us.”
We asked Roy Speckhardt of the American Humanist Association and Buddhist abbess Ayyā Gotamī (the reverend Prem Suksawat) for their views on how spirituality, science, and the Internet may influence one another in the decades ahead. — Patrick Tucker, senior editor, THE FUTURIST.
A Second Life creator is fighting to give digital pioneers room and freedom to grow.
By Cory Ondrejka
We live in the future.
I am writing on a three-pound sliver of a laptop that can execute billions of operations every second, connected to the Internet over a mobile broadband connection. I am juggling competing deadlines for this article and writing Web code for a project running on a cloud infrastructure I will never see. I send e-mail, listen to music, publish updates to Twitter and Facebook, and test machine-learning algorithms, all while enjoying a double espresso.
When I run into a bug or can’t remember the syntax for a command, I Google the question, knowing the Web’s answers to be faster and more accurate than the documentation stored on my hard drive and Kindle. My phone is a pocket computer with nearly the power of the first desktop machines used in Second Life’s development. All of us can find, manage, remix, and share information with an ease we are already taking for granted.
We live in the future. Big Content does not. Big Content — shorthand for the publishing, music, movie, television, and news industries and their powerful lobbying organizations — is staggering into the second decade of the twenty-first century. Despite the largest, wealthiest, and most connected global audience in human history, Big Content faces precipitous declines in sales and advertising revenue.
Because Big Content does not embrace the world we live in, two wildly different outcomes exist for media in 2020: Big Brother or Little Brother. Which future we get is a function of who participates in and drives the ongoing debates around media, innovation, copyright, piracy, and Internet access.
Content owners have railed against technological change since before Big Content even existed, from John Philip Sousa’s denouncing of the player piano to former Motion Picture Association of America chief Jack Valenti’s famous comparison of the VCR to the Boston Strangler. But for the most part, national governments have acted to maximize growth and prosperity through a balanced approach to intellectual property law. History validated that approach, with content creators and owners repeatedly adapting to and mastering new technologies and business opportunities.
The massive advantages that Big Content has when leveraging new technology — existing audience, established talent, and institutional expertise — are not enough for the incumbents. Instead, Big Content wants regulation to control technology, to force the law to define what artists and fans want in the future. Our ability to regulate the future is laughably bad, so it is fortunate that previous attempts, focused on limiting what artists and fans could do with their content through digital rights management (DRM), met with limited success at best.
Recently, Big Content has changed tactics, determined to create a Big Brother future for us all. At the heart of this new approach is the idea of “graduated response,” embodied in France’s recently ratified “three strikes” law. “Three strikes” is fairly simple: Be accused of violating copyright law three times, and you lose access to the Internet for three months to one year. A mysterious official body, the High Authority for the Distribution of Works and the Protection of Rights on the Internet, or HADOPI, would enforce the measure in accordance with a broad mandate to “prevent the hemorrhaging of cultural works on the Internet.”
This bears repeating: If HADOPI accuses you of violating copyright law three times, you could lose your access to the Internet.
“Three strikes” is gaining support worldwide. Today, most of the developed world is engaged in negotiations around the Anti-Counterfeiting Trade Agreement (ACTA). The parties to the agreement, including the United States, the European Union, and South Korea, have kept negotiations secret, but leaks indicate “three strikes” is under consideration.
This is akin to having your phone service terminated for making a mix tape. Or your electricity turned off for wearing a homemade superhero Halloween costume.
This is the Big Brother future. Abuses of the United States’ Digital Millennium Copyright Act have already shown that content owners will allege copyright abuse to hamper business competitors, suppress free speech, and block innovation. How much more powerful a hammer is “three strikes,” since it can effectively cut off the accused from their community, co-workers, and family by blocking Internet access? For many people, such a move would destroy their livelihood.
Thus, a bleak media future for 2020 is on the horizon, one where any content owner is able to invoke Big Brother to cut off your access to the Internet. Use the Internet as your phone service, for playing games, watching television, paying your bills, attending school, or working from home? Too bad! Three alleged copyright violations and you are back in 1990. Three mistakes by notoriously error-prone filtering software, and you are a second-class citizen, blocked from interacting with the rest of civil society.
What will this lead to? In 2020, the Internet and World Wide Web will be the most important technology any of us have access to. Fearing the loss of that connection, you will take the only option available and avoid media on the Internet entirely. No posting pictures, no streaming content, no cloud storage of your data, no user-generated content, no discussion groups about media. Any of these uses of media could lead to inadvertent copyright infringement or false positives, resulting in the loss of your connection to the Internet and the Web. So, ironically, by driving a Big Brother agenda, Big Content is sowing the seeds for its own destruction by eliminating the largest and most easily addressable audience ever.
Fortunately, Big Brother is not the only option. We — innovators, politicians, and citizens — can demand a better option: a Little Brother future.
Among the changes brought on by the Internet age, one of the greatest is the tremendous increase in capability available to individuals and small teams. First, the dramatic decrease in computation, storage, and transmission costs makes it far less risky to experiment with new technology. Next, the explosion of wired and wireless connections to the Web ensures an addressable audience. Finally, intense competition between tools and technologies makes Web development faster and more approachable. It has never been cheaper or easier to create content or to find and deliver that content to the right audience.
In broader terms, the costs of communication and learning have never been lower, and will only drop further if we refuse to allow Big Content to drive the debate. Cheaper learning should matter to all of us, because innovation is constrained by the cost of learning. Innovation — turning knowledge into products — drives per capita economic growth through productivity gains, so any nation that fundamentally reduces the costs of learning has a global competitive advantage. History is replete with examples, most recently the United States’ productivity gains in the 1990s due to the expansion of the Internet and information technology.
In 2010, as the world pulls out of a dramatic economic downturn, we are on the cusp of a new period of innovation and growth, driven by wireless broadband, mobile devices, and ubiquitous connectivity. Rather than succumbing to Big Brother policy demands, citizens should rally for a Little Brother future, where everyone has the maximum chance to create the next Google, Facebook, or Twitter. A Little Brother future relies on net neutrality (unrestricted movement of data) and a reasonable balance between the rights of content owners and music, movie, and content fans.
This is the future where artists and audiences have the best chance to find each other and create the next great ideas for Big Content. New media companies and business strategies can only emerge if the regulatory framework enables experimentation. Recent examples abound. EMI Music led the way with DRM-free music and a focus on the music experience; the company announced a significant upturn in revenues. YouTube was able to launch and grow while the courts worked on the legal questions related to hosting user-uploaded videos. Trent Reznor, founder of the rock band Nine Inch Nails, works with fans to create new media projects. Second Life’s users created successful virtual venues for live music. A Little Brother future ensures that innovators continue to have the space to try.
As the last decade has taught us, they will try and try! And TRY! This doesn’t guarantee Big Content’s ongoing success, but when compared with a Big Brother future where their demise is guaranteed, the choice is obvious. Big Content should be joining the rest of us as we lead the charge for a Little Brother future.
About the Author
Cory Ondrejka is the former executive vice president of digital marketing for EMI Music and the co-creator of Second Life. He’s also an entrepreneur, speaker, advisor, and a nonresident fellow at the Network Culture Project at the Annenberg School for Communication at the University of Southern California. He can be reached at cory.ondrejka(at)gmail.com. Blog: http://ondrejka.net
Note: The choice of “Little Brother” was inspired by the novel of the same name by Cory Doctorow, which examines life in today’s Big Brother world.
An Internet entrepreneur and Web critic is trying to remake the Internet from within.
The following interview was conducted by FUTURIST senior editor Patrick Tucker.
THE FUTURIST: You’re perhaps the most outspoken critic of Web 2.0 and Internet culture to participate in Internet culture. You’ve railed against Twitter and Facebook even though you subscribe to both. What do you see as your mission?
Andrew Keen: I’m ambivalent about Facebook and Twitter and almost all of these things. But as a speaker and a social critic, I have an economic incentive in finding an audience. As mainstream media cracks up, the only way to build a brand successfully is to use a service like Twitter. That’s not to say that tweets in themselves have intrinsic value or will ever have intrinsic value. You’ll never be able to sell a tweet, no matter how beautifully crafted. I don’t have to admire or improve what’s happening, but I can’t be a Luddite, either. The people in the nineteenth century who refused to acknowledge the significance of the Industrial Revolution were swept away.
THE FUTURIST: You’ve been very vocal about how today’s Internet culture erodes privacy. Do you envision a future in which privacy neither exists nor is particularly missed? If so, how does someone with no conception of privacy behave? What is the culture like?
Keen: I do envision such a possibility. In a culture with no concept of privacy, there wouldn’t be an inner life. Nothing would be kept to ourselves. We would lifestream 24 hours a day. The John Stuart Mill idea of the good life, with a clear delineation between inner and outer life, is turned on its head. I hope we never see it.
But you can already sense the way the Internet and artificial intelligence are tearing down the notion that we should have a distinction between public and private. I’m terribly hesitant about terminology like “transparency,” which suggests that businesses, institutions, even professionals, should try to put as much of themselves online as they can to reassure the public about their activities. This portrays that shift as something good, as more evolved. What does this lead to? Perhaps a culture of constant self-arrest, where we’re afraid to do anything because of how it may appear to others. Perhaps we’ll live vicariously through our AI entities.
THE FUTURIST: You’ve compared the Internet revolution to rock ’n’ roll, but it seems that the Internet revolution has the potential to be more hopeful. After all, rock ’n’ roll coincided with the rise of Jacques Derrida and the deconstructionist philosophical movement. Many argue that the appeal of rock ’n’ roll was the way it presented a violent teardown of prior musical forms. The Internet, by definition, is about construction, building the future. How exactly is the Internet like rock ’n’ roll?
Keen: The Internet is more closely related to the rock ’n’ roll culture of the Sixties than to that of the Fifties. Richard Florida, who wrote The Rise of the Creative Class, has talked about this. He makes a good point that the Internet, technologically, rose from the military-industrial complex of the Fifties, but the culture of it is better represented by the counterculture of the Sixties. It’s not that the Internet is like the Sixties; it is the Sixties.
The primary difference is that rock ’n’ roll generated a lot of money for certain types of people — namely, record companies and artists. The heroes of the Sixties were the rock stars and the counterculturalists. Many of them were obsessed with a childish revolt against authority, but some of them were remarkable. The wealth that’s been created out of the Internet revolution has been monopolized by technologists. The heroes of this age are entrepreneurs like the Google boys rather than the creative artists who have been relatively ignored for the most part.
The ultimate irony is that the artists were the radicals in the Sixties. In the Internet age, they’ve been rendered conservatives. Look at what happens to singers who try to defend intellectual property rights, and at the vitriol of the attacks to which they’re subjected.
Similarly, nothing of much intellectual or cultural value has come out of the Internet as an artistic medium. Intellectuals are able to use it to peddle their own brands and ideas, but I’ve seen little Internet-based art with any lasting value. There haven’t been any real Internet movies; there hasn’t been a truly affecting Internet novel, though many have tried; Internet music has been basically a failure. Even when it’s successful, it isn’t created on the Internet; it’s just distributed on the Internet. Google, for example, is a remarkable company built by two computer scientists. Has it contributed to culture? At best, Google has undermined traditional media and destroyed it, and in so doing, it’s destroyed the way artists make money and the way experts make their livelihoods.
What does this mean for us now? The crusty old academic in a chair with a pipe reading books and giving lectures won’t work in the twenty-first century, but I don’t believe expertise will be swept away. I hope it will be modernized. Today’s expert needs to learn how to ride the wave, which requires not only wisdom, but speed.
THE FUTURIST: Watching these trends over the last few years, have you grown more optimistic or more pessimistic?
Keen: I’m more optimistic than I was when I wrote Cult of the Amateur, which was published in 2007 (St. Martin’s). People have begun to realize that Wikipedia isn’t reliable, that most of the stuff on YouTube has no value, that a tweet, by definition, cannot be wise. My hope is that by 2020 experts will be able to flood back into the production of culture. A legal scholar can tweet as well as a 12-year-old. It’s not the technology that undermines the expertise; it’s a cultural disrespect for authority and even for learning.
But culture is changing. We’ll require new experts to help us understand how that’s happened and what it means, particularly for education. Hopefully, people who are smart and well educated, particularly in the humanities and the social sciences, will seize back the tools of production and realize that they can have quite an impact by distributing their wisdom.
About the interviewee
Andrew Keen is an Internet critic and the author of Cult of the Amateur: How the Internet is Killing Our Culture (St. Martin’s, 2007). As an Internet entrepreneur, he founded the music site Audiocafe.com in 1995. His second book, Digital Vertigo: Anxiety, Loneliness and Inequality in the Social Media Age, will be published by St. Martin’s Press in 2010.
A humanist is spreading the gospel of godlessness, respectfully.
By Roy Speckhardt
A modern look at religion and spirituality yields a mix of potentialities. On the one hand, there’s testimony and evidence that religion and spirituality can benefit people. After all, there’s an obvious social and political benefit in adhering to the beliefs of the majority. And there are also indications that both psychological and physical health are stronger among the faithful. On the other hand, spiritual faith has been a source of conflict and authoritarian control. Disputes, terrorism, and war are rarely rooted solely in religious disagreement, but faith surely fans the flames of such conflagrations. And faith is also a tool used by those in power to retain their control. As we look to the future, we see that traditional spirituality may bring us both good and harm. But will it persist?
From Friedrich Nietzsche to H. L. Mencken to Christopher Hitchens, many who were convinced of religion’s negative impact falsely predicted its demise. Those whose worldviews are solidly built with a frame of logic upon the firm foundation of knowledge often forget that they are in the minority. Just because faith requires adherence to unproven and unprovable assertions does not mean that such ideas will be abandoned now, or even over time. Much more likely is that the human need for resolution, the tendency to hold on to what’s desired, and simple inertia will maintain spiritual faith indefinitely.
While religion and spirituality may persist, they will certainly not look as they do today — not 10 years from now, and not in the more distant decades. History has shown the evolution of religion from tribal animisms and other polytheistic faiths to monotheistic ones. A few religions, including some modern schools of Buddhism, New Age worldviews, and religious philosophies, are even in the realm of “post-theological.”
One steady change we’ve seen is the lessening impact of traditional religion on richer societies. Where once religion held equal sway over political, social, and spiritual domains, we’ve seen that authority recede. Political authority is rarely granted today in the same way it was under Holy Roman emperors and the divine right of kings. Social control in the West is far less stringent than it once was, with the churches losing their hold on rules surrounding courting, marriage, and the family. Even mainline churches are seeing their domain shrinking as discoveries provide testable explanations for movements of the stars, the origins of species, and the birth of the universe. For some, these sorts of answers remove the need to rely on spirituality.
As we move into the future, one can predict where traditional spirituality will continue to lose its authority. The churches will eventually surrender their losing battles over gay marriage, a woman’s right to choose (abortion rights), and the maintenance of stereotyped gender roles. But they will also lose in struggles that are just beginning.
The prejudice seen commonly among the faithful today — that goodness can only come through godliness — will be less and less accepted. As more and more of the 10%–15% of the population who are atheists and agnostics come out of their closets to their friends, family, and neighbors, it will be difficult to hold to the claim that so many lack the ability to lead productive moral lives. As that prejudice breaks down, religion and spirituality will begin to lose their connection to goodness in general. No longer will it be a social liability to voice secular principles and rationalist grounding.
In a world like this, being part of secular humanist communities will be an accepted alternative to traditionally religious ones, and even preferred over increasingly irrelevant fundamentalist faiths. And fundamentalists will no longer get the support they need to impose religion in the military, the democratic process, and the public schools. When the time comes to mark marriages, funerals, and the like, evocative and inspiring humanist ceremonies will become the norm for these life events, because they address the needs of an increasingly diverse culture.
The scientific method, with its basis in observation, analysis, and experimentation, will be seen as the driving force for determining valid choices for public policy. People will understand that science is a way to seek answers, not something to “believe” in, and polls will show vast majorities accepting human evolution over creationism, supporting comprehensive sex education, and understanding how human intervention impacts our environment.
As politicians campaign for public office, the days when it was political suicide to be a humanist or an atheist will be long gone. Just as Jack Kennedy worked to show that his political actions were separate from the Holy See, future political leaders will go even further, positioning their belief systems in humanistic terms, with religious candidates pointing out that, while they believe in a higher power, they base their decisions on the here and now.
That may not all come to pass by 2020, but progress should be clearly visible. Though Christianity and other religious paths will remain, the writing will be on the wall for the end of Christian social and political dominance in the United States.
With all these changes occurring, what will the new spirituality look like? Perhaps the word spirituality will slip from usage, since it’s derived from something so debatable, but the idea of shared values and a unified vision for the future will remain.
Humanists will encourage empathy, along with the compassion and sense of inherent equal worth that flows from it, in a way that honors human knowledge about ourselves and our universe. This means applying the scientific method to our pursuit of happiness, a pursuit we recognize as not just a solitary one, but one for us to strive for as a society. When we look at the world in this way, we discover that self-improvement, doing for others, and working to improve society are the keys to deep-seated happiness.
Those ideals are consistent with many traditional morals, like integrity, fidelity, and an independent work ethic. In 2020, most people will no longer regard religious ideas as outside the realm of analysis and critique. Respect for the various gods will diminish, but respect for parents, teachers, and others who’ve accumulated knowledge should increase. Holding to sacred days and geographies will become less prevalent, but an appreciation for diverse expertise will be cultivated. The finality of death will be a challenge for many to grapple with, but fear of the unknown will be replaced with greater curiosity and an acceptance of uncertainty.
So, looking to the future, 10 years from now and further down the road, we see a changing landscape for spirituality. Religious faith, with its positives and negatives, will persist. While mainstream faiths will remain part of U.S. culture, traditional and fundamentalist religious ideas will recede. As they lose their reach, rational, universal answers will take center stage.
About the Author
Roy Speckhardt is executive director of the American Humanist Association, where he actively promotes the humanist perspective on political issues. He serves as a board member of the Humanist Institute and the United Coalition of Reason and as an advisory board member of the Secular Student Alliance. He lives in Washington, D.C.
A Buddhist teacher brings the dharma, both digitally and in person.
By Ayyā Gotamī, Dr. Rev. Prem Suksawat
What is the future of spirituality? To answer, let’s look at its recent past. Many individuals around the world, especially in the developed West, put less emphasis on spirituality and more faith in science and technology to solve their problems. They sought to break with religious authority. The last century was marked by rapid change, and this century surely will be as well. Change has an enormous effect on the human psyche — the estrangement many of us feel in the twenty-first century is only worsening, at ever-escalating rates. The tsunami of technology has done nothing to assuage the problem; indeed, it is a major force in the surge of feelings of collective alienation.
Computing technology is the most striking example. It has become an indispensable survival tool for most, yet the hardware and software often have short, costly lives in terms of both money and time. Even a nun like me is not immune to this! Especially for young people, technology can be a real addiction. Instead of doing the physical activity the human body was designed to do, many young people spend long hours in front of computers and rely on electric and electronic devices to perform almost every human task.
Technology may be at the root of many of the spiritual problems related to modernity; but technology, combined with spirituality, can be part of the solution to feelings of emptiness and despair. Far from becoming obsolete, spirituality can play a larger and more important role in the lives of people around the world if spiritual teachers and leaders adapt to modern life and use technology to reach a wide audience, sustainably and with minimal cost.
However, they must also work with their students and congregations on a very personal level. Even those who do not realize it are in desperate need of compassion and human understanding, and there are times that “virtual spirituality” will not be adequate. Nothing about spirituality, in my opinion, needs to be redefined; however, it is critical that we make the right use of the new methods of communicating to ensure its usefulness to coming generations.
For example, Buddhism in the text-based Theravāda tradition is inherently rational, logical, and often scientific. The mandate to use the texts as they are and not to update them to modern circumstances provides an excellent opportunity to teach the theology while applying its lessons to present-day problems. A learned and most-likely ordained person can find sections of text, such as helpful allegories, that are applicable to almost any modern situation. Religious texts highlight what is universally true about human suffering. This is why they’ve endured and why I believe they will endure into the future. For students and young people, finding that people from thousands of years ago shared their difficulties is a great discovery and source of comfort.
Science and religion will not only coexist in the twenty-first century; they will reinforce one another. I have a mental-health background; I draw upon that knowledge to assist people who come to me for guidance and support. I also have students to whom I lecture regarding their religious studies. I draw parallels between modern mental-health practices and Buddhist teachings. In the future, most ordained individuals should expect to have some counseling function in their roles, focusing on practical, daily living in the twenty-first century (and by this I mean “counseling” strictly in the Western sense of psychological guidance; the same word is used, in some countries, to refer to superstitious and sometimes exploitative rituals).
Religious practitioners will rely on technology to reach out to more people. Many monks, nuns, and other ordained people have spread knowledge via videos, blogs, wikis, etc. Many people have switched from attending sermons at churches or temples to watching them on TV or online due to transportation and time constraints. I teach more than 200 students around the world. We take as much advantage as we can of electronic and online reference materials. I run retreats and Dhamma Talks via online chat. I have frequent phone and chat discussions with individual students.
However, I am continually surprised by the number of students who still put forth the effort and expense to visit me in person to gain real, human support. Recently, I visited the Fo Guang Shan (FGS) monastery where Ven. Hsing Yun leads a practice of Humanistic Buddhism (utilizing Buddhism to fit the needs of the present world). FGS is based in Taiwan and has branches around the world. Apparently, China, one of the greatest examples of a developing country rapidly taking on the problems of the West, has asked them to establish more temples there. This shows that people need spiritual support more and more, and they need it where they live, not just via the Internet.
I predict that more and more people will begin to visit their priests, rabbis, and pastors again because the technology will not be able to replace warm gestures from real, live human beings. While we need not update our scriptures, we must certainly update our practices to suit the real needs of people as they evolve; this means both the high-tech and the high-touch aspects. By doing this, many of them will gain a feeling of security that will allow them to make positive life changes.
About the Author
Ayyā Gotamī (Dr. Rev. Prem Suksawat) is the abbess of the Dhamma Cetiya Buddhist Vihāra, in West Roxbury, Massachusetts. She founded the temple in 1997, converting her former lay residence to propagate Buddhism in the United States. Prior to her ordination, she spent 14 years as an anāgārika (“homeless monk”). Her lay career included experience in the mental-health field in public and private agencies, where she specialized in the impact of cultural differences on individuals. She uses this background to integrate Western psychology and psychiatric treatment with Buddhist teachings to help and educate her students.
Published in THE FUTURIST, May-June 2010
In this third installment of the 2020 Visionaries series, we look at the future of the global environment and of democracy — two areas of concern that will increasingly intertwine in the next 10 years.
Over the course of the last century, humans took over the evolution of our species from nature. From huge public works projects visible from space to designer protein species that companies like Maxygen can manufacture on demand, evidence of our escape from the Darwinian imperative is all around us.
This artificial evolution has proceeded 10 million times faster than natural evolution, according to one scientist with whom I spoke. The results include not only exponential scientific progress and increased longevity and quality of life, but also human-engendered global warming, pollution, deforestation, and the threat of mass species extinction. The eventual collapse of the ecosystem is becoming the overwhelming issue for our time, as the European Commission on Key Technologies first declared in 2005. There are 30% too many people for the ecosystem to support sustainably.
The time has come to evolve the way that we evolve.
How do we reduce our species’ impact on the planet? Or has the opportunity to apply that solution already passed? If so, what are the last-resort options available to us, and what are the risks and obstacles?
In these essays, Dennis Bushnell, chief scientist at the NASA Langley Research Center and a WorldFuture 2010 featured speaker, provides an overview of the scope of the climate crisis and the weapons against it that we have at our disposal. The problem is larger than you’ve probably imagined, but so is the arsenal of tools we can bring to bear against it.
Next, Jamais Cascio, author of Hacking the Earth, will explain the potential and pitfalls of geoengineering, which refers to the deliberate manipulation of the earth’s natural systems to fight global warming. Both foresee a radical break in the way human beings relate to the Earth. It’s a change that’s long overdue.
Ian Bremmer, head of the world’s largest political-risk consultancy, is widely regarded as the go-to expert on the intersection of geopolitics and business. In his new book, The End of the Free Market, Bremmer describes the rise of a new geopolitical force — state capitalism — a form of government where political elites use state-owned companies and sovereign wealth funds to entrench their power, and where markets are rigged for political gain. China is the quintessential example: After recording year-over-year expansion while the United States experienced the worst recession since the Great Depression, it has become the poster child for state capitalism’s success. This has changed the landscape for the United States, the spread of democracy, and the future of free markets.
In his previous book, The J Curve, Bremmer argued that information technology in the hands of citizens would make it increasingly difficult for authoritarian governments to operate. In his new book, he acknowledges that advances in communications technology have not yet proven their ability to topple dictatorships. He argues that, unless there is widespread, grassroots demand for democracy, “these new tools will simply be used for other purposes.”
We asked Bremmer about his new book, the future of Sino-U.S. relations, and the changing face of freedom and prosperity in the next decade and beyond. We contrast his answers with those of American Enterprise Institute scholar Michael Rubin, who also generously donated his time to the project.
We also spoke to Azar Nafisi, human-rights advocate, fellow at the Johns Hopkins School of International Studies, and author of the international bestseller Reading Lolita in Tehran. In her second memoir, Things I’ve Been Silent About, she tells of life growing up in Iran as the daughter of the mayor of Tehran, before and after the 1979 Iranian Revolution.
In one poignant section, she retells a story by Shahrnush Parsipur about an old man who meets a British foreigner. The British colonialist confronts the Iranian with the fact that the earth is round. For several days, she writes, the old man “contemplates the foreigner’s presence, the roundness of the earth, the future changes and upheavals and finally announces ‘yes, the earth is round; the women will start to think, and as soon as they begin to think they will become shameless.’” The anecdote serves as a metaphor for globalization and the clash of cultures that follows the spread of Western ideals. The story also inspired Nafisi to write about her life and the lives of others she calls women without shame.
We asked her about what Iran’s history means for its future, and the effects of technology on democracy around the globe.
--Patrick Tucker, senior editor.
By Dennis M. Bushnell

Carbon-dioxide levels are now greater than at any time in the past 650,000 years, according to data gathered from examining ice cores. These increases in CO2 correspond to estimates of man-made uses of fossil carbon fuels such as coal, petroleum, and natural gas. The global climate computations, as reported by the ongoing Intergovernmental Panel on Climate Change (IPCC) studies, indicate that such man-made CO2 sources could be responsible for observed climate changes such as temperature increases, loss of ice coverage, and ocean acidification. Admittedly, the less than satisfactory state of knowledge regarding the effects of aerosol and other issues make the global climate computations less than fully accurate, but we must take this issue very seriously.
I believe we should act in accordance with the precautionary principle: When an activity raises threats of harm to human health or the environment, precautionary measures become obligatory, even if some cause-and-effect relationships are not fully established scientifically.
As paleontologist Peter Ward discussed in his book Under a Green Sky, several “warming events” have radically altered the life on this planet throughout geologic history. Among the most significant of these was the Permian extinction, which took place some 250 million years ago. This event resulted in a decimation of animal life, leading many scientists to refer to it as the Great Dying. The Permian extinction is thought to have been caused by a sudden increase in CO2 from Siberian volcanoes. The amount of CO2 we’re releasing into the atmosphere today, through human activity, is 100 times greater than what came out of those volcanoes.
During the Permian extinction, a number of chain-reaction events, or “positive feedbacks,” resulted in oxygen-depleted oceans, the overgrowth of certain bacteria, copious amounts of hydrogen sulfide, a toxic atmosphere, and a decimated ozone layer, all contributing to species die-off. The positive feedbacks not yet fully included in the IPCC projections include the release of massive amounts of fossil methane (some 20 times more potent than CO2 as an accelerator of warming) and of fossil CO2 from the tundra and oceans; reduced oceanic CO2 uptake due to higher temperatures, acidification, and algae changes; a reduction in the earth’s ability to reflect sunlight back into space due to the loss of glacier ice; changes in land use; and increased water vapor (itself a greenhouse gas) from rising temperatures.
The additional effects of these feedbacks increase the projections from a 4°C–6°C temperature rise by 2100 to a 10°C–12°C rise, according to some estimates. At those temperatures, beyond 2100, essentially all the ice would melt and the ocean would rise by as much as 75 meters, flooding the homes of one-third of the global population.
Between now and then, ocean methane hydrate release could cause major tidal waves, and glacier melting could affect major rivers upon which a large percentage of the population depends. We’ll see increases in flooding, storms, disease, droughts, species extinctions, ocean acidification, and a litany of other impacts, all as a consequence of man-made climate change. Arctic ice melting, CO2 increases, and ocean warming are all occurring much faster than previous IPCC forecasts, so, as dire as the forecasts sound, they’re actually conservative.
These threats exist in addition to the documented economic, geopolitical, and national-security issues associated with the continued use of fossil fuels. The finite nature of coal, oil, and natural gas will drive energy prices higher and make price disruptions more frequent. According to some credible estimates, the world will reach “peak” oil availability before 2015, peak uranium around 2025, peak natural gas around 2035, and peak coal around 2050. Because of these climatic, economic, national-security, and geopolitical drivers, it makes sense to alter our energy sources and uses expeditiously.
Conquering Climate Change
The world currently derives 300 exajoules (83 million gigawatt hours) of energy from fossil fuel use each year. The major renewables — such as biomass, drilled or hot rock geothermal, solar thermal, solar photovoltaics, and wind — could yield 4,000 exajoules per year each. In my previous article for THE FUTURIST magazine, I touched on the potential of genetically engineered saltwater algae, and I would reiterate my enthusiasm for that solution here.
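To put those figures in perspective, the exajoule-to-gigawatt-hour conversion above can be checked with a few lines of arithmetic. The sketch below is purely illustrative: the 300-exajoule and 4,000-exajoule figures are the essay’s own numbers, and only the unit conversion is independent physics.

```python
# Back-of-envelope check of the energy figures quoted in the essay above.
# 1 exajoule (EJ) = 1e18 joules; 1 gigawatt-hour (GWh) = 1e9 W * 3600 s = 3.6e12 J.
JOULES_PER_EJ = 1e18
JOULES_PER_GWH = 3.6e12

fossil_use_ej = 300            # annual fossil-fuel energy use cited in the essay
renewable_potential_ej = 4000  # per-source renewable potential cited in the essay

fossil_use_gwh = fossil_use_ej * JOULES_PER_EJ / JOULES_PER_GWH
print(f"{fossil_use_ej} EJ/year is about {fossil_use_gwh / 1e6:.0f} million GWh/year")
# Prints roughly 83 million GWh/year, matching the figure in the text.

print(f"Each major renewable could supply roughly "
      f"{renewable_potential_ej / fossil_use_ej:.0f}x current fossil-fuel use")
```

Run as written, the script confirms that 300 exajoules is about 83 million gigawatt-hours, and that the quoted 4,000-exajoule potential would cover current fossil-fuel use more than a dozen times over.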
There are several other intriguing renewable alternatives, such as a number of wind-energy systems that merit more research. These include not only terrestrial and offshore wind projects, but also high-altitude wind-energy farming. Estimates of the high-altitude wind capacity off the East Coast indicate the presence of enough potential energy to meet U.S. electrical grid requirements.
Researchers are also considering several unconventional sources of heated water with huge potential capacity. These include harnessing the waste water sitting in deep oil wells that’s been geothermally heated and tapping the Gulf Stream off the U.S. East Coast. Researchers at MIT have documented the potentials of drilled or hot rock geothermal energy.
Oceanic thermal energy conversion (OTEC) uses the temperature differences in the ocean to run turbines and produce energy. In tropical climates, the surface of the water, continually exposed to the sun, can reach temperatures of 80°F. Some 3,000 feet below the surface, the temperature descends to 40°F. This temperature difference, harnessed correctly, is enough to drive generators. New research suggests that descending to depths of 3,000 feet and lower may not even be necessary, as very cold water actually runs alongside the Gulf Stream and can be tapped horizontally. Studies from the University of Massachusetts suggest that this type of OTEC could produce sufficient energy to power the U.S. electrical grid.
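A rough thermodynamic sketch, offered here as an illustration rather than anything from the essay, shows both why such a small temperature difference can drive generators and why OTEC plants must move enormous volumes of water: the Carnot limit on an 80°F-to-40°F gradient is only about 7%, and real plants capture less than that.

```python
# Illustrative Carnot-limit estimate for the OTEC temperatures quoted above.
def fahrenheit_to_kelvin(deg_f: float) -> float:
    """Convert degrees Fahrenheit to kelvin."""
    return (deg_f - 32.0) * 5.0 / 9.0 + 273.15

t_surface = fahrenheit_to_kelvin(80.0)  # warm tropical surface water (from the text)
t_deep = fahrenheit_to_kelvin(40.0)     # cold water roughly 3,000 feet down (from the text)

carnot_efficiency = 1.0 - t_deep / t_surface
print(f"Ideal (Carnot) efficiency: {carnot_efficiency * 100:.1f}%")
# Prints about 7.4%. Actual OTEC plants convert only a few percent of the heat
# they move, which is why grid-scale OTEC implies pumping very large water flows.
```

The point of the calculation is simply that OTEC’s promise rests on the sheer scale of the oceanic heat reservoir, not on high conversion efficiency.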
These are among the more exotic solutions, but simple conservation could reduce overall energy use by 30%. In the United States alone, some 200,000 homes are off the electrical grid. The technology for this type of distributed power generation, where individuals are much less beholden to utility companies, is developing rapidly. Tomorrow’s off-the-grid pioneers will use next-generation photovoltaic panels, windmills, solar thermal, passive solar, thermoelectrics, and bioreactors, which convert sewage, yard waste, and kitchen scraps into fuels.
Nuclear power could play a larger role if we moved to breeder reactors, which generate more fissile material than they consume, and switched from uranium to thorium, which is three times as abundant. Otherwise, nuclear is probably not a major portion of the energy solution space; renewables remain the less costly option.
Skeptics such as former U.S. Energy Secretary James Schlesinger have raised concerns about the difficulty of storing energy from renewable sources, as opposed to oil or coal. But geothermal energy and biomass produce power continuously, 24 hours a day, 365 days a year. Wind, photovoltaics, and solar thermal power plants are, of course, cyclical — when there’s no wind or light, there’s no power — but storage options are increasing daily. Future batteries will take advantage of new technologies that will make them orders of magnitude more efficient than today’s chemical battery options. Researchers at Sandia National Labs are already studying the practicality of batteries using ultracapacitors and superconducting magnetic energy storage with carbon nanotube magnets. Low-energy nuclear reactors (LENRs), otherwise known as cold fusion reactors, were considered impossible to build a decade ago but are gaining attention thanks to the work of Allan Widom and Lewis Larsen, who have proposed a new theory to explain how LENR might work. NASA is conducting experiments in an attempt to verify their theory, which explains the decades-long LENR experiments as products of quantum weak interaction theory applied to condensed matter, not fusion.
The footprint of human civilization on this planet is now so large, covering so much geographical area, that we can even have a meaningful effect on climate change simply by painting our roofs and roads white to reflect more sunlight back into space.
The costs of fossil carbon fuels are increasing, and this trend will accelerate due to potential “carbon taxes,” but mostly due to worsening shortages. The costs for the renewables have been dropping for years. Many, such as certain biofuels, are already economically competitive with fossil fuels, and all renewables are projected to be as cheap as oil and even coal within some 10 to 15 years or sooner. If governments mandate that power companies that run coal-fired plants sequester their waste CO2, the costs of coal use will go up, hastening its inevitable replacement.
If, by the year 2020, we’ve passed a critical climate tipping point and guaranteed future generations a much more difficult future, it won’t be because of a lack of available solutions today. It’s not technology, capacity, or costs per se that are slowing humanity’s move to renewables, but rather conservatism, our attachment to the industries and strategies we’ve already invested money in (sunk costs), and lack of creative strategic planning for the inevitable demise of fossil fuels.
About the Author
Dennis Bushnell is the chief scientist at the NASA Langley Research Center in Hampton, Virginia, and a speaker at the World Future Society’s conference in Boston this July. His previous article for THE FUTURIST, “Algae: A Panacea Crop?” was published in March-April 2009. Web site www.nasa.gov/centers/langley/home/index.html.
By Jamais Cascio

When it comes to our planet’s environment, “isolationism” is impossible. By the year 2030, environmental policy and political policy will be completely inseparable at the global level. Arguably, that should be the case today. As we gain a deeper understanding of ecosystem processes, we’re seeing how actions on one side of the world can dramatically change the lives of people on the other. Interconnectedness is a challenge we can’t run away from.
By 2030, though, the connection between politics and the environment could show up in two very different ways: as a catalyst for war, or as a new model for handling complexity.
The Complexity of Environmental Challenges, the Challenge of Complexity
Global delays in reducing carbon emissions will likely force the human race to embark upon a set of geoengineering-based responses, not as the complete solution, but simply as a disaster-avoidance measure.
Geoengineering, the deliberate manipulation of the earth’s natural systems, may include various forms of thermal management, such as stratospheric sulfate injections or high-altitude seawater sprays, and might also embrace some form of carbon capture via ocean fertilization, or even something not yet fully described. The mid-2010s is the probable starting period for these strategies, in my view. Geoengineering advocates may see the mid-2010s as already too late, while opponents would likely want more time to study their models.
Once we start down the geoengineering path, we’ll see that talking about it is much simpler than doing it. The unexpected feedbacks and unintended consequences would quickly become manifest, and the reactions could be volatile. Planetary management could become a political flashpoint, leading to outbreaks of violence, especially if different regions have divergent results or demand incompatible outcomes. A good portion of international diplomacy would focus on just how to control climate engineering technologies and deal with their consequences.
Ours will be a challenging world to navigate in the next couple of decades, and not just because of conflicts over who’s in charge or who’s to blame for which problems. We’ll be dealing with multiple complex global system breakdowns, from the ongoing financial-system crisis to peak oil production, climate disruption, and the very real possibility of food-system collapse. These crises demand greater information analysis, longer-term thinking, and more accountability than traditional forms of global politics have tended to offer. For centuries, nations have been ready to commit “hard power,” military force, when necessary to push their interests. In the twentieth century, nations recognized the value of “soft power,” cultural influence, as a way of gaining allies. But these multifaceted system problems don’t lend themselves to either the hard or the soft power approach. They call for a new model suited to the needs of the new century.
It’s hard to exaggerate the sheer complexity of the situation. If the great obstacle to our continued survival and prosperity as a species were “just” global warming, achieving success would be tricky but doable. The challenge we face is global warming plus resource collapse plus pandemic disease plus post-hegemonic disorder plus the myriad other issues.
Nonetheless, there are reasons for optimism.
Solutions to Complex Challenges
We know what we need to do to mitigate climate change, and we have the necessary technology. What we’re missing, more than anything else, is the political will. But politics and society can change — we’ve seen it happen before. It might take a generational shift, it might take a disaster (or three), or it might just come from an expanding understanding of what we’re doing to the planet. It will take a lot of people working on fixes and solutions and ideas — not simply top-down mandates, but massively multi-participant quests, across thousands of communities and hundreds of countries, bringing in literally millions of minds. The very description reeks of innovation potential.
• Innovation in energy. A mix of nuclear, wind, solar, and a few others, such as ocean thermal energy conversion and hydrokinetic power, will overtake fossil fuels by the 2020s, even if China and India retain coal-fired power plants. If handled poorly, such recalcitrance may end up being a driver for significant global tension. If handled well, it could be an engine for new markets and development.
• Innovation in urbanization. More than half the planet lives in cities today, and that proportion is increasing quickly. Sensor dust, embedded computing, augmented reality, and a host of other emerging technologies hold the potential to “awaken” cities as smart environments. But “smart city” has to mean more than just lots of urbanites knowing their own carbon footprint; it must come to refer to a far better understanding of what can be done to improve things.
• Innovation in materials and manufacturing. By the year 2030, molecular fabrication (“nanofactories”) will significantly boost the world’s productive capacities. Although nanofactories have the potential to pose another complex system problem, the kinds of political institutions and models we’ll be forced to develop in response to ongoing environmental crises can serve as platforms for handling issues such as this one. If we can handle the political and social complexities of global warming (and likely geoengineering) in the 2010s and 2020s, we’ll be well-positioned to handle potentially even more disruptive events as the century continues.
Then the Singularity happens in 2048 and we’re all uploaded by force.
I’m kidding about that last one.
I think.
About the Author
Jamais Cascio is a writer, futurist, and ethicist based in the San Francisco Bay Area. He specializes in design strategies and possible outcomes for future scenarios. Cascio has written for The Atlantic and The Wall Street Journal, and is the author of Hacking the Earth: Understanding the Consequences of Geoengineering (Lulu.com, 2009). He was one of Foreign Policy magazine’s “Top 100 Global Thinkers” for 2009.
The Futurist Interviews American Enterprise Institute scholar Michael Rubin
THE FUTURIST: What do you see as the best strategy the U.S. might employ to further the cause of human rights in Iran?
Rubin: First and foremost, the White House should use its bully pulpit. After this past summer’s election protests erupted, the Obama administration muted its response, fearing that to throw support to the protestors might taint them. This is a valid concern, but there is no reason why the White House and the State Department can’t speak up for broad principles, such as democracy, justice, free speech, and free association.
After the Berlin Wall fell, we discovered that presidential rhetoric meant more to dissidents than we ever imagined. There’s a tendency today to want to address human rights issues silently, but discreet diplomatic inquiries are rarely as effective as public support. Regimes prefer to murder in silence; when a dissident becomes a public symbol, not only does the cost associated with a dissident’s imprisonment or murder increase, but the dissident’s story can be a driving force in mobilizing public pressure, as it humanizes the abstract. We saw this in 1999, when Ahmad Batebi became a symbol of the student uprising after appearing on the cover of the Economist holding a bloody shirt, and again in 2009, when 16-year-old Neda, shot in the street by the paramilitary Basij, became a symbol of the situation in Iran.
The U.S. government should take care against bestowing undue legitimacy upon the regime. When Iranians are taking to the streets to protest not only the legitimacy of their post-election government, but also their system of government, the White House’s reference to the Islamic Republic of Iran implies endorsement of the theocracy, and its efforts to engage a government that the Iranian electorate does not support also imply recognition. Instead, the White House and State Department might direct their comments to the Iranian public in general and, if necessary, simply refer to the ‘Iranian government’ or the ‘regime,’ as every president—whether Democrat or Republican—did until President Obama changed the formula.
Most controversially, it is important for the U.S. government to consider aid and assistance to Iranian civil society and independent media. For example, the State Department, working through non-governmental intermediaries, might assist programs that seek to document Iranian human rights abuses or help independent trade unions organize. Fears that U.S. funding might undercut the opposition and strengthen the regime are real, but misplaced. Opponents of civil society support argue that the presence of funding enables the Iranian government to taint all civil society work. The problem with this perspective, however, is that the Iranian regime always accuses its opponents of foreign connections regardless of U.S. action, so supporting civil society would not appreciably alter Iranian behavior. If fear of Iranian rhetoric toward its own internal opposition were to shape U.S. policy, then we’d also have to rule out dialogue, since Iranian security forces have taken to accusing any Iranian who engages with American institutions—Yale University and the Carnegie Endowment, for example—of treason.
THE FUTURIST: What about in China, where the attendant economic risks from the Chinese sale of U.S. Treasuries are much greater?
Michael Rubin: U.S. support for human rights and free speech might antagonize the Chinese government a bit, but the chance that Beijing would respond in this fashion is slight to none. It’s simply not in the interest of the Chinese government to sabotage the United States economy to that extent given the level of U.S.-Chinese trade. At the same time, turning a blind eye toward abuses in China also has some inherent, even if indirect, risk. The Chinese government has no incentive to reform and to correct government abuses against its citizenry. Economic disparities run deep from the coast into the heartland. Absent an outlet for dissent and a system that forces the government to be accountable to the people, there is an inherent risk of wildfire outbreaks of instability in China. Certainly, gentle U.S. prodding for democratization in China is in both countries’ long-term interests.
THE FUTURIST: Do you see the Iranian regime persisting in its present state until the year 2020? What might happen when it fades from existence?
Michael Rubin: If we take a snapshot of Iranian demography, it might look like the Islamic Republic is in trouble. The Iranian economy is stagnant, living standards are declining, and the regime can’t provide enough work for young people finishing the university. Time is, unfortunately, working in the regime’s favor. In the years immediately after the Islamic Revolution and Saddam Hussein’s invasion of Iran, Ayatollah Khomeini encouraged large families. The regime put up posters showing ‘a good Islamic family’ with a mother, a father, and six children. After the Iran-Iraq War ended in 1988, the Iranian government realized that it could not handle such a large population. Suddenly posters appeared depicting ‘a good Islamic family’ as having a mother, a father, and just two children. As Patrick Clawson, an economist at the Washington Institute for Near East Policy, points out, the Iran-Iraq war years’ baby boomers are in their 20s, precisely the age of the protestors. In five years, however, the number of 20-somethings is going to decline while the current protestors are going to be in their 30s and beginning to settle down with young families, their personal priorities elsewhere.
The regime is nervous, though. There is no question that the regime is unpopular across a broad cross-section of society. The evidence for this is not only anecdotal, but also quantitative. Using Persian speakers in Los Angeles, polling companies have surveyed Iranians by taking every telephone exchange in Tehran, randomizing the last four digits, and conducting what is, on the surface, an economic survey but one that also provides insight into political attitudes. In September 2007, the Islamic Revolutionary Guard Corps reorganized and implemented what its new commander, Mohammad Ali Jafari, called the mosaic doctrine. Rather than orient the IRGC to defend against foreign armies—as it had been oriented since the days of the Iran-Iraq War—Jafari divided the IRGC into inwardly oriented units, one for each province and two for Tehran. Jafari argued that internal unrest and the possibility of a velvet revolution posed more of a threat to the regime than foreign armies, a judgment validated by the June 2009 unrest.
The key issue in regime survival therefore lies with the loyalty of the Revolutionary Guards. It matters not if 90% of the Iranian people turn against the regime so long as the IRGC remains loyal to the Supreme Leader. Western politicians can hope for muddle-through reform, but ultimately change will come when the IRGC defects, much like regime change came to Romania after Nicolae Ceausescu’s security forces switched sides. The Iranian regime is aware of this, and so IRGC members are seldom stationed in their home provinces, minimizing the risk that units will refuse to fire on crowds that might contain family members, friends, or neighbors.
If the Islamic Republic does not fall, then the regime will have made a Faustian bargain. The IRGC will become a predominant force, dominating not only political life, but also economic and religious life. What we are now seeing is a slow, creeping coup d’état. The Islamic Republic is becoming a military dictatorship, albeit one with a religious patina.
THE FUTURIST: Of all the trends playing out in human rights at this moment, from China to Iran to the United States, which ones concern you the most? Which make you the most hopeful?
Michael Rubin: What concerns me most is cultural relativism—the willingness of Western states to accept oppressive regimes’ arguments that Eastern cultures simply do not uphold the same values of individual rights, and that Western demands that they should are simply a new form of imperialism. We see this primarily with regard to women and women’s rights.
Communication offers the most hope. From the telegram to radio to television to the fax machine to instant messaging, mobile cameras, and Twitter, technology is empowering citizens and preventing human rights abusers from acting with impunity.
THE FUTURIST: Paint us a picture of democracy in the year 2020. What does the word mean? Has the world come to some agreement on it? Is there, on the whole, more of it than existed 10 years ago, or less?
Michael Rubin: I’d define democracy not only as representative government accountable to the people and elections contested by political parties that have abandoned militias, but also as a proven record of peaceful transfers of power between government and opposition. I am an optimist and see the spread of democracy as inevitable. I also believe those who argue that certain cultures—Chinese or Arab, for example—are impervious to democracy are wrong. Here, Korea is instructive. Harry S Truman was lambasted for the Korean War and for attempts to bring democracy to South Korea. Critics said that democracy was alien to Korean culture, and it certainly was a process. But today, when we juxtapose North and South Korea, I doubt there are many people who do not believe the price was worth it. Taiwan, too, showed that democracy can thrive in Chinese culture and, while the Iraq war remains a polarizing debate, it is telling that ahead of the March 7 elections, no Iraqi knows who will lead their new government.
About the Interviewee: Michael Rubin is a resident scholar at the American Enterprise Institute (AEI), senior lecturer at the Naval Postgraduate School, and lecturer at Johns Hopkins University.
This interview was conducted by Patrick Tucker, senior editor of THE FUTURIST magazine.

THE FUTURIST: In your new book, The End of the Free Market, you write that human rights and free markets are inextricably linked, yet you perceive a future where many large and profitable state-run corporations exist, advancing neither free-market principles nor human rights. Briefly, was there a particular moment or incident in your travels where you reached this realization?
Ian Bremmer: These tectonic shifts have been under way for a long time. I’ve seen this on the horizon since I started the Eurasia Group. All states are going to become a much bigger driver for global investment. It’s happened more structurally in countries that have state-capitalist systems. Those countries are becoming more important. The eureka moment came several months after the financial crisis first hit. I got a phone call from the protocol office of the Chinese mission in New York. They said the vice minister of foreign affairs, He Yafei, was coming to town. They asked if I would have time to engage in an exchange of views. We got together a small group. I was sitting right across from him, and he said, “Tell me, now that the free market has failed, what do you believe the appropriate role for the state in the global economy should be?” I had to suppress a smile. It was a bold statement.
My response was that, just because the self-regulation of banks proved to be a bad way to run the global economy, does not mean that the absence of the rule of law, or an independent judiciary, or the presence of the state as both principal actor and arbiter of the economy is a better way to run an economy. That was the beginning of a long conversation where we began to engage each other’s worldviews. But the fact of the matter is, on some fundamental philosophical level, these worldviews and systems are incompatible. We in the United States have been able to ignore that, because America has done well in China, and China’s been a very small country (economically speaking). In other words, there’s been a lot of free-riding. It’s now 2010. China is growing at 10% a year and the United States has 10% unemployment. This is going to become a very politicized relationship.
THE FUTURIST: How will people in the United States begin to see that polarization?
Bremmer: Here’s one example: In 2008, as an American voter, you could choose McCain or Obama without any interest or concern as to what their views were in regard to China. That will never happen again.
THE FUTURIST: How does the United States navigate that relationship? What happens to our argument for greater openness in China, greater respect for human rights?
Bremmer: The first thing we have to do is understand that you can’t navigate something without a map. We haven’t had one. There are big problems with the basic narrative that Americans subscribe to about China. Here’s the story we tell ourselves: There’s an authoritarian, communist government on one side and there are people yearning to be free on the other. In that struggle, ultimately, the Chinese people will win; therefore, the United States stands on the side of the Chinese people. We don’t seem to understand that the vast majority of Chinese are exceptionally supportive of their leadership.
Imagine for a moment that political reforms were put in place in China right now, and the country held free and democratic elections. Would the resulting government in China be more beneficial, antithetical, or indifferent to American interests than the government now in place in Beijing? I could make a very strong argument that the resulting Chinese government would be less pro-status quo, more nationalistic, and more problematic to U.S. interests.
THE FUTURIST: You could argue the same thing happened when Hamas won elections in Palestine.
Bremmer: Indeed you could. We tend to fetishize elections in the United States.
THE FUTURIST: What’s the most important thing the U.S. government can do to ensure a better relationship with China, one that’s mutually beneficial?
Bremmer: Be indispensable. We’ve forgotten about this. Many years ago, James Chace wrote about America as the indispensable nation. Today, the United States is in comparative decline vis-à-vis countries like China. I’m not a declinist. But the rise of the rest does mean the comparative decline of the United States. In policy terms, that means we need to focus on the places where we can make them feel that we are indispensable.
1. We have by far the world’s largest military, and it’s essential for humanitarian response after a disaster, such as the 2004 Indian Ocean tsunami that struck Indonesia. The United States has more capability for large-scale coordinated operations because of the size of our military. Hard power becomes more important over time, especially as parts of American soft power, like financial leverage, deteriorate, comparatively speaking.
2. Reiterate our commitment to regulated free markets; that includes being open to Chinese investment in the United States. In 2005, CNOOC — the state-owned China National Offshore Oil Corporation — made an offer for Unocal [a U.S.-based oil company]. At the time, U.S. lawmakers expressed concern over the Chinese government acquiring a U.S. energy interest. Ultimately, Unocal was sold to Chevron for millions less than the CNOOC offer. Some of the people who really wanted the sale to CNOOC to happen were Unocal management. Chevron already had good managers, but China wanted Unocal managers at CNOOC. More Western management in Chinese firms is insidious; it shows we do this stuff better than they do. They want access to better accounting and more transparency. We have it. This know-how over time makes us more indispensable to China.
3. Most important, we want to avoid protectionism. We don’t want the Chinese to look toward decoupling. That’s a bad scenario. But the argument for globalization will become harder to make politically in the United States, as laid-off workers complain that globalization favors Beijing more than Detroit.
THE FUTURIST: How do you sell globalization to an electorate that’s feeling increasingly pressured by it?
Bremmer: It will become a less popular sell. That’s why you see people arguing in favor of protecting automotive sector jobs even if they aren’t competitive. It’s why the European Union spends 40% of its budget on the farming sector, which accounts for just 8% of the population. The Europeans just shouldn’t be farming. They should be spending more where they have an advantage.
THE FUTURIST: The United States, too, should focus on its advantages?
Bremmer: Yes, and the United States has huge competitive advantages. A couple of months ago someone asked me, is America even going to be relevant in 10 or 15 years? I told him to ask that question again, but replace “America” with “world’s largest economy.” So, in 10 to 15 years, is the world’s largest economy going to be relevant? The question is farcical. The United States will still, overwhelmingly, be the world’s largest economy and the home of its reserve currency.
Research and development throughout the world is U.S.-driven. The world’s best institutions of higher learning are in the United States. It’s harder to immigrate to this country, but scientists are still seeking training here. Who would I bet on to make the next world-changing patents in 20 years for new energy technologies? I’d bet on the United States, of course. If I were betting a pool of money, would I bet as much on the United States after 20 years as I would today? No, I would not. Would there be a significant shift? Probably yes. But I would still bet on the United States.
Unfortunately, the trajectory is moving in a way that’s more and more uncomfortable. U.S. institutions, which operate well in a steady state or during times of increasing wealth, are terrible at responding to impending crisis. India has been dealing with this same problem for some time, which is why China continually eats its lunch. The kind of system that we have in the United States is decentralized away from the president on key legislative issues; it’s one where individual constituencies win political battles except in the bleakest crises. Given that, it’s very hard for elected political leaders to make globalist arguments publicly. That’s a weakness in the American political system that is structural and will become increasingly apparent as we muddle through these deficit and spending issues.
As a political scientist, I expect that America won’t address these issues proactively; as a consequence, the war between states and corporations will look increasingly combative.
THE FUTURIST: Do you assume the average American voter is incapable of grasping the inherent logic of a globalist perspective?
Bremmer: I would never say “incapable,” but there is a serious collective-action problem. The average American is capable of understanding why voting is important, but what are our voting numbers? As the economic situation, especially from a comparative perspective, becomes tougher, as deficits grow, you’re also going to see much more populism. Some of that will be driven top-down, and a lot of that will be driven by frightened, upset people. It’s a reality. The United States is not well positioned — given all of our priorities on a daily basis — to actually deal with these globalist perspectives. That’s clearly true.
When I talk about these issues, I’m trying not to be ideological about them. The embrace of globalization is how, I am convinced, we will ultimately have the strongest global growth with the most boats rising. But I also understand why it is relatively unlikely to happen. We have to be honest about that.
THE FUTURIST: The year is 2020. Is the average human being — take the aggregate, Russia, China, Iran, western Europe, the United States — more free or less?
Bremmer: A little less, for two reasons. First, the dynamics playing out between the United States and China and within China itself will not have run their course by then. As a consequence, we will increasingly experience an absence of global cohesion and institution making. There will be no sufficient global response to climate change or to proliferation. That creates more volatility and instability, which tends to empower these entrenched authoritarian systems.
The second reason is the increasing risk of the diffusion of dangerous technologies. Rogue states and individuals are more empowered, irrespective of the amount of money going into counterterrorist efforts. It doesn’t take teams of people to take down planes anymore but one sufficiently motivated individual, and not just planes but other targets with real-time market implications. That’s going to have an impact on individual liberties.
The combination of those two things, the growth of dangerous technology and the tectonics of an increasingly non-polar world, will affect the spread of freedom and democracy. The fight between free but regulated markets and state capitalism will produce swings in that direction.
THE FUTURIST: On the most micro-level, what can a reader of THE FUTURIST do to improve that situation?
Bremmer: I focused just now on the massive decentralization of dangerous technologies. The flipside of that coin is the decentralization of empowering technologies. The most significant of those is the Internet, the blogosphere, and communications networks. We’re living in an increasingly content-rich environment. Some of that content is dangerous, but more of it is benign.
We’re also living in a world where really interesting and valid content becomes more important even if it comes from people who have not been anointed by the powers that be. The average insightful reader with something intelligent to say can contribute ideas and criticism in a way that has actual and meaningful potential to affect the way political, civic, and economic leaders think and act, and in a way that 10 or 20 years ago was unimaginable.
About the Interviewee
Ian Bremmer is an American political scientist specializing in U.S. foreign policy, states in transition, and global political risk. He is the president and founder of Eurasia Group, a global political risk research and consulting firm providing financial, corporate, and government clients with insight on how political developments move markets. His latest book, The End of the Free Market: Who Wins the War Between States and Corporations? will be released by Portfolio in May 2010.
THE FUTURIST: You are regarded as a proponent both of women’s rights in the Muslim world and of Westernization. How have recent events changed your views of the influence of Western culture in Iran? On the one hand, there is evidence that students in Iran were using mobile technology to organize protests following the 2009 Iranian presidential election. (Most of the people “tweeting” about it, however, were from the United States.) On the other hand, the Iranian government has used that same technology against protesters. Does mobile tech like cell phones and the Internet make the fight against authoritarianism easier or more difficult? What are the pitfalls?
Azar Nafisi: You see the adverse effects of technology in America itself. It’s become a challenge to turn information into real knowledge. The United States is becoming a superficial culture. But right now, inside Iran and other repressive countries, this technology is far more advantageous to the people than to governments. The Internet and cell phones are allowing the Iranian people to connect to the world through human-rights sites where texts about democracy are available. These texts are read and translated widely in Iran. I’ve connected with hundreds of Iranian students to learn about what’s actually going on there. A similar phenomenon is playing out in China. But the continuance of this progress requires the help of companies like Google and Yahoo.
THE FUTURIST: Looking more broadly, the current tension between the United States and Iran has become a dispute over technology — does Iran have the right to the same nuclear weapons capability that the United States has possessed for more than 60 years? Isn’t it hypocritical for the West to claim it’s seeking to aid the cause of progress when it is literally standing in the way of knowledge sharing on this issue?
Nafisi: Don’t get me started criticizing the problems of Western and U.S. foreign policy; this isn’t among my criticisms. We should put our efforts into taking these weapons out of the hands of all countries, whether Pakistan, Iran, or North Korea. Yes, Ahmadinejad mentions this supposed double standard, and nuclear weapons are dangerous in America’s hands, just as they are in anyone’s. But the United States is far more open and democratic than is Iran. The system in the United States is more reliable. The government is more accountable than that of the Iranian regime. I can trust it more. But I don’t feel good about America or any other country having nuclear weapons.
THE FUTURIST: You’ve said: “At the beginning of the [Iranian] Revolution, not only the Islamists but also the radical left were all very set in what they wanted and the way they saw the world. As the revolution progressed, two things happened to the young Islamists. One was that the Islamic Republic failed to live up to any of its claims. Apart from oppressing people and changing the laws, and lowering the age of marriage from 18 to nine, [the Islamic government] did not accomplish anything economically, socially, politically, or in terms of security.” Today, as part of the so-called Green Revolution, thousands of Iranians are directly challenging the results of the latest presidential election. Do you think the Green Revolution’s aims are more realistic? Do today’s rebels stand a greater chance of success? And what’s the most important thing the 1979 revolution has to teach the Iranian rebels of today?
Nafisi: I was one of those starry-eyed optimists as well. But the new movement is mature. The Iranian people have paid a very high price for the mistakes of 1979. The most important lesson: If you’re going to join a revolution, you have to have as clear an idea of what sort of government you do want as what you don’t want.
The second lesson they appear to have learned is that democratic ends should be achieved through democratic means. The government won’t allow it. I have hope, but I’m not overly optimistic. The government is savage and terrified. The political leaders who would favor democracy, both in Iran and abroad, are now followers of the new movement, whose strength comes from the spontaneous actions of the people themselves. It’s truly a grassroots phenomenon.
What does this show? That Iran has a strong civil tradition. But there are times you need leadership and strategy. I expect the government will continue to kill and jail anyone who comes to the front.
THE FUTURIST: In your new memoir, Things I’ve Been Silent About, you write: “Looking back at our history, what seems surprising to me is not how powerful religious authorities have been in Iran, but how quickly modern secular ways took over a society so deeply dominated by religious orthodoxy and political absolutism.” Why do you think that was, and what does it say about the potential spread of Western ideals and Western notions of democracy in Iran and throughout the Muslim world?
Nafisi: Iran has a unique history; it goes back 3,000 years to the beginning of Zoroastrianism. Even now, the Islam practiced in Iran is mixed and mingled with pre-Islamic traditions. The Iranian New Year is celebrated on the first day of spring; the names of the months in the calendar are those of Zoroastrian deities. We are a multicultural society, with different religions, different traditions, living side by side. This provides the flexibility the country needs to accept the new.
So many people think changes and modernization in Iran just came from the West. I think the old system of monarchy simply stopped working. The arrival of Western ideas coincided with a period of crisis. At the start of the last century, Iranians were bringing novels and theater back to Iran, but they were also boycotting foreign goods and fighting British imperialism. The history of the West in Iran is one of cultural and economic exploitation.
On the other hand, you have a close relationship culturally. This persists. The most important political leaders of Iran in the twentieth century were secular. And the most important of these was Mohammad Mosaddeq. The Ayatollah Khomeini hated him as much as he hated the Shah. Mosaddeq was religious but secular in governance, and his influence remains considerable.
When you talk about genuine multiculturalism, you need a political and civil system that extends rights to all. You see that in the United States itself. There are people who think the country is Christian in nature, but this is a stagnant view. The Founding Fathers were Christian — they mention God — but without freedom of religion, no country can claim to be multicultural.
THE FUTURIST: What do you see as the likely future of Iranian–U.S. relations? What future would you like to see?
Nafisi: The problem lies with both sides. It’s to the advantage of the United States to have full diplomatic relations, but it’s not in the regime’s interests to make peace. The regime sees U.S. culture as the most dangerous weapon. An embassy in Iran, with people lining up to apply for visas, doesn’t help them maintain power. But the United States has been tactical and simplistic in its approach. It’s reduced its perception of Iran to the regime.
The United States has vacillated. I think the correct policy is pursuing dialogue with the regime, but also creating a dialogue with the Iranian people.
My ideal future is one that features genuine interaction and dialogue well beyond the government level. The problem is that connections right now aren’t through personal contacts but through governments. If people in the United States became more concerned with the human rights of the Iranian people, this would be a positive step. I’ve been looking for ways to create a connection between the two peoples. I do this through my books and through my teaching. I was first introduced to America by Huck Finn. I want people to come to Iran through Firdausi, a poet. Perhaps I can help with this. Art and literature should not be bound by nationality.
THE FUTURIST: Paint us a picture of the year 2020.
Nafisi: I hope that developments in technology, particularly visual and virtual reality, will bring us closer together. Imagine people across countries and continents “walking” into each other’s homes thousands of miles away. If we can create this experience through technology, the world will become a better place. I’m terrified of a future where we use gadgets, devices, and little amusements to shut ourselves in, to isolate ourselves. But new technology can actually serve the cause of empathy. If a girl is shot in the street in Iran during a protest, and a girl across the world can see it — can put herself in the place of her comrade across the sea — a tragedy becomes a victory for humanity. ❑
About the Interviewee
Azar Nafisi is the author of Reading Lolita in Tehran (Random House, 2008) and Things I’ve Been Silent About (Random House, 2008). She is a visiting fellow and lecturer at the Foreign Policy Institute of Johns Hopkins University’s School of Advanced International Studies, www.sais-jhu.edu.
This interview was conducted by Patrick Tucker, senior editor of THE FUTURIST magazine.
Remaking the Car, Remaking the City
Ryan C. C. Chin of the MIT Media Lab discusses MIT's much-remarked CityCar concept. The car itself presents a radical—and welcome—break from the driver-vehicle interaction to which we're accustomed, but its real genius is how it integrates into a larger organism of city life. In the Media Lab's Smart Cities model, the car of the future is one component in a broader and saner transportation system, one that reflects the way people actually interact with the urban environment, and with one another.
Also, young computer scientist Jason Clark will share his company's vision for restarting the tech startup. He and his allies at Syntiant say companies can be philanthropic and make money at the same time, and they're proposing a bold new business model to do exactly that.
Illustration by William Lark / MIT Media Lab
By Douglas Rushkoff
By restoring our connections to real people, places, and values, we’ll be less likely to depend on the symbols and brands that have come to substitute for human relationships. As more of our daily life becomes dictated by the rules of a social ecology instead of those of a market economy, we will find it less necessary to resort to the behavior of corporations whenever things get rough. We might be more likely to know the names of our neighbors, and value them for more than the effect of their landscaping on our block’s real-estate prices.
By Pavlina Ilieva and Kuo Pao Lian
What if there were a better way of living? A way that was more environmentally sound, more economical, more conducive to the building of community, and that didn’t require huge monetary investments? What if this new method of existence were already visible, and people were already participating in it, in places we had never thought to look?
Two Internet experts, a psychologist, and an anthropologist explore our multiplying connections.
What is a social network? A few years ago, the term would have referred to our immediate acquaintances, the people we lived with and worked beside, perhaps individuals we identified as similar to us in age, income, politics, or consumption habits. They may simply have been classmates, just as Facebook was originally designed to cater to the student body at Harvard University. (A new film written by Aaron Sorkin that provocatively chronicles the rise of Facebook is called The Social Network.)
Take a look at the average Facebook page today and you’ll find millions of networks overlapping one another in a grand circuit. Personal and intimate postings from daily life—details of a child’s first steps, a disappointing day at work, a spousal argument—mingle freely with bits of political activism, amateur journalism, and small acts of civic engagement. Our every human relationship, from the way we interact with one another at the most personal level to the way we relate to institutions, is interwoven into a single fabric that we now wear in public. Our social network is everyone with whom we interact; and that, increasingly, is everyone.
The question becomes, how do we make the most of these new connections in order to become better citizens, better life partners, and better people? We attempt to provide some insight in this final installment of the 2020 Visionaries series.
In true futurist fashion, we’ve tried to cast our net wide. We begin with a broad discussion about the new relationship between individuals and institutions.
First, New York University telecommunications professor and best-selling author Clay Shirky says that the greatest challenge of the Interconnected Age is also its greatest asset: cognitive surplus. We have more creativity, more data, more art, more content than any publisher, editor, or news producer could ever use effectively. The onus is on each of us to participate and make something useful with the new tools at our disposal.
Following, we present our account of a remarkable conversation. In one corner, Cory Doctorow, best-selling science-fiction writer, creator of the popular technology blog Boing Boing, and one of the world’s most vocal advocates for network freedom, liberal copyright policies, and open-source creative collaboration. His conversational partner? The network, in person: 60 people with whom Doctorow spoke over the course of two days of touring the mid-Atlantic region. The discussion ranged from science-fiction scenarios to the future of e-readers to the Google versus Viacom copyright fight and what it means for the future (hint: a lot). Here are the highlights of that discussion.
Next we’ll look at our deepest impulses toward moral action, love, and fidelity. Two of the world’s foremost experts on this subject will assess how these central aspects of our humanity could evolve over the next 10 years.
Stanford University psychology professor Philip Zimbardo describes his most recent endeavor, The Heroic Imagination Project, an exploration of the psychology of heroism. Zimbardo is uniquely qualified to speak on the strange ways that people can play off one another when they’re suddenly thrust into new networks and asked to take on new roles.
In 1971, Zimbardo gathered together 24 Stanford undergraduates to perform a mock prison experiment in the basement of the university’s psychology building. Participants were randomly assigned the role of guard or prisoner. The experiment was stopped after only six days when the students assigned to be guards began abusing their classmates. In his new research, he looks at “what pushes some people to become perpetrators of evil, while others act heroically on behalf of those in need?”
Finally, Helen Fisher, Rutgers University anthropologist and author of Why We Love: The Nature and Chemistry of Romantic Love (Henry Holt 2004), examines the institution of marriage and discusses how our understanding of love and fidelity will change in the next two decades. The amount of new data we are gathering about the chemical and biological roots of romantic partnership will challenge our traditional assumptions about these most important connections in our social web, presenting new obstacles and creating new opportunities in the decades ahead.—Patrick Tucker, senior editor, THE FUTURIST
Download a PDF of the entire Visionaries 2020 Part V article.
The sudden bounty of accessible creativity, insight, and knowledge is a public treasure, says a network guru.
Imagine treating the free time of the world’s educated citizenry as a kind of cognitive surplus. How big would that surplus be? To figure it out, we need a unit of measurement, so let’s start with Wikipedia. Suppose we consider the total amount of time people have spent on it as a kind of unit—every edit made to every article, every argument about those edits, for every language in which Wikipedia exists. That would represent something like 100 million hours of human thought.
One hundred million hours of cumulative thought is obviously a lot. A television producer once asked me about people who volunteer to edit Wikipedia, “Where do they find the time?” The people posing this question don’t understand how tiny that entire project is relative to the aggregate free time we all possess. How much is all that time spent on Wikipedia compared with the amount of time we spend watching television? Americans watch roughly 200 billion hours of TV every year. That represents about 2,000 Wikipedia projects’ worth of time annually. Even tiny subsets of this time are enormous: We spend roughly a hundred million hours every weekend just watching commercials.
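To make the scale of that comparison concrete, here is a minimal back-of-envelope sketch using only the round figures quoted above; the numbers are the essay's own rough estimates, not measured data, and the snippet itself is purely illustrative.

```python
# Back-of-envelope check of the cognitive-surplus comparison above.
# Both figures are the essay's rough estimates, not measured data.

WIKIPEDIA_PROJECT_HOURS = 100_000_000     # ~100 million hours of cumulative Wikipedia effort
US_TV_HOURS_PER_YEAR = 200_000_000_000    # ~200 billion hours of TV watched by Americans per year

wikipedias_per_year = US_TV_HOURS_PER_YEAR / WIKIPEDIA_PROJECT_HOURS
print(f"One year of U.S. TV watching ≈ {wikipedias_per_year:,.0f} Wikipedia-sized projects")
# Expected output: One year of U.S. TV watching ≈ 2,000 Wikipedia-sized projects
```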
The good news about our current, remarkable age is that we can now treat free time as a general social asset that can be harnessed for large communally created projects, rather than as a set of individual minutes to be whiled away one person at a time.
Wikipedia is one well-known example; here’s another you may not have heard of, a service called Ushahidi (Swahili for “witness”) developed to help Kenyan citizens track outbreaks of ethnic violence. The originator, human rights activist Ory Okolloh, imagined a service that would automatically aggregate citizen reporting of attacks, with the added value of locating the reported attacks on a map in near-real time. She floated the idea on her blog, attracting the attention of programmers Erik Hersman and David Kobia, who helped Ushahidi.com go live.
Several months later, Harvard’s Kennedy School of Government compared the site’s data to that of the mainstream media and concluded that Ushahidi had been better than the big media at reporting acts of violence as they started, better at reporting acts of nonfatal violence (which are often a precursor to deaths), and better at reporting over a wide geographical area, including rural districts.
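For readers curious what “aggregating citizen reports and locating them on a map” might look like in practice, here is a hypothetical sketch of that idea; the report fields and the simple grid-cell grouping rule are illustrative assumptions, not Ushahidi's actual data model or code.

```python
# Hypothetical sketch of crowdsourced incident aggregation in the spirit of the
# service described above. Field names and the grid-cell grouping rule are
# illustrative assumptions, not Ushahidi's actual data model or code.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Report:
    text: str    # the citizen's message, e.g. an SMS or web form entry
    lat: float   # reported latitude
    lon: float   # reported longitude

def aggregate(reports, cell_size=0.1):
    """Group reports into map cells roughly 11 km on a side (0.1 degrees)."""
    cells = Counter()
    for r in reports:
        cells[(round(r.lat / cell_size), round(r.lon / cell_size))] += 1
    return cells

reports = [
    Report("Roadblock and shots fired near market", -0.30, 36.07),
    Report("Crowd gathering, shops closed early", -0.31, 36.08),
    Report("Calm here, no incidents", -1.28, 36.82),
]

for (i, j), count in aggregate(reports).items():
    print(f"cell near ({i * 0.1:.1f}, {j * 0.1:.1f}): {count} report(s)")
```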
You don’t need fancy computers to harness cognitive surplus; simple phones can be all that’s required. But one of the most important lessons is this: Once you’ve figured out how to tap the surplus in a way that people care about, others can replicate your techniques, over and over, around the world.
The question we now face—all of us who have access to new models of sharing—is what we’ll do with those opportunities. The question will be answered more decisively by the opportunities we provide for one another and by the culture of the groups we form than by any particular technology. The trick for creating new social media is to use those lessons as ways to improve the odds for successful harnessing of cognitive surplus.
Our media environment (that is to say, our connective tissue) has shifted. In a historical eyeblink, we have gone from a world with two different models of media—public broadcasts by professionals and private conversations between pairs of people—to a world where public and private media blend together, where professional and amateur production blur, and where voluntary public participation has moved from nonexistent to fundamental.
This was a big deal even when digital networks were used by only an elite group of affluent citizens, but it’s becoming a much bigger deal as the connected population has spread globally and crossed into the billions. The world’s people, and the connections among us, provide the raw material for cognitive surplus. The technology will continue to improve, and the population will continue to grow, but change in the direction of more participation has already happened.
What matters most now is our imaginations. The opportunity before us, individually and collectively, is enormous; what we do with it will be determined largely by how well we are able to imagine and reward public creativity, participation, and sharing.
About the Author
Clay Shirky teaches at the Interactive Telecommunications Program at New York University. He is the author of Here Comes Everybody: The Power of Organizing Without Organizations. His writings have appeared in The New York Times, the Wall Street Journal, the Times of London, Harvard Business Review, Business 2.0, and Wired.
This article was adapted from Cognitive Surplus: Creativity and Generosity in a Connected Age by Clay Shirky. Reprinted by arrangement of The Penguin Press, a member of Penguin Group (USA), Inc. Copyright 2010 by Clay Shirky.
Sixty people interview one of today’s hottest science-fiction authors and most dedicated open Internet advocates.
Cory Doctorow is the author of various science-fiction novels, including Makers and Little Brother, which he makes available for free from his Web site. He’s one of the editors of the technology blog Boing Boing. In addition, he’s a current fellow and former European Affairs Coordinator for the Electronic Frontier Foundation and a fierce advocate for the liberalization of copyright laws to allow for free sharing of all digital media. On June 27–28, he visited Red Emma’s bookstore in Baltimore, Maryland, and then appeared at CopyNight DC, a regular event in Washington, to discuss his work with more than 60 participants. Highlights from those exchanges are presented here.
Audience: How do you come up with your science-fiction ideas?
Cory Doctorow: Pick something that’s difficult, complicated, and expensive for people to do, then imagine that thing becoming easy, simple, and inexpensive, and write about it. That’s what’s happening today. Anything that requires more than one person and lots of coordination has become easier because of networks, which take the coordination cost associated with these very complicated tasks and make it low. The change is profound, because any task that one person can’t do alone, whether it’s making an airplane or a skyscraper, is literally superhuman. But the superhuman is becoming easier. You could write a damn good science-fiction story about free skyscrapers.
Audience: On the subject of exponential price depreciation, what can we do to ameliorate the socially and economically disruptive effects of a hypothetical breakthrough in nanofabrication? Those negative effects would be massive unemployment, institutions becoming obsolete, and millions of people having no idea what to do about government or commerce.
Doctorow: How can we ameliorate the social upheaval that arises from a postindustrial revolution based on nanofabrication? Iron-fisted totalitarian dictatorship? Workers’ paradise? I don’t know.
Audience: In your novel Makers, you talk about people who take electronic gadget waste (referred to as e-waste) and turn it into something new. Where do you see this happening in real life?
Doctorow: A large part of the e-waste problem is that we design devices that are meant to be used for a year but take a hundred thousand years to degrade. I wonder if we won’t someday design devices to degrade gracefully back into the parts stream, back into materials, faster. Bruce Sterling wrote a manifesto about this for MIT Press called Shaping Things. He proposed that, with the right regulatory framework and technology, it might be possible to start readdressing design decisions so that things gracefully decompose back into components that can be reused in next-generation devices.
Audience: In For the Win and in Little Brother, you discuss small, technologically savvy networks sparking revolutions among a larger, much less sophisticated group, like enslaved factory workers who were waiting for a catalyst to overthrow their oppressors. Do you really believe that a few thousand well-connected individuals can trigger revolution?
Doctorow: My themes in those books aren’t small groups of people using technology to liberate larger groups, but rather that information rapidly diffuses through small groups, and then larger groups of people use it to help themselves. This is characteristic of all technological diffusion.
Audience: Does that go both ways?
Doctorow: Technology is good at disrupting the status quo because technology gives an advantage to people who want to undermine something that’s stable. Imagine a scenario in the Middle Ages where someone had just invented earth-moving technology and you manage security for a city. You want to defend your city with earth-moving technology. I want to break into your city with earth-moving technology. You need a perfect wall; I need to find one imperfection. Your task is exponentially harder than my task.
When you look at Orwell in 1984, he comes across as a technophobe. What he was seeing was a small piece in the arc of technology, a moment when tech had realized an old totalitarian dream: There had previously been states that wanted to assert control over the private lives of the people who lived in them, but they couldn’t make that a reality until technology gave them an assist. According to Orwell, this is what technology does: It allows authoritarians to assert authority. But not long after he wrote that, technology became a tool to undermine the state.
Today, we’re living in another one of those inflection points. We went from technology as a liberating force during my adolescence—it gave young people access to tools, ideas, and communities that even the most powerful and rich couldn’t have dreamt of before—to an age where everybody’s kid gets an iPhone with an application that tracks them like a felon. Every library is mandated to put spyware on its computers, and students who are caught using proxies or other tools that might enhance their privacy are thrown out of school. Educators are scanning students’ Facebook pages. I’m hoping for another swing of the pendulum.
Audience: What did you think of the recent Viacom versus Google verdict?
Doctorow: Here’s the background: Recently, Viacom sued Google, owner of YouTube, for a billion dollars, claiming that YouTube has a duty to police all the material it hosts before that material goes live. Viacom also argued that YouTube should not be allowed to offer any privacy settings to its users. Right now, if you want to post a video of your newborn taking a bath and you just want to share it with family, you can show the video privately. You can select a privacy setting. Viacom argued that there should be no private videos, because Viacom had no way to police these videos to see whether copyrighted material was being shared. By extension, it was arguing that no one should have any privacy settings, because if it’s illegal for YouTube, it should be illegal for everyone.
If Viacom had won, it could have changed established law. There’s a copyright law called the Digital Millennium Copyright Act (DMCA), passed in 1998. The DMCA exempts people who host content from liability when that content infringes copyright, provided they take it down expeditiously. If you have a Web server and one of your users posts something that infringes on copyright, you aren’t liable provided that, when you receive a notice that the material is infringing, you take the material down. This is what YouTube does with all of the material that its users post. It’s a ton of material; 29 hours of video are uploaded to YouTube every minute. The DMCA is what allows all the user-generated material on Web sites to exist. It’s why Blogger, Twitter, and WordPress exist. There aren’t enough lawyer hours between now and the heat death of the universe to review all this material before it’s posted online. In other media where similar protections don’t exist, like cable television, very small amounts of user-generated material are shared.
Over the course of the court proceedings, it turned out that, even as Viacom was suing YouTube, it was still uploading videos to YouTube because they needed to have them there as part of their media strategy. Various Viacom divisions were paying as many as 25 marketing companies to put Viacom videos on YouTube under false fronts because no one officially connected to Viacom could put the videos on YouTube. The firms were even “roughing up” the videos to give them a “pirate chic.” At any big media company, beneath the top layer of corporate leadership, beneath the people who file lawsuits for things like copyright infringement, you have a layer of people who understand the realpolitik. These are the actual content producers. They say to themselves, “I have a new TV show. I have to get a certain number of viewers or it will be canceled, and I can’t do it unless I have my video on YouTube.” The real question is, how do you empower those people? We need to start a secret society for clued-in entertainment executives to help each other across companies.
What the court held in the case was that you don’t have to preemptively police all material before it gets onto the Internet. Viacom said it would appeal. It was a foregone conclusion that they would. One day, your university will change its Internet-use policy based on this case. Your Internet service provider will change its policy based on this. It affects everyone, even people who use the Internet for reasons besides uploading entertainment content.
This case speaks directly to how we will share information collectively in the future. It’s also the basis of all of tomorrow’s political organizing. The more constricted that sharing becomes, the harder it becomes to resist bad laws.
Audience: Last year in Spain, the government deactivated 3 million phone numbers. The owners of the phones had to go to a store and show ID to register their phones to get service again. A few weeks ago, Senator Charles E. Schumer (Democrat–New York) proposed mandatory registration of cell phones in the United States because the Times Square bomber used a prepaid phone. How do we resist this in the context of the May 11 threat of terrorists using prepaid phones?
Doctorow: This is another example of politicians shouting terrorism as a way to get anything passed. If the Times Square bomber hadn’t had access to an anonymous phone, there’s no reason to think he wouldn’t have just bought a phone using his ID. What he was worried about was blowing up Times Square, not whether or not he would get caught afterward. All of the 9/11 hijackers used real IDs when they boarded their planes. Being identified after you’ve committed your suicide atrocity is not a downside. These people record videos with their information before they act. Our current approach to antiterrorism seems to take as its premise that al-Qaeda was trying to end aviation by making flying inconvenient.
I don’t follow your premise, though, that we can do meaningful broadband things with phones that are anonymous but that we’ll lose that capability once Chuck Schumer’s crazy law comes in.
The primary barrier to doing meaningful broadband things with wireless mobile devices is the terrible carriers. When you’re using Ethernet, you have a whole universe of electromagnetic spectrum to yourself inside a small bit of insulation. Burners [inexpensive phones purchasable with anonymous, limited-service plans] will never be able to provide that. Maybe cognitive radio can figure out how to solve these bottlenecks, but we’re not going to get there with 3G or 4G.
Audience: You talk about the threat to democracy in terms of how the copyright fight leads to individuals being taken off the Net. What other trends in society do you see that might affect liberty at a much greater level? What do you think of this notion that, if speech is money, then restrictions we place on money should apply to speech?
Doctorow: I concentrate on issues related to network freedom because one day I woke up and realized that no one will ever be able to campaign on any of those issues without a free and open network. Our capacity to make any sort of positive change on any of this stuff, to elect a lawmaker who passes a law that the Supreme Court will interpret differently, is built around our capacity to use the network to organize with one another.
My role, as I see it, is to try and keep the network open for people who have other issues that they care about.
Audience: Mere blocks from here [in D.C.] is the Jack Valenti building of the Motion Picture Association of America. Should we start picketing there or keep walking until we get to Congress or the White House? How do we find hundreds of thousands of people to picket with us?
Doctorow: The point of my talk tonight is this: We need to make the fight for individual rights online bigger than entertainment copyright and questions of who gets to make movies or mashups, or who gets to decide how much it costs to load a thousand songs onto your iPod. We need to make this about freedom of speech, freedom of the press, due process, the right to education, and all of the fundamentals that are at the heart of the Internet. Next year, and the year after that, the Internet will absorb and encompass even more realms of our daily lives. We’ll also be even better at copying stuff. If you want to get people interested in this, stop talking about cultural freedom—movie copyright, music copyright—and just start talking about freedom.
I’m working on a novel right now called Pirate Cinema; it’s a neo-Dickensian piece set in London. It’s about kids who cost their parents their Internet access by downloading mashup movies. They cost their parents everything. They survive on handouts. Their moms are on benefits and can’t log in to collect them because the household’s Internet has been taken away. To spare their families the shame of living with downloaders, the kids move to London, start a gang called the Jammer Dodgers, and take it upon themselves to destroy the entertainment industry before the entertainment industry destroys society. They cut the movies they’ve pirated into new movies. They screen them in cemeteries and vaulted Victorian sewers; they go up to the people lining up to see movie premieres in Leicester Square and hand out DVDs of the very film on offer, with an insert advertising a free showing of the same movie down the street.
I was stranded in Los Angeles for four days because of the volcanic ash cloud; I took the time to meet with my film agent, and I told him about this idea. He asked, “What else have you got?”
—Patrick Tucker reported on these events.
A leading psychologist and originator of the Stanford Prison Experiment is applying his understanding of evil to the promotion of good.
What is a hero? I argue that a hero is someone who possesses and displays certain heroic attributes such as integrity, compassion, and moral courage, heightened by an understanding of the power of situational forces, an enhanced social awareness, and an abiding commitment to social action.
Heroism is a social concept, and—like any social concept—it can be explained, taught, and modeled through education and practice. I believe that heroism is common, a universal attribute of human nature and not exclusive to a few special individuals. The heroic act is extraordinary; the heroic actor is an ordinary person—until he or she becomes that heroic special individual. We may all be called upon to act heroically at some time, when opportunity arises. We would do well, as a society and as a civilization, to conceive of heroism as something within the range of possibilities for every person.
But these days we rarely hear about ordinary men and women who have, by circumstance or fate, done something extraordinary for a greater cause or sacrificed on behalf of fellow human beings. Today’s generation, perhaps more than any preceding one, has grown up without a distinct vision of what constitutes heroism, or, worse, has grown up with a flawed vision of the hero as sports figure, rock star, gang leader, or fantastic superhero.
This is why, in 2010, I formed the Heroic Imagination Project, or HIP, which seeks to encourage and empower individuals to develop the personal attributes that lead them to take heroic action during crucial moments in their lives, on behalf of others, for a moral cause, and without expectation of gain.
HIP is committed to realizing this goal in three ways. First, we will conduct and support new research that will expand society’s understanding of heroic behavior. Next, we will create new educational programs, in schools and on the Web, that coach and mentor people in how to resist negative social influences while also inspiring them to become wise and effective heroes. Then we will create public engagement programs that invite people everywhere to take our heroic pledge and sign on to one of our many emerging programs.
Research on Heroism
One of the most fundamental and unique aspects of our mission is its focus on encouraging new empirical research on the nature and dynamics of heroism. There is a dearth of information on this idea, at least partly due to the changing definition of heroism over the last 30 years, and the earlier focus in psychology on the dark side of human nature. To build this new body of research, we are partnering with major universities and will sponsor promising doctoral candidates who devote their research to questions around this issue of heroic behavior.
Research into the component attributes of heroism (ethical behavior, leadership, courage) and their practical application (defiance of unjust authority, whistle blowing, facing physical danger) can have far-reaching benefits for society. We need to better understand the neurological and psychological basis of such phenomena as action versus passivity at the decisive moment. The components of our research initiative include Web-based surveys of self-selected individuals, analysis of a program of senior volunteers, and laboratory studies of the personal, social, and neurological roots of heroic behaviors.
Implementation of Our Findings
Everyday heroism is the highest form of civic virtue. It transforms the personal virtue of compassion into meaningful social action. To that end, we will work to instill in all people, particularly in young people, the self-confidence and the ability to readily perform deeds that improve the lives of other individuals and society as a whole. We believe it begins by adopting, and internalizing, the mind-set of a heroic imagination—I can do that, I can be a hero when the opportunity arises.
We are now developing specific program modules for scholastic, corporate, and military audiences. Our initial program is being launched in middle and high schools and provides young people with tools to encourage heroic self-identification. The aim is to fortify their moral framework and coach them to act beyond their comfort zone—but wisely so. Our corporate heroic leadership programs and accountability/integrity programs are currently in design and will roll out soon.
We are also launching a comprehensive Web site that will celebrate the community of everyday heroes, while taking our mission and our programs to the general public.
Why Heroism
This exploration into heroism was spurred by recent research showing how otherwise exemplary individuals can be easily persuaded, when their social framework is skewed or altered, to perform acts that go against conscience and to behave in ways they would ordinarily find despicable. My Stanford Prison Experiment (1971) reflected such an outcome, and my findings have been frequently validated since, including by the actions of American military police guards at Abu Ghraib prison in Iraq in 2004.
Not long ago, I testified during the trial of one of the U.S. guards accused of mistreating prisoners in that incident. My message was this: It’s imperative for our society to acknowledge how situational forces can corrupt even good people into becoming perpetrators of evil. It is essential that all of us learn to recognize the situational and systemic determinants of antisocial behaviors. What’s more, I argue, we must actively seek to change this paradigm by encouraging and empowering individuals to make the difficult but moral decision—the decisive heroic choice—when faced with challenging circumstances.
By redefining these ideas for contemporary audiences, we can popularize and energize the concept of everyday heroism around the world. In doing so, HIP hopes to be the catalyst for individuals to transform their passivity and reluctance to come to the aid of those in need into the positive social action of heroism. Ideally, HIP will become a social movement that sows the seeds of heroism everywhere.
About the Author
Philip Zimbardo is professor emeritus of psychology at Stanford University and author of The Lucifer Effect: Understanding How Good People Turn Evil (Random House, 2007) and The Time Paradox: The New Psychology of Time That Will Change Your Life (Simon and Schuster, 2009) among hundreds of other books, chapters, and articles. For more information on the Heroic Imagination Project visit www.heroicimagination.org.
An author and anthropologist looks at the future of love.
Marriage has changed more in the past 100 years than it has in the past 10,000, and it could change more in the next 20 years than in the last 100. We are rapidly shedding traditions that emerged with the Agricultural Revolution and returning to patterns of sex, romance, and attachment that evolved on the grasslands of Africa millions of years ago.
Let’s look at virginity at marriage, arranged marriages, the concept that men should be the sole family breadwinners, the credo that a woman’s place is in the home, the double standard for adultery, and the concepts of “honor thy husband” and “til death do us part.” These beliefs are vanishing. Instead, children are expressing their sexuality. “Hooking up” (the new term for a one-night stand) is becoming commonplace, along with living together, bearing children out of wedlock, women-headed households, interracial marriages, homosexual weddings, commuter marriages between individuals who live apart, childless marriages, betrothals between older women and younger men, and small families.
Our concept of infidelity is changing. Some married couples agree to have brief sexual encounters when they travel separately; others sustain long-term adulterous relationships with the approval of a spouse. Even our concept of divorce is shifting. Divorce used to be considered a sign of failure; today it is often deemed the first step toward true happiness.
These trends aren’t new. Anthropologists have many clues to life among our forebears; the dead do speak. A million years ago, children were most likely experimenting with sex and love by age six. Teens lived together, in relationships known as “trial marriages.” Men and women chose their partners for themselves. Many were unfaithful—a propensity common in all 42 extant cultures I have examined. When our forebears found themselves in an unhappy partnership, these ancients walked out. A million years ago, anthropologists suspect, most men and women had two or three long-term partners across their lifetimes. All these primordial habits are returning.
But the most profound trend forward to the past is the rise of what sociologists call the companionate, symmetrical, or peer marriage: marriage between equals. Women in much of the world are regaining the economic power they enjoyed for millennia. Ancestral women left camp almost daily to gather fruits, nuts, and vegetables, returning with 60% to 80% of the evening meal. In the hunting and gathering societies of our past, women worked outside the home; the double-income family was the rule, and women were just as economically, sexually, and socially powerful as men. Today, we are returning to this lifeway, leaving in the “dustbin of history” the traditional, male-headed, patriarchal family—the bastion of agrarian society.
This massive change will challenge many of our social traditions, institutions, and policies in the next 20 years. Perhaps we will see wedding licenses with an expiration date. Companies may have to reconsider how they distribute pension benefits. Words like marriage, family, adultery, and divorce are likely to take on a variety of meanings. We may invent some new kinship terms. Who pays for dinner will shift. Matriliny may become common as more children trace their descent through their mother.
All sorts of industries are already booming as spin-offs of our tendencies to marry later, then divorce and remarry. Among these are Internet dating services, marital mediators, artists who airbrush faces out of family albums, divorce support groups, couples therapists, and self-improvement books. As behavioral geneticists begin to pinpoint the biology of such seemingly amorphous traits as curiosity, cautiousness, political orientation, and religiosity, the rich may soon create designer babies.
For every trend there is a countertrend, of course. Religious traditions are impeding the rise of women in some societies. In countries where there are far more men than women, due to female infanticide, women are likely to become coveted—and cloistered. The aging world population may cling to outmoded social values, and population surges and declines will affect our attitudes toward family life.
Adding to this mix will be everything we are learning about the biology of relationships. We now know that kissing a long-term partner reduces cortisol, the stress hormone. Certain genes in the vasopressin system predispose men to make less-stable partnerships. My colleagues and I have discovered that the feeling of romantic love is associated with the brain’s dopamine system—the system for wanting. Moreover, we have found that romantic rejection activates brain regions associated with profound addiction. Scientists even know some of the payoffs of “hooking up.” Casual sex can trigger the brain systems for romantic love and/or feelings of deep attachment. In a study led by anthropologist Justin Garcia, some 50% of men and women reported that they initiated a hook up in order to trigger a longer partnership; indeed, almost a third of them succeeded.
What will we do with all these data? One forward-thinking company has begun to bottle what our forebears would have called “love magic.” It sells Liquid Trust, a perfume that contains oxytocin, the natural brain chemical that, when sniffed, triggers feelings of trust and attachment.
We are living in a sea of social and technological currents that are likely to reshape our family lives. But much will remain the same. To bond is human. The drives to fall in love and form an attachment to a mate are deeply embedded in the human brain. Indeed, in a study I just completed on 2,171 individuals (1,198 men, 973 women) at the Internet dating site Chemistry.com, 84% of participants said they wanted to marry at some point. They will. Today, 84% of Americans wed by age 40—albeit making different kinds of marriages. Moreover, with the expansion of the roles of both women and men, with the new medical aids to sex and romance (such as Viagra and estrogen replacement), with our longer life spans, and with the growing social acceptance of alternative ways to bond, I believe we now have the time and tools to make more-fulfilling partnerships than at any time in human evolution. The time to love is now.
About the Author
Helen Fisher is a research professor in biological anthropology at Rutgers University and chief scientific advisor to Chemistry.com. Her most recent book is Why We Love: The Nature and Chemistry of Romantic Love (Henry Holt, 2004).