The Dawn of the Postliterate Age

Information technology, cybernetics, and artificial intelligence may render written language “functionally obsolete” by 2050.
Originally published in THE FUTURIST magazine, November-December 2009

By Patrick Tucker

For the literate elite — which includes everyone from Barack Obama to this spring’s MFA graduates — the gnashing of teeth and rending of garments over the demise of reading has become obligatory theater. Poets, writers, and teachers alike stand over the remains of a once-proud book culture like a Greek chorus gloomily crowded around a fallen king. How can it be that, between 1982 and 2007, reading declined by nearly 20% for the overall U.S. population and 30% for young adults aged 18–24, or that 40 million Americans read at the lowest literacy level?

The answer that rises most immediately to meet this anguish is: the image makers. Television, the Pied Piper of the last century, has been joined in its march by video games, YouTube, and an assortment of other visual tempters that are ferrying Western culture further away from the nourishing springs of literature. The public appetite for images — scenes of war, staged or otherwise, music videos, game shows, celebrities roaming the streets of Los Angeles in a daze — seems both limitless in scope and apocalyptic in what it portends for the future.

To the literary eye, the culture of the image has grown as large as Godzilla, as omnipresent as an authoritarian government, and as cruel and erratic as the Furies. In our rush to blame the moving picture for the state of our cultural disarray, we’ve overlooked the fact that — as a carrier of data, thoughts, ideas, prayers, and promises — the image is neither as functional nor as versatile as text.

The real threat to the written word is far more pernicious. Much like movie cameras, satellites, and indeed television, the written word is, itself, a technology, one designed for storing information. For some 6,000 years, the human mind was unable to devise a superior system for holding and transmitting data. By the middle of this century, however, software developers and engineers will have remedied that situation. So the greatest danger to the written word is not the image; it is the so-called “Information Age” itself.

Texting: The Brief, Golden Age of Internet Communication

Consider, first, the unprecedented challenges facing traditional literacy in today’s Information Age. The United States spends billions of dollars a year trying to teach children how to read, and often fails. Yet, mysteriously, declining literacy and functional nonliteracy have yet to affect technological innovation in any obvious way. New discoveries in science and technology are announced every hour; new and ever-more-complicated products hit store shelves (or virtual store shelves) all the time. Similarly, human creation of information — in the form of data — has followed a fairly predictable trend line for many decades, moving sharply upward with the advent of the integrated circuit in the mid-twentieth century.

The world population is on track to produce about 988 billion gigabytes of data per year by 2010. We are spending less time reading books, but the amount of pure information that we produce as a civilization continues to expand exponentially. It is not impossible that these trends are linked, and that the rise of the latter is driving the decline of the former.

In a July 2008 Atlantic article entitled “Is Google Making Us Stupid?” Nicholas Carr beautifully expresses what so many have been feeling and observing silently as society grapples with the Internet and what it means for the future:

“Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory…. The deep reading that used to come naturally has become a struggle.… My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski.”

Information Age boosters such as Steven Johnson (Everything Bad Is Good for You), Don Tapscott (Grown Up Digital), and Henry Jenkins (Convergence Culture) argue that information technology is creating a smarter, more technologically savvy public.

These authors point out that the written word is flourishing in today’s Information Era. But the Internet of 2009 may represent a brilliant but transitory Golden Age. True, the Web today allows millions of already well-read scholars to connect to one another and work more effectively. The Internet’s chaotic and varied digital culture is very much a product of the fact that people who came by their reading, thinking, and research skills during the middle of the last century are now listening, arguing, debating, and learning as never before.

One could draw reassurance from today’s vibrant Web culture if the general surfing public, which is becoming more at home in this new medium, displayed a growing propensity for literate, critical thought. But take a careful look at the many blogs, comment threads, MySpace pages, and online conversations that characterize today’s Web 2.0 environment. One need not have a degree in communications (or anthropology) to see that the back-and-forth communication that typifies the Internet is only nominally text-based. Some of today’s Web content is indeed innovative and compelling in its use of language, but very little of it has much in common with traditionally published, edited, and researched printed material.

This type of content generation, this method of “writing,” is not only subliterate; it may actually undermine the literary impulse. As early as 1984, the late linguist Walter Ong observed that teletype writing displayed speech patterns more common to ancient oral cultures than to print cultures (a fact well documented by Alex Wright in his book Glut: Mastering Information Through the Ages). The tone and character of electronic communication, he observed, was also significantly different from that of printed material: It was more conversational, more adolescent, and very little of it conformed to basic rules of syntax and grammar. Ong argued compellingly that the two modes of writing are fundamentally different. Hours spent texting and e-mailing, according to this view, do not translate into improved writing or reading skills. New evidence bears this out. A recent report from the Organisation for Economic Co-operation and Development found that text messaging among teenagers in Ireland was having a highly negative effect on their writing and reading skills.

Cybernetics and the Coming Era of Instantaneous Communication

Consider the plight of the news editor or book publisher trying to sell carefully composed, researched, and fact-checked editorial content today, when an impatient public views even Web publishing as plodding. Then imagine the potential impact of cybernetic telepathy.

In the past few years, amazing breakthroughs involving fMRI, or functional magnetic resonance imaging — with potential ramifications for education — have become an almost daily occurrence. The fMRI procedure uses non-ionizing radiation to take detailed pictures of soft tissue (specifically the brain) that tends to show up as murky and indistinct on computed tomography scans. The scanner works like a slow-motion movie camera, taking new scans continuously and repeatedly. Instead of observing movement the way a camcorder would, the scanner watches how oxygenated hemoglobin (blood flow) is diverted throughout the brain. If you’re undergoing an fMRI scan and focusing one portion of your brain on a specific task, like exerting your anterior temporal lobe to pronounce an unfamiliar word, that part of the brain will become more active and draw more oxygenated blood, a change visible to the scanner.
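
To see that logic in miniature, consider a toy sketch in Python. It simulates a single spot in the brain whose signal rises during “task” scans relative to “rest” scans and asks whether the rise is statistically reliable. Every number here is invented for illustration; real fMRI analysis is far more involved.

```python
# Toy version of the fMRI logic described above: a brain region doing
# work draws more oxygenated blood, so its signal rises during "task"
# scans relative to "rest" scans. All numbers are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rest = rng.normal(loc=100.0, scale=2.0, size=60)  # 60 resting scans
task = rng.normal(loc=103.0, scale=2.0, size=60)  # 60 task scans

# Is the task-block signal reliably higher than rest? A simple t-test
# stands in for the statistical maps real analyses produce.
t_stat, p_value = stats.ttest_ind(task, rest)
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")
if t_stat > 0 and p_value < 0.05:
    print("Spot shows task-related activation (more oxygenated blood).")
```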

In 2005, researchers with the Scientific Learning Corporation used fMRI to map the neurological roots of dyslexia and designed a video game called Fast ForWord based on their findings. The project was “the first study to use fMRI to document scientifically that brain differences seen in dyslexics can be normalized by neuroplasticity-based training. Perhaps of greater relevance to educators, parents, and the children themselves are the accompanying significant increases in scores on standardized tests that were also documented as a result of the intervention,” neuroscience experts Steve Miller and Paula Tallal wrote in 2006 in School Administrator.

Fast ForWord is likely the forerunner of many products that will use brain mapping to market education “products” to schools or possibly to parents, a commercial field that could grow to include not just software, but also chemical supplements or even brain implants. In much the same way that Ritalin improves focus, fMRI research could lead to electronic neural implants that allow people to process information at the speed of electric currents — a breakthrough possible through the emergent field of cybernetics.

Speculative nonsense? To Kevin Warwick, a professor of cybernetics at the University of Reading in the United Kingdom, our cybernetic future is already passé. In 2002, Warwick had an experimental Internet-ready microchip surgically implanted in his arm. Building on the success of widely available implants like the cochlear implants that treat certain types of deafness, Warwick’s implant research dealt with enhancing human abilities. In a December 2006 interview with I.T. Wales, he discussed an experiment he took part in with his wife, wherein the couple actually traded neural signals — a crude form of telepathy.

Warwick wore an electrode implant that linked his nervous system (not his actual brain) directly to the Internet. His wife, Irina, had a similar implant, and the two were able to trade signals over the Internet connection.

“When she moved her hand three times,” Warwick reported, “I felt in my brain three pulses, and my brain recognized that my wife was communicating with me.”

In April 2009, a University of Wisconsin–Madison biomedical engineering doctoral student named Adam Wilson posted a status update to the social networking site Twitter via electroencephalography, or EEG. EEG records the electrical activity that the brain’s neurons emit during thought. Wilson, seated in a chair with the EEG cap on his head, looked at a computer screen displaying an array of numbers and letters. The computer highlighted the letters in turn, and when the computer highlighted a letter Wilson wished to use, his brain would emit a slightly different electrical pulse, which the EEG would then pick up to select that letter.

“If you’re looking at the ‘R’ and the other letters flash, nothing happens,” said Wilson. “But when the ‘R’ flashes, your brain says, ‘Hey, wait a minute. Something’s different about what I was just paying attention to.’ And you see a momentary change in brain activity.”

Wilson’s message to the world of Twitter? “Using EEG to send tweet.”
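
The selection loop Wilson describes can be sketched in a few lines of Python. This is a cartoon of a P300-style speller, not a real brain-computer interface: a single simulated response per flashed letter and a largest-response-wins rule stand in for the averaging and trained classifiers actual systems use.

```python
# Cartoon of the EEG speller logic described above. Flash each letter;
# the attended letter evokes a slightly larger simulated "response,"
# and the strongest response selects the letter.
import random

LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def simulated_response(flashed, attended):
    """Fake EEG amplitude: the attended letter evokes a larger response."""
    return random.gauss(1.0, 0.2) + (1.0 if flashed == attended else 0.0)

def select_letter(attended):
    # Flash each letter in turn, record the response it evoked,
    # and pick the letter whose flash produced the strongest one.
    responses = {ch: simulated_response(ch, attended) for ch in LETTERS}
    return max(responses, key=responses.get)

# Spell a short message one attended letter at a time.
print("".join(select_letter(ch) for ch in "TWEET"))  # usually "TWEET"
```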

While advances in cybernetics and the decline of literary culture appear, at first glance, completely unrelated, research into cyber-telepathy has direct ramifications for the written word and its survivability. Electronic circuits mapped out in the same pattern as human neurons could, in decades ahead, reproduce the electrical activity that occurs when our neurons fire naturally. Theoretically, such circuits could allow parts of our brain to communicate with one another more efficiently, possibly allowing humans to access data from the Web without looking it up or reading it.
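
Circuits patterned on neurons already have a humble software analogue: the artificial neuron, a weighted sum of inputs passed through a threshold. A minimal sketch, with weights and inputs invented for illustration, shows the fire-or-don’t-fire behavior such hardware would reproduce.

```python
# A software analogue of the neuron-patterned circuits described above:
# a weighted sum of inputs passed through a threshold. Weights, inputs,
# and the threshold are invented for illustration.
def artificial_neuron(inputs, weights, threshold=1.0):
    """Fire (return 1) if the weighted input signal crosses the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# 0.9 * 1.2 + 0.3 * 0.5 = 1.23 crosses the threshold: the neuron fires.
print(artificial_neuron([0.9, 0.3], [1.2, 0.5]))  # -> 1
```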

The advent of instantaneous brain-to-brain communication, while inferior to the word in its ability to communicate intricate meaning, may one day emerge as superior in terms of simply relaying information quickly. The notion that the written word and the complex system of grammatical and cultural rules governing its use would retain its viability in an era where thinking, talking, and accessing the world’s storehouse of information are indistinguishable seems uncertain at best.

Google, AI, and Instantaneous Information

The advent of faster and more dexterous artificial intelligence systems could further erode traditional literacy. Take, for example, one of the most famous AI systems, the Google search engine. According to Peter Norvig, director of research at Google, the company is turning “search” (the act of googling) into a conversational interface. In an interview with VentureBeat, Norvig noted that “Google has several teams focused on natural language and dozens of Googlers with a PhD in the field, including myself.”

AI watchers predict that natural-language search will replace what some call “keywordese” within five years. Once search evolves from an awkward word hunt — guessing at the key words that might be in the document you’re looking for — to a “conversation” with an AI entity, the next logical step is vocal conversation with your computer. Ask a question and get an answer. No reading necessary.
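
The gap between “keywordese” and a conversation is easy to caricature in code. The toy Python function below runs the translation in the wrong direction, reducing a natural question to the bare keywords an old-style search box expects, purely to illustrate the gap conversational search is meant to close. The stopword list and example query are invented for illustration.

```python
# Caricature of "keywordese": strip a natural question down to the bare
# key words an old-style search box expects.
STOPWORDS = {"what", "is", "the", "of", "a", "an", "in", "how", "do", "i"}

def to_keywordese(question: str) -> str:
    """Reduce a question to the keyword hunt conversational search replaces."""
    words = question.lower().rstrip("?").split()
    return " ".join(w for w in words if w not in STOPWORDS)

print(to_keywordese("What is the boiling point of water in Denver?"))
# -> "boiling point water denver"
```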

Barney Pell, whose company Powerset was also working on a conversational-search interface before it was acquired by Microsoft, dismissed the notion that a computerized entity could effectively fill the role of text, but acknowledged that breakthroughs of all sorts are possible.

“The problem with storing raw sounds is that it’s a sequential access medium; you have to listen to it. You can’t do other things in parallel,” said Pell during our 2007 discussion. “But if you have a breakthrough where auditory or visual information could connect to a human brain in a way that bypasses the processes of decoding the written text, where you can go as fast and slow as you want and have all the properties that textual written media supports, then I could believe that text could be replaced.”

The likelihood of that scenario depends on whom you ask, but if technological progress in computation is any indication, we are safe in assuming that an artificial intelligence entity will eventually emerge that allows individuals to take in information as quickly or as slowly as they could by reading written language.

Will “HAL” Make Us Stupid?

How can the written word — literary culture — survive the advent of the talking, all-knowing, handheld PC? How does one preserve a culture built on a 6,000-year-old technology in the face of super-computation? According to many of the researchers who are designing the twenty-first century’s AI systems, the answer is, you don’t. You submit to the inexorable march of progress and celebrate the demise of the written word as an important step forward in human evolution.

When confronted by the statistic that fewer than 50% of high-school seniors could differentiate between an objective Web site and a biased source, Norvig replied that he did perceive it as a problem, and astonishingly suggested that the solution was to get rid of reading instruction altogether.

“We’re used to teaching reading, writing, and arithmetic; now we should be teaching these evaluation skills in school,” Norvig told me. “Some of it could be just-in-time education. Search engines themselves should be providing clues for this.”

Norvig is not an enemy of written language; he’s even contributed several pieces to the McSweeney’s Web site, a favorite among bibliophiles. Nor is he a starry-eyed technologist harboring unrealistic views of technology’s potential. Still, this cavalierly stated proposal that we might simply drop the teaching of “reading, writing, and arithmetic” in favor of search-engine-based education speaks volumes about how little regard some of the world’s top technologists have for our Victorian education system and its artifacts, like literary culture.

In the coming decades, lovers of the written word may find themselves ill-equipped to defend the seemingly self-evident merits of text to a technology-oriented generation that prefers instantaneous data to hard-won knowledge. Arguing the artistic merits of Jamesian prose to a generation that, in coming years, will rely on conversational search to find the answer to any question will likely prove a frustrating, possibly humiliating endeavor.

If written language is merely a technology for transferring information, then it can and should be replaced by a newer technology that performs the same function more fully and effectively. But it’s up to us, as the consumers and producers of technology, to insist that the would-be replacement demonstrate authentic superiority. It’s not enough for new devices, systems, and gizmos to simply be more expedient than what they are replacing — as the Gatling gun was over the rifle — or more marketable — as unfiltered cigarettes were over pipe tobacco. We owe it to posterity to demand proof that people’s communications will be more intelligent, persuasive, and constructive when they occur over digital media, proof that illiteracy, even in an age of great technological capability, will improve people’s lives.

As originally proposed by futurist William Crossman, the written word will likely be rendered a functionally obsolete technology by 2050. This scenario exists alongside another future in which young people reject many of the devices, networks, and digital services that today’s adults market to them so relentlessly. Being more technologically literate, they develop the capacities to resist the constant push of faster, cheaper, easier information and select among the new and the old on the basis of real value. If we are lucky, today’s young people will do what countless generations before them have done: defy authority.

About the Author

Patrick Tucker is the senior editor of THE FUTURIST magazine and director of communications for the World Future Society.