The Secret Life of Data in the Year 2020

By Brian David Johnson

A futurist for Intel shows how geotags, sensor outputs, and big data are changing the future. He argues that we need a better understanding of our relationship with the data we produce in order to build the future we want.

My job as Intel’s futurist is to look 10 to 15 years out and model how people will act and interact with devices in the future. I explore a vision for all computational devices. Basically if it has a chip in it, it’s within my view. The driving force behind this work is incredibly pragmatic. The process of designing, developing, manufacturing, and deploying our platforms takes around 10 years. It’s of vital business importance today for Intel to understand the landscape a decade from now. That’s why in 2010 we started work on 2020.

When you look to 2020 and beyond, you can’t escape big data. Big data—extremely large sets of data related to consumer behavior, social network posts, geotagging, sensor outputs, and more—is a big problem. Intel is at the forefront of the big data revolution and all the challenges therein. Our processors are how data gets from one place to another. If anyone should have insight into how to make data do things we want it to do, make it work for the future, it should be Intel.

That’s where I come in. I model what it will feel like to be a human 10 years from now. I build models that explore what it will feel like to experience big data as an average person. An integral part of this work is collaborating with Genevieve Bell. She’s an Intel fellow, a cultural anthropologist by training, and one of the best minds working in this area. Together, we’ve been exploring 2020 through the lens of what we call “the Secret Life of Data.”

For most people in 2020, it will feel like data has a life of its own. With the massive number of sensors littering our lives and landscapes, we’ll have information spewing from everywhere. Our cars, our buildings, and even our bodies will expel an exhaust of data, an incredible volume of 1s and 0s.

Why will most people think that their data has a life of its own? Well, because it’s true. We will have algorithms talking to algorithms, machines talking to machines, machines talking to algorithms, sensors and cameras gathering data, and computational power crunching through that data, then handing it off to more algorithms and machines. It will be a rich and secret life separate from us, and to me an incredibly fascinating one.

But as we begin to build the Secret Life of Data, we must always remember that data is meaningless all by itself. The 1s and 0s are useless and meaningless on their own. Data is only useful and indeed powerful when it comes into contact with people.

This brings up some interesting questions and fascinating problems to be solved from an engineering standpoint. When we are architecting these algorithms, when we are designing these systems, how do we make sure they have an understanding of what it means to be human? The people writing these algorithms must have an understanding of what people will do with that data. How will it fit into their lives? How will it affect their daily routine? How will it make their lives better?

The Mysterious Resident of Glencoe and Wren Roads

The intersection of Glencoe and Wren

At Intel, solving the problem of how data will interact with other data in the future is not an esoteric pursuit. When I talk about making people’s lives better and having a deep understanding of how data will make their lives better, I’m not speaking in the abstract. I work with the people who are writing those algorithms and the people building the systems. Take Rita, for instance, who just had a baby last year. Rita did an experiment recently that will show you exactly what I mean when I say that algorithms need to understand people.

To test out this approach, Rita developed a prototype and programmed a personal tracking system. She allowed her smartphone to track and record all of her movements throughout her day. She wanted to test how the software understood who she was and what she did with her day.

After allowing her device and the software to track her every movement for a month, she checked out the report. The sensors and algorithms had learned some very specific information about her. The system told her that she “lived” in three primary places. The first location was spot on. It showed that she lived in her own home. It even showed the location on a map. Okay, that was right.

Second, it reported that she lived on the Jones Farm Campus of Intel. Okay, that was correct, as well. Rita spends most of her time at work when she’s not at home. But the third data point really enraged Rita.

The third data point showed that Rita lived at the intersection of Glencoe and Wren roads. This really made her mad. I didn’t completely understand. I asked why. She showed me on the map.

“There’s nothing at Glencoe and Wren,” she said. “It’s a stop sign in the middle of nowhere. All it had to do was look at any mapping program, and it would show nothing there. How could I live there if there is no building there? It’s ridiculous. We need to program these things to understand what it really means to be human. Just because I stopped in this place twice a day on my way to and from work doesn’t mean I live there. It’s so simple to fix. We just have to understand how people really live and not base it on just data points. People are the most important data points.”
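Rita’s complaint can be illustrated with a toy sketch. This is not Intel’s actual system; the visit log and the four-hour threshold are hypothetical assumptions, made up purely to show how ranking places by visit frequency flags a stop sign, while weighting by dwell time does not:

```python
from collections import defaultdict

def significant_places(visits, min_hours=4.0):
    """Estimate where someone 'lives' from (place, dwell_hours) visit logs.

    A naive frequency count flags any spot visited often -- even a stop
    sign passed twice a day. Requiring accumulated dwell time instead
    keeps brief stops from registering as residences.
    """
    counts = defaultdict(int)     # how many times each place was visited
    dwell = defaultdict(float)    # total hours spent at each place
    for place, hours in visits:
        counts[place] += 1
        dwell[place] += hours
    by_frequency = sorted(counts, key=counts.get, reverse=True)
    by_dwell = [p for p, h in sorted(dwell.items(), key=lambda kv: -kv[1])
                if h >= min_hours]
    return by_frequency, by_dwell

# A month of hypothetical traces: nights at home, workdays at the office,
# and a stop sign passed twice each workday for about 30 seconds.
visits = ([("home", 13.0)] * 30 +
          [("Jones Farm Campus", 9.0)] * 22 +
          [("Glencoe & Wren", 30 / 3600)] * 44)

by_freq, by_dwell = significant_places(visits)
```

With this log, the stop sign tops the frequency ranking (44 visits against 30 at home), but its total dwell time is under a minute, so it disappears once the algorithm asks how long a person actually stays somewhere, which is closer to what “living” there means.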

That really is my challenge: How do we come up with the requirements and problems to build into the 2020 platform? The Secret Life of Data research and development work I’ve been doing with Genevieve Bell tells us that one approach is to start looking at data as if it were a person.

The Algorithm: More Human than a Human?

In the era of big data, how do we make sense of this massive amount of information? We need new ways of conceptualizing and thinking about data that move beyond the traditional binary view we have taken for the last 50 years.

If we begin to think of data as having a life of its own, and we are programming systems to enable them to have this life, then ultimately we are designing this data and the algorithms that process it to be human. One approach is to think about data as having responsibilities.

When I say responsibilities, I’m not just talking about the responsibility to keep the data safe and secure, but also a responsibility to deliver the data in the right context—to tell the story right. It’s akin to making sure that a person understands your family history, the subtle nuances of your father and grandmother and great-grandmother. It is the responsibility of history, and it cannot be taken lightly.

The research and development that Bell and I have been doing suggests that the only way to make sense of all this complex information is to view data, massive data sets, and the algorithms that utilize big data as human. Data doesn’t spring fully formed from nowhere. Data is created, generated, and recorded. And the unifying principle behind all of this data is that it was all created by humans. We create the data, so essentially our data is an extension of ourselves, an extension of our humanity.

Ultimately in these systems, our data will need to start interacting with other data and devices. There will be so much data and so many devices that our data will need to take on a life of its own just to be efficient and not drive us crazy. But how do these systems understand and examine who we and our data are in the complex reality of big data that is basically too big for us to understand? This is where science fiction, androids, and Philip K. Dick and William Gibson come in.

Science Fiction and the Literary Origins of Android Data

In 1968, Philip K. Dick wrote the novel Do Androids Dream of Electric Sheep? The book is a meditation on what it means to be human and on how the lines between humanity and machines can become hazy—if not completely impossible to determine. The book was eventually adapted into the science-fiction masterpiece Blade Runner by director Ridley Scott.

Just a few years after writing Androids, Dick further developed his ideas about humanity and the constructs that we build. He gave a speech called “The Android and the Human” at the University of British Columbia in February 1972, where he explored his new way of thinking: “I have, in some of my stories and novels, written about androids or robots or simulacra—the name doesn’t matter; what is meant is artificial constructs masquerading as humans.… Now, to me, that theme seems obsolete. The constructs do not mimic humans; they are, in many deep ways, actually human already.”

Thirty-six years later, another science-fiction legend, William Gibson, gave a speech at the Vancouver Institute called “Googling the Cyborg.” Gibson is best known for popularizing the cyberpunk movement in books like Neuromancer (Ace, 1984) and Pattern Recognition (Putnam, 2003). In his speech, Gibson contemplated what it means to be a cyborg. He had a good time poking fun at popular culture’s images of the man–machine hybrid with its carnal jacks, and he challenged his audience to think of the cyborg in a different way.

Gibson said he believes that the human and machine union has already happened, and it is called the Internet. He sees the Internet as “the largest man-made object on the planet” and says that the “real-deal cyborg will be deeper and more subtle and exist increasingly at the particle level.”

Gibson’s coupling of our humanity and the humanity of our data gives us another image of our constructs. We produce data and we write algorithms, and as we do this at ever-increasing scale over the next decade, we will need to imagine who we are, and who our data and our algorithms might be, in a very different light.

The Android Is Your Data

Using these science-fiction visions, we can begin to develop a way to conceptualize the data. From the view of this narrative, our data—the data we created—becomes a kind of simulacrum of ourselves. Like Philip K. Dick’s androids and William Gibson’s cyborgs, data becomes a way to embody who we are, but at the same time it remains external. It allows us to examine who we are and also what we want to do with these systems. As we begin to architect these systems, often the reality is too hard to handle: It’s too complex for us to make any meaningful design decisions. We need these representations, these androids, to be our proxies.

Intel futurist Brian David Johnson

By thinking about data, large data sets, and the algorithms that make use of this information as human—or, in Dick’s language, androids—we are giving these complex systems a kind of narrative and characteristics that help programmers, system architects, and even regular folks to understand data’s “bigness.”

When we understand what we want from the algorithms, these systems become less complex, because we can see them not only as an extension of ourselves but also as a collection of human entities. If we understand them as human, then we know how to talk to them. We know how to ask for things. We know what to expect. We can hold them responsible, and we can even have an understanding of how far we can trust them.

But this humanness doesn’t really look like the humanness of Dick or even Gibson. This humanness is not trying to trick us into thinking that it is human like us, and it doesn’t exist on the particle level. Today, our understanding of humanity and intelligence is being challenged. Every year we get new products with increasing intelligence. These range from the amazing to the downright funny, but the reality of these systems looks more like a Furby toy having a conversation with the iPhone’s Siri service than two superhuman androids having a chat.

This concept of humanity is more about our relationships to other people, other pieces of data, and the complex web of relationships that make up our very culture. Humanity shouldn’t really be defined by Alan Turing’s test (which asks whether a machine can fool a person, over teletype, into believing it is human) or even by Dick’s Voight-Kampff empathy test. We define humanity by our relationship to others—the connections we have to other people and their data.

And one day, humanity may be defined by how our personal data interacts with and is connected to other people’s data. We have to come to grips with the idea that this interconnected humanness that moves from data to data, algorithm to algorithm, might happen without us knowing anything about it. It very well could happen in the Secret Life of Data.

Do Algorithms Dream of Electric Sheep?

I think that there is something lovely about the idea that our data could have a life of its own. For too long, computers, computational power, and even software have been thought of as cold mathematical pursuits. In reality, the digital world is simply an extension of our world. Data and computational power are, at their core, human. The new models that Genevieve Bell and I have been developing give us a way to architect a future that is both more efficient and more human. And I think that’s awesome.

To answer the question “Do algorithms dream of electric sheep?” becomes complicated. First we can say “Yes,” because we programmed them to do so. Next we could say “No,” because the complex neurological structures of the human dream state will not be modeled in algorithms or software anytime soon. But finally, we might need to say “Maybe,” and we will just have to wait and ask them.

These questions of how we interact with data, and how data interacts with itself, may seem removed from our daily experience right now. That’s only because we’ve already come to expect our relationship with information to be a seamless exchange of signals that brings us closer to what we want. When we swipe a fare card to enter a subway, we expect the metal turnstile to turn for us. When we check in on Facebook, we expect our status update to change instantly. When we enter our credit-card numbers into a Web site like Amazon, we expect that the product we purchased is on its way, that our account has already been debited, and that a record of the transaction has already been stored in a database to provide us with more recommendations at a later date. We only truly notice how much we interact with data when something goes wrong, when the metal subway turnstile doesn’t spin.

But this current state of affairs can’t last. Data is becoming too big. We need to start paying attention to the data we create and what we want it to do for us.

What I find incredibly exciting about this vision for the future is that it is real. Big data is coming, and in many instances it’s already here. So it’s not a matter of if this will happen; it’s not even a question of when. For me, the real question is how. How do we want this to happen? What do we want it to do for us? How will it make the lives of every person on the planet better?

In 2010, Intel chief technology officer Justin Rattner said, “Science and technology have progressed to the point where what we build is only constrained by the limits of our own imaginations.” Imagining what the secret life of data could be is the real challenge; once we’ve done that, then all we have to do is go and build it.

That’s just engineering. The difficult part is changing the story we tell ourselves about the future we’re going to live in. If we can do that, then we can change the future.

About the Author

Brian David Johnson is a futurist at Intel Corporation, where he is developing an actionable vision for computing in 2020. He speaks and writes extensively about future technologies in articles and scientific papers as well as science-fiction short stories and novels (Science Fiction Prototyping: Designing the Future with Science Fiction, Screen Future: The Future of Entertainment Computing and the Devices We Love, Fake Plastic Love, and Nebulous Mechanisms: The Dr. Simon Egerton Stories).

You can meet Johnson at WorldFuture 2012, the annual conference of the World Future Society, taking place in Toronto this July.