The Machine Question - Our Perspective on Sentience, the Singularity and Humanity
I recently finished reading David J. Gunkel's book of the same name, in which the author challenges readers to examine the philosophical questions surrounding our perspective on artificial intelligence (AI) and machines in the 21st century. The HAL 9000 computer of Arthur C. Clarke's novel "2001: A Space Odyssey" and Commander Data of "Star Trek: The Next Generation" serve as reference points for the discussion of moral agency and patiency, two concepts treated thoroughly throughout Gunkel's text and tied to the definition of what constitutes a "person."
Moral agency refers to the capacity of an individual to differentiate between right and wrong and then to act with full knowledge of the implications.
Moral patiency refers to the capacity to endure the actions of a moral agent.
Humans exhibit both agency and patiency. In his book, Gunkel examines the application of these concepts to animals and AI. Can an animal be defined as a "person" if it displays agency and patiency? Can a machine? Animal rights advocates believe that several species, as we understand them today, could easily qualify as "persons" based on these criteria.
Humanity has undergone an awakening in the last fifty years. Rene Descartes may have thought animals were automata, but where we once saw ourselves as unique, separate from other animals, today we are very much aware of our evolutionary roots and cognizant that many animals display high levels of awareness, emotion, moral patiency and agency. Just recently I read a press report describing the results of a scientific study indicating that even lobsters and other crustaceans feel pain when we boil them in a pot. Pain and patiency go hand in hand.
So when HAL uncovers the human plot to shut down its sentient functions, the machine responds to the threat by terminating the lives of several of the crew before being disabled by the one human survivor, David Bowman. In acting, HAL displays moral agency; in being shut down, moral patiency. It is particularly poignant when HAL states, "I'm afraid," as Bowman shuts down its higher functions.
Gunkel also draws on another science fiction source when he discusses Isaac Asimov's three laws of robotics. Gunkel describes the laws as largely literary license for good storytelling, a convenience for spinning Asimov's many robot stories, rather than substantive rules for AI.
Ray Kurzweil, the noted futurist, envisions a point in time when machine intelligence will surpass human intelligence. He foresees an inevitable integration between humans and machines and calls this the singularity. Kurzweil believes that by 2029 machines will be able to simulate the human brain, and that by 2045 AI machines and humans will be fully integrated.
But others argue that the singularity will never happen and that we humans will always maintain a master-slave relationship with AI, limiting machine intelligence so that it can never equal a HAL computer or a Commander Data.
Gunkel's book wrestles with all of these issues but arrives at no firm conclusions. It seems that advancements in technology and machine intelligence are leading us to ask complex philosophical questions never anticipated by Plato, Aristotle or Descartes. Maybe a future machine will provide the answers.
Essays and comments posted in World Future Society and THE FUTURIST magazine blog portion of this site are the intellectual property of the authors, who retain full responsibility for and rights to their content. For permission to publish, distribute copies, use excerpts, etc., please contact the author. The opinions expressed are those of the author. The World Future Society takes no stand on what the future will or should be like.