The rise of intelligent robots is inevitable, but we must not rush it, caution this volume’s 27 authors, whose areas of expertise range from philosophy and global affairs to cybernetics and computer programming. The authors call for serious societal discussion of how to ensure that thinking robots will not harm us and, likewise, that we will not misuse them.
For instance, militaries around the world are developing armed vehicles that can kill human targets without human instruction. Many military leaders worry that such robots will struggle to distinguish combatants from innocent civilians. A country that possesses these machines might also be more inclined to go to war.
On the civilian side, who is liable when a self-driving vehicle causes a traffic accident, or when a self-propelled lawnmower runs over someone’s foot? In either case, the manufacturer might claim it is not at fault: the robot made a “mistake.”
Will “intelligent” robots know right from wrong? We will need to program morality into them and teach them the nuances of everyday situations.
There is also the matter of robot companions. Consumers could buy personal robots to be their caregivers, pets, servants, and even sexual partners. How will we feel when people befriend these machines, or even fall in love with them? Also, what rights would personal robots possess?
The authors encourage us to consider these questions while robotic intelligence is still in development. If we establish social mores, professional codes, and regulations in advance, we can help ensure that intelligent robots do more good than harm.
Robot Ethics combines technology, philosophy, and sociology into one deep and varied discussion. Enthusiasts of any of these fields, and anyone else who is just curious about where artificial intelligence is heading, will find much to like.