Vice President, Rational Continuous Engineering Solutions
IBM Software Group
Robots have been with us for a long time. Surprisingly, there are references dating back to Greek engineers in 270 BC! But recent advances in robotics are bringing them into our daily lives in all kinds of interesting ways. Domestic robots, for example, are already available in stores. They may not be up to the capabilities of Rosie the Robot (from the Jetsons), but they handle quite a few day-to-day tasks like cleaning floors, feeding pets, and monitoring our home security.
But what is truly amazing is the rapid improvement in robotic capabilities. This example, from the Swiss Federal Institute of Technology in Lausanne, shows a robot that uses sensors and vision along with machine learning to learn how to catch unusual objects in mid-flight. It is mesmerizing to watch, and it is easy to imagine what will be possible in the future.
But with all these capabilities, it does make me wonder about the ethics of robots that make decisions. The classic Three Laws of Robotics, first drafted in 1942 by Isaac Asimov, state:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
I have to wonder if that is sufficient to protect us. A recent article in Popular Science asks the broader ethical question: "Should A Robot Sacrifice Your Life To Save Two?" This is relevant to all automated machines, including self-driving cars. The article paints a grim picture when it considers whether a self-driving car should sacrifice its driver to save the lives of other drivers or passengers. It is easy to imagine the debates that will ensue on topics like this as 'thinking' machines take over more and more of our daily tasks.