Thirty years ago, few people envisioned just how completely computers would be integrated into our everyday lives; today, they're everywhere. In Robot Ethics: The Ethical and Social Implications of Robotics, Patrick Lin (a science ethicist), Keith Abney (a philosopher of science), and George Bekey (a computer scientist) argue that the same is true of robots. Today, they are technological oddities; tomorrow, they'll be ubiquitous and indispensable. That's why, they write, we need "the emerging field of robot ethics."

In their introduction to the book, a collection of essays on robot ethics by philosophers, lawyers, and scientists, Lin, Abney, and Bekey point out that people have been thinking about the ethics of robotics for millennia. Isaac Asimov's three laws of robotics are only the most recent entry in a long tradition. "Homer," the editors write, "described in his Iliad the intelligent robots or 'golden servants' created by Hephaestus, the ancient Greek god of technology... Leonardo da Vinci conceived of a mechanical knight that would be called a robot today."
In the classic science-fiction film "2001", the ship's computer, HAL, faces a dilemma. His instructions require him both to fulfil the ship's mission (investigating an artefact near Jupiter) and to keep the mission's true purpose secret from the ship's crew. To resolve the contradiction, he tries to kill the crew.

Already, fascinating moral questions are emerging. If a robot malfunctions and harms someone, who is responsible -- the robot's owner, its manufacturer, or the robot itself? Under what circumstances can robots be put in positions of authority, with human beings required to obey them? Is it ethically wrong for robots to prey upon our emotional sensitivities -- should they be required to remind us, explicitly or implicitly, that they are only machines? How safe do robots need to be before they're deployed in society at large? Should cyborgs -- human beings with robot parts -- have a special legal status if their parts malfunction and hurt someone? If a police robot uses its sensors to perform a surveillance operation, does that constitute a search? (And can the robot decide if there is probable cause?)
Some of these questions are speculative; others are uncomfortably concrete. Take this example involving (what else?) robot sex, from an essay by David Levy:
Upmarket sex dolls were introduced to the Korean public at the Sexpo exposition in Seoul in August 2005, and were immediately seen as a possible antidote to Korea's Special Law on Prostitution that had been placed on the statute books the previous year. Before long, hotels in Korea were hiring out "doll experience rooms" for around 25,000 won per hour ($25).... This initiative quickly became so successful at plugging the gap created by the antiprostitution law that, before long, establishments were opening up that were dedicated solely to the use of sex dolls... These hotels assumed, quite reasonably, that there was no question of them running foul of the law, since their dolls were not human. But the Korean police were not so sure. The news website Chosun.com... reported, in October 2006, that the police in Gyeonggi Province were "looking into whether these businesses violate the law . . . Since the sex acts are occurring with a doll and not a human being, it is unclear whether the Special Law on Prostitution applies."