The idea that robots should be equipped with an "ethical black box" that would enable scientists to check their ethical decision-making has been floated by researchers ahead of a conference at the University of Surrey.
Alan Winfield, professor of robot ethics at the University of the West of England, and Marina Jirotka, professor of human-centred computing at Oxford University, argue that robotics firms should follow the example set by the aviation industry, which brought in black boxes and cockpit voice recorders so that accident investigators could understand what caused planes to crash and ensure that crucial safety lessons were learned. Installed in a robot, an ethical black box would record the robot's decisions, the basis for making them, its movements, and information from sensors such as cameras, microphones and rangefinders.
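To make the proposal concrete, here is a minimal sketch of what such a recorder might look like in Python. It is not Winfield and Jirotka's design; the names (BlackBoxEntry, BlackBoxRecorder) and the JSON-lines file format are illustrative assumptions. The idea is simply an append-only, write-through log that pairs each decision with its stated basis and the sensor readings behind it, so the record survives whatever happens to the robot next.

import json
import time
from dataclasses import dataclass, asdict
from typing import Any

# Hypothetical record structure: what the robot decided, why, and what it sensed.
@dataclass
class BlackBoxEntry:
    timestamp: float
    decision: str                   # e.g. "stop", "turn_left"
    rationale: str                  # the robot's basis for the decision
    actuator_state: dict[str, Any]  # movements / motor commands
    sensor_data: dict[str, Any]     # camera, microphone, rangefinder readings

class BlackBoxRecorder:
    """Append-only, write-through log, in the spirit of a flight recorder."""
    def __init__(self, path: str = "blackbox.jsonl"):
        self._file = open(path, "a", buffering=1)  # line-buffered text file

    def record(self, decision: str, rationale: str,
               actuators: dict, sensors: dict) -> None:
        entry = BlackBoxEntry(time.time(), decision, rationale, actuators, sensors)
        self._file.write(json.dumps(asdict(entry)) + "\n")
        self._file.flush()  # push each record out immediately

# Example: one control-loop step of a care robot.
recorder = BlackBoxRecorder()
recorder.record(
    decision="slow_down",
    rationale="person detected within 0.5 m",
    actuators={"wheel_speed": 0.1},
    sensors={"rangefinder_m": 0.43},
)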
“Serious accidents will need investigating, but what do you do if an accident investigator turns up and discovers there is no internal datalog, no record of what the robot was doing at the time of the accident? It’ll be more or less impossible to tell what happened,” Winfield said.
“The reason commercial aircraft are so safe is not just good design, it is also the tough safety certification processes and, when things do go wrong, robust and publicly visible processes of air accident investigation,” the researchers write in a paper to be presented at the Surrey meeting.
The introduction of ethical black boxes would have benefits beyond accident investigation. The same devices could provide robots – elderly care assistants, for example – with the ability to explain their actions in simple language, and so help users to feel comfortable with the technology.
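Continuing the sketch above (and with the same caveat that this is an illustration, not the researchers' design), turning a logged record into a plain-language explanation could be as simple as:

def explain(entry: BlackBoxEntry) -> str:
    """Render a logged decision as a sentence a user could understand."""
    return f"I decided to {entry.decision} because {entry.rationale}."

# e.g. explain(entry) -> "I decided to slow_down because person detected within 0.5 m."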
The problem of auditing a machine's ethical reasoning recalls the movie 2010: The Year We Make Contact, in which Dr. Chandra, the creator of the HAL 9000 artificial intelligence computer, returns to the spaceship Discovery, now orbiting Jupiter, to determine the reason for the ethical lapse that caused HAL to kill astronaut Frank Poole.
Even though 2010 was just a movie, and HAL is a science-fictional computer, it illustrates why a simple list of executed instructions may not fully explain what happened. The problem will be exacerbated by the fact that the AI systems of the near future will rely heavily on deep learning techniques that work well but are poorly understood.
You might also be interested in the article Can You Give A Robot A Conscience?, which explores similar themes; the discussion with the SAL 9000 computer in the film is also revealing. See, too, Machine Ethics With Prospective Logic, which discusses the different conclusions that "moral" systems can reach.