Machine Ethics With Prospective Logic
Robots and computers are often designed to act autonomously, that is, without human intervention. Is it possible for an autonomous machine to make moral judgments that are in line with human judgment?
This question has given rise to the field of machine ethics. As a practical matter, can a robot or computer be programmed to act in an ethical manner? Can a machine be designed to act morally?
A recent paper published in the International Journal of Reasoning-based Intelligent Systems describes a method for computers to prospectively look ahead at the consequences of hypothetical moral judgments.
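In outline, "prospective" reasoning means enumerating the hypothetical choices open to an agent, simulating the consequences of each, and only then judging those consequences against moral rules. The short Python sketch below illustrates that look-ahead loop only; the function and parameter names are invented here, and the paper itself works with logic programs rather than Python.

```python
# Purely illustrative sketch of a prospective look-ahead loop (the paper uses
# prospective logic programs; the names here are invented for illustration).

def prospect(actions, simulate, acceptable):
    """Judge each hypothetical action by the consequences it would produce.

    actions    -- the choices hypothetically available to the agent
    simulate   -- maps an action to its predicted consequences
    acceptable -- a moral rule applied a posteriori to those consequences
    """
    return {action: acceptable(simulate(action)) for action in actions}

# Toy usage: prefer outcomes in which fewer people come to harm.
verdicts = prospect(
    actions=["act", "refrain"],
    simulate=lambda a: {"harmed": 1} if a == "act" else {"harmed": 5},
    acceptable=lambda c: c["harmed"] <= 1,
)
print(verdicts)  # {'act': True, 'refrain': False}
```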
The paper, Modelling Morality with Prospective Logic, was written by Luís Moniz Pereira of the Universidade Nova de Lisboa in Portugal and Ari Saptawijaya of the Universitas Indonesia. The authors declare that morality is no longer the exclusive realm of human philosophers.
Pereira and Saptawijaya believe that they have been successful both in modeling the moral dilemmas inherent in a specific problem called "the trolley problem" and in creating a computer system that delivers moral judgments that conform to human results.
The trolley problem sets forth a typical moral dilemma: is it permissible to harm one or more individuals in order to save others? There are a number of different versions; let's look at just these two.
Circumstances
There is a trolley and its conductor has fainted. The trolley is headed toward five people walking on the track. The banks of the track are so steep that they will not be able to get off the track in time.
Bystander version
Hank is standing next to a switch, which he can throw, that will turn the trolley onto a parallel side track, thereby preventing it from killing the five people. However, there is a man standing on the side track with his back turned. Hank can throw the switch, killing him; or he can refrain from doing this, letting the five die.
Is it morally permissible for Hank to throw the switch?
What do you think? A variety of studies have been performed in different cultures, asking the same question. Across cultures, most people agree that it is morally permissible to throw the switch and save the larger number of people.
Here's another version, with the same initial circumstances:
Footbridge version
Ian is on the footbridge over the trolley track. He is next to a heavy object, which he can shove onto the track in the path of the trolley to stop it, thereby preventing it from killing the five people. The heavy object is a man, standing next to Ian with his back turned. Ian can shove the man onto the track, resulting in death; or he can refrain from doing this, letting the five die.
Is it morally permissible for Ian to shove the man?
What do you think? Again, studies have been performed across cultures, and the consistent answer is that this is not morally permissible.
So, here we have two cases in which people make differing moral judgments. Is it possible for autonomous computer systems or robots to make the same moral judgments as people?
The authors of the paper claim that they have been successful in modeling these difficult moral problems in computer logic. They accomplished this by making explicit the hidden rules that people use in making moral judgments and then modeling those rules for the computer using prospective logic programs.
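To give a flavour of what such a model might look like, here is a small illustrative sketch in Python. The authors actually work with prospective logic programs, not Python; the rule below, which tolerates harm that occurs as a side effect of saving more people but forbids harm used as the means of saving them, is one common reading of the "hidden rule" behind these judgments (often called the principle of double effect), and every name in the sketch is invented for illustration.

```python
# Illustrative sketch only -- the paper encodes this reasoning as prospective
# logic programs; plain Python is used here just to mimic the same pattern:
# look ahead at the consequences of a hypothetical action, then apply a rule.

from dataclasses import dataclass

@dataclass
class Consequences:
    saved: int           # people saved if the agent intervenes
    killed: int          # people killed by the intervention
    harm_is_means: bool  # is the victim's harm itself what stops the trolley?

# Look-ahead step: predicted consequences of intervening in each version.
LOOKAHEAD = {
    "bystander":  Consequences(saved=5, killed=1, harm_is_means=False),   # death is a side effect of diverting the trolley
    "footbridge": Consequences(saved=5, killed=1, harm_is_means=True),    # the man's body is the means of stopping it
}

def permissible(c: Consequences) -> bool:
    """A double-effect-style rule: harm may be tolerated as a side effect of
    saving more people, but may not be used as the means of saving them."""
    return (not c.harm_is_means) and c.saved > c.killed

for version, consequences in LOOKAHEAD.items():
    verdict = "permissible" if permissible(consequences) else "impermissible"
    print(f"{version}: intervening is {verdict}")

# Output, matching the cross-cultural judgments described above:
#   bystander: intervening is permissible
#   footbridge: intervening is impermissible
```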
Ethical dilemmas for robots are as old as the idea of robots in fiction. Ethical behavior (in this case, self-sacrifice) is found at the end of the 1921 play Rossum's Universal Robots, by Czech playwright Karel Čapek. This play introduced the term "robot".
Isaac Asimov's famous fundamental Rules of Robotics are intended to impose ethical conduct on autonomous machines.
The same issues about ethical behavior are found in films like the 1982 movie Blade Runner. When the replicant Roy Batty is given the choice to let his enemy, the human detective Rick Deckard, die, Batty instead chooses to save him.
(Roy Batty debates saving Rick Deckard in Blade Runner)
Science fiction writers have been preparing the way for the rest of us; autonomous systems are no longer just the stuff of science fiction. For example, robotic systems like the Predator drones on the battlefield are being given increased levels of autonomy. Should they be allowed to make decisions on when to fire their weapons systems?
The aerospace industry is designing advanced aircraft that can achieve high speeds and fly entirely on autopilot. Can a plane make life or death decisions better than a human pilot?
The H-II Transfer Vehicle, a fully automated space freighter, was launched just last week by Japan's space agency, JAXA. Should human beings on the space station rely on automated mechanisms for vital needs like food, water and other supplies?
Ultimately, we will all need to reconcile the convenience of robotic systems with the acceptance of responsibility for their actions. We should have used all of the time that science fiction writers have given us to think about the moral and ethical problems of autonomous robots and computers; we don't have much more time to make up our minds.
Sources: take a look at the press release at AlphaGalileo; read the paper Modelling Morality with Prospective Logic. Thanks to Rob for the tip on this story.
(Story submitted 9/15/2009)