Should Autonomous Cars Have Feelings About Crashes?
Should autonomous cars have feelings about crashes? There are good reasons to believe that crashes are inevitable, even for autonomous vehicles. Feelings and emotions are among the ways human beings regulate their own behavior and their expectations of others, and they influence the split-second choices drivers make in crash situations.
In most science fiction novels, autonomous cars or taxis are relatively uncomplicated mechanisms; you enter the desired location and the machine does the work. For example, consider the tin cabbie from James Blish's 1957 novel Cities in Flight:
The cab came floating down out of the sky at the intersection and maneuvered itself to rest at the curb next to them with a finicky precision. There was, of course, nobody in it; like everything else in the world requiring an IQ of less than 150, it was computer-controlled...
Chris studied the cab with the liveliest interest, for though he had often seen them before from a distance, he had of course never ridden in one. But there was very little to see. The cab was an egg-shaped bubble of light metals and plastics, painted with large red-and-white checkers, with a row of windows running all around it. Inside, there were two seats for four people, a speaker grille, and that was all: no controls and no instruments...
However, in Philip K. Dick's 1952 short story A Present for Pat, robot taxi drivers display real feelings and emotions:
"Robots have no wives," the driver said. "They are nonsexual. Robots have no friends, either. They are incapable of emotional relationships."
"Can robots be fired?"
"Sometimes." The robot drew his cab up before Eric's modest six-room bungalow. "But consider. Robots are frequently melted down and new robots made from the remains. Recall Ibsen's Peer Gynt, the section concerning the Button Molder. The lines clearly anticipate in symbolic form the trauma of robots to come."
"Yeah." The door opened and Eric got out. "I guess we all have our problems."
"Robots have worse problems than anybody." The door shut and the cab zipped off, back down the hill.
(Read more about PKD's robot cab)
I'm not differentiating between an autonomous car and a car driven by an autonomous robot here, but the basic idea is the same.
In a recent study published in the Transportation Research Record, Noah Goodall points out that road vehicles differ significantly from other automated transport: a train, for example, moves in only one dimension along a fixed track, so its only real choice is speed. Cars on the road have many more degrees of freedom, and there is an irreducible moral component to the choices that an autonomous driver must make.
These advanced automated vehicles will be able to make pre-crash decisions using sophisticated software and sensors that can accurately detect nearby vehicle trajectories and perform high speed avoidance maneuvers, thereby overcoming many of the limitations experienced by humans. If a crash is unavoidable, a computer can quickly calculate the best way to crash based on a combination of safety, likelihood of outcome, and certainty in measurements, much faster and with greater precision than a human. The computer may decide that braking alone is not optimal, since at highway speeds it is often more effective to combine braking with swerving, or even swerving and accelerating in an evasive maneuver.
One major disadvantage of automated vehicles during crashes is that, unlike a human driver who can decide how to crash in real time, an automated vehicle's decision of how to crash was defined by a programmer ahead of time. The automated vehicle can interpret the sensor data and make a decision, but the decision itself is the result of logic developed and coded months or years earlier. This is not a problem in cases where a crash can be avoided—the vehicle selects the safest path and proceeds. However, if injury cannot be avoided, the automated vehicle must decide how best to crash. This quickly becomes a moral decision.
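To make that "best way to crash" calculation concrete, here is a minimal, purely illustrative Python sketch of an expected-harm minimization over a handful of candidate maneuvers. The maneuver names, probabilities, and cost weights are all invented for this example; they are not taken from Goodall's paper.

```python
# Purely illustrative sketch of the kind of "best way to crash" calculation
# described above: score each candidate maneuver by expected harm, discounted
# by how certain the sensors are about the predicted outcome.
# All maneuvers, probabilities, and weights below are invented.

CANDIDATE_MANEUVERS = {
    # maneuver: (probability of injury, probability of fatality, sensor certainty)
    "brake_only":            (0.60, 0.10, 0.95),
    "brake_and_swerve":      (0.40, 0.05, 0.80),
    "swerve_and_accelerate": (0.30, 0.08, 0.60),
}

def expected_harm(p_injury, p_fatality, certainty,
                  injury_cost=1.0, fatality_cost=10.0):
    """Expected harm of a maneuver, inflated when the sensors are unsure.

    Dividing by certainty is one simple way to let "certainty in measurements"
    enter the score: a nominally better outcome counts for less if the
    vehicle cannot reliably predict it.
    """
    raw = p_injury * injury_cost + p_fatality * fatality_cost
    return raw / certainty

def choose_maneuver(candidates):
    """Pick the candidate maneuver with the lowest expected harm."""
    return min(candidates, key=lambda name: expected_harm(*candidates[name]))

if __name__ == "__main__":
    best = choose_maneuver(CANDIDATE_MANEUVERS)
    print("Selected maneuver:", best)  # "brake_and_swerve" with these numbers
```

Even this toy version shows why the choice is a moral one: the fatality-to-injury cost ratio is not a sensor reading, it is a value judgment someone had to encode in advance.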
Goodall then asks if it is possible to design an ethical robotic vehicle. Science fiction writers have long wondered about this possibility, and Goodall mentions Isaac Asimov's Three Laws of Robotics, rewriting them for this context:
- An automated vehicle may not injure a human being or, through inaction, allow a human being to come to harm.
- An automated vehicle must obey orders given it by human beings except where such orders would conflict with the First Law.
- An automated vehicle must protect its own existence as long as such protection does not conflict with the First or Second Law.
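Read literally, these rewritten laws form a strict priority ordering. Below is a hedged sketch (not anything from Goodall's paper) of that ordering as code, with a hypothetical heavy-traffic trip that illustrates the failure mode discussed next: any nonzero risk of harm trips the First Law, and the car never leaves the driveway.

```python
# Speculative sketch: the rewritten Three Laws as an ordered veto hierarchy.
# A candidate trip plan is rejected by the first law it violates.
# The plan fields and the example numbers are hypothetical.

def first_law_ok(plan):
    # May not injure a human being or, through inaction, allow harm.
    return plan["estimated_harm_probability"] == 0.0

def second_law_ok(plan):
    # Must obey orders given by human beings. Because this check runs only
    # after the First Law has already passed, the "except where such orders
    # conflict with the First Law" clause is encoded by the ordering itself.
    return plan["follows_passenger_order"]

def third_law_ok(plan):
    # Must protect its own existence, subject to the two laws above.
    return plan["self_damage_probability"] < 0.5

def plan_permitted(plan):
    """Apply the laws in priority order; the first failure vetoes the plan."""
    return all(law(plan) for law in (first_law_ok, second_law_ok, third_law_ok))

# Hypothetical trip through heavy traffic: the harm probability is tiny,
# but it is not zero, so a strict First Law refuses the trip entirely.
trip = {
    "estimated_harm_probability": 0.002,
    "follows_passenger_order": True,
    "self_damage_probability": 0.01,
}
print(plan_permitted(trip))  # False -- the car gives up and stays in the driveway
```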
As Goodall points out, many problems arise with this scenario. What if a vehicle decides that there is too much traffic, and the First Law requires that it give up and stay in the driveway? I could well imagine a Philip K. Dick-designed car that might just give up. Or a Douglas Adams-designed vehicle with a Genuine People Personality inclined toward depression:
"...I'll send the robot down to get them and bring them up here. Hey Marvin!”
In the corner, the robot's head swung up sharply, but then wobbled about imperceptibly. It pulled itself up to its feet as if it was about five pounds heavier that it actually was, and made what an outside observer would have thought was a heroic effort to cross the room. It stopped in front of Trillian and seemed to stare through her left shoulder.
“I think you ought to know I'm feeling very depressed,” it said. Its voice was low and hopeless...
(Read more about Marvin the Robot)
Goodall concludes his paper by setting forth a strategy for developing vehicles with the necessary moral behaviors, including the ability to explain their actions to human occupants and other drivers.
A three-phase strategy for developing and regulating moral behavior in automated vehicles was proposed, to be implemented as technology progresses. The first phase is a rationalistic moral system for automated vehicles that will take action to minimize the impact of a crash based on generally agreed upon principles, e.g. injuries are preferable to fatalities. The second phase introduces machine learning techniques to study human decisions across a range of real-world and simulated crash scenarios to develop similar values. The rules from the first approach remain in place as behavioral boundaries. The final phase requires an automated vehicle to express its decisions using natural language, so that its highly complex and potentially incomprehensible-to-humans logic may be understood and corrected.
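As a rough illustration of how those three phases might layer together, here is a speculative Python sketch: a hard rule filter for the first phase, a stand-in "learned" harm score constrained by that filter for the second, and a natural-language explanation of the result for the third. The rule, the weights, and the wording are placeholders of my own, not anything specified in the paper.

```python
# Speculative sketch of the three phases layered together.

def phase1_rule_filter(maneuvers):
    """Phase 1: keep only maneuvers that satisfy an agreed-upon principle,
    e.g. never accept a higher fatality risk than injury risk."""
    return [m for m in maneuvers if m["p_fatality"] <= m["p_injury"]]

def phase2_learned_score(maneuver):
    """Phase 2 stand-in for a model trained on human crash decisions;
    here it is a fixed weighted sum so the sketch stays self-contained."""
    return 0.2 * maneuver["p_injury"] + 1.0 * maneuver["p_fatality"]

def phase3_explain(maneuver):
    """Phase 3: express the decision in natural language so that humans
    can audit, and correct, the underlying logic."""
    return ("Chose '{name}': lowest estimated harm (injury {pi:.0%}, "
            "fatality {pf:.0%}) among maneuvers allowed by the baseline "
            "safety rules.").format(name=maneuver["name"],
                                    pi=maneuver["p_injury"],
                                    pf=maneuver["p_fatality"])

maneuvers = [
    {"name": "brake_only",           "p_injury": 0.50, "p_fatality": 0.10},
    {"name": "brake_and_swerve",     "p_injury": 0.35, "p_fatality": 0.04},
    {"name": "swerve_into_oncoming", "p_injury": 0.10, "p_fatality": 0.30},
]

allowed = phase1_rule_filter(maneuvers)        # phase-one rules stay in place as boundaries
best = min(allowed, key=phase2_learned_score)  # learned preference, within those boundaries
print(phase3_explain(best))
```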
Hopefully, autonomous vehicles will be able to achieve some measure of ethical clarity without having, as Dick puts it, "worse problems than anybody."
From Ethical Decision Making During Automated Vehicle Crashes (pdf) via Business Week, via Frolix_8.