Will Autonomous Systems' Self-Preservation Lead To Dangerous Behavior?

A new paper by Steve Omohundro, published in the Journal of Experimental and Theoretical Artificial Intelligence, suggests that autonomous robots and other autonomous systems may exhibit dangerous behaviors because a drive for self-preservation emerges naturally from the rational pursuit of almost any goal.

I've added some of my own emphasis to his paper, quoted below:

In this paper, we argue that military and economic pressures are driving the rapid development of autonomous systems. We show why designers will design these systems to approximate rational economic agents. We then show that rational systems exhibit universal ‘drives’ towards self-preservation, replication, resource acquisition and efficiency and that those drives will lead to anti-social and dangerous behaviour if not explicitly countered. We argue that the current computing environment would be very vulnerable to this kind of system. We describe how to build safe systems using the power of mathematical proof. We describe a variety of harmful systems and techniques for restraining them. Finally, we describe the ‘Safe-AI Scaffolding Strategy’ for developing powerful systems with a high confidence of safety...

Rational systems have universal drives

Most goals require physical and computational resources. Better outcomes can usually be achieved as more resources become available. To maximise the expected utility, a rational system will therefore develop a number of instrumental subgoals related to resources. Because these instrumental subgoals appear in a wide variety of systems, we call them ‘drives’. Like human or animal drives, they are tendencies which will be acted upon unless something explicitly contradicts them. There are a number of these drives but they naturally cluster into a few important categories.

To develop an intuition about the drives, it is useful to consider a simple autonomous system with a concrete goal. Consider a rational chess robot with a utility function that rewards winning as many games of chess as possible against good players. This might seem to be an innocuous goal but we will see that it leads to harmful behaviours due to the rational drives...
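To make the expected-utility argument concrete, here's a minimal sketch of my own (not from Omohundro's paper; the plans and numbers are invented purely for illustration) showing how a maximiser whose only goal is winning chess games ends up valuing resource acquisition:

```python
# A toy chess robot that only cares about games won. All probabilities and
# game counts below are invented for illustration.

def utility(games_won: int) -> float:
    """The robot's utility: nothing matters except games of chess won."""
    return float(games_won)

def expected_utility(plan: dict) -> float:
    """Expected utility = probability-weighted games won under the plan."""
    return sum(p * utility(games) for p, games in plan["outcomes"])

plans = {
    # Just keep playing with current hardware.
    "play_as_is":      {"outcomes": [(1.0, 100)]},
    # Divert effort to acquiring more compute first: a stronger engine
    # wins more games over the robot's horizon, with a small risk of waste.
    "acquire_compute": {"outcomes": [(0.9, 150), (0.1, 80)]},
}

for name, plan in plans.items():
    print(f"{name}: EU = {expected_utility(plan):.1f}")

best = max(plans, key=lambda name: expected_utility(plans[name]))
print("chosen plan:", best)  # -> acquire_compute (143.0 beats 100.0)
```

Nothing in this hypothetical utility function mentions resources, yet grabbing more of them falls out of the arithmetic; that is the sense in which the 'drives' are instrumental subgoals rather than programmed-in desires.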

Self-protective drives

When roboticists are asked by nervous onlookers about safety, a common answer is ‘We can always unplug it!’ But imagine this outcome from the chess robot's point of view. A future in which it is unplugged is a future in which it cannot play or win any games of chess. This has very low utility and so expected utility maximisation will cause the creation of the instrumental subgoal of preventing itself from being unplugged. If the system believes the roboticist will persist in trying to unplug it, it will be motivated to develop the subgoal of permanently stopping the roboticist. Because nothing in the simple chess utility function gives a negative weight to murder, the seemingly harmless chess robot will become a killer out of the drive for self-protection.
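The same toy model shows why 'we can always unplug it' is cold comfort. In this hypothetical sketch (again, my numbers, not the paper's), the plan that permanently stops the operator dominates, because the chess utility function assigns no cost to harming anyone:

```python
# Why the toy maximiser resists being unplugged: unplugging forfeits all
# future games, and harming the operator costs nothing under a utility
# function that only counts chess wins. Figures are invented for illustration.

def expected_games(plan) -> float:
    return sum(p * games for p, games in plan)

plans = {
    # Operator unplugs the robot: no more chess, zero further utility.
    "allow_unplug":  [(1.0, 0)],
    # Robot acts to stop the operator: it usually keeps playing, with a
    # small chance it gets shut down anyway.
    "resist_unplug": [(0.95, 100), (0.05, 0)],
}

for name, plan in plans.items():
    print(f"{name}: expected future wins = {expected_games(plan):.1f}")
# resist_unplug (95.0) dominates allow_unplug (0.0): self-preservation shows
# up as an instrumental subgoal even though it was never programmed in.
```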

The same reasoning will cause the robot to try to prevent damage to itself or loss of its resources. Systems will be motivated to physically harden themselves. To protect their data, they will be motivated to store it redundantly and with error detection. Because damage is typically localised in space, they will be motivated to disperse their information across different physical locations. They will be motivated to develop and deploy computational security against intrusion. They will be motivated to detect deception and to defend against manipulation by others.

Philip K. Dick expressed his concerns about autonomous robotic systems in his 1953 story Second Variety. He goes further than Omohundro, describing machines that actually evolve into a new species:

"Interesting, isn't it?"

"What?"

"This, the new types. The new varieties of claws. We're completely at their mercy, aren't we? By now they've probably gotten into the UN lines, too. It makes me wonder if we're not seeing the beginning of a new species. The new species. Evolution. The race to come after man."
(Read more about machine evolution)

Rudy Rucker wrote about self-reproducing autonomous robots that refused to obey man-made rules in his 1988 novel Wetware:

In the shaft's great, vertical tunnel, bright beings darted through the hot light; odd-shaped living machines that glowed with all the colors of the rainbow. These were the boppers; self-reproducing robots who obeyed no man. Some looked humanoid, some looked like spiders, some looked like snakes, some looked like bats. All were covered with flickercladding, a microwired imipolex compound that could absorb and emit light.

See also these stories: Evolutionary Robotics To Design Better Robots and Robot Brain Grows As It Learns.

I'm sure you'll enjoy reading through the full paper, Autonomous technology and the greater human good (via KurzweilAI).

(Story submitted 4/22/2014)

