Amazing Robots - Can Machines be Moral?

E. Christian Brugger
September 09, 2014
Reproduced with Permission
Culture of Life Foundation

I'm not a Calvinist, but a Catholic. And although I don't accept the doctrine of the total depravity of human nature, my rejection rests upon loftier premises than the mere assertion that "humans are just better than that." Luther and Calvin were realists; it was their theology that was wrong, not their observations of human behavior.

Consequently, if I could craft one law to govern the world of robotics research, it would be to ban all utopians from the work. Anyone who denies the doctrine of original sin should be kept as far away as possible. When dealing with artificially intelligent beings, as with naturally intelligent ones, the potential for good is proportionate to the potential for evil. And if something can be used for evil, we must presume it will be, and eventually in worse ways than we first imagined. It's foolish to think otherwise.

With this caveat in mind, I think robotics is one of the most promising areas of modern technology. In the last five years, the world of science fiction has become a reality. Robots are no longer just used to build automobiles and clean rugs. In both the domestic and military spheres, they carry out remarkably human-like tasks.

Domestic robots can now detect people's emotions and initiate conversations based upon their 'observations'. Robots act as substitute teachers, babysitters, companions for the elderly, orderlies in hospitals, cops and bartenders. They model clothing on the catwalk and, on the more lurid side, promise to put call girls out of business in the world's oldest profession. Airplanes land themselves. Trains drive themselves. Both Volvo and Ford have released models that brake automatically when they sense a collision. Google's trials with driverless cars have logged hundreds of thousands of miles on U.S. roadways, and the director of the Google car team, Chris Urmson, hopes to have them commercially available within the next few years.

But all this is child's play compared to military robotics. The dragonfly-shaped "DelFly" is a full-fledged aerial surveillance drone that weighs about half an ounce. The 10lb "Sand Flea" fires a gas-powered piston into the ground, launching itself as high as thirty feet into the air, onto a roof or through a window, shooting video in mid-air; when it lands, it rolls on wheels until it reaches another obstacle, then launches itself to another location, perhaps up a staircase or across the street to a nearby rooftop. The six-legged "RiSE" looks like a giant cockroach and can climb vertically up walls. The quadruped "LS3" - the "dog robot" - trots alongside ground troops over mountainous terrain carrying more than 400lbs of supplies. Its gas engine, though, is louder than the older model it replaces: the mule.

Robots can act as first-line scouts into the caves of Afghanistan; scan for and eliminate roadside bombs in Iraq; and identify a wanted man on a crowded street in Damascus and follow him. They can be thrown like hammers over walls or through windows, then roll on wheels over uneven surfaces taking photographs; be transported silently inside tiny aquatic subs to the hull of a ship, then climb with magnetic wheels up the hull and onto the ship's deck. "Soft robots" can slink like blobs; "aquabots" can slither like snakes; foraging robots can gather combustibles (e.g., leaves and wood) and burn them to generate electricity; aerial robots can remain airborne for weeks, fueled from energy supplied by ground-based lasers.

All this has spawned great interest in the new field of "machine ethics," which concerns the behavior of artificial moral 'agents'. The question of the moment is whether we should invest machines with autonomous decision-making capacity, especially lethal capacity. Presently, the Department of Defense requires that all weapon systems be designed "to allow commanders and operators to exercise appropriate levels of human judgment over the use of force"; in other words, fully autonomous lethal robots are not permitted in the military.

But that is being reconsidered. The Office of Naval Research this year awarded $7.5 million in grants to several universities to study whether a sense of right and wrong and of moral consequence can be programmed into autonomous robotic systems.

Some are optimistic. Wendell Wallach, the chair of the Yale Technology and Ethics Study Group and co-author of the 2010 book Moral Machines: Teaching Robots Right from Wrong, argues that, in time, robots "may be even better than humans in picking a moral course of action because they may consider more courses of action."

But machine decision-making can only be called "moral" in an analogous sense. Machines don't really make judgments, nor can they choose freely. They follow algorithms (i.e., pre-coded rules used in problem-solving operations). And though a program can be keyed to the diverse evaluative styles of humans (i.e., of real decision makers), someone still has to decide which 'ethical' routines get programmed into the transistors in the first place. When a robot finds itself in a situation where a number of courses of action are possible, those routines are brought to bear on determining the "safest" or "best" outcome. Who chooses the algorithm defining the concepts of "safe" and "best"?
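
By way of illustration, here is a toy version of such a routine, sketched in Python. Every action, criterion and weight below is invented for the example; the point is that someone has to pick them, and that picking them is where the real moral judgment lies.

    # Illustrative sketch only: the actions, criteria and weights are invented.
    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        civilian_risk: float   # 0.0 (none) to 1.0 (severe) - judged how, and by whom?
        mission_value: float   # 0.0 (useless) to 1.0 (decisive)

    # The "ethical routine" is just a weighted score.
    # Choosing these weights is itself the moral decision.
    CIVILIAN_WEIGHT = 0.8
    MISSION_WEIGHT = 0.2

    def score(action: Action) -> float:
        return MISSION_WEIGHT * action.mission_value - CIVILIAN_WEIGHT * action.civilian_risk

    def choose(actions: list[Action]) -> Action:
        # The machine "decides" only in the sense of maximizing a pre-coded number.
        return max(actions, key=score)

    options = [
        Action("fire on target", civilian_risk=0.6, mission_value=0.9),
        Action("hold fire and keep tracking", civilian_risk=0.1, mission_value=0.4),
        Action("withdraw", civilian_risk=0.0, mission_value=0.0),
    ]
    best = choose(options)
    print(f"Selected: {best.name} (score {score(best):.2f})")

Change the weights and the same machine, in the same situation, 'decides' differently.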

Should combat drones follow the rule 'minimum damage for maximum results'? Who's to say what constitutes damage, or, for that matter, results? Should a robot fire on a military target if civilians are nearby? Should soldiers leave a robot behind to guard captured enemies, and should it fire on prisoners if they try to escape? How could it tell the difference between an escape attempt and, say, the frenzy of a seizure? Should a driverless car swerve to avoid pedestrians even if that puts it into oncoming traffic? Should robots be programmed to lie to enemies? What about lying to civilians to avoid panic? Which victim should a search-and-rescue robot evacuate first?

I am optimistic that programs could be developed that are sophisticated enough to make good 'decisions' most of the time. I confess, however, that I'm very worried about who will be chosen to write the programs.

