CAN WE PROGRAM ROBOTS TO MAKE ETHICAL DECISIONS?
Self-driving cars are being tested by Google, Tesla, and other companies around the world. So far their safety record is good, but then again, they’re programmed to be much more conservative than the average human driver. Such cars are among the first of many robots that could populate our everyday lives, and as they do, many of them will be required to make what we’d consider ethical choices: deciding right from wrong, and choosing the path that provides the most benefit with the least potential for harm. Autonomous cars come with some unavoidable risk; after all, they’re a couple of tons of metal and plastic traveling at serious speed. But the thought of military forces testing robot drones is far more frightening. A drone with devastating firepower given the task of deciding which humans to kill? What could possibly go wrong?
Most discussions of robot ethics begin with science fiction writer Isaac Asimov’s famous Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
It should be remembered that Asimov created the three laws to provide fodder for a series of stories and novels about scenarios in which the three laws failed. First and foremost, he was looking to tell interesting stories. As good as the laws are for fictional purposes, the reality will be vastly more complicated.
The core value of the three laws is to prevent harm to human beings above all. But how do we define harm? Is it harmful to lie to a human being to spare his or her feelings (one of Asimov’s own scenarios)? Then there’s the question of quantifying harm: harm to whom, and to how many? Some recent publications have pointed out that self-driving cars may have to be programmed to kill, in the sense of taking actions that will result in the loss of someone’s life in order to save others. Picture a situation in which the car is suddenly, unavoidably faced with a bus full of children in front of it and cannot brake in time. If it veers to the left it will hit an oncoming family in a van; if it steers right, into a wall, it will kill the car’s own occupants. Other factors might come into play: there’s a chance the van driver would veer away in time, or maybe the bus has advanced passenger-protection devices. Granted, humans would struggle with such choices, too, and different people would choose differently. But the only reason to hand over such control to autonomous robot brains is the expectation that they’ll do a better job than humans do.
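To see how much moral weight gets buried in the numbers, here is a minimal sketch, in Python, of how a planner might rank those three evasive maneuvers by expected harm. Everything in it is an assumption for illustration: the option names, the probabilities, and the crude “harm score” are all invented, and no real autonomous vehicle reduces the problem to a dozen lines.

```python
# Hypothetical sketch: ranking evasive maneuvers by expected harm.
# All names, probabilities, and harm scores are invented for illustration;
# no real self-driving system works this simply.

from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    harm_if_worst_case: float   # crude "harm score": roughly, people at risk
    prob_worst_case: float      # chance the worst case actually happens

    def expected_harm(self) -> float:
        return self.harm_if_worst_case * self.prob_worst_case


options = [
    Maneuver("brake hard, hit the bus", harm_if_worst_case=30.0, prob_worst_case=0.9),
    Maneuver("swerve left into the oncoming van", harm_if_worst_case=4.0, prob_worst_case=0.7),
    Maneuver("swerve right into the wall", harm_if_worst_case=2.0, prob_worst_case=0.95),
]

best = min(options, key=lambda m: m.expected_harm())
print(f"Lowest expected harm: {best.name} ({best.expected_harm():.2f})")
```

The uncomfortable part isn’t the arithmetic; it’s deciding who assigns those harm scores and probabilities, and on what authority.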
One of the articles I’ve linked to below uses the example of a robot charged with the care of a senior citizen. Grandpa has to take medications for his health, but he refuses. Is it better to let him skip the occasional dose or to force him to take his meds? To expect a robot to make such a decision means asking it to predict all possible outcomes of the various actions and rank the benefits of each against the harm. Computers act on chains of logic: if this, then that. The reason they can take effective actions at all is that they can process unthinkably long chains of such links with great speed, but those links have to be programmed into them in the first place (or, in very advanced systems, developed by machine-learning processes like those behind Google’s search results and Amazon’s recommendations).
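Here is roughly what such an “if this, then that” chain might look like for the medication dilemma, again as a purely hypothetical sketch: every condition, number, and threshold below is something a programmer would have had to decide in advance, which is exactly the problem.

```python
# Hypothetical "if this, then that" chain for the medication example.
# Every condition, number, and threshold here is invented; the point is
# how many judgment calls must be hard-coded before the robot can act.

def decide_medication_action(patient: dict) -> str:
    # Benefit/harm estimates a programmer would have had to supply.
    if patient["dose_is_critical"]:
        benefit, harm = 10, 0
    elif patient["missed_doses_this_week"] >= 2:
        benefit, harm = 7, 2
    else:
        benefit, harm = 2, 1   # an occasional skipped dose is probably tolerable

    # Forcing the issue carries risks of its own.
    if patient["likely_to_resist_physically"]:
        harm += 5
    if patient["bruises_easily"]:
        harm += 3

    return "insist on the dose" if benefit > harm else "let it go this time"


grandpa = {
    "dose_is_critical": False,
    "missed_doses_this_week": 1,
    "likely_to_resist_physically": True,
    "bruises_easily": True,
}
print(decide_medication_action(grandpa))   # -> "let it go this time"
```

Change any one of those hard-coded numbers and the robot’s “ethics” change with it.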
A human caregiver would (almost unconsciously) analyze the current state of Grandpa’s health and whether the medicine is critical; whether the medication is cumulative and requires complete consistency; whether Grandpa will back down from a forceful approach or stubbornly resist; whether he has a quick temper and tends to get violent; whether his bones are fragile or he bruises dangerously with rough handling; whether giving in now will encourage greater compliance later, and so on. Is it possible to program a robot processor with all of the necessary elements of every possible scenario it will face? Likely not: humans spend a lifetime learning such things from the example of others and from their own experience, and still have to make judgments on entirely new situations based on past precedents that a computer would probably never recognize as relevant. And we disagree endlessly amongst ourselves about such choices!
So what’s the answer? Certainly for the near term we should significantly limit the decisions we expect such technology to make. Some of the self-driving cars in Europe have a very basic response when faced with a troublesome scenario: they put on the brakes. The fallback is human intervention. That may have to be the case for the majority of robot applications, with the proviso that each new scenario (and its resolution) be added to an ever-growing database to inform future robotic decision-making. Yes, the process might be very slow, especially in the beginning, and we’re not a patient species.
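In code, that conservative fallback might look something like the sketch below: act only when confident, otherwise brake, alert a human, and record the scenario for the growing database. The threshold, function names, and log format are all assumptions for illustration, not any manufacturer’s actual design.

```python
# Hypothetical fallback pattern: act only when confident, otherwise brake,
# alert a human, and log the scenario for later review. The threshold,
# names, and log format are assumptions for illustration only.

import json
import time

CONFIDENCE_THRESHOLD = 0.95
SCENARIO_LOG = "unresolved_scenarios.jsonl"


def log_for_review(scenario: dict, proposed_action: str, confidence: float) -> None:
    record = {
        "time": time.time(),
        "scenario": scenario,
        "proposed_action": proposed_action,
        "confidence": confidence,
    }
    with open(SCENARIO_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")


def handle_situation(scenario: dict, planner) -> str:
    action, confidence = planner(scenario)
    if confidence >= CONFIDENCE_THRESHOLD:
        return action
    # Conservative default plus a request for human help.
    log_for_review(scenario, action, confidence)
    return "apply_brakes_and_alert_driver"


# Example: a toy planner that is never very sure of itself.
print(handle_situation({"obstacle": "unknown object"}, lambda s: ("proceed", 0.6)))
```

The interesting work happens off-screen: humans reviewing that log and deciding what the right resolution should have been.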
But getting it right will be a matter of life and death.
There are some interesting articles on the subject here, here, and here, and lots of other reading is available with any Google search (as Google’s computer algorithms decide what you’re really asking!).