Of all the branches of modern philosophy, ethics is both the most practical and the most baffling. For every clear-cut moral choice, there are a thousand what-ifs that cloud the picture. And as the future dawns and automated systems become more common, things will get murkier than ever.
I’ll give you an example. Take Google’s driverless car, now in development. These cars can respond to traffic, road markings and stoplights; they follow directions and obey traffic laws. They can also react to unexpected events, such as bicycles darting into the street. In tests in California, they’ve already logged thousands of miles under their own control, albeit with human supervision.
Robots Can Be Better Than Humans…
Once this robot driver is ready for prime time, he’ll be far superior to human drivers. He’ll never be drunk, tired or distracted. He’ll have precise control, 360-degree awareness and millisecond response times.
There are more than 30,000 traffic deaths in the US every year. Driverless technology takes human error out of the equation, so I think it could easily cut that number by 50% or more, which would mean something like 15,000 lives saved annually. If so, widespread adoption is clearly the most ethical choice: fewer accidents, fewer deaths.
…But They’re Not Perfect
But even if there are fewer deaths overall, that’ll be cold comfort to those who are killed or hurt because of a robot’s mistake. Suppose your driverless car hits a pedestrian. Let’s say there was no mechanical or software failure involved. Everything worked the way it was supposed to, and it just wasn’t enough to prevent an accident.
So Who’s to Blame?
Is it you, because you could have overridden the controls but chose to trust the computer? Is it the software programmer, who failed to anticipate this specific combination of factors? The manufacturer? You could argue that the car itself is responsible, but it isn’t a moral agent. Perhaps the answer is no one at all, but I suspect that would be a tough sell to the family of our hypothetical pedestrian.
There are many similar examples, such as when an automated military drone accidentally kills civilians. Or what if Dr. Watson, despite knowing vastly more than any human, misdiagnoses your cancer? When a human makes a mistake, we can be forgiving, even in extreme cases, because we’re fallible too, and we can relate. But when an automaton messes up, we have no one to blame, and most importantly, no answer to the question “why did this happen?”
I wish I had a solution to offer you, dear readers, but all I have are more questions. Does the concept of responsibility still apply when automated systems are involved? How? Is it worth dealing with this thorny issue, in order to get the benefits? Tell us all about it in the comments below.
Related:
NPR reports on Driverless Car Technology Today
NY Times’ comprehensive write-up
Comments

You could build a robot whose job it is to take responsibility for the other robots. Perhaps this is the job of the poor little robot you featured here.
When a military drone wipes out civilians, it seems to be due to erroneous information entered into the system about the intentions of the humans involved. Are they friendly or unfriendly to the power structure that is using the drone? Machines do not act based on subjective beliefs about intentions.
So what is the situation with other robotic uses? When an assembly plant robot malfunctions in such a way as to cause injury to a human, there is human responsibility: was it properly programmed, updated, maintained, or used in the correct way?
It seems to me that similar human responsibilities come into play with robot-operated automobiles.
That’s a really good point. In your drone example, errors likely stem from parameters that someone chose, or from someone’s decision to use a robot that was not capable of making that distinction in the first place.
The same could be argued for a robotic car. But if it still halves the number of road deaths each year, how do we decide whether (and how) to distribute responsibility when an accident does occur?