Of all the branches of modern philosophy, ethics is both the most practical and the most baffling. For every clear-cut moral choice, there are a thousand what-ifs that cloud the picture. And as the future dawns and automated systems become more common, things will get murkier than ever.
I’ll give you an example. Take Google’s driverless car, now in development. These cars can respond to traffic, road markings and stoplights; they follow directions and obey traffic laws. They can also react to unexpected events, such as bicycles darting into the street. In tests in California, they’ve already successfully logged thousands of miles (under their own control, but with human supervision).
Robots Can Be Better Than Humans…
Once this robot driver is ready for prime time, he’ll be far superior to human drivers. He’ll never be drunk, tired or distracted. He’ll have precise control, 360-degree awareness and millisecond response times.
There are 30,000+ traffic deaths in the US every year. Driverless technology takes human error out of the equation, so I think it could easily reduce that number by 50% or more. If so, widespread adoption is clearly the most ethical choice: fewer accidents, fewer deaths.
…But They’re Not Perfect
But even if there are fewer deaths overall, that’ll be cold comfort to those who are killed or hurt because of a robot’s mistake. Suppose your driverless car hits a pedestrian. Let’s say there was no mechanical or software failure involved. Everything worked the way it was supposed to, and it just wasn’t enough to prevent an accident.
So Who’s to Blame?
Is it you, because you could have overridden the controls but chose to trust the computer? Is it the software programmer, who failed to anticipate this specific combination of factors? The manufacturer? You could make an argument that the car itself is responsible, but a car is not a moral agent capable of bearing blame. Perhaps the answer is no one at all, but I suspect that would be a tough sell to the family of our hypothetical pedestrian.
There are many similar examples, such as when an automated military drone accidentally kills civilians. Or what if Dr. Watson, despite knowing vastly more than any human, misdiagnoses your cancer? When a human makes a mistake, we can be forgiving, even in extreme cases, because we’re fallible too, and we can relate. But when an automaton messes up, we have no one to blame, and most importantly, no answer to the question “why did this happen?”
I wish I had a solution to offer you, dear readers, but all I have are more questions. Does the concept of responsibility still apply when automated systems are involved? How? Is it worth dealing with this thorny issue, in order to get the benefits? Tell us all about it in the comments below.
NPR reports on Driverless Car Technology Today
NY Times’ comprehensive write-up