The Three Laws of Robotics are a set of rules devised by the science fiction author Isaac Asimov. The rules were introduced in his 1942 short story “Runaround”, although they had been foreshadowed in a few earlier stories. The Three Laws, quoted as being from the “Handbook of Robotics, 56th Edition, 2058 A.D.”, are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
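Stated that way, the laws amount to a strict priority ordering. Purely as an illustrative sketch, with names and fields that are my own invention rather than Asimov's, they could be encoded as a lexicographic ranking over predicted outcomes:

```python
# Illustrative sketch only: the Three Laws as a lexicographic ranking.
# "Outcome" and its fields are hypothetical, not from Asimov or this post.
from dataclasses import dataclass

@dataclass
class Outcome:
    human_harm: float     # predicted harm to humans, by action or by inaction
    disobeys_order: bool  # would the action disobey a human order?
    self_damage: float    # predicted damage to the robot itself

def three_laws_key(o: Outcome):
    # First Law dominates the Second, which dominates the Third:
    # later terms only break ties among actions equal on earlier ones.
    return (o.human_harm, o.disobeys_order, o.self_damage)

def choose(actions: dict) -> str:
    """Pick the candidate action that ranks best under the ordering."""
    return min(actions, key=lambda name: three_laws_key(actions[name]))
```

The point of the sketch is only that the ordering itself is a design decision someone has to write down, which is exactly the issue the rest of this post turns on.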
Now comes a practical application and a query by Eric Schwitzgebel in the Los Angeles Times:
It’s 2025. You and your daughter are riding in a driverless car along Pacific Coast Highway. The autonomous vehicle rounds a corner and detects a crosswalk full of children. It brakes, but your lane is unexpectedly full of sand from a recent rock slide. It can’t get traction. Your car does some calculations: If it continues braking, there’s a 90% chance that it will kill at least three children. Should it save them by steering you and your daughter off the cliff?
This isn’t an idle thought experiment. Driverless cars will be programmed to avoid collisions with pedestrians and other vehicles. They will also be programmed to protect the safety of their passengers. What happens in an emergency when these two aims come into conflict?
The author raises a real concern and discusses how such things should be regulated. He notes:
Google, which operates most of the driverless cars being street-tested in California, prefers that the DMV not insist on specific functional safety standards. Instead, Google proposes that manufacturers “self-certify” the safety of their vehicles, with substantial freedom to develop collision-avoidance algorithms as they see fit.
But he says that’s not good enough:
That’s far too much responsibility for private companies. Because determining how a car will steer in a risky situation is a moral decision, programming the collision-avoiding software of an autonomous vehicle is an act of applied ethics. We should bring the programming choices into the open, for passengers and the public to see and assess.
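His point is easy to make concrete. In the purely hypothetical sketch below, the weights and scenario probabilities are invented for illustration and are not drawn from any real vehicle, but they show how the "moral decision" can live in a few numbers and comparisons that a manufacturer could, in principle, publish for inspection:

```python
# Hypothetical illustration: the "ethics" of a collision-avoidance routine
# can reduce to a handful of published, auditable parameters.
PASSENGER_WEIGHT = 1.0   # relative value placed on passenger safety
PEDESTRIAN_WEIGHT = 1.0  # relative value placed on pedestrian safety

def expected_harm(p_passenger_deaths, n_passengers,
                  p_pedestrian_deaths, n_pedestrians):
    """Weighted expected fatalities for one candidate maneuver."""
    return (PASSENGER_WEIGHT * p_passenger_deaths * n_passengers
            + PEDESTRIAN_WEIGHT * p_pedestrian_deaths * n_pedestrians)

# The op-ed's scenario, with an invented probability for the cliff option:
brake_in_lane   = expected_harm(0.05, 2, 0.90, 3)  # likely kills the children
steer_off_cliff = expected_harm(0.50, 2, 0.00, 3)  # risks the two passengers

chosen = "steer_off_cliff" if steer_off_cliff < brake_in_lane else "brake_in_lane"
```

Change the two weights and the same code reaches the opposite decision, which is precisely why it matters who gets to set them and whether anyone outside the company can see them.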
I wonder how the public would assess this issue. Let's take the same case today, with a person driving the car. How many people would say that they would go over a cliff to avoid killing pedestrians? It's actually a harder question than you think, and you might have a different answer in real time than in the abstract.
I’m guessing that, in real time, the instinctive action for most of us would likely be to swerve to avoid the children, not fully realizing that in doing so, we’ll go over the cliff.
In contrast, if we had a chance to calmly consider the scenario in advance, we might have mixed emotions.
For example, you might say, “Well, even if I go over a cliff, the car will protect me from harm; whereas if I hit the children they’ll likely die. So, I’ll take my chance with the cliff.”
Or, you might say, “My obligation is to my own child first, and I’m not going to risk killing her by going over a cliff. I’m not violating the speed limit, and it’s not my fault if there’s gravel on the road. I’ll do my best to stop, but if I can’t, so be it. These things happen.”
Eric offers the following thought:
Some consumer freedom seems ethically desirable. To require that all vehicles at all times employ the same set of collision-avoidance procedures would needlessly deprive people of the opportunity to choose algorithms that reflect their values. Some people might wish to prioritize the safety of their children over themselves. Others might want to prioritize all passengers equally. Some people might wish to choose algorithms more self-sacrificial on behalf of strangers than the government could legitimately require of its citizens.
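One way to picture that consumer freedom, again purely as a hypothetical sketch with invented names and numbers: the same collision-avoidance routine could ship with owner-selectable weighting profiles, bounded below by whatever minimum regard for strangers a regulator decides to require.

```python
# Hypothetical owner-selectable profiles; names and numbers are invented.
PROFILES = {
    "protect_my_passengers": {"passenger": 2.0, "pedestrian": 1.0},
    "everyone_equal":        {"passenger": 1.0, "pedestrian": 1.0},
    "self_sacrificial":      {"passenger": 0.5, "pedestrian": 1.0},
}

# Assumed regulatory floor: strangers may be weighted no less than half
# of whatever weight the owner places on the car's own passengers.
REGULATORY_FLOOR = 0.5

def load_profile(name: str) -> dict:
    weights = PROFILES[name]
    if weights["pedestrian"] < REGULATORY_FLOOR * weights["passenger"]:
        raise ValueError(f"profile '{name}' undervalues strangers beyond the permitted floor")
    return weights
```

Note that the "self_sacrificial" profile passes the check because the floor is a minimum, not a maximum, which matches Eric's last point about choices the government could not legitimately require.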
Lest you think this provides too much freedom of choice, Eric reminds us that today’s drivers also engage in implicit moral choices:
There is something romantic about the hand upon the wheel — about the responsibility it implies. But future generations might be amazed that we allowed music-blasting 16-year-olds to pilot vehicles unsupervised at 65 mph, with a flick of the steering wheel the difference between life and death.
He notes:
A well-designed machine will probably do better in the long run. That machine will never drive drunk, never look away from the road to change the radio station or yell at the kids in the back seat.
What would Isaac say?
Here’s what I worry about, more than this ethical question. As we’ve seen in the medical world (for example, with robotic surgery, femtosecond lasers, and proton beam therapy), there is an inexorable push to adopt new technologies before we determine that they are safer and more efficacious than the incumbent modes of treatment. Corporations have a financial imperative to push technology into the marketplace, employing the “gee whiz, this is neat” segment of early adopters to carry out their marketing, which leads to broader adoption. All this happens well before society engages in the kind of thoughtful deliberation suggested by Eric. Meanwhile, those same corporations take advantage of the policy lacunae that emerge to argue for less government interference. Unnecessary harm is done, and then we say, “These things happen.”
Let’s remember what Ethel Merman said in the movie when Milton Berle reported on a terrible traffic accident in just that manner: “We gotta have control of what happens to us.”
Paul Levy is the former CEO of BIDMC and blogs at Not Running a Hospital, where an earlier version of this post appeared.
and, oh yeah, deploy a parachute the instant the car went over the cliff.
What kind of robot car doesn’t have a parachute??
Trick question: the robot (“autonomous”) car would have KNOWN there was a crosswalk up ahead, and therefore would have slowed in anticipation of an obstacle there. Doesn’t take Mr. Spock to figure that out, duh.
Also, the robot car would be smart enough to know there was a recent rock slide; the exact location of the rock slide would have been relayed to the vehicle from real-time seismologic sensors already placed all along the PCH in the early 2000s.
HOWEVER, IF the robot car did somehow go over the cliff, it would be ok because once airborne, it would: 1) calculate the distance to the ground, 2) automatically inflate all the vehicle’s airbags milliseconds before impact, 3) pre-tension all the seatbelts (thanks Mercedes!), 4) squirt out a huge wad of instant-curing flame-retardant foam from the front bumper, allowing a soft landing, 5) temporarily lock all the doors, 6) contact the fire dept, police, wrecker service, and EMS via Twitter, and 7) turn down the Bluetooth radio so that potential rescuers could be heard easily – thus protecting everyone inside from injury.
Cited this post on my blog. http://regionalextensioncenter.blogspot.com/2015/12/on-health-care-technology-ehr-call-outs.html
This is a big deal in an age when we are increasingly trusting algorithms with decision making. What happens when algorithms prioritize in the ER? What about when they decide who gets an appointment with a specialist at a rural clinic? What happens when an algorithm is tied to a quality metric that a politician uses in a campaign or in a budget negotiation?
These are questions that need to be addressed.