Ctrl: Regarding the Morality of Self-Driving Cars
The last few weeks have been unfortunate for developers and fans of autonomous vehicles, or self-driving cars. On the night of March 18, Elaine Herzberg was struck by an autonomous Uber vehicle in Tempe, Arizona, becoming the first known pedestrian to be killed by a self-driving car. Less than a week later, a Tesla Model X operating in Autopilot mode crashed into a highway lane divider, killing the driver. It was only the second known fatal crash in which Tesla's Autopilot system was engaged.
In both cases, there's a sense that human error and misuse of the systems played a role. Authorities released video of the Uber crash, showing both the interior and exterior camera feeds, in which the human safety operator can clearly be seen with hands off the wheel and eyes off the road for several seconds immediately before impact. No video of the Tesla crash has been released, but the company has suggested that the driver would have had time to react had his hands not been off the wheel for the six seconds preceding the crash. To be clear, proper use of the Tesla system requires that the driver keep their hands on the wheel at all times. But does that really matter?
This might be obvious, but no self-driving car is designed to crash. If one does crash, something failed. Don't get me wrong: this isn't an attempt to demonize self-driving cars. I'm trying to highlight that this is how we intuitively think about these issues. So when a pedestrian dies in Tempe, in a crash that, based on the footage, likely no human driver would have been able to avoid, we freak out. Amid rampant (and likely justified) fear of regulation, Uber has suspended testing in all cities, and Tesla's stock has taken a hit.
Is it always going to be like this, though? Unfortunately, car crashes are going to be all but inevitable for the foreseeable future. It's in the nature of multi-thousand-pound metal containers moving at high speed to be dangerous, especially when dozens of them can share any given street. The only real way forward is to accept that crashes are going to happen and to decide how to deal with that likelihood before they do.
Here's an example. Say the Uber car that struck the crossing pedestrian in Tempe recognized that a crash was inevitable but found two possible alternatives: the car could hit the pedestrian, or it could swerve and smash into the guardrail, destroying the car and killing the driver. While a human driver would have no way of identifying both options, weighing the morality of each, making a decision, and then executing that decision in the split second after realizing a crash was imminent, a computer undoubtedly could. What should the car do? The instinctual "right answer" is usually to save the driver, since that's what a human would probably do, but sometimes there's really no "right answer."
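To make the dilemma concrete, here is a minimal sketch, in Python, of what such a split-second decision procedure might look like. Everything in it is an assumption for illustration: the Maneuver type, the fatality estimates, and the choice to minimize expected deaths are hypothetical, not anything a manufacturer has published.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    description: str
    expected_fatalities: float  # hypothetical probability-weighted estimate

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Pick the maneuver with the lowest expected fatalities."""
    return min(options, key=lambda m: m.expected_fatalities)

# The two alternatives from the Tempe hypothetical, with made-up numbers:
options = [
    Maneuver("continue straight, striking the pedestrian", 0.9),
    Maneuver("swerve into the guardrail, killing the driver", 0.9),
]

# With equal estimates, min() simply returns the first option.
# The machine still "decides" -- and someone chose that tiebreaker in advance.
print(choose_maneuver(options).description)
```

The unsettling part isn't the code, which is trivial; it's that somebody has to pick the objective function, and the tiebreaker, ahead of time.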
What if you're on a busy highway and, instead of two options where one person dies, you have a myriad of options? Maybe the car in the lane next to you could take the hit, but there's a newlywed couple inside. That seems like the wrong decision, but if you let the crash play out as it will, you'll end up dead, and so will the family of four with two children right in front of you. The car could always swerve right, but that car carries a businessman with a net worth of a couple hundred million. Luckily, a human would never be tasked with this decision; you would never have the information or the time required to make it fully informed. But a smart car, with plenty of processing power and a database holding all the information it could need, would have to make a decision, even if that decision is to do nothing. What are we going to do, then, when a car makes the call that one life is pragmatically worth more than another? Judging by the reactions to entirely accidental crashes, it's not going to be pretty. We need to ask the questions of responsibility now, not during the blame game that unfolds after people end up dead.
Contact Caio Brighenti at [email protected].