The Myth Of Autonomous Vehicles’ New Craze: Ethical Algorithms

The sheer magnitude of the inevitable transition from human-driven vehicles to autonomous vehicles (AVs) requires the careful consideration of a vast array of potential issues. Chief among them are cybersecurity, job loss and an appropriate regulatory scheme to handle the fast-moving technology.

What is not crucial to this comprehensive review of an AV world, however, is a recent craze known as “ethical algorithms.”

These algorithms focus on thorny hypothetical situations in which an autonomous car has to make a split-second ethical decision, such as choosing between killing 10 adult pedestrians in the roadway or swerving into a median, killing three young children. There is a slew of reasons why devoting precious resources to “solving” such ethical dilemmas is not only unproductive, but actually quite counterproductive.

Obviously, resources devoted to the development and diffusion of driverless technologies are finite. This means that every dollar, every engineer and every man-hour spent theorizing and developing ethical algorithms is a vital resource not being employed toward the actual, aforementioned obstacles to an expeditious, widespread adoption of AV technology.

What’s more, financial expenditures might not even be the most counterproductive effect of this recently developed school of thought. Rather, that distinction belongs to the concept’s perversion of the apprehensive psyche of the masses. The average person’s inherent mistrust of human-displacing technology already presents a major hurdle to widespread autonomous tech; the last thing the natural skeptic in all of us needs is a misguided belief that AVs are going to be choosing who lives and who dies.

Don’t get me wrong: these hypotheticals are fascinating thought experiments, and may well evoke fond memories of the trolley problem from Ethics 101. But before we allow the fear of such slippery-slope dilemmas to derail mankind’s progress, we need to critically examine the assumptions such models make, and ask some crucial questions about their practical value.

The first question grants the models their central assumption: that these dilemmas are, in fact, at least possible. Even accepting that for the sake of argument, how practical are these hypothetical ethical dilemmas, really?

Most Americans have driven a substantial number of miles, and yet I don’t think many of us can recall ever having to make the catastrophic moral choice between killing 10 adult pedestrians or three young children — or anything even remotely resembling such a dualistic, doomsday framework. But it is exactly that inherently absurd framework — one in which no right answer possibly exists — that sets the stage for most experimental ethical hypotheticals.

When we combine the evidence of billions of human-driven miles with the fact that autonomous cars will be far safer than the human drivers of today, the statistical likelihood of an individual’s autonomous car facing such a tragic predicament approaches zero.
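
As a rough order-of-magnitude illustration of how small that likelihood is, here is a minimal sketch; the per-mile fatal-crash rate and lifetime mileage below are ballpark assumptions, not figures from this article:

```python
# Back-of-the-envelope odds that a given driver is ever involved in a fatal
# crash at all, let alone a contrived two-option dilemma within one.
# Both inputs are order-of-magnitude assumptions, not figures from this article.

fatal_crashes_per_mile = 1.1e-8  # roughly 1 fatal crash per 100M vehicle miles
lifetime_miles = 800_000         # ~13k miles/year over ~60 driving years

p_fatal_crash = fatal_crashes_per_mile * lifetime_miles
print(f"Lifetime odds of any fatal crash: ~{p_fatal_crash:.1%}")  # ~0.9%

# Dilemma-style scenarios (multiple unavoidable victims, braking ruled out)
# are a tiny, unmeasured fraction of even that ~0.9%, and safer AVs would
# shrink the base rate further, pushing the product toward zero.
```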

The second question we need to ask is whether the “decisions” presented in experimental ethical hypotheticals are even possible. (Spoiler alert: I don’t believe they are.) To begin, these hypotheticals assume that there would ever be a situation in which we, as human drivers, would be forced to decide between such mutually horrendous alternatives. The fact of the matter is, even if such a situation did arise, we would not have the time to decide anything.

The most convincing reason for this conclusion lies not under the hood, but on the wheels of the car itself: the brakes. The simple fact is that if we have enough time to weigh a complex moral dilemma requiring such considerations as utilitarian cost-benefit analysis, the competing interests of the ego and justice, the Golden Rule and, ultimately, life and death, and then to act affirmatively and precisely on our rationally begotten conclusion, we certainly have more than enough time to slam on the brakes and bring the car to a halt. So what does this mean?

Given that a car traveling at normal speeds can brake to a complete stop within a few seconds, any hypothetical that eliminates braking as a viable option almost certainly eliminates the possibility of a decision based on a rational calculation of values along with it.
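
As a rough sanity check on that claim, here is a minimal sketch using standard constant-deceleration kinematics; the 50 mph speed and 0.8 g braking figure are illustrative assumptions, not numbers from this article:

```python
# Illustrative stopping-time and stopping-distance estimate under constant
# deceleration. All figures are assumptions: 50 mph initial speed and ~0.8 g
# of braking, a common dry-pavement ballpark.

MPH_TO_MPS = 0.44704  # meters per second per mph
G = 9.81              # gravitational acceleration, m/s^2

speed_mps = 50 * MPH_TO_MPS  # ~22.4 m/s
decel = 0.8 * G              # ~7.8 m/s^2

time_to_stop = speed_mps / decel               # from v = a * t
distance_to_stop = speed_mps**2 / (2 * decel)  # from v^2 = 2 * a * d

print(f"Time to stop:     {time_to_stop:.1f} s")      # ~2.8 s
print(f"Distance to stop: {distance_to_stop:.0f} m")  # ~32 m
```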

To illustrate this conclusion, imagine yourself in the following popular hypothetical: You are driving along at 50 mph when suddenly 10 pedestrians appear on the road just a few feet ahead of your car. There is a concrete barrier on both sides of your car, and there is no time to brake. In the stark terms of an experimental ethical hypothetical, the resulting “choice” would be posed as: Do you plow ahead, surely killing 10 pedestrians, or do you swerve hard, certainly killing yourself?

The most immediate problem with this formulation of the “dilemma” is that, in reality, there are no guarantees. If you continue forward, there is no certainty that you will kill 10 people, nor that you would actually save your own life by “deciding” this way. Likewise, and perhaps most importantly, we are not, and cannot be, guaranteed to die if we instead choose to avoid the pedestrians by swerving into the concrete barrier.

If I were somehow dropped into this situation, I am inclined to think my “decision” would be to swerve, avoiding the group of pedestrians. However, I suspect the overriding explanation for this reaction would be one not grounded in ethics, but rather the result of pure physical reflexes: If 10 pedestrians appear in front of my car, I am going to swerve hard and slam on the brakes before I have time to think at all, let alone perform a reliable moral calculus.

The fact that I am not guaranteed to die by doing so allows me to have such a reaction: avoid hitting 10 people head-on with a two-ton hunk of steel and take my chances dancing with the barrier. This feels like a reasonable “decision,” but in reality isn’t, and never will be, a decision at all. Now add in the fact that autonomous cars will sense and react to sudden situations nearly instantaneously, unlike human drivers, who need more than a second of crucial time just to perceive and react, and the need to solve these absurd hypotheticals with ethical algorithms hardly seems pressing.
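
To put that second of reaction time in perspective, here is a short sketch comparing how far a car travels before braking even begins; the human and machine reaction times are illustrative assumptions, not measurements:

```python
# Distance covered before braking even begins, human vs. machine, at 50 mph.
# Reaction times are illustrative assumptions: ~1.5 s is a common ballpark
# for human perception-reaction; 0.1 s stands in for an AV.

MPH_TO_MPS = 0.44704

speed_mps = 50 * MPH_TO_MPS  # ~22.4 m/s

for driver, reaction_s in [("human", 1.5), ("autonomous", 0.1)]:
    gap_m = speed_mps * reaction_s  # distance = speed * reaction time
    print(f"{driver:>10}: {gap_m:5.1f} m traveled before the brakes engage")

# human: ~33.5 m; autonomous: ~2.2 m. At 50 mph, the machine starts shedding
# speed roughly 30 m sooner than the human can.
```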

The final false assumption on which these ethical dilemmas rely is that, even if there were a right answer, the best approach to discovering it is to survey people with impossible ethical dilemmas, asking them how they would want their own car to react in such a situation. Of course, the answer to this query is typically at odds with respondents’ answer to the other main question: how they would want other cars on the road to react.

The predictable answers here stem from the inherent absurdity of the ethical hypotheticals themselves: Of course we want our own cars to save us, and others to sacrifice their occupants on our behalf — we’re humans. But just because we do feel that way doesn’t mean we should, and certainly doesn’t mean we should aim to program our AVs according to such results.

Finally, even if a situation did arise in which an AV had to decide between potential alternatives, it should do so not based on an analysis of the costs of each potential choice (information that cannot be known), but rather based on a more objective determination of physical expediency.

This should be done by leveraging the computing power of the vehicle to consider a vast array of physical variables unavailable to human faculties, ultimately executing the maneuver that will minimize catastrophe as dictated by the firm laws of physics, not the flexible recommendations of ethics. After all, if there is time to make some decision, there is time to mitigate the damage of the incident, if not avoid it entirely.
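
The article prescribes no particular algorithm, so here is a minimal hypothetical sketch of what such a physics-first selection could look like: each candidate maneuver is scored by the impact speed it would leave, and the lowest wins. Every option and number below is invented for illustration:

```python
# Hypothetical sketch: pick the maneuver that minimizes predicted impact
# speed (a proxy for kinetic energy), using only physical quantities the
# vehicle can estimate. All options and numbers are invented for illustration.

import math

def impact_speed(speed_mps: float, decel_mps2: float, clear_dist_m: float) -> float:
    """Speed remaining after braking at decel_mps2 over clear_dist_m,
    from v_f^2 = v_0^2 - 2*a*d (floored at zero if the car stops in time)."""
    v_sq = speed_mps**2 - 2 * decel_mps2 * clear_dist_m
    return math.sqrt(max(v_sq, 0.0))

speed = 22.4  # m/s, ~50 mph

# (name, achievable deceleration in m/s^2, free distance before obstacle in m)
maneuvers = [
    ("brake straight",       7.8, 25.0),
    ("brake + swerve left",  6.0, 40.0),
    ("brake + swerve right", 6.0, 15.0),
]

best = min(maneuvers, key=lambda m: impact_speed(speed, m[1], m[2]))
for name, decel, dist in maneuvers:
    print(f"{name:>20}: impact at {impact_speed(speed, decel, dist):4.1f} m/s")
print(f"Chosen maneuver: {best[0]}")
```

The point of the sketch is that the objective function is kinematic, not moral: no option requires assigning a value to any life, only an estimate of how much speed can be shed before contact.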

Coming to terms with a world in which we increasingly put our lives in the hands of machines is not simple, but it becomes much less simple when we try to force a false notion of ethics into the conversation. The world of tomorrow does not include a dangerous and slippery slope of software ethics, at least in the sphere of AVs. The sooner we accept this fact, the sooner we can realize the safer streets such technology promises.