The Five Are at Fault for Violating the Law

The dilemmas posed by the advent of self-driving cars hearken back to the trolley problem and its variants. Every answer has its glaring flaws and drawbacks, but the trolley problem is far more hypothetical; while the companies producing self-driving cars are not yet responsible for a definitive ruling because the technology is still in its fledgling stage, the questions these cars raise have real-world consequences and require answers. Ethical musing and evasion will not suffice as the programming develops, so a consensus must be reached and a series of decisions made among imperfect choices.

The first choice: should self-driving cars be moral utilitarians? That is to say, should the car make the choice that saves the most lives? In the hypothetical posed, which weighs five endangered lives against that of the sole occupant, the utilitarian car would endanger the sole occupant. The question also specifies that the sole occupant would be me, which tinges any answer I give with bias. If I want to present myself heroically, I say the car should kill me and save the five; if I want to come across as blunt and realistic about my own capacity for selfishness, I say the car should save me. I suppose, then, that I must try to set aside personal motivations and think as logically as possible, but even then I find myself having to choose a fundamental system to accept as the premise for a rational progression of thought. My utilitarian self would say that because all lives are of equal worth, five lives are worth more than one, and thus I should be killed. My moral absolutist self would reply that, if the self-driving car is perfect in its obedience to the law, then the five are at fault for violating the law, and I should not be killed simply because more people would die otherwise. The second option seems like the wrong one because it appears egocentric, but I believe it is the more morally sound answer. One of the self-driving car’s purposes is to follow the law better than a human can, making the roads safer and reducing the roughly 94 percent of crashes attributable to human error, and any programming that violates this legal basis renders its creation useless. If driverless cars are programmed to make unlawful choices, what makes them any safer or more reliable than a human driver in this situation? A precedent has to be established, albeit an uncomfortable one, so the car should endanger the five other lives.

The second choice: whom can we trust to make this call? Even though the driverless car presents a new reality in which no person may be at fault for an accident, the car’s programmers are responsible for its intentions. Who has the authority to tell the people creating these cars how to instruct them, and which moral principles are the right ones? The “right” answer is unknowable, but one principle, or perhaps a combination of several, has to guide us. I broached utilitarianism and absolutism above, but not relativism. I am unsure whether this will ever be possible, but perhaps the car could be governed by a series of factors and act differently according to different variables. For example, if the sole occupant were an ordinary person and the five others were all serial killers, perhaps the right thing to do would be to save the sole occupant. Then again, that is ethically questionable as well, because it judges one life to be more valuable than others based on an arbitrary metric. Additionally, proceeding in that manner calls for too much analysis to occur in what would likely be a matter of seconds, so relativism would not work as a guiding principle. With all of this in mind, I believe my reflection on the first choice remains applicable, and consequently we should adhere to absolutist principles over utilitarian or relativistic ones in order to remain consistent and abide by the law.

The third choice: if the whole point is to make the roads safer, and people are generally the common denominator in crashes, should we prohibit human drivers altogether? A significant problem with self-driving cars is that they cannot avoid every accident. Ideally, they follow the driving laws perfectly, but they have no capacity (yet) to make an unlawful choice that evades danger; an example would be backing up after stopping at a red light to give a larger vehicle a wider turning berth and thus lessen the risk of a collision. Self-driving cars may be safer, but their instincts are not as developed as those of a human driver in cases where an unlawful action would prevent a collision. Thus, until every driver owns or can use a self-driving car with a flawless track record of not simply safe but defensive driving, people must be morally permitted to drive. One way to narrow the gap between human and self-driving performance would be to filter out unsafe and unreliable drivers with a more rigorous driving test and stricter enforcement of road rules. Still, those tests cannot stop people from driving drunk or prevent accidents altogether, so the best possible scenario would likely be a gradual phasing out of human drivers until everyone, regardless of class status and other limiting factors, had access to a self-driving car. Even then, the infrastructure would have to be perfect for the cars to function, and that world will take decades, if not longer, to materialize. Hopefully, by that time all of these questions will have definitive answers. Until then, we can only continue to ask high schoolers to answer them in a timed format and hope to arrive at a moral conclusion.
