The trolley problem - why would you turn?
Suppose you're the driver of a trolley coming down the tracks, headed towards three workers. You press the brakes and realize that they don't work. You can turn onto a different track, but there is a problem: one worker is on the other track, and if you turn you're going to crash into him, killing one but saving three others. What do you do? Most people would turn. But why? How is it different from murdering one person to save three others?
Of course, having more information may change the decision... For example, the three workers on one track are convicted murderers who show little chance of rehabilitation, and the single worker is a hard-working father of three.
A potential third concern is that the trolley problem (like most thought experiments) is structured in much the same way as a mathematical problem. Just look at how many variations there are on Foot's original setup, how different parameters are established, how the numbers change for intended effect, etc. And yes, we might say something like, "well, the alterations of the variables themselves are irrelevant; what's more important is that we are able to extrapolate some type of broad ethical norms from the scenario." Which might sound good, but probably doesn't accurately reflect the actual business of doing ethics or being in an ethical conundrum. Instead of 'feeling' ethics we just problematize it, and instead of practicing, or considering the practice of, more real-world situations which demand ethical attention, we shift our efforts toward a puzzle-solving motif where the focus becomes theory construction and the categorization of ethical attitudes based upon a seemingly unrealistic hypothetical.
Again, I don't say this to de-rail anything. Just to add a potentially useful consideration point to the discussion.
You're not simply minimizing the damage from three to one. You're choosing to kill different people than the ones destined to be killed by the trolley. It's let the trolley kill three, or choose to kill one. And if choosing to kill an innocent bystander is not murder, I don't know what is. (I'm just playing devil's advocate; I don't necessarily believe this.)
If I face a problem like this and am well aware that if I save them I kill another worker, it would seem to be my responsibility to save them, because I am the only one who has the power to do so. However, in this problem, where someone will die no matter what, the question is: who will, or should, die? Now, because I do not have any information about these workers, the question remains unsolved. Even if I had some information (as in the case where the three of them are murderers), I do not believe I have the right to decide who should live and who should die. Either way, it is not my fault that the incident happened, and so I do not bear any responsibility.
Plus, I think that when people say they would turn, it's mostly just talk; if they were actually confronted with that situation they would probably freeze.
A third view is the Doctrine of Double Effect: you intend to save three people, and the death of one person is simply a side effect of your action of pulling the lever. According to this view, as long as your intended goal is to save people rather than to kill people, it is morally permissible. There's another view, by Foot, which states that negative duties (e.g. you ought not to kill) trump positive duties (e.g. you ought to help the poor); since pulling the lever would violate your negative duties, you shouldn't pull it. There's also the view that it's morally permissible to pull the lever because you aren't initiating a chain of events, but rather redirecting one.
There are many other different (and equally important) views, but the above are the ones I can think of off the top of my head.
Is there a theoretical scenario that is more realistic than people being strapped to tracks?
Thanks
Ah, so the article is just updated tech vs. trolleys.
The real issue with the dilemma that no one seems willing to address is the issue of more loss of life versus less. More pain vs. less. And of course, there's no morally sacrosanct answer. Pain and death are equally incomprehensible, regardless of quantity.
One thing that might be worth acknowledging is that puzzles like the trolley problem are flashy and relatively simple in terms of structure, yes? If you had magical powers to stop an asteroid from hitting earth, but only insofar as you could divert it away to a less densely populated alien planet, would you? And then we sit around and muse about how exciting it would be to have magical powers and what it means to save planets. But aside from the three problems I listed in my first post we have another, more common-sense issue here: I don't have magical powers, and even if by some minute chance I ever had something resembling magical powers, why would I be in a situation where I would need to decide between two planets for a potential asteroid impact? It sounds a little silly, right?
A real ethical dilemma may not be as exciting. Let me give you a genuine ethical problem I faced last night: it's quite cold in the city I live in. My wife and I were walking our dogs through one of the parks last night and we saw a homeless man under an awning, against the wall of one of the buildings in the park (presumably to protect himself from the chilly wind and potential rain). It was fairly late, and he was trying to sleep, but his sleeping bag looked thin and I didn't get the sense he had any additional blankets. He appeared to be tossing and turning in the near-freezing temperature.
I live approximately six city blocks from this park. Should I go home, get a few old blankets from our closet, and run back there so he has some additional layers? I have an arctic-rated sleeping bag that I barely use any more - should I just give it to him? Then there are other potentially mitigating thoughts: e.g. it's not as cold as it has been - he obviously survived the snow and ice we had; surely he can survive a twenty-degree increase from those conditions? Or is it really my responsibility to provide blankets to the homeless and otherwise indigent? Where does it end? Technically I have lots of non-perishable food I could also give away - should I also give that to him? If I help him out, should I also help out the cold homeless man on the street a few blocks away who sleeps near the bagel place? It's only a bit further, and I have extra blankets after all. Of course, at what point should I just canvass and donate for more homeless shelters in my city? What sort of financial commitment can I afford to make and still support my family? I make enough money to get by alright, but I likely do not make enough that I could prudently support a large homeless initiative as well.
That's not a flashy problem. It's not an easily structured problem that really allows us the categorization potential for ethical determinations that is typically generated from puzzles like the trolley problem. It's not simple to categorize all the divergent thoughts that run through our minds in those situations, and the nuances alone make it a bit less simple to think about in the types of terms we often partition out when dealing with something like the trolley problem. But it stands that this was a very real ethical question I had to think about with my wife, and it's a type of question that many in my city face on a daily basis (from one side or the other) in the winter.
Consequentialism would say that pushing the fat man is morally permissible since it yields the best consequence (e.g. saving more people). However, the Doctrine of Doing vs. Allowing, Doctrine of Double Effect, Positive vs. Negative Duties, and Foot's distinction between initiating vs. redirecting causal sequence would all say that it's morally impermissible.
Since the Doctrine of Doing vs. Allowing states that doing harm is worse than allowing harm, pushing the fat man counts as doing harm and is thereby morally impermissible. The Doctrine of Double Effect states that if the intended means constitutes harm (e.g. pushing the fat man in order to stop the trolley), then the act is morally impermissible. Since your goal is to stop the trolley by pushing the fat man, it is morally impermissible. The view that negative duties trump positive duties would say that pushing the fat man violates a negative duty, so it is morally impermissible. Foot's distinction between initiating vs. redirecting a causal sequence would say that since pushing the fat man initiates a causal sequence (pushing him leads to his death), it is morally impermissible.
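The contrast between these views can be sketched as a toy model. To be clear, this is purely an illustrative device of my own, not a serious ethical calculus: the view names and verdicts follow the summary above, but the scenario fields (`agent_does_harm`, `harm_is_intended_means`, etc.) are my own hypothetical encoding.

```python
def verdicts(scenario):
    """Return each view's verdict ('permissible'/'impermissible') for a scenario.

    A toy encoding of the doctrines summarized above; not a real ethical theory.
    """
    v = {}
    # Consequentialism: permissible iff the act saves more lives than it costs.
    v["consequentialism"] = (
        "permissible" if scenario["lives_saved"] > scenario["lives_lost"]
        else "impermissible"
    )
    # Doctrine of Doing vs. Allowing: doing harm is worse than allowing it.
    v["doing_vs_allowing"] = (
        "impermissible" if scenario["agent_does_harm"] else "permissible"
    )
    # Doctrine of Double Effect: impermissible if the harm is the intended means.
    v["double_effect"] = (
        "impermissible" if scenario["harm_is_intended_means"] else "permissible"
    )
    # Negative duties (don't kill) trump positive duties (help others).
    v["negative_duties"] = (
        "impermissible" if scenario["violates_negative_duty"] else "permissible"
    )
    # Foot: initiating a harmful causal sequence vs. redirecting an existing one.
    v["initiating_vs_redirecting"] = (
        "impermissible" if scenario["initiates_sequence"] else "permissible"
    )
    return v

# The fat-man case: pushing him is doing harm, is the intended means of
# stopping the trolley, violates a negative duty, and initiates a new sequence.
fat_man = {
    "lives_saved": 5, "lives_lost": 1,
    "agent_does_harm": True,
    "harm_is_intended_means": True,
    "violates_negative_duty": True,
    "initiates_sequence": True,
}
print(verdicts(fat_man))
```

Running it shows the split described above: consequentialism alone returns "permissible" for pushing the fat man, while the other four views all return "impermissible".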
It seems that many if not all cars in the near future may have automatic controls (some cars already have versions of this option), so that a person may, or perhaps must, give up control of the car and hand it over to an on-board computer system. The computer will probably face situations similar to the Trolley Problem, where multiple dangerous courses of action are possible and it must make a split-second decision.
Two possibilities:
a) The car has a mandatory values system, common to all autonomous driving vehicles, which is built in and regulates its normal course as well as its course in extraordinary cases.
b) The owner gets some say in the ethics of the car, perhaps by prioritizing the car's occupants' welfare over the welfare of those external to the vehicle.
My guess is that option a) is the only one I can envision insurance companies accepting. Autopilot is expected to reduce accidents and injuries, so if it works as advertised, insurance companies will go along with it, but my guess is that they will want a say in any ethical rules built into autonomous driving.
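The difference between the two options can be sketched in code. This is a hypothetical illustration only: the class name, the risk-comparison rule, and the idea of a single boolean setting are all my own inventions, not anything a real vehicle or regulator uses.

```python
# Illustrative sketch of options a) and b) above; all names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class EmergencyPolicy:
    """A values system governing split-second emergency choices."""
    # Under option a) this flag is fixed by regulation; under option b)
    # the owner could set it when buying the car.
    prioritize_occupants: bool = False

    def choose(self, occupant_risk: float, bystander_risk: float) -> str:
        """Pick a course of action given estimated risks (0.0 to 1.0)."""
        if self.prioritize_occupants:
            # Option b): occupants always come first.
            return "protect_occupants"
        # Option a): a common mandatory rule - shield whichever party
        # faces the greater risk, regardless of who owns the car.
        return ("protect_occupants" if occupant_risk > bystander_risk
                else "protect_bystanders")

# Option a): the same frozen policy shipped in every vehicle.
MANDATORY_POLICY = EmergencyPolicy()

# Option b) would instead be EmergencyPolicy(prioritize_occupants=True),
# the owner-configured variant the comment guesses insurers would resist.
```

The `frozen=True` dataclass is a small nod to the "built in" aspect of option a): once constructed, the policy's settings cannot be changed at runtime.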
Especially when it comes to warfare, real-world "ethics" are totally different, starting from the laws and jurisdiction of a nation, through the international laws of war, and ending with the military doctrine, strategy, and objectives of an armed force. After all, in many wars even today civilians are deliberate targets themselves. The legal and social maze that humanity has built, especially around conflict between nation states, shouldn't be underestimated.
An unintentional accident (where the fault would likely be assigned to the builder of the brakes or the maintainer of the trolleys) is a bit different from war. And let's not forget that the workers likely understand the dangers of their work on actively used tracks.
One is the option of physically throwing a fat guy onto the tracks to save ten people.
Turns out that when you have to get up close and personal, people are more reluctant to obey their utilitarian ideals.
What if it's a hundred workers?
Actually, this is a very good, if not ethically relevant, point. All railroads I've worked for, including commuter rail, have very stringent requirements for working on the tracks. Those include notifications, signals, watchmen, and derailers. Derailers do just that: if a train goes somewhere people are working, it is nudged off the tracks in a (it is hoped) safe way. The need for worker protection rules, regulations, and work practices is a much more interesting, relevant, and realistic ethical issue than fat guys stopping trolleys.
Although I agree with those who say the situations described are unrealistic and unhelpful (and silly) from an ethical standpoint, your point is also a good one. There must come a point when the ethical fault caused by actively killing one person is balanced by passively allowing many to die.
Oh, wait. I misunderstood. You think it would be ok to let the Death Star destroy Alderaan rather than drop Jar-Jar Binks down the vent pipe into the reactor core.
If you are choosing someone to die, then you are the cause of someone's death. If you let the three workers die, then you played zero part in the accident, because none of it was caused by you. Lives will be lost either way and there will be suffering either way. I don't think you have the right to doom someone's life just to save more lives if they were safe to begin with.
You would let billions die in order to keep your conscience clean? So you can say "It's not my fault?"
Once again I say: arguments like that are why people don't take philosophy seriously. Hey @Baden, how do I set up a macro that will print that phrase out automatically? I'm getting tired of typing it in so often.
Excellent point. I usually enjoy these problems as parodies of philosophy at its most tone-deaf. It's like the tragicomedy of a Vulcan working out an algorithm to maximize virtue. Everything profound and high is reduced to a maximization or minimization problem.
However, as @Michael showed the scenario isn't completely unrealistic. I read a true story of a sinking ship where one sailor was put in the exact same situation and he made the decision to sacrifice the few for the many. I believe he was honored as a hero.
Also, the Trolley problem is specific in its criticism. It's about Consequentialism (maximal happiness) and how its foundational premise, to say the least, needs more work.
Some say rationality is paramount in all human endeavors; that being rational is akin to carrying a bright torch in the darkness. If you believe this, then the Trolley problem carries weight, for it exposes a hole in consequentialist moral theory. It needs to be modified or discarded.
Strangely, I think we are all, instinctively, consequentialists in moral outlook. We always look to the effects of our thoughts and actions. Consequentialism seems to be our moral principle. All the reason to evaluate it thoroughly don't you think? The Trolley problem is important, even if only hypothetical.
Let me re-frame the criticism differently before going too far down the path of practical ethics, as I think I may have miscommunicated the role of the hypothetical. What Michael mentioned with the war-time general, and what you mentioned with a sinking ship, are examples that do happen - that's just fine. But those sorts of events are exceptions rather than rules - specific instances of ethical problems encountered very rarely by a select few, and ethical problems that do not have the common content that most of us encounter. Now, I casually mentioned this in my first post, I think: the argument in favor of this suggests that the applicability and/or exceptional nature of the circumstances are irrelevant - instead we're just attempting to extrapolate a mode of thinking from the example; that's essentially the point of these sorts of puzzle-solving activities. But the overarching issue I believe most critics have is that the grossly unrealistic nature of the hypothetical (and please read the term "unrealistic" as "not something likely to ever be encountered by the vast majority of people in real-world ethical scenarios") tends to muddy the waters of our ethical judgement.
Ethical judgement becomes the medium of exchange when looking at the difference between a far more applicable ethical problem (e.g. helping the homeless person you pass on the street corner) and the exceptional case of the standard thought experiment. Take the trolley problem, for instance: our ethical judgements take on a very different character when we have to abstractly determine quantities of potential dead people, the role of trolleys in their deaths, and the act of attempting to "solve" the problem. In and of itself, that may not sound as though it would cloud our judgement, but the critical response is to note that, because of the structure of the trolley problem (which is used to categorize, compare, and evaluate ethical attitudes), we might cultivate a propensity to look at the far more common ethical problems in that same way. In order to truly articulate the distance of applicability and highlight the role of ethical judgement, we need only inquire, "why not simply look at real problems that most of us face, have faced, or will face?" And I think that's the approach that is coming into favor with some contemporary ethicists.
Except people aren't particularly rational and we don't make our moral judgments on a primarily rational basis. And that's a good thing. We follow our hearts, and we should. Hearts aren't stupid. They're not ignorant of consequences.
My heart tells me - "Drop Jar-jar down the effing vent pipe. What are you, an idiot."
I mean, you're certainly not wrong - I think the comparison to a scientific theory is fairly appropriate, with one minor addition: scientific theories rely on a specific notion of verification criteria and consensus. Ethical theorization of the variety that comes about through puzzles like the Trolley Problem has the same procedural objective as a scientific theory, but lacks the constituent elements that transform the observation process into one of concrete theory construction. Don't get me wrong: I'm not trying to drive some wedge between science and philosophy here - that's far beyond the scope of this post. Rather, I think it's important to acknowledge the aim of the scientific theory and how that aim is shared in consequentialist ethical theorization while lacking certain aspects of scientific theorization.
One way to potentially frame that lack is in terms of personalization. We needn't, and oftentimes shouldn't, begin the long and detailed process of scientific theory building with a vested personal interest in the outcome or, technically speaking, in the content and means of interpreting our data. Ethical problems, however, often do take on a deeply personal character - and with that comes a certain level of mental and emotional complexity. What happens if we start to depersonalize ethics, though? What happens if we try to remove the moral agent, with all their feelings, reservations, and thoughts, in favor of a model that facilitates an abstraction of content that might have a serious impact on our lives? Now, to be fair, I am painting this in unfair terms, and I do not wish to suggest that the process of ethical abstraction is sociopathic in some way as compared to a more virtuous structure of personalization (it might well be, but that's a big claim that would need a lot of backing). But it's important to observe that something is lost in translation when we move ethics from the personal to the abstract; the specific to the approximate.
That's something very interesting. Contrary to what you think I feel our ''heart'' is a misconception of the ancients. Modern science has ''proven'' that our brains do the thinking AND feeling. Keeping the terminology for the discussion, do you really think our ''heart'' reasons through in its interaction with the world and ourselves? I don't know. Our ''heart'' is instinct-based or call it intuition. In neurological terms our ''hearts'' are reflexive - the seat of the much-maligned knee-jerk response. Do you think this involves any kind of mental processing to which we can apply the term ''rational''?
I think @T Clark might have something to say about this. I don't know. Speaking in very general terms, rationality applied to ethics hasn't resulted in anything practically useful. All moral theories, rationally generated, have holes in them. This makes me believe that morality is, well, irrational. [i]The heart has reasons the mind knows not.[/i] Too radical a view?
Last time I checked our thoughts did not have labels attached to them; 'heart' or 'mind'.
It's not an unreasonable principle that our intuition (which is what I'm presuming you mean by heart) is privy to information that our conscious brain is not, but then we are still left with distinguishing one from the other. What reason do we have for thinking our first thoughts are more 'intuitive' than our later ones?
[quote=Blaise Pascal]The heart has its reasons which reason knows nothing of... We know the truth not only by the reason, but by the heart.[/quote]
I believe that there are many rational theories on morality out there and also that each one of them has imperfections. The end result is that none of these moral theories can pass as a comprehensive guide for moral decisions.
Yet, we, all of us, have a sense of morality.
What does that speak of?
Maybe it's not correct but, given that the above is the case, I find it convenient to make the distinction between mind and heart. Morality comes from the heart, and the mind, reason, can't make sense of it.