An Alternative Trolley Problem
Just in case you’re not familiar with the trolley problem, here is the outline:
Two people are tied to a train track (Track A) and one person is tied to another train track (Track B). A train is coming and will continue on, killing the people on Track A, unless you pull a lever and instead redirect the train onto Track B, killing one person.
What do you do?
My spin on this is a little different. In my scenario the two people on Track A BOTH have a 50% chance of going on to lead a bad life and kill someone, whilst the person on Track B has a [s]75%[/s] (sorry, mistake! Meant to be 25%) chance of the same.
What do you do?
Here is a scenario (it can be adapted) that does a better job than the trolley scenario of getting at how we weigh decisions:
You are a lifeguard at a beach. You see that two women at opposite ends of the swimming area are both showing signs of serious trouble in deep water. One of them is slim and beautiful, the other one is fat and ugly. You can only help one. Which woman will receive the benefit of your life-saving expertise?
The two swimmers could be white and black, male and female, gay and straight (you observed them before they went into the water; that was your impression), or young and old, etc. You can't save them both, but you can save either one.
In this case, the choice requires you to do something good (saving someone from drowning) rather than inevitably doing something bad (causing someone's death).
In the real world, we are more likely to make a forced choice on whom to save, rather than on whom to kill.
Or, to whom do you give the benefit of the doubt? That's a real life situation that comes up much more often. Do you think someone may have cheated you? Why do you let it pass in one situation and not in another?
My question is do you see the issue I may be raising? It appears not if you find your scenario equivalent.
No, I don't. The fucking trolley has been rolling down the track for years and doesn't get better by being repeated. It's just a no-exit forced choice. Boring!
Whether 1 billion people or 1 or 2 people are supposed to get killed in the forced choice, it has nothing to do with the price of corn in Iowa.
Set up a scenario where someone comes out alive, rather than gets run over, why don't you?
What was your point, by the way?
I'm not the only one who doesn't seem to be getting whatever it is that you want to reveal.
Correct. Anything else?
Will do. Meanwhile, Wallows and Sushi are tied to the track on which a trolley approaches at high speed. There is nobody at the switch. What are their last words to the world?
Not whatsoever, Bitter Crank. I am saying that you have been cheated if nobody tells you that you can't recuse yourself, which is the choice any sane person ought to make.
Whereof one cannot speak, thereof one ought to practice quietism. :blush:
And if you could save an eight-year-old boy or an eight-year-old girl from a fire, you’d choose neither, letting them both die. There is a necessary moral burden in either choice, and I’d say a greater one in allowing two people to die instead of one.
The hypothetical in the OP is set up to trick you. It’s a puzzle which almost everyone is duped by.
I don't think it is. It is set up to investigate moral intuitions in relation to consequentialism.
For a utilitarian, there is a simple calculation of the consequences, which you have made slightly more complicated but have not changed the nature of.
But for a Kantian, there is no calculation to be made, because it is always wrong to use a person as a means, in this case of stopping a trolley. In these days where everything is a trade, and everything has a price, it is quite unusual to find a non-consequentialist outside of religion, but still a lot of people feel a certain repugnance for such moral calculations.
Personally, I would say that there is no moral, and no immoral act, because there is no kindness or unkindness involved either way. If you are a calculator, you make your best calculation; if you are an intervener, you make your best intervention; if you are hesitant you let the dice fall where they will. IOW, it is the mind that is guilty or innocent, and the act and its consequences are mere scenery to the passion play.
Anyway, you’ve still not answered. You’re right that it doesn’t matter what you choose. It does matter whether you choose for the right reason. That is the puzzle.
No it doesn't. It depends on whether you think the value of life is calculable, such that more is always better, or that there is some virtue formula whereby the value of lives can be compared. Those of us that don't think that find that the information in the trolley problem is of no use to us. I don't know what I would do, but I would feel bad about whatever I did, because death would be the result.
Is it really hard to grasp that for the purpose of the hypothetical the lives are viewed as equally important? So two people dying is NOT better than ONE or NO people dying?
The alternative is to say no life has any value. That is quite far removed from saying individual lives are difficult to measure against each other as a whole.
Dude, you are reading some very weird shit into what I have said. But normally, when I happen to be passing someone on fire, I find I can wrap them in a blanket or spray them with water without killing anyone else. Have I been doing it wrong all these years? I am a fatalist only in the sense that I think everyone dies.
Do the people on the track come with a sticker on their head that says how likely they are to kill someone in their life?
What if the guys on track A are sterile and the guy on track B ends up having 10 children but there is 50% chance one of them is Hitler, who do we pick?
The difference is 2 lives, or 3 lives. See? I can do arithmetic. First responders face this sort of situation sometimes: several casualties, some in dangerous places, some trapped, some bleeding to death. You can't help everyone, so you do the best you can, and it's not a moral question but a practical one: how you can best spend your time. What to do first - get the pregnant woman out of the burning car, stem the bleeding of the cyclist she ran over, or perform CPR on the old philosopher whose heart attack caused the accident?
So I want to say that it is moral to treat any of these, and one is not more important morally than another. First responders tend to have a checklist so they don't have to think and choose at the time. The checklist is not a moral ordering. It is not immoral to treat the philosopher because the pregnant woman counts double.
The OP is as it is. I’ve given you as much information as needed. It’s a hypothetical, so assume they are humans with exactly the properties I presented. If you wish to add extras, move to the other thread about allowing one billion to die to save the human race.
I've already answered the OP with reference to established philosophy, to the effect that there is no moral difference, to my mind, between killing one to save two and killing two to save one. Thereafter, I am defending myself from your rather wild interpretations.
Well the hypothetical doesn't mention how many children they're gonna have, what kind of life their children will have, it doesn't say what a bad life is, or whether it's even possible for everyone to agree on what a bad life may be, it doesn't say whether someone living a bad life might inadvertently save many people through no will of their own (for instance because of his 'bad' actions the guy causes a plane to land to get him arrested but if he hadn't committed these 'bad' actions the plane would have crashed because of some technical failure and killed everyone on board), ...
The fundamental problem is how do you value life? Different people value life differently. Some will see the life of a child as worth more than that of an elderly, or that of a beautiful woman as worth more than that of an ugly one, and some will disagree on what makes a woman beautiful or ugly, ..., in your hypothetical many would make a decision based on what the guys look like.
Then when you say they "both have a 50%" chance of killing someone, do you mean there's a 50% chance one person dies because of the two of them and a 50% chance no one dies, or that there is a 25% chance 0 die, a 50% chance 1 dies and a 25% chance 2 die?
I don’t understand what difference this makes to the original question.
They BOTH have a 50% chance of killing one person, NOT a 50% chance between them. So the latter.
Just work through it and see what you come up with and why you make the decision you make.
My decision would be the same in both cases because your addition makes no difference; I would redirect the train to save 2 people.
Could you explain what difference your addition is supposed to make?
Ok, it could have been interpreted as if "both taken together" have a 50% chance of killing someone.
So if you kill track A on average 2.75 people die, and if you kill track B on average 2 people die. If you kill track A at least 2 people die for sure and maximum 3, if you kill track B at least 1 person dies for sure and maximum 3. So if your variable to minimize was the projected loss of life then you would pick track B.
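The averages in this post can be checked with a short sketch. This is a minimal Python illustration (not from the thread), using the original pre-correction figures: a 50% chance for each person on Track A and 75% for the person on Track B.

```python
# Expected deaths under the ORIGINAL (pre-correction) figures:
# each Track A person has a 50% chance of later killing someone,
# the Track B person a 75% chance.
p_a = 0.50
p_b = 0.75

# Let the train continue on Track A: the two A victims die for certain,
# and the surviving B person may later kill one more person.
expected_stay_on_a = 2 + p_b          # 2.75

# Divert to Track B: the B victim dies for certain,
# and each surviving A person may later kill one more person.
expected_divert_to_b = 1 + p_a + p_a  # 2.0

print(expected_stay_on_a, expected_divert_to_b)  # 2.75 2.0
```

So on expected deaths alone, diverting to Track B comes out ahead under either set of figures.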
But in real life you don't know how the lives of people are going to turn out. If you have two 90-year-olds on track A and one baby on track B, what do you do? Then you might want to look for the guy who tied these people to the tracks so he doesn't do it again.
Yes, I gathered that.
But can you imagine how tedious it would be even with modern technology, going through a few billion faces and swiping left for death and right for life? I think your philosophy ignores the essential banality of death.
I bet it was that guy from the other thread killing a billion people to save humanity.
I’m VERY happy to admit that I’d prefer to choose one billion to die to save six billion rather than see EVERY human die. I think you’d be hard pressed to find anyone who’d agree with you in allowing the human race to die.
Clearly you don’t value life.
It’s not real life. It’s a hypothetical.
Probably because I mistyped! Oops! :(
It’s meant to be a 25% chance to go bad and kill one person for the single person, and a 50% chance each for the two people! Sorry about that :/
If I allow the train to continue on Track A then there is a 100% chance that 2 or more people will die and a 25% chance that 3 people will die.
If I redirect the train to Track B then there is a 75% chance that 2 or more people will die and a 25% chance that 3 people will die.
So I'd still redirect the train to Track B as there's a better chance to minimize the number of deaths.
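The percentages above can be verified by enumerating every outcome. A minimal Python sketch (not from the thread), assuming the corrected figures of 50% for each person on Track A and 25% for the person on Track B:

```python
from itertools import product

p_a, p_b = 0.5, 0.25  # corrected figures

def death_distribution(survivor_probs, certain_deaths):
    """Probability of each total death count: the certain deaths on the
    chosen track, plus one possible later killing per survivor."""
    dist = {}
    for outcome in product([0, 1], repeat=len(survivor_probs)):
        prob = 1.0
        for kills, p in zip(outcome, survivor_probs):
            prob *= p if kills else (1 - p)
        total = certain_deaths + sum(outcome)
        dist[total] = dist.get(total, 0.0) + prob
    return dist

stay = death_distribution([p_b], 2)        # train stays on Track A
divert = death_distribution([p_a, p_a], 1) # train diverted to Track B

print(stay)    # {2: 0.75, 3: 0.25} -> always 2 or more deaths, 25% chance of 3
print(divert)  # {1: 0.25, 2: 0.5, 3: 0.25} -> 75% chance of 2 or more deaths
```

This matches the reasoning above: staying on Track A guarantees at least 2 deaths, while diverting to Track B gives only a 75% chance of 2 or more.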
I pretty much messed this up though with the numbers :( ah well!
Once upon a time there was this guy called Adolf. He was very kind to animals and didn't even eat meat, the sensitive soul. But he became convinced that it was his moral duty to save mankind, and for the sake of a thousand years of glorious humanity, a few people - just a few million - would have to die.
As it goes, it didn't work out and he failed. But I say that even if he had succeeded, he would still have been the epitome of immorality.
I think there is something wrong with you. Clearly you don’t value life.
At least try and make some sense. Come back when you’ve found someone who agrees with you. Until then I suggest you go and bother someone else with your nonsense.
Oh no. You have the greatest need of it. I will lay it out for you as simply as I can. The problem with your position is that there is literally nothing that is unequivocally wrong. There is nothing so vile you will not call it a moral act if circumstances dictate; pick any extremity of horror, make a dilemma between that and something even worse, and there you will be [s]performing[/s] theorising it as your moral duty.
You ought not indulge your need for rationality to this extent, because as I just pointed out and you dismissed, we know it leads to the worst of human depravity. And this is widely instantiated in, for instance, the ethics of human experimentation. The potential for saving many lives does not justify inhumane treatment of a few.
And the only way to avoid the endless slippery slope by which anything at all can be justified is to draw a moral line. No to torture, no to killing - at any price.
Well I agree with him entirely, so if that's what you needed to take what @unenlightened has to say here seriously, then maybe you can do so now.
I am referring to the other thread here. The hypothetical is that you either have to decide what billion people die OR the entire human race dies.
Are you saying you’d let the human race die to preserve human “morality”? Don’t quite see how that works.
The next point was that he contradicted himself by saying he cares about a man on fire in the street yet doesn’t see any difference between one person dying and a billion.
Are you SURE you agree with this? If so explain.
Note: There appears to be a purposeful blindness toward the “HYPOTHETICAL”. Because I pose a difficult scenario, it does not mean I wish or believe it to be true. The issues seem to arise when people add into the scenario their own particular flavour of neurosis. This thread is about logical thought and how it balances with moral views. The other is more focused on the eradication of the logical to reveal the moral heart of the individual to themselves.
What on earth are you talking about? What did you point out and what did I dismiss? Both the threads I’ve made are quite clear and quite simple (besides my error in this one with the figures.) In the other thread you chose to exterminate the human race and in this one you’ve refused to do some basic calculations in order to see the point.
Yes. We either all die together or no one dies at all. It's called solidarity.
I'm not a utilitarian. I don't 'deduce' that killing is wrong by some kind of calculus, it just feels wrong. It feels wrong to kill one man, it feels wrong to kill a million. It's ethical value, not maths.
If the trolley problem involved loved ones the whole dynamic changes. It is not me who is being unemotional or robotic. I assume these people matter and want to live, so preserving two lives is better than preserving one on that basis alone. The choice would be a humane one, not one based on arithmetic only, and in my view better than adhering to some roughshod one-size-fits-all attitude of “solidarity because I say so.” That seems utterly at odds with a humanist approach to the problem.
I weigh the decision by the possible outcomes. One means everyone is dead and one means 6 billion people are not dead and can live their lives.
I am not saying we should apply cold hard logic or emotion. Both are required. Either alone is dogmatic and I view “solidarity for all” as futile given that the very idea of “solidarity” perishes with the human race. If the scale was different I guess it makes life easier if you have one singular answer to apply to every circumstance - I would caution against such an attitude though.
It certainly feels wrong to kill anyone. The point in the other thread is whether or not you value your own sense of morality over the rest of the human race enough to allow everyone to die. You appear to believe your sense of morality is superior and so everyone must die? Of course I am probing here because I want to see if you can give me a better idea of what your thinking is.
Thanks
Does it feel wrong to allow someone to die when you could have saved them?
The point is that, in your hypotheticals you can say things such as "such person has X% chance of killing someone" or "if you don't kill a billion people everyone dies" with certainty, but how that translates to real life is the problem, and attempting to connect your hypotheticals with real life can indeed be dangerous.
@unenlightened gave a fine example. Adolf was convinced that the Jews were going to lead to the extinction of mankind; if you read Mein Kampf he says as much. Here is the passage:
The real fundamental problem is not in deciding between killing 1 billion people or letting the human race die, but in knowing with certainty that mankind is about to disappear and the only way to save it is to kill a billion people.
You may be absolutely convinced that the human race is going to go extinct and that you need to kill a billion people to prevent it, but what if you're wrong? What if there was a flaw in your reasoning, what if there was something you hadn't realized that implies you killed a billion people for nothing? Then you would be no savior, you would be the worst monster. Out of attempting to be the morally good individual saving the human race, you would instead become the worst monster who killed a billion people because of his delusions.
People commit the worst atrocities out of fear, yet what is feared is often worse than what actually happens. As Mark Twain said, "It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so".
Sure :)
That is what I am going to go into in another thread. I don’t see the hypothetical question as mapping 1 to 1 onto reality at all. That would be delusional.
That presumes a preference for continued existence above all else which the rampant suicide rate immediately proves to be erroneous.
Continued existence is simply not the highest preference on most people's list and having that existence at the expense of a horrific act of genocide is something most people would not want. Have you heard of 'survivor guilt'?
'Mattering' and 'wanting to live' are often contradictory goals. Sometimes one must sacrifice oneself in order to 'matter'.
I have no choice but to believe my sense of morality is superior. If I thought another superior then that would be my sense of morality. The fact that everyone must die (in your scenario) is caused, not by my moral choice, but by whatever factor created this horrific circumstance. I have to decide whether survival as a direct consequence of genocide is a survival worth having. I can't simply ask everyone, so it's not about moral superiority, I only have my moral judgement to go on, so of course it's going to be my moral judgement I use.
Yes, but less so. It depends on what would need to be done to save them. All actions have consequences, and the further into the future we look, the more complex calculating the consequences becomes until, much like predicting the weather, we rapidly end up with little better than a wild guess.
I don’t have trouble with anything but this:
Suicide is certainly not the norm nor is it “rampant”.
We don’t “try” to exist, we just exist. The point I’ve come across a lot in this exercise is about how the individual feels because of what they may have done. Genocide, no; but if it was a matter of that or the extinction of the human race, I’d go for genocide because I value humanity.
Survivor guilt? People can live through all sorts of traumas and no doubt, if you’re old enough, you’ve lived through a few too. Sacrificing your own humanity to save humanity would be something. I have no idea if I could handle it or not and the situation will never occur I hope!
Anyway, to the trolley problem. If you had the chance to step aside and let someone else take responsibility would you? I guess you can see what I’m asking here.
Thanks again. I am VERY interested in how people think these things through and it’s not about me trying to convince you otherwise (although I won’t deny I hope at least to give you something to think about if nothing else!)
Earlier you said "I'm not a utilitarian. I don't 'deduce' that killing is wrong by some kind of calculus, it just feels wrong." so the above seems a bit of a 180.
The point was that if a large enough number of people are deciding that the benefits of continued existence do not outweigh whatever harms they are experiencing (and we know those harms to be psychological) then it is very likely the case that an even larger number of people are making that assessment (presuming that not everyone who makes it reaches the conclusion that the one outweighs the other). Thus, a large number of people must consider the avoidance of certain psychological states to be more important than continuing to exist.
My judgement is that the psychological states which suicide indicates people consider to be more important to avoid than their own continued existence, are of an equal or lesser degree than the 'survivor guilt' of being left behind as a consequence of mass genocide. Therefore I don't think it is unreasonable, if I didn't have the opportunity to actually ask everyone, to guess that they'd rather be dead than live in those psychological conditions.
Not sure what you're getting at. The above was by way of an explanation as to why I'm not a utilitarian. The calculation process quickly becomes little more than a guess and one might just as well go with one's gut.
As it followed on from you saying "Yes, but less so. It depends on what would need to be done to save them" I assumed you were explaining why you feel that allowing someone to die when you could save them isn't as bad as killing someone.
Yes, because I wouldn't feel as bad, but the extent to which I wouldn't feel as bad, I think, would depend on what would need to be done to save them. If all I had to do was yell a warning, then not doing so would make me feel really bad about myself, but if I had to cross a minefield, I think I would feel less bad about not doing so because the possible consequences of such a prerequisite are so numerous, I could easily convince myself that it wasn't the right thing to do anyway.
If you could offer up your thoughts on this matter I’d appreciate it a great deal.
Thanks
Someone else making the decision is even more removed, so it would be easier to not interfere (if that's what I felt like doing). With another person in charge of the lever, I have to consider not only the complexity of the consequences of the deaths of either track-bound victim, but also the effect any action I might take to influence the decision-maker might have. How many times have you tried to influence someone's behaviour only to find them respond in the exact opposite way to the one you intended?
If I knew the person, I may well make my preferences known, but if they thought the opposite, then I immediately have cause to doubt my judgement and so less justification to interfere.
I don't think, though, that this preference would extend to me actually wanting to shift responsibility to someone else. Deliberately shifting responsibility, just so one doesn't have to make difficult decisions feels pretty weak to me and not something I'd aspire to.
If someone was guarding the lever from my intent I would hope I would have the courage to physically force my way past this person and pull the lever to prevent an extra person dying - this is assuming the situation as a hypothetical and that the person guarding the lever is privy to exactly the same information as me about the people on the tracks (that is they are human).
I could of course then challenge my own convictions and ask myself what degree of force I’d be willing to use to get past the person guarding the lever due to his moral convictions. Would I be willing to kill him? Could I seriously justify killing this person to save another?
This is where things get even more interesting! On the one hand I may strongly disagree, to the point where I am convinced that the person is utterly wrong (which I do), BUT this extra piece of information cannot be neglected. What I have standing before me is a person of strong moral conviction, so I’m now in a position of weighing this person against a generic human I know nothing about, PLUS the added weight of guilt upon my shoulders if I were to kill this person. This still doesn’t answer how far I would consider TOO far. Breaking an arm or a leg to get to the lever? I’d like to believe I could. To permanently maim or disable the person ... now I’m getting jittery thinking about such a thing, along with the psychological price I’d have to pay for causing harm to another.
Of course how I feel at the time would play into this. That is not how I see the use of the hypothetical though. I know that if I’m hungry or sad then I’m more likely to act in a certain manner. The point is about asking myself what I would WANT to do and what I determine, as best I can, to be the right thing to do. I’d be lying if I said I didn’t admire moral conviction even though I may deem the “moral stance” in question to be at fault. That is a piece of information the person would have over the person/s tied to the tracks. Even if I ignored the personal act of causing harm to the person guarding the lever, I couldn’t honestly justify killing them.
Further still, I can then contemplate how I would “measure” the passive and active acts of “not pulling the lever”. Should I admire the doubter more, or the person of moral conviction? It is here that I begin to align more with your stance on the matter, probably (?), but it’s a secondary matter and not one that would change my conviction about pulling the lever - I don’t deem one action to be better in “reality”, but in a hypothetical I do, because the parameters are set even if they’re unrealistic (there being no generic “human”). I completely understand if people have a hard time disconnecting the two OR don’t even see it.
Yes, that would be exactly how I imagine I would feel. In a situation of such little data, I imagine I would very much lack conviction. A lack of conviction seems entirely appropriate to me in the uncertain circumstances.
The problem with these hypotheticals for someone of my convictions is that they break down the process of moral decision making further than I think we have the capacity to judge of ourselves. I believe moral judgements are made mostly without our conscious awareness and any rationalisation of them is mostly post-hoc.
What this means is that decisions are made, using all the data available, in a kind of 'black box'. You're asking me to remove data that would always exist in the real world and explain how that affects the decision. It's like asking what the weather would be like tomorrow if there was no such thing as wind. I'm afraid I don't know the answer to that, and it's my belief that no one really does.
Perfect! I’ll make my next post in this subject at some point during the next couple of weeks. Hopefully it will be an interesting and worthwhile exchange for both :)
I shall look out for it then.
If I were responsible for the lever - and therefore also the direction of the train, then I would switch the train to Track B. I don’t put much stock in % chance of living a ‘bad’ life and killing someone - all this does is convert it to a maths problem, and distance it even further from real life. This is not a moral dilemma.
But if I were a passerby who realised that I could reach the lever in time, then I wouldn’t switch tracks, because in all honesty, this would involve me first choosing to accept moral responsibility for the actions of the train in killing before choosing which track. I’m not going to voluntarily do that. If I choose inaction and the train continues on and kills two people, that is not a greater burden for me to bear than my actively directing the train to kill a different person instead. I am not to blame for the train’s actions - it does not automatically become my responsibility to act when there is no ‘good’ choice to be made. I imagine I will continue to wonder if I did the right thing, whatever I decide. But I think I would struggle more to sleep at night after taking conscious, voluntary action that was directly responsible for an individual’s death, than if I was to regret inaction. Enabling two others to live is not going to make up for that, in my opinion.
The logical choice certainly sounds better (more rational/sensible) in hypothetical discussions like these, but then we don’t have to experience the whole gruesome event and have it play out in our memories and nightmares when rational thought is asleep.
I don’t think it’s a matter of protecting my own sense of morality, either. I don’t believe anyone will succeed in preventing suffering, but I can succeed in not causing it by my thoughts, words or actions.