Are Consequentialism and Deontology a Spectrum?
"You ought to do X, because it has beneficial consequences."
What makes a particular consequence, or set of consequences, "beneficial?" Why are some consequences good? More to the point: do P1, because it brings about desirable consequence P2. But why do we want to bring about P2? Is it because P2 is good in itself (Deontology?), or because P2 is useful for something else, which we can call P3? But then the question arises again: why do we want to bring about P3? And so on and so forth. One of two things will happen: either the chain of desired consequences ends somewhere and we have an intrinsic good, thus deontology, or it goes on forever, and the last consequence is always desired for no reason at all, which seems to undercut the motivation of consequentialism (that is, achieving the "right" results and acting in a goal-directed fashion).
Perhaps we could cash this out in a kind of spectrum: one is deontic to the extent that one is unwilling to allow something bad to happen for the greater good, while one is consequentialist to the extent that one is willing to allow something bad to happen for the greater good. That is to say, the amount of harm you're willing to cause in order to pay future dividends is directly proportional to how consequentialist you are.
One reason that my stance here bothers me is that I always thought it was obvious, but when I try to explain it, I get some weird looks from other people who study philosophy. Either it's not as obvious as I thought, or I'm just talking nonsense. What say you all?
Comments (10)
I think the problem is equating Deontology with what is good in itself and mixing P1 and P2.
P1 is an action; P2, P3, ... are situations. For deontology, an action is good in itself, while for consequentialism it's a situation. Therefore, when you say: Quoting Pneumenon you're not really describing deontology.
Quoting Pneumenon
So here you're framing it as short-term consequences vs. long-term consequences. But deontologists don't concentrate on either of these; rather, they ask whether the action is against the rules, distasteful, etc.
This isn't just a wacky ad hoc means of getting around you, by the way. I am, in fact, very grateful for your objection, because it made me realize why people looked at me funny - they don't share my weird views about time.
One minor point: I don't think that I was framing this as short term consequences vs. long term consequences. If a deontologist could guarantee that some good thing would be done in ten years, without doing anything wrong right now, they would, if they were rational and consistent, do so. What the deontologist would not do is cause some kind of harm, justifying it as paying dividends down the line. Not short vs. long term consequences, but willingness to trade bad for good.
I always sort of find myself at odds with statements like that before the debates even begin.
I'd suggest that it would read a bit better as:
"You ought to do X (as in it seems advisable to do so), because it seems to have beneficial consequences insofar as we have taken potential consequences into consideration."
Problem is, my redux would sort of derail the conversation. (sorry)
Quoting Pneumenon
Anticipated effects/affects upon a current contextual perception... (best guesses)?
As for the value notion of "beneficial", this would have a strong tie to intentions and agendas, as well as preferences one might have within a given context and given a particular perspective.
Quoting Pneumenon
Did you just ask 'what is good' and simply use consequences as the particular vehicle to field that question?
Again... I place this into value notions and theories about value notions.
We have a set of preferences. We look for things that fit these preferences. When confronted by other things yet to be valued, we establish a sort of standard of measure for those things from the 'understanding' we have of the patterns and effects/affects of what we prefer as opposed to what we do not prefer... indeed this might take a bit of refinement, as the context in which such value notions are fielded will play a large role.
I suppose I'm saying that we simply make our (sometimes 'best') guesses according to our experience of what is preferred in a given context, muddled with our (sometimes 'best') guesses over what can be anticipated if events and actions play out as they do.
Sorry I'm not much help here...
... problem is I view certainty of anything (including consequences) to be an ever expanding process of adaptive refinements; thus I find no fixed points in the relativity of certainty.
------------------------------------------------------------------------------
As for the real question about the two being a spectrum...
... I'm not too sure I follow, as my understanding of a spectrum has more to do with physics. Other meanings and applications beyond physics become a bit metaphoric or vacuous.
Could you elaborate on how you wish to use this term in this context?
Meow!
GREG
So it seems to me that they work in tandem; a deontological approach explains our duty and a consequentialist approach explains what sort of things satisfy this duty.
So it's the non-teleological which is [by definition] deontological, and so we can see how consequentialist theories differ because they justify goodness in accord with a teleology.
So you have Kant's moral theory which provides the categorical imperative which allows you to judge whether or not your principle of action is good based on whether or not it meets the CI. But the CI is more about self-consistency if everyone were to do it, and not about maximizing happiness [a teleological goal]. There is a standard for judgment, but not an end.
No worries. I think this also could have come about because the word 'action' is ambiguous. I meant to use it as referring to agency, i.e. an action is something an agent does. But I can just switch to using agency; it's a better word for it.
When it comes to evaluation, deontology is very concerned with agency and utilitarianism is indifferent to it.
Take this random example:
You lie to me, proclaiming 'my real name is Pneumenon!' Being somewhat alert, I don't believe you.
From a utilitarian perspective nothing's really happened, but from a deontological perspective you have been immoral.
I guess more to the point is:
From the perspective of the left, the lines get closer between deontology and consequentialism, because the left places more weight on harm and justice and less on authority, purity, loyalty, etc. So it may seem like they are close, but examples like 'it's immoral to have sex before marriage' just can't fit into a consequentialist framework unless one can show that it's harmful.
I could probably just keep providing example after example of where they differ.
Quoting Pneumenon
I guess all the typical counterarguments against utilitarianism have the utilitarian doing some harm only to have it create better consequences. In almost all of the more realistic examples, it's the deontologist that can advocate causing some harm and the utilitarian that is against it. This makes sense because all the utilitarian cares about is avoiding harm (or maximizing pleasure, maximizing preferences, etc.) while the deontologist cares about the proper way to act.
Let's take euthanasia, stem cell research, gay marriage, telling white lies: it's possible to have deontological views which are against these, but not utilitarian ones. To the hardcore utilitarian nothing is taboo.
The problem is that there will be discrepancies in the way your mono-deontology (deontology with only one rule) and utilitarianism evaluate actions. Let's say I walk into a house which, unbeknownst to me, has a gas leak, and I then turn on the stove, incinerating everyone inside. From a utilitarian perspective we could say that turning on the stove was the wrong thing to do, but not from the deontological one.
And as a tangent, the same sort of thing happens with the supposed dichotomy between absolute and relative morality. The relativist might say that the immorality of killing is relative to the context but also that the immorality of killing an innocent child for personal gain is absolute, and the absolutist might say that the immorality of killing is absolute but that the immorality of doing something to someone that they'd prefer you not do to them is relative to the context.
In a way it doesn't make sense to have a duty never to accidentally cause anyone harm. I don't think it can be a duty and definitely not one that we could take seriously.
On a more practical note, consider that deontology doesn't claim to have only one rule. Any rules that it has will get in the way of maximizing consequences. Even if we made a list of three generalized principles, at some stage keeping them will mean not maximizing utility (unless they are all ways of paraphrasing 'you should maximize utility').
Btw, I think the area where things get most blurred is rule-utilitarianism, where utilitarians don't trust themselves to accurately assess individual situations on the fly, so they think it will result in more utility if we follow a set of rules than if we attempt to perform the hedonic calculus in each situation.
For deontology, the only thing good without qualification is the good will. Other things/characteristics (e.g., intelligence) have a value of goodness in relation to a condition or by the ends brought about. Since the good will is good without reference to its success, it is not a consequentialist theory because moral goodness resides entirely in the will and not the action taken.