My Moral Label?
Basically, the question is: what label fits my moral views? I seem to agree with parts of several different theories...
1. Nihilism- I conclude that nihilism is true due to the inability to logically justify any moral judgement (why rape is wrong, why I should help others, etc.).
2. Emotivism- I consider this to simply be factual, as evidence seems to show that we use the emotional part of our brain when answering moral questions.
3. Hedonism- Essentially factual just like 2. It’s obvious that we avoid pain and pursue pleasure.
4. Relativism- True because people have varying moral systems depending on culture, etc.
5. Egoism- True by default. Evolutionary pressures have led us to experience pleasure when we make choices that benefit ourselves (also, helping others oftentimes helps us as well).
6. Pragmatism- In life, I essentially ignore all of the above and instead just try to do whatever feels right and works for the particular situation.
7. Unknown- I also believe that morals are determined by our values, and that values are both objective and subjective. There is a lot of variation in values from different cultures, but also some overlap (we naturally value our life). Also, particular values may be subjective, yet universal (i.e. killing someone innocent is wrong; most cultures will agree with this, but who counts as innocent is subjective).
Other- For any other theories I may have forgotten, or be unaware of.
Comments (48)
People are likely to follow that more than anything else and our laws are pretty much based on that principle.
Certainly a good basis for antinatalism. No reason to unnecessarily cause conditions of harm for a future person.
This gets more convoluted when people don't consider morality just passively, but actively, as Immanuel Kant did. Most modern moralists don't just ask, "was this or that act moral?", but try to establish how to behave in the future to make one's life and actions ethical.
Emotivism is a theory of moral semantics. It's not just a theory that we use the emotional part of our brains when answering moral questions, but a theory that moral claims are just expressions of emotion like "boo this" and "yay that", the likes of which are not semantically capable of being true or false.
Nihilism and relativism, meanwhile, are theories of... moral ontology, maybe? About whether and in what way any moral claims are true. The former is a subset of the latter, in any case; relativism is anything non-universalist, and nihilism, being radically anti-universalist, thus cannot help but be relativist.
And emotivism entails nihilism (since according to emotivism no moral claims can be true, or false for that matter), so if you just say "emotivism" then you imply nihilism and so relativism for free.
Quoting Pinprick
Hedonism isn't just the view that we do seek pleasure and avoid pain, it's the view that we should, and so is contrary to nihilism and thus emotivism. (But it can be of either an altruistic variety, like in utilitarianism, or an egoistic variety, like people usually assume it means; and the egoistic version is thus relativist, see below).
Quoting Pinprick
Again, egoism isn't a view about what people do do, but what they should do. If egoism is true, then it is good for people to do what benefits them; and there is something that actually does benefit them. That means nihilism, strictly speaking, can't be true (if egoism is true).
Egoism entails relativism, though, since what is good according to egoism depends on which ego you ask.
Quoting Pinprick
This sounds like you don't actually agree with any of the above, since you ignore it all in practice.
I suspect what you're actually going for here is denying that there are any kind of moral facts about reality, but then in practice you still aim to do what is good. (I.e. what "works". But what exactly does that mean here? There's the big question: what are we trying to do in deciding on our moral opinions?) You just don't have any notion of how you can rigorously sort out what is good, since you can't apply the rigorous methods of sorting out what is real to morality, since morality isn't a part of reality, so you're just left with whatever you intuitively feel about it.
But maybe you could at least apply analogous methods?
I would suggest looking into non-descriptivist cognitivism, which I think will resolve that dilemma for you. It is a theory of moral semantics which holds that moral claims aren't aiming to describe facts about reality at all, much like emotivism, but that they are nevertheless cognitive claims, i.e. the likes of which are capable of being true or false in some sense or another, not just expressions of emotions that aren't even truth-apt.
This then enables universalism, but without supposing that there are some kind of weird metaphysically spooky moral features of reality that those universally true moral claims are describing. Which then leaves the question of how to tell which moral claims are true or false... but you've already got hedonism for that. It'll just have to be an altruistic hedonism, like utilitarianism, since universalism precludes egoism.
(But if you then apply the analogue of critical rationalist epistemology to that process of sorting out what's good in an altruistic hedonistic sense, you end up precluding consequentialism, as the moral analogue of confirmationism, leaving you with a kind of liberal deontology instead of straightforward utilitarianism).
Quoting Pinprick
Perhaps objective as in universal (i.e. altruistic), but not objective as in transcendent; and subjective as in phenomenal (e.g. hedonistic), but not subjective as in relative?
Agreeing with several parts of different theories may not have a label, but it's basically as close to scientifically accurate as you can get. Several different brain regions are involved in different types of moral decision-making, and these regions are variously associated with status, disgust, pleasure, group-identity, empathy, planning and fear (of punishment, usually). So any and all philosophical systems which try to make out that morality is about harm reduction, or cultural norms, or golden rules, or nothing at all... are all categorically wrong. We have irrefutable evidence that moral decisions are not made by consultation of any one of these rules, but rather by a varying, often contradictory consultation of several models at once, depending on the specifics of the moral choice to be made.
Of course if you want to completely ignore the science and build your own castle in the air like everyone else seems to then I suggest nihilo-hedo-emoto-relativo-pragma-dubism.
Which nullifies the prospect of any discussion, because nothing that could be said would make any difference.
And you presume that logical justification is the only possible means of making any difference why...?
I thought it quite clear that 'moral nihilism' was being referred to, not universal nihilism. But maybe that was not as clear as it seemed to me.
I don’t think it can be applied universally in all situations.
Subjective Prescriptivism.
I agree with that also. I think it’s entailed by which part(s) of our brain is used.
Quoting Pfhorrest
Considering the fact of hedonism renders the “should” question moot. If that’s all we can do, it doesn’t matter what we should do.
Quoting Pfhorrest
Again, this seems like an irrelevant question. If we cannot do otherwise, then it doesn’t matter, because there are no other viable alternatives.
Quoting Pfhorrest
The short response is that it is whatever I feel is right. The keyword being feel. It isn’t rational or logical, and it isn’t based on facts. I can say that I consider outcomes before acting, at least some of the time, but whether or not the predicted outcome is good or bad depends on whether or not it benefits me, or is otherwise desirable. Things like potentially feeling guilty, or the likelihood of being punished are also factored in. Basically, I avoid pain and pursue pleasure, and what is painful/pleasurable is based entirely on emotion (and biology of course).
Quoting Pfhorrest
Universal may be a better way to say it, but also relative, because they vary from culture to culture. Sort of like language, I suppose. All cultures have a word for beauty, but what they consider beautiful varies.
Are you referring to dual process theory?
I’m sure most people follow it...
Perhaps rather than ask others to try and find a label to categorize you it would more helpful for you to work out what you believe exactly. Labels are only markers, and trying to label oneself may be a way of sidestepping your actual philosophical exploration.
Not necessarily. It could be a single process at a given time (though I do think dual process theory has its place). It's only that there's no single method we use to determine a course of action in moral dilemmas; we use different approaches as the context changes. Any moral 'system' which tries to claim moral decisions are based on a single metric is just pointless armchair speculation without any reference to the real world, in which this simply doesn't happen.
Whether this is a relevant objection depends on what one is trying to achieve. If your goal is to describe how moral reasoning functions in general, then of course you want to be "as close to scientifically accurate as you can get." But moral philosophy is primarily concerned with normative questions. Objecting to Peter Singer's utilitarianism, for example, on the grounds that it doesn't fit the moral profile of the general population would be missing the point.
For me that depends on an odd sort of private language (maybe not 'private', but oddly technical). To claim that one's process is addressing 'moral' decision-making, one must already know what type of decision-making is 'moral' as opposed to any other sort. And to know if one's process works, one must know what a 'good' decision should be, which again one would learn from experience.
So in order to understand the meaning of 'morality' and 'morally right' one must have learnt it by example from other people, and the evidence we have of the process other people are using is varied in the manner I described. Thus one is inevitably talking about the decision-making we actually do.
One could, I suppose, having learnt how to use the terms say "scrap all that and decide thus", but what would make anyone do so aside from their moral desires, the satisfaction of which has just been described.
It would seem like setting out an algorithm which we've no intention of following to solve a problem we already have the answer to.
But yes, you're right in that my comment is of no consequence to such a project.
Meaning (mattering) isn’t tied to truth, reason, or really anything else for that matter. Superstitious beliefs/actions demonstrate this rather easily.
I won’t argue against that, but I would venture to say that the method (faculty?) that actually does the deciding is the one that appeals to us emotionally. Very often there are competing “reasons” for performing a certain action, or not; but the reason that is most appealing is the one that wins.
And I’m using the term emotion very broadly to include things like feelings, desires, intuitions, etc.
From what you’ve said so far I think emotivist is the most succinct and comprehensive label for what you say you think, since it strictly entails nihilism, which strictly entails relativism and pragmatically entails egoism, usually a presumably hedonistic egoism.
But since you say that in practice you ignore all those things that you say you think, it still looks like you don’t actually think them, but just say you do. So I’d recommend instead saying that you think the things that you act like you think, and finding the right label for that instead.
I get what you’re saying, but it’s just difficult for me to say I believe something that I know is irrational. IOW, all of my moral actions are irrational in my view. As such, I really see no need in trying to justify them since it can’t be accomplished. That said, in practice I have general principles that I try not to violate for emotional/pragmatic reasons (guilt, punishment, undesirable outcomes, etc.). And my principles are heavily weighted towards what I shouldn’t do, as opposed to what I should do. Otherwise, that’s about it, unless you think it’s necessary to get into what my principles actually are.
The fact that morality is by and large an emotional entity implies that it'll vary more than agree. That being the case, we come face to face with being unable to justify morality in any universally acceptable sense. Ergo, we must be practical and the only person we can actually take care of is ourselves. So, let's make ourselves happy.
:lol:
I guess I'm just seeing echoes of my past self in your self-description. There was a period in my philosophical development where, although I had definite opinions on a bunch of philosophical questions, regarding both reality and morality, it looked like it wasn't possible for any such opinions (mine or others') to be grounded in any way that made any of them justifiably better than any others, anything other than equally baseless.
But I did eventually figure out a pragmatic basis for grounding my philosophy -- both sides of it, the descriptive side and the prescriptive side -- and since you say of yourself that in practice you disregard all of those broadly-speaking "skeptical" moral viewpoints and act on other principles instead, I feel a glimmer of hope that you too will recognize the rationality of pragmatic justification, and so be able to hold up the principles that you act on in practice as rationally justifiable principles, and not just your baseless opinions.
Something that I hope might help in that regard, which was part of my journey too, is to look into how all of the arguments for moral skepticism have analogues about reality as well, analogues that most people (probably yourself included) are much more easily inclined to refute for obvious-seeming practical reasons. Those same reasons, applied analogously against the arguments for moral skepticism, helped me to ground my moral principles on equal footing as my epistemological/ontological principles.
Quoting Pinprick
That is, I think, a very good principle in itself, and the moral analogue of critical rationalism, which I think is the correct epistemology. Both in deciding what to believe and in deciding what to intend, the focus is best put on avoiding the most wrong options, rather than on identifying one specific uniquely right option.
(And that right there is 25% of the way to completing the analogy between reality and morality already).
We learn how to use moral language from other people, but we don't necessarily learn how to be moral in the same way (although there is an overlap between these two learning processes). We acquire a common language, but we don't generally acquire a common morality with all language users - which, of course, is what makes moral disagreement possible.
Quoting Isaac
The problem that you are pointing at is that of persuasion. How persuasion happens is not simple and straightforward, but we know that it does happen.
So how is it that you imagine we do learn how to be moral?
Quoting SophistiCat
Definitional disagreements are common too, so I don't see the presence of disagreements as an indicator on its own of a radically different process.
Quoting SophistiCat
I was actually referring to the individual themselves (although I suppose you could still see this as 'persuading oneself', but that seems a little schizophrenic). Writing a spiel about one's moral process is not indicative that one actually makes moral decisions that way. The overwhelming weight of evidence is to the contrary.
Just want to point out that the knowledge required to do this for any sustained length of time almost certainly must include knowing the interconnectedness of us all. So, in order for me to be happy, at the very least those I deeply care about must also be relatively happy. Ensuring that likely means that I will need to put forth some effort into their happiness as well, which may require sacrifice on my part. So, ironically, in order to be egotistical, I must also at times be altruistic. I suppose the out is to simply not care about others, but then you have to deal with loneliness and having virtually no support system. I think the idea of self-actualization is basically just an expansion of this model from one’s inner circle, to one’s community, country, world, universe, or even all life itself.
Appreciate your input in all of this. I just wanted to also say that I personally seem to lack much empathy. So, my principles are somewhat of a tangible line I don’t want to allow myself to cross, because it’s likely that my emotional response wouldn’t be strong enough by itself to prevent me from doing bad things. So for me personally, I probably need to develop some principles that guide me towards positive actions as well. Otherwise, I come across as being self-absorbed and inconsiderate, which I suppose I probably am. But that’s probably just a “me” thing.
That's not just ironic, it's a contradiction.
Let's be pragmatic here, shall we? The human condition either already is or is on the verge of becoming the way I'll describe it below:
Humanity is like a spaceship leaking precious air, there are 4 astronauts, and only 1 oxygen tank available containing just enough of the precious gas for 1 person.
If you're altruistic and share the oxygen tank, all of you will die; and, unfortunately, sacrifice, another act of altruism, will require 3 astronauts to give up their lives. Only 1 astronaut will make it.
If you're egoistic, you'll take the oxygen tank and live to see another day. The result, 1 astronaut will survive.
As you can see, it doesn't matter whether you're egoistic or altruistic, only 1 person will get out of this harrowing situation with their life still intact.
Well, I don’t really see how life is analogous to your spaceship scenario, but after thinking about it if you’re only being altruistic because it benefits you, it probably doesn’t actually qualify as altruism.
Why not? It seems to me selfishness (i.e. ego(centr)ism), not self-interestedness (i.e. conatus), is the opposite of altruism, which benefits the altruist - indirectly, non-reciprocally - iff others benefit (e.g. parenting, eldercare, teaching / mentoring, emergency first response (rescue), socially responsible investing / donating, etc).
Maybe you’re right, but I just always viewed altruism as helping others while gaining nothing (i.e. helping others strictly for their sake). An example of this being jumping in front of a train to save someone’s life while sacrificing your own (although even in this case there is the argument that being viewed as a hero, and the posthumous recognition/praise for the act, is the determining motivation; thereby not altruistic either).
Selfishness may be altruism’s opposite (as white is to black), but that doesn’t make self-interestedness its equal (just as red is not black, but also not white).
That also sounds a lot like me.
Honestly I think emotional empathy is a pretty weak justification for any notion of morality. I want to say to some people who think that it grounds all of everyone's morality "so the only reason that you don't [insert awful thing] is because you just don't happen to feel like it, but if you did feel like it you'd just do it!?" And these same people seem to need some kind of self-interested excuse to do something nice for someone else.
Whereas on the other hand I feel like doing pretty awful things pretty frequently, but don't (usually, when I'm not in some kind of crisis state losing all self-control), because those aren't the kinds of things I think should be done, so why the hell would I do them if I could help it? And likewise, the question of "why do nice things for people" just perplexes me -- that's just the thing to do, and if it's not some kind of awful burden to do so, why wouldn't you just do that by default?
And not because my heart bleeds for all the poor souls out there. I shrugged the morning of 9/11 because that's something far away that doesn't affect me personally (at least not immediately) and I can't do anything about. I don't really give a fuck about other people, emotionally, but when I'm deciding what to do myself, in my life, why wouldn't I do the thing (like help someone) that I think people generally should do (since I'm a person too), so long as I can manage it?
Quoting Pfhorrest
Why do you think those things shouldn’t be done?
Quoting Pfhorrest
So yours is a sort of intellectual empathy, right? Cognitive empathy.
bert1's suffering is much more important than anyone else's, because I am bert1. Under the spreading chestnut tree, I sold you and you sold me. Do it to Julia.
That doesn't work for observations about beliefs. It doesn't matter if I observe that bricks don't hang in the air, or someone else observes it and tells me about it.
Have I completely missed your point?
Quoting Brett
Yes! Thanks for this term. I’ve been trying to think of a way to explain it...
Edit to elaborate: When it comes to figuring out what is real, people generally find it intuitively "crazy" if someone denies either that their empirical experiences tell them anything about what's true or false, or that if something is true at all then it's true for everyone (and so whatever is true must satisfy everyone's empirical experiences). It's a little more common but still widely considered bad form to either insist that something just is definitely true regardless of its impact on empirical experience, or to insist that someone justify something beyond all shadow of a doubt or else be compelled to accept that it's false.
The analogous errors regarding morality are to deny either that hedonic experiences tell us anything about what's good or bad, or that if something is good at all then it's good for everyone (and so whatever is good must satisfy everyone's hedonic experiences); or to either insist that something just is good regardless of its impact on hedonic experience, or to insist that someone justify something beyond all shadow of a doubt or else be compelled to accept that it's bad.
Applied in practice, those moral principles mean, in reverse order: giving everything the benefit of the doubt that it might be good, by default; but defeasibly, accepting the possibility that it could be shown bad; treating anything (a particular event, not a whole class of events) that can be shown bad for anyone to be just bad period, even if other people don't think so; and appealing to hedonic experiences, suffering and enjoyment, pleasure and pain, as the measure of whether it's good or bad for someone.
I had always considered us to be empathic creatures by nature, hence the forming of successful, cohesive communities. But if one form of empathy is cognitive, and some people are like that and others aren’t, then I have to assume that empathy for those people, going right back in time, could only come about when cognitive faculties had reached a certain point of development. So therefore empathy could not happen until that point was reached. That also suggests that different groups of mankind reached a point of empathising at different stages.
I think there’s other possibilities. My lack of empathy could very well be the product of desensitization, mental illness, or something else entirely. Also, I’m not saying I have zero empathy, it just doesn’t seem as prominent as most other people.