You are viewing the historical archive of The Philosophy Forum.

The Inequality of Moral Positions within Moral Relativism

Judaka August 10, 2020 at 21:21 13475 views 52 comments
Whether you agree with moral relativism or not, what we call morality is a perspective present in pretty much everyone we know. There is disagreement, but broadly speaking there is no absence of it. Certain people may, for instance, have very few moral positions when it comes to animals, but they will still care deeply about how children or older people are treated. For the people who hold these opinions, the relativism of their positions doesn't matter.

That is what my thread is about: what does it mean, or what would it mean, if morality is relative?

A common criticism of moral relativism is that it demeans morality: that it means we have to be tolerant of other points of view on moral issues. That is a choice, and one that almost no one actually makes. Moral relativism is almost certainly on the rise, and yet we are also living at a time when society is less tolerant of traditionally evil acts than it has ever been.

Moral relativism is not an obstacle to being passionate about your moral positions; this passion can be unforgiving and unrelenting. Moral relativism doesn't mean being amoral, and realistically, being amoral seems to be a genetic abnormality. We do not get to choose whether we think in moral terms; it is part of our biology. That is why you are not going to see a decrease in the interest and passion of thinking about things in moral terms - and you haven't.

What is moral or immoral has changed enormously even just over the past 30 years, let alone over a century or two. Moral relativism, in my view, is just saying: hey, morality is changing according to a variety of factors, and this is observable. I still feel disgust, anger and outrage towards immoral acts, but what I find immoral is not the same as what you find immoral, and there are reasons for that.

The idea of moral positions being equal within the framework of a moral relativist is ridiculous; it is not based on observation of moral relativists but on one's own unrealistic interpretation. In deciding whether morality is relative or absolute, you are not deciding much. Either way, you were born with a strong proclivity to see things in moral terms, you will develop opinions on the topic of morality, and your opinions will be accompanied by a strong emotional response - that's just you being a normal person.

Whether you believe you are objectively correct or you recognise your subjectivity, the way morality functions for you logically, psychologically and emotionally is virtually the same. You will be just as capable of being horrified, disgusted, outraged and passionate. That's not to say that how it works is identical for everyone, but it's definitely not determined solely by whether you believe morality is relative or not.


Comments (52)

BitconnectCarlos August 10, 2020 at 21:47 #441793
Reply to Judaka

Quoting Judaka
Moral relativism in my view, is just saying, hey, morality is changing according to a variety of factors and this is observable.


Yes, morality has changed. What you're describing here is descriptive moral relativism (just basically a factual description of the world, yes, moral norms have changed and continue to change). When philosophers argue against moral relativism they're arguing against prescriptive moral relativism: The view that morality on a fundamental level is always to be viewed relative to something (usually to some time or place) and that other standards cannot be privileged over that.
Judaka August 10, 2020 at 21:57 #441794
Reply to BitconnectCarlos
Fair point. I view all characterisations as subjective, and 'moral' or 'immoral' are among them. I don't believe the distinction matters for the topic, but you were right to point out this difference. Do you think the distinction matters within this context, or were you just pointing out my inaccuracy?
Philosophim August 10, 2020 at 23:31 #441837
Pure moral relativism is a translation of, "Morality is an opinion". Such assessments of morality are cop-outs, because such a morality is useless. It's just a disguise for, "I can't figure it out, so let people do what they want". I call such moral relativism whateverman morality.

Any good moral relativism theory is going to have some common objective basis between two apparently separate moral ideals.

For example (I apologize if this is grisly), many people think it is immoral to kill a baby. However, consider a situation in Nazi Germany: a group of Jews are hiding under a house, and one woman has a baby. If the Germans find the Jews, they will be brought to concentration camps... and you know how this ends.

Unfortunately, the baby starts to fuss. The only way the woman can quiet the baby enough is to stifle its ability to breathe. For a time, this won't hurt. But unfortunately, the Germans will be staying longer than the baby can go without breathing and stay alive. Any attempt to let the baby breathe will result in a gasp and a cry.

If the woman has to suffocate the baby, do we call her moral, or immoral? The whateverman moralist would say, "Sure, if they think it is." But a more discerning moral relativist would try to find a common thread between the two. Yes, there were two opposing moral claims: "a baby being killed" versus "we shouldn't kill babies". But surely the reasoning behind both moral claims has a common thread?

Which leads to this point. If there is a common thread, can we truly say "the way morality functions for you logically, psychologically and emotionally are virtually the same"? What if it's not our common emotions, but a common reasoning behind morality? And what if, even if our emotions do not bring us disgust, there are actions that we should or should not do regardless?

Just a line of thought to consider.
Pfhorrest August 11, 2020 at 00:11 #441855
Quoting BitconnectCarlos
What you're describing here is descriptive moral relativism (just basically a factual description of the world, yes, moral norms have changed and continue to change). When philosophers argue against moral relativism they're arguing against prescriptive moral relativism: The view that morality on a fundamental level is always to be viewed relative to something (usually to some time or place) and that other standards cannot be privileged over that.


Quoting Judaka
A common criticism of moral relativism is that it demeans morality, it means that we have to be tolerant of other points of view on moral issues.


There are more than just two things to be differentiated here. Besides descriptive relativism, there is also meta-ethical relativism, which is what Carlos is talking about (the truth or falsity of moral claims is relative), but also normative moral relativism, which is what Judaka mentions here (we ought to tolerate behaviors that our morals say are bad because our morals are just relative). You can be the former without being the latter, and you can also be very tolerant of other cultures and practices without being a relativist of any kind — in fact, many think normative moral relativism is simply incoherent, because that "ought to tolerate" seems to be a universal moral claim.

Take me for instance. I’m a moral objectivist, or universalist, and I think we ought to be very tolerant of a wide variety of behaviors, because a lot of things are just not morally relevant at all, and a lot of the remaining things vary widely in their morality by context, and it’s often very uncertain whether a particular context even is the one in which something would be immoral, and even if it is, we are often not in a position where it would be morally right to intervene.

So on my account people who do intervene in circumstances where it’s not their place, where they can’t be sure it’s wrong, because it might not always be wrong, or might not even be the kind of thing that could be wrong... those people are doing something morally wrong, by being thus intolerant of things that might be fine or are none of their business. They’re doing something objectively, universally morally wrong by being intolerant.
Judaka August 11, 2020 at 00:32 #441859
Reply to Pfhorrest
I see.

Normative moral relativism just seems to be an interpretation of what to do because of moral relativism. I have complained in the past that accounts of either descriptive or meta-ethical relativism often include (as with nihilism) interpretations about what it means to agree with them.

Here's an example.

https://tinyurl.com/y42zp3za

"Moral relativism is the idea that there is no universal or absolute set of moral principles. It’s a version of morality that advocates “to each her own,” and those who follow it say, “Who am I to judge?”"

I can get behind the first sentence; not exactly what I would have said, but it's a reasonable attempt. The second sentence is clearly an interpretation of what to do as a result of the first sentence. If, when I say I believe in moral relativism, I am saying that I am advocating a "who am I to judge?" mentality, then this is pretty annoying. Is this kind of cultural interpretation of relativism a component of the definition?
Pfhorrest August 11, 2020 at 01:46 #441871
Quoting tim wood
Neither grounded nor relative?

One answer lies in considering ships at sea.


For a moment I thought you were going to make the same analogy I do:

Don’t try to stand on the bottom. There is no bottom, the sea is infinitely deep. But that doesn’t mean you have to sink forever. You can just float on the surface, so long as you let go of anything that would pull you down.

This is a metaphor for falsificationism vs justificationism, and I think the same thing applies morally as it does factually. Everything and its negation is possible — morally speaking, it’s permissible — until it can be shown wrong. It is the possibility of showing wrong that makes for objectivity.
opt-ae August 11, 2020 at 02:16 #441874
Our parents have morality; we have morality - parent morality is guidance to kin morality, until that kin becomes a parent.

My mother's and father's morality was righteous enough for me to become wise; that being so, I now rule my own mind without parental guidance.

Earth has parental morality, it parents kin, and I have parental morality, but I don't parent kin other than my heart.

I can be good to myself, and to others, but my good doesn't mean Earth must be good, to me.

A God's morality is thought of as morality which is in the most high position in a hierarchy. If I claim that 'something is evil', it means nothing - a super-morality is required.

Let's take Earth as the closest super-morality; there are common wants and needs, of people (who are a part of Earth), and judging by the majority vote, too much pain is evil.

Let's take the universe as another example: planets and stars form harmonious families, so by that judgement, anti-family-harmony is evil.

That doesn't mean that too much pain or anti-family-harmony must be evil to everyone, it's just in the case of the super-morality-agents.

I want you to reset your mind here, and think about this; God watches over you and says 'I find this evil', then do you say "I find this evil too", or do you point yourself at the good-route based on a God's lesson?

What I'm proposing is everything I said prior to the previous paragraph was false, due to the fact I'm not talking in proper language.

If God finds thus evil, I do not find thus evil, but I repent God's evil to be good (I'd exchange God for 'super-morality' but I don't know how to type that).
Judaka August 11, 2020 at 03:04 #441880
Reply to Philosophim
Quoting Philosophim
What if its not our common emotions, but a common reasoning behind morality? And what if even if our emotions do not bring us disgust, there are actions that we should or should not do regardless?


The reason I reject "common reasoning" is that morality is not based on reasoning; a psychopath is not someone who thinks differently, they're wired differently. Just as we know killing a baby is wrong - even if we never sat down and thought about it. There aren't any serious philosophy forum threads asking "is killing your baby immoral?" because we don't need to have these kinds of discussions. Why is that?

The only time we ever have to think about it is precisely in the instances where we have either a moral conundrum like the one you have given, or a struggle between acting morally and some other desire. We may also have to extrapolate our base moral instincts to a more complicated topic.

I don't think any such situation will be avoided or be made easy by any approach to morality, not in reality at least.

I agree that moral relativism shouldn't mean the dissolution of moral importance; I am really not sure how many people advocating moral relativism mean it in this way.

Banno August 11, 2020 at 03:51 #441897
It seems that ethical claims are more than claims of personal preference. Ethical claims invoke a move such as that from "I choose not to eat meat" to "you should choose not to eat meat" - the move from what I choose to what others should choose.

If that is so, it is difficult to see how moral relativism could count as a coherent ethical position.
Pfhorrest August 11, 2020 at 03:54 #441900
Quoting Judaka
I can get behind the first sentence, not exactly what I would have said but whatever it's a reasonable attempt. The second sentence is clearly an interpretation of what to do as a result of the first sentence. If when I say I believe in moral relativism I am saying that I am advocating a "who am I to judge?" mentality then this is pretty annoying. Is this kind of cultural interpretation of relativism a component of the definition?


Well, there are multiple definitions, which is why we've invented different qualifying terms to distinguish between them. That article you linked seems to be weird in that it lists the three different kinds of moral relativism correctly, but then also at the start and end talks about them like they're all the same thing.

It sounds like you are a meta-ethical moral relativist. If you say exactly that then nobody who understands the different kinds of moral relativism should think that you mean a "who am I to judge?" kind of attitude. But most people aren't going to understand the different kinds of moral relativism, and will think, like whoever wrote that article, that they're all the same thing: that acknowledging that there are disagreements (descriptive) means thinking nobody is more or less right or wrong than anybody else (meta-ethical) and therefore we ought to tolerate all our differences (normative).

You'll probably just have to clarify the difference between them and state which of them you support if you want to be understood correctly.
Pfhorrest August 11, 2020 at 04:06 #441903
Quoting Judaka
morality is not based on reasoning, a psychopath is not someone who thinks differently, they're wired differently


Psychopathy doesn't have to have implications on behavior. Psychopaths can behave morally even if they don't have the empathy that drives people to behave morally.

I've wondered sometimes if I might have something in the direction of psychopathy. I rarely actually feel emotionally upset about other people suffering. I am able to tell how other people feel very well, and I intellectually do not want other people to suffer, but it doesn't actually hurt me emotionally to see other people suffer. (Normally. Most of last year, something weird came over me, and sent me into a crazy existential dread where I was constantly worried about the likely death to predation of every cute bunny I saw in the meadow, and things like that). 9/11 didn't faze me, for instance; "just another huge tragedy somewhere on the other side of the continent like happens all the time, now I have a job interview to get ready for" was my thought process that morning. (I was also too young to realize the geopolitical ramifications that would stretch beyond that one event, though).

I've often felt like people who only do moral things because of emotional reactions have a shallow, fickle sense of morality. I try to do what is right because I think it is the right thing to do. I try to figure out what is or isn't right to make sure that when I'm trying to do the right thing, the thing I'm trying to do is actually the right thing, and I don't end up some well-intentioned extremist. In contrast, people who only do whatever they feel is morally right, just because they feel like it, don't seem like people who are actually acting out of any kind of moral duty.

It's sort of the moral equivalent of people who believe things uncritically, just because they heard someone say it or read it somewhere and it seemed truthy to them. That seems to be most people, and doing what feels like the moral thing to do because they feel like it seems to be most people too, but both of those seem like a very shallow, fragile, easily corrupted and highly fallible ways to go about deciding what to think and what to do, in contrast to, you know... actually reasoning about these things critically.
Judaka August 11, 2020 at 04:50 #441917
Reply to Pfhorrest
Well, I would say both descriptive and meta-ethical moral relativism are true. Nihilism has a similar problem, where people project their interpretations of what it would mean if they were a nihilist onto you. For nihilism, that's the "there's no point in living" type of thing.

Quoting Pfhorrest
In contrast, people who only do whatever they feel is morally right, just because they feel like it, don't seem like people who are actually acting out of any kind of moral duty.


Well, I think that's exactly why people are moral, though I disagree with degrading it.

I agree that psychopaths can believe in morality, or rather that something like an A.I. without our biology could intellectually appreciate the idea of morality. Psychopaths do things like torturing animals, which a normal person wouldn't even dream of. Even though you care about bunnies, if someone abused a bunny in front of you, I doubt you would dream of tolerating such a thing, and that's something I commend. However, from you feeling that way, to me commending it - that's not moral theory, that's just you being you and me being me.

I like to think about an A.I. that doesn't have our biology but is capable of intelligence: what positions do I think it can reach? Can it be jealous? Can it feel guilt? We can see that animals besides humans care about fairness; they can feel jealousy. For us humans, fairness and jealousy can be deeply intellectual, but that doesn't mean they're based on reason - sometimes it's actually very obvious they're not. Morality is the same.

We all have different personalities and attitudes; your approach to morality is probably just different from others'.

Quoting Pfhorrest
It's sort of the moral equivalent of people who believe things uncritically, just because they heard someone say it or read it somewhere and it seemed truthy to them.


I'm not sure - what do you mean by "just feel like it"?

There's a YT channel I like to watch sometimes called "The Dodo", which is just short documentaries on animals who may have gotten adopted or developed an unusual friendship with other animals. Of course, we project our feelings a bit, but the friendships can be really adorable; I can see that the people really care about these animals and really want what's best for them. I cannot imagine these feelings NOT affecting their moral positions on issues surrounding animals.

This is why I get annoyed: although the bias is fairly obvious - animal lovers wanting to protect animals - their passion and love are endearing, so why shouldn't they be passionate about moral issues surrounding animals? Why shouldn't I - as someone who also likes that stuff - be horrified and angry about the prospect of their mistreatment too? Why demean this powerful feeling? I think all of this is what makes us human.

Sometimes it's not emotional, but it's like having good manners, or finding certain provocative behaviour embarrassing, or respecting people's personal space. I think just doing it is good enough; being a moral person generally just means being biologically normal and having a stress-free, healthy upbringing. Whether you have very well thought-out moral positions or you just kind of do what you feel like, the end result is probably pretty similar.
Tzeentch August 11, 2020 at 05:21 #441923
To a moral relativist, what is the purpose of morality?
Pfhorrest August 11, 2020 at 05:57 #441929
Quoting Judaka
Well I would say both descriptive and meta-ethical moral relativism is true


Yes, each later type of relativism assumes the previous types.

Descriptive: "People disagree about what's moral or immoral..."
Meta-ethical: "...and none of them are more or less correct than any others..."
Normative: "...so we should all tolerate each other's differences in behavior."

Quoting Judaka
something like an A.I. without our biology could intellectually appreciate the idea of morality.


Yeah, that's generally how I feel about myself.

I get emotionally upset about things that impact me directly, but I could, largely, ignore everyone else's suffering. Except that I think I shouldn't. I think the correct way for people to behave generally is to act in a way that minimizes the suffering of others, and I am a person so I should behave that way too; it would be inconsistent for me to do other than what I think people in general ought to do. Whether or not I feel like it isn't relevant, except inasmuch as my feelings might interfere in my doing what I think I ought to.

I've done right by people I hated before, even though I'd rather have watched them suffer, because I thought that I ought to, and I was able to override the feelings to the contrary. People who only do what they feel is right because they feel like it seem unlikely to do something like that; if they want to see someone suffer, they'll invent a reason why that person "ought" to suffer to justify allowing it to happen, and not care whether that "reasoning" is consistent with their other reasoning about other people in other circumstances.

I'm not demeaning people having feelings that incline them to do things they ought to do. I'm very happy that people generally have feelings that more or less incline them to more or less usually do things they more or less ought to do, because most people don't think past those feelings. But their feelings can often lead people to do bad things instead, and I'd much rather that more people actually stop and think about what is or isn't actually the right thing to do, than just do what they feel like, even though it's good that they pretty often feel like doing things that are pretty okay.
Isaac August 11, 2020 at 06:58 #441938
Quoting Philosophim
If the woman has to suffocate the baby, do we call her moral, or immoral? The whateverman moralist would say, "Sure, if they think it is.". But a more discerning moral relativist would try to find a common thread between the two. Yes, there were two opposing actions of moral claim, "A baby being killed, versus we shouldn't kill babies". But surely the reasoning behind both moral claims has a common thread?


Well, what is it then? We've been at this for 2000 years; we've either found out what this common thread is, or we must surely, on pain of pure dogmatism, concede that there very likely isn't one.

Quoting Banno
Seems as ethical claims are more than claims of personal preference. Ethical claims invoke a move such as that from "I choose not to eat meat" to "you should choose not to eat meat". The move from what I chose to what others should choose.

If that is so, it is difficult to see how moral relativism could count as a coherent ethical position.


A few hundred years ago everyone spoke freely as if there were a god. Most still do: "God willing", "For God's sake", "God save us". Does that mean that atheism is incoherent, because people speak as if there were a god? It sounds like a very odd argument to say that simply because people speak as if there were an objective moral standard, it must be the case that there is one.

Quoting Pfhorrest
I get emotionally upset about things that impact me directly, but I could, largely, ignore everyone else's suffering. Except that I think I shouldn't. I think the correct way for people to behave generally is to act in a way that minimizes the suffering of others, and I am a person so I should behave that way too; it would be inconsistent for me to do other than what I think people in general ought to do. Whether or not I feel like it isn't relevant, except inasmuch as my feelings might interfere in my doing what I think I ought to.

I've done right by people that I hated before, even though I'd rather have watched them suffer, because I thought that I ought to and I was able to override the feelings to the contrary. People who only do what's they feel is right because they feel like it seem unlikely to do something like that; if they want to see someone suffer, they'll invent a reason why that person "ought" to suffer to justify allowing it to happen, and not care whether or not that "reasoning" is consistent with their other reasoning about other people in other circumstances.


It is frightening that you're advocating this stuff. This is Blair's description of a psychopath: someone who follows the rules but without any feeling. That describes the psychological state of the overwhelming majority of the world's worst, most horrendous mass-murderers, and you want people to use it as a basis for making moral decisions?
Banno August 11, 2020 at 07:03 #441940
Quoting Isaac
A few hundred years ago everyone spoke freely as if there were a god. Most still do "God willing", "For God's sake", "God save us". Does that mean that atheism is incoherent, because people speak as if it were? It sounds like a very odd argument to say that simply because people speak as if there were an objective moral standard, it must be the case that there is one.


Seriously?

Isaac August 11, 2020 at 07:18 #441943
Quoting Banno
Seriously?


Yes. Unless I've missed something in your grammar, the construction of your proposition is: it seems as if claims are about X, therefore X.

Where X here is some extra-personal force.

Otherwise moral relativism is completely unaffected. If all you're saying is that the subject of ethical claims is other people, then this poses no issues at all for relativism. We can all have subjective feelings about how we'd like others to behave, no less than we can about how we'd like to behave.

The emotivist argument, for example, is not dented by the subject matter of the expression, unless you're also claiming, as I suggested, that the assumption within such an expression reifies its subject.
Judaka August 11, 2020 at 07:55 #441954
Reply to Pfhorrest
I think it is very human to ignore a stranger's suffering; out of sight, out of mind, as they say. It is also very human to want to minimise suffering - unless you have the power to cause it and get away with it. You say whether you feel like it isn't relevant, but that's easy to say. It's like saying "I won't let anger cloud my judgement", but of course, when you actually get angry, anger clouds your judgement, and with your clouded judgement, you no longer care. Or "I won't be lazy, I'll do my homework tomorrow", but then when tomorrow comes and you actually feel lazy, you don't do it, and so on. This line of reasoning is unconvincing for that reason.

That is to say, the "feeling" to do right is really much stronger than a belief that you must do right. The first happens naturally, while the second is self-imposed. I can often tell from someone's personality what their moral stances are likely to be, because there are fairly obvious correlations - not necessarily on specific topics, although sometimes even that. However, who's going to be nice to the person they hate versus who's going to be cruel? I consider that to be more of a personality thing than a moral stance. These things intertwine in ways that we cannot ourselves completely understand. If you are nonconfrontational then you will be nonconfrontational, and your philosophies aren't going to change that. Either you will adopt realistic stances or you will not follow through with your ideals.

It might be impressive for an A.I. to come to the intellectual conclusion of being nice and kind, but it doesn't make too much sense. Morality is, in fact, based on the feelings and not the intellectual position. The intellectual position of morality is a cheap imitation, evaluated as pragmatic in some way. We conflate a lot when we talk about morality; of course, morality isn't just how you feel about something. What to do as a result of your feeling, what it means to have your feeling and so on - that's all intellectual.

Hence, though I don't agree with moral absolutism, it makes sense to me that people would think this way. It is an interpretation of the meaning of the feeling.

My view is moral opinion will be exerted one way or another, there is not a possibility for its disappearance. So in essence, it is about deciding what kind of world I would like to live in and what needs to happen to make that happen. I am decidedly intolerant of people who disagree with me on moral issues, they are obstacles to the creation of my ideal world. Not much different from moral absolutism except I don't feel the need to pretend that my ideals have divine authority. Mostly I believe that when I do what is best for myself and others, the best outcome comes naturally. Then it is only about creating the correct framing and the power to exert your influence. I certainly don't agree with normative relativism.

Although nearly all replies are just arguing against moral relativism, I just wanted to show that normative relativism is not the same as moral relativism - in which inequality of moral positions can exist. Whether you refuse to acquire your position because you think it has objective truth value or because you feel the way you feel and can't change it, it's not too different.
Pfhorrest August 11, 2020 at 16:58 #442052
Quoting Isaac
Someone who follows the rules but without any feeling.


I don’t just follow the rules blindly, I criticize the rules and make a great effort to be sure as I can that they really are the correct rules. NB that as I’ve said before I am firmly against absolutism, thinking that certain kinds of acts are always right or always wrong regardless of context; context matters a lot. But also, the ends don’t justify the means.

Those are the extremes that the usual types of normative ethical theory end up in, which is why I reject them both and try to find something better. And yeah, if someone just blindly followed those usual theories to their logical conclusion it would lead to atrocious ends. Kant would have you tell the Nazis you're harboring Jews because lying is always wrong. Mill would have you harvest the organs of one healthy patient to save five dying ones because that end justifies those means. Those absurd conclusions that a robot blindly following such rules would reach are a demonstration that those rules are faulty.

Thinking in terms of how to program a robot is thus very useful in figuring out what the correct normative ethics is, and likewise, the correct normative ethics should lead an emotionless robot or psychopath to behave even more morally than someone just doing what they feel is right because they feel like it.

In the meantime, it's still great that most people usually feel like doing something that's usually mostly alright. But that still frequently goes wrong, to one degree or another. Ethics is all about thinking through what would be right more reliably than just that.

Quoting Judaka
Although nearly all replies are just arguing against moral relativism, I just wanted to show that normative relativism is not the same as moral relativism


Sorry for derailing your thread more above.

NB though that the terms are “normative MORAL relativism” and “META-ETHICAL moral relativism”. They are both kinds of moral relativism, so if you want to distinguish between them you have to make sure to use the qualifiers, because neither of them is just the one and only “moral relativism”.
bert1 August 11, 2020 at 17:38 #442067
Quoting Judaka
That is what my thread is about, what does it mean or what would it mean if morality is relative?


Perhaps it means that what we should do is negotiated rather than discovered. Not sure.

Quoting Pfhorrest
Besides descriptive relativism, there is also meta-ethical relativism, which is what Carlos is talking about (the truth or falsity of moral claims is relative)


I'm one of those...

Quoting Pfhorrest
but also normative moral relativism, which is what Judaka mentions here (we ought to tolerate behaviors that our morals say are bad because our morals are just relative).


...but not one of those.

Thanks for the concepts.

bert1 August 11, 2020 at 17:42 #442070
Quoting Banno
Ethical claims invoke a move such as that from "I choose not to eat meat" to "you should choose not to eat meat". The move from what I choose to what others should choose.

If that is so, it is difficult to see how moral relativism could count as a coherent ethical position.


I agree moral claims are about others. But they are about getting other people to do what you want them to do, not what they should do in any objective sense.
bert1 August 11, 2020 at 17:53 #442073
Quoting Pfhorrest
It's sort of the moral equivalent of people who believe things uncritically, just because they heard someone say it or read it somewhere and it seemed truthy to them. That seems to be most people, and doing what feels like the moral thing to do because they feel like it seems to be most people too, but both of those seem like very shallow, fragile, easily corrupted and highly fallible ways to go about deciding what to think and what to do, in contrast to, you know... actually reasoning about these things critically.


Moral reasoning is still possible for the meta-ethical relativist (hope I've got that right). We just have to find common terminal goals (as opposed to instrumental goals, to put it in AI terms). Then we can argue about how best to achieve those goals. Rational argument ceases to be possible when people have divergent terminal goals. Then it's just a fight. It may be that, as a matter of fact, all people have convergent terminal goals (I believe that, or at least think it likely). But even if so, this does not make meta-ethical relativism false. It just makes it look like objectivism. Just like intersubjectivity is not objectivity, but seems like it.
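To put the terminal/instrumental distinction in toy code (just my own sketch; the goals and the means-lookup are invented for illustration, not anything from the thread):

```python
# Toy sketch of terminal vs instrumental goals. Instrumental goals are
# valued only as means; terminal goals are the ends being measured against.

def instrumental_goals(terminal_goal):
    """Derive sub-goals purely as means to a given terminal goal."""
    means = {
        "reduce suffering": ["fund healthcare", "prevent conflict"],
        "maximise paperclips": ["acquire steel", "build factories"],
    }
    return means.get(terminal_goal, [])

alice = "reduce suffering"
bob = "reduce suffering"
clippy = "maximise paperclips"

# Alice and Bob share a terminal goal, so a dispute over means is, in
# principle, resolvable by rational argument about which means work best.
argument_possible = alice == bob  # True

# Clippy's terminal goal diverges; arguing over means cannot close that
# gap, because there is no shared end to measure the means against.
argument_possible_with_clippy = alice == clippy  # False
```

The point being that rational moral argument, on this view, only operates over the `means` table, never over the keys.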

At the moment I see human morality more or less in the same way as the guy in this video thinks about the orthogonality thesis regarding Artificial General Intelligence:

https://www.youtube.com/watch?v=hEUO6pjwFOo
bert1 August 11, 2020 at 17:58 #442075
Quoting Tzeentch
To a moral relativist, what is the purpose of morality?


Getting other people to do what you want them to do.

As a panpsychist, I might go as far as to say that reality (the way the world is) is, at bottom, negotiated. I used to think that ethics was the last area of philosophy we should study, and the most derivative. After being influenced by The Great Whatever, I now think it might be foundational.
Pfhorrest August 11, 2020 at 18:00 #442076
Quoting bert1
Just like intersubjectivity is not objectivity, but seems like it.


I've never really understood the supposed distinction between these two. It makes it seem like objectivity is being conflated with transcendence, like the objective is something completely beyond access. As I understand it, the objective is just the limit of the increasingly intersubjective; the maximally intersubjective (that we'll never reach, but can get arbitrarily close to) just is the objective. Any "objective" beyond that is incomprehensible nonsense, and so not worth speaking of.
Banno August 11, 2020 at 19:27 #442098
Reply to Isaac I'm just surprised to be so misread.
Banno August 11, 2020 at 19:31 #442099
Quoting bert1
But they are about getting other people to do what you want them to do, not what they should do in any objective sense.


That rather begs the question. There might, perhaps, be reasons to suppose that folk ought act in some particular way that are not objective.
Echarmion August 11, 2020 at 19:40 #442102
Quoting Pfhorrest
I've never really understood the supposed distinction between these two. It makes it seem like objectivity is being conflated with transcendence, like the objective is something completely beyond access. As I understand it, the objective is just the limit of the increasingly intersubjective; the maximally intersubjective (that we'll never reach, but can get arbitrarily close to) just is the objective. Any "objective" beyond that is incomprehensible nonsense, and so not worth speaking of.


That's an advanced position though. You first have to understand the objective as a realm completely beyond access, realize that everything supposedly objective is therefore merely intersubjective, and then conclude that if the objective is inaccessible, we might as well cut out the middleman and equate the intersubjective with the objective.

Quoting Judaka
My view is moral opinion will be exerted one way or another, there is not a possibility for its disappearance. So in essence, it is about deciding what kind of world I would like to live in and what needs to happen to make that happen. I am decidedly intolerant of people who disagree with me on moral issues, they are obstacles to the creation of my ideal world. Not much different from moral absolutism except I don't feel the need to pretend that my ideals have divine authority. Mostly I believe that when I do what is best for myself and others, the best outcome comes naturally. Then it is only about creating the correct framing and the power to exert your influence. I certainly don't agree with normative relativism.


So, this is a nice introduction to something I have wondered about before: Given that, internally, you have to justify your moral position to yourself somehow, and that, having justified it, you are going to act on it, how does the distinction between meta-ethical relativism and meta-ethical universalism work from your internal perspective?

"It's all relative" won't help you when confronted with some moral dilemma. If you want to make a decision, you need to start somewhere, i.e. you need to treat something as a universal baseline to base your reasoning on. It may be true that what you're actually doing is merely justifying a conclusion you have already arrived at emotionally. But doesn't your reasoning nevertheless treat whatever you're doing as "logical" and therefore "objective"?
bert1 August 11, 2020 at 19:49 #442104
Quoting Echarmion
That's an advanced position though. You first have to understand the objective as a realm completely beyond access, realize that everything supposedly objective is therefore merely intersubjective, and then conclude that if the objective is inaccessible, we might as well cut out the middleman and equate the intersubjective with the objective.


Indeed, you put that better than I was about to do. Not everyone will be happy with collapsing the distinction like this, and in characterising others' positions, I would maintain the distinction they maintain. (Says me in a high-minded "I never mis-characterise others" way)

It seems to me Pfhorrest is a meta-ethical relativist, as long as he thinks that everyone has, in fact, the same terminal goals such that rational argument can always in principle result in agreement.



Judaka August 11, 2020 at 20:02 #442110
Quoting bert1
Perhaps it means that what we should do is negotiated rather than discovered. Not sure.


Not necessarily. To talk intelligently about morality we need to break it down into smaller bits; otherwise, disagreements get bogged down in the same places all the time. First, we need to separate how morality functions within a culture from how it functions within a person. Being passionate about your views on morality doesn't mean that others will agree with you; in fact, generally the opposite. No negotiation is required here: you have your passionate views, and you may or may not be able to get others to agree or comply with them, but you retain your views and passions nonetheless.

Within a culture, negotiation is a misleading term because, after all, no real negotiation actually takes place. Interpretations act like ideas: some are convincing to people and some aren't, and when enough people, or enough powerful people, latch onto an idea (among other things), it starts to take hold. Ideas aren't necessarily convincing because they're good ideas; there's a lot that goes into this process.

We have our base moral feelings, which we don't need to think about: the average, healthy person is predisposed to dislike certain acts and compelled to think in certain ways. Then we have the question of the best way to implement those feelings. For instance, there are biological and social reasons why, in the past, society has shamed open sexuality, but we can also see the other side: allowing people to do what they want and being tolerant of difference. So we can in a sense "discover" the best way to think, given that the "best" way is something we determine. However, we are not blank slates, so the "best" way can be about discovering what works within our society (which is constantly changing) and for ourselves, given our values and all the components of our selves and lives which are not optional.

Meta-ethical relativism for me is certainly not about making morality an individual preference similar to a favourite colour. There are very powerful emotions involved, hugely important consequences and the stakes are very high. So a lot of careful thinking and experimentation has to take place before we can realise a set of moral principles that achieves the aims we hoped for.
Judaka August 11, 2020 at 20:25 #442113
Reply to Echarmion
Honestly, I use meta-ethical relativism to say that moral positions don't have a truth value, that they're not objectively true. When I read about meta-ethical universalism, it seems to be capable of saying exactly the same thing. I do disagree with having a one-size-fits-all approach to ethics, but there are definitely scenarios where I would think a universal approach is correct. Since I am still learning the correct terms, I am not in the best position to comment deeply.

I think that morality is a conflation of our biological proclivity for thinking in moral terms, the intellectual positions that we create, and the personal vs social aspects of morality. Hence, people say "you need a basis for your intellectual position to be rational", but to me, morality is not based on rational thought.

I don't believe a supercomputer A.I. could reach the moral positions that we do; I think it would really struggle to invent meaningful fundamental building blocks for morality, which for us just come from our biology. When we look at people who commit really horrible crimes, they are often just dysfunctional in some way. We have psychopaths who are just wired differently and cannot understand why we think the way we do. Why would someone cry to see a stranger suffering? That doesn't make any sense, but it's how many of us are.

Morality is often just you being you, and the relativity of morality frames morality as exactly that. You can be logical, but your base positions aren't logical; they're just you being you. Morality is not simply an intellectual position. My reasoning is based on feelings, which discounts any possibility of objectivity; my feelings aren't dependent on reasoning.

Reasoning becomes a factor when we start to talk about the implications of my feelings. I may instinctively value loyalty but we can create hypothetical scenarios which challenge how strong those feelings are. I may value loyalty but we can create scenarios where my loyalty is causing me to make very bad decisions. That's the intellectual component of morality, interpretation, framing, decision-making and so on. I find all of this happens very organically regardless of your philosophical positions. Even for a normative relativist, I imagine it changes very little in how morality functions for that person.
Pfhorrest August 12, 2020 at 03:28 #442211
Quoting bert1
It seems to me Pfhorrest is a meta-ethical relativist, as long as he thinks that everyone has, in fact, the same terminal goals such that rational argument can always in principle result in agreement.


No, I just “have terminal goals” (i.e. take morality to be something*) that involves the suffering and enjoyment, pleasure and pain, of all people.

Whether or not other people actually care to try to realize that end is irrelevant for whether that end is right. Some people may not care about others’ suffering, for instance; that just means they’re morally wrong, that their choices will not factor in relevant details about what the best (i.e. morally correct) choice is.

*One’s terminal goals and what one takes morality to be are the same thing, just like the criteria by which we decide what to believe and what we take reality to be are the same thing. To have something as a goal, to intend that it be so, just is to think it is good, or moral, just like to believe something just is to think it is true, or real.
Gregory August 12, 2020 at 03:38 #442213
Whether completely amoral people were born that way or not is debatable. Some psychologists believe toddlers have free will, and the corollary to this is that toddlers form the personality they will later have. I wish this issue were clearer, but it's not.
Isaac August 12, 2020 at 05:51 #442246
Quoting Pfhorrest
I criticize the rules and make a great effort to be sure as I can that they really are the correct rules.


And how do you carry out this criticism and effort? By following rules.
Isaac August 12, 2020 at 06:07 #442250
Quoting Banno
I'm just surprised to be so misread.


Really? Happens to me all the time.

Quoting Banno
Isaac throws out reality every second day.
Pfhorrest August 12, 2020 at 06:10 #442253
Quoting Isaac
And how do you carry out this criticism and effort? By following rules.


Which I in turn criticize and reformulate, ad infinitum.
Isaac August 12, 2020 at 06:18 #442262
Reply to Pfhorrest

Your ad infinitum is impossible. You must simply accept one set of rules; you do not have infinite mental resources to devote to actually questioning everything. You might have some whimsical theoretical notion of being able to question everything, but you cannot actually do so, which means in practice you're selecting a set of rules and blindly following them. The fact that you might 'one day' question them when you get time is pragmatically irrelevant.
Pfhorrest August 12, 2020 at 06:24 #442266
Reply to Isaac You don't have to (and can't, and shouldn't) finish any infinite series of questioning before proceeding with your life. But being open to seeing problems with the rules you live by and revising them as needed, as often and however long as needed, is the exact opposite of following them blindly.
Isaac August 12, 2020 at 06:32 #442268
Quoting Pfhorrest
But being open to seeing problems with the rules you live by and revising them as needed, as often and however long as needed, is the exact opposite of following them blindly.


No it isn't. If you do not question a rule then you are following it blindly. The fact that you might theoretically be open to questioning it sometime in the future if you get time is irrelevant. It just puts a gloss of 'cold, hard calculation' on the exact same gut feeling that everyone else is using.
Echarmion August 12, 2020 at 13:44 #442341
Quoting Judaka
Honestly, I use meta-ethical relativism to say that moral positions don't have a truth value, they're not objectively true.


That much I understand. But, in the case where you are faced with a moral dilemma, don't you then run into a performative contradiction? In order to solve the dilemma, you employ reasoning, and that reasoning will, presumably, reject some answers. What is that rejection if not assigning a truth value?

Quoting Judaka
I think that morality is a conflation of our biological proclivity for thinking in moral terms, the intellectual positions that we create, and the personal vs social aspects of morality. Hence, people say "you need a basis for your intellectual position to be rational", but to me, morality is not based on rational thought.


From a descriptive perspective, I agree. Morality is, from the outside perspective, an evolved social capability of humans, and it's probably based on our capability for empathy, that is, mirroring feelings.

Quoting Judaka
I don't believe a supercomputer A.I. could reach the moral positions that we do; I think it would really struggle to invent meaningful fundamental building blocks for morality, which for us just come from our biology.


This is an interesting scenario, actually. Is an A.I. independent of human morality even possible? An A.I. would, in the first instance, just be an ability to do calculations. In order to turn it into something we'd recognize as intelligence, we'd need to feed it with information, and that'd presumably include our ideas on morality. Given that we don't have any intelligences to model an A.I. on other than our own, it seems likely that the outcome would actually be fairly similar in outlook to humans, at least in the first generations.

Quoting Judaka
Morality is often just you being you, and the relativity of morality frames morality as exactly that. You can be logical, but your base positions aren't logical; they're just you being you. Morality is not simply an intellectual position. My reasoning is based on feelings, which discounts any possibility of objectivity; my feelings aren't dependent on reasoning.


But isn't it the case that, while you may intellectually realize that your basic moral assumptions, your moral axioms, are merely contingent, you are nevertheless employing them as objective norms when making your moral decisions? To me it seems rather analogous to the free will situation: you can intellectually conclude that free will is an illusion, but you cannot practically use that as a basis for your decisions.

It seems to me that this dualism - that of the internal and the external perspective - is fundamental and unavoidable when decision-making is involved.

Quoting Judaka
Reasoning becomes a factor when we start to talk about the implications of my feelings. I may instinctively value loyalty but we can create hypothetical scenarios which challenge how strong those feelings are. I may value loyalty but we can create scenarios where my loyalty is causing me to make very bad decisions. That's the intellectual component of morality, interpretation, framing, decision-making and so on. I find all of this happens very organically regardless of your philosophical positions. Even for a normative relativist, I imagine it changes very little in how morality functions for that person.


I would agree that, in general, your meta-ethical stance has limited bearing on how you make moral decisions in everyday life. We cannot reason ourselves out of the structures of our reasoning.
Pfhorrest August 12, 2020 at 17:58 #442389
Reply to Isaac This just seems to be an argument about what “blindly” means now. I’m taking that to mean what I call “fideism”: holding some opinions to be beyond question. You’re taking it to mean what I call “liberalism”: tentatively holding opinions without first conclusively justifying them from the ground up. But the latter is fine, it’s no criticism of me to say I’m doing that, and I’m not criticizing anyone else for doing that. It’s only the former that’s a problem, and you seem to want to impute that problematicness to me, perhaps because you conflate the two together, as so many do. Just like you conflate objectivism (which entails liberalism) with transcendentalism (which entails fideism).
Judaka August 13, 2020 at 00:45 #442482
Reply to Echarmion
Quoting Echarmion
That much I understand. But, in the case where you are faced with a moral dilemma, don't you then run into a performative contradiction? In order to solve the dilemma, you employ reasoning, and that reasoning will, presumably, reject some answers. What is that rejection if not assigning a truth value?


Those rejected answers aren't being described as untrue; they're being judged in other ways. An emotional argument like "it is horrible to see someone suffering" as a reason why you should not cause suffering might or might not be a logically correct argument; it is based on my assessment. Everything about my choice to call a thing moral or immoral is based on me: my feelings, my thoughts, my interpretations, my experiences. The conclusion is not a truth; the conclusion can be evaluated in any number of ways. Is it practical, pragmatic, fair? The options go on. For me, it is never about deciding what is or isn't true.

As for A.I., I don't agree: intelligence doesn't require our perspective, and I think it is precisely the lack of any other intelligent species that makes this conceivable for people. It's much more complicated than being based on empathy: one of the biggest feelings morality is based on is fairness - even dogs are acutely aware of fairness; it's not just an intellectual position. We are also a nonconfrontational species; people need to be trained to kill, not the other way around. All of these things play into how morality functions, and morality looks very different without them. An A.I. would not have these biases: it's not a social species that experiences jealousy, love, hate and empathy, and it has no proclivity towards being nonconfrontational or seeing things as fair or unfair.

Quoting Echarmion
But isn't it the case that, while you may intellectually realize that your basic moral assumptions, your moral axioms, are merely contingent, you are nevertheless employing them as objective norms when making your moral decisions?


I don't consider morality to be mainly an intellectual position; we can look at other species and recognise a "morality" to their actions. Lions have a clear hierarchy in their pride; there is a really interesting guy called Dean Schneider who raised some lions and spends a lot of time with them. Here's a video of what I'm about to talk about:
https://www.youtube.com/watch?v=cnTlNKZYFjQ - check 1:40 specifically.

He physically smacks a lion to teach it that clawing him is not okay, and explains that this is how the pride develops boundaries of right and wrong. It's okay to play around, but if you actually hurt me, that's not okay and I'll give you a smack. Surprisingly, the lions just accept it as fair. You see a similar thing with dog trainers: they explain that the dog is acutely aware of its position in the pack; it has a very specific way of seeing who should eat first, when it should look for permission from the pack leader to do things, and so on.

I've heard that when rats wrestle each other for fun, the bigger rat will let the little rat win sometimes because otherwise the little rat won't play anymore, since it's boring to lose all the time. I draw parallels between these kinds of behaviours in animals and the behaviours we can see in humans. It's only much more complicated for humans due to our intelligence.

As humans, we can go beyond mere instincts and intellectually debate morality, but that's superfluous to what morality is. Certainly, morality is not based on these intellectual debates or positions. I think people talk about morality as if they have come to all of their conclusions logically, but in fact I think they would have ended up much the same if they had barely thought about morality at all. One will be taught right from wrong in a similar way to lions and dogs.

Since morality isn't based on your intellectual positions, it doesn't really matter if your positions are even remotely coherent. You can justify that suffering is wrong because you had a dream about a turtle who told you so and it doesn't matter, you'll be able to navigate when suffering is wrong or not wrong as easily as anyone else. The complexity comes not from morality but interpretation, characterisation, framing, knowledge, implications and so on.





Isaac August 13, 2020 at 06:03 #442544
Quoting Pfhorrest
I’m taking that to mean what I call “fideism”: holding some opinions to be beyond question. You’re taking it to mean what I call “liberalism”: tentatively holding opinions without first conclusively justifying them from the ground up. But the latter is fine, it’s no criticism of me to say I’m doing that, and I’m not criticizing anyone else for doing that. It’s only the former that’s a problem


How is it a problem? The only problem with fideism that I can see is that one might be wrong about some belief and because one does not question it, one will persist in that falsity. Is there some other problem you're thinking of?
Pfhorrest August 13, 2020 at 06:09 #442547
Quoting Isaac
How is it a problem? The only problem with fideism that I can see is that one might be wrong about some belief and because one does not question it, one will persist in that falsity. Is there some other problem you're thinking of?


That is exactly the problem, yes.
Isaac August 13, 2020 at 06:17 #442550
Quoting Pfhorrest
That is exactly the problem, yes.


So how does the fact that you're open to them being questioned alter the issue? We've just agreed that the problem is related to whether you actually do question them, not whether you might do so in future if and when you get the time.
Pfhorrest August 13, 2020 at 06:34 #442558
Quoting Isaac
So how does the fact that you're open to them being questioned alter the issue?


Because if reasons to question them come up, I will. Someone who does otherwise won't. That's the "blindly" part of "blindly follow": turning a blind eye towards reasons to think otherwise.
Isaac August 13, 2020 at 06:48 #442564
Quoting Pfhorrest
Because if reasons to question them come up, I will. Someone who does otherwise won't. That's the "blindly" part of "blindly follow": turning a blind eye towards reasons to think otherwise.


But we've just established that the single issue is that you might be wrong about something and not correct that error. That is, you agreed, the only thing that is at fault with fideism. You're still not altering that by saying that you might question something in future if the matter arises.

Notwithstanding that, you have not yet met the burden of demonstrating that your method here is at all practically achievable. When faced with simple moral decisions which do not yield the expected results ("don't lie", for example) you say that the 'right' moral decision is heavily context-dependent: the right choice for that person at that time in that context. If so, there are 7 billion people in several billion different contexts at several billion different times. Given the raw numbers, the onus is on you, I think, to demonstrate that this idea of yours is pragmatically any different from fideism. It seems clear to me that the vast majority of the time, in the vast majority of cases, your moral decisions will be made without going through this process, because the time within which a decision has to be made falls far short of the time it would take to reach anything like the kind of context-specific conclusion you're advocating.

Basically, you'll go through life making almost all of your moral choices on the same gut-feeling, peer-group, social-norms basis that everyone else does, because you haven't the time or the mental bandwidth to actually do the calculation. The only difference is you get to act holier-than-thou simply on the grounds that you're open (one day, maybe) to changing your mind.
Echarmion August 13, 2020 at 08:22 #442588
Quoting Judaka
Those rejected answers aren't being described as untrue; they're being judged in other ways. An emotional argument like "it is horrible to see someone suffering" as a reason why you should not cause suffering might or might not be a logically correct argument; it is based on my assessment.


Yeah, but "it's horrible" is not saying the same as "it feels horrible to me". That's my point: we treat the outcome of any deliberation on morality as more than just emotion.

Quoting Judaka
Everything about my choice to call a thing moral or immoral is based on me: my feelings, my thoughts, my interpretations, my experiences. The conclusion is not a truth; the conclusion can be evaluated in any number of ways. Is it practical, pragmatic, fair? The options go on. For me, it is never about deciding what is or isn't true.


Doesn't the ability to evaluate anything in any way require assigning truth values? Even the question "do I feel that this solution is fair" requires there to be an answer that is either true or false.

Quoting Judaka
As for A.I., I don't agree: intelligence doesn't require our perspective, and I think it is precisely the lack of any other intelligent species that makes this conceivable for people. It's much more complicated than being based on empathy: one of the biggest feelings morality is based on is fairness - even dogs are acutely aware of fairness; it's not just an intellectual position. We are also a nonconfrontational species; people need to be trained to kill, not the other way around. All of these things play into how morality functions, and morality looks very different without them. An A.I. would not have these biases: it's not a social species that experiences jealousy, love, hate and empathy, and it has no proclivity towards being nonconfrontational or seeing things as fair or unfair.


How do you suppose an A.I. would gain consciousness without human input? It's a bunch of circuits. Someone has to decide how to wire them, and will thereby inevitably model the resulting mind on something. And in all likelihood, in order to create something flexible enough to be considered "strong A.I.", you'd have to set it up so it started unformed to a large degree, much like a newborn child.

Quoting Judaka
As humans, we can go beyond mere instincts and intellectually debate morality, but that's superfluous to what morality is. Certainly, morality is not based on these intellectual debates or positions. I think people talk about morality as if they have come to all of their conclusions logically, but in fact I think they would have ended up much the same if they had barely thought about morality at all. One will be taught right from wrong in a similar way to lions and dogs.

Since morality isn't based on your intellectual positions, it doesn't really matter if your positions are even remotely coherent. You can justify that suffering is wrong because you had a dream about a turtle who told you so and it doesn't matter, you'll be able to navigate when suffering is wrong or not wrong as easily as anyone else. The complexity comes not from morality but interpretation, characterisation, framing, knowledge, implications and so on.


I agree with all that, but it's notably an intellectual position looking at morality from the outside. It's not how morality works from the inside. Internally, you do have to keep your positions coherent, else you'll suffer from cognitive dissonance. Knowing, intellectually, that your moral decisions are ultimately based on feeling doesn't help you solve a moral dilemma. The position "it's right because I feel like it" is not intuitively accepted by the human mind.
Judaka August 13, 2020 at 10:13 #442600
Quoting Echarmion
Doesn't the ability to evaluate anything in any way require assigning truth values? Even the question "do I feel that this solution is fair" requires there to be an answer that is either true or false.


If I say "I don't like Bob", that's not something I put a truth value on, but if you ask "is it true that you don't like Bob?", I will say yes. So truth values are not involved in my decision-making in the event of a moral dilemma.

Quoting Echarmion
How do you suppose an A.I. would gain consciousness without human input?


The A.I. is just an illustration of my point, no need to get too bogged down in the details. The point is that humans are biologically predisposed to think in moral terms; we are predisposed to have particular feelings about children, violence, fairness and pain, and all of this plays a part in how morality is developed. Often when it comes to meta-ethical relativism, there are worries about how morality will be able to function, given that meta-ethical relativism strips it of all authority and meaning. One of the ways it retains those things is through how morality functions organically in healthy people due to the influence of our biology.

As for the moral dilemma, when I listen to people talk about moral dilemmas, I hear "it's just wrong" and "what is being done is horrible" more than anything else. Not even in dilemmas but on actual moral issues, most people cannot give explanations for why something is wrong without appealing to their feelings. People won't say "it's right because I feel like it", obviously; feelings don't work that way. Feelings are highly complicated. Reason and feelings aren't separate; they mix, and you can't take them apart and examine them.

One great example of how morality works is the meat industry and dogs. Many Asian countries eat dog, and it's considered truly awful by many meat-eaters; it crosses a line. My explanation is that the dog has acquired an image or status in some societies as "man's best friend". Certain cultures view dogs as loyal, loving, great pets and not food. This is where things become complex: the characterisation of the dog is what makes eating the dog evil. Mass poison some rats and it's fine, but mass poison dogs and you're a monster. The rat is a pest; the dog is a loyal and loving friend. How can you betray and eat a loyal friend? That's wrong.

As for cognitive dissonance, it occurs naturally; reducing it requires a conscious effort.



Edgy Roy August 17, 2020 at 08:53 #443815
It is far better to act moral than it is to be called moral.
bert1 September 01, 2020 at 20:22 #448477
Quoting Pfhorrest
No, I just “have terminal goals” (i.e. take morality to be something*) that involves the suffering and enjoyment, pleasure and pain, of all people.


Thanks, this is interesting. It seems to me that that is consistent with meta-ethical relativism. FWIW, I share these terminal goals. However, I don't think that other people who do not share these terminal goals have necessarily made a mistake (although they might have done). They are just my enemy. Do you think they have made a mistake such that they could be reasoned with?

Whether or not other people actually care to try to realize that end is irrelevant for whether that end is right. Some people may not care about others’ suffering, for instance; that just means they’re morally wrong, that their choices will not factor in relevant details about what the best (i.e. morally correct) choice is.


It means they are morally wrong from your perspective or mine. But I can't get away from the need to specify a perspective when evaluating the truth of moral claims. What is right and wrong just changes depending on whose point of view one takes. This just seems like a fact that follows from the idea that people can have divergent terminal goals.

*One’s terminal goals and what one takes morality to be are the same thing, just like the criteria by which we decide what to believe and what we take reality to be are the same thing. To have something as a goal, to intend that it be so, just is to think it is good, or moral, just like to believe something just is to think it is true, or real.


Yes, I agree with that.

Pfhorrest September 01, 2020 at 21:12 #448492
Quoting bert1
Do you think they have made a mistake such that they could be reasoned with?


I think they've made a mistake, but I don't know that they can be reasoned with. It seems to me completely analogous to someone like a solipsist, or anyone else who doesn't believe that there is a reality apart from themselves that can be known from sense experience (whether that be because they think reality is something completely apart from sense experience, or because they think there is nothing to it besides their own). I don't know if I could reason a solipsist out of their position (I have ideas for how to try, but it really depends on them listening), but I think they're wrong; and likewise someone who denies that morality is something apart from their own interests that has to do with hedonic experiences like pain and pleasure (whether that be because they think morality is something completely apart from such experiences, or because they think there is nothing to it but their own).

EDIT: My reasons for rejecting solipsism, transcendent reality, egotism, and transcendent morality are all the same pragmatic ones: if any of those were true, we could not know them to be true, nor know them to be untrue, yet we could not help but act on the assumption that either they are or they aren't. If we assume any of them are true, then we run into an immediate impasse from which there is no reasoning out; while if we assume they're all false, and that both reality and morality are beyond merely our individual selves but accessible to all of our experiences, then we can actually start making practical progress sorting out which things are more or less likely to be real or moral. It might always be the case that actually nothing is either, but we can never be sure of that; we can only either assume it prematurely and so give up, or assume the contrary and keep trying to figure out what each is.