Emotions and Ethics based on Logical Necessity
It is easy to see mechanical systems behind our rational thinking, our vision, and our hearing, since we have computers, cameras, and microphones that do just those things. But our feelings seem different, although their explanation is much simpler: stability, a fundamental property of every possible thing, since every possible thing is by necessity either stable or unstable. Every possible thing either tries or does not try to change its current state. Stability also corresponds to our emotions completely. Instability is the negative feelings we try to get away from, and stability is the positive feelings we try to stay in. And since there is an infinite number of ways to be stable, to be unstable, or to combine the two, stability explains every possible emotion that can ever be experienced, from joy to hunger, from hate to love to guilt to feelings of right and wrong. Stability does not explain every possible experience, just how good or bad any given part of a complex experience feels.
But what is right and wrong? I have a logical solution for that. The greatest problem of morality is the basic-assumption problem. All moral systems assert goals: what should be and what should not be. All of them need to be justified, and all of the justifications need to be justified in turn, and so on, ending up in the basic assumption of that moral system. Luckily, the basic-assumption problem can be solved by a logical necessity, and we have already described a logically necessary goal: stability. Everything tries to get away from instability toward stability by logical necessity, since instability means that the system is trying to change its current state. In emotional terms, stability is happiness.
But what is so moral about achieving your own stability, your own happiness? Since happiness is the only logically necessary goal, one should try to change all of one's other goals to serve this purpose. And what is the most efficient way to achieve happiness? To create a stable state not only for yourself, but for everything else that you interact with: happiness for everyone. A stable system inside an unstable system will be affected and destabilised by that system. Because incompatible goals compete with each other, making their achievement hard and inefficient and forcing some to be always unhappy, one should change one's changeable goals to be compatible with others, and change others to be compatible with oneself. This creates harmony and sustained happiness. Cheaters can of course isolate themselves from others, so that they only have to make themselves compatible with their immediate environment.
Still, this is a complete moral system, since every action of everyone can be objectively evaluated by how well it serves one's logically necessary goal, and since optimizing this creates happiness and harmony among all those who interact with each other in this world. This shows that there was never a need for some external objective moral goal for the world. The unchangeable desire for personal happiness, and the optimization of it, was all that was ever needed.
Comments (81)
The idea of foundationalist utilitarianism is not new. The idea of basing it on stability as a logically necessary personal goal for everyone, and of also explaining what emotions are through stability, is quite new to my knowledge. Since the logically necessary personal goal of stability/happiness is not a choice, unlike other goals, it doesn't have to be justified as a choice like other goals. It's not my preferred utility; it comes from logic. My preferred utility would be humor, but one can only dream.
What's logically necessary about it? I wouldn't call it "logically necessary". In the same way that food isn't "logically necessary", neither is seeking a more stable state. Sure, you try to do both, but there is nothing logical about that.
Quoting Qmeri
In actuality it is the opposite but I get that what you mean by instability isn't exactly entropy
Also, even if we grant that this is the case, what does that have to do with morality? "Things fall" is not a statement about morality. "People seek more stable emotions" shouldn't be either.
Quoting Qmeri
Unsubstantiated claim
Quoting Qmeri
You could argue that others being unhappy makes you happy (which is the case a lot of the time; just look at the sweatshops making our clothes). So if your goal is to seek your own stability/happiness, the most efficient way to do so is not necessarily to please everyone. Slavery had been a part of stable states for some time, but I don't think you'd call it moral, even though it checks all the boxes (it increases the stability of the state and of the slaveowners, and so it would be their goal to achieve it).
Stability is a logically necessary property of everything, since everything either is trying to change its current state or isn't. Therefore everything is trying to achieve stability, since instability means that one is trying to change one's current state. Therefore everything has a goal of achieving its own personal stability by logical necessity.
Quoting khaled
This I partially agree with. Even my text describes how to cheat this moral system by isolating yourself from the suffering of others. The point of this moral system is to solve Hume's guillotine by showing that there is a logically necessary personal goal for everyone, and the choice of it doesn't need to be justified as Hume demands, since it is not a choice. No objective ought was ever needed to make logically justified choices. The fact is that in most cases (not all), trying to achieve personal sustained stability is intuitively moral. (This has to be proven empirically, but we do see people usually achieving sustained happiness in communities which are not too unstable. Even dictators are usually happier when their community thrives. And slave systems and others with much unhappiness reliably see revolts and instability.)
I actually usually call this just a "goal system", but since it does solve Hume's guillotine, and since it actually promotes intuitive morality like happiness and harmony as a bonus, I also call it a moral system. It kind of does make other moral systems redundant, since one can judge every action of everyone by this system. And unlike the others, it's based on logical necessity.
You keep repeating this, but it makes no sense whatsoever. It is trivially true that every thing either changes or it does not, but no normative statements can be logically derived from this truism.
Quoting Qmeri
Although too vague and probably false, this at least is a potentially truth-apt statement - unlike what you wrote above, which is just nonsense.
I think what you are trying to articulate is 'balance'. Consider all living things to be like tightrope walkers or cyclists: without stability/balance, they topple over. Living things have to balance the output of energy to the input of energy. With too little food/light/warmth we die, and too much is also very unhealthy. In all things we seek balance: rest and work, certainty and uncertainty. I often return to this theme of yin/yang in all living things.
It seems like we don't have a common definition for the words "trying to change" or "goal". Probably my mistake, since I could have defined them in the original text, although I had to make some compromises to keep it short. I define "trying to change" as what a system does when it changes over time with no outside influence applied to it. And I define a "goal of a system" as a state in which the system does not try to change, meaning a state of stability. Therefore it's quite an obvious logical necessity that a goal of a system = a stable state of that system, with these definitions.
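These two definitions can be sketched as a toy dynamical system (the update rule and the numbers are purely illustrative assumptions, not anything from this thread): a "system" is a state plus an update that applies with no outside influence, a state the update leaves unchanged is "stable", and by the definition above that stable state is the system's goal.

```python
def step(x):
    # Toy internal dynamics: with no outside influence, the state
    # drifts halfway toward 4.0 on every tick. (Illustrative only.)
    return x + 0.5 * (4.0 - x)

def is_stable(x, tol=1e-6):
    # "Stable state": applying the system's own update changes nothing,
    # i.e. the system no longer "tries to change" its current state.
    return abs(step(x) - x) < tol

x = 0.0                 # start in an unstable state
while not is_stable(x):
    x = step(x)         # the system "tries to change" until stable

print(round(x, 3))      # settles at the fixed point: 4.0
```

Under these definitions, the loop terminating is exactly the system reaching its "goal": the state where it stops trying to change.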
But then again, this moral system is trying to solve the problem of ethics by showing that no objective normative statements were ever needed. It simply tries to show that every system has logically necessary personal goals from which personal normative statements can be derived.
But you cannot just define goals. I as a moral agent select my goals according to what I judge to be good or bad; you cannot unilaterally define my goals for me and then call it a "logical necessity."
But even you should agree that something you try to achieve is your goal. And you as a system are trying to get away from any unstable states you may have, i.e. you are trying to get to a stable state whether you judge it good or bad. It can't be chosen; it's inherent in what instability is. An unstable system is trying to achieve a change of its current state whether it wants to or not. This is a goal by any common definition of a goal. And you have this goal, if you have instability in you, no matter what you judge or choose.
I guess it would be easier for me not to describe the logically necessary goal as trying to get to stability, but as trying to get away from instability, although to me they seem to be one and the same thing.
Of course, you set yourself up for failure by the very premise of your inquiry: developing a logically necessary ethics. Anything that is logically necessary cannot tell us anything about what is or what ought to be. Logic is a sealed system, it is limited to its own abstract playground. Unless you feed it some real-world premises - which you will then have to justify - it cannot accomplish anything that doesn't collapse into triviality.
Yep...you beat me to it. Since the OP is dealing with ethics, I tend to default to cognitive science/psychology most of the time anyway...
Otherwise, invoking logic suggests that happiness is static rather than dynamic (as it really should be, in an ontological way). The dynamic view seems more compatible with Being: we are time-dependent creatures.
So in my mind, re-wording it to something like "Emotions and Ethics based on Homeostasis" seems a bit more appropriate/intriguing.
In fairness too, homeostasis, in psychology, also has a static component to it. In the context of the human condition, if a person tries to change certain behaviors but seemingly can't, it can be said that they revert back to their natural state of 'homeostasis'. This has much to do with genetics, etc.
So, if the context is happiness, is it fair to say humans typically revert back to a state of homeostasis? I think so.... . From there I suppose one could also try to argue for selfishness, stoicism, pragmatism, utilitarianism (ala Sophisticat) so on and so forth...
It is true that no logical necessity can ever give us any information about our world, since logical necessities are true in every possible world. They are all trivialities. But since our intuition doesn't seem to grasp all the logically necessary trivialities, they can still teach us things we didn't realize before (like: I think, therefore I am). Therefore proving things to be logical necessities accomplishes useful things. In this case it demonstrates a trivial yet unintuitive goal that everyone in every possible world has. At least I didn't know it before I came up with this theory. A logically necessary triviality gave me new understanding; therefore logically necessary trivialities can give new understanding.
Since this theory is about logical necessities, and not about premises that can be untrue in a possible world like ours, it misses the point to talk about real-world premises. The exception is my claim that it is usually easier to be happy in an environment where others are happy and where people have compatible goals; that is not a logical necessity and does require empirical justification.
I try to avoid words with too specialized meanings, like homeostasis, which to my knowledge applies only to living systems. I use a very general term like "stability" and give it a very simple and general definition, since I'm trying to build a theory on logical necessities, which are usually harder to find for very specialized concepts. For things like homeostasis I would need to study empirical information about organisms and make more of a scientific theory. Interesting, but not really the point of this theory, although empirical support from things like organisms and their tendency toward homeostasis does support the claim that every being tries to achieve stability by logical necessity.
Maybe there is hope in making your theory inclusive, and not mutually exclusive, not sure. But it is important to at least draw distinctions. For instance, if you are making a claim that happiness is an intrinsic human need, you would want to explore say, the hierarchy of needs from a cognitive science point of view. Or Philosophically, you could pick any number of domains relative to ethics that makes happiness a universal objective goal.
Otherwise, I think all logical necessity would tell us is that it [happiness] exists a priori in human consciousness. And in turn, that could lead to Kantian metaphysical questions which might be interesting...
Not true. "Everything is trying to change its current state" =/= "everything is trying to achieve stability". Everything could just be wandering from instability to instability.
Quoting Qmeri
Again, I don't agree with the use of "logically necessary". It's not "logically necessary" to eat food or to seek a more stable state. It's just what we do.
Quoting Qmeri
Unsubstantiated claim. You haven't given an example, and I can think of countless examples where trying to achieve personal stability is not moral: theft, rape, or murder when one knows one won't be punished.
Quoting Qmeri
Might it not be that, because we ourselves live in such a community, the only happy people we see also belong to the same community?
Quoting Qmeri
I don't think the history of the past few hundred years is enough to make such a claim.
Quoting Qmeri
I don't think it does. First off, I don't agree that this seeking of stability has anything to do with logical necessity. Secondly, even if we grant that there is a "logically necessary" goal for everyone, that doesn't translate to it being moral to seek it. Say eating is a "logical necessity"; does that make it moral to eat and immoral not to eat?
It's true that an unstable state might change just into another unstable state. Still, an unstable system is trying to achieve a change in its current state, which is a goal, and a logically necessary one.
Although trying to change an unstable state does not seem to equal trying to get to a stable state... hmmmhhhh... a new realization! Thank you for that :) It's been a while since the last time this happened for this theory.
I do still think all systems are trying to achieve a stable state, since we also see this in nature, but it seems that trying to achieve a change of an unstable state is the only goal I can demonstrate as a logically necessary one right now. Exciting!
Quoting khaled
This system is trying to solve Hume's guillotine by giving logically necessary personal normative statements which are not choices. It achieves this by showing that trying to achieve a change in an unstable state is a logical necessity. From that one can derive a personal normative statement for everything by how much it optimizes the achievement of that goal.
To me, any system that makes normative statements is a moral system, even if the statements it makes do not seem intuitively moral. While I do think following this system is also intuitively moral in most situations, that is an empirical scientific question; I can't demonstrate that claim here, and it's not the point of this system.
Quoting 3017amen
In general I find it a lot easier to argue for absence of something negative.
I don't see happiness, joy, or contentment as the result of gaining something positive, but of losing something negative. In mathematical terms: going from -1 to 0, you achieve happiness. Not by going from 0 to 1, because of 'mean reversion', the reversion to the average: you will always end up back at 0.
People should be content at zero but don't realize they are.
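This "mean reversion" picture can be sketched numerically (the baseline of 0, the pull strength, and the variable names are my own illustrative assumptions):

```python
def revert(mood, pull=0.3):
    # Each step pulls the mood back toward the baseline of 0,
    # whether it started below (-1) or above (+1) the baseline.
    return mood - pull * mood

results = {}
for start in (-1.0, 1.0):
    mood = start
    for _ in range(50):
        mood = revert(mood)
    results[start] = mood

# Both trajectories end up back near the baseline of 0.
print(results)
```

On this toy model, "going from -1 to 0" and "going from +1 to 0" are the same process: losing the deviation, not gaining anything beyond the baseline.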
So, it ends up with the definition of what a goal is. Trying to achieve change in an unstable state is probably a goal by most definitions. Is stopping trying to achieve change the achievement of a goal? Hmmhhh... this will take some thinking.
Trying to achieve not being in any unstable state = trying to be in a stable state
Well, this stability-based explanation of happiness is quite like that. Happiness is the absence of instability. Gaining happiness is the losing of instability. Being at zero instability is being content with one's state.
Agreed. But that implies that it is not logically necessary to seek stability.
Quoting Qmeri
That's what we see because we have flowers in our eyes. Ever heard of entropy?
Quoting Qmeri
That's what I'm saying though. This is NOT normative. Logically necessary? Maybe but not normative.
The statement is: People necessarily seek stability
A normative statement would be: People should seek stability
This falls right back within the guillotine. You can't go from either of those statements to the other.
Quoting Qmeri
Agreed but this ain't it chief.
Complete heat death of the universe aka where entropy is leading us is a stable state. Entropy increases stability.
Quoting khaled
The reasoning is:
Any person has a goal of stability by necessity
Therefore from the point of view of any person they should achieve stability by necessity
That is a normative statement, although it's not an objective normative statement like "X should happen regardless of the point of view". It is a relative normative statement, relative to one's point of view.
"Entropy increases stability" is literally an oxymoron. Why do you think heat death is more stable than right now? Because that's what this implies.
Quoting Qmeri
I've heard similar arguments before, and I'll reply in the same way: there is always a semantic shift in the use of "should" when this happens. In this statement, "should" simply indicates instructions, as in "To cook the steak you should light the stove"; there is nothing moral about it. To tell the difference between the instructional should and the moral one, try replacing them with "would need to".
"People go towards stability" (inevitably)
"They should achieve stability"
"They would need to achieve stability" (inevitably)
The 3 sentences mean the same thing, so it's the instructional should being used here and there is nothing moral about the assertion.
An example of a moral should:
"You should give to the poor"
"You would need to give to the poor" (to do what)
The two sentences are clearly different, so it's a moral should
Quoting Qmeri
The mere fact that "should" is in the sentence doesn't make it normative. If it can be replaced by "would need to" or "needs to" then it's not normative
Heat death does not try to change its state, unlike the current world, which continuously tries to change its state until it reaches heat death.
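The equilibrium claim here can be sketched with a toy closed system (the two-compartment model, the rate constant, and all names are my own illustrative assumptions, not physics from the thread): heat flows between the compartments until the temperatures equalize, after which the state no longer changes on its own.

```python
def exchange(hot, cold, rate=0.1):
    # Heat flows from the hotter compartment to the colder one,
    # proportionally to the temperature difference. (Toy rule.)
    flow = rate * (hot - cold)
    return hot - flow, cold + flow

hot, cold = 100.0, 0.0      # far from equilibrium: the state keeps changing
for _ in range(200):
    hot, cold = exchange(hot, cold)

# At equilibrium both compartments sit at the same temperature,
# and exchange() no longer changes the state.
print(round(hot, 2), round(cold, 2))  # 50.0 50.0
```

In this toy sense, the equalized end state is "stable" by the thread's definition: applying the system's own dynamics changes nothing.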
Quoting khaled
They mean the same thing in both of the cases. It is the side of objective moral imperatives that plays with semantics, since our language has developed around ideas of objective moral imperatives, and that has become one of the standard meanings of the word "should".
In both cases they should be phrased like the following, in order to avoid semantic confusion:
According to an objective goal so and so should be
And
According to a subjective goal so and so should be
Therefore it wasn't semantics.
Ok so we're judging stability by "does it try to change" rather than the physical definition.
Quoting Qmeri
Then it's a non sequitur. Look at it this way:
Everyone seeks to change their state towards stability (fact, by definition/logically necessary)
Everyone should change their state towards stability.
Those are non sequiturs. There is nothing normative about the first statement, yet it is all you can establish. I agree that it is a goal most people share, but you can't just sneak normativity in there. So let me look at your comment again.
Quoting Qmeri
If the moral should is meant here, then these are non sequiturs.
"I have the goal of achieving stability"
"I (morally) should achieve stability"/ "If I don't seek stability I would be doing something immoral"
It is entirely possible for something to be a shared goal or desire and for seeking it not to be moral.
In the same way that these are non sequiturs:
"I have the goal of eating"
"I (morally) should eat"/ "If I don't eat I would be doing something immoral"
Neither of these follows. If you had meant "should" in the same sense that you use it in "You should turn on the stove to cook the steak", then it makes sense ("I (instructionally) should achieve stability because I want it"), but then you're not making a normative claim.
It seems like you do not understand subjective normative statements.
I have a goal of eating.
Therefore according to that goal I should eat.
There is nothing weird about that and it is a normative statement. It is not a non sequitur. Any objective moral imperative can also be expressed like this.
There is an objective moral goal of eating.
Therefore according to that goal I should eat.
What makes it sound weird is the word "moral", since because of the history of that word we associate it with something other than eating. But this is irrelevant, because even you acknowledged that any system that makes normative statements is a moral system, no matter how unintuitive the statements sound.
maybe but I don't think so
Quoting Qmeri
It is. There is nothing "moral" about eating. Tell me what formal logical law is employed in going from "I have the goal to eat" to "I should eat".
I have a goal to eat
I am hungry
Or something similarly mundane is about all you can get in terms of a syllogism that starts with that statement
Quoting Qmeri
Yes. And I think that you are either not making normative statements, or if you are, they are non sequiturs.
Also if this was logically consistent:
Quoting Qmeri
Then that would be enough to answer Hume's law. Why wouldn't you just say that if that was your goal?
Quoting Qmeri
If this follows then "If I don't eat I would be doing something immoral" would also have to follow (since you say that normative statements automatically define a moral law). Do you really think "If I don't eat I would be doing something immoral" should follow from "I have a goal of eating"?
Okay, it seems like we understand the word "moral" differently. To me it just represents the systems we have created to evaluate our actions and choices as "desirable" or "undesirable", since we truly need that kind of system in order to make any choices about anything.
It quickly became clear in history that judging actions simply by whether they helped one's personal goals made all choices and judgements arbitrary. Therefore we invented the idea of objective goals, aka "morals". Then Hume realized that such goals can't be philosophically justified, and we got the great problem of ethics, aka Hume's guillotine.
My system tries to solve this by showing that there is a logically necessary personal goal for everyone that is not arbitrary, and that therefore judging actions as "desirable" or "undesirable" according to this goal gives us a non-arbitrary system that functionally does the thing we created objective morality for: evaluating choices. Whether one calls it "moral" is irrelevant to me, although again, I do think following it produces intuitively moral choices most of the time.
Ok I'll go with that.
Quoting Qmeri
One can argue that the desire to eat is also a necessity, while not a logical one. I also showed how "going toward stability" isn't a logical necessity either: "unstable things try to change their state" =/= "unstable things try to be stable".
However all life hangs in a balance.
Yes yes, and I'll get back to that after a good night's sleep and some thinking.
Without Instability nothing would exist.
You are right that we can learn by means of logical arguments implications of which we were not aware, even though they were always "contained" in the premises. However, you are not making a logical argument here. The only reason the triviality of what you are saying is not immediately apparent isn't because of the structural complexity of the argument but because you couch your pronouncements in obscure metaphorical language - which is precisely the opposite of a logical argument, in which strict, unambiguous formal language is used, with every term having a precise definition.
You say that the goal of every person is to achieve stability. But when we unpack this sentence, it turns out that by "stability" you mean nothing other than fulfillment of a goal. So once the obscure language is peeled away, it turns out that what you said was a simple tautology: your goal is your goal is your goal. Great! Thanks for making that clear.
You can replace logic with hypothesis, and experience with empiricism.
Stability was defined precisely, although I do agree that the text has other things in it that are open to interpretation. A stable state is simply a state of a system that doesn't try to change, aka doesn't change without outside influence. Instability is the opposite of that. And by those precise definitions, an unstable system is trying to achieve a change of its current state by logical necessity, which is a goal by most definitions and therefore a logically necessary one. Not just "your goal is your goal".
At least for most people "your goal is your goal" does not give the same ideas as "trying to achieve a change in an unstable state is a logically necessary goal that isn't a choice". "Your goal is your goal" does not demonstrate any logically necessary goals for anyone, which is the main point of this theory.
Is stability alone a bit of an oversimplification though? Piaget coined the term "equilibration" which is a combination of stabilization and progressive development.
Okay, let's acknowledge that at any given time, the only logically necessary goal an unstable system has is the goal of achieving a change in its current state. But because our intelligence allows us to extrapolate and construct the optimal solution to the problem that in any unstable state we have not achieved our goals, we can show that the achievement of stability is the optimal solution to this problem, although not a logically necessary goal itself.
But since this is a moral system whose purpose is to show that there is a logically necessary goal (unstable systems are trying to achieve a change in their state), and that the optimal way of solving the problem of not having achieved one's logically necessary goals in any unstable state is achieving stability, all the conclusions stay the same. Although I do agree that there is a nuanced difference.
We seem to agree that a system which tries to achieve something has a goal. "Trying to achieve something" can be divided into two categories: "trying to achieve change" and "trying to achieve no change". To me, trying to achieve change is not having achieved your goal, and trying to achieve no change is having achieved your goal. At least with that definition, stability, which is the state you have achieved when you try to achieve no change, is the only possible state where one has achieved one's goal.
Do you have a good definition for a goal? The traditional way of defining it as something "desirable" is insanely vague.
A goal is whatever makes you behave with intent.
Absolutely correct.
Eckhart Tolle, A New Earth: Awakening to Your Life's Purpose:
“... there are two ways of being unhappy. Not getting what you want is one. Getting what you want is the other.”
For me, achieving everything I want and being in a state I don't want to change has meant boredom, not happiness. Happiness requires change.
Boredom is a goal evolution (probably) created for us so that we don't stop trying to do things and thereby waste our resources. You are not in a stable state if you are bored. You have internal conflict in you. And usually boredom eventually wins and makes you behave with intent.
These signals are stimuli. If we receive too few stimuli, we get bored.
People in solitary confinement can go crazy through the lack of stimuli.
People can go crazy in super quiet rooms.
People go crazy if they don't get enough human contact. All these stimuli are necessary to stay sane.
To be honest, I find it very difficult to get out of bed in the morning. If it were up to me, I would stay in bed for the rest of my life. Unfortunately it becomes so uncomfortable after a while (back pain, sore butt, etc.) that I am forced to get out. I spend about 12 hours in bed.
Quoting Qmeri
I don't recognize that (must be getting old). After a while I just want to 'come' and get it over with.
I agree... the human mind is programmed to work in a very specific environment. Lack of stimuli would be such a huge change to that environment that it would be weird if we didn't go crazy.
I must be getting old; after a short period I just want to 'come' and get it over with. BTW, screwing the same woman gets a bit boring after 25 years.
The conclusions don't change, but I never agreed with the conclusion in the first place.
Your argument as I understand it is:
1- People seek change until they achieve stability
2- Therefore people should seek change until they achieve stability (I think is a non sequitur)
3- Therefore we have a system of morality that bypasses Hume's law
You can't reach 3 if 2 is a non sequitur
So it still seems that we disagree about the nature of the word "should". To me, your "moral should" is the same as "according to this objective goal, so and so should". And the should in my system is "according to this subjective goal, so and so should". Therefore my argument is:
1- People try to achieve change until they achieve stability by logical necessity
2- Therefore according to this goal people should achieve change until they achieve stability
3- Therefore we have a system of morality that bypasses Hume's law
But even if we grant you that there is some kind of "moral should" that is not just the same as an objective goal, this system still makes "moral should" systems functionally unnecessary, since it gives a non-arbitrary way to make all possible choices with just subjective normative statements.
At least to me, the only philosophical problem in just going with your personal goals was that you couldn't philosophically justify any goal better than another, and that made your choice of goals arbitrary. With this system, one subjective normative statement becomes not a choice and therefore not arbitrary, and all other subjective normative statements can be derived from that one.
Objective morality was a nice idea that served a functional need to evaluate choices. Hume realized that it couldn't be justified, and now it doesn't need to be justified, since with this system there is not even any functional need for it.
Ok so does the use of should in these two sentences sound EXACTLY the same to you?
1- You should not steal
2- In order to cook a steak you should turn on the stove
I think your entire system is based on the should in sentence #2, which I'm not even sure counts as normative. A normative statement says something about whether a situation is desirable or undesirable (google), or in other words (mine) talks about how things should be / how we want them to be. In #2 this should isn't talking about how things should be or how desirable or undesirable turning on the stove is; it is simply describing HOW to cook a steak. A normative statement doesn't describe how to do something but WHETHER one should or shouldn't do something. Sentence 2 is not an instruction; it simply describes a state of affairs (that in order to cook a steak one would need to turn on the stove).
You can't create a system of morality based on the should in #2. In order to cook a steak one should light a stove, so what? Does that say one should light the stove? No. Does it say one should cook a steak? No. So similarly
1- People seek stability
2.i- People should try to obtain stability (and conversely, people who don't try are immoral)
2.ii- People need to try to obtain stability (by necessity)
I think you need 2.i to make a moral system but I think 2.ii is what is equivalent to 1 and that 2.i is not quite the same statement. Another example
1- I am hungry
2.i- I should eat
2.ii- In order to eat I would need a steak
2.ii makes sense as a statement, is unrelated to 1, and is true. 2.i DOESN'T follow from 1 if by should you mean "morally obliged to" and not "need to".
In order to distinguish these, I suggest we use "should" to mean morally and "need to" or "will" to mean instructionally.
You are just playing with words here. No one would describe a ball rolling downhill as trying to get to a more stable state, except metaphorically. No, a goal, by most definitions, is something that only sentient beings can have. It involves desires, intentions, planning, active pursuit - something that you won't find in non-sentient systems. Most importantly in this context, normativity does not apply to non-sentient systems (and arguably to non-humans). The movement of a ball cannot be inherently right or wrong. Only our goals can have that normative dimension. If you think otherwise, you'll have to argue for that - you cannot just play fast and loose with words and think that is sufficient for an argument.
You are right in that you don't just stop at a simple tautology in your original argument. You do worse than that. By your reasoning, our willful actions can never be wrong. If you do something in fulfillment of your desires, that moves you closer to a state in which you will no longer have those desires and thus no motive to perform any further action - a stable state. So the argument goes. Of course, as someone has pointed out, living systems are only quasi-stable; they have to constantly work to maintain homeostasis, an unstable equilibrium with their environment. In conscious beings, such as humans, desires are a part of that equation. We never cease to have desires; perfect stability is death. But this isn't the biggest problem with your ethical theory. Your theory pretty much abolishes right and wrong. And since we know right and wrong, we know that your theory has to be wrong just for that reason alone.
As I have said many times, it's irrelevant whether this is called a moral system. It is simply a system with which one can make unarbitrary value judgements. Therefore it is at least a functional equivalent to the most important function of a moral system.
So, let's explain it from a completely different perspective. Let's start with a subjective value system which does not have a logically necessary subjective goal:
1. person A has a goal
2. therefore some things are desirable to person A (subjective normative statement)
The only problem with that system is that there is no way to choose one goal over another, since the goals themselves define what is better than what. The desirability of things is based on the person's arbitrary choices.
The only new thing this system adds to that is a logically necessary goal.
1. person A has a logically necessary goal
2. therefore some things are necessarily desirable to person A (subjective normative statement)
In this system the desirability of things is not based on the person's choices, and therefore the desirability of those things for them does not need to be justified.
I'm not trying to make objective normative statements. I don't think that's possible, since I think Hume's guillotine demonstrates that impossibility. I think objective morality is demonstrably a non-thing, and that's why we are incapable of making logical arguments for it. We still have a functional need for an unarbitrary system for making value judgements. This system provides that.
Except your willful actions can still be wrong. If you take an action that makes you temporarily more stable but decreases your stability in the long run, you have objectively made an error according to this system. For example: you hurt the group, and now the group makes sure you face more severe consequences. Or: your being secretly a thief caused your social system to lose trust in one another, and now you have to live in an environment where everything social is harder and more complicated. Even your personal desire might be wrong if it's so hard to fulfil, or otherwise such that it sabotages your ability to achieve stability. That's why I think this system does promote intuitively moral choices in most situations.
Quoting SophistiCat
No one has ever demonstrated any objective right and wrong to be a thing. We know that we have those feelings and intuitions, and we know the evolutionary reasons for having them. For example: acting too selfishly in an intelligent group/tribe makes them band together against you, making you lose no matter how powerful you are; therefore having unselfish feelings and feelings that follow the group's norms gives an evolutionary advantage.
To me, Hume's guillotine demonstrates that objective morality is a non-thing. But we still have a functional need for a system by which we can make unarbitrary value judgements. So, from a previous post:
Quoting Qmeri
It's irrelevant whether this is called a moral system. It is simply a system with which one can make unarbitrary value judgements. Therefore it is at least a functional equivalent to the most important function of a moral system.
"Act such that you ensure you consume the largest amount of cheese possible" is another system that does that. I don't think that would pass as a moral system though
Quoting Qmeri
No it isn't. This isn't a normative statement. Check this http://www.philosophy-index.com/terms/normative.php . This is a statement of fact. Some things are indeed desirable to person A... So what? An answer to the "So what" is a normative statement, e.g.: Thus A should seek those desirable things. "Some things are desirable to A" is akin to "The sky is blue"; it is a statement about a property of an object.
Quoting Qmeri
Agreed. It doesn't provide a moral system though. It demonstrates a rather vague logical necessity that can predict what we will do. It is akin to saying "People do what they want to do the most". Ok but that doesn't have anything to do with morality.
Also, since we agree that seeking stability is a logical necessity, how useful is this system really for making decisions? Even without knowing this system exists or hearing about it, you would have sought stability (necessarily). So unless a more accurate description of "stability" and how to seek it is provided, I can't say this system would be too useful in actual decision-making. Again, it is akin to saying something like "People do what they want to do the most." That is logically necessary (depending on how you define want), and it doesn't actually help anyone to know that fact.
No, that doesn't make unarbitrary value judgements, since its whole premise is arbitrary. The whole point of my system is that its premise is not arbitrary. It is based on a goal we have no matter what we choose. That cheese system is a perfect example of an arbitrary goal you simply chose. Therefore you have to justify your choice, which you can't do.
Quoting khaled
Well, then we disagree about what subjective normative statement means, but that is alright... probably my fault, since I'm not very familiar with the term. It is still irrelevant to my point, though. If you agree that we have a logically necessary goal, then you should also agree that it does not need to be justified like other goals. No matter how obvious and trivial you say it is, the fact that it does not need to be justified as a choice is not obvious to most people. And the fact that you can derive all your other goals and desires, by choosing them insofar as they are choosable to serve it and its optimal achievement, is also not obvious to most people. Therefore it is not an unhelpful realization, and it does serve the function of making unarbitrary value judgements for a person. It still makes objective morality functionally unnecessary.
Is that an understandable way of explaining this goal-system of mine?
I didn't say it was unarbitrary. I agree.
Quoting Qmeri
Yup yup
Quoting Qmeri
But this "derivation" will be different from person to person and from circumstance to circumstance. Without some guidance or rules (which you can't justify) this system will end up with unjustifiable conclusions as well. That people seek stability will not tell you anything beyond that. It won't tell you whether or not the best way to achieve stability is through self centered behavior, charity, communism, capitalism or what
Sure it has the "functional equivalence" of objective moral systems in that it tells you what to do but it's so vague it doesn't actually help. It's like trying to extract some morality out of cogito ergo sum for example.
Well, the derivations can be justified from circumstance to circumstance. It's just complicated, not undoable. Nothing forces this system to make generalizations without acknowledging that they are generalizations and thus not always true. It's still better, in my opinion, than an arbitrary goal system, or a moral system based just on intuition or on some "moral shoulds" which are chosen but not justified. Especially since simple but specific moral rules have been shown by history not to work very well. There are always exceptions where even the most moral-sounding rules actually cause more misery. Like: "give to the poor" might be a good moral principle in certain situations, but not all. And the only rules that always end up causing nice things are very vague ones, like: "increase happiness".
Actually, even intuitive morality is just as complex as mankind itself. What we call "moral" is pretty much always dependent on the context and on what actually causes things like suffering and happiness in any given situation. Therefore the only thing that makes this system more complicated is the added layer that the optimal solution depends on the point of view. It's just the same jump we made in physics when we moved to the relativity of time.
Quoting khaled
The details of what this goal system gives to any person are an empirical, scientific question, since by definition they are not a logical necessity, being dependent on the person and the circumstance. But since I can't demonstrate the empirical evidence for every circumstance on this forum, I only try to demonstrate the logically necessary starting point, which I can demonstrate.
And with that starting point, combined with one's knowledge of one's circumstance, I at least have been able to create for myself a pretty complete set of values without too much effort.
Just because a system is very complicated doesn't mean that it is unhelpful. Politics is complicated, yet we have been able to make useful simplifications and generalizations for it, and for pretty much every other complicated thing we have encountered - including well-established moral systems like utilitarianism, which is almost as complicated and vague as my goal system, yet you are not complaining about that, are you?
Most of what we sense to be right or wrong is more belief than knowledge.
People seek what to believe in. Belief gives them stability. We are all groping around in the dark with belief like a candle to show the way.
How is this "objectively an error?" You have not shown this. Your argument is that a closed system will by necessity converge towards a stable (static) state. This is both wrong and irrelevant, but let's set that aside for now. I just want to emphasize that your argument doesn't say anything about right and wrong - it just says, in the more restricted case, that whatever choices you make, in the long run they will tend to converge towards a state of perfect satisfaction. That is all.
is absolutely right: your "system" doesn't help us make decisions, it just claims to make an objective statement about decision-making in general. It is not a system of ethics, because it cannot prescribe any course of action.
If instead you propose that we ought to optimize our decision-making in order to maximize satisfaction, as measured by some metric (which you will also supply), then there is nothing "logically necessary" about that - that is just another in a long line of ethical systems that will have to compete with the rest.
I will leave you with this admonition from the recently departed philosopher Jaegwon Kim (hat tip to ), because I feel that this is kind of a theme with your posts:
Quoting Jaegwon Kim
In that very same response, khaled says that my system prescribes a course of action for every circumstance - just that it does not give simple universal courses of action like "be charitable" regardless of circumstance.
Quoting khaled
The fact that the optimal course of action is different depending on circumstance is true about every consequentialist moral system. This system is not abnormal in that regard. The two things this system is abnormal in are:
1. it gives a personal goal for everyone, not universal goals
2. it avoids the problem of justifying the choice of this goal by showing that it is unchoosable and therefore doesn't need to be justified as a choice.
But it seems that you will not accept that people have this unchoosable, logically necessary goal. That's all right. I hope that you at least understand what this system is trying to say, even if you're not convinced.
That is the case with each and every moral system
Quoting Qmeri
That is the same with your system. I understand that you begin from a premise that's true by definition, but the problem with moral systems is rarely that the system is unjustified, and more often that it's hard to go from ANY vague premise to concrete reality. "People seek stability" (I still don't consider this a moral premise, but nonetheless) doesn't point to anything specific we should do.
Quoting Qmeri
Again, this is why I asked you to define "stability" in the first place, because you weren't going to use "entropy".
Quoting Qmeri
This system is too SIMPLE to be helpful.
Quoting Qmeri
I complain about utilitarianism all the time. I complain about every well established moral system. Because I don't think any of them have a basis, including yours.
Quoting Qmeri
No, what it does is state that everyone has a vague personal goal of seeking stability, which is true by definition of "stability". That is very different from "gives a personal goal for everyone", because that sounds like saying "everyone should seek stability", when the only thing you can say logically is "everyone seeks stability". Again, those are very different statements.
Quoting Qmeri
Agreed, but as a result the starting premise is true by definition.
"Everyone seeks stability" is like "A bachelor is not married" it is true by definition, however in the same way that "A bachelor is not married" doesn't logically lead to "A bachelor shouldn't be married", "Everyone seeks stability" doesn't lead to "Everyone should seek stability". You need the moral should for the thing to be considered a moral system
Quoting Qmeri
Oh I accept it, I just think it's a useless premise since it's true by definition.
This I disagree with. At least I haven't met any other system that solves, or justifiably bypasses, the problem of justifying one's choice of goals. And I also disagree that this system is hard to use to make concrete decisions. In practice this system simply makes people ask the question: "What actions would make me the most satisfied in the long run?" Since people have much more information about themselves than about the world as a whole, such a question is much easier to turn into concrete actions than something like utilitarianism. Even a moral system with a very specific goal, like "increase capitalism", would be more difficult for the average person to implement, since what would actually increase capitalism is a question that needs expertise.
And if you are talking about moral systems that give rules not based on circumstance, I don't even know where to find those these days. People seem to accept that our moral intuition makes mistakes from time to time and that even religious teachings should be applied according to circumstance. So, while the exact nature of human "stability" and the absolute optimal way to achieve it are complex questions to answer, this system is very simple for any single person to turn into somewhat functional concrete decisions. And I'm not aware of any moral system which does better than that. Optimal solutions need expertise and effort; somewhat working solutions are what is expected from the average person.
"What would make me the most satisfied in the long run?" is not even a new difficult thing to teach to people. People are doing it already. This system simply solves the problem of justifying that choice of a goal.
We apply morality when we are faced with confusing choices.
This doesn't do so either. It says it's a necessity. That's all it does. That's different from "justifying". Example: "People will have children" does not justify having children. "People will kill each other" does not justify murder. "Everyone will try to eat" does not justify eating.
Quoting Qmeri
That question is LITERALLY what a utilitarian would ask though
Quoting Qmeri
Not anymore than
Quoting Qmeri
I never said it justifies anything. It solves the problem of justifying ones choice of goals by bypassing it. Something that is not a choice doesn't need to be justified as a choice.
Quoting khaled
Utilitarianism is "a family of consequentialist ethical theories that promotes actions that maximize happiness and well-being for the majority of a population" (Wikipedia). My system is about personal satisfaction - similar, but still majorly different.
Quoting khaled
Seeking larger things like capitalism does require much more expertise than seeking one's own personal satisfaction, since one naturally has orders of magnitude more information about oneself and the things that affect oneself than about larger things like capitalism. Not to mention that a single person and his satisfaction are a much simpler thing than anything involving large groups, like capitalism. Are you really saying that a non-expert would be able to make decisions about capitalism as good as his decisions about what makes him satisfied? People try to achieve personal satisfaction all their lives; they are relatively trained in it even if they have never thought about it. Therefore this system is very easy to turn into practical solutions.
I don't know which part of what you wrote he had in mind. As I already pointed out, you equivocate between a trivial (but wrong) descriptive statement about decision-making and a bare-bones prescriptive theory. To recap, the descriptive bit is that wish fulfillment necessarily leads towards a permanent state of satisfaction ("stability"). This just tells us what is, but it doesn't say anything about what ought to be. This is not a prescriptive system of ethics.
And then there is the prescriptive part, which says that you ought to make decisions so as to achieve this putative state of stability-nirvana in the most optimal way. This does not follow from the above, for all the usual reasons. You fail to bridge the is-ought gap.
Can be considered a version of hedonism
Quoting Qmeri
Yes, he would botch both
Quoting Qmeri
What if my personal satisfaction requires shooting people?
I guess that is true of any non-expert, regardless of how simple the subject is, but I would still give the advantage to personal satisfaction, since we naturally have some expertise in it.
Quoting khaled
Then you would be in a difficult situation with this system. Since achieving such a desire would be insanely hard without huge retaliation against your long-term satisfaction, your best bet would be to change your desires. This is a good example of how other goals and desires can be derived from this one goal, since they themselves affect how efficiently satisfaction can be achieved in practice.
You can't just say: "According to this system I should do this simply because it's my desire!" The optimal solution according to this system can always be to change one's desire, or to ignore a desire that would hurt one's general long-term satisfaction, instead of acting on it.
Because of this, this system usually ends up with intuitively moral behaviour. Realistic exceptions are usually not cases of overwhelmingly strong insane desires, because those are rare and changeable. The exceptions happen in situations like: my own survival versus others' when resources are low. Or: I am already a dictator, and the people will rebel and kill me if I don't force them to be passive and unfree. Both are situations where long-term satisfaction is very uncertain, and therefore they should be avoided in this system.
If you have a goal, you do have a functional equivalent to a moral system, since you can choose all your actions according to that goal. To me, the biggest problem of ethics has always been the justification of the choice of goals. No other system I have encountered has solved or justifiably bypassed the problem of justifying the choice of goals.
You keep saying this, but three pages into the discussion it makes no more sense than in the beginning. I think we may as well leave it here.