Libertarian free will is impossible
1) A free act must be intentional, which means that the act must be influenced by the agent's intention to do the act.
2) To the extent that the intention does not influence the act, the act cannot be free - because to this extent the act is unintentional.
3) To the extent that the intention influences the act, the act cannot be free either - because the intention is not freely chosen and thus the act is influenced by something that is not freely chosen. (Libertarians would agree with this point but compatibilists would disagree.)
4) So the act is, to the full extent, unfree (in the libertarian sense).
Point 3 might need some explanation. Why is an intention not freely chosen (in the libertarian sense)? Because to freely choose an intention would be a free and therefore intentional act, and an intentional act must be influenced by the agent's intention to do the act (point 1); in this case it would be an intention to choose an intention. But the intention to choose an intention would have to be freely chosen too and therefore would have to be influenced by another intention, and so on - ad infinitum. Obviously, no one can work through an infinite chain of intentions, and so our intentional acts must start from an intention that is not freely chosen.
Libertarian free will is characterized by the first three points while compatibilist free will is characterized by the first two points and the negation of the third point.
Comments (177)
I can believe in the compatibilist notion of freedom - I am free when I can do what I want. What other freedom can we have?
That's not the kind of freedom that makes sense of moral responsibility, and of praise and blame in general.
One way to block the regress is to say that an intention to act is not a separate event from the free act itself, and so there is no need to postulate a second order intention to explain the first, and so on.
In other words, when you act freely, it is not because there's a distinct event which is your intention to act freely, that somehow causes your action; but rather the intention is an aspect or a property of the action itself, and thus not a separable entity.
What use would it bring?
Because it's a fair topic for philosophical discussion? Also, the goal might not be convincing others but rather to test out some ideas in view of reaching a better understanding.
This is my view also. Following Elizabeth Anscombe, it has become much more common to view intentions that occur prior to the initiation of actions on the same model as intentional actions (or intentions in actions as John Searle calls them). This is consistent with Aristotle's claim that actions are the conclusions of acts of practical deliberation. Or, as John McDowell puts it, when one intends to do something, one is thereby doing it, just not right now.
This view of intentions being constitutive parts of the actions that they govern indeed avoids some of the regress problems that afflict voluntarist conceptions of action that picture acts of the will as mental events separate from the actions that they allegedly cause.
Can you explain the "just not right now" part?
Has a philosopher ever had a good idea such that Aristotle hadn't already beaten them to the punch ;-) Maybe some Pre-Socratic...
Yes. The idea is that the formation of an intention amounts to the (practical) rational determination of an orientation of the will. It is akin to the endorsement of a plan that structures behavior, and behavioral dispositions, from then on (and until such a time when what one intended to do has been done, or one has changed one's mind about doing it). On Anscombe's account, acting and reasoning practically aren't separate activities, since actions are internally structured by means-end relations. So, as you are acting intentionally, you are determining what you are doing (the component parts of your action), in accordance with the overarching intention, as means to realizing it or to progressing towards the achievement of its goal.
In view of this, an action can be viewed as furnishing the reasons why an agent is performing the component parts of her action. Hence, if I intend to go to Cuba for my next vacation, the fact that I intend to go to Cuba may motivate me to shop for plane tickets, or to decline an invitation elsewhere. When asked why I am shopping for plane tickets, or declining the invitation, I may reply that it's because I am going to Cuba next month. I need not say that it's because I will be going to Cuba next month. Effectively, the fact that I'm going to Cuba next month already is structuring my action right now through means-end relationships. So, it's similar to saying that I'm breaking eggs because I'm making an omelet. However, when I'm breaking eggs, I've thereby already started making the omelet. But when I'm buying tickets because I'm going to Cuba next month, I'm not going there right now. I'm intending to go next month. But since I'm right now structuring my behavior in view of that end, there also is a clear sense in which my action (i.e. going to Cuba) is present. And this is what motivates the use of the present tense when one is invoking intentions for the future in justifying what one is doing right now.
I hope this isn't too confusing. The main lesson is that although I am not going to Cuba right now (I think McDowell's phrase was "I am doing it... only not yet"), this action has as its constitutive parts present component actions and/or present dispositions that already are structured by it.
On edit: I just found the following quote: "Intention for the here and now is, if you like, a kind of thought. But it is practical in the sense that assenting to such a thought just is beginning to act in a certain way, for instance starting to cross a street; and continuing to assent – not revoking one’s assent – is continuing to act, for instance continuing to cross the street. An intention for the future is, by all means, a thought of the same kind, apart from the time difference. But the way to accommodate that is not to distance intention for the here and now from acting, on the ground that intending purely for the future is not acting, but to conceive an intention for the future as a potential action biding its time." -- John McDowell, Some Remarks on Intention in Action (my emphasis)
That's a nice way to put it! An intention for the future is a potential action that is biding its time.
An act of will is a choice to move in a particular direction. That is all that it is. Perceptions are virtual actions or possible directions of movement. There is nothing free but there is choice.
Quoting Fafner
I suppose this means that the intention does not influence the intentional act and thus denies point 1 of my argument? If an intention does not precede the act then there is no time for the intention to influence the act.
The problem with such an intentional act seems to be that we lose control of the act and thus the act is not free. I think we control our intentional acts through our intentions - by influencing them by our intentions - but if intention does not influence the act then we cannot control the act. The intention then seems to be just an epiphenomenon that is formed along with the act, a feeling of agreement with the act even though we don't have control over the act (and over the intention).
I think the feeling of moral responsibility arises from compassion, which is another feeling. Compatibilist freedom allows that.
Praise and blame can be seen as psychological motivators that evolved thanks to their usefulness. Praise evokes pleasant feelings and thus encourages an action while blame evokes unpleasant feelings and thus discourages an action.
You mean a choice such as the choice of a robot to move to the left rather than to the right because it is programmed to move to the left?
That's as it may be, but I'm not talking about the feeling of moral responsibility; I'm talking about the rational justification of the idea of moral responsibility. Without the assumption of radical freedom the notion of moral responsibility is incoherent; a human being's responsibility for an act reduces to the same kind of responsibility that natural phenomena and animals are thought to have for their acts.
Support?
I think that what really matters for free will is not that your 'intentions' must control your action, but that you should control what you do. And when people control what they do, we say that they behave intentionally, but this doesn't mean that we have to postulate the existence of a distinct psychological state that accompanies actions which is called the intention.
Quoting litewave
This doesn't follow, because you still have the agent himself who can perfectly well control his actions, only not by a mediation of distinct events of 'intention'. You intentionally control your actions simply by doing them.
It does indeed negate the first point, but not because there is no time for a prior intention to influence the action. It is rather because, on my view (which appears to be similar to Fafner's), prior "intentions to act" -- or intentions for the future, as we may call them -- stand to intentional actions in the same sort of causal relation that intentions in action stand in with respect to the intentional actions that manifest them. And this form of causation is quite different from event-event causation, where something that occurs at a time causes something else to occur at a later time (or maybe at the very same time) by virtue of some natural law.
So, on the alternative view, acts of the will aren't mental acts that occur prior to intentional actions (or instantaneously at the same time when the action begins). Rather the intentions themselves are manifestations of our acts of will. As Eric Marcus has put the point, it makes sense to say that, in the case of intentional actions, the whole is the cause of the parts. For instance, the fact that you are making an omelet (which is a manifestation of the orientation of your will at that time) can explain why you are breaking eggs. What is it, then, that explains the fact that you are intentionally making an omelet? The explanation for this cannot be: because you formed the intention to do so. That would be a dummy explanation. ("Why are you making an omelet?" -- "I was caused to do so by my prior intention to make an omelet.")
Rather, when asked why you are doing something, the chain of explanation terminates with your mention of the considerations that, by your own lights, makes it reasonable for you to do it. The mention of those reasons make it intelligible why you are doing it in a manner that is quite different from mentioning prior causes of an event.
That is not to say that mention of prior causes can't be explanatory as well. In order to act for some reason, it must be the case that you were suitably inclined to be moved by some rational considerations, or that you were suitably informed of the relevant features of your practical situation. But such contrastive explanations lay out some of the necessary or enabling conditions for your having exercised your ability to rationally decide what to do. They don't necessarily constitute restrictions on your freedom. This is a point compatibilists usually get right. Where compatibilists often go wrong may be in thinking that such prior conditions necessitate the actual action.
When we turn from the search for necessary conditions for human actions to sufficient conditions for them, then we must inquire into the reasons why the agent is doing what she is doing and such an inquiry into the intelligibility of her behavior is entirely different from the first inquiry. It isn't looking for antecedent "causes" in the past.
A robot is programmed, a human isn't. A human programs the robot; at that point choices are being made.
The idea of moral responsibility must involve feelings, or else no one would care about it (to care means to have feelings), and it seems to me those feelings ultimately boil down to compassion. We regard a person as morally responsible for a sentient being when we feel compassion for that being and expect that person to feel such compassion too and to benefit that being.
Quoting John
Humans have a higher level of consciousness than animals, so we expect humans to feel more compassion than animals and to be more able to benefit other humans or sentient beings. Hence we regard humans as having more moral responsibility than animals (if we assign any moral responsibility to animals at all).
Free will entails having control over your acts, which seems to be missing when your acts are unintentional. Like, slipping on a banana peel - an unintentional and therefore unfree act.
Simply doing an action is not enough to intentionally control it. You may simply slip on a banana peel, and it is you who is doing the slipping, but without an intention to do the action you cannot control the action.
This is not so. On my understanding of 'action', what you described doesn't count as a genuine action. Slipping on a banana peel is not something that you do intentionally, it is something that simply happens to you outside of your control.
I'm not committed to the claim that just any sort of behavior or a bodily movement counts as an action that we perform as agents.
If the intention causes the action instantaneously via some different/timeless way of causation, we can still ask whether the act of forming the intention is caused (via this different/timeless way of causation) by an intention to form that intention, and if it is then the act of forming the intention is intentional too, but this leads us to an infinite regress of intentions in a timeless instant.
Quoting Pierre-Normand
Are you saying here that our actions cause our intentions to do the actions? In that case it is difficult to understand how we control our actions. It is more like our actions control us.
There is no control over actions. There is an ability to attempt to move in a particular direction. Outcomes are always uncertain because of other constraints.
In this case, a choice was made to move in a particular direction. The choice of action is taken, the movement is made, contact is made with a banana peel, and then the outcome.
Choices are made, outcomes are always unpredictable.
I agree but it is so because you don't have an intention to do it. If you do an action without an intention to do it, it is as if the action or event "happens to you", it is outside of your control.
I don't necessarily mean complete control of the action, but the ability to influence the action.
Yes. What is being influenced is the direction the body might move. A person may perceive the banana and try to avoid it, but again the outcome is unpredictable.
Yes, indeed, which is why I am agreeing with you that intending to do something (or forming such an intention) does not require a prior act of intending to form that intention. Also, an intentional action, on the account I have been recommending, isn't a further act causally downstream from the act of intending. Rather, to say that an action is intentional just is a way to characterize the actions of rational animals as the sorts of rationally structured behaviors that they are.
Yes and no. Intentions are internally structured by means-ends relationships. They are teleologically organized, we may say. That's because the parts of our action (their "proper temporal stages", we may call them, although this way of characterizing the "parts" of our action is slightly misleading) are actions done by us as means to realizing what our overarching action is done for the sake of. They are means to our ends.
So, to refer back to my earlier trip-to-Cuba example, if I intend to go to Cuba next month, then this already existing intention can be the cause, in a sense, of my forming today a new intention to book plane tickets. So, whenever A is a means of doing B, then what causes my intending to do A is my intending to do B. The sort of causation that is at play here might be called rational causation. It is because it is rational to do A when one intends to do B that one forms an intention to do A.
Now, this sort of causal explanation of intentional action could be thought to lead to troublesome regresses in two different ways. The first worry is that when the time comes to act on an intention for the future, one must then intend to act on one's prior intention when the time comes. (This is what John Searle views as the "gap" problem.) The second worry is that, if a prior intention to do A explains why one intends to do B, when A is a means to do B, then there ought to be another action C that one intends to do in order to explain why one intends to do B, and so on ad infinitum.
In order to block the first regress, my suggestion (similar to Fafner's) is that intending to do A and doing A intentionally just are two ways to characterize the very same thing. Intentions aren't purely mental acts that stand behind people's intentional actions. They rather are the manner in which such actions are rationally structured. (Compare Gilbert Ryle in The Concept of Mind on "mental processes" and "intelligent acts").
I've also sketched a way to block the second regress through appealing to the considerations in the light of which an agent acts. When a chain of 'why?' questions terminates with the mention of the broadest intentional action (e.g. "Why are you breaking eggs?", ... , "Why are you making an omelet?", ... , "Why are you having guests for dinner?") then the final answer need not refer to an intention to form the intention of having guests for dinner. Rather, it terminates with the mention that one enjoys the company of those particular guests or whatever. (There'd be more to say about the way reasons can rationalize and, at the same time, explain, other actions in a way that is very similar to the way overarching actions rationalize their component actions.)
Free will is ontological freedom in conjunction with will phenomena.
So no, your supposed support is question-begging.
Yes, but there is no implication in the ability to choose a direction of action that there is also control of outcome.
Granted, there are certain authors and groups who suggest we create our worlds, but as you indicate, there does not seem to be any control of outcomes.
There are many, many constraints on actions, all we can do is try to move in a direction.
The problem is that I don't understand how you can control the intentional action if your intention doesn't influence it. The intention on your view seems to be just an epiphenomenon that is formed simultaneously with your action.
Quoting Pierre-Normand
This seems to be ordinary causation where a temporally prior intention (to go to Cuba) causes another intention (to book plane tickets).
Free will is about control. If you don't have control over your action then the action is not freely willed.
And this trying influences the movement. Even if there are other factors that influence the movement, your influence gives you at least partial control over the movement.
You don't need to control the intentional action, since your being engaged in an intentional action already is your controlling what happens with your own body and surroundings. It's not as if you were a puppet and what you need in order to control your action is to be able to pull your own strings. On my view -- which I also take to be broadly consistent with the view of several contemporary philosophers of action -- our intentions aren't epiphenomena that accompany our bodily movements. Rather, they are being manifested in the rational structure of those voluntary movements. (Compare how the weave of a fabric accounts for its tensile strength, but doesn't cause it in the manner of an antecedent condition.)
It might be useful to compare this with Gilbert Ryle's polemic against the "dogma of the Ghost in the Machine". On Ryle's view, when you are talking intelligently you may on occasion, but usually don't, think how you are going to string your words together in order to convey an intelligent thought. Rather, in the usual case, your ability to intelligently come up with the correct words, on the fly, as it were, is partly constitutive of your ability to think out loud. Likewise, in the case of intentional action, your ability to intelligently manipulate the material world around you reveals your activity as intentional and responsive to instrumental reasons (among other sorts of reasons that you are freely endorsing at the time when you are acting).
I would rather say that your prior intention to go to Cuba, as well as your ability to reason instrumentally, is manifested in your now booking the plane tickets (and many other things that you do, or refrain from doing when that would interfere with your plans). This is a manifestation of your practical knowledge (i.e. your knowledge of what it is that you are doing and why you are doing it) being retained over long stretches of time: until such a time, usually, when your intention has been realized.
Compare this with the case of theoretical knowledge. If you acquire on Monday the knowledge that pi is an irrational number and are being asked on Wednesday whether the decimal expansion of pi is periodic, you will say that it isn't. This is a manifestation of your persistent knowledge that pi is irrational (and your ability to rationally infer things from what you know). It would be strange to say that your answer that pi isn't periodic has been caused by whatever caused you, in the past, to believe that pi is irrational. (Though that could be a sensible contrastive explanation to offer to someone who knew that you used to be ignorant of the fact that pi is irrational.) Your answer to the question rather is, in the usual case, best explained as the continued manifestation of a piece of theoretical knowledge that you once acquired and now retain together with your ability to make inferences on its basis.
I would not characterize any influence as control. It's more like intent of movement; so yes, if a person is attempting to move against a wind force, no one force controls the movement, rather they create one holistic event with an unpredictable outcome.
So the support is a question-begging stipulation. Nice.
But if my intention does not influence the action then my being engaged in the action is not controlling the action. I wouldn't even say that the action is intended (intentional).
Quoting Pierre-Normand
I would say that my intention to express something verbally causes the related words to come to my tongue. For example if I intend to communicate to someone that I have the feeling of hunger, this intention draws the word "hunger" from my lexical memory and pushes it to the speech center in my brain which activates my tongue, lips, breathing and so on in such a way that the sound of the word "hunger" is produced. I guess this is roughly the causal neurological process.
Quoting Pierre-Normand
It seems that the part "is manifested in" can be easily substituted with "causes".
Quoting Pierre-Normand
Why? To believe that pi is irrational means to believe that its decimal expansion is infinite and is not periodic. So that which caused me to have this belief also causes (indirectly, through the belief) my answer when I am asked what I believe about pi.
Your stipulation is question-begging. Are you saying that slipping unintentionally on a banana peel is a freely willed action?
Does that involve will phenomena?
What do you mean by "will phenomena"?
Well, you know what we're referring to by the term "will" right? It's kind of hard to talk about free will if we don't know what "will" is.
When we characterize an intentional action we often use a verb phrase that doesn't merely describe the bodily motions of the person who is acting but also the ends that she is pursuing. For instance, we might say that she is (intentionally) making an omelet. And this explains why she is heating the frying pan, breaking eggs, chopping up mushrooms, etc. All that purposeful activity is geared towards realizing the end characterized as "having made an omelet". As long as the person is performing this overarching action intentionally, all the component actions that are means towards that end are being performed by her thanks to her understanding them to be such necessary means. So, I am suggesting that what makes the action intentional under such a description (i.e. "making an omelet") is the fact that the agent is pursuing that goal while being able to deliberate practically towards realizing that goal; that is, judging what the necessary means are and executing them for that reason.
So, yes, you may say that the intention influences the action, but that is merely to say that the agent's self-determination of her own goals and her ability to reason instrumentally towards achieving them, explains how her basic actions are being structured by her while her overarching action progresses.
It seems doubtful to me that there is a wordless thought process that operates upstream from any of our exercises of abilities to use words when we are reasoning or forming intentions. And, in fact, I think there might be evidence to the contrary from cognitive neuroscience. But if you don't accept this, then my Rylean example will not be helpful.
I may not absolutely need, however, to appeal to this Rylean model in order to argue that freely chosen courses of action need not be controlled by prior intentions that themselves are chosen intentionally, as you suggested in your original post (as an alleged requirement of libertarian free will). Even if we construe the forming of an intention as a purely mental act, that occurs prior to acting, and that controls our actions, there still need not be a separate act of choosing to intend in this way in order that the intention be free and that we be responsible for it.
As Fafner suggested, for an intentional action to be free in the relevant sense that secures the agent's responsibility, the source of the intention must be the agent herself rather than antecedent causes that lie beyond the scope of her control and agency. But if, as I suggested, what the formation of an intention essentially reflects is the agent's sensitivity to the practical considerations that, by her own lights, make it reasonable and intelligible that she would pursue this intended course of action, then she is as free as anyone may wish to be when she so intends.
As I also suggested, such an explanation of action looks very much like a compatibilist account. But it is crucially distinguished from standard compatibilist accounts in an important respect. If what grounds the agent's decision is her being sensitive to the features of her practical situation that make it reasonable, by her own lights, that she ought to so act, then her actions aren't determined by prior causes that have receded into the historical past and that therefore lie beyond her control.
It can't be so substituted, since what is being manifested in intentional action is the agent's sensitivity to the reasons why she acts, and the rational outcome of this sensitivity isn't caused by past events. Such a capacity is only, at most, being enabled by the past history of the agent. We are not free to become rational agents, because we are relying on our having suitable biological and social endowments. But when those necessary causal requirements are met, then we acquire the sort of rational and moral autonomy that makes us free and responsible.
That would be correct if we were always being passively caused to acquire our beliefs through the impact of brute external events. But this would be to deny that we have rational abilities to critically assess our beliefs and their sources in such a manner as to secure genuine knowledge. This is why my example was focused on knowledge rather than belief, since our rational ability to know is analogous to our ability to reason practically and determine our ends.
When we have a rational ability to know, then the reasons why we come to endorse specific beliefs and repudiate others can liberate us from the past vagaries that caused us to acquire them in the first place. We can then submit them to rational criticism (which may be a quite trivial business, such as checking for common sources of illusion, or ceasing to trust habitual liars, etc.), and what thereby comes to be the cause of our states of genuine knowledge becomes our own self-determined power to assess the justifications of our beliefs.
"Phenomenon" - simply an occurrence, something that obtains. "Phenomena" is the plural.
So, in summary, your account of free will is that it's real freedom accompanied with things that obtain. Have you thought about submitting it to a philosophical journal?
You should think about trying to edit a journal, given the reading comprehension you're displaying. Good thing we're not attempting anything more complicated than a few words.
Your basic idea seems to be that the relation between the intention and the resulting action is causal (e.g., your talk about influencing our actions and so on), but here's why it can't be causal. Causality is a relation between events that we discover a posteriori through experience. As Hume has taught us, there's no way to deduce a priori the effects from their causes; you have to observe causes and effects and see if they come in constant conjunctions etc.
But now, do we learn by experience that every time when we have a certain sort of intention, we always find ourselves behaving in some corresponding way? Imagine that you have the intention to go outside for a walk. Can you imagine the possibility that while you are having this intention (you are about to go outside), your body suddenly 'decides' to do something else entirely? Of course, all sorts of things can happen: your body may become paralyzed, you can change your mind in the meantime etc. But whatever happens, you are not going to say to yourself "I was wrong, it wasn't an intention to go for a walk after all, but something else". Or suppose that you have intended to go for a walk, but suddenly a burglar appears and at gunpoint forces you to hand him all your money. Would you say that you were wrong about your intention, that you really didn't intend to go for a walk, but actually to hand all your money to a burglar? (Or does this possibility even make sense?) After all, how did you know that you had the intention to go outside, if something else entirely happened to your body as a result?
All this shows that you can always recognize in the intention itself what sort of action is the 'correct' or the 'corresponding' action that would count as the realization of that intention. And this is not a prophecy about the future (since you can intend something, while you fail to realize it for all sorts of reasons - you cannot predict the future in some extraordinary sense just based on your intentions), but rather it is a logical connection that we draw between intentions and the actions that realize them. And so it is wrong to try and explain this relation by postulating the existence of some hidden psychological mechanism where intentions simply cause actions as a matter of contingent psychological fact.
What actually happens -- and this is how we come to have the concepts of intention and free actions -- is that we simply, as a matter of fact, are not being constantly surprised by what our bodies do. We don't first recognize in ourselves a distinct psychological state of 'intention' and then wonder or try to guess what kind of behavior it is going to cause; rather we just act as a matter of course, and make the distinction between voluntary actions and other sorts of unintentional or forced behavior on the basis of this fact. So when we explain our actions by citing our intentions, we are not giving a causal explanation that involves two distinct entities that always coincide for some reason, but we are simply making a logical distinction between two different sorts of behavior: behaviors that we control as agents, and the behaviors that we don't.
Of course it is a matter of experience what sorts of behaviors are and aren't under our control; but the crucial point is that you don't infer that you did something intentionally on the basis of first recognizing that an intention has preceded it, and then conclude for this reason that it must've been you that caused your behavior and not someone else. And compare this to the case of some unknown mechanism in which you try to identify which part causes some other part to move. Here it makes sense to form hypotheses about what causes what, but not in the case of inferring which of our behaviors are voluntary.
I can't see any relevance in what you say here. If libertarian free will is, according to rational thought, inexplicable, and you want to conclude from this that it is impossible, then although you might still have feelings about your own, and about others', moral responsibility, it certainly doesn't follow that those feelings are rationally justifiable.
You need to show how the special idea of moral responsibility, which is necessarily based on the belief that human behavior is not exhaustively determined by natural forces, could be compatible with its being exhaustively determined by natural forces and with the idea that no human decision or act really could have been other than it was.
But all of this can be explained by ordinary temporal causation. Factors like the feeling of hunger, food desires/preferences, belief about what ingredients are available in the kitchen, knowledge of how to make an omelet etc. cause the agent's intention to make an omelet. Then this intention, along with other factors like the knowledge of how to make an omelet, causes the agent to perform actions like heating up the frying pan, breaking eggs, chopping up mushrooms. Logical and physical operations can also be performed by a machine - no libertarian/incompatibilist free will is necessary.
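The deterministic picture just described can be sketched as a toy program (purely illustrative; the factor names and rules here are my own invention, not a claim about anyone's actual psychology):

```python
# Toy illustration of the omelet example: an "intention" is formed
# deterministically from antecedent factors, and then the intention,
# together with know-how, determines the actions performed.
def form_intention(hungry, likes_omelets, has_eggs, knows_recipe):
    # The intention is fixed entirely by these prior states.
    if hungry and likes_omelets and has_eggs and knows_recipe:
        return "make an omelet"
    return None

def act_on(intention, knows_recipe):
    # The formed intention, plus knowledge, determines the actions.
    if intention == "make an omelet" and knows_recipe:
        return ["heat the pan", "break eggs", "chop mushrooms"]
    return []

intention = form_intention(True, True, True, True)
actions = act_on(intention, True)
print(intention)  # make an omelet
print(actions)
```

Nothing in this chain requires an intention that is itself freely chosen; each stage is just a function of what came before.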
Quoting Pierre-Normand
But then the act of forming the intention is not freely willed (since it is not controlled by an intention to do the act) and thus the act that is controlled by the formed intention is controlled by something, an intention, that is not freely willed. For a compatibilist it is not a problem, but the fact that the final act is ultimately controlled by something that is not freely willed would clash with the libertarian concept of free will.
Quoting Pierre-Normand
But what does "being sensitive" mean? It seems we can again explain it causally - being sensitive to something means being able to be influenced by something, for example by the agent's needs, habits, desires, beliefs, knowledge, intentions... These factors influence the agent's actions.
Quoting Pierre-Normand
Of course in real life the process of "critically assessing" our beliefs and their sources may be complicated, but in principle these seem to be logical operations that a machine could also perform, reducible to causal processes.
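In that spirit, here is a minimal sketch of "critically assessing" beliefs as a mechanical operation (the representation of beliefs as strings and the revision rule are entirely made up for illustration): a belief is kept only if the evidence does not contradict it.

```python
# Toy belief revision: drop any belief that the evidence contradicts.
# A purely deterministic procedure - no libertarian free will needed.
def revise(beliefs, evidence):
    # Keep a belief unless the evidence contains its negation.
    return {b for b in beliefs if ("not " + b) not in evidence}

beliefs = {"it is raining", "the shop is open"}
evidence = {"not the shop is open"}
print(revise(beliefs, evidence))  # {'it is raining'}
```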
But since there are stable regularities in nature that we can describe in terms of causation, we can also successfully predict effects from causes. Often there is more than one significant cause in a given situation and then the effect is caused by several causes that may be difficult to identify and thus the natural regularity may be obscured. Science is successful in the causal explanation of such cases too.
Quoting Fafner
Since our behavior is generally influenced not only by our intentions but also by other factors, our intention to do a certain act need not always be followed by that act. But there are plenty of cases where such an act occurs pretty regularly because other factors that might block it are insignificant, for example when I intend to raise my right hand I usually do it successfully.
What do you mean by rational justification of feelings? How can we rationally justify compassion? It seems to be an evolved feeling that is useful in some way. It enables us to form emotional bonds with others and seems to be a part of integrative processes in our brains/minds.
Quoting John
The idea of moral responsibility that is based on the concept of libertarian free will is just as meaningless as libertarian free will. But I offered an idea of moral responsibility that doesn't need libertarian free will.
Let me grant you, for the sake of argument, that libertarian free will isn't required. You are still agreeing with my main point in that case. If the obtaining of all of those causal relations between the agent's prior states of mind (beliefs, desires) and her intentional actions is all that is required for those intentional actions to count as being free, then there is no need for the agent to "control" her intentions for the future and there is no regress looming. This was the main argument that I was making here. The conditions for intentional actions to count as being free don't include a requirement that the intentions themselves be controlled by the agent. It is sufficient that the mental states of the agent that are manifested in the pattern of her intentional actions reflect her sensitivity to whatever features of her practical situation she takes to constitute good reasons for them. So, this is something that I am in broad agreement with many compatibilists about.
Quoting litewave
On my view, the action is indeed controlled by the intention of the agent (and therefore, by the agent). What makes it the case that an action is controlled by an agent precisely is the obtaining of the conditions under which the action is intentional. Granted, there arises a problem for some libertarians who believe that acts of the will only are free if the agent could have acted differently in the exact same circumstances, where those circumstances include all of the agent's states of mind and the dispositions of her character. But that's not my account.
Quoting litewave
Sure, but the relevant factor that I am identifying as the ground of the agent's intentional actions are the features of her practical situation that she can adduce as the reasons why she is doing what she is doing.
If you ask me why I am doing something, you can then challenge my reasons, or try to convince me that I am not acting in a way that furthers my own self-interest, or that my choice has been made on the ground of some false beliefs, etc. But if you tell me that I am not free since my action is being "influenced" by my awareness of the reason that I just gave you, then I can shrug this off. Who would want to be acting "freely" on no rational ground whatsoever?
If, on the other hand, you are arguing that my action isn't free since my being aware of the particular reason why I act depends on my having the beliefs and motivations that I in fact had immediately prior to making my choice, then it seems that I can also shrug this off. This is only relevant if you can show me that some of those beliefs and motivations are states of mind that are interfering with my awareness of better reasons that I might have for acting differently right now. But then, what you really should be offering me are those better reasons. My having prior beliefs only constitutes a limitation (of sorts) on my freedom if the beliefs are false and therefore interfere with my ability to achieve my goals. And likewise regarding misguided motivations or bad character traits that may cloud my judgement regarding what it is that I should set as my goals.
Quoting litewave
To the extent that a robot would perform those rational tasks just as well as a mature human being, then it would also be free. But the mere fact that the computer controlling the robot might run a deterministic algorithm would be irrelevant, on my view. If the robot's emergent behavior is such that it manifests sensitivity to good reasons for acting, and the robot is able to revise its beliefs, and steer and adjust its own motivational states accordingly, then its emergent behavior will not be deterministic even though the mechanism that produces its bodily movements might be. Some emergent properties of complex systems can be indeterministic even if the laws that govern the evolution of their constituent parts aren't. The behaviors of animals or robots characterized in high-level intentional terms are such emergent features that are distinct from the 'raw' bodily motions and the antecedent neural/computational states that generate them.
If the intention is not freely chosen then all of the agent's actions are completely determined by factors that the agent has not freely chosen. This is something that I think libertarians would have a problem with because the idea of libertarian free will seems to require a freedom that can override any determining factors.
What you're failing to see is that the ordinary everyday feeling of being morally responsible is based upon the feeling of being free and the belief that we are in fact free. What I am saying is that the belief that we are free is not rationally justifiable, but that the logic that underlies that belief is nonetheless the logic of libertarian free will.
Quoting litewave
The idea is not nonsensical at all, we all understand it perfectly well. It is just that it is un-analyzable. It's kind of like Zeno's Paradoxes of movement; analysis produces infinite regress. What you have to realize is that being is not thought; being cannot be analyzed rationally without producing seeming paradoxes and aporias, and this is so not just with the case of human freedom.
There is no coherent idea of moral responsibility that "doesn't need libertarian free will"; the idea of responsibility without the latter notion collapses into causal responsibility which is the same as with all natural phenomena, and you have definitely not offered any account of such an idea that "doesn't need libertarian free will".
I agree that this is a problem that afflicts many traditional libertarian accounts of free will. But I think the main assumption that generates this problem is a mistaken assumption that is generally shared by (most) libertarians and (most) compatibilists. And this is the assumption that the antecedent features of the agent's ability for practical deliberation (including her antecedent beliefs and motivations) -- which she had prior to the time when she deliberated and/or chose what to do -- constitute antecedent constraints on her power of deliberation that she has no power over. Where the traditional libertarians and the traditional compatibilists disagree is whether this lack of present control constitutes a threat to the very idea of freedom of choice.
I am actually agreeing with compatibilists that "present control" on (or ability to override, as it were) the causal efficacy of one's own antecedent beliefs and motivations isn't a requirement for freedom and responsibility. One's own antecedent character indeed isn't something that is external to one's own power of agency. It is rather constitutive of it. I am however disagreeing with compatibilists that the manner in which an agent's antecedent beliefs and desires make intelligible the actions that she chooses to do can be construed in a deterministic fashion.
There are two reasons for that. First, just because one's antecedent beliefs and motivations contribute to explaining what one does doesn't generally absolve one from responsibility. And that's because one's responsibility for those features of one's character often extends into the past. If one acts badly because one has acquired a bad habit, one often is responsible for having acquired the bad habit in the first place.
Secondly, and more importantly, in order to explain what someone does on the basis of the beliefs and motivations that she has, it isn't generally sufficient to merely mention those beliefs and motivations as brute facts about her and her antecedent "dispositions". It is also generally necessary, in order to so much as *make sense* of what it is that she is doing (and hence construe her behavior as genuine intentional actions as opposed to mere conditioned responses to present stimuli, say) to get a handle on the reasons why she takes some of her motivations and some of her beliefs to be relevant to her present decision. One does not always act merely on the strength of one's "strongest" antecedent desire, whatever that might mean. Rather, one acts on desires, values or considerations that one takes to highlight specific features of one's practical situation that are salient on rational and/or moral grounds. And this can't generally be explained in terms of "antecedent" states of mind.
An intention may be constrained or influenced, but that doesn't mean that someone cannot try to choose movement in one direction or another. One can try, but because of constraints or influences the probability of success is unknown. Some choices may have more probable outcomes than others (because of influences and constraints), but who knows? Outcomes are always unpredictable.
While some may couch the issue as Free to Choose Outcome, it appears that Ability to Choose Direction of Action may be a better description of human choice.
And the acquisition of the habit was completely determined by factors that the agent has not freely chosen, whether those factors were intentions or whatever else.
Quoting Pierre-Normand
Why else would he act then?
Quoting Pierre-Normand
Why would he do that? If he does it intentionally then he does it because of an intention to do so and that intention controls that action.
Yes, we understand it just as perfectly as we used to understand the idea that the sun moves around the earth (everybody can see that) or that the earth is flat (everybody can see that if the earth was round then those on the bottom would fall off it). These are all feelings we have but they are not an accurate picture of reality.
Quoting John
It is analyzable, as I showed in the OP, and the analysis leads to a contradiction. So the idea of libertarian free will is incoherent.
Quoting John
But there is a difference between natural phenomena and humans: humans have consciousness and capacity for compassion, without which any notion of morality is meaningless.
Some people are compassionate and others are not. If determinism is the case then people cannot be praised for possessing, or blamed for failing to possess, compassion. If a person's moral responsibility depends on their feeling compassion, then only those who feel it could be morally responsible.
Your first point is so irrelevant and your second so lamely wrong, that neither warrants any response.
They can still be praised or blamed. Praise and blame are motivators and feedback signals about whether we have done something good or bad.
Quoting John
The relevance of my first point is this: just because we have a feeling doesn't mean that the feeling is an accurate picture of reality. People have a feeling that the sun moves around the earth, but that doesn't mean that the sun really moves around the earth, even though it was almost universally considered to be so until the Renaissance. I don't deny that we may have a feeling of libertarian free will/ultimate control, I just deny that we have libertarian free will/ultimate control.
Regarding my second point, you have never said what was wrong with my argument in the OP. You just appealed to praise, blame and moral responsibility, but my argument doesn't depend on that.
The question is what is creating feeling (and all other qualia) and why?
The standard answer is that it all emerged out of the Big Bang "soup" as determined by the undefined Laws of Nature which also magically emerged out of the soup. So the essence of the determinism Genesis story is that from nothing came everything, which doesn't explain anything but does satisfy scientists who wish to study humans as mechanical robots.
But the story doesn't quite end there. For some unexplained reason, the Laws of Nature fool us into thinking we have Choice. It is a Buddhist-like illusion. Why? But not all of us have this illusion. Some (the determinists) see through the illusion and know they really have no choice. Why do the Laws of Nature allow some to see through the illusion and not others? Why and how? But the mystery goes deeper. Those who are determined (by the Laws of Nature) to see through the illusion are also determined to act as if they don't. Why?
Inquiring minds might question this Genesis story, but not Determinists. One of faith doesn't question the ways of the Laws of Nature.
I think the feeling of having ultimate control comes from our ignorance about all the factors that influence us and in totality completely determine us. This ignorance creates the impression that we are ultimately in control of our actions. Even when you see through the illusion it is already so hard-wired in us that the feeling remains, like when you rationally recognize an optical illusion but visually it doesn't go away or keeps returning.
Quoting Rich
The illusion may be quite strong and many people just take it for granted and it doesn't even occur to them to question it because they can usually do quite fine with it in everyday life. I took it for granted until some 8 years ago when I had a discussion about free will with my Catholic friends and that compelled me to think about the issue.
Yes, this would be very much akin to Hindu Maya. But what remains to be explained is why Natural Forces would be creating all these illusions while at the same time allowing people such as yourself to see through them, but not people such as myself. Why are the Laws of Nature (God) playing all of these tricks, and precisely which laws are at work?
It is basically absence of knowledge or awareness. We cannot know or be aware of everything. In order to survive, thrive or reproduce we must focus on that which is important for these goals and not get distracted by other stuff. So there is the limit of mental capacity combined with evolutionary pressures. But when people have more time for philosophizing or perhaps undergo an extraordinary experience that makes them question common wisdom, they may realize something radically new.
I would rather say that the relevant factors -- in this case: the fact that the agent isn't engaging in the bad habit for the first time in her life but rather has a history of doing so -- is a manifestation of her free agency that is spread over time.
My main point was to question the picture according to which acts of the human will are decisions that occur in an instant or, at any rate, over a very short period of time when the agent was deliberating. Consider the case of a criminal who plots her crime over a period of months. It is not a good defense for her to say that after having gone to such great lengths to prepare her crime she wasn't emotionally free anymore to refrain from pulling the trigger when the time came. She is not just being blamed for not having changed her mind at the last moment but also for the whole sequence of events -- the premeditation -- that shows what the orientation of her will has been during the protracted period when she was in charge of laying down her own path, as it were, and mustering up the resolve to eventually perform the deed. The very idea that the agent's own character works behind her back, as it were, from moment to moment, to compel her into performing all of her habit-forming actions precisely relies on the dubious picture of instantaneous decisions that is shared by many compatibilists and libertarians alike. The picture is dubious because it separates the agent from the very features of her mind (i.e. her character and habits) that are constitutive of her power of agency. And to operate this separation is incoherent.
As I explained, she acts on the basis of reasons. That doesn't mean that she acts apathetically, as it were, as Mr. Spock maybe would. Rather, the specific desire she chooses to act on, among many competing desires, need not be the desire that is the "strongest" when considered in isolation, but rather the desire that she judges to be the one that it is reasonable to be acting upon in the circumstances. And such a choice is an act of practical reason. This is why when you ask someone why they did something, they seldom respond simply by mentioning a desire, except in the case where nothing more than the satisfaction of subjective personal preference hangs in the balance (e.g. why did you choose this particular flavor of ice cream?). It is not unusual to hear as a response: "I would have much preferred doing ..., but...". What figures in the place of the first ellipsis might be what the agent desired most at the time of acting (because it is an intrinsically appealing act to her) and what figures in the place of the second ellipsis is something that the agent "desires" to do because she *judges* it to be best to act in this way in the light of her duties, values, commitments, etc.
It is not *because* of an intention that an agent acts. What explains why someone acts often is, precisely, her reason. An action shows up as intentional when it is done for some reason or another. This is why when you ask someone why it is that she is doing something, she doesn't usually answer that she intended to do it. This is an example provided by Bede Rundle in Mind in Action, if I remember correctly: Someone asks her neighbor why she is trimming some part of her hedge. The neighbor replies "because I intended to do so". The reason why the answer isn't satisfactory is that the fact that she was doing it intentionally was assumed by the questioner. What the questioner wants to know isn't what sort of thing (e.g. an intention, or a neural event, or something else) causes the action but rather what reason the agent has to do what she did. It's only through the disclosure of this reason that the action will show up intelligibly as the intentional action that it is.
Fafner had usefully explained in an earlier post why actions and the intentions that they manifest are internally (conceptually) connected rather than them being externally (causally) connected through contingent laws of nature that we don't control.
By the way, I just finished reading a nice short paper by Chris Tucker: Agent Causation and the Alleged Impossibility of Rational Free Action. It's just 11 pages long and quite on topic for this thread.
The account of agent-causation that Tucker develops is a little thin on my view, but, to be fair, it's only presented in order to highlight some shortcomings of Galen Strawson's "Basic Argument" against free will and responsibility. Strawson is of course a hard incompatibilist while you yourself are a compatibilist. But Strawson's argument is similar to your own regress argument. It is useful to see how Strawson wields his argument against both compatibilist and incompatibilist accounts of free will. If you attempt to expose flaws in this argument such that compatibilist free will can emerge unscathed, then you may find out that you also open the door to some forms of libertarian free will. And if, on the other hand, you attempt to strengthen the regress argument just enough to rule out agent-causal accounts of free will, you may find out that compatibilist accounts don't escape unscathed either.
Of course people can still have feelings of praise and blame; I wasn't disputing that. What I meant is that without the premise of freedom, moral responsibility, and the attitudes of praise and blame that go with it, cannot be rationally justified.
Quoting litewave
Yes, but you don't have a cogent argument for that denial, which leads to your second point again.
Quoting litewave
I have said what was wrong with your argument. I have said that all that infinite regress arguments show is that reality cannot be adequately modeled dialectically. I believe I mentioned that this is similar to the case with Zeno's Paradoxes of movement.
Consider another example, which is related to the ancient skeptics' denial of the possibility of knowledge. You say that you know, but how do you know that you know, know that you know that you know, and so on, ad infinitum. This is just like your infinite regress argument about deciding: if you decide, do you decide to decide, decide to decide to decide, and so on ad infinitum?
The answer to both is that before an infinite regress of knowing or deciding can be a cogent idea at all, it must first be presumed that you can know or decide. Of course, this can never be proven; it must simply be assumed before anything at all in the way of discourse or moral philosophy can even get started. The point is that, as with all human discourses and sciences, the ground cannot itself be grounded. That it might 'somehow' be able to be is the most persistent intellectual illusion that afflicts mankind. And your OP is a prime example of that illusion. If you are going to respond please read more carefully and don't keep distorting what I have said; it really is tiresome.
I agree that the fact that the agent has been repeatedly engaged in an action says something about her character (her relatively stable properties) but it still doesn't give her ultimate control or responsibility for the action. We engage in the activity of breathing all the time and it doesn't give us ultimate control of our breaths even after many years of breathing. But it says something about our nature (namely that we are breathing creatures), which we cannot freely choose.
Quoting Pierre-Normand
Of course we have different kinds of desires - for carnal pleasures, for compassionate love, for duty, etc. - but they are all motivators for the formation of our intentions and the performing of our actions, and all of those desires (and consequently intentions and actions) are ultimately determined by factors over which we have no control.
Quoting Pierre-Normand
Sure, the questioner obviously assumes that the neighbor's action is intentional, so he is not interested to hear that the neighbor intended to perform the action. He is interested to hear what were the factors that caused/influenced the formation of the intention and consequently the performance of the action.
The article is behind a paywall, but honestly I don't see how the so-called agent causation can save libertarian free will.
Praise and blame are rationally justified as motivators and feedback signals. Moral responsibility is rationally justified as the capacity for compassionate action.
Quoting John
Sorry, I forgot about the Zeno reference. Well, are Zeno's paradoxes still regarded as genuine paradoxes? From what I remember, Zeno posited that movement along a finite route can be cut up into an infinite number of steps (assuming that space and time are infinitely divisible, which, by the way, is denied by some approaches to quantum gravity) and then wondered how movement over an infinite number of steps could ever be completed. The solution lies in the fact that, assuming constant speed, the smaller a step, the shorter the time interval it takes. The steps form an infinite geometric series with a finite sum - problem solved.
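To spell out the arithmetic behind that solution: if the total route would take time $T$ at constant speed, then halving each remaining distance halves the time it takes, so the time intervals form a geometric series whose infinite sum is finite:

$$
\sum_{n=1}^{\infty} \frac{T}{2^{n}} \;=\; T\left(\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots\right) \;=\; T.
$$

So an infinite number of ever-shorter steps is completed in the finite time $T$.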
But how could this help with an infinite series of intentions? We would still have an infinite number of intentions (steps), and they would have to take progressively shorter time intervals so that the total time they take is finite. This is obviously not how it works in reality - no one is conscious of an infinite number of intentions, even if they were squeezed into a finite length of time. Neuroscience actually shows that we are not conscious of what happens during time intervals under the scale of tens of milliseconds. The way our decision making works is that we have a finite number of intentions, the first intention just pops into our minds without our intentionally choosing it and then causes another intention or action - and so our final action is ultimately caused by something we have not freely chosen (the first intention).
Quoting John
Knowing is a conscious experience, it's a quale of consciousness. No infinite series of steps seems necessary in order to have a conscious experience. For example, I experience pain in my knee - I know there is pain in my knee. Then I might take another step and realize that I know that I know there is pain in my knee. Now I know that I know there is pain in my knee. Again, no infinite regress - I just took two steps.
That's unfortunate. However, if you google the four separate words "Galen Strawson basic argument" (without quotes), then the top two results are (1) a link to The Information Philosopher's page on Galen Strawson, where his "Basic Argument" is summarized, and (2) a link to Strawson's own paper The Impossibility of Moral Responsibility where one version of his argument is developed. The reason why I am pointing you to Strawson's argument is that it seems to express the same worry that you are trying to express (in the form of a regress argument) but it doesn't suffer from the same flaw. It doesn't misconstrue an intentional action as a sort of action that must be controlled by a prior intention in order for it to be intentional. And also, as I mentioned already, if you accept Strawson's argument, then it threatens compatibilist free will just as much as it threatens libertarian free will, which is instructive and may motivate you in trying to figure out what's wrong with it.
You still seem to be missing the point; these are justifications in terms of practical, not pure, reason. The point is that you cannot produce a rational model that shows that we are both fully determined in our actions by forces beyond our control, while being at the same time morally responsible for those actions.
Quoting litewave
When moving no one is conscious of an infinite number of steps that "take progressively shorter time intervals", either. My point was that the whole notion of infinite series is flawed and is a chimera of dialectical reason. Since the notion is flawed it cannot be used to prove a point against radical freedom of intention any more than it can against movement. Also, what leads you to believe that intention must be conscious in order to be free?
Quoting litewave
I would not say that knowing is necessarily a conscious experience. But in any case, equally so is the sense of freedom an experience, and certainly at least sometimes conscious. And whether and how you know anything can be doubted and questioned in just the same way as the sense of freedom; precisely by introducing an infinite regress.
We can't choose not to breathe but we can choose not to lie or steal, for instance; at any rate, we can choose not to do those things unless we have some good overriding reasons not to refrain in specific instances. But although it might be begging the question against the ultimate-responsibility skeptic to say so, my only intent here was to dislodge the picture of responsible action according to which the responsibility of the agent only attaches to her momentary choice -- in the instant when she deliberates and acts -- to behave badly and to yield to her bad impulse. We also typically are blaming her for having acquired the bad character that accounts for her having such bad impulses, and that also accounts for her lack of control over them. And this means that we also hold that she was free to choose a different path in the past and not indulge in the behaviors that molded her character in this fashion. That she was thus free to choose a better path in the past must also be argued for separately, of course. But it is important not to assume without argument that acting freely just means being able to make choices regardless of one's present character and motivations.
Yes, but I would argue that what normally terminates the chain of "why?" explanations of rational behavior need not be construed as a mental state that one had prior to deciding what to do, but rather one's ultimate reason for doing so. Hence, imagine that the house is burning and, as you escape, you have the opportunity to grab your sleeping child in her bed. If you are doing so it's because you are valuing her life, say. That would be a reasonable explanation of your action. Your intentional life-saving action is grounded on the value that you ascribe to your child's life. In order to start a regress argument, you would have to argue that you were only thus sensitive to this rational consideration because you are a person who values your children's lives. And you would also have to argue that your being such a person isn't something that you have any "ultimate" control over.
It may be true that you don't have any such "ultimate" control over this in the sense that you were indeed lucky enough not to be raised in circumstances that would have turned you into some sort of a sociopath. A sociopath might be someone who suffers from some form of moral blindness. But it wouldn't make you any freer if you could have, in the past "freely" chosen evenhandedly between becoming a normal compassionate person or a sociopath. Hence, in order to block the regress, it is sufficient to point to the values that motivate you in acting and challenge the proponent of the regress argument to show you why your endorsing such values isn't a rational act.
If the skeptic about ultimate responsibility would rather argue that you weren't free to become such as to be motivated by those values, you can simply reply that you now are free to endorse them, or revise them, on the condition that good reasons might be offered for your doing so. And it is this ability to reassess your own values at each moment of your life (from the time when you became morally and rationally mature) that makes you free and ultimately responsible for your actions.
I was arguing that she wanted to hear the reasons why her neighbor was trimming her hedge. The question of the causal factors that were implicated in her becoming aware of those reasons is a different question. It may relate to the explanation of her having the ability to act intentionally, but not to the explanation why she exercised this ability in the present circumstances. When you ask someone why she is doing something, you don't normally mean to inquire why she had an ability to behave rationally. (That's just because she is a normal human being). You rather are assuming that she is rational and are inquiring about the specific reasons she may have in the present circumstances.
According to the Information Philosopher's website, Strawson's argument says that a free action must be the function of the agent's mental state. I don't know how else to interpret this than that a free action must be influenced by the agent's mental state. Acting for a reason means acting influenced by a reason. And at some point before the action the mental state must include an intention to act, otherwise the action cannot be intentional and thus free.
Quoting Pierre-Normand
How does Strawson's argument threaten compatibilist free will?
The kind of moral responsibility you want is itself irrational and incoherent so there is no rational model that can justify it.
Quoting John
But when you intend to do something you must be conscious of intending to do it - you must be conscious of the intention.
Quoting John
What is flawed about infinite series? It's a mathematical object.
Quoting John
If knowing is unconscious then I can't say that I know, simply because I am unconscious of knowing. No infinite regress appears.
Quoting John
You can stop at something that is evidently true or plausible.
Ultimately, we can't choose anything (because everything we do is ultimately determined by factors over which we have no control). If we have more time or opportunities to do something then there may be a greater probability that we will do it, but ultimately we can't choose it.
Quoting Pierre-Normand
I say that the regress of intentions must be blocked at some point. If the first intention is caused by values, so be it.
Quoting Pierre-Normand
But reasons are causal factors too. Acting for a reason means being influenced by the reason.
I have already agreed that it is not rationally consistent with the scientific view of nature, and that it cannot be justified by pure rationality. But then neither can causality be justified by pure rationality or induction, as Hume showed. The radical account of freedom is not, on account of its being unable to be modeled consistently with causality, incoherent, either. The two models, of natural action and human intentional action, are simply incommensurable.
Quoting litewave
Not at all, humans act in accordance with unconscious intentions all the time.
Quoting litewave
What is flawed is the claim that infinite series reflect the real. They are products of dialectical reasoning which can only model the real to a limited degree.
Quoting litewave
There is no contradiction in saying that you could know without knowing that you know. The sorts of problems you are wrestling with arise when you take simplistic models to be the reality.
Quoting litewave
Exactly, and the reality of radical human freedom is eminently "evidently true or plausible" despite the limitations of the discursive intellect to understand it. It's all about being realistic and acknowledging the limits of your capacity to understand dialectically. Freedom can neither be proven nor disproven; whether you intellectually accept it as a reality or not depends entirely on presuppositions which cannot be justified by discursive reasoning; it's always going to be a leap of faith. On the other hand you cannot really doubt your own freedom and responsibility in your heart and as C S Peirce said:
“Let us not doubt in philosophy what we do not doubt in our hearts.”
This sounds more like a hard determinist line than a compatibilist line. The sort of thing that a compatibilist might say -- someone like Daniel Dennett, for instance -- is that people can choose to perform specific actions, and avoid performing other actions, even though whenever they make such choices there wasn't any possibility for them to have done anything else. It is rather hard determinists who claim that the lack of an ability "to have done otherwise" precludes the ability to choose at all. (Although some compatibilists also assert that possession of the general ability to have done otherwise is consistent with the impossibility of its being exercised in the specific situation).
Hard determinists such as Galen Strawson or Derk Pereboom deny that free will and moral responsibility are compatible with determinism (and also with indeterminism!) but they maintain that praise and blame, reward and punishment, can nevertheless be justified on utilitarian grounds, which seems to be your position. Your position therefore seems to be identical with the position that the philosophers who defend it call hard determinism (or hard incompatibilism), but you would rather call it "compatibilism" for some reason.
Another option is to maintain that free will and responsibility don't require "ultimate responsibility". In order to defend such an option, you still need to contend with Strawson's Basic Argument, it seems to me. It would seem especially important that you would do so on account of the fact that your own regress argument is similar to Strawson's.
Sure, this "blocks" the regress. (Rather: it terminates the regress at some point in the past where the agent wasn't responsible). But it blocks the regress in favor of the hard determinist, and not in favor of the compatibilist. The compatibilist wishes to block the regress in such a manner that the agent's responsibility for her own actions isn't removed!
For sure. Whenever an agent acts intentionally in a context where she might be held (or hold herself) to be responsible for what she did, then her behavior is a manifestation of her being sensitive to some rational consideration or other. When the reason was bad, we may blame her and when the reason was good we may sometimes praise her (if there is some point in doing so). We blame her (or she feels remorse or expresses regrets) when her having had a bad reason for acting reflects badly on her character.
However, reasons thus construed as abstract features of the agent's practical situation that she might rightly or wrongly take to be justifying her behavior aren't antecedent causes of her action in the same way mental states such as beliefs and desires might be. It makes sense to say, retrospectively, that wrong beliefs or questionable motivations might have "caused" you to act badly and that they might, in some circumstances, absolve you in part of your responsibility. (You were not free to do the right thing on account of a lack of knowledge that you couldn't have had, or because of an addiction that clouds your judgement, say). But in the case where you are well informed and don't suffer (through no fault of your own) from some addiction, say, then to say that you had no choice in doing what you did because you were "influenced" by your reason for doing so doesn't seem to make sense. Freedom from rationality isn't freedom at all. It's just being unmoored.
Libertarian free will is a logically contradictory concept because it requires that a free action be intended and not intended. Intended because we cannot control our unintended actions. And not intended because an intended action is controlled by an intention we cannot choose.
Quoting John
If you postulate unconscious intentions you might as well say that unconscious robots have intentions too. But such an intention could hardly be the basis for free will because we cannot control an action when we are not conscious of intending to do it. At most we could helplessly watch it unfold.
Quoting John
You can cut up a finitely long interval into an infinite number of parts and then, using calculus, add them up to get the exact finite length of the interval.
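To make the point above concrete, here is the standard geometric-series example (a textbook illustration added for clarity, not part of the original post): a unit interval cut into infinitely many halves still sums, via calculus, to exactly 1.

```latex
% Zeno-style decomposition of a unit interval into infinitely many parts:
% the partial sums are 1 - 1/2^N, so the infinite series converges to 1.
\[
  \sum_{n=1}^{\infty} \frac{1}{2^{n}}
  \;=\; \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots
  \;=\; \lim_{N\to\infty}\left(1 - \frac{1}{2^{N}}\right)
  \;=\; 1
\]
```

So an infinite series of ever-shorter parts is a perfectly well-defined mathematical object with a finite sum.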
Quoting John
In the case of libertarian free will it would be a leap of faith into logical contradiction, hoping that a triangle is a circle.
Quoting John
Hopefully he was not a flat earth proponent; that belief was banished from (most) people's hearts centuries ago.
In my understanding of compatibilism, a compatibilist admits that all our actions are ultimately determined by factors over which we have no control but he also claims that we have control and thus free will in the sense that we can do what we want to do or what we intend to do without feeling coerced to do it. Thus a compatibilist denies ultimate free will and ultimate responsibility but accepts free will and responsibility in a limited sense. Yeah, it is a utilitarian and pragmatic approach but it does appeal to an important sense of the idea of freedom - freedom to do what we want.
Quoting Pierre-Normand
Well, your reasons include the information you have. A person who acts for wrong (detrimental) reasons should be corrected, through advice, education, therapy, blame or punishment, or his freedom to act should be restrained.
The intention is the choosing; whether that choosing is conscious or not. You are confusing yourself by reifying abstract notions. I don't have any more time or energy to devote to trying to clear up your confusions, since it has become abundantly clear to me that you don't want them to be cleared up.
Most contemporary compatibilists (I say "most", but I don't actually know of any actual exception) defend a view of compatibilist freedom and responsibility that requires more from an agent than her simply being free from the feeling of coercion when she acts. One of the main points of contention between compatibilists and hard determinists concerns the source of our desires, or of our wanting what we want, when we are indeed "doing what we want". Pretty much everyone agrees that, in cases where we don't have any sort of control over our own desires (or over our desires' effectiveness in making us act on them) then we aren't free even if we don't feel coerced.
The classical example concerns the nicotine addict who wishes that she would not desire to smoke but can't help acting on this desire. If we imagine that she indeed is powerless in getting rid of her addiction (and may be blameless for having acquired it, let us suppose) then, in a clear sense, her addiction constitutes a restriction on her freedom. And that is the case even if whenever that person lights up a cigarette nobody else is coercing her and she is doing what she most wants to do at that time.
Harry Frankfurt has famously developed a "Second Order Desire" theory of free will in order to deal with cases such as addiction. This theory ran into new problems so Frankfurt later patched it up into his more recent "Deep Self" view. You can look those up; there is an abundant literature about them. (Many other compatibilists such as John Martin Fischer, Michael Smith, Susan Wolf and Kadri Vihvelin hold roughly similar views). I don't think any one of those compatibilist views is entirely successful, but the main point is that the mere absence of a feeling that one is being coerced by an external agent, or by external circumstances (e.g. from being locked up in a room) isn't sufficient for guaranteeing the sort of freedom that grounds rational and/or moral responsibility. And that this is the case is a point that compatibilist philosophers generally grant to the hard incompatibilist.
Let me also mention another related reason why the simple form of compatibilism that would equate freedom of the will with the absence of felt external coercion, or of external impediments, is unsatisfactory: On such a simple account, mature human beings wouldn't be any more or any less free than dogs and cows are. However, we don't hold dogs and cows morally responsible for what they do (although we may reward or punish them when this is effective). So, there ought to be something more to our own freedom on account of which we can hold ourselves responsible for what we do than merely being uncoerced by external agents or circumstances.
I'll come back to this at a later time.
What you say here leads me to highlight something about litewave's determinism. According to it, there is no source of action that is not an "external coercion" or "external impediment"; whether it is "felt" or not is really a matter of indifference. The idea of a self that originates intention is simply seen as an illusion on that view. The whole notion of moral responsibility is logically inconsistent with such a view, which is what I have been, apparently unsuccessfully, trying to point out. On such a view all circumstances are extenuating circumstances.
The simple fact that ALL our thoughts and intentions can be described as coming from genes and experiences shows that we are not the originators of our thoughts and intentions; it simply makes no sense at all to say that 'we' created them.
Our brains did, sure. But 'we' (the conscious experience) simply witnessed the result; we took no part in it. Just as our bodies pump blood, our brains come up with thoughts entirely outside of our influence.
I feel that this sentence really does eviscerate the notion of 'free will':
'To say that you could have done otherwise, is simply to say that the universe could have been different at that exact point in time'
This claim seems to rest on a misconception regarding the way human beings, qua responsible agents, relate to "the universe". The universe simply is everything there is, including you. You are a flesh and blood animal; you are not a disembodied Cartesian ego: the mere passive spectator of epiphenomena being generated by your brain. So, of course, if you had done something else than you actually did, then the universe would have been different. That doesn't mean the "rest" of the universe would have made you do it or that you didn't have the opportunity and freedom to do it in the actual case where you didn't.
I fully understand that, and actually thought I had basically said it.
My point was that this rules out the notion that at a specific instant in time, any given person could do one thing OR another. It's simply not true: in a given situation (the situation includes your brain state etc.) you can and will do one thing.
As far as I can see, to think otherwise is to presume the existence of magic.
Yes, I think most compatibilists, because of the metaphysical picture that comes bundled up with the uncritically accepted doctrine of universal determinism, generally have a hard time distinguishing what it is in the aetiology of human action that constitutes external constraint to our freedom from what it is that is a constitutive part of (internal to) our power of free agency. It is just very hard for them to see how it is that free agency comes to be constituted, what its biological and social/cultural "determinants" (or enabling conditions) really are.
I say that all thoughts, feelings and 'decisions' are deterministic just like everything else we see; you seem to assume that there is something else, based on... what exactly?
This is precisely what I think is a bit nonsensical. You yourself are not part of the practical situation where you are called to act. That would only make sense if you would picture yourself floating like a ghost alongside yourself at the time of acting and trying to figure out how to pull the strings that animate your own body. What rather constitutes your practical situation, at the time when you must make a responsible decision, are the opportunities open to you, the set of your practical abilities, and the rational considerations that may tell, by your own lights, what it is that it would make sense for you to do.
There are both deterministic and indeterministic systems in the world. From a quantum mechanical perspective, most physical systems are indeterministic although for some practical purposes the indeterminism can be abstracted away (e.g. as is the case for many macroscopic, non-chaotic systems). Maybe more importantly, for purposes of understanding the behavior of rational agents and other animals, the principles that govern them need not reduce to or be explained by the deterministic laws that govern their material constituents. It's usually more a matter of form and function: how those parts normally function together.
What I'M saying is that if you go to any MOMENT, a single moment, not a period of time, a single instant in time, and everything in the whole universe is a particular way, every atom, every quantum state, all of it (obviously including your body and brain), then the thing that happens next is determined by the current setup, and we as agents have NO INFLUENCE over that whatsoever. And that is precisely what most people believe free will is: the capacity to overcome determinism, to break it, to do something outside of what is determined by the universe.
If you don't support that notion of free will, then we have no argument.
But quantum states are quite irrelevant, and they don't allow for the free will people claim exists. To claim that something on the scale of a human brain acts in an indeterministic way is absurd and baseless.
I am unsure if this is really what "most people" believe free will is. Sam Harris for sure seems to believe that this is the conception of free will that must be refuted. He seems to hold to this naive conception very dearly because that saves him the trouble of refuting (or of learning anything about) less philosophically naive conceptions of our ordinary concepts of agency, freedom, and moral or rational responsibility.
You ignored the second part of my comment. Even if human brains can be construed as deterministic systems, that doesn't mean that their functions, let alone the functions of the distributed systems that they are integral parts of (including human bodies, their environments and their cultures) are governed by principles that are reducible to the laws that deterministically govern the behaviors of neurons.
Of course, but why would you assume that's the case? I see no evidence of this ability and thus see no reason to come up with explanations for it.
You seem to assume the ability is there, and then you consider that maybe our brains break determinism to make it happen. Seems pointless to me.
On your Sam Harris comments, I disagree. I don't think he is as ignorant of the more nuanced views as you think; I think he is arguing (as I am) against the only concept of free will worth arguing over. I see no reason to argue against more nuanced philosophical views of free will.
If you don't support this idea of free will, then what's the problem? So basically I don't understand this criticism you gave. Also, wasn't it basically just an argument from authority? You gave no real argument.
What ability don't you see any evidence of? The ability to make justified rational decisions or enlightened moral choices?
But the conception of free will you are arguing against just is sophomoric and ridiculous. No philosopher who I know endorses it. (And I've read papers by well over one hundred philosophers who have published on the topic). Maybe "ordinary people" who are being probed into coming up with explanations regarding the source of their abilities to act responsibly in a universe that is allegedly governed by impersonal forces come up with funny explanations. But just because the explanations aren't very good, or are overly simplistic, doesn't entail that what is explained doesn't exist!
No... straw man alert straw man alert! :P
No no, it's just the choices bit. Of course our actions are influenced by morality and rationality, just like a computer's actions are influenced by energy states, logic circuitry etc. It's a wonderful and beautiful phenomenon.
"But the conception of free will you are arguing against just is sophomoric and ridiculous"
Great, then you agree with me, so why are you arguing against me?
Wait... but you DID defend that sophomoric and ridiculous conception just before... didn't you?
You implied that we could do multiple different things, based on our decisions entirely abstracted from determined reality... didn't you?
I'm confused here... the ability for a person to act responsibly obviously comes from our genes and our culture... right?
What's this got to do with free will/determinism?
Those are two rather different sorts of influences. The deterministic computer isn't responsible for the inputs that are provided to it and those inputs determine the outputs. Hence, we don't hold the computer responsible for having had any choice in churning out those outputs, given the inputs that it didn't have any choice being provided with.
The case where humans are being influenced by principles of rationality or morality is quite different. The principles of rationality are not part of the initial state of the universe or the laws of physics. Either the laws or the initial state could have been different, and this might not have had any relevant impact on what the principles of rationality are. They would remain the same. If you are asked to evaluate whether modus ponens is a valid rule of inference, for instance (or whether its application to some specific bit of practical reasoning is relevant) then it is absolutely no use to inquire about the initial state of the universe or the laws of physics. It is also quite irrelevant to inquire about the causal impacts of the "inputs" to your brain. The principles of rationality aren't inputs to people's brains. This is not where to look in order to understand why people make the choices that they make, in the case where they are acting rationally or morally.
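For readers who want the rule spelled out, here is the standard textbook formulation of modus ponens (added for reference; the validity claim is exactly the point that no fact about physics bears on):

```latex
% Modus ponens: from "P implies Q" and "P", infer "Q".
% Its validity is a matter of logical form, not of physical law.
\[
  \frac{P \to Q \qquad P}{Q}
  \qquad\text{i.e.}\qquad
  \bigl((P \to Q) \land P\bigr) \to Q \ \text{ is a tautology.}
\]
```

Whether this schema is truth-preserving can be settled by a truth table alone, with no reference to any initial state of the universe.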
In the specific case of morality, looking for its source in our evolutionary past, for instance, leads one straight to the commission of the naturalistic fallacy. What makes something worthy of being valued can not be reduced to any sort of causal explanation as to why you actually came to value it.
What if we make a computer that changes its own program, put it in a robot, and it ends up killing people. Is it then personally responsible for its actions? Was it not an unfortunate series of events originating in a lack of foresight on the part of whoever originally made the robot?
I don't see how any of this leads you to human minds having the ability to break free of determinism. My argument is very simple; it is as follows.
There is no evidence whatsoever that human minds have any ability to make decisions outside of deterministic behavior, just like a complex computer.
That's it, that's all I'm arguing. Do you have such evidence? Because most people seem to simply assume we have such a magical ability, presumably based on the way 'choices feel'.
Morality and whatnot is a rabbit hole we should probably not go down here. But suffice to say, I think it's perfectly rational and requires no supernatural shenanigans; morality seems to simply be a desire to improve everyone's well-being. (Yes, I 100% follow Sam Harris' thinking in this area.)
You misunderstand. I argued the exact opposite: that you ought not to construe the free human agent as an entity that can control the unfolding of the universe from some ethereal standpoint outside of it (and from outside of her own body and brain). It is from within the universe, as an integral part of it, that the embodied human agent exerts control over her own future. And you have not shown how the deterministic laws that govern physical systems (while abstracting away most of their significant functional features) preclude human beings from having such abilities to freely and responsibly determine their own futures.
I'm not arguing that they do, nor am I arguing that humans don't have that ability.
I'm arguing that 'humans' are just yet another physical entity, that we are no different from machines, except in complexity. A computer can determine its own future too. The computer's electronics fail, and it dies; its insufficiently constructed hardware determined its future. I.e., part of the computer determined the rest of the computer's future.
So what? Where does free will come into any of this?
Daniel Dennett says that we are "wet robots". He may be called a mechanicist-compatibilist since he endorses a view of the universe (and all the living things in it) as being a set of complicated mechanisms. I don't personally endorse this metaphysical picture, but I think he has a point. The neural circuits inside of our brains (and inside of an intelligent robot's computer) could perfectly well run deterministic algorithms and this fact alone would not have any bearing on our freedom and responsibility. If a robot were to become a killer robot then maybe its creators would share some of the blame. That would not necessarily absolve the robot. Likewise, if you hire a hit man to kill someone, then you are responsible for the murder just as much as the hit man is. Responsibility isn't a buck that must stop in just one single place. It is more a matter of social, moral and political decision how responsibilities for rational actions must spread out among multiple agents (where some of the agents -- parents for instance -- hold some responsibility for raising or supervising other agents on their way to the acquisition of greater rational and moral autonomy).
People's being responsible for what they do therefore isn't independent of the way they are being held responsible according to (sometimes freely endorsed) social norms. But just because those two things are being created together doesn't mean that responsibility isn't real. It just means that it only exists within a determinate social context. (And, analogously, it can also hold within the practical perspective of a single rational agent -- on a desert island, say -- who chooses to lead a rationally integrated life and to hold herself responsible for her own past shortcomings).
He believes society would break down without the delusion, so he hides the truth. He has openly admitted to this. He believes some things are better not known.
He pretends (knowingly) that free will is 'degrees of freedom', while in reality no one means that when they use the term.
I disagree with the conclusions he draws from a lack of free will; he thinks they are bad, I don't.
I agree with everything else you said about responsibility; I hope you didn't think I didn't. It all seems quite obvious to me, but thank you for making it clear anyway.
My argument is more about blame. The only place I see a lack of free will having an effect on how we think is in blame.
I don't blame anymore; I recognize that people's flaws have reasons, reasons beyond their control. 'Bad' people are sick people; they need help, not hatred.
To praise and blame people just is to hold them responsible. When you are holding someone responsible for having acted badly, because in this instance her having acted badly wasn't purely accidental but rather reflects badly on her character, then you are blaming her. If the blame is merited, then it ought to be met by that person with some sense of shame or regret. Feeling ashamed or regretful just is for one to recognize that the blame is merited, and being on that account motivated to make amends and try to do better in the future. The social configuration of those emotions and "reactive attitudes" (as Peter Strawson calls them) not only enables people to make progress on the path towards greater rational and moral autonomy, it is also in part constitutive of those rational and social abilities. She can't be rational who doesn't hold herself responsible (i.e. isn't happy or unhappy about herself) for her successes or mistakes in reasoning. She can't either display moral awareness who wouldn't feel any shame for her own misdeeds.
Rewards and punishments likewise can be social practices that scaffold autonomous abilities and partially constitute them. Parking tickets punish people who park illegally while respecting their autonomous choices to do so in some circumstances. (Sam Harris would probably see this as a second best solution to some form of brainwashing or brain surgery that would entirely remove people's abilities to park illegally in any circumstance.) And, of course, rewards and punishments are effective with dogs who can't be reasoned with, or with children who can't reason yet. But in the latter case, they also instill in them the more mature reactive attitudes that lead them on the path towards greater rational and moral autonomy.
I actually disagree here. To hold someone responsible is not to blame; I distinguish between the two.
We can of course ignore the words and stick to the ideas, so please feel free to substitute better words in :P.
If someone harms me, I hold them responsible. I expect them to apologize if they are a moral agent; I ask them to make amends, all for pragmatic reasons. But I don't blame them; I blame their environment, their imperfect genes, the whole multitude of variables that led them to their current situation.
There is a practical (and thus logical) place for personal responsibility; there is no such place for blame.
Please note, I'm using the word blame here to include a negative emotional response; that's how I distinguish it from responsibility.
Perhaps a better way to say this is that I hold people responsible for their actions, but I recognize they are not ultimately responsible for who/what they are, and thus I feel towards them the same as I would towards any human (or at least I try).
"(Sam Harris would probably see this as a second best solution to some form of brainwashing or brain surgery that would entirely remove people's abilities to park illegally in any circumstance.)"
He wouldn't put it that way. He would likely point out (not to put words in his mouth :S) that what you call 'brainwashing' we call society. Is treating theft like it's a bad thing 'brainwashing' the next generation?
But why are we talking so much about morality? I don't see how it's remotely related.
If the world were populated by beings, it would indeed be objectively better if those beings had a sense of personal responsibility. But what does this have to do with free will?
This is strange, and I doubt that you can really live up to this lofty (however misguided) ideal. Your pragmatism seems to be grounded on a utilitarian reconstruction of the point of ordinary reactive attitudes. But you are claiming (as Harris does) some sort of detached, emotionally withdrawn, purely theoretical stance on your own daily social interactions, rather in the likeness of Star Trek's Mr. Spock.
It seems to me that Sam Harris often fails to distinguish practical from theoretical reason and thereby seeks to substitute, for our practical understanding of our interactions with our fellow human beings, a theoretical understanding of the causes of our behaviors. He wants us to treat each other like we were dogs. Hence, his utilitarianism, combined with this theoretical-instrumentalist stance, yields an understanding of the point of morality rather similar to what is depicted (critically) in Aldous Huxley's Brave New World and (uncritically) in B. F. Skinner's Walden Two. (A Clockwork Orange also comes to mind.)
But obviously I'm not a robot; I still react emotionally all the time. And sometimes I react emotionally against people who could not have done otherwise; it's a difficult instinct to get over.
I don't see any contradiction here, do you?
Reason is reason; there is no theoretical/practical split in reasoning. What are you talking about?
"He wants us to treat each other like we were dogs"
What? No...
Wait... are you some theist who believes we have souls and are not animals and need to be treated with special human dignity and all that stuff? If so, then we have likely hit an impasse. If not, then I'm very confused about where the heck this came from.
I fully stand with Sam's view of morality, and I'm yet to hear a remotely relevant criticism of it. I have, however, heard a LOT of criticism of straw-man versions of it.
So, I'm really confused about this practical/theoretical understanding thing. I'd appreciate it if you could explain further.
The way I see it (this should help you set me straight) is that we all create models of other people's behaviors in our minds (theoretical models); these models are based on our real-world experiences of people (derived practically)...
I don't see the difference; practical reasoning seems to just be intuition? Surely not... You surely are not appealing to intuition over reasoning.
Deciding what to believe isn't the same as deciding what to do. Of course, both of those abilities rest on rational abilities, broadly construed, but they are still distinctive ways of making use of them.
And I don't think anyone decides what to believe; belief just happens. Things are as convincing as they are to us; we can't just decide to believe something that is not convincing to us.
I mean... maaaaaybe it's possible, I guess, but it's certainly not the norm.
Even if you could somehow acquire a perfect "model" of a fellow human being and thereby know exactly how different "interventions" on them would "produce" different behaviors and emotional responses, you would still not know what to do, since this theoretical knowledge would not speak to the reasons why you should intervene in a way that produces such results. The aim of practical reason is to decide what to do, and this is governed just as much by the evaluation of the desirability of the ends as by the effectiveness of the means (or their permissibility). Theoretical reason is completely silent regarding both permissibility (duties, commitments, responsibilities, etc.) and the valuation of ends.
Obviously no one has a perfect model of another human being; we certainly don't have that capacity yet.
So what? I don't see your point.
Sam is making a model just like anyone else; you're not even interacting with it. You're saying it's bad somehow because it's 'theoretical'... It's based on his experiences with people, just like any other model is...
I.e., people don't like being punched; my goal is to safeguard others' wellbeing; therefore I shouldn't punch people. This is a model; it's practical, and it's theoretical. It's not hard science, it's a hypothesis of sorts... You're drawing a strong distinction without any difference, in my eyes.
I see it as the same as anyone else's views in this area, just more coherent and defensible.
I just argued that *even if* you had a perfect predictive/causal model of the behavior of a human being, that still would not tell you how it is that you ought to behave towards her. And that's because knowing how your interactions with (or manipulations of) that human being will affect her doesn't tell you whether you should do it. To gain knowledge of the potential effects of your actions is a matter of theoretical reasoning. To arrive at a decision regarding what it is that you ought to do is a matter of practical reasoning.
You have read The Moral Landscape, right? In that book Harris takes as an unquestionable premise that it is morally better that every sentient creature experience well-being rather than that every sentient creature feel crappy. From this single premise, Harris purports to derive his "moral landscape" utilitarian theory. But the premise can't be supported by empirical scientific investigation. Harris is the first to admit this. In fact he claims that only intuition can support it and that he doesn't know how to respond to someone who would deny it. So, Harris himself recognizes that his utilitarian theory cannot rest entirely on a predictive/causal model of the behavior of human beings (and other sentient creatures). You need, in addition to any such model, however perfect or imperfect it might be, some premise or principle about what it is best to do. But deciding what is best to do, or arguing for the validity of moral principles, is traditionally regarded to be a topic for practical reason. Harris rather regards it as a matter of faith in his own intuition and he simply voices astonishment that anyone else's intuition could be different.
He does insist, though, that any moral system requires some unquestioned premise. But this is merely to assume moral foundationalism. Principles of morality need not be structured similarly to a mathematical axiomatic system. Practical reason need not rest on rules of deductive inference at all.
Quoting Pierre-Normand
I think you're missing the point of a 'should'... There are no 'should's magically floating about in the void. A should must be grounded in a goal.
No one is claiming that anyone 'should' do anything as an abstract absolute; that would be silly.
What Sam claims, and what I believe, is that IF you desire wellbeing, then you should strive to improve it. Which is admittedly a completely obvious point.
So your main criticism is simply not of a claim I'm actually making, nor one that Sam is making, IMO.
"Harris rather regards it as a matter of faith in his own intuition and he simply voices astonishment that anyone else's intuition could be different."
This is absolutely false, and based on your mistaken view of him claiming a 'should' exists where it does not. He makes no claim that anyone should act this way as an absolute rule of reality. He is just saying that it is objectively better if people are well off, which is incredibly obvious.
Intention is a desire that stimulates and directs action. I don't see what you find confusing.
When we talk about coercion we typically mean external coercion but there can also be internal coercion such as addictions, diseases or handicaps that force us to do something or prevent us from doing something, against some of our wishes, and thus constitute impediments to compatibilist free will.
Quoting Pierre-Normand
Sure, as I mentioned, humans have a higher level of consciousness than animals. This entails more capacity for compassion and more sophisticated intelligence, so we regard humans as more morally responsible than animals. Humans are also more free than animals in the sense that human intelligence enables them to find more ways or more effective ways to satisfy their desires and needs.
I pointed out that humans have a higher level of consciousness than animals, with greater capacity for compassion and greater intelligence - and this constitutes grounds for their moral responsibility.
I don't think compatibilists have a problem with distinguishing the constitutive part of free agency - they think that free agency consists in the ability to satisfy desires and carry out intentions. But libertarians surely have this problem because of their insistence on the incoherent concept of ultimate control. And by the way, compatibilists don't require that reality be completely deterministic; they claim that free will is compatible with determinism and that a certain degree of determinism is necessary for the exercise of free will, simply so that we can determine our actions and cause what we want to cause.
If by principles of rationality you mean logic and mathematics then principles of rationality are pretty much features of the universe - that's why science is so successful in predicting the behavior of nature and in harnessing the behavior of nature in technology. The behavior of nature reaches its most complex manifestation in human consciousness and thought. Morality stems from this consciousness and thought, from the feelings of joy and pain, compassion and intelligence.
Quoting Pierre-Normand
We value joy and hate pain; it actually seems to be true by definition: joy is that which is valued (accepted and sought) and pain is that which is hated (resisted and avoided). Evolution tends to arrange things so that what is valued is useful for survival, health and reproduction, while what is hated is the opposite. Thus our values are formed.
If someone tells you that she believes the weather will be rainy tomorrow, you can ask her why she believes it. If she tells you that she intends to spend her next vacation in Pyongyang, you can ask her why she intends to do so. Although in both cases you are expecting her to provide you with some reason, those reasons are also expected to have different forms. In the first case you expect to be given some form of evidence for her belief, while in the second case you expect, in addition to evidence, to learn something about her values, preferences or prior commitments, and/or her abilities and opportunities. Those latter practical considerations, though, are generally irrelevant to the reasons why someone believes something. If she were to tell you that she believes that the weather will be rainy tomorrow because she is fed up with the recent sunny weather, that would be irrational.
The distinction between practical and theoretical reasoning is very commonplace in philosophy (since, at least, Aristotle who has done much to articulate the distinctive forms of practical and theoretical syllogisms), as well as social sciences, economic modelling, mathematical game theory, rational choice theory, cognitive science etc. It is quite uncontroversial that there is such a distinction although the specific manner in which both forms of reasoning are related is a topic of great interest and controversy. I think the onus is on you to explain why you think there is no difference.
It certainly is quite obvious and there indeed is little reason for anyone to deny it. What is questionable is Harris's use of this commonplace assertion as the unique ground for building up an all-encompassing moral theory. Just because pleasure is more fun than pain hardly proves utilitarianism right. Likewise, just because it's better to get your own stuff rather than steal it from someone else hardly means that Ayn Rand's libertarianism is right. That is another theory that is hopelessly simplistic because it strives to reduce all of morality to one single moral consideration.
Finding ways to satisfy your needs and desires just is a small part of the function of practical reason and of the scope of human freedom. Human beings aren't merely more skilled than dogs are at finding food and shelter. They also have the ability to assess what their needs are; when their needs or the needs of others take precedence; what habits and desires are worthy of being cultivated; and lastly, and most importantly, given the desires that they actually have, which ones among them are worthy of being satisfied in particular situations.
I don't know any contemporary compatibilist philosopher who endorses such a simplistic conception of compatibilist free will. Can you point me to one?
Maybe most libertarian philosophers face some problems (such as the luck objection, or the problem of intelligibility) but you haven't shown that any specific libertarian proposal is incoherent. Rather, you have saddled all libertarians with a dilemma regarding the source of "intentions", but you have in the process misconstrued what it is for one to have an intention, as if it were caused by an antecedent act of the will rather than its being itself an act of the will.
If you have a suitably abstract view of "the universe" such that numbers and other abstracta make up an integral part of it, then, maybe, you could argue that principles of theoretical and practical rationality are "parts" of the early universe. But they are not parts of physical laws or of the initial conditions of the universe as those two things are generally conceived to jointly determine human behavior according to the standard deterministic picture. To view them as such would be patent nonsense. It would mean, for instance, that if a friend of yours purports to have proven Goldbach's conjecture, and asks you if her demonstration is sound, then it would make sense to say that you can't know for sure until such a time when physicists have discovered the fundamental laws of nature or what the past state of the universe precisely was. But surely, those two things simply are irrelevant to the question of the soundness of the mathematical proof. Principles of mathematical rationality, though, are quite relevant.
Likewise if someone would seek your advice over some moral dilemma that she is facing: she promised to return a gun that she borrowed from a friend who she suspects might make use of it to commit a crime, say. It wouldn't make any sense, in that case either, to claim that you can't know what it is advisable to do until such a time when physicists have gained a more precise knowledge of the laws of nature or of the past state of the universe. The principles of morality, just like the principles of theoretical rationality, happily abstract away from such contingent facts about the laws of nature and the past "state" of the universe.
Evolution has its own agenda. Human beings have a different agenda. For sure, contingent features of our evolutionary history can account for some tendencies and general abilities that we have. The naturalistic fallacy is the fallacy of inferring what it is that one ought to do on the basis of what it is natural that one would be inclined to do.
No, this is too simplistic; intention is the decision, whether conscious or not, to act on one desire rather than another.
I think you're missing the point of it... You said just before that Sam Harris says we ought to act like x because of y; this is entirely false. He makes no claim that anyone ought to do anything. An ought can't just exist on its own; that makes no sense whatsoever.
An ought MUST be based on a goal.
So, Sam is simply pointing out what the best goal is. The real thing he is doing, though, is claiming that all of morality can be objectively studied; that's really where his point lies.
And I have no idea how anyone can doubt it. Wellbeing is everything that could possibly matter, by definition. To say it's 'simplistic' is to miss the point entirely. It's defined as everything that could matter, so it can hardly miss stuff out, can it?
It doesn't seem like you have read The Moral Landscape then. Or, if you have, you may not have paid sufficient attention. Harris is a moral realist. On his view, what it is that one ought morally to do is an objective fact. Furthermore, on his view, there is no distinction between empirical facts and moral imperatives; there is no is/ought distinction.
"For instance, to say that we ought to treat children with kindness seems identical to saying that everyone will tend to be better off if we do. The person who claims that he does not want to be better off is either wrong about what he does, in fact, want (i.e., he doesn’t know what he’s missing), or he is lying, or he is not making sense." -- Sam Harris, The Moral Landcape
Quoting PeterPants
Since Harris denies the categorical distinction between facts and values, it would not make much sense for him to make values rest only on contingent goals. I've searched the few dozen instances of "goal" in The Moral Landscape and nowhere does he make moral values rest on goals. If anything, he seems to think the contingent goals of human beings ought to be aligned with objective values, although how this would come about he doesn't say. His epistemology of values is non-existent. If queried about the source of his knowledge that his fundamental moral premise is right, he simply asserts that it's a self-evident truth and anyone who disagrees must be confused. So, he's also an ethical intuitionist.
What you say here is circular. If well-being is the best goal (which is itself questionable because of the ambiguity of the notion "well-being", but granted for the sake of the argument) and we ought to act according to the best goal; then the principle that we ought to act according to the best goal is dependent only upon itself; which means it is entirely circular and thus groundless.
So when you say an ought must be based on a goal (the goal is well-being, so we ought to aim for it); your argument also implies that a goal must be based on an ought (we ought to aim for the best goal; which just happens to be well-being; so we ought to aim for that).
And none of this deliberation about what we ought to do makes any sense at all under the presumption of hard determinism, because it would want to claim that all our deliberations, decisions and actions are exhaustively determined by microphysical events which are unknown to us (at least in the 'real time' of their activity) and are thus completely beyond our control.
BTW, it would show some consideration for your readers if you took the trouble to edit your posts. It would also make it seem more like what you write is backed by some real conviction.
In that case it is also useless. In any given practical situation, a real human being -- as opposed to a God who contemplates the whole universe from outside of it, say, and could evaluate how high it ranks on the "moral landscape" -- is faced with several things that matter to her (and, indeed, that ought to matter to her) and that she can't all pursue or salvage at once. Hence she has to make choices.
Utilitarians believe that everything that matters can be ranked on one single scale of 'utility'. But if what matters extends over things that can't be valued on a single scale, then Harris's theory comes crashing down. It provides no guidance for action except in the very simple situations where everything that matters can be neatly quantified on a unique one-dimensional scale. (Classical utilitarians and their consequentialist descendants strive to address those problems, but Harris seems not to have given any thought to them.)
The biggest thing would be that neither I nor Sam is actually claiming that people 'should' act a certain way as an absolute rule; we are saying that people should act a certain way IF they want to achieve a certain outcome. (A truism for sure, right?)
So instead of going down a rabbit hole of 'I didn't say that, I don't believe that', etc., I'll just try to restate what I'm actually arguing for.
1- Wellbeing is defined as 'everything that matters, everything of value, all past, present and future facts that have any effect on the quality of life of all beings' (hence the argument that wellbeing is not necessarily that is pointless, because it's the idea we are using; the word is irrelevant).
2- Morality is about values; in order for anything to have value, it has to have value to something sentient; therefore morality is entirely about wellbeing (as defined above).
3- If we desire more wellbeing, then we ought to try to understand how wellbeing works and how to affect it.
4- It is objectively better to improve wellbeing.
Those claims are really all I'm claiming; most of it is totally obvious and almost silly to even point out.
The main thing people seem to argue against is the notion that we could objectively say some action or desire is better or worse. Do you guys feel this way?
Please note, though, that I'm certainly not claiming that we can know 100% what is the best thing to do; it's likely far too complex to ever get there. But this is not a reason not to work towards a better understanding, obviously. We wouldn't stop doing biology if we found out it was fundamentally impossible to know every biological fact.
How can you say this? The very title of the book you yourself have brought up a number of times is 'The Moral Landscape'. The whole point of the analogy to a landscape is to show how it's a complex system with multiple peaks and troughs; he explicitly says a number of times that there may be many equal peaks, that there may be better or worse ways to get to a peak, etc.
Of course there are multiple ways of being 'well off'. This is the whole point; we should study it. Is it generally preferable to have a life of blissful pleasure, or a life of intellectual stimulation? A life of overcoming hardship, or a life of ease? If either can be equally good given the right upbringing and thinking tools, then which is more sustainable? Etc.
These are moral questions with objective factual answers, whether we can ever know those answers or not.
You might want to rethink this. Things that have a causal impact (positive or negative) on the wellbeing of sentient creatures aren't part of wellbeing any more than a thief or a robbery figures among the stolen goods.
Morality keeps an eye on value; but it is also concerned with rights, obligations, justice, virtue, human dignity, human autonomy, personal relationships, etc.
Oftentimes, doing what is right is done for its own sake rather than for the sake of making something "work". The idea that understanding what one ought to do (or what is right) amounts to understanding how something "works" confuses theoretical rationality with practical rationality. One can understand fairly well how things work and be quite in the dark regarding what to do. Were this not the case, Harris would have no need for his fundamental premise grounded on pure intuition. He rather would be able to demonstrate it through investigating how things work, but this is impossible to do by his own admission.
This can be construed rather tautologically as the claim that it is objectively better to do whatever ought to be done (which your definition of "wellbeing" suggests) or as the claim that when favoring someone's wellbeing (ordinarily construed) conflicts with something else then this something else (e.g. personal duty, respect for human dignity, or justice) must always be sacrificed for the sake of wellbeing.
No. I am a moral realist as are very many philosophers who aren't utilitarians.
Yes, this is just one of the glaring contradictions in Harris's confused theory. To be fair, such inherent contradictions have a tendency to crop up within most efforts to account for the demands of ordinary principles of justice or morality within a strict consequentialist framework.
The trouble with this is that the measure of elevation at some point on the multi-dimensional "moral landscape" represents the aggregate state of wellbeing of all sentient creatures, according to Harris. It follows from this definition that there can't be better or worse ways to reach a given peak consistently with Harris's insistence that wellbeing exhausts the content of morality. If there were better and worse ways, then, presumably, reaching some slightly lower peak would be a better option than reaching a slightly higher neighboring peak, on account of the paths available and the worthiness of the paths. But if that's the case, that means that wellbeing (the elevation of the peaks) is not the only objective moral consideration. Harris's theory is thus self-contradictory.
This is from Wikipedia's entry on compatibilism:
Quoting Pierre-Normand
Intention is a mental state, a desire that stimulates and directs action. If the intention was not caused by an antecedent act of will then it was not intended - it formed in our minds without our intending to do so and thus without our control.
Why not? Physical laws and initial conditions of the universe have mathematical and logical features; they can be accurately described with the mathematical and logical apparatus of science.
Quoting Pierre-Normand
I have no idea what you meant here. Computers - causal machines - can perform logical and mathematical operations, so why would humans need something non-causal to perform such operations?
Quoting Pierre-Normand
We act based on the limited information we have. That doesn't mean that our thought processes are non-causal.
Evolution also allows random mutations - so we can have any values that can possibly happen to us. But natural selection will tend to remove those that are detrimental to survival, health or reproduction.
Any mental state is a decision, because the state is what it is rather than another state.
This hardly answers the charge of naturalistic fallacy. Some inherited aggressive tendencies, which may contribute to explaining why some people commit murder or rape, may have had evolutionary advantages in the past and have been selected for that reason. That doesn't make rape or murder moral. Just because a form of behavior has a tendency to promote survival and reproduction doesn't make such behavior moral.
This is clearly a simplification. This simplified definition is immediately followed by a quote from Schopenhauer. I had asked you if you knew a contemporary compatibilist philosopher who endorses such a simplistic conception of an act of free will. Wikipedia often offers fine explanations, but it is not an actual philosopher. It is a collection of articles written and edited by people like you and me.
The Stanford Encyclopedia of Philosophy is generally a better source.
"It would be misleading to specify a strict definition of free will since in the philosophical work devoted to this notion there is probably no single concept of it. For the most part, what philosophers working on this issue have been hunting for is a feature of agency that is necessary for persons to be morally responsible for their conduct. Different attempts to articulate the conditions for moral responsibility will yield different accounts of the sort of agency required to satisfy those conditions. What we need as a starting point is a malleable notion that focuses upon special features of persons as agents. As a theory-neutral point of departure, then, free will can be defined as the unique ability of persons to exercise control over their conduct in the manner necessary for moral responsibility. Clearly, this definition is too lean when taken as an endpoint; the hard philosophical work is about how best to develop this special kind of control." -- From the SEP entry on Compatibilism
I can grant you that there does not occur an intention, or an intentional action (in progress), without an act of will. But the content of the act of will isn't any different from the content of the intention. If you intend to walk to the corner store in order to buy a dozen eggs, then what might the content of your "act of will" be such that it would "stimulate" the intention? That seems confused. The intention and the act of will just are two names for the very same thing. Can you imagine an act of will that would somehow fail to constitute the corresponding intention?
I wasn't here arguing that human beings "need" something non-causal. I just pointed out what ought to be uncontroversial, but that you seemingly are overlooking: And this is the fact that knowledge of actual physical laws, or of past historical facts, isn't required for one to assess the soundness of a mathematical demonstration. Do you disagree with my example? Do you hold that our judgments regarding the soundness of a putative proof of Goldbach's conjecture ought to be held hostage to potential new discoveries about the laws of physics or about the distant historical past?
What I am arguing is that the limited information that we have regarding our present practical situations often (or at least sometimes) is sufficient for us to make sound rational or moral judgments. And in those cases where such knowledge is sufficient, any sort of information about the fundamental laws of physics (if there are any), or the distant historical past, generally is irrelevant to the correctness of our judgments. Such information may be relevant to explaining how it came about that we acquired our abilities to think rationally and to be swayed by moral considerations, but it has no relevance to our evaluation of the validity and soundness of our judgments.
I am not arguing that our thought process is non-causal, or causal. I am not talking about any sort of process. The working of our brains is causal and "mechanical", in some way. But our judgments are constrained by norms. Knowing how our "thoughts" are caused doesn't tell us whether those thoughts constitute sound judgments any more than knowing the physical principles that govern the behavior of a computer tells us whether or not the program that it runs is buggy. Appeals to rational or functional norms are irreducible to causal explanations. And that's because things that flout norms (buggy computers or irrational agents) still obey the laws of nature perfectly (or rather, their material constituents do). Judgments can be right or wrong, but laws of nature just are what they are. This is why you can't learn right from wrong through studying the laws of nature or the manner in which material things are governed by them.
Evolution promotes values that are beneficial to survival, health and reproduction. Not all of those values can be regarded as moral. Morality is based primarily on one of the values that evolution promotes: compassion. It is an important value that facilitates emotional and cooperative bonds between people and seems to be part of the integrative processes in our minds.
The Wikipedia article gave the essence of compatibilist free will: it is the freedom to act according to one's motives without obstruction. You can analyze and differentiate what the "motives" or "obstructions" are, but compatibilist free will remains compatible with the fact that everything we do is ultimately determined by factors over which we have no control, while libertarian free will is not.
It would be an intention that stimulates an intention. For example, you have the intention to eat eggs. This intention, along with other factors, may stimulate your intention to go to the corner store to buy some eggs.
Quoting Pierre-Normand
If the intention to do an action and the intended action itself are one thing, then I don't see how we can have control of our actions. It seems we would have no time to plan the action or think about it in advance, because when we intend to do it, it is already happening.
So what? Even when we don't know physical laws or the past state of the universe they still influence us and everything we do we do within their context.
But norms are just another factor that causally influences us. They are simply values or habits that influence our mental and physical actions.
You can't cast the content of moral thought solely in evolutionary terms. If you are going to grant that evolutionary pressures account for both moral and immoral tendencies, then you have thereby failed to account for our ability to distinguish between those two sorts of tendencies. And yet, we are able to do so.
While it may be the case that some of our naturally evolved cognitive abilities and emotional tendencies (e.g. a capacity for empathy) are required for sustaining our ability to make moral judgments, those evolved tendencies aren't guaranteed to yield sound moral judgments, or even to have sound moral judgment as their aims, and they indeed often don't. The only ultimate aim that they have is reproductive fitness, and this is something distinct from the aim of morality.
This is only a negative characterization of "compatibilist free will". Of course, saying that one is a compatibilist just is to say that one holds that the capacity of free will isn't inconsistent with universal determinism. When you want to go further than that and specify what it is about free will that characterizes it as such (i.e. as being "free" in the relevant sense) and that is being alleged to be compatible with determinism, then the overwhelming majority of philosophers stress the essential connection of freedom with responsibility. This is also the ground for denying the ascription of free will to non-rational animals, and the reason why absence of compulsion doesn't cut it as a criterion.
The SEP article that I quoted makes this clear. Interestingly enough, while I don't know any contemporary compatibilist philosopher (as apparently you don't either) who doesn't stress this essential connection between freedom, in the relevant sense, and personal responsibility, there are a few libertarian philosophers who seem not to bother too much with it. This is why libertarian accounts sometimes run into the 'luck objection' or the 'problem of intelligibility'. But if your own account of "compatibilist free will" happily dispenses with the necessary connection with responsibility, then that would seem to make it indistinguishable from some crude libertarian accounts. If our deep motives can be necessary outcomes of the impersonal laws of nature then they may just as well be the contingent outcomes of random quantum fluctuations. It wouldn't seem to make any difference as far as our freedom and responsibility are concerned.
When you do X in order to do Y, your doing X can be construed as a manifestation of your intention to do Y. If you are breaking eggs in order to make an omelet, then your breaking eggs isn't merely "caused" by your intention to make an omelet. It is rather part of your action of making an omelet. This is why Elizabeth Anscombe explained intentional actions (in progress) as exercises of practical instrumental rationality. Actions and their "parts" are internally structured by means-end relationships. Furthermore, the instrumental rational abilities that are being exercised while acting are constitutive of the ability to act intentionally at all. If you don't know that (and how) you must break eggs in order to make an omelet, then you don't know how to make an omelet either.
So, the sense in which the intention to Y "causes" the intention to X, in the case where you intend to do X in order to do Y, doesn't refer to the same sort of causal relation that holds between throwing a rock at a window and the window breaking. It is a manifestation of instrumental rationality, and this rational ability is internal to the agent's own ability to Y. So, saying that intending to Y causes your intending to X is rather like saying that your believing that 102 is an even number causes your belief that 102 isn't a prime number. This is misleading, at best. It is better to say that your knowledge that even numbers above 2 aren't prime is constitutive of your ability to judge that 102 (or any other even number above 2) isn't prime. In any case, it should not be construed on the model of causation between events in accordance with natural laws.
I am not arguing that the laws of nature, and whatever may be happening in my brain, or my past education, experience, etc., don't "influence" what I do (whatever those "influences" on my thinking may amount to, exactly). What I am saying is that *all* of those influences and "causal factors" are utterly irrelevant to the question of the validity and soundness of the mathematical demonstration that you are purporting to evaluate. Your only guidance for doing this is your knowledge of sound principles of mathematical reasoning. If someone is going to challenge your understanding of those principles, or the manner in which you are bringing them to bear on a specific problem, then all that is incumbent on that person is to make a rational argument. The laws of physics and the past "causes" of your mental states, whatever they may be, are irrelevant. Only the rational 'form' of your thinking is relevant.
Again, some of those causal antecedents may be necessary in accounting for your having developed the necessary cognitive skills. But once you have developed them to a point sufficient for your becoming intellectually autonomous -- for your having acquired an ability to think rationally -- then, from that point on, what is relevant to governing your thinking is the rational principles that you have come to understand. And those principles are not hostage to any sort of future discovery about the deep workings of the physical universe or the specific inter-connectivity of your brain cells.
This is incorrect, because the manner in which norms of sound reasoning (either practical reasoning or theoretical reasoning) govern our behaviors and our thinking when we understand them is categorically different from the manner in which physical events cause physical effects in accordance with laws of nature.
One intuitive way to highlight this categorical difference between laws and norms is by appeal to the idea of direction-of-fit that was popularized by John L. Austin and his student John Searle, but that apparently traces back to Aquinas. The main idea is very simple. The Earth is caused to orbit the Sun along its actual trajectory in accordance with Newton's laws of motion and of universal gravitation. If, however, there is a deviation between the "laws" and the actual trajectory, then there is something wrong with our understanding of the laws. Our knowledge of them must be revised (although, oftentimes, a merely apparent violation of the laws can be accounted for by some external influence). In any case, the Earth is not breaking any actual law of nature. On the other hand, if a computer, a cat, or a human being behaves in a way that fails to accord with a norm of design, a biological norm, or a norm of reasoning, respectively, then that doesn't show that there is anything wrong with our understanding of the norms. It may rather show that the computer is buggy, the cat is sick, or the human being is irrational.
This is in part why our sensitivity to norms of sound practical reasoning doesn't account for our behaviors being in accord with them (or failing to be in accord with them) in the same way laws of nature account for material effects following material causes in accordance with them.
If evolution produces a desire for sex and a desire for sugar, does that mean we can't distinguish between these two desires? Of course not. Evolution produces different values and we can distinguish them. Compassion, and the morality based on it, is just one of them.
Quoting Pierre-Normand
Morality is related to that. As I said, compassion facilitates bonds between people and is part of our capacity for mental integration. These are abilities that also facilitate survival and reproduction.
"Without obstruction" is the negative part. "According to one's motives" is the positive part.
Quoting Pierre-Normand
Moral responsibility is the ability to behave compassionately. Since it requires a high level of consciousness (including compassion and intelligence), it is primarily expected of humans rather than animals. It becomes a value when evolution reaches the human level.
I see no reason to postulate a different sort of causation. The causation between mental events happens in our heads but it boils down to electromagnetic forces that also work outside our heads. Even if the causation in our heads or minds was based on a different kind of forces it still wouldn't affect my argument in OP.
Quoting Pierre-Normand
I see no problem with it. One thought (mental state) causes another through neural forces. Computers can perform such mathematical and logical operations too.
But we are able to perform such an evaluation thanks to those causal factors. We cannot do it without them.
Quoting Pierre-Normand
And this knowledge is encoded in neural states that exert causal influence on other neural states.
Human behavior can be very complex because it is influenced by many factors. The norms that are incorporated in our minds exert causal influence on our behavior just like gravity exerts causal influence on objects in its field. Just because we may not behave according to the influence of the norms doesn't mean that the norms don't exert causal influence on our behavior. Just because a helium balloon rises and thus goes against the influence of gravity doesn't mean that gravity doesn't exert causal influence on it. It just means, in both cases, that also other causal factors are involved in the situation and the resultant behavior is the result of the joint influence of all factors involved.
Regarding such a distinction, which might plausibly additionally influence my ‘choice’ of action, there immediately occurs the question as to how I might ‘know’ an act to be immoral; and in this connection it can surely be observed that there exist in practice ‘degrees’ of moral awareness. As an example, the novelist Dostoyevsky was absolutely opposed to the idea of capital punishment on the grounds of a personal experience he underwent which, he avowed, had imparted to him a knowledge of the nature of this act as a reality which he could not otherwise have vicariously suspected:
As is well known, when a young man, Dostoyevsky along with some comrades was sentenced to death by firing squad for a political crime. Of course no external observer could vicariously suspect the degree of terror which this prospect had then inculcated within those sentenced during the weeks approaching their execution date. - Dostoyevsky subsequently vividly described, for example, the trance-like state of himself and his comrades during the grisly process of head-shaving and being made to put on mortuary gowns that occurred the night before the appointed execution. – The following morning, of course, at a prearranged moment when the firing squad were about to pull their triggers, an emissary from the Tsar theatrically appeared to proclaim that their sentence had been commuted to imprisonment and thus that the prisoners should be untied from their stakes - by which time one of the members of the group had succumbed to insanity from which they were never to recover.
Dostoyevsky thereafter maintained that he absolutely knew as a result of this experience that his life-long implacable opposition to capital punishment constituted a type of objective moral knowledge regarding its iniquity, capable of transcending what in others might effectively be unconscious influences on their personal perception of its justification.
Multitudes of individuals will of course by chance have encountered various such profound experiences relating to all sorts of moral dilemmas, most of them being anonymous and unreported. But on the basis of such reported experience there does exist a theory that the idea of a moral autonomy ultimately capable of transcending neural determinants may be valid and therefore that the free will problem regarding moral choice is really one related to the nature of moral knowledge rather than causality. The convoluted and less significant question of amoral autonomy is regarded by this theory as being a logically distinct matter the investigation of which is more suited to neural science than philosophy.
Did this a bit rushed – but my excuse is I’m plumb out of time right now!
I don't understand why Dostoyevsky's aversion to the death penalty would need a non-neural or non-causal explanation. Isn't it obvious that it was caused by his terrifying experience?
Well, by acquiring knowledge our understanding and abilities expand and that may help us in living a more fulfilled life, in finding more effective ways of satisfying our desires and needs, or in being more compassionate and moral. That would be an expansion in compatibilist freedom.
Consequently this theory regards any attempt to investigate the possible validity of Free Will by means of a type of causal analysis – e.g. by considering issues related to the complex and perhaps inseparable interplay occurring between individual neural idiosyncrasy and environment – to be appropriate only to the question of free will as it relates to amoral choice, and to be a methodology irrelevant to an investigation of the possibility of moral autonomy, the latter problem being viewed as a subset of the problem of moral knowledge.
In this regard, the questions viewed as relevant to the consideration of the possibility of moral free will are:
1) Is the idea of objective morality in terms of there existing a set of objective moral values meaningful? - The theory considers the principle of moral relativism to be irreconcilable with the concept of moral free will.
2) Given the validity of the concept of moral objectivity, in terms of what then would such objective moral knowledge consist?
3) How in principle could such knowledge be acquired and then permit a capacity of irreducible personal moral autonomy? - The idea is that such moral knowledge, by virtue of its specific nature, could be demonstrated to be inimical to qualification by neural idiosyncrasy, in that the latter may determine impulses capable only of displacing casuistic, intellectually accepted values, as distinct from being able to displace this particular type of non-intellectual knowledge gained by actual personal experience.
How so? The experience causes moral knowledge and the moral knowledge causes moral behavior.
Quoting Robert Lockhart
Assuming that human minds or brains are similar in a significant way - enabling a high level of consciousness characterized by sensitivity to suffering and joy and by compassion and intelligence - we can generalize moral values as universal human values.
Quoting Robert Lockhart
It would consist in sensitivity to suffering and joy, in compassion and intelligence.
Quoting Robert Lockhart
The human mind in general provides capacity for moral knowledge but this capacity might need to be complemented by reason and experience.
While a capacity for moral behaviour is in principle contingent on possessing experientially gained moral knowledge, so that the latter is effectively the ‘cause’ of the former, nonetheless such knowledge, even though it renders moral transgression on the part of the individual impossible and moral observance inevitable, does not in practice ‘causally determine’ morally observant behaviour, but in reality enables the individual to autonomously elect such. (The distinction is a little beyond the scope of this post.) Thus a description of the type of non-causal relation existing between moral knowledge and behaviour would not be susceptible to the methodology of causal analysis, appropriate as the latter type of reasoning is to an investigation of the question of amoral autonomy – Like whether I might be able to autonomously choose to bake pasta tonight.
As a prelude to commencing the Islamic act of worship every individual is of course first required to kneel and then to bow their head until they are physically touching the floor, this requirement also having to be practised in the company of serried anonymous ranks of others similarly prostrating themselves. The act unavoidably entails – backside inescapably stuck up in the air and all that - the adoption of what is to each individual a personally and explicitly undignified and rather ignominious posture. But of course, that is part of the very point of the exercise – through the acceptance of such a requirement assenting thereby to the personal disavowal, without such acceptance having a bearing on personal self-respect, of the visceral instinct we all possess towards achieving superiority over others, and so via such personal education towards acquiring a moral attitude transcending our neural ‘programming’. The idea is that experience of such a type does not act to 're-program' instincts within the individual, but that instead it acts to provide knowledge capable of releasing the mind from its previously programmed state and substituting instead a conscious awareness of values enabling a moral autonomy. – A monk's tonsure is an obviously analogous act.
The crucial point about all this is that it is personal experience itself which confers moral awareness, such knowledge being in principle inimical to being perceived by intellectual reasoning.
NB. Of course the motive in an act determines its ultimate validity. – The acts referred to could for example be practised as no more than a badge of ‘gang identity’.