Causality
One problem I have with a lot of discussion about causality is that it looks like a lot of bottom-up billiard-ball theorizing. I have a gnawing suspicion that discussions of causality, particularly in the analytic tradition, fall out of the this-ball-hits-that-ball Newtonian materialism that (should have) died a long time ago. It's not that there's an unspoken premise based in that materialism, but that that picture of the world has a kind of "atmosphere" that influences how we think of causality. I'm interested in a metaphysical view of the matter.
I'm not going to ask, "What does it mean for one thing to cause another?" That very question, in itself, seems rooted in the picture I just described. "One thing causing another," like billiard balls hitting each other. There is no reason why that should be a paradigmatic case of causation, and this becomes clear when you consider cases like "This happened to this organism because it exists in this ecosystem." When you consider something like that, the one-thing-causing-another approach becomes hideously complicated (take a look at Mackie's work if you don't believe me). So let's find a better question.
I think this is a better question: what is a way of approaching causality that enables us to understand diverse varieties of cause in a unifying way? I'm not looking for a Grand Theory of All Causation. I want a lens or perspective that lets me look at things from "one billiard ball hits another" to "the economy does this" in a way that makes those things show up as causal, without necessarily finding some unique single property that they all have.
The best swipe I have at it right now is to say that causation is a tension between those properties of a thing that (more or less) depend on its present context, and those properties of a thing that are (more or less) independent of its present context. For example, you've got a Euclidean plane with two lines that aren't parallel. You can't see the entire infinite extent of those lines, but you know they have to meet somewhere because they're not parallel. This is an interaction between the relation between those lines and the postulates that define the Euclidean plane.
(I grant you that the distinction between intrinsic and extrinsic or inherent and relational properties is problematic at best, but this is part of my point; the tension between "hardy" properties that are resistant to context and "fluid" properties that mostly depend on context is the tension the world is working through when causation happens)
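The Euclidean example above can be made concrete with a small sketch (my own illustration, nothing more): the intersection is forced into existence by the plane's postulates, no matter how far away it lies.

```python
# Two lines in slope-intercept form: y = m*x + b.
# If the slopes differ (the lines are not parallel), the meeting
# point exists by necessity, however distant it may be.

def intersection(m1, b1, m2, b2):
    """Return the (x, y) meeting point of two non-parallel lines."""
    if m1 == m2:
        raise ValueError("parallel lines never meet in the Euclidean plane")
    x = (b2 - b1) / (m1 - m2)
    y = m1 * x + b1
    return x, y

# Even nearly-parallel lines must meet somewhere, just very far away:
print(intersection(1.0, 0.0, 1.001, 1.0))
```

The "hardy" property here is each line's slope; the "fluid" one is the relation between the two lines, and the intersection falls out of the interplay between them and the plane's axioms.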
I'm aware that this is vague. It's supposed to be. I'm not advancing a completed thesis here. I'm trying to get a discussion rolling, and the above is fodder for it. Any takers?
I think I'd modify this a bit to recognize that there is never not a context to begin with, so that it's no longer a question of 'independence', but of what variety of context is in play. The term I prefer a bit better would be that of conditioning (insofar as one cannot separate a thing from its conditions). Causality - or at least efficient causality - is always conditioned by the environment (or 'system') in which any causal event takes place: change the conditions, and the 'cause' might act entirely differently (to the point where it may not act to cause anything at all).
Perhaps another nice way to think about efficient causality is not as something inexorable (Cause A will always necessarily give rise to Effect B), but as a trigger that is effective only in the right conditions (so efficient causality can look inexorable when conditions are stable). Mario Bunge gives the example of an arrow released from a bow, which requires the interplay between both cause and process for any effect to take hold: "The act of releasing the bow is usually regarded as the cause of the arrow’s motion, or, better, of its acceleration; but the arrow will not start moving unless a certain amount of (potential elastic) energy has been previously stored in the bow by bending it; the cause (releasing the bow) triggers the process but does not determine it entirely. In general, efficient causes are effective solely to the extent to which they trigger, enhance, or dampen inner processes; in short, extrinsic (efficient) causes act, so to say, by riding on inner processes." (Causality and Modern Science, p. 195).
I think once you start thinking of causes as embedded in larger contexts, systems, or processes which in turn condition efficient causality, you can start to rethink a lot of the ways in which classical problems of causality are generally posed. For example, questions around infinite 'causal chains' (and the 'free will' vs. 'determinism' baggage carried in their wake) can come to be seen as particularly naive, insofar as it is no longer an issue of unbreakable 'chains' of causality but of the complex interplay of processes and causes which can overlap, interfere, condition and change one another (causality becomes 3D instead of 2D, or even multi-dimensional across different scales). The point would not be to 'dismiss' efficient causality - as if one could ever do so - but to instead situate its instances and recognize the contingencies and necessities which always condition its operation.
More precisely, there is such a thing as independence, simply because not every change in context obliterates a given particular. But independence is not absolute, and neither is context. That's why I prefer to see causation as a process of the world working-through the tension between particulars and their contexts.
So I think, rather than defining causality in a metaphysical way I'd take a stab at saying it is more a feature of our knowledge, how we create knowledge, and what counts as a satisfying explanation of change over time. I think, then, that the question of understanding causality becomes one of characterizing time.
A sign of this is that conceptually, cause and effect are interdependent just as subject and object are.
This was one of the catalysts for the OP. I'm thinking of the tension between "eternal" laws (e.g. math, physics) and events embedded in time. Becoming, on this view, isn't a "falling away" from Being. It's rather the tension between the ontic and the ontological, to use Heidegger's terminology. Particular vs. universal, general vs. specific. I think of causality as one way that this tension unfolds, the constant working-through and tug back and forth between individual and context. Causality is not a creature of becoming opposed to being, but is the tension between them - or, to shy away from a strict definition, is one way in which that tension "shows up" for us.
Causal processes take time to happen, of course, so they're clearly related. Time is a constitutive world-feature, inasmuch as you can't have a world without it, or at least, not a world like this one. I think time is something that characterizes our lived experience as creatures subject to becoming, and here I'm talking about felt time, the feeling you have right now of time "passing."
Agreed, but the point is rather that the specific phenomenon of efficient causality is what takes place precisely at the intersection or the meeting point between the two: efficient causality just is cause + system. It's not chicken or egg: it's chicken and egg, to mess up the metaphor. The temporality involved is not linear but contemporaneous. Or, to paraphrase Kant, causes without conditions are impotent, conditions without causes are ineffectual. Perhaps part of the problem is grammatical, insofar as it's too easy to speak of 'cause' as an independent entity, whereas the formula ought to be, in set theoretic terms: efficient causality = {cause, condition}.
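To render the {cause, condition} formula in a crude toy of my own (not anything rigorous), echoing Bunge's bow-and-arrow example: the same trigger produces the effect only when the conditioning state holds.

```python
# Toy rendering of efficient causality = {cause, condition}:
# the trigger (releasing the bow) is effective only against the
# right background state (energy already stored by drawing the bow).

def release(bow_drawn: bool) -> str:
    """Same cause, different conditions, different outcome."""
    if bow_drawn:
        return "arrow flies"    # the cause rides on the stored inner process
    return "nothing happens"    # a cause without its condition is impotent

print(release(True))
print(release(False))
```

The point the sketch makes is only that 'cause' never appears as an argument on its own: the function's output depends on the pair, never on the trigger alone.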
The question is whether contexts themselves consist in nothing more than networks of efficient causal interactions. Is the whole really something more than the sum of all the parts and their efficiently causal interactions or does it merely appear to be so due to the limitations of our understanding?
That being said - and this is coming from someone with an analytic background - I think it's a waste of time to try and "do causality" with FOPL. You can do specific instances of causality and understand certain laws using math (physics!), but FOPL will get you nowhere. People have tried that for a long time, and it runs into an intractable tangle as soon as things get complex (again, Mackie's work and those who responded to him). I think that, if we want a general understanding and perspective (not "theory") on causality, we need something vague enough to make various causal processes "show up for us" as causal, but specific enough to throw some light on them. That's a delicate dance.
Quoting StreetlightX
In addition to grammar, one big problem is that everyday paradigmatic instances of causation that are readily visible to us fit the "billiard ball" image quite well: rock hits window, window breaks. How about evolutionary biology or something like that? A notion of causality derived from the former will break down when applied to the latter, as we've both observed in this thread.
Efficient causation, I think, is basically a heuristic. An object interacts with its environing system and change happens. We observe this interaction and parse out a relevant characteristic of the object/system interaction that we're interested in (usually something we want to control). This characteristic is then sedimented, in our minds at least, as the efficient cause.
This goes back to a debate you and I had on the old board about causation. I posited that any event is the result of a vast history of changes that eventually led to that event in a manner reminiscent of light cones in physics. You stated that there was nothing in my account that qualified as "causation." I think I agree with you that there was no efficient causation in it, but I don't think that efficient causation is a paradigm of cause - at most, it's a special case that we happen to be familiar with because of the kinds of organisms we are. The problem with my old account was that I was taking a FOPL-esque approach, which, as I said above, is not useful in this context.
Maybe one approach would be to ask, instead of the cause of why something happens, for the reason that it happens. They're sometimes the same, but they're sometimes not. One example that might draw out the difference is the two possible responses to the question 'why is the water boiling?' One answer is: water boils at 100 degrees Celsius, and it's being heated by an electric current. Another answer is: I am boiling the water because I intend to make tea. It's a trivial example but nevertheless makes a distinction between efficient and formal causation in a pragmatic way.
The first kind of causation is generally more specific and is often the subject of scientific analysis - what causes this malady; why do continents move; why do planetary orbits have the form they do. I think that's the source of a lot of your 'bottom-up billiard-ball theorizing'.
Whereas contemplation of the 'principle of sufficient reason' is obviously going to be a lot more speculative, because you might encounter questions of the kind 'why is there something rather than nothing' or 'why did intelligent life evolve', and other such questions which are generally in the province of metaphysics and philosophy rather than science.
Heh, you wouldn't catch me dead doing almost anything with FOPL, so I'm on board with you on this one.
Quoting Pneumenon
To follow through on this - along with your intuition that efficient causation is a 'heuristic' - I would put it that these 'readily visible' instances of causation are such precisely because they take place against a stable 'background' of no-less-causal factors that are bracketed out for the sake of our analysis. Aristotle's fourfold catalog of causality - formal, efficient, final and material - was meant to get precisely at this point: yes, the billiard ball hit the other billiard ball and 'causes' it to move, but causally significant too is, say, the material integrity of the ball such that it doesn't shatter when hit, the tactile qualities of the table surface which enable the balls to move smoothly across it, the temperature of the room - not situated on the sun - so that the whole set-up doesn't simply melt.
Seen from this angle, there isn't anything that isn't causally significant in this scenario, and the point of analysis is to make a decision as to what factors we want to hold stable - what we want to background in order to bring out a foreground - such that we may make a conclusion about something or another. Hence the qualification of scientific experiments that results are always derived ceteris paribus - 'all other things held equal'. This 'holding stable of a background' is what Apo consistently calls an 'epistemic cut', and it marks our own imbrication in our objects of analysis. Susan Oyama, writing specifically in the context of evolutionary biology in fact (her question not being that of efficient causality per se, but on causality tout court), makes this point very nicely:
"To gain information we need to specify a context and a set of possibilities. It is in this sense that organisms generate information and it is in much the same sense that scientists do. Events do not carry already existing information about their effects from one place to the next, the way we used to think copies of objects had to travel to our minds for us to perceive them. They are given meaning by what they distinguish. Thus we find that a gene has different effects in different tissues and at different times, a stimulus calls out different responses, including no response, at different times or in different creatures, and an observation that is meaningless or anomalous at one stage of an investigation or to one person becomes definitive under other circumstances. A difference that makes a difference at one level of analysis, furthermore, may or may not make a difference at another." (The Ontogeny of Information, p. 185)
Importantly, this need to specify context does not mean that efficient causation is a mere (human?) heuristic; as Oyama notes, this selectivity is something that nature already does: "For coherent integration to be accomplished, an investigator must do by will and wit what the developing organism does by emerging nature: sort out levels and functions and keep sources, interactive effects, and processes straight." The affective capacities of 'natural' things already carry out a selection, as it were, of what is and is not causally significant, in a way that human analysis merely extends upon or harnesses for our own (scientific/investigative) reasons. We just happen to have a greater creative scope with respect to how we go about making those selections: devising apparatuses of experiment and observation, etc, in a way that natural organisms are not always able to do.
Oyama once more: "This is not to say that selection of variables must be random or that analysis is impossible. It is to suggest that guidance is more likely to come from the system under investigation than from some more abstract assumption about genetic or environmental influences. Fine investigators have always been guided by good intuitions about what their phenomenon is "paying attention" to ... Scientific talent is partially a knack for reading one's particular system productively."
In regards to the last remark quoted from Oyama, I think that a good analysis of causation in a system depends on two things. First, you need a good "nose" for the level of invariance in the relevant background conditions - that is, when you use a ceteris paribus, you have to have a good estimate of how likely it is that everything else really will be held the same. Second, you have to be sensitive to scope. Anything that posits an effect on a particular scale has to take into account causes/conditions on that scale. If I claim that a riot in a small town will spread across a nation, I have to take into account conditions in the rest of the nation, for example.
One thing that really irks me is when people get causation backward and, more importantly, confuse their mode of knowing with the thing known. There is a difference between knowing that there is fire because you see smoke and assuming that smoke causes fire (or, indeed, that fire causes smoke). Post hoc ergo propter hoc is a more insidious and subtle fallacy than you might think - it's not just about people taking statistics the wrong way.
For an example of how it can be insidious (one that relates to our discussion of causality), consider this statement: "Every time I've seen A, I also see B." Some smartass tells you that this is just "anecdotal evidence." The correct reply (which people always seem to miss) is this: "What are the odds that I would always see A and B together if one did not cause the other?" That is how you know when an anecdote constitutes a data point, I think.
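That reply can be given a toy quantitative form (the base rate and counts below are entirely made up for illustration): assume B has some rate of occurrence independent of A, and ask how likely perfect co-occurrence would then be.

```python
# If B were independent of A with base rate p_b, the chance that B
# shows up alongside every one of n sightings of A is p_b ** n,
# which shrinks fast: perfect co-occurrence is hard to get by accident.

def prob_always_together(p_b: float, n: int) -> float:
    """P(B accompanies all n observations of A), assuming independence."""
    return p_b ** n

for n in (3, 10, 20):
    print(n, prob_always_together(0.3, n))
```

The smaller that number, the more the anecdote looks like a data point: the "anecdote" is evidence exactly to the extent that the independence hypothesis can't account for it.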
(Sorry if this is all a little vague - I am somewhat sleep-deprived right now)
Explanation in the sciences takes the form of theories. A theory is holistic; it does not come down to isolated causes and effects. Theories that describe a system's dynamics are often - but not always - causal (and not all theories are dynamical). Their causal character may owe something to our preference for a certain kind of historical narrative, as has been suggested, but it can't be just that: reality is not so flexible as to accommodate any mode of description for which we might have a preference. And where did this preference come from in the first place? The causal character of a lot of physical theories has to do with the causal character of interactions that occur in our universe: there is an arrow of time; interactions are local; and influences propagate at a finite speed. As a result, we can show how events are shaped by proximate events in their past.
(Quantum physics complicates this idea of causal interactions, but does not necessarily destroy it. It prompts us to think more carefully about locality, interactions, and influences.)
Quoting Pneumenon
Yes, in the sense in which we tend to talk about causation, focusing attention on the most relevant events (from our point of view) and bracketing out others. But, as I think has also been said, this is no accident, no mere whim. The world is such that, while being holistic, it is quite non-uniform. Just as its material fabric tends to cluster into things, atoms, its interactions also often lend themselves to such heuristic analyses in terms of prevailing causes.
Because it seems like once we make this reduction - into the particle world - then all Aristotelian causes begin to fall apart? Particle causation doesn't seem to make sense of certain formal, material, or final causes?
I'm also a little wary of causality being explainable at the particle level, because these mechanical explanations could just keep going ad infinitum if we go through this bottom-up route. And bottom-up causality seems to be a bit spooky.
Asking "why", or "how", is the same as asking the "reason" something happens. I still don't see a difference between "efficient" cause and any other cause. Your intent preceded all these other causes that end up leading to the effect of making tea. Your intent still had to move your body to the kitchen, get out the pot, fill it with water, put it on the stove, turn the stove on, put the tea in the water, etc. If just one of these things doesn't get done, then you won't have tea. The tea doesn't make itself. The tea in the future doesn't bring all these causes to fruition. After all, we may fail to make tea, as something may interrupt us and cancel our tea-making. In other words, intent doesn't always bring what is projected into view. Actually, intent is the only cause that often fails in producing the intended effect. That doesn't seem to happen with the other kinds of causes we observe in nature. This is because intent is really just a prediction.
The problem is that we never know the whole system. An enormous part of it is always inferred from what we do 'know'. Causation itself is inferential. We can think of causation at any scale; the parts aggregate into wholes, into what are themselves micro-contexts; but when it comes to trying to understand how such aggregations are formed, it is always (insofar as we can understand it at all) in terms of efficient causation. Efficient causation is the understanding of bodies exerting influence upon one another via energy exchange. We kind of get that because we directly experience our own bodies as interactively affecting other bodies and forces and being affected by them. Do we have any other intelligible model of causation?
I think the only idea we have of any form of causation beyond the efficient is the idea of lawful action, or 'final' causation, and I don't believe we have a really clear understanding of that at all. So I would say the only causation we can model, on every scale, is efficient causation. That is not to say that our models represent exactly what is being modeled, though. This is glaringly obvious in that we can model nature in such a way as to be intelligible to us only as deterministic, but modern physics seems to suggest that it is "really" indeterministic.
The mistake is thinking we need to know the whole system to know what's happening. It's flawed to think of a cause as an origin point which made everything.
Is any cause an origin which destined everything? If we stop to think about this, it's pretty obviously absurd. Consider God at the beginning of the universe. God lets off the Big Bang. Is this enough to define all future events of the world? Can every state of our world be described by those initial states of the universe triggered by God?
No, they can't. To define everything that happens, many more states are required, many more causes (which are not God's act of creation) need to occur. Before the present, many causes need to act, to form the present world.
If I'm to make this post, millions upon millions of people have made specific choices, restricted others in certain ways, to cause this state. Change many of them, and I wouldn't be here at all. Causality is not singular. Our world is a sum of countless causes, which are constantly emerging and triggering new states (some of which we don't expect).
So what is a cause? If it's not the origin of everything, what is it? Clearly, causes are about forming one state rather than another. We point out causality to identify responsibility for the existence of one state rather than another. How much do we need to know to be aware of a cause? What does it take, for example, to know that hitting someone in the face will cause them pain?
Hardly anything. No-one needs to understand the origin of the universe to know hitting someone causes pain. It only takes awareness of the cause (hitting someone in the face with your fist) and the resulting effect (the victim feels pain). All that's required is a definition of an individual state (cause) which results in another (effect). "The whole" doesn't matter. It is irrelevant. In any case, it's individual states that do the work of causality, not "the whole."
In causality, there is no aggregation of how it is all formed. There are only individual causes and effects, each responsible for themselves. Causality is "deterministic," as is required for states to set others in motion.
But, contrary to the classical understanding, causality is also indeterministic. Since any causal relationship is a matter of an individual causal state and an individual effect state, there is no "whole" or rule that sets out what must be caused. At any time, causality may zig rather than zag (as we expected), for each emerging causal relationship has no dependence on what happened in the past.
I haven't heard any convincing arguments for it. The ones I hear can easily be flipped around: "Everything your body does boils down to the interaction of its fundamental particles." Flip it around: "Any fundamental particle in my body does what it does as a result of its interactions with the particles around it."
Any attempt to boil this stuff down to fundamental particles can be bounced back up to a higher scale in analogous manner. There's just no reason to be that kind of reductionist.
I don't understand the distinction.
Physics constructs intelligible models (what else?), and some of these models happen to be probabilistic (stochastic). Quantum physics is not the first or the last physical theory to have stochastic elements - before that there was (and still is) statistical thermodynamics. Stochasticity is fairly common in applied physics and engineering. It is intelligible and manageable.
Right, I don't understand this what/why distinction and how you relate it to explanation and causation. Also, I am not sure whether you think you are explicating preexisting meanings or inventing your own.
Talk of "fundamental particles" and of "higher scales" is really just talk about different perspectives of the same thing. Different perspectives can be in different locations on the same size scale, or from different size scales. Your argument is derived from the idea that any "view" can only be an anthropomorphic one - one that only exists on our time and size scale, with the tiniest things that we can observe (tiny compared to us) being "fundamental" while the large things (large compared to us) are "higher scale".
I'm sorry, I'm not sure what to say. "What" and "why" are two different English words. That's all.
We can ask why stochastic models work for certain physical phenomena. Does the indeterminism of QM represent our ignorance, or something fundamental about the world?
People, including physicists, have asked this sort of question, and debated the proposed answers.
Yes, but do we have any model of those interactions that is not understood in terms of efficient causation?
If you want to support a model other than efficient causation, you will need to show that other models are intuitively comprehensible.
I show you a broken crowbar and tell you what it is: "This is a broken crowbar."
You ask, "Why is it broken?"
I say, "Because it's in two halves."
This is not a good answer on my part if you want to know how the crowbar ended up being broken.
Quoting Pneumenon
You didn't answer "why" it was broken. You answered "what" broken is, which wasn't the question; that's why your answer is no good and has nothing to do with why the crowbar was broken.
When we ask, "What?", we are asking for the properties of some object or state-of-affairs as a "snap-shot" in time - the way it is now. But objects and processes exist in time. They have a history of interacting with the rest of the world. To ask, "Why?", something is, is to ask about its existence in time and about its relationships with the rest of reality.
The Ontogeny of Information looks interesting, though. Would you recommend it?
A few things, I guess: first, at the level of sheer instinctual/aesthetic response, I think it's an incredibly naive, even childish thesis (I mean this literally - as if the universe were one giant colorful ball pit). Second, what are the exact arguments in favor of it? It's the arguments which need to be evaluated and assessed, more so than the actual thesis itself. Third and most substantially, despite the apparent simplicity of the thesis, it's not even entirely clear what it would even mean to account for all causal changes in terms of 'the set of all fundamental particles'. In modeling a dynamic system, for instance, the fundamental parameters to take into account tend to be: (1) the set of attractors (roughly, the set of values toward or around which a system tends); (2) the rates of change which define the dynamics of the system; and (3) the 'tipping points' or critical thresholds which indicate when/where the system will undergo a phase transition (to become another 'kind' of system - the difference between a flourishing or dying ecosystem, for example). One can also add to this list rate-independent informational constraints for some systems, but that's another kettle of fish...
So the question becomes: what is the relation between these parameters and the 'set of all fundamental particles'? What here is doing the explanatory heavy-lifting, as it were? The parameters? The particles? Interactions between both? Including the environment in which these interactions take place? All of the above, depending on conditions? These are the kinds of questions that need to be asked and answered in order to assess whether it even makes sense to have "the set of all fundamental particles account for all causal changes." Personally, I'm not even convinced the thesis is coherent. Looking to something like processes of evolution (raised by the OP), for instance, I don't even know what it would mean to say that 'the set of all fundamental particles' explains evolutionary change - it wouldn't even be a 'wrong' thesis so much as a misuse of grammar. So again, the important thing is not necessarily to 'rebut' the thesis, but to understand, at a minimum, whether it even in fact makes sense to begin with.
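For what it's worth, those parameters can be illustrated with about the smallest toy dynamical system there is - the logistic map (my example, not one raised in the thread): a single control parameter fixes the dynamics, and past certain thresholds the attractor the system settles on changes in kind, not just in degree.

```python
# The logistic map x -> r*x*(1-x): the control parameter r sets the
# dynamics, and the long-run behaviour (the attractor) changes
# qualitatively at critical thresholds ('tipping points').

def attractor_points(r, x0=0.2, burn_in=1000, keep=8):
    """Iterate past transients, then report the values the map settles on."""
    x = x0
    for _ in range(burn_in):    # let the transients die out
        x = r * x * (1 - x)
    settled = set()
    for _ in range(keep):       # sample the attractor itself
        x = r * x * (1 - x)
        settled.add(round(x, 6))
    return sorted(settled)

print(attractor_points(2.8))    # one fixed point
print(attractor_points(3.2))    # past a tipping point: a two-value cycle
```

Notice that nothing in this description mentions the system's microscopic constituents at all: the attractors, rates, and thresholds are properties of the dynamics, which is the grammatical point about 'the set of all fundamental particles' in miniature.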
And I would totally recommend The Ontogeny of Information. It's one of those books I find myself going back to time and time again, and at a minimum, it's one that anyone interested in evolution ought to read.
The critique of this position, made by Leibniz and Newton, is that the Cartesian qualities cannot account for the "equal and opposite reaction" of things. Leibniz explains this using active and passive forces; Newton by energy.
I think that the passage of history has made us overlook these two aspects of Hume's refutation: (1) we cannot know causality a priori, and (2) nothing visible can give us an idea of force, power, or energy. Hume spends the bulk of the text arguing for (2), but basically assumes (1).
The way I like to think about it is: who or what is doing the causing? Is it just "fundamental particles?"
Nothing in life is merely fundamental particles in action. Such particles are always part of something or interacting with other states which are more than just elementary particles. Causal relationships don't happen without this wider significance. It's the difference between the sun causing a temperature on Earth and someone opening their hand resulting in an object falling.
"Just fundamental particles" is just a failed description, a reduction of everything in causality to one idea, for use as an easy shorthand (usually for a political purpose). Another in a long line of human heuristics that mistake metaphysical significance for describing what's going on in the world-- "God (fundamental particles) did it."
I think the point is that causality understood in terms of mechanical interactions of "fundamental" particles is not useful at larger scales (for example in Biology or even geology).
On the other hand, our intuitively understood models of interactions between fundamental particles seem to be analogues of our intuitive understanding of mechanical interactions between bodies at larger scales; i.e. at our 'lived' scale. I think this efficient kind of causality is really the only kind of causality that we intuitively 'get', except for formal and final causation involving what we understand to be intentional agents.
It seems to make understanding biological processes, for example, evolutionary selection, much more intuitive and less cumbersome if they are understood in terms of purposivity; even though they are generally thought to be, in principle, even if not in practice, reducible to mechanical (really energetic) microphysical processes. The so-called laws of nature themselves, though, are not necessarily thought to be reducible to being modeled in terms of fundamental forces; but their action or influence can only be understood as such, at least as far as I can tell. I could be wrong about this, though, and would be pleased to be further instructed by some clear alternative explanation. Also, I'm not suggesting that if this is the only way we can understand causality, it follows that this is 'the way things are' in any "ultimate" sense.
I think Hume did not account for the fact that it is not merely a matter of what we can see but is also a matter of what we can feel. We feel mechanical forces exerted by, and on, our bodies.
How did words get onto this screen if not for you thinking about moving your fingers in a particular way to produce them? They didn't appear simply by thinking about them. Why would you have to think about moving your fingers to get text on a screen if not for causation?
The only way to show that something is intuitive is to look at it and see if your intuition likes it. That's what "intuitive" means.
That being said, look into Aristotle's causality - there are some old, old, old folk-notions of cause that are intuitively plausible.
Hume seems to dismiss this in a footnote because he lacks an account of intentionality, i.e. he can't distinguish between "our feeling" and "what we feel."
This venture seems a bit circular to me. At least, it does if a 'way of approaching causality' includes a definition. If it does then one cannot understand the goal ('enabling us to understand diverse varieties of "cause" ') until one has decided on a definition of 'cause'. But then one cannot use the goal to decide what definition to choose.
It seems to me that discussions of causality usually are circular. Certainly I find Aristotle's notion of Efficient Cause circular.
What I wonder is what use the term 'cause' has. If we want to make something happen ('cause' it to happen) we can get by with the much simpler, non-circular notion of 'prediction'. Alternatively, if we want to understand 'why' something happened, we can address that by seeking an understanding of the prior environment. In neither case do we need a notion of 'cause'.
Even cones of causality, aka 'light cones', that are used in physics are able to be expressed - and IMHO are more clearly expressed - as cones of predictability. So what seems to be a case of the notion of 'cause' being inextricably embedded in core science is actually an illusion arising from etymological happenstance.
Formal definitions, the kind that capture every single instance of a class, aren't what they're cracked up to be. A cursory reading of Wittgenstein will show you that such definitions, in addition to being quixotic, are for the most part not even necessary. We all can use the word "cause." A philosophical investigation need not have a definition for it. Definitions are not what philosophy is about.
Yes we can all use the word. But one need only look at a litigation or an inquest to observe that we (all of us, not just philosophers) do not know what we mean by it.
My prescription is not to introduce a definition (although I do have 'one I made earlier' if anybody wants to see it (biscuit conditional alert)), but to avoid the use of the word 'cause' except where there is no possibility of disagreement over its use.
I'm fully in support of being Wittgensteinian in the approach to this. I don't think the later Witt would have seen any point in spending time trying to find a way to approach causality.
The later Wittgenstein wouldn't have seen any point in having this discussion. And yet, here we are.
Anyway, we do know, in some cases, what we mean by cause. The cases where we DO know far outnumber the ones where we don't, all considered. I know that touching the keys on this keyboard causes the letters to appear on the screen. If you agree that touching the keys causes the letters to appear (and you do), then we have one case right here where we know what cause is.
[quote=Pneumenon]If you agree that touching the keys causes the letters to appear (and you do), then we have one case right here where we know what cause is. [/quote] No, I'm afraid I don't agree about the keys.
But that's by the by. If it is clear to you what you mean by that statement about the keys, then why do you feel the need for an investigation into an approach to causality? Is it simply for the sake of enjoyment as well? If so, I think it's a great idea.
Okay, so the following sentence is false: "The letter appears on this screen because I pushed a key."?
Come on, man. Even philosophical prevarication has limits.
Quoting andrewk
Because there's more to causality than pushing keys.
That you feel that is what most interests me here. What sort of benefit do you hope to obtain from an investigation into an approach to causality - beyond the sheer joy of human interaction in conversations like this?
Irrelevant. My pushing the keys causes the letters to appear, and we both know this. Even if you think this is all a silly game, you could at least humor me and pretend to be serious, or pretend a little more convincingly.
Quoting andrewk
I want to investigate causality in order to investigate causality, because that's what I want to do.
No, we don't.
What we may be able to agree on is that you thought to yourself that if you pushed the keys you would expect some letters to appear and, having thought that, you decided to push some keys, and then some letters appeared - whether you thought that explicitly or implicitly.
Injecting the word 'cause' into that quite clear scenario only confuses things.
I don't think it's a silly game. I just think it's a confusion over words.
If you want to make progress, one avenue to try is to think of a sentence containing the word 'cause' that somebody might use in everyday life. 'My pushing the keys causes letters to appear' is not such a sentence - or not in my experience anyway.
My provisional contention is that real-life sentences containing the word 'cause' are either meaningless - as in most cases where a litigator claims that somebody 'caused' somebody else to incur an injury - or can be understood by considering them as a whole, in their context, without requiring a notion of 'cause'.
This is the kind of thing that only pops up in a philosophical discussion. If I asked you why the letters appeared, you'd reply, "Because you pushed the keys." It's not that hard. Outside of discussions like this, you know good and well why those letters appeared.
In all sincerity, you seem to be assuming that I'm appealing to some metaphysical or formal definition of cause, because that's how you're used to responding to people when they talk about causation. But I am doing no such thing. We all know that some things result from other things. Saying that it's not about causation because the exact word, "cause," isn't used, is like saying that this post isn't addressed to you because I'm not calling you by your full name.
If we're going to discuss this issue, you have got to meet me halfway, not just launch into the Standard Wittgensteinian Script in response to someone doing philosophy.
I'm afraid I don't understand what your second paragraph is getting at.
I remain curious about what benefit you hope to gain from investigating an approach to causality. Last time you said something like 'I'm doing it because I want to'. I appreciate the wit of slipping the word 'because' into that answer but I'm still interested in an answer to the question I asked, which is 'what expected benefit?', not 'why?'
Simple question: what do you think happens when Pneumenon doesn't press the keys? Will letters still appear on the screen? The question of causality depends on this relationship. Does the world depend on Pneumenon pressing keys to have words appear on the screen?
I don't know about the 'depend' bit though. The notion of dependency seems very vague to me. Certainly I can imagine letters appearing on the screen without keys being pressed - eg if the dreaded Blue Screen of Death suddenly appeared.
“Let us not doubt in philosophy what we do not doubt in our hearts.” (C. S. Peirce)
Or a representation and what is represented?
By "intuitive" I mean something like 'imaginable' or 'intelligible', but also 'able to be comprehensibly modeled'. I think we can both imagine and comprehensibly model efficient causality in terms of forces. We can imagine formal or final causation, but only in terms of intentionality, I would argue. And we cannot model intentionality without it becoming deterministic, and thus ceasing to be truly (as we understand it in terms of freedom) intentional. I think this inability is one source of the endless free will/ determinism debate.
Why do you assume that I am not familiar with Aristotle's "four causes"?
I mean, if your position is that you have no idea whether or not my pressing these keys has something to do with letters appearing on the screen, then I can't help you. If you told a psychologist that during an evaluation, they'd wonder if you had problems. But we both know you'd never say something like that outside of this discussion. Calling it quits for now.
That is not my position, and I never said it was.
Again you are ascribing opinions to other people that they have not expressed.
I'll ascribe another one you haven't expressed: you know that you typed those words. My, aren't I presumptuous?
That's a different causal relationship. If we are talking about the letters which appear on the screen as someone types, we aren't just talking about an appearance of any text. We are asking if specific writing will occur.
So, for example, will the writing of Pneumenon's post still appear if he doesn't touch the keyboard? Or does the appearance of those specific letters depend upon him pressing the keys?
My approach to all this is the same as that of most other people I've ever had occasion to discuss predictions and explanations with. There is nothing particularly sceptical about it, nor any particular doubt.
The only point of contention seems to be that, if we start with the perfectly concrete and definable concepts of prediction and explanation, the notion of 'causality' adds nothing to our understanding of the world and just confuses discussion of it. It also generates unnecessary arguments and lawsuits, amongst non-philosophers and philosophers alike.
I'm not quite convinced. Do we retreat to predictive talk just because of the difficulty of adequately specifying the ceteris paribus conditions in causal talk?
I expect many people believe that pressing the 'A' key should cause an 'A' to appear on screen, if everything works the way it's supposed to, but they know perfectly well that there are many things that can go wrong between keyboard and screen. Even a typical causal claim might involve a prediction that conditions will be normal, in some sense that may be difficult or even impossible to define.
You could then just absorb the causal claim into the predictive claim, but they are fundamentally different aren't they? Or are they?
I think that some philosophers might raise an eyebrow at your claim that explanation is "perfectly concrete and definable." The voluminous literature on scientific explanation alone would seem to indicate that it is far from settled what constitutes an explanation of some phenomenon or state of affairs.
I'd say they are different in that the predictive claim is clear whereas the causal claim is capable of many different interpretations. Just think of how many arguments you've been in or witnessed about whether something was somebody's fault.
Say I run into an oncoming car that turns in front of me to cross my stream of traffic, leaving me no time to avoid collision. A huge argument may ensue as to whether that driver caused my injuries. But not many would contest that, once they have turned their steering wheel, a collision can be confidently predicted.
Or another of my favourites - from the good ol' NRA:
'Guns don't kill people. People [firing guns] kill people'
[or is it the bullets that kill people? or the wounds?]
Agreed, there are different interpretations around for explanation. Consider 'Why is the sky blue?' An answer to that that may satisfy one person may not satisfy another.
My feeling is that 'explanation' is in the eye of the explainee. That is, it is an explanation if the explainee is satisfied with it. A definition of explanation that currently appeals to me is:
A deduction that starts from premises that the explainee understands and believes, proceeds by steps that they understand and in whose validity they believe, and reaches a conclusion that is the phenomenon for which an explanation was requested.
The thing about causality is that it is really outside observation and NOT about prediction at all. We can see this in how a caused event is indistinguishable from a random one in observation.
Consider an instance where letters appearing on a screen is merely a coincidence with pressing keys on a keyboard. It produces the same outcome as if pressing the keys caused the letters to appear.
The difference between causality and coincidence is instead defined in logic, between objects in the context of possible worlds. To say something is a cause, rather than just coincidence, is to specify a logical significance of responsibility for an event.
In the case of typing on a keyboard, to say we cause letters to appear on the screen is to specify that our existence is responsible for the existence of those letters on the screen. We are saying that if we had not typed, those letters would not have come to be. Rather than just being about what happened, causality itself is about what didn't happen and how that relates to states.
This is why we can only test theories about causal states through falsification. Merely observing one state following another can't distinguish between causality and coincidence.
It was epiphenomenalism that undid my confidence. The standard wisdom is that epiphenomenalism says consciousness is correlated with but not causative of brain activity - that a causal arrow points from activity in certain designated areas of the brain (the 'neural correlates of consciousness') to consciousness but there is no arrow in the reverse direction. But when I reflected as hard and long as I could on what that means, I was unable to find anything of substance.
If consciousness cannot occur without activity in the neural correlates and that activity cannot occur without consciousness arising, it seems to me that we cannot say either is the cause of the other. The two arise co-dependently, to use the Buddhist phrase.
As you say, we need to test theories about states through falsification. I think that process is best described in terms of the nature and persistency of the correlation, rather than getting ourselves muddled up in the philosophical vagueness of causality.
Say we have observed that phenomenon C, which we want to harness and make happen at will, usually follows phenomenon B. We have observed the two to be correlated. What we want to know is whether that correlation will continue if we induce B to happen. So we induce B under various circumstances and observe how often C follows.
The difference between our observations in the first and second phases of the project are that the observations in the first phase are of naturally-occurring, non-induced instances of B, whereas the observations in the second phase are of induced instances.
This is no philosophical nitpicking either. It is often the case in pharmaceutical research that a certain healing phenomenon is observed after ingestion of a certain plant. We isolate a chemical that is in the plant and test whether the healing still occurs if it is ingested. If that works then we test whether ingestion of a synthesised version of the chemical has the same effect. If it does we may then start producing pills with that ingredient, despite the fact that we have no idea why healing usually follows ingestion of the chemical. All we know of is a correlation. But what the laboratory research has done is confirm that the correlation remains at a useful level when ingestion of the chemical occurs in circumstances different from those in which we made the original observations.
Of course, we have greater confidence that the correlation will persist if we have a theory explaining why the healing follows the ingestion. We call that a 'mechanism'. A mechanism gives us an explanation of why the healing occurs and the ability to predict that it probably will occur. But lack of a mechanism doesn't stop us mass-marketing pills. We may lack an explanation, but we still have the prediction based on a persistent correlation, and that's what we care about.
Then we observe that nowhere in this decision process did we need to use a concept of 'cause'.
The 'correlation vs causation' distinction is able to be concretely expressed as simply whether the originally-observed correlation will continue to be observed in artificial circumstances of our making.
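To put that razor in concrete terms, here's a small simulation (entirely illustrative - the variables B, C and the hidden factor Z are made up for the example): B and C are strongly correlated in the naturally-occurring data because a common factor drives both, but the correlation disappears once we induce B ourselves.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def observe(n=10000):
    """Phase one: naturally-occurring data. A hidden factor Z drives
    both B and C, so they are correlated without either causing the other."""
    pairs = []
    for _ in range(n):
        z = random.random()
        b = z + 0.1 * random.random()
        c = z + 0.1 * random.random()
        pairs.append((b, c))
    return pairs

def intervene(n=10000):
    """Phase two: we induce B ourselves, severing its tie to Z.
    C still follows Z alone, so the old correlation should not persist."""
    pairs = []
    for _ in range(n):
        z = random.random()
        b = random.random()          # induced, independent of Z
        c = z + 0.1 * random.random()
        pairs.append((b, c))
    return pairs

def corr(pairs):
    """Plain Pearson correlation, no libraries needed."""
    n = len(pairs)
    mb = sum(b for b, _ in pairs) / n
    mc = sum(c for _, c in pairs) / n
    cov = sum((b - mb) * (c - mc) for b, c in pairs) / n
    vb = sum((b - mb) ** 2 for b, _ in pairs) / n
    vc = sum((c - mc) ** 2 for _, c in pairs) / n
    return cov / (vb * vc) ** 0.5
```

In the observational phase the correlation comes out near 1; under intervention it collapses toward 0. Notice that the whole distinction is expressed in terms of whether the correlation persists when we change the circumstances - the word 'cause' never has to appear.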
So two examples: consciousness and pharmaceuticals.
As for the second, I would have guessed that if you asked most researchers, they'd say they assume there is always some definite mechanism at work, but we just don't know what it is. There are practical challenges to figuring out what those mechanisms are (the complexity of the systems at work, the limits of our current technology, etc.), but does anyone think there's just nothing there to know? That correlation, and that at a pretty coarse level, is the best we'll ever be able to do?
I've got nothing for you on consciousness, but I wonder if you should give it so much weight. Consciousness is some pretty weird shit, as the natural world goes, isn't it? Hard cases make bad law.
OK, so mental and physical events seems to be correlated. If they are two totally different kinds of things, then it would seem to follow that there can be no causality operating between them. If they are really one thing seen under two different kinds of perspectives, and if our ways of understanding causality in each kind of perspectival context is different to the other: causes as reasons in the mental connection and causes as forces in the physical, say, then it would seem to follow that it would be to indulge in a category error to start speaking of causation from the physical to the mental and vice versa.
That said, within both contexts, the physical and the mental, there are clearly, in different ways, logical distinctions between correlation and causation; which you seem to be ignoring, or wanting to dissolve.
Certainly not ignoring.
I went looking for them, quite determinedly, and was surprised that I could not find them. My presupposition was that they were there waiting to be found.
If you think you have found some distinctions that go beyond the above-mentioned one of whether or not we have identified a theory/mechanism, that's great news. Let's get them out on the dissecting table and inspect them.
Quoting Srap Tasmaner
Maybe somebody else thinks that, but not me. If we have identified a mechanism, we have a richer understanding and a more confident basis on which to make predictions.
That's a major achievement!
My point is that we don't need a notion of causality to obtain that understanding. Of course we can label the mechanism 'causality' if we want. But that does nothing other than add a superfluous label to a concept that was already perfectly clear.
The distinctions are very basic.
In the physical context an efficient cause is something that acts on something else to produce a change in the latter. So, for example, the force of striking a nail with a hammer drives the nail into timber, and thus we understand the striking to be the cause. The striking of nails with hammers (or some other instrument) is also more or less universally correlated with nails entering timber, but it is not considered to be a mere correlation. The sound of striking is also more or less universally correlated with nails entering timber, but it is considered to be a mere correlation, and not a cause, of the nail entering the timber, because the sound exerts no force on the nail sufficient to drive it into the timber.
In the mental context; I might have a thought that I need to pick up food supplies for the week, a thought which causes me to go to the shopping mall. I might also have a hundred other correlated thoughts, that I might buy an ice cream, or a coffee, that I might meet someone I know, wondering what time the shops will close, whether there will be any sales, and so on, all of which thoughts are correlated with my thought of needing to pick up weekly food supplies, and hence with my going to the mall; but which are merely correlated with, and do not cause me to go to, the mall.
I'm afraid I can't see what this account gives us that we don't already have with a simple physical theory that describes a scenario in which a hammer strikes the head of a nail, with a certain configuration of hammer, nail and wood, and predicts that the nail will enter the wood. Who needs a cause when we have a mechanism?
As far as I can see, all this description does is introduce confusion into an otherwise clear situation. For instance:
Looking for causes is like looking for the 'source' of a river. Most rivers come from the flowing together of many, many tributaries, starting as little trickles, which join, then join some more and keep joining until we end up with something like the Nile delta. I remember, long before I became interested in philosophy, reading about the legendary search for the Source of the Nile, and thinking what a bizarre notion that was, since in all likelihood it will have many sources, not just one. As far as I can see, the search for a cause is just as empty. We can describe the mechanism of how all the tributaries flow into one another to end up at the Nile Delta. What is there of value that can be added to that?
I cannot see any force in your objections. I think, if taken seriously, they would introduce confusion where previously there was logical clarity.
1."Acts on" is a phrase we understand perfectly well. We feel our own bodies acting on and being acted upon. 'Acting on' is not a "loose synonym" for 'cause', but captures precisely the logical difference between cause (in the efficient sense, at least) and correlation.
2. The mere motion of the carpenter's arms will obviously not cause the nail to be driven, and nor will the carpenter's decision unless it results in the hammer striking the nail. The softness of the wood, the manufacture of the nail and "a thousand other things" are conditions, perhaps necessary, perhaps not; they are not causes in the sense that we are discussing the idea of 'cause' here. They might qualify as examples of Aristotle's 'material cause', but they do not qualify as efficient causes. The word 'efficient' expresses the idea that precisely relevant and necessary work is being done.
Seems like causation is inherent to the concept of mechanism. Why does it rain? Because the heat from the sun evaporates water into the atmosphere. Then we can go into the mechanism of how that all works with sun's radiation, water molecules, cloud formation, etc. Every single step will have inherent to it B happening because A, even if it's some A radiation and some B H20.
The overall picture is that the sun causes the Earth to heat up, which includes bodies of water, and some of that water evaporates as a result, and the moisture in the air eventually forms rain clouds.
There is no doubt that the sun is heating the earth, and if it stopped shining somehow, the Earth's temperature would drop dramatically, and the rain cycle would come to an end once the Earth's temperature had dropped to the point that evaporation no longer occurred.
That last paragraph looks like a prediction, but it's a counterfactual, because the sun has enough nuclear fuel to burn for a long time, so we can never test the actual scenario.
That's OK. We can differ on that.
OK, seems as if we're done then...
In a climate model it's going to be more like
- at step 4,983, approximately 500 million things happen
- at step 4,984, approximately 500 million different things happen, that are related to what happened in the previous step by a large, complicated system of simultaneous equations....
and so on.
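A toy sketch of that kind of stepping (nothing like a real climate model, just three variables instead of 500 million): the whole state vector advances at once, and every variable's next value depends on every other variable through a coupling matrix, so there is no single localised 'cause' to pick out.

```python
def step(state, coupling):
    """Advance the whole system one step: each variable's next value is a
    weighted combination of EVERY variable's current value."""
    n = len(state)
    return [sum(coupling[i][j] * state[j] for j in range(n)) for i in range(n)]

# Each column of the coupling matrix sums to 1, so the total quantity in
# the system is conserved as it sloshes between the variables.
coupling = [[0.5, 0.3, 0.2],
            [0.2, 0.5, 0.3],
            [0.3, 0.2, 0.5]]

state = [1.0, 0.0, 0.0]   # everything starts concentrated in one variable
for _ in range(50):
    state = step(state, coupling)
# After enough steps the state spreads out evenly across all variables.
```

Even in this tiny version, asking "which variable caused the final state?" has no good answer: the state at time t, as a whole, produced the state at time t+1, as a whole.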
It's hard to imagine something much closer to the Buddhist (and quantum mechanical) paradigm of 'everything depends on everything else' and further from the Aristotelean paradigm of 'this single localised phenomenon is caused by that single, localised phenomenon'.
If we want to say that the state of the entire system at time t was the cause of the state of the entire system at time t+1 then I'd be happy to agree, but I doubt Aristotle would like it.
One use of the concept, though, is to help us weed out spurious correlations.
Consider me part of the camp that finds this line of thought confusing. The distinction between correlation and causation seems pretty clear to me. For instance, the symptoms of a particular illness can be said to be statistically correlated with one another, but that does not mean that they are in a relationship of causality. In actuality, they are both said to share a common cause, the illness itself. Although causation necessarily implies correlation, correlation does not imply causation. The former is merely a subset of the latter.
You've probably responded to this line of thought already, but I just want to get something to chew on.
Do you doubt that the sun heats the earth? Is there any way this is mere correlation (outside speculative metaphysics and matrix/God scenarios)?
Some causation involves a very complex system such that we can't exactly identify what causes what, except at a high level, such as the sun warming the Earth resulting in water evaporation which leads to rain clouds.
But other events, like throwing a brick through a window, are very straightforward Aristotelian causation, unless one wants to engage in speculative metaphysics where something else, like the code in the Matrix, actually causes the glass to shatter.
So really the issue is that Aristotelian causation doesn't scale up to complex phenomena, not that causation itself is the issue.
I don't doubt for a second that weather has causes, I just doubt our ability to accurately identify all of them at any given time.
I have trouble explaining myself because 1) I'm not good with words and 2) I'm coming to believe that the verbal quest for truth culminates in a tip-of-the-tongue experience. The world is utterly strange and literally beyond words. I can feel that I'm making progress in understanding, but whenever I attempt to pin it down, something seems to dissipate. To say the attempt causes it to dissipate would be yet another such attempt. Thinking in terms of 'causes' is a good example of that sort of activity.
Real truth can only be held very gently. Or so it seems to me lately.
I, for one, have not been meaning to imply that the existence of everything, in the ontological or metaphysical sense, can be explained in terms of efficient causation. But I do think that efficient causation is the only 'kind' of causation we have any kind of coherent intuitive 'grasp' of in the physical context; and I believe that is because we feel our own bodies acting upon other bodies and being acted upon by other bodies and forces such as wind, water and heat. I'm only interested in clarifying the logical difference between causation and correlation, not in blurring it.
Of course, as in the 'hammer and nail' example there are countless other conditions that must be in place to make it possible for the hammer to drive the nail, but that does not change the fact that it is the hammer that drives the nail.
I should add that I think we also find the notions of formal and final causation intuitively coherent, but only in contexts where the free purposive actions of agents that are thought as being not utterly constrained by efficient causation are posited.
Quoting Srap Tasmaner
In my work and in my play I have occasion to do many regressions - statistical analyses of the association between observed phenomena. In that context, for many years I have preached the gospel of 'correlation does not imply causation' and pointed to regressions that identify strong relationships between tea sales in Germany and rates of divorce in Canada as evidence of the difference.
What has changed in me is not that I think that there's no difference between useful correlations and spurious ones. It's that I think 'causality' is the wrong razor to use to make the distinction.
The most obvious reason for that is that a razor is - the etymology declaims it - something sharp, accurate, defined with excruciating precision. We have seen in this thread that nobody wants to define causality, and it has even been suggested that it's a mistake to try to do so. That's fine, but without a definition we cannot use it as a razor. We need a different concept - a clear, precise, well-defined one - to distinguish between spurious and non-spurious correlations.
My current thinking is that a good candidate for that razor is the persistency of the correlation under different circumstances. That is what I was trying to elucidate with the pharmaceutical example. Most spurious correlations will disappear if we can conduct the experiment under different circumstances.
An even better razor is if we can identify a mechanism that enables us to predict that if B occurs, C is likely to follow. We can't always do that, so we have to fall back on the first razor. Sometimes we can't even use that, so we remain in a state of ignorance as to whether the correlation is spurious or persistent. But we keep trying.
There's nothing illogical about saying 'the cause of your fever is that you have influenza'. It's just that I see it as an imprecise, slang statement that's great for everyday life but doesn't fit well in philosophy, or in law courts or other arguments about whose fault it was. Its meaning is usually something like 'you have influenza, and in the process of working through one's influenza, one usually develops a fever.' The latter statement has a precision that the former does not. If more detail is wanted, one can describe how the immune system typically reacts to its detection of the influenza virus, the rapid increase in the activity of T cells and white cells, the battles that take place in the blood stream, and so on. It's all about mechanism.
Another point - God I prattle on, don't I? The use of the word 'cause' as a substitute for mechanism seems to depend haphazardly on the history of the discipline. In physics we talk about 'light cones of causality' even though they are better described as 'light cones of predictability'.
Against that is the example of Credit Risk Analysis - the discipline of predicting how many borrowers are likely to default on (fail to repay) their debts. Poor credit risk analysis was a major factor in the global economic disaster that started in 2008 and whose effects are still being felt. In this field there are two types of mathematical models used to predict probability of default. They are called Statistical and Structural models respectively. Statistical models, as the name suggests, look solely at the characteristics of borrowers and do regressions to work out which characteristics are correlated with default. Structural models focus on the financial structure of the company - its assets and liabilities - and the movements of stock price indices and use an economic model to predict which companies are likely to default, based on the observation that default occurs when one's liabilities exceed one's assets. This type of model looks at what some would loosely describe as 'cause' of default whereas Statistical models do not. But interestingly, the word 'cause' is barely mentioned in the literature. The word 'Structural' is used instead, which has a natural similarity with Mechanism.
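The contrast between the two model families can be sketched in a few lines (the function names, coefficients, and balance-sheet numbers below are made up for illustration, not real credit-risk methodology):

```python
import math

# Structural ("mechanism"-style): default is predicted when modelled
# liabilities exceed modelled assets -- a greatly simplified Merton-style rule.
def structural_default(assets: float, liabilities: float) -> bool:
    """Predict default from the firm's financial structure."""
    return liabilities > assets

# Statistical (correlation-style): a fitted score over borrower
# characteristics; these coefficients are illustrative, not estimated.
def statistical_default_prob(debt_to_income: float, missed_payments: int) -> float:
    """Logistic score: characteristics in, probability of default out."""
    score = -3.0 + 2.0 * debt_to_income + 0.8 * missed_payments
    return 1.0 / (1.0 + math.exp(-score))

print(structural_default(assets=100.0, liabilities=120.0))  # True
print(statistical_default_prob(0.4, 2))
```

The structural model encodes a story about *why* default happens; the statistical model only records which characteristics have historically travelled with it.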
This is good stuff, Andrew. It's good to hear the thoughts of someone in the data trenches. I have a few more questions though.
I think there was a bit of special pleading in the influenza example. There's a "usually" thrown in with the predictive version that the causal version doesn't have. Surely the user of causal talk could be just as modest.
That "usually" made me wonder if we shouldn't try to separate the unpredictability of what we're talking about from the imperfection of our understanding of it. I had wanted to say there's a difference between saying, "Influenza is usually accompanied by fever, but we have no idea why," and saying, "Influenza is usually accompanied by fever, because [details]." But the "usually" itself could mean, "We don't know why it happens sometimes but not others," or it could mean, "There is some randomness to the way this process works, such that [details]." It seems worthwhile to keep those separate.
I just keep thinking that once you've given up this one big distinction, at least as an ideal to strive for, that all sorts of other meaningful distinctions will fall away too. I really like distinctions.
It seems like you are saying that we should do away with the concept of causation altogether, but I thought your point was that there was no distinction between cause and correlation? If you're saying that causation is correlation then that's one thing, but if you were just saying that causality is a mistaken concept to begin with then that's something else entirely.
Also, the kind of correlation that I gave in my earlier example (of two symptoms of an illness) isn't exactly spurious as it certainly implies a deeper relation that isn't coincidental and is thus in that sense useful. However, my point was that relation wasn't one of causality, where one causes the other to happen. Instead, we would say that they both are related in sharing a common cause. They can be related, but not in the same way that, say, a brick thrown at a window is related to its being broken.
I am not sure how you plan to distinguish between both cases, or if you even intend on doing so, but it doesn't seem like persistency is a good enough way of doing so. The symptoms of an illness are usually found to be tightly correlated in many cases, as is healing when a chemical is ingested. But that doesn't mean we should treat both cases the same way.
IMHO the 'mistake' is to take that vague term and then try to use it in areas where clarity is necessary - like philosophy, science, law and arguments over fault. There are much clearer and more precise terms available, as discussed above, and I am advocating their use in those contexts, rather than 'cause'.
Yes, but 'cause' has at least two precise senses which I don't believe can be done away with in "philosophy, science, law.." etc; to do so would be to lose, not to gain, clarity (not to mention most of our explanations). The first is efficient causation, the action of a physical force; and the second is formal or final causation, agents acting for reasons. Are you really claiming that the concept of efficient forces can be done away with in science, or that the notion of agents acting for reasons could be dispensed with in the humanities?
I'm not so concerned about 'final cause', and certainly wouldn't want anybody in the humanities to have to change their patterns of speech. My concern in this discussion is about precision and logic, and I see the humanities as being about emotions rather than about precision and logic. If somebody can write better poems or novels by using the word 'cause' then that's great. I might make an exception for history though, as that is a bit like science in some ways. I find discussions such as 'what was the cause of the Great War' profoundly mistaken. I don't mind so much if they ask 'describe some causes of the Great War' although personally I prefer the talk to be about enabling conditions.
So, no mechanical forces are understood to operate in geology? Physical science doesn't posit four fundamental forces? Animals are not understood to be subject to climate and topography, and the forces and conditions attendant upon those?
Also, I don't understand what you mean when you say that the "humanities are about emotion rather than precision and logic". Are you saying there is no logic of human emotion? That logic and emotion are entirely separate and unrelated?
Of course the causes of any complex social (or antisocial) phenomenon such as the Great War are multifarious. Is there a cogent conceptual difference between a "cause" and an "enabling condition" in the context of history? Surely conditions that lead to events in the human sphere just are the causes of those events, no?
This seems to be a somewhat physics-centric view of "science" (a not-uncommon view in discussing the philosophy of science - sometimes fields other than physics are referred to as the "special sciences." Some might thus be reminded of Dana Carvey's Church Lady character from SNL...but I digress).
Science as a whole has not dispensed with the notion of cause. Epidemiologists speak of the cause(s) of disease outbreaks, paleontologists speak of the cause(s) of mass extinctions, etc. I also don't think that every science concerns itself with equations or predictions (though all are theory-laden, certainly). Some scientific fields are largely qualitative, and are largely retrodictive, as opposed to predictive, in nature (the aforementioned field of paleontology likewise applies here).
For a quick-and-dirty illustration of my point, a search for "caus*" restricted to titles alone in the public medical literature database at PubMed.gov yields 255,563 hits. The same search performed at arXiv.org yields too many hits to be displayed, maxing out at 1,000 hits.
I'm afraid I don't understand these rhetorical questions, but they sound interesting. Can you explain them, and how they relate to the discussion?
For example, isn't gravity posited as the cause of planetary orbits, black holes, the rate of objects falling, etc?
If our universe lacked the force of gravity, then none of those things would be the case. If there was no electromagnetic force, then there would be no molecules. You might wish to think of the fundamental forces in terms of their theories and how they explain things, but I don't know how you do away with causal aspect. Part of explaining how matter attracts is that gravity is the cause of matter attracting.
Let's say I enable conditions for you to rob a bank. I give you a weapon, a getaway vehicle, code to the safe, and the best time to commit the robbery, and whatever encouragement I think you need.
That doesn't mean you will go through with it. Enabling conditions aren't enough to determine whether you commit the crime.
You know, mechanical forces otherwise known as efficient causation; for example, the movement of tectonic plates causes uplift, water and wind are efficient causes of erosion, climate and topography cause unique opportunities for, and constraints upon, plant and animal life, micro-organisms cause decay, and so on. There are countless examples of the idea of efficient causation in science; in fact the notion is pretty much constitutive of the conceptual models that science consists in. The cycle of nature is understood to be a vast network of interacting forces, mechanical, chemical, electrical, building up and breaking down bodies at all scales; all this is just understood to be efficient causation.
Quoting Marchesk
My position is that any attempt to use the word 'cause' must relate to a theory. This viewpoint is explained in detail in this essay I wrote a few years ago. That position has shifted a bit since then, and this discussion has helped that - given me new insights. But I still hold the central idea that reference to a 'cause' without specifying the theory to which it relates is as meaningless as the word 'here' when we don't know where the speaker is.
Quoting John
People interpret the theories as being about 'efficient causation'. That is not the same as the theories themselves containing propositions about efficient causation. It is very common for people to mistake the interpretations of scientific theories for the theories themselves. This is particularly prevalent - and comes into particularly sharp focus - in quantum mechanics. But it happens with other theories as well.
I would say it's not merely a matter of interpretation. Science is all about explaining why things happen; it's all about explaining the mechanisms of action and interaction that drive changes. Science asks why things happen the way they do and it answers " Because..."
Scientific theories do not contain propositions about efficient causation, because that would be the province of philosophy or metaphysics. But all scientific theories do contain propositions that embody notions of efficient causation. This seems undeniable to me. Perhaps you could give an example of a scientific theory (apart from QM) that does not comply with this condition.
It is the interpretation that asserts that embodiment, not the theory. In its purest form, the theory is a bunch of equations.
There's nothing wrong with inserting words like 'because' into a presentation of a scientific theory, but it is purely optional. As I said above, an explanation is a deduction that starts from premises that the explainee understands and believes, proceeds by steps the explainee understands and believes, and reaches a conclusion that is a prediction of the occurrence of the phenomenon for which the explainee had requested an explanation.
Yes, "cause" is a vague term, but that is principally because it has many different senses, and it is used ambiguously. In philosophy this ambiguity is an invitation to equivocation.
Quoting andrewk
I don't think you should be so quick to dismiss Aristotle's clarification of the four ways in which "cause" was used. Though it appears in his "Physics", it is a logical work, clarifying the distinct ways in which the word was used, in an attempt to avoid the ambiguity referred to above. It was necessary to get this clarification over with prior to proceeding with physics. There were actually six ways that "cause" was used, presented by Aristotle. The final two were "chance" and "fortune", which he dismissed as not proper use of the word, leaving us with the familiar four. Of the four, common usage over time has shied away from "material cause" and "formal cause", such that we do not use "cause" in these two ways any more. This leaves us with "efficient cause" and "final cause" as the two principal ways in which cause is used. So the ambiguity with the term is generally between these two distinct ways.
Quoting andrewk
You should be concerned with final cause though, because this is where the term "cause" is useful. As you explain, we can do science without "efficient cause", because we use concepts such as explanation and prediction. "Efficient cause" seems to require a certain logical necessity which cannot be logically validated. But "final cause" is based in the determination of a different type of necessity, we determine what is needed (necessary) to produce the desired end. The end itself is therefore of the highest order of necessity, validating the need for the means. When the end, that which is wanted, or desired, is determined, then what is needed to bring about that end can be determined, and this is "caused" to come into existence, by an act of willing. So the act of willing is a cause, in the sense of final cause. And as much as we can refer to prediction to explain the fact that we determine the means to the end through the use of prediction, we cannot explain the fact that the end is desired, and that we cause the existence of the means, for the sake of the end, by referring to prediction.
So let's go back to Pneumenon's example, of creating letters on the screen by hitting the keys. Hitting the keys is a means to an end. It is therefore an intentional act of willing. Assuming that we have fully dealt with the ambiguity between different ways of using "cause", why do you think that we should not use "cause" to refer to how intentional acts create things?
'Theory' is a polysemous term with different applications in different contexts. You are adverting to the notion of 'theory' relevant to QM, which is a special case. In most other sciences and in the humanities, theories just are explanations, not mathematical models. Mathematical models may help support such theories, but the substance of them consists in explanations expressed in ordinary, even if sometimes more or less technical, language.
Your definition of an explanation is incorrect, as I see it, because it conflates what the explanation is trying to do (which is to give the reasons why something is as it is) with what would support the verity of the explanation (which is what we think we would expect to observe if the explanation were true).
It seems that to give reasons for why anything is as it is is always to give an explanation in terms of causes. If you disagree then why not give an example of an explanation of any phenomenon which is not couched in terms of causality?
This is not true. Take the hammer and nail example. If striking the nail with the hammer is the efficient cause of driving the nail, that entails no necessity that the hammer striking the nail with sufficient accuracy logically must result in it being driven, or even that its being driven is physically necessary. If QM indeterminism is true, then the driving of the nail is merely probabilistic. It might be expected to happen a quadrillion times a quadrillion times; but there is always the vanishingly tiny chance that it might not be driven, that the hammer might pass through the nail, for example.
Newton wonders 'why does the apple fall from the tree?'.
He comes up with his gravity theory that there is a gravitational force F on the apple, whose magnitude is given by the gravity equation. He also comes up with his law of motion that F=ma. Putting the two equations together, he deduces that the apple will accelerate towards the centre of the Earth with initial acceleration GM/r^2.
There is no use of the word 'cause', or any synonym thereof, anywhere in that explanation.
A philosopher may listen to the explanation and say 'Aha! So the cause of the apple's fall is the force of gravity'. But that is the philosopher's interpretation of the explanation, not the explanation.
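The deduction above can be carried through numerically with standard textbook values; note that the calculation itself nowhere mentions 'cause':

```python
# Standard textbook constants.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # mass of the Earth, kg
r = 6.371e6     # mean radius of the Earth, m

# From F = GMm/r^2 and F = ma, the apple's mass m cancels: a = GM/r^2.
a = G * M / r**2
print(round(a, 2))  # ~9.82 m/s^2, the familiar acceleration of free fall
```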
I don't see people around me using 'cause' in the sense of his 'final cause' though. Maybe it's just the society in which I live, but people I know just don't use the word 'cause' that way. I have no reason to suspect that Ancient Greeks didn't though. The closest I have observed is that people will use the word 'because' to explain why they did something. But I find the similarity between 'because' and 'cause' purely textual, not semantic.
In summary, if the main thrust of the 'cause' discussion is about final cause rather than efficient cause, maybe Aristotle and I are not at variance after all.
Quoting andrewk
Newton postulates the application of a force which affects the apple, and produces its acceleration; its acceleration is the effect of the force. The idea of an affecting or effectual force, a force which produces an effect, just is the idea of an efficient cause. Nothing you have said presents anything at all convincing to the contrary, as far as I can see. To be honest I still have no idea what your proclaimed objection consists in.
That you are unconvinced is an observation on which I thought we had already agreed.
Again, this is incorrect; what I am asserting is that the ideas of cause, and the ideas of force and influence are essentially the same. Are you saying that the force of gravity does not cause the apple to accelerate? I understand that you might object that I could simply say that the force accelerates the apple; but the idea of a force influencing a body just is the idea of efficient causation; the latter is its general formulation, and covers all cases of different kinds of forces affecting different kinds of bodies differently. It is precisely the idea of efficient forces in play that distinguishes causation from mere correlation.
Seems right to me. The final cause of a heartbeat is to create blood pressure. Final cause answers: for what? It's pervasively used in biology.
No. That would be a use of the word 'cause'. When being careful, I generally avoid uses of the word 'cause' (as opposed to references to the word, of which I have made two so far in this post) because it does not have a clear definition and, in my opinion, leads to confusion.
I neither accept nor deny what is in the quote above, but rather suggest that it is not a well-formed proposition, because the word 'cause' does not have a clear meaning in this context. Fortunately for me, 'Gravity causes the apple to accelerate' is a sentence that I cannot recall having ever seen anybody utter, except on a philosophy forum.
By the way, I wonder about your notion of 'explanation', that an explanation is an identification of a 'cause'. If so then it seems you use the word 'explanation' very differently from how those around me do. Under such a notion, explanations would be very short, consisting solely of a reference to the phenomenon that the explainer considers to be the 'cause'.
Q. Why does the apple fall? A. Gravity
Q. Why is the sky blue? A. Refraction
Q. Why do people with mostly African ancestors have darker skin on average than people with mostly European ancestors? A. Evolution (or Melanin, or Vitamin D deficiency, or Skin cancer - it's hard to know which one to pick)
In my experience, explanations are much longer than this. The explanations I have received, and have given, have been narratives, not mere references. The narrative (which as I have said, has the formal structure of a deduction that starts from premises that the explainee understands and believes, and proceeds by steps that the explainee understands and believes) will usually refer to many different phenomena along the way, with none of them distinguished from the others and having the special label 'cause' affixed to it.
I think if the explanations I had given my children when they asked about things had been like the examples given above, they would have been quite frustrated.
What do all those explanations have in common? It seems to me that all you are doing is producing one-word explanations which are not grammatical sentences because they tendentiously avoid using the words 'caused by' or 'because', which are nonetheless there implicitly.
What if I asked you what relation gravity has to an apple falling, or what relation refraction has to the blue sky? How would you explain those relations without referring to the influence of forces or materials (efficient causation)?
You suggest that darker skin might have come about in African peoples due to selection pressures that occurred due to deaths attendant upon lighter skins in earlier times. Do you really want to claim that 'cause of death' is somehow a confused or incoherent notion? Those deaths were not caused by lighter skins. Although they were correlated with lighter skins, lighter skins were merely a condition that required exposure to the sun to produce melanomas, that resulted in early deaths (given that the theory is correct).
It is just the same as the hammer and nail example; there are all sorts of conditions; the existence of humans to wield hammers, the existence of hammers and nails, the existence of materials soft enough such that nails could penetrate them, and materials hard enough to work as hammers and nails and so on. And yet we have very good reasons to believe that no nail has ever been driven without being struck by a hammer or some suitable substitute.
The necessity is found in the relationship between the cause and the effect. In order to say that striking with the hammer was the cause of the nail being driven, we assume a necessary relation between the two, such that it is impossible that anything else caused the nail to be driven. That a nail could be driven by something else, or that the hammer might strike the nail and fail to drive it, is irrelevant to the case in which the striking of the hammer is said to be the cause of the nail being driven. In this latter case, in which the striking of the hammer is said to be the cause, there is assumed a necessary relation between the striking with the hammer, and the driving of the nail.
I'm nearly convinced but this part throws me a little, so I could use an example.
For the "why is the sky blue" example, you would do something like this?
1. Our atmosphere contains such-and-such gases, water vapor and dust.
2. If light strikes such-and-such objects, it behaves in such-and-such a way.
3. Thus when light from the sun enters our atmosphere, such-and-such happens and we see blue.
Is that the idea?
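Filling in one of the 'such-and-such'es numerically: the standard mechanism behind premise 2 is Rayleigh scattering, whose scattered intensity scales as 1/wavelength^4. A quick check (rough textbook wavelengths, purely illustrative) shows why the shorter blue wavelengths dominate the scattered skylight:

```python
# Rough textbook wavelengths in nanometres.
blue_nm = 450.0
red_nm = 700.0

# Rayleigh scattering: intensity proportional to 1/wavelength^4,
# so the blue-to-red scattering ratio is (red/blue)^4.
ratio = (red_nm / blue_nm) ** 4
print(round(ratio, 1))  # blue light scatters roughly 5.9x more than red
```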
Yes, if you read that passage in his "Physics", that is what he says, that he is distinguishing the different ways that "cause" is used. As I said before, you don't see "cause" being used now, in the sense of material cause or formal cause, these ways seem to have been phased out. And Aristotle at that time ruled out "chance" and "fortune", as accidentals, though he said that they were sometimes referred to as causes.
Quoting andrewk
I think you will find that people do use "cause" in the sense of final cause, quite commonly. This is "cause" in the sense of an intentional act, and is what you described as coming into play in cases of liability and such things. In order to assign blame, we seek the individuals whose intentions played a role in causing the situation.
"Because" is related, because when we ask why of an act, we answer with "because". So "because" speaks of the reason for the act, but the actual cause of the act is the will of the individual. If the individual is not considered to be a freely acting "cause" of a situation, one cannot be held responsible for that situation.
I see what you mean, but my point would still stand in a somewhat different way insofar as the explanation would not merely be "gravity" or "refraction", but a narrative about how gravity causes apples to fall, or how refraction causes the sky to be blue, and so on.
I can't make sense of this; I can't see how necessity can be understood to obtain in particular interactions unless determinism is assumed to be the case.
Do you really think that or do you think you are just humoring me?
What you have outlined is the sort of shape I would expect a satisfying explanation to take, but with all the 'such-and-such'es filled in. So, yes, that's the idea.
Anyway, is the "deductive form" something like what I did?
Yes.
And thanks for the explanation. I know about Rayleigh scattering courtesy of a project I did on Monte Carlo simulation of photon diffusion through water bodies. But it never occurred to me that it was the same phenomenon going on in the atmosphere that made the sky blue. The water work was focused only on the amount of light that gets down to different depths, not on its wavelength.
There's a reasonable chance that I'll even remember that explanation now.
I had always been a little uncomfortable using conditionals talking about cause and effect. Where the conditional shows up in this "deductive form," is it regular, old material implication?
Exactly, that's why the concept of efficient cause quite easily leads one into determinism. We only escape determinism by assuming that there are things which are uncaused in the sense of efficient causation. We still allow that these things are "caused" though, in the sense of final cause. But this means there is a radical difference between efficient cause and final cause.
It's not really about holding one responsible for their actions. Punishment is meant as a deterrent for that person, and others, from thinking about performing that action in the future.
We like to say that we are punishing someone for their actions because that implies the notion of free will, where we don't actually have it. What we are doing in punishing someone is simply inserting a cause to change their behavior, and others, in the future.
The point is that a person is considered to be the cause of one's actions. If one were not the cause of one's actions we could not hold the person responsible for those actions. Punishment is irrelevant.
Holding one responsible means to recognize the individual as a cause. It may entail many things, blame, praise, judgement of guilt, trust, distrust, etc.. Whether or not punishment is due is not necessitated, and this requires another judgement.
I do agree that the concept of efficient causation can quite easily lead to the idea of determinism; I just don't agree that the idea of determinism is entailed by it.
If indeterminism, the position that some things are not efficiently caused at all, is understood to be the case, then those undetermined events must be seen as either purely random, absolutely arbitrary, or purposely caused by 'something' outside of the system of efficient causation. Any truly final cause must, logically, be a cause which is not itself caused.
I wouldn't say that a final cause is necessarily not itself caused, because it could be caused by another final cause. One thing may be done for the sake of another, which is done for the sake of another, etc., so that we have a chain of final causes just like we describe chains of efficient causes.
So for instance, the apprehended means to the end are brought into existence to create that end, by acts of willing, and are therefore final causes themselves. But these acts of willing, being the means to the end, can be said to be caused by the end itself. And if we look further, this end may really be the means to a further end, so the chain of causation we can follow until we designate an ultimate end, just like Aristotle did in the N. Ethics, designating happiness as the ultimate end.
I would say the ultimate 'for the sake of which' (if there is one) would count as the final cause. All the others would be, in any case, merely formal causes.
What use is blaming someone without punishing them (creating a negative consequence as a result of their action in order to prevent those actions in the future)? In my experience, simply blaming people isn't useful. You have to supply a negative consequence in order to prevent future acts, or they just end up doing it again.
Not for me they are not the same. These named things are the result, or consequence, of holding one responsible. Responsible means that one is accountable. Blaming and such only occur posterior to an act which one is held accountable for. In other words, being accountable (responsible) is necessarily prior to blame, judgement, etc..
Quoting Harry Hindu
That's not true. In fact, I think it's nonsense. Distrust is held for the protection of oneself, not to punish another. The fact that the one being distrusted may not like being distrusted does not necessitate the conclusion that the distrust is being held for the sake of punishment. I can't imagine distrust being held for the sake of punishment, that seems like a misunderstanding of "distrust" to me.
Quoting Harry Hindu
This is complete nonsense. First, punishing someone, unless it's a youngster just learning habits, rarely prevents the person from carrying out a similar act in the future. In most cases, the adult who misbehaves will continue to do so regardless of punishment. Does getting a speeding ticket prevent you from speeding again? If that were the case, we could get rid of all the speeders by handing out tickets. Second, blaming someone without punishing that person is extremely useful, because it allows you to remember something about that person's character. This information will be very useful in your future decision making concerning dealings with that person.
Sheesh, MU. Can you use the dictionary, please?
Merriam-Webster definition of "blame":
1 : to find fault with
2a : to hold responsible
2b : to place responsibility for
Quoting Metaphysician Undercover
How is it nonsense? Answer the question I posed. Would you like others to distrust you, yes or no? If not, then wouldn't it be a punishment if someone distrusted you after something you did that you are being blamed for? Yes, or no?
Quoting Metaphysician Undercover
If what I said is nonsense, then I have to question your existence as a social human being. In my experience over more than 40 years of life, if you give people a taste of their own medicine, then they stop doing what it is that they are doing that you don't like. They may continue to do it to others, but they won't do it to you any more.
Adults are like children in that they need consequences to adjust their behavior. Adults can be rehabilitated, or change their behavior as a result of the consequences of their prior actions. The problem with consequences comes when they aren't applied consistently. I guarantee you that if every speedster received a ticket every time they sped, then yes, they would stop speeding. If they were rich, then they might be able to afford the tickets and it wouldn't be much of a consequence because it doesn't place enough of a negative impact on them. Rich people would have to be fined more for it to begin to affect their behavior.
This is the problem with some adults today - that they weren't raised by parents that were consistent in their application of consequences when they were young. As adults, they think they can do what they want without any consequences.
There are some people whom I would not like to distrust me; others, I don't care if they distrust me.
Quoting Harry Hindu
No, I explained this. The fact that one dislikes what another person does, does not imply that the act of the other was carried out as punishment. I do not like a lot of things which a lot of people do, but it does not follow that these things are punishment to me. "Punishment" implies intent to punish, on the part of the punisher. As I explained, distrust is not intended as punishment, it is intended as protection for oneself from the other.
The logic of praise and blame entails that the person to be praised or blamed for some act or achievement is the unconstrained agent and origin of the act or achievement. If every act of a person is determined wholly by factors beyond their control; whether that be genes, social conditioning, neuronal activity or whatever, then there can be no rational justification for praising or blaming them.
Trust or distrust, approbation or disapprobation, may be emotionally driven, in which case it cannot be rationally justified.
Arguably, there could be rational justification if said praise or blame elicited a change in the agent's future behavior in line with the desires of the praiser or blamer. Even if we believe that our child was determined by forces beyond his control to take the cookie from the cookie jar, leveling blame (in the form of verbal admonishment) upon him may thereby decrease future incidents of his taking a cookie from the cookie jar, which suits our desire of our kid not sneaking so many cookies.
In other words, the praise or blame can itself become part of the causal chain which ineluctably leads to another agent performing or refraining from performing some particular act.
Fine. We can substitute the word, "punishment" with "consequences". Punishment is a kind of consequence. I did use the word, "consequences" in my previous post to make the same argument, so your argument doesn't do anything to take away from my assertion that knowledge of the consequences causes changes in behavior and decision-making.
Quoting Metaphysician Undercover
Exactly. You value certain people's trust more than others. Losing their trust would be a dire consequence that causes you to think twice before doing something that would jeopardize that trust.
Of course there can be rational justification for praising or blaming someone in a deterministic world. Praising and blaming cause changes in behavior in the future. It's the logic of praising or blaming someone after the fact that I don't get. Why praise or blame a "free agent" (and what does "free agent" mean, anyway)? Isn't it future decision-making and behavior that we are trying to change? Isn't that why we try to make the consequences of others' actions known to them - so that the ones making a decision will consider the consequences as part of making it?
At any rate:
Quoting Pneumenon
Doesn't the second paragraph posit a "single unique property" for causation? Namely, the "tension" in question (whatever the heck that would amount to--I have no idea what you're saying there)?
Also, why are you looking for a way to characterize a bunch of things as causality while wanting to avoid looking at what causality is (a la "What does it mean for one thing to cause another")?
It's as if you didn't at all realize that he said dependent on and independent of something's present context. He in no way implied anything could be independent of context, period.
Becoming isn't at all distinct from being. Everything is in process, and process (change) is what time is.
Sure there would be practical outcomes that result from praising and blaming behavior, just as there are from any behavior. In a deterministic world things are not done for reasons but accompanied and rationalized by reasons and everything that happens is what it is and never could have been otherwise.
In any case my point was not about behavior at all but about attitudes and feelings of praise and blame.
Quoting Harry Hindu
If you remove the intent (final cause), you no longer have reason to use the words "cause" or "causation", in accordance with what andrewk was arguing.
Quoting Harry Hindu
That's not true. The idea of losing someone's trust doesn't cause me to think, I am thinking all the time anyway. It may be one of the many things which I will consider within my thoughts, but there is no such causation. Neither does punishment cause me to think in any particular way. Your argument is nonsensical. Punishment and consequences have no such causal power.
Quoting John
How are attitudes and feelings of praise and blame useful without the actual act of praising or blaming?
I have no idea what you are talking about here. Please, rephrase. Maybe Andrewk could do a better job of making his own case?
Quoting Metaphysician Undercover
Give me a break, MU. You do know what the phrase, "think twice" means, no? Here, let me help you because you seem to be having a very difficult time with using your terms and understanding commonly used metaphors:
https://www.google.com/search?safe=strict&q=think+twice+meaning&oq=think+twice&gs_l=serp.3.0.0i71k1l8.0.0.0.3174.0.0.0.0.0.0.0.0..0.0....0...1..64.serp..0.0.0.VRiqmNQyaHk
So what you are saying is that you would make the same decision if you weren't aware of the negative consequences as you would have if you were aware of the negative consequences that would follow your act?
If you were about to perform some practical joke on your best friend and your best friend noticed what you were going to do before you did it and said, "If you do that, I'm not going to be your friend anymore.", that wouldn't prevent you from doing what you were going to do? That information - that your best friend will no longer be your friend - is causing you to re-think performing that action.
It's not exclusively one way or the other. At least some people, with some actions, will hesitate because of possible consequences. But others, or even the same people, with at least some actions, will act far more impulsively, sometimes where they have little control over their actions, especially with outbursts, anger, etc., and they won't consider the possible consequences at all, even though they might be otherwise aware of those possible consequences.
In a causally closed, deterministic world there is no you making prior decisions; that also is a rationalization after the fact, or an illusory epiphenomenon, if you like.
I'm not arguing for this position, just trying to elucidate its logic for you, so you can see that the logic of the idea of moral responsibility is not, and cannot be, compatible with it. This is because the logic of moral responsibility says that your decisions and acts must have their origin outside the causal order, but that is impossible if the causal order is closed.
Quoting Harry Hindu
Their usefulness is irrelevant. The point was there can be no rationally consistent justification for feelings of praise or blame under the assumption of determinism. You might, in fact you inevitably will, have feelings of praise and blame regardless of whether determinism is the real case, though. And under the assumption of determinism you could even say that feelings of praise and blame, translated into actions have practical consequences.
The further point, though, is that under the assumption of determinism, all human decisions, feelings, experiences, thoughts, desires, volitions and even actions are not really causally efficacious (the real causation happens at the 'bottom', at the invisible microphysical level that really determines everything), but are really just illusory epiphenomena.
That all depends on the situation.
Quoting Harry Hindu
Sorry to disappoint you, but I would have thought of that before planning the practical joke, and I would already be prepared for the likelihood that my best friend would no longer be my best friend if I carried out the action. So no, it wouldn't cause me to rethink, because it would just be a statement of what I already thought.
Quoting Harry Hindu
I don't know about you, but all of my serious thinking is thinking twice, as per the definition you referred me to. And, it's not other people who cause me to think twice, I do this of my own volition, because I think it's a good thing to do.
I take it you are an incompatibilist, then?
I don't believe it is coherent to say that feelings are illusory. When it comes to feelings, there is no difference between perception and object. There is nothing for feelings to be illusory about.
That's a good point; logically, feelings as such, as distinct from reflexive awareness of them, or attitudes about them, cannot be illusory in the strict sense. Still, under the assumption of eliminative materialism consciousness is considered to be epiphenomenal, and feelings are part of consciousness. And please note, this is not my standpoint.
I am. The logic of determinism and that of moral responsibility, understood in a kind of substantive, as opposed to a merely conventional, sense, are certainly incompatible, as far as I can see. I have never seen a convincing or even a minimally coherent explanatory argument for their purported compatibility. As I see it, it's just that determinists can't face the logical entailments of their preferred worldview, or they want to have their cake and eat it too.
This just means that consequences are subjective. What is a consequence for one, isn't for another. You have to find that negative consequence that is harmful enough to the actor to prevent them from acting. Fining $100 is more harmful of a consequence to a poor person than to a wealthy person. A stranger not trusting you is less harmful than your best friend not trusting you. So you may perform that act with the stranger, but not with your best friend.
Quoting John
This is preposterous. Again, the will is part of the causal order. There is a decision-maker and then there is the information one has access to make that decision. The information one has is dictated by one's experiences over time. This accounts for how we make mistakes where we made a decision in which we never intended to harm, but we did. This is because we didn't have access to the information that would have prevented the harming. We only have a limited amount of information, and time, in which to make a decision. This is what makes our decisions deterministic.
Can you provide an example where what I said wouldn't apply?
Quoting Metaphysician Undercover
LOL! You didn't disappoint me at all, MU. You finally agreed with me that knowledge of a consequence causes you to behave in certain ways and not in others. It doesn't matter the way in which you came to know the consequence.
No. You're still assuming that people always think about consequences before acting. You're just thinking that sometimes the consequences aren't severe enough to prevent them from acting. That's not the case. Some people act impulsively/uncontrollably at times where there's zero thought of consequences at the moment. So no consequences would make a difference in that situation.
And I know this from personal experience, because I sometimes act impulsively/uncontrollably. It doesn't happen near as often now as it did in the past--just because I've aged and mellowed, but it does still sometimes happen. And I'm not the only person in the world who is that way.
Consider "Why is the sky blue?" An explanation for why it is blue and not red can be given, but you cannot explain why that explanation, and not some other explanation, is possible. It's explaining why physical properties are the way they are that is, to my mind, the difficulty. Perhaps it's the laws of the universe that make it so, but why these laws and not some others? Perhaps because, inductively, that is just the way the universe is. Causality, I think, is a lot like the blue sky: yes, it is blue, but why is it blue?
None of this takes away from the argument that I have made in that knowledge, or maybe a more accurate word would be "prediction", of consequences influences one's decisions and actions. After all, we could make decisions with anticipated consequences that would have never happened.
Yeah, I'd agree with that. It's important to remember that it's not the same in all situations.
As long as you don't mean that it influences all of everyone's decisions. It certainly influences some decisions, and maybe all of some persons' decisions.
It is precisely those decisions we make without incorporating the outcomes of our decisions into the decision-making process that lead to us harming others with our decisions - or, one could say, that lead to bad consequences, or outcomes. There is a correlation between the amount of time we take to make a decision (mulling over the outcome of the decision and how the outcome compares to our intended goal) and the chance the predicted outcome has of coming about. The less you think about the outcome of your decision, the higher the chance that your goal won't be achieved in the way in which you intended, or won't be achieved at all.
This is a conversation that I'm sure you could understand:
Actor: "I'm sorry. I didn't intend to hurt you. I didn't think that would happen"
Victim: "It seems to me that you didn't think at all. You just did it without thinking about the consequences."
Often when people act on impulse they're not really making a decision. Sometimes acting in rage, say, feels like not only not making a decision but like you have zero control over your actions.
People can also make whim decisions. I do that often because I enjoy it. Making a whim decision often doesn't have a goal beyond itself.
Quoting Terrapin Station
Can you provide an example of one of your whim decisions? How did it appear in your mind and in what order?
Whim decision examples: what exact route to take while bike-riding or hiking. What album to listen to.
Now, if you haven't been on either route, or haven't listened to either album, then it would be safe to say that you don't know the outcome of your decision. You don't know what will happen when you listen to this album or that album, or take this route or that route. In this case, you wouldn't have a reason to choose one or the other. One might say that you don't actually make a choice at all. You just go with the first thing that pops into your head. It only seems like you had a choice because there were two options you were aware of. If you went with the 2nd thing that popped into your head, then that must mean that there was something about the first that you didn't like, in which case the decision was no longer a whim decision.
But then they wouldn't be whim choices. I'm talking about whim choices. The mental equivalent of rolling dice.
Nothing you say is relevant to what I have been arguing. You say that other external causes "have an influence" on your decisions. This is equivocal language. Do other factors exhaustively determine your will, or is it, to at least some degree, free and self-originating? That is the salient point.
If you want to say that the will is free and self-originating to some degree, then how is that possible if the causal order is closed and the will originates entirely within that order? How does the will get to be free when everything else, in the final analysis, is rigidly determined by external factors? That is what you would need to explain. Further bare assertions couched in ambiguous language are not going to help your case.
Whenever I feel strongly about a particular act, I will proceed despite the negative consequences. So, for instance, if something like moving a heavy object, which requires physical labour and involves imminent pain, is required, I will proceed despite knowing about the negative consequences. Very often we proceed despite knowing about imminent negative consequences. This is a power of the will; it manifests as a virtue called "courage".
Quoting Harry Hindu
That I consider something within my thoughts, doesn't mean that this particular thing "caused" my conclusion. When I think, I consider many different things before coming to a conclusion. None of them can be said to cause my conclusion.
Your claim that knowledge of a consequence causes me to behave in a particular way is categorically false. That is because the things I consider within my mind, are passive thoughts, ideas and beliefs. Being passive, none of them have any causal power. I move these thoughts around within my mind, they do not move me around, because they are passive and I am active.
But you are making a decision. You are making a decision to make a decision. You can either do nothing, or choose one of the other two options. Time is probably a factor, so you need to make a decision now, or it will be taken out of your hands. Because you have no reason to choose one option over the other, you resort to choosing one instead of choosing neither, because you do have a reason to choose to do something rather than nothing - rather than standing there unable to choose between two options when there isn't a reason to prefer one over the other. When someone tells you to hurry up and make a choice - a choice in which the outcome isn't known - you had better choose one, or you get neither.
None of what you said hurts my argument that consequences of past actions, and the consequences observed happening to others as a result of their behavior, are incorporated into the decision-making process. Not all the time, but much of the time. After all, the consequences are just information that is part of the decision-making process - information that sometimes gets left out because of limited space in memory, or because there wasn't enough time to make a decision in which you thought about the consequences. The idea of the consequences can change people's behaviors in the future. Praising and blaming others for their actions has no teeth in making them change their behavior. It seems that praising and blaming without the consequences is redundant, because most of the time the person knows they are the reason the action took place. They know what they did wrong, and they know that it was they who did it. In the case where one doesn't know what they did wrong, or doesn't know that the bad outcome was a result of their actions, what does praising or blaming do - just inform them that they are the cause of the bad outcome? That's it? How is that useful in either a world with free will or one without?
MU, I'm getting in the habit of responding to your posts by simply referring you to a post that I already wrote in this thread. This argument is easily handled by pointing you to where I talked about how consequences have to be harsh, or pleasurable, enough to make you change your behavior. Again, the goal itself is a consequence. What are the consequences that you want to follow your action - that the heavy object gets moved? That is the goal and if it hurts a little, then so be it, moving the heavy object is more important than experiencing a little pain. However, if you had a bad back then the consequences of the pain may prevent you from moving a heavy object. Letting the heavy object stay there, or getting someone else to move it, would be more preferable than throwing your back out. We all make decisions based on the predicted outcome of our actions and how it matches our goal in the moment.
Quoting Metaphysician Undercover
Then how do you learn anything, MU? What is it that makes you learn to do some things and not others? All of your actions have consequences. Aren't the consequences - the end result of your action, and how that matches your present goal - what you are choosing? If not, then what do you hope to accomplish when you make a decision?
I don't disagree with any of that. I'm not saying that they're not choices. It's just that they're not choices for reasons or biases, etc.
First, I wouldn't say that anyone is choosing to do something rather than nothing unless they specifically have that idea in mind.
The act of thinking is how I learn things.
Quoting Harry Hindu
There is something missing in your logical process, Harry. You seem to think that consequences magically cause people to make the decisions which they do. But that's not the case; it's the act of thinking which produces the decisions, not the consequences of prior actions. That this is true is very obvious from observing people with mental illness, or who have different types of mental deficiencies. Clearly, it is the thinking which causes the decision, not actual consequences of past actions, nor possible consequences of future actions.
There's no point in your attempting to now shift the topic of the argument. It has been what you have said that has failed to be relevant. You initially responded to my contention that determinism understood the way I have been outlining is incompatible with a notion of moral responsibility that is robust enough to rationally justify emotions and attitudes of praise and blame. Moral freedom understood in that context is necessarily outside the causal order, and is utterly incompatible with determinism.
And you need to read more closely; I have never said that I am arguing for this as my position. I am just trying to make clear to you why the logic of freedom and determinism (taken in their strong senses, as I have outlined them) makes them incompatible positions. Taken in their weak senses, both freedom and determinism are incoherent and fail to constitute what they purport to constitute anyway; they are capitulations. This is all merely a matter of logic, not of ontology, as you mistakenly think.
Ridiculous. If you make a decision to do anything, one of the options available in making that decision is to do nothing. Sometimes you end up doing nothing if you take too long to make a decision. Doing nothing also has its consequences.
Quoting Metaphysician Undercover
Yes, thinking about the consequences, or the outcome, of your actions tends to have an effect on the kind of decision you make. In order to think, you have to be thinking about something, MU. Your obtuseness is getting old, MU.
In my view it's ridiculous to say that decisions involve options that you didn't have in mind.
Possibilities can involve options that you didn't have in mind, but decisions do not.
Again, I'd agree that if you explicitly think something like "I can do this or do nothing" then you'd make that decision (well, as long as you do make a decision and don't forget about it or whatever). But I wouldn't agree that that's part of a "background" of a decision where you don't explicitly think that.
There seems to be a point in time where you are aware of your options, then a point where you are aware that these options have nothing that stands out for you to choose one over the other. Then, at some point, you must make a decision to make a decision, no?
Now that I think about it, it seems that in these particular instances, where we are faced with options none of which stands out from the others, other options come to mind, or else we'd be stuck in indecision. When faced with the options of two albums to listen to, when neither one stands out as the one to choose, eventually there comes a point where other options come to mind, like choosing a third album, or doing something else entirely (which could be nothing). It seems natural that other options come to mind in an instance of indecision, or at least the mind tries its best to find a reason to choose one or the other.
I don't buy the notion of subconscious mental content. So I don't buy that subconscious mental content could be a factor in making a decision.
The question was "How do you learn anything, MU?". The answer was "The act of thinking is how I learn things". Where's the problem?
Quoting Harry Hindu
Sure, I'm thinking about things when I think. But all these thoughts, and things which I am thinking, are inside my mind, and just part of my act of thinking. Why would you think that something outside my mind, such as "consequences", has any causal power over my act of thinking? That makes no sense to me, because only thoughts enter into my act of thinking. So thoughts about consequences may enter into my act of thinking, as part of the act of thinking, but the consequences themselves don't enter into the act of thinking and therefore do not have any causal power within the act of thinking.
I'm not going to bother answering irrelevant questions based on misunderstanding. When you learn to read carefully we might be able to begin a discussion.
For one, try answering that question that followed right after that one.
Quoting Harry Hindu
Second, thinking and learning don't necessarily correlate. You can think of imaginary things, or just colors. What are you learning there? Don't you learn by experience - like the experience of doing certain things and observing the results?
Quoting Metaphysician Undercover
As I have already stated, the consequences in your head are predictions of the consequences, not the consequences themselves. Who would ever say that ALL the consequences in your head exist out in the world? It seems to me that if determinism is true, then only one consequence exists outside your head, which may or may not be one that is predicted in your head. That explains why you sometimes fail to predict the consequences - which, ironically, are the ones you learn the most from.
Correct. A computer making a "decision" is only metaphorical--it's a way that we think about it, anthropomorphizing it, to make it easier for us to conceptualize.
Decisions require conscious options. We pick one of the options we're conscious of.
You have only yourself to thank for the outcome here, dude.
It's the same question with different wording, so it has the same answer. My thinking is what makes me learn some things instead of others.
Quoting Harry Hindu
Just because I learn from thinking doesn't necessitate that all thinking results in learning. Only some thinking results in learning.
Quoting Harry Hindu
No. I learn from the thoughts which interpret the experience, not from the experience itself. Experience is very fleeting, as time flies by rapidly. I must make a conscious effort to remember and review my experience in order to learn from it.
Quoting Harry Hindu
The consequences "in your head" are not consequences at all; they are imaginary. Imaginary things, like predictions, do not have any causal power.
Quoting Terrapin Station
So, only humans make decisions? Are we anthropomorphizing other organisms that seem to behave in ways that imply that they make decisions too? Why is it useful to use the term "decision" as a metaphor for what the computer is doing when processing IF-THEN-ELSE statements? What is the exact process of making a decision? How does it proceed in time?
Quoting Terrapin Station
No definition I have found mentions that it requires conscious options. Besides, introducing the word "conscious" just opens a big can of worms and complicates things considerably.
Merriam-Webster says:
decide:
1
a : to make a final choice or judgment about
b : to select as a course of action
c : to infer on the basis of evidence
Only animals that have consciousness - that can have options in mind and then choose one - can make decisions. Whether that's only humans or not, I don't think we know for sure. I assume that a number of animals with complex brains not too far off from human brains can make decisions, though.
Quoting Harry Hindu
That's kind of like asking why metaphors are useful in general. Aren't you familiar with metaphors in general?
Quoting Harry Hindu
Are you asking for a blueprint of just what goes on in the brain? Because we don't know that very well yet.
Quoting Harry Hindu
What does that have to do with anything? Are you thinking that I'm doing a dictionary survey for you?
Sorry to have to disillusion you, but it wasn't my behaviour which caused any of this, it was your interpretation of my behaviour which caused this. Your mind created this prediction, not my behaviour. Another person would have interpreted my behaviour in a completely different way, producing a completely different prediction, and that's why I think it's all a creation of your own mind.
How could I predict your behavior without having first observed it? You first behaved some way for me to interpret and then use that interpretation to make future predictions of your behavior. If I had never observed your behavior, I wouldn't be able to make a very good prediction. I'd just be making an educated guess of your behavior based on my experience with other people.
Quoting Metaphysician Undercover
Exactly. It seems like you're finally coming around. Predictions of some outcome have a causal influence on your actions. Different predictions can produce different actions. How do you explain how the same behavior can produce different interpretations, which in turn produce different predictions of the outcome?
Then it isn't anthropomorphic to describe other non-human systems as making decisions.
Quoting Terrapin Station
Ok, then why are metaphors useful? Isn't it because there is an element of truth in them?
Quoting Terrapin Station
No. I'm simply asking you what it's like for you to make a decision. Give me the process, step-by-step.
Quoting Terrapin Station
For one, it's a made-up definition. Second, it's a more complicated definition. Like I said, you open a can of worms when using the term "conscious" - something that hasn't been clearly defined either. Why don't you simply try answering the question of how you make a decision, so we can move this discussion forward?
It is if people are thinking of it as a metaphor for humans making decisions.
Quoting Harry Hindu
That's fine to say, but it doesn't make a metaphor not a metaphor.
Quoting Harry Hindu
Oh. Well, it depends on the decision. It's not as if they're all the same. For whim decisions, it's simply like mentally throwing dice or hitting the button on the random number generator at random.org. For other decisions it's much more of a process.
Quoting Harry Hindu
All definitions are. Definitions are not found under rocks. People make them up.
Huh?
Quoting Terrapin Station
That's fine to say, but it doesn't mean that a metaphor isn't saying something useful, and therefore truthful.
Quoting Terrapin Station
I don't understand "mentally throwing dice" or "hitting the button on the random generator at random.org". Are you saying that you are visualizing rolling dice or hitting some button on a website when making a decision? Why don't you explain the process of one of these other decisions that you make?
Quoting Terrapin Station
I never said definitions are found under rocks. They are found in dictionaries.
Sure we can make up whatever definition of whatever string of symbols we want, but then in order to communicate, you'd need to use the definition that most people understand, which is the one in the dictionary.
It depends on whether people are thinking about it as a metaphorical human (at least in that respect).
Quoting Harry Hindu
Sure, it's just not literally a decision, or it wouldn't be a metaphor.
Quoting Harry Hindu
No . . . it's that it's simply the equivalent of doing that. Maybe you don't really make any whim decisions? That's possible.
Quoting Harry Hindu
You mean for non-whim decisions? I can do that, but you know the process of that already. What I wanted to express to you was that some decisions are just whim decisions (for me at least).
Quoting Harry Hindu
Some are, sure. But that's simply reporting common definitions that people made up. Surely if you're a philosophy fan you're aware of philosophers specifying at least somewhat idiosyncratic definitions. You can communicate just fine as long as you specify what the definition is.
Yes, you observed my behaviour. Then you have a memory of my behaviour. The memory is an interpretation. You may use this interpretation when you make your prediction. There are many other interpretations which you hold, that you also might use in your prediction. Since you must choose which interpretations to use in your predictions, I don't see how any particular interpretation can be said to be causal. You choose the interpretation, so the proper "cause" is your choice. Can you explain how you interpret an interpretation as being causal, when each interpretation must be chosen?
Quoting Harry Hindu
A prediction is nothing more than a complex interpretation. I must choose which predictions to believe in. How can the prediction itself be causal, when it is chosen?
Quoting Harry Hindu
I explain this easily: it's a matter of choice. How do you explain it, if you affirm that interpretations are causal, when interpretations are chosen?