Causality, Determination and such stuff.
I have another Anscombe article! Inevitably both a joy and a frustration. This one is Causality and Determination.
So we procure for ourselves a Galton box:

The standard philosophical prejudice is that given an accurate enough account of the position of the box and a given ball, a competent physicist will be able to tell us which of the bins across the bottom the ball will land in.
And in this sense the path of the ball is determined.
But of course no one could determine the final resting place of the ball. Even the smallest error in the initial positions will be magnified until it throws out the calculations.
Anscombe wrote this in a time of only nascent chaos theory, which could only serve to amplify her point.
The notion that the universe is determined fails.
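A toy simulation (my own sketch, nothing from Anscombe's paper) makes the error-magnification vivid: replace the pegs with a deterministic but chaotic deflection rule, and two balls released a hair's breadth apart can end up in different bins.

```python
def drop(x, rows=20):
    """Deterministically drop a ball starting at horizontal offset x (0 < x < 1)."""
    bin_index = 0
    for _ in range(rows):
        x = 4.0 * x * (1.0 - x)           # chaotic logistic update at each peg
        bin_index += 1 if x > 0.5 else 0  # deflect right when x > 0.5
    return bin_index

# An initial error of 1e-7 typically lands the ball in a different bin,
# even though every step of the rule is perfectly deterministic.
print(drop(0.2000000), drop(0.2000001))
```

The dynamics here are fully determined, yet prediction fails for any observer whose knowledge of the starting position is even slightly imperfect.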
It's sweet.
I must have read it long ago, because it mentions Feynman's bomb, which I have used hereabouts previously. A bomb is attached to a geiger counter in such a way that it might explode; but it might not.
What, if anything, caused the explosion? And hence we find that there are different causes.
Quoting Banno
Quoting Banno
You're basing your argument on error. Do you mean that if there were no error, the physicist would know where each ball will land? If yes, then determinism is true. If no, then that's real indeterminism. Can you prove it?
That word never really appealed to me anyhow. Neither 'indeterminism.'
So far so good, I must say but, what exactly does this freedom, this free will, provide, by way of benefit, to us? At this juncture what must be mentioned is the idea of responsibility - if we possess free will then, it's claimed, we're completely accountable for our actions. Now, what exactly are actions? What is an action? Isn't an action just a link in the causal web? Actions are actions to the extent that they're part of causality. Without causality, there can be no such thing as an action. Free will is important only so far as we wish to control causality via our actions.
In other words the idea of free will involves being both not a part of the causal web (choices are not effects) and also a part of the causal web (our actions have effects). To want to have it both ways is never a good idea unless... :chin:
So... our choices are accidental? Random?
Quoting TheMadFool
The argument would be something like "I want someone to blame, and I can't do that if you did not freely choose, hence you have free will"? That's, as you imply, a poor argument. Nor, I think, is it cogent.
Yes, free will theorists seem to want it both ways.
The notion that a competent physicist could predict the outcome of a single ball drop fails if there is any uncertainty, however small, about the initial state of the ball. But chaotic systems are nonlinear deterministic systems, sensitive to initial conditions. The outcome is knowable in principle but not in practice (a technological limit).
The same distribution above is also seen in the electron single-slit experiment. In that instance, even if the initial wavefunction were known perfectly, the outcome of where the electron ends up is unpredictable. Depending on your interpretation, that is non-deterministic.
Are you aware of Physics without determinism: Alternative interpretations of classical physics? Mentioned in Has physics ever been deterministic?.
The upshot seems to be that determinism is a metaphysical assumption from which the classical determinist view of physics follows, and that this assumption can be removed with suitable mathematical alterations.
I don't have access to the actual paper.
The determinist metaphysical assumption is very well expressed as Newton's first law. This law assumes as a given, the temporal continuity of any describable set of conditions, requiring a "force" to change anything. The prerequisite "force" is the cause of change, which the determinist latches on to. How do you expect to remove the requirement of a force as the cause of change, without denying Newton's first law?
Cheers, I'll take a look around. Although I guess there's no guarantee of finding it without determinism. :wink:
Preprint here: https://arxiv.org/abs/2003.07411
I agree that that conclusion fits with Anscombe's paper.
If you write down a bunch of equations involving a time variable, take an initial state at t=0, and evolve the system forward to a later time T, determinism there would just be that the complete specification of the t=0 state entails a single possible state at time T. "That" state is logically entailed by "this one".
Determinism as Anscombe seems to characterise it has a much broader scope; it isn't enough that a completely specified t=0 state entails a unique state at time T within a model, determinism has to guarantee this complete specification outside the model too. Every state must be completely specified by a trajectory taken in accord with a hypothetical physical law. The determinism spoken about is the panoptic vision of Laplace's demon: if we know all now, then we know all.
We can never be in the position of Laplace's demon; that's a fact of life. So we can't have recourse to Laplace's demon and begin conjuring up hypothetical scenarios that take a hypothetical completely known input state and propagate it through our knowledge to a completely known output state.
Nevertheless, if we have a good model that is deterministic, it works predictively. Such a model doesn't care if reality is watched over by Laplace's demon or not.
We can also drop that the model is deterministic: if we have a good model, it works predictively.
Isn't that right?
Think so! I was writing in the context of the claim: "If there were no Laplace's demon, then our deterministic models wouldn't work", the point being it doesn't matter for deterministic model functioning if there is a Laplace's demon or not.
The belief in determinism is not universal among philosophers. Ever read Karl Popper?
How does this premise lead to the conclusion below?
Quoting Banno
Just because we cannot measure all parameters necessary for predicting the future, doesn't mean that all parameters aren't there to determine the outcome.
We can argue for quantum randomness affecting the larger scaled world, but it is probabilistic, and the probable outcomes become infinitely more deterministic as soon as we leave the Planck scale. Saying that something is random when we don't have enough ability to calculate all parameters doesn't mean it is random.
And if we include ourselves into the equation, we are far too big in scale to be free of deterministic randomness.
Take the idea of a quantum dice. The illusion we have is that if we make choices based on this, we are neither free nor determined, but actually acting out of randomness. However, this is also an illusion since the quantum randomness gets determined by measuring it. So surrounding parameters still determine the outcome, combined with the apparent choice of throwing the dice and making choices based on an outcome.
The likely truth of the universe is that it's based on probability on a scale so small that the order created out of it becomes deterministic.
I.e the universe as perceived and measurable for definitions on how it works becomes deterministic and anything outside of that is neither perceivable nor measurable. It becomes rather a reality vs unreality. To describe our universe is to describe it with and through reality. Any attempt to describe it through unreality fails to be relevant as it's not part of reality both in perception and in terms of ability to be measured.
What are we then talking about when talking about quantum randomness if we can't talk about it, perceive or measure it without destroying it into a deterministic reality? Unreality is unable to be compatible with reality, we cannot define it and it cannot define reality. Therefore quantum randomness can be concluded to exist through math, but it cannot be part of reality as it's not a component of what makes up the universe, it's outside of it.
A bit. What do you have in mind?
I'm not sure if it's that profound, although I confess I haven't finished it yet. But looking at his definition of infinite precision, it disallows infinite epistemological precision. When a classical physicist says the initial conditions can be known in principle, she means zero epistemological uncertainty in principle. This is a practically unattainable limit, but that's what "in principle" is understood to mean.
Del Santo's definition, all his own afaik, is that infinite epistemological precision means that the number of decimal points has no finite lower bound, but may not be infinite. This is equivalent to saying that the error may be arbitrarily small, but not zero, which, to me, says it cannot be arbitrarily small. His infinite precision is that attained by infinite technological progress which always approaches, but never reaches, zero uncertainty. It is in itself a reasonable definition, but he is using different language to cast doubt on determinism rather than using argumentation within the same language.
If you couple this arbitrarily small but nonzero uncertainty to a chaotic time-evolution, it is true you cannot predict the outcome of an event. But it is true because you could not specify the initial conditions exactly. I'm at a loss as to what we can learn from this.
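A rough numerical sketch of that coupling (the r = 4 logistic map is my stand-in for any chaotic evolution; nothing here is from Del Santo):

```python
def logistic(x, n):
    """Iterate the r = 4 logistic map n times: a simple chaotic evolution."""
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
    return x

eps = 1e-12  # arbitrarily small, but nonzero, initial uncertainty
for n in (10, 20, 30, 40):
    gap = abs(logistic(0.4, n) - logistic(0.4 + eps, n))
    # The gap roughly doubles each iteration until it saturates at order 1,
    # at which point prediction has failed entirely.
    print(n, gap)
```

Shrinking eps only postpones the failure by a few iterations; it never removes it, which is exactly the "in principle vs in practice" gap.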
In context, I mean that quantum randomness doesn't affect classical physics since classical physics deals with large determined sizes. Quantum randomness boils down to predictability as soon as the scale of the system is above that required for random operation. I.e. we have a random system that defines the properties of an object, and those properties don't change since the probability of them being broken is so low it could be considered infinitely improbable. Classical physics can't define this randomness since classical physics breaks down at those scales, but that doesn't mean we can't measure deterministic outcomes if the ability to measure is powerful enough, since classical physics calculates the actual reality of our universe. Quantum randomness is outside of this reality since it exists on a scale where our reality does not work and cannot work because of it.
It's the same as with black holes. Spacetime breaks down within it and while we can measure, calculate, and speculate about the reality within a black hole, it is impossible to define it with measurements of our reality. Inside a black hole, there's unreality. The same goes for quantum randomness, it is outside reality. Both the black hole and quantum randomness affect reality, but they both become deterministic when "entering" our reality in the form of causal events.
When we make a measurement, say the velocity of a falling object at t=1, need we assume that there is indeed a rational number that represents that velocity with infinite precision? Are we better off, epistemologically and ontologically, making no such assumption?
Edit: That is, why assume that we could have infinite information about our measurement...
Is it possible that as the error is reduced, the accuracy of predictions also increases commensurably?
If yes then doesn't it imply, at least in a theoretical sense, that with zero error we can make perfect predictions? This then goes to show that what we think of as indeterminism is a product of our ignorance rather than something that inheres in nature.
Quoting Banno
Is it an either-or choice between determinism and randomness? Is there no third, more palatable, alternative? Is it possible to retain determinism as it applies to our actions but eliminate determinism as regards the making of choices, all the while avoiding the world of randomness? This is what I reckon I described as having the cake and wanting it too - we want to be causes, something that endorses causality, but we don't want to be effects, something that rejects causality. If I carry this to its logical conclusion, we all want to be what in theological circles is known as a prime mover. In a sense then humans want to be god or, at the very least, god-like. If free will is real then we are all prime movers in our own right, each capable of initiating a causal chain that has no antecedent cause.
I went off on a tangent there but returning to the question of whether randomness is entailed by rejecting determinism and how this randomness affects free will, I'd say that randomness here doesn't apply to the act of making choices but rather to how attractive or not the choices are. Determinism seems to operate on the desirability/undesirability of options and, within its framework, it's claimed that when a person makes a choice s/he does so because s/he has no control over her preferences, preferences that decide how appealing or unappealing a choice is. The act of choosing itself is not considered as part of this deterministic argument.
If the above is correct then randomness implies that all available options become equally desirable/undesirable, no choice is either more appealing or less appealing than the other. If then a choice is made, it must be that this occurs in the complete absence of any and all influences that can affect making choices i.e. the notion of free will remains unaffected by randomness. In short, randomness doesn't mean our choices are random but that available alternatives in a given situation become all equally desirable.
The situation here is analogous to the following:
There's a person X who's made to choose between two cold drinks, say Pepsi and Coke.
At one time, he's fed with information regarding how beneficial Coke is compared to Pepsi. This information plays the role of determinism by influencing the choice he'll make when offered a drink.
At another time, no information is provided to him and both cold drinks become equally desirable/undesirable as the case may be. This is when determinism is false, and randomness is true, as there's nothing to influence his decision on which drink he'll choose.
Note that the act of choosing itself is not part of determinism or randomness. All that determinism does is make choices appealing or unappealing. Ergo, randomness simply equalizes the appeal of the options given and this being done if a choice is made then that choice is free and free will exists.
When we are faced with a choice, there is (seems to be) a pause in the causal chain as we consider the options available which, to me, suggests an interruption in the causal chain i.e. the act of choosing itself seems to be, in some way, not part of the causal chain. If that's true then determinism affects only how attractive/unattractive options are.
If you don't mind, I'd like your views on this. Thanks. Something doesn't seem right.
Maybe I fail to see how they are not connected? Randomness in causal physics only exists when we lack the ability to measure all possible parameters. The only randomness that exists which breaks determinism is at the quantum level, a level that is incompatible with reality and therefore can't be counted as part of it, just as we don't count the breakdown of spacetime in a black hole as part of our reality, but as existing outside it.
So if our reality is measured with classical physics, determinism is not broken. If we lack the ability to spot errors, that doesn't mean determinism breaks, only that our tools did.
One way of phrasing Laplace's demon is "If the initial state within an arbitrary system was completely specified, it would evolve into a unique state at any given time point". It's an implication; complete specification of initial state => complete specification of trajectory.
But we live in a world where complete specification of the initial state is practically impossible for almost all flows over time. That is, it is a fact that they are not completely specified for the most part.
So the determinism of Laplace's demon is a hypothetical; if complete specification of input state, then complete specification of output state.
That determinism of implication doesn't hold for almost every phenomenon because we know it's practically impossible to completely specify the input state that led to its emergence. Its antecedent is false, so it is useless as an implication; it ceases to apply. What remains of Laplace's demon as an ontological thesis when it's rendered merely a hypothetical? We can't feed almost any system into its defining implication. So what systems are left?
That is what is questioned by the Del Santo article.
It's also true that a fair die has probability 1/6 of landing on each side. Randomness as a physical property of systems, rather than an epistemic limitation on them, is something people really resist. But I think this is tangential to the determinism spoken about in the OP article.
He makes the argument that classical physics is non-deterministic in The Open Universe: An Argument for Indeterminism (a compendium of articles written in the 70s).
I don't follow this conclusion from the Galton box example. The epistemological claim is obvious: I don't know where the ball will land. The metaphysical claim is not obvious: the ball's path is not predetermined.
With God like knowledge (even if we posit is impossible for any human to attain), we could know where the ball will land.
None of this is to suggest that metaphysical indeterminism doesn't exist at the quantum level. It's just to suggest the Galton box isn't an example of it.
I can predict with 100% accuracy the ball will land within the Galton box, and not, for example, on Mars. A newborn infant can make no such prediction. The point being that my ability to predict something about the ball, however limited, does speak to a metaphysical discovery about causation. I do recognize however that my limitations about what I might be able to predict about the ball are due to my infantile knowledge of the forces acting upon the ball.
One argument presented in the article (if I have understood it aright) is that a finite volume of space can only hold a finite quantity of information. But if a measurement within that volume were made to infinite precision, that measurement would consist in infinite information. Hence, infinite precision is not possible.
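A back-of-envelope version of that information argument (the framing is mine, not the article's): pinning one coordinate down to n decimal digits costs about n x log2(10) bits, so letting n grow without bound demands unbounded information in a finite region.

```python
import math

def bits_for_digits(n):
    """Bits needed to distinguish 10**n equally likely decimal values."""
    return n * math.log2(10)

print(bits_for_digits(10))  # about 33.2 bits for ten decimal digits
```

If a finite volume can store only finitely many bits, then for some n the measurement simply cannot be recorded, let alone made; infinite precision is ruled out.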
Problem is that it's inaccessible to non-members. But I see your point. The abstract points to an attempt at solving the bridging problem.
But I was mainly objecting to the conclusion you made "The notion that the universe is determined fails."
Which only makes sense if there was a high probability of large consequences of random operation, which doesn't really exist in our reality. We only conclude something like that based on what we yet cannot measure or do with physics. So at best it's a low probability conclusion that it fails, but more likely that we can measure the existence of it, but not yet understand it.
https://arxiv.org/pdf/2003.07411.pdf
That's absurd... no competent physicist would even attempt such a thing. They would use a demon.
I'm more sympathetic to his alternative, operational definition of classical physics, not least because history has moved on and we understand how deterministic macroscopic behaviour emerges statistically from non-deterministic fundamental behaviour, and indeed exact values of e.g. momentum are, in the initial conditions of a quantum system, modified by complex numbers in weighted sums, i.e. are not real in phase space.
It was more the "Has physics ever been deterministic?" question I'm feeling isn't really addressed in his treatment. Redefining epistemological certainty in such a way that the answer to a question that was previously "Yes" is now by definition "No" isn't so profound.
The idea of a trajectory through phase space does not depend on infinite epistemological precision anyway, and I feel he equates two statements that aren't equivalent. I can define a trajectory z(t) = y(t) = x(t)=t for t>=0. I can't specify to infinite precision in decimal or rational format the position at t=pi, but I can specify it precisely at any t that is representable in decimal or rational format, so yes I can specify points in phase space to infinite precision. But it's not a real trajectory of a real object! he might say. No it's not: it's ideal. If his argument applies only to real trajectories, he cannot generalise to ideal ones as he does.
Real-valued epistemological uncertainty, rather than ontological uncertainty, is already incorporated into statistical physics, ensemble QM, and others. What he seems to be proposing is a different way of doing it, and the measure of that is in its usefulness I think.
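The ideal-trajectory point above can be made concrete (my rendering of the poster's x(t) = y(t) = z(t) = t example, using Python's exact rationals; nothing here is Del Santo's formalism):

```python
from fractions import Fraction

def position(t):
    """The ideal trajectory x(t) = y(t) = z(t) = t, evaluated exactly."""
    return (t, t, t)

# A rational time is exactly representable: the phase-space point carries
# no rounding error whatsoever, i.e. it is specified to infinite precision.
t = Fraction(355, 113)  # a rational value near pi
print(position(t))
```

The one concession is t = pi itself, which has no finite decimal or rational representation; for every representable t the ideal point is exact.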
Of course; but which one: Maxwell's or Laplace's?
Isn't that covered by 'if'? Laplacian determinism also tells you that if you cannot know the initial state to infinite precision, you aren't guaranteed to know the final state to infinite precision.
Quoting fdrake
But epistemology isn't everything. That we may never know the initial state of a system does not prevent it from having a well-defined value ontologically. The use, I think, is illustrated not by removing epistemological certainty but by removing ontological certainty. Laplace's demon still holds: it's just that the condition is always false. The extent to which QM predicts very different kinds of results to classical mechanics is the extent to which the ontological thesis seems substantiated to me.
But if the point is only that it was wrong in hindsight, well... yes! Or, at least, maybe. Ontological uncertainty isn't specifically anti-deterministic. The difference is that states corresponding to some measurement (e.g. momentum) are multi-valued, which means that trajectories through phase space are also multi-valued. So long as you don't then insist on a way of making it single-valued (e.g. wavefunction collapse), everything is deterministic (i.e. the Schrodinger equation is deterministic), and that seems to me the likely mistake.
This is different from the game illustrated in the OP. We cannot predict where the ball will go at any point if there is any epistemological uncertainty in the initial conditions. However we can visibly see that the actual trajectory is fairly well-defined as it falls. (Not infinitely precisely, but we can see the narrow range of possible trajectories consistent with the uncertain path.)
Depends on the discussion... the conversation about epistemic limits reminded me a lot about Maxwell's demon.
They're distinct... Laplace's demon has free omniscience... Maxwell's demon's knowledge has costs.
Yeah that's fine, and it's interesting from Sec. III onwards. I have no issue with it. I'll be mulling it over for a while, so thanks for bringing it to our attention.
Wrote a comment about randomness as internal to systems a long time ago here.
Quoting Kenosha Kid
Knowledge is everything to Laplace's demon. If such a state of knowledge is impossible in some circumstances, Laplace's demon couldn't function as described in those circumstances. We already know it is impossible in most circumstances.
Quoting Kenosha Kid
Say we've got the following system:
(1) A time variable [math]t[/math] ranging from 0 to [math]\infty[/math].
(2) A normal distribution [math]N(0, \sigma^2/t)[/math]
(3) A sampling operator [math]N(0, \sigma^2/t) \rightarrow \mathbb{R}[/math]. What it will do is generate a sample from the distribution at time [math]t[/math].
If we were to observe that process, the measurements would come from the sampling operator, not from the deterministic evolution of the distribution. Even if the evolution of the probability over time is fully deterministic, that doesn't tell us that the sample paths (which are the events that actually happen) arise from a deterministic law. It's a baby and bathwater thing - yes, the distributions of time-varying random processes can evolve deterministically in time, but no, that does not entail that the events which happen due to them are determined. It's more apt to weaken "determined" to "constrained" regarding the observed events of deterministically evolving probabilistic laws, I think. A deterministic time evolution of a probability function still only constrains its realised sample paths. Laplace's demon has absolutely nothing to say about the sampling operator, only the time evolution of the distribution it acts upon.
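A minimal sketch of that system (the names and the fixed seed are mine): the variance sigma^2/t shrinks by a deterministic law, while the realised values come from the sampling operator.

```python
import random

SIGMA = 1.0

def variance(t):
    """Deterministic law: at time t the distribution is N(0, sigma^2 / t)."""
    return SIGMA ** 2 / t

def sample(t, rng):
    """The sampling operator: one realised value drawn at time t."""
    return rng.gauss(0.0, variance(t) ** 0.5)

rng = random.Random(42)  # fixed seed, purely for reproducibility
for t in (1, 2, 4, 8):
    # The variance column is fixed by the law; the sample column is not.
    print(t, variance(t), sample(t, rng))
```

Two runs with different seeds agree on every variance and (in general) on none of the samples, which is the distinction between the deterministically evolving distribution and its realised sample path.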
True, but the whole point of Laplace's demon is that it knows what we cannot necessarily know. Our uncertainty doesn't imply its uncertainty. Ours is a technological limit; the demon would be constrained only by the ontological. Otherwise there's no point considering it in the first place.
Quoting fdrake
Yes, the trajectory I was describing would be the trajectory of the distribution, not the line joining the measurements. This is akin to the Many Worlds interpretation of QM, wherein a particular (non-deterministic) measurement does not change the (deterministic) trajectory of the body under consideration (minus the fact that the probability of a given future measurement does not depend on your measurement history, which is a statement about you rather than the trajectory).
The many-valued, deterministically-evolving state I had in mind was like the distribution, not the measurements, which are discrete in time (I presume). However, we can be quite sure that physics isn't like that. If a body is falling from a tower of height T and we measure it first at height 0.9T then later at 0.5T, we can be quite sure that a third measurement won't be at 0.7T, for instance, unless something other than gravity was acting.
Exactly! The majority of the time something else is in play too. When we imagine a system, it's usually determinate - it has a system of equations that describe its evolution. Then you try and measure its parameters and that sneaky [math]\epsilon \sim N(0, \sigma^2)[/math] crops up at the end of the line. In the lab it's measurement error. Outside of controlled circumstances, it's pretty much everything. That epsilon is individual level variation.
If you assume that there is an actual physical value for each item in the Galton box; and that this value could be known.
There is, as per Hume, no contradiction in this; the description is coherent; we know what has been said.
Causation is not logical necessity.
Here we might better follow Hume than Kant.
Your assumption and the article's assumption are that the universe is simple enough for extremely intelligent humans to predict an extremely complicated universe. This article was written by a simpleton. Much of quantum physics is disagreed upon by many physicists. Not all quantum physicists agree on quantum physics. People very often assume that theoretical physicists always make the right judgements.
If you or I don't 100% understand the math and the lab results behind a scientific theory, you and I (right, wrong, or indifferent) are putting our faith in scientists. Belief, faith, and partial understanding of a concept are all forms of gambling.
He's got a point. A few times I've seen it written on these forums that Newton's first law is the law of causation. Of course, it isn't. But this is so hard to argue when those who advocate causation will not agree as to what it is.
You're new. You will need to present more than this barely articulate drivel should you wish to receive responses here.
How is it inarticulate? You know full well what is meant and most people can understand it. Why don't you want to understand what i wrote?
Are you honestly going to argue about the definition of causation? Do you believe in objective truth?
Actually, yes, that's pretty much what this thread is about.
This, @Hanover, is the issue in Anscombe's article, so well articulated above by @tim wood.
I think you want to believe in free will. The only way free will exists is by some cosmic miracle. Free will is a desire of people who want pseudo-god hood and want to look down on people less fortunate than them.
Why would you think that? What have I said that would lead to that conclusion? Or are you just making assumptions?
That doesn't show that determinism fails, it shows the limits of the predictive method used. This is just increasing the complexity of the calculation to create the illusion that it isn't determined (because we can't show how, which is a fallacy).
A box of much simpler design would show determinism quite obviously. We could increase the complexity of the box and continue showing how the ball's path is determined right up until the point where the complexity grows beyond our ability to predict, but that doesn't show determinism failing, it would only show the ability to predict as failing.
Ok. You might be playing games but as long as you claim not to believe in free will, i'll let it go. People should make good decisions because we are all animals, not because it will quickly propel them to some false god hood. Reality is extremely complex. I guess we do agree on some things.
On review, Anscombe seems to me not to be saying that even if we had perfect information we could not predict the landing place of the ball, but rather that since we do not have perfect information, we cannot do so.
While a box of much simpler design would show determinism quite obviously, a box of more complicated design would foil our best efforts. Just as our poor physicist completes their calculations, we add an extra row of pegs...
That is, determinism ceases to be a physical Law so much as a metaphysical desire on the part of certain philosophers.
Add to this the Russell article and the physics posited by Del Santo, and laws of causation look more like wishful thinking.
If nothing else, they ought not be taken for granted.
Ok, then I would only point out how little is actually being said there, seems pretty obvious at that point.
Quoting Banno
Well it could be both those things.
So there are 3 things at play: the knowledge of how something is determined to go (which we can do pretty well on pretty simple examples), the actual things that determine the way things go, and the range of determinate factors we are actually able (and/or not able) to track.
It seems to me only the middle one is what determinism is about. The others are so much more generic and tangential as to fall under a different purview.
Quoting Banno
Agreed.
Quoting DingoJones
Hmm. The first and third concern epistemology. Del Santo's Principle of Infinite Precision characterises it thus:
The second concerns ontology:
Del Santo's reply to the ontological point is, in part,
Quoting Banno
Perhaps the state of a particle in phase space might be given not by a point but by a volume.
If you read the ".999...=1" thread you'll see that Banno believes that a small nonzero quantity actually is zero, and considers the alternative as lunacy.
@Kenosha Kid, indeed, before you enter into a discussion with Meta, do take a look at the 0.999... = 1 thread.
Sure, but isn't 2 the only one determinism specifically entails? That's not mutually exclusive with what you said.
Yeah I've read it. Del Santo's definition pertains to a finite number of decimal points, however large. 0.999... has an infinite number of decimal points, and so is identically 1.
Most of the time logic is used and appealed to directly, without reference to extensions, because it is used for asserting normative statements. The logical description of an ideal electronic circuit is comparable to the expression "Tidy your room! because i said so!". This ideal use of logic is comparable to ethics and it makes no sense to speak of epistemic doubt here. In contrast, if a circuit description was thought to universally describe the operations of real world electronics, this is obviously a highly doubtable proposition.
Physical laws are the joint expression of normative sentiment and physical description, and so they aren't pure propositions in the ideal philosophical sense. The normative part is expressed by the use of universal quantifiers that aren't falsifiable and which say "Every X is a Y". But they don't need to be falsifiable, for their purpose is political, namely to assert scientific and cultural policy in the same way as the electronic circuit design that implicitly asserts "Intel should make chips this way".
Getting back to the original discussion, consider how one determines measurement precision. Isn't its very definition ultimately in terms of the reproducibility of experimental results? In which case, if repeated experiments fail to reproduce results, then by definition measurement precision is lacking.
The problem though, is this:
[quote=https://fqxi.org/data/essay-contest-files/Del_Santo_FQXI_essay_indete.pdf] However, the principle of infinite precision is inconsistent
with any operational meaning, as already made evident by
Max Born.[/quote]
So Del Santo suggests that the tiny uncertainty hidden by a faulty application of the principle of infinite precision, in an unstable system, would increase exponentially with time, allowing for the appearance of indeterminacy.
The idea is that any initial position (inertial frame of reference) cannot be represented with infinite precision, so the notion that it might be represented in this way ought to be dismissed. Therefore to have the most accurate representation, which is consistent with the real possibilities of representation, we ought not try to represent it with infinite precision.
[quote=https://fqxi.org/data/essay-contest-files/Del_Santo_FQXI_essay_indete.pdf]
As we will show in the next section, one can indeed envision
an alternative classical physics that maintains the same general laws (equations of motion) of the standard formalism, but
dismisses the physical relevance of real numbers, thereby assigning a fundamental indeterminacy to the values of physical
quantities, as wished by Born. In fact, “as soon as one realizes that the mathematical real numbers are not really real, i.e.
have no physical significance, then one concludes that classical physics is not deterministic.” [13].[/quote]
Considerations of some mysterious "ontological certainty" are useless, because we humans will never be able to access this magic realm of "things in themselves". We're better off assuming hazard exists, because for all intents and purposes, it does exist for us. It's part of our condition.
If you really need to think in terms of what hypothetical demons and gods do, consider that if God exists, He could well have made his creation open, evolving and able to surprise even Him. Otherwise what's the fun of creating anything?
Consider that any demon predicting the whole future would also need to predict what he himself will think in the future (assuming the demon is part of the universe)... and that if he does so, he will think it now and not in the future!
Anyone proposing that the whole history of the universe was exactly 'determined' at the time of the Big Bang + 1 second -- including me writing this sentence from a Roman bar today -- better try and prove it, because that's quite an extraordinary claim...
More word games, Banno?
If it is a given that the account of the position of the box and a given ball are accurate, then why would there be errors? If it were accurate, then that means that there are no errors, so you're contradicting yourself.
And then you contradict yourself again by asserting that even "the smallest error" determines that the "calculations will be thrown out".
Quoting Banno
You actually showed that it doesn't because you used reasons to determine your conclusion.
Many years ago, I visited the Seattle World's Fair, and came across a large display of a Galton Box or Quincunx, [image below]. The adjacent sign says : "when the falling balls are observed one-by-one the path of each is unpredictable, but taken many by many they form an orderly predictable pattern". This is a graphic illustration of order within randomness. The overall bell-shaped pattern at the bottom is predictable, and seems to be predestined by statistical laws of Probability. However, one of the white ping-pong balls was painted red, and it landed in a different location after each randomized ball-drop. That exception to the rule seems to imply that there is Freedom Within Determinism.
Obviously, ping-pong balls have no freewill, but the Galton Machine reveals a tiny glitch in statistical determinism : there are exceptions to the Normal or Average pattern. Therefore, philosophers who interpret Physics as Fatalistic are wrong. Instead, I take this graphic illustration of Probability to mean that there is a possibility of Individual Freewill Within General Determinism. The future course of the physical universe was indeed fixed at the moment of the Big Bang, with all laws & constants established, and with an unbroken chain of cause & effect. And yet, self-conscious reasoning humans seem to be able to manipulate the laws of Nature to their own ends. Scientists call it "Technology", but I call it "Freewill" : the ability to deny Destiny. :nerd:
Rationalism vs Fatalism : http://bothandblog2.enformationism.info/page67.html
Determinism : “[i]Determinism is a long chain of cause & effect, with no missing links.
Freewill is when one of those links is smart enough to absorb a cause and modify it before passing it along. In other words, a self-conscious link is a causal agent---a transformer, not just a dumb transmitter. And each intentional causation changes the course of deterministic history to some small degree.[/i]” ___Yehya
Galton Quincunx Machine :
Galton Box in Motion : https://en.wikipedia.org/wiki/File:Galton_box.webm
So regarding the measurement error thing. Wanted to make my argument more precise.
Two main points:
(1) Laplace's demon does not take error terms' interpretations seriously.
(2) The existence of error terms in a model breaks the deterministic relationship between the observed values of measured quantities in those models.
Say you're testing a linear relationship, the theory says that the following relationship holds between two quantities:
[math]y = mx + c[/math]
where [math]y[/math] and [math]x[/math] are both measurable in the lab. You do an experiment, and there's always individual level noise and measurement imprecision. In a situation like that, you modify the model to include an error term [math]\epsilon[/math]:
[math]y = mx + c + \epsilon[/math]
The relationship which is being studied is [math]y=mx+c[/math], the individual level errors are assumed not to be part of the causal structure being analyzed. Nevertheless, when you make the measurements, there is individual level variation. Its causal structure is unmodelled, it's assumed to be noise. But as part of the model that noise [math]\epsilon[/math] stands in for all other causal chains in the environment which influence the measurement.
Perhaps I'm wrong in this, but I think that from the perspective of Laplace's demon, it's imagined that Laplace's demon knows [math]y=mx+c[/math] holds, but must also know the entire causal structure that yields the [math]\epsilon[/math] to contribute to the measurements as they do. Laplace's demon knows why the [math]\epsilon[/math] that makes every measured [math](y,x)[/math] pair deviate slightly from [math](mx+c,x)[/math] takes the value that it does. But that means Laplace's demon knows with complete specificity the behaviour of unspecified, unknowable causal chains. Being unspecified and unknowable is, for [math]\epsilon[/math], part of the model structure. Such causal chains are not part of the causal relationship between [math](y,x)[/math] being studied, but they're part of the causal chain in the experiment linking observed [math]x[/math] and [math]y[/math].
The status of that "unknowable, unstructured variation" is part of every model as soon as it ceases to be a theoretical idea and comes to obtain estimated parameters. What I'm trying to highlight is that the structure of interest - [math]y=mx+c[/math] - loses its determinism (in the sense that [math]x[/math] intervention yields [math]y[/math] response) as soon as that [math]\epsilon[/math] gets involved. Even in the most precise measurements, it is possible that individual level variation explains the entire observed relationship; it can just be made vanishingly unlikely. That possibility mucks with characterising Laplace's demon's knowledge from how we use physical law; it's at best an unrealistic idealisation that forgets how the error term works. Every experimental model that involves an error term breaks the metaphysical necessity of the deterministic relationship contained within it insofar as it purports to explain the observed data.
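As a concrete sketch of the modelling point above (all numbers invented; `measure` is a hypothetical lab procedure): even when the structural law [math]y = mx + c[/math] holds exactly, the estimates recover it only up to the noise, so the observed pairs alone never certify the deterministic relationship.

```python
import random

# Hypothetical structural law y = m*x + c (parameter values invented).
m, c = 2.0, 1.0

def measure(x, noise_sd=0.1):
    """One lab measurement: the structural part plus an error term epsilon
    standing in for all the unmodelled causal chains in the environment."""
    epsilon = random.gauss(0.0, noise_sd)
    return m * x + c + epsilon

random.seed(0)
xs = [i / 10 for i in range(50)]
ys = [measure(x) for x in xs]

# Ordinary least squares by hand: the estimates land near m and c but
# never exactly on them.
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
m_hat = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
c_hat = ybar - m_hat * xbar
print(m_hat, c_hat)  # close to, but not exactly, 2.0 and 1.0
```

The point of the sketch is only that the estimated relationship inherits the error term: rerun it with a different seed and the estimates move, while the "structure of interest" stays fixed off-stage.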
Determination and predictability aren’t the same thing. The whole point of chaos theory is that even a perfectly deterministic system can still be wildly unpredictable. But so long as the exact same starting conditions still give the exact same outcome always, it’s still deterministic.
My own addition to that topic: backward causation necessarily induces apparent randomness to a forward-looking observer. Prediction of the future approximates backward causation; it’s like getting imperfect information directly from the future. That is why predictive systems are inherently chaotic. So either the universe is random and so unpredictable, or else it’s deterministic and so predictable and so capable of containing predictors who would make it chaotic and so unpredictable.
Basically, so long as you have things like us who will do whatever prediction may be possible, the universe will be unpredictable, precisely because of those attempts at prediction. Even if it is fundamentally deterministic.
Unfortunately, this claim is not testable, and thus determinism is not a scientific theory.
Now why would you say a thing like that?
The issue, as explained in the section following your quotes, is that in the application of real numbers, the infinite is represented as finite. (This is the point of the other thread, the infinite decimal extension of .999... is represented as 1).
Now, in philosophy we understand that what appears as infinite is really indefinite, or indeterminate, and this is a deficiency in our capacity to measure that thing. So when the application rules of the real numbers make what is really indefinite, or indeterminate, appear as definite or determinate, it is simply an illusion created by the customary use of that number system.
Oh, Harry.
No, it's an irrational ratio, that's the point. You might call it an irrational number, but representing it as "a number" is exactly where the problem lies. Making it "a number", is to make it something definite, determinate, when the essence of the irrational ratio is that it is indefinite, indeterminate.
So, according to the above described principle of infinite precision, this irrational ratio, the quotient which proves to have infinite decimal places, indicating a division problem which cannot be resolved, this thing which is by its very nature indefinite, is made to appear as finite and definite. Therefore that principle is faulty.
The point of that part of the article is that in using the real numbers this way, the indeterminateness which exists within the real world (reality), is made to appear determinate.
Given infinitesimals there seems to be no reason to conclude that an infinite amount of information could not be "held" within what we would think of as a finite volume of space. It seems to me the limitation on us is time not space, and that our consequent characterization of spaces as possessing finite volumes says more about us than it does about the spaces.
[s]Did you misword that?[/s] That an infinite amount of information could not be held in a finite volume is a result of Landauer’s principle, apparently; although that is not itself without objections.
I'll not argue the point here, but leave it to the physicist.
Thanks for pointing that out; it's now corrected.
Perhaps it might clarify things to return to the first example from the Del Santo article:
We then introduce an error into the measurement of the initial velocity. Regardless of how small that error is, it will build over time until the error in predicted position is greater than the length of the cavity.
Now the default position adopted in my high school physics class was that the error was introduced by a lack of precision in the measurement. The assumption was that there is indeed some real number that gives the exact velocity to infinite precision, and that the error represented the degree to which one could operationally approximate the actual velocity. The alternative explanation being offered by Del Santo is that the initial velocity does not correspond to some real number, but instead to some region of the real numbers. The boundaries of this region are also indefinite, but lie within the bounds of our arbitrarily accurate measurement.
There's a certain intuitive appeal to this.
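A minimal numerical sketch of that cavity example (all values invented; elastic reflections assumed, not taken from the article's formalism): two trajectories share an initial position but differ in velocity by a tiny error, and the separation grows until it is as large as the cavity itself.

```python
L = 1.0  # cavity length (invented units)

def position(x0, v, t):
    """Position at time t of a particle bouncing elastically between
    walls at 0 and L: fold the free trajectory back into the cavity."""
    s = (x0 + v * t) % (2 * L)
    return s if s <= L else 2 * L - s

v, dv = 1.0, 1e-9  # nominal velocity and a tiny measurement error (invented)
for t in (1.0, 1e3, 1e6, 1e9):
    gap = abs(position(0.2, v, t) - position(0.2, v + dv, t))
    print(t, gap)
# The free-flight separation dv*t grows without bound; once it is of
# order L, the folded positions are effectively uncorrelated and the
# prediction of where the particle sits in the cavity fails.
```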
A further example. Suppose you are asked to measure a table's width. You use a tape to measure it to within a millimetre. You repeat this measurement a dozen times and calculate a value for the error.
Is it that the table has a specific width that would be given by some real number, and your measurement approximates to that number; or is it that the width of the table is not definite, but is the range of numbers specified by the measurement and error?
It seems intuitively more difficult to see the width of a table as corresponding to some real number.
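For what it's worth, here is the conventional arithmetic behind "calculate a value for the error" (the readings are invented): the reported width is an interval, not a single real number.

```python
import statistics

# A dozen tape-measure readings to the nearest millimetre (invented data).
readings_mm = [912, 911, 913, 912, 912, 911, 913, 912, 911, 912, 913, 912]

mean = statistics.mean(readings_mm)
# Standard error of the mean: the usual "value for the error".
sem = statistics.stdev(readings_mm) / len(readings_mm) ** 0.5
print(f"width = {mean:.1f} ± {sem:.1f} mm")  # an interval, not a point
```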
That is just quantum mechanics. Which appears to be how reality actually works, so no problem there.
My only point was that even if the universe was perfectly deterministic at its base and errors in measurement were just errors in measurement, a chaotic system still becomes impossible in practice to predict; and a system capable of predicting is inherently chaotic, so even if the universe was perfectly deterministic, our own ability to predict it (however imperfectly) would still undermine its predictability via chaos.
That is, a chaotic system is only unpredictable in practice - operationally. Given infinitely precise initial variables, there is only one outcome.
So I don't see a relevant difference in kind between the marble in the tube in my example and a chaotic system.
Show me if I'm wrong.
Why do you like to quibble tim? I could respond to your quote with the following quote:
But what's the point?
Whether or not a real number is or is not a "real" number is beside the point, and not at all relevant. What is relevant is the "natural uncertainty in all observations", which some use of real numbers tends to veil with the pretense of what he calls "infinite precision".
This natural uncertainty is true of all descriptions of initial conditions, so it applies to all inertial reference frames. Since the uncertainty develops exponentially with the passage of time, we rapidly become deficient in the capacity to distinguish between an improperly represented inertial reference frame, and an external cause in the occurrence which follows. Determinism, as an attitude, is dependent on the assumption of a reliable inertial reference. When uncertainty is apprehended as a feature of the initial conditions (initial conditions being what limits future possibilities) rather than as a feature of the outcome of the activity, then determinism is vanquished. The certitude required to support determinism cannot be obtained.
It's also worth noting that if our measurements of initial position and velocity are inherently imperfect, so will any subsequent measurements of position and velocity also be imperfect. Of course there will also be magnification of error, but how would we know what degree of the error is due to magnification of error and what is simply due to measurement error, if none of our measurements are perfect?
If we were to experimentally verify a theoretical value of ? millijoule, 2½ millijoule, or ? millijoule with infinite precision, then we'd be in the same predicament, yes? I mean, the particular number wouldn't make a difference?
The marble setup IS a chaotic system. I think you think I’m disagreeing with you more than I am.
Probably.
There is another issue which needs to be considered, and that is the attempt to remove the margin of error through the manufacture of artificial initial conditions. This is what is done in experimentation, the apparatus is intentionally designed so as to supposedly give us the capacity to reproduce the same initial conditions over and over. This produces the idea that the error of measurement can be accounted for, or removed.
Consider your Galton box, the ball is channeled down the narrow throat, and positioned accordingly. This channeling is the creation of artificially limited initial conditions. If the ball is always dropped from the same height one might believe that the initial conditions have sufficiently been controlled. The point which that apparatus demonstrates is that no matter how well we control the initial conditions, it is always a simple matter to add an element of "chance" into such an apparatus which will render the outcome as unpredictable. This indicates that unpredictability is very likely an inherent feature of how we as human beings, produce and describe initial conditions. Therefore attempting to correct for the error is not the right approach, as it is an attempt to do the impossible, correct the uncorrectable. What is needed is a non-determinist approach which recognizes the reality of that unpredictability.
Sounds like determinism to me. Initial conditions lead to subsequent conditions. Banno is using determinism to show that determinism is false.
If determinism were false then we would get things right or wrong regardless of whether or not we had initial errors. Initial states of accuracy or inaccuracy would make no difference in subsequent states. We would never be able to establish a causal link between the initial state of being accurate or inaccurate with subsequent states to then say that the magnified errors were caused by our initial inaccuracy.
(1) Disentanglement of causality from necessity. Positive claim: there can be causes which do not follow of necessity.
(2) Disentanglement of determination from causality. Positive claim: Something can be determined without being caused. It strikes me that Anscombe is ultimately unconcerned with causality. It all but drops out of consideration in the second half of the paper. It was used as a 'way in' to talk about 'determination' and its obverse, 'indetermination'.
(2.1) 'Determination' cannot be thought outside of some given range of possibilities: "to give content to the idea of something’s being determined, we have to have a set of possibilities, which something narrows down to one – before the event". By distinction, causality is post-hoc: "But there is at any rate one important difference – a thing hasn’t been caused until it has happened".
(3) Conclusion: 'Indeterminism' must be admitted, at the very least, as a possibility. Indeterminism meaning: given a set of outcomes, it cannot be specified, in advance, which will obtain.
(4) I have a huge question about the level of granularity - mereological and temporal - at which all these considerations are meant to apply. Are these conclusions meant to be the same for the Galton board, taken as a whole, and a single ball travelling along a Galton board path? Why is each of these two cases individuated as such? What motivates this individuation? Why not consider some balls, and not others? Maybe two balls, rather than one; or why not the Galton board, and the path of one or two or three or all balls? How does taking into account these analytic 'cuts' - seemingly arbitrary - affect the analysis?
The question of 'givenness' ("given a range of possibilities...") has big implications for the status of in/determinism (epistemological? ontological? Something other?). Anscombe is ambiguous about this, but intuits it when she discusses the temporality of determination (determined when?) and distinguishes - without coming back to it - between determination and what she at one point calls 'predetermination'. Want to say more about this later. Will just open the question for now, if anyone else can see the issue.
Damn those environmental scientists and philosophy forum posters who keep telling us that humans are determining the future destruction of our planet by our ignorant actions.
There goes the Kalam cosmological argument.
...but one ball or a thousand, the result is still a normal distribution. So despite causality being after the event we can predict the outcome. Is there a contradiction here?
Asserting determinism, or at least what is often referred to as "hard" or "rigid" determinism consists in claiming that from any set of conditions there can arise only one outcome at any subsequent time. An often cited "thought experiment" is that if it were possible to restart the evolution of the universe from the initial moment it came into being it would again unfold exactly, down to the minutest detail, as it has actually done this time.
This is an entirely groundless assumption. Under the aegis of Newtonian mechanics this may have seemed obviously true, but in the light of QM it seems not only vanishingly unlikely, but even just plain impossible.
See this. Harry's out of his depth.
Just to clarify what it is that I was saying, in case that turns out to be useful to anyone in this thread:
Say we had a simulation of a Galton box, simulated using Newtonian physics, so no quantum stuff going on in the simulation (and the simulator sufficiently insulated from quantum noise, as most macroscopic stuff is, that it's completely negligible for our purposes).
That simulated Galton box would be strictly deterministic. You could play it forward, note where a given marble ended up, rewind it back to the beginning, then play it forward again and know in advance where that marble would end up.
But still, that system is nevertheless (probably -- I'm not super familiar with Galton boxes) still chaotic in that the tiniest deviation in the starting positions of the balls or pins or anything would result in huge changes in the outcome.
If we had to measure the positions of the balls a posteriori and program them into a different copy of the exact same simulator, we would necessarily have less than perfect measurement of them, and so have tiny differences in the starting conditions, and so see a vastly different result.
But so long as the simulator does actually have the exact positions it used the first time, it will give the same results every other time, over and over and over again.
It is conceivable in principle (though does not appear to be contingent fact) that our universe could be deterministic like that, and yet nevertheless chaotic in places (and therefore as a whole), meaning that if we have anything less than absolutely perfect knowledge of the present (which we can't have), then we could not predict the future, even if the universe were perfectly deterministic.
And then on top of all of that: predictor systems like us are inherently chaotic, so even if the universe was perfectly deterministic and otherwise completely non-chaotic, our attempts to predict it would make it chaotic and unpredictable.
So, the universe does not appear to be deterministic, as a matter of contingent fact.
But even if it were, it could still be chaotic, and so unpredictable in practice.
And even if it weren't otherwise chaotic, any attempt to use that non-chaotic determination would make it chaotic, and so unpredictable anyway.
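A toy version of that simulator thought experiment (the doubling map below is a stand-in for pin collisions, not real physics; every value is invented): replaying the exact same initial condition gives the same bin every run, while re-entering a measured, slightly-off initial condition does not.

```python
def final_bin(x0, rows=30, bins=10):
    """Deterministic cartoon of a marble descending `rows` pin rows.
    The doubling map doubles any offset error at each row, mimicking
    sensitive dependence on initial conditions."""
    x = x0
    for _ in range(rows):
        x = (2 * x) % 1.0
    return int(x * bins)

x0 = 0.123456789
# Exact replay: strictly deterministic, same bin every time.
print(final_bin(x0) == final_bin(x0))
# A 5e-10 error in the re-entered initial condition lands the marble
# in a different bin after 30 error-doubling rows.
print(final_bin(x0), final_bin(x0 + 5e-10))
```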
Certainly not one ball (you can't have a distribution of one ball).
Quoting Banno
I don't think so - what motivates this question? And note that the question of prediction is almost entirely absent from the Anscombe paper. She mentions it twice, and both times they are ancillary to her main points. And there's the matter of being clear about what 'the outcome' refers to: surely you mean - 'the outcome' of a normal distribution of balls. But if determination is - as Anscombe argues - externally related to causality - it's not clear why one would think that a contradiction results from the predictability of outcome and the post-hoc nature of causality.
However there is **no** positive number delta such that, for **any** initial location X of the ball, a measurement of initial conditions to accuracy delta allows certain prediction of the outcome.
For the mathematically inclined, that's because the function f that maps the initial position of the ball to its final location is continuous on the domain D that excludes only the set of measure zero comprising positions that lead to unstable equilibria. But f is not **uniformly continuous** on D.
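Spelling out the distinction (standard definitions, not quoted from the thread): continuity lets the required accuracy [math]\delta[/math] depend on the initial position [math]x[/math], while uniform continuity demands a single [math]\delta[/math] that works for every [math]x[/math] in [math]D[/math]:

[math]\forall x \in D \;\; \forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x' \in D : \; |x - x'| < \delta \implies |f(x) - f(x')| < \varepsilon[/math]

[math]\forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x, x' \in D : \; |x - x'| < \delta \implies |f(x) - f(x')| < \varepsilon[/math]

Near the excluded equilibrium positions, the [math]\delta[/math] required for a given [math]\varepsilon[/math] shrinks toward zero, which is why no single measurement accuracy suffices for every initial location.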
Although I am not a hard determinist, I don't see anything in the notion of a Galton box in a context of classical physics, that defeats a belief in hard determinism. The unpredictability of the Galton box is just an instance of chaos theory, and chaos theory focuses on the consequences of practical limits in measurement accuracy, not on the theoretical impossibility of making a measurement with zero error. There will be some ridiculously small but nonzero error limit such that, if we could measure everything to within that limit, we could predict a Hurricane in Haiti from the flap of a butterfly's wing in Mongolia.
A thought experiment that generates similar questions about determinism is Norton's Dome, which is also based on an unstable equilibrium. While it is an interesting case to think about, it can't tell us anything about our world for the same reason as the Galton box, viz, the probability is zero of the ball being over the exact single point where the paradox arises.
Even if we were to conclude that predictability dissolves when objects are in an unstable equilibrium, I doubt it would discourage hard determinists. To go from 'hard determinism always holds' to 'hard determinism holds everywhere except in a special set of circumstances that has probability zero of ever arising' doesn't sound like much of a concession.
That's a possible feature of how the balls are input, no? If you have a robotic arm capable of placing balls to arbitrary precision, as we can on paper by specifying an initial condition, then that's going to hold. If the causal process that puts the balls into the Galton box by design does not constrain it in that manner, effectively evolving a volume of initial conditions forward through the box, then the output pattern is going to be close to binomial (on left vs right hole transitions) or approximately normal (on the horizontal coordinate of the box base), assuming the sample of initial conditions isn't really weird in some way.
The contrast is between:
(1) The Galton box is deterministic because there is a hypothetical arbitrary precision mathematical model of it that allows perfect prediction for every input trajectory that doesn't result in unstable equilibrium. Complete specificity of initial conditions. Does not actually occur in actual Galton boxes.
(2) The actual operation of the Galton box doesn't have that. Vagueness of initial conditions - a distribution of them.
The initial conditions of each ball in the Galton box are not specified to arbitrary precision, they're kinda just jammed in. So for the above Galton box and initial condition specification (kinda just jamming it in), and for any particular bead, it's true that we can't predict its trajectory. That sits uneasily with the hypothetical claim that we could if only it were specified to sufficient precision; that "if only" means we're no longer talking about the above box.
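A sketch of that contrast (the pin count, ball count, and the 50/50 pin rule are all assumptions of mine): each ball's path is a sequence of left/right deflections that we don't resolve individually, yet the aggregate is binomial.

```python
import random

def drop_ball(rng, rows=12):
    """One 'jammed in' ball: count right-deflections over `rows` pin rows,
    treating each unresolved pin collision as a 50/50 deflection."""
    return sum(rng.random() < 0.5 for _ in range(rows))

rng = random.Random(42)
counts = [0] * 13  # final bin = number of right-deflections, 0..12
for _ in range(10_000):
    counts[drop_ball(rng)] += 1
print(counts)  # humped around the middle bin: approximately binomial(12, 1/2)
```

No single ball's bin is predicted here, yet the shape of the heap is; that is the sense in which the distribution, rather than any trajectory, is determined.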
:up:
Some additional thoughts:
For me, the most important take-away is the fact that for Anscombe, determination is 'possibility-relative': "We see that to give content to the idea of something’s being determined, we have to have a set of possibilities, which something narrows down to one – before the event".
But the question is: whence this set of possibilities, and not another set? With the Galton board, it seems 'natural' to pick out the relevant set of possibilities as distribution of balls among the pipes, but it's important to recognize just how arbitrary this is. It's certainly not a given of nature, for instance, that these possibilities must be thought together to the exclusion of all other possibilities (that the sky is blue at the time of the experiment, for instance). And if that is so, this means that 'determinability' (and with it, indeterminability) is itself not a 'natural' category - we cannot ask of nature, taken as a whole: 'is it determined or not?'.
Or, to introduce another distinction (whose applicability in these situations was brought to my attention by @fdrake): we cannot ask the question of determinability at a global level, only at a local one. And what 'picks out' or individuates a local situation ('set of possibilities') is, or must be, a question of motivation. Part of the problem with using a Galton board to think about this stuff is precisely because it is so arbitrary: the distribution of balls does not correspond to any particular effect which follows from that distribution - it does not couple to any system for which the distribution makes a difference.
So QM determines that determinism is impossible?
It certainly isn't a groundless assumption that events would be the same if the universe were restarted. It seems to me that the burden is on you for stating otherwise. Given the same conditions at every moment in time, the same effects will happen. It follows that given the same initial conditions, you get the same results. It doesn't follow that given the same initial conditions you will get different results. In other words, logic itself would be useless in an indeterministic world. Your reasons determine your conclusion. You keep using determinism every time you make an argument where your conclusion follows from your premises.
You and Banno seem to be making the mistake of believing that every instant of time is the same - as if the ball dropped at this moment is the same at some later moment. The initial conditions at some moment are different than at some other moment, and ignorance is a factor because we could be oblivious to all the initial conditions that make a certain event happen. We might get it mostly right and it's just enough to make an accurate prediction, or we might get it wrong and then think that determinism is false. But it's not. It's just that you can't recreate a given moment in time at some other moment in time. But if you restarted the universe these events would happen at the same moment in the same way, and you would still be ignorant of all the initial conditions at any moment.
As for QM, what is it about QM that determines that determinism is impossible?
Quoting Banno
Sounds like more determinism. Seems like errors determine outcomes.
That role seems to be played by the initial conditions. For a given initial condition, there's a guaranteed outcome. For a range of initial conditions, there's a range of outcomes. Imprecise specification of an initial condition gives a range of outcomes consistent with (determined by? @Kenosha Kid) the range of inputs concordant with the imprecisions. The sleight of hand that makes determinism seem to be a system property seems to be the specification of an initial condition with sufficient precision; as if the specification of an initial condition was done externally to the dynamics of any actual Galton box.
I fail to see how a digital system used to measure an analog reality indicates that reality is indeterministic. It seems that what you are saying is what is indeterministic is our measurements, not reality. With that, I would agree. Measurements are like views, which could explain some of the results of the double-slit experiment. Taking measurements or views alters the effects. That doesn't mean that indeterminism is true, it means that our existence as observers and our measuring devices plays a causal role in the very events we are observing and measuring. Solving the mind-body problem I believe will provide the necessary link between classical mechanics and QM - between the macro and the nano, and unite them.
Yes, exactly! I was thinking about this in relation to the little Galton board video you posted - funnily enough, the balls don't fall exactly into the 'normal' distribution - some lines are a little over, some are a little under. And it got me thinking - does this mean that this Galton board is a badly designed one? Well, no - if the board were designed such that you really did get a perfect distribution, then it's precisely the tweaking of inputs which guarantees consistency of output. In truth, the normal distribution is a totally ideal property: it's what the sum of infinite runs of the board would converge to, at the limit (ergodic property?).
So you're right: the 'search' for initial conditions ("if we just knew the initial conditions with enough precision...") can be nothing other than a fixing of initial conditions in order to make determinism a system property. I'm reminded here of Kant's 'intellectual intuition': that wherein knowledge and being coincide, available only to a God, who needs no mediation of the sensible (time and space). Or else Wittgenstein's meter rule: that which neither is nor is not a meter. The 'fixed' Galton board would be like that: neither a Galton board nor not one... it would be like a gif of a Galton board, a moving image.
The probability of one ball falling in any particular bin is given by the binomial distribution, which the normal curve approximates.
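In the idealized model this is checkable by simulation: each ball makes a run of independent left/right bounces, so the bin counts come out binomial (which the bell curve approximates). A minimal Python sketch, assuming a fair 50/50 bounce at every peg (an idealization, of course):

```python
import random
from collections import Counter

def galton(n_rows=10, n_balls=10_000):
    """Simulate an idealized Galton box: each ball bounces right with
    probability 1/2 at each of n_rows pegs; its bin is the number of
    rightward bounces, so bin counts follow binomial(n_rows, 1/2)."""
    bins = Counter()
    for _ in range(n_balls):
        bins[sum(random.random() < 0.5 for _ in range(n_rows))] += 1
    return bins

counts = galton()  # the middle bins fill the most, the tails the least
```

Any single run wobbles around the ideal curve, which is the point made just above: the exact distribution is only what infinitely many runs would converge to.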
Quoting StreetlightX
Yep.
@Pfhorrest, do I understand correctly that you disagree? I'm not following what you are saying about chaotic systems. For instance, given any point on the complex plane we have an algorithm for testing whether it is a member of the Mandelbrot Set - so isn't membership determined for any given point? It is a member iff the iterates of z_(n+1)=z_n^2+C remain bounded.
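One caveat worth flagging: the usual escape-time test can refute membership in finite time (once |z| exceeds 2 the iterates surely escape), but can only confirm membership up to an iteration cap. A hedged sketch:

```python
def in_mandelbrot(c, max_iter=200):
    """Escape-time test for z -> z**2 + c starting from z = 0.
    Returns False as soon as |z| > 2 (the point certainly escapes);
    returns True if no escape is seen within max_iter steps, which is
    only a finite-precision stand-in for true membership."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True
```

For example, `in_mandelbrot(0j)` and `in_mandelbrot(-1 + 0j)` hold (0 is a fixed point; -1 cycles 0, -1, 0, ...), while `in_mandelbrot(1 + 0j)` fails quickly (0, 1, 2, 5, ...).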
I'm not sure what your point about the Mandelbrot set is regarding chaos theory, though what you say about it sounds true as far as I know.
My point about chaotic systems is that you can have a perfectly deterministic system (where if you perfectly specified the initial conditions then you would get the exact same end results) which behaves in the way you would expect of a deterministic system, where tiny changes in the initial conditions produce only tiny changes in the output and so tiny errors in measurements of the initial conditions produce only tiny errors in the predicted output. That is a non-chaotic system. On the other hand, you could have a different, still completely deterministic system (where if you perfectly specified the initial conditions then you would get the exact same end results), which behaves in a wildly different way, where a tiny change in the initial conditions produces drastically different output, and so a tiny error in measurement of the initial conditions produces wildly incorrect predictions. That is chaos.
Determinism is about whether the output would be exactly the same given the exact same inputs. That might or might not be the case, regardless of how well you can in practice measure the exact same inputs.
Chaos is about whether or not, given a deterministic system (as above), differences in output/predictions are disproportionate to differences in input/measurement.
If we somehow knew the Galton box did behave in a perfectly deterministic manner (exact input leads to exact output; say we're dealing with a simulated Galton box) but we didn't have the exact initial state (say because we were measuring the initial condition of a real Galton box and inputting it into our virtual one), we would have great difficulty predicting even the approximate output because even the tiniest errors in the measurement of the initial state would produce huge differences in the prediction, because the system is chaotic, not because it is non-deterministic.
If the system were truly non-deterministic, then even if we rewound the virtual Galton box to the exact same initial conditions and ran it forward again, we might still get different results.
Hmm. I'm going to be pedantic and point out that chaotic systems are algorithmically calculable... so if you put in a "1" you will get the same result every time. What characterises them is that if you put in "1.0000000001" you may get back a wildly different result. But if you put "1.0000000001" in again, you will get the very same answer.
Is that not so?
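That is so for deterministic chaos in the mathematical sense, and it is easy to exhibit with the logistic map x → 4x(1-x), a textbook chaotic example (my choice of illustration, not one used in the thread):

```python
def trajectory(x0, n=60, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x); at r = 4.0 it is chaotic."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

ta = trajectory(0.3)
tb = trajectory(0.3)            # identical input: identical output, every time
tc = trajectory(0.3000000001)   # a tiny perturbation of the input...
max_gap = max(abs(p - q) for p, q in zip(ta, tc))  # ...diverges to order one
```

Putting in "0.3" twice gives byte-for-byte the same answer; putting in "0.3000000001" gives a wildly different trajectory, and that second answer is itself perfectly repeatable.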
Now instead imagine a system that is not chaotic, but is still deterministic. If you change the input by 0.00001, the output changes only by something proportional to 0.00001; not in a crazy complex way.
Now take that deterministic and non-chaotic system, and put a rand() function in there somewhere. Now if you put in 1, you get, say, a number between 2 and 4. If you put in 1.00001, you get a number between 2.00002 and 4.00004. But WHICH numbers in those ranges you get will vary every time you run it. The system is non-chaotic, because changing the inputs only changes the outputs a proportional amount, but it’s truly random, because even the exact same inputs won’t always give the exact same outputs.
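A toy version of that stochastic, non-chaotic system, assuming Python's `random` module plays the role of the rand() call (which, as the follow-up posts note, is only pseudorandom under the hood, so this merely models true randomness):

```python
import random

def stochastic_linear(x):
    """Non-chaotic but stochastic: the output range scales smoothly with
    the input (input 1 -> [2, 4], input 1.00001 -> [2.00002, 4.00004]),
    yet repeated runs with the same input give different values."""
    return x * random.uniform(2.0, 4.0)

r1 = stochastic_linear(1.0)
r2 = stochastic_linear(1.0)      # same input, (almost surely) different output
r3 = stochastic_linear(1.00001)  # slightly shifted output range
```

Changing the input only shifts the output range proportionally, so the system is non-chaotic; the spread within the range is where the (pseudo)randomness lives.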
...but see https://mathworld.wolfram.com/RandomNumber.html
Rand() is an algorithm.
Yes, but even aside from the measurement problem, if quantum events are uncaused, then tiny divergences from initial conditions will add up over time to great divergences. What evidence can you adduce that no quantum events are uncaused? The only evidence I think is available to you is expert opinion, and the expert consensus is that (at least some) quantum events are uncaused.
Presumably this is a reference to Davidson's first paper, Actions, Reasons and Causes. SEP sets the issue out thus:
Anscombe being Wittgenstein's proxy.
An issue for me, since I rather like both accounts. What to do?
What we have in the article, though, is indeterminism in a classical system without reliance on quantum phenomena.
The salient point is that determinism is not found in classical physics but assumed. The article goes some way to showing that the assumption might be removed without cost.
If that is the case it is a point worth making, especially given the number of threads involving causal chains hereabouts:
Much hinges here. We ought be clear about it.
As indicated in my first post, the conventional interpretation of Newton's first law is what produces the assumption of determinism. To get beyond this, we need an unconventional interpretation of, or a flat-out denial of, this law. The common theological/metaphysical/mystical perspective is to understand that the temporal continuity
of existence, described by this law, and expressed as inertia, (and in general, the existence of matter), requires a cause itself. So at each moment of passing time, a "cause" is required to ensure that things continue in an orderly manner, consistent with the last moment, instead of random difference at each passing moment. This "cause" is often expressed as the Will of God.
There's a simple argument to consider. The human will can create an action at a randomly determined time, therefore uncaused by any external physical force. Free will is demonstrated by the randomly determined act, and so the causal force of the act must be the will itself. This action could change the world in a materially significant way (the POTUS could push the nuke button for example, at any random time). Therefore the continuity expressed by the law of inertia is not necessary. The human will can interfere with this continuity. So the law of inertia does not have complete, universal, and absolute application. It is not necessary. If the temporal continuity of existence expressed by the law of inertia is not necessary, it is contingent, and therefore its observed reality requires causation. Without this necessity the assumption of determinism is not supported.
There are clearly two distinct perspectives. One is that Newton's first law is true, absolute, and therefore expresses a universal necessity. This leads to the determinist assumption. The other perspective is that this law is incomplete, and the argument for this is that free will acts are outside the inertial framework. These acts are understood as constituting a force outside the concept of inertia, or perhaps a force internal to matter; as if a piece of matter can decide to start a new motion at any time. But more precisely, something outside the conceptual scheme of matter, can create matter with inertia, at a chosen time.
So the issue of determinism and free will, is how we approach and interpret Newton's first law.
And yes, quantum phenomena are random, at least from any particular entangled observer’s point of view. (The math describing the evolution of the wavefunction itself is deterministic, it’s only at the supposed “collapse” of the wavefunction that anything random happens, and there’s disagreement about whether that collapse is a real thing that actually happens or only a shift in an observer’s perspective as they become entangled with the observed system.)
I think it's true that the hypothetical "if initial state is completely specified, then trajectory is completely specified" holds of systems like the Galton box I linked; if that's all someone means by determinism, I think it holds of the box. If they additionally assert "the initial state is completely specified in this Galton box", I don't think it holds of the box. At least, there's room for doubt.
What you & @StreetlightX have been talking about, is it something like this?
-Within some local systems, the state of things at t=0 does determine a unique state at t=x. (i.e. there is only one possible state at tx, given t0)
-But the universe (or 'everything' or 'the one' or ' the total totality' etc )cannot be treated as a closed system where 'everything' is such and such at state 0, determining unique states at t=x
-What counts as a closed system always requires some sort of constituting 'carving' in order to foreground certain aspects, while ignoring others.
-That doesn't mean that the 'subject' or 'mind' or 'knowledge' is the ultimate ground for what happens in that system -things still happen as they do - but how you're tracking what's happening is based on how you've carved
-[less sure of this] if you're focused on deterministic systems, you're going to keep finding them, because that's where you're directing your attention.
Is that roughly right?
So, building on that, the part of the essay that most jumped out at me was this:
[quote=Anscombe]There is something to observe here, that lies under our noses. It is little attended to, and yet still so obvious as to seem trite. It is this: causality consists in the derivativeness of an effect from its causes. This is the core, the common feature, of causality in its various kinds. Effects derive from, arise out of, come of, their causes. For example, everyone will grant that physical parenthood is a causal relation. Here the derivation is material, by fission. Now analysis in terms of necessity or universality does not tell us of this derivedness of the effect; rather it forgets about that. For the necessity will be that of laws of nature; through it we shall be able to derive knowledge of the effect from knowledge of the cause, or vice versa, but that does not show us the cause as source of the effect. Causation, then, is not to be identified with necessitation.[/quote]
I like this. I think it directs us back to how we first think of causes: something happens and we know it happened due to this other thing. It doesn't mean forensically establishing a necessary frame-by-frame progression, but simply recognizing that the presence of this led to that. That's it. How the one led to the other depends on the case. Whether the one had to lead to the other also depends on the case.
I think learning to be attentive to how we actually encounter causality, leads to an epistemic humility that counteracts a prideful awareness of an epistemic ideal, never actually attainable. (The 'pride' is in offloading precision to a dream of total comprehension (absolute accounting) that you are at least aware of, while others operate blindly, not even aware of the dream) Like most things born out of pride, the awareness of an ideal lets us undercut others, while simultaneously undercutting ourselves (we are aware there is a 'deterministic order' and can rest on the laurels of having apprehended that). And I think that that kind of epistemic humility (there is no way to shatter the universe into a series of precise distributions of matter along a time axis) ultimately lets us track causes in a more effective way: If you drop the idea of a demon who could do it at a subatomic level, you have to simply look at how you see causes being responsible for certain effects. There is an art (and pragmatism) of understanding causality and there is no metaphysical reason to see that as mere 'folk' understanding of causation.
(Now obviously this can cut both ways (one can 'see' false causation, and that has historically happened a lot, and caused horrible things) so a methodology and an ethics of how one casts causal relationships is still necessary, but dropping the metaphysical pretense at least clears the ground for going to work on that.)
Yeah exactly - 'look and see' being the empirical principle par excellence. I think it also makes more sense when it comes to the phenomenology of scientific - or other - investigation: we pay attention to what the system under investigation 'pays attention' to, where, even if we delineate what constitutes a 'system', we still need to follow its lead. I'm very fond of these lines from Susan Oyama, who always comes to mind when I deal with this stuff:
"For coherent integration to be accomplished, an investigator must do by will and wit what the [system] does by emerging nature: sort out levels and functions and keep sources, interactive effects, and processes straight. ... This is not to say that selection of variables must be random or that analysis is impossible. It is to suggest that guidance is more likely to come from the system under investigation than from some more abstract assumption ... Fine investigators have always been guided by good intuitions about what their phenomenon is "paying attention" to ... Scientific talent is partially a knack for reading one's particular system productively."
[And because it's on my mind - this kind of thing accords with what D&G call 'minor science', which is always a matter of "following the singularities of a matter", which they distinguish from "reproducing", which "implies the permanence of a fixed point of view that is external to what is reproduced: watching the flow from the bank".]
Think you're mostly right. "Open" and "closed" don't mean quite that though. A closed system is one that isn't subject to any external net force or matter/energy transfer. Once the balls in the box are set in motion, it's a closed system (to a good approximation).
"Necessitation" is a tricky word, because "necessary" and its derivatives have very many different uses. An effect, by that name, is a contingent event, so if it occurs it has been necessitated by its causes. But I think what misleads people is the idea that an event, as an effect, has one cause. Contingent events generally require the fulfillment of numerous conditions, all of which can be called causes of the event. So the idea of a one-to-one cause/effect relation is what ought to be scrutinized.
What we commonly call "the cause" of an event is one of many contributing factors, and it is only within a very specific (subjective) perspective, that it is designated as "the cause". The experimental process which attempts to fix initial conditions for repetition, and note consistency and inconsistency in the results, is not actually looking for the cause of the results per se. The cause of the results is more properly attributed to the fixing of the initial conditions. What the experimentation is looking for is the cause of differences, the degree of consistency in the results. If there is inconsistency in the results, we want to know "the cause" of the inconsistency. If the single variant factor, which leads to an inconsistent event can be identified, it becomes known as the cause of that event, the inconsistent event. But to designate this as "the cause" is to neglect the fact that the more substantive "cause" is the prerequisite fixing of the initial conditions in such a way so as to allow "the cause" to produce its effect.
On reflection, I think that on topic arguments shouldn't really be concerning themselves with randomness vs determinism here; it's more regarding whether it's appropriate to consider the Galton box as an example of bounces following bounces as a matter of logical necessity in the wild. The central question for whether it's determined in that logical sense is whether a real Galton box can sensibly be modelled with infinite precision inputs. I don't think it's fundamentally about whether deterministic mathematical models are successful in providing insights about 'em (they are), it's regarding the relationship of a real Galton box to the "input completely specified => unique trajectory" implication.
I wanna have my cake and eat it too, really. I'd like to insist that "input completely specified => unique trajectory" applies to real Galton boxes since deterministic models of them work, but that nevertheless real Galton boxes do not have a specifying mechanism that enables anyone to do anything to them that pre-specifies any ball's initial conditions to sufficient precision for that mechanism's actions to collapse the outcome set to a unique hole for any given ball.
I also wanna insist that that isn't "just an epistemic limitation", it's built into the box that it cannot be manipulated in that way. Maybe for other boxes you can.
Quoting Banno
Quoting Pfhorrest
Typically chaotic systems are defined to be deterministic but nonlinear, nonlinear meaning that a small change in the input can yield a huge change in the output. The system described in the article is I think normally known as a chaos machine, because small changes in the initial conditions of the ball give rise to large changes in its final position. The article is also putting forward an epistemological means of accounting for those practical differences, so while the system is chaotic, the ball is epistemologically indeterministic as well, and using this to propose what I think is an ontological agnosticism about determinism.
That indeterminism does not depend on the chaotic nature of the system that exemplifies it, but obviously since linear systems would not be demonstrably different from Laplace's demon (i.e. differences in output would be as immeasurably small as the immeasurably small differences in input), it's apt to use a chaotic system to illustrate the point. Quantum mechanics is deemed ontologically indeterministic, although there are deterministic interpretations (Bohm theory) which are chaotic.
I see no reason not to consider chaotic, indeterministic systems as well as chaotic deterministic ones. In that case, you would have to account for errors in the positions of each of the pegs too. I imagine the reason why chaotic systems are deemed to be deterministic is that no one has had cause to explore chaotic indeterminism; it's typically been one or the other.
The expert consensus is also that QM and classical mechanics appear to contradict each other but they both work. The consensus also includes a need to unify both theories, or at least explain why one is so useful yet incorrect, while the other is correct. I think that the unifying theory lies in explaining consciousness, as consciousness is a kind of measurement.
What effects do uncaused quantum events have on the macro-scale world? You'd think that all those "tiny divergences from initial conditions will add up over time to great divergences" would be observable at the macro scale, but what we observe is consistent - similar causes lead to similar events.
And we have experiences where we are ignorant of the causes - where it appears that there isn't a cause, but there is. You need to account for those kinds of experiences and the fact that appearances of uncausedness can be deceiving.
Quoting Banno
Then indeterminism is not found in QM, but assumed. And you seem to be agreeing that certain observations cause you to assume certain ideas.
To your knowledge, Banno, has that ever been tested?
Lots of balls falling strike against each other and have different mass distributions to influence the fall...so the bell curve distribution is understandable.
But if one single ball...always the same ball...were carefully and systematically dropped, would it randomly create a bell curve...or would it favor one or two slots?
Just wondering.
Quoting Janus
And you'd observe the behavior of the balls greatly diverging.
Suggestion for what might be a doctoral level experiment, B.
Mark a ball with a red dot...and drop THAT INDIVIDUAL ball each time with the red dot in the same position at release...keeping the drop effort as similar as possible...
...and see if the randomness occurs.
That one ball...skewed slightly as every ball must be...might favor one or two adjacent slots.
Just an idea. Or maybe one you could pawn off to a serious student to attempt and report.
Imagine a steel wedge on a table or floor, with a sharp edge pointing upward ( like ^ ).
Drop vertically a light steel ball on the wedge edge. Sometimes the ball will end up falling right of the edge, sometimes left. If you drop the ball right on the edge each time, and long enough, logic dictates that it will fall on both sides a near equal number of times, 50-50.
Can anyone predict where the ball will fall next?
Does anyone suppose the ball could EVER bounce on top of the edge several times before settling there, in equilibrium exactly on top of the edge? Intuitively this seems impossible, and yet... if it were a perfectly round and homogenous ball falling exactly on a perfect wedge, that's exactly what the math says the ball will do...
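The 50-50 claim for repeated drops can at least be sanity-checked in a toy model that treats each drop as an independent fair coin (an idealizing assumption; a real dropping rig would have correlated biases):

```python
import random

def wedge_drops(n=100_000):
    """Idealized wedge: each drop independently lands left or right with
    probability 1/2; returns (left, right) counts."""
    left = sum(random.random() < 0.5 for _ in range(n))
    return left, n - left

left, right = wedge_drops()
ratio = left / (left + right)  # settles near 0.5 as n grows
```

Note what the model leaves out: the measure-zero event of the ball balancing on the edge has probability 0 here, which matches the intuition that the 'perfect' equilibrium is never observed.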
Might be more accurate to say that evidence suggests nondeterminism?
Roughly speaking, but needs elaboration. Not a definition of nonlinear in the strictly mathematical sense. And what is "small"? For example:
Linear:
[math]f({{z}_{1}}+{{z}_{2}})=f({{z}_{1}})+f({{z}_{2}}),\text{ }f(\alpha z)=\alpha f(z)[/math]
[math]f(z)=Mz,\text{ }M={{10}^{10}},\text{ }\varepsilon ={{10}^{-3}}\Rightarrow f(z+\varepsilon )-f(z)=M\varepsilon ={{10}^{7}}[/math]
Nonlinear:
[math]f(z)=\frac{1}{z},\text{ }z=10,\text{ }\varepsilon =\frac{1}{10}\text{ }\Rightarrow \text{ }f(z+\varepsilon )-f(z)=-\frac{0.1}{101}\approx -{{10}^{-3}}[/math]
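A quick numeric check of both examples (a hedged sketch; it just evaluates the two maps at the stated points):

```python
# Linear but highly sensitive: f(z) = M*z turns a 1e-3 nudge into 1e7.
M, eps = 1e10, 1e-3
lin_gap = M * (10 + eps) - M * 10  # = M * eps = 1e7

# Nonlinear (f(z) = 1/z) but insensitive near z = 10: a 0.1 nudge moves
# the output by only -0.1/101, roughly a thousandth.
eps2 = 0.1
nonlin_gap = 1 / (10 + eps2) - 1 / 10
```

So linearity and sensitivity come apart in both directions, which is the elaboration being asked for: "nonlinear" alone doesn't settle how big "small" is.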
The world is one; it's not neatly divided into micro and macro scales. E.g. radioactivity, a quantum phenomenon, is an important cause of genetic mutations, which are an important driver of evolution.
Wrote a post trying to explain some chaos concepts a while ago. Since you're a meteorologist I'd guess you probably already know it and are making a point regarding chaos being a buzzword most of the time, but just in case.
On the macro scale things proceed more or less as we expect. Water always seems to erode the land, for example, but we have no way of knowing what effect random quantum events might have on the precise courses of erosion.
It is a groundless assumption that, given exactly the same initial conditions, a flow of water would produce exactly the same erosion patterns, down to the micro-physical level, over and over again if we were able to "rerun" it. I say it is groundless because there is no possible way to confirm it.
I see no contradiction between classical and quantum mechanics, they just present different levels of granularity.
We are just trying to approach this objectively. Many of us want to feel in control.
Sure. It might not be a good idea to pretend to be in control when you are not. Better to become comfortable with uncertainty.
I was a meteorologist sixty years ago. A math prof 1971 - 2000. My interest stems from pure mathematics and I've written about extending the iterative process to infinite compositions, mostly in the complex plane. Chaotic behavior crops up, but my main interest is in behavior around fixed points and obtaining striking imagery (not fractals). Indifferent fixed points tend to be the most complicated - even the word suggests thumbing its nose at the mathematician! Thanks. :smile:
Sure, and we also have evidence that suggests determinism. How do we determine which is the case?
Quoting Olivier5
Exactly. Hence my point that QM and classical physics need to be unified - kind of like how genetics and the theory of evolution by natural selection are unified micro and macro theories that support each other, not contradict each other like QM and classic physics. The glue to unify them, IMO, would be a proper theory of consciousness.
Gravity is part of the system we are talking about, not beyond it. And there are theories of quantum gravity, which seem to indicate that there is randomness in the force and possibly the direction of gravity, so why do we see the balls in the box fall into predictable patterns at the bottom rather than fill all corners and sides of the box?
Quoting Janus
You seem to be confusing you not knowing something is the case with indeterminism.
Quoting Janus
It is just as groundless to say that it wouldn't happen the same again, so you need to come up with a better argument that doesn't focus on using our ignorance as evidence that indeterminism is true.
I dunno. I'd hesitate to say prediction = causation in any way. You have to do a lot of work to interpret the estimated parameters of a statistical model causally.
Say if you're studying lung cancer rates observed in hospitals within countries, and you have country level data, hospital indicator and smoking status of the individual as predictors. If you took an individual that was a non-smoker, then made them a smoker [hide=*](calling all else equal in the background or propagating correlations between smoking rates and the other variables into the prediction)[/hide], you'd get something close to a causal interpretation of increased risk. If you took an individual that lived in Scotland, then put them in England, you'd get another change in risk. Does that mean this individual moving to England suddenly gets an increased lung cancer risk as soon as they cross the border?
Can't we have both? Some things are deterministic, some aren't... Actually, this is what evidence suggests. (And perhaps with further nuance, sometimes, depending on wider context or whatever, some things are variously deterministic or not...)
True backward causation introduces true randomness (even if the universe was otherwise deterministic, the moment that information from the future arrives introduces a fork in the timeline, and from the perspective of someone living through that moment it’s random which timeline they “end up in”). So it seems that something that seems to approximate backward causation (ordinary prediction) would in turn introduce something that looks approximately like randomness, i.e. chaos, even if everything was technically strictly deterministic.
IOW it seems like it must necessarily be very difficult to predict the future of something that can predict the future.
If information simply went back in time at all, it would be a "resulting" physical state that is also a "prior" physical state, but being a physical state in at least a Newtonian sort of sense, it should have some effect on the resulting physical state which could lead to a different result. You wouldn't need an intelligence to cause the conflict. Something akin to this is behind the Chronology Protection Conjecture.
Quoting Pfhorrest
Suppose instead of a person, it's just a computer program competing in a "tournament" of sorts. A program may be coded, say, to run Monte Carlo simulations of other programs to affect its odds of winning the tournament. In such a way we can abstract out the intelligence and the person. (Incidentally I've competed in such tourneys before... it's fun). But this isn't changing any program's behavior; it's simply coding what the behavior is. And the result isn't necessarily chaotic just because your program is playing by these rules. Does that make sense?
If we got into it we'd be discussing your philosophical system rather than the thread topic.
You say that in that case the result isn't necessarily chaotic. That's what I'd like more information on, because it seems intuitively like it must be chaotic, because every change in prediction causes a change in behavior which changes the future prediction which changes the behavior and so on in a non-linear manner. (You say "isn't changing any program's behavior", and in one sense, a conditional sense, that's true, but I mean in the sense of the consequent of that conditional. All the "if-then"s are the same, sure, but if every "then" results in a different "if" that then necessitates a different "then" which produces a still different "if"...).
Nothing about this has anything to do with my philosophical system, this is just a comment on the randomness-vs-chaos subthread of this topic. My original point of commenting was just to clear up the conflation of randomness with chaos, and this line of conversation is an extension of that, about how even if we didn't have randomness, the consequent predictability would then make any world with beings like us trying to exploit that predictability chaotic, and hence unpredictable still.
It's a bit tough to talk about since in my mind the specs are a bit fuzzy.
These tournaments I describe are often playing with some game theoretical situation... in these situations there could be an ideal strategy such as "pick at random A 30% of the time, B 5% of the time, C 0% of the time, and D 65% of the time, regardless of any past behaviors of the entity you play against". A simple program playing this strategy could do no better; suppose we had a pragmatic ideal... call it the MTS, which uses a Mersenne Twister to approximate this strategy in a straightforward way (it just generates large numbers uniformly, and uses partitioned ranges according to the above percentages to drive its selection directly). We might then have another candidate in the pool that also employs a Mersenne Twister, but this program runs an incredibly deep Monte Carlo Simulation (so call this the MCS), and it turns out this program would play very close to the ideal strategy as a result. In practice in such a tournament MTS could consistently rank first followed by MCS. Generally in this situation, your top ranking programs would be stable regardless of whether the programs predict (MCS) or not (MTS). But I get the feeling that you're reasoning that MCS would somehow be unstable because it predicts; to which I reply, if in at least some situations, you have a predictor program whose behaviors are not-chaotic, then your reasoning underspecifies what's required for a system to be chaotic, and in this sense your reasoning is flawed.
Intuitively (from my end), there's a bit of fuzziness as to what you're describing. You seem to be comparing (a) what an entity would "ordinarily" do, to (b) what an entity would do "if" it had a predictor. But what are (a) and (b) actually describing? In the above scenario the MTS is a type of "(a)", but how would you apply "(b)" to it? And the MCS is a type of "(b)", but what is the "(a)" that the MCS would do? Or is (a) vs (b) comparing the MTS to the MCS? If that's the case, the actual behaviors of the MCS in this tournament are really more approximations of a stable strategy, not a factor generating chaotic behavior (at least in and of itself).
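For concreteness, the fixed mixed strategy attributed to MTS can be sketched directly (the option names and proportions come from the post above; everything else here is illustrative):

```python
import random

# Mixed strategy from the post: A 30%, B 5%, C 0%, D 65%.
STRATEGY = [("A", 0.30), ("B", 0.05), ("C", 0.00), ("D", 0.65)]

def mts_pick(rng=random):
    """Sample one play: draw a uniform number (Python's random module is
    itself a Mersenne Twister) and walk the cumulative probability ranges."""
    u = rng.random()
    cum = 0.0
    for option, p in STRATEGY:
        cum += p
        if u < cum:
            return option
    return STRATEGY[-1][0]  # guard against floating-point round-off

picks = [mts_pick() for _ in range(10_000)]
```

A program playing this way is stateless: nothing an opponent does feeds back into its choices, which is one way to see why its ranking stays stable whether or not other programs try to predict it.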
I guess the only real upshot of this to the implications of determinism on free will is that a predictor system can arbitrarily evade prediction, by doing other than what it predicts other predictors will predict of it.
I don't know now, I'm feeling burnt out from work search shit and can't think straight anymore today.
Definitely.
Yeah, I'd rather this thread did not become yet another platform for @Pfhorrest.
You are asserting that determinism is the case. I am not asserting that it is not the case, but that we have no way of knowing either way.
I'll pay that one! :lol:
This, to me?
Those ubiquitous threads that link causation to the beginnings of spacetime and variously invoke God or panpsychism or Roger Penrose for the most part fall to this view.
We still have the discussion of reasons as causes to attend to. But if causes are not as determining as was thought, perhaps it doesn't matter all that much whether we choose to treat our reasons as causing our actions. Similarly, the demise of determinism takes the pressure off our feeling justified in punishing those who choose stuff we don't like.
So, where would we start? Rocks are deterministic, and human beings are not? How about a mosquito?
I suppose this is a variation of Laplace's demon, which has been mentioned farther up in this thread, but it's something I've considered since my early teen years, minus the specific example of the Galton box. In other words, is the universe a function, where variable y is the only possible outcome of x, or is it a nonfunction, where two or more outcomes y1, y2,...yn are possible given a single input x? Are quantum states completely and ideally random or are they dependent, where the first would theoretically change outcomes and the second would not?
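The function/nonfunction contrast in the post above can be made concrete with a minimal sketch (the step rules and names here are illustrative, not from the thread):

```python
import random

def deterministic_step(x):
    """A 'function' universe: one input x, exactly one possible outcome y."""
    return 3 * x + 1

def stochastic_step(x, rng):
    """A 'nonfunction' universe: one input x, several possible outcomes."""
    return x + rng.choice([-1, +1])  # two successors from the same state

# The deterministic step always agrees with itself on the same input...
assert deterministic_step(7) == deterministic_step(7)

# ...while repeated stochastic steps from the same state can disagree:
rng = random.Random(0)
outcomes = {stochastic_step(7, rng) for _ in range(100)}
```

Of course, the sketch only models the distinction; the thread's open question is which kind of rule the universe itself follows, and whether quantum outcomes are "completely and ideally random" or merely look that way to us.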
Does that mean it would be fun to play with you, but not in public? :joke:
Put differently, we've pretty much concluded that events in the future are not fixed by the state of the universe now. Does that invalidate the notion of block time?
As you should be!
I’d say it requires multidimensional time.
The universal wavefunction does evolve deterministically, but that wavefunction is itself a multidimensional construct containing an ensemble of possible states of the universe. So that kind of gets you to quantum block time in a more conventional way there.
But my preferred way to think of it is to envision time as a path through the configuration space of the universe, which I take to mean a different kind of modal realism. Other possible worlds (in a Kripkean sense but not a Lewisian sense) are the exact same thing as other times; futures and pasts are just other possible worlds that bear a particular kind of relationship to this present actual world.
Modal possible worlds are created by an explicit act of "what if...". That's quite different from a collapsing wave function...
Because you failed to explain consciousness and measurements properly, both of which are causal processes.
What determines that we can't know either way?
This is basically dualism and all the problems it brings, like how deterministic and indeterministic things interact - deterministically or indeterministically?
Depends on what you mean.
1. Events in the future are (not) fixed by the state of the universe now. I read this as implying that there is a uniquely correct theory (not necessarily known to us) that describes events in the past and in the future, and that the theory is (in)deterministic.
2. Block time. This is often taken to mean that events in the past and the future exist in some sense.
Setting aside the truth of (1) and the meaningfulness of (2), it is clear that (1) has no implication on (2). One can be a determinist and a presentist, i.e. believe that although future events are fixed by the present, they do not have the same ontological status. Or one can be indeterminist and yet subscribe to block time. The fact that no correct theory fixes the future given the present does not imply anything about the future's ontological status.
Note that I only talk about these positions as actual positions that some people hold. Personally I am very skeptical about the meaningfulness of presentism/eternalism debate, and somewhat skeptical about determinism/indeterminism.
We go by evidence. Say, findings like planetary orbits, quantumatics, ..., whatever. The world doesn't care about our metaphysics or whatever we think. Rather, our beliefs are the adjustable parts.
So our beliefs are determined by evidence? If not, then what determines what you believe? If I asked you why you believe in something, wouldn't you provide me reasons for what you believe, and those reasons would determine what you believe, no?
OK, so the evidence as I see it, indicates that rocks are deterministic, and human beings are not. It appears to me that mosquitoes are not deterministic either. Nor do plants appear to be deterministic. So I think that inanimate things are deterministic, and living things are not. Do you agree?
We cannot examine microphysical processes such as to be able to decide if they are truly uncaused or not. The consensus among the experts seems to be that they are uncaused. If you can outline a method whereby we could do that examination, down to ever-smaller scales, I'm prepared to listen.
The other thing is that if, for example, radioactive decay turned out to be caused, if we discovered and were able to observe the cause, then we would still be left with the question as to what caused that cause...and so on ad infinitum. We could never know whether we had arrived at the "first cause", and if we had it, logically, would have to be uncaused in any case.
But the microphysical is really the same reality as the macrophysical, just from a different view, and the macrophysical is deterministic and includes humans and their thoughts, beliefs and views. So, as I've been saying, I think that a proper explanation of consciousness could help to unify the different views into a consistent whole. We are missing crucial information to make sense of these contradictory views.
Quoting Janus
Or that there is a causal loop. Think about the causal relationship between predators and prey.
Sure we could come up with better explanations, but no matter how good any explanation is it could never prove "rigidly" or absolutely deterministic causation, even in regard to the "macro'.
Quoting Harry Hindu
How would that work? Take the example of radioactive decay; when the particle is emitted, either it is uncaused or it is the result of something else acting on it to make it happen. If something else acts on it to make it happen, are you suggesting that "something else" could be acted upon by the radioactive particle itself in order to make the unknown agent in turn act upon the particle?
It is only the assumption that free will comes from outside of the natural universe which is incompatible with determinism. Who you are, your pattern, still has agency and makes decisions; it's just that they are predictable to a perfect predictor, which is normative...
What would proof of determinism or indeterminism look like? It seems to me that if the latter were true, our posts wouldn't continue to exist in their original state long enough to have a conversation, much less would we be able to make any predictions, except by luck; and our success rate using the theories we have is much higher than 50%, or smartphones wouldn't work so well for so many. If indeterminism were true, why would we ever have any evidence for determinism?
It seems to me that if the former were true then we can have meaningful conversations, predict events whose success depends on our level of understanding of the event in question. You have to acknowledge the fact that understanding and ignorance seem to go hand in hand with determinism and randomness - and that the understanding of some event can be carried over to similar events, not dissimilar events. So, while we may not be able to prove determinism, we seem to have a good amount of evidence for it. If indeterminism were true, what use is an explanation? It seems to me that determinism can be true and we can also be ignorant, which would look like what we have now - predictable patterns that we have to learn before being able to predict, and that ignorance of some pattern can make the pattern appear unpredictable, or random.
Quoting Janus
Sure, ever heard of the observer effect? And it doesn't have to be that simple of a loop. There could be other processes involved that make it more complex where there is more than the radioactive decay and a "something else" involved.
Actually not quite. In fact, one of my preferred interpretations of QM has precisely this backward causality, along with normal forward causality. The condition that stops what you expect is self-consistency. The question remains then as to what laws of physics guarantee this self-consistency.
An example is matter-antimatter pair creation and annihilation. A photon excites the electron field temporarily, leading to an electron and positron being created which then attract one another and annihilate. If the photon is energetic enough, the electron and positron can be created with sufficient kinetic energy that they escape one another's attraction and become 'real'. But there is a charge-parity-time symmetry in the universe that is obeyed in this phenomenon: a spin-down positron is just a spin-up electron moving backwards in time. (This is true of all antimatter.) Taking the electron's PoV where energy is insufficient to become real, it is excited from the vacuum, then spontaneously emits a photon of twice its own energy and reverses direction in time to conserve energy. It then does it again to change direction to our normal forward direction in time, and it follows this causal loop for an infinite number of iterations. From the outside, a pair is created then annihilated, and that's that.
The transactional interpretation of quantum mechanics interprets the complex conjugate of the wavefunction to be a wavefunction going backwards in time. (Which is not unreasonable: the wavefunction has an exponent proportional to i*t -- the imaginary number multiplied by time. Taking the complex conjugate [i*t --> (-i)*t] and reversing the direction of time the wave travels through [i*t --> i*(-t)] are mathematically identical, and therefore equally valid.)
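The identity the post appeals to can be written out explicitly for a plane-wave phase factor (a standard textbook form, not from the post itself):

```latex
\psi(t) \;\propto\; e^{-iEt/\hbar}
\quad\Longrightarrow\quad
\psi^*(t) \;=\; e^{+iEt/\hbar} \;=\; e^{-iE(-t)/\hbar} \;=\; \psi(-t)
```

Conjugating the phase and reversing the sign of $t$ yield the same expression, which is why the complex conjugate of the wavefunction can be read as the same wave propagating backwards in time.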
Self-consistency is easier to arrive at in this case because the wavefunction itself is not measurable, but the probability density (via the Born rule) is. A photon being emitted from an atom, for instance, will deterministically spread outwards in all or many directions at once. Where that wavefunction encounters other atoms that can absorb it, those atoms emit photons whose wavefunctions move backwards in time (photons are their own antiparticles: a backwards emission is identically an absorption). The only "cause" consistent with the "effect" is the original atom that emitted the forwards photon. This self-consistency is maintained by the requirement that real phenomena have wavefunctions that are not orthogonal to each other. It is the same process of elimination that stops the cat being in a state of dead when radioactive decay has not occurred.
In principle, this back-and-forth could evolve over an infinite number of iterations, with certain paths initially tried by the original photon ironed out (sum over histories) until we're left with a single set of trajectories with the same cause and the same effect(s). Not knowing the future, we can only consider the forward propagation and cannot explain the decisiveness of the effect (the measurement problem).
If you mean macroscopically, like the grandfather paradox, I would suggest that the above, if true, would be subject to the same classical limit as all quantum effects. A person is, while technically a wave, not wave-like, the period being inversely proportional to mass. Any back-and-forth jiggery-pokery would have to occur at insanely, maybe impossibly small timescales.
Nice commentary, Kid. In 1954 I wrote a short paper on this for my physics class in high school. At the time I loved reading science fiction. Of course, the technical details were beyond me, but my teacher, an elderly lady we all loved was impressed. :cool:
Thanks! Yeah, I ended up doing physics simply because I couldn't get past the mathematics of the physics books I wanted to read. For the love of gawd, someone needs to invent an easier way to do physics than maths! Glad your teacher liked it; she sounds like a cool old lady.
Quoting Harry Hindu
The problem with this is that we are discussing a single microphysical event, and you are claiming that every simple event has a proximate and rigidly determining cause. If the cause were just the general global or local conditions, then that would not be saying anything other than that the particular lump of radioactive matter in its particular conditions caused the particle to emit, which tells us pretty much nothing except that the conditions were the conditions.
I can't see what the observer has to do with it. Are you saying the observer causes the particle to emit? How would that work?
I don't understand how countless stochastic micro-physical processes could produce an entity in the macro world which perceives a world that is inherently stochastic, as non-stochastic, by chance. It requires mental gymnastics that my mind isn't capable of performing.
Quoting Kenosha Kid
This is a strange concept. How does some part of the universe move backwards in time while another part moves forward? I thought time was really just change, and that change relative to some other change is how we measure change/time. So any change some positron undertakes is always a move forward in time. How can something in the universe change "backward" while the rest of the universe is changing "forwards", or is this concept of time inaccurate, or inapplicable in QM?
Also, about the "observer effect": when observing the behavior of electrons and protons, how do we know that the changes we observe are actually changes in the states of what we are looking at, rather than something more to do with how consciousness works and perceives/measures quantum-sized processes? It seems to me that if observing these tiny processes actually changes them, rather than the perceived change having more to do with how our minds perceive and model quantum processes, then the old model of explaining vision as shooting beams from your eyes would be more accurate. Either that or these quantum processes "know" when they are being looked at, which seems to indicate some sort of panpsychism.
The observer effect is a theory of QM. I find it odd that you now think QM theories are not credible when you have been promoting QM as a means of discrediting determinism, yet you think that observing, thinking entities can emerge from uncaused processes to observe causation, by chance.
Because Anscombe uses the words determine, determinate, determined, determinism and deterministic without prior explanation, it sometimes makes it difficult to follow her argument.
For example, if the path of one ball (in the Galton Board) is determined by its initial state and the laws of nature (such as cause and effect), then nature is deterministic, not the laws of nature.
Cause and effect is a law of nature. If an effect is determined by a cause, then determinism is a law of nature.
Therefore, contrary to Anscombe, the laws of nature are not deterministic.
Why consciousness, of all things?
The whole of the article is a discussion of the use of these terms. You would have her explain the article prior to beginning it?
In a double slit experiment, there's "a setup" where you see an interference pattern and "another setup" where you do not. So (a) conscious observers can tell the difference between seeing an interference pattern and not seeing interference patterns. Suppose then that consciousness had something to do with QM; one would then think one could (b) create a setup such that if an observer (subject) was conscious, the (conscious) experimenter would see no interference pattern, but if the observer (subject) had no consciousness, the (conscious) experimenter would see one.
(b) is actually quite a powerful result; it is tantamount to a test for consciousness. We don't actually have (b) though; if we did, it would be a lot more powerful than simply making a strange theory few people understood... we'd be putting everything from tardigrades to puppies in the thing testing which entities had consciousness.
Throughout her article, Anscombe states that the laws of nature are deterministic, not that nature is deterministic, e.g., "It ought not to have mattered whether the laws of nature were or were not deterministic".
In mathematics, computer science and physics, it is the system that is deterministic, i.e., its initial state plus the physical laws as described by equations.
Similarly, it is nature that is deterministic, i.e., its initial state plus the laws of nature. The laws of nature by themselves cannot determine anything. The laws of nature by themselves are not deterministic.
It may well be that given Anscombe's particular usage of the word deterministic, her argument is logical and her conclusion sound.
However, the general reader who believes that they know the common usage of the word deterministic may find her argument unclear.
In such a case, where the author uses a word in a way that is different to common usage, then the author should explain what they mean by the word at the beginning of their article.
Notice that the quote does not support your contention?
It's questionable whether using a word in an unusual way produces a sound argument. For the sake of a logical argument, one can define a word in any way the person wants. But a definition ought to be taken as a premise. And a false definition is a false premise.
...like, maybe, when you insist that 2+2 is not the same as 4, despite the protests of all around you.
In general? Who knows. Someone may or may not become convinced of this or that due to some evidence, and change their minds later. Formation of belief is hardly some trivial well-understood thing. And sometimes this or that is wrong, other times (hopefully) right.
Quoting Metaphysician Undercover
Rocks are predictable, as in they don't get up and walk away? :D By the way ...
That leaves blow around in autumn is fairly predictable, their exact paths not so much, and similarly for mosquitoes. Findings like planetary orbits and quantumatics are better examples.
Sure it does. It explains how observations impact the outcomes of the microphysical (ie collapsing the wave function).
Quoting Olivier5
Because we are talking about consciousness when talking about making observations and measurements.
Anscombe may well mean that a closed system is deterministic if, given a situation plus the laws of nature, there will be a unique result; but she wrote that for a closed system, given a situation plus deterministic laws of nature, there will be a unique result.
If deterministic has one meaning, then either the closed system is deterministic or the laws of nature are deterministic, it cannot be both.
The problem for the reader is in judging what Anscombe means by the word "deterministic", when what she means may be different to what she has written.
There are many interpretations of the so-called collapse of the wave function, for example 'Decoherence' is one that does not involve consciousness if I am not mistaken.
I don't pretend to understand much of Quantum theory. My point was that the observer problem has no direct bearing on the issue of whether microphysical processes are caused or uncaused.
If you have knowledge to back up your claim that it does have such a bearing then you should be able to explain what that bearing consists in.
Hi Banno!
I can barely understand all of this. Is this one of the biggest changes from those earlier mechanics to classical mechanics we spoke about?
That causality shouldn't even be a principle anymore? This is all very in-depth and I can sort of understand it, but not in any way put it into words... sadly.
To prove non-determinism it is sufficient to show that mind transcends determinism. If that were the case, a mind could choose to do something, like move a stone; now the stone has been influenced by a non-deterministic factor.
Because they enter a breaking point?
Apart from God, don't all theories of mind fall apart if even causality doesn't apply anymore? Or am I misunderstanding? Would there be a possible world in which a kettle on a stove will not heat up?
It's as if my mind is unable to compute such a thing!
Then I misunderstand. My bad.
This doesn't mean consciousness has some magic pan-universe powers. It's only a tool we are using.
I didn't say or even imply that. Consciousness is a local interaction, not magical and not universal.
https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Wigner_interpretation
https://blogs.scientificamerican.com/observations/coming-to-grips-with-the-implications-of-quantum-mechanics/
(Not a bump - accidental post instead of copying to another thread.)
Quoting Banno
It seems so interesting. Thank you for refreshing and sharing it again months later. I will give it a read :up:
Quoting Banno
The experiment itself relies on the fact that steel balls, poured into the apparatus, will fall through into the bins at the bottom, rather than float away, or turn into a vase of petunias, or something. Further, every trial produces much the same result. Note, the distribution is mirrored left to right, consistent with gravitation toward the centre of the earth, and factors conspiring to push a few balls to the far left and far right. Inability to determine the path of any particular ball is, in my view, the wrong question from which to draw a conclusion. I wholly accept that, with regard to any one ball, determining its path and end point is irreducibly complex. But the results show several consistencies, such that irreducible complexity is occurring within an overall deterministic framework.
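The point about stable aggregate behavior despite unpredictable individual paths is easy to illustrate with a toy simulation (a minimal sketch; the peg count, ball count, and seed are arbitrary choices, not from the post):

```python
import random
from collections import Counter

def galton_path(levels, rng):
    """One ball: at each peg it bounces left (0) or right (1).
    The final bin index is the number of rightward bounces."""
    return sum(rng.random() < 0.5 for _ in range(levels))

rng = random.Random(42)
levels, balls = 10, 50_000
bins = Counter(galton_path(levels, rng) for _ in range(balls))

# Any single path is practically unpredictable, but the histogram
# reliably approximates Binomial(10, 0.5): peaked at the centre
# bin (5) and roughly symmetric, trial after trial.
```

Which bin a given ball lands in depends on ten coin-flip-like bounces, yet the overall shape of `bins` is the same on every run: unpredictable individual trajectories inside a stable, law-like aggregate, which is exactly the distinction the post draws.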