Determinism vs. Predictability
There is a group of issues that I’ve been wrestling with lately. They are ones that come up a lot on the Forum. Specific issues include determinism, predictability, probability, reductionism, emergence, free will, causation, and chaos theory. I don’t want to retread all the recent threads, so I’ll focus on a fairly specific issue: how is determinism different from predictability?
In a recent thread, I wrote the following:
Quoting T Clark
It feels intuitively to me that in some, many, most? cases unraveling cause is not possible even in theory. It's not just a case of being ignorant. Part of that feeling is a conviction that sufficiently complex systems, even those that are theoretically "caused," could not be unraveled with the fastest supercomputer operating for the life of the universe. There is a point, isn't there, where "completely outside the scope of human possibility" turns into "not possible even in theory." Seems to me there is.
Quoting T Clark
If something is completely unpredictable, does it still make sense to say it is caused? Isn't cause inextricably tied up with prediction? It may be possible to model and predict a coin flip or build a machine that can flip a coin with near perfect uniformity, but how about 1,000 flips using 1,000 random coins flipped by 1,000 random people?
For now, I don't want to get into an argument about the difference between causation and determinism.
In response to this, Wittgenstein posted the following:
Quoting Wittgenstein
With regard to a complicated system, I have found the article linked below quite useful. What I have partially understood is that a deterministic system can be unpredictable because the uncertainty and error in the initial measurement of the system will cause drastic change in the calculated outcome.
.... closer look reveals that determinism and predictability are very different notions. In particular, in recent decades chaos theory has highlighted that deterministic systems can be unpredictable in various different ways.
http://philsci-archive.pitt.edu/12166/1/DeterminismIndeterminismWordPittsburghArchiveWithF.pdf
I had some problems with the article, and it covered more than I want to cover here, but it raised interesting issues I want to think more about. And I recognize that “determinism” does not necessarily mean the same as “causation,” but I don’t especially want to get into that here.
I’ll leave the OP there for now. I don’t like posts that are too long. I’ll immediately follow up in my next post with more specifics on how I see the issue.
Comments (152)
"Predictability" is only used in an epistemological sense.
If we're using "determinism" in the epistemological sense, it makes sense to see it as synonymous with predictability.
If we're using "determinism" in the ontological sense, it's definitely different than predictability.
First, some definitions. Here are some definitions of “determinism” from various places:
I want to be clear that I am talking about a strong determinism - protons bouncing off of each other, not the kind of determinism that comes from people being affected by their genetics and environment.
Here is a definition of predictability from Wikipedia: “Predictability is the degree to which a correct prediction or forecast of a system's state can be made either qualitatively or quantitatively.”
OK, down to business. Here is what the linked article says:
It has often been believed that determinism and predictability go together in the sense that deterministic systems are always predictable. Determinism is an ontological thesis. Predictability – that the future states of a system can be predicted – is an epistemological thesis…. However, a closer look reveals that determinism and predictability are very different notions. In particular, in recent decades chaos theory has highlighted that deterministic systems can be unpredictable in various different ways.
The article does talk about Chaos theory a bit and I think it might be helpful to discuss it. I may bring it up later.
Looking on the web, it turns out I’m not the first to raise the determinism vs. predictability issue. There’s a lot to look through. Here’s a link to an article I found that had something in particular to say. It’s called “Determinism and the Paradox of Predictability.”
https://link.springer.com/article/10.1007/s10670-009-9199-1
Here’s what the authors say:
The inference from determinism to predictability, though intuitively plausible, needs to be qualified in an important respect. We need to distinguish between two different kinds of predictability. On the one hand, determinism implies external predictability, that is, the possibility for an external observer, not part of the universe, to predict, in principle, all future states of the universe. Yet, on the other hand, embedded predictability as the possibility for an embedded subsystem in the universe to make such predictions, does not obtain in a deterministic universe.
I didn't read the whole article. I just liked/hated the first sentence so much I wanted to use it. I say “bologna.” Well, no, that’s a bit strong, but, as you can see from the quotes in my first post, I don’t agree.
In the past I've said that epistemology belongs as part of metaphysics along with ontology. Actually, at heart, when I say that there's no difference between determinism and predictability maybe I'm taking the first step in arguing that there's no difference between ontology and epistemology.
Not ready for that in this discussion.
Are you familiar with this guy?
Probability is a way of expressing prediction, but it doesn't apply to unique events. Trying to squash the concept to fit leads to the conclusion that the outcome of any particular event had a 100% chance of happening.
To a certain extent. Actually, his name comes up a lot when you type "determinism vs. predictability" on the web. How is he specifically relevant?
Quoting frank
We're talking about predictability, not probability. Again, how is this relevant to the issue as I've laid it out?
Quoting T Clark
I get the feeling I've stepped off a cliff with this statement. On the forum, unlike in Road Runner cartoons, I have time to step back before I fall. I'll say it differently - just throwing out the words "ontology" and "epistemology" doesn't really respond to my posts. It doesn't really say anything. How about a bit more of an in-depth response?
That guy is mentioned in the article you posted; I think he hasn't checked it out yet. :smile: I will try to break down the article into some basic points so we can remove the scholarly jargon and discuss the real matter at hand. I will possibly fail, but let's give it a try.
1. Asymptotic unpredictability (AUP)
This unpredictability is due to how small inaccuracies in the initial measurement of a system will drastically undermine the accuracy of our predictions, even in a deterministic system. The reason is that such small differences in the initial values of the system spread over the whole range of possible states in a short time.
However, asymptotic unpredictability is not unique to chaotic systems, as some systems can be simple rather than complicated and still have asymptotic unpredictability.
Here we can see that the author (Werndl) is placing probability as a kind of predictability. Hence, instead of saying all chaotic systems are associated with AUP, she states that the probabilities of future states cannot be obtained from the initial values of the system.
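The asymptotic unpredictability described above can be illustrated in a few lines of Python. This is only a sketch, using the logistic map (a standard textbook chaotic system, not one taken from Werndl's paper): two starting points differing by one part in ten billion soon yield completely different trajectories, even though every step is deterministic.

```python
# A minimal sketch of asymptotic unpredictability, using the logistic map
# x_{n+1} = 4 x_n (1 - x_n): a simple, fully deterministic system.

def logistic_trajectory(x0, steps):
    """Iterate the logistic map x -> 4x(1-x), returning all visited states."""
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2, 60)
b = logistic_trajectory(0.2 + 1e-10, 60)  # initial error: one part in 10^10

# Early on the two trajectories are indistinguishable; within a few dozen
# steps the tiny initial error has grown to the full width of [0, 1].
early_gap = abs(a[5] - b[5])
late_gap = max(abs(a[n] - b[n]) for n in range(40, 61))
```

The same deterministic rule, run twice, gives answers that disagree completely once the measurement error has been amplified. That is the sense in which a deterministic system can be unpredictable in practice.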
2.The link between determinism and stochastic systems
How does having the same predictions indicate a similar system? There is also a problem with predicting a stochastic system's behavior, but a simple deterministic system can be easily predicted. I hope I am not missing something here. The author doesn't seem to connect the first topic with the second one. After that, they discuss underdetermination, which is set aside by stating that the evidence favours deterministic systems over stochastic ones. Personally, I don't think this is related to our topic.
One of the reasons I didn't like the article you linked to was how poorly I thought it described the issues you are talking about above. I guess the point it was trying to make was that there are, after all, good reasons why a deterministic system might be unpredictable. That brings me back to my previous statements
Quoting T Clark
Quoting T Clark
I just don't see how it makes any sense to say something is deterministic if it can't be used to determine, i.e. predict, something. A quick survey of the web on this issue shows that a lot of other philosophers have felt the way I do, although the majority seem to disagree with my position.
I think the reason these two notions are often conflated is, in part, due to the fact that the ideas of determinability and predictability are, at least in relation to outcomes, synonymous.
You flip a fair coin, the probability of heads is 0.5, the probability of tails is 0.5. The outcome is not predictable, but the probabilistic behaviour can be fully specified. Whether this behaviour arises from true randomness or as a result of an intricate dependence of the dynamics of coin flipping to the forces applied to lend it rotation and project it through the air, the distribution of heads and tails is still part of the system. Probability's a latent structure of even fully deterministic systems.
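A sketch of that point: simulate the flips and the "latent structure" (the 50/50 distribution) appears as a stable feature, even though nothing in the simulation predicts any single outcome. A seeded pseudo-random generator here stands in for the intricate flipping dynamics.

```python
# Sketch: no single flip is predictable, but the distribution is a stable
# feature of the system. A seeded PRNG stands in for the flipping dynamics.
import random

rng = random.Random(42)  # fixed seed, so the run is reproducible
flips = [rng.random() < 0.5 for _ in range(100_000)]  # True = heads

proportion_heads = sum(flips) / len(flips)  # settles near 0.5
```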
Would you say probability is a "latent structure" of the deterministic system itself or merely of our attempt to understand it?
Just to be clear, I disagree with the quote you attributed to me. That's not my position, it's the position of the paper I referenced.
Quoting Janus
It seems you're saying they are conflated because they mean the same thing.
When I look at the definitions of determinism above, it seems to me it's talking about, not the ability to predict that after 1,000 throws there will be about 500 heads, but the exact sequence of results - h,t,h,h,t,t.....t,h, h, h, t, t.
Seems to me that probability has a place in this discussion, but I'm not exactly sure how. Maybe I have not been clear enough about what I mean by "predictable."
Just to be clear I was neither attributing nor not attributing the idea expressed in the text I quoted to you; I was merely responding to the text.
And yes, I am saying that determinism and predictability are often conflated, probably at least in part due to the fact that 'determinability' and 'predictability' mean the same thing in respect of outcomes; that coupled with the obvious relationship between the ideas of determinability and determinism.
You should try to refute AUP, even though it seems wrong intuitively. Maybe it is talking about a lack of information causing the unpredictability, and we can perhaps predict chaotic systems if the initial values are accurately known; or maybe the unpredictability is inherent in the system.
I don't think that is the reason predictability and determinism are assumed to be equivalent. I think they genuinely are, in all meaningful ways, equivalent at the point, as I said,
Quoting T Clark
At that point, it becomes what is often called a distinction without a difference. I think that's the heart of my argument. To me, if prediction of a system's behavior is "not possible even in theory," it doesn't make sense to say it is deterministic.
I don't think I was clear enough. AUP doesn't seem "wrong intuitively" to me, I just don't think it changes my argument. As I said, it seems to me it's just one of the
Quoting T Clark
Or am I missing something?
I have two questions for you regarding this statement.
Chaotic systems are deterministic, but does it say that in theory (not in practice) it is impossible to predict the future states, as you have mentioned, in a discrete manner?
Let's suppose it does; but why is predictability a necessary condition for a deterministic system?
That was one of the points I was trying to make. In my opinion if something is so difficult to predict that it is and will never be possible to do so, it doesn't make sense to call it deterministic. To me, that would be the same as saying even if only God can predict it, it's still deterministic. I think that's what people are saying, and I don't agree with it.
Of course, he also recognized that this is not possible for human beings, but only for his hypothetical "Demon". Later, someone (I forget who right now and can't be bothered searching it), showed that, even assuming that nature is utterly deterministic, such prediction would be impossible even in principle due to the so-called Three Body Problem.
It can be either or both. I'll focus on it as a latent structure, seeing as the epistemic angle is well known.
The distribution of outcomes H-T-H-... or whatever derives from the dynamics of coin flipping, but it is as legitimate a part of the system as the centre of mass/centroid being in the same place on the coin. Another example: if you throw a football around (which is very spherical), the probability that it comes to rest on any given face should be approximately equal to the proportion of the surface area that face makes up. I think this notion is summarised by all points on the faces of the coin and the faces of the football being generic, so that when you partition the surface into the faces (heads/tails, or each football face) the uniform distribution holds on the partition.
A more realistic system might be a melting candle in a restaurant near a door. The candle's lit, the wind's blowing through the door. The gusts are of random strength and direction, but they tend to come in through the door. The candle will melt more on the side furthest away from the wind's gusts since they will bend the flame that way. An adequate description of that system would be some model of how the candle melts in the absence of wind plus some directional wind that changes the proximity of the flame to one side of the candle. If you could find the effect of a gust on the rate of change of the candle wax melting in that direction, then make the gusts random (but clustered in appropriate directions and of appropriate strengths), that would be an adequate description of the system.
AFAIK that can actually be done with bridges, stress testing can be done in mathematical models by looking at them in a random form, adding random forces and torsions. Seeing how the bridge would respond to them.
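A toy Monte Carlo "stress test" in the spirit described above can be sketched as follows. The stiffness value and load range are made-up numbers, and the structural model is deliberately trivial (deflection proportional to total load); a real analysis would use a proper finite-element model.

```python
# Sketch: random-force stress testing in miniature. Apply random gust loads
# to a toy linear model of a structure and examine the response distribution.
# STIFFNESS and the load range are assumed, illustrative values.
import random

rng = random.Random(0)
STIFFNESS = 2.0e6  # N/m, an assumed overall stiffness

def peak_deflection(loads_newtons):
    """Toy linear model: deflection (m) proportional to total applied load."""
    return sum(loads_newtons) / STIFFNESS

trials = []
for _ in range(10_000):
    # ten random gust loads per trial, each up to 5 kN
    gusts = [rng.uniform(0.0, 5000.0) for _ in range(10)]
    trials.append(peak_deflection(gusts))

typical = sum(trials) / len(trials)  # mean response over random loadings
worst_case = max(trials)             # worst response seen in the simulation
```

The point of the exercise is the one made above: even without predicting any individual gust, the distribution of responses tells you how the structure behaves under randomness.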
One thought that comes to mind, is that, unless you're a systematic philosopher, the world is not a system. It is rather more like what is required for there to be systems. But I am inclined to believe that the world must transcend any notion of 'system'.
Quoting T Clark
'This guy' is Pierre-Simon Laplace, sometimes called 'the French Newton'. He was a genius, the inventor of many fundamental ideas in modern cosmology and social science, including the concept of demographics (among many others.)
What I think Frank is referring to is directly relevant, and as it hasn't been spelled out yet, let's do it. 'Laplace's Demon' says:
— Pierre-Simon Laplace, A Philosophical Essay on Probabilities.
Now, regardless of the merits of this statement in light of what has happened since 1814, when it was published, I feel as though this statement is hugely relevant to this thread, as I think this is the source of the whole idea of 'determinism' which so many people who turn up on this forum and post seem to take for granted. I often feel like asking them if they've heard of this guy, but I'll hold off for now.
So, what can be the case is only what you can understand, or know, it to be? Sounds like solipsism to me.
Before God created humans, the universe was deterministic, but when he created humans the world became indeterministic? God isn't a god if its omniscience is erased by the creation of humans. It doesn't make sense. It seems to me that it is entirely possible that the world is a certain way that is different from our knowledge of it (indirect realism).
Our knowledge is fallible. Randomness is the result of a lack of knowledge of some system. Once we acquire the necessary knowledge the system becomes predictable. Predictions and randomness are ideas that exist in one's head as a result of one's knowledge. What may appear random to you is predictable to me because we both have different knowledge of the system.
This is a way of presenting logical possibility. True, it's derived from facts about the system, but it's still at most a way of weighing expectation. It's an expression of uncertainty. We actually have no knowledge about which side will land face-up unless the system is rigged.
The way this can be misunderstood is to imagine that probability is about a branching future.
And yet real instances of flipping fair coins produce this kind of distribution, very similar to the sex ratio at birth. The claim that 'if we knew all relevant information then the future would be fixed' isn't inconsistent with probability, since what flipping a coin is contains that uncertainty about the future.
Who is doing all this knowing? Where is it in the system?
Yes, I agree completely. "You can say that, but there is nothing stopping people speculating.." Sounds like a definition of "philosophy" to me.
I say this all the time - This is a metaphysical/epistemological issue. It's not a matter of fact, it's a matter of choice and usefulness. I think making the distinction between determinism and predictability is not useful in most cases and I think it is often misleading. Here's a quote from William James's "Pragmatism."
[i]Pragmatism, on the other hand, asks its usual question. "Grant an idea or belief to be true," it says, "what concrete difference will its being true make in anyone's actual life? How will the truth be realized? What experiences will be different from those which would obtain if the belief were false? What, in short, is the truth's cash-value in experiential terms?"
The moment pragmatism asks this question, it sees the answer: True ideas are those that we can assimilate, validate, corroborate and verify. False ideas are those that we cannot. That is the practical difference it makes to us to have true ideas; that, therefore, is the meaning of truth, for it is all that truth is known-as.[/i]
I guess that makes me a pragmatist.
That quote was from the journal paper Wittgenstein linked me to. I wasn't trying to make a point by using that word. If it confuses things, we can just use one of the other definitions. That's why I like to provide more than one when I can.
Quoting Wayfarer
How is that different than the definitions of "determinism" provided in the OP? Obviously, it provides a more detailed description, but otherwise I don't see any inconsistency. I don't know if you read the article, but the authors call out this specific quote for criticism.
Quoting Wayfarer
I specifically left out quantum mechanics from this discussion because I wasn't sure how it fit in. I have no objection to you bringing it up. It is my understanding that QM is considered a deterministic theory.
Quoting Harry Hindu
As I said previously:
Quoting T Clark
At that point, in my opinion and in others', it stops being deterministic.
As I've said, there are times when knowing all relevant information isn't possible, even in theory. Even if it were possible, that information would also have to be processed in order to make a prediction.
If we flip a coin a thousand times, we can be pretty confident that 50% of the flips will be heads. If we lack confidence in logic, we can do it and then be happy that we can predict the future.
If we flip one coin, we know zero, nada, not-a-fucking-thing about the outcome (unless the system is rigged or we have Laplace's demon on hand.) I'm sure you agree with that?
I don't understand what you're asking.
Quoting frank
You actually know quite a lot about the outcome. It's described as 50% heads 50% tails. We don't know what the outcome is but we have a complete specification of the system; all relevant info is encoded there, right? Anyway.
Let's look at determinism as captured in Wayfarer's quote of Laplace.
Quoting Wayfarer
From which you can distill two principles.
(1) If you have a complete specification of a system at some time t, then it is specified for all times before t and after t. Positions, momenta, orientations, that kind of thing.
(2) The specification procedure for all preceding and following states can be obtained by 'submitting the data to analysis'. Presumably this is a codification of all relationships of the basic variables of nature that entail everything about everything else given sufficient manipulation.
and from the remainder of the quote.
(3) In such a description, nothing would be uncertain (for the subject of 1 which has the specification procedure in 2).
I wanna say that (1) is purely an ontological claim; it concerns the nature of nature/being/One/All/Many/process/stuff/whatever. (2) is talking about a codification of the information in (1), if it can all be distilled into some placeholder. It is how the 'complete' specification' in (1) would be articulated. Then we have that if (1) is true and (2) exists, nothing would be uncertain for that intellect.
So there's a lot going on there. (1) and (2) together still look like determinism, and don't have any epistemic valences. But (3) uses 'uncertain' in a colloquial sense, as in 'there is nothing which could not be known/predicated/anticipated' by the intellect.
We know that (1) is false, so long as we take a realist interpretation of the wavefunction in quantum mechanics. It might fail for other reasons too, but this suffices. There's randomness in nature.
Interestingly (2) also seems to be false, at least if we make the assumption that the grand complete specification in Laplace's formulation is the result of an algorithm, anyway. And not some divine act of comprehension.
I know this doesn't really clear up much of the relationship between uncertainty and determinism, or randomness and determinism, but hopefully it provides a useful distinction between what's going on in (1), (2) and (3).
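Principle (1) can be sketched concretely with an invertible deterministic system: knowing the state at one moment fixes it at every other moment, past as well as future. The toy "universe" here is a linear congruential map, chosen only because it can be run exactly in both directions; it is an illustration, not a model of physics.

```python
# Sketch of (1): an invertible deterministic "universe". The state at time t
# fixes every later state (run forward) and every earlier state (run the
# inverse map backward). Parameters are those of the classic minstd generator.
M, A = 2**31 - 1, 48271

def forward(x):
    """One deterministic time step."""
    return (A * x) % M

A_INV = pow(A, -1, M)  # modular inverse of A: lets us run time in reverse

def backward(x):
    """Exact inverse of forward()."""
    return (A_INV * x) % M

state_t = 123_456_789
future = forward(forward(state_t))       # the state two steps ahead of t
recovered = backward(backward(future))   # running backward recovers t exactly
```

Nothing epistemic appears anywhere in this: invertibility is a property of the map whether or not anyone computes it, which is one way of seeing why (1) is an ontological claim while (3) is about a knower.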
Assuming no asteroids interfere and so on, we feel confident that the outcome will be either fully and completely heads or fully and completely tails. So we know an if/then statement.
Think about what question 50%/50% is actually answering.
Quoting fdrake
Chalmers uses Laplace's demon in a rambling book about constructing worlds. He modifies it to cover objections. I don't have that book anymore, though.
Actually, if I flip a coin 1,000 times, it's unlikely that exactly 50% of the flips will be heads. It's nearly certain, though, that if I keep flipping, the proportion of heads will eventually settle toward 50%. I'm not trying to nitpick here. It seems to me to be a pretty important distinction.
Quoting frank
As @fdrake says, knowing that there is a 50/50 chance it will come up heads is not "nada." It's more than we know about lots of things that are a lot more important than coin flips.
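The distinction between the proportion settling near 50% and any exact count can be checked numerically. A sketch, using a seeded simulator as a stand-in for real flips:

```python
# Sketch: the *proportion* of heads drifts toward 0.5 as flips accumulate,
# even though exactly 500 heads in 1,000 flips is fairly unlikely.
import random

rng = random.Random(7)
heads = 0
proportions = {}
for n in range(1, 100_001):
    heads += rng.random() < 0.5   # bool counts as 0 or 1
    if n in (100, 1_000, 100_000):
        proportions[n] = heads / n

final_drift = abs(proportions[100_000] - 0.5)  # small after many flips
```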
50/50 is an assessment of a formal system, not the outcome of a unique coin toss.
Maybe it would help if we considered an unbalanced object. It has a 97% chance of coming up heads. What does 97%/3% tell you about a unique toss?
Which brings us back to the point I've been trying to make. In sufficiently complex systems, which are not all that complex, (1) and (2) are not humanly possible and therefore (3) is false.
Quoting fdrake
As everyone knows, a football is not spherical, it is oblong with pointy ends.
It tells me that, if I bet $1.00 on the fair coin, the expected value of the bet for me is $1.00. If I bet on the unfair coin and I call heads, the expected value is $1.94.
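The arithmetic behind those two figures, as a check:

```python
# Sketch of the expected-value arithmetic above: a $1 even-money bet
# returns $2 on a win (stake plus winnings) and $0 on a loss.
def expected_return(p_win, payout_on_win=2.0):
    """Expected amount returned on a $1 bet."""
    return p_win * payout_on_win + (1.0 - p_win) * 0.0

fair_ev = expected_return(0.50)    # fair coin: $1.00, a break-even bet
biased_ev = expected_return(0.97)  # 97% coin, betting heads: $1.94
```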
In any case the truth of any philosophical position can never be proven.
Take the "football" example. If the ball is not perfectly spherical then it will not be pure chance even in relation just to the ball itself, without considering any external forces, what part of its surface it comes to rest on. Even if a ball were perfectly spherical, where it comes to rest would be completely determined by forces external to it, and thus not at all a matter of chance.
The only element of chance or probability I can imagine would be if nature is, at a microscopic level nondeterministic, which would seem to entail that macroscopic outcomes, although very close to being deterministic, would nonetheless have a tiny element of chance influencing them.
I've said this several times previously in this thread - this is a metaphysical/epistemological issue. There is no proving required. It's a question of usefulness, meaningfulness, value. What is the value of the "realist" approach in this instance? What value is there in talking about something that we can think about happening but which can never actually happen? What does it contribute to knowledge, wisdom, effective action? That's probably a pragmatist's question.
I don't believe so. I think the 'uncertainty principle' slays Laplace's demon. So it's directly relevant to the issue.
Quoting fdrake
It's a theoretical projection based on the principle of what an all-knowing mind would know. As Hawking famously said, 'If we discover a complete theory, it would be the ultimate triumph of human reason – for then we should know the mind of God.'
I didn't say it wasn't relevant, I said that, as I understand it, QM is considered deterministic. I just went and looked on the web. Apparently that is not completely true - some say it is and some say it ain't. However we come down on it, I don't think it has any impact on my position.
And that, of course, is one of the principal tenets of the Copenhagen interpretation of quantum mechanics, of which Werner Heisenberg was one of the chief proponents. It undermines determinism. (Actually I have just learned that if you begin to search "Copenhagen interpretation and det..." Google remembers the query and fills in the last word - which tells you something!)
I recommend having a peruse of Heisenberg's Physics and Philosophy, foreword by Paul Davies. It's freely available as a PDF. It was written in the fifties, is not an especially arduous read, and is philosophically acute, in my humble opinion.
Of course, we can have one view or the other regarding both of these questions, and there is no question of "proof" as you have agreed. Is there any truth in these matters? If so, is the truth ultimately a matter of consensus, as pragmatism would have it? Or is it a matter of mere personal preference; what works for me or you? Is it a matter of plausibility, and if it is, how do we derive a standard of plausibility that is not itself a matter of mere preference or consensus?
As you can probably tell I am somewhat of a skeptic who leans towards realism.
People playing at philosophy will always try and put their own special spin on it to make themselves feel validated.
As for “ontology” and “epistemology”, I agree. They are the same thing and it is merely a convenient demarcation of speech - the underlying game of philosophy where the physicist doesn’t much bother themselves with such - to be frank - tail chasing drivel (and nor do philosophers of any substance).
I think I agree with your interpretation but, again, it doesn't have any impact on my position. It's just another example of the case I am trying to make - if it ain't predictable, it ain't deterministic.
Quoting Wayfarer
I don't think either you or I know exactly what God can and cannot do.
Quoting Wayfarer
Here's what the Stanford Encyclopedia of Philosophy says about QM being deterministic:
So goes the story; but like much popular wisdom, it is partly mistaken and/or misleading. Ironically, quantum mechanics is one of the best prospects for a genuinely deterministic theory in modern times! Everything hinges on what interpretational and philosophical decisions one adopts. The fundamental law at the heart of non-relativistic QM is the Schrödinger equation. The evolution of a wavefunction describing a physical system under this equation is normally taken to be perfectly deterministic. If one adopts an interpretation of QM according to which that's it—i.e., nothing ever interrupts Schrödinger evolution, and the wavefunctions governed by the equation tell the complete physical story—then quantum mechanics is a perfectly deterministic theory. There are several interpretations that physicists and philosophers have given of QM which go this way.
I'm not taking a position on this. I'm only using it as evidence that not everyone agrees that QM undermines determinism. In terms of my argument, whether or not other people say it is deterministic, I say it's not because events at the atomic level are not predictable under either classical or quantum mechanics.
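The "perfectly deterministic" Schrödinger evolution the SEP quote mentions can be sketched for the smallest possible system, a two-state one. This is an illustration only: for H = σx the propagator has the closed form U(t) = cos(t)·I − i·sin(t)·σx, and applying U(t) then U(−t) recovers the initial wavefunction exactly, with the norm preserved throughout.

```python
# Sketch: deterministic, reversible evolution of a two-state wavefunction
# under H = sigma_x, using the closed form U(t) = cos(t) I - i sin(t) sigma_x.
import math

def evolve(psi, t):
    """Apply the propagator U(t) for H = sigma_x to psi = (a, b)."""
    a, b = psi
    c, s = math.cos(t), -1j * math.sin(t)
    return (c * a + s * b, s * a + c * b)

psi0 = (1.0 + 0j, 0.0 + 0j)       # start definitely in the first state
psi_t = evolve(psi0, 0.7)         # run the Schroedinger evolution forward
psi_back = evolve(psi_t, -0.7)    # U(-t) = U(t)^-1: run it backward

norm_t = abs(psi_t[0]) ** 2 + abs(psi_t[1]) ** 2  # stays 1: nothing is lost
```

On the interpretations the SEP quote describes, indeterminism enters only with measurement or collapse, not with this evolution itself.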
Wait - I like this one better.
Agreed.
Quoting Janus
Just to be clear, you are referring to metaphysical and epistemological truths, is that correct? If so, then yes, it's a matter of preference or consensus. If we're going to try to work something out together, we have to come to an agreement on these issues, which provide the underlying rules of the game. Otherwise, we'll just spin our wheels, as so often happens on the forum.
...according to the 'relative state formulation' of Hugh Everett, which, however, requires that the universe 'branches' every time an observation is taken.
Again, my only point in this regard is that some believe QM is deterministic. I have no position on any of those arguments. My position is that since atomic events are not predictable under either quantum or classical mechanics, it makes sense to consider them non-deterministic. That is the intended substance of this thread.
It seems to me that if you are making the case that something is the case because something else is the case, then you are making the case for determinism.
If something is completely outside the scope of human possibility and that makes it not possible in theory, and that makes the case that there isn't determinism, then you just made the case for determinism. If there are cases that logically follow other cases, or that there are reasons that you point to for your ideas, then you are making the case for determinism. It seems to me that in order to be logical, you can't escape being deterministic.
Possibilities are ideas in someone's head that can either be reflective of actual states of affairs, or not (imaginings). What exactly is outside the scope of human possibility? How would we know such a thing?
Going out on a limb here.
I guess what I'm gesturing towards is why should we care about the perspective of God on a system when God's external to it? It's a question of how structures are internalised to systems, rather than abstracting away from the details of all of them. So in my question to @frank, "who's doing the knowing?", who does the knowing that vouchsafes this kind of determinism? It can't be located within a functionally bounded system - one which has demarcated modes of operation, it can only be the totality of all things viewed from the perspective of that infinite intellect.
Quoting Janus
Well randomness doesn't have to be like the popular notion of it. Random variables are just normalised measurable functions on some sigma-algebra of events. The sigma algebra of events can be (associated with) a deterministic dynamical system, and there will still be random variables induced by the transitions of this dynamical system. Like the Bernoulli distribution emerges out of deterministic coin flipping (probably due to initial condition sensitivity of the flipping dynamics). Randomness is much more like forgetting some parts of a system's structure through suitable aggregation of its events than unstructured exceptions to linear notions of cause.
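The point about randomness emerging from deterministic dynamics via aggregation can be sketched concretely. This is my own toy example, not fdrake's: the doubling map x → 2x mod 1 is completely deterministic, yet if we "forget" structure by recording only which half of [0, 1) each state falls in, the induced 0/1 sequence behaves like a run of Bernoulli coin flips.

```python
# The doubling map is fully deterministic; recording only the coarse
# event "state is in [1/2, 1)" induces a Bernoulli-like 0/1 sequence.
from fractions import Fraction

def doubling_map_bits(x0, n):
    """Iterate the deterministic doubling map x -> 2x mod 1, recording
    1 when the state lies in [1/2, 1) and 0 otherwise."""
    x, bits = x0, []
    for _ in range(n):
        bits.append(1 if x >= Fraction(1, 2) else 0)
        x = (2 * x) % 1
    return bits

# Exact rational arithmetic avoids floating-point collapse to zero.
bits = doubling_map_bits(Fraction(123456789, 1000000001), 200)
frequency = sum(bits) / len(bits)
```

Nothing here is uncaused: every bit is fixed by the initial condition. The "randomness" lives entirely in the coarse aggregation of the system's states, which is the sense of "forgetting structure" above.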
It's just folk wisdom that the mead you're drinking isn't going to turn into petroleum on its way down your throat without a knowable explanation.
How is our confidence in that justified? Opinions vary, but I don't think anyone believes it's dependent on somebody knowing something.
Two problems with this. 1) I'm talking about physical determinism, you're talking about logical determinism. Not the same thing at all. 2) I've made it clear that I'm talking about complex systems. I used an example, billiard balls, where a case can be made for predictability and determinism. You don't have to go much up the ladder of complexity before direct empirical predictability is lost and we are left to deal with probabilities. I'm using the words "direct" and "empirical" to mean predictability made possible by actually tracking the positions of particles and calculating future conditions. I'm not sure if those are the right words to use.
Quoting Harry Hindu
Good point. I've tried to make the case that, in all but the simplest systems, empirical predictability is not humanly possible. If I flip a coin 1,000 times, there are 2^1,000 possible sequences of results, each with equal probability. That's about 1 x 10^301. The web says there are about 1 x 10^80 atoms in the visible universe. That's what I mean by "outside the scope of human possibility."
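The arithmetic here can be checked directly; a few lines suffice (the atoms figure is the commonly cited rough estimate, not something computed):

```python
# Check the combinatorics: distinct head/tail sequences from 1,000
# flips, versus the commonly cited ~10^80 atoms in the visible universe.
import math

sequences = 2 ** 1000                 # distinct ordered outcomes of 1,000 flips
digits = len(str(sequences))          # number of decimal digits
atoms_estimate = 10 ** 80             # rough figure, taken as given

# 2^1000 has 302 digits, i.e. it is about 1.07 x 10^301 -
# roughly 10^221 times the atom estimate.
ratio_exponent = math.log10(sequences) - 80
```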
You're oversimplifying and you're stepping outside the bounds of the specific definitions of "determinism" used in the OP which, as I said, are:
Quoting T Clark
I think I understand what you're saying and I think I agree with you. Could you clarify a bit for someone barely literate in probability and statistics. Terms I could use help with - "structures are internalized;" "functionally bounded system;" and "demarcated modes of operation."
Thank you for your patience. These are still things I'm wrestling with, and it is very good to have someone who is actually interested in my wrestling with them.
"Structures are internalised" isn't something really statisticsy, I guess I just used it as a placeholder. We might say that electrical flow is an 'internalised structure' of a circuit, so are the wires and conductivity, voltages, currents, their relationships. They don't have much to do with nuclear bombs or sledgehammers or wire cutters even though they can all destroy circuits.
"Functionally bounded system" is another placeholder. Whenever you find some system behaving in some way, it behaves in the ways it does rather than in all ways it could behave. Realising any counterfactual of a system seems to require either further evolution along one of its trajectories; like time increasing in heat dissipation of an object as a function of time making the object go to about the environmental temperature. Or the introduction of a new regime of behaviour relevant to the previously established one; like when enzymes lock with their substrates and go from floating proteins (orientations of proteins and position and their time changes) to part of the molecular factory of digestion (enzymes working to produce nutrients). That "Or" is not an exclusive or.
When you have an infinite intellect, nothing really realises in the first sense (within the same regime) and nothing really novel happens (no new regimes), so we're left with a completely timeless completely described blob.
"demarcated modes of operation" was another placeholder. Imagine you disperse some amylase in solution with some starch molecules. Both the starch and the amylase are jiggling about independently for the most part. If you were to model their positions and velocities and orientations, they'd keep on wiggling along indifferent to each other until... they got close, and one breaks the other apart. The starch and the amylase have demarcated modes of operation; moving about independently; until they get sufficiently close to interact (yielding a new regime of behaviour). Their movements may as well be different dynamical systems until they interact, and these two independent subsystems are demarcated modes of operation within the starch-amylase system.
Thanks for the clarification. I think I understand what you're trying to say and I agree up to a point. I guess it all comes back to
Quoting fdrake
The substance of my position is, if I don't believe there is an omniscient God watching and keeping track of everything all the time, and, if I believe it is not humanly possible to empirically predict any but the simplest systems, then saying the world is determined is not useful or even meaningful.
On the other hand, if I do believe in such a God, I think I would accept the case for determinism.
How did I step outside? Btw: causal determinism does not require that anyone have any knowledge of causation. It's not about knowledge. It is related to a conviction that all causes are knowable. This is naturalism.
Here's what you wrote:
Quoting frank
I don't see how our confidence that mead won't spontaneously turn into petroleum has anything to do with determinism as we are discussing it.
I went back and checked to make sure I understood what you were responding to. I think I did. Yes, a bit more detail would be helpful.
I don't think there is any. The assumption that all causes are knowable has historically been a part of the methodology of science.
At this point, it's also common sense. Doesn't mean it's true, but that's the foundation of causal determinism.
I agree with both statements.
Quoting frank
Causal determinism is the concept that is being examined in this thread. Do you think science will fall apart without it? I don't.
Science (which means knowledge), will fall apart when societies withdraw support. That could happen for a number of reasons.
That's not what I meant. I was trying to say that I don't think science conceptually needs determinism, as defined in the OP, in order to continue successfully.
Quantum Mechanics' probabilistic outputs are used to build many great devices that work.
True, but I don't see how it's relevant to the discussion.
Quoting T Clark
Don't know. Some say science as we know it was born in the age of mechanism. As we graduate from that age, there is fear that letting go of a naturalistic anchor will open the door to rampant superstition and trance dancing.
Science could probably use some help from the part of philosophy that isn't just a cheerleader for a mechanistic perspective.
Only if you're a dualist. For a monist there is no difference. It's all causal.
Quoting T Clark
Yeah, this went over my head. Can you give an example? It seems to me that our predictions are either confirmed or rejected empirically.
Quoting T Clark
Again, you're confusing probabilities with reality. Probabilities only exist in the human mind as imaginings.
If we take "random" to refer to processes which are not causally determined, then, under that definition at least, there can be no randomness in a deterministic system.
I agree. The "wheel-spinning" seems to be generated by the unacknowledged incompatibility of people's basic assumptions or definitions. If we can agree on basic premises and definitions, then there might be a decent chance that consensus can be achieved.
It is incorrect to say that God is ‘external’ to the Universe. God is understood as transcendent-yet-immanent - beyond and also within.
Not so. That is contradicted by the wave equation which is precisely a distribution of probabilities. There is not an objectively-existing particle lurking undiscovered.
That there are definitely no undiscovered particles seems to be a wholly unwarranted assumption.
Whether or not we are leaving "the age of mechanism" I don't think there's any reason to throw out the scientific baby with the bathwater. Or is it the baby with the scientific bathwater?
Quoting Harry Hindu
Only in the sense everything only exists in the human mind as imaginings.
Random - Of or characterizing a process of selection in which each item of a set has an equal probability of being chosen.
I don't see why that implies a lack of causation.
Quoting Janus
Agreed.
Seems to me that science has always claimed to see the world from a God's eye view, from the outside, whether or not it was expected a God was there to view it.
It is part of the Copenhagen interpretation. Remember ‘wave-particle duality’? You see one or the other depending on your experimental setup, but you can’t say what you’re measuring apart from the observation you actually make. So it undermines the idea of there being an objective reality behind the observation. To many (including Einstein) that is shocking, but as Bohr said, if you don’t find it shocking then you’re probably not understanding it.
Quoting T Clark
Not ‘always’, not by a long stretch. It is very much characteristic of modern science, post Galileo-Newton-Descartes.
Karen Armstrong’s 2009 ‘Case for God’ was written as a response to the new atheism fad, but from an unusual perspective, namely that of cultural history and comparative religion, rather than regular apologetics. The aspect that is directly relevant to this particular point is the way that she said the early moderns brought God into the picture of emerging modern science, as a kind of guarantor and under-writer of natural law; 'God's handiwork' as Newton would say. However this proved to be a double-edged sword, because as the scope of natural science expanded (or exploded!), the requirement for a being to 'set the wheels in motion', as the deist God was thought to have done, became less and less; this is the origin of the 'God of the gaps' argument. But the problem, according to Armstrong, was with the entire conception of 'God' as a kind of celestial super-engineer in the first place. It was an anthropomorphic projection that was in some ways an inevitable outgrowth of monotheism, and it was that which led directly to the kind of caricature of religion that is the subject of criticism by modern atheism. There's been an un-noticed perspectival shift behind it which is very hard to see.
[quote=Karen Armstrong]the idea of God as Supreme Being means that he is simply like us, writ large, but just bigger and better, the end product of the series; whereas this divine personality that we meet in the Bible was, for centuries, regarded simply as a symbol of a greater transcendence that lay beyond.
Some theologians (such as Paul Tillich) have called this the God beyond God. And this God isn't just a being like you or me, or the microphone in front of me, or even the atom, an unseen being that we can find in our laboratories. What we mean by God is, some theologians have said, is being itself that is in everything that is around us and cannot be tied down to one single instance of being.[/quote]
To answer that question, I think it's useful to consider a simple system and how it would be represented.
Consider a light switch that is connected to a light bulb. The state of the bulb (lit or unlit) is determined by the state of the switch (on or off). With that specification, the state of the bulb is also predictable. That is, if we know the state of the switch then we can predict the state of the bulb with certainty.
In this scenario, the term "determined" relates to just the system itself whereas the term "predictable" relates to an agent's knowledge of the system.
Some observations:
1. The claim of determinism for the system depends on particular assumptions. For example, there must be power present, the circuit must not be broken or subject to interference, the bulb won't store power, etc. That is, we're considering the system in a formal (or idealized) sense.
2. The example system is a closed system - the output (bulb state) is fully specified by the input (switch state).
3. We could introduce a randomizing component into the circuit such that the bulb is randomly lit when the switch is turned on. If the randomizing element is an input to the system, then the system is both non-deterministic and unpredictable. If the randomizing element is internal to the system but its mechanics unknown, then the system would be deterministic but unpredictable.
4. At a more detailed level of representation, the system may have non-deterministic components. For example, there are molecular quantum events that do not affect the predictability of the high-level operation. Thus the system can be non-deterministic yet predictable.
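The switch/bulb example and observation 3 can be sketched in a few lines (the function names are my own, not part of the original post):

```python
# A minimal sketch of the switch/bulb system. The determined system is
# just a function of its input; the second variant hides an element
# inside the system that the agent cannot inspect.
import random

def bulb_state(switch_on):
    """Fully deterministic: the bulb's state is a function of the
    switch state alone, so it is also predictable given the switch."""
    return "lit" if switch_on else "unlit"

def bulb_state_hidden(switch_on, element=random.random):
    """Observation 3, second case: an internal element whose mechanics
    the agent can't inspect. Even if `element` were secretly
    deterministic, the system would be determined yet unpredictable
    to that agent."""
    return "lit" if (switch_on and element() < 0.5) else "unlit"
```

The point the code makes explicit is that "determined" is a property of the function itself, while "predictable" depends on whether the agent can see inside `element`.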
OK, firstly I thought you were referring to an existing particle of a different kind than any currently known, and were claiming that there were no such particles to be discovered.
I see now that you mean something else; that before the particle is observed it has no existence. I am familiar with the "Copenhagen" idea that the particle has no definite position prior to being measured (observed), but not with the claim that it has no existence.
Can you cite an authoritative text that contains such a claim?
The way I understand it, the concept of determinism is the idea that all events have physical causes which determine them 100 percent. QM of course denies this, and claims that there is a genuinely random (in the sense of not 100 percent causally determined) element in physical events. The idea of indeterminism is that at "bottom" physical events are truly random (uncaused) but that due to their large-scale probabilistic nature they average out to produce macroscopic events which seem to us to be 100 percent causally determined. I am very much open to being corrected on this, since my understanding is by no means anything approaching expert level.
Have a look at Paul Davies’ introduction to Heisenberg’s Physics and Philosophy:
(p. xii)
An interesting discussion of Armstrong's work, but definitely outside limits of my experience. Maybe science hasn't always claimed to see the world from a God's eye view, but that's certainly the way I learned it. It seems to me that belief in objective reality existing beyond what we perceive requires that there be a God watching from the outside.
Be that as it may, I don't see how it changes the substance of my argument one way or the other.
I read "neither entity" as referring to position and momentum, not to the electron itself. After all it is already an electron we are talking about, and not a photon, gluon, proton, neutron, boson or neutrino.
In the OP and subsequent posts, I laid out specific meanings for "determinism" and "predictability" and the kinds of situations to which I think they apply. You seem to be using different definitions than I did.
I don't want to get too far from the definitions I established in the OP. The substance of the position I have presented is that I don't think it is useful to apply the concept of determinism as defined there. I have presented reasons for taking that position. This seems to be consistent with the position you've described. I agree that the system you describe which is only statistically predictable does not meet the standards for determinism presented in the OP.
As I said in the post you quoted, it is not clear to me that "random" and "uncaused" mean the same thing. One way or the other, it is not particularly relevant to the substance of my position.
I think the example of the light switch/light bulb system captures the definitions you gave in your opening posts. That is, determinism (or non-determinism) relates to the system itself while predictability relates to an agent's knowledge (or information about) the system.
How would you summarize your definitions if you understand them to be different to that?
This nasty sleight of hand is how one goes from science to Deepak Chopra woo shit.
I don't think the wave function tells you about one measurement. It tells you about multiple measurements.
One thing that follows from this understanding is that randomness can only be spoken of in relation to a fixed system. Something is random insofar one cannot choose, in advance, between fixed outcomes. So a coin toss is random because the two outcomes, head and tails, are fixed in advance, and what makes the toss random is the equiprobability of outcome. Conceptual problems creep in when this relation to fixity is lost: if the coin turns into an elephant, that's not random, that's nonsense.
A further consequence of this is that randomness is an epistemic, and not ontological, concept. If randomness is system-relative (defined only in relation to a fixed system), then no event 'in-itself' is either random or not-random. Instead, you need a distribution of (potential) events relative to a system in order to qualify something or some event as random or not. But importantly, what counts and does not count as belonging to, or constituting, a system is itself relative to the kind of investigation one conducts.
That we take a series of random coin tosses to be our object of investigation already supposes artifice; that we count the repetition of coin tosses as constituting a series at all (rather than say, unconnected, singular coin tosses that happen to occur in a row), is the result of a decision, and does not follow from anything 'naturally occurring' in the world.
This is important, randomness is only a property of an artificial system. It is something created. The randomness in QM and other microsystems, discussed in this thread, is a property of those systems which have been created by physicists. Randomness itself, because it only exists within the confines of a created system and therefore cannot be absolute, is necessarily determined in the sense of being created intentionally. That is why it is an epistemic, and not an ontological matter. It only takes on the appearance of an ontological issue, as an illusion, when misguided metaphysicians such as C.S. Peirce, posit randomness as a fundamental ontological principle.
I didn't say this. That a coin toss is random is entirely a real, and not artificial, property of a series of coin tosses. In fact it might be fair to say that 'real' and 'artificial' don't even come into it at all. A coin toss is random, no qualifications attached. But that our object of investigation is a coin toss at all follows from a choice made by an agent.
Moreover, Peirce did not posit "randomness as a fundamental ontological principle", but chance. The two are not interchangeable.
Quoting Wayfarer
Then solipsism?
Quoting T Clark
I don't see a difference between an outcome between two billiard balls colliding and the outcome between your finger colliding with a side of a coin. They are both predictable in the same way - by knowing the motion and force applied to all particles involved.
Quoting T ClarkLike I said before: you are arguing for solipsism.
As I discussed previously - I'm not talking about flipping a coin and trying to predict the outcome. I'm talking about flipping the coin numerous times and predicting the exact sequence of heads and tails.
Quoting Harry Hindu
Maybe it would be solipsism if I were to write "Only in the sense everything only exists in my mind as imaginings", but that's not what I wrote or meant.
The idea of randomness kind of snuck into this discussion. It's not something I've thought enough about to be comfortable with my understanding. Your post is really helpful. I'm going to keep it to use as a reference in the future. I'll quote it to pound other posters into submission.
Thanks.
I think I was too offhanded in my response to your post. Let me explain more.
Your post provides a good description of a simple system where it is reasonable to talk about what I have been calling "empirical determinism." My main point, however, is that as a system becomes more complex, it quickly becomes practically impossible to predict its outcomes empirically. At that point, it no longer makes sense to talk about the system as determined in that sense.
By the way, I have been making the distinction between empirical and probabilistic determinism and predictability. I have a feeling those are not the right terms to use. Are they ok or are there others I should be using?
Well, don't leave us hanging, tell us the difference.
That's one flavour of randomness, though a biased coin flip is still random.
Quoting StreetlightX
I agree that something has to be 'fixed' in the background for 'randomness' to make sense, but this 'fixing' isn't necessarily epistemic (though it can also be that as well). Take the case of a biased coin flip. If we flipped a biased coin 1000 times, the differences in flipping strategy each time provide different initial conditions (forces, rotations, locations) which are carried through by the deterministic (or functionally so, anyway) dynamical laws of coin flipping to final head or tail states. In this case I bet that the fixed background which allows the distribution to emerge is precisely the presence of those dynamical laws, the space of initial conditions, and the geometry of the coin (this coin will develop along these flipping trajectories with these initial conditions).
When we view this from the perspective of the outcome H-T-H or whatever, we can't retrofit back to the initial condition which generated the outcome, too much has been lost by the encoding. This encoding isn't merely epistemic though, the coin being able to land on either side and that it will get stuck in those states (through an impact or two with surfaces) is every bit as valid a property of coin flipping as the underlying deterministic laws which transform hand movements to head or tail.
The probabilities of attaining head or tails emerge from whatever biases the coin, those biases in the coin propagate through the dynamics of coin flipping to biases in the proportion of outcomes.
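This mechanism - a deterministic law plus variation in initial conditions yielding a biased distribution - can be sketched with a toy model (the dynamics and names here are my own invention, not a physical model of real coin flipping):

```python
# Toy model: the flip "law" is deterministic, but imprecise flicks
# supply varied initial conditions, and the coin's geometry
# (heads_fraction) biases the resulting distribution of outcomes.
import random

def flip_outcome(spin_rate, flight_time, heads_fraction=0.5):
    """Deterministic: the outcome depends only on the initial
    conditions. heads_fraction stands in for geometry/bias - the share
    of final orientations that settle as heads."""
    final_angle = (spin_rate * flight_time) % 1.0   # fraction of a full turn
    return "H" if final_angle < heads_fraction else "T"

def run_flips(n, heads_fraction, seed=0):
    """Variation lives in our imprecise flicks, not in the law."""
    rng = random.Random(seed)
    return [flip_outcome(rng.uniform(10, 20), rng.uniform(0.4, 0.9),
                         heads_fraction) for _ in range(n)]

flips = run_flips(10_000, heads_fraction=0.6)
proportion_heads = flips.count("H") / len(flips)   # near the 0.6 bias
```

Each individual flip is fully determined by its initial conditions; the 60/40 distribution only appears across the ensemble of imprecisely repeated flicks, which is the sense in which the bias "propagates through the dynamics".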
Quoting StreetlightX
Though I do definitely agree with this. It's important not to reduce reality to models of it, or to hypostatise models to reality. Good models are always more than just models though!
Edit: if you want a Deleuze-inspired fuzz on it, randomness is a (there are others) virtual complement of actual outcomes and no less real for that. It's one way nature holds itself in suspense until it resolves (or realises) itself. Edit2: and this virtuality of randomness shows up in why it can be both a sensitivity to externality (unmodelled noise, disruptive perturbation) and a codification of immanent potentials (the distribution of coin flips, birth sex of babies, etc.)
Again, there is no difference between predicting the outcome of 1000 coin flips or 1000 billiard collisions. We are still talking about predictions based on the forces involved with each event.
Quoting T Clark
Then you'd be inconsistent because other minds could be just as imaginary as everything else. You have just as much evidence for other minds as you do for everything else that you claim is imaginary. There is no coherent middle ground (ie idealism). It's either realism or solipsism.
You and I (and several others) have gone back and forth on this quite a few times in this thread. I think we've taken it as far as we're going to get.
Quoting Harry Hindu
Yeah...., well...., no.
Sure, if you can't make a coherent distinction between the two causal events then I guess we are done here.
Quoting T Clark
Sure it is. You can't make a good argument as to why you believe other minds exist but not other things that aren't minds when the only evidence you have for other minds is other things - like organisms.
A coin is something natural?
What happened to your claim that "randomness can only be spoken of in relation to a fixed system"? A "fixed system" is an artificial system.
This is the salient distinction I was trying to tease out with @fdrake. Putting it another way is to say that randomness is indeterminability. Ontological randomness would be ontological indeterminism, which is defined as microphysical events being not merely epistemically random: they are not determined by anything at all; they simply happen without cause.
Quoting StreetlightX
Perhaps we are speaking about different things then, because what I have been saying is based on thinking of randomness and chance as one and the same.
A fixed system can and does capture real phenomena. A great deal - if not all - of experiments in science involve fixing possible variables in order to isolate some dynamics of some system or another. That does not make scientific results artificial.
Sure, just like any artificial thing is real.
Quoting StreetlightX
What? Scientific results are not artificial? Artificial means produced by human beings. Are you suggesting that scientific results just pop into existence without being produced by human beings. You've degenerated to new levels of nonsense StreetlightX.
Determinism, if true, would allow us to predict the future in complete detail.
Clairvoyance, knowledge of events, may not be deterministic in nature. It would allow us to make predictions too.
So determinism implies predictability but the converse isn't true.
Mm, I was not entirely comfortable with my use of the ontological/epistemic distinction. I suppose what I wanted to emphazise was the necessity of an intervention by an agent, or at least another system, the interaction between which would alone give sense to any measure of randomness. Any 'epistemic' investigation would of course, be a subclass of this type of intervention, but you're right that the former would not exhaust what fixes the background against which randomness would appear.
I guess that one could put it in terms you've been using recently too: you need a system with sensitivities to the potential distribution of events in order for randomness to make itself 'show'. When we investigate randomness, we set up such systems - we are, or make ourselves sensitive to such situations. Or in a non-epistemic manner, one example that springs to mind is using radioactive decay to generate cryptographic keys: such a process harnesses the randomness of atomic decay to generate unique, hard-to-hack keys for encryption purposes. Would this kind of thing jibe with what you had in mind?
Transgression! How did you lose your will? Unpredictability seeps in, to even the most predictable things. Why?
T Clark has rejected a nonexistent form of determinism, fdrake is banging away about his pet worldview, nobody wants to talk to anybody else. This thread was doomed to end this way since the big bang.
Without wading too much into this, I deliberately avoided questions of 'in/determination' - indeed avoided the word(s) altogether - insofar as I think one can treat randomness - in the sense I outlined - without at all engaging in questions of determination and cause. I'll only say that I'm not convinced that one can make sense of the idea of indetermination or randomness ('ontological randomness'), and that what we need instead is a far richer conception of 'determination' than is usually presented, which is usually just fatalism evacuated of any causality whatsoever.
Keep hammering away
OK, you're deflationary about determinism. Which is to say we either have equations and procedures to make predictions about a system, or we do not. There's no deeper story.
It's a pragmatic approach that similarly deflates/dissolves the issue of free-will and determinism. We can make predictions (she will drink tea rather than coffee) and give ordinary causal explanations without needing to posit metaphysical explanations.
Quoting T Clark
Is that your billiard balls/coin flips distinction? That seems to be just an issue of precision. You can set up a robot to flip a coin to always land heads. Conversely if the billiard balls are small enough (or isolated enough) then probabilistic quantum effects will be observed.
Quoting StreetlightX
Quoting Andrew M
Quoting Janus
So, for @frank, and to contextualise the connections between what I've posted and the rest of the discussion: the thread's topic is determinism and predictability. As @Andrew M and @T Clark have shown, a system can be deterministic but not predictable (the light switch with an internal random element whose mechanics are unknown), or predictable but not deterministic (a system whose low-level random variation doesn't affect its high-level behaviour).
Then we have the subthread on epistemic vs ontological randomness. Epistemic randomness arises from epistemic uncertainty: how much do we know, how precise is our knowledge. This relates to degrees of predictability: how accurate and precise are our predictions using our knowledge. Whether a child is born with male or female sex is very unpredictable with no scans etc.; whether the sun will rise tomorrow is very predictable.
We can be in a state of great uncertainty with regard to the future of a deterministic system, like a chaotic one, purely due to our epistemic uncertainty concerning it: measurement precision of input variables and initial conditions. Allegedly, though, there cannot be a state of ontological uncertainty with regard to the future of a deterministic system. The chain of entailment goes:
(1) No ontological uncertainty in deterministic systems because
(2) Their future is not random because
(3) Their future states are completely specified by any input state.
We can agree with all of these things and still try to locate randomness within deterministic systems, as measures of the probability of their future states given a range of initial conditions. The equations that update climate models are deterministic, nevertheless they're run lots of times to produce "probability of rain tomorrow" and so on. The input variables (initial conditions) are changed slightly to see what happens. In this case, assuming that the climate is a deterministic process completely modelled by its updating equations, the randomness of the future arises from measurement uncertainty.
The thread I was trying to pull on with my coin flipping example was to read the range of initial conditions back into the coin flipping process. With more detail there, we only have a finite degree of precision with how we apply force to the coin, what direction we send it in, and all the other dynamical variables required to completely specify its final state as Heads or Tails. Rather than this range of initial conditions arising from measurement uncertainty of a variable, it arises from bounds on the precision we can control our bodies with and the properties of the coin. The ratio of heads to tails produced in flipping a fixed coin repeatedly relies upon the natural level of precision with which our flipping actions specify its trajectory, and how that is conditioned by the coin.
Such randomness isn't just a result of epistemic uncertainty; our knowledge of the coin and our bodies helps us little to change how coin flipping works; but nor is it a-causal ontological indeterminism - the system is fully deterministic; once a trajectory is fixed, the coin will land as it would land from the start. But when we come to flip the coin, it does form a distribution of heads and tails; this must therefore arise from variation in our set up; in which initial conditions we propagate forward along their trajectories. Where those initial conditions vary is due to the variability in the behaviour of our body material in a process held as equivalent (coin flipping, "fixed background"), not in states of knowledge regarding the coin.
Edit: I think it's more precise to say that there are features of the process (of coin flipping) which can be held as equivalent (heads or tails end states), from which we can calculate the set of initial conditions (a pre-image of heads and of tails) which yield each outcome. The proportion of the initial conditions which yield heads give its probability, the proportion which yield tails give its probability.
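The climate-model point above can be illustrated with a toy ensemble (the dynamics and parameter names are my own, chosen only because the logistic map is a standard chaotic example): the update rule is fully deterministic, but running it from many slightly perturbed initial conditions yields a probability, just as ensemble forecasts produce a "chance of rain tomorrow".

```python
# Deterministic, chaotic update rule run as an ensemble: varying the
# initial condition slightly and counting outcomes turns a
# deterministic model into a probability statement.
import random

def update(x, r=3.9):
    """Deterministic, chaotic update rule (the logistic map)."""
    return r * x * (1 - x)

def ensemble_probability(x0, spread, steps, trials, threshold=0.5, seed=1):
    """Fraction of perturbed runs ending above `threshold` - the
    ensemble's 'probability' for that outcome."""
    rng = random.Random(seed)
    above = 0
    for _ in range(trials):
        # Perturb the initial condition, clamped to the map's domain.
        x = min(max(x0 + rng.uniform(-spread, spread), 0.0), 1.0)
        for _ in range(steps):
            x = update(x)
        if x > threshold:
            above += 1
    return above / trials

p = ensemble_probability(x0=0.3, spread=1e-6, steps=50, trials=2000)
```

Every single run is determined by its initial condition; the probability `p` is a property of the ensemble, i.e. of the range of initial conditions, which is exactly the reading-back move described above.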
If clairvoyance could give actual, verifiable predictions of future events, that would be good evidence for determinism, although perhaps not exactly in terms of the issue in this thread as I've laid it out and as it has developed.
In spite of your snotty criticism, I have found this thread very helpful in clarifying my ideas about determinism and predictability. Randomness has now been added which, although perhaps a bit outside the OP, I am also finding helpful and interesting. I hope it will continue. I'm pretty happy with the way things are going.
If you don't find it interesting or helpful, I can see why it wouldn't be satisfying, in which case maybe you should try something else.
As I indicated in the OP, I see the determinism/predictability question as just one in a larger set of issues. I would love to broaden that discussion or focus on a different question as we have started to do by bringing randomness into it. Would it make sense to do that in a separate thread. I had been thinking about suggesting that anyway, but I didn't want to disrupt the current discussion, which I am enjoying very much.
Alternatively, we could revise the title to read "Determinism vs. Predictability - Now - New and Improved with added Randomness!"
Are you using "indeterminability" as a synonym for "indeterminism?" I don't think that's correct. It seems to me it is closer to being one for "unpredictability."
I'm not sure I agree with "...they are not determined by anything at all, they simply happen without cause." Maybe it's in agreement with my position, but with all the new terms flying around, I'll need to think about it.
[Edit - changed "predictability" to "unpredictability" in the first paragraph.]
Thanks. Really helpful post, although it's helpful in making me think about broader issues, not necessarily about coming to conclusions about my original questions. That's fine with me. Some thoughts:
Quoting fdrake
Just to be clear, in my formulation, which I've labeled "pragmatic," if an event isn't predictable, it isn't deterministic. Billiard balls yes, multiple coin flips no. Let's work a little on definitions, please. What you are calling "ontological determinism" is what I called "determinism" in the OP, i.e. if someone knows the position and motion of everything at a given time, they can predict the state of the universe at any time in the future. What I think we are now calling "epistemic predictability" is what I am calling "predictability" in the OP, i.e. a system is sufficiently simple that it is practical for us to keep track of all the causal factors in order to predict future states. Is that correct? If so, I will be happy to use those terms in the future.
Quoting fdrake
Maybe I don't understand or maybe I disagree. It is my understanding that chaotic systems are completely unpredictable given the passage of sufficient time. What counts as sufficient time is set by a time scale that varies with the system.
I want to say more and I will, but I have to go now for a few hours.
The fixed background is a formal system we analyze. We express our expectations (derived from rules of logical and physical possibility) as probability.
If testing agrees with our expectation, we then feel confident that the real system matches the formal system. Probability is not extruded through the system; it's a proposition about the system.
A sign that we rely heavily on logical possibility is that if we test a system and statistical analysis of the outcome shows that the system isn't performing as expected, we don't update logical possibility. We start looking for the discrepancy in the real set up.
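That last point can be pictured with a minimal sketch. The 3-sigma cutoff and the flip counts below are illustrative assumptions; the point is only that when the observed outcome strays too far from the formal model's expectation, we question the real set up rather than the logic:

```python
import math

def matches_fair_model(heads, flips, z_crit=3.0):
    """Compare observed flips against the formal fair-coin model: under
    the model, heads ~ Binomial(flips, 0.5), so the z-score of the
    observed count should rarely exceed ~3 if the real coin matches."""
    expected = flips * 0.5
    std = math.sqrt(flips * 0.25)
    z = (heads - expected) / std
    return abs(z) <= z_crit, z

ok, z = matches_fair_model(heads=5210, flips=10000)
print(f"z = {z:.2f}, consistent with fair model: {ok}")
```

Here 5,210 heads in 10,000 flips gives z = 4.2, well outside the expected range, so we would conclude the real coin or the flipping procedure deviates from the formal system - not that logical possibility needs revising.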
Quoting fdrake
It has been my position in this thread that I don't think this makes sense from a pragmatic point of view.
As I said in the OP:
Quoting T Clark
Also, it is my understanding that some magicians (and cheaters I guess) can control their coin flips so that they can control whether a flip comes up heads or tails. I'm thinking through what that means for our discussion.
Quoting fdrake
Sure, I can see that the equations may be strictly deterministic, but that doesn't mean the system in the real world is. I don't think. Maybe. Kind of, sort of.
Are you talking about weather or climate models? I read somewhere that the appropriate time scale for weather forecasting is about a week. After that, predictions become very imprecise quickly. I don't know what the time scale is for climate models. I assume much longer. Also, climate models are greatly simplified as compared to actual climate systems. It is my understanding they can, accurately we hope, predict trends and tendencies reasonably far into the future, but not detailed specifics.
My approach in this thread has been to try to get clear about the logic involved in notions of indeterminability and indeterminism and the contexts in which positing them or their antitheses makes sense. So, I am not at all trying to arrive at any metaphysical conclusions.
So, it seems to me that 'indeterminability' is a posit which belongs in the context of epistemology. A complex system could be strictly deterministic (i.e. without any actual random or chance events whatsoever) and yet future outcomes of that system could (in fact arguably would) be indeterminable. Saying that a system is strictly deterministic is an ontic posit. Saying that the same system is indeterminable, or unpredictable, is an epistemic posit. The weather system is a good example. Of course saying that the weather system is indeterminable or unpredictable does not mean that we cannot model the system and make more or less accurate predictions about it; it means that determinations or predictions are subject to degrees of uncertainty which become vastly amplified as the time-frames for the predictions are increased.
The next point I want to clarify is about randomness or chance, which I see as being the same notion (in this context at least), and I think positing them about a system constitutes an ontological claim, not merely an epistemological claim. You quoted "...they are not determined by anything at all, they simply happen without cause". It seems to be well-accepted in QM that (at least some) microphysical events are acausal, they simply happen, and that is what I was referring to in the sentence you quoted part of there. The claim that these microphysical events are acausal is not merely an epistemic, but an ontic claim.
But again, I am not making any metaphysical or ontological claims here, I'm merely trying to get clear about what these terms are being used to posit, and in what context, epistemic or ontic, such posits are apt.
That's right, there is no such thing as a completely deterministic system. That's a fantasy.
Quoting fdrake
There is a very real problem with this assumption, and that is that such a system is not real. A system cannot be completely deterministic, because it is always subject to outside influence. A completely deterministic system would be a completely closed system, which is impossible to construct, and even if it does exist somewhere naturally, it couldn't be observed. There is no such thing as an absolutely "fixed", or determined system, so it makes no sense to talk about what does or does not exist within such a system.
Sorry. I'm lost. I could keep track of some of what you wrote, but in the end, it spun off.
I agree, but from what you've said, I think you and I have different reasons for thinking so.
One point where we may disagree, or be talking at cross-purposes, is this: from what I understand you think that it is incoherent to say that a system could be deterministic, if it is not epistemically deterministic. For example, the internal combustion engine is epistemically deterministic. That just means it is a simple system whose function is reliably predictable. Since the weather is not epistemically deterministic, being a complex system that cannot be reliably predicted, I take it that you would say that it would be incoherent to think that the weather system could be ontically deterministic. Did I get that right?
I have found it's very common that I write something that I think is clear but other people don't understand what I'm trying to say.
Quoting Janus
Sorry, I've gone back and reread the second and third paragraphs of your previous post twice and I just can't figure out what they mean.
That we come to the same conclusion from different approaches is good support for the conclusion.
Quoting Janus
It isn't though, because the car breaks down when you least expect it. You're oversimplifying "deterministic" and "predictable" in order to say that if you can predict something, there is a deterministic system involved.
The universe would be the closed system. If there are multiple universes, then the Multiverse would be the closed system. In other words reality itself is the closed system. Determining the motion and position of every particle within the universe would allow you to predict the future of the Universe and everything inside of it - something that may be beyond the ability of the human brain but maybe within the power of a computer.
Now, if the universe is infinite in space and time then that would make it impossible for the universe to be deterministic on large scales of space and time. We might be able to make predictions on small local scales, but our predictions become less reliable the larger and further we try to reach.
- I don't think omniscience will ever be possible. The physical world is not self-aware so the particles themselves don't know what they're doing and are just passively responding to the various forces.
"power of a computer"
- But such a massive supercomputer would itself exist inside the universe and so in order to make predictions it would have to be simultaneously aware of every particle that constitutes this computer and every other particle in the universe. I don't think this would be possible at the same time.
Why do you think that?
This is a statistical/probabilistic argument so bear with me.
Imagine a coin flipping experiment. No two flips are causally connected i.e. each event is independent of the next. It is extremely improbable that you'll get a 1000 heads in a row but it isn't impossible. A clairvoyant person could be just one very lucky dude/gal if you prefer.
Complete prediction is not possible from within the closed system. See Determinism and the Paradox of Predictability.
Right. So, complete prediction of the closed system is not possible, but that isn't to say that the universe isn't deterministic, or that states-of-affairs in local areas aren't predictable - and that is all we really need. Do we really need a complete prediction of the closed system to accomplish what we want at any given moment? NASA can still get spacecraft to Pluto without knowing where every atom in the solar system is. And if we could acquire the motion and position of most of the particles in the universe, would that allow us to narrow down the possible futures of the universe so that we can at least eliminate contradictory predictions?
I think you're right, there are systemic reasons why chaotic systems are chaotic, even though (AFAIK) there isn't just 'one thing' which is chaos. Even if the system is sensitive to initial conditions, there has to be a reason for why it's sensitive to them.
One of the things that makes a chaotic system chaotic is how it acts to disperse points away from themselves (called topological mixing); that there exist (sets of) states in the system which travel so far and so fast away from themselves (under the evolution of the system) that their trajectories never return to where they came from after an amount of time. This occurs when (and only when) there exists a state that can evolve arbitrarily close to any other state in the system. There are related notions for this that rely upon probability; if trajectories return in the above sense with probability zero, or if there exist points which go everywhere except collections of states with probability zero, the system will still be chaotic in some sense.
Not all the points of a chaotic system have to have this property for the system to be chaotic. Only some of them do. The trajectory might get stuck somewhere in the state space, like falling down to the bottom of a hill and being unable to get back up its slopes again, and these 'somewheres' are called attractors. Attractors come equipped with sets of initial points that will eventually end up in them, and these are called basins.
Generic points of chaotic systems usually do not belong to basins of attraction (most places in the state space don't lead to being stuck in a rut), so those points never end up in a stable repeating pattern of behaviour. This is mostly why chaotic systems have trajectories that diverge from small changes in initial conditions, introduced by measurement/instrumental error or limitations of computer precision in representing numbers; though precisely how quickly nearby trajectories diverge from one another depends on the system and on the trajectory itself (discussed in the mathematics of the Lyapunov exponent). The presence of chaos does not depend on the divergence rates, but how much it affects predictability does.
In this regard, a chaotic system can be said to be more predictable (relative to others) when its trajectories diverge more slowly. How quickly they diverge quantifies the predictability of a chaotic system without an appeal to uncertainty of the initial conditions (like measurement error): that uncertainty is amplified over time into divergent patterns of behaviour within the measurement precision of the input.
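Both features (divergence of nearby trajectories, and a rate quantified by the Lyapunov exponent) can be seen in a few lines of Python using the logistic map at r = 4, a standard fully chaotic system whose largest Lyapunov exponent is known to be ln 2; the starting points and step counts here are arbitrary choices for illustration:

```python
import math

def logistic(x):
    # Logistic map at r = 4: fully deterministic, fully chaotic.
    return 4.0 * x * (1.0 - x)

# 1. Divergence: two trajectories started a tiny distance apart.
x, y = 0.2, 0.2 + 1e-10
separations = []
for _ in range(40):
    x, y = logistic(x), logistic(y)
    separations.append(abs(x - y))
print(f"initial gap of 1e-10 grows to {max(separations):.3f} within 40 steps")

# 2. Rate: estimate the Lyapunov exponent as the orbit average of
# ln|f'(x)|, where f'(x) = 4(1 - 2x); the known value for r = 4 is ln 2.
z, total, n = 0.2, 0.0, 100_000
for _ in range(n):
    total += math.log(abs(4.0 * (1.0 - 2.0 * z)))
    z = logistic(z)
print(f"estimated Lyapunov exponent: {total / n:.3f} (ln 2 = {math.log(2):.3f})")
```

A positive exponent of ln 2 means the initial uncertainty roughly doubles every step, which is why a gap of 10^-10 saturates to order one within a few dozen iterations: the prediction horizon grows only logarithmically as measurement precision improves.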
Keeping in mind that flipping 1,000 heads in a row is no less likely than any other specific series of heads and tails, there are 2^1,000 possible combinations of heads and tails. Of course 1,000 heads could come up on your first try. It's much more likely you will flip coins until the end of the universe before it happens. That, to me, is a fine definition of impossible, which is the case I've been trying to make since the OP.
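The rough arithmetic behind "impossible in practice" can be run directly. The rates below (everyone on Earth completing one 1,000-flip trial per second for the age of the universe) are illustrative assumptions, not figures from the thread:

```python
import math

# Number of equally likely 1,000-flip sequences.
sequences = 2 ** 1000

# Generous, illustrative assumptions about how many trials could ever be run.
seconds_per_year = 3.15e7
age_of_universe_years = 1.38e10          # ~13.8 billion years
flippers = 8e9                           # rough world population
trials = flippers * seconds_per_year * age_of_universe_years

print(f"possible sequences:   ~10^{math.log10(sequences):.0f}")
print(f"trials ever possible: ~10^{math.log10(trials):.0f}")
print(f"chance of ever seeing all heads: < 10^{math.log10(trials) - math.log10(sequences):.0f}")
```

Even under these absurdly generous assumptions, the trials ever possible (~10^28) fall short of the sequence count (~10^301) by roughly 270 orders of magnitude, so the chance of the all-heads run ever appearing is bounded far below anything distinguishable from zero in practice.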
As usual, I think you know a lot more about this than I do. Reading more about chaos and complexity are high on my reading wish list. Any particular recommendations?
I'm not sure it makes any difference to my primary position - if predicting future states of a system is so difficult as to be practically impossible, I don't think it makes sense to consider the universe deterministic. I think that's what we have been calling "ontologically deterministic."
This (and the whole channel) is excellent for visualisations and doesn't skimp on the math.
I'll take a look. Thanks.
No.
Quoting Harry Hindu
I can't think of an instance. It depends what you're attempting to predict. For most relatively closed subsystems I'd have thought that it's not a problem.
The car is epistemically deterministic in the sense that the problem can be identified and the car repaired or an irreparable part replaced. On the other hand the human body is not like this; many things can go wrong that we do not fully understand and repair is often impossible.
Again, if you're interested enough to identify which parts or words are giving you trouble, I will be happy to explain. If not, no problem.
The existence of free will demonstrates that the universe, as we know it, is not a deterministic system, nor a closed system. To say that there are multiverses which comprise a closed system is nonsense, indicating that you do not know what a "system" is.
Quoting Janus
The fact that human beings can identify the problem after the fact, and repair it by replacing the worn parts, does not make the system deterministic. After all, human beings built the system in the first place, and it is the fact that the machinery will break down which makes it non-deterministic.
Sure. A system is an assortment of interacting parts - like neurons, people or universes. Free will is an illusion.
Quoting Metaphysician Undercover
If you predicted that it would break down, and it eventually does, then that is deterministic. Deterministic means that the outcome of some system is capable of being predicted by some mind. It follows some logical pattern. It is logical.
That's not true. When you predict that something will happen, and it does, this does not mean that the thing is deterministic. This conclusion would require a further premise which states that something can only be predicted if it's deterministic.
Quoting Harry Hindu
Neither is this true. Minds can predict things which are not deterministic by many different means: by chance, by some system of statistics and probabilities, or through vagueness in terms. I can predict the outcome of a coin toss. If I am right, I've successfully made the prediction. I can also predict that if I flip the coin 100 times, half will be heads and half tails. If the score is 51 to 49, I can employ vagueness to claim that it's close enough to count as half and half, therefore my prediction was correct. For a prediction to be correct, it is not required that the thing predicted is deterministic, nor that the thing follows any logical pattern; it only requires a successful strategy by the predictor.
No it doesn't. It requires a definition of determinism that implies prediction-making.
https://www.merriam-webster.com/dictionary/determinism
Determinism: a theory or doctrine that acts of the will, occurrences in nature, or social or psychological phenomena are causally determined by preceding events or natural laws.
Predictions can only be made if occurrences that we observe are consistently determined by prior causes. If they aren't then we can't make predictions. Because they are consistent, we can make predictions both forward and backwards in time. We can predict the causes of some occurrence or predict the occurrence of some causes. Because the causal relationship is consistent, we can predict the cause or the occurrence. If it weren't we could never establish any kind of reliable predictions for very long. Our knowledge would be even less reliable than it is now - to the point of being useless.
Quoting Metaphysician Undercover
Chance and probabilities are ideas in the mind that relate to our lack of knowledge of some system. When we use these terms, we are emphasizing that we don't fully understand the causal relationships, or that the causal relationships are too complex, or there is too much information involved for our minds to make predictions about. This is one purpose that we have given computers - create simulations with massive amounts of information of causal relationships so that we may better predict the behavior of hurricanes.
Right, the faulty definition of "deterministic" which you added is the required premise. It's a false premise though, because it's not an acceptable definition of "deterministic".
Quoting Harry Hindu
This is not true though, as I explained, a prediction could be made randomly and be correct by chance. Or, a prediction could be made using many other strategies, some of which I described, without the need for determinism. That the actions of a human being may sometimes be correctly predicted does not prove determinism, nor disprove free will, which would be the case if prediction could only be made when actions are predetermined.
I think you are assuming a "prediction" with absolute infallibility, no chance of failure. That might require a deterministic system, but human prediction is unable to attain such perfection. So, determinism is not required for a correct prediction.