Chance: Is It Real?
Setting aside quantum physics, which I don't claim to understand, the world on the human scale (the macroscopic world) is governed by fixed natural laws of matter, energy and force.
Even the roll of a die or the toss of a coin is governed by the laws of mechanics. That is to say, what we commonly interpret as probability is NOT objective probability. Rather, it's a rough interpretation that helps us make decisions and comprehend what is, in mechanical terms, vastly complex.
That means probability isn't real/objective in the sense that the world, in itself, works probabilistically. Rather, probability is an attempt to make sense of what is fundamentally an extremely complex web of causation.
I've heard that, for instance, radioactive decay is objectively a chance thing: which atom will decay is entirely random (so they say). However, this too is an issue of our ignorance: we don't know which atom will decay. There's an extra step between this state of ignorance and labeling radioactive processes as probabilistic, and this step, I think, is unjustified.
Also, even if the quantum world is a chance game, it's proven that the macroscopic world, in which we live, is governed by laws which have been mathematically expressed.
So, is probability an illusion?
That is a BIG aside.
Quoting TheMadFool
With quantum physics aside, exactly which laws are you referring to?
Quoting TheMadFool
Which law would this be?
Quoting TheMadFool
Quantum physics says no: probability is baked in, and quantum behavior has been observed at the molecular level.
Quoting TheMadFool
No, it is not a matter of ignorance. You have to read up on quantum theory.
Quoting TheMadFool
No, there are no laws which govern everything. Is this something you read? You believe? Someone told you?
Quoting TheMadFool
Well according to biology, the mind is an illusion so everything is an illusion including you. So what do you think about that?
To sum up, every single sentence you wrote is questionable. Does this pique your curiosity?
He is restating a 17th century philosophical faith that someday science will discover the Laws of Nature that will enable scientists to predict everything. It is the Materialist Determinist faith. Unfortunately, all hopes were pretty much pulverized 100 years ago, but old ideas die hard and materialist-determinists keep faith alive in all levels of education. The problem is you can't put quantum theory aside, even if it is only 100 years old.
How do materialist-determinists keep hope alive? It's tough. I empathize with their efforts. Everyone needs hope in their lives.
You could then reference it in equations and because it is in a nutshell, science could seriously begin to hunt for it, just like they have for the other forces.
I looked at the unifying equation they have so far, and it is only a series of pointers to other equations. Throw the life force letter into the mix too.
Bohm embedded it in the quantum potential initial variable of his quantum mechanics equation, but I don't believe such an approach has merit. Mathematical equations are symbolic and are not ontological. The only way to understand nature is by direct observation, not symbolic substitution. Symbols freeze things, make them immobile and discontinuous, while life is continuous.
It takes a very brave person to challenge the materialist-determinist priests that rule academia. Bell and Aspect did, and, much to the consternation of materialist-determinists, found that the non-local aspects of Bohm's quantum potential are observable in laboratory experiments at the molecular level and at great distances (the recent Chinese satellite experiment). Pop goes the "quantum doesn't affect the macro" mantra.
But no worry, materialist-determinists hold fast to their faith. Their faith is strong.
It's good they disagree. Let's go get 'em. :)
Not that big. Have you noticed that when you kick a ball it moves with a certain speed, in a certain direction and with a specific spin, all of which can be calculated, and thus predicted, in Newtonian terms?
Yes, it could be (I'm not sure - need proof) that QM is probabilistic but that doesn't matter because the world on a human scale is NOT probabilistic (as I've shown above). So, chance plays no role at the human level of existence.
We could say that the mind is a quantum process but, a BIG but, its effects, so far as we're concerned, are NOT. For instance, a QM process in my mind may be a desire to lift my arm, but the process of lifting my arm is not probabilistic; it's determined by the laws of chemistry and physics (science has proved that).
Quoting Rich
Pick up a science book and you'll see them. Newton's laws, Pascal's law, Boyle's law, etc. etc.
Quoting Rich
I'm saying that probability is deeply linked to ignorance. The process by which we conclude whether a certain process/thing is probabilistic is exclusion.
What I mean is, first, we assume the existence of a general law that governs a process. If we find one, we name the law and express it mathematically. Only if not, are we warranted to think the process/entity is probabilistic. The problem is we can never know if our search has been exhaustive or not. There are just too many possibilities to consider. Hence, the label ''probability'' says more about our ignorance than anything about the process/entity itself.
Quoting Rich
Can you please explain the probabilistic nature of QM to me. Thanks.
Quoting Rich
Gravity doesn't apply to all matter?
Quoting Rich
Do you question your own existence? What is it that engages you in this conversation?
[i]Everything is an illusion[/i] doesn't make sense. An illusion, to exist, must have a real counterpart.
Quoting MikeL
Yes, at least on the scale of human existence.
Approximately. That is all.
Quoting TheMadFool
All approximate. None complete. Certainly none that govern human behavior.
Again, did you research this or are you just repeating something, maybe something often heard on this forum? Materialism-determinism survives based upon faith.
Quoting TheMadFool
This is fine. It is your faith. Quantum physics says the opposite.
Quoting TheMadFool
Ok. This is called quantum physics. There is no other. It says the universe is probabilistic. Now you can override this with a materialistic-deterministic determination (Einstein held to this faith until he died, so you would be in good company), or you can set aside your faith. Change is difficult, but you have a choice. Given that you have a very strong faith in materialism-determinism (as do others), you probably will not change, but you might. Such is the probabilistic nature of the universe. You might change, because you have a choice, but probably not, because of habits.
Quoting TheMadFool
Quantum physics is, quite simply, the Schrödinger (or Bohm) probabilistic equations. They supplanted Newton's equations 90 years ago, though Newton's approximations are still used because they are simpler and good enough for practical purposes. Remember, it only takes one itsy-bitsy, teeny-weeny probabilistic event anywhere in the universe to eliminate determinism. Quantum physics says they are happening everywhere, all the time. Materialist-determinists tend to ignore this.
Quoting TheMadFool
Gravity is everywhere.
Quoting TheMadFool
Of course. Claiming an illusion is a cop out, but that's the best biological sciences has to offer now for human consciousness, which is why I ignore it.
Faith? My statements, hopefully, are based on facts. Also, what are your beliefs based on?
Quoting Rich
That's exactly the problem. Probability is arrived upon through two processes:
1. Approximation of complex deterministic processes e.g. coin tosses and dice rolls
2. By a process of elimination i.e. we first look for deterministic processes and, upon finding none, conclude the phenomena to be probabilistic. And this elimination method can never be exhaustive - there's always the possibility that we've overlooked something. See the flaw?
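Point (1) can be sketched with a toy model. The setup below is loosely inspired by Keller's coin-toss analysis, but the specific threshold and numbers are illustrative assumptions, not real physics: the outcome is fully determined by the launch speed and spin rate, yet changes in spin far too small to control by hand flip it, which is why treating the toss as 50/50 works in practice.

```python
import math

def coin_outcome(omega, v0, g=9.81):
    """Deterministic toy coin toss: the coin leaves the hand heads-up,
    rises and falls for t = 2*v0/g seconds while spinning at omega rad/s.
    The landing face is fixed entirely by omega and v0 (no randomness)."""
    t = 2.0 * v0 / g                      # time of flight
    angle = (omega * t) % (2 * math.pi)   # total rotation, mod one turn
    # heads if the final orientation is within a quarter-turn of heads-up
    return "H" if angle < math.pi / 2 or angle > 3 * math.pi / 2 else "T"

# Small, humanly indistinguishable changes in spin rate flip the outcome:
outcomes = [coin_outcome(200.0 + 1.0 * k, v0=2.0) for k in range(20)]
print("".join(outcomes))  # a mix of H and T, despite full determinism
```

The model is deterministic in the strict sense (same inputs, same face every time), but its sensitivity to initial conditions is what the probabilistic description papers over.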
Let's take an everyday, scientifically and technologically important, well-studied, non-quantum example: the behavior of a gas in a closed container. What are the properties of this gas? Pressure (P), volume (V), mass, temperature (T), number of molecules (N). In terms of the number of molecules, the Ideal Gas Law says PV = NkT, where k is Boltzmann's constant (equivalently PV = nRT, with n the number of moles and R the gas constant, whose value depends on the units of measurement).
The basis of the relationship expressed in the equation is the average behavior of vast numbers of molecules bouncing off each other and the walls of the container following the laws of classical physics. We don't know the actual velocity or direction of any one molecule. Even if it could be done, which quantum mechanics says it can't, a computer would have to have almost infinite capacity to track the molecules of even a small volume of gas. I'm sure someone can do a calculation to show that the program would require more time to run than the universe has existed.
And that's the way the classical world at human scale in general works. It's all average behaviors stitched together by the laws of probability.
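That stitching-together can be seen in a few lines of code. This is a toy sketch, not a gas simulation: molecular speeds are drawn from a unit normal distribution purely for illustration, since in kinetic theory pressure and temperature track the mean of v².

```python
import random

random.seed(0)

def mean_sq_speed(n):
    """Average squared speed of n 'molecules' with random speeds
    (a stand-in for the averaging behind the ideal gas law)."""
    return sum(random.gauss(0, 1) ** 2 for _ in range(n)) / n

# Each molecule's motion is unpredictable, but the average settles
# down as the number of molecules grows: macroscopic regularity.
for n in [10, 1_000, 100_000]:
    print(n, round(mean_sq_speed(n), 3))
```

With 10 "molecules" the average wanders noticeably between runs; with 100,000 it is pinned near its expected value, which is the point: no single trajectory is tracked, yet the aggregate obeys a tight law.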
Thanks for your input. That's what I mean. As far as the human level of existence is concerned, these probabilistic behaviors, which you demonstrated so well, average out deterministically.
So, as we don't exist at an atomic scale, the probabilistic nature of quantum phenomena doesn't affect us. In other words, we live in a deterministic world.
Well, I don't necessarily disagree with what you say, but that doesn't seem like the question you were asking in the OP and the one I thought I was answering.
In a pure Newtonian set of physical rules, this is true.
QM does not say it this way. This is interpretive language, which you are free to use, but such language is not QM.
Quoting Rich
Even in a hard deterministic universe without QM, such predictability is easily disproved. Inability to predict has nothing to do with determinism or the lack of it. You seem pretty bent on a different stance.
Quoting TheMadFool
Wait, what if the law above is a probabilistic one? That means the mathematical model has probability baked in. The interpretation of that model, on the other hand, is open. There are multiple consistent (valid) interpretations, and if it is meaningful to say that one of them is more correct, then that's where the ignorance comes in: there is no way to choose among valid interpretations, so the typical course of action is to choose based on what you want to be true.
If the world on the human scale, as you put it, is governed by these natural laws, and you know what these laws are, why don't you have perfect knowledge of the future? Why is every moment so riddled with uncertainty?
All you are expressing is some sort of faith in the natural sciences. If only the natural sciences could have predicted what would win the St Leger yesterday I'd be a rich man.
I'm disappointed that all the replies, including Rich who can usually be relied on to be splenetically anti-science, appealed to the natural sciences.
Laws don't govern worlds. It's that when humans investigate certain defined worlds, usually in a laboratory or by imagining some parts of the world away to focus better on the problem at hand, we can make a good stab at understanding how they work, and some of the rules we call 'laws'. Bridges mostly don't fall down, I have a magic telephone, and soon elaborate cars will be able to drive themselves. Still, I have no idea what I or my neighbour might do next, even if my neighbour insists that she lives in a deterministic world. This strikes me as uncertainty.
Of course. Old ideas die hard. There is no such thing as precise prediction of anything. And all Laws are just approximations that are practical. I have no idea why they are called Laws. They are just practical equations.
"Hopefully" is the operative word.
Everything I say is a belief subject to change and revision. Everything is constantly in flux.
What are called "laws" of science are not laws or rules at all, they are descriptions of how matter and energy generally behave in certain situations. It's not how the world must be, it's just how it happens to be.
Since there is no evidence of this and all evidence is to the contrary, it would be best if you ended such statements with an Amen, for accuracy's sake.
Humans are amazing when it comes to faith.
Calculations in quantum physics are typically deriving or derived from probabilities of certain events [or their probability distributions]. The probabilities of events are related to a system's wavefunction. In my experience physicists believe that the fundamental objects of quantum mechanics - wavefunctions - are real. The probabilities calculated about quantum systems are not generally thought of as representations of our, that is sentient life perceiving events, lack of knowledge. They are instead thought of as the distinguishing properties of a quantum system. What is real are probability and wave functions, at the quantum level. So reality does have intrinsically random elements because of this.
The presence of quantum behaviour in a physical system depends on the size of its constituents. Molecular and compound-level scales can still exhibit this behaviour. Whether there is a characteristic size below which quantum behaviour is relevant and above which it is irrelevant (quantum -> non-quantum transitions being discrete), or whether quantum effects smoothly decay with respect to some length scale, says nothing about whether reality is 'really probabilistic' or 'really deterministic'. It's common, as done in this thread, to claim that since such a transition occurs, reality is 'really' deterministic, since at most length scales things behave like some largely pre-theoretical folk physics, which usually comes with some intuition of cause and effect and repeatability of events, yielding a (usually also undefined) notion of determinism. This largely unarticulated sense of determinism usually conflicts with the following observations:
If we permit the definition that intrinsic randomness is random behaviour in a system which cannot be removed through increased knowledge of the system, then it is actually the case that there are commonplace observable phenomena that arise from random processes in a natural way on usual human length scales (say from about 1 millimetre to the size of America), usually to do with the aggregate properties of ensembles. For example, more small hospitals than large hospitals will, on a given day, have over 60% of their newborns being boys (law of large numbers), and the settlements with both the highest and the lowest per capita disease rates are typically small ones (law of large numbers again). Stock market prices are also random, as they depend in a non-trivial way on the properties of ensembles and show many characteristic features of randomness. It's also true that Heisenberg-like uncertainty principles occur with any audio signal, or more generally any sequence of records over time, the uncertainty being an intrinsic property of signals and sequences in a similar fashion to the 'fuzziness' of quantum systems.
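The hospital example can be checked with a quick simulation. The hospital sizes (15 vs 150 births a day) and the 60% threshold are my illustrative assumptions; each birth is modelled as an independent 50/50 event.

```python
import random

random.seed(1)

def frac_days_over_60pct_boys(births_per_day, days=10_000):
    """Fraction of simulated days on which more than 60% of the babies
    born in a hospital of the given size are boys."""
    over = 0
    for _ in range(days):
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        if boys / births_per_day > 0.6:
            over += 1
    return over / days

small = frac_days_over_60pct_boys(15)    # small hospital
large = frac_days_over_60pct_boys(150)   # large hospital
print(small, large)  # the small hospital exceeds 60% boys far more often
```

The underlying chance per birth is identical in both hospitals; only the sample size differs, and that alone produces the systematic difference in extreme days.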
**edit: another macro-scale example of random properties is gases.
***edit: another macro scale example is small scale eye movements (jitter/microsaccades) obeying the properties of spatial white noise.
****edit:
The idea that with sufficient information all laws can be derived from quantum mechanics is also quite ridiculous, since such a theory would have to include limiting behaviour to precisely account for the macroscopic irrelevance of quantum mechanical laws, just as Newtonian mechanics is recovered from relativity in the limit c -> infinity, i.e. for systems much slower than the speed of light (the Lorentz factor tending to 1).
You must mean something different when you use the word "precise" than I do. They can determine the distance from a point on the earth to the moon within centimeters. They can predict how much a cesium clock will slow down when it's carried from the first story to the second story in my house within fractions of a nanosecond. They can predict exactly where and when a solar eclipse will take place 1,000 years in the future.
Also, I can predict exactly what you will say if I use the results of science in one of my discussions - "Faith, it's faith. Just faith. You put your faith in the God of science. Faith, faith, faith."
Great. But it is just an approximation. Everything is always an approximation because everything is in continuous flux. The measurement is "old" before it is even made.
Reading anything more into it is an act of faith based upon some hope that everything is fated. Many religions share this point of view and are quite comfortable with it. Materialists-Determinists who view themselves as objective scientists seem to have a very difficult time with their faith.
Or is your stance that lack of predictability implies lack of determinism that you so spit against?
Quoting Rich
Materialist-determinism is a philosophical stance, not a scientific one. Science does not depend on the stance, even if some scientists hold it in faith, as you hold yours, whatever it might be.
Determinists? You know, all those who believe that everything is fated ever since the Big Bang blew its top.
In any case, science is quite clear, there is no determinism though it doesn't stop scientists and educators from perpetuating the belief. Where do you think the OP got the belief from?
I don't consider myself a materialist or determinist. I was when I was young, because I love physics and that's what I thought it said, but I was wrong. Physics doesn't have anything to say about free will.
Quoting noAxioms
Exactly.
What 'clear' evidence have you against the determinism aspect? The fact that we can't predict things (trivial, isolated systems for instance)?
For that matter, what evidence is laid out there FOR determinism, that it would be perpetuated as fact as you say is done?
I guess if everything is unpredictable then there is zero evidence to support determinism. Anyone can believe what they want though. It just becomes a matter of faith, which is what I said. Determinists simply have a very strong belief that everything is determined. This isn't even philosophy. It is flat out religious in nature, which is fine with me. Determinist dogma is that everything is fated. For further reference, please Google Calvinism.
There are no Laws. There are some equations that roughly approximate physical conditions for non-living matter. And as science understands the behavior of matter, it is all probabilistic, which hopefully answers the OP.
Determinism makes no claim of predictability, and lack of predictability is zero evidence against determinism. Is that the 'quite clear' evidence against it?
Quoting Rich
If the evidence were as clear as you claim, it would not be a matter of faith, but rather a matter of holding a belief in a position inconsistent with evidence.
Quoting Rich
It does answer the OP, but the OP wasn't about determinism. I'm saying that your dragging that into the conversation was irrelevant to the subject at hand.
The standard determinism story (and all of it is just a story) is that if everything is known, coupled with the mythical Laws of Nature, then everything can be known. The problem, as the story goes, is that it is just soooooo complicated that we can't predict, but it is all fated. As such, determinists must revert to blind faith in their story, because there is nothing to support it. I don't care what determinists believe. Faith is everywhere in abundance. I just don't know why materialist-determinists have such a problem admitting to their faith.
Now, to answer the OP, the probabilistic universe is baked in to quantum physics. As for determinism, there is zero evidence to support it, so don't try looking for any. Stripped of any evidence whatever, if you are a determinist, you are one based on your faith in the story that everything is fated, just as others have faith in God. One can choose either as they wish. All one needs is very strong beliefs.
Quoting Rich
No proof perhaps, but "zero evidence" is a pathetic claim. There is in fact quite a bit of evidence for both sides of the debate. You seem to have chosen a side and justify that bias by refusing to acknowledge the existence of evidence to the contrary. Cherry-picking is always a good way to bolster your biases, but it sucks as a method for real discovery. Embrace contrary evidence and win past it. Hiding from it only demonstrates that you fear to face it.
Quoting Rich
Haven't stated my position. Not sure if I have one.
You have a stated faith in God. I suppose that usually necessitates a non-deterministic stance, but said stance is then backed by the faith, not by any evidence.
Fine, then give some evidence for determinism. Do so without being in conflict with quantum physics.
@noAxioms
@TheMadFool
@MikeL
@fdrake
@mcdoodle
@T Clark
This discussion was deleted accidentally and has now been restored. Please feel free to continue.
Sorry, can't cover all the bases at one go. Anyway, the point is quantum phenomena manifest, at the human scale, deterministically (you said so and I agree).
So, chance is not an objective aspect of the world at our level of existence. It's just a good way of approximating complex deterministic causation.
You mean, for example, we can derive the laws of motion from QM principles? This is interesting but doesn't really damage my position.
At the macro-scale, the world is regular i.e. follows fixed inviolable laws and we live in that world. So, chance, even if it's a feature of the atomic scale, isn't an objective property of the world we can see, hear and feel.
Quoting noAxioms
I thought for a choice to hold the math has to make sense.
One thing I didn't mention was humans, actually life in general. The mind is, like it or not, a chemical reaction, and I see a place there for QM to manifest its probabilistic character. However, as you already know, minds affect other minds through fixed, definable laws. For instance, if I insult x, x feels hurt, and this is a general law, making reactions predictable; in fact, I think this predictability (which requires general principles or laws) is the basis of our social dynamics. So, again, we see that QM and chance don't manifest probabilistically in the world of humans.
To say that 'it could have happened differently' is very interpretative language and implies that things can be put back into a not-yet-happened state, in violation of the ontological status consistency suggested by Einstein's work.
Of the three more major interpretations of QM, two (hidden variables, no-collapse) are deterministic. But this is evidence only of consistency, not direct evidence for or against determinism.
On a less scientific and more philosophical front: a non-deterministic universe seems in need of creation, meaning it is a byproduct of a larger universe in which it was created, and thus not really a universe at all, but just another object/process among other things. That's a circular inconsistency that is too often dismissed by asserting that it is against the rules to question the logical consistency of the parent universe.
The math makes sense in all of them, else they'd not be valid interpretations, but rather disproved hypotheses.
Without getting into the nature of time, there is nothing in Relativity, either Special or General, that supports determinism. If it did, it would contradict QM. Relativity is just transformation equations between frames of reference and a way to imagine gravity, which may or may not have ontological relevance. Time in Special and General Relativity is defined differently. Nothing there about determinism. Einstein spent his whole life trying to bring determinism into QM and failed. Despite this, we have these kinds of threads. QM reigns and it is probabilistic. Zero determinism.
Stochastic phenomena like regression to the mean are commonplace. It occurs in the relationship of child height to parent height - more generally for many quantitative traits. It also occurs in the performance of individuals in repeated tasks (like practicing something complicated).
When physicists see if an experiment is consistent with a theory, they also assume their data comes from a random process and see if the best fitting model is within error bounds of the (a?) theoretical solution.
edit: * indeterminate in this sense isn't the same as randomness.
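Regression to the mean drops out of any simple additive model. The sketch below is an illustration, not a genetics model: the population mean, the 0.5 "heritability" coefficient and the noise level are all assumptions chosen only to show the effect.

```python
import random

random.seed(2)

MEAN = 175.0  # assumed population mean height in cm (illustrative)

def child_height(parent_height, heritability=0.5, noise_sd=5.0):
    """Toy additive model: a child inherits only part of the parent's
    deviation from the mean, plus independent noise."""
    deviation = parent_height - MEAN
    return MEAN + heritability * deviation + random.gauss(0, noise_sd)

# Parents 15 cm above the mean (190 cm); where do their children land?
children = [child_height(MEAN + 15) for _ in range(2_000)]
avg_child = sum(children) / len(children)
print(round(avg_child, 1))  # above the mean, but well below 190
```

The children of unusually tall parents are still taller than average, just less extreme, and the same model run on short parents regresses upward: nothing pulls toward the mean, the extremes simply aren't fully inherited.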
Well, obviously, if you have written the book that explains how the general law of insults works, please link me to it. (Indeed I'm surprised I haven't already heard of it as it would be a trail-blazing work) Otherwise I'm going to carry on thinking this is all empty assertion. You believe in determinism, therefore you assert that everything in sight is deterministic; but there are no working models for what might happen next. Indeed the people I know best act with remarkable unpredictability: how can this be? Probably it's just an unscientific weakness in me.
As I said, unpredictability is (they say) a feature of the quantum world. I'm willing to accept this for humans and living organisms because the brain is a chemical factory and quantum laws will apply to it. However, two people, x and y, aren't connected chemically are they? Indeed an intent and subsequent action may have quantum origins but the effect is macro-scale (the world we see, hear and feel) and this world is deterministic.
You can't disagree on this. Our whole lives are predicated on predictability. Don't we plan our actions? Planning would be pointless if the world weren't predictable.
People often say this. They can't, however, model it.
Quoting TheMadFool
On the contrary: if the world were predictable, there would be no need to plan.
Quoting mcdoodle
Agree with TMF here, sort of. The world is for the most part predictable, but that does not in any way imply that it is deterministic.
Planning would indeed be pointless if the world were not predictable. No point in planning if there is zero idea of what's to come. Most life forms have evolved to be excellent predictors despite the imperfect nature of any prediction made. I draw breath not because it benefits me now, but because I predict it will benefit me in 15 seconds.
If the world were not predictable, planning would be pointless, I would think.
Probabilistic is not determined. That is why we use different words. There is zero support for determinism. You and the OP are looking for some hidden variables that are deterministic. There aren't any, there never have been, and there never will be, because the mind is the agency that chooses.
However, everyone harbors faith. There is no reason that you shouldn't have your own. One can believe in God or in Determinism; it doesn't matter to me. I never object to faith. It is an aspect of human nature. What I object to is all of the materialists who try to foist their faith on others under the cover of some pseudo-scientific mumbo jumbo. I'm not interested in parlor games of who can out-argue whom linguistically. I am only interested in the nature of nature.
Whether this is true or not, it is at least a feasible justification for a supra-form of determinism, given the apparent randomness of quantum events.
Psychologically, determinism doesn't bother me because I know it is irrelevant to my life...
The universe hasn't changed in terms of physical laws since animal life emerged. This means there are properties or processes that allow the fuzzy quantum soup to produce macroscopic phenomena without animal life. This would be impossible without the interaction of a quantum system with another system also inducing a measurement. This is understood as a map from a probability distribution to an observation from it. Nothing requires consciousness. If it required consciousness conscious life couldn't've arisen. Bizarre quantum vitalism is just as vulnerable to arche-fossils as any idealism.
Not that this tangential matter implies anything about the actuality of chance. Randomness in the territory rather than the map.
We have no evidence of this one way or another. What we do know is that our observations and understanding of nature are in constant flux.
Quoting fdrake
Nothing bizarre at all. The subtle wave movements manifesting as quanta is consciousness at work and making choices, even weaving different substantiality of matter. This was known thousands of years ago, though QM does provide some probabilistic equations that describe the habits of consciousness.
Artists embrace life. Unfortunately scientists, because of their own viewpoint, prefer to deny it. They are really missing out on a lot, but it is their life not mine.
Having the scope of observation include all possible interpretations of it includes interpretations which do not have consciousness as a prerequisite. Insofar as they include the necessity of consciousness for quantum systems taking determinate values, they are consistent with your position. Insofar as they don't, they are not. Ionic and covalent bonding for example is well within the quantum length scale (compounds can diffract) and both are prerequisites for presence of carbon based life. Both require quantitative shifts in wavefunctions and eventually a merger to the wavefunction of the compound. When the electrons are shared they are measured.
Quantum mechanics demonstrably was not known thousands of years ago. There is no evidence that consciousness is necessary for quantisation. Another confusion is that you seem to believe quantisation is generated by observation - it is not. Quantisation describes the propensity for many quantities on the quantum scale (angular momentum, photon emission energy, spin etc) to take values on a discrete rather than continuous spectrum. This occurs before any probability calculation can arise.
You are misinformed on basic properties of quantum systems despite wanting others to 'read up' on them to attain your level of knowledge. Let's try to constrain the discussion.
1) The formation of covalent and ionic bonds requires measurement. This occurs prior to the advent of our consciousness. Therefore our consciousness is not required for wavefunction collapse.
2) There is evidence for the laws of physics applying before the advent of consciousness. Redshift makes accurate statements about the ancient light coming to our planet. A simple calculation demonstrates this. The radius of the observable universe is about 46.5 billion light-years; that radius is a comoving distance, so the oldest photons that "come to our shores", so to speak, have been travelling for about 13.8 billion years, the age of the universe. That is roughly three times the age of the Earth, never mind the time elapsed since the advent of animal life.
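The figures in (2) can be sanity-checked in a couple of lines. The rounded published values below are assumed for illustration; note that the 46.5-billion-light-year radius of the observable universe is a comoving distance, so the light-travel time of the oldest photons is bounded by the age of the universe rather than being radius divided by c.

```python
# Rounded published values, assumed here for illustration:
AGE_OF_UNIVERSE_GYR = 13.8    # ~light-travel time of the oldest photons
AGE_OF_EARTH_GYR = 4.54
COMOVING_RADIUS_GLY = 46.5    # present-day distance to those sources

# The oldest light predates the Earth by roughly a factor of three:
print(AGE_OF_UNIVERSE_GYR / AGE_OF_EARTH_GYR)
# The comoving radius exceeds age-of-universe * c because space
# expanded while the light was in flight:
print(COMOVING_RADIUS_GLY / AGE_OF_UNIVERSE_GYR)
```

Either way the conclusion stands: the light we observe left its sources billions of years before any terrestrial observer existed.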
Really? Well, I guess one believes what one wishes to believe. I don't recall any written observations from 100 million years ago to compare with. Can you remind me when the redshift was first observed and recorded? I must have missed something in my readings. I really prefer concrete evidence when discussing specific scientific descriptive equations.
Quoting fdrake
Interpretations require consciousness.
As for the rest of your post, thanks for sharing your opinions and ideas. I find them very creative.
You are blessed with a very imaginative mind. Good for you!
2) There are no written records of [the fact that X influences Y] before time t.
3) There is no evidence that X influences Y.
For 2 to imply 3 in any way, t would have to be after now; t = now, however, suffices for there to be written records when discussing these issues. So your argument is invalid. Perhaps you would have already found this information contradicting your views if you were looking for articles that have already been written?
I forgot to address 'interpretations require consciousness'. This is in the context of whether consciousness is required for observation/measurement. You said 'observation in the widest sense', so this can include interpretations of observation /measurement which do not require consciousness. It is a moot point that communicating these ideas requires us both to be conscious human beings with a common language. Completely irrelevant to whether observation/measurement requires consciousness.
I have addressed everything you've posted in response to me. You have consistently ignored any evidence or arguments I have presented against your position. You didn't even engage when I gave two linked examples to engage my arguments through (red shift and the age of the light it describes). This elusive behaviour is paired with your propensity to respond to small sub-phrases and offer single-line refutations with little to no reasoning in them.
I am surprised you don't take your position seriously enough to address major flaws in it.
Your attempt to dismiss what I've said with generalised statements about your world view has not gone unnoticed, nor has your similar elusive, goal-post moving behaviour with other posters in the thread. This leads me to believe that you are playing a game to preserve your worldview after your arguments have taken a critique you don't know how to address.
I would be happy if you proved me wrong and we had a discussion about the red-shift behaviour I described and what it implies for your position.
As I said you are blessed with an extraordinary imagination.
You have provided zero evidence (not surprising) but loads of opinions which I already thanked you for. I would suggest you continue your parlor game with someone who enjoys it.
Evidence that there is a world before animal life which obeys the same laws of physics - red-shift. Evidence that consciousness is not necessary for a quantum system to attain constrained states - ionic bonding.
These are spelled out in my posts.
So you've repeatedly asserted. You are free to deny any evidence that is inconvenient for your faith.
I think hidden variable interpretation is bunk, but it would be an example of deterministic physics if it were the case.
Now, if you find some observation made 100 million years ago that was made under the exact same conditions as ones you or someone else have made of the same phenomena, you have some evidence of some kind of immutable law, and then we can talk about something, particularly the conditions of each observation. As it stands now, all we can say is that science and its observations are constantly changing, whatever that might entail.
Until you provide such evidence, you are merely sharing your faith with me, which I appreciate and as a rule, I don't discuss faith because faith is not discussable.
You seem to be unclear on the difference between evidence and proof. Yes, there is no disproof of idealism, but evidence abounds. It is also illogical to debate idealism since you're having a debate with a consciousness that cannot be experienced, and hence doesn't exist.
I have already addressed this argument. It boils down to the following structure:
1) There is no written record that X influences Y before time t.
2) Therefore there is no evidence that X influences Y.
Let's ignore the idea that written record != evidence. Or the validity of the argument. Because regardless this argument has a false premise when t=now for any of the phenomena we've discussed. But let's throw more words at this in the hope that some get through:
Understanding a phenomenon means that you understand it whenever it occurs. For example, there is much the same understanding for ionic bonds irrespective of when the ionic bonds were formed. Red shifting is a theory evinced in human history whose correctness entails that there was a world before consciousness was there to perceive it. So is the theory of ionic bonds, and likewise for nuclear fusion in stars producing the elements on earth, radio-carbon dating...
Let's take radio-carbon dating as another example. The amount of the unstable isotope of carbon, carbon 14, in something can be used to see how long ago the thing stopped exchanging carbon with the atmosphere. The accuracy of this operation requires that radioactive decay has operated from the date the entity stopped exchanging carbon to the present date. There is excellent evidence that this procedure is valid, and it is well accepted among the scientific community as providing age estimates of the appropriate order of magnitude. The accuracy of radiocarbon dating implies that there was a world prior to consciousness and that the laws of radioactive decay have been constant in this time. This is also evidence that however nature operates, it has operated the same way throughout.
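The dating calculation described above can be sketched in a few lines of Python. The half-life of carbon 14 is the standard measured value; the function name and the sample fractions are invented for illustration.

```python
import math

# Sketch of radiocarbon dating: a sample's age follows from how much of its
# original C-14 remains, via exponential decay N(t) = N0 * exp(-lambda * t).
HALF_LIFE_C14 = 5730.0  # years (measured half-life of carbon 14)
DECAY_CONSTANT = math.log(2) / HALF_LIFE_C14  # lambda

def radiocarbon_age(remaining_fraction: float) -> float:
    """Age in years of a sample retaining this fraction of its original C-14
    (relative to the atmosphere at the time carbon exchange stopped)."""
    return -math.log(remaining_fraction) / DECAY_CONSTANT

# A sample retaining half its original C-14 is one half-life (5730 years) old;
# a quarter remaining means two half-lives.
age_half = radiocarbon_age(0.5)
age_quarter = radiocarbon_age(0.25)
```

The point in the post is that this arithmetic is only meaningful if decay has operated the same way over the whole interval being dated.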
More generally, the operation of the entities within these theories predates the theories, and this universality is part of these theories. They have more than sufficient evidence to warrant belief, so there is more than sufficient evidence to believe there is (was?) a world prior to our consciousness whose laws of physics are the same.
I never use logic, because I don't like playing that game. Thus, I have no idea where you got this.
You stated that something called laws (I presume you are referring to some equations) never changes over the history of the universe. Let's take one law, any law. Give me the history and show me all the evidence over any period of time that supports your statement. It's your choice. Go for it.
BTW, you are similarly stating what Sheldrake calls the 4th dogma of science. There is no evidence, just faith. But that's up to you to figure out.
The constancy of nature is a far weaker claim than nature being already described by an immutable set of equations. All that is required are consistent physical phenomena over long time scales. This is exactly what we have on Earth.
Let's take covalent bonding as an example. Whenever you have carbon based life, which is all life (except maybe sulphurous lifeforms at the bottom of the sea) on earth, the molecules and compounds inside it will mostly be covalent bond architectures with some ionic based compounds floating about in them (salts, iron in blood etc). We have good evidence that (almost) all life on Earth is carbon based and WAS carbon based. You can see this from the fossil record. Bones are bones are bones. Plants photosynthesise (chlorophyll compounds).
This is being used to demonstrate A) that there was a world prior to consciousness [fossil record] and B) that there is a theory (carbon based covalent bond architectures) which applies over these timescales.
The significance of A and B to your points is that A') the world doesn't depend on consciousness for its existence and that B') theories which have been evinced during the span of human history (which is all of them) can apply to events before the advent of (human) consciousness.
Aside: your favourite QM also applies seamlessly from 10^-30 secs into the universe's existence and will apply so long as things are sufficiently cool (no unification with gravity).
It's actually very simple. Just give me an observation dating 100 million years ago under the exact same conditions as now, and we can do the comparison together. Restating your belief system really won't get us anywhere, though I understand your faith in it, since it is a fundamental dogma of science; but as evidence it's a big zero on a scale of 1 to 100.
You can apply the same thing to redshift.
trollmode: but you can't know there was a 100 million years ago since there are no written records from that date, thus your entire post is meaningless.
Right. And your posts are simply a regurgitation of a dogma of science. So, you have no evidence. You are just repeating something someone told you and you bought it hook, line, and sinker. You are welcome to your faith. Be good. Enjoyed talking with you.
Why is that what he has to show?
1) So you agree it's the same bone now and whenever the dinosaur was alive.
2) So it's made of the same stuff (up to some ageing, the interior of the bone is fine). (from 1)
3) So the stuff at time t and a different time t' is the same. (from 2)
4) Let t be now. Let t' be 200 million years ago.
5) So it's the same now and 200 million years ago. (from 3,4)
6) We observe a covalent bond in a molecule in the interior of the bone now.
7) That covalent bond was there at t'=200 million years ago. (from 5)
8) We observe a covalent bond that formed at least 200 million years ago (from 7)
9) The covalent bond behaves in the same way at t (now) as it did at t' (200 million years ago).(from 3,5,6)
10) We have observed that there is no change in the covalent bond in the molecule over 200 million years. (from 9)
11) We have observed that nature's operations haven't changed in any consistent way since t'=200 million years ago. (from 10)
12) We have observed evidence for the constancy of nature. (from 11)
For some more details on the unsoundness of your argument - the first premise is not true since there are records of these phenomena. The argument is also invalid since a theory can make statements about events which occurred before the inception of a theory and have those be accurate AND evinced. See red-shift and the radio-carbon dating as worked examples if this point is unclear to you.
1) Obviously not. I'm going to try one more of your statements, but after this first one, things are looking bleak.
2) OK. You're again just laying out the dogma.
3) That does it. Bye, bye. It's been a pleasure. Really.
But Rich rejects logic. Rich rejects evidence. Rich rejects inductive method. Rich rejects the burden of proof. You got it, he rejects it.
That the equations used now give the same results as were observed 5 billion years ago. Preferably the exact same experiment, so we can do a reasonable comparison.
You also seem committed to the non-existence of dinosaurs prior to the advent of human consciousness. If there was no 'observation' without human consciousness there'd be no definite molecular properties, no chemistry... but, alas, we've already been through this.
BTW, do you have any evidence for your faith or did you just pop in to give the faithful some moral support, as any priest might and should. Mustn't let the flock wander from the true faith, right?
Is it finally over? Really, I know all about the dogmas of science and I'm not converting.
This is the whole talk. Also lots of fun.
https://youtu.be/1TerTgDEgUE
What?
dogmatism, noun: the tendency to lay down principles as undeniably true, without consideration of evidence or the opinions of others.
I don't think I have displayed this at all. I even went some of the way to critiquing my position for you by presenting that step by step argument (which I also believed was invalid). Your responses to arguments/sub-arguments are generally one liners with, self admittedly, little to no logic in them; bold assertions.
I probably appear as someone who is part of the oppressive academic consensus to you, but you really know nothing about me. Or about my relationship with @apokrisis (which is largely non-existent, since I think it was a disagreement about long term frequency vs Bayesian probability interpretations 5 years ago).
This is a very convenient defence mechanism, as soon as someone presents a systematic and sustained challenge to your worldview you instantly label them as an out-group threat, rather than engage them in an argument. The latter strategy is really what you expect for a philosophy forum.
As an aside: Rupert Sheldrake's 'science' has been refuted at every turn. His most major contribution is the idea of resonance between morphogenetic fields. Sheldrake (to my knowledge) has never explained how morphic resonance can occur in real life, instead linking it to already observed proclivities and selection processes which have been shown not to need morphic resonance to work. The most damning evidence of its irrelevance is that you can't find references to Sheldrake's concept of resonance even within the parts of developmental biology that still entertained morphogenetic fields.
Sorry, I mean I don't understand what this says:
Quoting Rich
I think you mean education.
Quoting Rich
Yep. You meant education.
Actually you not only displayed it, but you did so repeatedly.
As for Sheldrake, it's a fun watch, especially about his description of science's intellectual phase locking.
Can you give me some examples of how I have been dogmatic?
Sure, at school, there often is too much stress on regurgitation at the expense of teaching critical thinking.
But the question here, on a philosophy forum, is are you able to demonstrate a capacity for critical thinking?
You have your own faith to peddle. Morphogenetic fields, holographic quantum mind projection and other routine New Age babble.
What people are pointing out to you is the big difference between critical faith and uncritical faith. If you accept no method of fixing belief, then you didn't even learn that lesson at school.
I don't actually believe that it's necessary for them to be fixed. If you look at the start of the universe it's predicted that the four fundamental forces of nature join. Having a unification of gravity, the strong force, the weak force and the electromagnetic force is a wildly different reality from our usual gravity + strong + weak + electromagnetic, or gravity + strong + (electroweak), which is sometimes used.
If the laws of nature were not fixed I would be very interested in finding out how they change and whether it's predictable. I would love to see an experiment or theory which, say, found different values for the cosmological constant for different periods of the universe.
However, I think there is good evidence that nature behaves in a roughly constant way over large time scales.
Dogmatism isn't really a function of a person's beliefs, it's a function of HOW they believe them. I try to find evidence for and against my beliefs; this is why I decided to challenge you on this, to see if there was any 'cause for concern' in some substructure of my beliefs. As a reward I got a few interesting thoughts about the non-constancy of nature's laws over long time scales, and a few 'arche-fossil 101' arguments to use against QM vitalists. I learned some stuff from talking to you. This is a very non-dogmatic viewpoint. You also probably assume that I dismissed Sheldrake immediately. I didn't - I did a bunch of reading a few years ago, found his ideas not cogent and not relevant, and found evidence for those conclusions.
So, have your beliefs shifted at all? Have you learned anything? I don't think you have, since I don't think you spent time trying to understand the arguments I made OR why I disagreed with you in the first place. That's dogmatism, try to avoid it. I hope you did.
We have evidence for only over a short period of time, and as Sheldrake relates, even a very short period may be too long.
When we discuss the nature of nature, I prefer evidence over stories. Stories tend to be biased toward pre-ordained goals. If nature is living, then everything is evolving. It's possible. Whether or not it is testable, I don't know, but we do know that scientific theories and experimental results are constantly changing.
Nothing I believe is dogma. My beliefs are always changing because I seek change. I am certainly not going to entertain any dogmas of science.
No, we have evidence over a very long period of time. We, roughly, have a paradigm in physics called quantum mechanics which has been around for just under 100 years. The contents of this theory allow predictions of particle behaviour over long time scales, for example an explanation of the slow beta decay of carbon 14 based on the low probability of observing sufficiently energetic W bosons (which is why radiocarbon dating works). Theory can, and does, make predictions for times before and after the development of the theory. It would be a terrible theory if it couldn't.
100 years is a very, very short period of time, and all quantum mechanics provides is a probabilistic equation as well as an Uncertainty Principle. This doesn't really give us much to roll with.
https://www.researchgate.net/post/Why_cant_Schrodingers_equation_be_used_with_high_accuracy_for_atoms_that_are_different_from_hydrogen_atoms
You probably didn't read most of my posts in the thread fully, but uncertainty principles occur in lots of contexts. Every time you have a sequence of records over time there is a derived quantity which has an uncertainty principle associated with it. Anyway:
Just because there are current avenues for improvement or further research in a field doesn't make all the predictions of a field wrong. Quantum mechanics has been amazingly successful in producing semiconductors, radiocarbon dating techniques... If you've never read Isaac Asimov's 'The Relativity of Wrong' it's an excellent read:
It actually has given us a lot to roll with. Quantum advances have been partially responsible for Moore's Law of computational power growth along with PET scans, radiocarbon dating...
Never said this. It is possible that the cause of inaccuracies may be the evolution of the universe itself.
Quantum physics is fine For All Practical Purposes (FAPP) as are Newton's Laws where applicable. It doesn't mean that they aren't slowly evolving. Given that there is no evidence that the equations are precise and unchanging, it is a leap of faith to say otherwise.
Now really, this can't go much further. You want to believe in unchanging laws of nature; I am not here to convince you otherwise. I am just saying there is no evidence. We all have a choice in what we believe and what we don't.
Claiming that the laws are changing also requires evidence. To be consistent with current physics, all changes must be within experimental error for all experiments (otherwise the change would be noted). The idea the laws change with time thus has no support since all observations which could've supported it are also within a constant laws of nature solution by construction. You would also end up with some crazy things and 'unchanging laws governing time evolution'.
A photon has energy equal to h*f, where h is Planck's constant and f is its frequency. Assume that this law holds before some time t, and that after t we instead have a new constant i. This induces a discontinuous jump. But this could also be modelled as an 'unchanging law of nature' by instead having the energy be equal to h(x)*f, where h is a function of time x, such that h(x) = h for x < t and h(x) = i for x >= t.
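The re-modelling trick above can be sketched in Python. Only `H_OLD` is the real Planck constant; the post-jump value and the jump time are invented purely for illustration.

```python
# Sketch: a "jump" in Planck's constant at time t, rewritten as a single fixed
# law E(f, x) = h(x) * f with a time-dependent parameter h(x).
H_OLD = 6.626e-34   # the real Planck constant h, in J*s
H_NEW = 7.0e-34     # hypothetical post-jump constant 'i' (invented)
T_JUMP = 1.0e9      # hypothetical jump time t (invented units)

def h_of_time(x: float) -> float:
    """The 'changing constant' as a fixed function: h for x < t, i for x >= t."""
    return H_OLD if x < T_JUMP else H_NEW

def photon_energy(frequency: float, x: float) -> float:
    """E = h(x) * f: formally an 'unchanging law' with a time-varying parameter."""
    return h_of_time(x) * frequency
```

The point is that any pattern of change in a "constant" can be absorbed into a fixed law this way, which is why only patternless change would escape the equivalence.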
In order not to be in a scenario equivalent to one with unchanging laws, you would require the changes in natural law to behave in a completely patternless manner, essentially adding a huge-variance noise term to every single physical law. This is already falsified, since, say, Planck's constant can be measured very precisely!
I never said this. What I did say is that science is constantly changing which may be attributable to underlying universal evolution. Something to ruminate over. Really, take some time to think about it.
What other interpretations can there be for your statement:
?
Like most people, you have a horrible understanding of what probability is. Probability is the frequency of possible outcomes. Whether or not that is a result of predetermination or "chance" is irrelevant.
Can you teach me the correct understanding of probability?
To understand what it's doing, we need to look at what a random variable is. A random variable is a mapping from a collection of possible events and rules for combining them (called a sigma algebra) to a set of values it may take. More formally, a random variable is a measurable mapping from a probability space to a set of values it can take. Intuitively, this means that any particular event that could happen for this random variable takes up a definite size in the set of all possible events. A random variable X on a probability space with measure P assigns to each set of values A the probability P(X takes a value in A) = P({outcomes which X maps into A}) - the measure of the preimage of A.
There's nothing in here specifically about 'frequency of possible outcomes', since for continuous* probability spaces the probability of any specific outcome is 0.
Fundamentally, all probability is an evaluation of the size of a set with respect to the size of other sets in the same space.
*specifically non-atomic probability measures
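A minimal Python sketch of the preimage idea above, using a fair die as an assumed probability space; the function names are made up for illustration.

```python
from fractions import Fraction

# A discrete probability space for a fair die, and a random variable X
# (here: the roll mod 2). The probability of X taking a set of values is the
# measure of the preimage of that set.
outcomes = [1, 2, 3, 4, 5, 6]
P = {w: Fraction(1, 6) for w in outcomes}  # point masses of the measure

def X(w: int) -> int:
    return w % 2  # the random variable: 1 for odd rolls, 0 for even

def prob_X_in(values: set) -> Fraction:
    """P(X in values) = P({w : X(w) in values}), the measure of the preimage."""
    return sum(P[w] for w in outcomes if X(w) in values)

# P(X = 1) is the measure of the preimage {1, 3, 5}, namely 1/2.
```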
You have some aspects right and others wrong. The definition of probability is the frequency of possible outcomes of repeated random events (random in this context means that all events have an equal chance of being selected).
Think of probability as a ruler, and we are using it to measure possible outcomes, in the same way you might measure a length of string. Now, there is a true frequency of occurrences for those possible outcomes, which is every bit as objective and real as the length of the string, and, like the length of the string, we lack the ability to measure the true value. We can approximate the length of the string, but our methods and tools are not fine enough to find the true length of the string. The same holds true with probability.
For the given possible outcomes, there is a true frequency of occurrences, which we measure and approximate with a "ruler" we call probability. The fact that those occurrences occur by contingent causation is irrelevant to that measurement, as that is not what we are measuring. We are measuring the frequency of possible outcomes, which does have a true value - even if we can only approximate it.
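The ruler analogy can be sketched as a simulation, assuming a coin whose true probability of heads is known to the simulator but 'measured' only through repeated tosses; all names and numbers here are illustrative.

```python
import random

# Empirical frequency as a 'ruler': repeated tosses approximate the true
# probability, and the approximation sharpens with more repetitions.
random.seed(0)  # fixed seed so the sketch is reproducible

def empirical_frequency(p_true: float, n_tosses: int) -> float:
    """Fraction of heads in n_tosses simulated tosses of a coin with
    true heads-probability p_true."""
    heads = sum(1 for _ in range(n_tosses) if random.random() < p_true)
    return heads / n_tosses

rough = empirical_frequency(0.5, 100)      # a coarse measurement
fine = empirical_frequency(0.5, 100_000)   # a much finer measurement
```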
This sounds more like deviation. Did you quote that from somewhere or are those your own words?
When mathematicians and statisticians speak about probabilities, they are secretly speaking about these. The definitions are consistent with both frequentist (frequency, asymptotic frequency) and Bayesian (subjective probability) philosophical interpretations, and also with more general notions of probability where probability distributions can represent neither, such as in Bayesian shrinkage and frequentist regularisation approaches.
I actually asked you if you wrote it or quoted it.
And what are your qualifications for making such an assessment?
I am a 4th year statistics major, and to me it sounds like you are talking about deviation and not a probability distribution.
I have taken some probability mathematics and still have more to go.
However, probability was defined in intro to stats, and has been echoed through all courses as the frequency of possible outcomes from a repeated random event. I am very clear on that, it is in my text books, it is defined and used that way in academic papers, and it is not a hard concept to grasp.
Also, if you're going to give a reference, then give a direct reference, and not a "it is somewhere in that general direction." Statistics is the second degree I am working on; writing was my first, and I know that that is a poor citation.
I did not intend you to feel intimidated or patronised, so please try to be less aggressive.
My posts aren't intended to be parts of academic papers, that would be boring. But if it helps, here is a description of some of the references I gave:
The first two links, in 'here' and 'are', have the definition within the first two pages. The last is literally a whole course on measure theoretic probability; the definition and intuitive explanations thereof are contained in the section labelled 'Random Variables'. It goes from the intuitive notions you will have already met to the formalistic notions you will see if you take a course in measure theoretic probability towards the end.
Yes, a non measure-theoretic definition of probability is what is used in introductory stats courses and wherever the measure theoretic properties are irrelevant. Probability being 'the frequency of possible outcomes from repeated random events' is the frequentist intuition of probability. It is arguably incompatible with the use of Bayes' Theorem to fit models because of 1) the existence of a prior distribution and 2) the interpretation of population parameters as random variables.
Measure-theoretic probability governs the elementary things in both of these approaches - random variables and probability distributions - and so clearly implies neither.
I treat everyone the same, I equally dislike everyone, you are not special to me, therefore I see no reason to treat you any differently. Also, I am not interested in your excuses for such sloppy citation, if you hold a college degree you should know how to do proper citation regardless of area of study. You should know that referencing a whole course is just dumb, and it is what people typically do when they are evading. If you don't like my attitude, then don't talk with me, it is that simple and my feelings will not be hurt.
I am glad you are a statistician, and to be honest I don't think we are that far apart; I just find some of your word choices confusing. However, it should be pointed out that as a base principle I reject notions of yielding to what may be seen as a greater authority. Which I admit would give you the edge, but I can't surrender my own process of reasoning to someone I know nothing about.
Please clarify what you mean by "random variable." Or don't; if you don't like me enough to respond, I will understand.
Ok. This is a random variable:
Let (O,S,P) be a probability space, where O is a set of outcomes and S a sigma algebra on the set of outcomes; then a random variable X is defined as a measurable function from (O,S,P) to some set of values R. A measurable function is a function X such that the pre-image of every measurable set is measurable. Typically, we have:
O is (some subset of) the real numbers, S is the Borel sigma algebra on O, and R is the real numbers. Measurability here is defined with respect to the sigma algebras S and B(R), where B(R) is the set of Borel sets on R. In this case we have a real probability space and a real valued random variable X.
A probability measure is a measure P such that P(O)=1. It inherits the familiar properties of probability distributions from introductory courses by virtue of being a measure. The missing properties are entailed by something called sigma-additivity: if you take a countable sequence of pairwise disjoint sets C_n in S, then P(Union of C_n over n) = Sum(P(C_n) over n).
If you would like a quick reference to these ideas, see this. Chapter 3 contains the definition of measures and measurable spaces (also including discrete random variables, which I haven't covered here). Chapter 8 itself studies probability measures.
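On a finite outcome set these definitions can be checked directly. A small Python sketch, with invented point masses, verifying that P(O) = 1 and that P is additive over disjoint events:

```python
from fractions import Fraction
from itertools import chain

# A finite probability space: the sigma algebra can be taken as all subsets
# of O, and P is determined by point masses on the outcomes.
O = frozenset({"a", "b", "c"})
point_mass = {"a": Fraction(1, 2), "b": Fraction(1, 3), "c": Fraction(1, 6)}

def P(event: frozenset) -> Fraction:
    """The measure of an event: the sum of the point masses it contains."""
    return sum(point_mass[w] for w in event)

# Additivity over disjoint events: P(union) equals the sum of the parts.
disjoint = [frozenset({"a"}), frozenset({"b", "c"})]
union = frozenset(chain.from_iterable(disjoint))
```

On finite spaces sigma-additivity reduces to this finite additivity, which is why the check is so direct.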
Thanks for the reply, I'll have to examine it later when I have time for a response.
The reason I used an intuitive rather than a mathematical description was because nobody knows what sigma algebras and measurable functions are.
Personally, I just think writing is not your strong suit.
Tell me, how is this:
Quoting fdrake
Not the same thing I said? When I said:
Quoting Jeremiah
The repetition is how we approximate the true value.
I probably won't respond. This thread is about chance, and our discussion is relative to that. I see no reason to start another thread.
Btw, I would very much like to know how you plan to do statistics without repeated random events.
I think the mean is 12, yep 12 just feels right.
Markov Chain Monte Carlo uses a Bayesian interpretation of probability. The methods vary, but they all compute Bayesian quantities (such as 'full conditionals' for a Gibbs Sampler). If you've not read the thread I made in response to you, please do so. You're operating from a position of non-familiarity with a big sub-discipline of statistics.
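As a rough illustration of how an MCMC method produces draws, here is a minimal Metropolis sampler (a simpler relative of the Gibbs sampler mentioned above) targeting a standard normal density as a stand-in for a posterior; the tuning choices are arbitrary.

```python
import math
import random

# Minimal Metropolis sampler: propose a local move, accept it with
# probability min(1, target(proposal) / target(current)).
random.seed(1)  # fixed seed for reproducibility

def log_target(x: float) -> float:
    return -0.5 * x * x  # log-density of N(0, 1), up to an additive constant

def metropolis(n_samples: int, step: float = 1.0) -> list:
    x, chain = 0.0, []
    for _ in range(n_samples):
        proposal = x + random.uniform(-step, step)
        accept_prob = math.exp(min(0.0, log_target(proposal) - log_target(x)))
        if random.random() < accept_prob:
            x = proposal
        chain.append(x)
    return chain

draws = metropolis(50_000)
mean = sum(draws) / len(draws)  # should be near 0, the target's mean
```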
1) Statistics deals with random events.
2) Probability's interpretation.
?
This is a response to @Jeremiah from the 'Chance, Is It Real?' thread.
Pre-amble: I'm going to assume that someone reading this knows roughly what an 'asymptotic argument' is in statistics. I will also gloss over the technical specifics of estimating things in Bayesian statistics, instead trying to suggest their general properties in an intuitive manner. However, it is impossible to discuss the distinction between Bayesian and frequentist inference without some technical detail, so it is unlikely that someone without a basic knowledge of statistics will understand this post fully.
In contemporary statistics, there are two dominant interpretations of probability.
1) That probability is always proportional to the long-term frequency of a specified event.
2) That probability is the quantification of uncertainty about the value of a parameter in a statistical model.
(1) is usually called the 'frequentist interpretation of probability', (2) is usually called the 'Bayesian interpretation of probability', though there are others. Each of these philosophical positions has numerous consequences for how data is analysed. I will begin with a brief history of the two viewpoints.
The frequentist idea of probability can trace its origin to Ronald Fisher, who gained his reputation in part through the analysis of genetics in terms of probability - being a founding father of modern population genetics - and in part through the design and analysis of comparative experiments - developing the analysis of variance (ANOVA) method for their analysis. I will focus on the developments resulting from the latter, eliding technical detail. Bayesian statistics is named after Thomas Bayes, the discoverer of Bayes' Theorem, which arose in analysing games of chance. More technical details are provided later in the post. Suffice to say now that Bayes' Theorem is the driving force behind Bayesian statistics, and it contains a quantity called the prior distribution - whose interpretation is incompatible with frequentist statistics.
The ANOVA is an incredibly commonplace method of analysis in applications today. It allows experimenters to ask questions relating to the variation of a quantitative observation over a set of categorical experimental conditions.
For example, in agricultural field experiments 'Which of these fertilisers is the best?'
The application of fertilisers is termed a 'treatment factor'; say there are 2 fertilisers called 'Melba' and 'Croppa', then the 'treatment factor' has two levels (values it can take), 'Melba' and 'Croppa'. Assume we have one field treated with Melba and one with Croppa. Each field is divided into (say) 10 units, and after the crops are fully grown, the total mass of vegetation in each unit will be recorded. An ANOVA allows us to (try to) answer the question 'Is Croppa better than Melba?'. This is done by assessing the mean of the vegetation mass for each field and comparing these with the observed variation in the masses. Roughly: if the difference in masses for Croppa and Melba (Croppa-Melba) is large compared to how variable the masses are, we can say there is evidence that Croppa is better than Melba.* How?
This is done by means of a hypothesis test. At this point we depart from Fisher's original formulation and move to the more modern developments by Neyman and Pearson (which are now the industry standard). A hypothesis test is a procedure to take a statistic like 'the difference between Croppa and Melba' and assign a probability to it. This probability is obtained by assuming a base experimental condition, called 'the null hypothesis', several 'modelling assumptions', and an asymptotic argument.
In the case of this ANOVA, these are roughly:
A) Modelling assumptions: variation between treatments manifests only as variation in means, and any measurement imprecision is distributed Normally (a bell curve).
B) Null hypothesis: There is no difference in mean yields between Croppa and Melba
C) Asymptotic argument: assume that B is true; then what is the probability of observing the difference in yields seen in the experiment, assuming we have an infinitely large sample or infinitely many repeated samples? We can find this through the Normal distribution (or, more specifically for ANOVAs, a derived F distribution, but this specificity doesn't matter).
The combination of B and C is called a hypothesis test.
The frequentist interpretation of probability is used in C. This is because a probability is assigned to the observed difference by calculating on the basis of 'what if we had an infinite sample size or infinitely many repeated experiments of the same sort?' and the derived distribution for the problem (what defines the randomness in the model).
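To make the frequentist calculation concrete, here is a minimal sketch of a one-way ANOVA for the two-field example, computed by hand in Python. The vegetation masses are invented for illustration, and the final p-value step (looking up the F distribution) is only described, not computed:

```python
import statistics

# Invented vegetation masses (kg) for the 10 units in each field.
melba = [9.8, 10.5, 9.2, 10.1, 10.7, 9.9, 10.3, 9.6, 10.0, 10.4]
croppa = [11.9, 12.4, 11.6, 12.1, 12.8, 11.8, 12.2, 12.0, 12.5, 11.7]

groups = [melba, croppa]
n = sum(len(g) for g in groups)
grand_mean = sum(sum(g) for g in groups) / n

# Between-group sum of squares: variation of the field means around
# the grand mean (the 'difference between Croppa and Melba' signal).
ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
# Within-group sum of squares: variation of units around their own
# field mean (how variable the masses are).
ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g) for g in groups)

df_between = len(groups) - 1   # 2 treatments -> 1
df_within = n - len(groups)    # 20 units - 2 treatments -> 18
f_stat = (ss_between / df_between) / (ss_within / df_within)
print(f"F = {f_stat:.1f}")
```

Under the null hypothesis (B) and the Normality assumption (A), this statistic follows an F(1, 18) distribution; a value far above the 5% critical value (about 4.41) is taken as evidence that Croppa is better than Melba.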
An alternative, Bayesian analysis would allow the same modelling assumptions (A), but would base its conclusions on the following method:
A) the same as before
B) Define what is called a prior distribution on the error variance.
C) Fit the model using Bayes Theorem.
D) Calculate the odds ratio of the statement 'Croppa is better than Melba' to 'Croppa is worse than or equal to Melba' using the derived model.
I will elide the specifics of fitting a model using Bayes Theorem. Instead I will provide a rough sketch of a general procedure for doing so below. It is more technical, but still only a sketch to provide an approximate idea.
Bayes theorem says that for two events A and B and a probability evaluation P:
P(A|B) = P(B|A)P(A) / P(B)
where P(A|B) is the probability that A happens given that B has already happened, the conditional probability of A given B. If we also allow P(B|A) to depend on the data X, we can obtain P(A|B,X), which is called the posterior distribution of A.
For our model, P(B|A) would be the likelihood as obtained in frequentist statistics (the modelling assumptions), in this case a normal likelihood given the parameter A = the noise variance of the difference between the two quantities. And P(A) is a distribution the analyst specifies without reference to the specific values observed in the data, meant to quantify the a priori uncertainty about the noise variance of the difference between Croppa and Melba. P(B) is simply a normalising constant ensuring that P(A|B) is indeed a probability distribution.
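Bayes' Theorem itself can be sanity-checked on a toy numerical example (all the probabilities below are invented for illustration):

```python
# Toy numerical check of Bayes' Theorem with invented probabilities.
p_a = 0.3             # prior P(A)
p_b_given_a = 0.8     # likelihood P(B|A)
p_b_given_not_a = 0.1

# The law of total probability assembles the normalising constant P(B).
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Posterior: P(A|B) = P(B|A) P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b
print(round(p_a_given_b, 4))   # → 0.7742
```

Note how P(B) is built from the other two quantities by the law of total probability, which is exactly its role as a normalising constant.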
Bayesian inference instead replaces assumptions B and C with a prior distribution and likelihood, Bayes' Theorem, and a likelihood ratio test. The prior distribution for the ANOVA is a guesstimate of how variable the measurements are, made without looking at the data (again, this is an approximate idea; there is a huge literature on this). This guess is a probability distribution over all the values that are sensible for the measurement variability, and the whole distribution is called the prior distribution for the measurement variability. It is then combined with the modelling assumptions to produce the 'posterior distribution', which plays the same role in inference as the modelling assumptions and null hypothesis do in the frequentist analysis. This is because the posterior distribution allows you to estimate how likely the hypothesis 'Croppa is better than Melba' is compared to 'Croppa is worse than or equal to Melba'; this comparison is called an odds ratio.
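As a rough sketch of the odds ratio in step D, suppose (with invented numbers) that the posterior for the mean difference 'Croppa minus Melba' has come out Normal, centred at the observed difference with a known standard error. The odds ratio is then a direct calculation from the posterior:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Invented posterior summary: mean difference (Croppa - Melba) and its
# standard error, assuming 10 units per field and known unit noise variance.
diff = 2.05
se = math.sqrt(1.0 / 10 + 1.0 / 10)

# P('Croppa is better than Melba' | data) under a Normal(diff, se^2) posterior.
p_better = 1.0 - norm_cdf((0.0 - diff) / se)
odds = p_better / (1.0 - p_better)
print(f"P(Croppa better) = {p_better:.6f}, odds ratio = {odds:.0f}")
```

The odds ratio compares the posterior probability of 'Croppa is better' directly against its complement, with no appeal to infinitely repeated experiments.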
The take-home message is this: in a frequentist hypothesis test we are trying to infer the unknown fixed value of a population parameter (the difference between the Croppa and Melba means); in Bayesian inference we are trying to infer the posterior distribution of the parameters of interest (the difference between the Croppa and Melba mean weights, and the measurement variability). Furthermore, the assignment of an odds ratio in Bayesian statistics does not have to depend on an asymptotic argument relating the null and alternative hypotheses to the modelling assumptions. Also, it is impossible to specify a prior distribution through frequentist means (it represents neither the long-run frequency of any event nor an observation of one).
Without arguing which is better, this should hopefully clear up (to some degree) my disagreement with @Jeremiah and perhaps provide something interesting to think about for the mathematically inclined.
Traditionally, priors were chosen to be 'conjugate' to their likelihoods because that made analytic computation of posterior distributions possible. In the spirit of 'equally possible things are equally probable', there are families of uninformative prior distributions which are alleged to express a lack of knowledge about parameter values in the likelihood. As examples, you can look at entropy-maximizing priors, the Jeffreys prior, or uniform priors with large support. Alternatively, priors can be built on asymptotic frequentist principles; for this you can look at reference priors.
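Conjugacy is easiest to see in the simplest case: a Beta prior combined with a Binomial likelihood gives a posterior that is again Beta, with an update that is just addition (the counts below are invented):

```python
# Beta(a, b) prior on a success probability; uniform (a = b = 1) in the
# spirit of 'equally possible things are equally probable'.
a_prior, b_prior = 1.0, 1.0
successes, failures = 7, 3   # invented data

# Conjugate update: the Beta posterior's parameters are the prior's
# parameters plus the observed counts.
a_post = a_prior + successes
b_post = b_prior + failures

posterior_mean = a_post / (a_post + b_post)
print(round(posterior_mean, 4))   # → 0.6667
```

This closed-form update is exactly what made conjugate priors attractive before modern computational methods.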
Motivated by the study of random effect models, there is often a need to make the inference more conservative than a model component with an uninformative prior typically allows. If, for example, you have a random effect model for a single factor (with <5 levels), posterior estimates of the variance and precision of the random effect will be unstable [in the sense of huge variance]. This issue has been approached in numerous ways, depending on the type of problem at hand. For example, in spatial statistics, when estimating the correlation function of a Matérn field (a spatial random effect) in addition to other effects, the correlation parameter can be shrunk towards 1. This can be achieved by defining a prior on the scale of an information-theoretic difference (like the Kullback-Leibler divergence or the Hellinger distance). More recently, a family of prior distributions called hypergeometric inverted beta distributions has been proposed for 'top level' variance parameters in random effect models, with the celebrated half-Cauchy prior on the standard deviation being a popular choice for regularization.
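For reference, the Kullback-Leibler divergence used in such prior constructions has a closed form between two univariate Normals; a small sketch (parameter values invented):

```python
import math

def kl_normal(mu0, s0, mu1, s1):
    """Closed-form KL divergence KL(N(mu0, s0^2) || N(mu1, s1^2))."""
    return math.log(s1 / s0) + (s0 ** 2 + (mu0 - mu1) ** 2) / (2 * s1 ** 2) - 0.5

# The divergence is zero for identical distributions and grows as they separate.
print(kl_normal(0, 1, 0, 1))   # → 0.0
print(kl_normal(0, 1, 2, 1))   # → 2.0
```

Defining a prior 'on the scale of' such a divergence means placing prior mass according to how far a parameter value moves the model from a base model, rather than on the raw parameter itself.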
Personally I don't think you are making the connection here; I mean at certain points you are in agreement with me and I don't think you realize that. You think this is statistics, but it is not; this is, for lack of a better word, philosophy. You make these long-winded, jargon-filled proclamations that are completely off the mark. I mean, bringing up Monte Carlo was a face-palm moment. I think your problem is that you are not thinking about this philosophically. It is the age-old debate: does the string have length because that is an objective property of the string, or does it have length because we created the ruler? The same holds for probability: if it is not derived from the real world for application in the real world, is it really a measurement? I am not discarding the conceptual components, but saying that alone they are incomplete.
*** Edit - Auto-spell hijacked one of my words.
If you read the other thread, you would also see I made a comment saying that the differences in probability interpretation occur roughly on the level of parameter estimation and the interpretation of parameters - the mathematical definition of random variables and probability measures has absolutely nothing to say about whether probability 'is really frequentist' or 'is really Bayesian'. I gave you an argument and references to show that the definition of random variables and probability doesn't depend on the fundamental notions you said that it did.
Furthermore, the reason I posted the technical things that I did was to give you some idea of the contemporary research on the topics and the independence of fundamental statistical concepts from philosophical interpretations of probability. If you were not a statistics student I would have responded completely differently [in the intuitive manner you called me to task for].
This is relevant because most topics in the philosophy of statistics have been rendered outdated and out of touch with contemporary methods. The choice of prior for a statistical model doesn't have to be a distribution (look at the Jeffreys prior), i.e. it doesn't even have to be a probability measure. Statistical models don't have to result in a proper distribution without constraints (look at Besag models and other 'intrinsic' ones), and frequentist inference doesn't have to depend solely on the likelihood [look at penalized regression, like the LASSO]. What does it even mean to say 'statistics is about probability and sequences of random events' when contemporary topics don't even NEED a specific parametric model (look at splines in generalized additive models), or even necessarily to output a distribution? How can we think of a 'book of preferences' for an agent, as classical probability/utility arguments go for founding expert choice distributions, when in practice statistical analysis allows the choice of non-distributions, or distributions without expectation or variance, as representative of individuals' preferences about centrality and variability?
You then asked how to choose a probability distribution without using data in some way. I responded by describing various ways people do this in practice in the only context where it occurs - choosing a prior. Of course the statistical model depends on the data; that's how you estimate its parameters.
I have absolutely no interest in rehearsing a dead argument about which interpretation of probability is correct when it has little relevance to the contemporary structure of statistics. I evinced this by giving you a few examples of Bayesian methods being used to analyse frequentist problems [implicit priors] and frequentist asymptotics being used to analyse Bayesian methods.
T_T
There is a difference between choosing and making.
This is rapidly getting very tedious, and now I feel you are just purposely being obtuse.
Ya, I am not buying that act at all. Unless you think a null distribution is divinely supplied by the god of statistics.
No, null distributions are not supplied by God [despite how hypothesis testing is often treated in the applied sciences]; they are a combination of distributional assumptions that usually allow the derivation of the test statistic, and of specific values of that distribution referred to in the hypothesis. I have no idea how this relates to what you're talking about. So I'll ask again.
What actually is your question, and what do you think we disagree on? What is the distinction between choosing and making that you refer to, and how is it relevant to the discussion? Now, how is the null distribution related to the discussion?
Ya, I am not playing. What is the difference between choosing and making? Really? Even a non-statistician knows the difference between the two. You dodged the question and now you are dodging it again.
Here, let me show you how I deal with people who repeatedly engage in such misdirection.
If you just mean the usual way, then we choose from among things that already exist, but things we make don't exist until we make them. Is this what you mean?
Sure. Do you draw some conclusion from this? For instance, do you have an answer to the question you posed:
Quoting Jeremiah
It is the interaction of the two.
Chance/probability, therefore, can't be objective. As you said, it's just an approximation...of complex causation.
Why do you think chance/probability is objective?
Is "Lawfulness" an objectively meaningful concept in a sense that transcends human psychology, practical decision-making and mathematical convention?
Given that a human being can only make a finite sequence of observations, I don't see what either "objectively lawful" or "objectively random" could add to the description of a human being's life experiences taken as a whole.
The only response I can imagine is
"Lawfulness concerns only the predictability of future observations in relation to past observations".
But how can "lawfulness" refer to observations that haven't happened?
Assuming we aren't fortune tellers or psychics whose minds literally peer into the future, this must be another way of saying
"lawfulness describes the similarity of one previously observed pattern to another previously observed pattern that are for practical purposes considered to be comparable via the invention of some convention for human purposes whereby the positions on one pattern are said to be 'equivalent' to positions on the other pattern".
In which case "lawfulness" merely describes how similar a sub-sequence of observations is to another sub-sequence of observations within the super-sequence of observations it is part of, relative to a convention that defines a notion of 'similarity' to allow for sub-sequence comparisons.
None of this leads to any impression that "lawfulness" is in any sense objective or diametrically opposed to the converse convention of 'chance' or non-repeatability.