Physics and computability.
I've been pondering for a while how to verify the truth or falsity (computability) of the grand questions that have intrigued so many great minds. Take, for example, the claim made by many that the universe is deterministic in nature, or, as Einstein so eloquently put it, 'God does not play dice with the universe.'
Now, how does one overcome the inherent uncertainty that is manifest in the scientific method? As it goes, we start out with observations, then proceed to ask questions about the causes and effects of such phenomena; then we formulate a hypothesis, test it, refine it, and eventually it becomes a theory. But what about truth? Can we ever say that a theory is objectively true? I believe that it is possible to assert the truth or falsity of scientific theories.
This lack of certainty or soundness in the scientific method manifests itself in the many interpretations of theories that we take for granted as true. Take quantum theory, for example: irrespective of which interpretation of quantum theory is right or wrong (true or false), we know that quantum mechanics is apparently true. The laws of nature are absolute, intelligible, and unchanging. I don't think many will doubt the validity of the preceding statement.
Proceeding further, to my mind the only way for a scientific theory (in this case physics) to be logically sound is for it to be replicable, or rather computable. David Deutsch has proposed just such a conceptual principle: that every physical law is computable. This is called the Church-Turing-Deutsch Principle. If a law is computable, then it is true or false (depending on the circumstances) and nothing else.
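To make the notion concrete, here is a minimal sketch of what 'a physical law is computable' means in practice: Hooke's law for a harmonic oscillator rendered as a finite computation. The function name simulate_oscillator, the semi-implicit Euler scheme, and the unit-period parameters are my own illustrative choices, not anything from Deutsch.

```python
import math

def simulate_oscillator(x0, v0, omega, dt, steps):
    """Integrate x'' = -omega**2 * x (Hooke's law) by semi-implicit Euler:
    a physical law rendered as a terminating computation."""
    x, v = x0, v0
    for _ in range(steps):
        v -= omega ** 2 * x * dt  # velocity update from the force law
        x += v * dt               # position update from the velocity
    return x, v

# With period 1 (omega = 2*pi), one simulated period returns the
# oscillator close to its starting state (1.0, 0.0).
omega = 2 * math.pi
x, v = simulate_oscillator(1.0, 0.0, omega, dt=1e-4, steps=10_000)
```

The point of the sketch is only that the law's consequences can be cranked out mechanically, step by step, which is the minimal sense of "computable" at stake here.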
Now consider the flip side. If there are physical laws that can never be known to be true or false (computable), which is no different from saying that they are undecidable or uncomputable, then we can never really have a theory that describes the entirety of nature, a theory of everything. This may unfortunately be the case, as Gödel's Incompleteness Theorems (and, relatedly, the Church-Turing thesis) demonstrate that there are propositions (think physical laws) that can be known to be neither true nor false (computable), and that a system cannot prove its own consistency from within (the universe?).
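The asymmetry between "known to halt" and "never known" can be sketched in a few lines. Halting is only semi-decidable: running a process confirms halting when it happens, but no step budget, however large, can confirm non-halting in general. The Collatz iteration below is just an illustrative stand-in for an arbitrary process; the function name and budget are my own.

```python
def check_halting(n, max_steps=10_000):
    """Run the Collatz iteration from n for at most max_steps steps.
    Returns 'halts' if it reaches 1 within the budget, else 'unknown'.
    No finite budget can ever upgrade 'unknown' to 'runs forever':
    that one-sidedness is the essence of semi-decidability."""
    steps = 0
    while n != 1 and steps < max_steps:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return "halts" if n == 1 else "unknown"

verdict_big = check_halting(27)        # 27 reaches 1 well within the budget
verdict_small = check_halting(27, 10)  # budget too small to tell
```

Fittingly, whether the Collatz iteration halts for every starting n is itself an open problem.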
Interested in any thoughts on the matter as I feel like I'm running in circles.
Comments (43)
This is problematical, because there are very few people who claim to understand what this 'principle' means; and then, of course, if you don't understand it, then you can't argue with them, or about the question, because you don't 'get it'. As soon as you introduce Deutsch and Turing, you're in a very strange no-man's-land somewhere between Schrödinger's cat and Alice in Wonderland. That's why you're running in circles.
Metaphysics is another matter. For instance, even if it could be demonstrated according to some principle or other that every word Tolstoy wrote had a computable micro-physical cause, would you learn more about life by reading David Deutsch, who invented the principle? Or Tolstoy?
Natch, my reply is Tolstoy. There is a way of knowing which is scientific, but there are other ways, at present and for the foreseeable future irreducible to computation, which are just as, if not more, important: ethical, artistic, political, spiritual. Personal; emotional.
I am here channelling the absent spirit of Landru, a former forumite.
You will doubtless find Deutsch and his followers rather adamant in their advocacy of the Principle. To be frank that makes me suspicious: their rhetoric seems too sure of itself and unable to imagine disagreement.
In order to answer such fascinating questions as 'Is the universe deterministic?', one need only compute said physical laws as per the Church-Turing-Deutsch Principle; by such a method of replicating the laws of nature inside a computer, the truth or falsehood of such statements can be asserted.
If not, then how else to determine the validity of such statements?
I think mentioning 'emergent phenomena' is apt here. Take Escher's paintings for example... These are properties of a system that are at the same time dependent and independent of the system itself. I guess you can call them 'language games' without logical hinges or bedrock beliefs...
You need to speak about the how of computability before you ask questions about the scope of it.
Here's where your problem lies. There is a distinct difference between the "truth" concerning the mathematics of prediction, and the "truth" concerning the description of the predicted event - the true interpretation. So the mathematics of prediction may give us a true "law" which is accurate for prediction, but it doesn't give us the whole truth about the event. It doesn't give us why the law holds, and therefore it doesn't give us the complete truth about the event. This can only be provided by the appropriate interpretation.
For example, human beings could map for years, the exact position and time, when and where, the sun rises on the horizon, every morning. From this, they could project, and make predictions, far into the future, the exact place and time that the sun would rise, day after day. This would constitute the mathematical law of prediction. It would be a very true law, because it would predict with great accuracy the position and time of sunrise each day. The problem is, that this predictive law tells us nothing about the real relationship between the earth and the sun, why this event of sunrise occurs as it does, with the changes that it incurs, and why those changes are so predictable. The whole truth is not revealed until this "why" is uncovered, and this is a matter of interpretation. As you can see from the example, the mathematical truth of prediction, constitutes a rather small portion of the overall "whole truth", and it is really just a starting point in uncovering the whole truth.
You can tell if certain physical laws are deterministic just by looking at them. In particular, if they are time-symmetric, then they are deterministic.
General relativity is time-reversible, and therefore deterministic. As I'm sure you are aware, it is a little more than that - it predicts a stationary block-universe in which all instants coexist.
Realist non-collapse quantum mechanics is also time-reversible therefore deterministic. More than that, it predicts a stationary block-multiverse in which all instants and universes coexist.
Have you noticed? No CTD-Principle needed yet, and the question of determinism is already answered!
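The reversibility-implies-determinism point above can be illustrated with a toy sketch, assuming a leapfrog (velocity-Verlet) integrator, which is time-reversible by construction, and a simple harmonic force law. The names and numbers are illustrative only, not anything from general relativity or quantum mechanics.

```python
def leapfrog(x, v, force, dt, steps):
    """Velocity-Verlet (leapfrog) integration of x'' = force(x).
    The update rule is symmetric in time, so flipping the velocity and
    integrating again retraces the trajectory step for step."""
    for _ in range(steps):
        v += 0.5 * dt * force(x)
        x += dt * v
        v += 0.5 * dt * force(x)
    return x, v

force = lambda x: -x                                    # toy harmonic law
x1, v1 = leapfrog(1.0, 0.0, force, dt=0.01, steps=500)  # run forward
xr, vr = leapfrog(x1, -v1, force, dt=0.01, steps=500)   # flip v, run back
# (xr, -vr) recovers the initial state (1.0, 0.0): deterministic both ways
```

The same final state always retraces to the same initial state, which is the sense in which time-symmetric dynamics leaves no room for alternatives.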
Here's another question, "How is knowledge possible?" Or if you prefer, "If reality is comprehensible, then what makes it so?"
The CTD-Principle answers that question.
And, by the way, the CTD-Principle is proved.
Yep. Our physical laws are constructed by excluding change or spontaneity. That aspect of existence is instead the job of measurement. We are left to measure variables like initial conditions and plug them into the "computable" models.
So time symmetry in equations is the way we construct the no-change that a measurement can then meaningfully break.
Scientific modelling is based on this epistemic dichotomy. We work to separate the symmetries and the symmetry-breakings. The laws are the frozen view. The measurements are how the laws are animated.
Importantly, the measurement part of the deal is incomputable. One simply has to ... enter the picture as an "observer".
Quoting tom
Fortunately the fact that the measurement part of the deal is informal and thus incomputable means we can dismiss such metaphysical flights of fancy. We already know the epistemology of the scientific method doesn't support it.
So the measurement issue in physics plays the same role as the axiom-forming issue in Gödel's critique of mathematical formalism. In the end, the whole point about eternal symmetries is that at some stage they did get broken and there was something to actually talk about.
I am no computer science expert, nor do I know all that much about computer architecture; but what I do know about computational entities is that they are real in logical space. They exist as true or false entities in the logical space that computers recreate. See, this forum is a kind of logical space. The internet is a logical space. A calculator is also a logical space that comes in handy. We don't need to know how a TV works to be able to enjoy television, which you might be doing here? Map-territory distinction?
Logical space is a concept I've been mulling over for a while now, which I believe was first proposed by Wittgenstein in his Tractatus. I find it an apt description of how the universe might work: a Hilbert space of N dimensions, with the wavefunction describing it and evolving (deterministically or randomly).
I hope I didn't muddy the waters too much.
I don't understand what you mean by a 'computational entity'. And while 'logical space' has a close analog in the notion of state-space, which can be employed in talking about computability, the two are not the same, and the way you employ the term - especially with respect to 'true and false entities', seems to have nothing to do with the latter. I think you seem to be stuck on this idea of 'computability' without really knowing what it actually is.
I'm confused. You seem to be making an issue about degrees of truth or different categories of truth. In logical space all truths are equal, depending on the relations between different objects.
Yes; but, you're asking me how does this forum exist. I'm just saying that it exists in logical space or if you prefer 'state-space'. It could be that I require further education on the matter; but, it seems to me as if you're asking something akin to 'How does the logical symbol ~(not) exist'?
I can't prove its existence; but, merely show it to you in action.
Truth. Truth makes it possible. This is where I contest with the JTB theory of knowledge. Truth comes before beliefs and justification.
I don't quite see how the CTD principle answers that question, care to enlighten me?
No, I'm not asking that (I have no idea where you even pulled that from?), and no, I don't simply 'prefer' the term state-space, insofar as state space is not 'logical space'. I'm asking you to demonstrate that you know what you're talking about with respect to computability before you start to talk about truth, which you do not seem to be able to do.
Well, I am quite ignorant and uneducated so forgive my lack of knowledge. May I ask if a universal Turing machine is an object that can simulate an artificial 'state-space'?
Why or why not?
Thank you.
This appears to be a problem. Do you recognize the fundamental distinction between a correspondence theory of truth and a coherence theory of truth? In practice, this manifests as two distinct types of truth: true because it corresponds to reality, and true because it is logically valid. It seems like you believe in only one type of truth, coherence. That's fine if all space were logical space, but from this, how do you gain any real knowledge about real objects in real space?
Well, there is just one concrete thing, the world. There is no reason to assume a gap in intelligibility/understanding between the brain and all its functions and objects in the real world. If there were then we wouldn't even be able to know of it due to its inherent nature, otherwise called an unknown unknown...
So, this makes truth uniform with respect to any potential configuration of objects and things in the world.
I should say that I am a firm believer in the PSR (Principle of Sufficient Reason), namely that every cause or effect is intelligible in nature (which kind of automatically makes me a subscriber to Everettian Quantum Mechanics).
Under realist no-collapse quantum mechanics, measurements are no different from any other type of interaction - they are reversible.
In fact, it is this reversibility that will eventually settle the case, as it leads to different predictions. So much for metaphysics.
Answering this question requires a pretty close analysis of how causality is encoded or modeled in state systems, but there are pretty great arguments against answering in the affirmative. Hence Rosen's conclusion regarding such enterprises: "[Computable systems] are indeed infinitely feeble in terms of entailment. As such, they are excessively nongeneric, infinitely atypical of mathematical (inferential) systems at large, let alone “informal” things like natural languages. Any attempt to objectify all of mathematics by imposing some kind of axiom of constructibility, or by invoking Church’s Thesis only serves to estrange one from mathematics itself." (Rosen, Essays on Life Itself). There's a lot of ground to be covered before arriving at this conclusion, but one needs to be at least familiar with the exact manner in which computable systems work before arguing either way. And this before one can even begin to discuss questions of truth and so on.
The Principle of Sufficient Reason is shown to be false* by the Free Will Theorem of Kochen and Conway. This is discussed in the 1st hour of the 6hr series of lectures given by Conway at Princeton:
*Conway's arguments seem to imply that super-determinism would rescue the PSR, but I'm not sure the PSR has any meaning in that context.
So what you are saying is that Chaitin's number is false or what?
There are non-computable numbers, you know. With computation, we can verify some "truths", I would say.
Quoting Question
The Turing machine is a way to show the limitations of computability, an answer to the Entscheidungsproblem. That's something that people seem to forget.
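For those unsure what is actually meant here, a Turing machine is just a finite rule table acting on a tape, and a minimal simulator fits in a dozen lines. The rule encoding, the step budget, and the toy two-state machine below are my own illustrative choices.

```python
def run_tm(rules, tape, state="A", budget=1_000):
    """Minimal Turing machine. rules maps (state, symbol) to
    (symbol_to_write, head_move, next_state). Returns the final tape
    if the machine halts within budget, else None ('don't know')."""
    cells = dict(enumerate(tape))
    pos = 0
    for _ in range(budget):
        if state == "HALT":
            return [cells[i] for i in sorted(cells)]
        write, move, state = rules[(state, cells.get(pos, 0))]
        cells[pos] = write
        pos += move
    return None

# A toy two-state machine that writes two 1s and halts.
rules = {
    ("A", 0): (1, 1, "B"),
    ("B", 0): (1, 1, "HALT"),
}
result = run_tm(rules, [0, 0])
```

Note the budget and the None return: even a universal simulator can only run a machine and watch; it cannot in general decide in advance whether an arbitrary machine halts, which is exactly the negative answer to the Entscheidungsproblem.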
OK, let's start with this premise: there is just one concrete thing, the world. Now, in your last reply to me, you said "all truths are equal, depending on the relations between different objects". The premise that there are different objects contradicts that other premise, that there is just one concrete thing. So according to these two premises, which are contradictory, the idea of truth appears to be a fiction.
Quoting Question
No, it makes a "configuration of objects and things in the world" impossible. There is just one thing, the world.
Most numbers, overwhelmingly most, are non-computable. Most mathematical functions are similarly non-computable. No physics involves these numbers or functions, so that mathematical truth is irrelevant to computing or simulating reality. In reality, only computable numbers and functions matter.
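The counting argument behind "overwhelmingly most" can be made concrete: programs are finite strings over a finite alphabet and can therefore be enumerated (they are countable), while Cantor's diagonal argument shows the reals are uncountable, so almost every real number has no program that computes it. The sketch below just enumerates the "programs"; the function name is mine.

```python
from itertools import count, islice, product

def all_programs(alphabet="01"):
    """Yield every finite string over a finite alphabet, shortest first.
    Each computable number is named by some such string (its program),
    so the computable numbers are countable; the reals are not."""
    for n in count(1):
        for chars in product(alphabet, repeat=n):
            yield "".join(chars)

first_six = list(islice(all_programs(), 6))
```

Since the enumeration reaches every possible program eventually, the computable reals form a countable set, a measure-zero sliver of the continuum.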
Well, at least they are reversible all the way back to the first instant of the Big Bang and any other such event horizon. :)
So you are appealing to an infinite regress and I guess some God eventually provides you with the measurement basis you need to define your universal wavefunction.
This passage from Howard Pattee is a typically lucid analysis of the epistemic issues - and an introduction to how a pan-semiotic metaphysics (one that sees physical existence in terms of matter AND symbol, not matter OR symbol) is the path out of the maze.
So the gist is that the "space" in which maths or computation takes place is physically real - in the sense that material spacetime is a generalised state of constraint in which all action is regulated to a Planckian degree of certainty ... except the kind of action which is informational, symbolic, syntactic, computational, etc.
Physics can describe every material characteristic of a symbol ... and none of its informational ones.
And in being thus an orthogonal kind of space to physical space, information is a proper further dimension of existence. It is part of the fundamental picture in the way quantum mechanics eventually stumbled upon with the irreducible issue of the Heisenberg cut or wavefunction collapse.
So the mistake is to try to resolve the irreducibility of information to physics by insisting "everything is computation", or alternatively, "everything is matter". Instead, the ontic solution is going to have to see both as being formally complementary aspects of existence.
Aristotle already got that by the way with his hylomorphic view of substance.
So nature keeps trying to tell us something. Duality is fundamentally necessary because there is nothing without a symmetry breaking. But then we keep looking dumbly at the fact of a world formed by symmetry breaking and trying to read off "the big symmetry" that therefore must lurk as "the prime mover" at the edge of existence.
The logic of the principle of sufficient reason fools us into believing that only concrete beginnings can have concrete outcomes. Therefore if we see a broken symmetry, then this must point back to an equally physical (or informational) symmetry that got broken.
But that simple habit of thought - so useful in the everyday non-metaphysical sphere of causal reasoning - is what blinds almost all efforts at "interpretation".
The duality of existence will never make sense until your metaphysics includes a third developmental dimension by which beginnings are vague or fundamentally indeterministic.
Clinging onto a belief in the definiteness of beginnings, the concreteness of initial states, is just going to result in the usual infinite regress stories of creating gods or universal wavefunctions. Folk are very good at pushing the question they can't answer as far out of sight as possible.
No models that we use involve these numbers or functions, so that mathematical truth is irrelevant to our present models. Just as non-Euclidean geometry or computer science was irrelevant to people during Antiquity.
I think it would really matter if you understood that the best model of something may be noncomputable. In that case you just use a second-best model ... and understand that there are limitations to what you get. Because otherwise we fall into falsehoods like Laplacian Determinism - the assumption that if we had all the data, knowledge, laws, super-Turing computers or whatever, everything would be computable.
Yep. Rosen did a great job on highlighting the logical impossibility of "computing nature". And the holographic principle now shows that it is materially impractical as well. The speed of light creates absolute event horizon limits so the world itself doesn't even have the physical resources to nail down every event in super-deterministic fashion.
And then there is the flipside to the issue of modelling the world. It is not just that computation can't nail every event down - Rosen's issue of incommensurability. But instead, modelling is based on the principle of nailing down the very least amount of information possible. The aim of modelling is not to simulate the world - re-present it in some veridical sense - but to reduce an "understanding of the world" to its simplest possible collection of habits.
So less is more when it comes to modelling. And that is what the practice of creating physical laws follows. That is why the mechanics of Newton, and all the other varieties of mechanics that came after, feel so pragmatically right. The messy dynamical world can be reduced to the simplicity of timeless universals and particular acts of measurement. You measure how things begin, and then the equations predict how they will unwind forever.
So the current computational bandwagon - the digital physics - is wrong both in believing the entirety of the material world (including its fundamental indeterminism due to holographic limits on decoherence) is actually computable, and wrong also even in presuming this kind of veridical simulation would be "a good thing".
Instead, for modelling minds, it is clear that efficiency arises from the opposite of being "completely consciously aware of every detail of the world." Minds actually arise as an "orthogonal subjective dimension" because of an ability to pretty much detach from such detail. And that detachment is based on the materiality of the world being reduced to a well-worn system of sign or habit.
Then this biosemiotic insight - this efficiency principle - can now be extended to the physical world in general. That is pan-semiotics. Quantum decoherence is an expression of the same thing. The world is seeking its simplest informational states. Classicality is what emerges as its simplest self-model, the one that minimises the messiness of the causal tale it is telling in terms of its own evolving temporal history.
So there is a duality that pervades all these levels of discussion - the matter~symbol distinction - for a reason. There is a single causal mechanism at work that links it all from quantum to mind.
But that mechanism is also irreducibly complex or triadic in involving the third thing of an axis of development - the vague~crisp distinction.
The matter~symbol distinction is pretty easy to understand. But the vague~crisp distinction is far subtler in being "beyond standard logic" as well as "beyond standard physics". :)
To be frank, I don't think any formal system can entail (or simulate) the world in its entirety. There will be inconsistencies within such a model system.
A conceptual example that comes to my mind is that there are mathematical truths that are unanswerable within the universe itself. However, it is not impossible to recreate a simpler version of the apparent world within the system itself (the universe).
However, I have yet to see a logical proof that a formal system can't replicate itself within the system itself. This might just be my feeble understanding of Gödel's Incompleteness Theorems.
This is interesting and I don't dare to contest those findings by such brilliant minds. However, how does one explain that man can do what he wills but he cannot will what he wills?
Or in other words, why is this reality apparent as opposed to being in any other state of affairs?
Generally, yes. If something cannot be proven to be true or false, then is it not undecidable, and thus non-halting?
Quoting ssu
That's just saying that a system is incomplete and cannot prove its own consistency.
No.
The world entails all the facts (logical relations) of objects within it. They are one and the same.
Quoting Metaphysician Undercover
Yes, there is just one thing, the world, which entails all the configurations or state of affairs between objects.
I read a short part of that paper you linked. The author says:
Both your conception of QM and the author's imply the Copenhagen interpretation, or the measurement effect. However, in this thread I have taken Everettian Quantum Mechanics as a starting point. Everettian QM is deterministic from what I have read, and David Deutsch in his Church-Turing-Deutsch principle asserts this as a fact, on the assumption that because the machine is itself physical, and thus obeys the same laws as the world, it can itself replicate all those laws. This seems fundamentally different from saying that an initial condition is needed, whereas reality can be in an infinite number of possible states.
As you may have noticed, Occam's razor flies out the window when confronted with the infinite number of realities in the world. Everettian QM is an elegant solution when confronted with apparent infinities, which supersedes Occam's razor.
That is wrong. There *are* models that we use that are non-computable, in the sense that they do not obey the CTD-Principle.
Quantum mechanics obeys the CTD-Principle, as does QFT and the Standard Model. Any future theory will also obey CTD. In fact CTD is a guide to future theories, as are the conservation laws.
That statement is obviously false. It is perfectly possible to reprogram yourself to will, or desire different things. People do it all the time.
Yes, but you can't program yourself to program yourself to program yourself [...] ad infinitum [...] to program yourself to program yourself...
Well something sure flies out the window once you deny the measurements that might locate you in some actual world rather than leaving you to fluff about in a sea of infinite possibility.
See
http://thephilosophyforum.com/discussion/comment/37346/
I'm having a mental cramp over it.
Just keep chanting "all branches of the wavefunction are equally real" until you are a paid up member of the cult of MWI. That way you will never have to trouble yourself with real metaphysics ever again.
Statistically not so! The reality that is real is the one most probable to occur according to the evolution of the wavefunction. The rest aren't as real!
You've clearly not been paying attention.
Quoting Question
Actually the thing isn't just that (giving a proof by computation isn't universal and adaptable to all models). It actually has a lot of effect on real-world modelling problems.
The simplest example of this is when a measurement affects the outcome and there is no way around it; when the model that should portray reality itself has an effect on the reality it ought to model. In these kinds of situations, objectivity is basically lost. Some models can still be used to some effect - for example we can use probabilities, or make premises so that the dynamic model is stable. Yet these do not answer the question as a normal computation would.
Wittgenstein in his Tractatus gave the simple reason for this:
For example, in economics, self-fulfilling expectations are extremely difficult to model. For speculative bubbles there aren't good models around, and earlier they were simply assumed not to even exist - there wasn't the math to do them. The simple fact is that many real-world economic phenomena are extremely difficult to model. I think the reason is that the best models are simply uncomputable ones. Even in classical physics you get similar problems.
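A toy sketch of why self-fulfilling expectations are awkward to model: the realized outcome depends on what agents expect the outcome to be, so the model contains a reference to itself. The linear market below is purely illustrative (my own numbers); its feedback happens to be stable, but if the feedback coefficient exceeded 1 the same iteration would diverge and nothing would settle.

```python
def market_price(expected):
    """Toy market: the realized price is half fundamentals, half what
    traders expected it to be (purely illustrative numbers)."""
    fundamental = 100.0
    return 0.5 * fundamental + 0.5 * expected

p = 50.0
for _ in range(50):
    p = market_price(p)  # last round's outcome feeds this round's expectation
# p converges to the self-fulfilling fixed point p = 100
```

Even this stable case shows the structural oddity: the "law" being computed is partly constituted by the beliefs of the agents inside the system it describes.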
Well, if you cannot see that it is explicitly contradictory to say that "there is just one thing", and that this one thing is a multitude of configurations of things, "objects", such that you would keep insisting on the same contradiction, then I give up on trying to help you.