Is space/vacuum a substance?
There's a lot of contention over applying the term "material" or "substance" to certain metaphysical concepts, and I get that. What defines "stuff"? Almost always there's an exception to any attempted definition. For example, is energy a material or substance, considering its equivalence to matter, or is it an immaterial phenomenon? Could all phenomena be substances? Is time? Is consciousness?
In this discussion I focus on space/vacuum because it seems contradictory in nature to the term "material". Pure space has no atoms in it. It has no matter. So naturally you would assume it is not material in nature. However, the vacuum produces particles spontaneously. It also almost always contains heat. It has content and it has inherent qualities. It dictates the density, form, and physical and chemical qualities of solid materials by its arrangement around atoms in molecules and its influence on the distances between bonds, etc. In essence, space makes up more of each object than the material parts of atoms do. An atom is over 90% space.
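A quick back-of-the-envelope check of that last figure (a rough sketch using order-of-magnitude textbook values of about 1e-15 m for a nuclear radius and 1e-10 m for an atomic radius, not measured data) suggests "over 90%" is, if anything, a huge understatement:

```python
# Rough estimate of how much of an atom's volume is "empty" space.
# The radii below are order-of-magnitude textbook values, not measurements.
nuclear_radius = 1e-15  # metres, typical nuclear scale
atomic_radius = 1e-10   # metres, typical atomic scale

# Volume scales with the cube of the radius, so the occupied fraction is tiny.
occupied_fraction = (nuclear_radius / atomic_radius) ** 3
empty_fraction = 1 - occupied_fraction

print(f"Fraction of atomic volume occupied by the nucleus: {occupied_fraction:.0e}")
print(f"Fraction that is 'empty' space: {empty_fraction}")
```

Since volume goes as the radius cubed, the nucleus occupies only about one part in 10^15 of the atom's volume on these numbers.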
Space can also be bent by gravity, or rather "space-time" can be bent. Space can contract at great speed. It just seems space has too much going on to be considered immaterial, or nothing, or without substance.
What is going on?
Comments (124)
Note the definition of substance is about that "which stands behind everything". So you can consider the reverse proposition. What if "stuff" is the emergent outcome rather than the foundational being?
Could all substances be phenomenal? :grin:
It is useful to flip the assumptions being made. Everything that seems "substantial" to us at our very human-centric scale of observation turns out to be quite "other" to that once we get digging with our scientific tools.
Any solid object is mostly space - as you say. And yet any empty space is also "substantial" in having a temperature, a gravity field - the various other measures that suggest the presence of material properties.
So our standard reified notion of substance has to be treated as suspect and broken down into whatever could cause such solidified existence to emerge.
And hey, Aristotle already did a good job of that with his own investigation into substance or ousia.
His idea that being is emergent from the combo of formal and material cause (as the broad generalisation) holds up pretty well.
For most people, the words "substance" and "substantial" are referring to solid matter (Quanta -- tangible stuff). But Aristotle's Primary Substance was described as more like immaterial Essence (intrinsic quality necessary for existence; Qualia -- mental stuff).
In that case, empty Space (plenum, vacuum) is essential for the existence of Matter. Mathematicians use material metaphors to explain their calculations of spatial topology, even when its "structure" consists of immaterial numerical values. So, yes, Space is a philosophical Substance, even when it contains no matter. :smile:
Space : https://en.wikipedia.org/wiki/Space_(mathematics)
As I wrote in a paper a decade and a half ago, “All is but space, and none of it empty.”
Surely what Aristotle meant by prime matter is one of the most fraught debates in metaphysics. But it can’t be cashed out as mental stuff. Nor even, immaterial essence.
It is more like a fluctuation or the least possible notion of a material action or efficient cause, in my view.
Peircean Firstness or tychism in other words.
Aristotle was uncomfortable with Plato's notion of supernatural Forms, yet he still applied the same term to natural things. And the distinction is moot, since he used the metaphysical term "Soul" to describe the "form" component of all beings. So "Form" is both Matter and Mind/Soul, both Potential and Actual. I try to make a distinction, to avoid confusion, by capitalizing the Platonic ideal "Form" (qualities we conceive), as contrasted with real "forms" (things we perceive).
Platonic Form is equivalent to my concept of Universal Information (EnFormAction) : it's not only a physical substance (Matter, objects, Quanta), but metaphysical essence (Mind; processes, Qualia) --- reason, feelings, consciousness, thought, etc, and Soul/Self. Abstract Information is equivalent to the Mathematical/Logical Ratios/Relationships that we rationally infer in physical objects.
The Form or Design or Structure of a physical thing is Informational. And empty space is essentially Form Potential (probability), until something Actual emerges. For example, a Field in physics is empty space with a percentage potential for Virtual Particles to become Actual particles. The "structure" of the Field is mathematical, not material. This is getting enigmatically esoteric, so I'll stop here. :nerd:
Soul : Soul or psyche (Ancient Greek: ψυχή psykhḗ, from ψύχειν psýkhein, "to breathe") comprises the mental abilities of a living being: reason, character, feeling, consciousness, memory, perception, thinking, etc.
https://en.wikipedia.org/wiki/Soul
Prime Substance : [i]Hylomorphism (or hylemorphism) is a philosophical theory developed by Aristotle, which conceives being (ousia) as a compound of matter and form . . .
Aristotle applies his theory of hylomorphism to living things. He defines a soul as that which makes a living thing alive. Life is a property of living things, just as knowledge and health are. Therefore, a soul is a form—that is, a specifying principle or cause—of a living thing. Furthermore, Aristotle says that a soul is related to its body as form to matter[/i]
https://en.wikipedia.org/wiki/Hylomorphism
Information : So, my reading of cutting-edge science indicates that the quantum description of physical reality (informational, relational, mental) is akin to pre-scientific concepts of the metaphysical spirit realm, which is more Potential than Biological. Hence, on the cosmic scale, Mind seems to be more fundamental than Matter.
http://www.bothandblog.enformationism.info/page12.html
What's "going on" is Potential (Virtual), the statistical possibility of Actual (Real). See my reply to Apokrisis above. :smile:
"Substance" was a term in use before mass was properly identified and defined. It is now no more than philosophers continuing a bad habit.
So "mass" has been properly identified and defined? Or did you mean "massiveness"?
Oh dear. Back into the good old metaphysical debate about "substance" I guess. :lol:
"Mass" is not matter per se, but a measure of a quality or property of Matter (i.e. inertia). Aristotle's "Substance" is also an evaluated quality (what kind of thing) of Matter (physical object). "To measure" (Latin mensura; cf. mens, "mind") is to convert a material thing into a mental or mathematical quality (value). Mass is a measure of Substance only in the sense of Qualia. Philosophers have a "bad habit" of trying to understand the essence of material objects (things). :smile:
Mass : both a property of a physical body and a measure of its resistance to acceleration
https://en.wikipedia.org/wiki/Mass
What is the difference between mass and substance? : https://socratic.org/questions/what-is-the-difference-between-mass-and-substance
Substance : Aristotle analyses substance in terms of form and matter. The form is what kind of thing the object is (identity), and the matter is what it is made of. . . .
Aristotle’s preliminary answer to the question “What is substance?” is that substance is essence,
https://plato.stanford.edu/entries/aristotle-metaphysics/#SubsEsse
Essence : In philosophy, essence is the property or set of properties that make an entity or substance what it fundamentally is, and which it has by necessity, and without which it loses its identity.
https://en.wikipedia.org/wiki/Essence
However I would see Primary Substance as the meat in the hylomorphic sandwich. So it is substance that is indeed enformed - that is, the accidental constrained by the necessary. Or material possibility constrained by formal requirement.
This is the hierarchical order by which some particular dog - let's point to Rover over there - is a "dog" by some kind of higher formal necessity. Dogginess is an abstract idea that is real because it really does serve to limit the scope of material accidents.
Rover might have three legs and still count as a dog - if he lost one by accident. Or he could even be a robot - if for good reason, the fact of being factory-made rather than biology-grown was regarded as incidental. A material particular or material accident.
So the Aristotelean approach to substantial being recognises that the world attains its enformed state of solidity and concrete object-ness because globalised necessities constrain local material possibilities in a sufficiently robust fashion.
And in this scheme, we can thus arrive at...
Quoting Gnomon
...but it is actually a Peircean triad of the accidental, the actual, the necessary. Actuality is emergent as a formal constraint on mere material accident. And so actuality is itself statistical or probabilistic.
Rover is that substantial being, that enformed concrete particular. But also, on closer examination, Rover represents some generalised idea of "dog" that is being honoured in the exception. If Rover loses his leg tomorrow, that counts as an immateriality. Not enough has changed to alter his "essential being".
But keep chipping away, and it will. The leg, for example, no longer fits the bill of being our pet dog.
Quoting Gnomon
Yes, but what was Aristotle - as a naturalist - really meaning? He wasn't a proto-scholastic after all.
Today, he might talk of the genome rather than the soul. We now have better ways to talk about the formal aspect that distinguishes life and mind as distinctive elements of nature.
So for sure, life and mind have a regulating form - one that constrains material accidents in a very strong way. We have informational machinery - cellular membranes, genes, neurons, words - that are the semiotic machinery to encode "schemas" and impose their designs on raw physics.
Rover is Rover because of his genetics, his immune system, his neurally-encoded memories, the fact that he is socially constructed as a pet within my family setting. There is a huge weight of information to enform the Primary Substance that goes by that name, preserving a constant thread of identity as he sheds hair, loses legs, or undergoes other material accidents that the schemas don't count as information - ie: differences that make a difference.
Aristotle discussed the concept of prime matter because it was a common speculation in his time. With his cosmological argument, he ended up proving that it is impossible for such a thing as prime matter, or infinite potential, to be real. Therefore, prime matter is taken by Aristotle to be a fiction.
Read Metaphysics book nine.
Read specifically Bk.9, Ch.8, where he explains how actuality is prior to potency, and how anything eternal must be actual, not potential. This excludes the possibility of prime matter as described earlier as an eternal, and first potential.
[quote=Aristotle, Metaphysics 1050b] Obviously, therefore, the substance or form is actuality. According to this argument, then, it is obvious that actuality is prior in substantial being to potency; and as we have said, one actuality always precedes another in time right back to the actuality of the eternal prime mover.
But actuality is prior in a stricter sense also; for eternal things are prior in substance to perishable things, and no eternal thing exists potentially. The reason is this...[/quote]
We have of course been through this hoop before. Not that I mind a re-run. :razz:
As this Stanford article argues....
...and as I've argued before, there is this confusion between prime matter and primary substance - between the primacy of whatever could constitute the material aspect of hylomorphically-emergent actuality, and primacy that is then the actualised or enformed being which is thus the substantial substrate of further change and development.
So "potential" - quite rightly - has this double sense that needs to be addressed.
There is what I would consider to be prime matter as Peircean firstness or vagueness. Or indeed, the apeiron of Anaximander. This is just the raw possibility of a fluctuation. The least "formed" or "enduring" or "purposeful" notion of a substantial material action or efficient cause. A difference that doesn't make a difference. A mark that is washed away as fast as it is made.
It seems clear that for anything to be, you do need that kind of general ground that is the radically unformed - a kind of chaos without pattern - which can thus become the prime matter which is in fact formed up (enformed) into some kind of substance, something that is a concrete particular.
And then the second sense of potentiality is the potential for the now actualised substance to be the subject of further developmental change. Iron can be forged into swords, flesh into dogs. You just need the formal/final cause that gives the iron or flesh its functional shape.
So when we talk of Being preceding Becoming, we are talking about Primary Substance - the dog that can become dead, the iron that can become sword.
But when we talk of Becoming preceding Being, we mean Anaximander's apeiron or Peirce's tychism - potential as the pure spontaneity of unformed material fluctuation. If we had to describe such a general grounding to Being, it would be a materiality with the least possible substantiality. And even then, we should be imagining it as just naked "becoming", as "prime matter" with any materiality has already crossed that threshold into the realm of actualised Being.
This may sound an esoteric distinction. But it is of course vital to grounding the metaphysics of modern physics. How else can we understand "quantum potential" or "the Big Bang"?
I realise you are not interested in the real answer here. But physics has arrived at its most general way of measuring units of substantial being - https://en.wikipedia.org/wiki/CGh_physics
Serious metaphysics would consist of a discussion over why the Planck scale has this particular triad of physical limits.
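For anyone who wants to see that triad concretely: the Planck units fall straight out of dimensional analysis on c, G and ħ. A minimal sketch (using the standard CODATA values for the constants; nothing here is specific to any library):

```python
import math

# Fundamental constants (SI units, CODATA recommended values)
c = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.054571817e-34  # reduced Planck constant, J s

# The unique combinations of c, G and hbar with dimensions of
# length, time, and mass:
planck_length = math.sqrt(hbar * G / c**3)  # ~1.6e-35 m
planck_time = math.sqrt(hbar * G / c**5)    # ~5.4e-44 s
planck_mass = math.sqrt(hbar * c / G)       # ~2.2e-8 kg

print(f"Planck length: {planck_length:.3e} m")
print(f"Planck time:   {planck_time:.3e} s")
print(f"Planck mass:   {planck_mass:.3e} kg")
```

The point being that these three limits are not independent postulates but the one scale at which the three constants jointly become significant.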
I mean it could be that you are just trolling, or lazy, or something else. But I don't mind if you believe it is this.
You mean that I just exposed your own disingenuous game here. Yes, you can buzz off now.
What else did you think I was picking out here?....
Quoting apokrisis
Banno didn't.
Quoting Banno
I asked you a question that you can’t/won’t answer. I supplied the reference to how modern physics would quantify substance.
Play or go home. Whining is undignified.
Cheers. Didn't think I did.
This is what Aristotle claims to refute with the "cosmological argument", the idea of "emergent actuality". This is why he needed to posit the "prime mover", rather than the infinite potential of Anaximander's apeiron. Later in his Metaphysics, the prime mover is described as a divine thinking.
Quoting apokrisis
So Peircean firstness, and the metaphysics which follows from it, is not at all consistent with Aristotle's metaphysics, because it adopts the very principle which Aristotle claims to have refuted. You really can't just overlook the fact that Aristotle replaced the concept of "prime matter" with "prime mover", as the foundation of his ontology, to pretend that Peircean metaphysics is consistent with Aristotle's. I mean, that's a pretty significant difference.
Quoting apokrisis
This is exactly the notion of pure potential which Aristotle claims to have refuted. I don't see how you cross that threshold into the realm of actualised Being.
Surely what he wanted to refute was an efficient first cause to the Cosmos. And this led him to claim that the actuality of Being must therefore be eternal.
So he got something wrong. We now know our Universe started in a Big Bang. There is a data point to be dealt with.
But his own theory of substance includes finality - a prime mover. And if you put aside the suggestions that “God did it”, then his contrast of immobile celestial spheres and an actuality that is thus driven in circular motion is not too bad a stab at some kind of naturalistic resolution. It is a fact of quantum theory that spin exists as a fundamental degree of freedom because the classical spacetime universe provides the motionless reference frame that makes it so.
Noether’s theorem at work.
I wouldn’t exaggerate the foreshadowing. But Aristotle was heading in the right direction.
Quoting Metaphysician Undercover
An efficient cause is only so if it is efficient. And a fluctuation is defined by being a difference that doesn’t make a difference. Or only the weakest imaginable difference.
So Peirce was making an argument along the lines of modern symmetry breaking and chaos theory thinking. The old butterfly wing effect. If things are poised and ready to tip, then even the least disturbance, any old action no matter how small or undirected, will cause the system to go in its finality-serving direction.
You can’t really attribute some grand causal power to that fluctuation, as any old fluctuation could have done the same trick. And yet a fluctuation was also a necessary ingredient. The first accident. The first difference to make a difference.
Quoting Metaphysician Undercover
He rejected a first efficient cause in that particular argument against Atomism and the claim that the Cosmos could be created rather than eternal.
But here we were talking about prime matter - that is material cause, not efficient cause (even if I agree the two must be related).
So for example we have this in the Stanford article I cited...
What he was actually refuting was Pythagorean idealism (Platonic idealism in modern terms). What he showed was that if ideas preexist human minds, their nature is as potential. The human mind is what gives them actual existence, in the act of "discovery". Then he demonstrated with the cosmological argument, that it is impossible for any potential to be eternal. This effectively refutes Platonic realism which holds the reality of eternal ideas.
The issue with respect to "matter" is that matter is itself just an idea. This might be hard for you to grasp, because "matter" is exactly what we assign to the physical world as what is independent from us, and therefore not an idea. But "matter" is simply how we represent the physical world. It is our idea of temporal continuity, what persists unchanged in time, represented in science as inertia, mass, energy, etc. In reality, what exists independent from us is changing forms, and we represent the aspects which are consistent, constant, as "matter", and this is the basis of the temporal continuity which is called "Being".
When he supposedly refuted idealism, by denying that potential could be eternal, he also refuted materialism, because materialism is actually just a twisted form of idealism, substantiated by the concept of "matter". And the concept of "matter" is not properly supported by empirical observations of the physical world, so it is deficient. The temporal continuity of existence, or Being, ends up being inconsistent with the concept of "matter" demonstrating that "matter" is just an idea, so materialism ends up being an idealism based in an assumed infallibility of the concept "matter".
Denying both materialism and idealism sets up the conditions for an eternal chain of efficient causes, sometimes called infinite regress. Aristotle referred to this as eternal circular motion, and you'll find a similar concept in the Hartle-Hawking no-boundary proposal. This eternal infinite regress is logically repugnant for a number of reasons, best demonstrated by the absurdities produced by the principle of plenitude which dictates that in an infinite amount of time, all possibilities have been actualized. So we move to an alternative first cause, which is "final cause", and the ensuing teleological nature of the universe.
To insist that the universe started with a Big Bang really doesn't get us anywhere. The "Big Bang" only represents the time in which our physical representations are no longer applicable. So the fact that we cannot understand the universe prior to this time called "The Big Bang" only indicates that our concept of "matter", or the modern representation, "energy", is deficient.
Quoting apokrisis
The problem here is that we cannot get an acceptable "naturalistic resolution". This is because our conceptions of temporal continuity, "matter" or "energy", are deficient. Once we get beyond this bias, this prejudice, that our conceptions of matter (or energy) are sufficient to give us a true understanding of temporal existence, Being, we see the need to look to other sources. The prejudice makes us believe in the infallibility of these conceptions. Recognizing this prejudice might lead us into the mysticism of human experience, will, intention, and free choice, as an alternative source for knowledge concerning temporal continuity. And when we properly understand the reality of intention, final cause, we cannot put aside the suggestion "God did it".
Here we have divergent courses of study. You would say that we ought to put aside this notion "God did it", stick with the demonstrably deficient and faulty scientific conceptions of temporal continuity, and ignore the vast wealth of accumulated theological knowledge of this subject. Thus you adhere to that prejudice which assumes a "naturalistic resolution" is possible, regardless of the mounting evidence against this possibility. On the other hand, we can take Aristotle's lead and proceed toward understanding the teleological nature of the universe, discovering the completely different understanding of temporal continuity, Being, which is explored in Neo-Platonism and early Christian theology.
Quoting apokrisis
To say that there is a difference which makes no difference is either blatant contradiction, or to take a subjective perspective. If the former, then you take your ontology into dialectical materialism, allowing all sorts of confusion due to disavowing the law of non-contradiction. This sort of ontology is clear evidence of the deficiency in the concept of "matter". If the latter, then there's no point in speaking of the pre-human universe in such terms. There's a difference, but it doesn't make a difference to you, simply because you'd prefer to ignore this difference because it's evidence against your ontology. What kind of metaphysics is that? To claim that there is a difference which doesn't make a difference, simply because if it did make a difference it would be evidence against your ontology.
Quoting apokrisis
Sure, Aristotle goes to great extent defining "matter" in his Physics, and describing the concept of "prime matter" in the early part of his Metaphysics. This is to elucidate, and give a clear understanding of what "prime matter" is meant to represent, by those who assume such a thing. Then he proceeds in the later part of Metaphysics to refute this idea. That is his technique, to first elucidate the idea, so that we understand what is meant by the terminology, then he proceeds to demonstrate the deficiencies of the idea.
That reply is odd only because we are not used to thinking of tangible Matter in terms of Qualia (properties, fields). Instead, we typically think of matter as Quanta (countable objects). We measure (compare) one thing to another (the kg = a standard massive object), not the thing-in-itself (ding an sich).
Mass is measured indirectly by its effects on our senses, or our measuring tools. A unit of Substance (an object or thing) is measured the same way, by its effects on our senses. Like zero-mass Photons, we can't detect Aristotelian (Soul) Substance directly, so we look for chemical reactions (physical change) to its energy input or output. Energy & Mass are potential, Chemistry is actual. But our metaphysical rational minds can recognize the signs of potential, and estimate its probability. That statistical prediction is a form of mathematical prophecy. :smile:
What is energy made of? : Energy is not made of anything; energy is a term used to describe a trait of matter and non-matter fields.
___Wiki
Potential :
[i]1. having or showing the capacity to become or develop into something in the future.
2. the quantity determining the energy of mass in a gravitational field or of charge in an electric field.[/i]
___Wiki
Entelechy : the realization of potential. . . . the supposed vital principle that guides the development and functioning of an organism or other system or organization.
___Wiki
Quantum Potential : https://www.infoplease.com/math-science/space/universe/theories-of-the-universe-quantum-potential
Stylistically, tacking a list of links on the end of your posts without explanation doesn't work for me. Nor
Qualia as properties and fields? Not following that.
It might be worth my going back to the answer I gave to the OP. The issue is, is space/vacuum a substance? The answer involves working out what a substance is. The trouble is that the notion is not all that settled - see the discussion in this thread between Meta and Apo. The word "substance" is not used much outside of metaphysics, except perhaps in chemistry. Hence, it's not at all clear what the title question is asking. So it's a good metaphysical question, one that folk can fumble around with for as long as they like.
A more fruitful approach might be to look at mass rather than substance. The question becomes does space have mass, and the answer to that is given at least in part by @Benj96 in the OP.
The old Newtonian Absolute, an inert "where"/space, went away. All is field, thought Einstein.
A good guess seems to be that fields themselves, and only they, form the substratum formerly known as space, in which all plays out.
These would not be classical fields but covariant quantum fields which, as Rovelli points out, are where physics is heading as the final unveiling of reality's totality.
What has fallen by the wayside, in order:
1. Newton's separate, absolute space and time as backgrounds/containers, whose only quantity is volume, with particles in space moving through time—is gone. (Replaced by Einstein's spacetime.)
2. Faraday's and Maxwell's fields and particles as coming from spigots of particles—is gone, too. (Replaced by particles manifesting from fields, along with spacetime and other fields becoming covariant.)
3. Classical fields/particles—is gone (since no continuum, due to quantum discreteness). (Replaced by spacetime and quantum fields in quantum mechanics.)
4. Spacetime—is gone (now emergent). (To be replaced by covariant quantum fields in quantum gravity.)
Fields in general are granular, indeterminate, and relational. The particles manifesting exist as themselves only during interactions; they are not persistent things. Their spectrum is discrete, such that electrons can only occupy certain orbitals (from which the periodic table can be constructed). Gravitational field quanta are different; they are not in spacetime but they are spacetime.
There are no infinities. (Einstein's curved spacetime is finite but boundless; the Planck size/granularity/digital limit makes the size scale absolute, plus it eliminates classical, analog continuums of endless divisibility. No more Zeno paradoxes.)
No things are permanent; there's no fundamental lego type of building blocks that can build anything. (Called constitutionalism?)
There is no original space and time. In Quantum Gravity theory, 'time' would amount to a counting of beats but there is no universal clock; 'space' quanta serve as 'space' themselves; no Newton type 'space' is required.
Err no. Mass is not a simple matter of weighing a kilo of stuff when you get beyond schoolboy physics. It is defined in terms of inertia - resistance to acceleration. You need to get out your ruler and stopwatch. And that is while you are still working within a Newtonian metaphysics where reference frames are inertial.
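To make the operational point concrete, here is a minimal sketch of the ruler-and-stopwatch definition: apply a known force, measure the resulting acceleration, and Newton's second law gives the inertial mass as m = F/a. The numbers below are illustrative assumptions, not an actual experiment:

```python
# Inertial mass measured operationally: apply a known force, measure the
# resulting acceleration, then take m = F / a (Newton's second law).
# Illustrative numbers only, not experimental data.
force = 10.0          # newtons, e.g. from a calibrated spring
acceleration = 2.5    # m/s^2, from ruler-and-stopwatch measurement

inertial_mass = force / acceleration
print(f"Inertial mass: {inertial_mass} kg")  # 4.0 kg
```

No weighing involved: the definition appeals only to lengths, times, and a calibrated force, which is the point about inertia being the operative concept.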
It is bad to spread this kind of silliness just because of some ancient Scientistic prejudice against “metaphysics”. It betrays a lack of familiarity with both physics and metaphysics as academic disciplines.
Lol. Why are you trying so hard to avoid talking about inertia? What is that when it is at home in your naive realism paradise?
Do you have a source where it is clear that is the argument?
The Stanford article I cited on the prime matter issue fits with my view that Aristotle never fully worked it out, even if he left us with most of the essential tools.
Quoting Metaphysician Undercover
I agree with the first part but not the second. In my semiotic view, time as a continuous thread of Being is also emergent.
And physics supports this. The Cosmos has a thermal history that locks in its future direction. It is a space of possibilities that becomes increasingly constrained as it expands and hence cools.
So yes, we apply psychological models that see a world divided into matter and void (the spacetime continuum). With Newtonian modelling, this becomes a system of laws and measurements. We have an Aristotelean division into material and formal causes.
But physics has kept marching on until matter and void, space and time, etc, are all unified as aspects of a universal substance - a theory of quantum gravity, if we can pull that off.
And spacetime would have to be emergent in that scheme, just as would mattergy - relativistic mass.
Quoting Metaphysician Undercover
Is this your interpretation? I don’t think he had the mission of refuting idealism as even Plato is not really an idealist - especially by the Timaeus.
Instead I would say the issue was resolving the issue of hylomorphic substance - how substance could be the co-production of formal and material causality. Or as systems science would put it, bottom-up construction in interaction with top-down constraints.
Quoting Metaphysician Undercover
Peirce’s logic of vagueness resolves this initial conditions issue as I have outlined before. I realise you don’t agree.
Quoting Metaphysician Undercover
Yes. That is why I wanted to check how much scholasticism you are projecting onto what Aristotle actually says (as much as we can rely on the curated version passed down by history).
The social definition of inertia; demonstrated here as “meaning is use”. :lol:
The Stanford article doesn't seem to address Bks 9-12. This is where Aristotle "worked it out". You really just need to read his Metaphysics thoroughly from start to finish, to understand. There is another section, around Bk. 3, where it is demonstrated that, in an absolute sense, form is prior to matter. This earlier argument, which is very similar to the principle of sufficient reason, along with the so-called cosmological argument, are key to understanding Aristotle's metaphysics.
The question he asks at this earlier point in the Metaphysics is why a thing, as a thing, exists as the thing which it is, and not something else. The thing, when it comes into existence, must come into existence as the thing it is, or else it would be something other than the thing it is, contrary to the law of identity. Also, a thing is not just random matter; it has a particular form as a particular thing. The only way that the thing could come into existence as the thing it is, and not some random other thing, is that its material existence is preceded by its form. Therefore the form of a thing is necessarily prior to its material existence. And when we extend this to the universe as a whole, being an identifiable thing, this is the principle which supports Neo-Platonic independent Forms, which are temporally prior to the material existence of the universe. And the fact that the form of a thing is prior to its material existence is what supports the human capacity of prediction, and free will to interfere in material continuity.
Quoting apokrisis
We've been through this before, this "view" is contradictory. Emergence is a temporal process, something occurring in time, as time passes, so it is impossible that time is emergent.
Quoting apokrisis
Such a unified theory is not forthcoming, and the reason is that your contradictory view of time expressed above, is as you describe, the one accepted by physics. In other words, physics proceeds with a misconception of time.
Quoting apokrisis
We need to respect the evolving thought of Plato. He started out with a strict adherence to Pythagorean idealism. In this way, he laid out for understanding, the principles, such as the theory of participation, which support it. When this idealism was laid out, he pointed out its problems. Then he proceeded towards a new form of idealism, dependent on "God", which was expressed by the time he wrote Timaeus. Notice the similarity in this style, and what I described of Aristotle. They both lay out the principles, to be well understood, then they proceed toward criticism of them, and onward toward proposing something new. You cannot say that Plato is not idealist at this point, just because he rejected Pythagorean idealism for a new form of idealism, more similar to what is described by Berkeley.
That section of Metaphysics, Bk.9, ch.8-9, is clearly directed against Pythagorean idealism, and what Aristotle referred to as "some Platonists", as a refutation of that type of idealism. But Plato had already exposed the deficiency of the theory of participation, and that type of idealism, in The Republic, Parmenides, and other places I'm sure.
Quoting apokrisis
You are totally neglecting what Aristotle demonstrated, the temporal priority of form in relation to matter. This is what supports final cause, and our capacity to predict, as well as to interfere, through free choice, with what happens in the realm of material existence. Without accepting this principle, that form is prior in time to material existence (and this is the principle which necessitates the proposal of divinity), you cannot claim that your metaphysics is consistent with Aristotle's. You are removing the most important principle of Aristotelian metaphysics, because you desire to stay within the field of the "naturalistic", instead of proceeding toward the supernatural, which is where Aristotle's metaphysics really leads us.
Quoting apokrisis
All you need to do is read Metaphysics Bk.12, specifically ch.7, where Aristotle explains how the first mover moves "the first heaven", which is the eternal circular motion. The first mover moves the first heaven in this way because it is an apprehended "good". Therefore the cause of the heavens is a final cause, an intentional act. I've seen commentators who, like yourself, are dissatisfied with Aristotle's reference to divinity, and propose that this part of the Metaphysics was not actually written by him. However, this is really nonsensical, because it is very consistent with the early part of the Metaphysics, which I referred to above, where Aristotle questions why a thing is the thing which it is, and concludes that the form of a thing is temporally prior to its material existence.
Sorry you don't like my "style", but the links are intended to be the "explanation" of terminology in the post, for those who are interested in more detail. But, if that doesn't do it for you, I have lots of additional explanatory material that is too extensive for a forum post. The links also refer to other thinkers who share some of my unconventional views.
In the context of this thread, it's not important to grasp the equation of Qualia, Properties, & Fields. But for anyone interested, I can go into excruciating detail. It's all based on my personal Enformationism thesis, which envisions a paradigm shift in Science. Once you grok the new perspective, those technical peripheral issues will be easier to understand.
Anyway, I didn't expect my comments to have much impact on this thread. I post these esoteric notes in order to apply my unorthodox worldview to interesting questions about the nature of reality. It's a form of intellectual exercise ---primarily for my own benefit --- not a pedagogical or evangelistic endeavor. A few readers seem to find them of interest. The others just ignore them, or disparage them. :cool:
Grok : understand (something) intuitively or by empathy.
PS___When I referred to "Qualia as properties & fields" it was in the context of the definition of "Energy" in my previous reply to you :
What is energy made of ? : Energy is not made of anything, energy is a term used to describe a trait of matter and non-matter fields.
A "trait" is a property or quality of the thing referenced --- in this case the Vacuum or Plenum we call Space. And physicists today tend to imagine empty Space as a Potential Energy Field. Although the Space is a "non-matter" field, they treat it in their calculations as-if it was a material Substance that can be warped and compressed. Like Space, Energy is a vacuum full of Potential. It consists only of statistical Probability. So Potential Energy is a quality or trait of empty Space, which is imagined as a mathematical Field. Does that help you to follow the gist of my comment? :smile:
As-If : a hypothetical or imaginary concept; a metaphor
Vacuum (zero-point) Energy : https://en.wikipedia.org/wiki/Vacuum_energy
The simplest one: space is expanding and 'nothing' cannot expand.
I agree with that argument too. Which is why I say the matter of origination can only be solved by adding a logic of vagueness to our metaphysical tool kit.
Both formal and material cause have to arise in the same moment. They in fact must emerge as the two aspects of a shared symmetry breaking. And time (as spacetime) also emerges.
Big Bang cosmology describes that. At the Planck scale, matter and spacetime are clearly dual. The smallest coherent distance is also the greatest energy density as being so confined, it can contain only the single shortest frequency wavelength beat. And that is the hottest thing possible. A material event of the highest energy.
So the duality of matter and spacetime is written into the heart of physics by the reciprocal mathematics of the Planck scale. Material cause and formal cause are two halves of the same symmetry. All that happens is that the Cosmos expands and cools from there.
There is then no time before this first moment as time is part of the onset of metric expansion and thermal cooling. There is change with an emergently coherent direction.
The Hartle-Hawking no boundary story is based on that. The Planck scale is a general cutoff as it is the point where energy density and spacetime are indistinguishable. They are a symmetry not yet broken. Vagueness rules until they each establish the mutually reinforcing directions to grow apart from the other.
Energy density can become energy density by virtue of thinning and cooling. Spacetime can become spacetime by expanding and becoming a frame on energy densities. Crisp difference can become possible as not everything needs to be all the same temperature and all the same size any longer.
So the key is to stop asking the usual question of what came first. Hylomorphism starts already as a package deal where both material and formal cause exist, doing their job, as the complementary aspects of a holistic transition from a vague everythingness to a crisp somethingness.
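As a rough numerical sketch of the Planck-scale reciprocity being described (my illustration, not part of the original post), the Planck length, energy, and temperature can be derived from the fundamental constants, and the reciprocal relation E = ħc/l makes the smallest coherent distance also the greatest energy concentration:

```python
import math

# Fundamental constants (SI units, CODATA 2018 values)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light, m/s
k_B  = 1.380649e-23      # Boltzmann constant, J/K

# Planck length: the scale at which spacetime and energy density "fuse"
l_p = math.sqrt(hbar * G / c**3)      # ~1.6e-35 m

# Planck energy, reciprocal to the Planck length: E = hbar * c / l
E_p = hbar * c / l_p                  # ~2.0e9 J

# Planck temperature: the "hottest thing possible" in the post's terms
T_p = E_p / k_B                       # ~1.4e32 K

print(f"Planck length:      {l_p:.3e} m")
print(f"Planck energy:      {E_p:.3e} J")
print(f"Planck temperature: {T_p:.3e} K")
```

The reciprocity is the whole point: since E(l) = ħc/l, halving the confinement length doubles the energy, so confinement to the shortest wavelength implies the highest energy density.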
Quoting Metaphysician Undercover
It is an over interpretation to claim Aristotle was consistent himself. What I say is that he still broke the story apart into the many elements that are still useful today.
And the conceptual tool he really lacked was a notion of vagueness (as opposed to crispness). This leads to problems where one half of a dichotomy must always precede the other half. And of course, that never can be the case if each half is effectively the cause of the other in being its Hegelian “other”.
That's OK. The new paradigm --- that all is Information --- is a radical departure from the conventional scientific worldview of Materialism, and the ancient worldview of Spiritualism. Like Quantum Theory it departs from classical doctrines on reality. It also shifts the meaning of many common terms, such as "space" & "substance". But it is an emerging theory among some prominent scientists.
If my layman's Enformationism thesis is not your style, you may find the technical and academic approach of physicist Paul Davies and the Santa Fe Institute more to your liking. Davies is a very clear writer, and brings you along gradually to this new perspective on reality. But at first, even his upside-down physics may seem opaque. At the Santa Fe think tank, they address fringe subjects, but stick as close as possible to conventional empirical science, while I am free to ad-lib and riff on related philosophical themes. I'm not beholden to any scientific or religious doctrine. :smile:
Information and the Nature of Reality :From Physics To Metaphysics
"Many scientists regard mass and energy as the primary currency of nature. In recent years, however, the concept of information has gained importance."
Ed. by Paul Davies, & Henrik Gregersen
The Matter Myth: Dramatic Discoveries that Challenge Our Understanding of Physical Reality
by Paul Davies & John Gribbin
Space and Time in the Modern Universe
by Paul Davies
From Matter to Life : Information and Causality
Edited by Walker, Davies, and Ellis of Santa Fe Institute
Note : mostly about information in living organisms
Again, you are ignoring the contradiction involved in "time emerges". Time must already be passing for anything to emerge, so time is necessarily prior to emergence. If space emerges, that's something different from time emerging. Time cannot emerge because that implies time is passing prior to time emerging, in order for this emergence to occur.
Your idea that formal and material cause must arise together, is the cause of vagueness in your metaphysics. You are not properly distinguishing the active from the passive. When form (as actual, active) is seen as prior to material existence, vagueness succumbs to absolution. Now we have time passing, with active Forms, prior to any material existence. This is the separate realm of non-spatial existence, described by dualism. So it makes no sense to describe this world of immaterial Forms in the spatial terms of physical matter. This is why quantum physics is so inadequate for understanding first principles of ontology, they speculate into the immaterial realm, using empirically derived principles of physics drawn from observations of material existence. One cannot apply the principles of material existence to the immaterial realm, they are distinct. The immaterial is separate and distinct from the material in the very same way that the future is separate and distinct from the past. It is only by denying this separation that vagueness is allowed to enter your metaphysics.
What we understand, in a mysticism based metaphysics, is that the entire material universe is created anew with each passing moment of time. This is a necessary conclusion derived from the nature of freewill. The freewill has the power to interfere with the continuity of material existence at any moment in time. This indicates that there is no necessary continuity of material existence between past and future. So all material existence must be created anew, from the Forms, at each moment of time. The human will, as final cause has the power to co-determine the material existence which will occur at each moment. The only vagueness is within the lack of understanding which the human mind has.
If time is what is emergent, then it is necessary that nothing be happening before it gets started. The idea of "before" becomes the incoherent claim here.
You presume time to be eternal. Thus there is always a "before". Hence time is proven to be eternal. Your argument is a simple tautology.
A thermal model of time is about the emergence of a global asymmetry - an arrow of time pointed from the now towards the after - the present towards the future. So the past, the before, is a backwards projection. It is imagining the arrow reversed. And reversed to negative infinity.
Yet the reality - according to science - is that time travel (in a backward direction) is unphysical. And the Big Bang was an origin point for a thermal arrow of time.
Yes, we can still ask where the heat to drive that great spatial expansion and thus create an arrow of time, a gradient of change, could have come from. What was "before" that?
But this is no longer a conventional notion of a temporal "before" anymore than it is a conventional notion of "what could have been hotter" than the Planck heat, or "shorter" than the Planck distance, or "slower" than the speed of light.
Every such conventional notion fuses at the Planck scale - the scale of physical unification. The asymmetries are turned back into a single collective symmetry. There is no longer a before, a shorter, a hotter, a slower. All such definite coordinates are lost in the symmetry of a logical vagueness. That to which the principle of non-contradiction (PNC) now fails to apply.
Before the PNC applied, there is a time when it didn't. That is the "before" here. :wink:
Quoting Metaphysician Undercover
Another of the many co-ordinates that are erased if you wind back from their current state of divided asymmetry to recover their initial perfect symmetry. At which point the PNC fails to apply. The logic you want to argue with suddenly runs out of the road it felt it was travelling down.
In the beginning, the active and the passive (along with the stable and the plastic, the necessary and the accidental, the global and the local, etc, etc) were a symmetrical unity. Both halves of the dichotomy had the same scale and so were indistinguishable as being different. The PNC might feel as though it ought to apply, but - being indistinguishable - it can't.
It is only as they grow apart that a proper asymmetric distinction can develop. The passive part of nature is that which is less active. And vice versa. Taken to the limit, you get the passive part of nature as that with the least possible activity. Or the reciprocal relation where passive = 1/active. And vice versa. The active = 1/passive.
Quoting Metaphysician Undercover
That can only mean ... as a religious and unphysical belief.
It is a claim of a theistic model. And a naturalistic model has become the one that has produced all the useful physics here.
Quoting Metaphysician Undercover
Epicycles to explain away a metaphysics that is provenly unphysical. It feels like an explanation being expanded but it is a confusion being compounded.
Right, and "emergent" implies that there is a time prior to, hence a "before" the thing emerges. That's exactly the problem with your claim that time emerges, it necessitate such incoherencies as a time before time, due to its contradictory nature.
Quoting apokrisis
I never presumed time to be eternal. Why would you conclude that? I just argued that there is time prior to material existence. How does that necessitate "time is eternal"? I generally use "eternal" in the theological way, to indicate "outside time", so "eternal time" would be self-contradictory.
Quoting apokrisis
This is why theologians need the term "eternal" to refer to what is outside of time. The scientific community hijacks and restricts the use of "time" to conform to their empirical observations, i.e. they define time in relation to the material world. This leaves absolutely no way of talking about what is prior to the material world, because that creates the apparent contradiction of "before time". So the theologians use "eternal" to refer to outside of time, when "time" becomes defined in this scientistic way.
Quoting apokrisis
This is only the case from your scientistic perspective. If we ditch that scientism, and adopt some properly formulated metaphysical principles, as the theologians do, we can get out of that trap of having to assume that the PNC does not apply.
So for example, when you say "Every such conventional notion fuses at the Planck scale", you are restricting "conventional" to refer only to the understanding of time employed by the scientific community. All other ways of understanding time are excluded by your bias, as unconventional. So when the theologians demonstrate a way of avoiding such violation of the PNC by showing that we need to allow for activity which is outside of time (eternal actuality) when "time" is defined that scientistic way, your bias inclines you to dismiss it as unconventional, and an appeal to the supernatural.
The "Planck scale" only represents the level at which empirical observation becomes impossible. All you need to do is to accept the reality that there is activity which is impossible to observe empirically, to get beyond this self-imposed restriction. It is a self-imposed restriction, because you are restricting your reality (actual existence) to that which can be empirically observed. Once you open your mind to the truth that there is reality (actual existence) beyond that which can be observed empirically, this "Planck scale" restriction can be seen as unwarranted. And, the vagueness caused by that misconception of time can be properly dealt with through the application of logical principles such as the PNC. The inclination, or urge to violate the PNC is derived from the vagueness produced by that misconception of time. By denying the PNC you strip yourself of the capacity to understand what lies beyond what is empirically observable. It's a self-defeating metaphysics which you propose.
Quoting apokrisis
That a model is useful does not necessitate the conclusion that it provides a true representation. Thales predicted a solar eclipse with an untrue model of the planetary motions. That the models produced by physics have encountered problems which incline you to say that the PNC is violated beyond the Planck scale, is clear evidence that they are untrue models, regardless of their predictive capacity. It's self-defeating to simply assert that reality is illogical beyond this level, so simply forget about trying to understand it. How could a reality which is illogical in its base level (the unobservable Planck level) produce a logically ordered upper level? Such a metaphysics is completely incoherent.
Quoting apokrisis
Evidence of your bias. If it's "unphysical", reject it. But all your metaphysics of "perfect symmetry" is equally "unphysical", so you're really just being hypocritical.
Besides, whereas the Information thing (capitalized) is interesting enough to pursue as such of course, to paraphrase Gamez, it's just another sample "all-embracing monstrous metaphysical vision" when taken wholesale.
By "overboard" do you mean he goes beyond current materialist doctrine into speculations on quantum queerness? If so, I agree. And I find it to make a lot of sense, at least as far as Quantum theory can make sense.
Quoting jorndoe
I take it that you don't approve of Scientific Speculation and Metaphysical Philosophy? Davies doesn't ask you to take what he says "wholesale". You are expected to take a scientific analytical approach, up to the point where Reductive analysis bogs down in Holistic metaphysics, such as Quantum Entanglement. QE doesn't "make sense", but it does seem to be a fact of physics. So Davies uses Information theory to peer into the mists of murk beyond classical Newtonian physics. :smile:
Regarding the Information thing (paraphrasing Gamez), by wholesale I meant thorough all-embracing hypostatization, but that wasn't about Davies.
Everyone already knows these pitfalls, but, hey, I'm all for speculation as much as the next person over. (y) (not that it's about me)
Yes, we can consider this a contest between pragmatic naturalism and dogmatic theism if you like. One holds consequences here in the real world. The other not so much.
Hypostatization is the fallacy of Reification : ascribing reality to abstractions. But recent neurological studies are finding that what we humans take for reality is actually a figment of our imagination : an abstraction. Cognitive Psychologist Donald Hoffman has produced a novel theory of perception that sounds a lot like the ancient Buddhist teaching of Maya (illusion). If you are not familiar with that notion, the book review linked below will give you a brief glimpse from a non-Buddhist perspective. But, if you have any interest in cutting-edge Information theory and Consciousness science, I recommend that you read the book for yourself. :cool:
The Case Against Reality : http://bothandblog6.enformationism.info/page21.html
I'd prefer to frame it as the contest between pragmatic naturalism and the quest for truth. Your approach is, who cares if this naturalist metaphysics leads us into contradiction, so long as we adhere to naturalism at all costs. My approach is, if it leads to contradiction there's a problem with it, let's look at other proposals.
It only contradicts some assumptions you take as axiomatic to your theism. The PNC is a case in point. A belief in some Newtonian and non-thermal model of time being another.
It is good that your theism is constrained by the attempt at a self-justifying metaphysics - a rational logical structure. And I agree that conventional scientific metaphysics - being overly reductionist - fails palpably to have this kind of causal closure.
But that is why pragmatism – particularly in the Peircean sense - is the royal route to "truth". It combines the causal closure of the formal metaphysical model with the empirical checks that are needed to be able to say the resulting metaphysical model indeed predicts the world as we can observe or measure it.
Your reaction to Peirce's relaxation of the PNC is telling. He makes the PNC an emergent limit whereas you cling to it as a brute fact. You need it as an input to construct your system. Peirce showed it to be a natural outcome of any kind of systematic development of a "rational cosmos".
Sure, you can have an argument against that. But it has to be better than: "I don't like the challenge it creates for my necessary presumptions".
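For readers wondering what a "relaxed" PNC could even look like formally, one minimal (and admittedly crude, non-Peircean) sketch is strong Kleene three-valued logic: the non-contradiction schema evaluates to true for crisp values, but is itself undetermined for the vague value, so the PNC appears as a limit reached once values become crisp rather than as an axiom:

```python
# Strong Kleene three-valued logic, encoded numerically:
# F = 0.0 (false), U = 0.5 (vague/undetermined), T = 1.0 (true)
F, U, T = 0.0, 0.5, 1.0

def NOT(p):
    return 1.0 - p

def AND(p, q):
    return min(p, q)

def pnc(p):
    """Value of the non-contradiction schema ~(P & ~P) for a given P."""
    return NOT(AND(p, NOT(p)))

# For crisp values the schema holds:
print(pnc(T))  # 1.0
print(pnc(F))  # 1.0

# For the vague value it is itself undetermined -- the PNC is not a
# tautology of the logic, only an emergent feature of crispness:
print(pnc(U))  # 0.5
```

This is only a toy model, but it shows the structural point at issue: whether non-contradiction is an input to the system or an outcome of it.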
Hoffman is just re-casting age-old idealism (mental monism) in the image of a couple odd theses of his.
I suppose, if you really think this holds water, then you could put together a concise and short argument in a new opening post. (y)
Keep in mind, if Hoffman wants to raise this stuff to science, then the requisite falsifiability criteria and such applies.
(Can't promise ahead that I can participate much personally, but it seems a relevant topic for the forum.)
Right, that's because without the PNC, there is no such thing as truth. That's why I frame it as a contest between pragmatic naturalism and the quest for truth.
Quoting apokrisis
Perhaps you can explain to me how we can know the truth about the nature of the universe, when you claim that its fundamentals violate the PNC.
Quoting apokrisis
Yes, you told me this already, to avoid the problem that "time is emergent" is contradictory, you simply assume that existence according to the PNC emerged at a later time than time emerged, therefore the contradiction of "time is emergent" is allowed, because this happened at a time when the PNC was not applicable.
Quoting apokrisis
The only necessary presumption I've expressed is the PNC. I adhere to it because I believe that without it, knowing the truth is impossible. In a quest for truth, one must adhere to some criteria for judgement. If you can show me how knowing the truth is possible when the PNC is violated, then I might give up that necessary presumption.
So, the PNC isn't "the" necessary presumption. A presumption is necessary as the criterion for judgement, and the PNC is the one which seems most fitting. But I'm open to other proposals. Do you have what you believe is a better criterion for judgement?
The PNC is not about "truth". It is about "validity". Or indeed, merely about "computability".
So let's take the deflationary tack here.
The PNC could apply to a world of definite particulars - a mechanical realm of being. It just is the case (it is the ontological truth) that identity has this binary quality of having to be one thing and not its "other". If that is how we find reality, the PNC is a good metaphysical model. We might build in that strong presumption as a given.
But it is quite reasonable to question the claim the world in fact is divided quite so crisply. Indeed, that is the very thing that quantum indeterminism has challenged in the most fundamental way. If two particles are entangled, there is no fact of the matter as to their individual identity. They happily embody contradictory identities - until the further thing of a wavefunction collapse. A thermal measurement.
So right there is a canonical modern example of how reality is vague (a quantum potential in which identity is accepting of contradictions). But then - emergently - it can also evolve a binary crispness. The PNC now applies. A definite measurement one way, and not the "other", can be entered in the ledger of history.
So a logic of vagueness, in which the PNC becomes an emergent feature of classical reality, has direct empirical proof now. Peirce was right in his willingness to question some ancient metaphysical "truths".
The PNC remains a useful tool because we also know that wavefunctions do get collapsed. Well, that is if you can move past the no-collapse quantum interpretations and accept a thermal decoherence model of time itself. :wink:
But anyway, wavefunctions do collapse and so the PNC does apply from a classical perspective. Yet we then need a logic of vagueness to account for how the PNC could emerge from a ground of being, a ground of quantum indeterminism, where it patently doesn't.
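The entanglement claim a few paragraphs up can be made concrete with a small linear-algebra sketch (assuming NumPy; my illustration, not part of the original post): for a Bell pair, the joint state is perfectly definite (pure), yet the reduced state of either particle alone is maximally mixed, which is the formal sense in which there is "no fact of the matter" about individual identity before measurement:

```python
import numpy as np

# Bell state: two qubits in the maximally entangled state (|00> + |11>)/sqrt(2)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Density matrix of the joint system: a pure state, definite as a whole
rho = np.outer(bell, bell.conj())

# Reduced density matrix of particle A alone (partial trace over B)
rho_full = rho.reshape(2, 2, 2, 2)   # indices: (a, b, a', b')
rho_A = np.zeros((2, 2), dtype=complex)
for b in range(2):
    rho_A += rho_full[:, b, :, b]

# Particle A on its own is maximally mixed: identity matrix / 2,
# i.e. no definite individual state
print(rho_A.real)

# Purity Tr(rho^2): 1 for the pair (definite), 0.5 for either particle alone
print(np.trace(rho @ rho).real)       # 1.0
print(np.trace(rho_A @ rho_A).real)   # 0.5
```

A measurement (decoherence) is what collapses this to one crisp outcome, which is the step at which binary, PNC-style bookkeeping becomes applicable.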
That's incorrect. The three so-called fundamental laws of logic, the law of identity, the law of non-contradiction, and the law of excluded middle, are guidelines for making judgements of truth and falsity. That's why they describe a binary system. The first, the law of identity, establishes correspondence between the language being used and an identified object which is the substance. Notice that the thing identified is an object, consistent with Aristotle's "primary substance", not a logical subject, which would be "secondary substance". The identifier, the name, let's say "Socrates", is presumed to directly correspond with an object, and only that unique object, by the law of identity. Any faults in this correspondence relation will allow falsity. Further, the laws of non-contradiction and excluded middle provide guidelines as to what we can truthfully say about any identified object. We can say that these two laws make propositions of "validity", but what they tell us is that when we cannot decide which of two contradictory propositions ought to be accepted, we need to return to the object, as the primary substance, to make that judgement based in correspondence, or truth.
If we replace the named object with a subject, secondary substance --- suppose that "Socrates" refers to a subject rather than a named object --- then our guidance is only validity. We are not dealing with correspondence, or truth, but validity alone, because we have removed the applicability of the law of identity, which identifies an object. We can make whatever predications we want of that subject, so long as they are valid, but we have nothing to substantiate these judgements, no object of correspondence, no means for truth, with only an imaginary subject.
So the PNC may be used as a tool of validity, or it may be used as a tool for truth, depending on the relationship you build between it and the law of identity. Of course it is well known that the three fundamental laws must be applied together, as a unit, so your removal of that law from its relationship to the others, to say that it is only about validity, is a false representation. Maintaining that the law of identity relates directly to an object in correspondence, as it is intended to, and maintaining the proper relation between the law of identity and the law of non-contradiction, ensures that we apply the law of non-contradiction toward truth.
Quoting apokrisis
This is a valid concern. It may be the case that the world is not actually arranged in such a way that the PNC applies. However, we have seen that the PNC is extremely useful, and applicable in a vast majority of cases. And, whenever it appears like the PNC does not apply we can assume that the descriptions we've made from observations are somewhat faulty, so that we can go back and revisit those descriptions until we find the appropriate ones in which the PNC is observed.
If, whenever it appears like some aspect of the world is not arranged in such a way that the PNC applies, we simply assume that this is the way that aspect of the world is, and leave that aspect of the world as unintelligible, then we have no inspiration to revisit our observation based descriptions, to determine the instances when the descriptions were faulty. So it really provides no pragmatic service to us, to assume that the world might not be divided so crisply, unless we can find another way to make these aspects which appear as vague, intelligible. All we can do, is assume that the world really is arranged in such a crisp way, which makes it intelligible to us, despite the fact that our descriptions make it appear unintelligible. Then we can continue to seek the deficiencies in our observation based descriptions, which make it appear unintelligible. But to assume that it might really be unintelligible is nothing but counterproductive to these efforts.
Quoting apokrisis
This is a good example. It demonstrates that we need to revisit those methods of observation, and keep doing so, until we find a way of description which makes this aspect of the world intelligible. To simply assume that this aspect of the world is unintelligible, (there is no truth) and therefore give up the effort is counterproductive.
Quoting apokrisis
No, it is not an example of how reality is vague. It is an example of how you are willing to give up on the quest for truth. Instead of researching all the observational premises, and theoretical principles employed, to determine the mistakes within, and correct them, so that this aspect of the world might become intelligible to us, you are completely uninspired to make that effort, and ready to sit in the corner whining "it can't be done", the world is simply unintelligible.
Quoting apokrisis
Again, you are incorrect here. The fact that a certain aspect of the world appears to be unintelligible to us, does not prove that it is unintelligible absolutely. Unless you can prove that the methods employed are the only possible methods, or the best possible methods, you cannot claim "empirical proof" of such a thing. That's actually a ridiculous sort of claim. It's like a blind person claiming to another blind person, to have "direct empirical proof" that there is no such thing as colour. Deficiency in one's capacity to apprehend something does not prove that the thing cannot be apprehended.
Yes. Idealism is an ancient philosophical worldview that never went away. To me, Hoffman's theory seems to be an update of Kant's Transcendental Idealism. However, Hoffman calls it Model Dependent Realism. I suspect that the notion of "transcendence" does not fit your worldview. So you may dismiss Hoffman as an occultist, but he is an MIT-educated occultist.
I'm not a credentialed cognitive scientist, so I'll let Hoffman make his own argument. Obviously, Idealism is not compatible with the current dominant doctrine of Materialism. But Quantum Theory has already undermined the foundation of that ancient hypothesis. I presented my concise & short "argument" in the blog post linked above.
Do you know of any cognitive or psychological theory that is empirically falsifiable? Mind studies are not "hard" sciences, so their theories are essentially philosophical. Only time will tell if Hoffman's provocative theory gains credibility among his peers in cognitive science. At this time, his theory is "challenging leading scientific theories", so you would expect that many of his peers are skeptical. But his theory has been enthusiastically received by several prominent cognitive scientists, including Steven Pinker. :cool:
Quotes :
"SHORTLISTED FOR THE PHYSICS WORLD BOOK OF THE YEAR 2019 : A groundbreaking examination of human perception, reality and the evolutionary schism between the two"
https://www.penguin.co.uk/books/295/295303/the-case-against-reality/9780141983417.html
Challenging leading scientific theories that claim that our senses report back objective reality, cognitive scientist Donald Hoffman argues that while we should take our perceptions seriously, we should not take them literally.
"Don Hoffman . . . combines a deep understanding of the logic of perception, a gift for explaining it with simple displays that anyone can-quite literally-see, and a refreshing sense of wonder at the miracle of it all."--Steven Pinker, author of How the Mind Works
That is rather the point. Peirce was highlighting the presumption you have “truthfully” identified an object. Some concrete particular under the first law. And he was drawing out the logical implications of the corollary - the case when the principle of identity doesn’t apply.
This is exactly what I was talking about. If you take the laws of non-contradiction and excluded middle out of context, removing them from their relationship with the law of identity, you no longer have anything to ground truth or falsity in, no substance. Without identity, truth and falsity are not relevant.
Therefore, in the quest for truth, the law of identity is of the utmost importance. We seek to apply the law of identity, and this brings the other laws to bear fruit in relation to truth and falsity.
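For readers following the back-and-forth, the three classical laws of thought under dispute here can be stated formally (a standard textbook rendering, not Peirce's own notation):

```latex
\begin{align*}
&\text{Identity:} && a = a \\
&\text{Non-contradiction (LNC):} && \neg(P \land \neg P) \\
&\text{Excluded middle (LEM):} && P \lor \neg P
\end{align*}
```

Peirce's move, as discussed below, is to characterize the vague as that to which the LNC need not apply, and the general as that to which the LEM need not apply.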
Quoting apokrisis
When we are seeking truth, there is no such thing as "the case when the principle of identity doesn't apply". If it appears to you that the principle of identity cannot be applied, I would reply that you are not trying hard enough. Truth does not come easy; it requires effort.
We can never simply assume that the law of identity has been truthfully applied, Peirce was correct in this, and it's the starting point for skepticism. So when the logic leads to vagueness or other absurdities, we need to revisit how the law of identity has, or has not, been applied in these cases. It makes no sense to conclude that the law of identity cannot be applied, because that just demonstrates a lack of effort.
You are reading it backwards. A logical definition of vagueness (and generality) is what helps ground your desired "truth-telling" apparatus. It tells you the conditions under which the laws of thought will fail - ensuring you do what is needed to fix those holes.
So you have to establish that you are dealing with a concrete case where a binary judgement can apply. The thing in question has to be that thing and no other thing. You can't simply presume it. You have to check it.
But that is then why you need a pragmatic definition of "truth". One that has measurable consequences.
Theism routinely by-passes that constraint on logicism. God becomes a concept so general that nothing is impossible of Him, a concept so vague that anything can be taken as evidence of Him.
There is evil in the world? It's put there as a test. You recovered from your heart attack? It was the power of prayer. But your dead neighbour prayed too? God probably knew he was a paedo.
You are treating the laws of thought as if they are Platonic abstractions. Peirce was concerned with rooting them in the reality of the world. And so defining when a rule does not apply is necessary to being able to define when it actually does.
Quoting Metaphysician Undercover
Exactly. But having started skepticism going, we then need to rein it in appropriately. And that is what this is about.
We want to avoid the two errors of credulity and unbounded skepticism. We want to be like scientists and say, as far as our model goes in terms of the measurements it suggests, the theory is probably true.
Peirce also did critical work on probability theory so that exact numbers could be put on the relative likelihood of something being false rather than true. His was a system of logic with methodological consequences.
Quoting Metaphysician Undercover
Again, yes. And what does that effort look like?
(Reveal: Pragmatism rather than theism!)
Oh here we go: I see everything you do as backward, and you see everything I do as backward.
Quoting apokrisis
This is not true. We cannot ground truth in a definition. Your definition could be random fantasy. So you would end up with a coherent logical system without any correspondence with reality. That's why Aristotle introduced "substance", to ground logic in reality.
Quoting apokrisis
This is why we cannot begin with a definition. A definition consists of words. The words used must refer to something. So we must establish what the words refer to (identity) first, prior to proceeding to a definition. This is demonstrated by Platonic dialectic. You, and Peirce, have it backward. "Vagueness" represents the unidentified, what we have no words to describe. So you think that instead of analyzing "the vague", apprehending and identifying its various aspects, such that we can bring it out of its current appearance as vagueness, into a crisp clear understanding through the process of identifying its parts, we ought to just define "vague" as identifying something unintelligible.
Notice that you and Peirce, by claiming that vagueness is a real aspect of the universe, have actually proceeded in the way that I have described. You have identified something, and named it "vagueness". The problem though, is that you want to assign to this identified thing, the property of being inconsistent with the laws of logic. That is how we know it is an untruthful way to proceed. What it indicates is that you have not properly identified and described the thing which you call "vagueness".
Quoting apokrisis
I've been describing this effort. It is to provide real, coherent, and truthful descriptions, rather than the lazy way of saying "it's vague" and cannot be described in an intelligible way. The difference between your way and the theistic way is that theism maintains that God is supremely intelligible, despite the fact that the human intellect cannot very well grasp Him. This is the opposite of your approach, which says that vagueness inheres within the thing, making the thing impossible for any intellect to apprehend. Do you see why your approach is backward?
Thanks for the reference, it's a good read. You might understand, from what I wrote, that I do not disagree that it is possible to treat the LNC and LEM in the way that Peirce does. However, as I explained, he does this by divorcing these laws from the primary law, the law of identity. Determinateness is a function of identity. So it is the removal of the need for an identified particular that allows for Peirce's categories of the general and the vague in the first place. It is only in this context that these categories make any sense. Without an identified particular object, what Aristotle called "primary substance", the LNC and LEM are bound only by inductive principles, which are based in probability. Probability is not consistent with the three laws, when maintained as three, because the identity of an object gives us determinateness. It is only by removing this determinate object that we are forced to resort to general principles instead. But the general principles are produced from induction, which gives us probability instead of determinateness.
Further, the author of your referred article, Robert Lane, explains how Peirce allows that the term of predication might be defined in a multitude of ways. This is why I argued that reference must be prior to definition. If Bob Dole is the identified object, and we say "Bob Dole's hair is red", then the colour of Bob Dole's hair tells us what "red" is. This is the importance of having an identified object, substance, which provides an example of what the term of predication means, rather than having to rely on a definition, and the sophistry involved in different interpretations of the same word.
Notice how Robert Lane provides no indication, throughout that article, as to how Peirce shows any respect whatsoever to the law of identity in his discussion of the LNC and LEM.
The conclusion I draw is that yes, we can't presume complete determinism. But nor do we then need to lapse into complete indeterminism.
Pragmatisim is the middle path of constructing a theory of logic in which indeterminism is what gets constrained.
As an ontology, that says reality is foundationally indeterminate, and yet emergently determinate. And the determinate aspect is not merely something passively existent (as often is taken to be the case with emergence - ie: supervenient or epiphenomenal). It is an active regulatory power. The power of emergent habit. The power of formal and final cause to really shape indeterminate potential into an actualised reality.
So it is a logical system large enough to speak of the world we find ourselves in - complete with its indeterminate potentials and determining constraints.
Quoting Metaphysician Undercover
Again, I am taking the systems view of ontological reality. So the internalist approach that Peirce takes on this would be the feature, not the bug. I'm still digesting that aspect of Lane's argument, but that was one of the sharp ideas that grabbed me.
Quoting Metaphysician Undercover
There is equivocation here on Peirce's part because his logic of vagueness was a project still in progress.
His early work was couched in terms of Firstness - free fluctuations. But as we have discussed, a fluctuation already seems too concrete and individuated. Formal and final cause appear already to be playing a part by that point. A fluctuation has to be a fluctuation in something - or so it would seem.
This is precisely the obvious hole in the vogue for accounts of the Big Bang as simply a rather large quantum fluctuation. Even if a quantum field is treated as the most abstract thing possible, the field seems to have to pre-date its fluctuation. Verbally at least, we remain trapped in the "prime mover" and "first efficient cause" maze you so enjoy.
But he was recasting Firstness as Vagueness in later work. And we can see that in his making a triad of the potential, the actual and the general - as the mirror of the three stages of the laws of thought.
A fluctuation is really a possibility. A spontaneous act, yet one that can be individuated in terms of the context it also reveals. We are nearly there in winding our way back to bootstrapping actuality.
A step further is "potential" properly understood as a true vagueness. A fluctuation is a spontaneity that is not caused by "the past". It is called for by the finality of its own future - the world it starts to reveal. This is one of the things that smashes the conventional notion of time you prefer to employ.
But anyway, when it comes to the law of identity, it is enough for everyday logic that reality is already reasonably well individuated - at least in the ways that might interest us enough to speak about it. The law of identity can work even if any instance of individuation is merely a case of uncertainty being sufficiently constrained.
However when we get to ontological questions about the machinery of creation, then this background to the laws of thought becomes relevant. The details of how things really work can no longer be brushed under the carpet, or shoved in a black box labelled "God".
If we look at reality, as we know it, to find out what distinguishes or separates the determinate from the indeterminate, we see that the past is determinate, and the future indeterminate, with the present separating these two. So Aristotle assigned indeterminacy to future events, what may or may not be, and these future occurrences are not ruled by the LEM. His famous example, the sea battle tomorrow.
If I understand Peirce correctly, he wants to take one step further, and say that the present, which separates the determinate past from the indeterminate future (LEM not applicable), is itself a "vague" division. So at this time, the present, the LNC does not apply. So we have a determinate past, an indeterminate future which can only be predicted through generalizations (LEM not applicable), and a present which violates the LNC.
The present is the most difficult to apprehend. If the future is really indeterminate, as free will, and final cause indicate, and the past is really determinate, as the fact that we cannot change what has occurred indicates, then the present must exist as a time of transition between these two. This transition we can call "becoming". Becoming, as Aristotle demonstrated, is incompatible with the logical categories of being and not being. This is one reason why he was led to violate the LEM. But "incompatible with", means neither being nor not being, and he insisted that the LNC be maintained.
Let's say that the present cannot be a crisp division between future and past, because this would deny the activity, becoming, which we observe to occur at the present. So the indeterminate world of the future cannot pass into the determined world of the past at a crisp moment. Therefore we might need to assign vagueness to the present. But this vagueness is not a vagueness described by a violation of the LNC; it is described as an incompatibility with the LNC. This means that we cannot describe becoming, which occurs at the present, in the same bivalent logic of truth and falsity that we use to describe the static past, what has occurred, so it is more like a violation of the LEM. But if we look toward the future now, is it possible to say that the LNC is violated? Of the sea battle tomorrow, for example, can we say that it is both true and false that it will occur? Suppose we take a many worlds interpretation of quantum physics; does this say that the sea battle both will and will not occur?
Quoting apokrisis
This is the problem with wave theory. A wave needs a medium, and electromagnetism is understood by wave theory. Denying that there is a medium, and insisting that the activity is "wavelike" doesn't solve the problem.
Quoting apokrisis
This idea of firstness really doesn't make sense. Suppose there is a first moment in time. Prior to the first moment there would be infinite potential, because there is only future, with no past. There would be absolute indeterminateness, with no past whatsoever to determine anything. That means absolute freedom. However, whatever it is that acts with such absolute freedom, and causes the passing of time to start, must act for some reason, and this is why we assign final cause to this first act. So the indeterminateness of the first potential is not absolute at all; it's just that we do not understand the final causes (intention) involved.
Quoting apokrisis
Appealing to God is not to brush things under the carpet, but to realize the true nature of time, and how the first act must necessarily be an intentional act, final cause. Because when we look back to the point when all was future, and there was no past, (the first moment in time), we see that the acting thing must be capable of being completely in the future, and this is the nature of final cause. So as time passes, material existence can be determined according to the will of that being.
Or rather that the past is the determining context. The future is created by what then becomes determinate due to the application of these constraints. The present is the "now" where global historical constraints are acting on residual indeterminacy to fix it as some new actualised event. So the present is defined by the actualisation of a local potential via the limitations of global historical context.
Or as quantum theory puts it, actuality is realised by the collapse of the wavefunction. A local potential and a global context are resolved to produce a result that is "determinate" and so now belonging to the generalised past, while pointing also towards a more specified future.
Events remove possibilities from the world. And so shape more clearly the possibilities that remain.
Time thus arises as the macroscale description of this directional flow. Potential becomes increasingly restricted or constrained over time as it is realised in particular happenings. The business of change takes on an increasingly determinate character - even if there thus also has to be a residual indeterminacy to give this temporal trajectory something further to be determined by contextual acts of determination.
Quoting Metaphysician Undercover
As I point out, you call it a separation. I am talking about it as an interaction.
The present as an act of local actualisation has to emerge from the interaction of what is past (the development of some global contextual condition) and what is future (the indeterminacy still to be shaped - but not eliminated - by that process of actualisation).
I wouldn't get too hung up on mapping this directly to the laws of thought. We normally imagine them to be Platonic abstractions that exist outside of physical reality. So they are framed in language that is a-temporal from the get-go. Verbal confusion is only to be expected.
But vagueness would describe the state of things at the beginning of time because the indeterminism in the system is macro. There is no history of actualisation as yet, and so no determining context in play.
However by the time you get halfway through the life of the Cosmos - as we are in the present era - then it has grown so large and cold that it is most of the way to having only a microscale indeterminacy. The potential has been so squeezed that you can only really see it at the quantum level of physical events.
At the macroscale, the Cosmos is now getting close to the other end of its time - its classically fixed state of maximum possible global determinacy. It has arrived at what Peirce calls generality. (Or continuity, or synechism, etc).
Don't worry. It all makes sense.
Quoting Metaphysician Undercover
Yep. But who wants to go with the MWI?
Quoting Metaphysician Undercover
Alternatively, this is pragmatism. Accepting that we can only model reality. And so what matters is that the model works. It can solve our practical problems.
Quoting Metaphysician Undercover
So can you lift the carpet and provide the detail of who God is and how He does these things? What first act did He perform with the Big Bang? What intent can we read into its unfolding symmetry breaking? How much choice did He have over the maths of the situation?
These would all be good starting points to tell us what is better about your model of existence. Let's see if you can say something that is not either too vague or too general.
The problem here is that you do not account for the acting free will, final cause. It does not act according to these constraints, the determining context. It acts according to what is desired for the future. Yes it is constrained, but the primary objective is to bring about what is desired, regardless of constraints.
Quoting apokrisis
So this scenario is missing something, final cause. You have "global historic constraints", and you have "indeterminacy", but you neglect the free willing being who utilizes the indeterminacy amidst the constraints, to bring about the desired "new actualised events". That is the key point, that the new actualized event is not any random event, produced from the indeterminacy amidst the constraints, it is a final cause event, intended for some purpose.
Quoting apokrisis
This is a perspective dependent claim. "Potential" is a human conception which is perspective dependent. An apple hanging in the tree has potential energy due to the force of gravity. If it starts to fall it gains kinetic energy, but this is still potential, in the sense that it is the capacity to do work. And every time the energy is converted to a different form, it is still the same potential, according to conservation laws. The problem is that some forms of energy (potential) are harder for the human being to harness, and some might even appear to us, as impossible to harness. So we might say that potential (energy) becomes increasingly restricted, but this is a judgement based in the human perspective. Theories about entropy and heat death, only describe potential from the human perspective, the human capacity to harness energy.
Quoting apokrisis
So this is not really correct. Events change the possibilities in the world. Any event can open up as many, or more, new possibilities as it removes. In reality an event just changes the possibilities in the world. And since the possibilities in the world are countless at any given moment, it doesn't make sense to even think about numbering them, or asking whether there are more possibilities at one moment than at another. The law of conservation of energy states that energy, the potential to do work, remains constant. Some energy might slip away from the human capacity to harness it, as entropy, but this is a perspective-dependent description.
Quoting apokrisis
Therefore, this is a faulty claim, created through the notion that the human perspective gives us the absolute. This is why the theological perspective is superior on this issue. It recognizes that claims such as the idea that potential is becoming increasingly restricted are simply a product of the human perspective. We have no idea of the potential available to a superior being like God, so such claims are not ontologically meaningful. For example, a culture living and thriving in the designed conditions of a petri dish (if it could think) would think that the available potential was running out as it consumed the nutrients provided for it. But many other cultures could use the waste of that culture as potential for their activities. Such claims about potential becoming restricted are completely perspective dependent.
Quoting apokrisis
But at the first moment in time there is necessarily no past. Can you apprehend this? All your talk about the past which the present emerges from is nonsense, because there can be no past whatsoever until time starts passing, and at that moment the past begins to emerge. So the past is really what emerges. As soon as there is time, there is an emergent past, and the past continues to emerge so long as time keeps passing.
Prior to this first moment of time, there can still be future, as the future is not determined by the passing of time, being prior to it. Thus if the past emerges, it emerges from the future, because the future is prior to it. This is why the idea of infinite indeterminacy, or infinite potential, prior to the beginning of time, seems to make sense. It appears as if, prior to the first moment of time, there is infinite potential because there is no past (constraints), and only future, therefore potential without constraint. The reason why this doesn't really make sense is explained by the cosmological argument. If time hasn't started passing, and the potential is infinite, there would be nothing to make time start passing. So we would need to posit an act which would start time passing, and this actuality cannot come out of the infinite potential, because it's an actuality. The act which appears to be derived from potential, but is really an act (which appears to come out of the future), is the intentional act, final cause. So we assume that this is the type of act which orders time itself.
Quoting apokrisis
Well, it makes sense, but it's a completely perspective-dependent assessment of the situation which you offer, so despite making sense, it's not a good ontology. The human concept of potential is based in the human capacity to bring about change in the world. The human being, as a small, insignificant being in comparison to the universe as a whole, has a relatively small capacity to bring about change in the universe. So the human being assesses indeterminacy as being only in the microscale, the assessment of indeterminacy being directly related to the human capacity to produce change through intentional acts, final cause. A far more significant being, with a much greater capacity to bring about change through intentional, free will acts, would apprehend indeterminacy within what we call the macroscale. Your ontology is rather skewed, taking the human perspective as some sort of absolute.
Quoting apokrisis
That's the point, if denying the LNC gives us something like MWI, who wants that?
Quoting apokrisis
As I've explained already, describing things on the basis of it works for some pragmatic purpose, is quite different from the quest for truth. Pragmaticism does not produce good metaphysics.
Quoting apokrisis
The point is to apprehend that the first act is of the same sort of act as the intentional, freewill act, or final cause, such that we can move in the proper direction towards an understanding of it. To deny that it was this sort of act, and pretend that it was some type of random fluctuation or something like that, is to mislead ourselves, guide us in the wrong direction.
That is only a problem from your theistic presumptions. It is the basic inconsistency in theism or idealism that my version of physicalism resolves.
Finality is not about "free will". It is about the inescapability of the emergence of natural law - global habits of regularity that arise directly from nature's attempts to head locally in every direction at once.
You don't understand Peirce's metaphysics yet. But this is the guts of it.
Quoting Metaphysician Undercover
Citing Newtonian mechanics here is odd given that it is indeed a highly technical and reductionist perspective on whatever "potential" might mean.
Well I guess you need to match your theism with its "other" of scientism to avoid talking about physics in the holistic way I am doing. But clearly I don't accept your attempt to limit the concept of "potential" so strictly.
Quoting Metaphysician Undercover
An engineer might have that human concern. A cosmologist is more interested in how that technical language speaks to thermal gradients. It is not about a potential to do work (serve human finality). It is about a potential to roll down a "second law" entropic slope (and thus serve cosmic finality).
Quoting Metaphysician Undercover
It is you who thinks of atomistic moments strung like beads on a chain. So this is why you end up with the problem of either having to have a first moment, or an infinity of moments.
My view is about effective scale. So at the beginning everything is the same "size" and so indistinct or vague. By the end scale is as polarised as it can get. The small is as small as possible, and the large as large as possible.
In the Heat Death, the visible universe has reached its maximum extent due to the inherent limits of its holographic event horizons - technical jargon for the distance any light ray can reach before the ground under it is moving so fast that effectively it winds up standing still ... as is the case when you fall into a Black Hole.
And it has also reached its minimum average energy density, as every location within that spread of spacetime now has a temperature of 0 K, and so the only material action is a faint quantum rustle of virtual particles.
So this is a very different conception of "time" than your Newtonian one. It is not a collection of instants - truncated or endless. It is instead a reality that is truncated at one end by symmetry - an absence of any concrete distinctions. And then truncated at the other by its opposite - a completely broken symmetry where energy density and spacetime are poles apart.
Everywhere is cold. Everywhere is large. And it is all one great "moment" - a continuity - in that it is a single story of symmetry breaking, a single thermal history of development. It begins and ends for reasons internal to its own structure-creation. There is no "outside" against which its existence can be measured.
Quoting Metaphysician Undercover
It is the only test of bad metaphysical theories.
:clap: You do TPF, Peirce, Hartle-Hawking/Rovelli, et al proud, apo!
No, it's an observation. I did not grow up with any theistic assumptions, I didn't go to church, and was not indoctrinated. I studied philosophy in university, and found that the theological metaphysics is consistent with my observed experience, unlike your naturalist metaphysics.
I don't know what "inconsistency" you are talking about. You have described the constraints of past time, and the "application of these constraints" toward the indeterminacy of the future. Do you not apprehend the necessity of a "being" which applies these constraints? Simply assuming constraints from the past, and indeterminacy in the future, does not provide the premises necessary to create an ordered, or organized, existence, an object, which "applying these constraints" implies.
Quoting apokrisis
This demonstrates very clearly that you do not understand final cause, nor do you understand freewill. "Final cause" refers to the cause of an act carried out for a purpose, an intentional act. "Freewill" is derived from an understanding of final cause, in conjunction with the notion that the intentional act is not determined (caused) by past material existence.
Quoting apokrisis
If Peirce's metaphysics states that final cause is not related to free will, then it's a misrepresentation. However, I think that Peirce had very little to say about either of these, and you are just projecting your misunderstanding of final cause and free will onto Peirce's metaphysics. The reality here is that Peirce's metaphysics, being pragmatic, does not account for free will or final cause; it takes these for granted. So you present a twisted, misunderstood representation of final cause, which you think would be consistent with Peirce's metaphysics, and propose it with the intent of making Peirce's metaphysics appear naturalistic.
The issue is that final cause, being what is responsible for artificial things, is fundamentally inconsistent with naturalism. This is because of the classical dichotomy between natural and artificial. Naturalism pretends that it can explain artificial things by classing human beings as natural things, claiming that artificial things "emerge", just like human beings "emerge", and insisting that to believe otherwise is to "believe in the supernatural", which has bad connotations. But the fact of the matter is that the existence of artificial things is much more accurately described by the philosophy of final cause and free will, and naturalism can only attempt to make itself consistent with final cause by misrepresenting final cause. So there is a deep chasm of separation between final cause as understood by classical philosophy and theology, and final cause as represented by naturalist metaphysicians like you. Of course, the real representation, the one which is consistent with observation, and true, is the classical representation.
Quoting apokrisis
Yes this is my point. You assume that things are unintelligible at the beginning, therefore we ought not even try to understand the beginning. The theological way assumes that the beginning is fundamentally, and supremely intelligible. The idea of physical or material existence being derived from the intelligible forms of the creator, explicitly indicates that whatever it is which is prior to the beginning of physical or material existence is fundamentally intelligible. You ought to be able to see why the theological way is much more appealing to anyone with a desire to know the truth about the beginning. If intelligibility is lost in vagueness at the beginning, as you suggest, then there is no point in attempting to understand the beginning, it is simply impossible. But, if the beginning of orderly existence (as we understand the universe to be), necessarily proceeds from an act of final cause, then we might be inspired to proceed toward understanding that act.
Quoting apokrisis
As I explained, this is completely perspective dependent, and cannot be considered to be anything even remotely related to the truth.
Quoting apokrisis
My conception of time cannot be said to be Newtonian. I've read much of Newton's material and he doesn't even present a conception of time, just taking for granted what has come from before him. Furthermore, I never described any "collection of instants", nor did Newton rely on any such conception. Newton's three laws of motion clearly rely on time existing as a continuity. Continuous time, i.e. without the separation of instants, is what supports the concepts of mass, inertia, and velocity in Newton's laws.
And the only representation of time which I offered is a separation between future and past. So you're just misrepresenting what I've proposed, in order to say that you are offering something different. You are offering something different, though, without needing the misrepresentation. You offer a naturalistic metaphysics based in a conception of time which does not respect the substantial difference between past and future. That is the issue which modern physics faces: it does not respect the substantial difference between past and future. That there is a substantial difference between past and future is the most fundamental ontological principle, as it is the principle with the best empirical support.
Quoting apokrisis
Sure, pragmaticism might be the only test for metaphysical theories, but it has no business putting forth metaphysical theories itself. Look at the results you've described. Existing metaphysical theories lead to the conclusion that the beginning of the cosmos is not understood. Therefore the beginning of the cosmos cannot be understood. That's what you've described. The problem is that your pragmatism has not taken into account, and tried to understand, the existence of pragmaticism itself, and such a venture leads us to final cause. So until you properly understand final cause, you cannot understand the failings of pragmaticism.
If I visited another planet and found all these ruins and artefacts, I would feel they could only be explained as machinery constructed by a race of intelligent beings. That would be a logical inference.
But If I visited another planet and found only mountains and rivers, plate tectonics and dissipative flows, then I would conclude something else. An absence of intelligent creators. Only the presence of self organising entropy-driven physical structure.
Quoting Metaphysician Undercover
I simply don’t accept your own view on them. That’s different.
Quoting Metaphysician Undercover
He emphasised the role of habit instead. Constraints on action that explain both human psychology, hence “freewill”, and cosmology if the lawful regularity of nature is best understood as a habit that develops.
So it is usually said he was very Aristotelean on finality. But he also wanted to show that any “creating mind”, was part of the world it was making, not sitting on a throne outside it.
Quoting Metaphysician Undercover
So we agree there for quite different reasons. :grin:
Quoting Metaphysician Undercover
OK I accept Newton’s arguments were more complex. He had the usual wrestle over whether reality was at base continuous or discrete. Were his infinitesimals/fluxions always still a duration, or did they achieve the limit and become points on a line?
But his insistence on time as an external absolute was how he could also insist that all the Universe shared the same instant. Simultaneity.
And note that the argument I’m making seeks to resolve the continuous-discrete debate via the logic of vagueness. Neither is seen as basic. Instead both are opposing limits on possibility. And this is the relativistic view. Continuity and discreteness are never completely separated in nature. But a relative degree of separation is what can develop. You can arrive at a classical state that looks Newtonian. Time as (almost) a continuous duration while also being (almost) infinitely divisible into its instants.
Quoting Metaphysician Undercover
That is where incorporating a thermodynamic arrow of time into physics makes a difference. It breaks that symmetry which comes from treating time as a number line-like dimension - a series of points that you could equally read backwards or forwards.
Once time is understood in terms of a thermal slope, an entropic finality, then the past becomes different from the future.
What has happened is the past as it now constrains what is possible as the future. Once a ball rolls halfway down the slope, that is half of what it could do - or even had to do, given its finality. Its further potential for action is limited by what is already done.
So note that the reciprocal function describes a hyperbola. And we can understand this as representing the complementary quantum axes that define uncertainty (indeterminism, vagueness). Let's call the x axis momentum, the y axis location. In the formalism, the two values are reciprocal. Greater certainty about one direction increases the uncertainty about the other. The two aspects of reality are tied by this reciprocal balancing act.
Now think of this hyperbola as representing the Universe in time - its evolution from a Planck scale beginning where its location and momentum values are "the same size". In an exact balance at their "smallest scale". The point on the graph where y = 1; x = 1.
Note that this is a value of unit 1. That is where things crisply start. It is not 0 - the origin point.
Now if you follow the evolution of the hyperbola along its two arms, you can see that in the infinite future the division between momentum and location becomes effectively complete. The curves are asymptotic, drawing ever closer to the x and y axes. They seem to become the x and y axes after infinite time.
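The reciprocal balance described above can be checked numerically. A minimal sketch (my own illustration, not from the post), assuming nothing beyond the curve y = 1/x itself:

```python
# Minimal sketch of the reciprocal curve y = 1/x: the product x*y stays
# pinned at unit 1 everywhere, while the two coordinates grow ever more
# separated as you move along an arm of the hyperbola.
xs = [1, 10, 100, 1_000, 1_000_000]
points = [(x, 1 / x) for x in xs]

for x, y in points:
    assert abs(x * y - 1) < 1e-12  # the reciprocal balance never breaks

# The arm hugs the x axis ever more closely but never reaches y = 0.
assert all(y > 0 for _, y in points)
```

The asymptote is approached but never reached, which is the point being made: the "axes" are a limit of the curve, not part of it.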
And then the catch. If you are an observer seeing this world way down the line where you believe the x and y axes describe the situation, then retrospectively you will project the x and y axes back to the point where they meet at the origin.
Hey presto, you just invented the problem of how something came from nothing, how there must be a first moment, first cause, because everything has to have started counting its way up from that common origin point marked on the graph.
A backwards projection of two orthogonal lines fails to read that it is really tracing a single reciprocally connected curve and is thus bamboozled into seeing a point beyond as where things have to get going from. It becomes the perennial problem for the metaphysics of creation.
But if you instead take the alternative view - the reciprocal view that is as old as Anaximander - then the beginning is the beginning of a counterfactual definiteness. And that takes two to tango. Both the action and its context - as the primal, unit 1, fluctuation - are there together as the "smallest possible" start point.
Where y = 1; x = 1 is the spot that there is both no difference, and yet infinitesimally a difference, in a distinction between location and momentum, or spacetime extent and energy density content. It is the cusp of being. And a full division of being - a complete breaking of the symmetry - is what follows.
Looking back from the infinite future, the starting point might now look like y = 0; x = 0. An impossible place to begin things. But there you go. It is just that you can't see the curve that is the real metaphysical story.
That kind of absolute space and time - the one where the x and y axes are believed to represent the actual Cartesian reality in which the Universe is embedded - is just a projection of an assumption. An illusion - even if a usefully simple model if you want to do Euclidean geometry or Newtonian mechanics.
The Cosmos itself isn't embedded in any such grid. Instead it is the curve that - by the end of its development - has fully realised its potential for being asymptotically orthogonal. So close to expressing a state of Cartesian gridness, Euclidean flatness, Newtonian absoluteness, that the difference doesn't make a damn.
It gets classically divided at the end. But it starts as a perfect quantum yo-yo balance that is already in play from the point of view of that (mistaken) classical view of two axes which must meet at the big fat zero of an origin where there is just nothing.
Who would be applying constraints on this planet then? It would not be appropriate to refer to the "application of constraints" unless there is something which is applying constraints. You have a habit of talking in this way, as if there is something, some being, applying constraints, or acting in some other intentional way, but when questioned about that you tend to just assume that constraints are applying themselves. Then you proceed into nonsense about self-organizing systems, as if inanimate matter could organize itself to produce its own existence from nothing.
Quoting apokrisis
A habit is the propensity of potential to be actualized in a particular way. What is fundamental to "potential" is that no particular actualization is necessary from any specific state of potential. If a specific state of potential tends to actualize in a particular way (habit), there must be a reason for this. The reason cannot be "constraints on action", because the nature of potential is such that no particular actualization is necessary, and constraints would necessitate a particular action negating the nature of "potential", as having no particular actualization necessary. Therefore we must dismiss "constraints on action" as an explanation for habit, and allow that each instance of actualizing a potential must be freely decided, like a freewill action, to maintain the essence of "contingent" as not-necessary.
This is the difference between a habitual act of a living being, and the necessary act of an inanimate object. The habitual act must be "decided" upon, at each instance of occurrence, or else we cannot truthfully say that there is the potential to do otherwise. So "potential" is excluded from the habitual act if the habitual act is caused by constraints, because the constraints would necessitate the action, and there would be no possibility of anything other than that action. If there is something, such as a being, which applies the constraints, to direct the activity, allowing that the potential might be actualized in some other way if the constraints were not applied, then the not-necessary nature of potential is maintained by the choices of that intentional being applying the constraints. But this implies that some intentional being, acting with final cause is applying the constraints to suit its purpose.
Quoting apokrisis
I've explained a number of times now in this thread, the logic of vagueness does not solve any problems, it simply represents them as unsolvable, so we might leave them and not concern ourselves with them, thinking that it is impossible to resolve them, instead of inquiring toward the truth of the matter.
You are the one referring to the "application". And the obvious answer from my point of view is that the constraints are self-applied. The regularity of habits develops out of nature's own set of possibilities.
Quoting Metaphysician Undercover
Nonsense? Or science?
Cosmology shows how everything is self-organising back to the Planck scale. I provided you with the hyperbolic curve as a model of how there need be no "nothing" before this self-organising was already going.
Quoting Metaphysician Undercover
That is why we are talking about habits developing. At first, everything would try to happen willy-nilly. Then later, things would self organise into an efficient flow.
If someone shouts fire in the cinema and everyone rushes for the same door, lots of bodies trying to do the same thing at once have the effect of cancelling each other out. There is a chaotic jam and nobody gets anywhere.
But if the crowd organise into a flow, then everyone can get out in the fastest way possible.
Rules emerge like this. Just think about how traffic laws emerged to avoid everyone driving like a panicked crowd. Efficient flows always beat inefficient chaos. It is nature's finality. The least action principle.
All possibilities are binaries if they are to be clear and not vague. To take a direction, you have to be moving away from whatever is its counterfactual.
Possibilities come in matched pairs. Or to the degree that they don't, then - as a possibility - they are vague.
True if limited to strictly human thinking. But absolute clarity is unavailable to humans whose emotions, agendas and biases distort clarity. Taking ALL directions seems the way to go. If not...
Quoting apokrisis
...the “matched pairs” way of thinking self-imposes the limit of two choices, BOTH of which MUST include a degree of vagueness since true clarity is elusive at best. Other perspectives persist.
Does nature offer counter-examples? What are they?
Nature??? To paraphrase Obi-Wan, “Your perceptions can deceive you. Don’t trust them.”
What’s the cornerstone of philosophy? Question everything! Linear thinkers succumb to the notion that “nature” is absolute. I submit it is not beyond question, nor is anything, especially human “definitions”. In order to question what we perceive we must first question our own so-called nature.
Uh huh.
Or around these parts, question everything and believe nothing. :smile:
Quoting Dan Cage
That is certainly Epistemology 101.
Hmmm... I question everything and believe nothing, so it sounds like I fit in. But I do not recognize the Epistemology label so I have not knowingly subscribed to it. I represent no specific ideology, philosophy, religion or science, at least not willingly. To me (and to all humans, I should hope), they are all fallible. Why the dismissiveness?
That's just what they called the introductory epistemology class back when I was little. Hume, Berkeley, Descartes, Kant. The usual crew.
Interesting and informative, thank you.
I have encountered occasional quotes from a few philosopher-types over the course of my 64 years... unintentionally. Plato, Dante, Sartre, to name a few. But have never attended a philosophy “class”. And I have agreed with some of those quotes, disagreed with others, not that my opinion matters.
My discipline is self-taught, self-imposed, though not without influence, of course. If it happens to match, at least in part, an existing line of thought, it is coincidence.
I seek a forum of what I hope is “original” thought that goes beyond what has been thought to date. What I’ve encountered here and on Arktos, so far, has been debates over which “established” philosophy is closest to being, ahem... “correct”. “Right” and “wrong” are strictly human concepts and are, therefore, incomplete at best and invalid at worst.
Must I don the cape of my favorite philosophical crusader in order to be “worthy” of this forum? I know that’s not up to you, but some guidance may be helpful. Non-belief is vastly different from disbelief. I am open to anything, but I find very few human thought-inventions compelling. Is there a label for that?
Things are pretty relaxed on that score - at least as the price of entry. But philosophy is a dialectical contest. So if you say something easy to bash, then expect that gleeful bashing. That is the price of staying. :up:
Quoting Dan Cage
Depends how many philosophical positions you have actually encountered and whether you made a sufficiently compelling case against them really.
Skeptic would be a good thing to be labelled. It would mean you have mastered the basics of critical thought.
Excellent advice!
But I’m not out to “bash” or be “bashed”, nor to “win” an argument in the process. Learning and growth are infinite. I’m content simply to expand. If gain-saying is the winning “formula” here, it would be a waste to participate. I will continue to monitor.
Thanks much for the enlightening dialectic!
That use of "application" was a quote from your post. Nevertheless, I've explained how "self-applied constraints" is illogical, involving contradiction. If the constraints are fixed constraints (what the laws of physics are generally believed to describe), then there can be no potential to behave in any other way, and the constraints are not applied, they are just there. If the constraints are capable of applying themselves, then there must be freedom of application inherent within the constraints themselves. This would mean that there is an element of freedom inherent within the constraint, and this is contradictory.
Quoting apokrisis
Such self-organization is not science, it's you attempting to produce a metaphysics which will account for what science gives us in a naturalistic way. And you refuse to accept the contradictions inherent within your naturalistic metaphysics as indication that you ought to move along toward a more acceptable metaphysics. Instead you'd rather appeal to an ontology of vagueness which allows you to leave the contradictions where they lie.
Quoting apokrisis
I see you reject the principle of sufficient reason as well as the principle of non-contradiction. Do you see why I am fully justified in referring to your metaphysics as nonsense?
Quoting apokrisis
This is a key point you seem to be missing about possibility, or potential. Potential is completely incompatible with the bivalent system, and therefore needs to be represented in a completely different way. Peirce clearly pointed this out. Possibility is something general. If it is reduced to a particular possibility such that we can represent its binary opposite, we are not representing the possibility properly, because possibility always relates to numerous things, not one thing. If actualizing possibility X means not actualizing possibility Y, this does not mean that X is the opposite of Y. But this is also why the idea of infinite potential, or possibility, is nonsensical. It leaves nothing actual to make the choice as to which possibility will be actualized. A possibility does not have the capacity to actualize itself.
The current approach in cosmology and particle physics would be to see any global regularity in terms of emergent constraints. That is why symmetry and symmetry breaking are at the heart of modern physics. They describe the form of nature in terms of the complementary emergent limits on free actions. A probabilistic view where change is change until change can no longer make a difference. At that point, the system is "stable" and its equilibrium balance can be encoded as "a universal law".
Quoting Metaphysician Undercover
Yes. That is the distinction I have made all along. Potential would be simply a vagueness. The PNC fails to apply. And possibility is the next step along. A possibility is a concrete option. The PNC applies in that to go in one direction is not to go in its "other" direction.
A possibility is an actuality in that regard. A generalised notion of potentiality in fact. There is now a world, an embedding context or backdrop, where every act is matched by a "reaction". To push is to encounter resistance. To move is to depart.
It is all made actual and concrete by the fact that every possibility is bivalent. A direction is asymmetric as it breaks - and hence reveals - an underlying symmetry.
Vagueness is where there just isn't any such general backdrop to local events or acts. If you are in a canoe in a thick fog on a still lake, do you move or are you still? The PNC can't apply unless there is some context to show that a change is happening, or even that it is not happening.
But when the fog lifts, we have reference points. We are either moving or not moving as the clearly bivalently complementary options now. We have a choice between the two opposed possibilities. The PNC becomes a legitimate rule.
Quoting Metaphysician Undercover
Yep. You dispute the distinction between vague potential and crisp possibility and then repeat the basic argument.
As Peirce says, the trajectory is from Firstness to Thirdness, from vagueness to generality. Actuality as a set of concrete local possibilities emerges via the contextual regularisation of a vagueness, an unformed potential, by generalised habits. A prevailing state of global constraints.
The generality of a backdrop is needed as the symmetric reference frame that orientates local possibilities as the bivalent symmetry-breakings, or asymmetries. That is what I have said all along.
And hence you need the further category of vagueness to stand behind this evolutionary development. The generality of a backdrop or symmetry state has to arise out of "something" too.
Quoting Metaphysician Undercover
It is not X and Y that speaks to bivalence. It is X and not-X.
Quoting Metaphysician Undercover
Sure. In your mechanical model of reality.
The Peircean model says vagueness is only regulated. So there is always chance or spontaneity to affect things. Regulation is asymptotic. It can approach the limit but never actually completely reach it. So infinitesimal chance always remains in the system to tip the balance.
That is why quantum mechanics can work. Or any other form of spontaneous symmetry breaking in physics.
You only need a system to be symmetrically poised between its two directions - the choice over a concrete action. Something is always going to tip the balance. Nature just fluctuates at a fine-grain level and chance will give the poised system its nudge that then actualises the possibility.
It is the old paradox of a ball balanced on the peak of a rounded dome or pencil balanced on its sharp tip.
Newtonian physics says a perfectly balanced ball or pencil could never topple. Nature says that - quantum mechanically - the world is just never that still. There will always be a slightest vibration. And the slightest vibration is all that is needed for the ball or pencil to spontaneously break its symmetry and so actualise a possibility.
Yes. But what if this non-linear sensitivity is being regulated by a parameter that is a reciprocal relation such as y=1/x? And so yx = 1?
A tiny tip one way is yoked to a tiny tip that compensates. Unit 1 has been fixed as the identity element, the common departure point. The indifference lies in not yet giving it any particular value to denote some quantified scale. It is now always just a generalised quality - the way an identity element behaves as a symmetry awaiting its breaking.
There are other examples of starting values that emerge as the balances of divergences.
The value of pi - understood as the ratio of a circumference to a diameter - can vary according to the geometry of a plane. Pi = 2 for the closed or positively curved surface of a sphere. Pi heads for infinity in the opposite case of the negatively curved hyperbolic plane.
It is only the special case - the Euclidean plane, where lines can remain parallel to infinity, never converging or diverging - that the ratio is a familiar fixed constant. 3.14159...
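The spherical case can be verified with standard spherical geometry (my own sketch, not from the post): a circle of geodesic radius r on a unit sphere has circumference 2π·sin(r) and geodesic diameter 2r, so the "pi" you measure is π·sin(r)/r.

```python
import math

# On a unit sphere, the ratio of circumference to geodesic diameter slides
# from the familiar 3.14159... (tiny circles look Euclidean) down to 2
# (a great circle, the largest circle the sphere allows).
def measured_pi(r):
    return math.pi * math.sin(r) / r

assert abs(measured_pi(1e-6) - math.pi) < 1e-6    # small circles: pi as usual
assert abs(measured_pi(math.pi / 2) - 2) < 1e-12  # a great circle: pi = 2
```

So the constant is not "fixed" by the symbol; it is fixed by the flatness of the space it is measured in.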
Euler’s number or e is perhaps a clearer case in being the constant that emerges from the "self-referential" reciprocal built into a pure model of continuous compounding growth.
That is, f(x) = e^x graphed as a curve which intersects at y = 1; x = 0.
The system is set to the most general initial value - 1 - before any growth has had the time to be added. And then the slope it generates by x = 1 is e - or 2.7182818... The unit 1 picture spits out a pi-like constant - a universal scale factor - for the dynamics of self-compounding growth.
https://www.mathsisfun.com/numbers/e-eulers-number.html
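The compounding story behind e can also be sketched numerically (my own illustration): compounding n times at rate 1/n gives (1 + 1/n)^n, which converges to e as the compounding interval shrinks toward the continuous limit.

```python
import math

# Start from the general initial value 1 and compound n times at rate 1/n.
# As n grows, (1 + 1/n)**n converges to Euler's number e = 2.7182818...
def compounded(n):
    return (1 + 1 / n) ** n

assert compounded(1) == 2                      # one annual step just doubles
assert abs(compounded(10**6) - math.e) < 1e-5  # near-continuous compounding
```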
With the Planck scale, the physics wants to run it backwards to recover the "unit 1" reciprocal equation that is the Universe's own universal scale factor. That is the thought motivating this particular game here.
Okun's cube says it must take all three fundamental Planck constants in a relationship to recover that unity. If all three constants - h, G and c - can fit into one theory, then that is the theory of everything.
General relativity unifies two of them - G and c. Quantum field theory unifies another pair - h and c. So unifying all three is about a combined theory of quantum gravity.
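For reference, combining the three constants does pin down a unique "unit 1" grain. A sketch using the standard textbook Planck-unit formulas (my addition, not given in the post):

```python
import math

# Standard Planck-unit definitions (textbook formulas, CODATA-style values):
hbar = 1.054_571_817e-34  # reduced Planck constant, J*s
G = 6.674_30e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.997_924_58e8        # speed of light, m/s

planck_length = math.sqrt(hbar * G / c**3)  # ~1.6e-35 m
planck_time = math.sqrt(hbar * G / c**5)    # ~5.4e-44 s
planck_mass = math.sqrt(hbar * c / G)       # ~2.2e-8 kg
```

The point is that h, G and c only yield definite scales of length, time and mass when all taken together; drop any one and the system is underdetermined.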
At which point everything collapses into confusion as it is a completely self-referential exercise. There is nothing "outside" as the yardstick of measurement. It is all reduced to some internal interplay.
Well, this is why efforts like Loop Quantum Gravity have tried to extract realistic solutions as emergent features from the kind of self-organising reciprocal thinking I describe.
If you frame the quest as getting back to where time and space are coordinates set to zero, then energy density has to be infinite. Neither extreme is a sensible answer to the question.
But if instead the general answer - from a dimensional analysis - is that everything starts from 1, that gives you a fundamental grain to grab hold of. You have a yo-yo balance to swing on. You can extract a log/log powerlaw slope that is the dynamics of an expanding~cooling Cosmos. The energy density thins as the spacetime spreads. The rate of both is yoked together, as scaled by the speed of light - the third side to this "unit 1" Planck story.
So how small and hot was the Universe at the Big Bang? The answer is 1. Or rather so hot and massive that it was as small and curled up as possible in terms of its scale factor. And vice versa. It was so hot and massive it was striving as hard as possible to blow itself apart in every direction. Its spatiotemporal curvature was just as much hyperbolic or negative as it was spherical or positive.
By the Heat Death, the end of time, the scale factor is still "1" but now in an inverse fashion. Everywhere is so cold and empty that the gravitational curvature - the stress tensor of GR - is at its weakest possible value. Almost zero, or 1/G, of what it had been. And the same for h as a measure of the quantum uncertainty or negative hyperbolic curvature wanting to blow things apart. Effectively it has fallen to 1/h or nearly no curvature in that direction either.
So the Universe stays "flat" and follows its unit 1 scale factor trajectory as a spreading~cooling bath of radiation. But that conceals the trauma that is the Big Bang as a state of unresolved tension - the maximum difference in terms of being the "largest" energy density packed into the "smallest" spacetime. And the Heat Death as the evolution towards the calmest expression of that driving tension - its dissipation into its own reciprocal state of being the smallest energy density packed into the largest spacetime.
I'm sure I'm only writing this out for my own amusement. But I just find it a fascinating story.
A different kind of "maths" results from setting your origin to 1,1 rather than 0,0. It constrains any path being traced to something nicely tamed by its own self-referential set-up.
Sorry, we must be talking past one another again. I have no idea what you are saying. Here is a parabolic LFT having a neutral or indefinite or indifferent fixed point in C. Depending upon the value of K one gets the behavior I described before.
[math]F(z)=\frac{\left( K+\alpha \right)z-{{\alpha }^{2}}}{z+K-\alpha },\text{ }F(\alpha )=\alpha ,\text{ }F'(\alpha )=1[/math]
This does not account for the problem that I mentioned, which is the issue of saying that the constraints apply themselves in this type of emergence. For the constraints to be applying themselves, the thing being constrained, indeterminacy, freedom, or whatever you want to call it, must be an inherent part of the constraints, thus allowing the constraints the freedom of application. Combining these two in this way provides you with no possibility of separating them in analysis for the purpose of understanding, and you are left with a vague union of constraints and the thing constrained rendering them both as fundamentally vague, unintelligible.
Quoting apokrisis
You are not applying Peirce's distinction between internal and external application of the LNC, as described by Lane in the article you referred to. There is a difference between saying 'x is red and x is not red', and saying 'it is true, and it is not true, that x is red'. The former is a proper violation of the LNC, the latter indicates an improper definition, or faulty representation of 'red'.
So if the terms of bivalent logic fail to apply in the proposed predication, then we have an improper proposal for predication, a faulty representation of the relationship between the subject and the property to be predicated, such that the LEM is actually what is violated as 'neither/nor'. But the LNC is not actually violated in this case, that it is violated, is an illusion created by an improper proposition. That's the point which Aristotle made with the concept of "potential", insisting that the LNC still applies, as he employed this principle against the sophists who based arguments in improper propositions for the sake of proving absurdities.
Quoting apokrisis
This is the false representation, or description. Potential itself, as ontologically existing potential, indeterminacy in the universe, is not what is bivalent. It is the epistemic possibility of predication, represented as particular possibilities, or as you say above, "every possibility", which is bivalent. The ontologically existing potential remains outside the LEM, and cannot be predicated because the proper terms to describe it have not been developed. Nor can the ontological potential, which we describe in general terms, be expressed as particular possibilities. Therefore you have demonstrated a category mistake here.
The category mistake you are making is that you are taking the ontological potential, described as "underlying symmetry" which inherently violates the LEM due to our inability to describe it, and you are representing it as epistemic possibilities which are bivalent. Then you insist that it violates the LNC. But you have not created the necessary bridge across this gap between categories, so you claim the real thing, the ontological potential, violates the LNC, when in reality it violates the LEM. Therefore, you are really just expressing the desire to violate the LNC to allow the improperly described "potential" into your bivalent system without providing the necessary terms of description which are required to truthfully bring it into the bivalent system coherently.
Quoting apokrisis
If you analyze your own example here, you'll see that you cannot apply the PNC in this situation because of a deficient description of the situation, due to the fog. The deficient description creates the illusion that the PNC cannot be applied to the situation. However, that's just an illusion, and all we need to do is provide the adequate description (see through the fog) and then the PNC can be applied. So in reality, a claim such as "the PNC can't apply" is never warranted, because any time that it appears like this is the case, we need to make the effort to find the appropriate description so that we can apply it.
Quoting apokrisis
Huh? The "vague potential" we are talking about is ontological indeterminacy, real potential in the world. A "crisp possibility", is a described situation, an epistemic principle. In no way do I repeat your category mistake by repeating your argument.
Quoting apokrisis
We are not talking about "the Peircean model" here. We are talking about the apokrisist model, which utilizes an idiosyncratic interpretation of Peirce, along with a huge category mistake (perhaps initiated by Peirce).
No relation. Your description of a "tipping point" in physics caused me to see certain neutral fixed points in complex dynamics from that perspective. Moving a tiny distance away in one direction gives a value that iterates back to the fixed point, whereas moving a tiny distance away in another direction gives a value that quickly iterates far away from the fixed point, although it may eventually return. Of course, a repelling fixed point would send any point close by further away.
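Under an assumed choice of parameters (K = 1, α = 0, my own values, not jgill's), the parabolic LFT above reduces to F(z) = z/(z + 1), and the two-sided behaviour just described can be sketched numerically:

```python
# Parabolic LFT with K = 1, alpha = 0: F(z) = z/(z + 1), a neutral fixed
# point at 0 where F(0) = 0 and F'(0) = 1.
def F(z):
    return z / (z + 1)

# On one side of the fixed point, iterates drift gently back toward it...
z = 0.1
for _ in range(100):
    z = F(z)  # closed form: z_n = z_0 / (1 + n * z_0), creeping toward 0

# ...while on the other side they are flung far away before returning.
w = -0.11
trajectory = []
for _ in range(12):
    w = F(w)
    trajectory.append(w)
# the orbit passes near the pole at z = -1 and briefly blows up to ~ -11
# before re-entering the basin and heading back toward the fixed point
```

A vector field plot of F would make the same asymmetry visible at a glance, as jgill notes.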
I've written over 175 mathematics programs, most focused on graphics illustrating mathematical concepts, but not being a subscriber on this forum means I can't upload the graphic imagery. A simple vector field in this case would show immediately what I have described. But I realize this is not the topic of the thread, so I apologize for deviating :yikes:
Are you a physicist? You seem very knowledgeable.
That’s OK. I was just confused trying to figure the relevance.
So is the vagueness of a quantum potential ontological or epistemic? Do you believe nature is counterfactual all the way down despite the evidence?
I think it's very clearly epistemic; the uncertainty of the Fourier transform is, to me, clearly an epistemic vagueness.
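For concreteness, here is a small numerical sketch (my own, not from the thread) of the Fourier tradeoff being invoked. For a Gaussian pulse exp(-t²/(2σ²)), the spread in time Δt and the spread in angular frequency Δω can both be computed by direct summation, using Parseval's theorem to get the spectral variance as the mean of |ψ'(t)|². The product Δt·Δω sits at its minimum value of 1/2 regardless of σ: squeeze the pulse in time and its spectrum widens in exact compensation.

```python
import math

def spreads(sigma, N=4000, T=12.0):
    """Time spread and angular-frequency spread of exp(-t^2/(2 sigma^2)),
    via Riemann sums; spectral variance computed from |psi'(t)|^2 (Parseval)."""
    dt = 2 * T / N
    ts = [-T + i * dt for i in range(N + 1)]
    psi = [math.exp(-t * t / (2 * sigma * sigma)) for t in ts]
    dpsi = [-t / (sigma * sigma) * p for t, p in zip(ts, psi)]
    norm = sum(p * p for p in psi) * dt
    var_t = sum(t * t * p * p for t, p in zip(ts, psi)) * dt / norm
    var_w = sum(d * d for d in dpsi) * dt / norm
    return math.sqrt(var_t), math.sqrt(var_w)

for sigma in (0.5, 1.0, 2.0):
    spread_t, spread_w = spreads(sigma)
    print(f"sigma={sigma}: dt={spread_t:.3f}, dw={spread_w:.3f}, "
          f"product={spread_t * spread_w:.3f}")
```

Whether that hard lower bound on the product counts as a fact about our descriptions or a fact about the world is, of course, precisely what is in dispute here.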
Quoting apokrisis
I think I've sufficiently explained this already. What you claim as "evidence" of ontological vagueness is simply a failure in human description, i.e. inadequate description. If my eyes are not good and I cannot distinguish whether an object is or is not red, due to apparent vagueness, I might be inclined to say that it is both, or neither, if I am unwilling to accept and admit that my eyes are deficient. Likewise, if the mathematical and physical principles by which a physicist understands quantum potential make this thing called "quantum potential" appear to be vague, the physicist might not be willing to accept that the apparent vagueness is due to a deficiency in the principles.
So the physicists can't properly describe this aspect of reality because it appears vague to them. And you, instead of turning to other principles, such as the theological principles which I've argued provide a better description of the temporal aspect of reality than those adopted by physicists, refuse even to look this way. Instead you adhere to your biased, scientistic metaphysics, assuming that if the physicists cannot describe it, it cannot be described, and therefore that the vagueness must be real, ontological.
But hidden variables have been experimentally ruled out. If it is epistemic, you are left with a truly pathological metaphysics like MWI as your only refuge.
I'm sticking to the science here. The PNC fails to apply to the internals of the wavefunction. The PNC is an emergent feature of the classical scale where the wavefunction collapse has actualised some concrete possibility and so any remaining indeterminacy certainly is epistemic.
Quoting Metaphysician Undercover
Physicists in fact tried their hardest to avoid ontic vagueness. They invented the MWI as one way not to have to admit defeat.
In the end, the "deficiency" is in the metaphysical reductionism that frames the problem - the framework both you and the MWIers share by insisting ontic vagueness is impossible from a classical viewpoint where everything has counterfactual definiteness from the get-go.
What I described is not hidden variables; it's faulty principles. That is epistemic, and it does not lead to MWI, far from it.
Quoting apokrisis
I don't see any physicists addressing the deficiency in their conception of time, which I described in this thread, so as to adopt a conception consistent with our experience of time. Our experience of time indicates that there is a substantial difference between future and past, and therefore no necessary continuity of substance at the present. If physicists had respect for this, they would seek the cause of continuity instead of taking it for granted in the form of conservation laws. The problem, as we discussed, is that physics is pragmatic, purpose-driven toward the goal of prediction. Understanding the real nature of the universe is not the goal of modern physics, so the principles employed by physicists are not designed for this purpose. They are designed for prediction, not for understanding what makes prediction possible.
It seems to me your "therefore" does not logically follow. The "substantial difference" requires a temporal distancing. I suppose you discard elementary calculus, with its notion of time continuity, and its many physical applications. Maybe not. Following arguments in philosophy is sometimes like trying to separate the filaments of cotton candy.
I believe a temporal distancing is required to separate future from past. I think Peirce posits a vague now. But this separation between future and past is the prime reason why I believe we need a two-dimensional time. We have one representation of time which presents us with a continuous time, past through future. If we posit an instantaneous point as the separation between one part of time and another, future from past, we cannot account for the substantial difference between future and past. This substantial change, from future to past, requires a period of time in which to occur; it is a form of becoming, and becoming cannot occur instantaneously. So we need to develop another dimension of time to account for this substantial change which occurs at the present, and relate it to the other dimension of time, which is supposed to be a continuity through past and future. Some metaphysicians talk about the present having width; I call it breadth.
Let me think about this. I wrote and posted a note on complex time recently that expresses a "real" and an "imaginary" time variable. But not in the way you describe. Interesting.
There are numerous ways multidimensional time has been approached. The approach from physics differs from the approach from metaphysics, but each helps us deal with the apparent vagueness of the present "now" assumed by special relativity. From the metaphysical approach, we have principles based in human experience, leading us toward a form of presentism. But experience demonstrates that we actually observe motion, "becoming", at the present, so the present cannot be a crisp moment, or point in time, because motion requires time. This is the vagueness of the present described by Peirce, and employed as a fundamental premise in special relativity.
Therefore, if time is represented as a continuity with points dividing one part from another, this is not a proper representation, because the notion of a point of separation is derived from a crisp division between past and future at the present. When we allow that time is passing at the present, we can go two principal ways. We can maintain that the first representation is correct, and claim that the separation cannot be made cleanly because human capacities don't allow us to do so, so that "the present" is just an arbitrary period of time on the timeline; vagueness cannot be ruled out of this period of time. The more complex way is to represent past and future, the line of being, as one dimension of time, then show activity as occurring at a particular "vague point" on this line, with its own micro-scaled time to account for becoming. The difficulty is to establish the proper relation between the macro-scaled timeline of being and the micro-scaled time of becoming, such that true understanding might be enabled. This requires determining precisely the activity which occurs at the present in the micro-scale, separating it from the activity of the macro-scale, so that it might be related to the activity of the macro-scale timeline as a distinct form of activity.