Are the laws of nature irreducible?
This is an attempt to get a coherent concept of the laws of nature. What are they? What are they made of? How do they work?
If Davies is correct in saying that laws do not depend on physical processes, does that necessarily imply that laws cannot be explained by physical processes? In other words, are laws irreducible to physical processes?
Indeed, a bottom-up explanation, from the level of e.g. bosons, should be expected to give rise to innumerable different ever-changing laws. By analogy, particles give rise to innumerable different conglomerations.
Moreover a bottom-up process from bosons to physical laws would be in need of constraints (laws?) in order to produce a limited set of universal laws.
Finally a related question: what makes laws work?
Paul Davies: But what are these ultimate laws and where do they come from? Such questions are often dismissed as being pointless or even unscientific. As the cosmologist Sean Carroll has written, “There is a chain of explanations concerning things that happen in the universe, which ultimately reaches to the fundamental laws of nature and stops… at the end of the day the laws are what they are… And that’s okay. I’m happy to take the universe just as we find it."
...
Physical processes, however violent or complex, are thought to have absolutely no effect on the laws. There is thus a curious asymmetry: physical processes depend on laws but the laws do not depend on physical processes. Although this statement cannot be proved, it is widely accepted.
Joel Primack: What is it that makes the electrons continue to follow the laws?
Hence, if we think of laws as being prescriptive, as Davies does, then the laws affect the processes but not vice versa. On the other hand, if we think of the laws as being descriptive, the processes affect the laws, but not vice versa (the patterns in the processes determine what the laws are).
And lo, the symmetry is recovered.
The same laws that I described as "descriptive" can be treated prescriptively. We know how gravity, force, and mass interact, so when we launch a rocket to Mars, the laws of nature prescribe how much thrust is needed, when, for how long, and in what direction. Think of the Cassini mission bouncing around the complex gravitational fields of Saturn and its various moons. NASA wouldn't be able to program the onboard computers, or alter the programs, without an exquisite understanding of precisely what the laws prescribe. When NASA scientists' understanding of the laws of nature isn't quite exquisite enough, rockets miss their targets and go sailing away into the brightly lit yonder around the sun.
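The prescriptive reading can be made concrete with the Tsiolkovsky rocket equation, which turns a mission requirement (a desired change in velocity) into a prescription for propellant. Here is a minimal sketch in Python; the numbers are purely hypothetical illustrations, not figures from any actual mission:

```python
import math

def propellant_needed(m_dry, delta_v, isp, g0=9.80665):
    """Tsiolkovsky rocket equation, solved for propellant mass.

    delta_v = isp * g0 * ln(m0 / m_dry), so the required initial mass is
    m0 = m_dry * exp(delta_v / (isp * g0)), and propellant = m0 - m_dry.
    """
    m0 = m_dry * math.exp(delta_v / (isp * g0))
    return m0 - m_dry

# Hypothetical numbers: 1000 kg dry mass, 3600 m/s of delta-v, Isp of 320 s.
prop = propellant_needed(1000.0, 3600.0, 320.0)
```

The point is the direction of fit: given the law, the dry mass, the engine's specific impulse, and the required delta-v, the propellant mass is dictated, not negotiated.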
If you fall off the roof of a very tall building, be assured that your plunge toward earth is altogether in accordance with laws which will not be amended before you become a large splat! on the sidewalk.
1) Enumerate the laws that one wishes to discuss and
2) Explain how these laws are invariable through time and are applicable to every possible event.
I have never seen this done. Proponents of such a concept as laws of nature generally prefer to discuss them in gross generalities which I reject.
Instead, I prefer a more changeable and evolving universe as Rupert Sheldrake describes. I find this a more realistic view of the universe:
http://www.sheldrake.org/research/most-of-the-so-called-laws-of-nature-are-more-like-habits
Nice question. I really like Paul Davies but that column is a mix of the good and the bad.
The way I would look at it is that the fundamental laws describe mathematical symmetries - which are in effect the limits on un-lawfulness. With a circle, for example, disorder can do its damnedest - spin the circle in any direction at any speed - and the circle will still look the same, unruffled. All that disordering has no real effect, as the very form of the circle is indifferent to every kind of action, or attempt to break its symmetry.
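The symmetry claim in the circle example can be checked numerically: rotate a point on the unit circle by any angle you like and it stays on the unit circle, so the rotation "makes no difference" to the figure. A throwaway sketch (Python chosen purely for illustration):

```python
import math

def rotate(x, y, theta):
    """Rotate the point (x, y) about the origin by angle theta (radians)."""
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

# Let disorder do its damnedest: apply a thousand arbitrary rotations.
x, y = 1.0, 0.0                      # a point on the unit circle
for k in range(1000):
    x, y = rotate(x, y, 0.0137 * k)  # an arbitrary angle each time
    # The defining relation x^2 + y^2 = 1 survives every rotation:
    assert abs(x * x + y * y - 1.0) < 1e-9
```

Whatever sequence of spins you pick, the invariant is untouched; that indifference to every possible action is what is meant here by a symmetry being a limit on disordering.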
So this is a standard thing. Symmetries are emergent equilibrium states on the larger picture. They are the constraints that can't be broken because there is no possible action that could make a substantial difference. And we can apply this to a dynamical process like a Big Bang universe or other entropic systems.
An ideal gas has particles going off in all directions, but they can't change the overall temperature or pressure of the system - its global symmetries ... (unless all the particles decide to all go in the same direction - something that can't happen in a Big Bang universe that is always cooling/expanding of course.)
Anyway. Davies makes the useful point that most laws or constraints are "merely effective" - locked in due to symmetry breaking. It is easy to see that as the Universe has cooled/expanded, bosons have attained stable local identity and so have behaviour that is accounted for in terms of symmetries that got broken. Antimatter and matter were once in thermal equilibrium (a state of symmetry). Now all the antimatter has fizzled away leaving matter as "the law". Effectively we can chuck away the right-handed interactions of the weak force because only the left-handed ones exist to result in physical laws.
And now my suggestion is to just roll effective law all the way back to the beginning. We don't have to work our way back to a fundamental Platonic state of being which is a perfect symmetry. Instead - if we understand laws to always be the emergent limits of disordering, the dynamical equilibrium balances that must always develop because "continued differencing makes no further difference" - then we can start the whole shebang with both disordering and ordering being the "symmetry" in play. We don't have to pick one over the other - disordering, or the quantum action, over order, or the symmetries of spacetime. Instead the two co-arise as themselves the deep asymmetry. The story is simultaneously bottom-up and top-down.
So this is synergistic. The laws need disorder (or violent physical action - quantum fluctuations) to exist. They represent the equilibrium limits that regulate the Cosmos in being the effective symmetry states that "just don't care". Disorder loses its teeth because it can spin a circle all it likes and the circle already immanently exists as the limit of that very possibility for action. Try every form of disruption and in the end, what can't be disrupted is what is left as necessarily being the case.
Of course that still leaves plenty of mystery in trying to track things back to the beginning. Physics can now describe a whole sequence of emergence when it comes to the development of effective laws following the Big Bang. There were a whole series of phase transitions to produce more complex states of order as the energy density dropped and the scale factor increased. However we are still trying to work out whether gravity was part of some vanilla quantum force and so there is some grounding symmetry state for reasons that will seem self-evident once we have its number.
Yet the metaphysical problem here is that Davies (although he is big on top-down causation) is still too wedded to a Platonic conception of symmetries as "substantial things" - like something that might break like a plate if you drop it on the floor.
If instead you go the completely effective theory route - where any foundational symmetry would itself have to be emergent along with the disordering it "ignores" (that is, the quantum action that is needed to manifest it as "a real physical thing"), then you have an elegant way past the usual "first causes" and "prime mover" quandaries of metaphysics.
The thing is we absolutely understand the nature of effective law. It is not a mysterious thing. So why not extrapolate backwards from that (as some major metaphysicians, but really no modern physicists, have done).
The Standard Model of particle physics. You can take various restrictions of this - quantum field theory, non-relativistic quantum mechanics - in order to simplify the discussion.
General Relativity. Restricted to Special Relativity in some circumstances.
Neo-Darwinism.
The laws are invariant through time because they say they are.
1) Is this a complete list?
2) Can you show that each law applies to every event and is invariant through all time (past and future), and that the laws are included within each other without contradiction (e.g. reciprocity of Special Relativity)?
We are looking for laws, not evolving observations and mathematical symbolism.
I didn't realize that mathematical equations and generalized, undefinable stuff like neo-Darwinism can say anything. I thought only people could say things like what you just said.
You could leave Neo-Darwinism out, so in terms of physical reality, it is a complete list. There are only two theories. The aim is for there to be only one.
The laws of physics apply always and everywhere. They work forwards in time and backwards in time. If the theories were contained within each other you wouldn't need two of them. I have no clue what you mean by "reciprocity of Special Relativity".
I wonder how much of this enterprise is the attempt, in Stephen Hawking's words, to know 'the mind of God'. Historically, the notion of 'laws of nature' was grounded in the idea of the 'handiwork of God'; the laws were made by God, in a manner analogous to how humans (or monarchs) created civic laws. Newton and even Galileo saw it in those terms. But with the decline of religion and the growth of naturalism, there has been an (often implicit) assumption that, as the divine origin of the Universe has been dispensed with, the laws must in some sense be amenable to a naturalistic explanation. It's almost like 'reverse engineering' - the idea of disassembling a mechanism to discover what makes it work. It seems to me that there may be insurmountable barriers to such an undertaking, however, and that the motivation might be questionable in the first place.
'The whole modern conception of the world is founded on the illusion that the so-called 'laws of nature' are the explanations of natural phenomena' ~ Wittgenstein, TLP 6.371
Quoting apokrisis
What if there is no beginning? Buddhists don't believe there is, and that the search for any 'beginning' is rather like the search for 'who shot the poisoned arrow', instead of seeking treatment for the poison. If it turns out that THE Big Bang is simply A Big Bang, then I would think the idea of a single beginning is forever out of reach, anyway - the universe is indeed a cyclic process of expansion and contraction, starting from beginningless time.
I think it's important to understand why mathematical laws were taken to represent 'timeless truths' in the beginning of the Western tradition. If you study the origins of arithmetic and geometry, they are of course deeply intertwined with mysticism, as in ancient cultures, the idea of 'separate magisteria' was not nearly so pronounced as it is today.
But, the ancients who discovered such basic concepts as ratio, as geometrical laws, and so on, which is what made science possible in the first place, really believed they were seeing into a higher level of reality than what was visible to the mere senses. And, who's to say that they were not? Imagine the world in those ancient times, when there were no machines, no roads or buildings - the minds that began to understand the principles that allowed the construction of the Pyramids (for instance) would seem to be on a different level altogether.
That is the 'romance of numbers' which is generally deprecated nowadays in favour of naturalist (i.e. neo-Darwinist) accounts, of counting being 'an adaption'. But I think it is worth remembering why it used to be seen that way, how it could be understood that what could be seen through number and ratio, was on a different ontological level to what was merely seen through the 'eyes of the senses'. That was very much foundational to science itself, and it was the revival of Platonism in the Renaissance that was one of the main ingredients of the so-called 'scientific revolution'.
I prefer to get beyond such claims of definiteness. You can't be radically indefinite unless you abandon a lack of beginning along with the beginning of beginning.
Quoting Wayfarer
Yep. So like Tom and everyone else, you are stuck with a classical notion of time as a space in which there is an endless symmetry of succession. And yet we know that time and energy are in a reciprocal relation which the goal of a replacement quantum theory is to explain.
Cycles are what you end up with if you can't get past the symmetry of your own mathematical equations. If you can go forward, you can go back. And then from there you can repeat without making a difference.
So sure, cyclic metaphysics seems very logical. But that's the problem. It shows you aren't ready to break out of the mental box you have constructed for yourself. A final theory is going to have to figure out what time really is. And if the theory is cyclic or reversible, then it still starts your ontology with a Platonic symmetry and not an Anaximandrian potential.
Pity that it has to come down to ad hominems, isn't it?
I think it's a near-certainty that the universe will turn out to be a cyclical process of expansion and contraction, as is everything in nature. How that constitutes 'a mental box' is a bit beyond me.
But I think it's also eminently possible that there never will be a 'final theory'. After all, if neo-Darwinism is true, we're simian, and all that we think we know really amounts to a cunning plan by the Selfish Gene in the service of its propagation.
You can take it personally but it was the collective "you" I was addressing.
Quoting Wayfarer
Again, on what grounds - except a belief that time symmetric laws support a time symmetric reality, thus ignoring all the evidence that that time symmetry is irreversibly broken back here in reality?
With respect to the cyclical nature of reality, I think it is one among many speculative ideas being considered. After all, if there can be a single 'big bang' event, what 'law of nature' says that it can't happen more than once? Of course I know I don't have any kind of argument for that.
What equations that do exist are limited in their description and are not universal to every event. If laws exist they have yet to be articulated or enumerated which is central to the question at hand.
So, sooner or later, 7 will equal 6? Just a matter of time?
As with God's laws, scientific knowledge and mathematical equations are subject to dispute and change in every instance.
The second law of thermodynamics is the obvious one.
Of course the second law is itself framed rather mechanically in terms of reversible Newtonian motions. And so it is quite easy to "prove" theorems about eternal recurrence ... given that time symmetry is being taken axiomatically for granted.
Yet meanwhile back here in the real world, the Universe expands and cools eternally. We know that from observation rather than theory - the discovery of the hyperbolic curvature due to "dark energy". So we already had the problem of writing time asymmetry into theories like GR and QM by hand - we have to add a directional time signature that is not to be found in the symmetry-describing equations. And dark energy is another observable that shows we really do have a big hole in current theory.
But anyway, at the very least, the expanding and cooling is now certain to reach a heat death, an actual Planck limit on entropification. So while it might be highly probable (with a certainty of 1) that a quantum fluctuation as hot and dense as the Planckscale would result in a big bang universe (as argued by spawning multiverse scenarios, for instance), it is matchingly (reciprocally) improbable that a heat death universe would be able to re-produce a Planckscale fluctuation of that requisite magnitude.
Once you have struck a match and burnt it out, it is "quantumly possible", if you kept striking it, that it might eventually catch fire again. Some probability can be attached to anything happening. But I think we could also say that a probability of "almost surely zero" is zero for all practical purposes, even for metaphysics.
We can never rule out a story of the universe as the original perpetual motion machine. On the other hand, we can empirically assert that it is screamingly unlikely to be the case. It is far more likely that we just haven't figured out the problem rather than that we can extrapolate from the success of simplifying our models of the world by presuming time symmetry - and writing in the right direction for time in ad hoc fashion to make the models actually fit the world as we experience it.
You mean history has shown that our scientific models just keep getting remarkably more comprehensive in scope. Our ability to describe the world accurately has been improving exponentially.
As for accuracy, physics has moved from the Newtonian concrete to the quantum ambiguous and probable. If accuracy is defined by a probability wave, then accuracy has taken a left turn. Physics is very useful but hardly precise. What is highly probable is that it will all change - again and again and again.
I remember you from another Forum, about five years back.
Your posts haven't changed at all. ;-)
Huh? Our measures of reality now have such precision that we can even measure the residual indeterminacy that persists despite our living in an era when the Universe is now so large and cold.
I'll say it again. We can now quantify uncertainty to about as many decimal places as you might require.
Perhaps you remain unimpressed. I merely then point out that you communicate with me using the resulting quantum technology and not ... telepathy or carrier pigeon.
Quoting darthbarracuda
I'm sure that the basic idea of 'natural law' predates Descartes, although, now you mention that, how he treats it or what he says about it, might make an interesting study.
Yes. I think there is much truth to this. According to Cartesian epistemology, the world can never be known directly through perception. There is a disconnect between the things that we (seem to) know empirically from our temporally situated perspective -- substances that have fallible powers -- and the fundamental (so called) entities posited by the exact sciences, that are subject to exceptionless laws.
But the exceptionless laws that govern the theoretical entities posited by the exact sciences (i.e. "basic physics" and the special sciences that allegedly reduce to it) are conceived through abstracting away most of the real and relational properties of the entities that we actually encounter in the natural (and human) world and in the laboratory. The focus of physics is the (mere) material constitution of ordinary objects.
The OP quoted Sean Carroll, a theoretical physicist, who believes that the laws of physics express invariant relations quite unrelated to the ordinary time-asymmetric notion of cause. The ordinary notion of causality applies to entities that have fallible causal powers -- things that act on one another or that we can make some use of. If we consider a homogeneous set of such entities, seek purely mechanistic explanations of their behaviors, and abstract away from their intrinsic teleological structure, and also from the practical uses that we can make of them (as real materials or artifacts), then we can achieve some explanations about the manner in which some processes are materially implemented inside of them. But we also lose sight of what those entities are and the real powers that they have. Far from being a fundamental science, physics is a very narrow science. It may appear fundamental to a Cartesian metaphysician who isn't concerned with the fact that the entities it describes can't be disclosed in experience but rather must be abstracted away from it.
If laws are indeed Descriptive, and processes affect the laws, then we cannot explain the existence of e.g. an immutable gravitational constant. If a certain gravitational value is produced by 2035 bosons, then another gravitational value would be produced by 1160 bosons.
I like your circle metaphor. However, how does one get from “unlawfulness” to a (perfect) circle?
Also I don’t see how the circle metaphor elucidates the existence of various fundamental constants, which could have been very different; see the multiverse hypothesis. IOWs in many cases the existence of limits (a la the circle form) is not apparent.
I agree with Pierre-Normand, coming at this from a more wayward angle. The metaphor of 'law' is, when you stop to think about it, quite an odd one. It was a Roman rather than a Greek concoction, to apply it to 'nature' (an odd idea too) and then 'natural law' was revived in the 17th Century, indeed by Descartes.
Behind the metaphor, a law qua law - a legal-system law - is not immutable, nor is it something that everything obeys. The laws are what ought to be obeyed, but often aren't, or no-one on the forum would ever have smoked dope. The laws don't necessarily last beyond the next meeting of our lawmakers, praise be their devotion to public duty, although they seem eternal to anyone caught up in infringing them.
So laws have - for determinists - an annoyingly non-deterministic linguistic cousin. Me I like them that way, but then I'm a non-deterministic sort of a fellow.
Do you hold that such a naturalistic explanation must entail a bottom-up explanation from a lower level of, let’s say, bosons? If so, do you hold that this is in principle possible?
What does the fact that the universe is ever-changing — cyclic or otherwise — tell us about the nature of immutable laws? Does the fact of change contradict a purely descriptive nature of those laws?
Generally I argue against reductionism and philosophical materialism - so, no.
I can see the sense of methodological naturalism, which amounts to the bracketing out of metaphysical questions as a matter of practical reason. But I think it is often forgotten that this bracketing out has been done, leading to a view that naturalism can be complete, in principle, when in fact it has proceeded by excluding something important in the first place. There is a kind of a cultural amnesia arising from that, in my view, going back to the early modern period, culminating in the kind of scientific naturalism which is nowadays the default worldview of the global intelligentsia.
Quoting Querius
The cyclic nature of the Universe doesn't really have that much bearing on the idea of the immutability of natural laws. I mean, it's possible to conceive of those laws in such a way that they will hold in any possible worlds, even if in some other respects those worlds are wildly different.
But the general point I am trying to make is that whilst a great deal can be explained on the basis of natural law, the question of 'why there are natural laws' is not one of the things that can be explained! It's really is a meta-physical question, in that it's asking 'why physical laws?' So, questions about the 'order of nature' are of a different kind to questions about 'the nature of order', if you like. And I think that is often neglected.
Does an ever-changing universe (cyclic or progressively expanding) have bearing on the idea that physical processes determine the laws and not vice versa? If the universe is ever-changing, and processes determine the laws, would that not necessarily result in ever-changing laws — contrary to what we find?
If those laws are not contingent, but exist necessarily, does that scenario exclude the possibility that those laws are determined bottom-up by physical processes? It seems to me that ‘wildly different’ physical processes cannot produce the same laws.
I don't get this. If a physical law just is a description of how things behave then if we have a description of how things behave then we have a physical law. That the description includes "could be this or this" doesn't make it any less a description.
However, if the Schrodinger equation, wave-particle duality, and the Uncertainty Principle are all that science has to call a law, then it is important to recognize the limitations of this concept of natural laws.
And what's a law? My suggestion is that a law just is a description. So if there's a description then ipso facto there's a law.
Alternatively, a physical law just is the way things behave. So if there are things that behave then ipso facto there's a law.
Either way, your comment above is the exact sort of thing that can be addressed by dropping the term "law" and just talking about things behaving certain ways and us describing these things and their behaviour. Clearly the term "law" creates problems out of thin air.
This is exactly why I suggest dropping the label altogether. We end up arguing over nothing.
The perfect circle though cannot be a real, or natural figure, and this is indicated by the irrational nature of pi. So the analogy of spinning a circle is a flawed analogy, because there cannot be a real, perfect circle, spinning in time, in order to fulfil the analogy. The perfect circle can only exist as an ideal. This was Aristotle's mistake, he assumed an eternal circular motion as unmoved mover. But such an unmoved mover would require the real existence of a perfect circle. To fulfill the conditions of eternal circular motion, the circle must be perfect, just like the circle must be perfect to fulfill the conditions of the analogy.
The problem is that the law is a sort of "ideal" description, just like the perfect circle is an ideal. How the ideal relates to what actually exists is another issue altogether.
That would be the nominalist view; by contrast, a realist would say that a "law of nature" is a real tendency or habit that governs actual things and events, but is not reducible to them. If a law is merely a description, then there is no good reason to think that it would apply to future behavior, since different things and events are involved; yet we make successful predictions all the time, not just in science, but in everyday living.
As I have noted before, the perfect circle can be real, just not actual. The irrational nature of pi has nothing to do with it - the circumference of a circle is incommensurable with its diameter, which just means that the two cannot be measured by a common unit.
I also said that the law could just be the actual behaviour. What I'm questioning is the notion of the law as some third thing. So it's not that there's gravity, our mathematical model which describes gravity, and a law of gravity. There's just the first two.
How are you distinguishing "gravity" from the "law of gravity"?
To rephrase, what is gravity if not the law of gravity? Are you defining it as the actual behavior, or is it a real tendency or habit that governs that behavior without being reducible to it?
I'd say it's the bending of space-time (or the moving of bodies with mass towards each other as a result; I'm not sure). I don't even know what it would mean to talk about a tendency or habit that governs this behaviour.
Suppose that I am holding a stone. If I were to let go of it, then it would fall to the ground. This proposition is true, regardless of whether I ever actually let go of the stone. It expresses a tendency or habit - a conditional necessity - that really governs the stone's behavior in an inexhaustible continuum of possible cases, so it is not reducible to any actual occurrence or collection thereof.
Without cohesion, we just have useful mathematical equations but certainly no laws.
Who said anything about precision? We make successful predictions all the time, since success does not require absolute precision.
Quoting Rich
I am not familiar with Sheldrake, but Peirce had the same preference; hence my references to "tendency or habit" above. I suspect that it is also why he consistently talked about "generals" rather than "universals" when discussing nominalism vs. realism.
I haven't suggested otherwise?
I don't see how a counterfactual can be considered a physical-law-as-habit. Seems like reifying.
The law of gravity is not the same thing as the mathematical model that we often use to represent it. Again, it is a real tendency or habit that governs actual things and events such that if certain conditions were to obtain, then certain outcomes would follow.
Do you deny the truth of the proposition, "If I were to let go of a stone, then it would fall to the ground"? If not, how do you explain it? What else would "a physical-law-as-habit" be, if not this kind of conditional necessity?
It's true because things in such situations behave in such ways. I don't know why this is supposed to entail laws-as-habits.
I'm not the one saying that physical laws are habits, so I don't know why you're asking me.
That would be a description of the law of gravity.
Indeed. A description of a thing is not the thing itself.
From 'descriptions of the law of gravity are not the real thing' it does not follow that there is no actual law of gravity.
Useful mathematical equations are not actual laws of nature, but that simple fact does not tell us that there are no laws of nature.
If there are no real tendencies or habits that govern things in such situations, then what constrains them to behave in such ways?
Quoting Michael
If laws of nature are not real tendencies or habits, then what are they?
The fundamental behaviour of things is, by definition, fundamental. There is no further explanation.
As I said in my first post, there is just the behaviour of things and our descriptions of such behaviour. There's no need to posit some extra thing which is the "law". If something is to be called a "law", then it's one of these things.
You do not think that the remarkably consistent behavior of things calls for an explanation? If not, why not?
Quoting Michael
You do not think that questions like why things behave as they do and why this behavior is so consistent are worth exploring? We should just accept them as brute facts and not inquire further?
But who gets to decide when something is fundamental? If we just said that it's a fundamental law that a computer turns on when the power button is pressed, we'd be wrong, since it clearly isn't a fundamental law in the sense you're getting at. It can be explained further, and thus better, than just stating that it's a brute fact and moving on.
The fact is that what you are claiming to be fundamental is perhaps not; or, if it is, there still remains the question of why it is fundamental in the first place. "It just is" is perhaps even more mysterious than "something else made it this way", but it tries to pretend to be anti-mysterious and obvious to escape any worrisome metaphysical issues that arise when people start thinking.
For all the crap that is thrown at theists for using god-of-the-gaps reasoning, popular scientists are disappointingly inept at answering this question and instead tend to pretend it doesn't exist, or use fortune-telling reasoning to assert that the answer will be elucidated later on.
Because the fundamentals are, by definition, fundamental. You might want to say that these laws-as-habits are fundamental (else I guess you'll have to explain why these habits are the way they are and why they apply to the physical things they do?), but I think it far simpler to just accept the behaviour itself as fundamental.
Similarly, you wish to argue the nominalist position that natural laws are simply descriptions of behaviors. But where does causality come from? When two hydrogen atoms and an oxygen atom bond and become H(2)O, we can make a description of this phenomenon. But this description doesn't cover all the bases. Why does hydrogen bond with oxygen? And why does it bond in some instances, but not others? The element of contingency here leads me to believe that there is legitimately something relevant that "decides" what is going to happen.
So, if A causes B, why does A cause B?
For the record, I am skeptical of laws of nature. I prefer dispositions and powers. Laws of nature are mathematical abstractions based upon these things.
But how do you know that the observed behaviors themselves are fundamental, rather than manifestations of something else that is even more fundamental? In other words, how can you tell when to stop looking for a further explanation?
Quoting Michael
Of course it is simpler to settle for calling it a brute fact, but that does not make it rationally justifiable, let alone correct. Where would science be today if Newton had shrugged his shoulders because falling from trees is merely what apples happen to do?
I think that what you call dispositions and powers - i.e., what I call tendencies and habits - are the laws of nature. Mathematical abstractions are what we use to represent them, perform calculations in accordance with them, etc.
I think we generally agree. Habits and tendencies arise from dispositions and powers - they are the "macro" scale "laws", while dispositions and powers form a network at the "micro" level. As such the macro-scale habits and tendencies can change, similar to Peircean tychism. How a system behaves depends not only on its constituent parts but also on the organization of those parts, which creates a causal web/network in which general behavior arises.
To put it more concretely, the concept of a law of nature is rather elastic and loose, and really doesn't provide any additional insights into what is happening, though it is sometimes called upon to justify some pre-determined path of events. Under such conditions, I have no idea how to answer the OP other than to say, I guess not, because I can't seem to find any law of nature anywhere, just some generalized habits which have no claim to irreducibility.
The circle simply illustrates the basic principle that a symmetry is defined by differences not making a difference.
Unlawfulness comes in once we start talking about symmetry in the sense of dynamical equilibrium states - or broken symmetries that can't get more broken and so ... become effective or emergent symmetries again.
And this is better illustrated by a gas of particles. At equilibrium, every particle is as likely going forward as going backward. So all action settles to a collective average.
But if you really want to get into it, you could consider the physics of Goldstone bosons - https://en.wikipedia.org/wiki/Goldstone_boson - which are local spinless excitations resulting from global symmetry breakings.
The usual crude description is that this kind of irreducible excitation arises the way it does for a ball balanced on top of a Mexican hat. The ball has no choice but to roll down the slope (breaking its initial symmetry). But then nothing stops the ball rolling around in the circle of the trough of the hat. It makes no difference to the energetics of the system which way a broken field actually points. And so - being free - it must happen. The ground state becomes a new effective symmetry - the ball rolling around in the circle of the trough - which the world then reads off quantumly as a new degree of freedom, an actual particle.
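As a rough numerical sketch of that "Mexican hat" picture (the coefficients a, b here are my own illustrative values, not anything physical):

```python
import math

# "Mexican hat" potential V = -a*r^2 + b*r^4, where r = |phi| is the field
# amplitude. Coefficients a, b are hypothetical, chosen only for illustration.
a, b = 1.0, 0.25

def V(r):
    return -a * r ** 2 + b * r ** 4

# The symmetric point r = 0 sits on top of the hat and is unstable;
# the minimum lies on a whole circle of radius r* = sqrt(a / (2*b)).
r_star = math.sqrt(a / (2 * b))

# The angle never enters V, so every point of the trough has the same energy:
# that flat angular direction is the (would-be) Goldstone mode.
trough = [V(r_star) for _ in range(360)]
print(r_star, V(0.0), V(r_star))
```

The point of the sketch is just that V depends on the radius alone, so the trough is a degenerate circle of ground states: rolling around it costs nothing.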
Quoting Querius
Are you talking about laws or constants? Or laws with different constants? That is, do you have a clear story on how they are the same or different kinds of things?
Personally I'm not a great fan of multiverses precisely because of the muddled thinking on these issues.
Consider again a circle and ask whether pi, as a constant expressing the ratio of radius to circumference in Euclidean space, could be different in a different universe? Doesn't pi have to be pi in all conceivable (flat) universes?
The constants of nature scale the actions of nature. So they put a ruler on the local degree of symmetry breaking. It should be no surprise if they turn out to be effective balances - themselves "geometric ratios" - as, for instance, string theory might suggest from the internal structure of compactified dimensionality.
Another simple analogy is random sphere packing - https://en.wikipedia.org/wiki/Random_close_pack
You can take a bunch of balls and stack them carefully - with maximum order - and fill 74% of the available space with balls. So 26% always remains free space. But if instead you are only allowed to shake the balls into place - do things nature's way, the probabilistic way - then you can only reach about a 64% to 36% ratio of ball to void.
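For what it's worth, the ordered figure has an exact closed form while the random one is only empirical; a quick check:

```python
import math

# Densest *ordered* packing of equal spheres (FCC/HCP) has an exact value:
# pi / (3 * sqrt(2)) ~ 0.7405, i.e. ~74% balls, ~26% void.
ordered = math.pi / (3 * math.sqrt(2))

# Random close packing has no known closed form; ~0.64 (64% balls, 36% void)
# is the empirical figure the post quotes.
random_close = 0.64

print(round(ordered, 4), round(1 - ordered, 2))
print(random_close, round(1 - random_close, 2))
```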
So naively, nature ought to be able to achieve its absolutely orderly ground state. But instead - there always being irreducible jitter - one would expect ground states to only be effective. They would reflect an average behaviour that emerges because freedom is as constrained as possible, and yet that average is itself based on a free symmetry.
With sphere packing, that emergently "grounding symmetry" is the ability of the balls to still slip about. If any balls happen to get stacked for an instant with ultimate crystalline regularity, that can't last long as the greater mass of balls will jostle them back towards the more typically random and loosely packed arrangement.
So in nature, you have to start with a global symmetry (that gets broken). But also end in a local symmetry that puts an effective limit on that breaking. Otherwise you really would end up with nothing rather than some fine-grain groundstate blur of action that is the thermal differences that can no longer create a real difference.
Which is why fundamental is a word that a process metaphysician would only use in quotes.
An ontology of self-organising habits sees everything as instead emergent. Instead of reality being constructed bottom-up from irreducible parts, it instead arrives at its own irreducible limits by way of a generalised constraint on free possibility.
Quoting aletheist
Yep. There was no comeback on that.
Michael wants to focus nominalistically on instances of behaviour, and yet at the same time he accepts that the behaviour in question is exceptionless. If we let the stone go, there is zero expectation it will rise, let alone float. So we would have no reason to talk about laws or habits of nature unless there were constraints defining the very freedoms we are pointing to.
We can call the behaviour of the stone an example of universal gravitational action because the stone apparently has many directions open to it, but moves - indeed accelerates - in always only the one. And if we have both freedoms and constraints, that is two things to talk about. You can't just label them the one thing of "instances of behaviour".
Do processes determine the laws, or are what we describe as 'laws' simply a generalisation of processes? And, if such processes, or laws, were constantly changing, could anything exist? I would have thought that there have to be 'islands of stability' for anything whatever to exist.
I habitually wake up in the morning; however, the time I awake constantly changes, and yes, it is possible that I will not awake at all. Habits are not cast in stone, they are just highly probable with deviations. Habits provide some degree of stability while still allowing for innumerable possibilities.
The analogy referred to a spinning circle, and by this description, "spinning" implies necessarily that it is actual. Therefore the analogy refers to an actual circle, which according to your statement above, cannot be a perfect circle. However, the description in the analogy described the spinning circle in a way which could only refer to a perfect circle. Therefore the situation described by the analogy is impossible, contradictory.
It's not that difficult.
If you want to talk about actual circles, then the form of a perfect circle represents their exceptionless limit. So it is what actuality can both aim for, yet never completely attain.
And yet by the same token, actuality can attain effective perfection if it gets close enough so that it makes no bleeding difference.
So if we grant actuality the purpose of being circular, it can get there as close as can possibly be measured. The very idea of "having a purpose" entails the further idea of "there being a point beyond which there wouldn't be continuing reason to care". Which is the pragmatic fact that saves us getting hung up on Platonistic paradox - ie: purposes can be satisfied, at least relatively speaking. :)
So pragmatic purposes already pre-suppose their limits because a purpose definitely conceived is one conceived in terms of what then counts as a matter of generalised indifference. Logic says eventually, a purpose gets satisfied and so further action in that direction becomes an irrelevance.
Which is what spinning a circle (or talking about rotational symmetry) illustrates. You can spin until you create a circle. But continuing to spin then doesn't make any actual difference. Once action has expressed its limit, further action doesn't change anything.
And that is the way to understand why nature develops lawful or habitual behaviour. In breaking symmetries, it eventually arrives back at symmetries. We exist in a Newtonian classical paradise of inertial freedoms because - in the end - the two principal symmetries of translation and rotation can't be washed away by any conceivable spatial action. And then relativity includes Lorentzian boosts in this picture - uniting the spatial story with the energetic or temporal one.
Finally along comes quantum mechanics to scale the actual size of the fundamental indifference our Universe displays. Our spatiotemporal geometry is effective or emergent. QM says here is the Planck ruler which we can use to measure the actual gap between what reality can manage to achieve and the classical perfection we might think it was always striving after.
If the behavior of things is actually invariant in some fundamental sense and doesn't evolve, then it would seem the invariant behavior must be determined by something that is 'over and above' the individual things. Such speculative entities are what we refer to as "laws of nature"; they do not consist in mere description. Even if the behavior of things is invariant over short or medium time scales but evolves over the long term, and that evolution is ordered as opposed to totally arbitrary, then it would seem that something must be determining the forms that evolution may take.
If the tendency of nature to form habits is universal, always and everywhere, then would that tendency not itself be a law of nature?
Wouldn't further spin increase the rate of spin? Do you think that the rate of spin is not an actual difference? If not, then there is no difference between spinning and not spinning either. Your statement seems to imply that there is no difference between a static circle and a spinning circle. But surely there must be, and if there is a difference between these two, then the rate of spin is also a difference which needs to be considered.
Yes, but logically speaking the possibilities are that the tendency of nature to form habits is universal and invariant or that the tendency of nature to form habits is not universal and invariant, no?
And actually prior to the logical possibilities regarding the tendency of nature to form habits are the logical possibilities that the behavior of nature is simply universal and invariant, or that nature tends to form habits and hence its behavior at the most fundamental levels evolves, or that nature at the most fundamental levels behaves arbitrarily and the fact that there are observations that seem to show the contrary is a matter of pure chance.
I hesitate to limit the possibilities, since there seem to be more of them. Heraclitus declared all is in flux. In his own way, Bergson adopted a similar stance, using a more formal description. One could say that Heraclitus' view that everything is always changing is a ubiquitous observation, and I would agree. To call it a law would contradict the observation, so there is no reason to say so. It depends upon whether the universe is in continuous change, or to put it another way, does the pendulum really stop at the apex at the point of reversal?
As for the fundamental level of nature, I believe it is intelligence that is always changing because it is learning.
...and an invariance is a symmetry.
So everyone is talking about the same thing, sort of. But there is a historic division between those who think about nature in terms of self-immanence versus those who conceive of limits or constraints being transcendentally imposed (and freedoms as being transcendentally created).
Greek metaphysics started out with an immanent story - Anaximander's tale of the Apeiron. And Aristotle cashed that out in his four causes model of development. He understood causality as a matter of constraint. And so Aristotle was happy with a reality that largely lives by its habits, yet is still capable of spontaneous accidents. Things can happen that "break the rules" in a way that doesn't make a difference.
But even in ancient Greece, the alternative view was brewing. The Stoics adopted the atomistic view that chance was simply ignorance of the deterministic detail. Fate rules the future by force of necessity.
And so the debate went back and forth through metaphysical history. It turned out that - being simpler in eschewing formal/final top-down causality - a reductionist approach to lawfulness was the most pragmatically effective ... in looking at existence purely in terms of material/effective cause.
This was in particular the Newtonian breakthrough. The laws could be up there in the mind of God. Then down here on Earth, everything was some tale of impressed forces. A curious dualism crept in where science appeared to both need and eschew universal constraints.
But in practice, it was useful. You frame some Platonically invariant description of a symmetry relation - like change in motion being temporally proportional and directionally orthogonal to impressed force - and then you can get on down here on Earth measuring such changes as particular events and imputing the materiality of the effective cause needed to bring about those states of observation.
So local observables came to stand as signs of global unobservables. The Lord or the Law of Gravity operated in mysterious fashion. But as Michael doesn't tire of saying, all we actually see right here and now is some behaviour, some event, which we read off in terms of an "unreal" universal abstraction. ;)
However the bigger picture of causality never went away. And following the thermodynamics revolution in particular, science has started again to think about causation in contextual or holistic fashion. We are getting back to self-organising immanence with our Big Bang cosmology and more thermally-inspired, condensed matter style, models of quantum gravity. Formal and final cause are back in the picture, along with the possibility of spontaneity or accidents as the class of physical behaviour that quantifiably doesn't make a difference.
That again is why it is irritating that you draw the wrong conclusion from quantum indeterminism. We can now measure "pure accidents" with complete precision. It is the law that there is an irreducible degree of lawlessness in the world. It is simply a corollary of the fact that classicality can fulfill its deterministic desires to the degree it makes any real difference.
As soon as you break the symmetry of a circle - put a nick or a mark on its circumference - immediately you can see (from this imperfection) that it has some relative rate of motion (or rest).
So you are simply now describing the situation in terms that are crisply different - where the disc is semiotically marked and the symmetry quite radically imperfect.
A marked spinning disc can no longer be confused with a marked still disc ... unless - sneakily - it spins so fast that the mark becomes a grey blur, and we restore a symmetry because our eyes become indifferent to "the reality". (You see, as usual there is no escaping the logic of hierarchical order. Go to either extreme and it all looks the same again - just for exactly the opposite reason!)
There may be other logical possibilities than those I outlined. But if so, what are they? The point is that any universal invariance is going to be conceived as an overarching law, simply because it is universal and so is independent of any particular things or even the sum of particular things; any universal invariance would be a unifying behavior. Where does that unity come from? Not conceivably from the individual things that are so unified. This could be said to be so even if the behavior of nature were universally utterly arbitrary and random; which of course it could not be, anyway, if it were to be intelligible at all.
But even in that extreme case the universal law would be that there can be no universal invariance. Could there then be local invariances? If so, why would they ever arise out of pure randomness? It would seem too incredible that the kind of invariance and ubiquitous natural cohesion that has been observed by humans over their history, with no reliable records of any natural transgressions (miracles or breakdowns of natural regularity) to be found, could arise by pure chance out of what is fundamentally utter randomness.
I would say that there is an aspect of quantum physics whereby, based upon current observations, there is at this time a limit to the precision and completeness of measurements (one can be more precise about this if one wishes). I don't like imbuing something with more than it is entitled to. Quantum physics is a new way of thinking about the behavior of fields and matter, but much is still left to be discovered and understood. It doesn't appear to be absolute or final, and it appears to be evolving. Beyond this, it appears to be practical for certain types of applications.
There does appear to be spontaneity in our lives but I would say it is peripheral to quantum theory, though one is free to speculate, as many do, the origination of this spontaneity which could very well undermine any possibility of a law.
So what's new? Isn't science meant to be self-correcting inquiry in that fashion? You are simply now criticising science because it is in fact epistemologically modest and doesn't go about claiming ontic absolutism (those guys represent the modern religion of Scientism).
So science could only be failing in its goals in your eyes if you yourself are a supporter of an unreasonable level of ontic absolutism. (Just another bloody fundamentalist :) )
And you have failed in particular to make it clear how the field's notable epistemic humility - the Copenhagen Interpretation for crying out loud!! - makes it guilty of over-reaching any descriptive account of the world.
Surely only a true scientist would accept a humiliation as complete as CI. :)
OK, but the issue was whether or not it is possible to have a perfect circle, such that you could not tell its rate of spinning, or even whether or not it is spinning. And if there is such a perfect circle, the perfect symmetry, which would be impossible to determine whether it's spinning or not, wouldn't it be nonsensical to speak about it as if it is spinning? That's what I am trying to get at, the nonsensicalness of this notion of spinning, which appears to be totally incompatible with the pure symmetry of a circle.
Why is that not what I was discussing?
Quoting Metaphysician Undercover
You see a featureless disc. How do you tell if it is spinning or not? Would you see anything different if a stopped disc started to move, or a moving disc stopped?
The difference between a spinning vs motionless triangle, pentagon or - most especially - any irregular shape is always going to be obvious to the eye. And yet a circle is an unbroken symmetry in that regard. So that is a mathematically important and distinct property - hardly a nonsensical one.
And then - surprise, surprise - rotational symmetry is one of physics' foundational facts. Nature can't prevent what it cannot see. And so rotation is built in as an inertial property. Any object - in the absence of impressed forces - will continue to spin at the same rate forever, in just the same way as it will move in a straight line forever due to translational symmetry.
So again, the notion is hardly nonsensical. The theorem linking the maths to the physics is pretty famous - https://en.wikipedia.org/wiki/Noether's_theorem
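A toy numerical illustration of Noether's point, that a rotationally symmetric (central) force conserves angular momentum. All the constants here are arbitrary values of my own, chosen only to make the orbit well behaved:

```python
# Symplectic-Euler integration of a 2D orbit under an attractive central force.
# Because the force always points along the radius (rotational symmetry),
# the angular momentum L = x*vy - y*vx is conserved step by step.
GM = 1.0                 # strength of the central attraction (arbitrary)
x, y = 1.0, 0.0          # initial position
vx, vy = 0.0, 0.8        # initial velocity (sub-circular: an elliptical orbit)
dt, steps = 1e-3, 10000

L0 = x * vy - y * vx     # initial angular momentum (z-component)

for _ in range(steps):
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -GM * x / r3, -GM * y / r3   # acceleration along the radius
    vx, vy = vx + dt * ax, vy + dt * ay   # kick
    x, y = x + dt * vx, y + dt * vy       # drift

L1 = x * vy - y * vx
print(L0, L1, abs(L1 - L0))   # the drift is at floating-point level only
```

The kick never changes L (the force is parallel to the radius), and the drift never changes it either, so the symmetry shows up directly as a conserved quantity.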
I am not suggesting limiting options, though. I am all for thinking of every possibility we can imagine, and then working out how we logically conceive of each one. The thing is I don't see how something like whether there are laws of nature or not is discoverable by science. Science itself operates on the assumption that there are invariant laws of nature; and it's not clear how it could function without that assumption.
https://blogs.scientificamerican.com/observations/deep-in-thought-what-is-a-law-of-physics-anyway/
My views on scientific laws are closely aligned with this view:
http://www.iep.utm.edu/lawofnat/
"In 1959, at the annual meeting of the American Association for the Advancement of Sciences, Michael Scriven read a paper that implicitly distinguished between Laws of Nature and Laws of Science. Laws of Science (what he at that time called "physical laws") – with few exceptions – are inaccurate, are at best approximations of the truth, and are of limited range of application. The theme has since been picked up and advanced by Nancy Cartwright."
Well, what I mean is that scientists expect chemicals, for example, to behave the same tomorrow as they did today. Or when hypothesizing about, for example, geological formations, they assume that materials behaved the same millions of years ago as they do today. Or when they are hypothesizing about galaxy formation, or even what would have happened just after the Big Bang, they assume that different elements, particles and materials would have behaved as expected in the hypothesized conditions. Without such assumptions science could never get started.
However, you're overlooking the fact that:
Quoting Rich
So, we all should be grateful that Rich remembered to get up today, and consequently that the solar system continues to exist. X-)
Incidentally, Nancy Cartwright's PDF is here, and worth reading in this context.
The fact that laws of nature have limited application doesn't detract from their usefulness within that domain.
Again, I say the only real question is 'why are there laws', and that whatever this question is, is not a scientific question, but a meta-scientific question.
Yes, I got that. The point I was trying to make is that I do not see how unregulated chaos can produce anything other than … unregulated chaos. ‘Symmetry’ implies repetitive patterns, which are, as I envision it, absent in chaos.
Such phenomena can only take place in a stable, orderly, lawful universe. If instead our starting point is utter unlawfulness/chaos, we would not know what to expect. Given unlawfulness, particles could pop out of existence for no reason at all. The collection of particles could turn into anti-matter and/or form a conglomerate. During observation the cosmological constant could shift, followed by an instant implosion of the universe. And so forth. Thoroughgoing unlawfulness all the way down is completely unpredictable.
What you are talking about are events and laws that result from more fundamental laws. I have no problem with that idea, as long as it not offered as an "explanation" of laws at the fundamental level.
I am talking about laws and their constituents.
I think you might have gone a bit too far. Humans will try to explain Reality whether there are varying or non-varying laws. Science does not assume invariant laws, but the existence of laws that vary more slowly than the extent of our experience certainly makes science more tractable. Our experience only encompasses approximately 14 billion years back to the surface of last scattering!
Chaos is more subtle than that. It does have characteristic organisation. The primary symmetry of "chaos" is the fractal or powerlaw pattern that is scale symmetry. In a fractal system, you have fluctuations over all scales and thus a state without any average size of fluctuation.
So chaos as an absence of constraint still has a strict kind of patterned order. It has a wildness that is mathematically regular.
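A quick way to see the scale symmetry of a power law, versus an exponential, which does have a characteristic scale (the exponent and rescaling factor below are arbitrary illustration values):

```python
import math

# Scale symmetry: rescaling x -> k*x multiplies a power law by a constant
# factor k**-alpha that is the same at every x, so no scale is special.
# An exponential fails this test: its ratio depends on where you look.
alpha, k = 1.5, 3.0

def power(x):
    return x ** -alpha

def expo(x):
    return math.exp(-x)

xs = [0.1, 1.0, 10.0, 100.0]
power_ratios = [power(k * x) / power(x) for x in xs]  # all equal k**-alpha
expo_ratios = [expo(k * x) / expo(x) for x in xs]     # varies with x

print(power_ratios)
print(expo_ratios)
```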
Quoting Querius
So you say. Yet the forment of the quantum vacuum generates particles with a spontaneity that is also completely statistically predictable. What we observe in nature is thus a spontaneity that can't help also being ordered.
Just to have an energetic event you have to have spatiotemporal separation. And to maximise the randomness of the spacing of the events itself is an imposition of an organisation. There is a limit on even making things as unpredictable as physically possible. Go past the point of effective randomness and you start getting back towards the overly orderly.
Again the ideal gas illustrates the issue. Low entropy or order would be all the particles gathered in one corner of a flask. So therefore maximum disorder ought to be every particle as spread out as possible, right? But then that would put every particle spaced out an even distance on a grid. So now we are back into a state of high order that won't last long.
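The point that perfectly even spacing is itself high order can be sketched numerically. A one-dimensional toy of my own devising: compare the gaps between points dropped at random with the gaps of a regular grid.

```python
import random

# N points on the unit interval: a regular grid vs random placement.
# The grid has identical gaps everywhere (zero variance) - maximal order.
# Random placement has genuinely uneven gaps - the "disordered" state.
random.seed(0)
N = 100

grid = [i / N for i in range(N)]
rand = sorted(random.random() for _ in range(N))

def gap_spread(pts):
    gaps = [b - a for a, b in zip(pts, pts[1:])]
    mean = sum(gaps) / len(gaps)
    return sum((g - mean) ** 2 for g in gaps) / len(gaps)  # variance of gaps

print(gap_spread(grid))   # essentially zero: every gap identical
print(gap_spread(rand))   # clearly positive: random spacing is uneven
```

So "every particle spread out an even distance on a grid" is a zero-variance, highly ordered arrangement, which is exactly why it is not what maximum disorder looks like.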
It's another version of the sphere packing story and why effective physics rules.
Quoting Querius
I don't follow. But anyway, the Goldstone mechanism started out as a mathematical curiosity, then an explanation of macrostates and quasiparticles, now it explains the Higgs and effective mass. So it is working its way down towards the "fundamental" quite nicely.
Quoting Querius
What do you mean by constituents? Laws certainly relate variables.
Also, in many cases (i.e. many ontological domains, including the objects of quantum physics) the very nature and existence of the material parts causally and/or constitutively depends on properties of the system of which they are proper parts. This can include boundary conditions, topological properties of space-time, etc. George Ellis argues for this in his recent book on the topic of top-down causation. This general point also has been argued by Michel Bitbol in some of his papers on emergence and on the foundation of quantum mechanics.
Ellis's and Bitbol's arguments are quite general, not very controversial, and free of the obscure and speculative quantum woo-woo that sometimes permeates the discussions of natural scientists when they turn to the topic of the mind. This ontological dependence of parts on whole also had been argued by John Haugeland in Pattern and Being and in Truth and Rule Following (both collected in his Having Thought: Essays in the Metaphysics of Mind). Haugeland's ontological points are quite general, but Ellis's and Bitbol's discussions show that physics, even so-called fundamental physics, is no exception and affords no refuge to the reductionist.
How times have changed.
That's the point I was getting at. What a surprise. This incoherency is considered by some to be "one of physics' foundational facts".
You see what happened to the assumed eternal circular motions which Aristotle assigned to the orbits of the planets. It turned out that they weren't actually circular, nor were they eternal. You are repeating the very same mistake with your "rotational symmetry".
The "forment" of the quantum vacuum? I assume you mean foment, but that still doesn't make any sense. Nor does the claim that the quantum vacuum generates particles with a spontaneity that is "completely statistically predictable". I think that the very opposite of this is actually what is the case.
But these particles of energy within the so-called "quantum vacuum" are just very clear evidence that quantum field theory is unacceptable. That's the real problem here, quantum field theory just hides the realities of existence behind some artificial, and incoherent, symmetries.
Science depends upon mathematical equations that describe repetitious events that are approximately the same, enough so that they can be used for practical purposes. That Newton's Equations are imprecise does not mean that they are impractical. In some cases they may be impractical in which case other equations are used.
The concept of laws of nature is not only unnecessary in science, it is totally misleading.
In philosophy it is only needed by determinists who use this generality for want of specifics. It is more of a desire than anything concrete.
Schrödinger published his famous equation in 1926. In 1935 it was noticed by Einstein et al. that his equation implied that pairs of particles prepared in a certain way would exhibit the surprising and unexpected phenomenon of quantum entanglement. In 1981 conclusive experiments were performed showing that this feature of reality was in fact present.
There are several other features of the Schrödinger equation, that revealed surprising, unexpected and technologically important aspects of Reality.
So, if as you claim, physical laws merely describe repetitious events, rather than capture and reveal the structure of Reality, then please explain how it is possible that these laws reveal novel features of Reality.
The discovery of entanglement refutes your claim that "science depends on mathematical equations that describe repetitious events".
The discovery itself is the result of human intuition and creativity, which one can put under the umbrella of science if one wishes. Are you suggesting that scientific discoveries are the result of human intuition? I'm on board with this.
How many times did it repeat in the 50 years between its discovery and the first time it was observed?
I agree that laws of nature, whatever we might think they are, or whatever we might think their ontological implications to be, are indispensable within the domain of science and even everyday life.
The question as to why there are laws, I guess could be answered in terms of any ontological framework; in terms of realism, idealism or pragmatism; the one standpoint they seem to be inexplicable in relation to would be nominalism.
I don't know. Do you have an exact number or an approximation?
If there are not rigidly deterministic laws of nature then there must be statistical or probabilistic laws, such that, for example, chemical elements have always been observed to combine predictably, and electronic circuits can be relied upon to an incredibly high degree, given their complexity.
There is no need for the concept in science, and I have actually never read a scientific paper that called upon the notion of laws to make its presentation. "Laws of nature" is just an ambiguous term without any concrete value that is called upon by some metaphysical viewpoints. I have yet to see such laws enumerated. I guess it is more convenient to suggest they exist without any need for a concrete definition. The article I presented earlier is but one example of how disorganized and messy the topic gets if philosophers or scientists are actually called upon to provide a concrete example with definitions.
I'm afraid I have no idea what you are talking about, Rich.
You're the one claiming that there were "repetitious events" being "described".
While entanglement was a particularly striking example of a feature of reality discovered, not by observing repetitious events, but by analysing a physical law, there are many others. Each one refutes your misconception.
If you wish to claim that creative intuition is the heart and soul of scientific discovery, then I would agree.
It is generally accepted by science that all the laws of nature are reducible to the laws that define the four fundamental forces or interactions; gravitational, electromagnetic, nuclear-weak and nuclear-strong.
So the real tension would be over the source of the exceptionless necessitation that the notion of universal laws implies. Are laws perfect eternal forms? Or are they approximate and history-derived habits? And the wrangling goes on because they do seem to be a bit of both.
I side of course with the view that the laws of nature are emergent regularities or states of generalised constraint that develop from a history of free interactions. So that is the Peircean story of reality as a habit. In this view, laws would seem to evolve with time. The rules today could become something different tomorrow. It is all quite loose and rather contingent.
However I don't think that is the whole story because - as is now being expressed again in ontic structural realism - mathematical physics also shows that there is a rather Platonic influence on the course of physical history.
There are structural attractors which give the Universe a pre-destined outcome - the symmetries of particle physics being the prime example of a logical necessity that impinges on development. So the force of necessity is not simply an evolutionary accumulation of certain accidents that creates limits (much like mountains randomly arise to block winds and channel water flows in a landscape).
It is reasonable to argue that the permutation symmetries of the standard model would be logically true even if materially there never was any universe. Well, perhaps I wouldn't go that far - as permutation symmetries still rely on something "material" to permute. But still, as a mathematical form, the standard model seems to lie in wait, lurking in eternal Platonia, as something that would have to manifest by force of logical necessity, imposing itself on mere historical contingency as soon as any contingent history of free material action got going.
So there is a genuine tension because two important factors appear to be in interaction. The Universe is more than just a contingent habit - an accumulation of events. There was a structural attractor in the shape of symmetry that always lay in the Universe's future. And yet it still requires that history of all kinds of shit just trying to happen for the eternal patterns to be made manifest as the limits on being.
Which is another way of justifying my claim that reality in the end has to be based on effective physics. The interaction between radical freedom (or material contingency) and radical constraint (or Platonic symmetry) is understood by us as the clash of two species of absolute - pure possibility vs pure necessity. And the actuality which we inhabit is then the average of these two aspects of nature. Existence is always a mixture, an equilibrium state.
But overall - from the cosmological perspective now afforded us by science - we can see that we are in the middle of an actual transition in terms of that balance. The Big Bang stands at one end of the spectrum in being a vanilla quantum fireball - radical freedom. The Heat Death is the other end of the spectrum in being the broken-down classical realm of crystallised symmetry.
So in terms of law - as the accumulation of increasingly specific constraints on freedom - it is only by the end that the Platonic forms will be crisply fixed. The latent mathematical structure is what existence is being shaken down towards (although with even the end state - a future as de Sitter universe composed purely of event horizons and their residual black body fizzle of radiation - that approach to the limit is only effective, in the manner illustrated by random sphere packing and the impossibility of reaching actual crystalline perfection in a world that has to employ freely-permuting material parts).
In summary, laws are a kind of hybrid like this. They mix the Platonic and the accidental, the eternal and the historic. That is the reason why they are effective, yet not actually "just a habit". There is also the fact of structural attractors that (from our point of view) pre-determine the outcome. So there is something mathematically fundamental in play - except it calls to materiality from its future rather than setting the direction from its past, and it acts top-down in constraining fashion, not bottom-up in a constructive one.
You are factually wrong on this. Einstein discovered entanglement by analysing Schrödinger's equation.
But of course quantum mechanics is not the only physical theory full of surprises. Einstein's general relativity implied several novel phenomena, such as time dilation, gravitational red-shift, black holes, the big bang, and the cosmic microwave background. He realised early on that general relativity predicted gravitational waves, which took a hundred years to be observed.
As I mentioned, each of these phenomena alone refutes your misconception.
Agree. It is an indisputable fact that through mathematical physics, many things have been discovered which could never have been foreseen by any other means. That is the basis of Eugene Wigner's famous essay The Unreasonable Effectiveness of Mathematics in the Natural Sciences.
Quoting Rich
That is much more a description of your own point, repeated ad nauseam. The equations under discussion make predictions which no mind could have envisaged, and, in fact, still can't. As for David Bohm, I don't know why you're citing him, as he doesn't accept 'entanglement' at all, but proposed 'Bohmian mechanics' as an alternative.
That is Nancy Cartwright's point - that the concept of 'laws' doesn't really make sense without God.
Quoting apokrisis
But you did say already:
Quoting apokrisis
So if it has any organisation whatever, then it's not strictly speaking chaos, in the sense envisaged as 'primordial chaos'. I think you're speaking here of 'chaotic systems', but any chaotic system you can study, does of course exist against the background of a laboratory and is contained by some boundary. The 'primordial chaos' doesn't exist against the background of any organisation whatever. Or - does it? So where does that order come from? How do you get 'order for free'? That's the million dollar question.
But that's the point. Any attempt to envisage chaos leads to discovery that it has some structure. Any notion of a big great mess still has emergent statistical order - or a lack of order that is precisely defined.
Quoting Wayfarer
And now you are recounting my usual arguments for vagueness as what we would really mean by primordial chaos. So I do grapple with the standard models of randomness and chaos so as to understand what a "truer unboundedness" would look like - hopefully mathematically as anything less is not worth the effort.
The chaos, the crystal's chance path, during the formation of snowflake fractals is comfortably situated in the context of our orderly stable lawful universe. IOWs it is not chaos all the way down. It is chaos embedded in order.
Moreover the law ‘every snowflake is six-sided’, which emerges due to symmetry/equifinality, is fully determined by underlying more fundamental laws, such as the laws which dictate what binding angles are permissible for water molecules.
My point is: sure you can watch some pretty amazing things emerge in nature by a combination of law and chance/chaos, but this does not tell us that chaos can explain the natural laws.
The standard interpretation of Peirce's cosmology is that the initial state was a chaotic mix of chance and reaction in which anything was possible but nothing persisted, hence nothing was actual. The tendency to take habits was one of those spontaneous occurrences at first, but its very nature was to persist and reinforce itself, so it did. Then other things began to take habits, and that is how matter eventually came about, with the "laws of nature" serving as its habits. These are much more consistent than habits of mind - hence why nature is often much more predictable than human behavior - but they are not completely exceptionless, since objective chance is still active in the world. Peirce was not convinced that all of the discrepancies in our measurements are due to error; rather, things really are not quite as exact as our equations seem to indicate.
Toward the end of his life, Peirce seems to have adopted a more theistic cosmology in which God is indispensable as Ens necessarium, necessary Being, to account for the order that we now observe. God conceived an inexhaustible continuum of possibilities, and then chose which of them to actualize. Spontaneity is thus a manifestation of divine freedom, rather than objective chance.
What do you think I was saying? The only subtlety is that I add that the chaos is "embedded in the order" all the way back to the start. Which is what a logic of vagueness would be required to model. Things have to begin with the unexpressed potential for chaos~order as the yin-yang synergistic outcome. And so now we have a third thing to label - unexpressed potential. Which is what I am calling Vagueness, what Anaximander called Apeiron, what Peirce called Firstness (as well as vagueness), etc.
Quoting Querius
Hah. Snowflakes are a little more complicated in fact. Water molecules actually have to bend more than they want to as their "natural angle" is not exactly that of a hexagonal symmetry. So they are an example of top-down causality or constraint producing the simplified regularity required for the very expression of the hexagonal order that comes to historically dominate the accidents (the accident that is the attachment of further water molecules).
So snowflakes are a good example of an effective solution - a global equilibrium balance that reshapes the very stuff out of which it is being formed. What you call "fundamental" is what has got fundamentally pwned.
Quoting Querius
Well it should be clear that "chaos" is a pretty bad word once you start to study the reality closely. Even when chaos theory became vogue in the 1980s, it was wildly misunderstood.
So of course I am talking about dynamical self-organisation. And chaos is the state of things when imagined with the fewest possible constraints. But you can't just have ... no constraints, or no boundary conditions.
So chaos doesn't explain natural laws - in the sense that order just emerges from disorder. That would be merely a reversal of orthodox fundamentalism or absolutism. Chaos is not the cause of all things, and order merely its effect.
My argument in favour of effective physics is instead that the chaos~lawfulness dichotomy would be a mutually-formative deal from the vague get-go. It is there as a relation in seed form even before anything "actually happens".
So it is the division that pre-exists the existence that manifests as a result of it being the case. It is the (triadic) relation that is fundamental (triadic as in its vague dyadic initial state, it of course has its whole future developmental history as a compressed axis of action and memory).
We have looked deep into the dark heart of "chaos" and found in fact precise mathematics. Order is inevitable even in chaos as chaos - to actually exist in a way we could then point back at - needs organisation or structure like any system or process.
Thus what I am talking about is an empirical discovery by science which is still recent enough not to have sunk in with that many people. But really, metaphysics should never be the same again.
But could God have had a choice if mathematical symmetries limited His options rather rigorously?
Quoting aletheist
What is missing here - from a modern hierarchy theory point of view at least - is that wholes simplify their parts so that they increasingly have a better fit.
Like I said about the way snowflake symmetry has to bend water molecules to shape, the collective level of action acts as a literal shaping constraint on the spontaneity that is doing the reacting. It limits absolute freedom by imposing some common direction or character on all free action.
And this is why habits are absolutely real. They are the cause of regularity all the way down.
Peircean metaphysics certainly gets this irreducibly complex triadic relation. But a minor criticism is that it doesn't really foreground this further crucial wrinkle of the causal deal.
Not sure what point you are making here. Chaos is a fully deterministic feature of some time-reversible dynamical laws. You can't have chaos without everything behaving lawfully.
Also, it is worth noting that chaos is a feature of classical mechanics.
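The point that a fully deterministic rule can nonetheless generate chaos is easy to demonstrate. As a minimal sketch of my own (the logistic map is a standard textbook example, not something from this discussion), two orbits of the map at r = 4 starting a billionth apart diverge to order-one separation while every step remains perfectly lawful:

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n): a deterministic rule that is
# chaotic at r = 4, exhibiting sensitive dependence on initial conditions.
def logistic_orbit(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two orbits whose starting points differ by only 1e-9.
a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-9)

# The maximum separation reached along the orbits grows to order one,
# even though no randomness appears anywhere in the rule itself.
sep = max(abs(x - y) for x, y in zip(a, b))
print(sep)
```

No noise, no indeterminism: the unpredictability comes entirely from the exponential amplification of the initial difference, which is exactly what "deterministic chaos" means.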
You are confusing the models with the reality being modelled. The map you hold in your hand may be time-reversible, but the territory it describes looks to be irreversibly dissipative.
But of course, your time-reversible laws don't take account of the material fact that energy is involved in there being a spatiotemporal process.
So we already know how our current best models of dynamics are incomplete. The motivating force of a matter field has to be inserted by hand still.
Any fool knows that deterministic chaos is reversibly deterministic, exactly as it says on the box. But any fool also ought to know that someone has to go out and make the acts of measurements to feed the hungry system of equations. And at that point, the harsh truth that observers are really involved in reality becomes more than just a sideline epistemic issue.
So just like QM and relativity, chaos theory also has its strong version of the "collapse" issue. That is how we know it to be a "fundamental" theory. We have talked the observables to death and now have to turn around and somehow deal with the still informal issue of the observer.
Again, that is what makes Peirce such a splendid chap. He was on to the metaphysics of this in a big way. He managed to reduce the observer~observables dichotomy to a formal abstract model - his semeiotic relation. He came up with the right approach to quantum interpretation even before the quantum was discovered.
Maybe you should read one of the most famous papers in the history of science.
http://www.drchinese.com/David/EPR.pdf
Succinctly, scientists such as Bohm, who gave life to the notion of non-local action via the quantum potential, describe the process as seeking differences within similarities and similarities within differences. This is precisely the process I used in my computer science career. It is fundamental to creative thought.
Sean Carroll in his book ‘The Big Picture: On the Origins of Life, Meaning, and the Universe Itself’, Dutton, 2016 writes:
<my emphasis>
But anyway, the facts are that isolated water molecules have a bond angle of about 104.5 degrees, yet the symmetry of the snowflake demands they get comfortable with the new number of 120 degrees.
I mean do you (or Carroll) believe that even the water molecules, or the nucleons of which they are composed, have some absolute fixed shape rather than an effective shape - one that is a holistic dynamical balance or some general average? C'mon.
A neutron separated from an atom has a half-life of about ten minutes. It decays into a proton, an electron and an antineutrino. However, once inside a stable atomic nucleus, a neutron does not decay at all. Its integration into the higher-order intelligibility of the atomic nucleus changes its properties. The higher-order reality has modified the lower-order constituent.
Gentlemen, our work here is done!
Quoting apokrisis
Assuming omnipotence, as Peirce did, the only thing that could have limited God's options were God's own previous choices, including the creation of those mathematical symmetries.
But it is one thing to say that God could choose to create a world in which 1+1=3, quite another to believe it in your heart. Do you think Peirce would have gone along with such a frontal assault on natural reason?
I am not seeing the connection between this comment and the notion that "mathematical symmetries" somehow limited God's options. For one thing, Peirce consistently held that mathematics deals only with hypothetical states of affairs, not actual ones. He also insisted that we cannot be absolutely certain that 2+2=4, since human fallibility entails that it is possible - even if very unlikely - that every single person who ever performed this addition made the same mistake. Presumably he would have taken the same position on 1+1=2.
Sure, Cartesian doubt means that all knowledge is in principle fallible. But Peirce then built his career on dismissing Cartesian doubt by insisting on starting right where we are - in some state of belief. And then that the purpose of reasoned inquiry is to minimise uncertainty (rather than pursue the phantasm of absolute certainty).
So what you say seems to cut across the whole tenor of his thinking for me.
What Peirce says is: "Mathematical certainty is not absolute certainty. For the greatest mathematicians sometimes blunder, and therefore it is possible - barely possible - that all have blundered every time they added two and two" (CP 4.478).
So his point appears to be that humans are certainly fallible. Even if it is infinitely unlikely, it remains infinitesimally possible that no-one has ever managed to get the simplest of all sums right.
But an omnipotent God couldn't be that incompetent surely? And more to the point, there is a big difference in executing a calculation and providing the very world which makes a mathematical model a matter of logical necessity. From certain reasonable axioms, certain deductive consequences (like arithmetical operations or permutation symmetries) must flow.
So either God is constrained Himself by the general principle of intelligibility - existence as the universal growth of reasonableness - or the whole of Peirce's metaphysics collapses for a far more serious reason. Semiotics just doesn't exist unless the sign relation is in fact a sign of something.
You left out my first two statements ...
Quoting aletheist
"I am not seeing the connection between this comment and the notion that "mathematical symmetries" somehow limited God's options. For one thing, Peirce consistently held that mathematics deals only with hypothetical states of affairs, not actual ones."
... and I am still not following you here:
Quoting apokrisis
If God is constrained by "existence as the universal growth of reasonableness," it is only because He chose to create existence that way. In fact, Peirce characterized this as God's purpose.
Quoting apokrisis
And Peirce called our existing universe God's argument, a symbol whose object is Himself and whose interpretant consists of the living realities that it is constantly working out as its conclusions.
I can relate to that.
The problem is that context itself cannot validate the assumed unity. The parts of the atom being in proximity to each other does not give existence to the unity which is the atom. There is a very particular relationship of those parts which is necessary for the existence of the atom. The atom itself cannot cause this particular relationship, because the atom only exists after the relationship is established, and a cause must be prior to its effect. To make this claim (top-down causation) is to put the effect (the existence of the atom) prior to the cause (production of the necessary relationships).
Quoting apokrisis
No, not quite. I assume that something must cause these specific relationships. I do not believe that the relationships cause themselves. Nor do I believe, as you seem to, that the thing caused by the relationships is a cause of the relationships (top-down causation). That would put the effect prior to the cause. As I said in other threads, I believe that the cause of the relationships is immanent within the parts, just like the will to act is immanent within individual human beings. When the part comes into existence, as all material things come into existence, the relations that it will have to other material things are already inherent within it. The cause has already acted on that part in its creation. So it is given its position when it comes into existence. The cause of this (that which gives it its position) cannot be the material object which is described as a unity of the parts, acting as top-down causation, because this material unity only exists after the parts come into existence, as the effect.
Sean Carroll objects to the notion of downward causation because he doesn't understand it. He wrongly believes the possibility of downward causation to contradict the causal closure of the micro-physical domain, as if a macroscopic or systemic cause of a micro-physical event entailed a violation of the laws that govern micro-physical interactions. But downward causation doesn't have this consequence. It isn't something queer, magical, or unphysical. Sean Carroll is thus shooting down a strawman notion, though, to be fair, he isn't alone in wrongly portraying downward causation in this manner; so does philosopher Jaegwon Kim. In his paper Downward Causation without Foundations, Michel Bitbol (while discussing Kim's objections) sets up the problem of physical closure thus:
"The first statement is meant to dismiss the idea of strong emergence, according to which the high-level processes are endowed with autonomous causal powers, and with ability to alter the low-level processes. It does so by assuming that for high-level processes to count as causal powers in the fullest sense, and to be able to alter anything significant in the lower level, they must induce a deviation in the laws of the micro-processes. But if this were the case, two common presuppositions of the scientific picture of the world would be denied: (a) the presupposition of nomological closure of the lower micro-physical level, and (b) the presupposition of causal fundamentalism, according to which “macro causal powers supervene on and are determined by micro causal powers” (Bedau 2002, 10). Strong emergence thus apparently amounts to an indefensible variety of ontological dualism."
Bitbol later addresses the problem of physical closure thus:
"No level of organization can claim any privilege for itself, because every such level is defined (or “constituted”) by a certain scale of intervention and observation. Moreover, no absolute meta-observer, no “view from nowhere,” is available to select one pattern of causes at a certain agent-relative level as the “truly efficient” one. This does not threaten the thesis of causal closure of the domain of physics, but only denies it any ontological significance. Causal closure here means only that it is possible to establish a systematic and self-sufficient network of causal connections relative to a single scale of intervention and experimental access, without having recourse to any other scale of intervention and access. This being granted, causal closure of a low level of organization (say the level of micro-physics) is perfectly compatible with the thesis that there are also efficient causes at an upper level of organization."
I've been trumpeting a similar view since joining these forums - but actually according to physics we can go even further.
The following statement has been proved to hold under quantum mechanics:
Any finite physical process can be simulated to arbitrary accuracy by finite means by a universal computer
The following statement is conjectured to hold for all current and future laws of physics:
Any finite system can be simulated exactly by a universal computer operating by finite means.
What this means is that our micro-physical laws have the remarkable property that they support abstractions, which are real and causal. This is why computation, language, and life are possible. It also means that human minds may be instantiated on a computer, among several other remarkable implications.
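The conjecture above concerns exact simulation, but even the weaker, uncontroversial half of the claim - that a finite computation can approximate a continuous physical law to arbitrary accuracy - can be shown in a few lines. This is a toy sketch of my own (exponential decay as the stand-in physical process), not part of the argument above:

```python
import math

# Toy illustration: approximate the continuous decay law N(t) = N0 * exp(-t/tau)
# with finite discrete steps. Increasing the step count systematically
# tightens the error, i.e. the simulation converges on the continuous law.
def simulate_decay(n0, tau, t_end, steps):
    dt = t_end / steps
    n = float(n0)
    for _ in range(steps):
        n -= n * dt / tau  # finite-difference update for dN/dt = -N/tau
    return n

exact = 1000 * math.exp(-1.0)                    # population after one mean lifetime
coarse = simulate_decay(1000, 1.0, 1.0, 10)      # 10 steps: rough approximation
fine = simulate_decay(1000, 1.0, 1.0, 1000)      # 1000 steps: much closer
print(abs(coarse - exact), abs(fine - exact))
```

Shrinking the step size buys arbitrary accuracy by purely finite means; the stronger claim, that the simulation could be made exact for any physical system, is the conjecture being discussed.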
Another point that might be worth noting is that causality does not exist at the micro-physical level. I'm not sure there is any such thing as "bottom-up" causation.
What is the significance of 'finite' here?
Quoting tom
Where does it start, then?
I would have thought 'the placebo effect' provides a cogent example of top-down causation.
I am not sure how a discussion about emergentism is relevant to fundamental laws of nature. As I have stated before I have no problem with a secondary (emergent) law like ‘every snowflake is 6-sided’, as a direct consequence of fundamental laws.
So, unless you are arguing that laws at the fundamental level can be explained by emergence, I refrain from commenting on Bitbol.
If laws that govern phenomena from a variety of empirical domains (e.g. chemical reactions, natural evolution, the actions of human beings, etc.) don't reduce to the laws of physics, or to so-called "fundamental" laws of nature, then there is no reason to think that they are a consequence of them. The laws of physics may explain, partially, how the higher level processes are implemented, but they leave open what the higher level laws themselves are.
The higher level laws (or systemic principles of organization) may depend on contingent facts about boundary conditions, or historically contingent facts about the evolution of those entities. The higher level entities can also be governed by general principles that are quite independent of the laws that govern their low level material constituents, since those higher level entities are multiply realizable in different sorts of materials or components (e.g. the same software can run on different hardware architectures). In that case, it's not the lower level laws that determine the higher level laws. At most, they may enable them through providing a contingent form of implementation. Enablement, though, falls short of causal determination.
I agree. Also every post on this forum is a cogent example of top-down causation. Question is, do we find such causation in inanimate nature.
For sure. It's Jaegwon Kim's argument that Bitbol is rehearsing here. Kim's argument is also sketched in the section Argument against non-reductive physicalism on his Wikipedia page. It is this argument from causal closure that Bitbol responds to.
Your initial query in the first post of this thread was about "laws of nature", quite generally, and the source of their universality. What makes you think that some laws are fundamental and some aren't? It's only on the assumption of reductive physicalism, and/or some rather strong thesis of supervenience, that some laws are believed to be fundamental in the sense that they would govern everything that happens in the world.
But what distinguishes the laws of physics (or of "fundamental" physics) from other laws of nature (or from normative principles of biology, cognitive sciences or social sciences) may only be that the former focus on rather general features of material constitution while abstracting to some arbitrary degree from formal principles of organization and contingent boundary conditions.
Correct me if I am wrong, but does the very concept of 'emergence' not imply a lower level of (more) fundamental laws? Emergent stuff emerges from fundamental stuff, right?
Unless you are arguing that it is emergence all the way down, which seems incompatible with the concept of emergence, I do not see the relevance to a discussion about fundamental laws.
EDIT: Emergence does not explain the level on which it sits.
There are many examples in physics. George Ellis (responding to Sean Carroll) provides an example in the comment section of this post on emergence by Massimo Pigliucci:
"However this billiard ball point of view, based in Newtonian physics, is invalid once one takes quantum physics into account. A classic example is the fact that the mechanism of superconductivity cannot be derived in a purely bottom up way, as emphasized strongly by Bob Laughlin in his Nobel prize lecture, see R B Laughlin (1999): `Fractional Quantisation'. Reviews of Modern Physics 71: 863-874. The reason is that existence of the Cooper pairs necessary for superconductivity is contingent on the nature of the relevant ion lattice; they would not exist without this emergent structure, which is at a higher level of description than that of the pairs. Hence their very existence is the result of a top-down influence from this lattice structure to the level of the Cooper pairs. The concept of a given set of unchanging interacting particles is simply invalid. They only exist because of the local physical context. One can also find many examples where the essential nature of the lower level entities is altered by the local context: neutrons in a nucleus and a hydrogen atom incorporated in a water molecule are examples."
(Notice that the neutron example provided by Ellis was also given by Wayfarer recently.)
There need not be any such implication. One can argue for a notion of strong emergence in a context of explanatory pluralism without endorsing a stratified and foundational view of nature (Bitbol's paper is titled "...without Foundations"). Alan C. Love also argues for a notion of emergence that doesn't rely on a stratified view of nature in his Hierarchy, causation and explanation: ubiquity, locality and pluralism.
The relevant distinction of higher/lower "levels" is local (to a specific explanatory context) and is relative rather than absolute, as is Aristotle's distinction of matter and form. Material bottom-up causal explanations appeal to features of implementation or constitution while top-down explanations appeal to features of systemic organisation. But "lower level" entities have form too, and "higher level" entities have material properties too. There need not be an ultimate level at the bottom, and strong emergence allows us to dispense with the need for one.
Yes, in a sense, it's emergence all the way... But there need not be a bottom, fundamental, level. There need not be an ultimate formless material constituent of everything, and such a notion is dubiously coherent anyway.
No, of course not, and neither do explanations in terms of material constitution explain the laws and features of the constituted entities, except in the odd case where reductive explanations are available. But that is the exception rather than the rule.
Emergence from nothing it is.
There are those who demand understanding and those who do not.
This is a gross mischaracterization of the position of the nonreductivist/emergentist/pluralist. What is denied is a unique "fundamental" material explanation of "everything". The thesis of strong emergence is not a claim that there is something at the bottom that we must not seek an explanation of. On the contrary, a plurality of explanations is sought that is sensitive to the specific context of existence of the entities being inquired about. The pluralist is much more curious and investigative than the reductionist since she doesn't only look down; she also looks up and sideways.
However, if you are correct, it is the claim that there is not necessarily something at the bottom.
Unless one argues that there is something up there and/or sideways, then what we have is 'emergence from nothing'.
This is the same problem which I pointed to. Downward causation puts the effect prior to the cause. The atom only exists after the relationships between the parts has been established. The atom is the effect of these relationships. Therefore we must look for something other than the atom itself, as the cause of these relationships.
The problem is very evident if we refer to new relationships which come into existence, such as when human beings create synthetic chemicals. It is not the case, that the new complex object acts through downward causation, creating the new relationships necessary to bring itself into existence. It is the case that human minds determine the necessary relationships, then human beings act to bring those relationships into existence, causing the existence of the synthetic chemical.
What the proponents of downward causation seem to miss, is that there must be a cause of existence of the relationships which exist between the parts of a complex object. This cause of existence cannot be the complex object itself, because the complex object only comes into existence after the relationships. If some of these relationships are called "laws of nature", then the cause of the laws of nature is something other than the complex objects which come into existence as a result of the laws of nature.
Quoting Wayfarer
Quoting Querius
We must be careful not to automatically assume that cases of mental causation, such as intention and free will acts, are automatically top-down causation. We do not know where the free will act derives from, but it has influence over the most minute parts of the brain and appears to be bottom-up.
Take as an example our theory of Life. It is a theory of replicators subject to variation and selection. The theory of life does not even mention anything physical: replicators, variation, selection, ... are all abstract!
If you look at our best theory of information, you will see that it is, at its core, a theory of counterfactuals! And the information is independent of the physical substrate.
In most of science, the fundamental objects that the theories deal with are abstract, be they heat engines, replicators, information, computers ...
In the long run, cosmological theories will have to take account of the existence of sentient beings, and what they choose to do.
Reductionist and emergentist accounts of the state of affairs must be compatible. The laws of physics are always obeyed.
The main thing to keep in mind when pondering such questions is that physical laws are not features of the universe (nor are fermions, bosons, etc.). They are features of the conceptual apparatus we've invented to explain the universe.
So, you think that electrons (a fermion) and photons (a boson) don't exist? Rather they are merely part of a "conceptual apparatus"?
They exist if a physical theory postulates their existence, and that theory is successful in explaining various empirical phenomena.
There are two grounds for contending that X exists: X is a subject of direct experience, or X is postulated by a successful explanatory theory. A theory is successful, of course, when it correctly predicts future experience.
What a fascinating discussion! I was most struck by the vigorous push-back from some of the commentators to Pigliucci's post. Many of the ideas he espouses seem so elementary and obvious to me that I always forget that the reductionist wing of science is as well entrenched as it is. The paper he discusses is equally fascinating, and it jibes with so much of what I've been reading recently. I love that Pigliucci is so attentive to the fact that the reductionist program so rarely fits the science itself, being rather read into the data from the 'outside'. It gives me hope. Thanks for the link : )
An excellent argument in favor of the fundamental irreducible nature of laws, which, as far as I can see, no one has attempted to address.
This reminds me of the famous 1948 Copleston vs. Russell debate on the existence of God. At one point Russell counters Copleston's argument from contingency by saying:
So if more particular laws emerge from more general laws, what's illogical about extrapolating from that observable fact? If what we see is emergence, then why shouldn't we think that is all there is, rather than having to leap to belief in something mysterious, transcendent or supernatural?
All that is required then is a proper understanding of emergence itself. And your claim - the usual reductionist one which makes emergence some kind of elaborate linguistic illusion - is not a proper model of emergence.
Emergence - as it is understood by hierarchy theorists, Peirceans, and others who take it seriously - is a holistic or cybernetic deal. The whole shapes the parts that construct the whole. So what is "fundamental" is hierarchical development itself. Existence begins not with nothing but with an "everythingness" - a "state" of unbounded potential. And then limitations develop to produce definite somethingness.
As I say, this is simply a fact when it comes to accounting for the "higher level laws". It is what we mean by them being emergent. Complexity and particularity arise as the general (some generalised set of freedoms) becomes more constrained in specific ways. History locks in its own future by removing certain possibilities as things that could actually happen. And the future is then woven from what was thus left open as a possibility.
So we know this holistic understanding of emergence is right just from looking at the world and listening to how physics actually describes it. For that reason, it is more logical to expect that emergence of this kind can explain it all ... or at least get as near as we are ever going to get to answering that ultimate question of "why anything?".
But instead you have fallen into the usual trap of expecting reality to bottom-out in some fundamental atomistic stuff. And in the 1880s, most physicists would have agreed with you, feeling that the great success of classical mechanics and atomistic metaphysics had basically put an end to physics - leaving it "an exhausted mine". Yet then guess what happened next.
Of course it is just as bad to make the other monistic claim - that everything is instead top-down. That just winds up in mysticism.
If we want to talk about real emergence, it is irreducibly triadic (because everything must emerge - the forms, the materials, and the dynamical balance of these two which is then the substantial actuality).
So you are not even dealing with the actual argument of a proper holist yet. You are just thinking in terms of the reductionism that wants to neuter emergence by treating it as "mere appearance". Or in the slightly more sophisticated defensive position of "supervenience", one shrugs one's shoulders and says even if all this top-down stuff is true, it can't change anything important down here at the level of concrete atomistic particulars.
But unfortunately for supervenience, there are no concrete particulars except to the degree that top-down constraints have shaped them.
One can imagine taking an instantaneous snapshot of some material system and transporting its information to make a perfect clone ... that would then roll on as if nothing had happened. Beam me up Scotty! Dissolve my atoms in one place, produce a replica in another. Hey presto.
But science fiction is science fiction. Real science knows it has a fundamental observer problem. The acts of measurement needed to animate the mathematical equations are not reducible to the formalisms of theories. And this is going to catch you out any time you start talking about the big questions of existence.
So supervenient emergence sounds good - if you don't understand the basic problem of observerless physics.
It is something that does catch out everyone. Tom is another example in that he repeats the same error at the level of the information. He believes in observerless computation. And so he has no problem with a scifi story of human minds being downloaded. Or existence itself being a grand computation (finitude being something that can be taken formally for granted and not instead a fundamental problem in being an informal issue of deciding when an act of measurement is "sufficient to purpose").
Anyway, the point is that to dismiss a metaphysics of emergence, one first has to learn quite a lot about what that position entails. Reductionists have conjured up their own strawman versions which they can erect at the boundaries of their domain and say "see we understand, and it doesn't change anything". To people who actually study emergence, you can see why the constant waving of the limp effigy of supervenience or epiphenomenalism is rather annoying. :)
Forgive me, but I can't take any argument for a divine creating intelligence seriously. There is just nothing about this actual observable world which suggests that minds exist outside a state of semiotic complexity. So I am happy to reduce existence to that abstraction - the notion that the universe itself arises as a kind of mindful, self-organising state in being pan-semiotic. And if you want to say that is what theists might really mean by "god", then fine. But once you start attributing free choice to an immaterial creator or material being, that's another kettle of fish. It goes against the whole point of believing the sign relation to be the fundamental seed of existence.
Theism (of the first cause type) is simply contradictory of Peircean semiosis ... even if Peirce himself made some weak arguments for the difficulty in resisting such theism in the end.
And as to what Peirce really thought about maths, it's not something I've really looked into, but the commentary suggests he vacillated between constructivism and Platonism - like all philosophy of maths does.
https://jeannicod.ccsd.cnrs.fr/file/index/docid/53339/filename/ijn_00000208_00.txt
But my own argument here is that his oscillation between these two poles doesn't have to mean he was simply confused or inconsistent. Instead, I have argued that this standard dilemma is to be expected because the actuality is in fact that both poles are correct in defining the dichotomistic limits of (mathematical) being. There is both contingency and necessity in play - with actuality being the effective balance.
So at the worst, it is a "good thing" that Peirce didn't just plump for one metaphysical extreme over the other. To reduce to some monism would be contradictory of his own holistic triadicism.
Thus yes, every mathematician in history might have added up two plus two incorrectly. And yet also the mathematics of symmetry could be maths that has a Platonic strength that even "God" could not question.
(As further clarification, the maths of symmetry I hold as the highest form of maths because it is the pure science of constraints. Arithmetic is clearly just the science of constructive acts. That is why arithmeticians do end up making lumpen statements like "God created the integers". If your mathematical metaphysics has to start with concrete atomistic construction, then like all reductionists, you end up with this kind of hand-waving towards foundations as brute facts. Arithmetic's lack of holism is why division is such a problematic operation of course. But I digress even further...)
This narrative seems akin to Dawkins's Weasel program. Also here we have ‘unbounded potential’ at the start: 'WDLTMNLT DTJBKWIRZREZLMQCO P'. And next ‘limitations develop to produce definite somethingness’. And indeed ‘complexity and particularity arises as the general becomes more constrained in specific ways’. And yes also ‘history locks in its own future by removing certain possibilities as things that could actually happen.’ And at generation 43 ‘METHINKS IT IS LIKE A WEASEL’ emerges.
The two stories are a perfect fit.
The problem for Dawkins is that he has to explain the existence of well-crafted boundaries (fitness landscape) which produce the target sentence. IOWs where did that fitness landscape come from? Such a landscape potentially exists for any phrase whatsoever, and not just for METHINKS IT IS LIKE A WEASEL. Dawkins's evolutionary algorithm could therefore have evolved in any direction, and the only reason it evolved to METHINKS IT IS LIKE A WEASEL is that he carefully selected the fitness landscape to give the desired result. Dawkins therefore got rid of Shakespeare as the author of METHINKS IT IS LIKE A WEASEL, only to reintroduce him as the (co)author of the fitness landscape that facilitates the evolution of METHINKS IT IS LIKE A WEASEL.
The same problem for strong emergentism: it has to explain the existence of well-crafted limitations which produce order. Strong emergentism gets rid of order by natural laws, only to reintroduce order as following from ‘limitations’. It’s the same magician’s trick. Dawkins hides Shakespeare in the landscape and the emergentist hides the ordering principle in ‘limitations’.
Yep. Like the Copenhagen Interpretation, we accept our epistemic limitations. In the end, all we have got is some state of conception that looks pretty consistent. We create for ourselves some formal theory. And then we agree with ourselves that certain acts of measurement can be taken as signs of the thing we mean to talk about. We can read a number off a thermometer and say "that is the temperature". And away we go, making predictions - ie: suggestions about further acts of measurement.
So the slippery bit is the act of measurement. We have to presume that our instruments are making some kind of proper translation of the physical reality (the thing in itself) into the symbolic currency on which our theoretical conceptions are grounded. The numbers on the dial are phenomena, not noumena. But we operate on the expectation that the relation we have established with the world in this fashion is reliable. It tells us what we need to know - at least in terms of the purposes we might bring to the table.
So "existence" becomes a symbolised reality. We say yep, that is the temperature - I read the number off a suitable instrument.
Of course we can always hope that through all our scientific advance, we are really getting down to the bottom of things. But just from thinking about the logic of this modelling relation we have with reality, we can see that might be a rather false hope.
For a start, the essence of any act of measurement is a severe constraint on physical existence. The needs of computation mean we have to impose finitude on the world to allow sharp observation. Less is more when it comes to information that has meaning. We want the message coming in from our instruments to be all signal and no noise.
And likewise the other feature of modelling is that good models need to be based on (unrealistically) sharp dichotomies. We want absolute separation of that which (as the contrast itself implies) is not in fact in a state of actual separation.
So we come at reality with a crisp division - like law vs particle. Or formal vs material cause. We break things apart with conceptual violence so as to stand "outside" the world we must in fact stand within in fully participatory fashion. That is a necessary fact of modelling epistemology. But it is then naive metaphysics to think that the needs of modelling mean that the world in itself has both laws and particles that exist in some mysterious dualistic and absolute fashion.
The complementary limits on reality are just its conceptual extremes. That is why it is equally wrong to talk about laws or particles "actually existing". Yet as far as our modelled understanding of reality goes, both would be real in the sense of being real measurable bounds on actual existence. Formal cause vs material cause (what laws and particles represent) are what you would seem to see if there was really such a thing as standing outside the world as it substantially is.
Of course, folk have little problem of understanding formal cause in this fashion. But they get very prickly when it is suggested that material cause is in exactly the same boat - by logical necessity.
But they are not the same story at all.
Your scrambled sentence already begins with the counterfactual definiteness of some set of letters ... a conventional set of marks which I know how to read and thus can tell a gibberish sequence from one that has a reasonable interpretation.
And your citing of Dawkins and evolutionary constraints continues to underline that you are nowhere near the kind of holistic emergence I am talking about.
It is a central problem of evolutionary theory that evolution can only explain the reduction in variety. It can't explain the presence of that variety in the first place.
This is why theoretical biology has gone over to evo-devo thinking in a big way. A theory of evolution has to be coupled to a theory of development (or dissipative structure).
I am sorry that you see it that way, but I do forgive you.
Quoting apokrisis
Obviously Peirce himself did not think so - not even remotely - since he explicitly affirmed the Reality of God as Ens necessarium.
I do suppose that at some point your emergence narrative also gets past the phase of nothingness.
Irrelevant.
Not an argument. You talk of ‘limitations’, how is that not comparable to constraints?
Random mutations.
If nature were not animate, we would not be in a position to ask the question.
Quoting apokrisis
But neither does semiosis make any sense in the absence of mind. What is a sign without an interpretant?
I think you've got an issue with your model of the nature of mind and of 'God'. I know you react viscerally against anything you perceive as a 'God idea', but consider other models of cosmic order, such as logos, Tao or Dharma. They too suggest a kind of 'intelligible order' but not along the lines of what is usually described as 'theistic personalism'. (Not looking to start an argument, just making a suggestion.)
Quoting apokrisis
If it helps, angels are said to require no speech.
Quoting tom
That is close to Heisenberg's view - electrons (etc) don't actually exist in the way stones and flowers exist; they don't fulfil the requirements that generally define what constitutes an 'existing object'; they are on the borderline between potential and actual.
Quoting Metaphysician Undercover
I'm not sure you're getting the meaning of 'top-down' or 'bottom-up'. Intentionality and free will are both 'top-down' practically as a matter of definition; which is why materialists, such as Dennett, are obliged to try and deny them.
Incidentally, the 'Workshop on Naturalism' which is mentioned in Pigliucci's blog post, referenced by Pierre Normand above, was also discussed in some detail in Andrew Ferguson's review of the reaction to Thomas Nagel's Mind and Cosmos, called The Heretic.
But the problem is that you don't understand the current science well enough to have a clue what stage the narrative has reached. And you don't seem that interested in finding out either.
Quoting Querius
So natural selection can certainly remove those. But how does natural selection also create them?
(It's a basic issue in evolutionary science.)
Talk of an external intelligent creator is simply question begging - displacement activity rather than metaphysics.
But talk of an immanent organic telos is something I can get right behind as being even "quite magical", and a good reason to reject "silly reductionists". So I am always citing the various ancient expressions of this general naturalistic view of existence. You could add Judaic Ein Sof to that list.
As I said, "We do not know where the free will act derives from...". Perhaps you, for some reason, think that free will and intentionality are examples of top-down causation, but I think you've got it wrong. Clearly the free will act begins in the most minute parts of the neurological system, perhaps within the brain, moving outward to move the parts of the body, which move things in a larger surrounding area. How do you define "top-down", such that this activity is consistent with "top-down causation"? As I've argued before, I think the entire concept of top-down causation is misguided, it's a fiction. It's an attempt to explain the existence of life through physicalist principles.
This begs the question.
But then, explaining free will in terms of neurological events is the essence of the very 'physicalist principles' which you then go on to reject. That is 'neurological reductionism'.
Again, I don't think you get the distinction between top-down and bottom up.
Quoting Metaphysician Undercover
Well, I gave an example of the neutron's properties being determined by its being situated in an atom.
But, as mentioned, you could also consider the placebo effect an example of 'top-down' causation. Why? Because it is a case where a subject's belief produces a physical result; belief is 'top down', because it belongs to a higher level of organisation than the molecular and cellular structure of the patient. If the effectiveness of medicine relied solely on molecular potency - which is the 'bottom-up' view - then placebos ought not to work. Indeed there ought not to be any such thing as 'mind-body medicine' or psychosomatic illnesses, and accordingly the existence of those is regularly dismissed by materialists. According to reductionist materialists, there can only ever be 'bottom-up' causation.
Thanks for linking that lecture. Bitbol is very good and draws many seemingly disparate elements together.
So sayth the emergent self who is confused about the roles of mutation and selection in evolutionary theory.
Selection obviously does not create variety. Variety is created by random mutations.
Basic indeed, and I wonder why I have to explain it.
Why are you babbling about mutations? My point about your weasel was that the letters already exist. So how did that situation develop? Recombination is one thing. But where did that alphabet come from? If you want to say it evolved, run me through the story.
Your claim:
The problem with your claim is that evolutionary theory can explain variety. Variety is explained by random mutations in the DNA.
So I replied:
Your response:
Here you are talking about ‘random mutations’ and asked: ‘how does natural selection also create them?’ ...
I have tried to read this charitably. I supposed that you were not asking ‘how does selection remove random mutations and also create random mutations?’ That would be a very confused question.
So, again, trying to read your question charitably, I did suppose that your question is not about random mutations, but about variety in general, as in ‘how does selection also create variety?’
So I replied:
Your response:
I leave it to the unbiased reader to decide, who is babbling here.
Let me google that for you.
I think this is missing Apokrisis's point. I may try to put it differently. There was a sharp qualitative break when abiogenesis occurred a few billion years ago. Prior to that event, there may have been chemical evolution in the sense that more complex molecules grew from random interactions of simpler molecules, and some crude process of selection. But this movement towards complexity didn't have any autonomous teleology since the complex molecules, or their parts, didn't have any organic function. It is only when early replicators not only were passively selected by environmental pressure according to fitness, but also began to strive to survive and replicate, that they could be considered alive. They then had teleology in the sense that their parts became functional organs and they acquired autonomous behaviors.
Thus, the alphabet of life -- what is being varied, mutated and selected by environmental pressures -- doesn't consist in meaningless nucleotide sequences. It rather consists in functional (and thus meaningful) elements of anatomy, physiology and behavior. This teleology is manifested in the structure of whole organisms, and their organs, only in the ecological context of the holistic forms of life that they instantiate. From the moment of abiogenesis onwards natural selection became a top-down (and teleological) causal process.
My claim (see this post) is that Apokrisis' emergence narrative (see this post) is very similar to Dawkins's Weasel program. His counter-argument, as I understand it, is that there is a fundamental difference because Dawkins's Weasel has an unexplained starting point and his emergence narrative has not (?).
I do not agree that there is a fundamental difference.
I hold that naturalistic evolutionary theory is incoherent and I also hold that it does not describe a top-down and/or teleological cause.
Why? Why? and Why?
I just wanted to outline my dispute with Apokrisis. Here you can read why I hold that Dawkins' Weasel and Apokrisis' emergence narrative are very similar. In that post you can also find a critique which applies to evolutionary theory. However I decline to offer an in depth critique of evolutionary theory, which would be off-topic.
We cannot ignore the facts of neurological involvement in the free will act. The question for the metaphysician is the cause of such activity. It seems very clear that the activity of the nervous system is the cause of the activity of the human body. And this indicates that top-down causation may not be consistent with the facts. If stating facts is begging the question, then the position I'm arguing against is most obviously fictional. Top-down causation denies that the free will act is free.
So I believe it is you who does not understand top-down causation. And by "understand", I mean to recognize it as a misunderstanding.
Quoting Wayfarer
As I explained, this assumes an unacceptable causal relationship. This perspective assumes that the atom is the cause of the relationships which constitute it. From the premise that a cause is necessarily prior in time to the effect, this means that the atom must exist before the relationships which constitute it exist. To my mind, this is impossible, to say that a thing exists prior to the existence of its constituent parts. To resolve this, we could say that the idea of the atom exists prior to the constituent parts, as the "blueprints" for the atom, but then we are no longer referring to the atom itself, but a Neo-Platonic "Form" of the atom, which acts as the cause of existence of the atom.
Quoting Wayfarer
You are using "belief" here as a noun, it refers to a static thing. But you are claiming that this static thing is a cause. This is the problem with Pythagorean idealism which Aristotle demonstrated. Such idealism assigns existence to ideas, but in doing so, it gives the ideas the property of passivity. This makes it impossible that ideas are truly causal. Therefore we have to see beyond this problem, as the Neo-Platonists did, and find a way to understand the ideas as active.
The physicalist and emergentist perspective is to describe the mind as a property of the activity of the brain. This means that things of the mind, consciousness, ideas, and concepts, etc., are caused by the physical activity of the brain. But for a complete understanding we must look for the cause of such physical activity. Why do living things behave in the peculiar way that they do? The emergentist wants to look at the activities of the physical universe in general, and show how the activities of living things are really no different from the activities of non-living things, and this is how they account for the emergence of life and consciousness. They create a false compatibility between the activities of living things and the activities of non-living things, through the means of "top-down" causation. But the dualist metaphysician will understand this as wrong, and maintain that the soul is the true cause of the physical activity of the living being, and this is distinctly different from any top-down causation.
You cannot pass this off as apokrisis' point, because this is completely distinct from and inconsistent with, what apokrisis argues. Apokrisis assigns telos to the universe in general, so it is not as if telos emerges with the existence of life, it was a property of the universe already.
I agree with your clear analyses.
Perhaps one could claim, as Apokrisis may do, that the shape of an atom is a fortunate ‘limit’ which its constituents run into. So it is not the atom that is the cause of the relationships which constitute it; the ‘limit’ is the cause. Moving downward, the constituents of the atom are, in turn, also the result of ‘states of unbounded potential’ running into ‘limits’...
To cut a long story short, it is 'limits' all the way down, up and sideways.
So, starting from "everythingness", there are just all these fortunate ‘limits’ which shape the universe and the stuff in it.
One of the problems with this view is that the positioning of limits is in need of explanation. Limits can produce any possible universe, so why are those limits positioned so fortuitously that we end up with a stable universe fine-tuned for intelligent life? What are the chances that random limits produce human brains, spaceships, jet airplanes, nuclear power plants, libraries full of science texts, novels and super computers running partial differential equation solving software?
This may be the problem here in a nutshell. We describe the world in terms of constraints, we can call them "laws". There are two types, the laws of physics and the laws of ethics, or societal laws. The former describe the ways that physical objects behave, and the latter describe how human beings ought to behave. When we look at what the laws refer to as real existing "constraints", we can refer to those constraints as "causes". But this is to use "cause" in the sense of an influence, something which affects activity. We can say that these constraints have an effect on specific activities, and in this sense they are causes.
The constraint, as a cause, is inherently passive. It functions as a cause merely by affecting an occurring activity. This presupposes the existence of the activity. So the constraint is a cause only if there is activity. Therefore we still need to consider "cause" in its true primary sense, as the activity itself, which is required in order that a constraint may be capable of being a cause. If we focus on the constraints, as causes, and divide constraints into top-down and bottom-up constraints, we have simply distracted our attention from the true, primary cause, the activity itself, which is being constrained.
Yes, I'm aware of this feature of his view, and it was indeed to accommodate this point that I used the qualifier "autonomous teleology" before the passage you quoted to characterize the qualitative break when the teleology manifested by a primitive replicator became its own (meaning that of the life form that it now exhibits) rather than just a general "pansemiotic" teleology. I'll let Apokrisis comment further on my gloss on his argument. I also wanted to add, regarding the alphabet of life, that each life form has, in a clear sense, an alphabet of its own.
In that case there are only ~3 life forms - prions, viruses (based on RNA) and everything else (based on DNA).
Low level neurophysiological processes play a dual role in the etiology of intentional human behavior. The first role consists in its causal relevance for such things as muscular contraction, and hence of "raw" (physically describable) bodily motion, and can be traced back to low-level "efficient" causes: e.g. to previous neural events and to the physical stimulations of sense organs. But intentional human behavior also has a higher level characterization where it is evaluated in point of practical rationality.
Free actions are actions that an agent can be deemed personally responsible for and that manifest her sensitivity to reasons; and this sensitivity rests on molar ("person level") rational abilities that are merely enabled, rather than caused, by underlying "low level" cognitive functions and neurophysiological processes. (They are also enabled by normal maturation and acculturation within the social context of a rational form of life). The neural cause of a specific muscular contraction can tell you why some muscle contracted at the time that it did, and hence why an arm rose, but it leaves you clueless as to why the agent raised her arm (or even whether the motion was intentional at all). It's only from the standpoint of the rational/teleological organization of the cognitive economy of an individual as a whole that rationality transpires and that sensitivity to practical reasons is manifested as a form of top-down causation.
On my view there are as many life forms as there are individual species. Prions and viruses are akin to parasites, so their life forms aren't quite distinct from the life forms of the animals or plants that they infect. The main point is that what is being selected by the environment (selected for or against) always is something like a behavioral ability (or defect) or a bit of physiological function that always already has its significance, as the sort of process/ability that it is, owing to the functional (and hence teleological) rôle that it plays in a distinctive living organization -- already organized forms of life within a specific ecological context.
Dawkins' Weasel algorithm is a simple illustration of the power of constraints - given mutable variety. So not only does he have the computer selecting the letter pattern closest to a pre-existing goal, but also the computer generates a population pool of 100 sequences at a time, with a built-in variance of 5%. So even the mutational variety is set so as to meet some external pre-existing goal.
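The Weasel scheme described above can be sketched in a few lines. This is a minimal, illustrative reconstruction, not Dawkins' original program: the target string, the 27-letter alphabet, and the exact selection rule (keep the best of each pool of 100, with a 5% per-letter mutation chance) are assumptions based on the figures quoted here.

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"   # the pre-existing goal
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "  # 26 letters plus space
POOL_SIZE = 100       # population pool of 100 sequences per generation
MUTATION_RATE = 0.05  # built-in variance of 5% per letter

def score(candidate):
    # Fitness = number of letters matching the pre-set goal.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent):
    # Each letter independently has a 5% chance of being replaced at random.
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
        for c in parent
    )

def weasel():
    # Start from a random string; each generation, breed a pool of 100
    # mutants and keep the one closest to the goal. Returns the number
    # of generations needed to reach the target.
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        pool = [mutate(parent) for _ in range(POOL_SIZE)]
        parent = max(pool, key=score)
        generations += 1
    return generations
```

The point being made is visible in the code itself: both the selection criterion (`score` measures distance to `TARGET`) and the mutational variety (`MUTATION_RATE`, `POOL_SIZE`) are fixed in advance by the programmer.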
Dawkins says the final sequence goal is not a big teleological puzzle because in nature, that becomes just the (now utterly contingent) constraint of some fitness landscape. So first you have a world created by some programmer playing God. Then - in good old reductionist fashion - the world suddenly takes over the goal-setting ... in a way that is as little teleological as it is possible to imagine. It becomes good old random shit again.
Great. And also notice that nothing further is said about the programmer's role in setting up the mutational variety so nicely.
Nor also - the even deeper point - that the whole example skips over the issue of the epistemic cut where physical acts become symbolic acts (and vice versa).
So as I pointed out, the very thing of marks with meanings that could be interpreted - letters that could make words that get read by a mind, or gene sequences that could make proteins that then become the switches and the motors regulating dissipative metabolic processes - goes assumed and not explained. In Dawkins' example, we have already crossed the Rubicon between the physical realm and the symbolic realm.
So - as usual when listening to an arch-reductionist - there is a ton of question begging. And my holistic approach addresses all those questions that go to the very thing of how life - as a biosemiotic relation - could arise emergently in the first place.
Hence while your strained attempts at a put-down are mildly amusing, I might wish you would make the same effort to actually read a little deeper before parading your basic ignorance of the issues you have chosen to raise here.
Again, the reductionist imperative being expressed as a law of thought. If nature seems divided against itself by a metaphysical dichotomy, we must rush in and save it from itself by deciding that the duality is in fact a monism. So we collapse the complexity and hoist the reductionist flag, proclaiming all is calm and well in the world again. Only one of any two things can be fundamental.
It is laughable. This is what happens because humans have developed an essentially mechanical culture. If you make machines, eventually you want to be a machine.
Anyway. Causality is dichotomous because that is simply how metaphysical development works. To change a vague state of affairs, the vagueness must be crisply divided towards its complementary extremes of possibility. This is the dialectical logic that got started in ancient Greece and now - because people believe themselves to be meat machines (even if infested with some kind of secondary soul stuff) - it can't even be seen when it stares them in the face.
So it is not a surprise but a prediction of dialectics that causality would gain its real world definiteness by becoming divided against itself in logical fashion. Thus we have - in holism - the hierarchically-organised interaction between global constraints and local freedom.
We have bottom-up construction matched by top-down constraints. Each is the cause of the other (as constraints shape the construction, and the construction (re)builds the generalised state of constraint).
And yes, constraints would seem to passively exist as a context ... because the freedoms are, in complementary fashion, the active part of the deal. So causality covers all the bases. It gains real world definiteness because it has both its active and passive forms to create some actual state of contrast ... that is another way it is no longer just vague possibility.
Quoting Metaphysician Undercover
I should note that you keep getting the detail of anything I say quite wrong in your eagerness to shoehorn it into some semblance of what a reductionist might say.
It is freedom that constructs bottom-up. The role of top-down constraint is to give shape to that freedom. So constraints (as the bloody word says) are all about limiting freedoms. They take away or suppress a vast variety of what might have been possible actions ... and in so doing, leave behind some very sharply directed form of action. Or as physics would call it - to denote the particularity that results from this contextual sharpening - a "degree of freedom".
I agree. “Real existing” is a crucial qualifier here, since if constraints have no ontology, how can they have causal powers? Consider a bucket filled with water. If we term the bucket a ‘constraint’, then indeed one can say that the bucket causes (or influences) the water to have a certain shape. But how does this apply to a universe floating in nothingness? Assuming that this universe has a certain shape, we cannot coherently say that ‘nothingness’ causes the shape of this universe, because nothingness cannot have causal power, cannot constrain anything. So unless we are talking about “real existing” constraints, like buckets, we cannot coherently talk about them as being causal actors.
That is essentially it. But I would add that chemical evolution would be teleological in carrying out the wishes of the laws of thermodynamics. So molecular complexity would arise because it was successful at accelerating local entropification rates. Chemical evolution would be functional in that (inorganic) sense.
And then the big shift is the development of a semiotic code or system memory - the RNA or whatever that created the epistemic cut between the "program" and "the world". Now you have the new possibility of local functional autonomy. The organism can mean something to itself.
So the inorganic realm is still teleological (in the dilutest fashion). Where it is different is mostly that it lacks local autonomy in the semiotic sense. The telos is the general or "ambient" one of how the complexly stratified physio-chemical realm of the planet's surface is serving the second law. Life and mind are then actually something new in internalising that telic imperative symbolically, and by doing so, managing to entropify the world at an even greater rate.
Quoting Pierre-Normand
Yep. That was the point. Life has meaning because ... there is death as its contrast. So because of biosemiosis or a new symbolic level of action, an organism could become a survival machine. While of course being constrained by the general purposes of the second law, it could also now think its own prime purpose was to flourish and multiply. As a direction in nature, it could point counterfactually away from entropic death and decay and towards negentropic life and growth ... for a time at least.
And even when the body dies, the genome persists into the next generation. The functional information gets transmitted and not lost. What the genes pass on is hardly meaningless noise but the essence of what it means to live again in this world.
So that makes the very idea of "random mutation" rather an obvious conflict. Sure, people used to talk about mutation in terms of "hopeful monsters", but I hope they don't even mention the phrase to school kids anymore.
In fact mutation itself is a highly constrained or tuned thing in nature. Evolvability itself has to evolve. The degree to which the organism exposes its essence to the vagaries of fate has to be a careful choice as the history packaged up in a genome is hard-won information.
Which is why "random mutation" no longer explains anything in modern biology. It is just the start of the unravelling of what has turned out to be a very complex part of the whole evolutionary deal.
Already your cosmological speculation has started to go very wrong. No-one says the universe floats in nothing, let alone that this would be what gives it a shape.
In general relativity, the shape is flat unless otherwise deformed by its material contents. And because those material contents are constrained by the laws of thermodynamics, they will spread themselves about in a way that minimises the deformation.
As Wheeler so famously put it, "Spacetime tells matter how to move; matter tells spacetime how to curve." Which is the holistic view in a nutshell. Each is in complementary fashion the cause of the other.
No-one? Are you sure? Tell me, what is the universe floating in?
BTW a close reader would have noticed that I did not make a claim about our universe. I was talking about a (hypothetical) universe floating in nothingness.
You agree with my point.
What do you mean by "float"? In what sense could that be a property the Universe is said to possess.
A question about ‘striving to survive’:
Why is it that e.g. a bacterium avoids death? Does it fear death? Does it even have a concept of death?
Or do you guys assume that ‘striving to survive’ is just one of those things that ‘emerges’ due to a ‘limit’ or some similar 'cause'?
Ok, so you have described a "higher level" of neurological activity, which you say is responsible for the intentional, rational, free will acts. Since you can identify no efficient cause for this activity, you assign the cause to the "individual as a whole". Now we have some extremely vague notion of "an individual as a whole" being the cause of this neurological activity. Why is it that you believe that this vague notion of "an individual as a whole" being the cause of this activity is a better description than the classical description, which holds that the immaterial soul is the cause of this activity?
Quoting apokrisis
When we consider causation in its primary sense, as active cause, like I described, there are two distinct types of active causes: efficient causes and final causes. One refers to the active cause of physical changes and the other refers to the active cause of intentional acts. In the secondary sense of "cause", the passive constraints which influence the active causes, affecting the effect of the causes, there are also two types. Here we have material constraints and formal constraints. So we have the four types of causation identified by Aristotle here: two primary, active causes (efficient and final), and two secondary, passive causes (material and formal).
Quoting apokrisis
This is nothing more than a vicious circle. Construction builds constraints and constraints constrain the construction. The whole point in attempting to determine which is primary, a procedure which you call "laughable", is to avoid such a vicious circle. Which is really more laughable, the vicious circle, or the attempt to avoid it?
Quoting apokrisis
This model makes no sense at all. You assume a "freedom" which constructs from the bottom up. What could this freedom be constructing other than constraints? But it cannot be constructing constraints, because the constraints, as you say, are top-down, limiting freedom. And why would freedom be constructing constraints anyway, when this is opposed to its nature? Further, the assumed "freedom" cannot be constructing freedom, because it already is freedom, according to the assumption. So what is this freedom which constructs from the bottom up supposed to be constructing? And what kind of thing, which has the power to construct, do you propose "freedom" refers to?
Quoting Querius
The issue I see with "real existing constraints", and why I used that phrase, has to do with the nature of the artificial prescriptive laws of how human beings ought to behave. These are constraints which are created by human beings, such as ethics, societal laws, etc. We can see these laws as real existing constraints, but they dictate how we ought to behave, and the subject ultimately has the capacity to disobey those laws. Therefore they are radically different, ontologically, from the natural constraints which we assume dictate the way that natural inanimate physical objects behave. The inanimate physical objects do not have the capacity to disobey these natural constraints.
How is it opposed to its nature if the constraints are responsible for its nature?
Quoting Metaphysician Undercover
Obviously the attempts to avoid it. Or rather, the failure to understand how hierarchical organisation is not viciously circular at all.
Isn't it contradictory to say that constraints are responsible for freedom? I don't know how that could work. Besides, you've assumed freedom as the starting point, and freedom constructs. So it cannot be constructing constraints which are responsible for its freedom, because it already has freedom prior to constructing.
Quoting apokrisis
Well, I've seen you attempt to explain your understanding of hierarchical organisation, and like the one above, which I commented on, they all end up with a vicious circle.
Is this a serious question? Are you now arguing here as a theist and so have some dualistic concern about bacteria having souls and freewill?
Quoting Querius
If teleological talk frightens you for some reason, you can think of it as simply a way of characterising the imperative to grow and reproduce.
Nope. And I've already explained it to you in this thread as in umpteen other threads.
Quoting Metaphysician Undercover
But as that vicious circle is locked up, biting its own tail, inside the small world of your own imagination, I can't feel unduly worried.
I mean you could read a book about it - Stan Salthe wrote a pair of splendid ones - but I've no evidence you actually put any effort into researching the positions you take.
My point was to characterize what it is that natural selection selects for (or against) from the moment of abiogenesis onwards. It selects among processes and bodily structures that already have vital functions (i.e. functions that promote the vital organization and procreation abilities of the organism.) But natural selection remains an enabling force in this story. When mutated heritable vital functions are defective, or relatively inefficient (compared to those of kin and foes), then they are selected away. Organisms strive to survive, under local constraints, since those who don't so strive die off and fail to reproduce. This story just is a fleshing out of the standard Darwinian theory of evolution -- one that frees itself from the unnecessary strictures of reductionism while characterizing the isolated processes and activities.
(See Ruth Garrett Millikan's What is Behavior for similar anti-reductionist points regarding the evolution of behavioral abilities).
Look, questions about 'neurological involvement' and the nervous system are not metaphysics at all.
Quoting Metaphysician Undercover
'Begging the question' is 'assuming what is to be proved'. The statement of yours which I said was 'begging the question' was this one:
Quoting Metaphysician Undercover
But the point that is at issue is whether such an act can be understood on the basis of 'the most minute parts' - that being 'the bottom' - or from the formation of a conscious intention - that being 'the top'. So I said your statement begged the question, because it assumes what it needs to prove, which you're still doing.
Quoting Metaphysician Undercover
It doesn't 'assume' anything - it's a statement of scientific fact. What we think about it is about as relevant as our opinion on 'what gravity is'.
Quoting apokrisis
Note that this is still basically a biological perspective which understands life in terms of underlying thermodynamic and other physical laws.
Of course. It would have to be otherwise I would be in trouble.
So life|mind is an example of radical emergence ... which is also in a deeper sense just more of the same.
The signal characteristic of bios is that it is negentropic complexity that is thus the precise "other" of entropic simplicity. No one would mistake an organism for a rock. And yet still, on close inspection, negentropy is only possible because it accelerates local entropy production. So its purpose is completely aligned with the second law and the universal arrow of time. Yet it is also completely different ... when we start describing it in its own apparent terms at its own emergent scale of being.
Now this physicalist understanding of life - as biosemiotic dissipative structure - is completely uncontroversial in theoretical biology circles (at least the ones I choose to circulate in ;) ). And there is no reason not to think it extends also to explain mindfulness as a physicalist phenomenon.
The big challenge for semiotics is instead about heading in the other direction - explaining the Cosmos itself in pan-semiotic terms. That is still a speculative Metaphysical venture, and not yet on the agenda in any open way amongst physicists. Although David Layzer has been pushing the dissipative structure story there for a long time now.
The person -- the human being -- is responsible. The "higher level" isn't a higher level of neurological activity. It's a functional level (see functionalism in the philosophy of mind) of mental organization that relates to the lower level of neurology rather in the way that the software level relates to the lower hardware level in the case of computers.
To pursue the analogy, the hardware level deals with the implementation and enablement of basic logical functions. But what makes the execution of those basic logical functions logical (or computational) at all, and what makes the intended effects (e.g. the screen or printer outputs) results of meaningful computations, is their participation in the specific global hardware+software architecture.
That's not the reason why I am looking for something other than an "efficient" cause. I am rather looking for a final (intelligible/teleological) cause -- something like a goal or reason -- because of the form of the question and the formal nature of the event: Why did so and so intentionally do what she did.
The "efficient" (so called) cause that you are highlighting figures as an answer to a different question: why did this or that piece of meaningless bodily motion occur at the time that it did. If you would also ask in what manner those muscular/neural events are capable of enabling genuinely cognitive function (and intentional behavior) then you would already have left the narrow explanatory space of bottom-up "efficient" causation.
(I put "efficient" between scare quotes since efficient causation was originally an Aristotelian notion quite unlike modern "Humean" (universal law exhibiting) causation. The efficient cause of the existence of the house, according to Aristotle, is the builder -- a substance rather than a process or event.
First, the individual isn't the efficient cause of the neurological activity. It's rather that the global cognitive economy that this or that neural event (or neurophysiological function) is a part of determines its functional nature (see Davidson on radical interpretation). So, this is a species of formal causation. Second, the notion of the individual as a whole just is the notion of a human being like Joe or Sue. Though the notion of an individual living organism has some inherent vagueness due to questions of persistence and identity criteria, it's not any more vague than the notion of a neuron, a mouse, a tea cup or a hydrogen atom.
Because we can improve on the notion of the immaterial soul through reviving Aristotle's notion of the rational soul as the specific form (i.e. the specific functional organization) of the mature and healthy human body.
What I meant by "clearly", is that evidence indicates this. Even Pierre-Normand admitted this. If you have any evidence to the contrary, then bring it forward.
Quoting Wayfarer
I would disagree with you, that conscious intention is "the top". I think it is you who is begging the question by describing conscious intention as "the top". You only do this to support your claim that anything caused by conscious intention is ipso facto "top-down" causation, because according to this assertion there is nothing above conscious intention. I support my claim, which you call "begging the question", with accepted neurological facts. You have the dubious claim that conscious intention is the top. Can you offer any support for this assumption of yours, that conscious intention is the top? In what context is intention the top of anything?
Quoting Wayfarer
Let me get this straight. You are claiming that the atom is the cause of existence of the relationships which constitute the existence of the atom. So there are particular relations between the protons, the neutrons, and the electrons, and these particular relations are caused by the atom itself? Now you claim that this is a statement of scientific fact. Do you not see how silly this is? I suppose that the relationships between atoms which make up a molecule are caused by the molecule itself? I learned in science that it is a chemical reaction which causes these relationships, and the molecule comes into existence as a result of a chemical reaction. Likewise, with respect to the relationships which constitute the existence of an atom, it is a nuclear reaction which causes these relationships, and consequently the existence of the atom. So much for your "scientific" fact. I should rather class it as an "alternative fact", claimed as a fact just to support your untenable position.
It's standard neuroscience I would say. Attention acts top down by applying a state of selective constraint across the brain. You can hook an electrode up to a retinal ganglion cell and watch it in action. Or an EEG can record the fact as it happens in general fashion as a suddenly spreading wave of suppression - the P300.
So, as far as neuroscience goes, folk wouldn't talk about it as consciousness (too many unhelpful connotations for the professionals). But top-down integrative constraints are how the brain works.
If we are going to discuss a higher level and a lower level, then we need to distinguish the two distinct aspects of the free will act. First, we have the impulse to act, and second, we have the will to deliberate. The first inclines us toward action according to instinct, reflex, or existing habits. The second is the capacity of the will to decline, or resist, this action; we call this "will power". It is this second aspect which makes rational decisions and conscious deliberations possible. Do you agree that the first is the lower level, and the second is the higher level?
Now if we are to look for the cause of the free act, we must look for the source of activity, and this is to be found in the lower level. The upper level has the capacity to prevent particular activities, diverting energy toward deliberation and rational decision making instead of making a rash act, but it does not cause activity; it directs the activity. So if we are to describe the free will act in terms of bottom-up, or top-down, wouldn't you agree that it is a bottom-up causation, which is influenced by the upper level, which has the capacity to guide the efficient causes in different directions? The true source of activity is to be found in the lower level.
Quoting Pierre-Normand
When we look for the final cause, in the way you describe, we are faced with ideas, as the reason why so and so did such and such. But as I explained to Wayfarer these ideas are passive things, and analyzing passive things will not bring us to the active final cause. The true final cause must be an active cause because it sets in motion the efficient causes necessary to produce the intended end. A passive idea cannot produce efficient causes. This is why, following the Neo-Platonists, we need to proceed to a form of immaterial cause, which is similar to an idea, but is itself active. In this way, intention and final cause become intelligible. But since the final cause is what brings into existence the efficient causes, which proceed to bring about the end, it is necessary to place the final cause at the lowest level, prior to the efficient causes.
Quoting apokrisis
Yes, I agree with this, read my reply above. The point I made already, though, is that this top-down form of constraint is not acting as causation, top-down; it is passive. The free act derives from the bottom and is therefore a bottom-up form of causation, which simply moves upward through the constraints. The constraints may appear to be arranged in a top-down fashion, but since they are passive, there is no real top-down activity here, and it is a misnomer to call it top-down causation; it is just a structure of constraints. If we want to look for the true final cause, the cause of the free act, we must look to the very bottom of the chain of efficient causes, to find a source of activity which is free from efficient causes.
This is what I was arguing in the case of your army analogy in the other thread. What you call "the army", consists of a structure of static constraints. Call them top-down, bottom-up, it doesn't really make any difference at this point, it's a static structure. Now, it is the actions of each individual soldier, through their free willing acts to partake and follow the structure, which causes the existence of the army. The cause of the army is bottom-up, each individual coming in and choosing to do one's part.
You might say that this is a trivial difference, to argue that the constraints are a static structure, and not a true active cause, but it becomes important when we look at beginnings, the coming into being of things. The set of constraints, which exists as "the army", isn't passive in the absolute sense, the army must have come into existence. This is the "construction" which you referred to. These constraints come into existence in a bottom-up fashion, as you described, so they are truly bottom-up constraints, and their top-down appearance is just an illusion.
It's not passive. Individual neuron firing is actually being suppressed or enhanced.
It's also not purely top-down of course. As I've said often enough, it is the interaction that counts. So you can't really treat selective attention as "a thing" that floats above the action. Instead it is a rapidly evolving balance of activity across the brain - a global act of integration~differentiation.
But critically, it is a wave of purpose forming action. To attend is to be already intending.
And also it is memory and expectation based. So the brain knows how to make sense of the current world because there is this "top-down" weight of prior experience to direct things. And I put top-down in quotes to show I am talking about a hierarchical story where the higher level stuff acts on a larger spatiotemporal scale, so avoids your vicious circularity that comes from thinking a process like attention or consciousness happens "all at once" in a flash.
Quoting Metaphysician Undercover
Nope. There is no need for constraints to be considered as passive or static. But certainly - if you follow hierarchy theory - they do play out on a larger spatiotemporal scale. So from the point of view of the soldier, the army is forever the same. But of course the army also changes over sufficient time. It is only ever relatively static or passive.
Quoting Metaphysician Undercover
Well this just comes back to your own mystical beliefs about freewill. So I can repeat the same argument and you will avoid it just as swiftly. Anyway, the individual soldier is a soldier because army training has pruned away all the unhelpful civilian freedoms he might have had as a raw recruit. And if the soldier felt he had "freewill", that is one of the first things that boot camp was designed to hammer out of him.
Eventually indoctrination will lead the soldier to learn some narrowed set of habits and so will of course "choose" to behave in military fashion. That will even carry over in civilian life. Everyone knows this.
It isn't silly, nor is it an 'alternative fact', and I would rather not be accused of trading in such. 'A free neutron will decay with a half-life of about 10.3 minutes but it is stable if combined into a nucleus.' That is from the science textbook.
I can see the introduction of 'top-down' has introduced a lot of confusion. Some of those points have been addressed in the posts above. But what this is all about is that 'physical reductionism' is generally 'bottom-up', because it wants to explain such 'higher-level' things as actions, intentions, thoughts, and so on, in terms of the physical and physiological components of the being. So 'bottom-up' thinking is usually characterised as reductionist, and/or physicalist.
As Pierre Normand explained above, the 'higher level' in 'top-down' isn't a higher level of neurological functionality, but 'a higher functional level of mental organization that relates to the lower level of neurology'.
Generally speaking, Platonist philosophy is 'top-down' (and also anti-naturalist, anti-reductionist, and anti-nominalist.) I'm not saying that to appeal to the authority of Platonism, but to illustrate the kinds of philosophies that are associated with 'top-down' attitudes.
So, generally speaking, materialist philosophies have tended to be 'bottom-up', and idealism has tended to be 'top-down'. Materialist philosophies emphasise the fundamental physical structures, idealism emphasises the mind-like aspects of nature, or the primacy of the intellect, and so on.
So when I said that your 'explanation' begs the question, what I mean is that when you say things such as 'Clearly the free will act begins in the most minute parts of the neurological system...', you're assuming a point of view that is generally associated with reductionist accounts. But as this is the very point that was being debated, it is this assumption that is begging the question.
No, I don't endorse this rather empiricist psychological model. It portrays the rational mind as residing on top -- as a controller/inhibitor -- of antecedent "raw" instincts, drives or impulses. I prefer an account of practical rationality that views the habits and motivations of a mature human being as being largely constitutive of her ability to appraise goals and putative values in point of rationality. What it is reasonable to do in specific circumstances comes to be felt as something that it is desirable to do from the practical epistemic standpoint of someone who has grown to be motivated by the right things and to be sensitive to the morally and/or prudentially salient features of a practical situation.
This is broadly an Aristotelian view of the essential interdependence that holds between phronesis (practical wisdom conceived as a capacity for practical knowledge) and virtue (acquired excellence of character/habit conceived as a set of good motivational dispositions that don't blind individuals to their duties) as described in the ethical and practical-philosophical writings of Elizabeth Anscombe, John McDowell, David Wiggins, Jennifer Hornsby, Philippa Foot, Michael Thompson and Sabina Lovibond, among others.
On that view, desire and rational will operate at the same molar, personal level of practical cognition/motivation. The low level consists in the component "cognitive"/neural abilities that merely enable high-level (personal-level) cognition to function in the relevant context of embodiment, environment and culture.
I'll comment on the rest of your post separately.
To pursue the analogy, how is an intention translated into instructions (‘software’) for neurons? And what power does emergent consciousness have over matter, such that ‘creating software instructions’ is an apt analogy?
How does an emergent consciousness know which constraints to apply? Why do neurons make sense to an emergent consciousness? What knowledge and power does emergent consciousness have so that it can command neurons?
The broader question I am asking here is about the interaction problem wrt emergentism.
What power does the act of perceiving a bird have over matter such that the cat is able to jump on the bird and catch it? What power does the hunger of the cat have to motivate (and move) the cat to pursue birds? Is there a philosophical mystery there? Our states of mind (and of consciousness) include motivational and perceptual states. In the case of rational creatures such as ourselves, those states take specific conceptual forms such that they can be expressed linguistically and their conceptual contents can be articulated in episodes of practical and theoretical reasoning.
But this sharp qualitative increase in cognitive power that we exhibit compared with the simpler and more straightforward behavioral engagements that non-rational animals have with their perceived environments need not introduce any new mind-over-matter interaction problems. Our conceptual abilities just are reshaped animal abilities that we acquire through initiation/training into a linguistically mediated culture. This is how we are being "programmed", to get back to the computer analogy.
This is not an explanation. You note that ‘striving to survive’ is necessary for life to succeed, and from there you go to (paraphrasing) 'and therefore organisms strive to survive’. This is not an explanation, since organisms don't necessarily exist.
Moreover, in order for ‘striving to survive’ to be one thing — contrary to a collection of unrelated behaviors — there must be a binding principle like ‘fear of death’, which logically requires a concept of death.
You are being unresponsive to my questions.
I did not ask how we are being programmed. Instead I asked how emergent consciousness commands/programs neuronal behavior.
I would be interested to see more information concerning this point, a better description of the purported suppression and enhancement. Is there a chemical process involved? As I see it, neurons are always firing. The issue of top-down vs. bottom-up causation would turn on whether all the neurons are attempting to fire all the time, in which case top-down causation would be preventing this, or whether just some of the neurons are attempting to fire, because some underlying process is causing some neurons to fire while others are not, in which case it is bottom-up.
Quoting apokrisis
As I see it, "this weight" of memory is just passive matter. The prior experience exists as matter in the brain, and this structure of matter will serve as a constraint.
Quoting apokrisis
With respect to the vicious circle, I think that we both respect the fact that we need to look toward something which is not the brain itself, to avoid the circle. This is where you and I take opposite routes. You proceed toward a larger spatiotemporal scale, and you validate final cause in the second law of thermodynamics. My opinion is that this gives you nothing more than infinite regress. So I go the opposite way, I turn deeper inward, looking toward an inner cause, which is prior to physical existence. I look to the inside of the circle, and find release in the non-spatial realm of the central point, while you turn to the outside, making a bigger circle which eventually gets lost in an infinite vagueness. Our two very distinct directions of approach are actually functions of the way that we apprehend time. I understand time in a way which is completely different from the way that you understand time, and this is why my approach is completely different from yours.
Quoting Wayfarer
I don't question this. But the thing that maintains the existence of the neutron within the atom is the relations which it has with the other parts of the atom. So the question is, what causes these relations. You claim that the atom causes these relations, top-down, I claim that there is a deeper, underlying process which causes these relations, bottom-up.
What I think is that turning to top-down causation is a form of quitting, a refusal to go deeper to determine the true causes. For example, suppose we ask, why do water molecules exist in a liquid state at room temperature? One could answer, "this is the form of water": at certain temps it's solid, at others it's liquid, and at others it's gas. These specific constraints act on the water, in a top-down way, causing the water to be liquid at room temperature. But this is just a form of quitting, because it doesn't produce a deeper inquiry, proceeding to analyze the motions of molecules and determine the true cause.
That is how I see this top-down way of looking at the atom. You put the parts of the atom in a static and stable relationship, and claim that it is this form, this particular set of constraints called "the atom", which acts top-down and causes the longevity of the neutron. But I think that this is just a quitting, an unwillingness to look at the underlying activities of the parts of the atom as the true cause of this stable relationship.
Quoting Wayfarer
I understand this point. My approach, instead of turning the whole reductionist position upside down, as top-down causation does, is to go beneath the bottom of the reductionist approach. Remember, I am dualist, I assume non-physical existence. Physical existence gives the typical reductionist a set of limitations at the Planck scale. I am not physicalist, and I do not accept these artificial, physical, limitations produced by the limitations of the physicist's theories.
Quoting Wayfarer
I think that characterising Platonist philosophy as top-down is the result of a misunderstanding of Plato. Modern Platonism, and Platonic Realism portray Plato as a Pythagorean Idealist. In actuality though Plato thoroughly analyzed Pythagorean Idealism, and recognized its weaknesses. When the weaknesses were exposed, Aristotle finalized the refutation. That Plato does not follow these top-down principles, which Pythagorean Idealism does, is evident in his positing of "the good", in The Republic. The good is what makes intelligible objects intelligible, like the sun makes visible objects visible. This means that the good, or in Aristotle's terms, that for the sake of which, or final cause, is something beyond an idea, or intelligible object, as the reason, or cause of intelligibility of such things. It is by making the good an idea, or intelligible object, as is sometimes expressed with "the idea of good" (which is a misinterpretation of Plato's "the good") that final cause is understood as an idea. This is very similar to the position apokrisis takes, it is a Pythagorean position, making final cause, or "good" into an idea, in apo's case, the concept of the second law of thermodynamics.
Quoting Wayfarer
The problem is that the reductionist approach is based in good scientific principles of investigation. It makes no sense to dismiss these principles to seek a top-down causation, which from the very start gets lost in vagueness due to the fact that there is no evidence of top-down activity. Where there is no evidence of activity, how would we proceed in seeking a cause? There are constraints which appear as top-down structures, but even the activity which brings these constraints into existence, and changes these structures acts from bottom-up.
So it appears to me, like modern idealism is taking a wrong direction. Instead of turning inward, to find the true source of good, deep within, inherent within, as an immanent within the living being, it has turned outward seeking some transcendent good. The reason for this, I believe, is that we have let go of dualism, seeking the simplicity of monism. Reality though, is that there are two distinct directions, inward, and outward. What we find in the outward direction is distinctly different from what we find in the inward direction, and this is not just a matter of perspective. Looking inward is completely different from looking outward, and these two cannot simply be portrayed as directly opposite to each other. What we see in the inward direction is something radically different from what we see in the outward direction, hence the need for dualism. This is due to the peculiar nature of time, the past is distinctly different from the future, with respect to ontology, existence, they cannot be portrayed as simple opposites. Because of this distinct difference, we cannot let go of dualist principles without rendering reality unintelligible.
In a nutshell, information can regulate physicochemical instability. If the physics is delicately poised - what they used to call on the edge of chaos - then an almost immaterial nudge can make the switch between competing states.
This is the biosemiotic basis of life. The fact that this is happening right down at the nanoscale of cellular processes is a recent biophysical discovery. Everything is constantly on the verge of falling apart, so by the same token, only needs the slightest regulatory nudge to reform. Top down informational control of living processes is possible because the physical machinery has critical instability - in complete contrast to the reductionist expectation that bodies must be built from strong and stable materials.
So that is the basic principle - empirically demonstrated.
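As an aside, the switching principle here can be sketched with a toy dynamical system (a purely illustrative model, not actual biophysics): a bistable system poised at its unstable equilibrium, where an almost immaterial bias decides which of the two competing stable states it falls into. All the names and parameter values below are made up for the illustration.

```python
# Toy model of "critical instability": dx/dt = x - x**3 has two stable
# states at x = +1 and x = -1. At x = 0 the system sits balanced on the
# ridge between them, so a vanishingly small nudge selects the outcome.

def settle(nudge, steps=10000, dt=0.01):
    """Integrate dx/dt = x - x**3 + nudge from x = 0 by Euler steps."""
    x = 0.0
    for _ in range(steps):
        x += (x - x**3 + nudge) * dt
    return x

# A bias a million times smaller than the state separation still
# deterministically picks the final state.
print(round(settle(+1e-6)))   # settles near +1
print(round(settle(-1e-6)))   # settles near -1
```

The point of the sketch is only that when the dynamics are poised at criticality, the energetic cost of "control" can be arbitrarily small relative to the states being controlled, which is one reading of the regulatory-nudge claim above.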
And then brains are just higher level information generators, supplying the regulatory nudges that manage the critical instabilities that are a muscular body in purposeful interaction with an environment.
So forget "consciousness" with all its antique metaphysical connotations.
If you want the modern scientific story, we are talking about semiosis - the ability of rate independent information to regulate rate dependent dynamics. A system of signs (or a model) can act top down to manage a sea of critically unstable physical processes in such a way that organised and meaningful behaviour arises from a mess of potential chaos.
The following quotes are from the book 'Human Agency and Neural Causes: Philosophy of Action and the Neuroscience of Voluntary Agency', by J.Runyan:
How does the "I" select with great care between so many options of neural changes? And how does the "I" cause certain neural changes to occur?
When I choose to raise my arm, certain neural changes occur. Okay. But again, how does that work?
Neural changes are caused by the “I” — the macro-level whole. But again how does this “I” do that? What is the mechanism here? And how does the “I” know which neural parts to change and which not?
By holding information in thought the “I” is causing a change in neurons? How does that work?
How do thoughts cause neural change? How do they cause the correct neural change? And how is thinking distinct from neural activity?
There are two possibilities here:
1) Emergent consciousness has control over neurons, in which case I would like to know how this works.
2) Emergent consciousness does not have control over neurons, in which case personhood, responsibility, freedom and rationality is not possible.
What you are neglecting is that the "I" here is a socially constructed concept enabled by the learnt semiotic habit of speech. So the top-down causality has to be traced back now to human social concepts about autonomously regulated behaviour. In the final analysis, it is not you pulling the strings. You are just responding with the various degrees of freedom formed for you by your particular cultural upbringing.
So you do have freewill ... or rather, society set you up from childhood with the habits of rational self-regulation. You then creatively fill your society's purposes (or you get locked up, or are in various ways physically constrained).
In other words, this semiosis business has multiple levels. There are at least three levels of regulatory code we are talking about here - genetic, neural and linguistic. Each code supports an even richer level of evolving downward constraint over material action.
You are ignoring the third possibility: that "consciousness" is just a bad word, in that it sounds like it is talking about something substantial, and that is not the right way to think about it. You are presuming something that doesn't warrant presuming, and then getting angry when others point that out.
A concept understood and held by … what?
The term “you” refers to … what?
How is freedom grounded in this context? And what is it that is free?
Also, if “I” am free to a certain extent, how do “I” cause ‘my’ neurons to change? How do “I” know which neurons to change and which not? How does a 'socially constructed concept' control matter at will?
I experience something, therefore I exist. In order to experience something it is logically required for me to exist.
I doubt my existence … In order for me to doubt my existence I must exist. I cannot doubt my existence if I don’t exist.
Maybe my thoughts are infused by a Lügengeist, maybe everything I think I remember is a lie. Still … I must exist in order to be lied to.
Maybe I think that I exist because I am socially instructed to think that. Nope. In order to be socially instructed to think anything I must exist.
I cannot doubt my existence. I exist. Undeniably so.
EDIT: Bill Vallicella arguing in favor of consciousness. Excellent philosophy.
Of course you can't doubt it ... given that you are in existence as a socially constructed self regulatory habit of thought.
And indeed, you are reading right from the script in protesting your existential essence in these terms.
Modern romantic mythology requires that you be a solipsistic being in this regard. You have been soaked in a Nietzschean ideal of selfhood from the earliest exposure to popular culture. So nothing could insult you more than the suggestion that you might actually be a habitual product of some time and place in the developing story of human history.
Heard it all before a million times. But I stick with the facts of social science.
Quoting Querius
I like Vallicella except for his reactionary politics and that he's a gun nut.
Note the final paragraph of Bill's article:
With respect to the Buddhist view - every single occurrence of the term 'not-self' (anatta) in the early Buddhist texts is adjectival, i.e. given as an attribute of phenomena. The objects of any phenomenal experience have three characteristics or 'marks' (laksana), namely, anatta, anicca, dukkha (not-self, impermanent, and "unsatisfactory").
Nowhere in the early texts does the Buddha say 'the self does not exist' (although neither does he say that it does exist). When asked the question outright, he refrains from answering (here.)
This 'noble silence' is according to some scholars the origin of 'middle way' (Madhyamaka) philosophy which was to become the central philosophy of Buddhism.
But the key point is, to reify the self as an object of perception, as something constant and changeless, is a perceptual error. According to Husserl (in The Crisis of the European Sciences), Descartes made a similar error, i.e. after perceiving the centrality of (what Husserl would later call) the Transcendental Ego, he then made the grave mistake of objectifying it or treating it naturalistically as a 'substance' when in fact what it really is, is the condition for the possibility of knowing (here).
Quoting apokrisis
I think it's obviously a consequence of liberalism, which is founded on individual liberty, freedom of choice, freedom of expression and so on. Now that the religious rationale of self-abnegation has been removed, the individual ego, buttressed by economic theory and scientific method, is now the sole arbiter of reality, which gives rise to a deep and pervading Cartesian anxiety.
Yep. Of course the feeling of being conscious always involves the feeling of intentionality or the feeling of there being a point of view in play. So by logical implication - if you are habituated to believe in a reductionist causality - the act of experiencing requires then a subject who "has" the experience. Which then sets things up nicely for the usual homuncular regress.
Reductionism has no choice but to fall into the trap because it has done away with the richer model of causality which could cash out the self as simply a generic dynamical context. Some accumulated weight of habit which thus gives mental events a probable direction.
Who sticks with the facts of social science?
If there is no "I" who perceives and understands the facts of social science, then how can you be aware of the facts of social science? If there is no consciousness of succession in one and the same conscious subject, how can you be aware of the fact that you continue to stick with the facts of science? Without the consciousness of succession, without the retention of the earlier states in the present state, no such conclusion could be arrived at.
A thought cannot exist without a thinker.
Maybe "I" am a social scientist. That is, "I" understand and perceive the world in a fashion that is a particular educated habit of some human community. Those in the know will point and say, see, there's a guy who's read his Mead, his Vygotsky, his Harre. He is one of us. And so that is how "I" in turn can recognise "myself".
So I'm not a social scientist in some romantic, intrinsic, ineffable fashion. I can instead see that is "what I am" by all the same objective criteria by which anyone would "be a social scientist". It is not any kind of problem that the source of "my identity" in this regard would be completely communal and so reliant on linguistic structure.
Of course, we humans are also all animals. We are genetically and neurally individual. So if you start to break "consciousness" down into its actual semiotic levels of organisation, there is also no problem in talking about awareness in the kind of "raw feels" way you are concerned with now. You can try to imagine the human mind without its cultural/linguistic habits - and find that science says there is now no introspection or "off-line thinking" going on, just what we might call "extrospection", or living "mindlessly" for the present.
So my objection is that you are just bundling all the complexity into some simplistic and dualistic notion of a mental stuff that is somehow the object of perception. The brain produces a display of data ... and a wee homuncular Querius sits perched in the pituitary gland, or somewhere, soaking up the ever-changing panorama being neurally represented.
You are arguing here as if there is some problem to do with the soul of the machine. And yet a living/mindful system is not a machine (in the literal sense) and so we don't need to worry about souls or other mental stuff that might animate the inanimate.
Okay, that was (again) absolutely unresponsive. Try again:
hint: "I am a social scientist" is not an answer.
In Buddhist philosophy (now that we seem to have arrived there) it is said that the thinker cannot exist without thoughts, that thought and thinker are mutually dependent or co-arising. The idea of 'self and other' is a deeply-embedded mental formation. What is generally taken as 'the self' is literally a thought-construction. That doesn't deny the reality of agency or even of subjectivity, but of an 'I' that is separate from and a possessor of experience.
What happens to your fist when you open your hand? What happens to your lap when you stand up?
So you can continue to talk confusedly about some singular notion of the experiencing self, but I've already explained the complex nature of "being a human mind" in terms of the empirical facts.
Only after we recognize the reality of this separation, and adopt it as a firm principle, can we begin to understand the nature of the material existence which comprises that medium of separation. Then we can differentiate between natural, material objects, which are proper to the medium of separation, and artificial things, conventions and institutions, which are created by human beings to bridge that gap.
But I firmly believe it is a mistake to take for granted, all these artificial, institutions, which human beings have worked so hard to create, in an effort to close the gap between them, as if these things were naturally occurring. And this is what is the case when you start with the assumption that the separation between you and I is not a real and fundamental separation.
Does it matter? Isn't it better to allow discussion to develop relatively unconstrained (as long as it remains interesting)?
In any case I would have thought the question of the reality of the laws of nature is very much related to the question of the reality of the self. Both are either irreducible, or reducible in the usual naturalistic manner.
Still unresponsive.
If there is not an "I" who encompasses all three levels, how can you overview and be aware of those three levels?
The returning irony is of course that your analysis of the "I" could not exist if it were correct. Somehow the source of your cherished narrative must be exempt from the constraints and divisions you claim to exist.
The others are mistaken and necessarily so, because they are reading from the "script", produced by forces beyond their rational control, but surely that does not hold for "you".
---
p.s. Here you can find everything you ever need to know about 'homuncular regress'
So do you agree there are these three levels? Or do you dispute it?
My point was that you are taking a monadic substance approach to consciousness - the usual outcome of reductionist simplicity.
I said consciousness - as what it is like to be in the usual human modelling relation with the world - is a complex hierarchically-structured process.
And all that was by way of dealing with the original point - what we would really mean by "top down acting consciousness". To remind you, I was explaining how constraints depend on semiosis and that in fact our human interaction with the world has at least three distinct levels (and so at least three distinct levels in which those constraints are evolving).
If we are talking about the neural level, for example, then that means the top-downness is to do with attentional and intentional brain processes.
But if we are talking about human "self-consciousness" - the self-regulatory awareness of the self as a self - then the source of those higher level constraints comes from right outside of individual biology and development. That level of selfhood is socially constructed and linguistically encoded.
Of course everything is then functionally integrated. We hang together pretty well despite these multiple levels of constraint.
So again, are you disputing that there are these three levels of organising constraint that make up the complex process that is being a "conscious human"? If not, present the evidence that contradicts my sketch of the scientific analysis.
All you have done so far is kept jumping to new questions and evading any detailed consideration of the technical arguments already put to you.
:-} again Kantianism of some sort... :P
I am unhappy with the question in my previous post. It does not make sense at all. My mistake. I retract that question.
What I am happy about is the remainder of that post and the questions before that, which are still unaddressed.
So, what's your answer? Is the self irreducible, or reducible in some naturalistic way? Or...?
I think the self arises from nature, yes. The self is a possible action/activity of nature. And I think laws of nature are merely models we use for purposes of modelling and predicting the world. I certainly don't understand the Kantian position of postulating necessary transcendental subject and transcendental object - I don't agree that either of them have been shown to be "necessary".
I think this is all backsliding from the superior Platonic/Aristotelian ontology which was developed with Spinoza, St. Aquinas, etc.
Autonomous, responsible, free personhood is a prerequisite to rationality.
If external forces beyond my control shape me with insurmountable arbitrary constraints and if there is nothing but blind ‘potentiality’ running into those imposed constraints, then I am not in control over my actions and thoughts. And if I am not in control over my actions and thoughts, then I am not rational. And if I am not rational then I ‘have no hope of providing an adequate understanding of the nature of reality.’
Indeed, I view consciousness as indivisible and I have provided several arguments. In response all you have offered is derogatory talk and avoidance.
Please do continue. Explain voluntary agency. Explain how an emergent self-aware consciousness directs its neurons. Let’s start modestly. Explain what happens when one chooses to raise one’s arm. Explain the mechanism from intention to neural change. And explain how the “I” knows which neural parts to change and which not.
In this post you find an attempt by a kindred social construct named Jason Runyan. Consider my questions in this post as addressed to you.
p.s. you can skip the part about being social constructs and so forth. We know that already.
So you say. But I've asked you to show how mind science could have got it so wrong then.
It's not an explanation of the event of abiogenesis and was not meant to be. It's rather a description of the observable teleological structure of all known living things as contrasted with the more primitive dissipative structures and the complex "organic" (proleptically so called) chemicals that existed before that event. I don't have an explanation for the event of abiogenesis itself; and it probably involved some large measure of contingency. It may have been a lucky accident.
The topic being the reducibility of the laws of nature, though, I am merely pointing out that the nested teleological structure of living things (also termed autopoiesis) whereby the parts (i.e. the organs) have as a function to promote the survival of the whole (the organism) and the molar activity of the organism reciprocally are directed at maintaining the function of the parts does not afford bottom-up reductive explanations. The reason for this is that the component processes (the physiological functions of the organs, cells, etc.) only make sense in relation to their functional embedding in the whole.
It certainly does, though it is not required that the plants or non-rational animals that strive to survive and reproduce have any reflexive awareness that their physiology and behaviors are thus structured so as to favor such outcomes. In fact, rational animals don't need it either, though it understandably tends to become a genuine concern for them.
It would go something like this:
The abstract mind, instantiated on the computationally universal brain, decides to move an arm. It does not know the mechanism of how this is performed, because it does not need to. The mechanism involves layers of sub-conscious neuronal control systems, which eventually result in the appropriate nerve signals to the appropriate muscles.
An abstract computer program, instantiated on a computationally universal computer, decides to move its robot arm ...
Because we don't believe in magic, there cannot be any other explanation, though some of the details are a bit sketchy to say the least.
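The layered-control picture in the arm-raising story can be sketched in miniature (every class, goal string and muscle parameter below is invented purely for illustration): the top level issues an abstract goal and knows nothing of the mechanism; a lower layer translates it into fine-grained commands.

```python
# Sketch of layered control: the "mind" level decides *what* to do,
# a sub-personal layer works out *how*, and the top level never
# references fibres or firing rates. (All names are illustrative.)

class Muscles:
    def contract(self, fibre, amount):
        return f"fibre {fibre} contracts by {amount}"

class MotorSystem:
    """Sub-personal layer: maps an abstract goal to muscle commands."""
    def __init__(self):
        self.muscles = Muscles()
        self.programs = {"raise arm": [("biceps", 0.7), ("deltoid", 0.4)]}

    def execute(self, goal):
        return [self.muscles.contract(f, a) for f, a in self.programs[goal]]

class Mind:
    """Personal level: issues the goal, ignorant of the mechanism."""
    def __init__(self):
        self.motor = MotorSystem()

    def intend(self, goal):
        # The intention is specified in goal terms only.
        return self.motor.execute(goal)

print(Mind().intend("raise arm"))
```

The sketch shows only the claimed division of labour: the deciding level can be causally effective without representing, or even having access to, the lower-level mechanism it relies on.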
The intention does not get translated into instructions for the neurons, and need not get so translated. The intention is directed outwards to the intended goal, in the world, not inwards to neurons or muscles. ("...I am not only lodged in my body as a pilot in a vessel..." -- René Descartes) In my analogy, talk of intentions, beliefs, perceptions, actions, etc., concerns the functional/psychological "software" level. Neurophysiology enables it, rather in the way the hardware enables the software to run in the case of a computer.
So, likewise, the computer programmer can write instructions that directly govern the logical manipulation of significant symbols and not concern herself with the task of the compiler, interpreter, or hardware. The programmer need not concern herself with the precise way in which the hardware enables her program to run. The high-level software causally directs and controls the steps in the computational task, while the hardware enables but doesn't direct the execution of the intended calculations or symbolic manipulations. (The proof of that is that if there is an unintended result -- a bug -- it is often only necessary to fix the software. The hardware need not be at fault, though it may sometimes be.) Since the activity of the hardware gains its significance only through this merely enabling relationship, and doesn't stand in the way of the software control, this is a typical example of top-down causation.
One final note: as this example shows, there need not be a materially identifiable distinction between the software and the hardware levels. The software doesn't float over the hardware and control it from outside, as it were. Rather, when the software has been suitably loaded (and possibly compiled, or hard-coded), the functional structure of the computer has changed. The software level is thus a functional level rather than a mereological level (i.e. a simple part/whole relationship). It refers to the functional organization of the suitably programmed computer, while the correspondingly modified hardware is its newly informed material "body".
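The software/hardware point can be made concrete with a toy example (the "program" and both "machines" below are invented for illustration): one and the same high-level program runs unchanged on two differently built interpreters, so the program fixes *what* happens while each machine merely enables it.

```python
# One abstract program, two different enabling "hardwares".
# The software level (the instruction list) directs the computation;
# the implementation details differ but make no software-level difference.

PROGRAM = [("add", 2), ("add", 3), ("mul", 4)]

def accumulator_machine(program):
    """One implementation: explicit branching on the opcode."""
    acc = 0
    for op, n in program:
        acc = acc + n if op == "add" else acc * n
    return acc

def table_machine(program):
    """A differently built implementation: opcode dispatch table."""
    ops = {"add": lambda a, n: a + n, "mul": lambda a, n: a * n}
    acc = 0
    for op, n in program:
        acc = ops[op](acc, n)
    return acc

# Same software-level result despite different enabling mechanisms.
print(accumulator_machine(PROGRAM), table_machine(PROGRAM))
```

This is one way of cashing out the claim that the software level is a functional level: the identity of the computation is fixed by the program, not by the particular material/implementational story underneath it.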
Quoting Querius
That social constructs exist outside of human consciousness, and act downward onto the consciousness of the individual is where the falsity lies. This assumes the illusion of Pythagorean idealism in which ideas (as social constructs) have some sort of independent existence. The moment we (falsely) assume that the existence of ideas is external to the consciousness of the individual, we surrender our moral responsibility. That is why Plato introduced "the good", to straighten out this perverted form of thinking. As apokrisis and Pierre-Normand have indicated, social constructs are based in habituation. The science of habituation is moral philosophy. All social constructs are based in morality.
The simple fact of the matter is, that morality comes about through the effort of the individual. Habituation is learning, and unless hard determinism is true (which it is not), learning is "caused" by individual effort, willful determination. Moral character, is what the individual must construct within oneself. Morality, hence all social constructs are caused by the efforts of the individuals.
The belief that morality is caused by external social constructs acting downward onto the individual consciousness is the grave mistake of the hard determinist. Consider the consequence, if every human being started to believe that it is true that morality is caused by social constructs acting on us. No individual human being would make any effort to learn or understand any moral principles, believing that morality is simply caused by external constructs acting on them. Consequently no one would be able to teach any moral principles, moral principles would be forgotten, and social constructs destroyed. We would all simply believe that morality is caused through some natural process of naturally occurring social constructs acting on us, until there were no more social constructs, then we'd have to wake up to reality, and rebuild.
That is the consequence if we receive as truth, the illusion that habituation occurs as some natural process of down-ward causation. Instead, we must face the hard cold facts, that morality is caused by the great strenuous efforts of each and every individual human being.
If the mind is the brain, and is produced by neuronal behavior, then the whole path from intentionality to neural change is a purely physical affair. There is no gap between the ‘mental’ and the physical, so no need for a mechanism to close any gap. I have no questions concerning this scenario, other than how matter can be intentional.
If instead the ‘emergent’ mind is independent from neuronal behavior, if it can reach down, by free will of its own, and cause neural change, I would like to know how this works.
If, as a third possibility, the mind is 'semi-independent' or something, please provide a clear picture.
If the mind is independent from neural behavior, then there is, by definition, a gap between the mind and neurons. Again, if there is no such gap, no such independence, I have no questions. Assuming the gap exists, I would like to know how the hovering consciousness reaches down in a causally effective way, and on what basis it chooses between various options.
In my book it makes no sense to term neuronal systems 'sub-conscious'. Anyway, how does the independent mind control those 'sub-conscious' neuronal control systems?
EDIT:
Regarding an explanation of the existence of the universe, we are either stuck with the incoherent concept of infinite regress or a First Cause. IMHO there is some 'magic' involved in both explanations, so who are we to not believe in ‘magic’?
That is rather surprising.
So, our intentions, deliberations and thoughts are direct instructions for neurons. Neurons listen in and understand our mental stuff directly and know what to do? No problemo?
Well, in order to function, hardware does require translation of high-level programming language, so this analogy seems inapt.
Because a compiler — translator — bridges the gap. Right?
Am I missing something? Where is the translator (compiler) in this narrative?
Is that not analogous to the claim that the chess program is the computer?
Quoting Querius
Software needs hardware to run on, and if you disturb the hardware with anaesthetics or death, then the effect on the running of the software is going to be pretty drastic. But there is another type of independence: all computationally universal hardware is equivalent.
Quoting Querius
You can remove an entire cerebellum, and the only difference the person notices is reduced motor control.
Do you say that the mind is analogous to software? If so, that would paint a rather inert picture of the mind. In this context I would rather say that the software is the set of instructions for the brain. One problem is: how do we write those?
Notably, writing instructions isn't made any easier by the fact that the brain is constantly reorganizing; synaptic connections are constantly being removed and others created — neuroplasticity. The brain is nothing like the rigid and fixed circuits of computer hardware.
To pursue this unproductive analogy, software needs a programmer in order to exist.
If we do not know nature-as-it-is through the Laws, and they are merely predictive models that tell only how nature appears to us, then our understanding of nature through science tells us nothing about 'what really is', nothing of ontological or metaphysical significance. On that assumption the notion that the self Quoting Agustino would seem to be groundless. This is actually precisely Kant's contention.
The transcendental subject is necessary as an expression of what we don't know about the self; about how it and the world are really produced. The self as we experience it to be cannot be understood by rational and empirical means. The Question of the self transcends the rational empirical domain. I don't think it is correct to claim that the Platonic or even the Aristotelian conception of the world is devoid of the idea of transcendence, and this also cannot be rightly claimed about Aquinas either. I think even Spinoza, if he was to be consistent, would allow for transcendence, as I have tried to convince you before.
I must say, Agustino, I am beginning to find your professed Christianity to be quite inconsistent with what you espouse when it comes to philosophy. Spinoza's philosophy, for example, which you say you so admire, is completely incompatible with any reasonable conception of Christianity, that is, with any conception of it that does not destroy its essence, its uniqueness as a religion.
:s
No, I mean the mind IS software. According to known physics, it can't be anything else. Consciousness is a software feature, and the software programs itself.
That, I think, is a very strange notion. You do realize, don't you, that physics neither claims to understand the mind nor attempts to understand it? So to make your determination of what the mind "is" according to known physics would be a highly unusual, and fallible, thing to do.
Our thoughts are not instructions for neurons at all. The intentional contents of our beliefs and intentions aren't directed at neurons. They're typically directed at objects and states of affairs in the world. Our neurons need not be told what to do any more than transistors in computers need be told by the software what to do. The installed software is a global structural property of the suitably programmed computer. What it is that the transistors are performing -- qua logical operations -- is a function of the context within which they operate (i.e. how they're connected with one another and with the memory banks and input devices). Their merely physical behavior is governed only by the local conditions and the laws of physics, regardless of the global structure of the computer.
The hardware must only be suitably structured in order to deal adequately with the software instructions; it need not have instructions translated for it. The high level code needs to be compiled or interpreted before it is run only in cases where the hardware is general purpose and its native instruction set isn't able to run the code directly.
I think part of the trouble in conveying the significance of my hardware/software analogy stems from the fact that the term "hardware" is highly ambiguous. The term can refer to the material constitution of the computer qua physical system, which merely obeys the laws of physics. It can also refer to the computer qua implementation of a program that transforms significant inputs into significant outputs. Understood in the latter sense, the hardware, in virtue of its structure, already embodies a specific algorithm -- an abstract mapping from input sequences to output sequences (including the possibility of non-termination). It can be represented formally by a Turing machine. When a definite part of the input to the hardware (maybe accessed by the CPU from an internal memory store) encodes a virtual machine, then non-native code can be interpreted. In that case, the hardware in conjunction with the interpreter can be conceived as embodying a higher level algorithm.
That's right. But that's a special case. The task of the compiler (or interpreter), though, isn't to translate high level instructions into a language that the hardware 'understands'. The hardware understands nothing, of course. Rather, the task of the compiler merely is to transform the structure of the input such that when this transformed input is mapped by the low level hardware to the corresponding output, the result of the computation accords with the specification of the high level program. The composite system consisting in that hardware, augmented with the compiler, constitutes a virtual machine that behaves exactly in the same way (though maybe less efficiently) as a more complex hardware that would have the high level language as its native instruction set.
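The point can be made concrete with a toy sketch. Everything here is invented for illustration (the "hardware" instruction set, the expression language, and all function names are hypothetical, not drawn from any real machine): a "hardware" that natively runs only a tiny stack-machine instruction set, plus a compiler that restructures high-level arithmetic expressions into that instruction set. The hardware never "understands" the high-level language; the compiler merely transforms the input so that the low-level input/output mapping accords with the high-level specification.

```python
def run_hardware(program):
    """Execute native stack-machine code: ('PUSH', n), ('ADD',), ('MUL',)."""
    stack = []
    for op, *args in program:
        if op == 'PUSH':
            stack.append(args[0])
        elif op == 'ADD':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == 'MUL':
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

def compile_expr(expr):
    """Compile a nested tuple like ('+', 2, ('*', 3, 4)) into native code.

    The compiler doesn't 'explain' the expression to the hardware; it only
    restructures it into a sequence the hardware's mapping handles natively.
    """
    if isinstance(expr, (int, float)):
        return [('PUSH', expr)]
    op, left, right = expr
    code = compile_expr(left) + compile_expr(right)
    code.append(('ADD',) if op == '+' else ('MUL',))
    return code

def virtual_machine(expr):
    """Hardware + compiler: a composite machine whose 'native' language
    is now the high-level expression language."""
    return run_hardware(compile_expr(expr))

print(virtual_machine(('+', 2, ('*', 3, 4))))  # 14
```

The composite `virtual_machine` behaves exactly as a (more complex) hardware would whose native instruction set were the expression language itself, which is the sense of "virtual machine" used above.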
So, in a way, a mature human being understands natural language thanks to her overall physiological structure embodying a "virtual machine" that has this language as its native language. The neurons need not understand what their individual roles are in the underlying causal chain any more than transistors in a computer need understand anything about their electrical "inputs". The relevant level of information processing (or of human natural language understanding) resides in the overall structure and behavior of the whole computer, including its compiler/interpreter (or of the whole human being).
To be clear, I am not saying that the hardware/software analogy furnishes a good or unproblematic model for the body/mind relationship. The purpose of the analogy is quite limited. It is intended to convey how top-down causation can be understood to operate unproblematically, in both cases, without any threat of causal overdetermination or violation of the causal closure of the lower level domain.
If so, how does downward causation work? How do we get from the intention to raise one’s arm to neurons which act in accord with that intention?
Excusez moi? In order to be functional, to act how and when they need to act, transistors in computers do need software instructions.
You forget about the role of software information, which is part of the global structure.
You are mistaken. No computer can run a programming language's source code directly; translation to machine code is always necessary, unless, of course, you start with machine code. However, our deliberations, thoughts and intentions are anything but 'machine code'. Behold the gap.
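For what it's worth, the translation step is easy to observe in practice. A minimal sketch using only Python's standard-library `dis` module: even Python source is first compiled to bytecode, and it is that bytecode, not the source text, which the interpreter's machinery actually executes.

```python
import dis

def add(a, b):
    return a + b

# CPython has already compiled the function body to bytecode; the
# interpreter loop executes these opcodes, never the source text itself.
print(add.__code__.co_code)  # the raw bytecode, a bytes object

# List the translated instructions (exact opcode names vary by Python version):
for instr in dis.get_instructions(add):
    print(instr.opname)
```

The listed opcode names differ across CPython versions, so none are asserted here; the point is only that a translated, lower-level representation exists and is what runs.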
Again, you are mistaken, it is exactly that.
Such a level of understanding is not at issue here. What transistors need are clear instructions. Obviously they don’t need to 'understand' anything else, let alone their position in the scheme of things.
The translation problem — from deliberations and intentions to instructions for neurons — persists.
The neurons don't need to act in accord with the intention since the intention isn't directed at the neurons. If my intention is to grasp a glass of water standing on a table before me, what would it mean for my neurons to act in accord with this intention? Neurons are blind and impotent. The electrical activity of my neurons must, for sure (and this is what you must mean), be such as to enable the suitable muscular contractions so that my hand will move towards the glass, etc. This neural activity may sustain a sensorimotor feedback loop that realizes, in context, my voluntarily taking hold of the glass of water specifically in conditions where I had both the opportunity and an intelligible reason to do so.
Such a molar activity of my body/brain/environment may have become possible, while I matured, through the progressive tuning and habituation of the underlying physiological processes. Those processes need only sustain, albeit not control, higher level sensorimotor feedback loops as well as (at an even higher level) proclivities to act in a manner that is rationally suited to the circumstances. So, both the learning of the abilities and their actualization are always top-down causal processes from the intelligible and perceptible circumstances of our embodiment in the natural and cultural world to the tuning and (conditional) actualization of the underlying mindless (albeit adaptive) enabling responses from our low-level physiology. Our proclivities to be thus behaviorally trainable (which we share with other animals) and to be acculturated (which we don't) are of course evolved.
There is no need for the person, her body, or her mind, to instruct neurons on what to do since it is rather their function -- through their mindless and automatic (previously selected and/or tuned and/or habituated) low-level activities -- to enable whole persons to move about in a manner that is expressive of rationality and of sensorimotor competence. Likewise, in a simpler case, the cells of the heart are collectively organized in such a way as to enable the whole organ to pump blood without its being necessary for the heart to "instruct" individually its own cells about what to do. The cells respond directly to local variation in blood flow, adrenalin, electric potential, etc., in such a manner that the result of the overall dynamics is adaptive.
I'll comment later on the topic of the role of the compiler/interpreter in the computer analogy.
According to you, neurons don't need to act in accord with the intention to raise one arm ....
Unless you are willing to retract this claim, our discussion ends here.
In a sense they do (metaphorically) and in another sense (literally) they don't. Which is why I took pains to disambiguate the two senses, charitably ascribed to you the reasonable one, and attempted to warn you against the easy conflation. The tendency to make this conflation is a core target in Bennett and Hacker, Philosophical Foundations of Neuroscience. But if you don't like having your preconceptions challenged, suit yourself. I may keep on answering some of your already stated questions and challenges. You are of course free not to respond.
I think that the whole hardware/software analogy is an unproductive distraction. What is at issue here is the electrical processes of the computer, what that energy is doing. The hardware and software work together in unison to control what that energy is doing, so there is no real division here to speak of.
But if we look at the neurological activity of the human being now, what the energy is doing, we can understand a real division which Querius points to. There are controls over the activity which are clearly aspects of the physical neurological system. But then we also have immaterial ideas, in the mind, which appear to exercise some control. So if our neurological activity is proceeding according to the constraints of the physical system, how is it that with our minds, and the use of immaterial ideas, we can decide which direction to go?
I don't mind having my preconceptions challenged, if you don't mind elaborating?
The problem with 'mind as software' is that it surely is an analogy. It isn't literally the case, because software is code that is executed on electro-mechanical systems, in response to algorithms input by programmers. The mind may be 'like' software, but it is not actually software, as has been argued by numerous critics of artificial intelligence.
Quoting Querius
That is the well-known philosophical conundrum of the 'subjective unity of experience'. There is a vast literature on that, but it remains mysterious.
Past a certain point, I just don't think it is possible to explain the nature of mind, because the mind is prior to, and the source of, any explanation about any subject whatever, including explanation of the nature of mind. Just as it is not really possible to explain why natural or scientific laws exist, it is also not possible to explain the basic operations of reason, as any explanation will have to make use of the very faculty which it is seeking to explain. In this case, it's a matter of 'knowing you don't know' being preferable to 'thinking you know something you don't.'
My comment was directed at Querius, who resolved to stop reading my post past the first sentence unless I issued a retraction. The rest of the post was of course an elaboration. Querius thought my assertion that my neurons don't need themselves to act "in accord" with my intentions -- as opposed to their activity, as I explained, merely enabling a molar (high-level) bodily behavior that itself constitutes my enacting those intentions in dynamic interaction with my environment -- was incredible. Was there something in my explanation that also rubbed against your view regarding the "mind/brain" relationship, which I propose may be viewed as a matter of high-level to low-level structural (i.e. "implementation") relationship (as opposed to a boss/employee relationship, say)?
The purpose of the digital computer analogy was to show that, in this case also, individual transistors, or logic gates, or even whole collections of them -- i.e., the CPU -- need not have the high level software instructions "translated" to them in the case where the implementation of this high level software specification is a matter of the whole computer being structured in such a way that its molar behavior (i.e. the input/output mapping) simply accords with this high level specification. In cases where the code is compiled or interpreted, the CPU need not know what virtual machine, if any, is being implemented.
It's not that mysterious once you accept that the unity is mostly being unified by what it successfully ignores. (Which is also what makes the computer analogies being used here fairly inappropriate.)
So attentional processing "unifies" by singling out the novel or surprising. And it does that by suppressing everything else that can be treated as predictable, routine, or irrelevant.
Well I say attention "does it". But of course it is anticipatory modelling and established habit that defines in advance the predictable, routine, or irrelevant. So attention-level processing only has some mopping up to do.
Thus the mind does have its strong central division into habit and attention. Everything that can be dealt with without clear conscious knowledge gets sorted out in 150 to 300 milliseconds by "automatic" habit. Then anything left over becomes a focus of "conscious" attentional processing - which takes 300 to 700 milliseconds to form and stabilise. With attention we are now talking about reportable awareness as - having managed to remove so much unnecessary sensory detail from the picture - we have a small enough "point of view" to retain as a persisting state of working memory.
So when it comes to something like the question of how does one lift one's arm, the usual way is without even attentionally deliberating. Attention is usually focused in anticipatory fashion on some general goal - like getting the cornflakes off the top shelf. Then habit slots in all the necessary muscular actions without need for higher level thought or (working memory) re-presentation. It is only if something goes wrong that we need to stop and think - start forming some different plan, like going to get a chair because our fingers can't in fact reach.
So - as I have argued through the thread - the key is the hierarchical and semiotic organisation of top down constraints over bottom up degrees of freedom. And even a simple action like lifting an arm is highly complex in terms of its many levels of such organisation.
I can lift a hand fast and reflexively if I put it on a hot stove. Pain signals only have to loop as far as the spine to trigger a very unthinking response.
Then I can lift the hand in a habitual way because I am intending in a general way to have my routine breakfast.
Or then I can lift my hand in a very artificial way - as perhaps in a laboratory experiment where I am wired up with electrodes and I'm being asked to notice when my intention to lift the arm first "entered my conscious awareness".
At this point, it is all now about some researcher's model of "freewill" and the surprise that a familiar cultural understanding about the "indivisibility of consciousness" turns out to be quite muddled and wrong.
Not that that will change any culturally prevalent beliefs however. As I say, the mind is set up to be excellent at ignoring things as a matter of entrenched habit. A challenge to preconceptions may cause passing irritation, but it is very easy for prejudice to reassert itself. If - like Querius - you don't like the answer to a question, you just hurry on to another question that you again only want the one answer to.
I quite agree. Its usefulness rests in helping to clear up some issues regarding inter-level material-realization vs. functional-level causal relationships and the threat of causal over-determination that always lurks. Its drawback is that it encourages what Susan Hurley called the sandwich model of the mind, which portrays mental operations as being located in a linear causal chain mediating (i.e. being sandwiched) between raw sensory "inputs" on one side and bodily actions (raw motor "outputs") on the other side.
Real computers are structured in hierarchical fashion. So once you start to talk about operating systems, languages, compilers, instruction sets, microcode and the rest, you are talking about something quite analogous to the organic situation where the connection from "software" to "hardware" is a multi-level set of constraints. Functions are translated from the level of programmes to the level of physical actions in a way that the two realms are materially or energetically disconnected. What the software can "freely imagine" is no longer limited by what the hardware can "constrainedly do".
Where the computational analogy fails is that there is nothing coming back the other way. The physics doesn't inform the processing. There is no actual interaction between sign and matter as all the hierarchical organisation exists to turn the material world into a machine that can be ignored. That elimination of bottom-up efficient/material cause is then of course why the software can be programmed with the formal/final fantasies of us humans. We can make up the idea of a world and run it on the computer.
So the computer metaphor - at least the Universal Turing Machine version - only goes so far. The organic reality is rather different in that there is a true interaction between sign and matter going on over all the hierarchical levels. Of course, this is more like a neural network or Bayesian brain architecture. But still, there is a world of difference between a computer - a machine designed to divorce the play of symbols from the play of matter - and a mind/brain, which is all about creating a hierarchically complex, and ecologically constrained, system of interaction between the two forms of play.
Computers are not "of this world" so can be used as devices to freely imagine worlds.
Brains are devices constrained by a world. But in making that relationship structurally complex, brains gain the functional degrees of freedom that we call autonomy and subjective cohesion. (The freedom to actually ignore the world being a central one, as I argued.)
Yes, I broadly agree. The interplay of worldly dynamic constraints and freedom to imagine (and, centrally, to plan actions) is explained in relation with the faculty of memory in an interesting way by Arthur Glenberg in his paper What memory is for, Behavioral and Brain Sciences, 20, 1, 1997.
Here is the abstract:
"Let's start from scratch in thinking about what memory is for, and consequently, how it works. Suppose that memory and conceptualization work in the service of perception and action. In this case, conceptualization is the encoding of patterns of possible physical interaction with a three-dimensional world. These patterns are constrained by the structure of the environment, the structure of our bodies, and memory. Thus, how we perceive and conceive of the environment is determined by the types of bodies we have. Such a memory would not have associations. Instead, how concepts become related (and what it means to be related) is determined by how separate patterns of actions can be combined given the constraints of our bodies. I call this combination “mesh.” To avoid hallucination, conceptualization would normally be driven by the environment, and patterns of action from memory would play a supporting, but automatic, role. A significant human skill is learning to suppress the overriding contribution of the environment to conceptualization, thereby allowing memory to guide conceptualization. The effort used in suppressing input from the environment pays off by allowing prediction, recollective memory, and language comprehension. I review theoretical work in cognitive science and empirical work in memory and language comprehension that suggest that it may be possible to investigate connections between topics as disparate as infantile amnesia and mental-model theory."
This has some relationship with the famous Libet experiments, doesn't it? They showed that the body moves before the subject is aware that they want to move it. A lot of people seem to interpret that as a blow against free will, but I see it as more an indicator of the limited role of ego; that a lot of what we do is pre-conscious, and the process of 'thinking about it' lags by a brief period of time, partially because it's not always necessary, and partially because there's a lot of work involved. But that doesn't mean that we are not free agents. When we are 'unconsciously competent' at something, when it becomes 'second nature', then we can perform it without necessarily thinking about it.
This video has some interesting points to make about that (John Haugeland and Hubert Dreyfus both appear):
Yep. So what the experiments illustrate is that we have "free won't", rather than freewill. As long as we aren't being hurried into an impulsive reaction, we can - the prefrontal "we" of voluntary level action planning - pay attention to the predictive warning of what we are about to do, and so issue a cancel order.
So part of the habit-level planning for a routine action is the general broadcast of an anticipatory motor image. As part of the unity of experience, the sensory half of our brain has to be told that our hand is suddenly going to move in a split second or so. And the reason for that is so "we" can discount that movement as something "we" intended. We ignore the sensation of the moving hand in advance - and so then we can tell if instead the world caused our hand to move. A fact far more alarming and deserving of our attention.
So Libet was a Catholic and a closet dualist, and that rather shaped how he reported his work as an experimenter. The popular understanding of what was found is thus rather muddled.
If you turn it around, you can see that instead we live in the world in a way where we are attentionally conscious of what we mean to do in the next second or so. Then at a faster operating habitual level, the detail gets slotted in - which includes this "reafference" or general sensory warning of what it is shortly going to feel like because our hand is going to suddenly move "of its own accord". But don't panic anyone ... in fact just ignore it. Only panic if the hand fails to get going, or if perhaps there is some other late breaking news that means mission abort - like now seeing the red tail spider lurking by the cornflakes packet.
So the Libet experimental situation was extremely artificial - the opposite of ecologically natural. But it got huge play because it went to the heart of some purely cultural concerns over "the instantaneous unity of consciousness" and "the human capacity for freewill".
But again this is reductionist to the extent that you're treating the subject - namely the human - in a biologistic way - explaining human nature in terms of systems, reactions, models, and so on. It's adequate on one level of description, but not on others. As far as free will (or won't) is concerned, the point from the perspective of a humanistic philosophy is not understanding the determinative causes of human actions from an abstract or theoretical point of view, but what freedom of action means. That is the point of the clip: an important part of what makes us human is that we care about something. In your case, you care about philosophy of biology, which drives you to explore it, enlarge the boundaries of it, and so on. Very good. But I don't know if that necessarily has a biological origin or rationale.
Quoting apokrisis
Isn't that 'the genetic fallacy'? Anyway, I'm Buddhist and an outed dualist. ;)
All modelling is reductionist ... even if it is a reduction to four causes holistic naturalism. And as I say, even the brain is a reductionist modeller, focused on eliminating the unnecessary detail from its "unified" view of the world. The brain operates on the same principle of less is more.
Quoting Wayfarer
Yep. But that is covered by my point that neuroscience only covers the basic machinery. To explain human behaviour, you then have to turn to the new level of semiosis which is linguistic and culturally evolving. So you can't look directly to biology for the constraints that make us "human" - the social ideas and purposes that shape individual psychologies. You do have to shift to an anthropological level of analysis to tell that story.
(And I agree that the majority of neuroscientists - especially those with books to sell - don't get that limitation on what biology can explain.)
Quoting Wayfarer
As it happened, Libet told me about his dualistic "conscious mental field" hypothesis before he actually published it in 1994. So I did quiz him in detail about the issue of his personal beliefs and how that connected to the way he designed and reported his various earlier brain stimulation and freewill experiments.
So I am not making some random ad hominem here. It is a genuine "sociology of science" issue. Both theists and atheists, reductionists and holists, are social actors and thus can construct their work as a certain kind of "performance".
And believe me, the whole of philosophy of mind/mind science came to seem to me a hollow public charade for this reason. For the last 50 years (starting from the AI days) it has been a massive populist sideshow. Meanwhile those actually doing real thinking - guys like Stephen Grossberg or Karl Friston - stayed well under the radar (largely because they too saw the time-wasting populist sideshow for what it was).
Thank you.
Quoting apokrisis
:-*
Quoting apokrisis
I don't understand the bad reputation which reductionism has received. If it's the way toward a good clear understanding, then where's the problem? I can see how a monist materialist reductionist would meet a dead end in the quest for understanding, at the Planck level where the material world becomes unintelligible, and this would appear as the limit to intelligibility, but a dualist reductionist would not meet the same problem. The dualist allows non-spatial substance.
Quoting apokrisis
I don't see this need. We hear people talking, we read books. These are perceptual activities. Why can't we treat them like any other perceptual activity? Why do you feel the need to look to something else, like social ideas, cultural constraints, to understand what is just another perceptual activity? In reality this is just the individual interpreting what one hears and reads, just like we interpret any act of sensation. The only difference is that when we interpret these sensations, speaking and writing, we assign a special type of meaning to them because we recognize the context of having come from other minds.
Reductionism, to put it bluntly, is 'nothing but-ism'. You may think you're a human being, endowed with inalienable rights, but in actual fact you're 'nothing but':
Quoting Metaphysician Undercover
Reductionists are generally materialist. If there are such philosophers as 'reductionist dualists', I would be interested to hear about them.
I always say it is fine in itself. It is only bad in the sense that two causes is not enough to model the whole of things, so reductionism - as a tale of mechanical bottom-up construction - fails once we get towards the holistic extremes of modelling. You need a metaphysics of all four causes to talk about the very small, the very large, and the very complex (the quantum, the cosmological, the biotic).
Quoting Metaphysician Undercover
Yep. Olde fashioned magick! Dualism is just failed reductionism doubling down to make a mystery of both mind and matter.
Quoting Metaphysician Undercover
You meant conceptual activities really, didn't you? :)
Or at least some of us read books and listen to people talk to gain access to the group-mind. It kind of defines the line between crackpot and scholar.
I'm pretty sure I'm dualist, and apokrisis has repeatedly affirmed that I'm reductionist, so where does that leave me?
Quoting apokrisis
No, I meant that hearing people speak, and reading books are acts of sensation. Don't you agree? And how the individual neurological system deals with these acts of sensation can be understood just like any other act of sensation. We can refer to those concepts of attention and habituation, which you like. Why should we refer to some concept of social constraints in order to understand these acts of sensation? The act of sensation is not being constrained by some aspect of society, it is just a matter of the individual focusing one's attention.
Quoting apokrisis
I don't read books, or speak to people to gain access to any "group-mind". Whatever that is, it sounds like a crack-pot idea to me.
Of course not. All my senses actually see is squiggles of black marks. My cat sees the same thing.
To interpret marks as speaking about ideas is something very different. It is to be constrained not by the physics of light and dark patterns but by a communal level of cultural meaning.
So without being a substance dualist, the semiotician has all the explanatory benefits of there being "two worlds" - the one of sign, the other of matter.
Quoting Metaphysician Undercover
Exactly. I mean who needs a physics textbook to know about physics, or a neuroscience textbook to know about brains? Just make the damn shit up to suit yourself.
Chalmers?
Trying to answer Apokrisis' question, I guess ;-)
I'm highly sympathetic to dualism, but I think everyone is flummoxed by the idea of how 'res cogitans' could be a 'non-extended substance', because the very idea of 'non-extended substance' appears self-contradictory. (I think I know how to resolve that, but I am never able to explain it.)
Quoting apokrisis
Chalmers has admitted to being a dualist, but I don't know if he's admitted to being a physicalist. I suppose he and Searle and others of that ilk, take issue with materialism but at the same time, they don't want to defend any kind of traditional dualism. (Need to do more reading.)
I haven't kept up with Chalmers's recent views. He's recently endorsed a sort of a functionalist view of the mind that accommodates externalist and "extended mind" theses regarding cognitive functions and propositional content. His joint paper with Andy Clark regarding Otto and his notebook (The extended mind. Analysis. 58 (1): 7–19) explicitly brackets out issues of phenomenal consciousness ("what it's like" questions; qualia and such). So, he may have remained an epiphenomenalist regarding the phenomenal content of consciousness. This, together with the idea of the intelligibility of P-zombies, is a position that seems to flounder on Wittgensteinian considerations regarding the necessary publicity ("publicness"?) of criteria that ground our abilities to learn and to understand the meanings of words that purport to refer to felt sensations.
That all computationally universal hardware is equivalent is not an analogy. The human brain has to be computationally universal. That's not an analogy either.
The mind isn't LIKE software, it IS software. The human mind is constantly changing - creating knowledge - by programming itself. It is also a type of software we don't understand yet.
And "everyone" touts Libet's 1980s experiments as evidence of the absence of free will, and ignores his 1990s experiments, where he demonstrated a mechanism for it.
Quoting Querius
I wanted to post a response to this point, made a couple of days back, because I think it's important, (if not directly connected with the OP).
I share the belief that 'the person' ought to be a central concern of philosophy, and the belief that the tendency to reduce or explain away the person on the basis of a purportedly naturalistic account of human life is a shortcoming of naturalism.
You can adhere to a naturalistic attitude in respect of the subjects of natural philosophy, but the 'naturalising' tendency can also tend to treat humankind as 'the human species', which I think amounts to a kind of 'objectification' (i.e. treating subjects as objects).
So, it appears to me to be that in the transition to an overall secular or naturalistic view of life, one of the essential aspects of 'the person' has been, as it were, lost in translation, because the Christian tradition had the notion of man as 'Imago Dei'. I'm not saying that as a Christian apologist or as an attempt to restore or return to such an idea, but out of a concern for the basis of values, in the absence of something of that kind of depth or profundity.
The Aristotelean idea of 'eudaimonia' and its associated 'virtue ethic' is a worthy candidate, and it's noteworthy that such a 'neo-Aristotelean' philosophy is increasing in popularity.
In any case, I just wanted to acknowledge Querius' point in respect of 'the person'.
Quoting tom
Not true. Software is part of a computer - that's the actual definition. Humans are not computers, and thoughts are not software. At best it's a model or an analogy. But, there's been a thread running on Online Philosophy Club, since 2007, about this very question, and it just keeps running. (Maybe it has a halting problem. ;-) ). In any case, I don't expect this is a disagreement that can be resolved.
Denial is always an option I suppose, but history will not be on your side.
Of course, if you managed to formulate an argument that the brain is not computationally universal, and that it could not be programmed (e.g. by training), and that therefore the mind could not be an abstraction instantiated on a brain, then you might have a point.
Might be easier to show that the entire theory of computation is wrong though. Go for the jugular and attack computational universality. Best of luck!
You ought to check Robert Rosen's Essays on Life Itself for such arguments. Also Howard Pattee's paper, Artificial life needs a real epistemology.
But even just from a good old flesh and blood neuroscience perspective, where's the evidence that the brain is actually any kind of Turing machine (even if you believe that any physical process can be simulated by a UTM)?
It's not 'denial' - it's another 'd' word, called 'definition'. The definition of 'software' is different to the definition of 'reason' or 'thought' or 'human consciousness'. They're different things. So yes, I'm denying that 'thought' and 'software' are the same thing. If you can prove they are, please send me an invite to the Nobel Ceremony. Best of luck!
The rational soul is the form of the human body, according to Aristotle. I likewise prefer to conceive of the mind as a set of powers exhibited by an embodied human being rather than as a feature of her brain, but that won't be my focus here. I would readily grant that humans are smart enough to execute whatever algorithm is given to them. Indeed they can do it as mindlessly as any old CPU, or as Searle would do it in his Chinese Room. I would also readily grant that mental abilities can be multiply realized in a variety of biological or mechanical media (be they better conceived as specifically implementing computational operations, or not) but this shows no more than that possession of mental skills is a formal feature of rational beings.
The mind/software analogy also glosses over other significant differences between rational beings and computers. Computers don't give a damn. Deep Blue could exhibit some level of intelligence through winning chess games, but, if given the opportunity, it will also play the same game one trillion times in a row and never get bored (or interested) in the least. The software can be tuned to exhibit some degree of randomness and learning, but doing so would only fulfill the wishes of the programmers or users. What is missing for the merely computational operations of a computer to constitute true mindedness is embedding and functioning within the animate form of life of creatures who give a damn what they are doing and what happens to them. As John Haugeland might have put it: people follow normative rules that they voluntarily endorse because those rules are constitutive of phenomena that they care about. So, people are quite unlike computers passively running programs on whatever data they are given to process.
I have to say that the latest understanding of biophysics at the nanoscale is now a serious challenge to multirealisability. Organic molecules have physically unique properties that allow them to flourish in a dissipative environment and function as various kinds of functional components. So the biologists don't have to grant the computationalists any kind of ground at all anymore if life and mind are semiotic processes rather than information processes.
And the beauty is that the onus is on computationalists to show that life and mind are "just information processes" now if they want to keep pushing that particular barrow. This is no longer the 1970s. :)
Peter Hoffman has done a great book - Life's Ratchet - on this.
The brain is computationally universal, but the mind certainly is not. There are many operations a mind will not perform, for reasons as diverse as morality and boredom. Humans can't execute algorithms mindlessly, and they don't execute algorithms like those we program into machines.
Quoting Pierre-Normand
I certainly wouldn't grant that. The only known object in the universe which instantiates mental abilities - i.e. creates knowledge, possesses qualia and general intelligence, is the human brain.
Quoting Pierre-Normand
It would have to gloss over the significant difference between rational beings and abstract beings instantiated on computationally equivalent hardware. Or more precisely, the significant difference between minds instantiated on human brains and minds instantiated on computers.
Quoting Pierre-Normand
Brains don't give a damn either.
This doesn't show that humans can't perform those operations; only that they may occasionally choose not to. Humans don't really instantiate universal Turing machines because they are finite mortal beings, but then so are human brains. But I don't quite know what your argument is anymore. You seemed to be arguing that the mind was the software of the brain, quite literally. Your ascribing vastly superior computational powers to brains than you do to people supports this contention how?
On this, at least, we agree.
Well, according to this, "real computers have limited physical resources, so they are only linear bounded automaton complete. In contrast, a universal computer is defined as a device with a Turing complete instruction set, infinite memory, and infinite available time."
Given that we don't have infinite memory or infinite available time (or a Turing complete instruction set?), the brain isn't Turing complete. It's only linear bounded automaton complete.
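The definition quoted above can be made concrete. A Turing machine whose tape is capped at a finite length just is a linear bounded automaton, and any physically realized computer (and, on the same reasoning, any brain) is in that position. Here is a minimal sketch, not taken from any poster: a toy machine that increments a binary number, with a hypothetical `TAPE_LIMIT` standing in for finite physical memory.

```python
# A toy Turing-machine-style routine: binary increment, least significant
# bit first. Capping the tape models a physical computer's finite memory:
# with the cap in place it is only a linear bounded automaton.

TAPE_LIMIT = 16  # hypothetical finite resource bound

def increment(bits):
    """Add 1 to a little-endian list of bits, one cell at a time."""
    tape = list(bits)
    head = 0
    while True:
        if head >= TAPE_LIMIT:
            raise MemoryError("out of tape: the machine was bounded all along")
        if head == len(tape):
            tape.append(0)       # extend with a blank cell
        if tape[head] == 0:
            tape[head] = 1       # flip 0 -> 1 and halt
            return tape
        tape[head] = 0           # carry: flip 1 -> 0, move right
        head += 1

print(increment([1, 1, 0]))  # 3 + 1 = 4, little-endian -> [0, 0, 1]
```

With the cap removed, the same loop is a perfectly good fragment of an unbounded machine; the cap is the only thing separating "Turing complete" from "linear bounded automaton complete", which is exactly the distinction the quoted definition turns on.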
Actually when you come to fully grasp the concept of "substance", especially in light of modern physics, the very opposite becomes apparent. Spatial extension refers to the form of an object. Following Aristotelian logic, the existence of matter is assumed to substantiate the independent existence, the particularity, of such objects. In order that what we say about things may be true, and applicable to particular things, and our logical proceedings may be properly grounded, we assume matter, as substance. Matter is inherently distinct from form, and this is Aristotelian dualism. Therefore it is impossible that substance, as matter itself, has spatial extension because this would be to say that it has a form.
The problem which modern physics has, is how to come to grips with "non-extended substance". Non-extended substance has been assumed in the concept of "point particle", which derives from the way that gravity has been modeled, as centred on a point. This is inherent within the concept of mass. But non-extended substance may be given properties, such as charge, and spin, in a way that extended forms would have such properties. If the difference between extended and non-extended existence is not properly determined, through some form of dualist principles, and upheld in the principles of physics, mistakes are inevitable. The use of non-extended substance, in conceptual form, has run rampant through physics, with complete disregard for any need to distinguish between spatially extended and non-spatially extended existence, to the point that we now have things like virtual particles.
Quoting apokrisis
I've read a lot of books, but I do this to enrich my own mind, not because I think it will make me part of some fictitious "group-mind". The problem with your group-mind idea is that it makes the false assumption that society is some sort of whole, a unity, without determining the real principle "God" which validates this unity. So it is God which is the true unity, and the group-mind is just an unsuccessful attempt to conceptualize that unity without the necessary and essential aspect of that concept - God.
This is the sin of Lucifer, Satan, the fallen angel. Because of his great power, given to him by God, he believes that he is God. That is the sin of self-deception, and God has no recourse but to exile that angel. By saying that human beings create a group-mind, without attributing this unity to God, you assign to the human race the property of God, and commit the sin of the fallen angel.
Cripes. So social constructionism is the work of the Devil.
Damn straight!!!
Metaphoorically speeaakiiing ;)
If downward causation is indeed beyond the control of the person, then this flies in the face of rationality, as I have argued before.
I don't believe it is good to make generalized judgements like this, casting a blanket of good or bad over an entire 'ism. That appears to be your approach to reductionism. But many such 'isms are epistemological only, providing principles of guidance within particular epistemic categories. It is how these principles are related to what is outside the category, how we relate an epistemology to an ontology for example, which is where we should make such judgements of good and bad. So in the case of social constructionism, some might believe that social constructs are natural, and some might believe that they are artificial, and others might simply use some of the epistemological principles without making such a judgement.
The distinction I am drawing is between the physics and the abstraction.
At the risk of repeating myself, it has been proved that all real universal computers are equivalent. The set of motions of one can be exactly replicated on the other. It has further been proved that any finite physical system can be simulated to arbitrary accuracy, with finite means, on a universal computer. The brain can thus be simulated on a universal computer, whether it is itself universal or not. Whatever a brain can do, a computer can do. There is nothing beyond universality.
The idea that the brain is not computationally universal seems somewhat churlish. Given that we know that the capabilities of the Mind far exceed the capabilities of any currently known programs which run on computationally universal hardware, to claim the brain is non-universal, does not give credit where credit is due!
The brain is computationally universal, just like my laptop.
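The universality being claimed here has a concrete face: universal behaviour is cheap to obtain, and one system can emulate another with very little machinery. A standard illustration (not one any poster gives) is Rule 110, a one-dimensional cellular automaton that Matthew Cook proved Turing complete. A few lines suffice to emulate it, which is the sense of "whatever a brain can do, a computer can do" being leaned on; note that this says nothing about efficiency, or about the semiotic objections raised elsewhere in the thread.

```python
# Rule 110, a one-dimensional cellular automaton proved Turing complete,
# emulated step by step: one machine simulating another.

RULE = 110  # the update table for all 8 neighbourhoods, packed into one byte

def step(cells):
    """Apply one synchronous Rule 110 update, with fixed 0 boundaries."""
    padded = [0] + cells + [0]
    out = []
    for i in range(len(cells)):
        # Read the 3-cell neighbourhood as a number 0-7, then look up
        # the corresponding bit of RULE.
        neighbourhood = padded[i] * 4 + padded[i + 1] * 2 + padded[i + 2]
        out.append((RULE >> neighbourhood) & 1)
    return out

row = [0, 0, 0, 0, 1, 0, 0, 0, 0]
for _ in range(4):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

That a lookup table this small already yields universality is why computationalists treat universality as a low bar; whether crossing that bar tells us anything about minds is, of course, the point in dispute.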
Quoting John
I've never said this. I've never said through the laws we know nature as it appears to us, nor have I ever implied such a noumenon/phenomenon distinction. In fact I said quite the contrary - the laws themselves do not even reveal the phenomenon to us.
Quoting John
This is true.
Quoting John
Not so, because it's something that we observe phenomenally directly. We observe how the self is given birth and arises out of nature and out of our community.
Quoting John
Why not?
Quoting John
That would depend on what you consider its essence to be I think. If you consider its essence to be the special significance of the Trinity, or man-become-God and God-become-man and such Hegelian notions then I'd agree with you. If you consider its essence to be love, then I don't think it's contradictory at all, except in showing that God cannot love us the way we love God. But I don't see why that's so bad for Christianity. Furthermore - it is utterly rational and undeniable, so given that philosophy is such and such, we have to shift our religious understanding by its lights.
Your version of Christianity seems to be quite Protestant - do you consider yourself a Protestant Christian by the way?
Apart from entanglement, the geometry of the universe, the big-bang, cosmic microwave background ...
Sure, the laws reveal nothing.
:-} http://thephilosophyforum.com/discussion/comment/54437#Post_54437
I forgot, gravitational waves were revealed by theory 100 years before they could be measured. It only took ~50 years to observe the other features of reality like entanglement and the cosmic microwave background.
And don't forget cosmology takes us to times before the big-bang, whose signal is revealed to be within the CMB.
No, theory clearly reveals nothing.
Quoting tom
Quoting tom
You realise all these are nothing except useful fictions which we have invented in order to conceptualise our measurements, and create a system which enables us to make conceptual-based predictions? There is no big-bang, entanglement, cosmic-microwave background, etc. above and beyond their effects and predicted effects. We could re-name and re-conceptualise all of those. The Big Bang could be a Small Whirl, etc. There's an infinity of re-conceptualisations which we could use, and which could predict the same things.
Quoting tom
Again - this is pure concept, it has no reality. It's useful because it helps us think about a model, and thinking about the model helps us predict the world.
Quoting tom
Big-bang, CMB, etc. are concepts, not realities. They are pieces which together form a coherent whole, which is our scientific model of reality. Nothing more.
Quoting tom
I see nothing revealed there about reality.
You are in a state of irrational denial.
The Earth really does orbit the Sun, and the Earth really is not flat and it's not turtles all the way down, really.
And entanglement was really predicted in 1935 and really observed in 1982.
And gravitational waves were really predicted in 1916 and really observed in 2016.
And do you think solid state electronics was an accident, or did solid state physicists really predict that if they could really build a p-n junction, they could make diodes, amplifiers ... ? Solid state theory could not even be conceived of without quantum mechanics!
And the CMB really is light from the big-bang, and certain as-yet-unobserved patterns in the polarization of the CMB radiation will be evidence of eternal inflation.
Quoting tom
What does this mean? Does this mean that if I go in a straight line I will return ultimately to the point I started from? Yes it does. Therefore the Earth not being flat is a model for the underlying reality. The underlying reality is what you experience directly - ie returning to your starting position if you go in a straight line.
I had no idea you really did that. Quite a trip eh?
You said this:
Quoting Agustino
And I said this:
Quoting John
Now, as I understand it, there is no difference between "models we use for purposes of modelling and predicting the world" and "they are merely predictive models that tell only about how nature appears to us" since "the world" just is the term we use for "how nature appears to us", or vice versa; they are synonymous.
But in your response, you have subtly changed the wording of what I had said: "I've never said through the laws we know nature as it appears to us". You go on to say "the laws themselves do not even reveal the phenomenon to us". Well, of course they don't, our senses reveal the phenomenon to us, obviously. The laws are models that tell us only about how the world appears to our senses, and including about the functions of our senses themselves vis a vis their objects.
The salient point is that the laws are models that tell us stories about the world only as it appears to us. They do not tell us anything about the world as it is in itself. The latter is something we are capable of conceiving of as a mere possibility; it is a possibility we can know nothing about. Its importance lies only in the very obvious fact that we can conceive of it. The fact that we can conceive this way of something hidden from us has had incalculable effects on human social, historical, religious and creative development.
Then explain how our theories can tell us stories about things we will have to wait 50 to 100 years to observe?
If you look at any scientific theory, it explains what we see in terms of what we cannot see. Our theories tell us stories of how the world really is and, from them, we deduce what we will see, even if it takes 100 years to develop the technology.
Our theories tell us about how we think the world might be in itself. Any understanding of how the world could be in itself in accordance with our theories can only ever be given in terms of how the world appears to us, and so would be utterly speculative. Anything that we see in the future will be the world as it appears to us; what else could it be?
Still this dualistic crackpottery.
A computational simulation is of course not the real thing. It is a simulation of the real thing's formal organisation abstracted from its material being.
This should be easy enough to see. A computer relies on the physical absence of material constraints. It is cut off from the real world in that it has a power supply it doesn't need to earn. It doesn't matter what program is run as the design of the circuitry means the entropic effort is zeroed in artificial fashion. The whole set-up is about isolating the software from dissipative reality so it can do its solipsistic Platonic thing.
A brain is quite different in being organically part of the material world it seeks to regulate via semiosis. And you can see this in things like the way it is fundamentally dependent on dissipative processes and instability.
Where a computer must be made of Platonically stable or eternal parts - logic circuits frozen in silicon - the brain requires the opposite. It depends on the fact that right down at the nanoscale of cellular structure everything is on the point of falling apart. All molecular components are self-assembling in fluid fashion. So they are constantly about to break apart, and constantly about to reform.
And in having this critical instability, it means that top-down semiotic constraint - the faint nudges to go one way or the other that can be delivered by the third thing of a molecular message - become supremely powerful. This is the reason why a level of sign or biological code can non-dualistically control its world. It is why the "software" can regulate the materiality of metabolic processes, and on a neural scale, the material actions of whole bodies.
So science has looked at how organisms are actually possible. And the answer isn't computation but biosemiosis.
Computers are abstracted form. So they have the fundamental requirement that someone - their human masters - freezes out the material dynamics of the real world so they can exist in their frozen worlds of silicon (or whatever super-cooled, error corrected, machinery a quantum computer might get made of).
And organisms are the opposite. They depend on a material instability - being at the edge of chaos - that then makes it possible for top-down messages to tip stochastically self-organising processes in one direction or another.
As I say, that is what makes multi-realisability an issue. A Turing Machine can indeed be made out of anything - tin cans and string if you like.
But biology - in only the past 10 years - has shown how organic chemistry may be a unique kind of "stuff" that can't be replicated or simulated by simpler physical machinery (circuitry lacking the critical instability that then gives semiosis "something to do").
It is a happy fact that Turing himself was on to it with his parallel work on chemical morphogenesis. He was an actual genius who saw both sides of the story. But sadly UTMs have given licence to decades of academic crackpottery as hyped-up computer scientists have pretended that the material world itself is "computable" - as if an abstracted simulation is not the opposite of existing in a world of material process.
So you mean ... exactly what I said then?
Ie: Holism is four cause modelling, reductionism is just the two. And simpler can be better when humans merely want to impose their own formal and final causality on a world of material/efficient possibility. However it is definitely worse when instead our aim is to explain "the whole of things" - as when stepping back to account for the cosmos, the atom and the mind.
Except what if there is nothing besides nature as it appears to us? Away with the noumenon/phenomenon distinction. The noumenon doesn't exist in the sense the phenomenon exists - empirically. Hence there is no discussion of the noumenon asking what is it, bla bla - that is what you ask with regards to empirical matters.
Quoting John
I agree, but I'd say the laws are merely models which we can use to predict certain sense experiences in the world.
Quoting John
Only if you accept a noumenon/phenomenon distinction.
Quoting John
Well I can conceive of flying pigs too - are flying pigs therefore important? :P
Quoting John
Why do you say this?
Well, that's a small step in the right direction at least.
Quoting John
Of course theories are speculative - to be more precise, conjectural. But for that precise reason, they may have literally nothing to do with anything that has ever appeared to us. Many theories were conjectured to solve theoretical problems: Special Relativity solved the problem of the incompatibility of electromagnetism and Newtonian Mechanics; General Relativity achieved the unification of Newtonian Gravity and Special Relativity...
But what are these conjectures about? They are about how Reality really is. Are they wrong? Of course, but that is a bit harsh. It is more precise to describe our best theories as the prevailing misconceptions, that will be replaced in time, by better theories.
It would be a catastrophic mistake to ignore the fact that our deepest theories have made utterly shocking predictions about novel phenomena that are true. All future theories must respect these discoveries. Entanglement, quantum computing, electronics, teleportation .... are not going to go away!
We know from mathematics that there is a limit to what can be proved, but no such limit to conjecture, and no such limit to the scientific method exists.
No, not at all what you said. The modeling which you describe portrays final cause (intention, or telos) as top-down causation, instead of its true position, bottom-up, as is evidenced by free will. You really don't provide a four cause modelling, as your top-down causation is just formal cause, through and through. You haven't provided a position for final cause.
That's what allows thought, and life.
Nope. It is the semiotic interaction between the realms of sign and materiality that allow that.
Computation explicitly rules out the interaction between formal and material causes. So to actually build a computer, the dynamics of the material world must be frozen out at the level of the hardware. Computation is the opposite of the organic reality in that regard. And biophysics is confirming what was already obvious.
And that is before we even get into the other issue of who writes the programs to run on the hardware. Or who understands that the simulations are actually "of something". Or that error correction is needed because what the computer seems to be saying must be instead that kind of irreducible instability which is the real dynamical world intruding. (Oh shit, my quantum entanglements keep collapsing or branching off into other worlds.)
But keep on with the computer science sloganeering. I'm well familiar with the sociology of the field. No one cares if people talk in scifi terms there. It is the name of the game - always over-promise and under-deliver.
do you believe that nature would disappear if humans were wiped out?
Quoting Agustino
I can't see why it's not a perfectly valid distinction.
Quoting Agustino
Flying pigs, if they were real, would be just another phenomenon, so the analogy doesn't work. The fact is that we can and routinely do conceive of things in themselves. It is generally understood that there was a world long before there were humans, but of course we can only imagine that as though we were seeing it. What it looked like to a dinosaur or what it was like absent being perceived at all is simply unimaginable to us. But it is equally unimaginable that there was not what would appear to us as the Earth. So, if the Earth and its mountains, rivers, plants and animals etc. existed prior to humans, how would that not qualify as, despite our inability to imagine it as other than how we would see those things, the Earth and its creatures in themselves?
The question makes little sense to my mind. Things can only disappear or appear for perceivers.
Quoting John
Yes, but for the world to exist it doesn't need to appear to someone. This doesn't mean the world is noumenal at all though.
Quoting John
The world-in-itself is ultimately no different than the phenomenal world. We see the world as it is - there is no world other than the world as we perceive it. Before we existed, the world existed just as it exists now - the only difference is that now there exists someone to perceive it.
Just out of curiosity, have you read Meillassoux's After Finitude? I'm curious if you have what you think about his anti-correlationist (with regards to Kant) arguments.
It might be that the Earth just is the phenomenal thing and not the noumenal thing. So although there was indeed something that existed prior to humans, it would be a mistake to equate that thing with the Earth (or mountains, or rivers, etc). So it's not that there's this independent thing that is the Earth and that contingently appears to us a certain way, but rather that there's some independent thing (or things, if we're to be a realist about particle physics) that contingently appears to us as the Earth (which seems to be what you're saying in the first sentence of the second paragraph above, contrary to the rest of that second paragraph).
This is the approach Putnam seemed to take with his internal realism, and it's the only way I can see to avoid reductionism (which I think is a mistaken position).
Right, so when human beings disappear, the Earth will disappear even though there are no perceivers left for whom it can disappear, right? :-} Appearing and disappearing are events of perception, they are not ontological. To say the Earth is just a phenomenal thing - just something which appears - is incoherent. The Earth is exactly THAT which appears or disappears depending on the perceiver opening or closing his eyes, etc. The Earth as that which can both appear and disappear is independent of the perceiver.
It's not clear to me what you mean by saying that the Earth is that which appears or disappears depending on the perceiver opening or closing his eyes. I would say the same of pain – pain is that which appears or disappears depending on the firing of certain neurons. But given that it doesn't then follow from this that the pain is something that's always there, independent of perception, that only sometimes happens to be felt, it doesn't then follow from what you said that the Earth is something that's always there, independent of perception, that only sometimes happens to be seen. So there's something missing in your claim.
It seems to me that you want to reduce the Earth to the mass of particles out there in space that is causally responsible for the experience of the Earth. But to me that's akin to reducing pain to the neurons in my brain that are causally responsible for the experience of pain. I think such reductionism is mistaken.
At best I'm open to reducing pain to neurons that are firing a certain way, and so at best I'm open to reducing the Earth to that mass of particles that are causing me to have a certain kind of experience. But I'm less open to reducing pain to those neurons even when they're not firing or to reducing the Earth to that mass of particles even when they're not causing me to have an experience.
Yes, pain is something that is always there when there's the specific firing of certain neurons. Whether one is conscious of this pain is different, and that depends on whether a state of consciousness is present in the mind of the person experiencing pain. If I'm hit with a ball in the head and I have a concussion, while I'm knocked out it isn't that I'm not in pain, but that I don't perceive the pain - my perception has ceased, but the world goes on, unperceived. That's why when I wake up, I wake up feeling the pain.
Quoting Michael
The mass of the Earth in and of itself isn't sufficient to cause a perception. Perception is the result of the Earth and of your cognitive faculties together - it's two aspects of reality meeting that results in perception. But both aspects are real prior to perception.
"Everyone knows that the earth, and [i]a fortiori the universe, existed for a long time before there were any living beings, and therefore any perceiving subjects. But according to Kant ... that is impossible.[/i]"
The objector has not understood the fact that time is one of the forms of sensibility. The earth as it was before there was life, is a field of empirical enquiry...; its reality is no more being denied than is the reality of perceived objects in the room in which one is sitting.
The point is, the whole of the empirical world in space and time - including the world 'before man evolved' - is a creation of the understanding, which apprehends all the objects of empirical knowledge within it as being in some part of that space and at some point in time: and this is as true of 'the earth before there was life' as it is of the pen I am now holding a few inches in front of my face.
This, incidentally, illustrates a difficulty in the way of understanding which transcendental idealism has permanently to contend with: the assumptions of 'the inborn realism which arises from the original disposition of the intellect' enter unawares into the way in which the statements of transcendental idealism are understood.'
Bryan Magee, Schopenhauer's Philosophy, pp. 106-107 (paraphrased).
Yes I am aware that Kant thinks so, but his assumption must be questioned.
Quoting Wayfarer
How do we know that time is only a form of the sensibility? Isn't the sensibility itself within time? In fact, it seems that time itself is presupposed even to get the sensibility itself working. There can be no sensation without time - so not only is time something that structures sensation, time is also something which makes sensation itself possible.
I think you're still operating from the assumption that what is real is simply what is there in your absence - you can picture the world without you, or anyone, in it. But that picture is still based on a perspective, a point of view. In it, things have relationships, and scales. You can't picture it from no viewpoint, because from no viewpoint, nothing is large or small, near or far, long-lasting or ephemeral. The mind creates that framework or structure within which all judgements about what is real and what is not are made. Not your mind or my mind - the human mind, the biological-cultural-linguistic milieu which comprises the mind.
The only reason you think the mind presupposes the world, is because you yourself know you were born into the world. That's true, and it's all well and good, but it is not what is at issue in asking about the nature of the mind and its relationship to the world.
My argument is not that the world doesn't exist in the absence of any or all observers, but that whatever we can say we know about what exists, presupposes a perspective. Even if that is mathematicized, which effectively eliminates purely individual perspectives and gives a kind of 'weighted average' of all points of view, it's still an irreducibly human point of view, which is inextricably an aspect of whatever we say exists.
Dualist crackpottery, you say?
Quoting apokrisis
Semiotic interaction between the realms, you say?
I agree. Your view contrasts with the view expressed by Sean Carroll (quoted) in the OP of this thread. Physicists often are happy to equate "the Universe" -- the totality of what exists -- with some comprehensive set of "initial conditions" conjoined with a set of universally quantified statements ("universal laws"). Everything (i.e. every empirical truth; every state of affairs) is supposed to be determined by the initial conditions and the laws. This is a view of the "block universe" in which time just is another dimension akin to the three spatial dimensions. The human perception of the flow of time is alleged to be an illusion stemming from our merely subjective perspective, not just in point of temporal scale, as mentioned by Wayfarer, but also regarding the distinctions between present, past and future, which are taken not to be of any relevance to the objectively existing fabric of the world. Hence, Sean Carroll is led to downgrade the objectivity of the very notion of causality. In his view, nothing ever really comes into existence. The "block universe" being "eternal" at a fundamental level, events (or states of affairs) need not be caused to occur (or to be as they are) since the laws of physics govern everything, and the way in which they govern consists in their fully constraining the mathematical relationships between the layouts of the universe at all the singular moments of time (i.e. between elements of a full set of space-like slices of the eternally existing "block universe").
Such a view of the universe can't of course mesh with our view of the world as a source of possible objects of experience. Kant argues in the Analogies of Experience (in his CPR) that an empirical experience can't have an objective purport if it doesn't potentially rationally bear on other experiences. (Wilfrid Sellars also argued for this in his Empiricism and the Philosophy of Mind, currently being discussed in another thread.) And this is only possible if we can distinguish the successive experiences of a single thing that has changed from the simultaneous experiences of two separately existing things. The possibility of our conceiving of this simultaneity/succession distinction, in turn, depends on our ability to recognize laws that govern the evolution of enduring substances (i.e. laws that state their persistence conditions and their fallible (active and passive) powers). (Why those powers must be fallible is explained by Sebastian Rödl in his book Categories of the Temporal.) If it were conceivable that any "substance" could be experienced to have become any other "substance", with no law governing how its qualities tend to change over time, then there would be no telling whether two qualitatively distinct experiences refer to the same object (at different times) or to two distinct objects (at the same time). Thus, the possibility of the objectivity of experience presupposes the possibility of the experience of time (as a formal condition, rather than as a material content), and the possibility of the experience of time, in turn, presupposes the ability to recognize substances governed by laws. So, in sum, the category of a substance -- of an enduring object that can be experienced at different moments of time and that is governed by laws that specify its powers -- must be brought to bear by an experiencing subject on all her experiences if they are to have objective purport at all.
If this is right, the formal concepts of substance and of time are prerequisites of the intelligibility of the world.
But can't the world be simply conceived to exist (i.e. be intelligibly judged to exist) without its satisfying the condition of its also being a potential object of experience by agents possessed of finite intellects like us? This was the issue being discussed by Agustino, John and Michael regarding the existence of the Earth before there were humans experiencing it. It is important to recognize that the Earth is a potential object of experience of a distinctive formal kind. It is an enduring substance. As such, it doesn't exist qua object of experience independently of the specific substance concept that it is taken to fall under -- e.g. the concept of a rocky planet -- which specifies its conditions of persistence and individuation. Those conditions are tied up with the concept and aren't independent of our interests in individuating it thus. If we wonder at what point in time the Earth began to exist, for instance, this question can't be made sense of quite independently of our criteria for an object's inclusion in the (substance) category of a rocky planet. So, this is why the claim that the Earth existed before there were humans, quite independently of whatever humans ever thought regarding what it is that makes a planet the sort of thing that it is, doesn't quite make sense. The existence of the Earth, qua possible object of experience, doesn't depend on there actually existing humans actually or potentially experiencing it, which is something Agustino would be correct about if it were his only claim. But the very sense and intelligibility of the state of affairs being considered -- e.g. that the Earth existed three billion years ago -- is relative to some substance concept or other that corresponds to the specific interests of a potential subject of experience.
Sean Carroll's block universe, as he conceives it, within which time just is an objective parameter, doesn't contain any planet, because this conception lacks any criterion according to which some set of "particles" does or does not make up a "planet" in any specific space-like slice of his "objective" (so-called) universe.
Why would time be considered as a form of sensibility? The concept of time is produced by us relating numerous activities. What is being sensed here in this relationship? It is not the activity itself, being sensed, that lends itself to the concept of time, it is the relationship. The earth circles the sun once, then it circles again, and these are judged as "the same" amount of time, by comparing to the cycles of the moon, or other things. The concept of time is based in such relationships, not the things being sensed.
Quoting Wayfarer
This is a better explanation, time is "beyond perception", but that's why Agustino referred to it as transcendentally real. As you say, it's beyond any particular scale, human or otherwise. The fact that it's beyond any particular scale does not make it unreal, it only confirms the reality of it. Time is not a scale, it is really what is measured, and can be measured by many different scales.
Quoting Wayfarer
You are assuming here, that every description is particular to a perspective, and there's nothing wrong with that, it's a valid principle. Now, consider something which enters the description regardless of the perspective, and here we have time. Each and every perspective of reality includes time, so it is something which is common to every perspective. Therefore it is that thing which is evident from every perspective. It is what is real, objective, not perspective dependent.
Quoting Pierre-Normand
If we propose a "block universe", we propose a perspective from which there is no time passing. That is, there is no such thing as the activity of time passing, from that perspective. If we accept this proposition we deny that time passing is something which is evident, and observable from each and every perspective, these are incompatible. Then time as something real, independent, objective, is denied, because the objectivity of time is dependent on the assumption that it is something which is evident from each and every perspective.
Quoting Pierre-Normand
The idea that two distinct objects have "simultaneous experiences" is what, in the past, grounded our notion of objective existence. This gave us the notion that distinct things had something in common, the experience of time passing. This thing which they have in common was called existing. The precepts of special relativity do not necessitate that we dismiss this objectivity in favour of the block universe. What special relativity indicates is that there is vagueness with respect to "simultaneous experience". How we understand "simultaneous experience" greatly influences how we produce laws of physics. So there is variance within the laws of physics depending on one's interpretation of simultaneous experience.
Measuring time only becomes possible after time already exists. It's not measurement that makes time possible, but time that makes measurement possible. So no - humans don't conceive of time because of measurements or rotations... So why do they then conceive of time? Because of motion - activity - becoming. The notion of time is nothing more and nothing less than an abstraction extracted from change. Change gives the concept of time - this was so now, and it isn't so later. Without change, there is no time.
Scales are formed to allow measurement simply because there is no transcendental point where one can get outside of reality to judge it. Measurement is simply comparing one aspect of reality with another - a ruler with a desk (when measuring the desk). So the fact that measurement uses scales - and necessarily does so - proves immanentism and exposes all transcendentalism as incoherent. Time for example is nothing but comparing one change (a clock ticking) with another.
Quoting Wayfarer
All scales are equally real, since they map the same underlying reality. If X is equal to 3 x 30mm rulers in so and so circumstances, that is the same thing as saying X is equal to 9 x 10mm rulers in so and so circumstances - or even that X is equal to 1 x 30mm rulers if it's traveling very quickly. A giant will have a ruler which is equal to 10 of mine, maybe. Asking which scale is real, his or mine, is stupid though. They're both equally real.
Quoting Wayfarer
That is just a mental model, not reality itself. Mathematical models are just that - models. And of course you can't picture it from no viewpoint - that would entail being transcendent to reality, and you're not. You're immanent.
Quoting Wayfarer
Not only. I daily experience my mind being dependent on the world.
Quoting Wayfarer
Yes, there are only immanent explanations, not transcendent ones, thank you for finally coming to the realisation X-) Surely, conceptual knowledge presupposes that one is embedded within reality - and not transcendent to it.
Except of course, it is the very theory that reveals the block-universe to us - i.e. that the B-Theory of time is true - that explains the formation of planets and correctly predicts their orbits.
And time is a dimension, not a parameter, and it is relative, not objective.
Yes, of course. That was part of my point.
Special relativity relativizes the concept of simultaneity to "inertial frames of references" that are used to operationalize this concept (with the notional use of sets of co-moving rulers and clocks) as well as the concepts of physical length and duration. It doesn't have much bearing on the ideas of simultaneity or succession of perceptual experiences of rational agents as Kant was making use of them. That's because those concepts, as used by Kant to investigate into the grounding of empirical knowledge, are revealed to be tied up with the concept of an enduring substance and such a formal concept doesn't fall under the purview of physical law.
Physicists talk about specific substances all of the time (e.g. atoms, rocks and planets) but they rely on ordinary concepts of enduring material objects that fall under common sense sortal concepts with their associated persistence and individuation criteria, which physics as such says nothing about. Physicists usually are philosophically naive about substances. They fail to notice that their knowledge of ordinary objects (singular substances) isn't informed by physical theory. They also tend to fail to notice that singular substances as such only obey the so called laws of physics approximately and fallibly (e.g. on the condition that they don't change shape, don't lose or gain material parts, etc. etc.)
I never questioned the explanatory and predictive powers of the special or general theories of relativity, or the heuristic value of the "timeless" metaphysical pictures that they may suggest (for mere purposes of physical explanation). This picture of complete determinacy of the future (given some fully determinate specification of energies and momenta in some space-like surface), of course, rubs against the indeterminacy inherent to quantum mechanics. Only through endorsing a time-independent state formalism can you attempt to reconcile QM with the block-universe view, as you are wont to do. But this is to gloss over the measurement problem of QM and the fact that the measurement operators carry over the time-dependence of actual measurement operations (e.g. through specifying the time-evolving basis of the projection of the time-invariant state vector, in Dirac's formalism).
Yes, I agree that it doesn't make sense to speak of the noumenal thing as the Earth, because the Earth is knowable and the noumenon is, by definition, not. The noumenon is, however, minimally conceived as the transcendental conditions of or for experience that can never appear in experience. Experience can thus never be completely transparent to itself, or knowable from the inside out, so to speak. The idea of the noumenal is an expression of the existential realization of this universal human condition.
If you resist the temptation to invoke "collapse", QM is a fully deterministic theory. Copenhagen is not a theory of reality, so it has nothing to say about whether reality is really deterministic or not. De Broglie-Bohm is fully deterministic, as are the various Everettian interpretations. The indeterminacy is purely epistemic, and not a feature of reality.
That said, according to realist no-collapse QM, we have determinism, but space-time is false, or rather an approximation.
Quoting Pierre-Normand
There is no measurement problem in realist no-collapse QM.
No, but then there's the problem of there being many worlds. The remedy is worse than the disease in my opinion.
Quoting Pierre-Normand
Very well said, your comments are extremely helpful thank you. I notice that two of Sebastian Rödl's books are available in my University library, and thanks for alerting me to him.
There isn't. Everettian QM adds zero worlds to those already proposed by cosmological theories. Denying Everett does not reduce the number of worlds.
So, are you saying this is not the case? Is this paragraph wrong, and should the article be changed?
Quoting Agustino
Quoting Agustino
You're overlooking the very act of measurement itself. Most of what you say about 'what already exists' is, I think, the subject of the criticism by Sellars in his essay 'the myth of the given'. You presume that we can compare 'models' with 'reality itself', as if you can rise totally above the act of knowing, and know what it is you don't actually know. Then, by claiming you know 'reality', you say that what we think we know is 'a model'. You're not seeing your own sleight-of-hand here.
Quoting Agustino
Q: What do you call a Greek sky diver?
A: Con Descending. ;-)
Those must be Self-Consciousness and Categories of the Temporal: An Inquiry into the Forms of the Finite Intellect. The latter appeared in English translation only later (slightly updated, it seems), having first been written by Rödl in German (Kategorien des Zeitlichen: Eine Untersuchung der Formen des endlichen Verstandes). Although Self-Consciousness is excellent and, among other achievements, clarifies some core aspects of John McDowell's epistemology, Categories of the Temporal is my favorite and is an unmitigated success, in my view. It may be worth reading first. I have only one small reservation regarding one subsidiary thesis -- about the divisibility of movement -- that is not damaging to the main argument at all.
You must be thinking of Empiricism and the Philosophy of Mind.
Wikipedia, the font of all that is true. Well, try this Wikipedia:
https://en.wikipedia.org/wiki/Multiverse
Pay particular attention to "Level III" multiverse.
No one complains that our infinite universe subject to the Bekenstein Bound implies a multiverse of identical and near-identical copies of ourselves. No one complains about inflationary theory being impossible because it implies a multiverse.
No one complains when Susskind and Bousso and others identify Level III and Level I as the same thing. Yet apoplexy ensues when the only multiverse we have evidence for is mentioned. Bizarre!
I am perturbed by the reference to materialism. I think I understand that Marx's interpretation of materialism is historical and economic, but I'm afraid I always tend to see materialism as being the prime philosophical adversary. Elsewhere, his books are referred to as an effort to re-vitalise the German idealist tradition. So I'm a bit nonplussed by that.
Quoting Pierre-Normand
The Myth of the Given is the one essay I have read, albeit, probably not very well or deeply.
I am not complaining about those, because I have not the least idea what they mean, nor do I particularly care to find out. After all, this is a philosophy forum, it is not actually Physics Forum.
But in any case, the other Wikipedia article, on multiverses, Level III, casts no light whatever. It still maintains there are indeed doppelgangers and multiple worlds. And Tegmark's books on the multiverse are routinely criticized by reviewers for verging on science fantasy.
Even in no-collapse interpretations, there is a process of decoherence into "coherent histories" (analogous to the "worlds" of the many-worlds interpretation) that takes place. Coherent histories are correlated to (or "relative to") the macroscopic states of measurement apparatuses, or of the embodied human observers themselves (who actively single out aspects of the world to observe, and who don't conceive of themselves as sorts of queer superposed Schrödinger cats). This is what "relative state" refers to in Everett's "relative-state formulation of QM".
So, in all the common interpretations of QM, including "no-collapse" interpretations, there always is a tacit reference to measurement operations, and the choice of the setup of a macroscopic measurement apparatus always refers back to the interests of the human beings who are performing the measurement. The processes of either "decoherence" or "collapse" of the wave function (or of "projection" of the state vector) amount exactly to the same thing from the point of view of human observers. In order to escape this essentially perspectival human predicament, and ignore the measurement problem, you have to label all of empirically observable reality an illusion and proclaim the mathematical abstraction represented by the Schrödinger wave function of the whole universe "reality". "Reality," as you conceive of it, is beyond the reach of observation, or of our own empirical concepts.
Then why do you care about Everettian quantum mechanics?
Quoting Wayfarer
Tegmark did not invent the Level I multiverse, he just gave it a catchy name. If our consensus model of cosmology is true - i.e. Lambda-CDM - then there are an infinite number of exact copies of you at a mean separation of ~10^10^115 metres.
Tegmark did not invent the Level II multiverse, he just gave it a catchy name. According to the consensus theory of the creation of our universe, inflation is an eternal process giving rise to an infinite number of universes - i.e. a multiverse.
Tegmark did not invent the Level III multiverse, he just gave it a catchy name. This is the quantum multiverse, the only multiverse for which we have any evidence. It adds no complexity to Levels I and II.
The "multiverse interpretation" of QM was an idea that came to light from several researchers at about the same time. Most notably Susskind and Bousso explored the idea and published a paper on it. These are serious big names in physics, as big as you get! The idea has since expanded into the slogan ER = EPR. A very big idea, informing a great deal of research. Quantum entanglement and wormholes could be the same thing!
Quoting Wayfarer
Tegmark made a list. Don't be too harsh!
I wouldn't make too much of that. This surprising gloss that Rödl has chosen to put on his project makes him no more of a materialist than Aristotle already was. Rödl's work "naturalizes" Kant, in a sense, through displaying how human beings, who are finite and essentially embodied substances, (in Aristotle's sense), and hence "material", can nevertheless gain a priori knowledge of the form of their own intellects in virtue of them already embodying this form as a rational capacity (rather than gaining knowledge of it empirically, which would be a sort of psychologism). Also, there is no mention of Marx or of materialism in Categories of the Temporal (and only a passing reference in just one chapter of Self-Consciousness).
Let me also reference this excellent albeit very short review of Categories of the Temporal by Aloisia Moser. (This is a direct link to the pdf file)
Yep. Decoherence - at the level of heuristic principle - says all the troubling indeterminacy disappears in the bulk behaviour. So that probabilistic view gives us an informal account of collapse that fits the world we see.
Of course, the existing quantum formalism doesn't itself contain a model of "the observer" that would allow us to place the collapse to classical observables at some specific scale of being. But then either one thinks that is the job of a better future model - which seems the metaphysically reasonable choice. Or one can go crazy with the metaphysics and say every possible world in fact exists - a "solution" which still does not say anything useful about how world-lines now branch rather than collapse.
So the main reason for supporting MWI is that it is ... so outrageous. It appeals because it is "following the science to its logical conclusion" in a way that also can be used to shock and awe ordinary folk. Scientism in other words.
The ideas of physics obviously have philosophical implications; many worlds and multiverses (and yes, I know they're different kinds of ideas) are nowadays influential, and, I think, maybe they're profoundly misleading. I know there's a lot of research being undertaken on the basis of such ideas, but I think a more austere and authentic science wouldn't become beguiled by them.
Quoting Pierre-Normand
Thank you. I shall definitely look into that book.
I was unaware that Sellars had written an essay with that title. Are you sure? John McDowell wrote Avoiding the Myth of the Given a few years back...
'Sellars' most famous work is the lengthy and difficult paper, "Empiricism and the Philosophy of Mind" (1956).[6] In it, he criticises the view that knowledge of what we perceive can be independent of the conceptual processes which result in perception. He named this "The Myth of the Given," attributing it to phenomenology and sense-data theories of knowledge.'
Which was what I meant in respect of Agustino's post.
Yes, knowledge in the sense Sellars means is always already in "conceptual shape". This doesn't change the fact that 'things' are known non-conceptually. But this latter kind of direct knowledge is ineffable, and anything we say based on it cannot be an item of propositional knowledge justified by that form of knowing. The 'feeling' of the knowing may only be evoked in artworks, music, poetry or the language of mysticism.
I don't think I would agree with this statement, specifically the part about enduring substance not falling under the purview of physical law. I think that Newton's first law, sometimes called the law of inertia, provides the formal concept of enduring substance. I admit that this law takes enduring substance for granted, but that's what laws of physics do, they take for granted what the law states. What this law says is that any substance will continue to exist, exactly as it has in the past, unless acted upon by a force. So what this law does is describe enduring substance as that which continues to exist as it has in the past, unless it is acted upon.
It is a very specific sort of substance that strictly obeys Newton's first (or second) law. It's a substance that is either defined as the mereological sum of its material parts, or that consists in an essentially indivisible mass. Substances that can survive the loss of some of their massive parts, or maintain their identities through the accretion of new massive parts, such as plants, animals, most artifacts, and celestial bodies, don't strictly obey Newton's laws of motion precisely because of the principle of conservation of momentum (which is a consequence of those strict laws). Those laws only strictly apply to physical "matter", things that have invariant mass. When an ordinary substance gains or loses parts, conservation of momentum only applies to the unchanging mereological sum of this substance and of the parts that it either lost or gained.
What the laws of physics are quite silent about -- and this was my main point -- are the principles that govern the conditions of persistence and individuation of ordinary empirical substances. These fall under the purview of irreducibly "higher-level" laws (such as the laws of biology), or depend on human conventions possibly tied up with specific pragmatic interests, in the case of artifacts, or some combination of high-level laws and conventions in the case of objects that fall under semi-technical concepts such as planets, mountains, etc.
I think there's a bit of equivocation going on here. I think that Newtonian physics only deals with mass, not 'substance' in the formal sense, the meaning of which you so ably set forth in your earlier response to me; in fact it's this very point which differentiates modern physics from its Aristotelean precedents. Whereas the formal idea of substance is concerned with identity, physics generally has no concern with the nature of the identity of the substance it is measuring at all; I expect if you asked a physicist about the 'nature of substance' she would reply 'that is something you need to ask a chemist about, dear chap'.
Quoting John
Every time I come home, I hang the car keys on the top hook. That doesn't change the fact that sometimes I hang them on the bottom hook.
And yet it is not Christmas...
:s
No worries, but if you think I have contradicted myself then evidently you don't allow for the possibility of any kind of direct knowing. Presumably this would mean either that animals cannot know anything, or that they can conceptualize, or that things themselves are in conceptual shape independently of their being perceived. If either of the latter two obtain, then conceptualization cannot be dependent on linguistic capacities.
The difference may cut even deeper if, as Aaron R mentioned in another thread, true Aristotelian substances are unities of matter and form, of dunamis and energeia. In that case, the only true substances are living organisms, and also, arguably, pure chemicals or chemical elements (as argued by Aryeh Kosman in The Activity of Being: An Essay in Aristotle's Ontology). Artifacts and non-living objects like stones and mountains are substances only by analogy. (The unity of their material parts is accidental according to scholastic metaphysics.) This definition, though, may be a bit too narrow for purposes of Kantian epistemology. Kant's category of substance has wider extension. If we thus broaden the strict Aristotelian concept to encompass any enduring object ("continuant"), then we lose the intimate unity of matter and form, but we get a formal concept of substance more suited for analyzing the variegated ontologies of the "special sciences" (and our empirical cognition of their objects) and of the many areas of the human world.
What classical mechanics typically deals with are such things as idealized point particles, rigid bodies, homogeneous incompressible fluids and unbreakable ropes. Real objects that approximate the behavior of those things (gas molecules, liquid water, billiard balls, levers, planets, etc.), considered over short enough time frames and gentle enough mutual interactions, can be considered the "substances" of classical mechanics -- a science not concerned with attributes of "form" other than mass, geometrical shape, and the objects' powers to exert Newtonian forces on one another (or to generate force fields). The massive objects that thus approximately obey the laws of classical mechanics, considered merely as such, still have individuation and persistence conditions, albeit trivial ones (since they don't undergo generation, corruption or mereological variations). They are re-identifiable enduring objects, and Kant's arguments in his Analogies of Experience speak to our concepts of them (among very many other concepts that are alien to physics proper). They also have specific powers. They are typical substances in the broad sense of contemporary analytic metaphysics (Peter Strawson, David Wiggins, ...) that owes as much to Kant as it owes to Aristotle.
I think 'animal knowing' can generally be subsumed under the heading of 'stimulus and response'. When you say 'direct knowing', then there's the 'myth of the given' again - you assume that the objects of sense - that which is 'given' - are simply real, or that they have an inherent reality, which philosophers then speak about. But what that doesn't come to terms with is the way in which sense experience is incorporated and interpreted by the human, or the sense in which knowledge is constituted by the activities of the perceiving intellect.
Note this passage from a review, by a theologian, of Lawrence Krauss' 'Universe from Nothing':
(Remainder here.)
Quoting Pierre-Normand
Right! Because the Aristotelean term for 'substance' was originally 'ousia', which is nearer in meaning to 'being' than to what we now designate as 'substance'; and only beings are 'bearers of predicates', or 'subjects', per se - '[primary] substance is that which is always subject, never predicate'. So this enables a distinction between the nature of 'beings' and 'objects', which I think has been subsequently lost or forgotten.
Right, so you think of animals as Descartes did; as machines? When I refer to "direct knowing" I am not thinking primarily of what is known by the senses at all, although this kind of 'knowing' may be evoked by artworks, music or poetry; which obviously would be impossible to experience without functioning senses. The term could also apply to the 'knowledge' which is supposed by some to be had by mystics and prophets.
I haven't been speaking at all about the purported reality of objects of the senses; I have no idea where you got that idea. Perhaps read a little more closely?
Thanks, but I'm not familiar with the Krauss book, so the review doesn't mean much to me.
Yes, living things, for Aristotle, are the paradigms of substance in the strongest sense. They display a unity of matter and form in the sense that their identifiable proper parts are active only qua essentially being parts of those beings (i.e. having as their proper function to sustain the characteristic activity of the substance they are a part of). The human heart is essentially a part of a human being, and so is human blood, human bone tissue, etc. This is why the elements also are considered to be primary substances according to Aristotle, for there is no part of fire that isn't essentially fire (according to the ancient Greek conception of fire as one among four elements, of course). Still, my point was that under this construal, artifacts, stones and planets also are primary "beings", i.e. primary bearers of high-level predicates that only apply to beings of that kind. It just so happens that they are imperfect substances, in a sense, since their matter is imperfectly united with their form. Their overarching forms (i.e. their essential qualities or functions) only partially inform the structure of their parts, and those parts thus retain a unity of form that is alien to the substances they are constituents of. For instance, a chair can be made of wooden planks that retain their identities as wooden planks while they merely (and accidentally) lend a hand, as it were, to enabling the characteristic function of the chair (and thus, also, enable its existence qua functional artifact) while they are well fitted parts of it.
But my main point was that Kant is justified in using his category of substance with the broader extension (to encompass "imperfect substances") since it functions as a necessary formal principle of the intellect that allows us to gain empirical knowledge of all sorts of material "beings" (e.g. boats, planets, rocks) other than just living organisms or chemical elements. My main aim in bringing those metaphysical considerations to bear on the present discussion was to show why and how our conception of "planets" (a "sortal concept", which is a concept of a "substance" in the broader, imperfect sense) still must be presupposed to be operative in the asking of the question regarding the existence of the Earth in the distant past. It is this concept of a planet, incorporating the understanding of its specific mode of existence, that determines the truth conditions of the intelligible claim about the past existence of the object, regardless of the fact that the bits of rock and magma that make up the planet may have a somewhat "accidental" (imperfect) unity from a scholastic point of view.
According to Aristotelian physics, any object, living or inanimate, is a unity of matter and form. Aristotle finds it necessary to assume this duality in order to account for the existence of "change". When a thing changes, it in some ways stays the same (remains the same thing, only changed), yet in another way it must change. What remains the same, and persists despite the change, is the matter; what changes is the form. The assumption of matter is necessary to account for the potential for change, and the assumption of form is necessary to account for actual change. So all changing things must consist of both matter and form, to account for the fact that something always persists through a change, yet something always changes; otherwise it is not a "change".
In his biology he describes a special type of form called the soul. Living bodies have a soul, and in the case of a living body, this form, the soul, is prior to any form that the material body may have. So the soul is defined as the first actuality of a body having the potential for life. This is somewhat ambiguous, but in Neo-Platonism, the form of any body is prior to the material existence of that body.
Quoting Pierre-Normand
I would not agree that it is necessary for "substance" to be defined as indivisible, or as the sum of its material parts, in order for substance to obey Newton's first law. This law is not concerned with division; it provides no principles for division or unity of parts. The existence of parts is irrelevant to Newton's first. To account for division we need to describe individual parts each as separate substances, each having matter and form. So division is a case of one substance, having matter and form, becoming a number of substances, each having matter and form. If we assume that division is a "change", then we need to account for the persistence of matter, as the thing which stays "the same" through the change, and this is the conservation of mass. In more modern physics, conservation of mass is replaced with conservation of energy, so that energy replaces Aristotelian matter as the thing which persists, stays the same, through the change. Also, though, we need to account for a type of mathematical difference: one form becomes numerous forms.
But the point is that division is not covered by Newton's first law. Persistence is covered, but division is not. So if we want to describe division under the terms of Newton's first, the single object, matter with form, moving as a unity of substance under Newton's first law, must be defined as a number of objects, each with its own matter and form. This allows each of the parts to move in different directions. So it's not the case that "substance" would be defined in a different way; it continues to be defined as a unity of matter and form. But it is the case that the object is described in a different way. So Newton's first would still apply in both cases. In one way the object would be described as a single substance, and in the second it would be a group, a number, of substances.
Since the description of the object is the form of the object, then the two descriptions are logically not descriptions of the same object. In one case, there are a number of objects, and in the other there is a single object. There is an inherent incompatibility between these, such that it is incorrect to say that they are two different descriptions of the same object. They are not, they are only the same to the extent of an assumed material equivalency. They have distinct forms, which makes them distinct objects, but we assume that it is the same matter. Such an assumption is a falsity, because the same matter cannot have, at the same time, different forms.
I honestly can't follow what you mean here, nor what this has to do with "The Myth of the Given"...
If you are going to quote, then the name is "Consistent Histories" or "Decoherent Histories". The first is explicitly epistemic, the second equivocates. The formalisms are however used in Unitary QM.
Quoting Pierre-Normand
It might be worth pointing out that "coherence" is precisely what decoherence destroys. Anyway, I think you have Decoherent Histories a bit wrong. They are for closed systems, and don't tell you which history occurred; rather they give, under certain consistency conditions, a probability space for the coarse-grained histories. Of course the major difference between Everett and Decoherent Histories is that Everett regards the histories as real.
As for relative states, I think you've got that the wrong way round. It is the observer who is put into a state relative to each of the outcomes of a measurement. One counterpart is put into the state of having measured up etc.
No - I believe animals are beings, but that they are not rational beings, and in this context it's a significant distinction. (I think that was an egregious error of Descartes, by the way.)
Regarding the nature of knowledge: In the Platonic dialogues, there are many (often inconclusive) discussions of what constitutes 'true knowledge'. I think their perspective is nearer that of the Upanishads, i.e. 'true knowledge' is liberative - 'ordinary people' (the hoi polloi) are trapped in the 'cave of ignorance', while the philosopher/sage has risen to the 'vision of the One' (resemblances between Platonic and Hindu philosophy are the subject of McEvilley's The Shape of Ancient Thought). I think the original intent of philosophy was much more radical than what we now understand it to be. It was 'religious' but not in the way we take 'religion' to now mean; not in the sense of accepting dogmatic truths of faith, but of calling into question our innate sense of what is real.
Secular Philosophy and the Religious Temperament
(Subsequently, Aristotle shifted the focus of the debate away from his teacher's mysticism and towards the empirical, that arguably being part of the process by which this kind of sensibility was lost or attenuated. [It's also significant that Platonic epistemology plays a much larger role in Eastern Orthodox than Catholic philosophy.] The distinction between reality and appearance then became the distinction between the 'real world' of physics - which is still the basic motive behind Sean Carroll's naturalism - and the world of 'the mind', which is only of derivative or secondary value.)
Quoting Agustino
At various points in your preceding posts, you refer to 'reality as it is', 'independently of models'. But, that presumes you can know 'reality as it is', when that is precisely what is at issue.
For example:
Quoting Agustino
Quoting Agustino
Quoting Agustino
Quoting Agustino
I think these kinds of intuitions are what the 'myth of the given' is criticising. It is the belief that knowledge has a dimension which is given or self-evident, which philosophy then elaborates on, when in fact critical philosophy is questioning the very thing which you're taking to be self-evident.
OK, but I wasn't referring at all to "accepting dogmatic truths of faith". If we "call into question what is real" and come to know something other than what is 'normally' considered to be real (that is, what is known to us via the senses, which pretty much defines the ordinary sense of 'real'), what kind of knowledge do you think that could be, other than the kind of direct intuitive knowledge I was earlier referring to?
You mean this post?
Quoting John
It's a bit sketchy. Perhaps you might elaborate.
I'd say that there are (at least) two senses of 'knowing'. There is the everyday sense of the knowledge involved in perception, where we can talk about what we know of the empirical world. All knowledge in the conventional sense is of this inter-subjective kind.
Then there is the direct knowing of things by familiarity. I know a song, or a friend's face. I think this is basically a knowing which cannot be conceptually articulated. A dog has heaps of knowledge of this kind. You might call it aesthetic knowledge (in the broadest sense captured negatively by the term 'anaesthetic'); it is a knowledge of direct awareness; it is the knowledge that is left over after everything that can be articulated as knowledge is exhausted.
I think this is basically the same as aesthetic knowledge in the narrower sense, when we know beauty or harmony, for example, or in the moral sense, when we know goodness, or in the religious sense when we know God, or in the 'Zen' sense of 'being enlightened'. This knowing of the 'familiarity' kind cannot be, to the great frustration of many philosophers, inter-subjectively corroborated, but it is not through any lack of trying; in fact philosophers are often very stubborn, and so I doubt the attempt will ever be entirely abandoned.
I think metaphysics belongs firmly to the latter kind of knowing. An inter-subjectively corroborate-able (horrible word, but I could not think of any other) metaphysics seems to be simply impossible to achieve. People follow their metaphysical intuitions or else some authority (which really amounts to others' intuitions as canonized); there is no possibility of evidence or logical proof when it comes to metaphysics.
Of the traditional two senses of knowing, knowing that and knowing how, I would say the latter is a kind of 'knowledge by familiarity'. To anticipate an objection: it is not being denied that, when people know how to do things, that fact can be inter-subjectively corroborated. It is just that knowing how and knowing by familiarity are knowings of the mind/body and of the feelings, and what exactly is known, and more especially how it is known, cannot be precisely articulated, or in some cases cannot be articulated at all.
I think this is the issue which we approached with Pierre-Normand, concerning the notion of "enduring substance". Pierre mentioned that the concept of enduring substance doesn't enter the purview of the laws of physics. But I think this notion is inherent within such laws, essential to them, as the "given", which validates these laws.
Any law of physics assumes that what has been the case in the past will continue to be the case in the future. We make an inductive statement of what has been, according to observation, and this statement acts as a law. The law has predictive power in virtue of the fact that what has been in the past will continue to be so in the future. So, for example, I can state as a law, "the clear sky is blue". This is derived from past observations, and it holds predictive power about how things will be tomorrow (the clear sky will be blue), according to the principle that what has been the case in the past will continue to be the case in the future. That principle is what is taken for granted, as given.
This is the essence of "enduring substance". What has been the case in the past, will continue to be the case in the future. It is what the laws of physics take for granted, as "the given". When we question this "given", we question the very nature of time itself. Why does reality appear to be like this, and how does this relate to the appearance of free will?
Just because reality is not immediately and non-inferentially given doesn't mean we don't know what it is. The Myth of the Given isn't necessarily resolved by postulating an inaccessible noumenon as Kant did. There are even materialists who reject the Myth of the Given and use this as a way to justify the claim that Sellars' Manifest Image (phenomenon) presupposes and is influenced by a Scientific Image (noumenon) to which we have access.
Ah, but do we? I am inclined to accept the view that nobody knows what anything really is. All knowledge is approximate, except in regard to those attributes of things which can be numerically abstracted and predicted according to natural regularities (as we are discussing in this thread). But aside from such scientific knowledge, a lot of what we think we know is just a melange of ideas we've absorbed from those around us; what we think of as real is irredeemably conditioned by concepts, ideas, theories, and attachments which are largely subliminal in nature but which strongly condition what we think and how we react.
Quoting John
The use of the term 'metaphysics' ought to respect the Aristotelean derivation, otherwise it becomes a catch-word for all kinds of woo. That, I think, is why scholastic metaphysics tends to appear cumbersome - its formality ensures every key term is defined very precisely, specifically to avoid debate sliding off into vagueness.
I think what you call 'direct knowing' might be part of what Michael Polanyi defined as 'tacit knowledge' - things you learn by experience, on the job, through life experience, and so on. That is indubitably an important aspect of knowledge.
I think the 'religious sense' you refer to is a lot more difficult to generalise about, as there are many forms of spirituality with various kinds of cognitive modes. But I would say that, as far as the sapiential traditions are concerned, a key characteristic is 'knowing how you know', also known as meta-cognition. Buddhist insight meditation (vipassana), for example, is aimed at direct cognition of the conditioned nature of perception. I think that kind of understanding of metaphysics requires, or facilitates, meta-cognitive insight - an insight into knowing how you know (in the case of Buddhist meditation, knowing how your affective processes hang you up all the time).
Quoting Metaphysician Undercover
I said that also, the reason being that Galilean and Newtonian physics rejected Aristotelean physics: it didn't need the scholastic concept of 'substance' in order to do its work (and besides wanted to break from the 'dead hand of scholasticism'). However some Aristotelean ideas, especially formal and final causality, are making a comeback, because (especially in biology) they seem to be indispensable. I think in the hands of a skilled philosopher, such as Pierre-Normand, the traditional (or neo-traditional) concept of 'substance' makes supreme sense.
You side with Kant; but Hegel and Schopenhauer, and Spinoza before them, all rebelled against this separation of noumenon from phenomenon, and against granting access only to the one and not to the other. I don't see how Kant's distinction is valid if we don't have access to the noumenon. If we actually don't, then the Kantian distinction is merely a logical formalism, and nothing else.
The problem is that there cannot be any precise formulations of metaphysical categories and definitions of metaphysical terms that everyone will agree upon. To practice metaphysics is to undertake a purely logical exercise in consistency, and the disagreements people have about metaphysics are on account of what cannot be produced or proven by mere logic; namely the assumptions that form the premises upon which any metaphysical thinking is based.
So, vagueness necessarily consists in the impossibility of logically proving the overarching validity of any precise definition within metaphysics. The ideas of substance, identity, being, reality and so on are ineluctably ambivalent. Metaphysical preferences then, inevitably come down to people's intuitions, because the meanings of the base terms are themselves intuitively decided upon.
The argument I made to Pierre-Normand is that the concept of enduring substance is inherent within Newtonian physics, as the given. It is taken for granted by Newton's first law. So my argument is that it is not the case that the concept of enduring substance does not enter into the laws of physics, it is right there in Newton's first law, as the given, that which is taken for granted.
If the phenomenal and the logical are all that we can know (the former as experience and the latter as thought), then what more could the noumenal be than a "logical formalism", since by definition it obviously cannot be known as phenomenal? This is what I have been trying to convince you of for some time now.
It's about the perspectival nature of knowledge, that we know anything as it appears to us, not as it is in itself. 'The concept [of ding an sich] was harshly criticized in his own time and has been lambasted by generations of critics since. A standard objection to the notion is that Kant has no business positing it given his insistence that we can only know what lies within the limits of possible experience. But a more sympathetic reading is to see the concept of the "thing in itself" as a sort of placeholder in Kant's system; it both marks the limits of what we can know and expresses a sense of mystery that cannot be dissolved, the sense of mystery that underlies our unanswerable questions. Through both of these functions it serves to keep us humble'. (more)
There's also a very interesting discussion by Eric Reitan:
I'm pretty near to Schleiermacher, and I also agree that the 'thing in itself' is a sign of the limits of knowledge: it reminds us that our supposed knowledge of external things is mediated, is perspectival and conditioned; some of what we take for granted that we know, specifically the inherent reality of the objects of appearance, maybe we actually don't know.
Quoting Metaphysician Undercover
Newton's laws concern mass, not substance, in the Aristotelean sense. Crucial distinction.
Quoting John
I don't know about that. Scholastic metaphysics is very rigorous. You and I and most people here don't play in that space. I think you say that the distinctions, etc., are 'ineluctably ambivalent' because you're not actually speaking from within that domain. Many metaphysical arguments (for example, the cosmological or ontological arguments) are indeed logically provable, but they're not empirically verifiable; given certain axioms, their conclusions certainly follow, but the axioms cannot themselves be proven.
Scholastic metaphysics is something I have not studied much. But I am familiar enough with it to know that there were major disagreements between philosophers within that tradition. Now, disagreements in philosophy are only possible where one or other (or both) of the disputants is asserting something that is inconsistent with their premises, or else where they are arguing consistently but from different premises and definitions.
In the first kind of case, if there is good will, it should be possible to expose the inconsistencies and resolve the dispute. In the second kind of case, there can be no resolution, because the protagonists are coming from different definitions, even notions, of key concepts and presupposing different premises. No inconsistency in the arguments of either protagonist needs to be at work, and yet they will forever disagree.
That is what I mean by the terms being "ineluctably ambivalent"; the possibility of different definitions and understandings of the terms can never be eliminated; and I think my assertion that this is so has nothing at all to do with what domain I am speaking from. It is quite simply the human condition; and I think anyone with their eyes open should be able to see the truth of that.
You say many metaphysical arguments are logically provable, but that is not true at all. The arguments are only as good as their premises, and premises can never be proven within an argument because the argument is dependent upon them. Arguments can be shown to be consistent and valid or not, that is all logic can achieve. If the argument is consistent and valid, then its conclusion will be true if and only if its premises are true. And to repeat, that can never be proven. I am saying this only to emphasize that you have been inconsistent in saying that metaphysical arguments can be proven, while admitting that axioms cannot be proven. It is axioms or premises that must be either accepted as intuitively self-evident or not, or in cases where premises are by no means self-evident then they must simply be accepted on faith, or not. It is always possible that what seems self-evident to one will not seem so to others. Philosophy is actually very messy and imprecise, and ineluctably so, I would say.
Well, in support of your argument, that was one of Kant's major motivations - that metaphysical arguments had been going on for centuries and never looked like being resolved, and that is certainly true.
But, what I was commenting on was this:
Quoting John
I still don't think that's correct. I had intended a couple of times to enroll in an excellent-looking external course offered by Oxford (this one; I may try again for April). But they do discuss a curriculum in such courses - it isn't just 'what anybody thinks'.
Quoting John
That is what is said: logically provable, but not empirically verifiable.
The problem is, if you simply say that metaphysics is whatever anyone feels intuitively about what is true, then it doesn't add up to much of a philosophy. Even Buddhism has its metaphysics, notwithstanding its usual reputation as anti-metaphysical.
Quoting John
Well, perhaps that is because of the approach you've taken?
You have entirely misunderstood what I said. I have not suggested that metaphysics is "just what anybody thinks". Many rigorous metaphysical systems can be constructed on different premises, and metaphysics as the study of what we are able to imagine as possible premises and what consistent arguments we can construct based on the elaborations of those premises is a fascinating, rewarding and creative discipline. Of course different peoples' metaphysical views will be more or less sophisticated depending on their familiarity with the dialectical evolution of the whole tradition.
The fact is though that no metaphysical system can ever be demonstrated to be the one true system or even the most true one; different views will always be in play even at the highest levels of sophistication; and I can't see how it does not, in the final analysis, come down to individual intuition, taste, faith or merely opinion as to what an individual believes is true (if she believes anything is true) when it comes to metaphysics. All metaphysical systems are ultimately models and none of them can ever be completely adequate to what they purport to be modeling.
So, philosophy as a whole (which is what I was referring to, rather than my own philosophy) will remain "messy" and it has nothing at all to do with "my approach". How could it? The messiness consists in the welter of different approaches and standpoints, not in the individual works of philosophers, which would be messy only if they were inconsistent. Philosophy is more an art than a science.
You agree with Hegel now, against Schopenhauer? Actually Hegel's philosophy was arguably very influenced by the mystical tradition, so it is by no means as cut and dried as you are painting it. For an interesting discussion see:
https://www.amazon.com/Hegel-Hermetic-Tradition-Glenn-Alexander/dp/0801474507/ref=sr_1_1?s=books&ie=UTF8&qid=1487493228&sr=1-1&keywords=hegel+and+the+hermetic+tradition
:s I agree with Hegel's conception of how we access the noumenon, not with some of his other positions. This is in fact no different than the Spinozist conception, but since Spinoza (the improved Hegel :P ) isn't in this discussion, I'm using Hegel as an alternative.
We should maybe discuss this, but I probably wouldn't agree with a mystical interpretation of Hegel. But I do know Hegel was influenced by the mystical tradition, both by Hölderlin, and by Schelling. The book you linked looks interesting, I will have a look, thanks! :)
I do think it is true that Hegel and Schelling reintroduced elements of Spinozism. But of the two, Hegel's concerns at least (I don't know much about Schelling) were far more comprehensive than Spinoza's, as Hegel was attending to the whole dialectical development of speculative reason, and understanding each phase as a piece in the whole puzzle.
Schopenhauer actually criticizes Schelling and Hegel precisely for following Spinoza, in his On the Fourfold Root of the Principle of Sufficient Reason, which I took from the shelves the other day to do a little rereading. I have long thought that Schopenhauer himself appropriates Spinoza's notion of conatus and redresses it as Will, but I can't remember encountering any acknowledgment of this from Schopenhauer.
Some of his bitter diatribes against Hegel are quite amusing, and they, along with his repeated references to his "prize-winning essay" and the immense importance of his own work clearly show his monumental ego. He is a better writer than either Kant or Hegel, though, and I think perhaps his main value lies in making Kant more easily comprehensible.
Yes, but it is a different style of presentation that is at stake. Spinoza gives a completed system, Hegel gives a Phenomenology - the process of completion of the system. Spinoza is more difficult to learn and understand though, since he doesn't show how his system is completed in the first place. Understanding some Hegel (or Schopenhauer), does help in understanding Spinoza though.
Quoting John
Yes
Quoting John
Yes I think so, but he also, at least early Schopenhauer, anthropomorphises the thing-in-itself by identifying it completely as the Will. Both Schopenhauer and Hegel are Spinozists though, effectively performing a re-reading of Kant through the lens of Spinoza.
Quoting John
>:O >:O >:O Personally I love his insults, but then, like him, I'd also say I have a big ego :P - which explains why I admire people like Schopenhauer.
It's not just making Kant comprehensible, it's also synthesising Kant with other philosophers.
Yes, I meant "consistent histories". Thanks. Michel Bitbol's paper Decoherence and the Constitution of Objectivity is relevant to this discussion, as well as Rom Harré's The Transcendental Domain of Physics, both published in Constituting Objectivity: Transcendental Perspectives on Modern Physics, Michel Bitbol, Pierre Kerszberg and Jean Petitot, eds.
Mass was said to be a fundamental property of matter, weight or some such thing, which is quantifiable. "Matter has mass" means that the matter of a body is quantifiable as the mass of the body, and being quantifiable means that it has a form. Mass is assumed to be the most fundamental form of matter. Therefore to discuss the mass of a body is to discuss substance in the Aristotelian sense: matter with form (mass). That's the point: mass is substance for Newton, in its most fundamental form, and Newton's laws take substance for granted, as a given, and describe the behaviour of substance.
Mass was so fundamental that they needed two different types of it.
It's more accurately what we know through. It seems to me that what is being discussed here is 'the unconscious' - the mental and affective processes that underlie discursive thought and reason, which are not themselves objects of knowledge.
Quoting John
Well, good!
Quoting John
Well, that's true, but it's also a reflection of a world where every school of thought and tradition vie in the marketplace of ideas. (I have often reflected on the fact that in these debates, there is no ultimate court of adjudication, but all that said, I still cling to the hope that there really is a 'higher philosophy'.)
Quoting Metaphysician Undercover
I really don't think that's correct, MU. The equations work for matter in a generalised sense; it doesn't matter which type. The whole point about Aristotelean 'substance' is that it is a complex concept, and it isn't really part of modern natural philosophy, except by analogy.
It's not only what you know through. It's that through which the whole empirical world is constituted (not only your self - in fact your self and the world presuppose one another - that which constitutes them both is the noumenon - the real). Hegel was fundamentally right in identifying the limits of the subject to also be the limits of the world - hence ultimately bringing down the distinction between noumenon and phenomenon, in the sense of rendering us access to both.
Would you rather live in a world like that, or a world in which a politically enforced predominant view is mandated, and competing views are at the very least frowned upon, and at the extreme enforced? So, if there really 'is' a "higher philosophy", what can that actually mean? How would we recognize it, if rigorous reason can lead to multitudes of metaphysical views, each based on different starting assumptions?
Would not recognition of a "higher philosophy" be itself a 'higher' recognition and thus necessarily be a supra-rational process? Wouldn't it be something like the gnosis of the mystics, or the abhijñā of Buddhism? If such a process is possible and if it yields genuine insight into the nature of reality, then surely it must be a 'higher' intuition, perhaps we could say an intellectual intuition, that transcends logic and defies rational explanation.
I don't think it actually does, as you know. So there is even disagreement about this, because there are others like me as well.
Quoting John
:s You mean navel gazing Sir?
Quoting John
To be honest, I wouldn't actually care much, so long as there was no state police involved, or abuse in the workforce by the bosses towards the workers.
I don't think it's right to say that Hegel "brings down" the distinction between phenomenon and noumenon if by that you mean abolishes it. I think it is right if by 'brings down' you mean 'immanentizes'. He wants to show that the in itself is not in itself for itself, but only in itself for us.
So, I understand Hegel to be denying there is any in itself for itself; but that for us, there is a valid distinction between what is in itself and what is for us. It is to the extent that we can think the in itself that we can know it, and what we are thus knowing is what the in itself is for us, and nothing more. Hegel denies that there can be any in itself beyond this; that is, he denies transcendence or, what is the same, he denies that anything is truly transcendental for us. The transcendental is thus nothing over and above being an immanent thought.
I am not all too confident of this interpretation of Hegel, as he is the most complex of thinkers, and not easy to read. A good deal of my understanding, since I don't have a lifetime to devote to being a Hegel scholar, is derived from readings of secondary literature.
There'd be a paucity of philosophical literature; and if you found yourself capable of metaphysical speculation you would either have to keep it to yourself or risk censure and perhaps prosecution, incarceration, or even execution, depending on how prohibitive your society was and how severe your heresy was seen to be.
No.
What I mean by brings down the distinction is this... In my reading:
Hegel maintains the Kantian Subjective Logic whereby the Subject and the Object are mutually constituted through the synthetic unity of apperception (the noumenon), thereby also maintaining the a priority of the noumenon with regards to the phenomenon. However - Hegel does shatter the epistemological distinction between the noumenon and phenomenon - both the noumenon and the phenomenon are known. Reality is not a mystery. My reading probably is quite close to Yovel's reading of Hegel.
That depends. With the advent of technology and modernity there has been a division produced in culture. There is popular culture - the media, TV, Hollywood, etc. and then there is the real culture, which is quite often ignored and forgotten. The kind of politically enforced culture would be the popular one. Whosoever escapes the popular culture can freely dwell in the real culture. It's just popular culture that would be restricted and controlled. That's the kind of politically enforced view that I would accept to live under.
What are you reading of Hegel? If this interpretation comes from reading the Phenomenology, can you cite some passages in support of it or at least provide some references to page numbers?
Do you include philosophical literature under the 'popular culture' heading?
Obviously not... >:O C'mon, man, philosophy is frowned upon in popular culture... do you see folks like Miley Cyrus interested in philosophy?! That's popular culture. Popular culture is empty of content anyway - it's a culture used to brainwash idiots to consume more, and give in to their base desires...
The insight I am referring to is personal insight of a kind which cannot be inter-subjectively corroborated. It's just like the insight of the artist, musician or poet, which can be expressed only evocatively. What the artist, musician, poet or mystic is 'speaking' about cannot be explained in propositional language. Painting, sculpture, music, poetry and religious and mystical literature are all like this; they move you or they do not. If you are not moved by the arts or mystical literature then that says more about you than it says about the arts or mystical literature.
Well then you didn't properly read my post on this subject, the one you originally responded to; we are obviously not talking about the same kinds of society.
I have never finished the Phenomenology, most of my knowledge of Hegel comes indirectly from secondary sources. I've read for example Yovel's translation + commentary on Hegel's Preface to Phenomenology, I've read his interpretation of Hegel as a Spinozist (although an improved version of Spinoza) in Spinoza and other Heretics, I've read parts of Frederick Beiser's Hegel, I've read Macherey's comparison/discussion of Hegel and Spinoza (in Hegel ou Spinoza), and I've started to read the Phenomenology beyond the Preface but have never finished it. Oh and I've started to read the book you have suggested after I "stole" it from online O:)
So yeah, those are the references briefly for my positions. There probably are others but they aren't directly about Hegel. As I said, I'm not a Hegel expert like you :P
Quoting John
Okay I'm moved by them - for a few seconds, minutes or hours, and then back to planting potatoes in the garden :P The potatoes don't plant themselves you know, and man does not live on spirit alone. It seems that my place is still in the world - planting potatoes - everything else is just an escape from that, is it not?
Hegel also gives a Science of Logic among numerous other works. I haven't begun to penetrate it yet, and I don't know if I ever will, but the Phenomenology is generally considered to be merely a propaedeutic to the Logic. And even though the Phenomenology is unarguably a formidably difficult work, the Logic is, going by some accounts, even more difficult.
Spinoza is far easier to learn than Hegel in my view. Hegel's passages are densely packed and difficult to understand. Spinoza is a breeze by comparison. I think you have it exactly backwards; understanding some Spinoza will help you in understanding Hegel, especially since he came well after Spinoza.
There is practical life (in the sense of earning a living) and then there is contemplation and study and creative pursuits. Not always easy to balance; having worked for more than three decades as a landscape and building designer and contractor I know that all too well.
>:)
There are deeper problems than just balancing, I think. First, even the need to balance is anachronistic - work should be a creative and fulfilling activity in and of itself. The fact that it isn't, and there needs to be a separate time for creativity, means that one is living a divided life, and probably doing both half-heartedly. In addition, if you honestly play the scenario in your mind that you don't have to earn a living anymore, and you can just do whatever you want, you'll see that you'd get amazingly bored, and so you'd still return to some form of work. That's why I eventually want to get involved in politics and my community, because otherwise there's not much that you can do apart from work work work - which, combined with study, is pretty much all I'm doing now...
I agree. But Kant pointed out 'the limits of knowledge' - which is how all this came up - and what lies beyond rational explanation (i.e. the antinomies of reason). I'm not saying Kant is be-all and end-all, but I'm trying to relate the idea of a 'higher teaching' against philosophy, specifically Western philosophy, and metaphysics. (I mentioned before, I discovered Kant via T R V Murti's Central Philosophy of Buddhism.)
(Anyway, speaking of making a living, we are both seeking work right now, and finding it extraordinarily difficult, we think mainly because of age. Dear wife has an excellent career record and a Masters degree, but has been working full-time on job applications since July last year; I'm vying for contracts but Sydney is one of the most expensive, therefore most competitive, markets in the world. So at times like this I have to fight the voice that asks me whether I've wasted far too much time on philosophy.)
I have always found my work "creative and interesting". But there are different kinds and levels of creativity and interest, and it cannot be expected to fulfill them all. Study is itself a creative pursuit, or should be. I love reading, painting and drawing, studying music and playing the piano, writing, both prose and poetry, and physical exercise and discipline (I also learn Tai Chi). I don't want to give any of these pursuits up, but I also like to relax and do nothing sometimes, go bush walking, or watch good series on TV, or spend time talking with friends. I would like to regularly practice meditation too, but I am not prepared to put aside the time for that. So there is never too much, or even enough, time as far as I am concerned.
That sounds like a worthwhile and interesting project.
Quoting Wayfarer
I can sympathize insofar as I have been self-employed for pretty much my entire working life and the uncertainty about where the next job is going to come from is always on the periphery somewhere. I have only been working about 15 hours a week for the last ten years or so, and the contracting and design side of my business is looking very quiet now, so we are down to our regular maintenance stuff if nothing else comes in. And that only involves me about 7 hours a week average (my companion works with me; she does the lighter tasks of shrub-shaping and weeding and I do the heavier work which involves operating power equipment such as mower, hedge trimmer, line trimmer, blower, chain saw, brushcutter and so on).
I like to think of the Biblical quote "Take no thought for the morrow, for sufficient unto the day is the evil thereof" when I find myself fretting about the future. As Bob Dylan says, "When you got nothing, you got nothing to lose". This is not to say I have nothing; I have property of course, but I think having nothing is a state of mind (which is by no means easy to attain, and even harder to remain in). So, I would never want to succumb to the voice, as I am sure you also would not, that says that philosophy could be a waste of time. I do get this about philosophy, though, from some of my very smart, yet predominantly practically and hedonically oriented, friends: "Isn't it just a waste of time, it doesn't seem to answer anything?"
:-$
As you said, the equations refer to mass, not matter itself. Mass is a measurable property of matter. The duality, or complexity, of substance is inherent within Newton's laws, because it is assumed that there is matter, and it is assumed that matter has mass. These are two distinct things, matter and its quantifiable property, mass. The working premise at that time, was that there is no matter without mass, and no mass without matter. Some may have thought that matter and mass are the same thing, but the two are not the same thing, as mass is clearly a property, and that is not consistent with the concept of matter.
Matter, in Aristotle's physics was that which persists, does not change, through a change. The law of conservation of mass removes this designation from the matter itself, and puts it on the mass, which is a quantifiable form. But then it was found that matter could exist in the form of energy as well, and this required a law of conservation of energy. Now we have two distinct fundamental forms of matter, two distinct types of substance, one is mass, the other is energy. That is only because the designation of "that which persists" has been removed from matter itself, and applied to the form.
Well, that's what I consider myself to have been doing on philosophy forums since I joined them.
Quoting John
Tricky, for a mortgage holder. (I once said to a monk, us urban householders have a lot of stuff to worry about. 'I know', he said. 'Why do you think we're monks?')
Quoting Metaphysician Undercover
I hate to be disagreeable, but I really think you're mistaken about this. It's a question for history and philosophy of science, of course, but how could there be (for instance) such a division in Cartesian substance dualism, where there are two kinds of substance? The whole point about Newtonian and Galilean substance was reduction to those attributes which could be expressed in numerical terms. Newton hardly studied Aristotelean physics, and certainly didn't take it seriously, it having been overthrown by Galileo, Copernicus, and others. The distinction that then emerged was not between 'matter and property' but between 'primary and secondary qualities', where the latter were associated with the observing mind (colour, etc) and the former (including mass) were primary attributes of the object of measurement.
Yes, yes, that's exactly the point, the distinction between 'matter and property' was lost, because matter was taken for granted. If you read Newton, he had a lot of respect for the concept of matter, and discussed it a lot. But what happens with his laws, is a focus on this particular property of matter, mass. So in a sense, matter is equated with mass, all matter has mass, and all mass has matter. After that, the focus is just on that attribute, mass, because the matter is taken for granted. The equivalency of mass and matter was taken for granted, matter could be represented as mass, so there was no more need to question the existence of matter itself.
LOL I get this from just about everyone, including most friends and relatives (I only have probably 2 friends I can discuss philosophy with properly). They all think philosophy is useless because they say you can't "do" anything with it - as in what does your knowledge help you with? So when I tell them that there are some things which we do in order to obtain other things, and then there are things which we do for themselves, they don't understand. Even if I explain that we can't always do X in order to get Y, because if everything we did had to be done in order to get something else, then we'd have an infinite regress of do X to get Y, get Y to get Z, etc. So we must stop at something, which we do for its own sake. Even after this "proof" people still protest about it, without of course relating with anything of what I've said, or recognising that their ends-in-themselves are different from mine, and at least a priori, no better.
So yeah, I think philosophy is an end in itself. But on top of this, I think philosophy actually helps with most things in life, ironically. Like it helps with handling most things in life. Philosophy even helped me in fucking chess playing... even in martial arts practicing philosophy made me much better. Philosophy also helped me cure my OCD and anxiety. I mean without philosophy my life would really be much much worse. I can probably list nothing that philosophy didn't help me with. Even washing the dishes... I used to hate doing it, now I don't mind - I can accept whatever has to be done. Even in learning new things (because of my self-employment in IT I probably learn new things daily still), philosophy is of tremendous help. The one thing which maybe philosophy hasn't helped me with is motivation. I always find my sources of motivation in places other than philosophy. If anything, philosophy doesn't motivate me to do anything that is not close at hand. In any case, I found that nothing motivates one to do something better than loving a woman :P O:)
In addition - I think philosophy is needed for politics. I mean I can't imagine how someone can do politics without being a philosopher, or at least without having philosophers as advisors... Why do you think Chinese Emperors used to employ all the hermit and recluse philosophers in running their empires? :s I mean were they stupid?! >:O If I was a politician, or a big businessman, I would employ philosophers in taking all decisions - there's no better brain than the philosopher's brain in deciding what is best to do.
Quoting John
For me I find it currently creative and interesting for the simple reason that I'm still learning a lot everyday. I'm relatively new still in this kind of business. But I imagine that after practicing it for 3-5 years, I'll pretty much know everything inside and out. I'm lucky I got the chance to switch fields. I hated working for someone else, and as an engineer I found I pretty much can only work for someone else... >:O at least in Sapientia's great country, Britain.
Quoting John
You are lucky in that. I prefer self-employed compared to work under a boss. But I was unlucky because my degree didn't really allow me to work as self-employed straight off. As a civil engineer you're pretty much fucked if you want to work on your own immediately after university :P
Quoting John
But certainly there's always repeat business? I mean for me, I got in by first having done work for a family friend who had a small business, then he recommended me to others, etc. and by today I have a good set of a few clients. Even if no one new comes, there's always repeat work - or maintenance work - from these people. And then if all that disappeared, I'd advertise more aggressively, or I'd do some freelance work, etc. There's a lot of possibilities as self-employed if you're willing to think about them and try them. But if you're stuck in a job, there's pretty much no possibility for movement and change there...
Another thing to note here is a common phrase in particle physics, which is "That which is not forbidden, is required."
To answer the questions more directly (and, sadly, less informatively):
What are natural laws? Natural laws are models of phenomena, if one concedes they are constructs. If one supposes natural laws are immutable truths, then natural laws would then be the mechanics of reality; providing a map of tendencies.
What are natural laws made of? Statements about events and processes. The language of the statement is typically developed through trying to get a model to match witnessed phenomena.
How do natural laws work? One way to think of it, as Sean Carroll stated, is the chain of explanations stops at the natural law, and that's how they work. Another way to think of it would be suggesting that the chain never stops, because one could ask why they work as they do (which is close to how the question was posed), which science is woefully unable to deal with in a manner satisfying to many thinkers.
One thing that may interest you might be the difference between the Newtonian schema and a Lagrangian one. Under the Newtonian schema the world is thought of as a computer that takes the state of a system and evolves it. Under a Lagrangian schema, the world's trajectory is fixed by specifying both an initial and a final state. I think this is similar to the it from bit versus bit from it arguments.
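The contrast can be sketched in a few lines of toy Python (the names `newtonian_evolve` and `action` are my own illustrative inventions, not from any physics library). The Newtonian-style routine takes an initial state and steps it forward in time; the Lagrangian-style comparison fixes both endpoints and asks which candidate path between them has the least action. For a free particle of unit mass, the straight line is the classical trajectory, so its discretized action beats any wiggled alternative with the same endpoints.

```python
def newtonian_evolve(x0, v0, dt, steps):
    """Newtonian schema: take an initial state (x0, v0) and evolve it
    forward tick by tick. With no force, velocity stays constant."""
    x, v = x0, v0
    for _ in range(steps):
        x += v * dt  # evolve the state one time step at a time
    return x

def action(path, dt):
    """Discretized action for a free particle of mass 1: the sum of
    kinetic energy (1/2) * v^2 over each segment, weighted by dt."""
    total = 0.0
    for a, b in zip(path, path[1:]):
        v = (b - a) / dt
        total += 0.5 * v * v * dt
    return total

# Lagrangian schema: fix both endpoints, x(0) = 0 and x(1) = 1, then
# compare candidate paths between them. The straight line has the
# least action of the two.
n, dt = 10, 0.1
straight = [i / n for i in range(n + 1)]
wiggled = [x + (0.05 if 0 < i < n else 0.0) for i, x in enumerate(straight)]

print(action(straight, dt) < action(wiggled, dt))  # the straight path wins
print(newtonian_evolve(0.0, 1.0, dt, n))           # arrives near x = 1
```

Note how the two schemas agree on the outcome (the particle travels from 0 to 1) while framing it differently: one as state evolution from initial conditions, the other as a global condition on the whole path.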