Causality conundrum: did it fall or was it pushed?
Causality is a confusing subject. Here is a familiar question from physics. Imagine a ball bearing balanced exactly on the peak of a perfectly smooth dome. Or perhaps a pencil standing poised on its sharp tip. Does it fall, or is it pushed?
Naive models of causality would suggest that every effect has its particular triggering cause - some push. So the story would be that the ball bearing or pencil would have never moved ... unless some tiny little nudge happened to set it falling.
But another view is that the ball bearing or pencil just fell. Any kind of nudge would have been enough to disturb it. It didn't matter what would happen as the actual triggering event. The very nature of the world is such that a nudge of some kind couldn't have been prevented. The cause was generic rather than particular.
Rather than ascribing the cause to a particular event - some vibration or puff of air - it would most accurately be ascribed to the general impossibility of preventing these kinds of environmental perturbations. Nothing could have prevented the falling because the real world just doesn't permit that absolute, fluctuation-free kind of stability.
So which explanation should we prefer over the other? And which world is the one that science best reflects?
Is it the world in which every "accident" must have its one particular cause - only that event could have done the job? Or is it the world where fluctuations can never be completely suppressed, and so disturbances are a generic fact of life?
What description of nature sounds the more reasonable summary of the facts?

Some background reading if you desire... https://www.pitt.edu/~jdnorton/papers/DomePSA2006.pdf
Since 2003, there have been many reactions to the dome. Some are amused to see that indeterminism arises in so simple an example in Newtonian physics. Others are indifferent. The response that surprised me, however, came from those who had a full grasp of the technical issues, but nonetheless experienced a powerful intuition that the dome somehow lies outside what is proper in Newtonian theory.
It needn't be any more complicated than that, and I don't see a reason to have a presumption of an underlying universal explanation.
I think quantum wave collapses are best viewed as events that lack a contrastive cause, just like this idealized Newtonian example exemplifies. But this idea of events that happen (in some sort of symmetry breaking way) without being caused to so happen also may be the best construal of events that are determined by microscopic (symmetry breaking) causes where those causes simply are irrelevant to the emergent macroscopic dynamics that we are interested in.
Over 48 feet tall and 40 tons, the wind-carved rock balances precariously on a pedestal only 3 feet by 17 inches.
It doesn't have to fall, but it probably will. Aside from human action, numerous independent factors could cause the rock to fall. Wind and earthquake come to mind, heating and cooling, particularly the freeze/thaw cycle--provided enough moisture was available at the right time in the right fissure. Very subtle influences, like the tidal pull of the moon, subliminal vibrations in the earth, shock waves from a meteor strike, lightning strikes, etc. could contribute to the fall.
Were we to carefully monitor wind, quakes, temperature, precipitation, the motion of the moon, and any other environmental factor we could measure, we could quite possibly not identify a single cause, because all these factors might be operating at the same time. Another thing, all the environmental forces operating on the rock are cumulative, and have been operating for a long time. (Don't we have to include the wind of millennia past as a causal factor?)
I cast my vote for "it fell" rather than "it was pushed".
But here you seem both to say that the right answer doesn't really matter, and yet also that the right answer is that there will be some particular triggering cause that accounts for the "why". In the end, it will be a push that did it. And that is what counts most for your general view of the world.
In practical terms, you might be right. It seems we could always measure nature more closely and put our finger on some individual environmental disturbance as the guilty party.
But the essence of my question was metaphysical - what do we really want to believe about the truth of nature? So something is at stake. We ought to come down on one side or the other.
Could you imagine ceasing to care about the individual pushes and instead accepting that the generic impossibility of eliminating all disturbances is this deep truth?
I am guessing you would resist that alternative view strongly. The question becomes why? With what good justification?
That's because of a specific queer mathematical property of the shape of the dome and how the system interacts with the vertical force of gravity. At the initial time when the ball gets rolling, its speed is zero. But the first derivative of its speed (its acceleration) also is zero. Hence, the force, at the instant in time when it is initially at rest, also is zero. This is why no force at all is required to set it in motion.
Another way to view it is to imagine the time reversal of the process where the ball is being sent rolling up the dome with just enough speed so that it will end up at rest at the apex, after a finite time. Thereafter -- and this is unmysterious -- it may remain at rest for an arbitrary period of time. If this is a valid solution to Newton's equations, then, so is the time reversal of this process where it remains at rest for some time and then "spontaneously" starts rolling (with an initial instantaneous null acceleration).
How so? If the ball has mass, it has inertia. A push is required to set it moving.
Of course, if the ball is already "in motion", then that acceleration already exists. But that then becomes my alternative story of the physical impossibility of eliminating such accelerations. A physical mass has to have some kind of internal thermal jitter and so - even when placed perfectly at rest - it is going to just throw itself over the edge and roll.
So we can't rely on the formalisms if the formalisms simply leave out the crucial physical facts.
Not so, as I've already explained to Bitter. The shape of the dome is such that, as the ball is getting infinitesimally close to the apex, the second derivative of its horizontal motion tends towards zero; and hence, so does the horizontal component of the force.
Yes, but this would be a case of innumerable accidents as you say. The "push" advocates would still reply that there would be one last event that finally did the trick. So it could have been the unlucky tourist that leant on it, or that lightning bolt, which was the straw that broke the camel's back.
I want to focus on the most extreme example where absolutely anything would be enough to be that straw. And so we can't really blame some particular straw anymore.
I can't think of a reason that it would be impossible.
You guessed wrong unfortunately. Sorry.
Until today I hadn't given it much thought. If it turns out to in fact be impossible, I can't imagine why I would strongly resist.
But that relies on the ball starting on a slope, not on the flat. It is only infinitesimally close to the apex and so also infinitesimally inclined towards rolling down in some direction. The forces acting upon it are already sufficiently off-kilter.
So again, my essential point remains. The ball can't be placed with exact precision at the apex. Our modelling incorporates that infinitesimal "swerve" as something that can't be eliminated.
As a way of thinking about what causes the ball to start to roll, the answer becomes we couldn't prevent that because any placement on the apex had to involve infinitesimal error.
Quoting Pierre-Normand
But now if you time reverse the story, you still only can arrive infinitesimally close to the apex, not actually perched exactly on it. So if the ball seems at rest, that is a mistake. It is only ever decelerating and then beginning to accelerate again. Inertia and friction might slow that transition in the real world. It might get stuck a while. But in the model, which presumes frictionless action and an actual perfect balance of forces, with no infinitesimal errors regarding its location at the apex to break the symmetry in advance, traditional thinking would seek a triggering push. And it is that framing of the situation which is the OP's target.
But that is the easy presumption that is under attack here. Most people probably do find no reason to even question the possibility of being able to eliminate every possible source of perturbation in some physical system.
The habit is to think of a world that is essentially clean and simple. A blank slate. A void. And then you start populating this world with its little pushes and pulls, its atomistic play of events.
But I say: why isn't the inverse of that a closer match to observed reality? Why don't we start with a world already chock full of pushes and pulls, then see if we can imagine subtracting them all completely away?
Quantum mechanics tells us we can't in fact achieve a void. There are always going to be infinitesimal or virtual fluctuations.
So in fact we have a well-motivated reason for taking the opposite view - the one that presumes the impossibility of suppressing all physical disturbances. And so - as a metaphysics - that would flip the usual comfortable view on its head.
Where before there was no reason to think an absence of fluctuation was impossible, now there is no reason to think it might be possible. Hence the idea of a triggering cause loses its previously fundamental-seeming metaphysical status. The interesting condition is the one where such causes have become so suppressed that all their particularity has been lost and there is only now the generic concept of "the inevitability of spontaneous outcomes". Perturbation itself becomes a primal feature of "the void".
Not so. What you are saying would be true for any number of smooth convex domes, including spherical domes. But the particular shape being discussed in the paper that you linked to in the OP, referred to as "the dome" and defined radially as the surface with height h = -(2/(3g)) r^(3/2), where g is the vertical acceleration of gravity, is such that the equation of motion for a small spherical mass perched at rest at the apex at t = 0 admits of two different sorts of solutions. The first solution describes the mass remaining at rest at all times. The second class of solutions has the mass moving away from the apex with radial position r(t) = (1/144) (t – T)^4 for t >= T, where T is an arbitrary time; and r(t) = 0 at any time t before T.
So, in the time reversal scenario, when the ball is sent sliding up towards the apex with just the right speed, it doesn't slow down asymptotically as a function of time. It slows down to rest in a finite time and then (consistently with Newton's laws) remains at rest for an arbitrary amount of time at the apex before sliding back down in the same, or another arbitrary, radial direction.
If "the dome" didn't have this very specific shape, then the equation of motion of the sphere instantaneously at rest at the apex, at a time, would (in most cases) be deterministic. It would necessarily remain at rest at all times. In the case of "the dome", indeterministic outcomes are consistent with Newton's laws of motion even in the idealized case where there is no initial disturbance at all away from an initial state of instantaneous rest! That's what makes "the dome" (Norton's) such an interesting shape.
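For anyone who wants to see it rather than take it on trust, the claimed solution is easy to check numerically. Here is a minimal sketch in plain Python (my own illustration, not from the paper; the take-off time T = 1.0, the step h, and the tolerance are arbitrary choices), confirming that r(t) = (1/144)(t - T)^4 for t >= T, with r = 0 before T, satisfies the dome's equation of motion r'' = sqrt(r) on both sides of T:

```python
import math

# Candidate solution from Norton's paper: at rest until an arbitrary
# "take-off" time T, then r(t) = (t - T)^4 / 144.
def r(t, T=1.0):
    return (t - T)**4 / 144.0 if t >= T else 0.0

# Central finite-difference estimate of the second derivative.
def second_derivative(f, t, h=1e-4):
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / h**2

# The equation of motion on the dome is r'' = sqrt(r). The candidate
# solution satisfies it before T (trivially, 0 = 0), at T, and after T.
for t in [0.5, 1.0, 2.0, 3.0]:
    assert abs(second_derivative(r, t) - math.sqrt(r(t))) < 1e-5
```

The point is that both this solution and the trivial one (r identically zero) pass the same check, which is exactly the non-uniqueness at issue.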
It is certainly not a presumption on my part. I in fact cannot think of a reason why it would be impossible.
You seem like you want me to accept something that you feel is possible as something that is true. It takes more than something just being possible for me to accept that it is true. This doesn't seem like a particularly high or difficult standard.
I'm open-minded to all possibilities, but I need a good reason to accept that a possibility is actually true, and I told you that I cannot think of a good reason to accept that it is impossible to eliminate all disturbances. Do you have one?
Quantum mechanics.
That's fairly intuitive, right? Yet the mathematical analysis of the case contradicts this intuition. Look at the equation of motion of the ball bearing. At any time when it is at the apex, both the net force being applied on it and its acceleration are zero, as they should be, or else this equation would not be a solution to Newton's laws of motion. And yet the ball bearing is also radially displaced a finite distance away from the apex after only a finite time. It need not be the case, as our intuition about causality seems to demand, that the cause (viz., the net force) begins operating before the effect (viz., the acceleration) starts being manifested. In this idealized Newtonian case, they both come into existence together (i.e. at any time t > T) without there being some other physical cause that accounts for the moment in time T when the ball starts moving away from the apex (with only an initial null acceleration at t = T, and at all t <= T).
But anyway, I just meant to talk about the standard example as an illustration of spontaneous symmetry breaking.
Not really, because it assumes metaphysical realism: the idea that there might conceivably be an external God's eye view of the world that amounts to a complete description of it, including its laws and causal relations. I think Kant has shown this not to a be possible account of our (or of any conceivable) empirical world. But this case remains an instructive mathematical possibility. I'll say a bit more later on.
I have never heard that QM shows us the impossibility of eliminating all disturbances. I was under the impression that QM is still struggling to lock down exactly what's happening at the level it focuses on.
Ok, so it sounds like you already knew the answer to your question. At the QM level the ball doesn't get pushed; at the level we most interact with, it does.
On one view there are no accidents, as every event will turn out to have some particular cause. Look closely enough and you will find the nudge that actually did the trick of toppling the perfectly balanced ball bearing or pencil. So the accident becomes in fact micro-determined. It only looks like an accident while we are ignorant of the fine detail.
I want to contrast that usual view with its opposite. The story can be turned around by laying the stress on the other facts. In the end, it was the impossibility of eliminating all sources of environmental disturbance that was the cause of the toppling. Yes, there was some individual nudge that did it. But a nudge of some kind was also absolutely inevitable. We thus have no good reason to point a finger at some particular nudge as if it were significant in its own right. It was nothing special. If it had failed to act, the ball bearing would have still fallen just as surely because an unlimited number of other nudges were there to step in and do the same job.
So the odds of the accident happening would be 100% from that point of view. We could say that the ball bearing simply has the propensity to fall. It doesn’t need a particular push. It is generically set up to respond to a perturbation. Identifying some individual nudge as the actual culprit adds no real information to an account of the causality.
Clearly this line of reasoning then takes you into the interesting metaphysical questions about how the Big Bang could happen out of “quantum nothingness”, or what causes an unstable particle to decay.
It is simply not accurate to say the ball has a propensity to fall, or that our view of things bottoms out at some generic level and we can't or could never be specific. You are just choosing to be less accurate, choosing not to get too specific.
So you want this discussion to be unmoored from logic, sense and causality. That's fine, it's some kind of thought experiment, but as you pointed out I am missing the point.
In what way is it useful? You mentioned something from nothing and unstable particles... but we have ways of determining those things; those are questions physicists answer with precisely things like quantum mechanics and mathematics. Why is it that you think these tools are insufficient?
What is the advantage of operating from the basis you are suggesting?
So the act of placement is really a push, because placement cannot be precise. And, if the act of placement could be precise enough, or the surface flat enough, then a push would be needed. Therefore it's always a push.
That is my view. The ball fell because when it was released by whatever was holding it on the apex, its centre of mass was not exactly above the point of contact with the dome, so it started falling.
One can foresee an objection that says 'But what if the CoM was exactly above the point of contact when released?' The response to this is:
1. The probability of that being the case is zero, as the horizontal coordinates of the CoM would have to match two exact real numbers.
2. In addition, the CoM would have to have both horizontal components of its velocity exactly zero upon release. In practice no object can have an exactly stationary CoM, because of the same probability argument.
Have you looked at the paper linked to in the OP, though? The case has been specially contrived such that even if the ball is placed exactly at the apex, and with no initial velocity at all, then, consistently with Newton's laws of motion, it could either remain stationary or start falling in an arbitrary radial direction, with distance from the apex r(t) = (1/144) (t – T)^4 for t >= T (where T is the time when it would spontaneously start moving in the absence of any net force at that time).
I agree that in practice this precision would be impossible, but that isn’t the point being made. The point is about how we like to assign causality to particular triggering events, but if a triggering event is almost sure to happen, then the particular loses its hallowed explanatory status. The cause can be treated as completely generic.
Making the generic cause about the impossibility of placing a ball with arbitrary accuracy on an apex is another way of saying the same thing, though not quite as strong a version as focusing on the impossibility of eliminating triggering fluctuations.
I rather like the idea of a generic cause of the symmetry breaking mechanism. The generic cause, in this case, is the (practically unobservable) fluctuation background that serves as the explanation of the emergence of the indeterministic law that governs the (practically) observable events of symmetry breaking. There being such a generic cause doesn't entail that there is a law on account of which a contrastive explanation can be given as to why the ball fell in one rather than another direction, or fell immediately rather than at a slightly later time. There may be no such law, and no such contrastive cause. (There may however be an emergent law specifying the half-life of a ball's staying poised before starting to fall.)
Yeah. I think this is the next interesting case to dig into.
Another slant on the OP would be the more standard example of a phase transition where a system is in a state with infinite correlation length. It is right on the cusp of a global symmetry breaking and any perturbation at all will push it across the threshold. At that point you can throw away the need to identify the triggering disturbance. It was always going to happen somewhere.
All this is about how to view spontaneous symmetry breakings. The formalism usually reduces the description of the physical system to its perfectly poised symmetry, leaving the issue of a triggering cause - the source of the spontaneous change - outside the model. This leads to some rather arbitrary metaphysical conclusions by those only prepared to consider what the formalism is prepared to cover.
So that is the issue. How can we talk about the lighting of the blue touchpaper, the first cause, in a way that makes it part of the model and not some ad hoc extra? If fluctuations are treated as generic, then that would answer the question. The causal problem is flipped, as what would now become surprising is if some critical instability could be prevented from breaking.
In short, why does existence exist? It no longer requires a particular triggering cause that broke a prior quiescent nothingness. Now the generic issue is how could wild fluctuations ever get suppressed? The causal story becomes about the physical mechanism that could limit the possibility of fluctuations to the point where stable order finally starts to reign.
OK. I don't find assigning causality a productive exercise, so I'll leave the field to those that do.
That's strange. I would have thought assigning causality to relevant agents, events, or states of affairs, is quite productive (pragmatically) whenever something occurs, we don't know why it occurred, but we would be interested in seeing to it that such an event occurs again, or in preventing it from reoccurring, for instance.
Comments suggest to me that the cause of the sudden spontaneous motion is a concealed fourth-derivative jounce. So it is like the ball is set down on the apex in the middle of just being about to snap. The maths allows this because it is blind to the concealed action. The maths only concerns itself with the first and second derivatives being zero to say the ball is at rest. It can't pick up a singularity, as when the ball would be briefly motionless at the apex of its trajectory when tossed in the air.
That would seem a rather trivial get out though. And I don’t yet see why this particular curvature is so special. Any explanation for why this curve is somehow poised in a way that allows for the claimed indeterminism?
This is another good little commentary I was looking at - https://theconfused.me/blog/is-newtons-first-law-merely-a-special-case-of-the-second/
The location function given has discontinuous Jounce (aka Snap), which is the fourth derivative of displacement (second derivative of acceleration) wrt time.
The Jerk (3rd deriv of displacement) is non-differentiable.
Jerk is (t-T)/6 for t>T and 0 otherwise.
Jounce is 1/6 for t>T and 0 otherwise.
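To make the jump visible, here is a quick sketch in plain Python of those successive derivatives of the radial coordinate r(t) = (t - T)^4/144 (my own illustration; T is set to 0 for convenience, and each expression applies for t >= T, with everything identically 0 before T):

```python
T = 0.0
# Successive time derivatives of r(t) = (t - T)^4 / 144 for t >= T.
# Each of these is identically 0 for t < T.
r      = lambda t: (t - T)**4 / 144.0   # radial position
rdot   = lambda t: (t - T)**3 / 36.0    # velocity
rddot  = lambda t: (t - T)**2 / 12.0    # acceleration
jerk   = lambda t: (t - T) / 6.0        # continuous, but kinked at T
jounce = lambda t: 1.0 / 6.0            # jumps from 0 to 1/6 at T

# Just after T, position, velocity, acceleration and jerk are all still
# essentially zero, while the jounce has already jumped to 1/6:
eps = 1e-9
print([r(eps), rdot(eps), rddot(eps), jerk(eps), jounce(eps)])
```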
So I don't think this case does what it at first seems to do, which is to generate breaking symmetry out of nothing. The breaking symmetry is always there in the discontinuous Jounce, which we have simply assumed. The plausible physical solution is that which has smooth displacement and all derivatives are always zero - ie symmetry doesn't break.
It does raise an interesting question for me though. There are functions called 'bump functions' that are smooth and yet are zero everywhere outside a compact interval. A one-sided building block for them is f(x) = exp(-1/x) for x > 0 and f(x) = 0 for x <= 0. This is zero until it gets to x = 0 and then it suddenly starts increasing 'for no reason', asymptotically heading towards 1 from below.
I wonder if one could construct a dome of a shape that made the displacement vector a bump function. That would be mysterious because the function is infinitely-differentiable, and hence doesn't implicitly already deny causality.
The way one excludes bump functions, when one wants to do so, is to restrict ourselves to analytic functions, which can be expressed locally as a power series. There are no analytic bump functions.
I have always found bump functions very mysterious, and like to ponder them when I have nothing else to do. They are a truly beautiful case of nothing happening, then something suddenly starts to happen, without a discontinuity anywhere to be found.
I will muse over whether one could construct a dome shape that would make the displacement vector a bump function.
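In the meantime, here is that one-sided function in plain Python, just to make the 'nothing happens, then something happens, with no discontinuity anywhere' behaviour concrete (my own sketch; the sample points and tolerances are arbitrary):

```python
import math

# Smooth one-sided transition: identically 0 for x <= 0, strictly positive
# for x > 0, yet every derivative vanishes at x = 0.
def f(x):
    return math.exp(-1.0 / x) if x > 0 else 0.0

# Nothing happens to the left of 0:
assert f(-1.0) == 0.0 and f(0.0) == 0.0

# Just to the right of 0 the function is positive but absurdly small,
# and a crude forward-difference slope is also essentially zero - the
# numerical shadow of all derivatives vanishing at the origin:
assert 0.0 < f(0.05) < 1e-8
h = 1e-6
assert (f(h) - f(0.0)) / h < 1e-12
```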
Quoting apokrisis
The curve is constructed so that the displacement function is a constant multiple of (t-T)^4 for t>=T. The same would work for a displacement function proportional to (t-T)^n for any n>=4. In that case the non-differentiability won't appear until the (n-1)th derivative of the displacement function. So as long as n>=4 the nondifferentiability will be out of sight and out of mind.
That's interesting. I hadn't thought about the implications of that. But I am unsure about the implications it has for symmetry breaking understood as a nondeterministic bifurcation in phase space. Imagine the ball bearing being sent sliding up the slope of this surface with just the right velocity such that it ends up at rest at the apex. Granted, there will occur a discontinuity in the jounce at the time when the ball comes to rest. How are this process and its time reversal not both plausible physical possibilities (in any arbitrary radial direction)? And if they all are plausible physical possibilities, then it would appear that there is a bifurcation in the phase space of this system. (Granted, in a 'real world' implementation, it's still vanishingly improbable that the ball would ever be placed precisely at the apex, and laid there completely at rest.)
The higher derivatives would have to be nonzero for the ball to pass the summit and go down the other side. If it stops there, there are no discontinuities, because Jounce and Jerk were already zero on the way up.
Well, what is 'concealed' (or worth paying attention to) is the discontinuity in the jounce. But I am unsure that this discontinuity can be construed as the cause of the spontaneous motion of the ball after the time interval where it has been at rest and subject to no net force. What may be instructive, and may dispel some of the weirdness of the case, is to consider the limiting case of the effect of a small horizontal momentum P being transferred to the ball bearing (such as the impact from a single molecule of air) as this initial momentum transfer P tends towards zero. The integrated time that it takes for the sphere thereafter to fall along the surface, in the radial direction in which it was initially pushed, tends towards a finite time T as P tends towards zero. This is why, in a sense, we may say that the symmetry breaking event requires no initial perturbation at all.
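That limiting claim can be probed numerically. A rough sketch in plain Python (my own, using a naive Euler integration of r'' = sqrt(r); the step size and target distance are arbitrary choices): shrinking the initial push v0 by many orders of magnitude barely changes the time to fall a fixed radial distance, which converges on a finite limit instead of diverging.

```python
import math

def fall_time(v0, r_target=1.0, dt=1e-3, t_max=100.0):
    """Time for a ball starting at the apex (r = 0) with a small initial
    radial speed v0 to reach r_target, under r'' = sqrt(r)."""
    r, v, t = 0.0, v0, 0.0
    while r < r_target and t < t_max:
        v += math.sqrt(r) * dt   # naive (semi-implicit) Euler step
        r += v * dt
        t += dt
    return t

# A push a million times weaker makes almost no difference to the fall time:
for v0 in [1e-3, 1e-6, 1e-9]:
    print(v0, fall_time(v0))
```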
Yes, I wasn't picturing the ball to keep on going to the other side. Rather, it reaches a bifurcation point in phase space. It is equally physically possible (with undefined probabilities) that it will stay at rest, or immediately start moving towards an arbitrary direction.
The jounce wasn't zero on the way up. It was constant and equal to 1/6. It will drop to zero only if the ball thereafter remains at rest at the apex (which is only one physical possibility among others).
True.
I realised that what I wrote above, as powers of (t-T), is not actually the Jounce, Jerk etc but rather the higher derivatives of the radial coordinate r. It's a scalar rather than a vector. I doubt that helps but I need to get my perspective right before commenting further. At present, I'm finding the case of sliding the ball up more perplexing than it starting at the top.
I'm also wondering whether we need to incorporate the dome into the system we are analysing rather than treating it as a supplier of external forces, in order to make sense of the scenario.
Also - I just realised that the dome is not smooth. A geodesic over the top will have a nondifferentiable first derivative, because the second derivative of r^(3/2) blows up at 0+. That can explain the discontinuity in what we've been calling Jounce. That Jounce is a function of the dome's shape, and the shape is not smooth at the top, so it is reasonable for there to be a discontinuity in Jounce there, as you point out there is.
A new perspective on the whole problem just came to me though. Here it is:
The paper argues that the ball could move because there is a solution to equation 2 (d^2r/dt^2 = sqrt(r)) in which the ball moves, where that equation 2 is derived from Newton's Second Law. But that doesn't mean the solution is applicable. To test whether it's applicable, we need to substitute it back into all Newton's Laws and see if they still hold.
Newton's first law says that an item will remain in its state of motion (which is interpreted to mean its velocity does not change) unless acted upon by a net external force. So the ball in a perfect, stationary position at the top will remain in its state of motion, which is stationary. It will not roll down. Hence the solution is non-Newtonian and must be rejected. It satisfies the second but not the first law.
The same goes for when we slide the ball up. When it arrives at the top it is stationary and balanced. Newton's first law says it will remain in that state until pushed.
For me, that solves the puzzle. The solution in which spontaneous movement occurs only satisfies Newton's 2nd and 3rd laws, not his first.
I think your construal of the first law might be too strong, or too literal, and may make it inconsistent with the second law. This first law often is seen as a special case of the second, where the net force is zero. Thus construed, the force being applied to a mass at T only is relevant to its state of motion at T. Thus, the fact that the net force being applied to the mass at T is zero ought not to entail that the state of motion will remain unchanged at a later time but only that the rate of change of its velocity is zero at T. This is quite obvious when the mass moves within a variable field of force. Granted, it is less obvious in the case where a mass is instantaneously at rest in a variable field of force at a point where the force vanishes. This is the sort of case that allows for bifurcations in phase space. But if an overly strict construal of Newton's first law might dictate that a particle would remain stuck at such a point of vanishing force when it arrives there with a null velocity (so as to conveniently remove the bifurcation in phase space) then this construal of Newton's first law would also have the very unfortunate consequence that it makes it inconsistent with the second law in other cases. How would you account, consistently with such a strict construal of the first law, for the fact that the point mass does not remain at rest, in the case where the field of force varies at that point as a function of time and hence is null only for an instant?
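A technical gloss that may help locate the disagreement: the bifurcation exists because the right-hand side of the equation of motion r'' = sqrt(r) is not Lipschitz continuous at r = 0, so the standard uniqueness theorem for differential equations (Picard-Lindelof) simply does not apply there, and nothing in the second law selects between the solutions. A sketch of how delicately the dynamics is poised at that point (plain Python, naive Euler; the seed value and step count are arbitrary choices of mine):

```python
import math

def integrate(r0, steps=20000, dt=1e-3):
    """Naive (semi-implicit) Euler integration of r'' = sqrt(r),
    starting at rest at radial position r0."""
    r, v = r0, 0.0
    for _ in range(steps):
        v += math.sqrt(r) * dt
        r += v * dt
    return r

# Started from exactly r = 0, the ball sits at the apex forever:
print(integrate(0.0))    # 0.0
# Started from any positive seed, however absurdly small, it takes off:
print(integrate(1e-30))
```

Both behaviours are consistent with the same equation; the seed only decides which branch the integrator lands on.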
I got a bit lost here. Newton's third law is that for every action there is an equal and opposite reaction. I can't see how that law is relevant to the questions being examined in this scenario. Can you outline what you had in mind here?
Sorry. I got confused. (Can you imagine that I have an undergraduate degree in mathematical physics?) I was thinking of Newton's second law (F = dp/dt) and wrongly labelled it Newton's third law. I don't remember making this mistake before. Maybe I can blame the emotional impact of the Kavanaugh saga.
First we observe that Newton's laws were aimed at explaining real phenomena and so did not use the pernickety precision needed to cover all boundary cases. The case being discussed here is not only practically impossible but has probability zero of occurring even in Newtonian theory, so is also 'theoretically impossible' if we make the fudge of equating probability zero with impossible.
So to try to apply Newton's Laws to such a bizarre scenario we'd first have to expand them so it was clear what they entailed in that situation. I think expansions could be made that make bifurcation possible, and others that prevent it.
An expansion that allowed bifurcation would be to essentially remove the First Law, by expressing it in a way that is unambiguously contained within the second law. We could do that by re-writing it as
'The time derivative of the velocity of a massive object is zero at any time at which the net force on it is zero'.
That makes it a special case of the 2nd law, so it adds nothing to it.
An expansion that prevents bifurcation could be:
'Where there is more than one future movement pattern of an object that is compatible with the 2nd and 3rd laws and the conditions in place at time t, and one or more of those patterns involves the object's velocity remaining constant for the period [t,t+h) for some h>0, the pattern that occurs will be one of those latter patterns'.
Very wordy, I know, but it has to be in order to deal with nonphysical cases like this without just disappearing into Law 2. Note also that it leaves open the possibility that there may still be bifurcations possible with this law - not the one discussed in the paper, which would be ruled out, but other ones in even more pathological cases. I suspect it may be possible to prove there cannot be, but that's just a hunch.
I'm thinking there might be some sort of analogy to Asimov's three Laws of Robotics, where each law can overrule those above (or is it below) it, but I'm not sure if that works.
Or perhaps the "deep truth" is that the actual event that results in a push is unpredictable, just as the moment when the ball starts moving is unpredictable. Your thought experiment concerns chaos and complexity, a subject that interests me greatly, but on which I have no expertise to speak of. :blush:
:up: :smile:
But even internal vibrations are often demonstrated to have an external source. When I heat my lunch with the microwave, it causes internal vibrations, but there is an external source.
If the external source of the internal "push" cannot be found and identified, this doesn't mean that we can exclude the possibility of an external source, saying that there is no push just because we can't see the push.
Notice, though, that this proposed expansion only shaves off 'branchings out' from a bifurcation point towards the future. Determinism is commonly defined as a property of a system whereby the state of the system at a time, in conjunction with the dynamical laws governing its evolution, uniquely determines its state at any other time (either past or future from that point in time). This is a time-symmetrical definition of determinism. Under that definition, if the laws are such that there remain bifurcation points in phase space that branch out towards the past, then the system is still indeterministic. The system's past or present states uniquely determine its future; but its future or present states don't always uniquely determine its past.
[i]'Let P be the case that an object O has location L in phase space at time t. Let the set of patterns of motion of the object that are consistent with P under the 2nd and 3rd laws be S. Let U be the subset of S such that the object's velocity remains constant for some half-open interval [t, t+h), where h may vary by pattern.
Let W be the subset of S such that the object's velocity remains constant for some half-open interval (t-h, t], where h may vary by pattern.
Let V be the intersection of U and W.
Then the actual pattern of motion (both pre and post t) is in V if that is nonempty, else in U if that is nonempty, else in W if that is nonempty.[/i]
This states that the solution must have locally constant velocity both looking backwards and forwards if that is compatible with laws 2&3, else locally constant future velocity if compatible with 2&3, else locally constant past velocity if compatible with laws 2&3. Otherwise the law is silent.
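The selection rule just stated can be mirrored in a toy sketch (the set names V, U, W are from the statement above; the pattern labels are placeholders of mine, standing for the histories that share the same state at t):

```python
# Sketch of the proposed rule: among the patterns S compatible with
# laws 2 & 3 at time t, U has constant velocity on [t, t+h), W has
# constant velocity on (t-h, t], and V = U & W. The rule prefers V,
# then U, then W; if all three are empty, the law is silent (None).
def select(S, U, W):
    V = U & W
    for candidates in (V, U, W):
        if candidates:
            return candidates
    return None  # the law is silent

# Toy illustration: three histories sharing the same state at t.
S = {"always_at_rest", "starts_moving_at_t", "arrives_and_stays"}
U = {"always_at_rest", "arrives_and_stays"}    # steady looking forward
W = {"always_at_rest", "starts_moving_at_t"}   # steady looking back

assert select(S, U, W) == {"always_at_rest"}   # V wins when nonempty
```

Note that when V is empty the rule falls through to U and then W, so it still leaves some pathological cases undecided, as acknowledged above.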
This seems to do as much as possible to remove both future and past bifurcations in a way that is consistent with our intuitions.
What about it?
If the expanded law is allowed to remain silent for the specific set of states of the system whereby its trajectory in phase space aims precisely at the potential bifurcation point, how is that law any different from an indeterministic law that allows for any of the bifurcations that are merely consistent with the second law?
The case where the ball stays at rest on the apex during a finite time interval merely constitutes a subset of the set of the trajectories in phase space that are aimed at the potential bifurcation point. Also included into that set are all the trajectories whereby the ball is rolling up the surface towards the apex with just enough speed to reach it with zero velocity. In that case, both the second law, and your expanded law (if I understand it correctly) are silent regarding what happens next.
As regards what happened in the past, to project backwards we need more information than just the current phase location of the ball. We need the phase locations of all the other elements of the system, including whatever it was that fired the ball up the slope, if that is what happened. This is a requirement for backwards projection in any multi-particle system, not just those with potential bifurcations. Essentially we are asking 'how did the ball get there', and we can't work that out just by looking at where the ball currently is. Eg consider a ball sitting at the bottom of an inverted dome. It could have rolled down from any angle, or dropped directly down out of the sky. Our inability to say which happened reflects that we don't know the current phase locations of all relevant particles, not that a bifurcation may have happened.
But what happens in the case where the ball is being sent rolling up towards the apex with the requisite speed? Consider the situation at any time T, when it has already been rolling up for a while, and hasn't reached the apex yet. In this case, we fully know the pattern of motion of the ball in the temporal vicinity of t = T. Is your law mandating that the ball will stand still indefinitely after it has reached the apex, or is it rather silent regarding what will happen next?
I think my use of the word 'observation' in the above rule may have conjured up images that were not intended. I have changed the words so that now it just refers to what the location in phase space is at that time, regardless of whether it is observed. It could, for instance, have been predicted from an earlier observation. In the case of your latest post, we have an observation at time T, which enables us to unambiguously project the path up to time t2, using only the 2nd law. At time t2 the 2nd law on its own allows for bifurcation, so the 1st law steps in and requires that the path that it follows is the one for which the velocity remains constant at zero from that point onwards (until such time as the dome wobbles or a new force acts on the ball).
Very well. In that case your law doesn't describe a deterministic system under the time-symmetrical definition of determinism. It allows bifurcations of paths in phase space towards the past. But you had meant to strengthen your law in order precisely to remove such backward looking bifurcations.
The definition above prevents bifurcation in the past by means of the subset W. As I said earlier, one needs to be more cautious when talking about bifurcation in the past. Generally, for any given observation, no matter how ordinary, there are multiple past scenarios that could have led to it (example of ball at bottom of inverted dome), and distinguishing between them needs information about more particles than just the ball and the dome. But that's not bifurcation in the sense we've been discussing. It's just a lack of complete information.
I am assuming that the dynamical equations which govern the system, together with whatever supplementary laws might be posited, determine the set of the physically possible histories of the system. I am assuming that the system consists in the dome, the ball bearing, the ambient gravitational field, and nothing else. The physically possible histories are being represented by trajectories in phase space. The system is deemed deterministic (in the time-symmetrical sense) if the set of all the physically possible trajectories in phase space presents no bifurcations. A backward looking bifurcation at T would consist in a case where two or more partial histories of the system before T would be consistent (in respect of physical possibility) with the same unique partial history after T. Your law seems to allow for this possibility.
Also, since the laws of classical mechanics are symmetrical with respect to time, it's deemed to be impossible to tell if a movie depicting a segment of the history of a mechanical system is running forwards or backwards. But if your law were governing a system, and a movie was shown of a ball rolling up a Norton dome and coming to rest at the top, then it would be possible to tell for sure that the movie is being run forwards since the time-reversal of this scenario would be physically impossible.
In that case it is impossible for the ball to have rolled up the dome, because there is nothing in the system that could have given it the necessary upward impulse. So if we observe it sitting at the top of the dome, the only possible history is that it has always been there. This can be derived from the 2nd law alone. The 1st law is not needed.
I don't see any reason why a physical system can't have some of its components initially in a state of motion. Velocity is relative to an inertial reference frame anyway. If it was initially at rest in some inertial frame, then it was initially moving relative to another inertial frame. And the laws of classical mechanics are Galilean-invariant. Let us assume that the ball has been shot up with a cannon, or hit with a cue stick, if you like. The laws of motion govern its state of motion, thereafter, from the time after it was shot (or hit) right up until the time when it reaches the top of the dome.
In that case the path that involves the ball having always been at the top of the dome will not be consistent, under the 2nd law, with the current state of the cannon or the cue stick (eg heat, momentum). Also, the momentum of the dome will be different in the two cases, as the ball transfers its horizontal momentum to the dome (3rd law) as it climbs to the top.
The second point has a more general application than the first. If we truncate our histories before we get back to the firing of the cannon, so that it starts with the ball already in motion, the momentum of the dome will differ when the ball is at the top between the case where it was always there and the case where it rolled up.
I had always assumed that the equation of motion of the ball (away from the potential bifurcation point) was as given in Norton's paper. For this solution to be exact, the potential motion of the dome is neglected. If this had not been the case, the force that holds the dome up against gravity would have to be specified, as well as the dome's mass, moment of inertia, etc. Those complications would seem to be quite beside the issue being discussed in the paper (or in this thread). The surface of the dome is better conceived as a strict restriction on the range of motion of the ball, providing a reaction force just as strong as needed to keep the sphere on this mathematically defined surface.
I don't know why I've been arguing in its defence. I suppose I just got caught up in the momentum, and the challenge of coming up with arguments against a well-directed set of challenges.
I'm thinking of reverting to a version of the first law that only removes ambiguity (bifurcations) going forward, not backward, as that is what my intuitive feel for Newton's Laws is.
But I'll sleep on it first, in case I change my mind again.
Where's it being accounted for here?
Gotta love it...
The presence of a uniform and constant field of gravity g is assumed in the setup of the problem. It's the source of the weight, mg, of the ball bearing. It's thus, indirectly, the source of the radial (horizontal) component of the reaction force exerted by the surface on the ball. This reaction force vector is constrained (when summed up with the weight vector) to maintaining the acceleration vector along the tangent to the slope. The radial component of the reaction force is proportional to the sine of the slope at the point of contact with the ball, and hence null when the ball is located at the apex. Those assumptions, together with Newton's second law, allow the derivation of the equations of motion of the ball. Since there is a plurality of such physically possible equations of motion, the system is indeterministic.
Yes, from a far away planet, with the variable attraction from the dome itself being neglected.
I mean, when taking gravity into consideration...
The source of the puzzle regarding causality is that the cause of the initial departure from a state of rest is usually (or intuitively) being identified with the existence of the net force being exerted on the mass at the moment of departure from rest. But, in this case, this force is exactly zero. The net force only starts to grow after the ball has already begun to move away from the apex. So, what was the cause of the beginning of its movement? That's the conceptual puzzle.
Right?
That's right, although the force at issue, here, is the net force. For sure, you can allege that there ought to be some random force from thermal molecular motion that kicks the ball out of balance. But the puzzle remains since the equation of motion that accounts for the ball "falling away" from the apex towards some arbitrary radial direction remains valid and strictly consistent with Newton's laws of motion even when there is no such perturbative force being posited.
This part in particular. The sine of the slope changes with molecular decay (in the 'right' places), right?
Not sure what molecular decay is. But if you're thinking of thermal molecular motion, yes. It would be a source of fluctuation of the net force, and then could be appealed to as the cause of the fall. But that doesn't address the original conceptual puzzle since, according to Newton's laws of motion, the "fall" (or initiation of the movement) of the ball from Norton's dome is physically possible even if there is no initial perturbation at all. It occurs even in the idealized case where the ball and the dome are ideal solids, perfectly smooth and perfectly rigid, in a total vacuum.
I'm not seeing the need for an initial perturbation either. Molecular decay within the system can change the net force, causing the bearing to begin moving, all the while never appealing to a force outside the system, aside from gravity. The physical structure of molecules changes over time. This change alone is enough to account for the movement of the bearing after sufficient time without introducing another force.
So, you are envisioning a spontaneous change in the microscopic shape of the ball. This would break the initial symmetry and move the ball's center of gravity away from directly above the apex of the dome. Fair enough. But it still doesn't address the initial problem regarding Newton's laws: namely, that they allow for the ball to start moving towards some arbitrary radial direction even in the case where there is no such initial departure from symmetry from any cause whatsoever.
And this is solely as a result of the shape of the dome?
Yes. Although Norton's dome isn't the only shape that allows this, many shapes, such as a spherical dome, or a paraboloid, wouldn't allow it since it would take an infinite amount of time for a perfectly balanced ball to "fall off" from the apex. (Or, equivalently, in a time-reversed scenario, it would take an infinite amount of time for a ball sent sliding up to come to rest at the apex).
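This shape-dependence can be probed numerically. Under the dome's effective radial law r̈ = √r, a ball released at rest slightly off the apex reaches a fixed distance in a time that stays bounded as the release point r₀ → 0 (approaching 2√3 in these units), whereas under a linear restoring law r̈ = r (which is roughly how a parabola-like apex behaves) the escape time grows without bound, like ln(1/r₀). A rough sketch (the step size, test radii, and unit-normalised force laws are my choices):

```python
import math

def escape_time(accel, r0, R=1.0, dt=1e-4):
    """Time for a particle released at rest at r0 to reach r = R,
    under radial acceleration accel(r), by leapfrog integration."""
    r, v, t = r0, 0.0, 0.0
    a = accel(r)
    while r < R:
        v += 0.5 * dt * a   # half kick
        r += dt * v         # drift
        a = accel(r)
        v += 0.5 * dt * a   # half kick
        t += dt
    return t

norton = lambda r: math.sqrt(r)  # Norton's dome near the apex
linear = lambda r: r             # parabola-like apex, for comparison

for r0 in (1e-2, 1e-4, 1e-6):
    print(r0, escape_time(norton, r0), escape_time(linear, r0))
```

The Norton times settle towards 2√3 ≈ 3.46 as r₀ shrinks, while the linear-law times keep growing (analytically, arccosh(1/r₀)), which is the finite-versus-infinite "fall off" time contrast described above.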
Hmmm... Sounds eerily similar to Zeno. It may be that the solution lies in what's being neglected by the problem itself (how it's being framed is the problem). Molecular decay disallows perfect spheres and perfect domes...
Two different questions are being confused here.
The OP was not intended to be about Norton's dome and its claims of Newtonian indeterminism due to a latent jounce concealed in the initial conditions. The OP was about how we would think about an initiating cause when it comes to spontaneous symmetry breaking. The usual natural inclination is to finger one individual perturbation - the straw that broke the camel's back. But the alternative view is to point to the general impossibility of reaching the Platonic perfection modelled by a Newtonian set-up.
In that other view, Newtonianism is only arrived at via the suppression of environmental perturbations. And so, even if perturbations can be more or less completely suppressed - thus allowing Newtonian determinism to be a useful general description of an actual material system - they can also never be absolutely suppressed. Hence - generically - the environment still fluctuates, meaning there will always be a straw to break the camel's back.
"Molecular decay" would be just one more example of this generic inability to suppress all background fluctuations. Just the same as thermal jitter in the ball and dome. And if we get down to the quantum level, there just is always a probability of the ball quantum tunnelling across the threshold, moving sufficiently off-centre due to uncertainty.
In the real world, there would also be some residual frictional forces holding the ball bearing in place. The surfaces in contact would have some microscopic degree of roughness. So in reality, the whole frictionless dome set-up is unphysical.
Quoting Pierre-Normand
I have come to the tentative realisation that I don't have any intuitive sense that a model of the world needs to satisfy that criterion. Key contributors to this feeling are:
With that in mind I have returned to favouring my original reformulation of Newton's First Law:
Quoting andrewk
This is not just an artificial construction. It is my best attempt at stating formally what I believe intuitively to be the case for a Newtonian model.
It is carefully formulated so as to still allow movement where there is a time-varying external force on the particle, that is zero at time t when the particle is stationary. In that case, if the force starts to increase at time t (eg according to formula F(t') = t' - t), the object will commence movement at time t because it would contradict the 2nd law if it did not do so. But in the case of the dome there is no time-varying external force. The external force varies by position, not time.
I would be interested in comments on this position.
And another thing
Another way to avoid bifurcations, both future and past, might be to replace the 1st law by a new law saying that the relationship of any force to time must be analytic. Analytic functions are expressible as a power series, which implies, but is stronger than, the condition of being smooth. I have a strong intuitive sense that all force functions are analytic.
With this constraint, the dome could not be constructed, because its surface is not smooth, so it would require application of a non-smooth force to make it into that shape. I also doubt whether the ball could be positioned perfectly atop the dome by application of an analytic force.
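One check in that direction (my sketch, units as in Norton's paper): along the spontaneous solution r(t) = (t − T)⁴/144, the net force history is F(t) = √r(t), i.e. zero before T and (t − T)²/12 after. That function is C¹ at T, but its second derivative jumps from 0 to 1/6, so it is not analytic there; requiring analytic force histories would therefore exclude the spontaneous branch.

```python
# Check that the force history along the spontaneous Norton solution,
# F(t) = sqrt(r(t)), fails to be analytic at the onset time T: its
# second time derivative jumps from 0 to 1/6 across T. T is arbitrary.
T = 1.0

def force(t):
    return 0.0 if t < T else (t - T) ** 2 / 12.0

def d2(f, t, h):
    """Central-difference second derivative."""
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / h ** 2

left = d2(force, T - 0.01, 1e-3)    # approaching T from the left
right = d2(force, T + 0.01, 1e-3)   # approaching T from the right

assert abs(left) < 1e-6             # second derivative is 0 before T
assert abs(right - 1.0 / 6.0) < 1e-6  # and jumps to 1/6 after T
```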
For this thought too I would very much appreciate comments.
The metaphysical reality of time reversal scenario's would also be in question here. If classical reality is actually the product of maximally constrained fluctuations - spontaneity is only ever suppressed - then time reversal can only be a locally effective story, not the generic metaphysical story.
Yes, I can drop your Ming vase on the floor and you will be able to gather up all its shards to glue it back together. Initial conditions can be recovered if no information is either created or lost in the time evolution of the event.
But thermodynamics would appear to say that is not the generic case, certainly at the Cosmic scale on which we are trying to write the laws of nature. So especially if we grant that the development of physical complexity erases past information - because hierarchical organisation acts downward to simplify the parts of which it is composed - then time irreversibility becomes the generic condition.
David Layzer makes this cosmological case - http://www.informationphilosopher.com/solutions/scientists/layzer/
The larger point I am of course working towards is that when it comes to the big question - why does anything exist? - we mostly start from completely the wrong metaphysical place. The ball perched for all time on top of a dome is just an example of how far our most familiar models of causality are from the physical reality.
Layzer's approach shows how classicality exists as literally borrowed time. The Big Bang cosmos had to go through its phase change - gravitating mass had to condense out of the relativistic thermal fireball - to create "Newtonian" degrees of freedom. You suddenly had particles that could have a location because they could move at less than light speed and so inhabit a realm where there was emergently a space-like causal separation between events.
So when speculating about the beginnings of everything, we have built up a stock of false Newtonian intuitions. Newtonianism only describes a Cosmos that has got cold and large enough to have undergone a complete phase transition - one that takes it from a quantum description to a classical one. And if we try to time reverse that, how do we cross the divide given it involves a massive loss/creation of information at the transition point? (A loss and a creation, depending on whether you are tracking the negentropic order created, or the entropic disorder lost.)
The quantum fireball was of course its own further crossing of a transition with its own symmetry-breaking creation story. But we can't say more about that until the lingering Newtonian determinism is completely dispensed with.
The OP presupposes an utterly impossible entity. It would have ended rather abruptly had its author noticed this fatal flaw.
There is no need to worry about trying to fix Newton's laws if you just accept them as effective descriptions. They are how reality would work in the limit. But reality can only approach such a limit.
It is defending Newtonianism as a literal truth that creates the problem.
And the metaphysical insight for philosophy of maths is that a third category beyond the discrete and the continuous needs to be recognised - that of the vague.
Limits in metaphysics can only be defined logically in dichotomous or complementary terms. The discrete and the continuous are ideals - opposing limitations on possibility defined by their formal reciprocality. That then leaves raw possibility - ie: vagueness or indeterminacy - sitting in the middle as the stuff that can approach one or other limit with arbitrary precision.
Yeah, it was my mistake to link to that paper for sure. I should have just left it at the gif I was taking.
But still, Norton's dome is also its own interesting debate. I'm just saying don't keep mixing the two things up.
Reflecting on things further, we do seem to wind up with the issue that there always needs to be some kind of probabilistic tunnelling process for there to be an actual causal issue. The symmetry of the initial conditions is already broken in the sense that there is both a probabilistic process - some kind of quantum or even classical jitter - and the barrier preventing it breaking through until a threshold is accidentally breached.
So the ball bearing perched on the dome is going to be able to survive any nudge until there is one strong enough to overcome the frictional forces that would oppose the ball moving. Instead of the accelerative nudge tipping the ball bearing, that energy would instead start to heat up the environment via friction.
So again, if we really zero in on the full physics of the thought experiment, we find ourselves being pointed down the path to a fuller thermodynamical conception of this little causal universe. Not by accident, we look to be headed towards Feynman's Brownian ratchet and how that imposes an ultimate physical cut-off on determinism.
The general principle that follows is that we need to view first causes - the spontaneous breakings of symmetries - not as something hot happening to something cold (as in the bump that pushes the ball bearing), but instead as something cold happening to something hot - the fall, as in a generic fall in temperature that suddenly allows a ratchet and pawl to quit hopping about and start turning mechanically in a single deterministic direction.
That explains particle decay. In a hot environment, the particle isn't even stable. It is already melted. But if the environment is cooled, a particle can form. It will lock up a degree of internal instability that has some lingering propensity for overcoming the thermal decay barrier represented by a now cold world. By quantum uncertainty, that barrier will be spontaneously crossed because the particle fluctuated into a higher energy state that wasn't forbidden to it.
So if we want laws of nature that are generic enough to capture the causality of a Big Bang cosmos, this is the direction our causal thinking has to head in.
Classical causality is about something hot happening to something cold. But we need to flip that model on its head. The deeper causal story is about something cold happening to something hot. It is the context that tells the story, not the event. As the temperature drops generically, then localised heat can start to become the new big thing. But only after the temperature has dropped generically.
Edit: corrected mistake.
I didn't quite complete this thought. As I am trying to make plain, my own interest here would be in the question of cosmic creation - the causality of the Big Bang as an example of spontaneous symmetry breaking. So the difficulty becomes getting beyond a story - like tunnelling - which already presumes a causally broken situation. We have to get at the bit just beyond, before there is either the trapped propensity or the barrier that is trapping it.
Tunnelling is good for explaining why there might be delays in events happening - like particle decays. And the decays - being statistically random - seem good evidence that we are glimpsing a quantumly indeterministic realm beyond. With quantum tunnelling, we can see flashes of the fundamental uncertainty breaking through.
However, the primal story would seem to have to go beyond a trapped propensity and the threshold holding it back for "a time".
I would instead say that to arrive at the classical situation, you would have to keep adding classical constraints. My argument is that indeterminism can never actually be eliminated. It can only be contextually regulated.
So we arrive at classicality as a terminus - the result of adding enough complex restrictions to produce an apparent causal simplicity. We have to remove the jitter, the friction, the heat - all that messy thermodynamic stuff - to arrive at one round object perched motionlessly on top of another round object with now no other object in sight to disturb that ridiculously unstable situation.
Maximum instability is presented as absolute stability. And then somehow this is the causal model of the world that most people want to defend.
I understand that you intended to raise issues for causality that are more general than those that arise from the peculiar features of Norton's dome. But I also think the specific issues raised by Norton with respect to this peculiar case are relevant to some features of diachronic/synchronic emergence, the arrow of time, and the metaphysics of causation. Those features intersect with the broader questions you are interested in. Maybe I'll come to discussing some of them in due course. Meanwhile, I apologize for the temporary side-tracking.
I'll comment later since I'm taking a pause to read Norton's paper.
There is indeed an analogy to be made with Zeno's dichotomy paradox. When classical mechanics is being portrayed as a picture of the way the world is, in itself, at a fundamental material level, this picture is usually accompanied by a Humean conception of event-event causation (displacing the traditional Aristotelian picture of powerful substance-causation). Furthermore, 'events' are being identified with the 'states' of systems at an instantaneous moment in time. (The state of a system consists in the specification of the positions, momenta and angular momenta of all the particles and rigid masses comprising it.) So, on that view, an (event-)cause and its (event-)effect are conceived as two instantaneous states of a system such that the latter can be derived from the former in accordance with the dynamical laws of evolution of the system.
So, on that view, the cause of the state of motion (and position) of the ball at a moment in time can be identified with its state of motion at an earlier time. In the case of Norton's dome, if the ball has begun moving exactly at time Ti = 0, and is moving at a determinate positive speed at time T > Ti, then it was already moving at a determinate (and smaller) positive speed at time T2 = T/2. Its state of motion at that earlier time can thus be viewed as the cause of its state of motion at T. And likewise for its state of motion at time T3 = T/4, which can be viewed as the cause of its state of motion at T2. As long as the ball is in motion, there is an earlier cause (indeed, infinitely many causes) of its current state of motion. But those ordered causal chains don't extend in the past beyond Ti = 0. They don't even reach this initial time. So, there is no initial cause of this temporally bounded infinite sequence of events, even though all the events occurring after Ti have a sufficient cause.
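This regress can be made vivid with the solution's explicit speeds: with r(t) = t⁴/144 (onset at Ti = 0), the speed is v(t) = t³/36, so stepping back from any moving state at t to the state at t/2 yields another moving state with one eighth of the speed, and the chain of causes never reaches Ti itself. A small sketch:

```python
# Sketch of the regress described above. With r(t) = t**4 / 144
# (Norton's spontaneous solution, onset at Ti = 0), the speed is
# v(t) = t**3 / 36. Halving the time always yields an earlier, still
# moving state (a "cause"), and the chain never reaches Ti = 0.
def v(t):
    return t ** 3 / 36.0

t = 1.0
chain = []
for _ in range(8):
    chain.append((t, v(t)))
    t /= 2.0  # step back to an earlier cause

for (t1, v1), (t2, v2) in zip(chain, chain[1:]):
    assert v2 > 0.0                    # the earlier state is still moving
    assert abs(v1 / v2 - 8.0) < 1e-12  # halving t divides the speed by 8
```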
I.e., even if we specified a starting time for the ball rolling, that's still an incomplete description - we need a start time and a direction.
The differential equation that constrains the equation of motion, and, in this case, that has been set up to ensure that Newton's second law is obeyed at all times, admits of a multiplicity of solutions. So, it's true that leaving out the direction of the motion that is beginning at the initial time T, such that this initial time is the only one (or the last one) when the particle is at rest, underspecifies the equation of motion. But it doesn't underspecify the "state" of the system at the initial time. Newton's laws of motion are supposed to govern the evolution of material systems on the basis of specifications merely of their "states" at a time, where those states are being fully characterized by the positions and momenta of the material constituents of the system. (The higher order time derivatives of the momenta are irrelevant to the determination of the "state" of a mechanical system, as far as Newton's laws are concerned). So, the fact that the initial state, in conjunction with specification of the forces, and the laws, underspecifies the equation of motion (and hence, also, the future direction of motion), precisely is what makes this system indeterministic (as constrained only by Newton's laws).
I am in broad agreement with this. I've finished reading Norton's paper, now. It's very good even though the whole discussion presupposes a broadly Humean conception of causation, and of the laws of nature, that is inimical to me. Nevertheless, if this presupposition is granted (as it can be for the sake of the discussion of the structure of idealized physical theories), Norton offers very good replies to the main attempt by critics to 'specially plead' against the conclusion that his dome provided an example of indeterminism within the strict framework of Newtonian mechanics.
One thing that struck me, though, is that Norton seems to be making an unnecessary concession to his critics while discussing one specific feature of the ideality of his thought experiment. What he is conceding is that the indeterminism that arises from the state where the ball is initially at rest at the apex of the dome only arises at the limit where the peculiar mathematical shape of the dome is perfectly realized on an infinitesimal scale, and hence can't be realized in practice owing to the granular structure of real matter.
It rather seems to me that this indeterminism is an emergent feature that is already manifest under imperfect realizations of the dome. Whether or not it is manifested depends on how the ideal limit is being approached. One way to approach it, which seems to be the only way that Norton and his critics consider, is to assume that the ball is being located, at rest, precisely at the apex of the dome, and to realize the shape of the dome ever more precisely in the neighborhood of the apex. Only when the curvature at the apex blows up, will the ball's "excitation" (as Norton calls the spontaneous beginning of the motion from a state of rest) become physically possible.
But there is another way to approach (or approximate) the peculiar indeterministic nature of the dome, and to probe the corresponding bifurcation in phase space that characterizes it. We can stick with a merely approximate realization of the shape of the dome, where the curvature remains finite within a neighborhood of radius R from the apex, and the ball is being initially located (or sent sliding up) in the vicinity of the apex with some error distribution of commensurate size. We can compare, side by side, two experiments where the infinitesimal limit is being approached, one using a hemispherical dome, say, and the other one using Norton's dome. In the first case, under successive iterations of the experiment where the ball is placed (or sent) with an ever narrowing error spread towards the apex, and where the apex is materially shaped ever more closely to an ideal hemispherical shape, the time being spent by the ball in the neighborhood of the apex will tend towards infinity. In the case of Norton's dome, the time will tend towards zero (while the time required to move a fixed distance D away from the apex will remain roughly the same). As we move towards the ideal limit (with an ever smaller error spread, and an ever larger curvature within the narrowing neighborhood), the ball will not only become more sensitive to microscopic disturbances (which it will be both in the hemisphere and in the dome cases) but the cumulative effect of those triggering disturbances, as well as the small errors in initially setting up the ball at the apex, will be continuously amplified from the microscopic realm to the macroscopic realm (in a fixed time) in such a way as to make manifest the bifurcation in phase space as a truly emergent macroscopic phenomenon lacking a counterpart in the microphysical realm.
If things can converge, then they can diverge. In one direction, the ultraviolet catastrophe. In the other, its matching infrared catastrophe.
So in terms of my metaphysical interests here, the dichotomous nature of any ideal limit is not a surprise. It would be a prediction. If you have fluctuations, as you do in quantum physics, then you are always going to be stuck between the two perils of everything adding up to infinity, or everything cancelling to zero.
Now those two perils are mathematically nicely-behaved but also observationally non-physical. The Universe actually exists in a way that suggests a finite cut-off before we can arrive at either of the two ideal limits to processes of convergence or divergence.
So that was something implicit in the OP. We need to explain finitude. There has to be an emergent scale of fluctuations that becomes too small to make a difference. Or indeed, too big to make a difference.
And here is where I would call on the holism and semiosis of hierarchy theory. In hierarchy theory, small scale fluctuations eventually become just a solid blur - from a middle ground perspective of them. And likewise, large scale fluctuations eventually become so large in spatiotemporal terms that they completely fill the available field of view. Change can no longer be seen, as it is change that stretches wider than the visible world itself.
This is the usual contrast between black holes and de Sitter spaces. Looking in one direction, fluctuations tend to a Planck scale quantum blur. Looking in the other, we encounter the large scale event horizon cut-off imposed by the speed of light.
So yes. There is always a dichotomy in play if there is any action at all. If there is a convergent limit, there is a divergent one to match it. And then tracking the physics of such limits with fluctuations also makes sense. But that then is nudging you towards this kind of hierarchical semiotics, this triadic story of being inside limits because of some kind of finitude-constructing mechanism, some kind of cut-off creating effect.
Again, the mathematical imagination is quick to believe that the infinite and the infinitesimal are in some sense achievable. But I'm thinking no. Finitude must arise somehow in the actually physical universe. And we don't have a lot of good tools for modelling that.
My OP illustrated one form of such a cut-off - the principle of indifference. If instead of having to count every tiniest, most infinitesimal, fluctuation or contribution, we simply arrive at the generic point of not being able to suppress such contributions, then this is just such an internalist mechanism. The crucial property is not a sensitivity to the infinitesimal, but simply a loss of an ability to care about everything smaller in any particular sense. There is smaller shit happening just as there is also bigger shit happening in the other direction. It just isn't visible from our middle ground position due to a lack of the means to record that information. The holographic universe story in a nutshell.
This may not be right. What I should have said (in the case of the hemispheric dome) is that the acceleration in the vicinity of the apex will be such that the total time from the moment when the ball will exit the shrinking neighborhood and travel to a predefined distance D from the apex will tend towards infinity.
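One can probe this numerically with a crude sketch. The setup here is my own toy version, not taken from the paper: near the apex, Norton's dome gives r'' = √r in suitable units, while a smooth dome such as the hemisphere linearizes to r'' = r; the integrator and parameters are purely illustrative.

```python
import math

def escape_time(accel, r0, D=1.0, dt=1e-3, t_max=100.0):
    """Time for a ball released at rest at offset r0 to reach distance D,
    integrating r'' = accel(r) with a simple semi-implicit Euler step."""
    r, v, t = r0, 0.0, 0.0
    while r < D and t < t_max:
        v += accel(r) * dt
        r += v * dt
        t += dt
    return t

# As r0 shrinks, the escape time stays bounded on Norton's dome
# (r'' = sqrt(r)) but grows without bound on the smooth dome (r'' = r):
for r0 in (1e-2, 1e-4, 1e-6, 1e-8):
    t_norton = escape_time(lambda r: math.sqrt(r), r0)
    t_smooth = escape_time(lambda r: r, r0)
    print(f"r0 = {r0:.0e}:  Norton {t_norton:5.2f}   smooth {t_smooth:5.2f}")
```

On the smooth dome the escape time grows roughly like log(1/r0), so it diverges as the initial offset vanishes, while on Norton's dome it converges to a finite value (about 2√3 in these units): exactly the contrast between the two limiting behaviours described above.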
This particular conclusion is convergent with my own. It seems interesting, to me, that the shape of Norton's dome creates a specific condition of instability such that the ability of the ball to move away from the equilibrium point, and further slide under the impetus of the tangential component of the gravitational force to a finite distance D from the apex in a finite time T, is insensitive to the magnitude of an initial perturbation from equilibrium. This condition of instability is somewhat independent of the condition under which the initial perturbation is enabled to arise (from thermal molecular agitation, or Heisenberg's uncertainty principle being applied to the initial state of the ball, or whatever).
Anything could be a cause for the pencil tipping over (generic cause) BUT the actual event is caused by a breath of air or the table shaking (specific cause).
The world is full of causal effectors (fall). No one can deny that. However, to say that because this is true one can't ascribe a specific cause (push) would be foolish. Don't you think?
So toppling pencils and rolling balls just serve as illustrations of the principles. And I’m arguing that while logic says there will always be some triggering cause, it also doesn’t make much sense to attribute anything much to that particular event - single it out as something uniquely significant and useful to know. The real cause of the change is the fact that triggering events couldn’t have been avoided. That generic fact of nature is what would be useful to know about and understand fully.
Specifically it's that no force (0 vector) is applied as an initial condition while the ball is at the apex that leaves room for the indeterminism.
Well, the fact that there is no force while the ball is initially at rest on the apex of Norton's dome enables it to remain stationary during some arbitrary length of time T. This corresponds to one possible trajectory in phase space, among many. But that would also be true of a ball resting on the apex of a sphere, or paraboloid. In those cases, though, the evolution would be deterministic since there would be no possibility for the ball ever to move off center any finite distance in a finite amount of time. That's not so in the case of Norton's dome. The ball can "fall off" (start moving away from the apex) at any time consistently with Newton's second law being obeyed at all times.
See the point. Perhaps I'm too poorly attuned to physics to see much of a distinction between a time symmetry and a radial one.
I don't understand this comment. The dynamics, in this case, is indeterministic (branching out at the point in phase space representing the particle at rest at the apex) but it is also time symmetrical. The same branching out occurs in phase space towards the past.
Doesn't really matter what point I'm making for the purposes of the discussion, seeing as it's moved on. All I'm saying is that mapping t->t-T is a symmetry of the laws of motion here, but so is rotating the force vector; any path that the ball could take down the object would follow Newton's laws at all times, even though the arbitrary start time and arbitrary falling direction are unspecified. The major difference between the two in my reading is that the problem is 'set up' to be radially symmetric and so we're primed to think of the problem as of a single dimension (the radial parameter), but the time symmetry falls out of the equations and is surprising.
That's a cop-out. You're just saying that if we don't have the capacity to determine the particular causes involved, we can just say it's a generic cause, "the environment did it". But that's an untruth, because it implies that the particular aspects which are the true causes, are acting together as a unified whole, called "the environment", when the claim of a concerted effort is unjustified. So your claim that "the environment" is an acting agent, is nonsense without some principles whereby "the environment" can be conceived as an acting, unified whole.
It's fine to pick up again a sub-thread when something has been overlooked.
What is surprising? The indeterminism is surprising, but the time symmetry is expected since the laws of motion are time-symmetrical.
I think we meant different things by indeterminism. In the paper's sense of 'a single past can be followed by many futures', the translational time symmetry of the non-zero solution is what facilitates that conclusion. If the ball decides to fall in a given direction, its behaviour is determined at every point on that path by the equations of motion (after redefining t-T=0).
Nevertheless, this collapses the issue to a one-spatial-dimension, one-time-dimension problem: we have a radial direction and the time parameter, and the equations of motion are defined in terms of a radius which is a function of time. So it's pretty clear that the dynamics is radially symmetric.
What I was missing is that when we collapse down to a vertical cross section of the dome, then remove half of it (going from something that looks like /\ to something that looks like /), the one dimensional version of the problem exhibits the time symmetry.
What the radial symmetry highlights is 'the same future can be held by many pasts', where a future includes 'choosing' a direction to fall in, the time symmetry (specifically talking about the mapping t->t-T which shows up in the solution) highlights 'the same past can be held by many futures'. I was mixing up the two in my head.
Apokrisis was talking about a generic force rather than a generic cause, or generic agent. I think it makes sense to speak of a general background condition that isn't happily conceived of as a cause of the events that it enables to occur (randomly, at some frequency). Causes ought to be explanatory. So, there may be events that are purely accidental and, hence, don't have a cause at all although they may be expected to arise with some definite probabilistic frequencies. Radioactive decay may be such an example. Consider also Aristotle's discussion of two friends accidentally meeting at a well. Even though each friend was caused to get there at that time (because she wanted to get water at that time, say), there need not be any cause for them to have both been there at the same time. Their meeting is a purely uncaused accident, although some background condition, such as there being only one well in the neighborhood, may have made it more likely.
Yes, because the path of the system in phase space only branches out at the point representing the particle being at rest at the apex. When the particle has already acquired some momentum, some distance away from the apex, then its trajectory is fully determined in both time directions up to the point where it gets to (or came from) the bifurcation point (that is, to the apex, at rest).
By 'determinism', as predicated of a material system and its laws, I only mean that this system's state at a time (and the laws) uniquely determines its state at any other time. That a single past (either a single past instantaneous state, or a single past historical trajectory in phase space) leads to a unique future is a corollary.
The issue here is spontaneous symmetry breaking. So you’ve got to start with some plausible state of symmetry.
I think my criticism still holds if this is what was meant. If the particular causes cannot be identified, it is a cop-out to claim it's a "general background condition". All you are saying is that you cannot isolate the particular causes, but you know that there was something or some things within the general conditions which acted as cause. And if you make the "general background condition" into a unified entity, which acts as a cause, you have the other problem I referred to.
Quoting Pierre-Normand
There is a clear problem with this example, and this is the result of expecting that an event has only one cause. When we allow that events have multiple causes, then each of the two friends has reasons (causes) to be where they are, and these are the causes of their chance meeting. So the event, the chance meeting, is caused, but it has multiple causes which must all come together. When we look for "the cause", in the sense of a single cause, for an event which was caused by multiple factors, we may well conclude that the event has no cause, because there is no such thing as "the cause" of the event, there is a multitude of necessary factors, causes.
It is not that they can't be identified. It is that the identification would miss the causal point.
It is the inability to suppress fluctuations in general, rather than the occurrence of some fluctuation in particular, which is the contentful fact.
It's true, in a sense, that 'events' have multiple causes. Recent work on the contrastive characters of causation and of explanation highlights this. But what it highlights, and what Aristotelian considerations also highlight, is somewhat obscured by the tendency in modern philosophy to individuate 'events' (and hence, also, effects) in an atomic manner as if they were local occurrences in space and in time that are what they are independently from their causes, or from the character of the agents, and of their powers, that cause them to occur. This modern tendency is encouraged by broadly Humean considerations on causation, and the metaphysical realism of modern reductionist science, and of scientific materialism.
If we don't endorse metaphysical realism, then we must acknowledge that the event consisting in the two acquaintances meeting at the well can't be identified merely with 'what happens there and then' quite apart from our interest in the non-accidental features of this event that we have specifically picked up as a topic of inquiry. Hence, the event consisting in the two individuals' meeting can't be exhaustively decomposed into two separate component events each one consisting in the arrival of one individual at the well at that specific time. The obvious trouble with this attempted decomposition is that a complete causal explanation of each one of the 'component events' might do nothing to explain the non-accidental nature of the meeting, in the case where this meeting indeed wouldn't be accidental. In the case where it is, then, one might acknowledge, following Aristotle, that the 'event' is purely an accident and doesn't have a cause under that description (that is, viewed as a meeting).
Well, either it's a chance encounter or it's a non-accidental meeting. Only in the latter case might a cause be found that is constitutive of the event being a meeting (maybe willed by a third individual, or probabilistically caused by non-accidental features of the surrounding topography, etc.)
Agreed. Those separate causes, though, may explain separately the different features of the so called 'event' without amounting to an explanation why the whole 'event', as such, came together, and hence fail to constitute a cause for it (let alone the cause).
How could identifying the causes miss the causal point?
Quoting Pierre-Normand
I believe that an "event" is completely artificial, in the sense that "an event" only exists according to how it is individuated by the mind which individuates it. So the problem you refer to here is a function of this artificiality of any referred to event. It is a matter of removing something from its context, as if it could be an individual thing without being part of a larger whole.
Quoting Pierre-Normand
So here's the issue. If we are allowed to individuate events, remove them from their context, then we may look at events as distinct, independent of their proper time and space. By doing this, we can say that any two events are accidental, or the common term, coincidental. So the two friends meet by coincidence, but any two events when individuated and removed from context may be seen as coincidental, despite the fact that we might see them as related in a bigger context. And when we see things this way we have to ask: are any events really accidental or coincidental? It might just be a function of how they are individuated and removed from context that makes them appear this way.
You missed the point. Read what I wrote and reply to what I wrote.
The framework is Aristotelian. Material/efficient causes are being opposed to formal/final causes. Don't pretend otherwise.
I agree with this. I am indeed stressing the fact that the event doesn't exist -- or can't be thought of, or referred to, as the sort of event that it is -- apart from its relational properties. And paramount among those constitutive relational properties are some of the intentional features of the 'mind' who is individuating the event, in accordance with her practical and/or theoretical interests, and embodied capacities.
Yes, indeed. The context may be lost owing to a tendency to attempt reducing it to a description of its material constituent processes that abstracts away from the relevant relational and functional properties of the event (including constitutive relations to the interests and powers of the inquirer). But the very same reductionist tendency can lead one to assume that whenever a 'composite' event appears to be a mere accident there ought to be an underlying cause of its occurrence expressible in terms of the sufficient causal conditions of the constituent material processes purportedly making up this 'event'. Such causes may be wholly irrelevant to the explanation of the occurrence of the composite 'event', suitably described as the purported "meeting" of two human beings at a well, for instance.
Does this argument work for crime where responsibility for actions is a cornerstone?
Quoting apokrisis
To tell you the truth, I saw your statement as irrelevant, and bordering on meaningless gibberish.
Quoting apokrisis
We're discussing physical causes in inanimate objects. The "inability to suppress fluctuations" is a given, a background condition. Newton's law of inertia is not stated as: a body which has the "ability to suppress fluctuations" will remain in an inertial state. It is assumed that the inanimate body has the "inability to suppress fluctuations". There is no such thing in physics as the capacity to resist potential causes (ability to suppress fluctuations); that would be an overriding supernatural power which would turn physics into nonsense.
Clearly, it is the existence of particular fluctuations which are of interest to physicists, not an inability to suppress fluctuations in general, which implies the existence of a supernatural capacity to suppress particular fluctuations in the first place.
However, because I am expressing a general constraints-based view of causality, you could say that responsibility is about limiting antisocial behaviour to some point where a community becomes indifferent to what you are doing.
If you wear your socks inside out, that doesn’t really matter, regardless of whether the act is accidental or deliberate. But if you bump into someone in the street and hurt them bad, then the difference would tend to matter.
My OP wasn’t ruling out the idea of deliberate action. It was focused on the causation of accidents in unstable situations.
Cheeky bastard.
This was your original point. And note how it relies on a notion of "net force". So the Newtonian view already incorporates the kind of holism I'm talking about.
A system in balance by definition has merely zeroed the effective forces being imposed on it by its total environment. The Newtonian formalism doesn't specify an absence of imposed forces. It just says that any fluctuations present are balanced to a degree that no particular acceleration is making a difference.
So the point about a ball balanced on a dome is you can see this is an unstable situation. There is a strong accelerative force acting on the ball - gravity. The situation is very tippable. The slightest fluctuation will lead to a runaway change. Down the ball will roll.
It now matters rather a lot whether the "net force" describes a literal absence of any further environmentally imposed force, or whether it represents a state where the fluctuations are coming from all directions and somehow - pretty magically - cancel themselves out to zero ... until the end of time.
So that was my point. The conventional view of causality likes to treat reality as a void. Nominalism rules. All actions are brutely particular. But conversely, reality can be seen as a warm bath of fluctuations. And now the kind of simple causality we associate with an orderly world has to be an emergent effect. It arises to the degree that fluctuations are mostly suppressed or ignored. It relies on a system having gone to the stability of a thermodynamic equilibrium.
But if fluctuations are only being suppressed to give us our "fluctuation-free" picture of causality, then that means they still remain. That then becomes a useful physical fact to know. It becomes a way of modelling the physics of instabilities or bifurcations.
And metaphysically, it says instability is fundamental to nature, stability is emergent at best. And that flips any fundamental question. Instead of focusing on what could cause a change, deep explanations would want to focus on what could prevent a change. Change is what happens until constraints arise to prevent it.
So this is how holism becomes opposed to reductionism. It is the different way of thinking that moves us from a metaphysics of existence to a metaphysics of process.
It is when we allow for the ideas, thoughts, and intentions of the individuals, that the causal waters are muddied. It is only because the two people know each other, and recognize each other, that their chance meeting at the well is a significant event. Otherwise it would be two random people meeting at the well, and an insignificant event.
This is what also happens when Aristotle explains the causal significance of chance and fortune. Chance is a cause, in relation to fortune, so when the chance meeting is of a person who owes the other money, and the debt is collected, the chance occurrence causes good fortune. Likewise you can see that in a lottery draw a chance occurrence is the cause of good fortune. And some chance occurrences such as accidents, cause bad fortune. But "fortune" only exists in relation to the well-being of an individual. So it's only within this framework of determining things as good or bad, that we say chance is a cause. The event is "caused" by whatever things naturally lead to it, but that it caused fortune is in relation to the well-being of individuals.
You seem to be headed off on some tangent and I have no idea what you're trying to say. The uncertainty principle is a reflection of the Fourier transform which describes uncertainty in the relationship between time and wave frequency.
Quoting apokrisis
This is a false conclusion though. The ball has to get there, to its position on the top of the dome. This requires stability. Stability is your stated premise, the ball is there, it has the capacity, by its inertial mass, to suppress fluctuations, therefore it has a position. You cannot turn around now and say that the ball rolls because of its incapacity to suppress fluctuations, and therefore instability is fundamental, without accepting contradictory premises: stability is fundamental and instability is fundamental.
Yep. So one is the view I would be arguing for, the other would be the one I would be arguing against. Stability was not my stated premise. It was the premise which I challenged.
I think you have a point.
Consider an event A. In terms of necessary causes we can field events x and y. These events (x and y) set the stage for A. Now comes along event z which is the sufficient cause and results in actualization of event A.
So, if I understand you, all events (x, y and z) are important. The instability due to x and y is causally important in our equation.
Also, there could be another event w causally equivalent to z that could've taken advantage of the unstable situation resulting from the conjunction of necessary causes x and y.
In such cases we could say that the event A is inevitable.
Certain configurations of as few as 5 gravitating, non-colliding point particles can lead to one particle accelerating without bound, acquiring an infinite speed in finite time. The time-reverse of this scenario implies that a particle can just appear out of nowhere, its appearance not entailed by a preceding state of the world, thus violating determinism.
A number of such determinism-violating scenarios for Newtonian particles have been discovered, though most of them involve infinite speeds, infinitely deep gravitational wells of point masses, contrived force fields, and other physically contentious premises.
Norton's scenario is interesting in that it presents an intuitively plausible setup that does not involve the sort of singularities, infinities or supertasks that would be relatively easy to dismiss as unphysical. has already homed in on one suspect feature of the setup, which is the non-smooth, non-analytic shape of the surface and the displacement path of the ball. Alexandre Korolev in Indeterminism, Asymptotic Reasoning, and Time Irreversibility in Classical Physics (2006) identifies a weaker geometric constraint than that of smoothness or analyticity, which is Lipschitz continuity:
A function [math]f[/math] is called Lipschitz continuous if there exists a positive real constant [math]K[/math] such that, for all real [math]x_1[/math] and [math]x_2[/math], [math]|f(x_1) - f(x_2)| \le K |x_1 - x_2|[/math]. A Lipschitz-continuous function is continuous, but not necessarily smooth. Intuitively, the Lipschitz condition puts a finite limit on the function's rate of change.
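A quick sketch makes the failure concrete: the right-hand side of the dome's reduced equation of motion, f(r) = √r, violates the Lipschitz condition at r = 0, since the difference quotient there grows without bound.

```python
import math

def f(r):
    # RHS of the dome's reduced equation of motion: r'' = f(r) = sqrt(r)
    return math.sqrt(r)

# The quotient |f(x) - f(0)| / |x - 0| equals 1/sqrt(x), which is
# unbounded as x -> 0: no finite K can satisfy the Lipschitz condition
# on any interval that contains the apex.
for x in (1e-2, 1e-4, 1e-6, 1e-8):
    print(f"x = {x:.0e}:  quotient = {abs(f(x) - f(0.0)) / x:.1f}")
```

This is exactly the hypothesis whose failure lets the Picard-Lindelöf uniqueness theorem for ODEs go silent, opening the door to the branching solutions discussed below.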
Korolev shows that violations of Lipschitz continuity lead to branching solutions not only in the case of Norton's dome, but in other scenarios as well, and in the same spirit as Andrew above, he proposes that the Lipschitz condition should be considered a constitutive feature of classical mechanics in order to avoid, as he puts it, "physically impossible solutions that have no serious metaphysical import."
Ironically, as Samuel Fletcher notes in What Counts as a Newtonian System? The View from Norton’s Dome (2011), Korolev's own example of non-Lipschitz velocities in fluid dynamics is instrumental to productive research in turbulence modeling, "one whose practitioners would be loathe to abandon on account of Norton’s dome."
It seems to me that Earman oversells his point when he writes that "the fault modes of determinism in classical physics are so numerous and various as to make determinism in this context seem rather surprising." I like Fletcher's philosophical analysis, whose major point is that there is no uniquely correct formulation of classical mechanics, and that different formulations are "appropriate and useful for different purposes:"
Maybe I'm misunderstanding, but I am not persuaded by this. The limit of the horizontal forces as the beam stiffness approaches infinity is zero and, that is the value that a first principles analysis of the infinitely stiff case gives us too. The load pushes downwards with force W on the beam and the ends of the beam push down on the supports with weight W/2 at each end. There is nothing in the system supplying any horizontal forces, so the horizontal forces are zero, which is equal to the limit of the finitely stiff cases.
What the lack of a unique solution to the equations for the infinitely stiff case tells us is that, in addition to providing vertical support, the supports could also push inwards against the beam or pull outwards on it, with any force at all, and the system would still remain static. That is not surprising, given the beam is infinitely stiff and hence infinitely resistant to both tension and compression. But in the absence of the system containing any elements that supply such lateral forces, the lateral force must be zero.
So I don't think this does supply the example Norton wants of a system where the behaviour in the limit is not equal to the limit of the behaviours close to the limit - which is what happens with the dome.
Not relevant to the problem, but I would have thought that, if we allow the application of lateral forces by the side supports different to the resistance to the natural lateral pull from the weight of the beam, then even in the finite elasticity cases those lateral forces can range over an interval. The more strongly they pull (push) the beam ends outwards (inwards), the less (more) the beam will sag. So I'm not sure I can see any qualitative difference or discontinuity between the infinite and finite stiffness cases.
If that's right, then the only concrete example used to argue against the solution that suggests we should rule out the dome as an inadmissible idealisation because of the infinite curvature at the top, has failed. All that is left to argue against that solution is the second last paragraph on p21 that begins with 'It does not.' But I found that para rather a vague word salad and didn't feel that it contained any strong points. Indeed I'm not sure I understood what point he was trying to make in it. Perhaps somebody could help me with that.
I'm interested in the thoughts of others: whether they agree with Norton's or my analysis of the beam example, and whether the argument against ruling out inadmissible idealisations can stand without it.
In the case of the beam in Norton's paper, in order to definitely answer the question we need to find the state of stress inside the beam, which for normal bodies is fixed by boundary conditions. The problem with infinite stiffness is that constitutive equations are singular, and therefore, formally at least, the same boundary conditions are consistent with any number of stress distributions in the beam. This happens for the same reason that division by zero is undefined: any solution fits. In the simple uniaxial case, the strain is related to the stress by the equation [math]\epsilon = \frac{1}{E}\sigma[/math]. When E is infinite and 1/E is zero, the strain is zero and any stress is a valid solution to the equation. Boundary conditions in an extended rigid body will fix (some of the) stresses at the boundary, but not anywhere else. Or so it would seem.
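The singularity can be exhibited numerically. In the toy sketch below (my own illustration, arbitrary units): for any fixed stress whatsoever, the strain given by Hooke's law vanishes as E grows, so the rigid-body condition "strain = 0" is consistent with every stress value, just as every x satisfies 0·x = 0.

```python
# Uniaxial Hooke's law: strain = stress / E.
def strain(stress, E):
    """Strain produced by a given stress in a material of modulus E."""
    return stress / E

# For ANY fixed stress, the strain shrinks to zero as E grows: in the
# rigid limit the observable quantity (the strain) is zero no matter
# what the stress is, so "zero strain" pins down nothing.
for stress in (0.0, 1.0, 1e6):
    print([strain(stress, 10.0 ** n) for n in (3, 9, 15)])
```

Each printed row shrinks towards zero regardless of the stress chosen, which is just the constitutive-equation analogue of "any solution fits."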
However, what meaning do stresses have in an infinitely stiff body? Because there can be no displacements, there is no action. Stresses are meaningless. Suppose that instead of perfectly rigid and stationary supports, the rigid beam in Norton's paper was suspended between elastic walls. Whereas in the original problem the entire system, including the supports, was infinitely stiff and admitted no displacements, now elastic walls would experience the action of the forces exerted by the ends of the beam. Displacements would occur, energy would be expended, work would be done. The problem becomes physical, and physics requires energy conservation, which immediately yields the solution to the problem.
So I wouldn't worry too much about these singular limits; just as with division by zero, no solution makes more sense than any other - they are all meaningless.
You mean, of course, the division of zero by zero. (Very good recent exchange between you and andrewk, by the way! I'll comment shortly.)
I was quite startled when I read the case of the beam, described by Norton as a "statically indeterminate structure", because just a few days earlier I had been struck by a similar real-world case. I was startled yet again today when I saw you two insightfully discussing it.
A friend of mine went to Cuba with his wife and they asked me to feed their two cats while they were away. They live on my home street, a mere five minutes' walk away. As I was walking there, I noticed the street sign, with the name of our street on it, rotated 45° from its normal horizontal position. The sign is screwed inside an iron frame, and the frame was secured to the wooden pole by a screw in its middle position, while another screw, at the top, had come loose and thus enabled the sign to rotate around its (slightly off-centre) horizontal axis. (I guess I'll go back there during the day and take a picture). A few weeks ago there had been a storm in our area, with abnormally heavy winds, which knocked the power out for several hours. Those heavy winds may have been the cause.
But then, as I wondered what kind of force might have caused the top screw to come loose, I also wondered how any torque could have been applied to the middle screw before the top screw gave way (or, at least, began to loosen). I assumed the middle screw was not centred on the horizontal axis of the sign, or else there would have been no torque from the wind. But then I reasoned (while simultaneously realizing that it made no sense!) that, on the one hand, there couldn't be any horizontal force on the top screw without there being a torque on the middle screw, but also, on the other hand, that there could not be any torque on the middle screw without an initial horizontal displacement of at least one of the two screws! So, on the condition that the whole system is perfectly rigid and treated as a problem of pure statics, there appears to be no possibility of either a torque being applied on the middle screw, or an equal and opposite lever reaction force being applied on the top screw, without one of those two forces being able to produce (dynamically, not merely statically) an initial action or reaction, just as SophistiCat described regarding Norton's horizontal beam case. And such an initial dynamical action is only possible on the condition that the system not be ideally rigid. And then, of course, as SophistiCat astutely concluded (and I hadn't concluded at the time), the ideal case might be indeterminate because of the different ways in which the limiting case of a perfectly rigid body could be approached. (I had later arrived at a similar conclusion regarding Norton's dome, while Norton himself, apparently, didn't. But I'll comment on this shortly.)
I had also highlighted this paragraph from Norton's paper in orange, which is the colour I use to single out arguments that appear incorrect to me. In the margin, I had written "idem", referring back to my comment about the previous paragraph. In the margin of the previous paragraph, I had commented: "That may be because the limit wasn't approached correctly. A more revealing way to approach the limit would be to hold constant the shape of the dome (with infinite curvature at the apex, or close to it) and send balls sliding up with a small error spread around the apex. The limit would be taken as the error spread is reduced."
Alternatively, we can proceed in the way @SophistiCat discussed, and allow the dome to have some elasticity. There will be an area near the apex where the mass is allowed to sink in and remain stuck. When we approach the limit of perfect rigidity, the sensitivity to the initial placement of the mass in the vicinity of the apex increases without limit and we get to the bifurcation point in the phase-space representation of this ideal system. The evolution is indeterminate because, by going to the ideal limit, we have hidden some feature of the dynamics and turned the problem into a problem of pure statics (comparable to the illegitimate idealization of the beam pseudo-problem, which SophistiCat aptly analogized to the fallacious attempt to determine the true value of 0/0 with no concern for the specific way in which this ideal limit is approached).
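For what it's worth, the dome's extreme sensitivity near the apex can be exhibited numerically. Norton's equation of motion for the radial coordinate is, in suitable units, d²r/dt² = √r, and both r(t) = 0 and r(t) = t⁴/144 satisfy it with r(0) = r'(0) = 0. A crude sketch (my own, using a simple semi-implicit Euler integrator): started exactly at the apex the mass sits there forever, while a perturbation of one part in a trillion sends it sliding far down the dome.

```python
import math

def simulate(r0, dt=1e-3, t_end=10.0):
    """Integrate d^2r/dt^2 = sqrt(r) from rest at r = r0 (semi-implicit Euler)."""
    r, v = r0, 0.0
    for _ in range(int(t_end / dt)):
        v += math.sqrt(max(r, 0.0)) * dt
        r += v * dt
    return r

# Exactly at the apex: the acceleration is zero and stays zero.
print(simulate(0.0))      # 0.0 - the mass never moves

# Perturbed by one part in a trillion: the mass slides far from the apex.
print(simulate(1e-12))    # far from the apex

# Sanity check: r(t) = t**4 / 144 satisfies r'' = sqrt(r) analytically.
t = 3.7
assert abs(t ** 2 / 12 - math.sqrt(t ** 4 / 144)) < 1e-12
```

The point of the sketch is only that the apex is a genuine bifurcation point: which trajectory the integrator picks out depends entirely on how (and whether) the exact symmetric initial condition is perturbed.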
Maybe I should clarify my reasoning a bit here. Since the frame of the sign is held by two screws, the force of the wind on the sign will be opposed by an equal reaction force distributed over the two screws. (I am ignoring the torque around the vertical axis, which is not relevant here). But my main point is that the force applied on either screw doesn't result in any torque (around the horizontal axis) being applied on the other screw unless the frame, working as a lever, is allowed to rotate a little bit. And this can't happen before one of the screws, at least, begins to loosen.
Thanks, but I cannot take credit for what I didn't actually say :) The rigid beam case is indeterminate in the sense that multiple solutions are consistent with the given conditions. I did think that there may be a way to approach a different solution (with non-zero lateral forces) by an alternative path of idealization, perhaps by varying something other than elasticity. My intuition was primed by the 0/0 analogy, in which a parallel strategy of approaching the limit from some unproblematic starting point would clearly be fallacious. But I couldn't think of anything at the moment, so I didn't mention it.
I think I have such an example now though.
Suppose that a pair of lateral forces of equal magnitude but opposite directions were applied to the beam. This would make no difference to the original rigid beam/wall system: the forces would balance each other, thus ensuring equilibrium, and everything else would be the same, since the forces wouldn't produce any strains in the beam. The forces would thus be merely imaginary, since they wouldn't make any physical difference. However, if we were to approach the rigid limit by starting from a finite elasticity coefficient and then taking it to infinity, there would be a finite lateral force acting on the walls all the way to the limit.
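Numerically, the two idealization paths really do disagree in the limit. Here is my own toy sketch (arbitrary units): along the path with no applied lateral pair, the lateral force is identically zero for every finite stiffness and hence in the limit; along the path where a fixed pair F is carried while the stiffness grows, the lateral strain vanishes in the limit (the beam "looks" rigid), yet the force on the walls stays at F the whole way.

```python
def lateral_state(F, E, A=1.0):
    """Lateral force on the walls and lateral strain in the beam,
    given an applied equal-and-opposite pair F, Young's modulus E,
    and cross-sectional area A (uniaxial Hooke's law)."""
    stress = F / A
    strain = stress / E
    return F, strain

stiffnesses = [10.0 ** n for n in (3, 6, 9, 12)]

# Path 1: no applied pair. The lateral force is 0 along the whole path.
print([lateral_state(0.0, E)[0] for E in stiffnesses])  # [0.0, 0.0, 0.0, 0.0]

# Path 2: a fixed pair F = 5. The strain -> 0 as E grows, but the
# force carried to the walls stays at 5 all the way to the limit.
print([lateral_state(5.0, E)[0] for E in stiffnesses])  # [5.0, 5.0, 5.0, 5.0]
print([lateral_state(5.0, E)[1] for E in stiffnesses])  # strains -> 0
```

Both paths converge to the same zero-strain rigid configuration, but to different lateral forces, which is just the 0·∞ indeterminacy made explicit.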
* A less technical work by the same author is Vital Instability: Life and Free Will in Physics and Physiology, 1860–1880, 2015 (PDF).
Van Strien writes that "nineteenth century determinism was primarily taken to be a presupposition of theories in physics." Boussinesq was something of an exception in that he took the nonunique solutions that he and others discovered seriously. (He acknowledged that his dome was not a realistic example, taking it more as a proof-of-concept; rather, he thought that actual indeterminism would be found in some hypothetical force fields produced by interactions between atoms, which he showed to be mathematically similar to the dome equations.) Boussinesq believed that such branching solutions of mechanical equations provided a way out of Laplacian determinism, giving the opportunity for life forces and individual free will to do their own thing.
But by and large, Van Strien says, such mathematical anomalies were not taken as indications of something real: in cases where solutions to equations of motion were nonunique, one just needed to pick the physical solution and discard the unphysical ones. This is probably why these earlier discoveries did not make much of an impression at the time and have since been partly forgotten, so that Norton's paper, when it came out, caused a bit of a scandal.
From my own modest experience, such attitudes towards mathematical models still prevail, at least in traditional scientific education and practice. It is not uncommon for a model or a mathematical technique to produce an unphysical artefact, such as multiple solutions where a single solution is expected, a negative quantity where only positive quantities make sense, forces in an empty space in addition to forces in a medium, etc. Scientists and engineers mostly treat their models pragmatically, as useful tools; they don't necessarily think of them as a perfect match to the structure of the Universe. It is only when a model is regarded as fundamental that some begin to take the math really seriously - all of it, not just the pragmatically relevant parts. So that if the model turns out to be mathematically indeterministic, even in an infinitesimal and empirically inaccessible portion of its domain, this is thought to have important metaphysical implications.
Interpretations of quantum mechanics are another example of such mathematical "fundamentalism". Proponents of the Everett "many worlds" interpretation, such as cosmologist Sean Carroll, say (approvingly) that all their preferred interpretation does is "take the math seriously." Indeed, the "worlds" of the MWI are a straightforward interpretation of the branching of the quantum wavefunction. (Full disclosure: I myself am sympathetic to the MWI, to the extent to which I can understand it.)
Are "fundamentalists" right? Can a really good theory give us warrant to take all of its implications seriously?