Explaining probabilities in quantum mechanics
The Copenhagen Interpretation of quantum mechanics postulates the Born rule which gives the probability that an observer will measure a quantum system in a particular state. For example, if a photon is sent through a beam splitter, the Born rule says that the photon will be detected on the reflection path half the time and on the transmission path the other half of the time.
The Born rule gives the correct probabilistic predictions. But what explains the Born rule? And why are there probabilities at all? According to the Copenhagen interpretation (which rejects causality) there is no explanation - the probabilities are just a brute fact of the universe. Hence the Born rule must be postulated under that interpretation.
However for causal interpretations (such as Everett and de Broglie-Bohm), the probabilities must have an underlying causal explanation. So, in the case of Everett, the Born rule should be derivable from unitary quantum mechanics, not merely postulated. I'm going to outline a derivation below - note that it draws from Carroll's and Sebens' derivation.
The first issue is why we should expect probabilities at all. With the beam splitter example above, the Everett interpretation says that a photon is detected on both paths (as described by the wave function). But an observer only reports detecting a photon on one path. Therefore, on the Everett interpretation, the probabilities describe the self-locating uncertainty of the observer (or the observing system), not the probabilities that the photon will be detected exclusively on one path or the other.
The second issue is why the probability is that given by the Born rule and not by some other rule.
The Born rule states that the probability is given by squaring the magnitude of the amplitude for a particular state. In the beam splitter example above, the amplitude for each relative state is √(1/2). So the probability for each state is 1/2.
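As a quick numerical sketch of what the rule computes (the helper name born_probabilities is just my own illustration):

```python
import numpy as np

def born_probabilities(amplitudes):
    """Born rule: probability = squared magnitude of each amplitude,
    normalised so the probabilities sum to 1."""
    amps = np.asarray(amplitudes, dtype=complex)
    weights = np.abs(amps) ** 2       # |a_i|^2
    return weights / weights.sum()    # harmless if the state is already normalised

# 50/50 beam splitter: both relative states have amplitude sqrt(1/2)
print(born_probabilities([np.sqrt(1/2), np.sqrt(1/2)]))   # -> [0.5 0.5]
```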
It appears that in these scenarios we should be indifferent about which relative state we will end up observing. So the simple rule here is to assign an equal probability to each state. If there are two states, then the probability of each is 1/2. If there are three states, then the probability of each is 1/3. And so on. This is often called branch-counting and it makes the correct predictions in cases where the amplitudes for each state are equal.
So far so good. But this approach doesn't work if the amplitudes are not equal. For example, suppose we have a beam splitter where the probability of reflection is 1/3 and the probability of transmission is 2/3.
In this scenario there are two states with amplitudes √(1/3) and √(2/3). If we are indifferent about which state we will end up observing, then we will wrongly assign a probability of 1/2 to each state. Branch counting doesn't work in this case.
So we have a simple indifference rule that only works in specific circumstances and seems inapplicable here. What can be done? One thing we can do is to transform the setup so that the rule does become applicable.
To do this, we can add a second beam splitter (with 1/2 probability of reflection and transmission) to the transmission path of the first beam splitter. When we send a photon through this setup, there are now three final states and they all have equal amplitudes of √(1/3) each (i.e., √(2/3) × √(1/2) = √(1/3)).
Now the indifference rule can be applied to get a probability of 1/3 for each final state. Since there are two final states on the first beam splitter's transmission path, their probabilities add to give a total probability for the first beam splitter's transmission path state of 2/3. This is the correct prediction as per the Born rule and it only assumes the indifference rule as required.
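Here is the same two-splitter arithmetic as a small sketch (illustrative only; the numbers are just those given above):

```python
import numpy as np

a_reflect  = np.sqrt(1/3)             # first splitter, reflection path
a_transmit = np.sqrt(2/3)             # first splitter, transmission path

# The 50/50 splitter on the transmission path splits it into two sub-branches:
branches = [a_reflect,
            a_transmit * np.sqrt(1/2),
            a_transmit * np.sqrt(1/2)]
print(np.round(branches, 6))          # all three equal sqrt(1/3) ~= 0.57735

# Indifference over the three equal-amplitude branches:
p_each = 1 / len(branches)            # 1/3 per branch
p_transmission = 2 * p_each           # two branches lie on the original transmission path
print(p_each, p_transmission)         # 0.333..., 0.666... - the Born-rule values
```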
This works for any scenario where states of unequal amplitudes are factorable into states of equal amplitudes.
Does anyone see any problems with this?
Comments (68)
So the problem for a causal interpretation (as Everett and de Broglie-Bohm are) is to explain how probability arises in a causal universe. In particular, if the wave function describes a state evolving into a superposition of two states, where one state has an amplitude of 1 and the other an amplitude of 2, why should the probabilities of observing them be in the ratio 1:4? That is the amplitude-squared rule that the Born rule expresses.
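As a quick check of that arithmetic (my own illustration): squaring and normalising amplitudes in the ratio 1:2 gives probabilities in the ratio 1:4.

```python
amps = [1, 2]                             # unnormalised amplitudes
weights = [a ** 2 for a in amps]          # [1, 4]
probs = [w / sum(weights) for w in weights]
print(probs)                              # [0.2, 0.8] -> ratio 1:4
```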
If that can be explained in a causal framework then it restores the idea that probability reflects a lack of knowledge about the world rather than being something fundamental.
Probability can still be baked into the universe even with a causal interpretation. The cause may be inherently probabilistic, which is one possible reading of the initial conditions of Bohm's quantum potential. Hence Bohm's suggestion that his interpretation is causal yet non-deterministic.
That's a possible response. But if you make a distinction between the universe and reality, then it just pushes the issue back a level. That is, is reality causally closed (recast as the Principle of Sufficient Reason rather than in physical terms)?
That raises the question of the status of the Born rule under such interpretations. It would seem that the Born rule could only be postulated, not explained or derived.
I like the quantum information approach where the view is that uncertainty is irreducible because you can't ask two orthogonal questions of reality at once. Location and momentum are opposite kinds of questions and so an observer can only determine one value in any particular act of measurement.
So this view accepts that reality is fundamentally indeterministic, but also that acts of measurement are physically real constraints on that indeterminism. Collapse of the wavefunction happens. It is only that you can't collapse both poles of a complementary/orthogonal pair of variables at once. Maximising certainty about one maximises the uncertainty of the other, in good Heisenberg fashion.
What this interpretation brings out sharply is the contextual or holistic nature of quantum reality. And the role played by complementarity. Eventually you are forced to a logical fork when trying to eliminate measurement uncertainty. You can't physically pin down two completely opposite quantities in a single act of constraint.
Grab the snake by the head, or by the tail. The choice is yours. But pin one down and the other now squirms with maximum freedom and unpredictability.
Of course, when talking of observers and measurements, we then have to grant that it is the Universe itself which is doing this - exerting the constraints that fix a history of events. So it becomes a thermal decoherence interpretation with the second law of thermodynamics being the party interested in reducing the information entropy of the Universe as a physical system.
Things happen for reasons, but there is also serendipity, chance or hazard. There is an element of spontaneity involved.
Quoting apokrisis
'A scientist is just an atom's way of looking at itself' ~ Niels Bohr.
Note that 'the Copenhagen interpretation' is NOT a scientific hypothesis. It is simply the kinds of things that Heisenberg, Bohr and Pauli would say about the philosophical implications of quantum mechanics. The term wasn't even coined until the 1950's. But it is an epistemologically modest attitude, in my view, and one generally in keeping with the tradition of Western natural philosophy. Heisenberg's essays on Physics and Philosophy are very interesting in that regard.
The "modest" understanding of that has been the good old dualistic story that it is all in the individual mind of a human observer. All we can say is what we personally experience. Which then leads to folk thinking that consciousness is what must cause wavefunction collapse. So epistemic modesty quickly becomes transcendental confusion. We have the divorce in nature which is two worlds - mental and physical - in completely mysterious interaction.
I, of course, am taking the other holistic and semiotic tack. The epistemic cut is now made a fundamental feature of nature itself. We have the two worlds of the it and the bit. Matter and information. Or local degrees of freedom and global states of constraint.
So CI, in recognising information complementarity, can go three ways.
The actually modest version is simple scientific instrumentalism. We just don't attempt to go further with the metaphysics. (But then that is also giving up hope on improving on the science.)
Then CI became popular as a confirmation of hard dualism. The mind created reality by its observation.
But the third route is the scientific one which various information theoretic and thermodynamically inspired interpretations are working towards. The Universe is a system that comes to definitely exist by dissipating its own uncertainty. It is a self constraining system with emergent global order. A sum over histories that takes time and space to develop into its most concrete condition.
Oh, and I'm sure there is an 'epistemic cut'.
My take would certainly not be that it's ALL in the mind of the observer, but that (1) the mind makes a contribution without which we would perceive nothing and (2) this contribution is not itself amongst the objects of perception as it is in the domain of the subjective. Scientific realism would like to say that we can see a reality as if we're not even there at all, 'as it is in itself', in other words that what we're seeing is truly 'observer independent'. But it is precisely the undermining of that which has caused all of the angst over the 'observer problem', because that shows that the observer has an inextricable role in what is being observed. The observer problem makes that an unavoidable conclusion, which Everett's 'many worlds' seeks to avoid.
So as maths, many worlds is fine. It has to be as it is just ordinary quantum formalism with the addition of thermodynamical constraint - exactly the decoherent informational view I advocate.
But it gets squirmy when the interpretation tries to speak about the metaphysics. If people start thinking of literal new worlds arising, that's crazy.
If they say they only mean branching world lines, that usually turns out to mean they want to have their metaphysical cake and eat it. There is intellectual dishonesty because now we do have the observer being split across the world lines in ways that beg the question of how this can be metaphysically real. The observer is turned back into a mystic being that gets freely multiplied.
So I prefer decoherence thinking that keeps observers and observables together in the one universe. The epistemic cut itself is a real thing happening and not something that gets pushed out of sight via the free creation of other parallel worlds or other parallel observers.
What would you say if someone were to assert that quantum computers rely on the actual reality of many worlds in order to operate?
To build an actual quantum computer will require a lot of technical ingenuity to sort practical problems like keeping the circuits in a cold enough, and isolated enough, condition for states of entanglement to be controllable.
Do you think those basic engineering problems - which may be insurmountable if we want to scale up a circuit design in any reasonable fashion - are going to be helped by a metaphysical claim about the existence of many worlds?
Unless MWI is also new science, new formalism, it is irrelevant to what is another engineering application of good old quantum mechanics.
I'm only going on the jacket blurb of the first book by the guy who invented quantum computing:
Reading through the reader reviews of that title, it seems Deutsch gives pretty short shrift to anyone who doubts the actual reality of parallel universes, which he seems to think is necessary for the concept to actually work.
Isn't apo saying that the concept doesn't actually work?
Now most have moved to a more nuanced take of talking about many world-line branches. But my criticism is that this simply mumbles the same extravagant metaphysics rather than blurting it out loud. Many minds is as bad as many worlds.
On the other hand, listen closely enough to MWI proponents, and they now also start to put "branches" in quotes as well as "worlds". It all starts to become a fuzzy ensemble of possibilities that exist outside of time, space and even energy (as preserved conservation symmetries). The MWIers like Wallace start to emphasise the decision making inherent in the very notion of making a measurement. In other words - in accepting metaphysical vagueness and the role that "questioning" plays in dissipating that foundational uncertainty - MWI is back into just the kind of interpretative approach I have advocated.
There is now only the one universe emerging from the one action - the dissipation of uncertainty that comes from this universe being able to ask ever more precise questions of itself.
In ordinary language - classical physics - we would say the Universe is cooling/expanding. It began as a fireball of hot maximal uncertainty. As vague and formless as heck. Then it started to develop its highly structured character we know and love. It sorted itself into various forces and particles. Eventually it will be completely definite and "existent" as it becomes maximally certain - a state of eternal heat death.
Only about three degrees of cooling/expanding to get to that absolute limit of concrete definiteness now. You will then be able to count as many worlds as you like, as every one of them will look exactly the same. (Or rather you won't, as individual acts of measurement will no longer be distinguishable as belonging to any particular time or location.)
Unless it collapses back into another singularity, and then expands again. Guess we'll have to wait and see ;-)
I reject parallel worlds and parallel minds because immanence has to be more reasonable than transcendence when it comes to metaphysics.
An immanent explanation could at least be wrong. A transcendent explanation is always "not even wrong" because it posits no actual causal mechanism. It just sticks a warning sign at the edge of the map saying "here be dragons".
And the Everett formulation is just an interpretation - a metaphysical heuristic. It itself becomes subject to various metaphysical interpretations as I just described. You can get literal and concrete. Or you can take a vaguer approach where the worlds and branches are possibilities, not really actualities. Or you can go the full hog and just accept that the foundation of being is ontically vague and so any counterfactual definiteness is an emergent property.
The real advance of "MWI" is the uniting of the maths of quantum mechanics with the maths of thermodynamical constraints - the decoherence formalism.
This is a genuine step in the development of quantum theory. And it has sparked its own wave of interpretative understanding - even if ardent MWIers claim to own decoherence as their own thing.
No we bloody don't. Dark energy is a fact. The Heat Death is gonna happen.
Of course we now have to account for dark energy. And again - in my view - decoherence is the best hope of that. Because quantum-level uncertainty can only be constrained, not eliminated, the fabric of spacetime is going to have a built-in negative pressure. It is going to have a zero-point energy that causes quantum-scale "creep".
Unfortunately we don't know enough particle physics to do an exact calculation of this "creep". We can't sum all the contributions in an accurate way to see if they match the dark energy observations. And the naive calculation - where things either all sum or all cancel - produces the ridiculous answer that the dark energy value should be either zero or Planck-scale "infinite". An error of roughly 120 orders of magnitude, and so another of your often cited "crises of modern physics".
Other calculations going beyond the most naive have got closer to the observed value. But also admittedly, not come nearly close enough yet.
But at least, as a mechanism, it could be bloody wrong. ;)
It is a bit too fruit-loopy to think that merely our observing something completely alters it, beyond anything conceivable through more or less ordinary physics. I mean, there could be something like "magic" where our mind alters reality, but it is best to rule out everything else before we allow ourselves to think something like "magic" is going on.
I don't know whether I should laugh or cry. I am sure many members are waiting breathlessly for the final verdict on what will happen billions and billions of years from now as science refines its precise calculations. No doubt such calculations will require increased funding. Come to think of it, how about forecasting tomorrow?
Just measure the cosmic background radiation. It's 2.7 degrees above absolute zero. The average energy density is down to a handful of protons per cubic metre.
Again you reveal the vastness of your ignorance of routine scientific facts. The Heat Death is a done deal even if you might also say the clock has another tick or two to actually reach midnight.
Is it: A) We are pretty much at the end of the journey. Yes siree, 32 orders of magnitude is quite a big drop. We are not even talking nanoseconds to midnight (nano being merely 9 orders of magnitude).
Or: B) Bibble, bibble, bibble. Blub, blub, blub....
You have 4 months to live. You have 2 years to live. The universe has 1 trillion years to live. Always with the proclamations.
I just want to take issue with your characterization of probabilistic theories as "acausal." What you are talking about is causal determinism, and the keyword here is determinism. You can, of course, put your foot down and insist that causality necessarily implies determinism, but, as far as your arguments here are concerned, causality may as well equal determinism, because you are not actually talking about any aspect of causality other than it being deterministic. So for your purposes, causality is a redundant concept, since all that you are talking about is determinism. And I suspect that you only bring it up for rhetorical purposes (everyone wants to preserve causality in our theories, right?)
That's fine. At any rate, the justification for the Born rule boils down to the following claims:
1. On the Everett interpretation, measurement leads to initial self-locating uncertainty. An observer can have complete knowledge about the relative states of the system, but not which particular state they have just measured. This raises the question of how to quantify their uncertainty in terms of probabilities.
2. If the state amplitudes are equal, the observer should initially be indifferent about which state they have measured. So the states can simply be counted to calculate the probability that a particular state has been measured.
3. If the state amplitudes are not equal, they can be mathematically factored into states that do have equal amplitudes. And again the states can be counted to calculate the probability. The number of factored states exactly tracks the square of the initial amplitude, so it is equivalent to applying the Born rule.
The main assumption is the indifference rule which seems reasonable to me.
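To make the bookkeeping in points 1-3 concrete, here is a toy sketch (my own, and it assumes the squared amplitudes are rational numbers so the factoring terminates):

```python
from fractions import Fraction
from math import lcm   # Python 3.9+

def branch_counting(born_weights):
    """born_weights: squared amplitudes |a_i|^2 as Fractions summing to 1.
    Factor each outcome into equal-weight sub-branches, then count them."""
    weights = [Fraction(w) for w in born_weights]
    assert sum(weights) == 1
    n = lcm(*(w.denominator for w in weights))  # split everything into weight-1/n sub-branches
    counts = [w * n for w in weights]           # whole number of sub-branches per outcome
    return [c / n for c in counts]              # indifference over the n equal sub-branches

# 1/3 vs 2/3 splitter: 1 and 2 sub-branches out of 3, giving probabilities 1/3 and 2/3.
print(branch_counting([Fraction(1, 3), Fraction(2, 3)]))
```

The count of sub-branches per outcome is proportional to the squared amplitude, which is why counting them under the indifference rule reproduces the Born weights.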
Quoting SophistiCat
It would be great if everyone wanted to preserve causality in their theories but that is what the Copenhagen interpretation explicitly rejects. The idea that the universe is inherently probabilistic implies that the probabilities are a brute fact and inexplicable.
Naturally, since the Everett interpretation directly maps the quantum formalism onto the world, a deterministic formalism leads to a causal (or, if you prefer, deterministic) theory. But it's worth noting that the theory describes and predicts behavior; it does not prescribe it.
Here's the actual challenge Deutsch raises in his book:
Numbers aren't anywhere. Numbers are an abstraction over things (which is more-or-less the Aristotelian view).
Possibilities are also abstractions. In ordinary use, a real possibility is just one that is more likely to eventuate.
In that case I clearly don't understand what Deutsch is talking about, although that is probably not surprising.
The realm of possibility is the future. The difference between a real possibility and an unreal possibility (something said to be possible but actually impossible) is determined by the past. What has already occurred, in the past, determines what is possible in the future. If you take a presentist perspective, neither past nor future "exists", because existence is limited to the present. If you take a dualist perspective you can allow that both the past and the future have actual existence, but there is a substantial difference between the two (substance dualism). Therefore the realm of possibility, remaining always ahead of us in time, in the future, is very real, but since it has not yet received material existence at the present it is apprehended only by the mind, and not the senses.
The demonstration would require a quantum computer with about 300 qubits. Either that is an engineering problem that can one day be solved. Or there is some unknown law of the universe that prevents that possibility.
Well, again, you are just equivocating between "causal" and "deterministic." What you are really saying is that our theories ought to be deterministic. I disagree. There is no a priori reason why we should prefer determinism. Or indeterminism, for that matter. I don't consider either to be a theoretical virtue, in and of itself. Of course, if one also offered better or more specific predictions, or a more economical description than the other, then it ought to be preferred - but those other advantages obtain independently of the determinism/indeterminism split. Indeed, in the case of the interpretations of quantum mechanics, none of the empirical advantages can be credited to one interpretation and not another, since they all make the same empirical predictions*.
* That may not actually be true - some interpretations seem to make distinct predictions, but they are presently out of reach for empirical investigation.
While not a prediction per se, Bohm's version of the Schrodinger equation implies a quantum potential that can act non-locally, at a distance. This implication inspired Bell to formulate his theorem, which has been experimentally tested many times over. In this regard, Bohm's causal, non-deterministic model carries some additional weight. Does this sound reasonable?
Yes I think our theories should be deterministic. But, most importantly, our theories should be explanatory which is how I've used "causal" in this thread. I'm unaware of any non-deterministic theory that meets that criterion.
Consider a simple probabilistic theory about dice. This (well-tested) theory says that any given roll will have a 1/6 probability of producing any particular number between 1 and 6. But the theory doesn't explain why dice exhibit that behavior; it just asserts it.
That is precisely the situation with the Copenhagen interpretation and any other interpretations that postulate the Born rule probabilities instead of deriving them. They may make the correct predictions but they don't actually explain anything.
This implies that creating a new world for every quantum event, or smearing everything across infinite worlds comprising some sort of mega-universe, is parsimonious. Others may call such a theory unimaginably elaborate.
In response to previous assertions that the reality of multiple universes is 'craziness': is it any more crazy than any of the other interpretations?! (Cat both dead/alive, etc.)
Much, much crazier than Bohm's straightforward causal non-deterministic interpretation.
Bohm's interpretation implies non-locality, which has been experimentally observed at the molecular level. It is also non-deterministic, which drives the determinists crazy. As a result they come up with this infinitely, exponentially ever-growing mega-world interpretation, which remains probabilistic in this world but becomes deterministic across the fantasized infinity of worlds. It demonstrates how far materialists-determinists are willing to go to preserve their beliefs. I would say God is much more reasonable.
(Plus your apparent hatred for determinists is bordering on the hysterical.)
The refusal to countenance this as a real possibility just demonstrates the problems human beings have with scale. Due to our arrogance as perceived 'masters of the universe' we relate everything to our own size/perspective and so things going too far either way seem ridiculous. If someone said there were probably 10 alternate universes people would easily believe it, but say there are trillions and trillions and trillions...etc and they can't comprehend it.
But if you consider the sheer number of atoms in a small piece of coal, or the space between the nucleus of an atom and its electrons, or the size of the universe, things outside our tiny scale seem far less ridiculous.
Considerably more. The whole purpose of the Schrodinger's Cat thought experiment was to try to depict the strangeness inherent in quantum mechanics with a life-size example. It was verging on satire; it is saying, 'if you take the mathematics of so-called 'superposition' literally, it means that...' - and then uses the infamous example of the live-dead cat to make the point. In some ways, 'Schrodinger's Cat' was an expression of exasperation, as much as anything.
(Physics joke: 'Erwin! What did you do to the cat? It looks half dead!' ~ Mrs Schrodinger.)
But, the Copenhagen 'interpretation' is another thing altogether. It's not a scientific theory at all, it is simply a way of characterising the kinds of things that Bohr, Heisenberg and to some extent Pauli would say could or could not be said on the basis of quantum physics.
Regarding Everett: here's an interesting if little-commented fact - Everett actually had the privilege of meeting with Bohr, several times, in 1959. But Bohr never showed the least sign of accepting the 'relative state formulation' and at this stage, Everett was already out of theoretical physics, on his way to becoming one of the mathematicians behind America's ICBM program.
This is all related in a Scientific American article called 'The Many Worlds of Hugh Everett', which notes the origin of the 'theory' as follows:
***
Quoting Mike Adams
It's not scale that's the problem, it is the inherent outlandishness of the implications of there really being many parallel universes. The literal implication of this idea is that every possible variation of everything that happens really does happen. So this very dialogue - the one you and I are participating in - is taking place in an infinite number of identical worlds, and also in an infinite number of worlds that differ by only one degree, up to an infinite number of degrees of difference.
Here is the handy diagram that Wikipedia generously includes in the article on Many Worlds to explain this:
although in this case there are only two outcomes. But, I suppose depicting an infinite number of outcomes would be problematical in a two-dimensional graphic.
I am totally OK with flights of fantasy in science and philosophy, but if we are to start taking seriously a quantum theory interpretation that calls for a continuous formation of infinity upon infinity of newly made worlds without any evidence or any hope of ever having any evidence, just for the sake of having a determinist theory to hang some hope on, then we should also begin to take seriously the infinity of God, that provides equal determinism and equal hope. Fair is fair.
On the other hand, one can instead choose to explore Bohm's interpretation, which is causal and non-deterministic, and which is the only one that not only predicts non-locality (already observed) but also explains away all the weirdness in a very straightforward manner (e.g. the delayed-choice experiment, non-local spooky action, etc.).
The big problem with Bohm's interpretation is that it allows for choice, something that the materialists-determinists just cannot accept because it is contrary to their faith, and faith is exactly all they have to hold onto - other than the fantasy of infinity upon infinity of new worlds springing out of nowhere continuously. Science indeed has become goal oriented, just like the teachings of the Church.
I would be grateful if you could elaborate on how Bohm's interpretation is 'causal and non-deterministic', because at first sight the coupling of the two appears oxymoronic.
The agent is precisely what you experience every day in your life. Call it what you will: consciousness, your mind, the élan vital, or that which is choosing. The label matters not. You are the agent that is making choices. That which is peering out through your eyes.
What I wanted to point out is that the only reason the super-fantastical Exponentially-Forever-Growing-Infinity-Worlds interpretation (scientists are being very modest when they refer to it as Many-Worlds) is taken seriously at all is that determinists need it in light of quantum theory and they are desperate. But no matter what, in this world, everything remains probabilistic.
Bohm's quantum mechanics interpretation is very straightforward. It is causal because everything is real; there is no collapse. The quantum potential which guides the electron (the electron can be considered a wave perturbation) is defined by form, not distance, so it acts in all directions and at all distances equally (non-local action). Any change in the quantum potential will immediately affect the electron (this explains the Delayed Choice experiment). The equation itself is equivalent to the Schrodinger equation with different ontological implications. Here is a video which explains how it might all work. It's not precise because the narrator doesn't really understand Bohm, but it is good enough as a starting point.
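For anyone who wants the formal statement behind that (standard textbook material, not anything specific to the video): writing the wavefunction in polar form splits the Schrodinger equation into a continuity equation and a Hamilton-Jacobi-type equation containing the quantum potential Q.

```latex
% Polar form of the wavefunction:
\psi(\mathbf{x},t) = R(\mathbf{x},t)\, e^{i S(\mathbf{x},t)/\hbar}

% The real and imaginary parts of the Schrodinger equation then give a
% continuity equation for the density \rho = R^2,
\frac{\partial \rho}{\partial t} + \nabla \cdot \left( \rho\, \frac{\nabla S}{m} \right) = 0,

% and a Hamilton-Jacobi-type equation with an extra "quantum potential" Q:
\frac{\partial S}{\partial t} + \frac{(\nabla S)^2}{2m} + V + Q = 0,
\qquad Q = -\frac{\hbar^2}{2m}\, \frac{\nabla^2 R}{R}

% Q is unchanged if R is rescaled by a constant, so it depends on the shape of
% the amplitude rather than its magnitude - the sense in which it is "defined
% by form, not distance".
```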
It could be argued that it is our human-sized, skewed sense of scale that makes the notion seem outlandish, whereas in reality (given what we know about quantum mechanics) we should really be open to any theories which make sense mathematically.
Incidentally, I'm using the word 'scale' in a very wide sense, not simply to denote size but the parameters of humancentric experience.