Determinism, Reversibility, Decoherence and Transaction
The electron double-slit experiment, aka Young's slits (not to be confused with my new website), is a treasure-trove of nature. Some have said all of quantum mechanics can be considered from it.
A brief reminder: when a cathode fires electrons at a screen with two slits, beyond which is another screen, the pattern that builds up on the back screen is bands of light and dark, the dark bands being where few or no electrons strike, the light bands being where more strike. From this we deduce that the electron beam coming from the cathode is a wave.
If the voltage of the cathode is reduced such that only one electron fires out, say, per ten seconds, eventually the same pattern builds up. From this we deduce that each electron is a wave.

In non-relativistic quantum mechanics, this wave is described by a wave equation of the form:
[math][E - V] u = \frac{1}{2m}[p - A]^2 u [/math]
where u is the wavefunction, and all the other terms are operators: E is total energy (the Hamiltonian), A is magnetic potential, p is momentum, V is electric potential, m is mass, and our choice of units is such that all the decorative physical constants like h, c, e, etc. are 1. This is the quantum form of Newtonian mechanics [math]E=\frac{1}{2}mv^2[/math] extended to include electromagnetism.
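To make the operator content explicit (a standard identification, not specific to this setup): with E --> [math]i\partial/\partial t[/math] and p --> [math]-i\nabla[/math] in these units, and no fields (V = A = 0), the equation above reduces to the free Schrödinger equation:
[math]i\frac{\partial u}{\partial t} = -\frac{1}{2m}\nabla^2 u[/math]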
This does not put time and space on equal footing. The solution requires knowledge of u at one time and two places. These are generally derived from physical considerations, e.g. where the electron wavefunction must vanish: two positions where the wavefunction is zero at one time, generally the start.
When u(r,t) is time-evolved according to the above, it spreads out from the cathode, goes through the slits, spreads out from the slits and interferes with itself. Consequently, the wavefunction is defined over many positions at the back screen. And yet when we measure, we see that the electron hit a precise point on the screen.
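A minimal numerical sketch of this interference (all values here -- wavelength, slit separation, screen distance -- are illustrative, not from anything above): superpose the waves arriving from the two slits and take the absolute square to get the banded intensity on the back screen.

```python
import numpy as np

# Illustrative values: wavelength, slit separation, distance to back screen
lam, d, D = 1.0, 5.0, 100.0
k = 2 * np.pi / lam
y = np.linspace(-30.0, 30.0, 2001)              # positions along the back screen

r1 = np.sqrt(D**2 + (y - d / 2)**2)             # path length from slit 1
r2 = np.sqrt(D**2 + (y + d / 2)**2)             # path length from slit 2
u = np.exp(1j * k * r1) + np.exp(1j * k * r2)   # superposed wavefunction at the screen
intensity = np.abs(u)**2                        # absolute square: light and dark bands

# Bright bands reach 4x a single slit's intensity; dark bands fall to ~0
print(round(intensity.max(), 2), round(intensity.min(), 4))
```

The maximum is four times a single source's intensity (amplitudes add, then square) and the minimum is essentially zero, which is the banded pattern described above.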
How the electron got from a field to a point is called the measurement problem, and different solutions to the measurement problem have yielded different interpretations of quantum mechanics. The oldest successful interpretation was the Copenhagen interpretation which states that, upon measurement, the electron wavefunction collapses probabilistically to a single position, the probability given by the absolute square of the wavefunction (the Born rule).
This idea of the absolute square is important. It is how we get from the non-physical wavefunction to a real thing, even one as abstract as probability. Why is the wavefunction non-physical? Because it has real and imaginary components: u = Re{u} + i*Im{u}, and nothing observed in nature has this feature. The absolute square of the wavefunction is real, and is obtained by multiplying the wavefunction by its complex conjugate u* = Re{u} - i*Im{u} (note the minus sign). Remembering that i*i = -1, you can see for yourself that this is real. We'll come back to this.
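As a trivial check, using an arbitrary sample value (nothing more than the 3-4-5 triangle in disguise):

```python
u = 0.6 + 0.8j              # an arbitrary complex wavefunction value
prob = u * u.conjugate()    # (Re + i*Im)(Re - i*Im) = Re^2 + Im^2

print(prob.imag == 0.0)     # True: the imaginary parts cancel exactly
print(round(prob.real, 9))  # 1.0
```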
There are other probabilistic interpretations, and also some deterministic ones, such as Bohmian mechanics, wherein the electron always has a single-valued position and momentum (hidden variables), and the many-worlds interpretation, in which the wavefunction does not collapse but, thanks to the mathematical rules of entanglement, you can never have a term in the wavefunction in which the electron hit the screen at position [math]y[/math] but you observed it at position [math]y' \ne y[/math].
I'd like to question here the physics of the Copenhagen interpretation -- a reference point for many non-determinists and God-botherers -- because it is incredibly simplistic and I have always thought so. The back screen is treated in an ideal way, which is something physicists often have to do to make problems tractable, but the artefacts of this idealisation are then taken as containing actual insight.
The back screen is a macroscopic object that cannot be treated precisely with quantum mechanics. However, we can still use QM to investigate the issue further. For instance: is the electron wavefunction the only quantity that affects where the electron can be found? Clearly it isn't. An electron cannot, for instance, occupy any part of the screen where an electron already is, unless that second electron can somehow vacate its position (Pauli exclusion principle). This will not reduce the potential sites the electron can occupy to 1, but it will, at any given time, for any given electron, stop the number of sites being a continuum. In the language of condensed matter physics (for the screen is condensed matter), an electron can only go where an electron hole exists. An electron hole in this context is any space where an electron could occupy but does not.
The back screen is a high-entropy object compared with the electron. That is, at any time, it may occupy one of hundreds of thousands or millions of microstates: particular configurations that are energetically equivalent to one another. At one instant t', a position on the screen r' may not admit an electron because it already has one there. At a subsequent instant t'', it might admit an electron at r'. The screen will explore these microstates in a thermodynamic way, i.e. in the same way that a box of gas will have different but energetically equivalent configurations of gas molecules from one instant to the next.

[EDIT: To incorporate spin, just extend the concept of the coordinate r.]
So we've reduced the continuum of the screen down to some finite number of available electron holes at time t'. But the electron is again a physical thing: it has energy and momentum. It can't just stop when it fills a hole. It goes on to do other things and, again, this is not considered in the simplified version that sustains the Copenhagen interpretation. Because what it does next is also constrained by the precise microstate of the screen. If it scatters another electron such that it goes from state k --> k'', there has to be an existing hole with state k''. Further, whatever state the scattered electron ends up in, there has to be a hole for that too.

In this way we can reduce that finite number of acceptable positions on the screen ever more by continuing the process. Once it has scattered that first electron, it will scatter another and another, each scattering ever constrained by available states in the screen. We can go further. Eventually the electron, either alone or in an atom, will leave the screen entirely, go off on fun adventures in space and time, until eventually one day it finds a positron -- another kind of electron hole -- where it dies like the swine it always was.
Because the electron's birth and death are the true boundary conditions of its wavefunction! And here we turn to relativity. The relativistic wave equation is of the form [math][E - V]^2 u = ([p - A]^2 + m^2) u[/math]. This puts time and space on equal footing (both energy and momentum are squared), requiring knowledge of the particle at two times, not just one. This is why in relativistic quantum theories, we do not proceed by specifying an initial state, time-evolving it forward, and asking the probability of spontaneously collapsing to a particular final state. Rather, we have to specify the initial and final states first, then ask what the probability is. This is constructed as a Green's function G(r, t; r', t'). You can see that we can swap the primed and unprimed coordinates around. If, for a given G(r, t; r', t') there exists a G(r', t'; r, t), the process X=(r,t) --> X'=(r',t') is reversible. (If the magnitudes of the two are always equal, the system is in equilibrium.)
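This coordinate swap can be checked explicitly in the simplest case. Below is a sketch using the free-particle Schrödinger propagator (a standard closed form, chosen only because it is the simplest Green's function to write down; units with hbar = 1 and arbitrary sample endpoints): swapping the primed and unprimed coordinates conjugates G, so the two directions have equal magnitude.

```python
import numpy as np

def free_propagator(x, t, xp, tp, m=1.0):
    """Free-particle propagator G(x, t; x', t'), hbar = 1."""
    dt = t - tp
    return np.sqrt(m / (2j * np.pi * dt)) * np.exp(1j * m * (x - xp)**2 / (2 * dt))

# Arbitrary sample endpoints X' = (x', t') and X = (x, t)
G_fwd = free_propagator(1.3, 2.0, -0.4, 0.5)   # X' --> X
G_rev = free_propagator(-0.4, 0.5, 1.3, 2.0)   # X --> X', coordinates swapped

print(np.isclose(G_rev, np.conj(G_fwd)))       # True: reversal is conjugation
print(np.isclose(abs(G_fwd), abs(G_rev)))      # True: equal magnitudes
```

The equal magnitudes are the reversibility condition stated above; the conjugation foreshadows the role the complex conjugate plays later in the post.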
We are quite familiar in QM with specifying spatial boundary conditions. For instance, consider a tiny oven, a blackbody radiator. What photons can be emitted inside? We all know that only photons with wavelengths that are integer divisors of 2L, where L is the side of the box, can be sustained (B, C & D below). This is quantisation in a nutshell, and it pops out as solutions of the wave equation with the proper number of spatial boundary conditions.
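In code, the allowed standing-wave wavelengths are just this divisor condition (L = 1 here is an arbitrary box size):

```python
L = 1.0   # arbitrary box side

# An integer number n of half-wavelengths must fit in L, so lam_n = 2L/n
allowed = [2 * L / n for n in range(1, 6)]
print(allowed)   # [2.0, 1.0, 0.6666666666666666, 0.5, 0.4]
```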
But let's turn the question around. An atom in the oven emits a photon and later another atom absorbs it and re-emits it (more like A below). How does the first atom know to emit a photon such that a whole number of its half-wavelengths will fit in the box? How does either the atom or the photon know how big the box is? Photon emission is just the de-excitation of an atom from one energy level to a lower one, and the energy of the oven is given by its temperature, which could be anything.

The question becomes less mysterious when we treat time and space equally: the emitted photon doesn't just have a spatial endpoint, but a temporal one, i.e. the photon can only be emitted when it "knows" where and when it will be absorbed. But how could this be?
Photon emission/absorption is another example of a reversible process. Imagine atom 1, in an excited state, "spontaneously" de-excites, deterministically emitting a photon which then "spontaneously" collapses so as to be absorbed by atom 2 which deterministically excites. Run that movie backwards and you have exactly the same thing, only the thing that's spontaneous is now deterministic and vice versa. The Copenhagen description, though, is irreversible: wavefunction collapse is a loss of information that is not retrieved by simulating the reverse process.

The wave equation for the photon is just E = p. If we start with an initial wavefunction u = δ(r), a spike at its starting point, the wavefunction will expand and expand. To imagine the time-reversed process, we begin with its final state. But if we just evolve the final state, that won't get us back to where we started. What can we evolve that will take us from time t' back to time t? To answer this, it's easiest to consider plane waves:
[math]u(r,t) = Ae^{ipr}e^{-iEt}[/math]
What we want is the same thing but travelling in the opposite direction in space (p --> -p) and time (t --> -t):
[math]u_b(r,t) = Ae^{-ipr}e^{iEt} = u^*(r,t)[/math]
This is just the complex conjugate of u, u*, the thing that, when we multiply it by u, gives us the most fundamental real thing QM has: the probability of finding the particle at position r at time t.
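Numerically this identity looks like the following (arbitrary sample values throughout; massless, so E = p):

```python
import numpy as np

r = np.linspace(0.0, 10.0, 200)   # sample positions
t = 0.7                           # a sample time
A, p = 1.0, 2.0
E = p                             # massless dispersion E = p

u  = A * np.exp(1j * p * r) * np.exp(-1j * E * t)   # forward wave
ub = A * np.exp(-1j * p * r) * np.exp(1j * E * t)   # p --> -p, t --> -t

print(np.allclose(ub, np.conj(u)))   # True: the reversed wave is u*
print(np.allclose(u * ub, A**2))     # True: u u* is the real Born density
```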
Is it a coincidence that the thing that makes a wavefunction real is also the thing that describes that wavefunction in reverse? Not according to the transactional interpretation of quantum mechanics, which holds that the actual trajectories a particle takes are not just determined by the retarded wavefunction going from time t to t', but also the advanced wavefunction going from time t' to t. (Advanced wavefunctions come up in standard QED as well, to yield the electron self-energy.) In this interpretation, the complex conjugate is essentially a message from the future. The electron takes the real trajectories it does in part because it has information about where it's going or, from another viewpoint, where its conjugate came from.

Both the retarded wavefunction going from t -> t' and the advanced wave coming from t' -> t may explore whatever places they want, but only where one is the conjugate of the other do the trajectories become real.
This is inconsistent with Copenhagen, but quite consistent with relativity. It treats space and time equally, requiring two temporal boundary conditions for the electron. Furthermore, it incorporates more physics. We can identify the complex conjugate of the wavefunction as the backwards-in-time emission of an electron hole in the back screen, which incorporates information about the microstate of the screen itself: for an electron hole to be advanced from (r', t'), the microstate of the screen must be such that there exists an electron hole at (r', t'). We would make such a demand of the cathode: for an electron to be emitted at point (r,t) there must be an electron at (r,t). It is only sensible that we do so for the hole the electron will occupy. (It's worth convincing yourself that this hole also has a history. For every electron that vacates position r to occupy position r', it leaves behind a hole and goes to where the hole previously was, describing an electron hole that vacates position r' to occupy position r. We do not need to consider it the same hole throughout, but there must be some conservation of hole-ness.) It conserves indirectly measurable quantum phenomena such as the interference effects in the double-slit experiment because the electron and its hole still go from r to r' or vice versa by every possible trajectory. And yet it eliminates the mysterious collapse mechanism which takes the wavefunction from a field to a singularity.
In a nutshell, my thesis is this... The true boundary conditions of any particle are its birth and death: where and when it was created, and where and when it will be destroyed. These are facts of each particle. This is the full time-dependent wavefunction of the electron which is, relativistically speaking, equivalent to a static 4D wave. Unlike in the QM of the Copenhagen interpretation, the conjugate solution (from death to birth) is also a solution when these boundary conditions are applied. This solution eliminates almost all of the trajectories possible (and expected) in Copenhagen QM. And this is just the single-particle picture.

As we expand the picture to include more bodies in the universe, especially the rest of the electronic field, more and more remaining trajectories are removed by things like scattering and Pauli's exclusion principle.

[EDIT: One can also see that single-particle trajectories that otherwise wouldn't be real could be made so by scattering. But Feynman showed that it is the closest paths to straight lines that contribute the most to the sum over histories.]
There is no guarantee here that this will eventually reduce the number of intersections with the screen to 1. But we are a long way from the original Copenhagen picture of an electron that might be found anywhere. I expect that, if we could solve the many-body Dirac equation for the universe (well, it would have to be some cosmologically-consistent generalisation of it), it probably would resolve to 1 intersection.
If not, we're left with a solution that looks like a hugely constrained version of the many-worlds interpretation. I name this the not-many-worlds interpretation of quantum mechanics.
Comments (236)
I appreciate the well thought out, explicit and thorough description of your thesis. I will reread this a couple times and perhaps make a comment if I see something questionable.
So let me start with this idea that the electron (or whatever proposed particle) takes a path, a "trajectory". It is actually impossible that the particle takes a path, and this fact is imposed by the concept of "energy". Energy, in its conceptual formulation is a wave feature. A person might think that a massive object, or a particle, moves from A to B, and brings with it "energy" which is transferred to another object at point B. But the energy according to conventional principles is understood as being transferred from one body to another through wave principles. A moving body has velocity, mass, and momentum according to Newtonian principles, but it does not have "energy". Energy is a feature of the body's relation to something else, and according to conventional principles (Einsteinian), the "something else" is light, or electro-magnetism. Since electro-magnetism is understood and represented by wave principles, the concept of "energy" dictates that energy transmission from one place to another is in the form of a wave. Therefore it makes no sense at all, to say that energy moves from A to B as a particle, because the very concept of "energy" dictates that energy can only be transmitted as a wave. The further discussion as to which trajectory the particle takes therefore, is completely moot, having no significance whatsoever, because whatever it is which transmits from A to B is conceptualized as energy, and energy transmits as a wave, not a particle. There is no particle which moves from A to B, only energy, and by conventional principles energy is transmitted as a wave. The possibility of a particle with a trajectory is excluded by the conceptualization employed.
Quoting Kenosha Kid
I believe the wave function, as you mention here, is artificial. It has been created in an attempt to establish consistency between the two incompatible representations of space, 1) massive bodies moving through empty space, and 2) energy moving as waves in space.
Notice, what you say later, that what makes the wave function "real" is the capacity for reversal. This is another feature of conventional conceptualization. We understand radiant energy, such as radiant heat, through its absorption, not through an understanding of the process of radiation. So this is a necessary condition of radiant energy, that it is absorbed. The concept of radiant energy is based in the absorption of energy into a body. Sure we can talk about something radiating energy into empty space, into an infinite vacuum or some such thing, but this is not consistent with the concept which is based in objects receiving energy, not in objects emitting energy.
This is a fundamental feature of our means of understanding, which is observation. The observer is always on the receiving end of the radiation, so our understanding of radiation is based in its reception. It doesn't really make sense to talk about observing radio wave emissions because the act of observing is itself a reception. And, though we can observe changes to the body which emits the radiation, this is not a true observation of emission itself. So, to facilitate mathematical calculations we simply assume that emission is an inversion of reception, and voila, the wavefunction is claimed to be real, but really there is a hole in the understanding here.
Quoting Kenosha Kid
I believe the macroscopic/microscopic division is not an adequate representation of the real divide. The real divide is the division between two incompatible conceptions of "space". One understanding of space is as a medium full of waves, and the other is as an empty vacuum with massive bodies moving around. You can see how the two conceptualizations are incompatible, and where the two conceptualizations meet, radiation is absorbed or emitted from a massive body, and there is confusion due to the incompatibility of the two. That the issue is not macroscopic/microscopic is evident from the fact that macroscopic things can be represented by the wave model. That the wave model representation of macroscopic objects is inaccurate is due to that hole in the understanding.
Quoting Kenosha Kid
This is a good example of the incompatibility. You are describing the screen as an area of space within which there are subdivisions, some of which will allow for the existence of a particle. But the incoming radiation, to be absorbed into that space, is being received as waves within this empty space. So it is necessary to have a transformation principle whereby space (as a medium) with energy moving as waves, is compatible with the conception of empty space with moving particles. Conventional wisdom tells us that the wave formulation is far more advanced, providing a much higher degree of understanding of the reality of the situation, so we ought to dispense with this conception of empty space with bodies or particles moving around, and replace it with a consistent wave model. The whole idea described here, that the screen consists of an area with subareas which might or might not provide for the existence of a particle, is the wrong approach. The entire area (screen, or macroscopic object) needs to be represented as an interaction of waves to be able to properly understand how the incoming waves of radiant energy will react. However, as I described above, the relationship between emission and reception of radiation is not well understood, and we cannot simply assume an inversion.
Quoting Kenosha Kid
This is the principal misleading, or misguided principle right here. If the radiant energy travels as waves, as necessitated by the concept "energy", the death of the electron is the moment that the energy is emitted. Its birth is when the energy is received. There is no continuity of the particle during transmission. The particle only exists as a part of a massive object. So this is where you need to turn your model around. You cannot represent a photon or electron as being emitted. The photon, as a particle only exists as a part of an object. If energy is emitted by that object, it is emitted as waves, and this constitutes the end of the photon or electron, not its beginning. Conversely, the beginning of the photon or electron, is when radiation is absorbed into an object. We must maintain these principles because the spatial conception which represents energy traveling from here to there, does not allow that energy travels as a particle, it necessitates waves. The particle only exists within the other spatial conception of objects existing in empty space. Remember, "boundary conditions" are applied as deemed required, so if you want your boundary conditions of the electron or photon to be true, you need to represent the true existence of the particle, as allowed for by the conceptions employed. If the conceptions deny the possibility of a particle transmitting energy from one object to another, you cannot employ boundary conditions of the particles, which allow that the particle exists while the energy is being transmitted as waves.
I welcome identification of any deficiencies. However, the concepts in question are those of non-relativistic and relativistic quantum mechanics, in which I have some background. Just to define some scope, which I had presumed implicit but perhaps is not, the question here is whether the insight gathered from Copenhagen interpretation regarding determinism is valid. This must be judged from within a quantum theory, with additions and subtractions of course, so I will focus on the parts of your response that fall within that scope.
Quoting Metaphysician Undercover
Oh dear god.
Quoting Metaphysician Undercover
Oh crikey!
Quoting Metaphysician Undercover
There is no cutoff, which is fine as I am not exploring the realm of the classical limit. It is sufficient to know that a classical limit exists. The relevance of the macroscopic screen is merely that it explores microstates, nothing more.
Quoting Metaphysician Undercover
This is worth treating. In 1900, things were thought to be either particles or waves. The blackbody radiation spectrum and quantised atomic orbitals turned that on its head: waves were behaving like particles; particles were behaving like waves. It was surprising, hence the "wave-particle duality" paradox.
There is no paradox. There is no preferred basis set for describing waves except the one belonging to the operator that describes a particular measurement device. At one extreme, plane waves -- eigenstates of the momentum operator -- have well-defined momentum and no defined position, but occupy all of space. At the other, eigenstates of the position operator have well-defined position but no defined momentum. Everything else lies in between.
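Here's a quick numerical sketch of that "in between", for anyone who likes to see numbers (this is my own toy illustration, units ℏ = 1, grid and widths chosen arbitrarily): a family of Gaussian wavepackets, narrow in position implies broad in momentum and vice versa, with σx·σp = 1/2 at the Gaussian extreme.

```python
import numpy as np

# Minimal numerical sketch (units hbar = 1; grid and widths are arbitrary
# choices of mine): a Gaussian wavepacket narrow in position is broad in
# momentum and vice versa, with sigma_x * sigma_p = 1/2 for a Gaussian.
x = np.linspace(-50, 50, 4096)
dx = x[1] - x[0]
p = np.fft.fftshift(np.fft.fftfreq(x.size, d=dx)) * 2 * np.pi
dp = p[1] - p[0]

for sigma_x in (0.5, 2.0, 4.0):
    psi = (2 * np.pi * sigma_x**2) ** -0.25 * np.exp(-x**2 / (4 * sigma_x**2))
    phi = np.fft.fftshift(np.fft.fft(psi))        # momentum-space amplitude
    prob_p = np.abs(phi) ** 2
    prob_p /= prob_p.sum() * dp                   # normalise numerically
    sigma_p = np.sqrt(np.sum(p**2 * prob_p) * dp)
    print(f"sigma_x = {sigma_x:3.1f}  ->  sigma_x * sigma_p = {sigma_x * sigma_p:.3f}")
```

At the σx → 0 end this tends towards a position eigenstate (flat momentum distribution); at the σx → ∞ end, towards a single plane wave.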
The double-slit experiment begins with an electron with a well-defined position and, after measurement, ends with an electron with a well-defined position. These need not be precise, though they are typically treated as such. In between, the electron spreads out as a wave, but eventually must reduce, either deterministically or spontaneously, to something more localised. A good basis set that puts wavelike and particle-like extremes on equal footing is the stroboscopic wave-packet representation, on which I wrote my master's thesis. All of the above still holds: we simply replace Pauli exclusion of two electrons being in one kind of state (position) with that of two electrons being in another kind of state (stroboscopic wave-packet). The exclusion principle holds across all such bases (e.g. you cannot have two like-spin electron plane waves with the same momentum, which is what I had in mind for the states k, k', k'' and k''', though these could be position, orbital, Bloch, Wannier or stroboscopic states or anything else you might consider; it makes no difference to the argument).
Quoting Metaphysician Undercover
Stimulated emission. Spontaneous emission. Blackbody spectra. Photoelectric effect. Cathode ray tubes. Lasers. The concept of emission is pretty uncontroversial in quantum mechanics.
You obviously have some exotic ideas of your own about how nature is that are at odds with quantum mechanics. I would not try to dissuade you from them. The above is not meant as pro-quantum propaganda, but rather, as I said, to explore ideas within the field and assess their consequences for determinism. The end-game being an attempt to put to bed the 'quantum equals non-determinism' myth. Whether the concepts discussed are true or not is far less important than whether they hold or don't within that particular framework.
If the pattern is something that "builds up" then the pattern isn't the result of one electron, but many over time. One electron going through every ten seconds makes one dot on the screen every ten seconds that eventually builds up the pattern over time. So each electron behaves like a particle and the relationship between all the electrons is a wave, not that each electron is a wave, or else you'd get the pattern with the first electron. There would be no "building up" if each electron was a wave.
I'm afraid not. If there are no other electrons to interfere with, and the electron does not interfere with itself, there is no possibility of interference effects.
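For what it's worth, here's a toy numerical sketch of exactly this point (my own far-field two-slit model; every parameter is illustrative): sampling single hits one at a time from the Born-rule density of the self-interfering wave rebuilds the fringes dot by dot, while the density without self-interference has nothing to build up.

```python
import numpy as np

# Toy far-field two-slit model (my own illustrative parameters): each slit
# contributes a plane-wave amplitude at screen position y.
rng = np.random.default_rng(0)
y = np.linspace(-1, 1, 500)
k, d = 20.0, 1.0                       # wavenumber and slit separation (arbitrary)
psi1 = np.exp(1j * k * d * y / 2)      # amplitude via slit 1
psi2 = np.exp(-1j * k * d * y / 2)     # amplitude via slit 2

# One electron at a time: each hit is a single sample from the Born-rule
# density |psi1 + psi2|^2. Fringes emerge dot by dot from self-interference.
prob = np.abs(psi1 + psi2) ** 2
prob /= prob.sum()
hits = rng.choice(y, size=5000, p=prob)
counts, _ = np.histogram(hits, bins=50, range=(-1, 1))

# Without self-interference there is nothing to build up: |psi1|^2 + |psi2|^2
# is completely flat.
no_interference = np.abs(psi1) ** 2 + np.abs(psi2) ** 2
print("fringe contrast:", counts.min(), "to", counts.max())
print("no-interference spread:", no_interference.max() - no_interference.min())
```

The histogram of one-at-a-time hits shows deep dark bands; the non-interfering density is featureless, so no pattern could ever "build up" from it.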
There is a problem with this approach, and that is that you have outlined specific problems with quantum theory itself, and how these problems lead toward an appearance of indeterminism in some interpretations. To dispose of the appearance of indeterminism, the problems which are being interpreted need to be addressed themselves. Since the problems of quantum theory are a manifestation of the conceptualizations employed (as I described above), then we have to step outside quantum theory to get a handle on these problems.
That is why my Zeno analogy is relevant. What you are asking is analogous to saying let's adhere strictly to Zeno's descriptions, and try to solve Zeno's paradoxes from within that box. It cannot be done, because it is Zeno's descriptions themselves, the conceptualizations employed which are faulty, so we must step outside of those conceptualizations to locate their faults. How could we possibly judge the Copenhagen interpretation without stepping outside the descriptions and conceptualizations which are employed to produce it?
Quoting Kenosha Kid
This describes exactly what I explained. There are two distinct and incompatible conceptions of space. The wavefunction attempts to reconcile the two, but because they are incompatible it cannot. So the two defining parameters (boundaries if you like) of motion, 1) "well-defined momentum and no defined position", and 2) "well-defined position but no defined momentum", are each in themselves incomprehensible if they are meant to describe an actual motion. The former describes the limits of the "wave in space" conception of space, and the latter describes the limits of the "particle in space" conception of space. The two conceptions are incompatible, so when attempts are made to meld them together as the wave function, the result is uncertainty with respect to one or the other.
Quoting Kenosha Kid
Here again we have a demonstration of the two distinct, and incompatible spatial conceptions. You describe an electron as a particle with a well defined spatial position, at the beginning and at the end of the process. This represents the one spatial conception, "objects in space". In the meantime, the "in between", you say that the electron spreads out as a wave. This is a completely different conception of space, one in which there are "waves in space", and this conception of space is incompatible with the "objects in space" conception, as demonstrated by the Michelson-Morley experiments.
As I stressed in the last post, your claim that there is a particle, called an electron, which exists during the in between period, and "spreads out as a wave", is completely unsupported by the conceptualization of "energy". In fact, the concept of energy denies the possibility that there is such a particle. So your stated "beginning" is really the end of an electron, and your stated "end" is really the beginning of another electron, and what exists in between the two, providing for temporal continuity, is wave energy which is based in a conception of space that is incompatible with the conception of a particle as an object at a location in space.
It really makes no sense to attempt a validation of a temporal continuity of a single electron by introducing different forms of the electron, like "stroboscopic wave-packet", when the real issue is that there are two distinct conceptions of space, one supporting the existence of particles like electrons, and the other supporting the existence of wave energy. Putting the two conceptions, wave-like and particle-like, on equal footing is not a good idea because each of the two involves a conception of space which is incompatible with the other. Therefore the intelligent solution is to determine which of the two conceptions is superior to the other, and figure out what needs to be done to make things which are understood by the other consistent. Pretending that the two can be made to appear consistent by putting them on an equal footing is not a real solution.
Quoting Kenosha Kid
As your op describes, there are problems with quantum mechanics in general. Therefore we must question all principles, even those uncontroversial things which are taken for granted. The idea that emission can be described as spontaneous and random indicates that it is not well understood.
Quoting Kenosha Kid
OK, if that's what you want to discuss, then perhaps you can describe how spontaneous emission and random fluctuations are consistent with determinism.
I can see that you're keen to do so, but that is not a logical argument. The particular issue in question only occurs in one interpretation of QM, therefore it is not necessary to go outside of QM to avoid it, nor would I be saying much of anything about it if I did.
Quoting Metaphysician Undercover
Perhaps in whatever exotic picture of the universe you have. In QM, it's quite a trivial affair. You define a wavefunction with a value of 1 at the initial position and 0 everywhere else as the initial state, and you time-evolve it according to its wave equation. It will disperse quite naturally.
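A sketch of that triviality in Python, if it helps (free-particle evolution done exactly in momentum space; a sharply peaked Gaussian stands in for the "1 at one point, 0 everywhere else" initial state, and all numbers are illustrative choices of mine):

```python
import numpy as np

# Free-particle time evolution (units hbar = m = 1), exact in momentum space:
# psi(x,t) = IFFT[ exp(-i p^2 t / 2) FFT[psi(x,0)] ]. A sharply peaked
# Gaussian stands in for the "1 at the initial position, 0 everywhere else"
# initial state; grid and widths are illustrative.
x = np.linspace(-100, 100, 8192)
dx = x[1] - x[0]
p = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)

sigma0 = 0.5
psi0 = (2 * np.pi * sigma0**2) ** -0.25 * np.exp(-x**2 / (4 * sigma0**2))

def width(psi):
    """RMS width of the probability density |psi|^2."""
    n = np.abs(psi) ** 2
    n /= n.sum() * dx
    return np.sqrt(np.sum(x**2 * n) * dx)

for t in (0.0, 2.0, 8.0):
    psi_t = np.fft.ifft(np.exp(-1j * p**2 * t / 2) * np.fft.fft(psi0))
    print(f"t = {t:4.1f}   width = {width(psi_t):6.2f}")   # grows with t
```

The packet disperses quite naturally, exactly as described: no extra mechanism is needed.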
Quoting Metaphysician Undercover
An element of any basis set, such as the positional basis, can be written as a superposition in any other basis. In the case of the two extremes of position and momentum, this is simply the Fourier transform and its inverse.
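A discretised illustration of the position/momentum case (my own minimal example; the grid size is arbitrary): a one-hot "position basis state" is, under the discrete Fourier transform, an equal-magnitude superposition of every plane wave, and the inverse transform recovers it exactly.

```python
import numpy as np

# A (discretised) position basis state: 1 at one grid point, 0 elsewhere.
N = 1024
e_x = np.zeros(N, dtype=complex)
e_x[300] = 1.0

# Written in the momentum basis (unitary discrete Fourier transform), it is
# an equal-magnitude superposition of every plane wave...
e_p = np.fft.fft(e_x) / np.sqrt(N)
print(np.allclose(np.abs(e_p), 1 / np.sqrt(N)))   # True: flat magnitude

# ...and the inverse transform recovers the position state exactly.
back = np.fft.ifft(e_p) * np.sqrt(N)
print(np.allclose(back, e_x))                     # True
```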
Quoting Metaphysician Undercover
I dedicated quite a portion of the OP to spontaneous emission.
LOL. Just read what you wrote, bro.
Quoting Kenosha Kid
You're saying the "beam" is wave and interferes with itself, so if an electron is a wave, then it can interfere with itself.
A wave interfering with itself is how the pattern is created.
The "beam" is the relationship between the individual electrons and according to you is a wave. If the same pattern is created no matter how long the interval between each electron, then it isnt the electron that is a wave because one electron would create the pattern if it were a wave like the "beam" of electrons.
Ha!
Quoting Harry Hindu
Ah, I see the misunderstanding. I meant it as a chronological voyage of discovery. We conclude the beam is a wave from the interference pattern. We then go on to realise that each electron is a wave by reducing the voltage. Unclear, my bad.
All you did was increase the interval between electrons being emitted and you get the same pattern. The beam is still there, just at a lower voltage, so it is still the beam that is the wave, and not individual electrons.
The second experiment doesn't show that each electron is a wave. It shows that electrons don't appear to move through, or are governed by, space-time like other particles, or maybe it is our view of space-time that is skewed.
Yes you described emission as the reversal of radiation. And I explained how this is an unjustified assumption. Your reply was "Oh crikey!". And then you went on to assert "the concept of emission is pretty uncontroversial in quantum mechanics." Now you refer back to the OP as if you think that the answer to my question is there. But all that is there is the following faulty assumption:
Quoting Kenosha Kid
You seem to be ignoring the fundamental facts of radiant energy. The ejection of an electron which is caused by the absorption of radiation, according to the photoelectric effect, or some other mechanism, what you call scattering, is not a simple reversal of the emission of radiation. The former has a determinate cause, the latter may be spontaneous. The fact that you can treat the two mathematically as one reversible process does not justify your claim that the two are one reversible process.
This is also treated in the OP.
Quoting Metaphysician Undercover
As I said, this is uncontroversial, and the purpose of the OP is not to compare QM with idiosyncratic theories. In QM, this obeys a specific symmetry of the universe that makes it reversible.
Maybe, but that does not fall within the scope of the OP, which concerns quantum mechanics, not alternative theories to quantum mechanics.
Sure it does. It shows that your OP is unfounded in asserting that electrons are waves, and is not the rest of your OP built upon that faulty premise?
Seems to me that you've admitted that consciousness is involved in some way to say that it has imaginary components, or where else in reality do imaginings exist? Is it me, or are scientists getting really lazy with their use of language?
If the wavefunction represents a fundamental characteristic of nature, then how can you say that nothing else observed in nature has this feature, when observing is what collapses the wavefunction? It's like saying that nothing else has the features of atoms when everything is made of atoms. You're simply talking about different views of the same thing - a view of atoms, waves and electrons from the macro-scale vs the micro-scale vs the quantum-scale. Each theory is simply a description made from one of these views.
As stated previously, the OP is regarding QM, and nothing outside that framework. Feel free to start a thread on the subject you'd obviously prefer to discuss.
I see, avoid the issue you claim to be addressing by asserting that the OP has already addressed it. I call that lying.
You propose a complete misrepresentation of the human conceptualization of radiant energy. Notice that a cooler object will not radiate heat to a warmer object, therefore contrary to your claim, emission/absorption is not a reversible process. Your claim that spontaneous emission is deterministic is dependent on this false premise.
I propose a rejection of the Copenhagen interpretation of quantum mechanics. Ideas about the nature of radiation that are different to quantum mechanics may well be fascinating, but not relevant.
Probably a stupid question: if the mapping from a wavefunction to it times its complex conjugate always produces a purely real variable, and the "advanced wave" is defined by the mapping of this wavefunction to its complex conjugate, how could this be taken as evidence of a coincidence of physical mechanisms (two processes with the same result) when it's actually two names for the same mapping?
In other words; what generates the advanced wavefunction trajectories aside from the conjugation operation?
Analogy:
"By applying this new technique, we have transformed the sleeping agent into a soporific!"
"What did the technique add?"
"It reveals that it's no coincidence that sleeping agents are soporifics".
No, it's a good question. The first part of the answer is that they are both independently solutions to the relativistic wave equation. This is different to the non-relativistic Copenhagen interpretation, in which the advanced wave is not a solution at all, and really is just a mapping from the retarded wave. So just by solving the equation, you get forward and backward solutions.
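A quick symbolic check of that first point, for what it's worth (my own minimal free-field setup, with the Klein-Gordon equation standing in for the relativistic wave equation, units c = ℏ = 1): both the retarded and the advanced plane wave independently solve the relativistic equation, whereas for the non-relativistic Schrödinger equation the conjugate is not a solution at all.

```python
import sympy as sp

# Symbolic check (minimal free-field setup, units c = hbar = 1).
t, x, k, m = sp.symbols('t x k m', real=True, positive=True)
omega = sp.sqrt(k**2 + m**2)        # relativistic dispersion
omega_nr = k**2 / 2                 # non-relativistic dispersion

def klein_gordon(phi):
    # (d^2/dt^2 - d^2/dx^2 + m^2) phi
    return sp.simplify(sp.diff(phi, t, 2) - sp.diff(phi, x, 2) + m**2 * phi)

def schrod(psi):
    # i d/dt psi + (1/2) d^2/dx^2 psi
    return sp.simplify(sp.I * sp.diff(psi, t) + sp.diff(psi, x, 2) / 2)

retarded = sp.exp(sp.I * (k * x - omega * t))
advanced = sp.exp(-sp.I * (k * x - omega * t))    # the complex conjugate

print(klein_gordon(retarded), klein_gordon(advanced))   # both 0

psi_ret = sp.exp(sp.I * (k * x - omega_nr * t))
psi_adv = sp.exp(-sp.I * (k * x - omega_nr * t))
print(schrod(psi_ret))   # 0
print(schrod(psi_adv))   # nonzero: the conjugate is not a solution
```

The asymmetry in the last line is exactly the non-relativistic situation described above: there, the advanced wave really is just a mapping from the retarded one, not a solution in its own right.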
The second part, which I mentioned but only in a throwaway fashion, is that we don't need to consider the 'hole' taking part in a given process to have the same (future) history as the electron's future. The birth and death of the particle are the two boundary conditions, but things like where the electron appears on the screen can be considered nodes. The advanced wave sent back to the cathode needn't be the same one that conjugates the retarded wave beyond the back screen.
A quick note on this... The OP is very much in the realm of QED, in which all processes really are reversible. If that was the end of it, we really would expect the advanced wave from death to birth to be the conjugate of the retarded wave from birth to death, and there would be no insight. Generally in TQM though, only the advanced wave from the screen to the cathode need conjugate with the electron, which is likely more general. There is no time-reversed equivalent of some electron-producing radioactive decays, for instance.
Your false claim that emission/absorption is reversible is obviously what is inconsistent with quantum physics.
And your refusal to accept the empirical evidence of a multitude of real examples of radiant energy, demonstrates that you are simply obsessed with some pet theory which is not supported by empirical evidence.
Visuals are not my strong suit, but I'm really trying :rofl: This is an oversimplified example of how, beyond the perfect symmetry of SR, retarded and advanced waves might not need to share the entirety of their histories, in this case in a scattering event.
Time goes from left to right.
What is it that I obviously want to discuss, KK? The only thing that I've been discussing is the faulty assertions in your OP, but you can believe that I'm talking about something else if it makes you sleep better tonight.
Something wherein electrons are not waves, i.e. something that is not quantum mechanics. And by all means, but elsewhere.
https://en.wikipedia.org/wiki/Alternatives_to_general_relativity
Not much agreement in science it seems.
You're confused. It's your OP that fails to show that electrons are waves. You've only been able to show that the beam is a wave. So if a requirement of QM is a belief that electrons are waves, then your OP isn't about QM either. That's all I'm saying.
There seems to be a point in an academic progression at which a student may get into a course in his area that is an abrupt excursion into something that seems weird and unlike anything he has encountered before. If he is fortunate and has a really good prof he may become enthusiastic and proceed, or, more likely, he may have an indifferent prof and exit the discipline. My own experience was a beginning grad course in set theory. It could have come close to leveraging me out of math, but the young, energetic prof made it both interesting and pleasantly challenging. I never returned to the subject, but I stayed in the discipline.
I had a year of physics, planning to become a physicist - but reading about quantum theory showed me the error of my thinking! Hats off to Kenosha Kid. :up:
The OP is not deriving QM, merely summarising it. Conversation would be pretty limited in scope if you have to re-derive from first principles everything that you intend to discuss every time.
Thanks jgill! Math is much harder, I think. My department didn't offer a general relativity course when I was an undergraduate, so I had to sit in on the math department's course to learn about it. Hands down the hardest time I had at uni. Although fluid dynamics comes a close second. Kudos to your friend.
I was thinking about what you said about the asymmetry of boundary conditions, e.g.
Quoting Kenosha Kid
Quoting Kenosha Kid
Quoting Kenosha Kid
Mathematically, of course, any consistent set of boundary conditions is on equal footing with any other. And in any case, rather than solving a boundary value problem by time-evolving a wavefunction (forward and/or backwards), we can equivalently solve a least action problem, which obviates the question of where to start and where to end, since we are doing it everywhere at once. Which makes me wonder about the physical significance of all this, and particularly your main take-away about determinism.
The "trick" of putting some of the boundary conditions ahead in time makes the point a rather trivial one. Another way to state it would be to note that if there is a fact of the matter about the way the world is going to be at some future time, then there is nothing indeterminate about it. Well, of course.
I am also going to take a little issue with this:
Quoting Kenosha Kid
I wouldn't agree with the statement that the wavefunction is non-physical because it has a complex component. We can represent uncontroversially real entities with complex functions, as you are no doubt aware (e.g. the electromagnetic field in classical electrodynamics, and generally any 2D model where complex representation is expedient). Perhaps your thinking here is prompted by the QM formalism of observables - linear operators that, when applied to the wavefunction, produce real values that correspond to measurements (of position, momentum, and other attributes of a quantum state). But if only measurements are real, then nothing about the wavefunction as such is real, not even its absolute square: a probability density is not a measurement.
Anyway, this is probably a diversion (or not - you tell me). I myself don't regard the question of what there is as important. I take a theory as a whole, with all its ontological furniture, as real (enough) to the extent that it does a good (enough) job.
Then either QM is flawed in how it goes about showing that an electron is a wave, or your summarization of QM is flawed. I never asserted that electrons are or are not waves, merely that you didn't show that they were.
Thank you for saying so :)
Quoting SophistiCat
In the simple symmetric Minkowski spacetime, yes, it is trivial, and I think that's fdrake's point too. If the idea isn't controversial and the resultant determinism is trivially derived, I will take an "Of course!" But as I said to fdrake, transactional QM itself is still probabilistic. It removes the irreversible wavefunction collapse but which hole will be sent back is still undetermined. The last two images in the OP are essentially a recourse to other physical considerations leading to a conclusion that non-determinism is a case of running away with ourselves.
But the really good question here is the idea of boundary conditions as a "trick". You're right, we can do this in non-relativistic QM too. We'd essentially be forcing a final-state dependence by hand, mimicking in the Schrödinger equation, where it is not needed, what is justified in the Dirac equation. My argument here is that the form of the equation is physically meaningful, i.e. the choice of boundary conditions is not just a matter of expedience. The Copenhagen interpretation derives from the allowance to calculate wavefunctions from an initial state only, which is an approximation to reality, and it is this feature that yields wavefunction collapse and its inevitable probabilism. But this isn't real.
Quoting SophistiCat
But these aren't physical either. It is simply that complex exponentials are much easier to manipulate than individual sines and cosines. I'm not trying to do the wavefunction down, though. Whatever its ontology, it is important for predicting experimental outcomes and therefore corresponds to something physical. But no complex quantity can be physical in itself, i.e. we can't observe it in nature.
Quoting SophistiCat
That's actually not an uncontroversial statement. The wavefunction is frequently referred to epistemologically as the total of our knowledge about a system. The OP basically states that it encodes more ignorance than knowledge.
Quoting SophistiCat
No, all good points, especially about the "trick" of boundary conditions.
Again, I was not aiming to re-derive QM from scratch. If you want to know more about why individual electrons are waves, you can Google it.
Quantum field theory is the most foundational theory we have, underpinning everything we know about except gravity. It might be that QFT can be generalised to curved spacetimes but it's probably more likely that there's a more fundamental theory out there somewhere, something like string theory maybe. Check out Kaluza-Klein theory for why string-type theories are expected: https://en.wikipedia.org/wiki/Kaluza%E2%80%93Klein_theory?wprov=sfla1
Do you think that a hot object knows that the cooler object is cooler when it radiates heat? Of course you do not, because you know that thermodynamic equilibrium, which determines whether emission occurs or not, is a feature of the object's relationship with its environment. And I'm sure you know that the definition of "black-body" is based in thermodynamic equilibrium. Since this idea which you have (I should call it an "ideal") that emission/absorption is reversible, is dependent on black-body conditions, its practical significance is very limited.
Let's see if I can determine its significance. Emission/absorption is only reversible when an object is at thermodynamic equilibrium, which is when emission will not occur. When emission does occur, the object is not at thermodynamic equilibrium. Therefore emission/absorption is never reversible. However, we do have a slight issue, which is "black body emission". Of course this anomaly indicates that the ideal is faulty.
Quoting Kenosha Kid
All radiation, other than black body radiation (which is an anomaly produced by misconception), is a feature of the relationship between the emitting body and its environment. There is no need to employ this type of imagery, such as the emitting body "knows" its environment.
Quoting Kenosha Kid
Clearly you cannot "run the movie backward". The idea that you can take an object's radiation of energy to its surroundings, and turn it around such that you can represent it as its environment radiating the energy to the object, is completely unjustified, and obviously wrong.
Quoting Kenosha Kid
Have you ever heard of "mechanical efficiency"? Mechanical efficiency is always less than 1, because a mechanical system always loses energy to its environment, friction for example. Clearly, we cannot have a reversal process, because energy is lost. We all know that perpetual motion is nonsense. It's covered by the second law of thermodynamics. Therefore you ought to know that your proposal of reversal is ridiculous.
That would be unnecessary according to the OP. It is sufficient that each particle knows where it is going. The statistical emergence of thermodynamics would arise from the fact that trajectories from (r,t) to (r',t') are more probable than their reverse. Indeed, all of thermodynamics is derivable from QM (via statistical mechanics: in fact, that's how we were taught stat mech at my uni), without any notion of heat being forced in by hand.
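A toy illustration of that statistical emergence (this is standard microstate counting, not the OP's trajectory argument; N is an arbitrary choice of mine): spread-out configurations correspond to so many more microstates that "heat flows from hot to cold" emerges statistically from reversible underlying dynamics.

```python
from math import comb

# Toy microstate counting: N particles shared between two halves of a box.
# The number of microstates for a split (n_left, N - n_left) is C(N, n_left),
# so near-even splits dominate overwhelmingly, and irreversible-looking
# behaviour emerges statistically from reversible underlying dynamics.
N = 100
all_left = comb(N, N)          # all particles on one side: exactly 1 microstate
even_split = comb(N, N // 2)   # even split: ~1e29 microstates

print("all on one side:", all_left)
print(f"even split     : {even_split:.2e}")
```

No microscopic irreversibility is put in by hand anywhere; the asymmetry is purely one of counting.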
Quoting Metaphysician Undercover
No, a blackbody radiator is a non-equilibrium thermodynamic system, that is: it is not in equilibrium with its environment. An oven is something that radiates in its interior. So it's the interior radiation that is in equilibrium. The oven as a whole is still non-equilibrium.
Quoting Metaphysician Undercover
Again, the CPT (charge-parity-time) symmetry of quantum field theory is not a new idea of my own that I'm presenting for consideration. It is a known symmetry that is broken by a few rare processes. I'll give you a heads up now, since you keep making this error: very little of what I've presented in the OP is original. The new-ish bits are that a) one cannot draw conclusions about where a particle may be found at a given time by considering only that particle at the time, and b) that the birth and death of a particle are its true boundary conditions. Those might be considered novel or controversial.
Quoting Metaphysician Undercover
At thermodynamic equilibrium, the rate of emission equals the rate of absorption (clue's in the name).
Quoting Metaphysician Undercover
That's precisely what a system in equilibrium with its surroundings does. But I was discussing fundamental processes, not statistical ones. Fundamentally, a particle moving from one position to another is reversible for instance, e.g. things are not constrained to move in the same direction along a given axis.
Quoting Metaphysician Undercover
Thermodynamics does not demonstrably lead to loss of information. In fact, conservation of information and entropy are related.
Quoting Kenosha Kid
What stops complex quantities from being physical?
I think you're doing okay without me, dude :D
Quoting fdrake
It's a good question. We certainly don't measure something as having a complex value, which could be a cognitive or technological limitation. It might be that the wavefunction is a 'real' (existing) thing and we encode our ignorance about its underlying nature as a complex phase (e.g. a compact dimension).
Or it might just be a mathematical trick, like Cat's examples. According to the Hohenberg-Kohn and Runge-Gross theorems (and, um, the me theorem), the wavefunction is uniquely given by the charge and current density and the environmental fields, all real-valued. We just don't have a good way of dealing with these quantities directly: wavefunctions are easier.
My take in the OP is that the wavefunction has some kind of existence and the real-valued densities arise from considering retarded and advanced waves. I wouldn't go as far as saying that the complex wavefunction is an accurate depiction even within this quite literalist view.
You would have thought that a book on the ontology of the wavefunction would examine the question of its complex nature, but nope. The word 'complex' appears a total of 9 times, one of which is in its everyday sense. Its entire insight on this question is: Schrödinger was surprised.
But it did jog my memory. The wavefunction can be written as a real function multiplied by a complex phase defined everywhere (this is trivial: a complex number has magnitude and phase). The phase is important for interference effects, but makes no difference when it comes to observables. The former is why I believe in an ontic wavefunction, and the latter is why the wavefunction might be considered epistemic.
Also, that complex phase function is directly related to the probability current, while the real part is directly related to the probability density. In fact, a single-particle wavefunction can be written as [math]\nabla\psi = \frac{1}{2}\nabla n + i\frac{d\mathbf{j}}{dt}[/math], where n is probability density and j is probability current. I'm not sure that's ever been published... It was on my whiteboard for years but never made it into a paper.
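The standard relations between the wavefunction and these two real-valued quantities are easy to check numerically. A minimal sketch (my own toy plane wave, with hbar = m = 1, using the textbook formulas n = |ψ|² and j = Im(ψ* ∂ψ/∂x), not the whiteboard relation above):

```python
import numpy as np

# 1D plane wave psi(x) = exp(ikx): density n should be 1, current j should be k*n
k = 2.0
x = np.linspace(0, 10, 2001)
psi = np.exp(1j * k * x)

n = np.abs(psi) ** 2              # probability density
dpsi = np.gradient(psi, x)        # numerical derivative d(psi)/dx
j = np.imag(np.conj(psi) * dpsi)  # probability current (hbar = m = 1)

print(np.allclose(n, 1.0))                       # uniform density
print(np.allclose(j[5:-5], k, atol=1e-3))        # current = k * n, away from edges
```

The density comes from the real magnitude alone; the current vanishes if the phase is constant, which is the sense in which the phase carries the flow.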
Now recalling the four-current from relativity, we have [math]J = [n, i\mathbf{j}][/math]. (In tensor formulations we write the elements as real and move the imaginary part into the metric.) Again we see this relationship between a real spatial part and an imaginary momentum part, even without quantum mechanics.
So I'd venture that the complex nature of the wavefunction is to do with the relationship between space and time. It might not be something about the particle itself, but rather how we describe particles in space and time.
That's nonsense, to say that a particle knows where it's going. Are you suggesting that the particle has a mind of its own? And it's really no different from my example of saying that the hot object knows where the cold object is. All you are doing is qualifying this to say it's not really the hot object which knows where the cold object is, it's the energy within the hot object which knows where the cold object is.
It's not the case that the particle knows where it is going. What is the case, as I explained in my earlier post, is that the human conception of radiant energy is based in the absorption of energy, because it is empirically based. As I said, "we can talk about something radiating energy into empty space, into an infinite vacuum or some such thing, but this is not consistent with the concept which is based in objects receiving energy, not in objects emitting energy." It's not the case that the energy transmitted must know where it's going, what is the case is that we do not have the understanding which is required to conceptualize radiant energy in any way other than through the empirical observations of absorption. Emission is simply a logical extension of that conception. Then you find some axioms which state that emission is the reverse of absorption, and so be it, in your mind. Despite all the obvious evidence that it is not.
Even the idea that there is a particle which is transmitted is unsupported by evidence. The emitting object has a field. There are waves between the emission and the absorption. That there is a particle which is emitted cannot be empirically verified. Whatever it is which is emitted (waves I am told by physicists), cannot be directly observed without being absorbed. Therefore your logical conclusion, that there is a particle which is emitted is simply a product of your desire to represent emission as the reverse of absorption. You are begging the question. Emission is the reverse of absorption, a particle is absorbed, therefore a particle is emitted. But the evidence is contrary to this, because all that is observable between the emitting and the absorption, is wave patterns. So you have absolutely no justification for the claim of a continuously existing particle being emitted from one location, and being absorbed at another. I would stress that you appear to have a desire to represent emission as the reverse of absorption, so you theorize that there is a continuously existing particle in between, to support this theory. But the empirical evidence clearly suggests otherwise.
Quoting Kenosha Kid
Yes, but being "non-equilibrium", means that it is based in the concept of equilibrium. Which is what I said, even if you didn't interpret it that way.
Quoting Kenosha Kid
Yes, I realize that, I've come across most of what you have written here in researching my replies to you. Do you get most of your information from Wikipedia? You ought to pay more attention to respected physicists instead. Someone like Dr. Feynman for example describes energy transmission as waves, not as particles. Your idea that a particle moves from emission to absorption, though it might make interesting discussion on the internet, is not really accepted by mainstream physics. That's why the in between is represented by a wave function. I'm sure you are aware of the fact that the wave function is not meant to represent the continuous existence of a particle. It is meant to predict where a "particle" (or whatever it is which bears that name) might appear.
Quoting Kenosha Kid
Sure, a particle moving from one place to another is reversible. But physicists tell us that electromagnetic energy moves from one place to another as waves, regardless of what pseudo-scientists on the internet are saying. So this is the difficulty you need to overcome in order to have your theory even considered. It might be considered by people who don't know fundamental principles of physics, and think that electromagnetic radiation is transmitted as particles, but physicists know that transmission is through waves.
Quoting Metaphysician Undercover
If a substantive thing, (massive object), is inclined toward temporal continuity (as inertia implies), yet "feels" a force which would impel that object to change, then there are two very distinct forces involved, the force to stay the same, and the force to change. If the object stays the same, despite feeling the force which would impel it to change, doesn't this appear to you like the object has made a choice, and exercised will power to prevent the force of change?
No, I did something that has apparently never occurred to you: I got an education.
Quoting Kenosha Kid
Well, exactly. You can reproduce all complex-valued math without recourse to imaginary numbers or complex exponentials - it would just be more work. But then I don't understand why you insist that
Quoting Kenosha Kid
Complex quantities are no more and no less physical than real quantities, tensors, vectors, and whatnot. They are all mathematical objects.
Quoting Kenosha Kid
Quoting Kenosha Kid
Sorry, you seem to be making a distinction without a difference here. Both the amplitude and the phase are essential for encoding our knowledge about the system. So if that makes the wavefunction real in a broad sense (which is fine by me), then the whole of it has to be real, not just the amplitude.
Looks more like you got an indoctrination, a process of teaching a person or group to accept a set of beliefs uncritically.
Any theory that doesn't attempt to explain the role of the observer/measurer in an event involving observation/measurement is missing half of the explanation, especially when we observe that changing the measuring devices changes the observed outcome.
From the overlap integral of the retarded wavefunction with the advanced wavefunction:
In transactional QM, this is the very meaning of the Born rule. It's not my transactional QM btw, it's been around a while I think.
Quoting SophistiCat
When a particle moves from event (r,t) to (r',t'), it still does so by every possible path (Feynman's sum over histories). If you sum up every possible r' at t' and normalise, you recover the wavefunction at t'.
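A minimal two-path instance of that amplitude summing (toy numbers of my own, not from the thread): accumulate the phase exp(ikr) along each slit path to a screen position, add them, and square the total.

```python
import numpy as np

# Toy double slit: sum the phase factors exp(i*k*r) for the two paths to each
# screen position y, then take the modulus squared (Born rule).
k = 50.0                      # wavenumber (arbitrary units)
d = 1.0                       # slit separation
L = 20.0                      # slit-to-screen distance
y = np.linspace(-5, 5, 1001)  # positions on the back screen

r1 = np.sqrt(L**2 + (y - d / 2) ** 2)   # path length via slit 1
r2 = np.sqrt(L**2 + (y + d / 2) ** 2)   # path length via slit 2
amplitude = np.exp(1j * k * r1) + np.exp(1j * k * r2)
intensity = np.abs(amplitude) ** 2

# Interference: intensity oscillates between ~0 (dark bands) and ~4 (bright bands)
print(intensity.min() < 0.1, intensity.max() > 3.9)
```

The full sum over histories weights every path, not just the two straight ones, but the two-term version already shows why squaring a sum of complex phases produces bands.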
Quoting SophistiCat
Ah, I see. This is deeper than I'd realised. Take any complex number and attach a physical unit to it: we never see that. I see I have 10 fingers and that I'm 5.917 feet tall, but I have never weighed (12 + 2i) stone.
Quoting SophistiCat
I'm getting confused now between ontologically real and real as in has no imaginary component. I started it, mea culpa. I'll rephrase.
The OP holds that the complex wavefunction is an ontic description -- or fair approximation to such -- of how particles propagate through space and time as we represent them. How we describe the relationship between time and space is intrinsically complex, just from straight relativistic vector calculus, which happens to be the language quantum field theory is written in. There are other languages for describing relativity that can do away with the imaginary number and it may well be that in future we can generalise QFT in a similar way; such an endeavour would be part of a general relativistic quantum mechanics. So I don't hold the complex wavefunction to be an ontic description of the particle itself, rather it encodes the ontology of the wavefunction in our spacetime representations accurately, e.g. encodes information vital for doing the physics.
Quoting Kenosha Kid
Yet the screen, double slits and the electron emitter are all macro objects composed of electrons and all have an effect on the outcome of the experiment.
Looks like QM has limits as well.
We only conceived of atoms in order to explain observations of macro-sized objects and QM was conceived of to explain the behavior of atomic sized objects. Seems to me that if the theories were compatible they would seamlessly integrate, like genetics and evolution (micro vs macro explanations of the same process).
Why would there be limitations if this is supposed to be a theory explaining the fundamentals of reality, if it's not that both classical and QM are explaining different views/measurements of the same thing?
The second quote you have taken out of context, and the reference link does not link to the context. Notice that I have "feels" in quotation marks, because this was not my terminology, and I was criticizing this way of describing the situation.
The same criticism is applicable here too though. If the electromagnetic field, through which radiation transmits, extends from one object to another, and there is energy which is received at the other, but not enough to cause a physical effect (i.e. not enough for the object to absorb a photon of energy by photoelectric effect), then if we are using that terminology which KK chooses to use, you might say that the object "feels" the other, without being physically affected by it. If this were really the case, then we ought to conclude that the second object exercises will power to prevent itself from being physically affected by the other, because this is the only reasonable way that we have to talk about one object being affected by another, without causing a physical change. By the precepts of physics, if one object affects another, there is necessarily a change to the other, or else we cannot say that the one affects the other, because that claim would be unsupported by empirical evidence, and physics does not accept panpsychism as providing reasonable explanatory principles.
Quoting Kenosha Kid
As evident from the terminology which you use, (described in my reply to jgill above), your education was not in physics. Nor was mine, so we ought to be on par for any approach to this matter of physics.
Quoting Kenosha Kid
This is where you're wrong, and you ought to refer to some real physics to sort yourself out. There are many true statements one can make about "the wavefunction", but the wavefunction does not describe the propagation of particles through space and time. That is a false proposition. We can say that the wave function may be used to predict where a particle will appear, through a description of the propagation of energy (as wave motion), but we cannot conclude that it describes the propagation of particles. It really describes the propagation of waves, hence "wave" function.
But this only gives the "real paths" of the electron once the "boundary condition" on the other end is fixed, i.e. once the measurement already happened at the back screen. This doesn't explain any actual data though: we have no independent knowledge of those "real paths" besides what the interpretation tells us. What we have from experimental setup and observation are just the boundary conditions, the origins of the retarded and the advanced wavefunctions. And while we fix the former by our setup, all we know about the latter in advance is:
Quoting Kenosha Kid
which is no more than what vanilla QM tells us and doesn't explain the really interesting bit, i.e. the measurement problem. And isn't that what we really want from an interpretation?
So what mechanism fixes the forward boundary condition?
Quoting Kenosha Kid
Well, of course, if you take something that is usually represented by a scalar, such as height or weight, then a real number will be optimal as a mathematical representation. But take something like stress, for example, and you'll want a tensor or a vector at the least. (Although if you really set your mind to it, you can map a quantity of any dimensionality to any other dimension. You can map 6-tuples to scalars and back - it'd just be horribly impractical.)
Quoting Kenosha Kid
Yeah, I suspected as much.
Quoting Kenosha Kid
Fair enough.
It would help if this issue is clarified.
Yes. I think your question was: how do we get the Born rule? The Born rule is derived from each retarded path in the sum over histories overlapping with each advanced, conjugate path coming back from the screen. The Born rule would only apply to the real paths.
Quoting SophistiCat
True, but this is actually a more minimalistic interpretation of empirical data. It yields a single position on the back screen without magical collapse mechanisms or (hopefully) proliferating universes, and still yields the same interference effects as standard QM interpretations.
TQM, Copenhagen, MWI, Bohm etc. are all obliged to yield the same experimental predictions, having the same mathematics. The point here is more that the philosophical ramifications of an ideal screen in Copenhagen are naive, and that the fact that complex conjugates of solutions to the Dirac equation are also solutions of it (more generally, that there is no obvious arrow of time in special relativity) does not support the assumption of such colossally irreversible and information-losing processes. There are more physically well-grounded and more rigorous approaches to understanding the same experimental data.
Quoting SophistiCat
There is possibly no measurement problem in this formulation, because the (numerically) real aspect of the wavefunction is never spread out across the screen. The electron travels from (r,t) to (r',t') by every possible path, but only to (r',t'). If you were to place a screen midway at r'', you wouldn't get the same paths, therefore no collapse.
It is, I think, impossible to prove that every path through every other possible value of (r',t') can be eliminated by considering the entire history and future of the electron (or, respectively, the future and history of the hole(s) it overlaps with), hence the jokily-named not-many-worlds interpretation. But conversely it's impossible to prove that the electron does actually have a real probability to be found at any point on the screen. The former is simpler, however, and it is possible to state with absolute surety that not every part of the screen that Copenhagen says is accessible actually is.
Quoting SophistiCat
Well, the wavefunction is a scalar field :)
Complex numbers form a 2D vector space over the reals which is isomorphic to [math]R^2[/math] (vectors take the form (Re(z), Im(z)); adding real and imaginary parts works just the same as adding x and y components, which is why the plane representation of complex numbers works).
Complex numbers as a field are isomorphic to a collection of 2 by 2 matrices.
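That isomorphism is easy to exhibit: map a + bi to the real matrix [[a, -b], [b, a]] and check that matrix arithmetic tracks complex arithmetic. A sketch of the standard construction:

```python
import numpy as np

def to_matrix(z: complex) -> np.ndarray:
    """Map a + bi to the 2x2 real matrix [[a, -b], [b, a]]."""
    a, b = z.real, z.imag
    return np.array([[a, -b], [b, a]])

z, w = 2 + 3j, -1 + 4j

# The map is a field isomorphism: it preserves addition and multiplication
print(np.allclose(to_matrix(z) + to_matrix(w), to_matrix(z + w)))
print(np.allclose(to_matrix(z) @ to_matrix(w), to_matrix(z * w)))

# i corresponds to the rotation matrix [[0, -1], [1, 0]], and i*i = -1
print(np.allclose(to_matrix(1j) @ to_matrix(1j), to_matrix(-1)))
```

So any claim about complex numbers being unphysical has to survive the fact that an entirely real-valued matrix representation obeys exactly the same equations.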
If it's impossible for an imaginary quantity to be physical, and it's possible for a real quantity to be physical, why would it be the case that one complex number representation (x+yi) can't be physical, and another (2 by 2 matrices of real numbers) might be physical? I think that leads to three possibilities:
(1) The property "is physical" (or "is physically meaningful" or whatever) is not preserved by isomorphism of structures.
(2) Neither complex numbers nor real numbers can be physical.
(3) Both complex numbers and real numbers can be physical.
I find (1) implausible: all the mathematical properties of an algebraic structure are preserved over isomorphism, so whether something is physically meaningful or not would then have to depend on something extra-mathematical that somehow varies with an isomorphism. Could lead to situations where two models have all the same equations and experimental predictions but one is somehow physically meaningful and the other isn't.
I find (2) implausible: describing reality well is what good theories do. Assuming it leads to absurdities: the concept of mass might be physical, yet the number associated with every mass measurement could not be.
Left with (3). Though it's got quite a lot of work left to do in it. How "physical meaning" distributes over the components of a physical model is something I've wondered about since seeing geometric series infinite sums used in inelastic rebound; computing the infinite sum assumes an infinite amount of collisions with arbitrarily small bounces. But that's not a "physically meaningful" part of the model.
This might trivialize C. Here's a quote from the web, from an educational perspective:
"Question. Is C isomorphic to R2? Answer. As a what? A field? Question (revised). Is the field C isomorphic to the field R2? Answer. NO! R2 is not a field, it's a vector space! Question (re-revised). Is the vector space C isomorphic to the vector space R2? Answer. That's a good question! However, it is meaningless/misleading. A vector space isomorphism is only defined between two vector spaces over the same field. R2 is a two dimensional vector space over R and C is a one dimensional vector space over C."
Elementary dynamical systems in C I have worked on would not have been possible in R2.
Quoting jgill
Aye. All the "i" is doing when thinking of C as a 2d vector space over R is keeping track of the second coordinate. It is an isomorphism of vector spaces over the reals, but not an isomorphism of fields. So in that respect, it might be misleading for me to have said that "the complex numbers form a 2d vector space" over R, because "the complex numbers" are a field. You and the quoted thing are right! My mistake.
I wrote a version of the post you responded to that only used that fact, but considered it too weak as it didn't respect the field structure. The second fact, the isomorphism of the field C with a field of 2 by 2 real matrices, I believe suffices for my point.
My guess is that KK is an engineer.
We've never managed to measure anything that is a 2x2 matrix either. We use matrices a lot -- they are useful, but they are constructs. I don't think nature knows about them.
What I think is a better criterion is whether we can do without them. Negative numbers are empirically indispensable, interference effects being a great example, electric charge being another. Complex numbers may also be indispensable, or they might just be efficacious. Given the way they enter into the equations, my feeling is that it's the latter, but I'm not trying to make that particular case in the OP. However complex things are at root, empirically they manifest as real.
Quoting Kenosha Kid
Very skeptical of that. Whether a quantity can be physical or not should not depend on how it is recorded. Somehow individual measurements are physical but tabulating them makes them incapable of being physical. Nature can "know" about the vector elements but not the vector? The matrix elements but not the matrix they constitute together?
Quoting Kenosha Kid
"empirically manifesting as real" and "being physically meaningful" don't play so well together I think. Depends on how they're fleshed out.
EG: if the criterion for a theory (as a whole) being physical is successful prediction of experimental results ("manifesting as real"), it's silent on theory elements. A conception like that wouldn't let you intervene mid argument with a physical insight to conjecture a next step or ignore a class of solutions, since it doesn't constrain theory elements at all, only constraining theories' being "physical" as a whole by resultant predictions.
Or: if the criterion for a theory element being physically meaningful is simply that it is part of a theory which produces successful predictions, then the complex conjugated solutions are physical since they are part of the theory. But that also trivialises physical insight, as the criterion cannot distinguish between theory components as before.
Or: if the criterion for a theory element being physically meaningful requires that it is a measurable quantity, then only measurable quantities in theories can be physically meaningful - and that rules out the discussion item (as @SophistiCat implied with his observables comment).
From what I've seen, that is how physical meaning/the property "is physical" is used, to operate on steps mid argument, to rule out solution classes, to declare something a mathematical trick or not. A criterion that either makes all theory elements equally physical, does not care about theory elements at all, or requires that all concepts employed have measurable values can't suffice here.
Please not!
How you tabulate results is a matter of convention, for communication, or performance. You can represent classical waves as sine waves: completely real. Or, for ease, you can represent them as complex waves. For the exact same wave. The latter adds information that is related to the former (because imaginary sines are related to real cosines and vice versa), but we're not transforming the physical wave from a real field to a complex field. It's still real. So how we tabulate things can't change their physical reality.
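Here's that equivalence in a few lines (a toy wave of my own): do the superposition once with real cosines and once with complex exponentials, taking the real part at the end. Same wave, two tabulations.

```python
import numpy as np

t = np.linspace(0, 1, 1000)
w = 2 * np.pi * 5          # angular frequency
phi1, phi2 = 0.3, 1.2      # phases of two waves

# Real representation: sum of cosines
real_sum = np.cos(w * t + phi1) + np.cos(w * t + phi2)

# Complex representation: sum of complex exponentials, real part taken at the end
complex_sum = np.exp(1j * (w * t + phi1)) + np.exp(1j * (w * t + phi2))

print(np.allclose(real_sum, complex_sum.real))
```

The imaginary part of `complex_sum` is just the same superposition done with sines; it is carried along for algebraic convenience and discarded, not measured.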
Quoting fdrake
Yes. Vectors are a way of dealing with multiple similar quantities which transform in similar ways. Some of those quantities may be related to one another, but there's nothing I can think of that makes the vector itself anything more than a notational convenience.
Quoting fdrake
Beyond falsification, which depends on the element and how it works inside the theory, yes.
I won't quote the rest, just sum up. A theory is tested empirically, not its individual elements. If the theory as a whole (or a subset of elements, setting aside irrelevancies to a particular experiment) yields otherwise inexplicable or more accurate predictions for experimental outcomes, it's a good theory.
What we see in QM is that it's a good theory, but contains various combinations of elements to which experimental outcomes are insensitive. It might be that you cannot remove or replace or augment just one element, but the whole combination.
This is evident in the various interpretations of QM. In Copenhagen, the electron is a complex wave, the field acts linearly, there is one measurement outcome and spontaneous collapse. In MWI, the electron is a complex wave, the field acts linearly, there are an infinity of outcomes and the wave evolves forward in time deterministically. In Bohm, the electron is a real, classical particle, the field is nonlinear, there is one outcome, and the particle evolves forward in time deterministically. In transactional QM without my edits, the electron is a complex wave, the field is linear, there is one outcome, and the wave evolves forward and backward in time probabilistically.
They're not all right, yet they all, in principle, yield the same predictions of the same experimental outcomes. Nature cannot care that much how we represent it.
To clarify: let's imagine that someone's repeated a measurement of a mass twice. They've written both down. The first measurement I'll call "a", the second measurement I'll call "b". Which of these (if either), do you mean:
(1) The process of aggregating both measurements into a vector (a,b) is not physically meaningful (I agree with that).
(2) Representing things as vectors is nothing more than a notational convenience.
I'll assume the premise:
(3) Things adopted for the sole reason of notational convenience are not physically meaningful.
I think that's justified from the remarks you've made about complex representations of mathematical objects being convenient tricks.
So, I don't think you mean (2) if you also think (3), as that rules out vectors from being physically meaningful. So that goes for all vector quantities! And then typical objects of physics are no longer physically meaningful by that rule. So I'll assume you mean (1).
Quoting Kenosha Kid
If you mean (1), I don't think it applies to the context I meant. I took something which was not physically meaningful (the complex number x+yi) because it had an imaginary component, fed that number through an isomorphism of structures into a real valued matrix which could not be ruled out of being physically meaningful on that basis. The two structures are equivalent, but the criterion of "must not contain an imaginary number" rules the first out of physical meaningfulness but not the second.
I also don't believe you have applied this doctrine: "A theory is tested empirically, not its individual elements" consistently, though there is a lot of ambiguity between going from talk of whether an element in a theory is physically meaningful and whether the whole theory is good. Regardless, I see a few cases:
(A) You are happy to declare that a whole theory is unphysical if it relies upon complex numbers.
I don't see that as plausible since you've said you see the wavefunction as ontic in some regard, and it relies upon complex numbers. If the criterion of whether a theory is physically meaningful or meaningless is determined by the accuracy and precision of its predictions (rephrasing your quote), it won't care whether the theory contains complex numbers anyway - so declaring a theory non-physical on the basis of it requiring complex numbers is an equivocation. Up to suspicions about needing to remove complex quantities from physics, anyway.
(B) You are happy to declare that an element of a theory is unphysical if it relies upon complex numbers.
I see that as plausible, as there are examples of you doing it in the thread. Which goes against the other idea (up to ambiguities) that a theory is physically meaningful iff it makes accurate predictions, and seems in tension to me with having ontic commitments to the wavefunction.
My remarks in thread have been in the context of (B), and I don't really want to get into a Motte and Bailey situation. Motte: complex numbers are unphysical. Bailey: the mark of a good theory is its capacity to produce accurate predictions. The issue in the Motte is to my mind about how one manages ontological commitments within theories (maintaining an ontic commitment to the wavefunction while claiming complex quantities are non-physical). The issue in the Bailey is to my mind the claim that good theories come with accurate predictions. I'm not picking a bone with the Bailey, I'm picking a bone with the Motte.
The general perspective I'm coming at this from is some sort of scientific realism, I'm ontologically committed to the existence of entities in scientific theories. I strongly agree with this:
Quoting Kenosha Kid
Which is why I'm pressing the issue; if nature doesn't care how we represent it, why would whether something could be physical or not vary with an isomorphism of structures? Why would a criterion to decide whether a structure is physical decide differently depending upon which representation of a structure you choose?
I'll give you a few examples of why the complex wavefunction might be complex because of representation-specific factors, which I think will answer your question.
1. The wavefunction contains unphysical information
The wavefunction doesn't just encode the dynamics of the particle, but of similar particles in similar scenarios. The actual particle is described entirely by real quantities, but we cannot know them and the minimal representation that covers them all is complex. This is the case in Bohmian mechanics and can be taken as an extreme case of the OP in which only real trajectories are ever tried.
In which case the best, minimal representation of the particle is not an isomorphism of the complex wavefunction, avoiding the problem you see.
2. The wavefunction contains physics not about the particle
This is where I'd hedge my bets the most, which is essentially that, due to our choices about how to represent particles, the minimal quantity we can deal with has to be complex in order to yield good experimental predictions. In this case, the truest minimal representation of the particle (i.e. in a better representational framework) would again not be isomorphic with the complex wavefunction.
This is the case with special relativity, where to get the right answers out in simpler representational frameworks, some of the four-vector elements have to be complex. This is avoided by a more sophisticated framework involving metrics, which is not typically done in QM except in attempts to do QM in curved spacetime (as far as I am aware).
3. The wavefunction contains only information about the particle but in a suboptimal representation
Here we might not gain or lose information about the particle as we hop from one representation to another, and in fact the relationship might be isomorphic. However, in our choice of representation, that information must be encoded as a complex field. This is, I think, the one you have in mind.
An example might be that space is 4D, containing an additional compactified dimension, and that the true meaning of the complex phase is a coordinate offset along this extra angle. So what we represent as [math]\psi(\mathbf{x}) = f(\mathbf{x})e^{iA(\mathbf{x})}[/math] (such that [math]n = f^2[/math]) might better be something like [math]\psi(\mathbf{x}, \theta) = f(\mathbf{x}) \cos(\theta + A(\mathbf{x}))[/math], which yields all of the desired interference effects.
This is an isomorphism to and from the complex wavefunction and would be physically meaningful. However, the complex-valuedness of the wavefunction would then simply be a non-physical artefact of an incorrect choice of representation, i.e. a representation that does not correspond to the physical universe. Of course, we could only say that if there were some test for it. Maybe some future test is possible, but for the time being the wavefunction is not an observable, and, were it observable, QM says we would not observe it as having a complex value*, which is sufficient (I think) to state that it is non-physical as represented.
*The condition of real numbers for observable quantities is a postulate of quantum mechanics, manifest as the insistence on Hermitian matrices to represent physical operators. This is all I really had in mind when I referred to the complex wavefunction as nonphysical, i.e. the minimalist thing we can do with the wavefunction is to act on it with the Hermitian density operator. That said, I think I'll defend the broader point you're taking me to task on.
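As a numeric sanity check of the compactified-angle example above (with my own choice of embedding, the phase entering as an angular offset): averaging the squared real field over the extra angle θ reproduces the complex interference term, up to an overall factor of two.

```python
import numpy as np

f1, f2 = 1.0, 0.7          # real amplitudes of two interfering contributions
A1, A2 = 0.4, 2.1          # their phases
theta = np.linspace(0, 2 * np.pi, 100000, endpoint=False)

# Real field on the compact angle, phase entering as an angular offset
real_field = f1 * np.cos(theta + A1) + f2 * np.cos(theta + A2)
avg_intensity = np.mean(real_field ** 2)

# Standard complex superposition
complex_intensity = abs(f1 * np.exp(1j * A1) + f2 * np.exp(1j * A2)) ** 2

# Twice the theta-average matches the complex modulus squared:
# both equal f1^2 + f2^2 + 2*f1*f2*cos(A1 - A2)
print(np.isclose(2 * avg_intensity, complex_intensity))
```

That's only a consistency check on the interference term, of course, not evidence for an extra dimension.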
Note that electric impedance is a complex variable. So there exist classic physical variables described by complex numbers.
That is also merely a convenient representation. Voltage and current are real, but in AC currents they are sinusoidal. Complex exponentials are easier to manipulate mathematically than sinusoidal functions, and one can simply dismiss either the real or the imaginary part at the end. Impedance is then the ratio of the complex voltage to the complex current. This does not mean that the voltage or the current are truly complex; it is merely convenient to treat them thus.
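The phasor bookkeeping described here can be verified directly. A sketch with made-up component values: solve a series RC circuit using a complex impedance, take real parts at the end, and check that the real voltage-balance equation holds at every instant.

```python
import numpy as np

# Series RC circuit driven by v(t) = V0*cos(w*t); component values are arbitrary
R, C, V0, w = 100.0, 1e-6, 5.0, 2 * np.pi * 1000

Z = R + 1 / (1j * w * C)        # complex impedance of the series RC
I = V0 / Z                      # current phasor

t = np.linspace(0, 0.005, 5000)
i_t = np.real(I * np.exp(1j * w * t))             # real steady-state current
q_t = np.real(I / (1j * w) * np.exp(1j * w * t))  # real charge on the capacitor
v_t = V0 * np.cos(w * t)

# Kirchhoff's voltage law: v(t) = R*i(t) + q(t)/C, with real quantities throughout
print(np.allclose(v_t, R * i_t + q_t / C))
```

The complex Z never appears in the physical check at the end; it only shortcuts the trigonometry of solving the steady state.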
It does mean something. It means that complex exponentials are easier to do calculus with than sine waves.
Well, if complex numbers are nothing more than a mode of computation, there's no reason to worry about their use in the wave function.
Thanks, I'll try not to lose sleep over it.
Kenosha seems undecided on this. On the one hand Kenosha seems to insist that the wave function represents real waves. On the other, Kenosha asserts that the wavefunction is the representation of a particle. Someone, other than me, ought to tell Kenosha Kid that particles and waves have completely different spatial-temporal representations. And, the point which Kenosha refuses to acknowledge is that there is an incompatibility between the two representations which renders them as incommensurable.
The obvious problem is that the medium ('ether') of the electromagnetic waves has not been identified. Therefore the real properties of the waves cannot be observed or determined. Instead, these waves are represented by ideals such as sine waves. However, since these waves are not ideals, but real physical entities, with real spatial-temporal constraints, there is a degree of uncertainty in application. So until the real medium is identified, and the real waves are studied, the incompatibility between the two distinct spatial-temporal representations will remain.
I think that's pushing risk aversion a little too far.
I think what is at issue here is the geometrical shape of a fundamental unit of space. If we propose a fundamental unit of space, as an infinitesimal, then the shape of the infinitesimal will influence the way that we conceive of the possibility of motion through space. Notice that the two fundamental representations of 2d (and consequently 3d) space, the square and the circle, are fundamentally incompatible. Straight lines can never be reconciled with curved lines. The two representations will not merge, and incompatibility is demonstrated every time we try. Though we have developed many ways to work around this issue, when we get to infinitesimals the difference becomes significant.
Consider the difference between representing an area of 2 dimensional "space" with squares, and with circles. The squares can be placed side by side, and all the "space" will be covered. This does not work with circles. Side by side circles will not cover the "space". So now we need to overlap the circles, and the process of representation becomes very complex. Is there a fundamental circumference size, or do they vary? If we assume that all fundamental, infinitesimal circles (spheres in reality) are the exact same size, then there is the complicating factor of positioning their centers (points). The relationship between centers must be represented, and now we tend to fall back on the square representation.
But if we want to maintain the status of "fundamental unit" assigned to the infinitesimal circles, we cannot undermine this assignment by relating the circles to each other with squares, because that places the square representation as more fundamental than the circle. Therefore we need to disassociate "the point" from the dimensional representations (lines, squares, cubes), the point being non-dimension in essence anyway, and reconstruct a representation of 'real space' using curved lines, which is completely independent from, and not influenced by that faulty dimensional representation.
This would create fundamental circles, but then we still need to determine what type of relationship one point has to another, and this is where the real difficulty lies. The curved lines would represent the essence of space, but the relationship between non-dimensional points would represent the essence of time, being prior to space and therefore non-dimensional.
Yes, perturbation theory being an obvious example. Quantum electrodynamics is usually treated with perturbation theory, with each term in the perturbation series being a Feynman diagram.
Although topologically the same.
Quoting Metaphysician Undercover
I have done quite a few investigations into linear fractional transformations, and one feature that makes them important is they transform Circles into Circles, where the capital C is in recognition of the fact that a straight line is simply a circle with infinite radius. This has to do with the Riemann sphere.
It appears the rest of your post goes into the hyperreals, where others on TPF have greater competence.
A circle with an infinite radius is an incoherency. This is exactly the problem I am talking about, the faulty attempts by mathematicians to make circles compatible with straight lines. It necessarily results in incoherency.
The logical thing to do when faced with this glaring incompatibility is to address the nature of reality, and attempt to determine the reason for that incompatibility, rather than to attempt to veil it, or cover it up with such incoherent principles. It's no secret that mathematical axioms may contain incoherency. If the incoherency is veiled, and the axioms are useful, they will be accepted by convention. To bridge a gap of incompatibility, such as the relation between the non-dimensional point, and the dimensional line, we can use whatever means is proposed by mathematicians, and proves useful to that end, but unless the real nature of that divide is understood, there is no truth provided by the application of the axioms.
We have learnt over thousands of years of application, to measure distances between objects with straight lines and 3 dimensional representations. However, we've come to conceptualize force, which is the driver of motion, as based in torque, and this is rotational. Naturally, we fall back onto the only means of measuring which we have, the three dimensional straight lines which we know and love, employing things like vectors to represent rotational force. So the incompatibility rears its ugly head. Why do we insist on developing all sorts of convoluted and complex mathematical axioms designed to bridge this gap of incompatibility between the measuring technique (straight lines) and the actual reality (curved force), instead of dismissing that 3d measuring technique altogether, as inadequate, and delving into the true reality of curved existence?
The first principle of the curved space is "the point", which represents the center of the circle. We often even imagine a point as a tiny sphere. But the point is non-dimensional, so a spherical representation is unjustified. The second principle is that if we model a three dimensional space as surrounding that point equally, we have irrational ratios (incoherency), known as pi and the square root of two. So the first logical conclusion is that equality in the dimensions of space is a false premise, irrational. We cannot represent space as having equal dimensions. Therefore a rotational force such as torque cannot act equally in each of the dimensions which it is represented as acting in. Such a representation is an ideal, based in the necessary spatial equality of eternal circular motion (Aristotle), which is not a reality. The reality of such circular motion was dismissed when the orbits of the planets were discovered to be ellipses.
So let's say that a force, which is derived from a non-dimensional point, must have a start. And, that start must be directional, it cannot be equal in all directions. The start cannot be equal in all directions because this is denied by irrationality. (Of course one might argue that reality is irrational, but that's pointless, and contrary to the vast evidence we have of our capacity to understand reality.) So we ought to dismiss the irrational approach of spatial equality surrounding a point, and start with the premise that a force emerging from a non-dimensional point is necessarily directional. The direction is not a straight line though, perhaps like a spheroid without symmetry because it necessarily has a starting direction.
Quoting jgill
Thanks for the heads-up, but obviously I don't accept the reals as an acceptable solution to the incompatibility described above, so the hyperreals are completely irrelevant to me. Therefore I would not adhere to any such principles. I approach infinitesimals from the metaphysical perspective of the problem in establishing a relationship between spatial extension and temporal continuity, like Peirce, not from the perspective of how mathematicians attempt to deal with this problem. The mathematical approach, of reducing rotational principles to straight line 3d representations, I reject as incoherent, just like your claim that a straight line is a circle with infinite radius.
They do. Generally, Fourier transforms of real functions are complex. The Fourier transform of a wavefunction of position and time is the wavefunction of momentum and frequency. Any wavefunction with nonzero momentum is complex in the position representation; the ground state wavefunction of a stationary system, by contrast, is generally real. This gets back to the fact that it is motion -- things changing position with time -- that introduces complexity. This is true also in special relativity, where the four-current in the vector representation is complex for nonzero current (for the + - - - convention... the charge is complex in the - + + + convention). This goes away in the tensor formulation of relativity because the spacetime metric handles the manipulation of real four-vector elements to yield real observables. Metrics are not used in QFT afaik, except in attempts to derive a QFT of gravity. Perhaps they should be, and perhaps if they were all wavefunctions would be real.
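A minimal numerical illustration of that point (assuming Python with numpy; the Gaussian packet and the boost momentum k = 3 are arbitrary choices): a stationary Gaussian is real and has zero mean momentum, while the same packet boosted to momentum k becomes complex in position space.

```python
import numpy as np

# Position grid and a stationary, ground-state-like wavefunction (real Gaussian).
x = np.linspace(-20, 20, 4096)
dx = x[1] - x[0]
psi0 = np.exp(-x**2 / 2)
psi0 /= np.sqrt(np.sum(abs(psi0)**2) * dx)

# Give it momentum k: the moving packet is complex in position space.
k = 3.0
psi_k = psi0 * np.exp(1j * k * x)

# Momentum-space picture via the Fourier transform; mean momentum of a state.
p = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)

def mean_p(psi):
    weights = abs(np.fft.fft(psi))**2
    return np.sum(p * weights) / np.sum(weights)
```

Here mean_p(psi0) is (numerically) zero and psi0 is purely real, while psi_k has a substantial imaginary part and mean momentum close to k: complexity enters with motion.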
As an illustration, the four-momentum in vectorial relativity is [math]P = [ E, i p_x, i p_y, i p_z ][/math]. The squared magnitude is then [math]E^2 - p_x^2 - p_y^2 - p_z^2 = E^2 - p^2 = m^2[/math] in units where c = 1. This gives us the Einstein equation [math]E^2 = p^2 + m^2[/math]. The imaginary unit is required to get the minus sign on the squared momentum into the equation.
In tensor notation, we instead have the metric encode the relationship between time and space coordinates. R is a square matrix with diagonal (1, -1, -1, -1), 0 elsewhere, and [math]P = [E, p_x, p_y, p_z][/math]. The above is then [math]P^a R_{ab} P^b = m^2[/math]. Everything is now real because the relationship between time and space has been removed from the quantity under consideration (the four-momentum) and placed in a tensor that solely handles that relationship.
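The two bookkeeping schemes can be checked against each other numerically (a sketch assuming Python with numpy; the four-momentum components are arbitrary example values):

```python
import numpy as np

E, px, py, pz = 5.0, 1.0, 2.0, 3.0   # example four-momentum, units with c = 1

# "Vectorial" form: imaginary spatial components, plain dot product.
P_vec = np.array([E, 1j * px, 1j * py, 1j * pz])
m2_vec = np.dot(P_vec, P_vec)        # the i's supply the minus signs

# Tensor form: all-real components, with the metric diag(1, -1, -1, -1)
# carrying the relative sign between time and space.
R = np.diag([1.0, -1.0, -1.0, -1.0])
P = np.array([E, px, py, pz])
m2_tensor = P @ R @ P
```

Both contractions give E² − (px² + py² + pz²) = 25 − 14 = 11: same invariant mass squared, with the complexity either inside the vector or delegated to the metric.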
Go right ahead and spin your metaphysical web. Like the flock of sparrows now sitting on my fence, the peanut gallery awaits your penetrating views. :cool:
Like the flock of sparrows sitting on your fence, the peanut gallery has no interest in the true nature of space and time. The truth about reality is just too far removed from what they believe about reality, so they are unprepared to even set their bearings in the right direction.
No worries Cat. I look forward to your challenging responses.
But nonetheless can be counted upon to show up anyway.
In case you haven't noticed, TPF is full of them. So you may feel right at home here with your far fetched idealism.
I suspect I will not understand the truth about reality when you reveal it, but I'll give it a try. :up:
I will never reveal such a thing, because it is not understood by anyone. But I can often tell when a theory takes us in the wrong direction.
Ha ha haaaaaa
I already showed you how your thesis, which is a turning away from the vast array of evidence that energy is transmitted as waves, towards a theory which treats this transmission as a movement of particles, is a turn in the wrong direction.
I thank you for your input. I disagree with your analysis and do not see it as consistent with QM. As I said on page 1, whatever alternative theory you have outside of the QM framework might make an interesting thread in its own right.
This assessment, that my objection to your theory is not consistent with QM, is simply a product of your misinterpretation of QM. Clearly, wavefunctions represent waves, not particles as you insist from your interpretation. I suggest that you are interpreting QM from the perspective of quantum field theory and this way of approaching QM inclines you to apprehend these waves as being particles. But there is nothing inherent within quantum mechanics itself which would incline one to believe that a quantum of energy exists as a particle rather than as waves. Clearly the wavefunction does represent waves. So to interpret it as representing particles is a misinterpretation. This is made more evident from the fact that the relation between the wavefunction and the particle is one of probability.
So here's an example to elucidate this point. Let's say that I employ some fancy mathematics to determine the probability of you replying to this post. I cannot truthfully claim that this mathematics represents your reply (which doesn't even exist right now), because it represents the probability of your reply. You ought to consider a similar relation of probability between the wavefunction and the particle.
You clearly don't know the first thing about it. I hold a PhD in it. Thanks but honestly I'm not looking for help from ignorant blowhards with intellectual pretensions. Whatever this idea of QM you have is, it isn't QM and, fascinating as it might be to you, it's not relevant to this thread, never has been, never will be.
Told!
By the way, why do you present this stuff on a philosophy forum when you are completely uninterested in philosophical discussion of it? Why leave your peers? Have you been rejected?
It's posted in Philosophy of Science to examine the philosophical ramifications of QM on determinism. This thread isn't meant as an advertisement for QM. However QM is the scope of this particular question about determinism, reason being that people use QM a lot as if to prove that determinism is dead. It's not. There are deterministic and non-deterministic interpretations. My point here is that even the seemingly non-deterministic ones don't have anything to say about determinism.
Look, I resisted playing the PhD card for four pages and really didn't want to do it, opting to leave little hints here and there that I guess you didn't see. I don't feel good about it, but if you were as interested in the subject as you are in appearing like an authority on it, this wouldn't be an issue. Anyway, I'm sorry I lost my temper too.
I thought the electron couldn't be 'found' anywhere until it is measured? It is not a discrete entity that exists in some unknown location until such time as it is registered. In fact there really is no such thing as 'an electron' until it is measured, when it manifests as a registration on a plate - which is the basis of 'the measurement problem'. It doesn't exist somewhere, lurking about undiscovered - all you can point to is the distribution of probabilities. There is no 'it', per se, until it's measured.
But your explanation presumes that there is an electron as a discrete existing particle that exists independently of being measured. Whereas, if you are to question the 'Copenhagen Interpretation', isn't that precisely the point at issue?
Incidentally, 'the copenhagen interpretation' was a phrase coined by Heisenberg in 1955 when he wrote Philosophy and Physics. It referred only to the general ideas of himself and Bohr with respect to what could and could not be stated on the basis of quantum physics. So I hardly think it's 'simplistic'. The reason that it's objected to, is because of the challenge it poses to scientific realism, which is the idea that there are indeed discrete, mind-independent sub-atomic entities called 'particles'. That is certainly the main reason Einstein objected to it.
'According to the Copenhagen interpretation sub-atomic particles generally do not have definite properties prior to being measured, and quantum physics can only predict the probability distribution of a given measurement's possible results. The act of measurement affects the system, causing the set of probabilities to reduce to only one of the possible values immediately after the measurement. This feature is known as wave function collapse.' (Wiki).
However, the 'collapse' of the wave function is actually a metaphor, as there never is an actual wave per se (any more than there is an actual particle). The 'wave function' is a distribution of probabilities that appears as a wave, but it's not an actual wave. Hence the fundamental ambiguity at the heart of quantum physics.
The measurement problem is that the wavefunction that describes the electron can be in a superposition of observables, but when we measure it it's always in one. It is always a wave. Even if we managed to measure its position to arbitrary accuracy, it would be a wave in momentum space still. (Likewise if we measure its momentum exactly it's a wave in space. This is the uncertainty principle.)
So it's a question of how the electron transforms from one wave to another. The double slit example is of it reducing from the kind of wave that spreads out from the slits to the kind of wave with a fairly precise position (exact, or some kind of wave-packet, which is more realistic).
Quoting Wayfarer
No, Copenhagen is just the above with that transformation being probabilistic according to the Born rule. The question of the epistemology of the wave is related but separable.
Quoting Wayfarer
In truth, it's less the interpretation than how it's applied. In the double slit experiment, the back screen is treated as ideal. This means the only thing that affects where the electron can be measured is the electron wavefunction, which is ambiguous, hence the measurement problem.
The insight drawn from this is that the wavefunction evolves into this ambiguous state and can collapse to any position on the ideal screen. But this holds only for ideal screens.
We'd get a different answer if we could calculate the time-dependent many-body wavefunction of the entire apparatus, but we can't. So the ideal calculation becomes the only thing we have to go on. My point is that it's unwise to take artefacts of that idealisation seriously just because we lack the computing power to do it properly.
Quoting Wayfarer
As per my many responses to MU, this may well be true, but it's not QM.
This is a thread about the implications of QM for determinism on a philosophy of science channel of said philosophy forum. If I'm out of line in my OP, can you be a bit more explicit?
I don't think you're out of line - but I also don't think you're fully grasping the philosophical implications of the issue.
I'm not trained in physics beyond high-school but I have studied units in philosophy of science, and have read a few of the serious popular-level books on the subject, like Manjit Kumar's Quantum, sub-titled Einstein, Bohr, and the Great Debate About the Nature of Reality. Why do so many of the books about the subject refer to the 'nature of reality'? I also recently read Adam Becker's What is Real? and David Lindley's Uncertainty: Einstein, Heisenberg, Bohr, and the Struggle for the Soul of Science. All of them are basically dealing with the philosophical implications of physics. I've been listening to Philip Ball's lectures, and talks by Jim Baggott, both of whom are well-regarded and by no means crackpots or cranks (and there are some in this subject). They all acknowledge that there is a deep philosophical problem surrounding the ontological status of the wave function - whether it's real, or simply a mathematical device, or an artefact of the understanding. None of that is resolved.
(Lately I've discovered, through this forum, a French philosopher of science called Michel Bitbol, who I think has the best take on the philosophical issues. He's done some really interesting papers on the philosophy of physics of Schrodinger and Heisenberg, amongst other subjects (which can be found here). Distinctly Continental, I suppose you might say, but in this area I think the Continentals are streets ahead of the Anglos!)
Quoting Kenosha Kid
The back screen is physical, it's not 'ideal'. When you run the experiment, the results are recorded on a physical screen.
Quoting Kenosha Kid
But again, the Schrodinger equation is wave-like, but is it an actual wave? I posted a question about this experiment, which you can see here on Physics Forum and here on Physics Stack Exchange if you're interested.
The gist of the question:
One comment is that 'the boundary conditions of the experiment are not time-dependent'. So I was trying to argue that the wavefunction itself is therefore transcendent with respect to time (and so also space, as in relativistic physics time and space are aspects of the same reality). A timeless wave, if you like. But that got smacked down with: don't be stupid, the particle interferes with itself! (per Dirac).
Do you recognize the difference between a mathematical statement of probability, and a description? If so, let's move away from the idea that the wavefunction describes anything. Furthermore, when we make predictions (and a prediction is not a description) concerning events, using probabilities, we refer to the possibility of those events either occurring or not occurring. When an event may or may not occur, as is the case with a prediction, it is apprehended as not determined. Since the wavefunction is a statement of probability concerning a future event, the probability of detecting a particle, it refers to something which is not determined. With what part of this do you disagree?
Quoting Wayfarer
Kenosha Kid likes to waffle. When it suits Kenosha's purpose, the electron is a particle. When it suits Kenosha's purpose, the electron is a wave. But Kenosha adheres to no concrete principles to distinguish between when the electron is observable as a particle, and when it is observable as a wave. Kenosha Kid will call it a particle, or a wave, depending on what is required at that point in the discussion.
I generally find Kenosha's posts a model of clarity. I often don't agree with them but not because I think they're 'waffle'.
Yes, Kenosha is very clear and descriptive, as I said in my first post in the thread. Nevertheless, there is waffling back and forth as to whether KK believes that the electron is best described as a particle or as a wave. Let me show you. The following two quotes are from the op, where we can see quite clearly that KK's intent is to reduce the electron to a particle with deterministic existence, and nothing more:
Quoting Kenosha Kid
Quoting Kenosha Kid
But when questioned, KK admits that the so-called particle is really a "wave-packet", or "plane waves", in what follows.
Quoting Kenosha Kid
And in the following reply to Harry H, KK is clear in the claim that electrons are waves.
Quoting Kenosha Kid
But then Kenosha goes right back to the op hypothesis that electrons are particles:
Quoting Kenosha Kid
Quoting Kenosha Kid
Quoting Kenosha Kid
In the following, you'll find Kenosha describe three different interpretations of QM, two in which the electron is waves, and one in which it is a particle:
Quoting Kenosha Kid
Remember, Kenosha's thesis that the electron is deterministic, requires that it is a particle, like the Bohm interpretation above. But whenever pressured on the question of whether the electron really is a particle or a wave, Kenosha always returns with, it's a wave, as below:
Quoting Kenosha Kid
So it appears to me like Kenosha Kid is trying to make the argument that the electron is a deterministic particle, when Kenosha has actually learned, from many years of study, that the electron is a wave, and is really not so deterministic.
This is similar to Kant's phenomena/noumena distinction. Sensing the world (observing) is not a simple passive arrangement where the sentient being receives what is out there. It is an interaction with the world.
Quoting tim wood
Oh get real Timmy. What the fuck do you know about reality? Obviously I went to the school of MU. Ever heard of it? I approached this thread with a desire to discuss, and learn. But Kenosha Kid would not oblige me, and immediately went on the defensive, refusing to discuss the fundamentals of QM, saying that this is outside QM.
True, and discussed in this thread in some depth. Please point out where you think I failed to understand it. The OP is fairly ambivalent about the ontology of the actual wavefunction. At one extreme, you can consider the full solution to the Schrödinger equation as ontic. At the other, only the transacted trajectories are ontic. It's not my intent to push anyone one way or another.
Quoting Wayfarer
As is clear from the preceding text:
Quoting Kenosha Kid
this is regarding the Copenhagen interpretation. The probability of finding the electron anywhere on it is given only by the electron wavefunction, which is equivalent to treating the back screen as an ideal metal.
Quoting Wayfarer
The solutions to the Schrödinger equation are waves. The equation itself is not a wave. I think all of those books you mentioned would have told you this.
Quoting Wayfarer
That's harsh of them. The wavefunction is time-dependent, but only in its phase. The probability of finding it at a given position is unaffected by this phase. This is not a general feature of wavefunctions.
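A one-liner's worth of numerics makes the point (assuming Python with numpy; the Gaussian state and the energy E are arbitrary illustrative choices): for a stationary state, the time dependence sits entirely in the phase factor e^(−iEt), which drops out of the position probabilities.

```python
import numpy as np

# A stationary state: spatial part times a time-dependent phase e^{-iEt}.
x = np.linspace(-10, 10, 1001)
psi = np.exp(-x**2 / 2)              # real, ground-state-like spatial part
E = 0.5                              # stationary-state energy (hbar = 1)

# Born-rule position probabilities at two different times.
prob_t0 = abs(psi * np.exp(-1j * E * 0.0))**2
prob_t1 = abs(psi * np.exp(-1j * E * 7.3))**2
```

The two probability distributions are identical: the phase evolves, the observable statistics do not, which is exactly why the boundary conditions of such an experiment are not time-dependent.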
This isn't addressed to you per se, just for general clarity: the concrete principle adhered to is the expansion postulate of QM, which states that the wavefunction is always a linear admixture of one or more Eigenstates of a given operator. After measurement, this is equal to a single Eigenstate of the measurement operator.
QM does not maintain two different types of electron: one wave-like, one particle-like, and swap between them. It is always a wave. That wave can be a position Eigenstate or not. I think I've already addressed this once before.
The particle-like behaviour evident in measurement is not that the electron ceases to be a wave at all, but that the wave somehow reduces to a single Eigenstate of the measurement operator.
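A toy sketch of the expansion postulate in a 4-dimensional Hilbert space (assuming Python with numpy; the random Hermitian operator and random state are placeholders standing in for a real measurement operator and electron wavefunction):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy Hermitian "measurement operator" on a 4-state system.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (A + A.conj().T) / 2
eigvals, eigvecs = np.linalg.eigh(A)   # columns are orthonormal Eigenstates

# An arbitrary normalised state...
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

# ...is always expressible as a linear admixture of those Eigenstates.
c = eigvecs.conj().T @ psi             # expansion coefficients <e_n|psi>
probs = abs(c)**2                      # Born-rule probabilities, summing to 1

# "Collapse": after measuring outcome n, the state is the single Eigenstate n.
n = np.argmax(probs)
psi_after = eigvecs[:, n]
```

The state is a wave (a superposition) before measurement and a single Eigenstate after; at no point does the formalism swap in a second, particle-like kind of object.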
If you posted your OP at physicsforum.com, I'd be interested to see what other physicists say about it. I intuitively feel there's a problem with it, but of course I don't have the skills to say what, exactly. And this is a philosophy forum, where [math][E - V] u = \frac{1}{2m}[p - A]^2 u[/math] is not, as it were, lingua franca.
Quoting Kenosha Kid
Yes - and this is 'a difference that makes a difference'. The fact that time or rate is not a boundary condition is, I feel, of the utmost import, philosophically.
"Somehow"? So if it's always a certain state, and only changes its state when interacting with a measuring device, which would also be made of waves, then what is so special about a measuring device (which is just a large group of electrons) that changes the nature of an electron? And how do you know that what you are talking about isn't the measurement, but what is actually measured, if the end result is an effect of electrons interacting with a measuring device which you don't get when the electron isn't measured?
Quoting Kenosha Kid
Here you talk about single electrons, as if they are particles before going through the slits.
And I don't see where you addressed the issue that a single wave can interfere with itself after passing through double slits, but a single electron, which you claim is a wave, cannot. What makes an electron wave different in that it cannot interfere with itself, but a wave of electrons (a wave of waves?) can interfere with itself? And if a wave can consist of smaller waves, then what does that say about electron waves?
What prevents the electron waves in the beam from interfering with the electron waves that make up the board with the slits and the electron waves that make up the screen? Why does the wave only interfere with itself if all the other parts of the experiment are an assortment of electron waves?
Ok, I'm fine with this. Now, if the electron "is always a wave", can you explain the principles which allow you to refer to the electron as a particle with a trajectory. Here's an example of such speak from the op:
Quoting Kenosha Kid
Quoting Kenosha Kid
Quoting Kenosha Kid
Quoting Kenosha Kid
Quoting Kenosha Kid
I understand "trajectory" as the path of a body or object moving through space, a common example is a projectile. Of course this descriptive term is inconsistent with the way that energy is known to be transmitted by waves. So the challenge to you is to explain how you manage to reduce the wave transmission of energy to a trajectory. You clearly affirm that the energy is a wave, yet you propose that the energy is projected with a trajectory rather than a wave transmission.
Please pay particular attention to your description of scattering, in which you describe an electron as existing at a particular position, and leaving a hole at that position. You know that having a determinate position, and having a determinate momentum are mutually exclusive descriptions of the electron. So how would you reconcile these two incompatible descriptions of the electron, one in which it has a momentum as a wave packet, and the other within which it has a position with the capacity to leave that position creating a hole there?
In relation to quantum trajectory theory, here's an article (I haven't had time to read it completely yet) which might interest you, if the link works: https://doi.org/10.3390/e20050353
Entropy 2018, 20(5), 353
Anyway, here's a line from the conclusion of that article:
Ha! And that was the nice version. It's not written like that normally. The important thing is not the terms, but that energy and momentum are not linearly proportional: E ~ p^2. This translates as requiring two boundary conditions in space and only one in time: the initial state. It's the shape of the equation that's of interest.
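The shape of the problem can be sketched for a particle in a box (assuming Python with numpy; the grid size, mode count, and Gaussian initial bump are arbitrary choices): two boundary conditions pin the wavefunction in space, while a single initial condition fixes it in time.

```python
import numpy as np

# Particle in a box [0, L]: the two spatial boundary conditions are
# psi(0, t) = psi(L, t) = 0; the single temporal condition is psi(x, 0).
L = 1.0
x = np.linspace(0, L, 501)
dx = x[1] - x[0]
modes = np.arange(1, 30)

def phi(n):
    # Box eigenfunctions, vanishing at both walls.
    return np.sqrt(2 / L) * np.sin(n * np.pi * x / L)

# Initial condition: a Gaussian bump in the middle of the box.
psi0 = np.exp(-200 * (x - L / 2)**2)
psi0[0] = psi0[-1] = 0.0

# Expand psi0 in eigenmodes and evolve each with its phase e^{-i E_n t}.
c = np.array([np.sum(phi(n) * psi0) * dx for n in modes])
E = (modes * np.pi / L)**2 / 2       # hbar = m = 1

def psi(t):
    return sum(c[i] * phi(n) * np.exp(-1j * E[i] * t)
               for i, n in enumerate(modes))

psi_t = psi(0.01)
```

The evolved wavefunction still vanishes at both walls at the later time: the two spatial conditions constrain the solution at every t, whereas time needed only the one starting condition, exactly the asymmetry E ~ p² builds into the equation.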
Quoting Wayfarer
That's a conversational dead-end then :rofl: I would expect the controversies would be different. Transactional QM, while mathematically identical to any other QM, is not as supported as Copenhagen, MWI, or SUaC (shut up and calculate). The reaction from Copenhagenists will be that it's absurd because it's not Copenhagen, just as MWI is absurd. The reaction from MWIers will be that it's wrong because it's not MWI, just as Copenhagen is wrong. That is the usual quality of debate about QM interpretations on Physics Forum.
The genuinely controversial part of the OP is the idea that, if we could solve the many-body wave equation for the whole apparatus, we would find that a given electron does not have the characteristic double-slit band pattern at a given time. I expect that most would say it would, but we cannot solve the equation to find out. I say it wouldn't, but we cannot solve the equation to find out. But my take includes more physics.
I don't think the stuff you're bringing up matters so much except maybe at extremes such as the wavefunction being a "metaphor" which is so far from conventional QM as to be also irrelevant here. As per my discussion with fdrake, I don't assume a particular ontology beyond the Born rule, which is a postulate of QM. The extent to which the wavefunction mathematically represents or encodes something about the physics -- which is valid since it makes physical predictions -- it is something that can be calculated and discussed as an object.
This is discussed in the OP here:
Quoting Kenosha Kid
In perturbation theory and path integral formalisms of relativistic QM, such as quantum electrodynamics, one specifies initial and final states, as per the form of the Dirac equation, which treats space and time (momentum and energy) to the same power. This is what is meant by a path or trajectory.
If we do this with final position states for all positions on the back screen, we reconstruct the wavefunction defined over all of those points. The wavefunction and the Green's function are closely related.
Quoting Metaphysician Undercover
These descriptions aren't incompatible. Any wavefunction can be written as a superposition of Eigenstates of any measurement operator. If my electron collapses to an exact position state, for instance, and an electron in the screen is a wave-packet spread around that position, either the latter has to be scattered away from that position or the former is blocked from being found there.
Quoting Metaphysician Undercover
Great title! Yes, in Bohmian mechanics the electron always has definite position and momentum, and thus a well-defined trajectory.
https://scitechdaily.com/scientists-crack-quantum-physics-puzzle/
Full paper here: https://www.nature.com/articles/s41467-020-18652-w
This isn't quite what I thought it was, but exhibits similar behaviour.
Anderson localisation is caused by impurities in semiconductors that scatter the wave so much and so randomly that it interferes out almost completely and cannot progress.
In the OP, the scattering sites are just other electrons in the screen, which will be much less severe.
However both cases have similar outcomes insofar as paths that aren't viable in the future are never tried in the past.
Is there any 'given electron' - prior to it being measured?
Yes, in quantum theory, including quantum field theory, the electron is considered to be there whether it's measured or not. For instance, it interacts with the Higgs field all the time, without which it would move at the speed of light.
But that is the whole point at issue in the 'measurement problem', is it not?
Kumar, Manjit. Quantum: Einstein, Bohr and the Great Debate About the Nature of Reality (p. 236). Icon Books Ltd. Kindle Edition.
The bolded sentence seems to undermine the premise that the electron can be considered to be there, measured or not.
The so-called states you talk about here, are not actual states, they are probabilistic. This is the difference I've been telling you about, between a description and a statement of probabilities. And a reconstruction of the wavefunction over all the points produces a map of probabilities, not a description of an actual trajectory.
Furthermore, the precepts of special relativity, which as you say puts time and space on an equal footing, make it difficult (if not impossible) in some instances, to distinguish temporal aspects from spatial aspects. The problem being that there is a real difference between a spatial separation and a temporal separation, because by the nature of time, a temporal separation is not invertible, while a spatial separation is. The separation between time1 and time2 cannot be treated in the same way as the separation between spatial point A and point B, because the empirical evidence demonstrates that things only move from time 1 to time 2, and the opposite is impossible. If you put time and space on equal footing, and allow for reversal of time in your theory, you have allowed a principle which is inconsistent with empirical evidence.
There is a way around this temporal reality which might help you with the transactional interpretation. In metaphysics and theology there are principles which allow that a power, or active force, (normally represented as final cause), might act prior to the passing of time. This is how a free willing being can change the way that material existence will be, from one moment to the next.
So for example, if the passing of time is determined by us through reference to physical change, then there also must be a cause of physical change itself, which is necessarily prior to the passing of time. As each moment of time passes the material world will exist, or be, in a particular way at that moment. The power, or force, which causes the material world to be as it is, at each moment as time passes at the present, must be prior to the passing of time at the present, and therefore a cause which is in the future.
Quoting Kenosha Kid
When one of them excludes the possibility of the other, this means that the two are incompatible. We can represent two of them as mathematically equivalent if we want, like ice is the solid state of liquid water, and the two might be mathematically equivalent, but that it is in the form of ice excludes the possibility that it is liquid water, because the two are incompatible, meaning that it would be contradictory if it is both liquid and ice at the same time.
'Position' is a state. All of the possible positions constitute a complete basis set. Any wavefunction can be written as a superposition of these positional basis functions (the Eigenstates of the position operator). After measurement of position, the wavefunction somehow collapses to a single position. This is an example of the measurement problem.
Likewise 'momentum' is a state. All of the possible momenta constitute a complete basis set. Any wavefunction can be written as a superposition of these momentum basis functions (the Eigenstates of the momentum operator), including an Eigenstate of the position operator (which is a plane wave in momentum space). After measurement of momentum, the wavefunction somehow collapses to a single momentum. This is another example of the measurement problem.
It is not that the electron doesn't exist when it's a wave: the wave conserves charge, mass, etc. It's that the wave spontaneously changes from a superposition to a single Eigenstate upon measurement.
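The point about superpositions over basis sets can be made concrete with a few lines of numpy. A minimal sketch (my own illustrative values): a Gaussian wave packet is expanded in the plane-wave (momentum) basis via a discrete Fourier transform, then reconstructed exactly from those coefficients, and the Born-rule weights sum to the same total probability in either basis.

```python
import numpy as np

# Discretised position grid (arbitrary units; illustrative values)
N = 1024
x = np.linspace(-20, 20, N, endpoint=False)
dx = x[1] - x[0]

# A Gaussian wave packet: a superposition state with mean momentum k0
k0, sigma = 2.0, 1.5
psi = np.exp(-x**2 / (4 * sigma**2)) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalise

# Expand in the momentum basis: coefficients of the plane-wave Eigenstates
c_k = np.fft.fft(psi)

# Reconstruct the wavefunction as a superposition of plane waves
psi_back = np.fft.ifft(c_k)

# The expansion is exact: the state IS a sum over momentum Eigenstates
print(np.allclose(psi, psi_back))   # True

# Parseval: total probability is basis-independent (Born rule weights)
prob_x = np.sum(np.abs(psi)**2) * dx
prob_k = np.sum(np.abs(c_k)**2) / N * dx
print(np.isclose(prob_x, prob_k))   # True
```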
Here's a random image I found of an electron wavefunction in some wave-packet state collapsing to a position Eigenstate on measurement of position:
Please read what I wrote again and make an effort to absorb it.
Quoting Metaphysician Undercover
This is your repeated claim, but it has not been shown. Neither in relativity nor in relativistic quantum mechanics is there a preferred direction of time. Histories of particle motions constitute worldlines in 4D, with no intrinsic arrow.
Quoting Metaphysician Undercover
Actually the empirical evidence proves that time and space are interchangeable, i.e. those dimensions in one frame of reference get mixed together in another frame of reference. Look up the Lorentz transformations.
Quoting Metaphysician Undercover
Interesting. It sounds a bit similar to the OP, in so far as the physical requirements of the existence of the material world in the future dictate the possible causes in the past. Coherence is very much a wavefunction feature.
Quoting Metaphysician Undercover
See my above response to Wayfarer. A wavefunction can *always* be written as a linear combination of states from any basis set. This is the expansion postulate of QM.
That's exactly why these theories are deficient, they are not consistent with empirical observation. Empirical observation provides us with a unidirectional time. Those theories do not give us a principle to account for the reason why time is unidirectional, thus allowing you to work with a reversible time. Therefore the theories are deficient with respect to empirical observation, in that sense.
We can say that the second law of thermodynamics provides a principle to account for the direction of time, and this is what makes ideals such as Aristotle's eternal circular motion, and perpetual motion machines impossible, in reality. And as you said, the Copenhagen interpretation allows for a loss of information, making it consistent with that law. I believe that Bohm proposes that the entire universe be considered as one whole, to deal with this problem. But special and general relativity do not provide adequate principles to deal with the entire universe as one whole, as is evident from concepts like dark matter and dark energy, so the need for hidden variables appears.
You propose a simple ideal, the notion that time is reversible, and create from this ideal the notion of an equality between emission and absorption, which is dependent on the notion that a future moment is invertible with a past moment. Of course this ideal has no bearing on the reality which we know, within which there is a substantial difference between past and future. The whole idea reminds me of people with less than high school education proposing ideas for perpetual motion machines, without any respect for conservation of energy and the second law of thermodynamics. The fact that this idea is coming from someone who claims to have a PhD in this field just baffles me. Shame on you! I want to repeat what Timmy said to me, "What school taught you that?"
Quoting Kenosha Kid
The Lorentz transformations provide mathematical principles for reconciling different frames of reference. They provide no empirical evidence that time and space are interchangeable. They might be applied under the assumption that time and space are interchangeable, but such application leads to the problems I mentioned above, which is evidence that this assumption is wrong.
Quoting Kenosha Kid
Not quite. "Causes in the past" are epistemic determinations. This is the way that we as intelligent beings represent a line of temporal continuity. We say that Q happened at an earlier time, and caused R at a later time. But this is not a true representation of reality, being simply a representation of our apprehension of temporal continuity. In reality, whatever comes to be at t1, as Q, is caused by something in the future of t1, and whatever comes to be at t2, as R, is caused by something in the future of t2. The only true causes are always in the future, and being in the future, they have no material or physical existence. We know them as the immaterial cause of material existence (immaterial Forms, God).
Quoting Kenosha Kid
Sure, just like a quantity of H2O can be expressed as a combination of ice and liquid, but that's an admission that you cannot distinguish the boundary which you claim to be able to determine with those principles. When the precise boundary between H2O as liquid, and H2O as solid cannot be exactly established and appears to be vague, we can express the states along that vague boundary as a combination of both, some of the water is frozen, some is liquid. But such an expression just indicates that the boundary cannot be determined, and the method of expression is an acceptance of this. When we cannot apply the law of excluded middle we say that somehow it is a combination of both.
And yet no one has devised an experiment to show that photon emission/absorption is unidirectional, or motion is unidirectional, or matter/antimatter pair creation/annihilation is unidirectional, and these constitute almost all of the elementary phenomena studied by the most empirically-tested theory ever: quantum electrodynamics.
Going beyond QED, we do see arrows of time emerging: weak nuclear decay, the Higgs mechanism, cosmology, thermodynamics. I have drafted a follow-up thread to this one touching on the last two which are interesting because fundamentally they are reversible. Thermodynamics, for instance, is just the motions, creation and destruction of particles. General relativity retains the bidirectionality of special relativity. The cosmological and thermodynamic arrows of time appear circumstantial rather than fundamental.
The actual arena of CPT-violating phenomena is very small, but might be vital, e.g. in explaining the absence of antimatter in the universe. However for most experiments, such as the double slit experiment, they are not relevant at all.
Copenhagen and common sense are at odds with this, which might explain why the pioneers of quantum mechanics had the issues they did. Bohr and Heisenberg worked mainly in the non-relativistic approximation where time is unidirectional, and, funnily enough, their preferred interpretation of QM is unidirectional. Meanwhile Dirac was doing it right and seeing reversibility in more accurate equations.
Quoting Metaphysician Undercover
Not interchangeable, just mixed. There is no privileged reference frame in which you can say 'this time axis is time, these spatial ones are space', and you can find infinitely many other frames in which part of time becomes spatial and part of space becomes temporal. It is the four-vector that is physical, not its space-time components.
Quoting Metaphysician Undercover
Ah. Not so interesting.
Quoting Metaphysician Undercover
No. No, not at all like that.
:chin:
These are all temporal processes. Time is empirically proven as unidirectional. By simple deduction therefore, these processes are unidirectional. There is no experiment required, that's the beauty of deduction. What more do you want, an experiment which tries to run time backward, and finds out that this is impossible? Good luck with that.
Quoting Kenosha Kid
It's a little more than just common sense. That time is unidirectional is the most fundamental and important empirical principle which we have. Knowledge of the truth of this principle influences everything we do, every day of our lives. The future is different from the past. The former consists of possibilities, which we can influence the outcome of, toward things we want, and away from things we don't want, while the latter consists of facts which cannot be changed. When something bad happens to you, you cannot change that, it has happened, but if you apprehend something bad that could happen in the future, you can take steps to avoid it.
Sure, one can argue determinism by arguing that there is no difference between future and past, that both consist of fixed facts, like the past, but that's a childish argument. And even children learn very quickly the difference between future and past. If your argument for determinism is simply a denial of the obvious difference between future and past, then this thread is ridiculously pointless.
Quoting Kenosha Kid
OK, you really seem to believe the proposition that time is reversible, and applying this proposition in physics is "doing it right". I sincerely hope that you do not really have a PhD in physics if this is an indication of what is being taught in physics these days.
Sorry - hadn’t noticed this response till just now because I wasn’t tagged in it.
You’re still glossing over the basic conceptual problem implicit in your phrase ‘any given electron’. This implies that there is an electron ‘there’ - which is then qualified by saying ‘oh well, it depends on what you mean by “there” ‘. The diagram you provided does nothing to dispel this.
The question of whether sub-atomic particles really exist or not is central to all of this. I’m suggesting that they exist in a metaphorical sense, as constructs within a theoretical framework. Put another way, the answer to the question ‘does an electron exist’ is neither yes nor no.
Heisenberg again:
The Debate between Plato and Democritus, a keynote speech at a physics conference in the 1950’s (bolds added).
I think the philosophical issue here is that we’re so wedded to a realist paradigm that we can’t see it any other way. It’s natural to believe that subatomic particles are truly existent. I think this is why Einstein asked, exasperatedly, ‘does the moon continue to exist when nobody’s looking at it?’ His implication was, of course it does, and this is why quantum theory must be incomplete or incorrect in some way. Heisenberg describes this as ‘dogmatic realism’. But this is one of the reasons the discovery of the uncertainty principle is said to have such profound philosophical implications.
One more anecdote - there’s a well-known saying by Bohr that ‘those who haven’t been shocked by quantum physics haven’t understood it’. This too was related by Heisenberg, and was in respect of a lecture that Bohr had given to the Vienna Circle. They all nodded sagely and applauded warmly at the end of Bohr’s lecture. And that’s when he made that remark, again, somewhat exasperatedly. Not, one imagines, that it made any difference. :-)
Yes, to think that the collective professor-hood of world physics didn't think to come and check what you personally find intuitively plausible before constructing their curricula. What an oversight!
Then don't describe it as empirical. What it is is a strongly held belief.
Quoting Metaphysician Undercover
And yet you just said we don't need empirical evidence because a claim is sufficient.
Quoting Metaphysician Undercover
Fine. If all you have is an insistence to the contrary, your response is ridiculously pointless.
And this is sufficient. If the mathematical entity -- the wavefunction -- is doing its job in yielding accurate predictions of statistical outcomes, it corresponds to something real. It doesn't need to be the case that the epistemic object we deal with be identical to the ontic thing it represents. That's true generally in mathematical physics.
Heisenberg's Platonism is his own affair... It is not a statement about QM but about his personal philosophy of reality. He is welcome to it, of course, but no one else is obliged to adhere to it.
Well, one is quite entitled to take that with a large grain of salt. Or at least recognise that it is a methodological postulate, more than a statement about ‘what is’. And as Heisenberg was one of the figures who devised this entire field of science, I find his philosophy of it quite persuasive.
Only in the very limited scope of the quantum, not in making predictions in the macro world. The wavefunction is useless in predicting what trajectory to take when aiming a rocket at the Moon, or in predicting the identity of who committed a crime. What role could the wavefunction play in a "theory of everything"? Why is classical physics still useful in yielding accurate predictions? Does that not mean that classical physics is doing its job? Then why are they incompatible?
Quoting Kenosha Kid
And the output of the detectors only becomes known when it is consciously observed by a person. The hypothesis of a measurement before this conscious observation lacks compelling theoretical or empirical grounding.
After all, QM offers no reason why the whole system—electrons, slits and detectors combined—wouldn’t be in an entangled superposition before someone looks at the detectors’ output.
It's not a matter of intuitive plausibility, it's a matter of empirical evidence.
Quoting Kenosha Kid
Obviously, it's a strongly held belief because it is empirical. Empirical means based in observation and experience rather than theory. Clearly, the strength in the belief that time is unidirectional is provided for by experience and observation, and therefore it is empirical. The idea that time is reversible is provided for by theory (such as the special theory of relativity) and is therefore non-empirical.
Quoting Kenosha Kid
That's not what I said. I said deductive logic is sufficient. And, the premises of the deductive argument are validated by empirical evidence. The idea that time is unidirectional is not simply a claim, it is a proposition strongly supported by empirical evidence.
Quoting Kenosha Kid
What is pointless, is for someone like you, to come into a philosophy forum, and argue determinism based on premises derived from science fiction, produced from the fringes of relativity theory, enabled by the deficiencies of the faulty boundaries of that theory. We can argue whatever we want, if we take our premises from science fiction.
As I tried to tell you days ago, your efforts would be much more productively spent if you sought the medium within which the waves exist, so that you can establish a relationship between that medium and material (massive) existence. The Michelson-Morley experiments demonstrate that this relationship is completely unknown to us. Rather than pretending that there is no such medium, and going off into the opportunities for science fiction created by that misunderstanding, we need to put effort toward understanding the nature of that medium.
Quoting Kenosha Kid
This might be true, but it provides no directional guidance for interpretation. Statistical evidence and mathematics provide for prediction; theory provides a description of what occurs. Now, there is a gap between these two which is bridged with interpretation. So, as an example, I can map the position of sunrise day after day for years, and develop predictive capacity (like Thales predicted the solar eclipse), then I can relate this predictive capacity to my theory (which may be science fiction) that a giant dragon carries the sun in its mouth from sunset to release it again on the eastern horizon at sunrise, at the predicted spot. The deficiencies of my interpretation are exposed as 'how does the dragon act so mechanistically to find the exact point of release every day?'. Your science fiction interpretation demonstrates the inverse problem. You assign mechanistic reliability to something which is understood as stochastic.
Yeah, that's really not how empiricism works. You can't look at a red car and state that your strongly held belief that all cars are red is empirical. If you want to know whether the elementary process of QED are reversible or not, you can't look at thermodynamics and say, "Well, that's irreversible, therefore everything is!" That's just backward thinking.
Quoting Metaphysician Undercover
It's not deductive, it's inductive.
Quoting Metaphysician Undercover
Fine, ignore the thread. I will certainly be ignoring your contributions to it.
We are not talking about whether this or that specific process is reversible, we are talking about whether time is reversible. It is an empirical fact that time is not reversible, this is made evident by the second law of thermodynamics. It never has reversed, and there is absolutely no reason to believe that it ever will, this would violate that law. Though it makes good science fiction to talk about a reversal of time, it is literally backward thinking.
Now, all elementary processes of QED known to human beings, are temporal processes. And, time is not reversible. Therefore no elementary process of QED known to human beings is reversible. If there are some processes which are non temporal then they are unknown to human beings, and to portray them as a plain reversal of a temporal process is simple naivety. It doesn't even make sense to talk about non temporal things as processes.
If you want to look at some aspects of QED as being not describable as temporal processes, then you need to get beyond this simple idea that a non temporal thing is a straight reversal of a temporal process. Once you remove the constraints of "temporal process", which means to follow the direction of time, then you open a whole new realm of possibility in thinking, and there is really no reason to restrict your thinking to a simple reversal. In fact, it makes no sense to think that something outside the constraints of time would mimic a process constrained by time, but do it in reverse. That would be like assuming that if something was not constrained by gravity, it would act in a way opposite to gravity. What reason is there to think in this way?
Quoting Kenosha Kid
Look again, the argument is deductive. P1. Time is not reversible. P2. All the known processes of QED are processes in time. Therefore no process of QED is reversible. P1 is inductive, that's why I say it is an empirical principle: it's based in experience and observation. But the argument that no process of QED is reversible is deductive. Do you see the logic? All processes are temporal, they are features of the passing of time, that's the nature of what a "process" is. Time is not reversible, and this is evident from the second law of thermodynamics, therefore all these things which we call processes, which are features of the passing of time, are not reversible.
"Going back in time" is incoherent unless the entire universe is going in reverse. Any particular process undergoes changes at a frequency relative to other processes. What is the observable difference between a process oscillating between two states and a process going forwards and backwards in time?
Walking forwards and then backwards is changing my state, but I'm still moving forward in time because the change in state is relative to the change going on in the rest of the universe. I didn't move backwards in time relative to you, I just changed my spatial state relative to you. Only if the entire universe reversed its process would it make sense to say that "time" is reversed.
Incidentally, what does QM have in common with a savings account? :cool:
Most fundamental processes do. Mass, charge, position, acceleration, energy, density, electric field, potential and polarization are the same under time-reversal. Pretty much everything else transforms into another physical process or property, such as velocity, momentum, angular momentum, current, magnetic field, and of course time coordinate. That is, for any processes describing these, the time-reversal is itself a physical process.
There are really only two things that have an intrinsic arrow of time, i.e. where the time-reversal of a process is not another physical process: weak dynamics (such as in radioactive decay) at low temperatures, and statistically improbable initial conditions, e.g. initialising a system into an ordered state, which gives us the second law of thermodynamics. Thermodynamics itself is reducible to QED, i.e. consists entirely of reversible phenomena. The Einstein equations also do not privilege a direction of time, with the cosmological arrow likewise an artefact of boundary conditions, not elementary processes.
It's worth clarifying that we're not generally speaking of things changing direction in time. The advanced wavefunctions of the OP are always moving in reverse to our psychological arrow of time. Changing direction in time requires an imaginary component to the Lorentz boost, so can only be considered for *virtual* processes, such as virtual electron-positron pair creation/annihilation.
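The claim that the time-reversal of a reversible process is itself a physical process can be illustrated with a toy simulation (my own sketch, with arbitrary parameters): integrate a particle in a harmonic potential forward with a time-symmetric scheme, flip the T-odd quantity (velocity), and the same forward dynamics retraces the trajectory back to the start.

```python
import math

def accel(x):
    """Harmonic force with unit mass and spring constant (toy model)."""
    return -x

def verlet(x, v, dt, steps):
    """Velocity-Verlet integration: a time-symmetric (reversible) scheme."""
    for _ in range(steps):
        a = accel(x)
        x = x + v * dt + 0.5 * a * dt**2
        a_new = accel(x)
        v = v + 0.5 * (a + a_new) * dt
    return x, v

# Arbitrary illustrative initial conditions
x0, v0, dt, steps = 1.0, 0.3, 0.01, 5000

# Evolve forward in time
x1, v1 = verlet(x0, v0, dt, steps)

# "Time-reverse": flip the T-odd quantity (velocity) and run the SAME
# forward dynamics -- the reversed motion is itself a physical process,
# and it retraces the trajectory back to the initial state.
x2, v2 = verlet(x1, -v1, dt, steps)

print(math.isclose(x2, x0), math.isclose(-v2, v0))  # True True
```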
Quoting jgill
One likes to believe it's got every possibility of being positively valued until one measures it?
As you may know, unitary quantum mechanics conserves information. So given an arbitrary state for a system, any earlier state can be determined. In this sense, a system is reversible (i.e., inverse transformations can be applied to the system to restore the earlier state).
Where things get more complicated is when a measurement is taken. At that point, information is generally lost to the environment due to wavefunction collapse. However even then, the pre-measurement state can be determined by someone else isolated from that system and observer.
For example, consider a quantum coin that begins in a "heads" state and is transformed to a superposition of "heads" and "tails". Suppose an observer measures "tails". They cannot determine from that state what the original state was. But an isolated observer can (in principle).
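Here is a minimal numpy sketch of that quantum coin (my own code, assuming the superposition is prepared by a Hadamard-like transformation): the unitary step is exactly invertible, while the measured outcome alone does not determine the pre-measurement state.

```python
import numpy as np

# Basis states for the quantum coin (illustrative sketch)
heads = np.array([1, 0], dtype=complex)
tails = np.array([0, 1], dtype=complex)

# Hadamard gate: takes "heads" to an equal superposition of both outcomes
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ heads                       # superposition of heads and tails

# Unitary evolution conserves information: applying the inverse
# (the conjugate transpose; H happens to be its own inverse)
# recovers the earlier state.
recovered = H.conj().T @ state
print(np.allclose(recovered, heads))    # True

# A measurement yielding "tails" collapses the state; inverting the
# dynamics on the post-measurement state does NOT recover "heads",
# so the observer cannot infer the original state from the outcome.
collapsed = tails
print(np.allclose(H.conj().T @ collapsed, heads))  # False
```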
This recent paper by Zurek (one of the pioneers of decoherence theory) provides the details:
Quoting Quantum reversibility is relative, or does a quantum measurement reset initial conditions? - Zurek
Time is an illusion. Change is real.
An object/process doesnt move forwards or backwards through time. It simply changes and how it changes is a property of the process, not of time.
Again, what is the observable difference between time being reversed and some process undergoing change? How can you tell if the process is really time reversed or simply undergoing natural changes between two or more process-defining states?
If your bank account had money deposited and then withdrawn, your bank account didn't move backwards in time. Depositing and withdrawing are natural states of bank accounts changing, not a state of time moving forwards or backwards.
That's an interesting article John. It shows how one might predict the movement of an object through a force field using a vector field. I think what Kenosha Kid is arguing is that a particle predicts its own end point by receiving information backwards in time from that end. The existence of the particle between emission and absorption can then be represented as a determinable symmetry between the need to be emitted and the need to be absorbed. But this requires a direct relation (ideal reversibility) between emission and absorption.
Quoting jgill
There is a problem with the use of equations in modern mathematics which you know I've discussed in other thread. Mathematicians tend to treat the two sides of the equation as signifying the very same thing, some ideal (eternal Platonist) mathematical object. However, if you look at an equation such as 2+2=4 (the example in the other thread), you'll see that the + represents an operation which is absent from the other side of the equation. An operation is a time dependent process. Therefore, to properly interpret the meaning of that equation we need to recognize that we have established equality between a time dependent operation on the left, and a time independent quantity on the right. This indicates that the effects of time have been dismissed from the expression of equivalence (the equation) as irrelevant, inessential, or accidental.
The mathematical equation according to modern axioms, is constructed without regard for the effects of time. What is represented is mathematical objects (eternal Platonist), and the difference between different operations (temporal processes) is dispensed as a difference which makes no difference. So when mathematics is applied in physics, it is inevitable that there will be symmetry in the time variable. The evidence is abundant, as Kenosha Kid attests, above. The very axioms which mathematicians employ, the premises for the mathematical proceedings, have already dismissed the effects of time as irrelevant. Consequently, the conclusions drawn will show that the effects of time are irrelevant, therefore temporal processes are represented as reversible.
A good example of the problem involved with ignoring the temporality of mathematical operations is the difference between the conventions employed in the two operations (processes) called multiplication and division. In multiplication we start with a definite quantity, and increase that quantity. This produces a well defined fundamental unit and the product cannot be outside the boundaries of that defined unit. There is no remainder as there may be in division. In division however, we allow the unit to be divided indefinitely, producing repeating decimals and irrational numbers. Therefore the conventions of division annihilate the fundamental unit, there cannot be a fundamental unit according to those conventions. Now we can make two opposing principles or axioms, one for multiplication, that we must start with a fundamental unit, and one for division, that there is no fundamental unit, as the unit is infinitely divisible.
The difference between multiplication and division, that they cannot be inversions of each other under current conventions, becomes very evident in wave theory, such as that employed in music. The octave serves as the fundamental unit, and we can create harmonies through either multiplication or division. But there is a problem in division which creates inconsistency, dissonance, between the tones created by division and the tones created by multiplication. So there are two conventional ways of dividing the octave, just temperament and equal temperament, each based in different principles designed to mitigate the difference between division and multiplication. The principal issue is that the initial choice for the fundamental unit, the octave, is arbitrary. Having an arbitrary unit as the fundamental unit manifests in the reality of division being not a direct inversion of multiplication. This problem is foundational to the Fourier uncertainty, and it will not be resolved until we determine a fundamental unit which is not arbitrary, and adhere to the principle that the unit cannot be divided. Only then could we have a true ideal which would allow multiplication and division to be inversions of each other.
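For what it's worth, the mismatch being described here can be quantified with simple arithmetic (my own illustration, not from the post): twelve stacked just fifths (multiplying by 3/2) overshoot seven octaves by the Pythagorean comma, and equal temperament closes the circle by flattening each fifth to 2^(7/12).

```python
# Twelve perfect fifths (ratio 3/2) vs seven octaves (ratio 2):
twelve_fifths = (3 / 2) ** 12
seven_octaves = 2 ** 7

# They don't coincide -- the ratio is the Pythagorean comma (~1.0136),
# i.e. stacking multiplications does not invert octave division exactly.
comma = twelve_fifths / seven_octaves
print(round(comma, 4))                  # 1.0136

# Equal temperament sidesteps this by using 2**(7/12) (~1.4983) for the
# fifth instead of the just ratio 1.5, so twelve of them close exactly.
et_fifth = 2 ** (7 / 12)
print(abs(et_fifth ** 12 - seven_octaves) < 1e-9)  # True
```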
Just a clarification, it is not lost to the environment in the Copenhagen interpretation: it is simply deleted. Decoherence is the process of information loss to the environment, in which superpositions cannot be sustained by macroscopic objects because of the large number of degrees of freedom. When an electron is found at y, the contribution at y' is dissipated. Last time I checked, consensus was this is real but insufficient to account for apparent wavefunction collapse, although Penrose advocated this view at some point.
https://plato.stanford.edu/entries/qm-decoherence/#ConApp
Quoting Andrew M
This is specifically the Von Neumann-Wigner interpretation. In Copenhagen, the wavefunction collapses objectively.
I didn't cover Von Neumann-Wigner in the OP because it's kind of a magic version of MWI: in both cases, the first observer is in a state of having made a measurement (entangled), but for the second observer everything is still in superposition (unentangled). What I like about Von Neumann-Wigner is the relativism, but I dislike the magic. However, various findings in the last few years have provided experimental evidence for that relativism. It seems to me consistent with an MWI with stricter branching criteria, which is less magic. These experiments generally rely on something called *non-destructive measurements* whose status is questioned.
https://advances.sciencemag.org/content/5/9/eaaw9832.full
Anyway, point being that these various interpretations are not interchangeable. The decoherence picture of wavefunction collapse is at odds with Copenhagen, MWI, transactional QM, and Wigner's friend. Likewise Wigner's friend is at odds with Copenhagen, decoherence and transactional.
Time is the vertical axis, a given dimension in space the horizontal. The solid lines denote massive particles, in this case an electron/positron; the squiggly lines represent photons. The solid lines have arrows because the particle in question is not its own antiparticle, e.g. the antiparticle of an electron is a positron, not an electron. The squiggly lines have no arrow because the photon is its own antiparticle.
The first diagram A1 depicts a known physical process called pair annihilation, in this case the annihilation of an electron with a positron or anti-electron. The two particles move together, resonate, and decay into a pair of photons.
The frame invariance of relativity suggests that the temporal and spatial aspects of this process should be interchangeable, i.e. if we rotate the diagram 90 degrees such that time becomes space and space becomes time, the diagram should still represent a physical process, and indeed it does. A2 represents a scattering event, in this case one in which a positron absorbs a photon and later re-emits it. A further rotation by 90 degrees leads to A3: a complete time- and space-reversal of the first diagram, and this too is a physical process called pair creation, the reverse of pair annihilation. A final rotation gives us the fourth diagram wherein we see that absorption followed by emission time-reversed is still absorption followed by emission.
The next set, B1-B4, shows the same processes but with the absorption and emission events reversed. As we see, we still have the pair creation and annihilation rotations, merely with photon momenta exchanged. One can complete this set by incorporating spin.
This is a broader depiction of what is meant by reversibility in the OP: that rotating a physical process about space and time yields another physical process. The vast majority of physical processes are included in QED which has the above characteristics, i.e. is independent of whether a dimension is temporal or spatial, directed one way or the other. Within this majority of elementary processes covered by QED, nature does not have a preferred reference frame, including directionality.
This is not meant as a pun. Under stipulations that will make the Kid's eyes roll, there is a fundamental form or principle underlying the math of both. Start with a very simple version of Schrödinger's equation:
[math]ih\frac{\partial \psi }{\partial t}=H\psi ,\text{ }\psi =\psi (x,t)[/math]. Then get rid of that annoying Hamiltonian by stipulating [math]\frac{{{\partial }^{2}}}{\partial {{x}^{2}}}\psi ={{C}_{0}}\psi [/math]. Now, writing this as a normal derivative, since x is held constant in the partial, [math]\frac{d\psi }{dt}={{C}_{1}}\psi [/math].
Now turn to a savings account with a yearly interest rate r (say r = 0.03) compounded n times per year. In the limit of continuous compounding, the amount at the end of a year is
[math]A={{\left( 1+\frac{r}{n} \right)}^{n}}{{A}_{0}}\to {{e}^{r}}{{A}_{0}},n\to \infty [/math], which is the solution to the differential equation [math]\frac{dA}{dt}=rA[/math].
Underlying both DEs is the fundamental relationship: The instantaneous rate of change of something is proportional to the amount at that time. The first DE has the imaginary i in its "constant", and [math]{{e}^{i\theta }}=\cos (\theta )+i\sin (\theta )[/math] works its magic.
(I know, I've made a mess of the physics!) :gasp:
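The compounding limit is easy to check numerically, along with the claim that continuous growth solves dA/dt = rA (a quick sketch, nothing more):

```python
import math

# Discrete compounding approaches A0 * e^r as n grows...
r, A0 = 0.03, 100.0
for n in (1, 12, 365, 1_000_000):
    print(f"n={n}: {A0 * (1 + r / n) ** n:.6f}")
continuous = A0 * math.exp(r)
print(f"limit:  {continuous:.6f}")     # 103.045453

# ...and forward-Euler integration of dA/dt = rA converges to the same value,
# illustrating the shared structure: rate of change proportional to amount.
A, dt = A0, 1e-5
for _ in range(100_000):
    A += r * A * dt
print(f"Euler:  {A:.6f}")              # ~103.0454
```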
This is the problem I explained to you earlier. Proceeding from spatial location A to location B is reversible, and the reverse may be represented as proceeding from B to A. Going from time 1 to time 2 is not reversible. Therefore it is inaccurate to represent the temporal and spatial aspects of a process as interchangeable. I believe that's why there is a distinction between space-like and time-like in the conventional application of relativity theory. You seem to disregard this convention with your science fiction.
Thanks, that's pretty well the only point I was getting at.
Yes, good point.
Quoting Kenosha Kid
I'm not sure I follow your last sentence (and I read the SEP section). If, on measurement, the superposition state information is lost to the environment (apart from the measured value) then what else could be required for apparent collapse to have occurred?
Quoting Kenosha Kid
Not specifically. This is just unitary QM. For example, MWI and RQM both agree with this prediction and are both referenced in the paper you linked:
Quoting Experimental test of local observer independence
Also the Zurek paper I linked specifically addresses the von Neumann-Wigner interpretation and states that retention of information suffices for collapse. (Which the paper treats as relative.)
Quoting Quantum reversibility is relative, or does a quantum measurement reset initial conditions? - Zurek
Quoting Kenosha Kid
Zurek is a decoherence guy and he agrees with the Wigner's friend predictions. You seem to be treating decoherence as objective.
Also I'm curious why you say Wigner's friend is at odds with transactional QM. Is it an objective collapse interpretation, like GRW?
So I opened an account at the bank with $100 at an imaginary 314% interest rate. A year later, the bank claims I owe them $100! They say that if I keep my account open for another year, I'll get my $100 back. Should I trust them, or just pay the $100 and close the account?
Oh okay. Ha ha!
Quoting Andrew M
Decoherence leads to random phases between different trajectories; it can't guarantee a singular value upon measurement without invoking e.g. MWI. But neither can excluded or otherwise saturated points, so thank you for bringing it up. It is yet another physical consideration ignored in the idealised screen that will eliminate possibilities in a way dependent upon the exact state of the lab equipment.
Quoting Andrew M
Yes, I was thinking about this afterwards. I was thinking that in MWI the second observer would just branch. I'm not sure what the consensus is on branching during non-destructive measurement, but thinking about it you're probably right. Essentially a branch in MWI is just an additional term in the many-body wavefunction. Whether the maths tells us branching has occurred will be time-dependent.
Quoting Andrew M
Decoherence usually is. Reading the full paper, Zurek is saying that, in light of the recent Wigner's friend type experiments, it's not. I'm not so sure about that. He's solving a problem that probably doesn't need to be solved from either end. Generally decoherence is treated objectively and is insufficient to yield collapse.
Thanks for the link, btw. I hadn't read that paper. I kind of wish I had before submitting a follow up thread to this.
"Decoherence" is fundamentally flawed. It assumes coherence as the natural, observer independent condition of existence. But coherence is the property of a "system". Such a system is either completely artificial, or an arbitrary designation, or a combination of these two. In both cases, it's a human construct. Therefore coherence is a manufactured condition, either as a physically constructed system, or as an arbitrarily applied theoretical ideal. It is self-contradictory to conceive of coherence as the independent state of existence, when it is clearly a manufactured condition.
Nope.
Quoting jgill
is exactly the Schrödinger equation, which is a differential equation. You're right, these crop up everywhere.
With interest rates for savings where they are one might as well open an account with an imaginary rate. :worry:
The core of the theory is an emission-absorption process, such as when two atoms exchange energy or (as in your presentation) an electron is emitted and later absorbed by a solid. (I think scattering is handled similarly, but I haven't looked into it yet. There is also an issue of weakly-absorbed particles, such as neutrinos, which may not have a future boundary; I know that Cramer has looked into this, but I haven't.)
The process is initiated by a "transaction" between the emitter and the absorber, which is described by the following reversible pseudo-time sequence:
(I hope I got this right.)
The above is offered as an interpretation of the standard quantum mechanics formalism, rather than some additional physics. The steps do not describe a causal time sequence of events - they merely serve a pedagogical purpose.
The rationale for the interpretation comes from the fact that, as KK mentioned, some relativistic formulations of the wavefunction equation have two solutions: w and its complex conjugate w*. But complex conjugation is equivalent to time reversal (although it also implies negative frequency, energy and charge) - hence the advanced wave that is produced alongside the retarded wave in TI. Plus the Born rule, etc. - the conjugate of the wavefunction is ubiquitous in the formalism.
It is interesting that both the TI and the Everett MWI start from similar philosophical positions. Both are realist about the wavefunction (in contrast to Copenhagen). Both are offered as minimalistic interpretations that do nothing more than take the math seriously (as Sean Carroll, an Everettian, puts it). "From one point of view, the transactional interpretation is so apparent in the Schrodinger-Dirac form of the quantum-mechanical formalism..., that one might fairly ask why this obvious interpretation of the formalism had not been made previously" (Cramer). But they still end up in very different places. To me it seems like TI goes further out on a limb than MWI. I am uncomfortable about the pseudo-causal narrative of the "transaction." But perhaps the more profound aspects of the interpretation escape me.
This bit I don't understand though:
Quoting Kenosha Kid
Whether it's the non-relativistic Schrodinger equation (which has only a retarded solution) or a suitable relativistic equation (which has both), the equation alone does not determine where and how the absorption/measurement will happen - hence the "measurement problem." I am not sure what point is being made here specifically about the relativistic equation.
(The Schrodinger equation can be produced as a non-relativistic limit of a more general relativistic formulation. How then do two solutions reduce to one? Turns out that two versions of the Schrodinger equation are equally valid reductions: the other one has only an advanced solution.)
Now, as for the "not many worlds" interpretation:
Quoting Kenosha Kid
I still don't see how this can be. Boundary conditions are, by definition, local. And yet when we do experiments like quantum interference, we find that the measurements depend mainly on the incident wave. Why aren't results confounded by such strong dependence on the boundary conditions?
Damn straight!
Quoting SophistiCat
All, I believe.
Quoting SophistiCat
Time, frequency and energy are all inextricably linked. Essentially energy is frequency with decorative physical constants, and is reciprocal to time (time interval and frequency are Fourier transforms of one another). So the odd one out here is charge.
Quoting SophistiCat
I think it's more economical than parallel universes. But it doesn't resolve the measurement problem by itself. Since the transaction goes both ways, the final state is "known" at the start; the outcome is still probabilistic, but nothing real collapses probabilistically.
Quoting SophistiCat
So this is where the theoretical background ends and my argument begins. Taking the Dirac equation and TQM as givens, we understand that in order for an electron described by a particular time-dependent wavefunction to be emitted by the cathode and absorbed by a point on the screen, the cathode must itself be in a state that would emit an electron with that wavefunction, i.e. it must have an available electron to emit.
The advanced wave must be similarly causal but in reverse. For an advanced wave to be emitted from a particular point on the screen (describing an electron hole in reverse), that point must be capable of doing so. Otherwise the retarded wavefunction depiction of the electron leaving the cathode is unjustified in the first place.
That state also has its own history in our future (recalling that QM is backwards-deterministic) and we can repeat this process by considering events in that future history that are consistent with the future of the electron. This can only eliminate possible locations on the screen.
Andrew M has pointed out that other factors that equally depend on the precise state of the set-up should also eliminate possible paths, namely those where the eventual phase randomisations due to scattering in the screen cause destructive interference.
These are factors of the true time-dependent many-body wavefunction that describes the entire experimental setup, which a) we couldn't possibly solve and b) we couldn't possibly know the accuracy of (we can't know the precise state of a macroscopic object even if we could store its wavefunction in principle).
This aspect of the argument is not dependent on relativistic TQM, rather the latter provides us with an obvious way to consider how these incalculable states will inevitably lead to certain trajectories becoming disallowed when we consider not just the past but also the future of the experiment.
Quoting SophistiCat
Correct, the conjugate of a solution is not itself a solution. However you can still time-evolve that conjugate and it will behave as expected, traveling in the reverse direction to the solution.
Quoting SophistiCat
No, not necessarily. We treat them as local because we treat experiments ideally: we have to.
Quoting SophistiCat
In the case of both microstate exploration and decoherence, the precise state changes constantly. An electron on the screen which might forbid an incident electron at time t might not be present at time t'. A particular configuration of scatterers that would destroy the wavefunction at time t might permit it at time t'.
The signature pattern of the double slit experiment is not one event but many thousands. What we see then is not just the value of the probability density of the electron, but also the statistical behaviour of the macroscopic screen. Over a statistical number of events, the pattern must be independent of changes in the precise microstate of the screen.
Here's a PU in case you haven't seen one before:
:cool:
:up: Though this can potentially be beneficial as I'll show in my next post...
Quoting Andrew M
...The short answer is, yes, I can trust them. Here's the worked out solution.
Euler's formula is:
[math]{{e}^{i\theta }}=\cos (\theta )+i\sin (\theta )[/math]
e represents continuous growth starting from 1, at a rate indicated by the exponent. If the growth rate is 0, we remain at the starting point (i.e., e^0 = 1). Similarly, starting from $100, we remain at $100 (i.e., 100 * e^0 = 100 * 1 = 100).
While real exponential growth occurs along the real number line, imaginary growth is circular. That is, it can be visualized as rotation around the origin of the complex plane (where the real number line is horizontal and the imaginary number line is vertical).
From the problem above, a 314% interest rate is a growth rate of 3.14, or (approximately) [math]\pi[/math]. Plugging that rate into the formula, we get
[math]{{e}^{i\pi }}=\cos (\pi )+i\sin (\pi ) = -1 + 0i = -1[/math]
which is Euler's identity. So, starting with $100, I ended up with -$100 (i.e., 100 * e^i.pi = 100 * -1 = -100). Not a great outcome! But suppose I leave my account open for another year. That is a rate of pi for two years, or 2pi. Plugging that rate into the formula, we get:
[math]{{e}^{i2\pi }}=\cos (2\pi )+i\sin (2\pi ) = 1 + 0i = 1[/math]
So starting with $100, I'll end up with $100 (i.e., 100 * e^i.2pi = 100 * 1 = 100). That is, I'll get my original money back. 2pi radians, of course, is 360° - the money has gone full circle.
So the best strategy is to borrow lots of money at an imaginary interest rate, wait until it becomes positive (a 180° rotation), and then withdraw it.
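For anyone who wants to check the rotations numerically, here's a short verification with Python's cmath (my own sketch of the arithmetic above):

```python
import cmath

# One year at imaginary rate i*pi rotates the balance halfway round the
# complex plane (Euler's identity); a second year completes the rotation.
principal = 100.0
one_year = principal * cmath.exp(1j * cmath.pi)    # ~ -100 + 0j
two_years = principal * cmath.exp(2j * cmath.pi)   # ~  100 + 0j
print(one_year, two_years)
```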
Quoting Kenosha Kid
I was hedging because in his 1980 Phys. Rev. D paper Cramer writes that for particles with spin other than 1/2, e.g. bosons, there are a number of alternative relativistically invariant wave equations, at least one of which is first order in time. I don't think he considered the full QFT formulation though.
Quoting Kenosha Kid
Are there really any absolutely forbidden points of interaction? Instead of being absorbed, can't the electron scatter instead?
Quoting Kenosha Kid
So let's consider a limiting case where exactly one spot on the screen is available for interaction at any one time - an advanced electron hole, as it were. This is what you hypothesize might indeed be the case, right? We can fairly assume that such points are uniformly distributed over the surface of the screen*. If the availability of electron holes imposes an absolute constraint on where an interaction can occur, then instead of the interference pattern we should see just that - a uniform distribution.
* Or in any case, we can't expect their distribution to coincide with the amplitude of the incident wave.
Quoting Kenosha Kid
Well, if I understand correctly, the Everett interpretation is characterized more by what it doesn't do - arbitrarily impose a collapse - than by what it does, so in a way it's hard to be more economical than that (although it does ditch those advanced solutions. Hm... could you combine the two?...) But I didn't mean to imply that parsimony and fidelity to the formalism are the only or the most important criteria in evaluating philosophical interpretations. I am rather ambivalent on that point.
But also first order in space, I think? So the four solutions (advanced spin up, advanced spin down, retarded spin up, retarded spin down) reduce to two (advanced and retarded).
Quoting SophistiCat
Sure, but scattering also obeys the Pauli exclusion principle, so there must be two holes: one for the scattering electron to go to, and one for the scattered electron to go to (see the Feynman diagram in the OP). And those scattered electrons can scatter again, each requiring two holes, and so on and so forth. So it's a proliferation of backwards hole emission and transmission events.
Quoting SophistiCat
Possibly but not necessarily, hence the not many worlds interpretation :) The electron wavefunction at the screen yields the possible trajectories of the electron independent of the state of the universe and its future history. The possible trajectories can only be eliminated, not added to, by incorporating more and more of the instantaneous state of the screen and more and more of the future of the system.
Does it reduce to one? Not necessarily. Thing is, we can't know, since knowing for sure means solving the time-dependent many-body Dirac equation or some good approximation thereto, not just for the lifetime of the experiment but for the lifetime of the particle.
Quoting SophistiCat
No, because the probability distribution of the incident electron will still multiply the probability distribution of acceptor sites.
Quoting SophistiCat
Fair point, it basically just runs with the mathematics, to its credit. Which is why I think MWI is an improvement over Copenhagen.
Quoting SophistiCat
Isn't that the not many worlds interpretation?
Quoting Kenosha Kid
He mentions a "relativistic Schrodinger equation," which has a (P[sup]2[/sup] + m[sup]2[/sup])[sup]1/2[/sup] term.
Quoting Kenosha Kid
Does the incoming electron always scatter on another electron? If it's a solid that it interacts with, wouldn't it be something more complicated? Anyway, let's run with the assumption that in any type of interaction the electron is constrained by the requirement of having a (potentially infinite) chain of suitable boundary conditions.
Quoting Kenosha Kid
Not if there is only one available site - in this case the electron wavefunction becomes irrelevant. The electron wavefunction removes at most a measure-zero amount of potential interaction sites, so without loss of generality we can assume that for any incoming electron, every point on the screen is available before we consider the conditions at the screen. But if the conditions at the screen are such that only one site is available, then that is what will dictate the actual distribution of impacts, not the electron wavefunction.
In case of "not many" available sites, the impacting distribution will factor in, but it will be smeared.
Quoting SophistiCat
So that is how it appears to an observer. What actually happens once a handshake has occurred, and thus a single destination has been determined? In the double-slit experiment, does the electron follow a definite, albeit unknown, trajectory through one and only one of the slits, similar to pilot wave theory? Or does an electron essentially just disappear from the source and appear at the destination a short time later (a kind of non-local electron/hole exchange)? Or does it travel all possible paths to the destination as a wave?
It being a wave (waves) takes care of some well-known interpretational challenges that trade on the wave-particle ambiguity, like delayed choice and quantum eraser. That said, some challenges have also been proposed that are specific to TI, such as the quantum liar experiment.
The last one. The retarded wavefunction is as per the Copenhagen interpretation but without collapse.
Ah, it becomes emitted or not emitted in a time-dependent way (i.e. the longer the experiment, the higher the probability of transmission).
Quoting Andrew M
Just to clarify, the transactional interpretation doesn't itself eliminate probabilism, it just eliminates collapse. If the electron could end up in multiple possible final states, the state is still chosen probabilistically, but the electron "knows" in advance which it has shaken hands with.
The elimination of final states is an additional physical consideration that takes seriously the time-reversibility of the equations. In order for a particular site to receive the electron, it must be in a state capable of emitting an advanced wave.
:rofl: Sound financial advice!
Quoting SophistiCat
I was just rereading part of the paper on type II emission and absorption events, which are interesting. If we take the emitter to be atom 1, the absorber to be atom 2, and the emission to be a photon, then from the lab frame it appears that a photon (its own antiparticle) from the origin is absorbed by atom 1 (the emitter) and then, seemingly unrelatedly, atom 2 emits a photon of the same energy, which continues forever.
If the distance between the two atoms is L, the time between perceived absorption and emission (actually emission and absorption) will be L/c. So if type II is possible, it ought to be detectable experimentally in principle, though in practice it would be hard if the phenomenon is limited to CMB radiation (assuming that's all that could constitute a photon from the origin).
What's more interesting is the idea that causal relationships between events can be apparently unmediated in principle.
[quote=John Cramer]However, there is an alternative approach which, while not in the mainstream of contemporary theory, represents an effective way of preserving the intrinsic time symmetry of the relativistically invariant wave equations and thereby avoiding the ad hoc insertion of an arrow of time into the formalism.[/quote]
Does this man seriously think that the insertion of an arrow of time is "ad hoc"?
From the above:
The above basically says that the probability of transmission of a photon or electron or whatever is given by the value of the retarded wavefunction spreading out from the emitter at the absorption event, not the value of the advanced wavefunction spreading out from the absorber at the emission event.
This is where the OP and the TI diverge, since the former states that knowledge of the future state of the screen would definitely disallow certain transmissions for which P is nonzero, i.e. the Born rule only tells us about the emission, while the electron knows about the absorption already, being stimulated by that absorption to the equal extent that the absorption is stimulated by the emission.
Hence transactional QM itself remains probabilistic, even as it dispenses with collapse.
Cramer claims to have mapped the above arrow of time to the cosmological arrow in the paper here:
Quoting Kenosha Kid
However the more I read it the less compelling I find it. He actually talks about advanced waves going forward in time, which is contrary to what an advanced wave is.
A good way to conceive of why the OP says this isn't true is to consider single-electron transistors in quantum electronics.
In the first image, taken at time T1, the rightmost electron at site E cannot tunnel into site D because an electron already exists there. (In this case, we're not talking about Pauli exclusion, simply electrostatic repulsion, but feel free to mentally add Pauli exclusion into the picture, which is a much more powerful effect.) Likewise the electron at site D cannot tunnel into site E. But it can tunnel into site C.
It is only once the electron at D tunnels into site C that the electron at E can tunnel into site D (second snapshot at time T2). And so on and so forth. This is analogous to the microstate exploration of the back screen in the double-slit experiment. Let's imagine, for illustrative purposes, that we've built a back screen made entirely of single electron transistors (with fixed gates for now).
Clearly then the true probability of transmission from cathode to any given site A-E is not identical to the absolute square of the wavefunction (denoted by the blue rays coming from the cathode), but also on whether each site has an electron in it or not. If an electron was measured on the screen at T4, we would expect to find the electron in sites B or D most of the time unless the wavefunction at these sites was small. But at T3, we would expect to find the electron at sites A or E.
None of this is captured in P12 (where 2 = (A, B, C, D or E at some time T?)) which is a statement only about the wavefunction coming from the cathode, and yet is physically essential to the probability of finding the electron at a given site at a given time. (NB: if the incoming electron is of sufficiently high energy, and has opposite spin to the occupying electron, it can still get into an occupied site, however it is much less likely to do so.)
We can also see that, over time, all of the sites will be available or not, leading to a statistically useful time-averaged probability per site of the screen receiving an electron from the cathode: that is, we expect statistically that the probability given by the incoming electron to dominate over the noisy data about which sites are and are not available at given times, so we should recover the usual interference patterns.
It's also worth pointing out that a screen in which we can measure the position of an incident particle to some high accuracy is more like an array of single electron transistors than like the ideal metal of naïve Copenhagen interpretation.
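The time-averaging claim can be caricatured in a few lines. In this toy model (my own construction, not anyone's actual code, with assumed numbers: five sites, a 50% chance each site is free at any instant, and the rule that a blocked transmission simply fails rather than redirecting), the pattern conditional on a hit still follows |psi|^2:

```python
import random

# Toy model: five sites A-E with Born weights psi2. Each emission, the
# candidate site is drawn from psi2; if that site happens to be occupied at
# that instant (probability 1 - p_free, independent of the incident wave),
# no transmission occurs. Conditional on a hit, the availability noise
# averages out and the counts trace out psi2.
random.seed(0)
psi2 = [0.05, 0.25, 0.40, 0.25, 0.05]   # |psi|^2 at sites A..E
p_free = 0.5                            # chance a site is unoccupied
counts = [0] * len(psi2)

for _ in range(200_000):
    site = random.choices(range(len(psi2)), weights=psi2)[0]
    if random.random() < p_free:        # absorber available: electron lands
        counts[site] += 1               # otherwise: no transmission event

total = sum(counts)
print([round(c / total, 3) for c in counts])   # ~ psi2, up to sampling noise
```

The key assumption is that a blocked site means a failed transmission rather than a redirected one; if the electron were instead forced onto whatever site happened to be free, the time-averaged pattern would be biased toward the availability distribution, which is essentially the worry SophistiCat raises.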
I wonder why he thinks that Type II emission is equally possible as Type I. Type I is an interpretation of a well-studied phenomenon (emission/absorption of EM radiation), while it takes work to explain away Type II. What's the rationale in proposing it? Just a general preference for symmetry?
Yes, I think because the general formalism underlying Type I also allows for Type II such that disallowing Type II requires extra by-hand theory.
Of course, if it could be tested and confirmed, that would be quite remarkable. Cramer says (referring to earlier analyses) that if both types occur, one of them is expected to dominate - presumably, Type I in our case. But that still leaves a possibility of some Type II absorptions. Do you think it's likely that they would have gone unobserved/unnoticed all this time? How difficult would this be to test?
I think it's more that the Wheeler-Feynman (WF) theory automatically includes Type II. It's an addition to classical EM, yes, but you'd have to add something to WF in order to get rid of it.
Quoting Kenosha Kid
So the bolded part is what I am having difficulty with. We do, of course, observe that the cumulative distribution of impacts on the back screen is in line with the square of the wavefunction from the emitter. If emission can occur at any time, while the distribution of available absorbers on the screen is constrained by external factors, then we won't recover the expected distribution. (In the edge case where at most one site is available at any time, the distribution won't even have any dependence on the impacting wave!)
The only way to recover the expected distribution that depends only on the impacting wavefunction is if the timing of successive emissions is coordinated so as to compensate for the constraint from boundary conditions and produce an undistorted distribution over time. But I don't see how such compensatory mechanism is being realized.
Quoting Kenosha Kid
Yes, he says the entire four-vector is reflected at the Big Bang boundary, i.e. the time direction of the advanced wave is flipped, but he still insists that the reflected wave is an advanced wave. But other than terminology, do you see any issues with his proposal?
Hi,
Cramer’s proposal cannot work for EM waves, simply because the early universe was not transparent to EM radiation. Advanced electromagnetic waves, traveling backward in time, can reach back only to some 380,000 years after the Big Bang and therefore can’t be reflected at the Big Bang boundary. Instead, they should interact with the ordinary dense matter at least 380,000 years old, as shown in Figure 2 of my recent paper on the problem of the direction of the electromagnetic arrow of time: http://philsci-archive.pitt.edu/13505
This interaction would seem to us like retarded EM radiation coming about 380,000 years after the Big Bang, although it is actually 1/2 advanced + 1/2 retarded EM radiation.
He addresses this concern in the paper, but this physics is way above my pay grade.
That this doesn't hold true is precisely my point. While an individual transmission may depend on the precise microstate of the screen, the screen explores these microstates continuously. A statistical number of transmission events will take place over a period of time, during which one will have a statistical spread across the precise microstates explored during that time, a spread which looks like the probability of finding a given electron at a given position depends principally on the wavefunction.
Quoting SophistiCat
The cancellation depends on both waves being advanced waves, so it's not purely terminological. (Advanced waves cannot cancel retarded waves in Cramer's formulation.)
Quoting Darko B
I agree with the gist of your idea, that advanced waves could exist and will look to us like retarded waves (photons being their own antiparticles).
I don't see the classical spherical wavefront as a particular problem to get around though. Quantum mechanically, the photon, after collapse, is never consistent with a spherical wave. For instance, the emitting object will undergo recoil in a direction inconsistent with the symmetry of spherical wave emission.
Spherical wavefronts are useful because we do not know the full boundary conditions of the transmission and, further, because they recover the correct interference effects for every possible future absorption event (sum over histories). According to the OP, we can consider the retarded and advanced parts of absorption and emission as spherical separately, but the full transmission is only the sums over all paths between the emitter and the absorber... in *either* direction of time.
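The sum-over-paths picture above can be sketched numerically. Below is a toy two-slit calculation (spherical-wave approximation; the wavenumber, slit separation and screen distance are my own arbitrary choices in natural units) showing that summing the two spherical waves reproduces the bands:

```python
import cmath
import math

# Toy sum-over-paths check (spherical-wave approximation, arbitrary
# parameters in natural units): the amplitude at a screen point is the
# sum of the two spherical waves from the slits; intensity |A|^2 bands.

k = 50.0     # wavenumber
d = 1.0      # slit separation
L = 20.0     # slit-to-screen distance

def intensity(y):
    """Intensity at screen coordinate y from two interfering spherical waves."""
    r1 = math.hypot(L, y - d / 2)   # path length from slit 1
    r2 = math.hypot(L, y + d / 2)   # path length from slit 2
    amp = cmath.exp(1j * k * r1) / r1 + cmath.exp(1j * k * r2) / r2
    return abs(amp) ** 2

# Bright band on the symmetry axis; dark band where the path difference
# is half a wavelength, i.e. around y ~ pi * L / (k * d):
print(intensity(0.0), intensity(math.pi * L / (k * d)))
```

The dark-band intensity comes out orders of magnitude below the central bright band, as expected for a half-wavelength path difference.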
Let's take an extreme example:
1. The emitter (of electrons, photons, ...) is under experimental control, so that for instance we can ensure that a particle is emitted every millisecond.
2. The availability of receptive absorbers is so constrained by present and future boundary conditions that at any point of time at most one site is available.
Right away, if at the time when we want to make an emission happen there are no available absorbers, then we have a problem: some assumption has to give. But even if an absorber is available, the cumulative distribution of impacts will be defined only by the distribution of the available absorbers on the screen over time. And at the same time, in order for the Born rule to hold, that distribution has to match the impacting wavefunction - whatever it happens to be. If we can contrive to emit particles that hit the screen at times (t[sub]1[/sub], t[sub]2[/sub], ...), the screen had better supply us with absorbers at locations r[sub]i[/sub] such that (r[sub]1[/sub], r[sub]2[/sub], ...) form the distribution that we expect to see.
So how can this happen (or can it happen)?
A. (1) and (2) hold, which means that the screen and the universe in its future lightcone have to contrive to match the impacting wavefunction. While no individual absorber is constrained to be in a fixed position at a fixed time, there is a constraint over time on all such absorbers, which is a function of the impacting wavefunction, whatever it happens to be.
B. (2) doesn't quite hold: instead of just one absorber at a time, we have "not many" absorbers. This relaxes the constraint on the screen, but does not completely eliminate it. Unless there is such a constraint, the actual distribution of impacts will inevitably be distorted.
C. (1) does not quite hold: we cannot make emissions happen at will (can we?) This distributes the constraint of producing the right cumulative distribution between the emitter and the screen or shifts it entirely to the emitter, so that now it is the emitter's responsibility to be aware of the state of the screen (and the rest of the universe in the future lightcone) and fire particles under the constraint of producing the right distribution.
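For what it's worth, the worry in (A) and (B) can be put in toy-model form. The sketch below uses my own assumptions, not anything from Cramer: the emitter fires unconditionally every tick, exactly one hole exists per tick and wanders uniformly over the screen independently of the incoming wavefunction, and the transaction always completes at the hole's current site. Under those assumptions the impact histogram is flat, with no trace of the wavefunction:

```python
import math
import random

# Toy version of the worry above (all assumptions mine): forced emission
# every tick, one uniformly wandering hole per tick, and a transaction
# that always completes at the hole's current site. The impact histogram
# then tracks the hole distribution (flat), not |psi|^2.

N = 50                                   # screen sites
TICKS = 200_000

# The banded |psi|^2 the impacts *should* follow under the Born rule:
psi2 = [math.cos(3 * math.pi * (i / N - 0.5)) ** 2 for i in range(N)]
norm = sum(psi2)
psi2 = [p / norm for p in psi2]

hits = [0] * N
for _ in range(TICKS):
    hole = random.randrange(N)   # hole wanders uniformly
    hits[hole] += 1              # forced emission -> impact at the hole

freqs = [h / TICKS for h in hits]
# freqs is approximately flat (~1/N everywhere): the bands are gone
```

Here the distribution of impacts is just the distribution of holes, exactly as argued above.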
Quoting Kenosha Kid
Why not? If I understood it correctly, the reflected wave is out of phase with the advanced wave, so it must cancel it. That the time direction is reversed means that the cancellation occurs everywhere at once, so that to an observer it is as if neither wave ever existed. Only the advanced wave back towards the BB is cancelled; the retarded wave from the emitter is out of phase with the advanced wave, which means that it is in phase with the reflected wave.
Difficult to say. He doesn't say more than 'We will not discuss Type II transactions further' or words to that effect in the original paper. (Recall that retarded and advanced waves cancelling are Type II transactions.)
Quoting SophistiCat
Not *only*: the wavefunction of the emitted electron still matters; my point was rather that it can't be the *only* thing that matters.
In TQM itself, the probability of a transaction causing absorption at (r, t) is the amplitude of the retarded wavefunction arriving at (r, t) times the amplitude of the advanced wave travelling backward from (r, t). So it depends on the probability amplitude of *both* waves.
In your example of a screen that has only one absorption site at any one time, only this site can backwards-emit a hole wave. In the language of TQM, only this wave can handshake with the retarded wave, since the amplitude coming from all other sites is everywhere zero.
However, that single hole will move around the screen and, on average, should be smeared out such that the probability distribution we see forming is given only by the retarded wave.
Quoting SophistiCat
It's not that it's obliged to because of the experimental setup: it will do whether we fire electrons at it or not. In essence, this is what entropy is at the quantum mechanical level: the effectively random exploration of energetically equivalent microstates. If the screen, without us firing electrons at it, stayed in the exact same microstate, with the same single acceptor site, it would effectively be a highly ordered system. Another way to look at it is the fact that the hole is its own quasiparticle, with its own wavefunction obeying a wave equation. Just as we wouldn't expect an electron to stay put in the absence of a driving field, likewise we wouldn't expect the hole to stay put. It'll move around the screen just like an electron would.
Quoting SophistiCat
In the case of no acceptor sites, this is what happens. It's an interesting property of current-carrying systems (which the double-slit experiment basically is), as described in quantum transport theory (my particular field), that the notion of electrons being driven by external electric fields is redundant. In actual fact, all current-carrying systems behave probabilistically and thermodynamically: an electron leaves the cathode if and only if the anode has a hole available with the same energy. (Compare a partitioned box arranged such that one half contains more particles than the other, both halves being in their ground state, with particles filling up energy levels up to each half's characteristic energy. If we open a hole in the partition, higher-energy particles from the higher-energy half will move to fill holes in the lower-energy half, but not vice versa, due to Pauli exclusion. If those particles were charged, you'd have a battery.)
Here's an illustration that contains the main point. Current (in natural units) here flows left to right not because the system exhibits an electric field across itself but purely because of the *chemical potential difference* between the source and sink. The electron source on the left has electrons filling energy levels higher than on the right, thus electrons move to the right, thus a current. If the source and sink levels were equalised, no current would flow (or, as is described by quantum transport theory, no *net* current will flow). If the sink level was higher than the source, electrons would move from right to left.
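The chemical-potential picture above can be illustrated with a crude Landauer-style sum. The parameters below are my own toy choices (transmission set to 1, energies and current in arbitrary natural units); the point is only that the sign and size of the net current follow the occupation difference:

```python
import math

# Crude Landauer-style sketch (toy parameters, arbitrary units): the net
# current is set by the occupation difference f_L(E) - f_R(E) between
# source and sink, not by a field across the conductor.

def fermi(E, mu, kT=0.025):
    """Fermi-Dirac occupation at energy E for chemical potential mu."""
    return 1.0 / (1.0 + math.exp((E - mu) / kT))

def net_current(mu_L, mu_R, kT=0.025):
    # I ~ integral of T(E) * [f_L(E) - f_R(E)] dE, with T(E) = 1
    # and a crude Riemann sum over 0 <= E < 2.
    dE = 0.001
    return sum(
        (fermi(i * dE, mu_L, kT) - fermi(i * dE, mu_R, kT)) * dE
        for i in range(2000)
    )

print(net_current(1.2, 1.0))   # source above sink: positive (left-to-right) current
print(net_current(1.0, 1.0))   # equal chemical potentials: no net current
print(net_current(1.0, 1.2))   # sink above source: the current reverses sign
```

Equalising the two chemical potentials kills the net current, and raising the sink above the source reverses it, matching the description above.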
This is another example of electrons behaving as if they know where they're going before they leave, or rather not leaving because there's no place to go.
This is the point I made way back at the beginning of the thread, about radiant energy. The hot will only radiate to the cold, because of that disequilibrium, and reversal is not possible. But this is just a feature of how we understand that activity which is radiant energy. It is an epistemological feature which makes the idea of radiating energy into empty space somewhat incoherent, and it makes what is referred to as spontaneous emission, unintelligible, as random. But it is folly to say that the energy knows where it is going, because what is really the case is that we only understand radiation through its absorption; so the idea of radiation which appears like it does not know where it is going is something we haven't learned how to grasp yet, and is therefore unintelligible to us.
Well, the confirmation wave is just an echo of the offer wave: its amplitude is proportional to the amplitude of the offer wave at the would-be absorber. So the information carried back by confirmation waves is redundant, as far as the system as a whole is concerned - it's just a mechanism for establishing a transaction in accordance with the Born rule.
Quoting Kenosha Kid
If the hole moves around independently of the impacting wave, while emissions are a Markov process, i.e. a transaction is established whenever a hole is available (as you explain below), with no "knowledge" of what comes before or after, then the resulting distribution of impacts will be independent of the impacting wave. It will only depend on the entropic movement of the hole - most likely just a uniform distribution.
Some dependence on the wavefunction will manifest when multiple holes are available at the same time, but unless there are a whole lot of them (rather than "not many"), the distribution will be blurred.
I'm not sure why you think so. The electron doesn't have to be transmitted at all. In fact, wherever the hole is located, we expect no electron to be transmitted most of the time. Any time the hole is at a site where the probability of finding the electron (as given by its wavefunction) is zero, no transmission event will occur at all (i.e. you cannot fix the emission rate at one particle per millisecond and actually get an event every millisecond if the only available hole is sometimes at an inaccessible site).
This is the analogy with the wire and the source and sink reservoirs. No electron ever leaves the source that cannot fit into the sink. (A nice sanity check that the idea that electrons know where they're going before they leave isn't too exotic.)
Similarly if the probability of finding the electron at a given site is 0.2, you would expect an electron to transmit there when there is a hole there at most 20% of the time.
In reality, the screen is more complex, and electrons will usually be able to squeeze in somewhere. But there should, as per Pauli, always be places into which it cannot squeeze, and that is neglected in idealised treatments.
In our example of a diffraction through slits the wavefunction is non-zero almost everywhere on the back screen, so that is not an issue. If an electron is ready to fire, and there is (in the edge case) just one hole that it can fill, then it will go there almost always, because where else would it go? Which means that the distribution over time will just be the distribution of holes popping up on the screen. We could put one slit, or two, or ten - doesn't matter, the distribution will be the same.
(Unless holes and/or emission events conspire to construct the distribution that we expect to see.)
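The two positions can be contrasted in a toy simulation. On one hedged reading of the acceptance rule described earlier (transact with probability proportional to the wavefunction's probability at the hole's site, e.g. at most 20% of the time at a 0.2 site), the process is just rejection sampling, and a single wandering hole does recover the banded pattern over time:

```python
import math
import random

# Hedged toy reading of the acceptance rule (assumptions mine, not
# Cramer's formalism): one hole wanders uniformly, but the transaction
# only completes with probability proportional to |psi|^2 at the hole's
# site; otherwise nothing is emitted that tick. This is rejection
# sampling, so accepted impacts recover |psi|^2 even though only one
# site is ever open at a time.

N = 50                                   # screen sites
TICKS = 500_000

# An arbitrary banded |psi|^2, standing in for the two-slit pattern:
psi2 = [math.cos(3 * math.pi * (i / N - 0.5)) ** 2 for i in range(N)]
peak = max(psi2)

hits = [0] * N
for _ in range(TICKS):
    hole = random.randrange(N)                 # entropic wandering of the hole
    if random.random() < psi2[hole] / peak:    # Born-rule acceptance
        hits[hole] += 1                        # transaction completes here

total = sum(hits)
freqs = [h / total for h in hits]
expected = [p / sum(psi2) for p in psi2]
# freqs tracks expected closely: the bands survive the one-hole screen
```

Without that acceptance step, the simulation degenerates into the flat, hole-dominated distribution that the objection above predicts.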
Quoting Kenosha Kid
Not if that's the only place where it can go, or one of the few places where it can go. Perhaps the emitter is picky and won't always transact with a hole just because it's available? So in your example if the only available hole is at a 20% probability spot, then 80% of the time the electron will wait out until another hole opens up (and then make another probabilistic decision). That would work, but where is the mechanism for this process?
Quoting Kenosha Kid
Yes, but in order to explain experiments where we see nice diffraction patterns, we must conclude that the number of holes available at any given time is not too few, or else we would be seeing something different (or we need to modify the theory).
That's my point, it doesn't have to go anywhere. For an electron to transmit, there will typically be a chemical potential difference between the source (here a cathode) and the destination (here a screen). If the only available site lies where the electron wavefunction is zero, no electron will transmit, exactly as if there are no available sites at all, which is the case when the source and sink are at equal chemical potential (zero voltage).
Quoting SophistiCat
Effectively, yes. TQM is a conspiracy of sorts.
Quoting SophistiCat
Exactly this. Quantum systems are whimsical :D
Quoting SophistiCat
Well, as I said, so long as, over the lifetime of the experiment, the holes on average occupy a uniform distribution, we will obtain the characteristic banded pattern.
Sure they would appear to "conspire", because the fields associated with the emitting device would overlap and interact with the fields associated with the absorbing device. Fundamentally, the emitter cannot be separated from, to be considered as independent from, the absorber.
Quoting Kenosha Kid
So if the holes (at the screen and elsewhere downstream) don't participate in the conspiracy (indeed, such an extended conspiracy would seem problematic), and there aren't many holes available at any one time, then the emitter has to time its transactions so as to build up the right pattern over time. Indeed, you need to posit that the process of picking the site of the transaction unfolds not instantaneously, as Cramer posits, but in all 3+1 dimensions. That's the only plausible solution that I can think of.
Is it plausible? I suppose the bare-bones theory (not including a specific mechanism for the Born rule) does not rule it out, and neither does its empirical basis, which consists of just such accumulation over time of apparently probabilistic events.
The appearance of conspiracy has a very limited representation in relation to what is real. As I said above, it represents something epistemic. What it represents is our inability to actually understand the true nature of emission and absorption. The one, absorption, is dependent on the other, emission, in its real existence, yet emission is not understood, and is represented as a mirror image of absorption, which has a dependence on emission, which is not understood. Therefore it is simply a vicious circle of misunderstanding, manifesting as the appearance of conspiracy.
Quoting Metaphysician Undercover
From what little I know of the subject, these are reasonable comments.
(Oddly, I am working on a problem in elementary complex dynamical systems that has me caught up in a circular argument that must be broken: To show a certain sequence converges one needs to show it is bounded, but to prove it is bounded reflects back to its convergence behavior. It looks like a step-by-step argument alternating between the two will be the ticket.)
That's exactly what experiment-based theorizing in science accomplishes. To see how it figures into my account of relative superposition amongst wavelengths and wavicles, give my most recent post in the "Anatomy of a Wave and Quantum Physics" thread a look. (Trying to get some more commentary on those ideas, which might associate in many ways with your guys' thread, though I haven't read much of this).
Well, I'm being careful to distinguish between transmission and emission. Emission can be described as the spread of a single electron wavefunction from the tip of the cathode. Transmission is emission + absorption. In standard QM, transmission has occurred when we detect an electron on the screen. Emission by itself cannot, as MU keeps saying, be observed directly and independently (well, it can, but not without destroying the interference pattern).
So the electron wavefunction may well continue to evolve but simply not collapse. In TQM, the same holds: the retarded wavefunction can evolve indefinitely; it is only when the transaction with the advanced wave occurs that transmission occurs. As per the OP, the emission occurs precisely because the transmission occurs, i.e. it is simply one of the boundary conditions of a process that is agnostic about any arrow of time.
Quoting SophistiCat
True, hence my interest in Type II transactions, which, if they existed, should be empirically observable and presumably would differentiate TQM-like interpretations from others empirically.
But just to stress, the aim is not really to argue for a particular interpretation of QM. I don't really have any beliefs about it precisely because it is not something evidence can shine a light on, although the OP best describes a tentative ordering narrative going on in my little headbox. Rather, the point is about rejecting premature non-deterministic conclusions from naive quantum mechanical treatments of systems we cannot solve the many-body Schrödinger equation for. QM != non-determinism.
Pardon my interjection, but could you guys briefly outline the properties of wave motion? How does the velocity or oscillation of an electron in an atom vary from one traveling in a beam or current, and how does this compare to electromagnetic radiation in various contexts? Could kinds of waves exist that travel faster or slower than what we have thus far measured? Why is the speed of light considered a top velocity in popular physics? Seems to me that wave mechanics are at the core of quantum foundations, so I want to get a summary handle on what waves do.
Don't you think that this is a sort of odd way of looking at things? We think of emission as occurring at a point in space and time, at the cathode, and absorption occurs at a point in space and time at the screen. However, there is a spatial-temporal separation between these points, and the concept of transmission accounts for that spatial-temporal extension. This would imply two distinct acts, emission and absorption, with a spatial-temporal separation between them.
To make emission/absorption into one single activity you need to dissolve the spatial-temporal separation between them. The cathode and screen must directly interact. Traditionally, the radio tower emits the information, the radio receives it, and the spatial-temporal separation between them is covered as transmission in the form of waves. Now, you could say that the wave field is a property of the transmitter, and this wave field interacts with the wave field of the receiver; then you would have the premise for a direct interaction between transmitter and receiver, making your statement (above) true. The radio tower interacts directly with the radio by means of their wave properties, making emission and absorption one and the same act. This would be like saying that the sun warms the earth by touching the earth, the electromagnetic field properly being a part of the sun which is in contact with the earth. This would negate the premise of space between the object and the eye: when seeing an object, it actually touches the eye through field interaction. Also, the idea that two distinct objects cannot occupy the same space would be faulty by this premise. This would allow an object to be understood in its entirety rather than just as it appears to our senses. We already know that objects really overlap each other, by the effects of gravity, which is a property of objects.
An electron in an atom is *bound*: unless it is supplied with enough energy (ionisation energy), it cannot move away from the atom. In this sense, it has no velocity, but it still has momentum, e.g. the angular momentum that, along with energy, identifies its state. Bound states have discretised energy levels: only certain energies are allowed.
By contrast, plane waves are *unbound*: they can have any energy and momentum. However, by definition, plane waves already occupy all of space, and so don't move anywhere either. In between these two extremes, an electron starting from a confined volume (such as a freshly ionised electron) will either spread out from the region of confinement (a circular wave) or move in a more well-defined direction and spread out as it does so (a wave-packet).
Frequency is just energy with different decorative physical constants: [math]E = \hbar\omega = h\nu[/math], or simply [math]E = \omega[/math] in the natural units used here.
As per Cramer (and his predecessors), there can be no emission without transmission in the absorber theory, whether classical or quantum. "Absorber theory, unlike conventional quantum mechanics, predicts that in a situation where there is a deficiency of future absorption in a particular spatial direction, there will be a corresponding decrease in emission in that direction." (I don't think you disagree, since that is also the premise of your hypothesis - just pointing this out, because what you wrote might suggest otherwise.)
In Type II transactions there is still an "absorber" - well, let's call it a "partner emitter."
I wonder though whether the absorber theory actually rules out, logically or empirically, uncollapsed/unabsorbed waves?
Quoting Kenosha Kid
By the way, in the 1980 paper Cramer wrote: "Davies argues that the most general test of absorber theory would include the possibility of type II transactions." This refers to a 1975 paper by Paul Davies: On recent experiments to detect advanced radiation. Perhaps that would be a good place to start digging in that direction. (Google Scholar doesn't list Cramer's paper among the references.)
It would be more accurate to say that in a situation where there is a deficiency of future absorption in a particular spatial direction, there will be a corresponding decrease in emission in that direction towards the future. There is always a past absorber in the Big Bang Universe in any spatial direction. In spatial directions where the radiation is reduced towards the future, the radiation towards the past is increased.
In the experimental attempt to detect advanced radiation in the late 70s (published in 1980), Schmidt and Newman used a half-wave dipole as an instrument of observation: "A search for advanced fields in electromagnetic radiation". With this, they introduced a future absorber in the spatial direction in which they tried to detect advanced radiation. So in that spatial direction they increased the radiation towards the future and reduced the radiation towards the past (figure 3), therefore it is no wonder that they had a negative result.
I made the same mistake in the first attempt to replicate their experiment. I almost gave up after numerous failed attempts. Then, after I read the paper: "Radiated power and radiation reaction forces of coherently oscillating charged particles in classical electrodynamics" I decided to replace the half-wave dipole with a lambda / 20 receiving antenna, which is a much more inefficient absorber, and positive results began to appear almost immediately: "Measurement of advanced electromagnetic radiation".
Yes, I am in the sense that I would insist at least on an ultimate absorber: an electron hole (positron) for an electron, less so on the short-range behaviour Cramer describes. But even in TQM, the retarded wavefunction is emitted whether it transacts or not. It could potentially go on forever, but the emitting system would be considered as in the same state (no emission). By contrast, standard QM has a probabilistic emission: the emitting system is in a superposition of having emitted and not emitted. When the former is 0.9, we can be 90% sure emission has occurred, whether or not we detect a corresponding absorption (in principle), and it's for this reason I was being careful to discuss transaction.
In standard QM, the electron can be emitted without a future absorption event.
In TQM, the electron wavefunction is likewise emitted regardless of any future absorption event, but will only transact if there is a corresponding advanced wave.
In the OP, there is no emitted wavefunction without an absorbed one.
Quoting SophistiCat
The wavefunction spreads out irrespective of future absorbers, just like in standard QM. It's just that the probability of emission does not approach 1 in the absence of absorbers in TQM.
Quoting SophistiCat
Ah, great minds. And my little one. Yes, it should be experimentally detectable and discernible from other interpretations of QM.
This concerns a paper from last year that only came to my attention today. The paper is here:
https://www.nature.com/articles/s41586-019-1287-z
and a frankly poorly written layperson article from Scientific American is here:
https://www.scientificamerican.com/article/new-views-of-quantum-jumps-challenge-core-tenets-of-physics/
(I actually found its analogies harder to follow than the original paper, but others might get something out of it.)
A Wiki article on Quantum Trajectory Theory is here:
https://en.wikipedia.org/wiki/Quantum_Trajectory_Theory
This has nothing to do with advanced waves or reversible processes, but rather concerns a deterministic element of quantum transitions (historically the sole purview of the non-deterministic wavefunction collapse) and the role of the measurement apparatus in its unpredictable aspect.
TL;DR version: The experimenters built an electronic qubit and found that, when it was weakly coupled to the measurement apparatus, the transition between two of its energy levels was both deterministic and predictable. This transition constitutes a trajectory from the lower to the higher energy level through a predictable continuum of superpositions in a finite amount of time.
When the measurement apparatus is more strongly coupled, this finite interval is no longer guaranteed: the system might never reach the end of the trajectory. Furthermore, at any time, it might collapse unpredictably back to its initial state.
Relevance: Part of the OP and much of my conversation with Cat discussed the importance of the precise state of the measurement apparatus to the experimental outcome, something that generally cannot be known because of the intractably large number of degrees of freedom (entropy). In the OP, this is discussed in the context of the transactional interpretation of QM, in which the evolution of the electron wave as it moves toward the screen is 'realised' (made real) by a time-reversed advanced wave coming from the screen to the electron source. Without knowledge of the advanced wavefunction, we cannot determine where the electron will go: the best we can muster is to realise the electron wavefunction with itself (the Born rule) and get a statistical distribution of possible outcomes.
One of the discussions Cat and I had regarded making the macroscopic screen more quantum by preparing it such that there was only one possible acceptor site, in which case one would expect a deterministic flow of electrons from the source to that one site, the rate given by the voltage between the source and the acceptor, and by the amplitude of the electron wavefunction at that site. (Here, keeping the site as an acceptor site is equivalent to giving it a continuous supply of advanced electron holes.)
This is not dissimilar to the experiment in this paper, which also describes a deterministic trajectory from source (here two nearby states |G> and |B>) to the only possible destination |D>. When there is negligible coupling between |D> and its environment, transitions to that state are smooth, deterministic and predictable. When it is more strongly coupled, the determinism is apparently destroyed.
We can liken this to an acceptor site that is inconstant in its supply of holes, or, rather, is coupled to its environment such that electrons from outside the apparatus can dip their toes in the acceptor site. Since this is not something we can possibly describe, it is effectively stochastic, giving us the apparent non-determinism we are familiar with in QM.
The importance of this paper is that it significantly narrows down the source of this apparent non-determinism to precisely the thing we are ignorant of: the measurement apparatus itself! In a perfectly deterministic universe, ignorance about important causal factors will yield unpredictable effects. We do not need an additional intrinsic non-determinism to explain the apparent probabilism of these sorts (i.e. reversible) of events.
Not sure if I should resurrect this thread, but feel the compulsion so who am I to resist? Reading a book on quantum physics that describes retarded/advanced wave theory, and while trying to envision this thought experiment it occurred to me that the mechanism seems to be the same as a lightning strike, though the rates of transmission may differ. Does lightning demonstrate this theory observationally, and what level of insight might be lent to a "quantum handshake" model of the double-slit experiment? Since I'm no expert at this point, I'll let you guys ponder the idea from scratch instead of awkwardly trying to explain it all at the outset and see what you think.
Yes, very apt. The possible trajectories are explored down from the sky to the ground, but only once it reaches the ground does it become lightning. Of course, lightning will still explore different endpoints, but it's still a canny comparison.
If this is being preserved for posterity, I might go back and edit my embarrassing snapping at MU :fear:
I'm glad you liked it! I think I can actually specify this pretty well since I've been ruminating on it awhile.
Almost immediately after the retarded "wave" leaves (whichever side it starts from, probably the absorber side paradoxically, just as a lightning bolt originates from the ground, induced by the emission charge that is comparable to a thunder cloud), the complementary advanced wave arrives, then the two interfere to produce a new retarded wave in the forward direction (towards the emitter) while the retarded wave in the backward direction cancels out the original retarded wave.
The new advanced wave dissipates into the absorber somehow and similarly interferes with the emitter's retarded wave. The stair-stepping retarded/advanced waves from the emitter and those from the absorber rapidly close the distance between them until they contact each other and the quantum handshake occurs, a surge of charge briefly connecting the emitter and absorber directly, like a lightning bolt, in this case invisible. (whew, hopefully I didn't retard that description!)
The double-slit produces a symmetry in the chamber’s charge that makes the absorber’s charge-active sites comparably symmetrical, resulting in what looks like a precise interference pattern on the fluorescent screen despite the haphazard, seemingly randomized nature of each individual transmission “handshake” and its equal likelihood of passing through either slit (though abiding by the path-integral concept).
It would be fantastic to run this in slow motion or while measuring the charge distribution in the double-slit chamber, and maybe analyzing lightning could give a clue as to the coordination of causal vectors and rates in this type of process.
Is an analogy between the double-slit mechanism which produces an interference pattern from a beam of light and that generating a statistical distribution from a beam of electrons entirely fallacious?
Does a statistical distribution similar to that of the electron experiment arise from molecules with up to a thousand atoms by this "lightning bolt" mechanism also, perhaps involving sizable charge polarities and "smearing" within the particle as it travels that temporarily reconstitute its structure into an entirely different form?
What do you guys think?
The photon shows up as a single speck on the fluorescent screen, with an interference pattern built up as the statistics of many individual particles? The original double-slit experiment diffracted a beam of light into a spreading field before it reached the double-slit, so it was most certainly traveling through both slits simultaneously to interfere with itself, but the modern double-slit experiment could be different.
Exactly right.
Quoting Enrique
What the low-frequency experiment tells us is that it isn't one photon from the beam interfering with another photon, but each photon interfering with itself, likewise for electrons. If there was an additional many-photon interference effect, it should be evident in the interference pattern on the screen.
I think my model suggests that the particle passing through both slits to interfere with itself in only the third dimension is an illusion. If this is accurate, woohoo! The experiment still suggests a wave function, but a single wave propagates both backwards and forwards in time, intermittently collapsing itself in some sense. This model may only apply to a very specific kind of process and context, with wavicles in nature propagating in many more orientations than temporally forward and backward, hence the complexity of our holographic universe. But the form of this holography is partially an outcome of perception's nature, thus the observer-dependence of experimental observations enigmatically brought into sharp relief by the supraintuitional oddities of quantum physics.
I'm not sure where holography comes into it though.
A wave function collapsing itself in the double-slit experiment is relatively simple compared to most of what happens in nature, an extremely parameterized holography even in the context of three dimensions, basically like a lightning bolt. In environments outside the lab, the holograph (as viewed in lower dimensions) and its background is usually much greater in complexity, more like a supradimensional tapestry with standing waves and intricate flow at morphing rates. Maybe a particle is merely a standing wave?
Spot on, but in 4D instead of 3D. A good analogy is Bloch waves: steady-state waves in periodic structures such as crystals. A stationary Bloch state has no momentum either because a) it consists principally of zero-momentum terms, or b) it consists of non-zero-momentum terms in equal and opposite measure. The latter can be seen as a superposition of, e.g., particles moving forward through the crystal and particles moving backward. In fact, statistically we cannot distinguish between two particles in these stationary states and two moving with equal speed in opposite directions: the many-body wavefunction is the same in each case.
The idea expressed in the OP, and by yourself above, is a 4D generalisation of this. Each particle is a 4D standing wave that can be decomposed into a part moving forward in time (the retarded wavefunction) and a part moving backward in time (the advanced wavefunction). The two aren't exactly analogous: in Bloch waves the combination is additive, here it is multiplicative, but they are very similar.
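The cancellation in case b) can be made concrete with a minimal numerical sketch. This is a hypothetical 1D toy model, not a real Bloch calculation: an equal superposition of right- and left-moving plane waves e^{ikx} and e^{-ikx} is a standing wave whose expectation momentum vanishes, because the equal-and-opposite momentum components cancel.

```python
import numpy as np

# Toy 1D model (hbar = 1, arbitrary units): equal superposition of a
# right-mover e^{ikx} and a left-mover e^{-ikx} on a periodic box.
k = 2.0
x = np.linspace(0, 2 * np.pi, 2048, endpoint=False)
dx = x[1] - x[0]

psi = (np.exp(1j * k * x) + np.exp(-1j * k * x)) / np.sqrt(2)

# Normalise on the box.
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

# <p> = integral of psi* (-i d/dx) psi dx, with a finite-difference derivative.
dpsi = np.gradient(psi, dx)
p_expect = (np.sum(np.conj(psi) * (-1j) * dpsi) * dx).real

print(f"<p> = {p_expect:.6f}")  # ~0: the +k and -k momenta cancel exactly
```

Despite being built entirely from moving waves, the state itself goes nowhere: its probability density 2cos²(kx) is a stationary standing-wave pattern.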
This is my impression of the ideas so far (solely for educational purposes, don't come down on me too hard). Curious to see to what extent your knowledge corroborates it.
In the double-slit experiment a wave packet or "wavicle" travels through one or the other slits, but either option is equally probable across many trials, though fundamentally deterministic (thus far immeasurably so) in relation to a single wavicle. An apparent "interference pattern" is not generated by diffraction through the slits but rather produced by peaks of charge distribution along the absorber's surface rendered symmetrical by the slits, which initiate the various trajectories of wave packets in coordination with the emitter charge and determine the statistical range of possibility for endpoints.
This is near one pole of the wavicle spectrum as it exists in Earth environments, a case that can be modeled somewhat simplistically as an almost ideal duality of retarded and advanced wave related in a precisely multiplicative way during propagation of the wavicle's path, the total wavicle smeared out in an especially linear four dimensionality as it moves.
A crystal is near the opposite pole of the Earthbound wavicle spectrum, modelable as retarded and advanced waves which are instead a nearly ideal additive duality, amounting to such equilibrated, counterbalanced motion that the wavicle appears stationary even upon very large magnification, an especially polygonal four dimensionality.
Most conventionally multiwavicle matter exists somewhere in between, with waves of many varying and fluctuating rates interfering in complex patterns that may deviate greatly from four dimensionality as it applies to the poles of the retarded/advanced wave model, creating a huge range of relatively local entanglement phenomena.
All of this entanglement exists within fields or clouds of charge that are a medium for nonlocal causality. Charge is the nonlocal facet of entanglement in electromagnetic matter.
The science of atomic chemistry adequately (but perhaps not ideally) models a large portion of the entanglement spectrum.
Generally that's not a good idea. For instance, if I edited out everything I said that was embarrassing to me, there really wouldn't be much left.
In that case, I
My next lecture will explicate quantum mechanics as the golden path to fourth-dimensional world peace! It's the advanced wave of the future, man! lol
By the way, this experiment has reputedly been performed with more than two slits. My model predicts that, within fixed total-width parameters, the particle has a roughly equal chance of traveling through any of the centered, equally sized and spaced slits. The erroneously regarded "interference" pattern on the detector screen will then vary symmetrically in proportion to emitter position, as well as slit quantity, width and placement, predictable according to some kind of mathematical formula. Is this accurate?
:rofl: Quoting Enrique
It will vary depending on the distance from the cathode to the slits, the slits to the screen, the distance between the slits, the widths of the slits and the voltage of the cathode.
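To make one of those dependencies concrete, here's a rough back-of-envelope sketch. All the numbers (voltage, distances, slit separation) are assumed for illustration, not taken from any particular experiment: the cathode voltage sets the electron's de Broglie wavelength via λ = h/√(2·mₑ·e·V) (non-relativistic), and the fringe spacing on the screen is approximately λ·L/d, where L is the slits-to-screen distance and d the slit separation.

```python
import math

h = 6.626e-34    # Planck constant, J s
m_e = 9.109e-31  # electron mass, kg
q_e = 1.602e-19  # electron charge, C

def de_broglie(voltage):
    """Non-relativistic de Broglie wavelength of an electron accelerated
    through `voltage` volts."""
    return h / math.sqrt(2 * m_e * q_e * voltage)

# Assumed toy geometry, not from any real experiment:
V = 50e3   # 50 kV cathode voltage
L = 1.0    # slits-to-screen distance, m
d = 1e-6   # slit separation, m

lam = de_broglie(V)        # ~5.5 picometres
spacing = lam * L / d      # ~5.5 micrometres between fringes
print(f"wavelength = {lam:.3e} m, fringe spacing = {spacing:.3e} m")
```

Raising the voltage shortens the wavelength and squeezes the fringes together; widening the slit separation d does the same, which is why the pattern depends on all the listed parameters at once.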
Is a triple-slit experiment with the same setup as the textbook double-slit an on average 33% probability, a quadruple-slit 25%, etc.?
That interpretation doesn't make sense to me. It fails to account for why a detector at one of the double slits only registers a particle half the time, or for the apparent randomness of localized absorber contact among even dozens of particles. Technically it's possible for an electron "lightning bolt" to divide in some proportion, travel through all slits, and then recombine on the opposite side to contact a single spot on the fluorescent screen, but this seems highly unlikely for particles consisting of hundreds of atoms.
I think researchers managed to radically misinterpret the experimental results because of their reificational love affair with solving the wave function. The real source of absorber pattern statistics must be relative distribution of negative charge along its surface, not particle interference.
It's easier to see it in the case of photons. The photon must travel through the undetected slit and not be destroyed or the detected one and be destroyed. There's no possibility in that case of interference.
I'm not aware of any direct evidence that the particle travels through both slits simultaneously as a wave, then recombines into a particle. It might be the case for photons while not for much more massive particles, but what could possibly be the mechanism?
It seems more plausible to me that the detector at the slit affects the overall charge distribution in the chamber such that all possible pathways are influenced to produce bright bands instead, rather than disrupting some sort of superposition or entanglement of the wavicle with itself at multiple slits. Even Schrödinger hated the collapse of the wavefunction concept, and he invented the wavefunction!
The interference pattern is the evidence. That's how they know particles are waves. The single-electron experiment was performed in 2002: https://physicsworld.com/a/the-double-slit-experiment/
Quoting Enrique
Electrons are waves: waves diffract and interfere.
What is the evidence that a single emitted electron is a wave spanning multiple slits, and does this evidence obtain for molecules also? When the wavicle contacts the fluorescent screen, it seems to me it is closer to a particle with a definite trajectory than to a collapsing wavefunction spanning a large swath of the chamber, though electrons do of course evince properties in many contexts that can't be explained unless they are spatially diffuse in ways exceeding, for instance, a grain of sand.
I linked to an article on the paper.
Probably involves mathematical parameters that are difficult to explain in a simple message board post. Not a physicist, but maybe I'll take a look at it.
Ah okay. Basically the interference pattern on the screen is determined by how the wave propagates to it. The pattern is what's called a Fourier transform of the slit setup. For a single slit, this is something like a bump. There are no dark and light bands because there are no positive and negative fields to cancel each other out. The bulk of the wave passes in a straight line between the cathode and the screen, with some spreading out as it passes through the slit.
To get the dark and light bands of the interference pattern, you have to have multiple sources that can reinforce or cancel each other out. You can't get this kind of pattern with a wave and a single slit, and you can't get this pattern with point particles and multiple slits, as these would just produce multiple copies of the same thing you'd get with a single slit.
Putting a detector behind one of the slits basically gets you back to the pattern you'd get with point particles. The wave has to get past the detector or not, so go through one slit or the other. You can't get interference this way. You can only get interference if it goes through one slit *and* the other.
Sorry this nonexpert is getting so nitpicky, but I'm curious. Does observable evidence exist that the particle travels through both slits during a single trial, or is that only an assumption? I've read that an "interference pattern" results from molecules with up to a thousand atoms. It seems to me these must be very much like point particles compared to an electron, so claiming they travel through both slits could be problematic. For the relatively large molecule at least, the endpoint on a fluorescent screen would then be produced by retarded/advanced waves linking up approximately halfway along the path determined by one of the slits, not both, meaning the wavicle stretches more linearly rather than spreading horizontally, though over many trials the particle is equally likely to travel through either slit.
But it depends on the mathematical specs of distance between the slits, slit width, particle size, charge distribution, etc. Don't have the foggiest notion of exact proportions.
Failure to get the interference pattern from molecules larger than a thousand atoms could be the result of the molecule being too large to be influenced by an absorber's charge as induced in concert with the emitter, like a thunderstorm, not a "collapse of the wave function" decoherence caused by entanglement properties at the slits. The sensor might somehow affect trajectory through either slit (in separate trials) even though it only records the particle at one of them, so that decoherence in this case is not a feature of the particle itself. This could imply that decoherence is not as large a constraint on particle behavior as might originally have been thought, allowing more quantum degrees of freedom to be anticipated for molecules in nature.
Is an "interference fringe" the interference pattern on the absorber or something else?
So an electron beam interferes with itself as it travels through a crystal or when divided by a metal filament, that's pretty certain. But the interference pattern can't account for why the particle shows up as a point on the screen instead of a wave. Looks like the double-slit experiment works with molecules having as many as two thousand atoms, so the postulated "interferes with itself" mechanism has not reached its limit, but seems dubious to me nonetheless.
How can a diffuse wave interfere with itself to form a single particle on the screen? It doesn't make any sense. What is the direct evidence of diffraction at both slits simultaneously?
I'm not sure what you mean by "to form a single particle on the screen". It interferes with itself. Not to do something: that's just what it does anyway.
Unless you just mean "how does it end up as a point on the screen?" That's the measurement problem described in the OP:
Quoting Kenosha Kid
Quoting Enrique
It's worth noting that the experiment requires very specific molecules to work at large masses, so hidden variables must exist. It works with a buckyball, and there's no way one of those is divided in two by the slits, transcending its chemical bonds completely, only to recombine on the opposite side and end up as a point on the screen.
I'm suggesting the primary hidden variable is charge distribution in the double-slit chamber that materializes prior to the emitted particle reaching the slits, which parameterizes statistical distribution while determining the particle's trajectory in the same retarded/advanced wave manner as a lightning bolt. Do you find this explanation at all compelling?