Quantum Mechanics and Hidden Variable Theories

(Bell's Inequality)

Quantum Mechanics (QM) is a very successful theory for making predictions about the results of specific experiments. It has provided the foundation for the modern theories of Chemistry (Atomic Physics), Electromagnetics (E & M), Nuclear Physics and Elementary Particle theory, Optics, and many others. However, in terms of explaining those results, it leaves something to be desired. Therefore, QM theory is often broken into two parts: a formalism and an interpretation. The formalism is a set of rules for how to do the problems. The interpretation is the why.

In QM, a particle is described by a mathematical expression called a wave function in the Schrödinger formalism and a state vector in the Dirac formalism. In the remainder of this discourse, differences in these formalisms will be ignored and the term function will be used for either the state vector or the wave function. This function describes the particle as being smeared out over all of space. The value of the function (actually the value of the absolute square of the function) at any point is a measure of the probability that the particle is located at that point. The average calculated over all space yields the expected (mean) location of the particle. QM, therefore, provides its results in terms of probabilities of occurrence. An experiment that actually detects the particle, however, always finds it in a specific place, not smeared out. This is called “collapsing” the function.
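
In symbols, for a one-dimensional wave function this is the standard Born rule: the probability of finding the particle near a point x, and the mean position, are

    P(x)\,dx = |\psi(x)|^{2}\,dx, \qquad \langle x \rangle = \int_{-\infty}^{\infty} x\,|\psi(x)|^{2}\,dx,

with the function normalized so that \int |\psi(x)|^{2}\,dx = 1.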

No physicist (that I know) has any argument with the result of this “collapse”. The particle is there. The question that has been argued throughout the 20th century is where, or what, the particle was before the measurement. The Copenhagen Interpretation (CI), basically attributed to Niels Bohr, in essence says that the act of measuring the system disturbs it, and the very question of where the particle was prior to the measurement is meaningless. It further states that the probabilities associated with the function are not classical but form their own algebra, which is closed and complete. (Since the algebra of the probabilities is different, it actually belongs to the rules section, not the interpretation section.) This is an interesting and difficult concept for two reasons. The conceptual difficulty arises from its implication that, before the particle's location is measured, it is the function that is “reality” rather than the particle. The really interesting part, though, is that, if true, it provides some insight into a new matrix for a model of the universe.

Albert Einstein and others had a great deal of difficulty accepting the tenets of CI. This led to the EPR (Einstein, Podolsky, Rosen) hidden variable theory. In their theory, the particle always had a definite location; we just didn’t know what it was. They argued that the probabilities are classical and are used only to handle the fact that we don’t have all the information available. Hidden Variable theories are also sometimes called local realism theories.

To understand how intractable this problem has been requires only knowing that it is still being argued. On the surface it seems a relatively trivial argument. It has, however, recently led to a very disturbing thought: there is at least one very basic concept, which we as physicists hold dear, that is wrong.

The local realistic concept of nature in the EPR theories rests on three basic assumptions:

1: Realism: We believe that nature has an existence independent of us and that the regularities we see in it are a result of this reality.

2: Induction: We believe that conclusions about future events may be drawn from consistent observations (that is, that inductive reasoning is valid).

3: Separability: We believe that no influence can propagate faster than the speed of light. This assumption argues that it is conceptually possible to separate a system into two systems such that an observation made on one will not influence the results of observations made on the other.

These rules are essentially a common sense interpretation (albeit a somewhat classical one) of the world. They form the basis of causality: when consistent correlations are observed between events, the events are somehow connected, either both being the result of some other event or one event causing the other. Most physicists cling to these assumptions and will give them up only with reluctance. Nonetheless, due to the work of John Bell at CERN, it looks like one of them will have to be modified or discarded.

In 1964, John Bell derived the Bell Inequality based on the above assumptions. In general, his inequality places a limit on the extent to which distant events can be correlated. In contradiction to this, the QM rules (note the term "rules," not "interpretation") predict that the limit will be exceeded.
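
One common form of the inequality (the CHSH version, due to Clauser, Horne, Shimony, and Holt, and the one most experiments actually test) makes the limit explicit. For correlations E between measurements made with analyser settings a, a′ on one side and b, b′ on the other, any local hidden variable theory requires

    \bigl| E(a,b) - E(a,b') + E(a',b) + E(a',b') \bigr| \le 2,

while the QM rules allow the same combination to reach 2\sqrt{2} \approx 2.83 for suitably chosen settings.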

As an example, consider a system of two protons in a singlet spin state. The rules of QM state that if the spin of one particle is measured along any arbitrary axis and found to be up, the spin along the same axis on the other particle will be down. You don’t know which particle will be up and which will be down, but you do know they will be opposite. It also makes no difference whether the particles are allowed to separate or how far they separate; they remain in the singlet state, and the same result will occur whenever the measurement is made. Another rule of QM is that only one component of spin may be uniquely determined. If the Z component is measured on one particle, then all information about the X and Y components on either particle is immediately destroyed (randomized), and any measurement of either component on either particle will yield totally arbitrary results, regardless of where the particles are when the measurement is made. Therefore in QM, the information about the first measurement is "instantly" propagated to the second particle.
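
In Dirac notation, the singlet state and the QM prediction for the correlation of spins measured along analyser axes a and b (both standard textbook results) are

    |\psi\rangle = \frac{1}{\sqrt{2}} \bigl( |\uparrow\rangle_1 |\downarrow\rangle_2 - |\downarrow\rangle_1 |\uparrow\rangle_2 \bigr), \qquad E(\mathbf{a},\mathbf{b}) = -\,\mathbf{a}\cdot\mathbf{b} = -\cos\theta_{ab}.

Setting \theta_{ab} = 0 gives E = -1: perfect anticorrelation along a common axis, exactly as described above.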

Hidden Variable theories do not allow this instantaneous propagation. In them, if the two final locations are separate, information about one experiment can propagate to the second particle only at a finite maximum speed (the speed of light). Therefore, by sufficiently separating the particles, it should be possible to measure the result along one axis on particle 1 and to measure the result along a different axis on particle 2 before particle 2 "knows" about the measurement on particle 1. Since knowing the result for one particle allows you to infer the result for the other, this theory (classical probabilities) allows the knowledge of two components of spin rather than only one, as in QM.
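
The QM rule that only one spin component can be sharply determined is not merely interpretive; it follows from the fact that the spin operators do not commute:

    [S_x, S_y] = i\hbar\, S_z \quad (and cyclic permutations),

so assigning simultaneous definite values to two components, as the EPR inference would, contradicts the QM formalism itself.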

The basic premises of the classical view:

  • the parameters have definite values at all times, even in the absence of measurements;
  • valid inferences of their values may be made to satisfy universal laws; and
  • observations may be separated by a sufficient distance that they can be considered independent

therefore provide results that are markedly different from those obtained from the QM rules. Furthermore, due to the Bell Inequality, a test can be made to determine which viewpoint is correct.

One such experiment would be to create an ensemble (large number) of singlet state particles which are allowed to separate. At points far apart, the number densities of the X and Y components of spin of the resulting particle streams are measured and compared to the two theoretical results. Unfortunately for local realism, the results to date say QM is “correct”, leaving us with a dilemma: where is the error?
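
The contrast between the two predictions can be made concrete with a short numerical sketch. The Python below is illustrative only: the hidden variable model in it is a deliberately simple, assumed one, in which each pair carries a single shared random direction fixed at the source and each station reports the sign of that direction's projection onto its analyser axis. The script computes the CHSH combination S for both views.

    import numpy as np

    rng = np.random.default_rng(0)

    def qm_corr(theta):
        # QM prediction for the singlet: E = -cos(angle between analysers).
        return -np.cos(theta)

    def lhv_corr(theta, n=200_000):
        # Toy local hidden variable model: each pair carries a shared random
        # in-plane direction lam, fixed at the source. Station 1 reports the
        # sign of the projection of lam on its axis; station 2 is flipped so
        # that equal settings give perfect anticorrelation, as in the singlet.
        lam = rng.uniform(0.0, 2.0 * np.pi, n)
        A = np.sign(np.cos(lam))             # station 1, analyser at angle 0
        B = -np.sign(np.cos(lam - theta))    # station 2, analyser at angle theta
        return float(np.mean(A * B))

    def chsh(E):
        # CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
        # Local realism bounds |S| <= 2; QM reaches 2*sqrt(2) at these settings.
        a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
        return E(abs(a - b)) - E(abs(a - bp)) + E(abs(ap - b)) + E(abs(ap - bp))

    print(f"local hidden variables: S = {chsh(lhv_corr):+.3f}")  # about -2.0
    print(f"quantum mechanics:      S = {chsh(qm_corr):+.3f}")   # -2*sqrt(2), about -2.83

The toy model's correlation is linear in the angle between the analysers and saturates Bell's limit at exactly |S| = 2; the -cos(theta) correlation of the QM rules exceeds it, and that excess is what the experiments look for.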

My personal choice for modification is the separability (speed of light) assumption. Of the three assumptions, it is the one that would be the easiest to accept changing. Additionally, this is not the only observation that seems to indicate superluminal velocities. Rather than being a dramatic upheaval of physical laws, this may also be a pathway to a new view of the universe. There are a number of small irritating itches in physics, of which this is only one. We don’t have a good concept of energy. We seem to have started proliferating quarks just as we used to proliferate nucleons, and elements before that. We seem to have “lost” 99% of the universe somewhere. Perhaps the matrix we are seeing as the universe is not quite complete.

As a final statement here, I have to say that realism and inductive reasoning seem much more basic to me. If I didn’t believe in the existence of an independent universe, I would be some sort of god creating everything I can think of. I can believe in many things, but that’s a real stretch. As for induction, without it there would be little reason to study anything. There would certainly be no universal laws. We also do see consistency in our measurements. Experiments are repeatable. Differences occur, but they are usually small or at the limits of our theories. I wouldn’t think the results would be so consistent if inductive reasoning weren’t an acceptable procedure.