N.23 Fundamental assumption of statistics

The assumption that all energy eigenstates with the same energy are equally likely is simply stated as an axiom in typical books, [4, p. 92], [17, p. 1], [24, p. 230], [50, p. 177]. Some of these sources quite explicitly suggest that the fact should be self-evident to the reader.

However, why could not an energy eigenstate, call it A, in which all particles have about the same energy, have a wildly different probability from some eigenstate B in which one particle has almost all the energy and the rest have very little? The two wave functions are wildly different. (Note that if the probabilities are only somewhat different, it would not affect various conclusions much, because of the vast numerical superiority of the most probable energy distribution.)

The fact that it does not take any energy to go from one state to the other [17, p. 1] does not imply that the system must spend equal time in each state, or that each state must be equally likely. It is not difficult at all to construct nonlinear systems of evolution equations that conserve energy and in which the system runs exponentially away towards specific states.
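
As a crude illustration, not taken from any actual physical system: let two modes share a conserved quantity $p_A+p_B$, the stand-in for the energy, and let them evolve according to the nonlinear equations
\begin{displaymath}
\frac{{\rm d}p_A}{{\rm d}t} = p_A p_B
\qquad
\frac{{\rm d}p_B}{{\rm d}t} = - p_A p_B
\end{displaymath}
The sum $p_A+p_B$ stays exactly constant, but whatever mode B has drains away into mode A; once $p_B$ is small compared to $p_A$, it decays exponentially in time, and the system runs away towards the state in which mode A has everything.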

However, the coefficients of the energy eigenfunctions do not satisfy some arbitrary nonlinear system of evolution equations. They evolve according to the Schrödinger equation, and the interactions between the energy eigenstates are determined by a Hamiltonian matrix of coefficients. The Hamiltonian is a Hermitian matrix; it has to be to conserve energy. That means that the coupling constant that allows state A to increase or reduce the probability of state B is just as big as the coupling constant that allows B to increase or reduce the probability of state A. More specifically, the rates of increase of the probability of state A due to state B, and vice-versa, are seen to be

\begin{displaymath}
\left(\frac{{\rm d}\vert c_A\vert^2}{{\rm d}t}\right)_{\mbox{due to B}}
= \frac{1}{\hbar}\Im\left(c_A^*H_{AB}c_B\right)
\qquad
\left(\frac{{\rm d}\vert c_B\vert^2}{{\rm d}t}\right)_{\mbox{due to A}}
= -\frac{1}{\hbar}\Im\left(c_A^*H_{AB}c_B\right)
\end{displaymath}

where $H_{AB}$ is the perturbation Hamiltonian coefficient between A and B. (In the absence of perturbations, the energy eigenfunctions do not interact and $H_{AB} = 0$.) Assuming that the phase of the Hamiltonian coefficient is random compared to the phase difference between A and B, the transferred probability can go at random one way or the other, regardless of which state is initially more likely. Even if A is currently very improbable, it is just as likely to pick up probability from B as B is from A. Also note that eigenfunctions of the same energy are unusually effective in exchanging probability, since their coefficients evolve approximately in phase.
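
To see the last point, note that as long as the perturbation is weak, each coefficient still evolves essentially like an unperturbed one,
\begin{displaymath}
c_A \sim e^{-{\rm i}E_A t/\hbar}
\qquad
c_B \sim e^{-{\rm i}E_B t/\hbar}
\end{displaymath}
in which $E_A$ and $E_B$ are the energies of the two states. The product $c_A^*c_B$ in the exchange rates above therefore oscillates with angular frequency $(E_A-E_B)/\hbar$; for eigenfunctions of different energies the transferred probability keeps reversing direction and largely averages away over time, while for eigenfunctions of the same energy it does not.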

This note would argue that under such circumstances, it is simply not reasonable to think that any differences in probability between eigenstates of the same energy could remain large enough to matter. How could energy eigenstates that readily and randomly exchange probability, in either direction, end up in a situation where some eigenstates have absolutely nothing, to incredible precision?

Feynman [17, p. 8] gives an argument based on time-dependent perturbation theory, chapter 11.10. However, time-dependent perturbation theory relies heavily on approximations, and worse, on the measurement wild card. Until scientists, while maybe not agreeing exactly on what measurement is, start laying down rigorous, unambiguous, mathematical ground rules on what measurement can and cannot do, measurement is like astrology: anything goes.