A.16 The adiabatic theorem

An adiabatic system is a system whose Hamiltonian changes slowly in time. Despite the time dependence of the Hamiltonian, the wave function can still be written in terms of the energy eigenfunctions $\psi_{\vec n}$ of the Hamiltonian, because the eigenfunctions are complete. But since the Hamiltonian changes with time, so do the energy eigenfunctions. And that affects how the coefficients of the eigenfunctions evolve in time.

In particular, in the adiabatic approximation, the wave function of a system can be written as, {D.34}:

\begin{displaymath}
\fbox{$\displaystyle
\Psi = \sum_{\vec n}c_{{\vec n}}(0)\, e^{{\rm i}\theta_{\vec n}} e^{{\rm i}\gamma_{\vec n}} \psi_{\vec n}
\qquad
\theta_{\vec n} \equiv -\frac{1}{\hbar}\int E_{\vec n}{\,\rm d}t
\qquad
\gamma_{\vec n} \equiv {\rm i}\int
\langle\psi_{\vec n}\vert\psi_{\vec n}'\rangle{\,\rm d}t
$}
\end{displaymath} (A.74)

where the $c_{{\vec n}}(0)$ are constants. The angle $\theta_{\vec n}$ is called the “dynamic phase” while the angle $\gamma_{\vec n}$ is called the “geometric phase.” Both phases are real. The prime on $\psi_{\vec n}$ indicates the time derivative of the eigenfunction.
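In case you want to see the two phases in action, the following is a minimal numerical sketch, not part of the derivation in {D.34}; the helper names, the choice of units with $\hbar=1$, and the slowly precessing spin example are merely illustrative assumptions. It simply evaluates the two integrals in (A.74) on a time grid, given the instantaneous energy and a smoothly phase-chosen, normalized eigenfunction.

\begin{verbatim}
import numpy as np

hbar = 1.0   # work in units with hbar = 1

# Accumulate the dynamic phase theta_n and the geometric phase gamma_n of
# (A.74) on a time grid, given the instantaneous energy E_n(t) and a
# smoothly phase-chosen, normalized eigenfunction psi_n(t).
def phases(E_n, psi_n, times):
    theta = 0.0
    gamma = 0.0
    for t0, t1 in zip(times[:-1], times[1:]):
        dt = t1 - t0
        # dynamic phase: -(1/hbar) times the integral of E_n dt (trapezoid)
        theta += -0.5 * (E_n(t0) + E_n(t1)) * dt / hbar
        # geometric phase: i times the integral of <psi_n|psi_n'> dt,
        # with the time derivative replaced by a finite difference
        gamma += (1j * np.vdot(psi_n(t0), psi_n(t1) - psi_n(t0))).real
    return theta, gamma

# Example: spin-up state for a field of fixed strength whose direction
# precesses slowly about the z-axis at polar angle alpha (phi = Omega t).
alpha, Omega, E_up = np.pi / 3, 0.01, 1.0
E_n   = lambda t: E_up
psi_n = lambda t: np.array([np.cos(alpha / 2),
                            np.exp(1j * Omega * t) * np.sin(alpha / 2)])

times = np.linspace(0.0, 2 * np.pi / Omega, 20001)
theta, gamma = phases(E_n, psi_n, times)
print(theta, gamma)   # gamma approaches -2 pi sin(alpha/2)**2 over one turn
\end{verbatim}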

Note that if the Hamiltonian does not depend on time, the above expression simplifies to the usual solution of the Schrödinger equation as given in chapter 7.1.2. In particular, in that case the geometric phase is zero and the dynamic phase is the usual $-E_{\vec n}t/\hbar$.
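Spelled out, taking both phases zero at the starting time: constant $E_{\vec n}$ and $\psi_{\vec n}$ give

\begin{displaymath}
\theta_{\vec n} = -\frac{1}{\hbar}\int E_{\vec n}{\,\rm d}t = -\frac{E_{\vec n}t}{\hbar}
\qquad
\gamma_{\vec n} = {\rm i}\int \langle\psi_{\vec n}\vert\psi_{\vec n}'\rangle{\,\rm d}t = 0
\quad\mbox{(since $\psi_{\vec n}'=0$)}
\end{displaymath}

so that (A.74) reduces to $\Psi=\sum_{\vec n}c_{{\vec n}}(0)e^{-{\rm i}E_{\vec n}t/\hbar}\psi_{\vec n}$.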

Even if the Hamiltonian depends on time, the geometric phase is still zero as long as the Hamiltonian is real. The reason is that real Hamiltonians have real eigenfunctions; then the inner product $\langle\psi_{\vec n}\vert\psi_{\vec n}'\rangle$ is real, so $\gamma_{\vec n}$ is ${\rm i}$ times a real number. It can only be real, as it must be, if it is zero.
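In fact, if the eigenfunctions are kept normalized, the integrand itself vanishes: for real $\psi_{\vec n}$ the inner product does not change when its two sides are swapped, so

\begin{displaymath}
2\langle\psi_{\vec n}\vert\psi_{\vec n}'\rangle
= \langle\psi_{\vec n}'\vert\psi_{\vec n}\rangle
+ \langle\psi_{\vec n}\vert\psi_{\vec n}'\rangle
= \frac{{\rm d}}{{\rm d}t}\langle\psi_{\vec n}\vert\psi_{\vec n}\rangle
= 0
\end{displaymath}

making $\gamma_{\vec n}$ zero whatever the time interval.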

If the geometric phase is nonzero, you may be able to play games with it. Suppose first that the Hamiltonian changes with time because some single parameter $\lambda$ that it depends on changes with time. Then the geometric phase can be written as

\begin{displaymath}
\gamma_{\vec n}= {\rm i}\int
\langle\psi_{\vec n}\vert
\frac{\partial\psi_{\vec n}}{\partial\lambda}\rangle{\,\rm d}\lambda
\equiv \int f(\lambda) {\,\rm d}\lambda
\end{displaymath}

It follows that if you bring the system back to the state it started out at, the total geometric phase is zero, because the limits of integration will be equal.
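Spelled out, with $F$ an antiderivative of $f$ and $\lambda_{\rm start}$, $\lambda_{\rm end}$ the initial and final parameter values (symbols introduced just for this argument):

\begin{displaymath}
\gamma_{\vec n} = \int_{\lambda_{\rm start}}^{\lambda_{\rm end}} f(\lambda) {\,\rm d}\lambda
= F(\lambda_{\rm end}) - F(\lambda_{\rm start}) = 0
\qquad \mbox{if } \lambda_{\rm end} = \lambda_{\rm start}
\end{displaymath}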

But now suppose that not one, but a set of parameters $\vec\lambda=(\lambda_1,\lambda_2,\ldots)$ changes during the evolution. Then the geometric phase is

\begin{displaymath}
\gamma_{\vec n}= {\rm i}\int
\langle\psi_{\vec n}\vert\nabla_{\vec\lambda}\psi_{\vec n}\rangle
\cdot{\,\rm d}\vec\lambda
\equiv
\int f_1(\lambda_1,\lambda_2,\ldots) {\,\rm d}\lambda_1 +
\int f_2(\lambda_1,\lambda_2,\ldots) {\,\rm d}\lambda_2 + \ldots
\end{displaymath}

and that is not necessarily zero when the system returns to the same state it started out at. In particular, for two or three parameters, you can see immediately from Stokes’ theorem that the integral along a closed path will not normally be zero unless $\nabla_{\vec\lambda}\times\vec f=0$. The geometric phase that an adiabatic system picks up during such a closed path is called “Berry’s phase.”
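As an illustration of a nonzero closed-path phase, here is a small numerical sketch, not taken from the text; the spin-$\frac12$ state used, the function names, and the discrete-overlap formula are all assumptions of this example. It carries the spin-up eigenstate of a magnetic field around a circle of fixed polar angle, evaluates the closed-loop geometric phase from the overlaps of neighboring eigenstates, and compares it with minus half the enclosed solid angle, the standard result for a spin-$\frac12$ in a slowly turning field.

\begin{verbatim}
import numpy as np

# Spin-up eigenvector of sigma . n for the field direction with polar
# angle alpha and azimuthal angle phi (standard closed-form expression).
def spin_up(alpha, phi):
    return np.array([np.cos(alpha / 2),
                     np.exp(1j * phi) * np.sin(alpha / 2)])

alpha = 2 * np.pi / 5                     # fixed polar angle of the loop
phis = np.linspace(0.0, 2 * np.pi, 2001)  # azimuthal angles along the loop

# Discretize the loop and close it explicitly with the starting state.
states = [spin_up(alpha, phi) for phi in phis[:-1]]
states.append(states[0])

# Multiply the overlaps <psi_k|psi_{k+1}> around the loop.  Minus the
# complex argument of the product is the closed-loop geometric phase,
# independent of the phase convention chosen for the individual states.
product = 1.0 + 0.0j
for psi0, psi1 in zip(states[:-1], states[1:]):
    product *= np.vdot(psi0, psi1)
berry = -np.angle(product)

solid_angle = 2 * np.pi * (1 - np.cos(alpha))
print(berry, -0.5 * solid_angle)          # the two should nearly agree
\end{verbatim}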

You might assume that it is irrelevant, since the phase of the wave function is not observable anyway. But if a beam of particles is sent along two different paths, the phase difference between the paths will produce interference effects when the beams merge again.

Systems that do not return to the same state when they are taken around a closed loop are not restricted to quantum mechanics. A classical example is the Foucault pendulum, whose plane of oscillation picks up a daily angular deviation as the rotation of the earth carries it around a circle. Such systems are called “nonholonomic” or “anholonomic.”
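To put a rough number on that deviation: at latitude $\ell$ (a symbol introduced just here), the standard result is that the plane of oscillation turns through $2\pi\sin\ell$ per day relative to the ground, so after the earth has carried the pendulum around once, the plane falls short of a full turn by

\begin{displaymath}
2\pi - 2\pi\sin\ell = 2\pi\left(1-\sin\ell\right)
\end{displaymath}

which is exactly the solid angle enclosed by the circle of latitude, the classical counterpart of the closed-path integrals above.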