Consider the Schrödinger equation

$$i\hbar\frac{\partial\Psi}{\partial t} = H\Psi$$
However, the Hamiltonian varies with time for the systems of interest
here. Still, at any given time its eigenfunctions form a complete
set. So it is still possible to write the wave function as a sum of
them, say like

$$\Psi = \sum_{n} \bar c_{n}\,\psi_{n}\,e^{i\theta_{n}} \qquad \theta_{n} \equiv -\frac 1\hbar\int E_{n}\,{\rm d}t$$

Here the $\psi_{n}$ are the eigenfunctions and $E_{n}$ the eigenvalues of the Hamiltonian at the time in question, and the coefficients $\bar c_{n}$ may now vary with time.
To get an equation for their variation, plug the expression for
$\Psi$ into the Schrödinger equation. The terms arising from the time
derivatives of the exponentials cancel the right hand side, since
$\dot\theta_{n}=-E_{n}/\hbar$. Taking an inner product with an arbitrary
eigenfunction $\psi_{\underline n}$ then gives:

$$\dot{\bar c}_{\underline n} = -\sum_{n} \bar c_{n}\,\langle\psi_{\underline n}|\dot\psi_{n}\rangle\, e^{i(\theta_{n}-\theta_{\underline n})} \qquad\mbox{(D.19)}$$

where a dot indicates a time derivative.
However, the purpose of the current derivation is to address the
adiabatic approximation. The adiabatic approximation assumes that the
entire evolution takes place very slowly over a large time interval
$T$. Only when the time changes by a finite fraction of $T$ does the
Hamiltonian, and with it its eigenfunctions $\psi_{n}$ and eigenvalues
$E_{n}$, change by a finite amount. This implies that the time
derivatives of the slowly varying quantities are normally small, of
order $1/T$.
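One way to make that explicit, assuming that the Hamiltonian depends on time only through the scaled time $t/T$, is to use the chain rule:

$$\dot\psi_{n} = \frac{\partial\psi_{n}}{\partial(t/T)}\,\frac 1T \qquad \dot E_{n} = \frac{\partial E_{n}}{\partial(t/T)}\,\frac 1T$$

The derivatives with respect to $t/T$ stay finite as $T$ becomes large, so the time derivatives are indeed of order $1/T$.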
Consider now first the case that there is no degeneracy, in other
words, that there is only one eigenfunction $\psi_{n}$ for each energy
$E_{n}$. Then the inner products $\langle\psi_{\underline n}|\dot\psi_{n}\rangle$
in (D.19) are small of order $1/T$, so the time derivatives of the
coefficients $\bar c_{n}$ are small of order $1/T$ too.
(Recall that the square magnitudes of the coefficients give the
probabilities of the corresponding energies. So the magnitudes of the
coefficients are bounded by 1. Also, for simplicity it will be assumed
that the number of eigenfunctions in the system is finite. Otherwise
the sums over $n$ might explode. This book routinely assumes that
it is “good enough” to approximate an infinite system by
a large-enough finite one. That makes life a lot easier, not just
here but also in other derivations like {D.18}.)
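Under that finiteness assumption, the smallness of the time derivatives of the coefficients follows immediately by bounding (D.19) term by term; with $N$ the (finite) number of eigenfunctions,

$$\left|\dot{\bar c}_{\underline n}\right| \le \sum_{n}\left|\bar c_{n}\right|\,\left|\langle\psi_{\underline n}|\dot\psi_{n}\rangle\right| \le N\max_{n}\left|\langle\psi_{\underline n}|\dot\psi_{n}\rangle\right|$$

which is of order $1/T$ since each inner product is.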
It is convenient to split up the sum in (D.19):

$$\dot{\bar c}_{\underline n} = -\bar c_{\underline n}\,\langle\psi_{\underline n}|\dot\psi_{\underline n}\rangle - \sum_{n\ne\underline n}\bar c_{n}\,\langle\psi_{\underline n}|\dot\psi_{n}\rangle\,e^{i(\theta_{n}-\theta_{\underline n})} \qquad\mbox{(D.20)}$$

In the adiabatic approximation the final sum can be ignored. However,
that is not because it is small due to the time derivative in it, as
one reference claims. While the time derivative of $\psi_{n}$ is indeed
small of order $1/T$, it gets integrated over a time range of order
$T$, which by itself would produce a finite result. What makes the sum
negligible is the exponential: as its definition shows, it varies on
the normal time scale, rather than on the long time scale $T$.
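To see that explicitly, note from the definition of $\theta_{n}$ that

$$\frac{{\rm d}}{{\rm d}t}\,e^{i(\theta_{n}-\theta_{\underline n})} = -\frac i\hbar\left(E_{n}-E_{\underline n}\right)e^{i(\theta_{n}-\theta_{\underline n})}$$

and without degeneracy the energy difference $E_{n}-E_{\underline n}$ is a finite amount, not something of order $1/T$.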
To show that more precisely, note that the formal solution of the full
equation (D.20) is, [39, 19.2]:

$$\bar c_{\underline n}(t) = e^{i\gamma_{\underline n}(t)}\left[\bar c_{\underline n}(0) - \sum_{n\ne\underline n}\int_0^t \bar c_{n}\,\langle\psi_{\underline n}|\dot\psi_{n}\rangle\, e^{i(\theta_{n}-\theta_{\underline n})}\,e^{-i\gamma_{\underline n}}\,{\rm d}t\right] \qquad\mbox{(D.21)}$$

$$\gamma_{\underline n}(t) \equiv i\int_0^t \langle\psi_{\underline n}|\dot\psi_{\underline n}\rangle\,{\rm d}t$$

All the integrals are negligibly small because of the rapid variation of the first exponential in them. To verify that, rewrite them a bit and then perform an integration by parts:

$$\int_0^t f\,e^{i(\theta_{n}-\theta_{\underline n})}\,{\rm d}t = \left[\frac{i\hbar\,f}{E_{n}-E_{\underline n}}\,e^{i(\theta_{n}-\theta_{\underline n})}\right]_0^t - \int_0^t \frac{{\rm d}}{{\rm d}t}\!\left(\frac{i\hbar\,f}{E_{n}-E_{\underline n}}\right) e^{i(\theta_{n}-\theta_{\underline n})}\,{\rm d}t \qquad f \equiv \bar c_{n}\,\langle\psi_{\underline n}|\dot\psi_{n}\rangle\,e^{-i\gamma_{\underline n}}$$
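Spelling out the resulting orders of magnitude, with the energy differences bounded away from zero in the nondegenerate case considered here: the factor $f$ is of order $1/T$ because of the inner product in it, so

$$f = O\!\left(\frac 1T\right) \qquad \frac{{\rm d}}{{\rm d}t}\!\left(\frac{i\hbar\,f}{E_{n}-E_{\underline n}}\right) = O\!\left(\frac 1{T^{2}}\right)$$

That makes the boundary term of order $1/T$, and the remaining integral, taken over a range of order $T$, of order $1/T$ as well. Both therefore vanish when $T$ becomes infinite.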
And that means that in the adiabatic approximation

$$\bar c_{\underline n}(t) = \bar c_{\underline n}(0)\,e^{i\gamma_{\underline n}(t)}$$

Note that $\gamma_{\underline n}$ is real. To verify that, differentiate the
normalization requirement $\langle\psi_{\underline n}|\psi_{\underline n}\rangle=1$ to get

$$\langle\dot\psi_{\underline n}|\psi_{\underline n}\rangle + \langle\psi_{\underline n}|\dot\psi_{\underline n}\rangle = 0$$

which shows that $\langle\psi_{\underline n}|\dot\psi_{\underline n}\rangle$ is purely
imaginary, making $\gamma_{\underline n}$ real. Since both
$\theta_{\underline n}$ and $\gamma_{\underline n}$ are real, it follows that
the magnitudes of the coefficients of the eigenfunctions do not change
in time. In particular, if the system starts out in a single
eigenfunction, then it stays in that eigenfunction.
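For instance, if the system starts out in the eigenfunction numbered 1, so that $\bar c_{1}(0)=1$ and all other coefficients are zero, then at all later times

$$\bar c_{1}(t)=e^{i\gamma_{1}(t)} \qquad \left|\bar c_{1}(t)\right|=1$$

and the system remains in that instantaneous eigenfunction, picking up only the phase factors $e^{i\theta_{1}}$ and $e^{i\gamma_{1}}$.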
So far it has been assumed that there is no degeneracy, at least not for the considered state. However, it is no problem if the energy of the considered state crosses some other energy at a finite number of times. For example, consider a three-dimensional harmonic oscillator with three time-varying spring stiffnesses. Whenever any two stiffnesses become equal, there is significant degeneracy. Despite that, the given adiabatic solution still applies. (This does assume that you have chosen the eigenfunctions to change smoothly through degeneracy, as perturbation theory says you can, {D.80}.)
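To illustrate the crossings, recall that the energy levels of such an oscillator are, in terms of the angular frequencies $\omega_i=\sqrt{c_i/m}$ set by the spring stiffnesses $c_i$,

$$E_{n_xn_yn_z} = \hbar\omega_x\left(n_x+\tfrac12\right) + \hbar\omega_y\left(n_y+\tfrac12\right) + \hbar\omega_z\left(n_z+\tfrac12\right)$$

so whenever, say, $c_x$ and $c_y$ become equal, states that differ only by swapping $n_x$ and $n_y$ become degenerate.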
To verify that the solution is indeed still valid, cut out a time
interval of size $\varepsilon T$ around each crossing time. Here
$\varepsilon$ is some number still to be chosen. The parts of the
integrals in (D.21) outside of these intervals have magnitudes
that become zero when $T\to\infty$ for the same reasons as before. The
parts of the integrals corresponding to the intervals can be estimated
as no more than some finite multiple of $\varepsilon$, because their
integrands are no larger than some finite multiple of $1/T$ and they
are integrated over ranges of size $\varepsilon T$. So first take
$\varepsilon$ small enough that the intervals contribute no more than
0.5% and then take $T$ large enough that the remaining integration
range contributes no more than 0.5% too. Since you can play the same
game for 0.1%, 0.01% or any arbitrarily small amount, the conclusion
is that for infinite $T$, the integrals in (D.21) become zero, and the
given adiabatic solution still applies.
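Schematically, writing $K$ for the number of crossings and $C$ for some finite constant (both symbols introduced here only for this estimate), the magnitude of each integral in (D.21) is bounded by something like

$$K\,C\,\varepsilon \;+\; \delta(T,\varepsilon) \qquad \delta(T,\varepsilon)\to 0 \mbox{ as } T\to\infty \mbox{ for fixed } \varepsilon$$

which can be made as small as desired by first choosing $\varepsilon$ and then $T$.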
Things change if some energy levels are permanently degenerate.
Consider a harmonic oscillator for which at least two spring
stiffnesses are permanently equal. In that case, you need to solve
for all coefficients at a given energy level $E_{\underline n}$ together. To
figure out how to do that, you will need to consult a book on
mathematics that covers systems of ordinary differential equations.
In particular, the coefficient $\bar c_{\underline n}$ in (D.21)
gets replaced by a vector of coefficients with the same energy. The
scalar $\langle\psi_{\underline n}|\dot\psi_{\underline n}\rangle$ becomes a matrix with indices ranging over the
set of coefficients in the vector. Also, $e^{i\gamma_{\underline n}}$ gets
replaced by a “fundamental solution matrix,” a matrix
consisting of independent solution vectors. And $e^{-i\gamma_{\underline n}}$
is the inverse matrix. The sum no longer
includes any of the coefficients of the considered energy.
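As a sketch of what that looks like (the symbols $\vec b$, $A$, $\vec a_{n}$, and $Z$ are introduced here only for this illustration): collect the coefficients with energy $E_{\underline n}$ into a vector $\vec b$ and the inner products between the corresponding eigenfunctions into a matrix $A$ with $A_{kl}=\langle\psi_{k}|\dot\psi_{l}\rangle$. The homogeneous part of the system is then

$$\dot{\vec b} = -A\,\vec b$$

and its role in (D.21) is taken over by a fundamental solution matrix $Z$ satisfying $\dot Z=-AZ$ with $Z(0)=I$, in terms of which

$$\vec b(t) = Z(t)\left[\vec b(0) - \sum_{E_{n}\ne E_{\underline n}}\int_0^t Z^{-1}\,\vec a_{n}\,\bar c_{n}\, e^{i(\theta_{n}-\theta_{\underline n})}\,{\rm d}t\right] \qquad (\vec a_{n})_k=\langle\psi_{k}|\dot\psi_{n}\rangle$$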
More recent derivations allow the spectrum to be continuous, in which
case the nonzero energy gaps
can no longer be assumed
to be larger than some nonzero amount. And unfortunately, assuming
the system to be approximated by a finite one helps only partially
here; an accurate approximation will produce very closely spaced
energies. Such problems are well outside the scope of this book.