

7.1 The Schrödinger Equation

In Newtonian mechanics, Newton's second law states that the linear momentum changes in time at a rate equal to the applied force; ${\rm d}(m\vec{v})/{\rm d}t = m\vec{a} = \vec{F}$. The equivalent in quantum mechanics is the Schrödinger equation, which describes how the wave function evolves. This section discusses this equation, and a few of its immediate consequences.


7.1.1 The equation

The Schrödinger equation says that the time derivative of the wave function is obtained by applying the Hamiltonian to it. More precisely:

\begin{displaymath}
\fbox{$\displaystyle
{\rm i}\hbar \frac{\partial \Psi}{\partial t} = H \Psi
$}
%
\end{displaymath} (7.1)
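
To make this concrete, here is a minimal numerical sketch (not from this book) that steps equation (7.1) forward in time for a hypothetical two-state system, with the Hamiltonian represented by an assumed $2\times2$ Hermitian matrix and units chosen so that $\hbar=1$. The matrix entries, the initial wave function, and the crude forward-Euler stepping are all illustrative assumptions.

\begin{verbatim}
import numpy as np

hbar = 1.0
H = np.array([[1.0, 0.2],
              [0.2, 2.0]])                 # assumed Hermitian Hamiltonian matrix
Psi = np.array([1.0, 0.0], dtype=complex)  # assumed initial wave function

dt, nsteps = 0.001, 5000
for _ in range(nsteps):
    # equation (7.1) rearranged: dPsi/dt = -(i/hbar) H Psi, stepped crudely forward
    Psi = Psi + dt * (-1j / hbar) * (H @ Psi)
    Psi = Psi / np.linalg.norm(Psi)        # compensate for the stepping error

print(np.abs(Psi)**2)                      # probabilities of the two basis states at t = 5
\end{verbatim}

The renormalization merely compensates for the error of the crude stepping; the exact evolution preserves the norm of the wave function by itself.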

An equivalent and earlier formul­ation of quantum mechanics was given by Heisenberg, {A.12}. However, the Schrödinger equation tends to be easier to deal with, especially in non­relativistic appli­cations. An integral version of the Schrödinger equation that is sometimes convenient is in {A.13}.

The Schrödinger equation is nonrelativistic. The simplest relativistic version is called the Klein-Gordon equation. A discussion is in addendum {A.14}. However, relativity introduces a fundamentally new issue: following Einstein’s mass-energy equivalence, particles may be created out of pure energy or destroyed. To deal with that, you typically need a formulation of quantum mechanics called quantum field theory. A very brief introduction is in addendum {A.15}.


Key Points
• The Schrödinger equation describes the time evolution of the wave function.

• The time derivative of the wave function is proportional to the Hamiltonian applied to it.


7.1.2 Solution of the equation

The solution to the Schrödinger equation can immediately be given for most cases of interest. The only condition that needs to be satisfied is that the Hamiltonian depends only on the state the system is in, and not explicitly on time. This condition is satisfied in all cases discussed so far, including the particle in a box, the harmonic oscillator, the hydrogen and heavier atoms, and molecules, so the following solution applies to them all:

To satisfy the Schrödinger equation, write the wave function $\Psi$ in terms of whatever are the energy eigen­functions $\psi_{\vec n}$ of the Hamiltonian,

\begin{displaymath}
\Psi
= c_{{\vec n}_1}(t) \psi_{{\vec n}_1} + c_{{\vec n}_2}(t) \psi_{{\vec n}_2} + \ldots
= \sum_{\vec n}c_{\vec n}(t) \psi_{\vec n}
\end{displaymath} (7.2)

Then the coefficients $c_{\vec n}$ must evolve in time as complex exponentials:

\begin{displaymath}
\fbox{$\displaystyle
c_{\vec n}(t) = c_{\vec n}(0) e^{-{\rm i}E_{\vec n}t /\hbar}
$}
%
\end{displaymath} (7.3)

for every combin­ation of quantum numbers ${\vec n}$.

In short, you get the wave function for arbitrary times by taking the initial wave function and shoving in additional factors $e^{-{{\rm i}}E_{{\vec n}}t/\hbar}$. The initial values $c_{\vec n}(0)$ of the coefficients are not determined from the Schrödinger equation, but from whatever initial condition for the wave function is given. As always, the appro­priate set of quantum numbers ${\vec n}$ depends on the problem.
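
As a sanity check of (7.2) and (7.3), the sketch below (an illustrative assumption, not something from this book) takes a small matrix Hamiltonian, expands an assumed initial wave function in its energy eigenvectors, shoves in the exponential factors, and compares the result against the exact matrix-exponential solution $e^{-{\rm i}Ht/\hbar}\Psi(0)$ of the Schrödinger equation, in units with $\hbar=1$.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[1.0, 0.2],
              [0.2, 2.0]])                  # assumed Hamiltonian matrix
Psi0 = np.array([1.0, 0.0], dtype=complex)  # assumed initial wave function

E, V = np.linalg.eigh(H)      # energy eigenvalues E_n; eigenvectors are the columns of V
c0 = V.conj().T @ Psi0        # initial coefficients c_n(0)

t = 3.7
Psi_eig = V @ (c0 * np.exp(-1j * E * t / hbar))  # sum over n of c_n(0) e^{-i E_n t/hbar} psi_n
Psi_exact = expm(-1j * H * t / hbar) @ Psi0      # direct solution of the Schrodinger equation

print(np.allclose(Psi_eig, Psi_exact))           # True
\end{verbatim}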

Consider how this works out for the electron in the hydrogen atom. Here each spatial energy state $\psi_{nlm}$ is characterized by the three quantum numbers $n$, $l$, $m$, chapter 4.3. However, there is a spin-up version $\psi_{nlm}{\uparrow}$ of each state in which the electron has spin magnetic quantum number $m_s=\frac12$, and a spin-down version $\psi_{nlm}{\downarrow}$ in which $m_s=-\frac12$, chapter 5.5.1. So the states are characterized by the set of four quantum numbers

\begin{displaymath}
{\vec n}\equiv (n,l,m,m_s)
\end{displaymath}

The most general wave function for the hydrogen atom is then:

\begin{eqnarray*}
\lefteqn{\Psi(r,\theta,\phi,S_z,t) =} \\
&&
\sum_{n=1}^\infty \sum_{l=0}^{n-1} \sum_{m=-l}^{l}
c_{nlm\frac12}(0)\,
e^{-{\rm i}E_n t/\hbar} \psi_{nlm}(r,\theta,\phi){\uparrow}
\\
&& {} +
\sum_{n=1}^\infty \sum_{l=0}^{n-1} \sum_{m=-l}^{l}
c_{nlm-\frac12}(0)\,
e^{-{\rm i}E_n t/\hbar} \psi_{nlm}(r,\theta,\phi){\downarrow}
\end{eqnarray*}

Note that each eigen­function has been given its own coefficient that depends exponentially on time. (The summation limits come from chapter 4.3.)
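
For what it is worth, the quantum-number combinations $(n,l,m,m_s)$ that the sums above run over can be enumerated mechanically. The short sketch below does so up to an assumed cut-off $n_{\max}=3$; the actual sums of course run over all $n$.

\begin{verbatim}
from fractions import Fraction

n_max = 3                           # assumed cut-off; the sums in the text run over all n
states = [(n, l, m, ms)
          for n in range(1, n_max + 1)                    # n = 1, 2, ..., n_max
          for l in range(n)                               # l = 0, ..., n-1
          for m in range(-l, l + 1)                       # m = -l, ..., l
          for ms in (Fraction(1, 2), Fraction(-1, 2))]    # spin up, spin down

print(len(states))                  # 2(1 + 4 + 9) = 28 states up to n = 3
print(states[:2])                   # the two n = 1 states, spin up and spin down
\end{verbatim}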

The given solution in terms of eigen­functions covers most cases of inter­est, but as noted, it is not valid if the Hamiltonian depends explicitly on time. That possi­bility arises when there are external influences on the system; in such cases the energy does not just depend on what state the system itself is in, but also on what the external influences are like at the time.


Key Points
• Normally, the coefficients of the energy eigenfunctions must be proportional to $e^{-{{\rm i}}E_{{\vec n}}t/\hbar}$.

7.1.2 Review Questions
  1. The energy of a photon is $\hbar\omega $ where $\omega $ is the classical frequency of the electro­magnetic field produced by the photon. So what is $e^{-{{\rm i}}E_{{\vec n}}t/\hbar}$ for a photon? Are you surprised by the result?

    Solution schrodsol-a

  2. For the one-di­mensional harmonic oscillator, the energy eigen­values are

\begin{displaymath}
E_n = \frac{2n+1}{2} \hbar\omega
\end{displaymath}

    Write out the coefficients $c_n(0)e^{-{{\rm i}}E_nt/\hbar}$ for those energies.

    Now classi­cally, the harmonic oscillator has a natural frequency $\omega $. That means that whenever ${\omega}t$ is a whole multiple of $2\pi $, the harmonic oscillator is again in the same state as it started out with. Show that the coefficients of the energy eigen­functions have a natural frequency of $\frac 12\omega $; $\frac 12{\omega}t$ must be a whole multiple of $2\pi $ for the coefficients to return to their original values.

    Solution schrodsol-b

  3. Write the full wave function for a one-di­mensional harmonic oscillator. Formulae are in chapter 4.1.2.

    Solution schrodsol-c


7.1.3 Energy conservation

The Schrödinger equation implies that the energy of a system is conserved, assuming that there are no external influences on the system.

To see why, consider the general form of the wave function:

\begin{displaymath}
\Psi = \sum_{\vec n}c_{\vec n}(t) \psi_{\vec n}
\qquad
c_{\vec n}(t) = c_{\vec n}(0) e^{-{\rm i}E_{\vec n}t /\hbar}
\end{displaymath}

According to chapter 3.4, the square magnitudes $\vert c_{\vec n}\vert^2$ of the coefficients of the energy eigen­functions give the proba­bility for the corre­sponding energy. While the coefficients vary with time, their square magnitudes do not:

\begin{displaymath}
\vert c_{\vec n}(t)\vert^2 \equiv c_{\vec n}^*(t)\,c_{\vec n}(t)
= c_{\vec n}^*(0) e^{+{\rm i}E_{\vec n}t /\hbar}\; c_{\vec n}(0) e^{-{\rm i}E_{\vec n}t /\hbar}
= \vert c_{\vec n}(0)\vert^2
\end{displaymath}

So the proba­bility of measuring a given energy level does not vary with time either. That means that energy is conserved.

For example, a wave function for a hydrogen atom at the excited energy level $E_2$ might be of the form:

\begin{displaymath}
\Psi = e^{-{\rm i}E_2 t/\hbar} \psi_{210}{\uparrow}
\end{displaymath}

(This corresponds to an assumed initial condition in which all coefficients $c_{nlmm_s}$ are zero except $c_{210\frac12}=1$.) The square magnitude of the exponential is one, so the energy of this excited atom will stay $E_2$ with 100% certainty for all time. The energy of the atom is conserved.

This is an important example, because it also illustrates that an excited atom will stay excited for all time if left alone. That is an apparent contra­diction because, as discussed in chapter 4.3, the above excited atom will eventually emit a photon and transition back to the ground state. Even if you put it in a sealed box whose inter­ior is at absolute zero temperature, it will still decay.

The explan­ation for this apparent contra­diction is that an atom is never truly left alone. Simply put, even at absolute zero temperature, quantum uncertainty in energy allows an electro­magnetic photon to pop up that perturbs the atom and causes the decay. (To describe more precisely what happens is a major objective of this chapter.)

Returning to the unperturbed atom, you may wonder what happens to energy conservation if there is uncertainty in energy. In that case, what does not change with time are the probabilities of measuring the possible energy levels. As an arbitrary example, the following wave function describes a case of an unperturbed hydrogen atom whose energy has a 50/50 chance of being measured as $E_1$ (-13.6 eV) or as $E_2$ (-3.4 eV):

\begin{displaymath}
\Psi =
{\displaystyle\frac{1}{\sqrt2}} e^{-{\rm i}E_1 t/\hbar} \psi_{100}{\uparrow}
+ {\displaystyle\frac{1}{\sqrt2}} e^{-{\rm i}E_2 t/\hbar} \psi_{210}{\uparrow}
\end{displaymath}

The 50/50 probability applies regardless of how long the wait is before the measurement is done.
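
A quick numerical check of this 50/50 example, using the two energy values above and an assumed unit system with $\hbar=1$, shows the same thing: the coefficients themselves rotate around in the complex plane, but their square magnitudes stay 0.5 no matter how long you wait.

\begin{verbatim}
import numpy as np

hbar = 1.0                        # assumed unit system
E1, E2 = -13.6, -3.4              # hydrogen energy levels, eV
c1_0 = c2_0 = 1 / np.sqrt(2)      # initial coefficients of psi_100 up and psi_210 up

for t in (0.0, 1.0, 100.0):
    c1 = c1_0 * np.exp(-1j * E1 * t / hbar)
    c2 = c2_0 * np.exp(-1j * E2 * t / hbar)
    # the coefficients change, but the measurement probabilities stay 0.5 and 0.5
    print(t, abs(c1)**2, abs(c2)**2)
\end{verbatim}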

You can also turn the observations of this subsection around. If an external effect changes the energy of a system, then clearly the probabilities of the individual energies must change. So the coefficients of the energy eigenfunctions cannot simply vary exponentially with time as they do for the unperturbed systems discussed above.


Key Points
• Energy conservation is a fundamental consequence of the Schrödinger equation.

• An isolated system that has a given energy retains that energy.

• Even if there is uncertainty in the energy of an isolated system, the probabilities of the various energies still do not change with time.


7.1.4 Stationary states

The quest for the dynamical impli­cations of the Schrödinger equation must start with the simplest case. That is the case in which there is only a single energy eigen­function involved. Then the wave function is of the form

\begin{displaymath}
\Psi = c_{\vec n}(0) e^{-{\rm i}E_{\vec n}t /\hbar} \psi_{\vec n}
\end{displaymath}

Such states are called “stationary states.” Systems in their ground state are of this type.

To see why these states are called stationary, note first of all that the energy of the state is $E_{\vec n}$ for all time, with no uncertainty.

But energy is not the only thing that does not change in time. According to the Born inter­pretation, chapter 3.1, the square magnitude of the wave function of a particle gives the proba­bility of finding the particle at that position and time. Now the square magnitude of the wave function above is

\begin{displaymath}
\vert\Psi\vert^2 = \vert\psi_{\vec n}\vert^2
\end{displaymath}

Time has dropped out in the square magnitude; the proba­bility of finding the particle is the same for all time.

For example, consider the case of the particle in a pipe of chapter 3.5. If the particle is in the ground state, its wave function is of the form

\begin{displaymath}
\Psi=c_{111}(0)e^{-{{\rm i}}E_{111}t/\hbar}\psi_{111}
\end{displaymath}

The precise form of the function $\psi_{111}$ is not of particular inter­est here, but it can be found in chapter 3.5.

The relative proba­bility for where the particle may be found can be shown as grey tones:

Figure 7.1: The ground state wave function looks the same at all times.

The bottom line is that this picture is the same for all time.

If the wave function is purely the first excited state $\psi_{211}$, the corre­sponding picture looks for all time like:

Figure 7.2: The first excited state at all times.

And it is not just position that does not change. Neither do linear or angular momentum, kinetic energy, etcetera. That can be easily checked. The proba­bility for a specific value of any physical quantity is given by

\begin{displaymath}
\vert\langle\alpha\vert\Psi\rangle\vert^2
\end{displaymath}

where $\alpha$ is the eigen­function corre­sponding to the value. (If there is more than one eigen­function with that value, sum their contributions.) The exponential drops out in the square magnitude. So the proba­bility does not depend on time.

And if proba­bilities do not change, then neither do expec­tation values, uncertainties, etcetera. No physi­cally meaningful quantity changes with time.

Hence it is not really surprising that none of the energy eigen­functions derived so far had any resemblance to the classical Newtonian picture of a particle moving around. Each energy eigen­function by itself is a stationary state. There is no change in the proba­bility of finding the particle regardless of the time that you look. So how could it possibly resemble a classical particle that is at different positions at different times?

To get time variations of physical quantities, states of different energy must be combined. In other words, there must be uncertainty in energy.
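
The sketch below illustrates the contrast for a particle in a one-dimensional pipe, using assumed units $\hbar=m=\ell=1$ so that $\psi_n = \sqrt{2/\ell}\sin(n\pi x/\ell)$ and $E_n = n^2\pi^2\hbar^2/2m\ell^2$. For the pure ground state the expectation position never budges; for a 50/50 combination of the lowest two states, which has uncertainty in energy, it sloshes back and forth in time.

\begin{verbatim}
import numpy as np

hbar = m = ell = 1.0                  # assumed units; ell is the pipe length
x = np.linspace(0.0, ell, 401)
dx = x[1] - x[0]

def psi(n):                           # pipe eigenfunctions sqrt(2/ell) sin(n pi x/ell)
    return np.sqrt(2.0 / ell) * np.sin(n * np.pi * x / ell)

def E(n):                             # pipe energy levels n^2 pi^2 hbar^2 / (2 m ell^2)
    return n**2 * np.pi**2 * hbar**2 / (2.0 * m * ell**2)

for t in (0.0, 0.2, 0.4):
    stationary = np.exp(-1j * E(1) * t / hbar) * psi(1)
    mixed = (np.exp(-1j * E(1) * t / hbar) * psi(1)
             + np.exp(-1j * E(2) * t / hbar) * psi(2)) / np.sqrt(2.0)
    # expectation position: constant at ell/2 for the stationary state,
    # oscillating back and forth for the two-state combination
    print(t, np.sum(x * np.abs(stationary)**2) * dx,
             np.sum(x * np.abs(mixed)**2) * dx)
\end{verbatim}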


Key Points
• States of definite energy are stationary states.

• Nontrivial time variation of a system requires uncertainty in energy.


7.1.5 The adiabatic approximation

The previous subsections discussed the solution for systems in which the Hamiltonian does not explicitly depend on time. Typically that means isolated systems, unaffected by external effects, or systems for which the external effects are relatively simple. If the external effects produce a time-dependent Hamiltonian, things get much messier. You cannot simply make the coefficients of the eigen­functions vary exponentially in time as done in the previous subsections.

However, dealing with systems with time-dependent Hamiltonians can still be relatively easy if the Hamiltonian varies sufficiently slowly in time. Such systems are quasi-steady ones.

So physicists cannot call these systems quasi-steady; that would give the secret away to those hated nonspecialists and pesky students. Fortunately, physicists were able to find a much better name. They call these systems “adiabatic.” That works much better because the word “adiabatic” is a well-known term in thermodynamics: it indicates systems that evolve fast enough that heat conduction with the surroundings can be ignored. So, what better name to use also for quantum systems that evolve slowly enough that they stay in equilibrium with their surroundings? No one familiar with even the most basic thermodynamics will ever guess what it means.

As a simple example of an adiabatic system, assume that you have a particle in the ground state in a box. Now you change the volume of the box by a significant amount. The question is, will the particle still be in the ground state after the volume change? Normally there is no reason to assume so; after all, either way the energy of the particle will change significantly. However, the “adiabatic theorem” says that if the change is performed slowly enough, it will. The particle will indeed remain in the ground state, even though that state slowly changes into a completely different form.

If the system is in an energy state other than the ground state, it will similarly stay in that state as the state slowly changes during an adiabatic process. The theorem does assume that the energy is nondegenerate, so that the energy state is unambiguous. More sophisticated versions of the analysis exist to deal with degeneracy and continuous spectra.
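
As an illustrative numerical sketch (not from this book), take a hypothetical two-state system whose Hamiltonian matrix is deformed gradually from one assumed form into another over a total time $T$, with $\hbar=1$. Starting from the ground state of the initial Hamiltonian, a large $T$, meaning a slow change, leaves the system almost entirely in the ground state of the final Hamiltonian; a small $T$ does not.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H_start = np.diag([0.0, 1.0])              # assumed initial Hamiltonian matrix
H_end = np.array([[0.5, 0.4],
                  [0.4, 0.8]])             # assumed final Hamiltonian matrix

def final_ground_state_probability(T, nsteps=2000):
    psi = np.linalg.eigh(H_start)[1][:, 0].astype(complex)  # ground state of H_start
    dt = T / nsteps
    for k in range(nsteps):
        s = (k + 0.5) / nsteps             # deformation parameter, runs from 0 to 1
        H = (1 - s) * H_start + s * H_end  # instantaneous Hamiltonian
        psi = expm(-1j * H * dt / hbar) @ psi
    ground_end = np.linalg.eigh(H_end)[1][:, 0]
    return abs(ground_end.conj() @ psi)**2 # chance of ending up in the final ground state

print(final_ground_state_probability(T=200.0))  # slow change: close to 1
print(final_ground_state_probability(T=0.5))    # fast change: noticeably less
\end{verbatim}

(The energy stays nondegenerate along this assumed path, as the theorem requires.)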

A derivation of the theorem can be found in {D.34}. Some additional impli­cations are in addendum {A.16}. The most important practical appli­cation of the adiabatic theorem is without doubt the Born-Oppenheimer approxi­mation, which is discussed separately in chapter 9.2.


Key Points
• If the properties of a system in its ground state are changed, but slowly, the system will remain in the changing ground state.

• More generally, the “adiabatic” approximation can be used to analyze slowly changing systems.

• No, it has nothing to do with the normal use of the word “adiabatic.”