Solving the equations of quantum mechanics is typically difficult, so approximations must usually be made. One very effective way to find an approximate ground state is the variational principle. This section gives some of the basic ideas, including ways to apply it best, and how to find eigenstates of higher energy in similar ways.
The variational method is based on the observation that the ground state is the state among all allowable wave functions that has the lowest expectation value of energy:

$$\langle E \rangle \text{ is minimal for the ground state}$$
The variational method has already been used to find the ground states for the hydrogen molecular ion, chapter 4.6, and the hydrogen molecule, chapter 5.2. The general procedure is to guess an approximate form of the wave function, invariably involving some parameters whose best values you are unsure about. Then search for the parameters that give you the lowest expectation value of the total energy; those parameters will give your best possible approximation to the true ground state {N.6}. In particular, you can be confident that the true ground state energy is no higher than what you compute, {A.7}.
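As a minimal sketch of this procedure (an illustration made up here, not one of the book's molecular examples), take the hydrogen atom in atomic units and guess a Gaussian trial function $\psi \propto e^{-\alpha r^2}$ with a single parameter $\alpha$. The expectation energy then works out analytically to $\langle E\rangle(\alpha) = \tfrac{3}{2}\alpha - 2\sqrt{2\alpha/\pi}$ hartree, and all that is left is a one-parameter search for the minimum:

```python
import math

def energy(alpha):
    """Expectation energy <E> (hartree) of the normalized Gaussian
    trial function psi ~ exp(-alpha r^2) for the hydrogen atom."""
    return 1.5 * alpha - 2.0 * math.sqrt(2.0 * alpha / math.pi)

# <E>(alpha) is unimodal on (0, inf), so a simple ternary search
# on a bracketing interval finds the minimizing parameter.
lo, hi = 1e-6, 2.0
for _ in range(200):
    m1 = lo + (hi - lo) / 3
    m2 = hi - (hi - lo) / 3
    if energy(m1) < energy(m2):
        hi = m2
    else:
        lo = m1
alpha_best = 0.5 * (lo + hi)
E_best = energy(alpha_best)

print(f"best alpha = {alpha_best:.4f}")  # exact optimum is 8/(9 pi) ~ 0.2829
print(f"<E>_min    = {E_best:.4f}")      # -4/(3 pi) ~ -0.4244
```

Note the guarantee in action: the best Gaussian gives about $-0.424$ hartree, safely above the exact $-0.5$; a better family of trial functions (decaying exponentials, say) would close the gap.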
To get the second lowest energy state, you could search for the lowest energy among all wave functions orthogonal to the ground state. But since you would not know the exact ground state, you would have to use your approximate one instead. That introduces some error, and it is no longer certain that the true second-lowest energy level is no higher than what you compute. Still, the result is usually a reasonable approximation.
If you want to get more accurate values, you will need to increase the number of parameters. The molecular example solutions were based on the atom ground states, and you could consider adding some excited states to the mix. In general, a procedure using appropriate guessed functions is called a Rayleigh-Ritz method. Alternatively, you could just chop space up into little pieces, or elements, and use a simple polynomial within each piece. That is called a finite element method. In either case, you end up with a finite, but relatively large number of unknowns; the parameters and/or coefficients of the functions, or the coefficients of the polynomials.
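A small Rayleigh-Ritz computation can make this concrete (a sketch under assumed conventions $\hbar = m = 1$, not one of the book's worked examples): for a particle in a box of unit length, guess polynomial basis functions $\phi_n = x^n(1-x)$ that satisfy the boundary conditions. Minimizing the expectation energy over the coefficients turns into the generalized matrix eigenvalue problem $H\vec a = E S \vec a$, with $S$ the matrix of overlap integrals:

```python
import numpy as np

N = 4  # number of basis functions phi_n = x^n (1 - x), n = 1..N

# Overlap and Hamiltonian matrices from analytic integrals over [0, 1]:
#   S_ab = <phi_a|phi_b>,   H_ab = (1/2) <phi_a'|phi_b'>
S = np.empty((N, N))
H = np.empty((N, N))
for a in range(1, N + 1):
    for b in range(1, N + 1):
        S[a-1, b-1] = 1/(a+b+1) - 2/(a+b+2) + 1/(a+b+3)
        H[a-1, b-1] = 0.5 * (a*b/(a+b-1)
                             - (a*(b+1) + b*(a+1))/(a+b)
                             + (a+1)*(b+1)/(a+b+1))

# Reduce the generalized problem H c = E S c to a standard one
# using the Cholesky factor of the overlap matrix S.
L = np.linalg.cholesky(S)
Linv = np.linalg.inv(L)
E = np.linalg.eigvalsh(Linv @ H @ Linv.T)

print(E[0])             # close to the exact ground state pi^2/2 = 4.9348...
print(np.pi**2 / 2)
```

Even four basis functions reproduce the exact ground state energy to a few digits, and the estimate sits above it, as the variational principle demands.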
You might by now wonder about the wisdom of trying to find the minimum energy by searching through the countless possible combinations of a lot of parameters. Brute-force search worked fine for the hydrogen molecule examples since they really only depended nontrivially on the distance between the nuclei. But if you add some more parameters for better accuracy, you quickly get into trouble. Semi-analytical approaches like Hartree-Fock even leave whole functions unspecified. In that case, simply put, every single function value is an unknown parameter, and a function has infinitely many of them. You would be searching in an infinite-dimensional space, and could search forever. Maybe you could try some clever genetic algorithm.
Usually it is a much better idea to write some equations for the minimum energy first. From calculus, you know that if you want to find the minimum of a function, the sophisticated way to do it is to note that the partial derivatives of the function must be zero at the minimum. Less rigorously, but a lot more intuitively, at the minimum of a function the changes in the function due to small changes in the variables that it depends on must be zero. In the simplest possible example of a function $f(x)$ of one variable $x$, a mathematician would say that the derivative $f'(x)$ must be zero at the minimum. Instead a typical physicist would say that the change $\delta f$, or $df$, in $f$ due to a small change $\delta x$ in $x$ must be zero. It is the same thing, since $\delta f = f'\,\delta x$: if $f'$ is zero, then so is $\delta f$, and vice versa. The physicist's way of putting it has the advantage that you do not need to say explicitly what the independent variable is, and there is often more than one possible choice for it.
In physics terms, the fact that the expectation energy must be minimal in the ground state means that you must have:

$$\delta \langle E \rangle = 0 \qquad \text{for all acceptable small changes } \delta\psi \text{ in the wave function}$$
As an example of how you can apply the variational formulation of the previous subsection analytically, and how it can also describe eigenstates of higher energy, this subsection will work out a very basic example. The idea is to figure out what you get if you truly zero the changes in the expectation value of energy $\langle E\rangle = \langle\psi|H\psi\rangle$ over all acceptable wave functions $\psi$.
The differential statement is:

$$\delta\langle\psi|H\psi\rangle = 0 \qquad \text{for all acceptable changes } \delta\psi \text{ that keep } \psi \text{ normalized}$$
But how do you crunch a statement like that down mathematically?
Well, there is a very important mathematical trick to simplify this.
Instead of rigorously trying to enforce that the changed wave function
is still normalized, just allow any change in wave function.
But add “penalty points” to the change in expectation
energy if the change in wave function goes out of allowed bounds:

$$\delta\langle\psi|H\psi\rangle - \epsilon\,\delta\langle\psi|\psi\rangle = 0$$

Here $\epsilon$ is the penalty factor; such factors are more commonly known as Lagrangian multipliers.
You do not, however, have to explicitly tune the penalty factor yourself. All you need to know is that a proper one exists. In actual application, all you do in addition to ensuring that the penalized change in expectation energy is zero is ensure that at least the unchanged wave function is normalized. It is really a matter of counting equations versus unknowns. Compared to simply setting the change in expectation energy to zero with no constraints on the wave function, one additional unknown has been added, the penalty factor. And quite generally, if you add one more unknown to a system of equations, you need one more equation to still have a unique solution. As the one-more equation, use the normalization condition. With enough equations to solve, you will get the correct solution, which means that the implied value of the penalty factor should be OK too.
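The counting argument is easiest to see in a finite-dimensional analogue (a sketch added here, not part of the original derivation): minimize the energy $\vec a^{\,\mathsf T} H \vec a$ of a real coefficient vector $\vec a$ with a symmetric matrix $H$, subject to the normalization $\vec a^{\,\mathsf T}\vec a = 1$:

```latex
% Penalized statement in a finite basis; unknowns are the N
% coefficients a_k plus the penalty factor epsilon:
\delta\!\left(a^{\mathsf T} H a\right)
  - \epsilon\,\delta\!\left(a^{\mathsf T} a\right) = 0
\;\Longrightarrow\;
\frac{\partial}{\partial a_k}
  \left(a^{\mathsf T} H a - \epsilon\, a^{\mathsf T} a\right) = 0
\;\Longrightarrow\;
H a = \epsilon\, a
% These are N equations; the normalization a^T a = 1 supplies the
% (N+1)-st equation for the N+1 unknowns a_1, ..., a_N, epsilon.
```

Setting the derivatives with respect to the $N$ coefficients to zero gives $N$ equations, and the normalization condition is exactly the one extra equation needed to pin down the one extra unknown $\epsilon$.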
So what does this variational statement now produce? Writing out the differences explicitly, you must have

$$\langle\delta\psi|H\psi\rangle + \langle\psi|H\,\delta\psi\rangle - \epsilon\langle\delta\psi|\psi\rangle - \epsilon\langle\psi|\delta\psi\rangle = 0$$
That is not yet good enough to say something specific about. But remember that $H$ is Hermitian, and that you can exchange the sides of an inner product if you add a complex conjugate, so $\langle\psi|H\,\delta\psi\rangle = \langle H\psi|\delta\psi\rangle = \langle\delta\psi|H\psi\rangle^*$. That turns the statement into

$$\langle\delta\psi|H\psi\rangle + \langle\delta\psi|H\psi\rangle^* - \epsilon\langle\delta\psi|\psi\rangle - \epsilon\langle\delta\psi|\psi\rangle^* = 0$$
You can now combine the terms into one inner product with $\delta\psi$ on the left:

$$\langle\delta\psi\,|\,H\psi - \epsilon\psi\rangle + \langle\delta\psi\,|\,H\psi - \epsilon\psi\rangle^* = 0$$

If this is to be zero for whatever small change $\delta\psi$ you take (including $\mathrm{i}\,\delta\psi$, which shows that the imaginary part must vanish too), then the other side of the inner product must be zero:

$$H\psi = \epsilon\psi$$

So you see that you have recovered the Hamiltonian eigenvalue problem from the requirement that the variation of the expectation energy is zero. Unavoidably then, $\epsilon$ will have to be an energy eigenvalue $E$.
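A quick numerical sanity check of this conclusion in a finite-dimensional setting (a sketch; the random symmetric matrix is just a stand-in for a Hamiltonian): the expectation energy $\vec a^{\,\mathsf T} H \vec a / \vec a^{\,\mathsf T}\vec a$ of every trial vector stays at or above the lowest eigenvalue, and the eigenvalue residual $H\psi - \epsilon\psi$ vanishes at the minimizing eigenvector:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
H = (A + A.T) / 2                      # symmetric stand-in "Hamiltonian"

eigvals, eigvecs = np.linalg.eigh(H)
E0, psi0 = eigvals[0], eigvecs[:, 0]   # lowest eigenpair

def expectation_energy(a):
    return a @ H @ a / (a @ a)

# 1. Variational bound: no trial vector beats the lowest eigenvalue.
trials = rng.standard_normal((1000, 6))
energies = np.array([expectation_energy(a) for a in trials])
print(energies.min() >= E0)            # True

# 2. The minimizer satisfies the eigenvalue problem H psi = E0 psi.
residual = np.linalg.norm(H @ psi0 - E0 * psi0)
print(residual < 1e-12)                # True: residual is roundoff-size
```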
Indeed, you may remember from calculus that the derivatives of a
function may be zero at more than one point. For example, a function
might also have a maximum, or local minima and maxima, or stationary
points where the function is neither a maximum nor a minimum, but the
derivatives are zero anyway. This sort of thing happens here too: the
ground state is the state of lowest possible energy, but there will be
other states for which $\delta\langle E\rangle$ is zero, and these
will correspond to energy eigenstates of higher energy,
{D.49}.
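The same finite-dimensional stand-in makes the stationary-point statement concrete (again a sketch added here, not from the original text): at an excited eigenvector the change in expectation energy vanishes to first order in every direction, yet the state is not a minimum, since admixing some ground state lowers the energy:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
H = (A + A.T) / 2                      # symmetric stand-in "Hamiltonian"
eigvals, eigvecs = np.linalg.eigh(H)
psi0, psi1 = eigvecs[:, 0], eigvecs[:, 1]
E1 = eigvals[1]                        # second-lowest eigenvalue

def expectation_energy(a):
    return a @ H @ a / (a @ a)

# First-order change of <E> at the excited eigenvector, estimated by
# central differences along random directions: zero within noise.
h = 1e-6
for _ in range(5):
    d = rng.standard_normal(6)
    dE = (expectation_energy(psi1 + h * d)
          - expectation_energy(psi1 - h * d)) / (2 * h)
    print(abs(dE) < 1e-6)              # True: stationary in every direction

# But not a minimum: mixing in the ground state lowers the energy.
print(expectation_energy(psi1 + 0.3 * psi0) < E1)   # True
```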