5.8 Matrix Formulation

When the number of unknowns in a quantum mechanical problem has been reduced to a finite number, the problem becomes one of linear algebra, and it can be solved using standard analytical or numerical techniques. This section describes how the linear algebra problem is obtained.

Typically, quantum mechanical problems can be reduced to a finite number of unknowns using some finite set of chosen wave functions, as in the previous section. There are other ways to make the problem finite, but the details do not matter here. In general, some simplification will still be needed afterwards. A multiple sum like equation (5.30) for distinguishable particles is awkward to work with, and when various coefficients drop out for identical particles, it gets even messier. So as a first step, it is best to order the terms involved in some way; any ordering will in principle do. Ordering allows each term to be indexed by a single counter $q$, being the place of the term in the ordering.

Using an ordering, the wave function for a total of $I$ particles can be written more simply as

\begin{displaymath}
\Psi =
a_1 \psi^{\rm S}_1({\skew0\vec r}_1,S_{z1}, {\skew0\vec r}_2,S_{z2}, \ldots, {\skew0\vec r}_I,S_{zI})
+ a_2 \psi^{\rm S}_2({\skew0\vec r}_1,S_{z1}, {\skew0\vec r}_2,S_{z2}, \ldots, {\skew0\vec r}_I,S_{zI})
+ \ldots
\end{displaymath}

or in index notation:
\begin{displaymath}
\Psi = \sum_{q=1}^Q a_q
\psi^{\rm S}_q({\skew0\vec r}_1,S_{z1}, {\skew0\vec r}_2,S_{z2}, \ldots, {\skew0\vec r}_I,S_{zI}).
\end{displaymath} (5.32)

where $Q$ is the total count of the chosen $I$-particle wave functions and the single counter $q$ in $a_q$ replaces the set of $I$ indices in the description used in the previous section. The $I$-particle functions $\psi^{\rm S}_q$ are allowed to be anything: individual (Hartree) products of single-particle wave functions for distinguishable particles as in (5.30), Slater determinants for identical fermions, permanents for identical bosons, or whatever. The only thing that will be assumed is that they are mutually orthonormal. (That means that any underlying set of single-particle functions $\psi_n({\skew0\vec r})$ as described in the previous section should be orthonormal. If they are not, there are procedures like Gram-Schmidt to make them so. Or you can just put in some correction terms.)
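As a small numerical sketch of the Gram-Schmidt step, single-particle functions sampled on a grid can be orthonormalized as below. The specific functions, the one-dimensional grid, and the use of QR factorization (numerically equivalent to Gram-Schmidt but more stable) are this example's assumptions, not from the text.

```python
import numpy as np

# Hypothetical example: three non-orthonormal single-particle functions
# sampled on a 1D grid, stored as the columns of a matrix.
x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]
psi = np.column_stack([np.exp(-x**2),
                       x * np.exp(-x**2),
                       np.exp(-(x - 1.0)**2)])

# QR factorization orthonormalizes the columns (Gram-Schmidt in effect).
# The 1/sqrt(dx) factor makes them orthonormal under the integral inner
# product <f|g> ~ sum of f*g dx rather than the plain Euclidean one.
q, _ = np.linalg.qr(psi)
psi_on = q / np.sqrt(dx)

# Check: the overlap matrix <psi_m|psi_n> is now the identity.
overlap = psi_on.T @ psi_on * dx
assert np.allclose(overlap, np.eye(3))
```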

Under those conditions, the energy eigenvalue problem $H\psi = E\psi$ takes the form:

\begin{displaymath}
\sum_{q=1}^Q H a_q \psi^{\rm S}_q = \sum_{q=1}^Q E a_q \psi^{\rm S}_q
\end{displaymath}

The trick is now to take the inner product of both sides of this equation with each function $\psi^{\rm S}_{\underline q}$ in the set of wave functions in turn. In other words, take an inner product with $\langle\psi^{\rm S}_1\vert$ to get one equation, then take an inner product with $\langle\psi^{\rm S}_2\vert$ to get a second equation, and so on. This produces, using the fact that the functions are ortho­normal to clean up the right-hand side,

\begin{displaymath}
\begin{array}{ccccccccl}
H_{11} a_1 & + & H_{12} a_2 & + & \ldots & + & H_{1Q} a_Q & = & E a_1 \\
H_{21} a_1 & + & H_{22} a_2 & + & \ldots & + & H_{2Q} a_Q & = & E a_2 \\
\vdots & & \vdots & & & & \vdots & & \;\;\vdots \\
H_{Q1} a_1 & + & H_{Q2} a_2 & + & \ldots & + & H_{QQ} a_Q & = & E a_Q
\end{array}
\end{displaymath}

where

\begin{displaymath}
H_{11} = \langle \psi^{\rm S}_1\vert H \psi^{\rm S}_1\rangle, \quad
H_{12} = \langle \psi^{\rm S}_1\vert H \psi^{\rm S}_2\rangle, \quad
\ldots, \quad
H_{QQ} = \langle \psi^{\rm S}_Q\vert H \psi^{\rm S}_Q\rangle
\end{displaymath}

are the matrix coefficients, or Hamiltonian coefficients.

This can again be written more compactly in index notation:

\begin{displaymath}
\sum_{q=1}^Q H_{{\underline q}q} a_q = E a_{\underline q}
\qquad\mbox{where}\qquad
H_{{\underline q}q} =
\langle \psi^{\rm S}_{\underline q}\vert H \psi^{\rm S}_q\rangle
\end{displaymath} (5.33)

which is just a finite-size matrix eigenvalue problem.

Since the functions $\psi^{\rm S}_q$ are known, chosen, functions, and the Hamiltonian $H$ is also known, the matrix coefficients $H_{{\underline q}q}$ can be determined. The eigenvalues $E$ and corresponding eigenvectors $(a_1,a_2,\ldots)$ can then be found using linear algebra procedures. Each eigenvector produces a corresponding approximate eigenfunction $a_1\psi^{\rm S}_1+a_2\psi^{\rm S}_2+\ldots$ with an energy equal to the eigenvalue $E$.
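As a concrete numerical sketch, the matrix eigenvalue problem (5.33) can be handed to a standard linear algebra routine. The 3-by-3 matrix below is invented purely for illustration; only its Hermitian (here real symmetric) structure reflects a real Hamiltonian matrix.

```python
import numpy as np

# Hypothetical Hamiltonian matrix H_qq' = <psi_q|H psi_q'>; the numbers
# are made up.  A real Hamiltonian matrix is Hermitian, which is why this
# one is symmetric.
H = np.array([[1.0, 0.2, 0.0],
              [0.2, 2.0, 0.3],
              [0.0, 0.3, 3.0]])

# eigh solves the symmetric/Hermitian eigenvalue problem H a = E a.
E, A = np.linalg.eigh(H)

# Each column A[:, k] holds the coefficients (a_1, ..., a_Q) of the
# approximate eigenfunction whose energy is the eigenvalue E[k].
for k in range(3):
    assert np.allclose(H @ A[:, k], E[k] * A[:, k])
```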


Key Points

$\bullet$ Operator eigenvalue problems can be approximated by the matrix eigenvalue problems of linear algebra.

$\bullet$ That allows standard analytical or numerical techniques to be used in their solution.

5.8 Review Questions
  1. As a relatively simple example, work out the above ideas for the $Q=2$ hydrogen molecule spatial states $\psi^{\rm S}_1 = \psi_{\rm {l}}\psi_{\rm {r}}$ and $\psi^{\rm S}_2 = \psi_{\rm {r}}\psi_{\rm {l}}$. Write the matrix eigenvalue problem and identify the two eigenvalues and eigenvectors. Compare with the results of section 5.3.

    Assume that $\psi_{\rm {l}}$ and $\psi_{\rm {r}}$ have been slightly adjusted to be orthonormal. Then $\psi^{\rm S}_1$ and $\psi^{\rm S}_2$ are orthonormal too, since the various six-dimensional inner product integrals, like

    \begin{eqnarray*}
\lefteqn{\langle\psi^{\rm S}_1\vert\psi^{\rm S}_2\rangle \equiv
\langle\psi_{\rm {l}}\psi_{\rm {r}}\vert\psi_{\rm {r}}\psi_{\rm {l}}\rangle} \\
&& = \int_{{\rm all\ }{\skew0\vec r}_1}\int_{{\rm all\ }{\skew0\vec r}_2}
\psi_{\rm {l}}^*({\skew0\vec r}_1)\psi_{\rm {r}}^*({\skew0\vec r}_2)
\psi_{\rm {r}}({\skew0\vec r}_1)\psi_{\rm {l}}({\skew0\vec r}_2)
{\,\rm d}^3 {\skew0\vec r}_1 {\,\rm d}^3 {\skew0\vec r}_2
\end{eqnarray*}

    can according to the rules of calculus be factored into three-dimensional integrals as

    \begin{eqnarray*}
\lefteqn{\langle\psi^{\rm S}_1\vert\psi^{\rm S}_2\rangle} \\
&& = \left(\int_{{\rm all\ }{\skew0\vec r}_1}
\psi_{\rm {l}}^*({\skew0\vec r}_1)\psi_{\rm {r}}({\skew0\vec r}_1)
{\,\rm d}^3 {\skew0\vec r}_1\right)
\left(\int_{{\rm all\ }{\skew0\vec r}_2}
\psi_{\rm {r}}^*({\skew0\vec r}_2)\psi_{\rm {l}}({\skew0\vec r}_2)
{\,\rm d}^3 {\skew0\vec r}_2\right)
= \langle\psi_{\rm {l}}\vert\psi_{\rm {r}}\rangle
\langle\psi_{\rm {r}}\vert\psi_{\rm {l}}\rangle
\end{eqnarray*}

    which is zero if $\psi_{\rm {l}}$ and $\psi_{\rm {r}}$ are ortho­normal.

    Also, do not try to find actual values for $H_{11}$, $H_{12}$, $H_{21}$, and $H_{22}$. As section 5.2 noted, that can only be done numerically. Instead just refer to $H_{11}$ as $J$ and to $H_{12}$ as $-L$:

    \begin{eqnarray*}
& H_{11} \equiv \langle\psi^{\rm S}_1\vert H\psi^{\rm S}_1\rangle
= \langle\psi_{\rm {l}}\psi_{\rm {r}}\vert H\psi_{\rm {l}}\psi_{\rm {r}}\rangle
\equiv J \\
& H_{12} \equiv \langle\psi^{\rm S}_1\vert H\psi^{\rm S}_2\rangle
= \langle\psi_{\rm {l}}\psi_{\rm {r}}\vert H\psi_{\rm {r}}\psi_{\rm {l}}\rangle
\equiv - L.
\end{eqnarray*}

    Next note that you also have

    \begin{eqnarray*}
& H_{22} \equiv \langle\psi^{\rm S}_2\vert H\psi^{\rm S}_2\rangle
= \langle\psi_{\rm {r}}\psi_{\rm {l}}\vert H\psi_{\rm {r}}\psi_{\rm {l}}\rangle
= J \\
& H_{21} \equiv \langle\psi^{\rm S}_2\vert H\psi^{\rm S}_1\rangle
= \langle\psi_{\rm {r}}\psi_{\rm {l}}\vert H\psi_{\rm {l}}\psi_{\rm {r}}\rangle
= - L
\end{eqnarray*}

    because they are exactly the same inner product integrals as $H_{11}$ and $H_{12}$; the only difference is which electron is numbered 1 and which is numbered 2, and that merely determines whether the wave functions are listed as $\psi_{\rm {l}}\psi_{\rm {r}}$ or $\psi_{\rm {r}}\psi_{\rm {l}}$.
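    With those abbreviations the eigenvalue problem involves the 2-by-2 matrix with rows $(J,-L)$ and $(-L,J)$, and the expected eigenvectors are the symmetric and antisymmetric combinations of $\psi_{\rm {l}}\psi_{\rm {r}}$ and $\psi_{\rm {r}}\psi_{\rm {l}}$. A quick numerical check is possible; the values of $J$ and $L$ below are made up, since only the matrix structure matters:

```python
import numpy as np

# Made-up numerical values for J and L; only the matrix structure
# [[J, -L], [-L, J]] comes from the derivation above.
J, L = -1.5, 0.3

H = np.array([[J, -L],
              [-L, J]])
E, A = np.linalg.eigh(H)

# The eigenvalues are J - L and J + L...
assert np.allclose(sorted(E), sorted([J - L, J + L]))

# ...with eigenvectors proportional to (1, 1) and (1, -1), i.e. the
# symmetric and antisymmetric combinations of the two spatial states.
for k in range(2):
    v = A[:, k] / A[0, k]
    assert np.allclose(np.abs(v), [1.0, 1.0])
```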

    Solution matfor-a

  2. Find the eigen­states for the same problem, but now including spin.

    As section 5.7 showed, the antisymmetric wave function with spin consists of a sum of six Slater determinants. Ignoring the highly excited first and sixth determinants, which have both electrons around the same nucleus, the remaining $Q=4$ Slater determinants can be written out explicitly to give the two-particle states

    \begin{eqnarray*}
\psi^{\rm S}_1 = \frac{\psi_{\rm {l}}\psi_{\rm {r}}{\uparrow}{\uparrow}
- \psi_{\rm {r}}\psi_{\rm {l}}{\uparrow}{\uparrow}}{\sqrt{2}}
\qquad
\psi^{\rm S}_2 = \frac{\psi_{\rm {l}}\psi_{\rm {r}}{\uparrow}{\downarrow}
- \psi_{\rm {r}}\psi_{\rm {l}}{\downarrow}{\uparrow}}{\sqrt{2}}
\\
\psi^{\rm S}_3 = \frac{\psi_{\rm {l}}\psi_{\rm {r}}{\downarrow}{\uparrow}
- \psi_{\rm {r}}\psi_{\rm {l}}{\uparrow}{\downarrow}}{\sqrt{2}}
\qquad
\psi^{\rm S}_4 = \frac{\psi_{\rm {l}}\psi_{\rm {r}}{\downarrow}{\downarrow}
- \psi_{\rm {r}}\psi_{\rm {l}}{\downarrow}{\downarrow}}{\sqrt{2}}
\end{eqnarray*}

    Note that the Hamiltonian does not involve spin, to the approximation used in most of this book, so that, following the techniques of section 5.5, an inner product like $H_{23} = \langle\psi^{\rm S}_2\vert H\psi^{\rm S}_3\rangle$ can be written out like

    \begin{eqnarray*}
H_{23} & = & \frac 12 \langle
\psi_{\rm {l}}\psi_{\rm {r}}{\uparrow}{\downarrow}
- \psi_{\rm {r}}\psi_{\rm {l}}{\downarrow}{\uparrow}
\vert
(H\psi_{\rm {l}}\psi_{\rm {r}}){\downarrow}{\uparrow}
- (H\psi_{\rm {r}}\psi_{\rm {l}}){\uparrow}{\downarrow}\rangle
\end{eqnarray*}

    and then multiplied out into inner products of matching spin components to give

    \begin{displaymath}
H_{23} = -\frac 12
\langle\psi_{\rm {l}}\psi_{\rm {r}}\vert H\psi_{\rm {r}}\psi_{\rm {l}}\rangle
- \frac 12
\langle\psi_{\rm {r}}\psi_{\rm {l}}\vert H\psi_{\rm {l}}\psi_{\rm {r}}\rangle
= L.
\end{displaymath}

    The other 15 matrix coefficients can be found similarly, and most will be zero.

    If you do not have experience with linear algebra, you may want to skip this question, or better, just read the solution. However, the four eigenvectors are not that hard to guess; they may well be easier to guess than to derive correctly.
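    Such a guess can also be checked numerically. Assembling the full 4-by-4 matrix here is this sketch's own assumption (the text above only works out $H_{23}$ and notes that most coefficients vanish): taking $H_{11} = H_{44} = J + L$, $H_{22} = H_{33} = J$, $H_{23} = H_{32} = L$, and the rest zero by spin orthogonality, with made-up values for $J$ and $L$:

```python
import numpy as np

# Made-up values for J and L; the 4x4 matrix below is a sketch assembled
# under the assumptions stated in the lead-in, not spelled out in the text.
J, L = -1.5, 0.3
H = np.array([[J + L, 0.0, 0.0, 0.0],
              [0.0,   J,   L,   0.0],
              [0.0,   L,   J,   0.0],
              [0.0,   0.0, 0.0, J + L]])
E, A = np.linalg.eigh(H)

# Three degenerate (triplet) states at energy J + L and one (singlet)
# state at J - L, consistent with the two-state results of section 5.3.
assert np.allclose(sorted(E), sorted([J + L, J + L, J + L, J - L]))
```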

    Solution matfor-b