11.14 Application to Particles in a Box

This section applies the ideas developed in the previous sections to weakly interacting particles in a box. This allows some of the details of the “shelves” in figures 11.1 through 11.3 to be filled in for a concrete case.

For particles in a macroscopic box, the single-particle energy levels ${\vphantom' E}^{\rm p}$ are so closely spaced that they can be taken to be continuously varying. The one exception is the ground state when Bose-Einstein condensation occurs; that will be ignored for now. In continuum approximation, the number of single-particle energy states in a macroscopically small energy range ${\rm d}{{\vphantom' E}^{\rm p}}$ is approximately, following (6.6),

\begin{displaymath}
\fbox{$\displaystyle
{\rm d}N = V n_s {\cal D}{\,\rm d}{\vphantom' E}^{\rm p}
= V \frac{n_s}{4\pi^2}
\left(\frac{2m}{\hbar^2}\right)^{3/2}
\sqrt{{\vphantom' E}^{\rm p}}
{\,\rm d}{\vphantom' E}^{\rm p}
$}
%
\end{displaymath} (11.42)

Here $n_s = 2s+1$ is the number of spin states.

Now according to the derived distributions, the number of particles in a single energy state at energy ${\vphantom' E}^{\rm p}$ is

\begin{displaymath}
\iota = \frac{1}{e^{({\vphantom' E}^{\rm p}-\mu)/{k_{\rm B}}T}\pm1}
\end{displaymath}

where the plus sign applies for fermions and the minus sign for bosons. For distinguishable particles, the $\pm1$ term can be ignored completely.

To get the total number of particles, just integrate the particles per state $\iota$ over all states:

\begin{displaymath}
I = \int_{{\vphantom' E}^{\rm p}=0}^\infty \iota\, V n_s {\cal D}{\,\rm d}{\vphantom' E}^{\rm p}
= \int_{{\vphantom' E}^{\rm p}=0}^\infty
\frac{V n_s {\cal D}}{e^{({\vphantom' E}^{\rm p}-\mu)/{k_{\rm B}}T}\pm1} {\,\rm d}{\vphantom' E}^{\rm p}
\end{displaymath}

and to get the total energy, integrate the energy of each single-particle state times the number of particles in that state over all states:

\begin{displaymath}
E = \int_{{\vphantom' E}^{\rm p}=0}^\infty {\vphantom' E}^{\rm p}\, \iota\, V n_s {\cal D}{\,\rm d}{\vphantom' E}^{\rm p}
= \int_{{\vphantom' E}^{\rm p}=0}^\infty
\frac{{\vphantom' E}^{\rm p}\, V n_s {\cal D}}{e^{({\vphantom' E}^{\rm p}-\mu)/{k_{\rm B}}T}\pm1} {\,\rm d}{\vphantom' E}^{\rm p}
\end{displaymath}

The expression for the number of particles can be nondimensionalized by rearranging and taking a root to give

\begin{displaymath}
\fbox{$\displaystyle
\frac{\displaystyle
\frac{\hbar^2}{2m}\left(\frac{I}{V}\right)^{2/3}}{{k_{\rm B}}T}
=
\left(\frac{n_s}{4\pi^2}
\int_{u=0}^\infty \frac{\sqrt{u}{\,\rm d}u}{e^{u-u_0}\pm1}
\right)^{2/3}
\qquad
u \equiv \frac{{\vphantom' E}^{\rm p}}{{k_{\rm B}}T}\quad u_0 \equiv \frac{\mu}{{k_{\rm B}}T}
$}
%
\end{displaymath} (11.43)

Note that the left hand side is a nondimensional ratio of a typical quantum microscopic energy, based on the average particle spacing $\sqrt[3]{V/I}$, to the typical classical microscopic energy ${k_{\rm B}}T$. This ratio is a key nondimensional number governing weakly interacting particles in a box. To put the typical quantum energy into context, a single particle in its own volume of size $V/I$ would have a ground state energy $3\pi^2\hbar^2/\left[2m(V/I)^{2/3}\right]$.
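To make this ratio concrete, the following sketch (a rough numerical illustration; the densities and masses are representative values assumed here, not taken from this section) evaluates it for air molecules at room conditions and for the valence electrons in copper:

```python
import math

hbar = 1.0545718e-34  # J s
kB = 1.380649e-23     # J/K

def energy_ratio(m, n, T):
    """Quantum microscopic energy (hbar^2/2m)(I/V)^(2/3) over classical kB T."""
    return (hbar**2 / (2.0 * m)) * n**(2.0/3.0) / (kB * T)

# Air molecules at room conditions: deep in the classical regime.
r_air = energy_ratio(4.8e-26, 2.5e25, 300.0)   # m ~ air molecule, n ~ air density

# Valence electrons in copper: strongly quantum even at room temperature.
r_cu = energy_ratio(9.109e-31, 8.5e28, 300.0)  # m ~ electron, n ~ one per atom
```

The ratio comes out around $10^{-6}$ for the air molecules but around 30 for the electrons, which is why air behaves classically while the valence electrons do not.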

Some references, [4], define a “thermal de Broglie wavelength” $\lambda_{\rm {th}}$ by writing the classical microscopic energy ${k_{\rm B}}T$ in a quantum-like way:

\begin{displaymath}
{k_{\rm B}}T \equiv 4\pi \frac{\hbar^2}{2m} \frac{1}{\lambda_{\rm th}^2}
\end{displaymath}

In some simple cases, you can think of this as roughly the quantum wavelength corresponding to the momentum of the particles. It allows various results that depend on the nondimensional ratio of energies to be reformulated in terms of a nondimensional ratio of lengths, as in

\begin{displaymath}
\frac{\displaystyle \frac{\hbar^2}{2m}\left(\frac{I}{V}\right)^{2/3}}{{k_{\rm B}}T}
=
\frac{1}{4\pi}
\left[\frac{\lambda_{\rm th}}{\left(V/I\right)^{1/3}}\right]^2
\end{displaymath}

Since the ratio of energies is fully equivalent, and has an unambiguous meaning, this book will refrain from making the theory harder than needed by defining superfluous quantities. But in practice, thinking in terms of numerical values that are lengths is likely to be more intuitive than thinking in terms of energies, and then the numerical value of the thermal wavelength would be the one to keep in mind.
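As a quick consistency sketch (the mass and density below are representative values for air, assumed here for illustration), the defining relation for $\lambda_{\rm th}$ does make the squared length ratio reproduce the energy ratio up to the factor $4\pi$:

```python
import math

hbar = 1.0545718e-34  # J s
kB = 1.380649e-23     # J/K

m = 4.65e-26   # kg, roughly an N2 molecule (representative value)
T = 300.0      # K
n = 2.5e25     # particles per m^3, roughly air at room conditions

# Solving kB T = 4 pi (hbar^2/2m) / lambda_th^2 for the thermal wavelength:
lam = hbar * math.sqrt(2.0 * math.pi / (m * kB * T))

spacing = n**(-1.0/3.0)   # average particle spacing (V/I)^(1/3)
energy_ratio = (hbar**2 / (2.0*m)) * n**(2.0/3.0) / (kB*T)
length_ratio = (lam / spacing)**2 / (4.0*math.pi)   # should equal energy_ratio
```

For these numbers $\lambda_{\rm th}$ is about 0.02 nm, far below the few-nanometer particle spacing: firmly in the classical regime.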

Note that (11.43) provides a direct relationship between the ratio of typical quantum/classical energies on one side, and $u_0$, the ratio of atomic chemical potential $\mu$ to typical classical microscopic energy ${k_{\rm B}}T$, on the other side. While the two energy ratios are not the same, (11.43) makes them equivalent for systems of weakly interacting particles in boxes: know one and you can in principle compute the other.

The expression for the system energy may be nondimensionalized in a similar way to get

\begin{displaymath}
\fbox{$\displaystyle
\frac{E}{I{k_{\rm B}}T}
=
\left(\int_{u=0}^\infty \frac{u^{3/2}{\,\rm d}u}{e^{u-u_0}\pm1}\right)
\left/
\left(\int_{u=0}^\infty \frac{u^{1/2}{\,\rm d}u}{e^{u-u_0}\pm1}\right)
\right.
\qquad
u \equiv \frac{{\vphantom' E}^{\rm p}}{{k_{\rm B}}T}\quad u_0 \equiv \frac{\mu}{{k_{\rm B}}T}
$}
%
\end{displaymath} (11.44)

The integral in the bottom arises when the ratio of energies that forms during the nondimensionalization is eliminated using (11.43).

The quantity on the left hand side is the nondimensional ratio of the actual system energy to the system energy if every particle had the typical classical energy ${k_{\rm B}}T$. It too is a unique function of $u_0$, and as a consequence also of the ratio of typical microscopic quantum and classical energies.


11.14.1 Bose-Einstein condensation

Bose-Einstein condensation is said to have occurred when, in a macroscopic system, the number of bosons in the ground state becomes a finite fraction of the number of particles $I$. It happens when the temperature is lowered sufficiently, or the particle density is increased sufficiently, or both.

According to derivation {D.58}, the number of particles in the ground state is given by

\begin{displaymath}
I_1 = \frac{N_1-1}{e^{({\vphantom' E}^{\rm p}_1-\mu)/{k_{\rm B}}T}-1}.
\end{displaymath} (11.45)

In order for this to become a finite fraction of the large number of particles $I$ of a macroscopic system, the denominator must become extremely small, hence the exponential must become extremely close to one, hence $\mu$ must come extremely close to the lowest energy level ${\vphantom' E}^{\rm p}_1$. To be precise, ${\vphantom' E}^{\rm p}_1-\mu$ must be small of order ${k_{\rm B}}T/I$; smaller than the classical microscopic energy by the humongous factor $I$. In addition, for a macroscopic system of weakly interacting particles in a box, ${\vphantom' E}^{\rm p}_1$ is extremely close to zero; it is smaller than the microscopic quantum energy defined above by a factor $I^{2/3}$. So condensation occurs when $\mu \approx {\vphantom' E}^{\rm p}_1 \approx 0$, the approximations being extremely close. If the ground state is unique, $N_1 = 1$, Bose-Einstein condensation simply occurs when $\mu = {\vphantom' E}^{\rm p}_1 \approx 0$.

You would therefore expect that you can simply put $u_0 = \mu/{k_{\rm B}}T$ to zero in the integrals (11.43) and (11.44). However, if you do so, (11.43) fails to describe the number of particles in the ground state; it only gives the number of particles $I-I_1$ not in the ground state:

\begin{displaymath}
\frac{\displaystyle
\frac{\hbar^2}{2m}\left(\frac{I-I_1}{V}\right)^{2/3}}{{k_{\rm B}}T}
=
\left(\frac{n_s}{4\pi^2}
\int_{u=0}^\infty \frac{\sqrt{u}{\,\rm d}u}{e^u-1}
\right)^{2/3} \qquad\mbox{for BEC}
\end{displaymath} (11.46)

To see that the number of particles in the ground state is indeed not included in the integral, note that while the integrand does become infinite when $u\downarrow0$, it becomes infinite proportionally to $1/\sqrt{u}$, which integrates as proportional to $\sqrt{u}$, and $\sqrt{u_1} = \sqrt{{\vphantom' E}^{\rm p}_1/{k_{\rm B}}T}$ is vanishingly small, not finite. Arguments given in derivation {D.58} do show that the only significant error occurs for the ground state; the above integral does correctly approximate the number of particles not in the ground state when condensation has occurred.

The value of the integral can be found in mathematical handbooks, [39, p. 201, with typo], as $\frac12!\zeta\left(\frac32\right)$, with $\zeta$ the so-called Riemann zeta function, due to, who else, Euler. Euler showed that it is equal to a product of terms ranging over all prime numbers, but you do not want to know that. All you want to know is that $\zeta\left(\frac32\right) \approx 2.612$ and that $\frac12! = \frac12\sqrt{\pi}$.
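Both numbers are easy to verify; a minimal numerical sketch, using the midpoint rule after substituting $u=t^2$ so that the integrand stays finite at the origin:

```python
import math

# integral_0^infinity sqrt(u) du / (e^u - 1), substituting u = t^2:
# the integrand becomes 2 t^2/(e^(t^2) - 1), finite (value 2) at t = 0.
N, tmax = 200_000, 12.0
h = tmax / N
integral = sum(2.0*t*t / (math.exp(t*t) - 1.0)
               for t in (h*(i + 0.5) for i in range(N))) * h

# Handbook value: (1/2)! zeta(3/2) = (sqrt(pi)/2) zeta(3/2),
# with zeta(3/2) summed directly plus an integral estimate of the tail.
zeta32 = sum(k**-1.5 for k in range(1, 100_000)) + 2.0/math.sqrt(100_000.0)
series = 0.5 * math.sqrt(math.pi) * zeta32
```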

The Bose-Einstein temperature $T_B$ is the temperature at which Bose-Einstein condensation starts. That means it is the temperature for which $I_1 = 0$ in the expression above, giving

\begin{displaymath}
\frac{\displaystyle
\frac{\hbar^2}{2m}\left(\frac{I-I_1}{V}\right)^{2/3}}{{k_{\rm B}}T}
=
\frac{\displaystyle
\frac{\hbar^2}{2m}\left(\frac{I}{V}\right)^{2/3}}{{k_{\rm B}}T_B}
=
\left(\frac{n_s}{4\pi^2}
{\textstyle\frac12}\sqrt{\pi}\,\zeta\left({\textstyle\frac32}\right)
\right)^{2/3} \quad T\mathrel{\raisebox{-.7pt}{$\leqslant$}}T_B
\end{displaymath} (11.47)

It implies that for a given system of bosons, at Bose-Einstein condensation there is a fixed numerical ratio between the microscopic quantum energy based on particle density and the classical microscopic energy ${k_{\rm B}}T_B$. That also illustrates the point made at the beginning of this subsection that both changes in temperature and changes in particle density can produce Bose-Einstein condensation.

The first equality in the equation above can be cleaned up to give the fraction of bosons in the ground state as:

\begin{displaymath}
\frac{I_1}{I} = 1 - \left(\frac{T}{T_B}\right)^{3/2} \qquad T\mathrel{\raisebox{-.7pt}{$\leqslant$}}T_B
\end{displaymath} (11.48)
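As a worked example (the helium-4 mass and liquid-helium density are values assumed here; this section does not give them), the condensation temperature for a boson gas at the density of liquid helium comes out near 3.1 K. The observed lambda point of liquid helium is 2.17 K; the difference is no surprise, since liquid helium is anything but weakly interacting.

```python
import math

hbar = 1.0545718e-34  # J s
kB = 1.380649e-23     # J/K

m = 6.646e-27   # kg, a helium-4 atom (assumed value)
n = 2.18e28     # atoms/m^3, roughly liquid helium at 145 kg/m^3 (assumed)
ns = 1          # a single spin state for spin-zero helium-4

# (11.47) with I_1 = 0, solved for T_B; the integral equals (sqrt(pi)/2) zeta(3/2):
const = (ns/(4.0*math.pi**2) * 0.5*math.sqrt(math.pi) * 2.612)**(2.0/3.0)
TB = (hbar**2 / (2.0*m)) * n**(2.0/3.0) / (kB * const)

# Condensate fraction (11.48) at half the condensation temperature:
frac = 1.0 - 0.5**1.5
```

At $T = T_B/2$, about 65% of the particles sit in the ground state.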


11.14.2 Fermions at low temperatures

Another application of the integrals (11.43) and (11.44) is to find the Fermi energy ${\vphantom' E}^{\rm p}_{\rm {F}}$ and internal energy $E$ of a system of weakly interacting fermions at vanishing temperature.

For low temperatures, the nondimensional energy ratio $u_0 = \mu/{k_{\rm B}}T$ blows up, since ${k_{\rm B}}T$ becomes zero and the chemical potential $\mu$ does not; $\mu$ becomes the Fermi energy ${\vphantom' E}^{\rm p}_{\rm {F}}$, chapter 6.10. To deal with the blow up, the integrals can be rephrased in terms of $u/u_0 = {\vphantom' E}^{\rm p}/\mu$, which does not blow up.

In particular, the ratio (11.43) involving the typical microscopic quantum energy can be rewritten by taking a factor $u_0^{3/2}$ out of the integral and the root, and moving it to the other side, to give:

\begin{displaymath}
\frac{\displaystyle \frac{\hbar^2}{2m} \left(\frac{I}{V}\right)^{2/3}}{\mu}
=
\left(\frac{n_s}{4\pi^2}
\int_{u/u_0=0}^\infty \frac{\sqrt{u/u_0}{\,\rm d}(u/u_0)}{e^{u_0[(u/u_0)-1]}+ 1}
\right)^{2/3}
\end{displaymath}

Now since $u_0$ is large, the exponential in the denominator becomes extremely large for $u/u_0 > 1$, making the integrand negligibly small there. Therefore the upper limit of integration can be limited to $u/u_0 = 1$. In the remaining range, the exponential is vanishingly small, except for a negligibly small interval around $u/u_0 = 1$, so it can be ignored. That gives

\begin{displaymath}
\frac{\displaystyle \frac{\hbar^2}{2m} \left(\frac{I}{V}\right)^{2/3}}{\mu}
=
\left(\frac{n_s}{4\pi^2}
\int_{u/u_0=0}^1 \sqrt{u/u_0}{\,\rm d}(u/u_0)
\right)^{2/3}
=
\left( \frac{n_s}{6\pi^2} \right)^{2/3}
\end{displaymath}

It follows that the Fermi energy is

\begin{displaymath}
{\vphantom' E}^{\rm p}_{\rm {F}} = \mu\vert _{T=0} =
\left(\frac{6\pi^2}{n_s}\right)^{2/3}
\frac{\hbar^2}{2m} \left(\frac{I}{V}\right)^{2/3}
\end{displaymath}
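The step-function approximation just made can be checked numerically: already for $u_0=50$, the full Fermi-Dirac integral agrees with $\frac23u_0^{3/2}$ to better than a tenth of a percent (a midpoint-rule sketch):

```python
import math

def fd_integral(u0, N=400_000):
    """Midpoint rule for integral_0^infinity sqrt(u) du / (e^(u - u0) + 1)."""
    umax = u0 + 40.0     # the integrand is utterly negligible beyond this
    h = umax / N
    total = 0.0
    for i in range(N):
        u = h * (i + 0.5)
        total += math.sqrt(u) / (math.exp(u - u0) + 1.0)
    return total * h

u0 = 50.0
step_value = (2.0/3.0) * u0**1.5    # integral of sqrt(u) from 0 to u0
ratio = fd_integral(u0) / step_value
```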

Physicists like to define a “Fermi temperature” as the temperature at which the classical microscopic energy ${k_{\rm B}}T$ becomes equal to the Fermi energy. It is

\begin{displaymath}
T_{\rm {F}} = \frac{1}{k_{\rm B}} \left(\frac{6\pi^2}{n_s}\right)^{2/3}
\frac{\hbar^2}{2m} \left(\frac{I}{V}\right)^{2/3}
%
\end{displaymath} (11.49)

It may be noted that, except for the numerical factor, the expression for the Fermi temperature $T_{\rm {F}}$ is the same as that for the Bose-Einstein condensation temperature $T_B$ given in the previous subsection.

Electrons have $n_s = 2$. For the valence electrons in typical metals, the Fermi temperatures are on the order of tens of thousands of kelvin. A metal will melt before its Fermi temperature is reached; the valence electrons are pretty much the same at room temperature as they are at absolute zero.
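For a concrete number (the copper valence-electron density, one electron per atom, is an assumed literature-style value, not given in the text), the Fermi energy and temperature of copper come out near 7 eV and $8\times10^4$ K:

```python
import math

hbar = 1.0545718e-34   # J s
kB = 1.380649e-23      # J/K
me = 9.109e-31         # kg, electron mass
eV = 1.602177e-19      # J

n = 8.47e28    # valence electrons per m^3 in copper (assumed value)
ns = 2         # two spin states for electrons

EF = (6.0*math.pi**2/ns)**(2.0/3.0) * (hbar**2/(2.0*me)) * n**(2.0/3.0)
TF = EF / kB
```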

The integral (11.44) can be integrated in the same way and then shows that $E = \frac35I\mu = \frac35I{\vphantom' E}^{\rm p}_{\rm {F}}$. In short, at absolute zero, the average energy per particle is $\frac35$ times ${\vphantom' E}^{\rm p}_{\rm {F}}$, the maximum single-particle energy.

It should be admitted that both of the results in this subsection have been obtained more simply in chapter 6.10. However, the analysis in this subsection can be used to find the corrected expressions when the temperature is fairly small but not zero, {D.63}, or for any temperature by brute-force numerical integration. One result is the specific heat at constant volume of the free-electron gas for low temperatures:

\begin{displaymath}
C_v = \frac{\pi^2}{2}\frac{k_{\rm B}T}{{\vphantom' E}^{\rm p}_{\rm {F}}} \frac{k_{\rm B}}{m}
(1 + \ldots)
\end{displaymath} (11.50)

where $k_{\rm B}/m$ is the gas constant $R$. All low-temperature expansions proceed in powers of $({k_{\rm B}}T/{\vphantom' E}^{\rm p}_{\rm {F}})^2$, so the dots in the expression for $C_v$ above are of that order. The specific heat vanishes at zero temperature and is typically small.


11.14.3 A generalized ideal gas law

While the previous subsections produced a lot of interesting information about weakly interacting particles near absolute zero, how about some info about conditions that you can check in a T-shirt? And how about something mathematically simple, instead of elaborate integrals that produce weird functions?

Well, there is at least one. By definition, (11.8), the pressure is the expectation value of $-{\rm d}{\vphantom' E}^{\rm S}_q/{\rm d}{V}$ where the ${\vphantom' E}^{\rm S}_q$ are the system energy eigenvalues. For weakly interacting particles in a box, chapter 6.2 found that the single-particle energies are inversely proportional to the squares of the linear dimensions of the box, which means proportional to $V^{-2/3}$. Then so are the system energy eigenvalues, since they are sums of single-particle energies: ${\vphantom' E}^{\rm S}_q = \mbox{constant }V^{-2/3}$. Differentiating produces ${\rm d}{\vphantom' E}^{\rm S}_q/{\rm d}{V} = -\frac23{\vphantom' E}^{\rm S}_q/V$ and taking the expectation value

\begin{displaymath}
\fbox{$\displaystyle
PV={\textstyle\frac{2}{3}} E
$}
%
\end{displaymath} (11.51)

This expression is valid for weakly interacting bosons and fermions even if the symmetrization requirements cannot be ignored.
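The differentiation behind (11.51) can be mimicked numerically: with an arbitrary constant (the value below is purely illustrative), a central finite difference of ${\vphantom' E}^{\rm S}_q = \mbox{constant }V^{-2/3}$ reproduces $PV=\frac23E$:

```python
# Each system energy eigenvalue scales as E_q = c * V^(-2/3);
# then P = -dE_q/dV gives P V = (2/3) E_q.
c = 3.7e-20   # J m^2, an arbitrary illustrative constant
V = 1.0e-3    # m^3

def E(vol):
    return c * vol**(-2.0/3.0)

dV = 1.0e-9
P = -(E(V + dV) - E(V - dV)) / (2.0 * dV)   # central finite difference
lhs, rhs = P * V, (2.0/3.0) * E(V)
```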


11.14.4 The ideal gas

The weakly interacting particles in a box can be approximated as an ideal gas if the number of particles is so small, or the box so large, that the average number of particles per energy state is much less than one.

Since the number of particles per energy state is given by

\begin{displaymath}
\iota = \frac{1}{e^{({\vphantom' E}^{\rm p}-\mu)/{k_{\rm B}}T}\pm 1}
\end{displaymath}

ideal gas conditions imply that the exponential must be much greater than one, and then the $\pm1$ can be ignored. That means that the difference between fermions and bosons, which is what the $\pm1$ accounts for, can be ignored for an ideal gas. Both can be approximated by the distribution derived for distinguishable particles.

The energy integral (11.44) can now easily be done; the $e^{u_0}$ factor divides away and an integration by parts in the numerator produces $E = \frac32I{k_{\rm B}}T$. Plug it into the generalized ideal gas law (11.51) to get the normal “ideal gas law”

\begin{displaymath}
\fbox{$\displaystyle
PV=I k_{\rm B}T
\qquad\Longleftrightarrow\qquad
Pv=RT \quad R \equiv \frac{k_{\rm B}}{m}
$}
%
\end{displaymath} (11.52)
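The $E=\frac32I{k_{\rm B}}T$ step can also be seen without integration by parts: with the $\pm1$ dropped, the $e^{u_0}$ factors cancel and the ratio of integrals in (11.44) becomes a ratio of Gamma functions, $\Gamma(\frac52)/\Gamma(\frac32)=\frac32$, whatever $u_0$ is:

```python
import math

# E/(I kB T) = Gamma(5/2) / Gamma(3/2) for the Maxwell-Boltzmann case
ratio = math.gamma(2.5) / math.gamma(1.5)
```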

Also, following (11.34),

\begin{displaymath}
e = {\textstyle\frac{3}{2}} \frac{k_{\rm B}}{m} T = C_v T \qquad
h = {\textstyle\frac{5}{2}} \frac{k_{\rm B}}{m} T = C_p T \qquad
C_v = {\textstyle\frac{3}{2}} R \quad C_p = {\textstyle\frac{5}{2}} R
\end{displaymath}

but note that these formulae are specific to the simplistic ideal gases described by the model, like noble gases. For ideal gases with more complex molecules, like air, the specific heats are not constants but vary with temperature, as discussed in section 11.15.

The ideal gas equation is identical to the one derived in classical physics. That is important since it establishes that what was defined to be the temperature in this chapter is in fact the ideal gas temperature that classical physics defines.

The integral (11.43) can be done using integration by parts and a result found in the notations under “!”. It gives an expression for the single-particle chemical potential $\mu$:

\begin{displaymath}
-\frac{\mu}{{k_{\rm B}}T}
=
{\textstyle\frac{3}{2}}
\ln\left[
\left(\frac{n_s\sqrt{\pi}}{8\pi^2}\right)^{2/3}
{k_{\rm B}}T \left/ \frac{\hbar^2}{2m}\right.
\left(\frac{I}{V}\right)^{2/3}
\right]
\end{displaymath}

Note that the argument of the logarithm is essentially the ratio between the classical microscopic energy and the quantum microscopic energy based on average particle spacing. This ratio has to be big for an accurate ideal gas, to get the exponential in the particle energy distribution $\iota$ to be big.
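To see how big the logarithm typically is, the sketch below evaluates the classical/quantum energy ratio for air at room conditions (representative mass and density are assumed), ignoring the modest numerical factor inside the logarithm:

```python
import math

hbar = 1.0545718e-34  # J s
kB = 1.380649e-23     # J/K

m = 4.8e-26   # kg, a typical air molecule (representative value)
n = 2.5e25    # particles per m^3, air at room conditions (representative)
T = 300.0     # K

# Ratio of classical to quantum microscopic energy, the bulk of the
# logarithm's argument:
big = kB*T / ((hbar**2 / (2.0*m)) * n**(2.0/3.0))
minus_mu_over_kT = 1.5 * math.log(big)
```

The ratio is of order $10^5$ and $-\mu/{k_{\rm B}}T$ comes out around 19, so the exponential in $\iota$ is indeed enormous and the ideal gas approximation is excellent.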

Next is the specific entropy $s$. Recall that the chemical potential is just the Gibbs free energy per particle. By the definition of the Gibbs free energy, the specific entropy $s$ equals $(h-g)/T$. Now the specific Gibbs energy $g$ is just the Gibbs energy per unit mass, in other words $\mu/m$, while $h/T = C_p$ as above. So

\begin{displaymath}
\fbox{$\displaystyle
s =
C_v
\ln
\left[
\left(\frac{n_s\sqrt{\pi}}{8\pi^2}\right)^{2/3}
{k_{\rm B}}T \left/ \frac{\hbar^2}{2m}\right.
\left(\frac{I}{V}\right)^{2/3}
\right]
+ C_p
$}
%
\end{displaymath} (11.53)

In terms of classical thermodynamics, $V/I$ is $m$ times the specific volume $v$. So classical thermodynamics takes the logarithm above apart as

\begin{displaymath}
s = C_v\ln(T) + R\ln(v) + \mbox{some combined constant}
\end{displaymath}

and then promptly forgets about the constant, damn units.


11.14.5 Blackbody radiation

This section takes a closer look at blackbody radiation, discussed earlier in chapter 6.8. Blackbody radiation is the basic model for absorption and emission of electromagnetic radiation. Electromagnetic radiation includes light and a wide range of other radiation, like radio waves, microwaves, and X-rays. All surfaces absorb and emit radiation; otherwise we would not see anything. But “black” surfaces are the easiest to understand theoretically.

No, a black body need not look black. If its temperature is high enough, it could look like the sun. What defines an ideal black body is that it absorbs (internalizes instead of reflects) all radiation that hits it. But it may be emitting its own radiation at the same time. And that makes a difference. If the black body is cool, you will need your infrared camera to see it; it would look really black to the eye. It is not reflecting any radiation, and it is not emitting any visible amount either. But if it is at the temperature of the sun, better take out your sunglasses. It is still absorbing all radiation that hits it, but it is emitting large amounts of its own too, and lots of it in the visible range.

So where do you get a nearly perfectly black surface? Matte black paint? A piece of blackboard? Soot? Actually, pretty much all materials will reflect in some range of wave lengths. You get the blackest surface by using no material at all. Take a big box and paint its interior the blackest you can. Close the box, then drill a very tiny hole in its side. From the outside, the area of the hole will be truly, absolutely black. Whatever radiation enters there is gone. Still, when you heat the box to very high temperatures, the hole will shine bright.

While any radiation entering the hole will most surely be absorbed somewhere inside, the inside of the box itself is filled with electromagnetic radiation, like a gas of photons, produced by the hot inside surface of the box. And some of those photons will manage to escape through the hole, making it shine.

The number of photons in the box may be computed from the Bose-Einstein distribution with a few caveats. The first is that there is no limit on the number of photons; photons will be created or absorbed by the box surface to achieve thermal equilibrium at whatever level is most probable at the given temperature. This means the chemical potential $\mu$ of the photons is zero, as you can check from the derivations in notes {D.58} and {D.59}.

The second caveat is that the usual density of states (6.6) is nonrelativistic. It does not apply to photons, which move at the speed of light. For photons you must use the density of modes (6.7).

The third caveat is that there are only two independent spin states for a photon. Since the photon is a spin-one particle, you would expect the spin values 0 and $\pm1$, but the zero value does not occur in the direction of propagation, addendum {A.21.6}. Therefore the number of independent states that exist is two, not three. A different way to understand this is classical: the electric field can only oscillate in the two independent directions normal to the direction of propagation, (13.10); oscillation in the direction of propagation itself is not allowed by Maxwell’s laws because it would make the divergence of the electric field nonzero. The fact that there are only two independent states has already been accounted for in the density of modes (6.7).

The energy per unit box volume and unit frequency range found under the above caveats is Planck’s blackbody spectrum, already given in chapter 6.8:

\begin{displaymath}
\rho(\omega) \equiv
\frac{{\rm d}(E/V)}{{\rm d}\omega} =
\frac{\hbar}{\pi^2c^3} \frac{\omega^3}{e^{\hbar\omega/{k_{\rm B}}T}-1}
\end{displaymath} (11.54)

The expression for the total internal energy per unit volume is called the “Stefan-Boltzmann formula.” It is found by integration of Planck’s spectrum over all frequencies, just like for the Stefan-Boltzmann law in chapter 6.8:

\begin{displaymath}
\fbox{$\displaystyle
\frac{E}{V} =
\frac{\pi^2}{15\hbar^3c^3} (k_{\rm B}T)^4
$}
%
\end{displaymath} (11.55)

The number of particles may be found similarly to the energy, by dropping the $\hbar\omega$ energy per particle from the integral. It is, [39, 36.24, with typo]:

\begin{displaymath}
\frac{I}{V} =
\frac{2\zeta(3)}{\pi^2\hbar^3c^3} (k_{\rm B}T)^3
\qquad\zeta(3)\approx 1.202
%
\end{displaymath} (11.56)

Taking the ratio with (11.55), the average energy per photon may be found:
\begin{displaymath}
\fbox{$\displaystyle
\frac{E}{I} =
\frac{\pi^4}{30\zeta(3)} k_{\rm B}T
\approx 2.7 {k_{\rm B}}T
$}
%
\end{displaymath} (11.57)

The temperature has to be roughly 9,000 K for the average photon to become visible light. That is one reason a black body will look black at a room temperature of about 300 K. The solar surface has a temperature of about 6,000 K, so the visible light photons it emits are more energetic than average, but there are still plenty of them.
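The 2.7 factor and the rough 9,000 K figure are quick to reproduce; taking a 2.25 eV green photon as representative of visible light is an assumption made here:

```python
import math

kB = 1.380649e-23    # J/K
eV = 1.602177e-19    # J

zeta3 = sum(k**-3 for k in range(1, 20_000))
coef = math.pi**4 / (30.0 * zeta3)     # average photon energy = coef * kB * T

# Temperature at which the average photon reaches the visible range:
T_visible = 2.25 * eV / (coef * kB)    # ~2.25 eV: green light, ~550 nm
```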

The entropy $S$ of the photon gas follows from integrating $\int{\rm d}E/T$ using (11.55), starting from absolute zero and keeping the volume constant:

\begin{displaymath}
\fbox{$\displaystyle
\frac{S}{V} =
\frac{4\pi^2}{45\hbar^3c^3} k_{\rm B}(k_{\rm B}T)^3
$}
%
\end{displaymath} (11.58)

Dividing by (11.56) shows the average entropy per photon to be
\begin{displaymath}
\frac{S}{I} = \frac{2\pi^4}{45\zeta(3)} k_{\rm B}
%
\end{displaymath} (11.59)

independent of temperature.

The generalized ideal gas law (11.51) does not apply to the pressure exerted by the photon gas, because the energy of a photon is ${\hbar}ck$, proportional to the wave number instead of to its square. The corrected expression is:

\begin{displaymath}
\fbox{$\displaystyle
PV={\textstyle\frac{1}{3}} E
$}
%
\end{displaymath} (11.60)


11.14.6 The Debye model

To explain the heat capacity of simple solids, Debye modeled the energy in the crystal vibrations very much the same way as the photon gas of the previous subsection. This subsection briefly outlines the main ideas.

For electromagnetic waves propagating with the speed of light $c$, substitute acoustical waves propagating with the speed of sound $c_{\rm {s}}$. For photons with energy $\hbar\omega$, substitute phonons with energy $\hbar\omega$. Since, unlike electromagnetic waves, sound waves can also vibrate in the direction of wave propagation, for the number of spin states substitute $n_s = 3$ instead of 2; in other words, just multiply the various expressions for photons by 1.5.

The critical difference for solids is that the number of modes, hence the frequencies, is not infinitely large. Since each individual atom has three degrees of freedom (it can move in three independent directions), there are $3I$ degrees of freedom, and reformulating the motion in terms of acoustic waves does not change that number. The shortest wave lengths will be comparable to the atom spacing, and no waves of shorter wave length will exist. As a result, there will be a highest frequency $\omega_{\rm {max}}$. The “Debye temperature” $T_D$ is defined as the temperature at which the typical classical microscopic energy ${k_{\rm B}}T$ becomes equal to the maximum quantum microscopic energy $\hbar\omega_{\rm {max}}$:

\begin{displaymath}
\fbox{$\displaystyle
{k_{\rm B}}T_D=\hbar\omega_{\rm max}
$}
%
\end{displaymath} (11.61)

The expression for the inter­nal energy becomes, from (6.11) times 1.5:

\begin{displaymath}
\fbox{$\displaystyle
\frac{E}{V} = \int_0^{\omega_{\rm m...
...ega^3}{e^{\hbar\omega/{k_{\rm B}}T}-1}{\,\rm d}\omega
$}
%
\end{displaymath} (11.62)

If the temperature is very low, the exponential will make the integrand zero except for very small frequencies. Then the upper limit is essentially infinite compared to the range of integration. That makes the energy proportional to $T^4$, just like for the photon gas, and the heat capacity is therefore proportional to $T^3$. At the other extreme, when the temperature is large, the exponential in the bottom can be expanded in a Taylor series and the energy becomes proportional to $T$, making the heat capacity constant.
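Both limits are easy to confirm from (11.62). In terms of $x=\hbar\omega/{k_{\rm B}}T$, the energy is proportional to $T^4\int_0^{x_D}x^3{\,\rm d}x/(e^x-1)$ with $x_D$ proportional to $T_D/T$; a midpoint-rule sketch of that integral shows the two regimes:

```python
import math

def debye_integral(xmax, N=200_000):
    """Midpoint rule for integral_0^xmax x^3 dx / (e^x - 1)."""
    h = xmax / N
    return sum((h*(i + 0.5))**3 / math.expm1(h*(i + 0.5))
               for i in range(N)) * h

# Low temperature (T_D/T large): the integral approaches pi^4/15,
# so E ~ T^4 and the heat capacity ~ T^3.
low = debye_integral(50.0)

# High temperature (T_D/T small): the integrand ~ x^2, the integral
# ~ (T_D/T)^3/3, so E ~ T and the heat capacity is constant.
x = 0.01
high = debye_integral(x)
```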

The maximum frequency, hence the Debye temperature, can be found from the requirement that the number of modes is $3I$, applied by integrating (6.7); or an empirical value can be used to improve the approximation for whatever temperature range is of interest. Literature values are often chosen to approximate the low temperature range accurately, since the model works best for low temperatures. If integration of (6.7) is used, at high temperatures the law of Dulong and Petit results, as described in section 11.15.

More sophisticated versions of the analysis exist to account for some of the very nontrivial differences between crystal vibrations and electromagnetic waves. They will need to be left to the literature.