D.61 Checks on the expression for entropy

According to the microscopic definition, the differential of the entropy $S$ should be

\begin{displaymath}
{\rm d}S = -k_{\rm B}{\rm d}\left[\sum_q P_q \ln P_q\right]
\end{displaymath}

where the sum is over all system energy eigenfunctions $\psi^{\rm S}_q$ and $P_q$ is their probability. The differential can be simplified to

\begin{displaymath}
{\rm d}S = - k_{\rm B}\sum_q \left[\ln P_q + 1\right]{\,\rm d}P_q
= - k_{\rm B}\sum_q \ln P_q{\,\rm d}P_q,
\end{displaymath}

the latter equality since the sum of the probabilities is always one, so $\sum_q {\rm d}P_q = 0$.
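In more detail, the first equality above is just the product rule of differentiation, using the fact that $P_q{\,\rm d}(\ln P_q) = P_q{\,\rm d}P_q/P_q = {\rm d}P_q$:

\begin{displaymath}
{\rm d}\left[\sum_q P_q \ln P_q\right]
= \sum_q \left[\ln P_q {\,\rm d}P_q + P_q {\,\rm d}(\ln P_q)\right]
= \sum_q \left[\ln P_q + 1\right]{\,\rm d}P_q
\end{displaymath}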

This is to be compared with the macroscopic differential for the entropy. Since the macroscopic expression requires thermal equilibrium, $P_q$ in the microscopic expression above can be equated to the canonical value $e^{-E^{\rm S}_q/k_{\rm B}T}/Z$, where $E^{\rm S}_q$ is the energy of system eigenfunction $\psi^{\rm S}_q$. It simplifies the microscopic differential of the entropy to

\begin{displaymath}
{\rm d}S
= - k_{\rm B}\sum_q \left[-\frac{E^{\rm S}_q}{k_{\rm B}T} - \ln Z\right]{\,\rm d}P_q
= \frac{1}{T} \sum_q E^{\rm S}_q{\,\rm d}P_q,
\end{displaymath} (D.38)

the second equality since $Z$ is a constant in the summation and $\sum_q {\rm d}P_q = 0$.
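Written out term by term, the bracket expands as

\begin{displaymath}
{\rm d}S = \frac{1}{T} \sum_q E^{\rm S}_q {\,\rm d}P_q
+ k_{\rm B}\ln Z \sum_q {\rm d}P_q
\end{displaymath}

and the second sum vanishes because the probabilities always sum to one.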

The macroscopic expression for the differential of entropy is given by (11.18),

\begin{displaymath}
{\rm d}S = \frac{\delta Q}{T}.
\end{displaymath}

Substituting in the differential first law (11.11),

\begin{displaymath}
{\rm d}S = \frac{1}{T}{\,\rm d}E + \frac{1}{T}P {\,\rm d}V
\end{displaymath}

and plugging into that the definitions of $E$ and $P$, which are $E = \sum_q P_q E^{\rm S}_q$ and $P = -\sum_q P_q\,{\rm d}E^{\rm S}_q/{\rm d}V$,

\begin{displaymath}
{\rm d}S = \frac{1}{T} {\,\rm d}\left[\sum_q P_q E^{\rm S}_q\right]
- \frac{1}{T} \left[\sum_q P_q \frac{{\rm d}E^{\rm S}_q}{{\rm d}V}\right] {\,\rm d}V
\end{displaymath}

and differentiating out the product in the first term, one part drops out against the second term, and what is left is the differential for $S$ according to the microscopic definition (D.38). So the macroscopic and microscopic definitions agree on the entropy to within a constant. That means that they agree completely, because the macroscopic definition has no clue about the constant.
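In more detail, the energy eigenvalues change only through the volume, so ${\rm d}E^{\rm S}_q = ({\rm d}E^{\rm S}_q/{\rm d}V){\,\rm d}V$, and the product rule gives

\begin{displaymath}
\frac{1}{T} {\,\rm d}\left[\sum_q P_q E^{\rm S}_q\right]
= \frac{1}{T} \sum_q E^{\rm S}_q {\,\rm d}P_q
+ \frac{1}{T} \left[\sum_q P_q \frac{{\rm d}E^{\rm S}_q}{{\rm d}V}\right] {\,\rm d}V
\end{displaymath}

The final term cancels the pressure term, leaving exactly the right hand side of (D.38).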

Now consider the case of a system with zero indeterminacy in energy. According to the fundamental assumption, all the eigenfunctions with the correct energy should have the same probability in thermal equilibrium. From the entropy's point of view, thermal equilibrium should be the stable, most messy state, having the maximum entropy. For the two views to agree, the maximum of the microscopic expression for the entropy should occur when all eigenfunctions of the given energy have the same probability. Restricting attention to only the energy eigenfunctions $\psi^{\rm S}_q$ with the correct energy, the maximum entropy occurs when the derivatives of

\begin{displaymath}
F = - k_{\rm B}\sum_q P_q \ln P_q -\epsilon \left(\sum_q P_q -1\right)
\end{displaymath}

with respect to the $P_q$ are zero. Note that the constraint that the sum of the probabilities must be one has been added as a penalty term with a Lagrangian multiplier, {D.48}. Taking derivatives produces

\begin{displaymath}
- k_{\rm B}\ln(P_q) -k_{\rm B}- \epsilon = 0
\end{displaymath}

showing that, yes, all the $P_q$ have the same value at the maximum entropy. (Note that the minima in entropy, all $P_q$ zero except one, do not show up in the derivation; $P_q\ln P_q$ is zero when $P_q = 0$, but its derivative does not exist there. In fact, the infinite derivative can be used to verify that no maxima exist with any of the $P_q$ equal to zero, if you are worried about that.)
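Solving the derivative condition for $P_q$ confirms it:

\begin{displaymath}
\ln P_q = - 1 - \frac{\epsilon}{k_{\rm B}}
\quad\Longrightarrow\quad
P_q = e^{-1-\epsilon/k_{\rm B}}
\end{displaymath}

The right hand side does not depend on $q$, and the multiplier $\epsilon$ then follows from the requirement that the probabilities sum to one.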

If the energy is uncertain, and only the expectation energy is known, the penalized function becomes

\begin{displaymath}
F = - k_{\rm B}\sum_q P_q \ln P_q
- \epsilon_1 \left(\sum_q P_q - 1\right)
- \epsilon_2 \left(\sum_q E^{\rm S}_q P_q - E\right)
\end{displaymath}

and the derivatives become

\begin{displaymath}
- k_{\rm B}\ln(P_q) - k_{\rm B} - \epsilon_1 - \epsilon_2 E^{\rm S}_q = 0
\end{displaymath}

which can be solved to show that

\begin{displaymath}
P_q = C_1 e^{-E^{\rm S}_q/C_2}
\end{displaymath}

with $C_1$ and $C_2$ constants. The requirement to conform with the given definition of temperature identifies $C_2$ as $k_{\rm B}T$, and the fact that the probabilities must sum to one identifies $C_1$ as $1/Z$.
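In more detail, solving the derivative condition for $P_q$ gives

\begin{displaymath}
\ln P_q = - 1 - \frac{\epsilon_1}{k_{\rm B}} - \frac{\epsilon_2}{k_{\rm B}} E^{\rm S}_q
\quad\Longrightarrow\quad
P_q = e^{-1-\epsilon_1/k_{\rm B}}\, e^{-\epsilon_2 E^{\rm S}_q/k_{\rm B}}
\end{displaymath}

so that $C_1 = e^{-1-\epsilon_1/k_{\rm B}}$ and $C_2 = k_{\rm B}/\epsilon_2$.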

For two systems $A$ and $B$ in thermal contact, the probabilities of the combined system energy eigenfunctions are found as the products of the probabilities of those of the individual systems. The maximum of the combined entropy, constrained by the given total energy $E$, is then found by differentiating

\begin{eqnarray*}
F &=& - k_{\rm B}\sum_{q_A}\sum_{q_B} P_{q_A}P_{q_B} \ln(P_{q_A}P_{q_B})
- \epsilon_{1,A} \left(\sum_{q_A} P_{q_A} - 1\right)
- \epsilon_{1,B} \left(\sum_{q_B} P_{q_B} - 1\right)
\\
&& - \epsilon_2 \left(\sum_{q_A} P_{q_A} E^{\rm S}_{q_A} + \sum_{q_B} P_{q_B} E^{\rm S}_{q_B} - E\right)
\end{eqnarray*}

$F$ can be simplified by taking apart the logarithm and noting that the probabilities $P_{q_A}$ and $P_{q_B}$ sum to one to give

\begin{eqnarray*}
F &=&
- k_{\rm B}\sum_{q_A} P_{q_A} \ln(P_{q_A})
- k_{\rm B}\sum_{q_B} P_{q_B} \ln(P_{q_B})
\\
&& - \epsilon_{1,A} \left(\sum_{q_A} P_{q_A} - 1\right)
- \epsilon_{1,B} \left(\sum_{q_B} P_{q_B} - 1\right)
- \epsilon_2 \left(\sum_{q_A} P_{q_A} E^{\rm S}_{q_A} + \sum_{q_B} P_{q_B} E^{\rm S}_{q_B} - E\right)
\end{eqnarray*}
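In more detail, $\ln(P_{q_A}P_{q_B}) = \ln(P_{q_A}) + \ln(P_{q_B})$, so the double sum factors as

\begin{displaymath}
\sum_{q_A}\sum_{q_B} P_{q_A}P_{q_B} \ln(P_{q_A}P_{q_B})
= \sum_{q_A} P_{q_A} \ln(P_{q_A}) \sum_{q_B} P_{q_B}
+ \sum_{q_B} P_{q_B} \ln(P_{q_B}) \sum_{q_A} P_{q_A}
\end{displaymath}

in which the trailing sums are each equal to one.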

Differentiation now produces

\begin{eqnarray*}
&& - k_{\rm B}\ln(P_{q_A}) - k_{\rm B} - \epsilon_{1,A} - \epsilon_2 E^{\rm S}_{q_A} = 0
\\
&& - k_{\rm B}\ln(P_{q_B}) - k_{\rm B} - \epsilon_{1,B} - \epsilon_2 E^{\rm S}_{q_B} = 0
\end{eqnarray*}

which produces $P_{q_A} = C_{1,A}e^{-E^{\rm S}_{q_A}/C_2}$ and $P_{q_B} = C_{1,B}e^{-E^{\rm S}_{q_B}/C_2}$, and the common constant $C_2$ then implies that the two systems have the same temperature.
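Indeed, both exponentials involve the same $C_2 = k_{\rm B}/\epsilon_2$, since the single multiplier $\epsilon_2$ enforces the shared total energy constraint. Identifying $C_2$ with $k_{\rm B}T$ as before then gives

\begin{displaymath}
k_{\rm B}T_A = C_2 = k_{\rm B}T_B
\quad\Longrightarrow\quad
T_A = T_B
\end{displaymath}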