A.11 Thermoelectric effects

This note gives additional information on thermoelectric effects.


A.11.1 Peltier and Seebeck coefficient ballparks

The approximate expressions for the semiconductor Peltier coefficients come from [27]. Straub et al. (Appl. Phys. Lett. 95, 052107, 2009) note that to better approximation, $\frac32{k_{\rm B}}T$ should be $(\frac52+r){k_{\rm B}}T$ with $r$ typically $-\frac12$. A phonon contribution should also be added.

The estimate for the Peltier coefficient of a metal assumes that the electrons form a free-electron gas. The conduction will be assumed to be in the $x$-direction. Ballparking the Peltier coefficient requires the average charge flow per electron $\overline{-ev_x}$ and the average energy flow per electron $\overline{{{\vphantom' E}^{\rm p}}v_x}$. Here $v_x$ is the electron velocity in the $x$-direction, $\vphantom0\raisebox{1.5pt}{$-$}$$e$ the electron charge, ${\vphantom' E}^{\rm p}$ the electron energy, and an overline is used to indicate an average over all electrons. To find ballparks for the two averages, assume the model of conduction of the free-electron gas as given in chapter 6.20. Conduction arises because the Fermi sphere is displaced slightly towards the right in the wave number space figure 6.17. Call the small amount of displacement $k_{\rm {d}}$. Assume for simplicity that in a coordinate system $k_xk_yk_z$ with origin at the center of the displaced Fermi sphere, the occupation of the single-particle states by electrons is still exactly given by the equilibrium Fermi-Dirac distribution. However, due to the displacement $k_{\rm {d}}$ along the $k_x$ axis, the velocities and energies of the single-particle states are now given by

\begin{displaymath}
v_x = \frac{\hbar}{m}(k_x+k_{\rm {d}})
\qquad
{\vphantom' E}^{\rm p}= \frac{\hbar^2}{2m}\left(k^2+ 2k_xk_{\rm {d}}+k_{\rm {d}}^2\right)
\end{displaymath}

To simplify the notations, the above expressions will be abbreviated to

\begin{displaymath}
v_x = C_v(k_x+k_{\rm {d}})
\qquad
{\vphantom' E}^{\rm p}= C_E(k^2+ 2k_xk_{\rm {d}}+k_{\rm {d}}^2)
\end{displaymath}

In this notation, the average charge and energy flows per electron become

\begin{displaymath}
\overline{-ev_x} = \overline{-eC_v(k_x+k_{\rm {d}})}
\qquad
\overline{{\vphantom' E}^{\rm p}v_x} =
\overline{C_E(k^2+ 2k_xk_{\rm {d}}+k_{\rm {d}}^2)C_v(k_x+k_{\rm {d}})}
\end{displaymath}

Next note that the averages involving odd powers of $k_x$ are zero, because for every state of positive $k_x$ in the Fermi sphere there is a corresponding state of negative $k_x$. Also the constants, including $k_{\rm {d}}$, can be taken out of the averages. So the flows simplify to

\begin{displaymath}
\overline{-ev_x} = -e C_v k_{\rm {d}}
\qquad
\overline{{\vphantom' E}^{\rm p}v_x} =
C_E(2\overline{k_x^2}+\overline{k^2})C_vk_{\rm {d}}
\end{displaymath}

where the term cubically small in $k_{\rm {d}}$ was ignored. Now by symmetry the averages of $k_x^2$, $k_y^2$, and $k_z^2$ are equal, so each must be one third of the average of $k^2$. And $C_E$ times the average of $k^2$ is the average energy per electron ${\vphantom' E}^{\rm p}_{\rm {ave}}$ in the absence of conduction. Also, by definition $C_vk_{\rm {d}}$ is the drift velocity $v_{\rm {d}}$ that produces the current. Therefore:

\begin{displaymath}
\overline{-ev_x} = -e v_{\rm {d}}
\qquad
\overline{v_x{\vphantom' E}^{\rm p}} =
{\textstyle\frac{5}{3}} {\vphantom' E}^{\rm p}_{\rm {ave}} v_{\rm {d}}
\end{displaymath}

Note that if you simply ballparked the average of $v_x{\vphantom' E}^{\rm p}$ as the average of $v_x$ times the average of ${\vphantom' E}^{\rm p}$, you would miss the factor 5/3. That would produce a gigantically wrong Peltier coefficient.
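The 5/3 factor can be checked numerically. The sketch below (a check, not part of the original derivation) samples wave number vectors uniformly inside a unit Fermi sphere and compares the combination $2\overline{k_x^2}+\overline{k^2}$ that appears in the energy flow against $\frac53\overline{k^2}$:

```python
import random

# Monte Carlo check of the 5/3 factor: sample wave number vectors
# uniformly inside a unit Fermi sphere by rejection sampling.
random.seed(1)
n = 200_000
sum_kx2 = 0.0
sum_k2 = 0.0
count = 0
while count < n:
    kx = random.uniform(-1, 1)
    ky = random.uniform(-1, 1)
    kz = random.uniform(-1, 1)
    k2 = kx*kx + ky*ky + kz*kz
    if k2 <= 1.0:            # keep only points inside the Fermi sphere
        sum_kx2 += kx*kx
        sum_k2 += k2
        count += 1

avg_kx2 = sum_kx2 / n
avg_k2 = sum_k2 / n

# By symmetry avg(kx^2) is one third of avg(k^2), so the energy-flow
# combination 2*avg(kx^2) + avg(k^2) should equal (5/3)*avg(k^2).
ratio = (2*avg_kx2 + avg_k2) / avg_k2
```

With a few hundred thousand samples the ratio reproduces 5/3 to a fraction of a percent.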

To get the heat flow, the energy must be taken relative to the Fermi level $\mu$. In other words, the energy flow $\overline{v_x\mu}$ must be subtracted from $\overline{v_x{\vphantom' E}^{\rm p}}$. The Peltier coefficient is the ratio of that heat flow to the charge flow:

\begin{displaymath}
{\mathscr P}= \frac{\overline{v_x({\vphantom' E}^{\rm p}-\mu)}}{\overline{-ev_x}}
= \frac{{\textstyle\frac{5}{3}}{\vphantom' E}^{\rm p}_{\rm {ave}}-\mu}{-e}
\end{displaymath}

If you plug in the expressions for the average energy per electron and the chemical potential found in derivation {D.63}, you get the Peltier ballpark listed in the text.

To get Seebeck coefficient ballparks, simply divide the Peltier coefficients by the absolute temperature. That works because of Kelvin’s second relationship discussed below. To get the Seebeck coefficient ballpark for a metal directly from the Seebeck effect, equate the increase in electrostatic potential energy of an electron migrating from hot to cold to the decrease in average electron kinetic energy. Using the average kinetic energy of derivation {D.63}:

\begin{displaymath}
- e {\,\rm d}\varphi = - {\,\rm d}\frac{\pi^2}{4} \frac{(k_{\rm B}T)^2}{{\vphantom' E}^{\rm p}_{\rm {F}}}
\end{displaymath}

Divide by $e{\,\rm d}{T}$ to get the Seebeck coefficient.
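Dividing the relation above by $e{\,\rm d}{T}$, and using the sign convention of (A.31), gives ${\mathscr S} = -\frac{\pi^2}{2}\frac{k_{\rm B}}{e}\frac{k_{\rm B}T}{{\vphantom' E}^{\rm p}_{\rm F}}$. The sketch below evaluates that ballpark numerically; the 7 eV Fermi energy is a copper-like literature value, an assumption rather than a number from this note:

```python
import math

# Seebeck ballpark for a free-electron metal:
#   S = -(pi^2/2) * (kB/e) * (kB*T / E_F)
# Working in eV units makes kB/e numerically equal to kB in eV/K.
kB = 8.617e-5   # Boltzmann constant, eV/K
T = 300.0       # temperature, K
E_F = 7.0       # assumed Fermi energy, eV (copper-like ballpark)

S = -(math.pi**2 / 2) * kB * (kB * T / E_F)   # Seebeck coefficient, V/K
```

This gives on the order of a microvolt per kelvin, far below typical semiconductor values, as expected from the degenerate electron gas.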


A.11.2 Figure of merit

To compare thermoelectric materials, an important quantity is the figure of merit of the material. The figure of merit is by convention written as $M^2$ where

\begin{displaymath}
M = {\mathscr P}\sqrt{\frac{\sigma}{\kappa T}}
\end{displaymath}

The temperature $T$ of the material should typically be taken as the average temperature in the device being examined. The reason that $M$ is important has to do with units. Number $M$ is “nondimensional,” it has no units. In SI units, the Peltier coefficient ${\mathscr P}$ is in volts, the electrical conductivity $\sigma$ in ampere/volt-meter, the temperature in kelvin, and the thermal conductivity $\kappa$ in watt/kelvin-meter, with watt equal to volt ampere. That makes the combination above nondimensional.
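As a minimal numerical sketch, $M^2 = {\mathscr P}^2\sigma/\kappa T$ can be evaluated for rough literature ballparks of a bismuth-telluride-like material (the numbers below are assumptions, not values from this note), using the Kelvin relation ${\mathscr P}={\mathscr S}T$ discussed later:

```python
# Figure of merit M^2 = P^2 * sigma / (kappa * T); with P = S*T this is
# the familiar "ZT" combination S^2 * sigma * T / kappa.
S = 200e-6     # Seebeck coefficient, V/K (assumed ballpark)
sigma = 1.0e5  # electrical conductivity, A/(V m) (assumed ballpark)
kappa = 1.5    # thermal conductivity, W/(K m) (assumed ballpark)
T = 300.0      # average device temperature, K

P = S * T                         # Peltier coefficient, volts
M2 = P**2 * sigma / (kappa * T)   # nondimensional figure of merit M^2
```

For these numbers $M^2$ comes out near 0.8, the right order of magnitude for good room-temperature thermoelectrics.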

To see why that is relevant, suppose you have a material with a low Peltier coefficient. You might consider compensating for that by, say, scaling up the size of the material or the current through it. And maybe that does give you a better device than you would get with a material with a higher Peltier coefficient. Maybe not. How do you know?

Dimensional analysis can help answer that question. It says that nondimensional quantities depend only on nondimensional quantities. For example, for a Peltier cooler you might define an efficiency as the heat removed from your ice cubes per unit electrical energy used. That is a nondimensional number. It will not depend on, say, the actual size of the semiconductor blocks, but it will depend on such nondimensional parameters as their shape, and their size relative to the overall device. Those are within your complete control during the design of the cooler. But the efficiency will also depend on the nondimensional figure of merit $M$ above, and there you are limited to the available materials. Having a material with a higher figure of merit would give you a higher thermoelectric effect for the same losses due to electrical resistance and heat leaks.

To be sure, it is somewhat more complicated than that because two different materials are involved. That makes the efficiency depend on at least two nondimensional figures of merit, one for each material. And it might also depend on other nondimensional numbers that can be formed from the properties of the materials. For example, the efficiency of a simple thermoelectric generator turns out to depend on a net figure of merit given by [8]:

\begin{displaymath}
M_{\rm net} =
M_{\rm A}
\frac{\sqrt{\kappa_{\rm {A}}/\sigma_{\rm {A}}}}
{\sqrt{\kappa_{\rm {A}}/\sigma_{\rm {A}}}+\sqrt{\kappa_{\rm {B}}/\sigma_{\rm {B}}}}
- M_{\rm B}
\frac{\sqrt{\kappa_{\rm {B}}/\sigma_{\rm {B}}}}
{\sqrt{\kappa_{\rm {A}}/\sigma_{\rm {A}}}+\sqrt{\kappa_{\rm {B}}/\sigma_{\rm {B}}}}
\end{displaymath}

It shows that the figures of merit $M_{\rm {A}}$ and $M_{\rm {B}}$ of the two materials get multiplied by nondimensional fractions. These fractions are in the range from 0 to 1, and they sum to one. To get the best efficiency, you would like $M_{\rm {A}}$ to be as large positive as possible, and $M_{\rm {B}}$ as large negative as possible. That is as noted in the text. But all else being the same, the efficiency also depends to some extent on the nondimensional fractions multiplying $M_{\rm {A}}$ and $M_{\rm {B}}$. It helps if the material with the larger figure of merit $\vert M\vert$ also has the larger ratio of $\kappa$$\raisebox{.5pt}{$/$}$$\sigma$. If say $M_{\rm {A}}$ exceeds $-M_{\rm {B}}$ for the best materials A and B, then you could potentially replace B by a cheaper material with a much lower figure of merit, as long as that replacement material has a very low value of $\kappa$$\raisebox{.5pt}{$/$}$$\sigma$ relative to A. In general, the more nondimensional numbers there are that are important, the harder it is to analyze the efficiency theoretically.
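A small sketch of that weighting, with made-up material numbers and assuming the convention in which $M_{\rm B}$ enters with a minus sign, so that a negative $M_{\rm B}$ helps:

```python
import math

# Each material's figure of merit is weighted by a fraction built from
# sqrt(kappa/sigma); the two fractions lie between 0 and 1 and sum to one.
M_A, kappa_A, sigma_A = 0.9, 1.5, 1.0e5    # hypothetical material A
M_B, kappa_B, sigma_B = -0.7, 2.0, 0.8e5   # hypothetical material B

r_A = math.sqrt(kappa_A / sigma_A)
r_B = math.sqrt(kappa_B / sigma_B)
w_A = r_A / (r_A + r_B)
w_B = r_B / (r_A + r_B)

# M_B enters with a minus sign, so its negative value adds to the net.
M_net = M_A * w_A - M_B * w_B
```

Note how the weight of each material grows with its own $\kappa/\sigma$ ratio, matching the replacement argument above.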


A.11.3 Physical Seebeck mechanism

The given qualitative description of the Seebeck mechanism is very crude. For example, for semiconductors it ignores variations in the number of charge carriers. Even for a free-electron gas model for metals, there may be variations in charge carrier density that offset velocity effects. Worse, for metals it ignores the exclusion principle that restricts the motion of the electrons. And it ignores the fact that the hotter side does not just have electrons with higher energy relative to the Fermi level than the colder side, it also has electrons with lower energy that can be excited to move. If the lower energy electrons have a larger mean free path, they can come from larger distances than the higher energy ones. And for metal electrons in a lattice, the velocity might easily go down with energy instead of up. That is readily appreciated from the spectra in chapter 6.22.2.

For a much more detailed description, see “Thermoelectric Effects in Metals: Thermocouples” by S. O. Kasap, 2001. This paper is available on the web for personal study. It includes actual data for metals compared to the simple theory.


A.11.4 Full thermoelectric equations

To understand the Peltier, Seebeck, and Thomson effects more precisely, the full equations of heat and charge flow are needed. That is classical thermodynamics, not quantum mechanics. However, standard undergraduate thermodynamics classes do not cover it, and even the thick standard undergraduate textbooks do not provide much more than a superficial mention that thermoelectric effects exist. Therefore this subsection will describe the equations of thermoelectrics in a nutshell.

The discussion will be one-dimensional. Think of a bar of material aligned in the $x$-direction. If the full three-dimensional equations of charge and heat flow are needed, for isotropic materials you can simply replace the $x$ derivatives by gradients.

Heat flow is primarily driven by variations in temperature, and electric current by variations in the chemical potential of the electrons. The first question is then what precisely is the relation between those variations and the heat flow and current that they cause.

Now the microscopic scales that govern the motion of atoms and electrons are normally extremely small. Therefore an atom or electron “sees” only a very small portion of the macroscopic temperature and chemical potential distributions. The atoms and electrons do notice that the distributions are not constant, otherwise they would not conduct heat or current at all. But they see so little of the distributions that to them they appear to vary linearly with position. As a result it is simple gradients, i.e. first derivatives, of the temperature and potential distributions that drive heat flow and current in common solids. Symbolically:

\begin{displaymath}
q = f_1\left(\frac{{\rm d}T}{{\rm d}x},\frac{{\rm d}\varphi_\mu}{{\rm d}x}\right)
\qquad
j = f_2\left(\frac{{\rm d}T}{{\rm d}x},\frac{{\rm d}\varphi_\mu}{{\rm d}x}\right)
\end{displaymath}

Here $q$ is the “heat flux density;” “flux” is a fancy word for “flow” and the qualifier “density” indicates that it is per unit cross-sectional area of the bar. Similarly $j$ is the current density, the current per unit cross-sectional area. If you want, it is the charge flux density. Further $T$ is the temperature, and $\varphi_\mu$ is the chemical potential $\mu$ per unit electron charge $\vphantom0\raisebox{1.5pt}{$-$}$$e$. That includes the electrostatic potential (simply put, the voltage) as well as an intrinsic chemical potential of the electrons. The unknown functions $f_1$ and $f_2$ will be different for different materials and different conditions.

The above equations are not valid if the temperature and potential distributions change nontrivially on microscopic scales. For example, shock waves in supersonic flows of gases are extremely thin; therefore you cannot use equations of the type above for them. Another example is highly rarefied flows, in which the molecules move long distances without collisions. Such extreme cases can really only be analyzed numerically and they will be ignored here. It is also assumed that the materials maintain their internal integrity under the conduction processes.

Under normal conditions, a further approximation can be made. The functions $f_1$ and $f_2$ in the expressions for the heat flux and current densities would surely depend nonlinearly on their arguments if those arguments were finite on a microscopic scale. But on a microscopic scale, temperature and potential hardly change. (Supersonic shock waves and similar are again excluded.) Therefore, the gradients appear small in microscopic terms. And if that is true, the functions $f_1$ and $f_2$ can be linearized using Taylor series expansion. That gives:

\begin{displaymath}
q = A_{11}\frac{{\rm d}T}{{\rm d}x} + A_{12}\frac{{\rm d}\varphi_\mu}{{\rm d}x}
\qquad
j = A_{21}\frac{{\rm d}T}{{\rm d}x}+A_{22}\frac{{\rm d}\varphi_\mu}{{\rm d}x}
\end{displaymath}

The four coefficients $A_{..}$ will normally need to be determined experimentally for a given material at a given temperature. The properties of solids normally vary little with pressure.

By convention, the four coefficients are rewritten in terms of four other, more intuitive, ones:

\begin{displaymath}
\fbox{$\displaystyle
q = -(\kappa+{\mathscr P}{\mathscr S}\sigma)\frac{{\rm d}T}{{\rm d}x}
- {\mathscr P}\sigma\frac{{\rm d}\varphi_\mu}{{\rm d}x}
\qquad
j = -\sigma{\mathscr S}\frac{{\rm d}T}{{\rm d}x}
-\sigma\frac{{\rm d}\varphi_\mu}{{\rm d}x}
$}
%
\end{displaymath} (A.31)

This defines the heat conductivity $\kappa$, the electrical conductivity $\sigma$, the Seebeck coefficient ${\mathscr S}$ and the Peltier coefficient ${\mathscr P}$ of the material. (The signs of the Peltier and Seebeck coefficients vary considerably between references.)
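For reference, comparing (A.31) with the earlier Taylor series form identifies the four coefficients; this is pure bookkeeping, spelled out here for clarity:

```latex
A_{11} = -(\kappa+{\mathscr P}{\mathscr S}\sigma) \qquad
A_{12} = -{\mathscr P}\sigma \qquad
A_{21} = -\sigma{\mathscr S} \qquad
A_{22} = -\sigma
```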

If conditions are isothermal, the second equation is Ohm’s law for a unit cube of material, with $\sigma$ the usual conductivity, the inverse of the resistance of the unit cube. The Seebeck effect corresponds to the case that there is no current. In that case, the second equation produces

\begin{displaymath}
\fbox{$\displaystyle
\frac{{\rm d}\varphi_\mu}{{\rm d}x} = - {\mathscr S}\frac{{\rm d}T}{{\rm d}x}
$}
%
\end{displaymath} (A.32)

To see what this means, integrate this along a closed circuit all the way from lead 1 of a voltmeter through a sample to the other lead 2. That gives
\begin{displaymath}
\varphi_{\mu,2} - \varphi_{\mu,1} = - \int_1^2 {\mathscr S}{\rm d}T
\end{displaymath} (A.33)

Assuming that the two leads of the voltmeter are at the same temperature, their intrinsic chemical potentials are the same. In that case, the difference in potentials is equal to the difference in electrostatic potentials. In other words, the integral gives the difference between the voltages inside the two leads. And that is the voltage that will be displayed by the voltmeter.
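A minimal numerical sketch of that integral for a thermocouple of two wires with temperature-independent Seebeck coefficients; the coefficient values are illustrative assumptions, not data from this note:

```python
# Voltmeter reading from integrating -S dT around the circuit: lead 1,
# up wire B from T_cold to T_hot, back down wire A to T_cold, lead 2.
# With constant Seebeck coefficients the piecewise integral reduces to
#   V = (S_A - S_B) * (T_hot - T_cold)
# and the sign flips if the circuit is traversed the other way around.
S_A = 25e-6     # V/K, hypothetical wire A
S_B = -15e-6    # V/K, hypothetical wire B
T_cold = 273.0  # K, temperature of both voltmeter leads
T_hot = 373.0   # K, temperature of the measuring junction

V = (S_A - S_B) * (T_hot - T_cold)   # volts shown by the voltmeter
```

For these numbers the reading is a few millivolts per hundred kelvin, the typical thermocouple scale.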

It is often convenient to express the heat flux density $q$ in terms of the current density instead of the gradient of the potential $\varphi_\mu$. Eliminating this gradient from the equations (A.31) produces

\begin{displaymath}
\fbox{$\displaystyle
q = -\kappa\frac{{\rm d}T}{{\rm d}x} + {\mathscr P}j
$}
%
\end{displaymath} (A.34)

In case there is no current, this is the well-known Fourier’s law of heat conduction, with $\kappa$ the usual thermal conductivity. Note that the heat flux density is often simply called the heat flux, even though it is per unit area. In the presence of current, the heat flux density is augmented by the Peltier effect, the second term.

The total energy flowing through the bar is the sum of the thermal heat flux and the energy carried along by the electrons:

\begin{displaymath}
j_E = q + j \varphi_\mu
\end{displaymath}

If the energy flow is constant, the same energy flows out of a piece ${\rm d}{x}$ of the bar as flows into it. Otherwise the negative $x$-derivative of the energy flux density gives the net energy accumulation $\dot{e}$ per unit volume:

\begin{displaymath}
\dot e = -\frac{{\rm d}j_E}{{\rm d}x}
= - \frac{{\rm d}q}{{\rm d}x} - j \frac{{\rm d}\varphi_\mu}{{\rm d}x}
\end{displaymath}

where it was assumed that the electric current is constant, as it must be for a steady state. Of course, in a steady state any nonzero $\dot{e}$ must be removed by heat conduction through the sides of the bar of material being tested, or through some alternative means. Substituting in from (A.34) for $q$ and from the second of (A.31) for the gradient of the potential gives:

\begin{displaymath}
\dot e =
\frac{{\rm d}}{{\rm d}x} \left(\kappa \frac{{\rm d}T}{{\rm d}x}\right)
+ \frac{j^2}{\sigma}
- {\mathscr K}j \frac{{\rm d}T}{{\rm d}x}
\qquad
{\mathscr K}\equiv \frac{{\rm d}{\mathscr P}}{{\rm d}T} - {\mathscr S}
\end{displaymath}

The final term in the energy accumulation is the Thomson effect or Kelvin heat. The Kelvin (Thomson) coefficient ${\mathscr K}$ can be cleaned up using the second Kelvin relationship given in a later subsection.
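For clarity, the substitution can be spelled out. From (A.34), differentiating $q$ produces the conduction term plus a term with ${\rm d}{\mathscr P}/{\rm d}x = ({\rm d}{\mathscr P}/{\rm d}T)({\rm d}T/{\rm d}x)$, while the second of (A.31) gives ${\rm d}\varphi_\mu/{\rm d}x = -j/\sigma - {\mathscr S}\,{\rm d}T/{\rm d}x$:

```latex
\dot e = -\frac{{\rm d}q}{{\rm d}x} - j\frac{{\rm d}\varphi_\mu}{{\rm d}x}
= \frac{{\rm d}}{{\rm d}x}\left(\kappa\frac{{\rm d}T}{{\rm d}x}\right)
- j\frac{{\rm d}{\mathscr P}}{{\rm d}T}\frac{{\rm d}T}{{\rm d}x}
+ \frac{j^2}{\sigma} + j{\mathscr S}\frac{{\rm d}T}{{\rm d}x}
```

Combining the two terms proportional to $j\,{\rm d}T/{\rm d}x$ gives the Kelvin heat term $-{\mathscr K}j\,{\rm d}T/{\rm d}x$.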

The equations (A.31) are often said to be representative of nonequilibrium thermodynamics. However, they correspond to a vanishingly small perturbation from thermodynamical equilibrium. The equations would more correctly be called quasi-equilibrium thermodynamics. Nonequilibrium thermodynamics is what you have inside a shock wave.


A.11.5 Charge locations in thermoelectrics

The statement that the charge density is neutral inside the material comes from [9].

A simplified macroscopic derivation can be given based on the thermoelectric equations (A.31). The derivation assumes that the temperature and chemical potential are almost constant. That means that derivatives of thermodynamic quantities and electric potential are small. That makes the heat flux and current also small.

Next, in three dimensions replace the $x$ derivatives in the thermoelectric equations (A.31) by the gradient operator $\nabla$. Now under steady-state conditions, the divergence of the current density must be zero, or there would be an unsteady local accumulation or depletion of net charge, chapter 13.2. Similarly, the divergence of the heat flux density must be zero, or there would be an accumulation or depletion of thermal energy. (This ignores local heat generation as an effect that is quadratically small for small currents and heat fluxes.)

Therefore, taking the divergence of the equations (A.31) and ignoring the variations of the coefficients, which give again quadratically small contributions, it follows that the Laplacians of both the temperature and the chemical potential are zero.

Now the chemical potential includes both the intrinsic chemical potential and the additional electrostatic potential. The intrinsic chemical potential depends on temperature. Using again the assumption that quadratically small terms can be ignored, the Laplacian of the intrinsic potential is proportional to the Laplacian of the temperature and therefore zero.

Then the Laplacian of the electrostatic potential must be zero too, to make the Laplacian of the total potential zero. And that then implies the absence of net charge inside the material according to Maxwell’s first equation, chapter 13.2. Any net charge must accumulate at the surfaces.


A.11.6 Kelvin relationships

This subsection gives an explanation of the definition of the thermal heat flux in thermoelectrics. It also explains that the Kelvin (or Thomson) relationships are a special case of the more general “Onsager reciprocal relations.” If you do not know what thermodynamical entropy is, you should not be reading this subsection. Not before reading chapter 11, at least.

For simplicity, the discussion will again assume one-dimensional conduction of heat and current. The physical picture is therefore conduction along a bar aligned in the $x$-direction. It will be assumed that the bar is in a steady state, in other words, that the temperature and chemical potential distributions, heat flux and current through the bar all do not change with time.

Figure A.1: Analysis of conduction.
\begin{figure}
\centering
[Figure: the bar segment ${\rm d}x$ sandwiched between reservoir 1, at temperature $T$ and chemical potential $\mu$, and reservoir 2, at $T + \frac{{\rm d}T}{{\rm d}x}\,{\rm d}x$ and $\mu + \frac{{\rm d}\mu}{{\rm d}x}\,{\rm d}x$.]
\end{figure}

The primary question is what is going on in a single short segment ${\rm d}{x}$ of such a bar. Here ${\rm d}{x}$ is assumed to be small on a macroscopic scale, but large on a microscopic scale. To analyze the segment, imagine it taken out of the bar and sandwiched between two big idealized “reservoirs” 1 and 2 of the same material, as shown in figure A.1. The idealized reservoirs are assumed to remain at uniform, thermodynamically reversible, conditions. Reservoir 1 is at the considered time at the same temperature and chemical potential as the start of the segment, and reservoir 2 at the same temperature and chemical potential as the end of the segment. The reservoirs are assumed to be big enough that their properties change slowly in time. Therefore it is assumed that their time variations do not have an effect on what happens inside the bar segment at the considered time. For simplicity, it will also be assumed that the material consists of a single particle type. Some of these particles are allowed to move through the bar segment from reservoir 1 to reservoir 2.

In other words, there is a flow, or flux, of particles through the bar segment. The corresponding particle flux density $j_I$ is the particle flow per unit area. For simplicity, it will be assumed that the bar has unit area. Then there is no difference between the particle flow and the particle flux density. Note that the same flow of particles $j_I$ must enter the bar segment from reservoir 1 as must exit from the segment into reservoir 2. If that was not the case, there would be a net accumulation or depletion of particles inside the bar segment. That is not possible, because the bar segment is assumed to be in a steady state. Therefore the flow of particles through the bar segment decreases the number of particles $I_1$ in reservoir 1, but increases the number $I_2$ in reservoir 2 correspondingly:

\begin{displaymath}
j_I = - \frac{{\rm d}I_1}{{\rm d}t} = \frac{{\rm d}I_2}{{\rm d}t}
\end{displaymath}

Further, due to the energy carried along by the moving particles, as well as due to thermal heat flow, there will be a net energy flow $j_E$ through the bar segment. Like the particle flow, the energy flow comes out of reservoir 1 and goes into reservoir 2:

\begin{displaymath}
j_E = - \frac{{\rm d}E_1}{{\rm d}t} = \frac{{\rm d}E_2}{{\rm d}t}
\end{displaymath}

Here $E_1$ is the total energy inside reservoir 1, and $E_2$ that inside reservoir 2. It is assumed that the reservoirs are kept at constant volume and are thermally insulated except at the junction with the bar, so that no energy is added due to pressure work or heat conduction elsewhere. Similarly, the sides of the bar segment are assumed thermally insulated.

One question is how to define the heat flux through the bar segment. In the absence of particle motion, the second law of thermodynamics allows an unambiguous answer. The heat flux $q$ through the bar enters reservoir 2, and the second law of thermodynamics then says:

\begin{displaymath}
q_2 = T_2\frac{{\rm d}S_2}{{\rm d}t}
\end{displaymath}

Here $S_2$ is the entropy of reservoir 2. In the presence of particles moving through the bar, the definition of thermal energy, and so the corresponding heat flux, becomes more ambiguous. The particles also carry along nonthermal energy. The question then becomes what should be counted as thermal energy, and what as nonthermal. To resolve that, the heat flux into reservoir 2 will be defined by the expression above. Note that the heat flux out of reservoir 1 might be slightly different because of variations in energy carried by the particles. It is the total energy flow $j_E$, not the heat flow $q$, that must be exactly constant.

To understand the relationship between heat flux and energy flux more clearly, some basic thermodynamics can be used. See chapter 11.12 for more details, including generalization to more than one particle type. A combination of the first and second laws of thermodynamics produces

\begin{displaymath}
T {\,\rm d}\bar s = {\,\rm d}\bar e + P {\,\rm d}\bar v
\qquad
S = \bar sI \quad E = \bar e I \quad V = \bar v I
\end{displaymath}

in which $\bar{s}$, $\bar{e}$, and $\bar{v}$ are the entropy, internal energy, and volume per particle, and $P$ is the pressure. That can be used to rewrite the derivative of entropy in the definition of the heat flux above:

\begin{displaymath}
T {\,\rm d}S = T {\rm d}(\bar s I) = T ({\rm d}\bar s) I + T \bar s ({\rm d}I)
= ({\rm d}\bar e + P {\,\rm d}\bar v) I + T \bar s ({\rm d}I)
\end{displaymath}

That can be rewritten as

\begin{displaymath}
T {\,\rm d}S = {\rm d}E + P {\rm d}V - (\bar e + P\bar v -T\bar s) {\rm d}I
\end{displaymath}

as can be verified by writing $E$ and $V$ as $\bar{e}I$ and $\bar{v}I$ and differentiating out. The parenthetical expression in the above equation is in thermodynamics known as the Gibbs free energy. Chapter 11.13 explains that it is the same as the chemical potential $\mu$ in the distribution laws. Therefore:

\begin{displaymath}
T {\,\rm d}S = {\rm d}E + P {\rm d}V - \mu {\rm d}I
\end{displaymath}

(Chapter 11.13 does not include an additional electrostatic energy due to an ambient electric field. But an intrinsic chemical potential can be defined by subtracting the electrostatic potential energy. The corresponding intrinsic energy also excludes the electrostatic potential energy. That makes the expression for the chemical potential the same in terms of intrinsic quantities as in terms of nonintrinsic ones. See also the discussion in chapter 6.14.)

Using the above expression for the change in entropy in the definition of the heat flux gives, noting that the volume is constant,

\begin{displaymath}
q_2 = \frac{{\rm d}E_2}{{\rm d}t} - \mu_2 \frac{{\rm d}I_2}{{\rm d}t}
= j_E - \mu_2 j_I
\end{displaymath}

It can be concluded from this that the nonthermal energy carried along per particle is $\mu$. The rest of the net energy flow is thermal energy.

The Kelvin relationships are related to the net entropy generated by the segment of the bar. The second law implies that irreversible processes always increase the net entropy in the universe. And by definition, the complete system of figure A.1 examined here is isolated. It does not exchange work nor heat with its surroundings. Therefore, the entropy of this system must increase in time due to irreversible processes. More specifically, the net system entropy must go up due to the irreversible heat conduction and particle transport in the segment of the bar. The reservoirs are taken to be thermodynamically reversible; they do not create entropy out of nothing. But the heat conduction in the bar is irreversible; it goes from hot to cold, not the other way around, in the absence of other effects. Similarly, the particle transport goes from higher chemical potential to lower.

While the conduction processes in the bar create net entropy, the entropy of the bar still does not change. The bar is assumed to be in a steady state. Instead the entropy created in the bar causes a net increase in the combined entropy of the reservoirs. Specifically,

\begin{displaymath}
\frac{{\rm d}S_{\rm net}}{{\rm d}t} = \frac{{\rm d}S_2}{{\rm d}t} + \frac{{\rm d}S_1}{{\rm d}t}
\end{displaymath}

By definition of the heat flux,

\begin{displaymath}
\frac{{\rm d}S_{\rm net}}{{\rm d}t} = \frac{q_2}{T_2} - \frac{q_1}{T_1}
\end{displaymath}

Substituting in the expression for the heat flux in terms of the energy and particle fluxes gives

\begin{displaymath}
\frac{{\rm d}S_{\rm net}}{{\rm d}t} =
\left(\frac{1}{T_2} j_E - \frac{\mu_2}{T_2} j_I\right) -
\left(\frac{1}{T_1} j_E - \frac{\mu_1}{T_1} j_I\right)
\end{displaymath}

Since the area of the bar is one, its volume is ${\rm d}{x}$. Therefore, the entropy generation per unit volume is:
\begin{displaymath}
\frac{1}{{\rm d}x} \frac{{\rm d}S_{\rm net}}{{\rm d}t} =
\frac{{\rm d}{1/T}}{{\rm d}x} j_E + \frac{{\rm d}{- \mu/T}}{{\rm d}x} j_I
%
\end{displaymath} (A.35)

This used the fact that since ${\rm d}{x}$ is infinitesimal, any expression of the form $(f_2-f_1)$$\raisebox{.5pt}{$/$}$${\rm d}{x}$ is by definition the derivative of $f$.

The above expression for the entropy generation implies that a nonzero derivative of 1$\raisebox{.5pt}{$/$}$$T$ must cause an energy flow of the same sign. Otherwise the entropy of the system would decrease if the derivative in the second term is zero. Similarly, a nonzero derivative of $\vphantom0\raisebox{1.5pt}{$-$}$$\mu$$\raisebox{.5pt}{$/$}$$T$ must cause a particle flow of the same sign. Of course, that does not exclude that the derivative of 1$\raisebox{.5pt}{$/$}$$T$ may also cause a particle flow as a secondary effect, or a derivative of $\vphantom0\raisebox{1.5pt}{$-$}$$\mu$$\raisebox{.5pt}{$/$}$$T$ an energy flow. Using the same reasoning as in an earlier subsection gives:

\begin{displaymath}
j_E = L_{11} \frac{{\rm d}{1/T}}{{\rm d}x} + L_{12} \frac{{\rm d}{-\mu/T}}{{\rm d}x}
\qquad
j_I = L_{21} \frac{{\rm d}{1/T}}{{\rm d}x}
+ L_{22} \frac{{\rm d}{-\mu/T}}{{\rm d}x}
%
\end{displaymath} (A.36)

where the $L_{..}$ are again coefficients to be determined experimentally. But in this case, the coefficients $L_{11}$ and $L_{22}$ must necessarily be positive. That can provide a sanity check on the experimental results. It is an advantage gained from taking the flows and derivatives directly from the equation of entropy generation. In fact, somewhat stronger constraints apply. If the expressions for $j_E$ and $j_I$ are plugged into the expression for the entropy generation, the result must be positive regardless of what the values of the derivatives are. That requires not just that $L_{11}$ and $L_{22}$ are positive, but also that the average of $L_{12}$ and $L_{21}$ is smaller in magnitude than the geometric average of $L_{11}$ and $L_{22}$.
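That constraint can be sanity-checked numerically. The sketch below uses made-up coefficients satisfying $L_{11},L_{22}>0$ and $\vert L_{12}+L_{21}\vert/2 < \sqrt{L_{11}L_{22}}$, and verifies that the entropy generation, the quadratic form $g_1 j_E + g_2 j_I$ in the two gradients, never comes out negative:

```python
import random

# Entropy generation per unit volume as a quadratic form in the two
# gradients g1 = d(1/T)/dx and g2 = d(-mu/T)/dx:
#   s_dot = g1*jE + g2*jI = L11*g1^2 + (L12+L21)*g1*g2 + L22*g2^2
# Illustrative coefficients chosen to satisfy the stated constraints:
# L11, L22 > 0 and |L12 + L21|/2 < sqrt(L11 * L22).
L11, L12, L21, L22 = 2.0, 0.8, 0.5, 1.0

random.seed(0)
min_sdot = float("inf")
for _ in range(10_000):
    g1 = random.uniform(-10, 10)
    g2 = random.uniform(-10, 10)
    jE = L11 * g1 + L12 * g2
    jI = L21 * g1 + L22 * g2
    s_dot = g1 * jE + g2 * jI
    min_sdot = min(min_sdot, s_dot)
```

Only the symmetric part of the coefficient matrix enters the quadratic form, which is why the constraint involves the average of $L_{12}$ and $L_{21}$.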

The so-called Onsager reciprocal relations provide a further, and much more specific, constraint. They say that the coefficients of the secondary effects, $L_{12}$ and $L_{21}$, must be equal. In terms of linear algebra, the matrix $L_{..}$ must be symmetric and positive definite. In real life, it means that only three, not four, coefficients have to be determined experimentally. That is very useful because the experimental determination of secondary effects is often difficult.

The Onsager relations remain valid for much more general systems, involving flows of other quantities. Their validity can be argued based on experimental evidence, or also theoretically based on the symmetry of the microscopic dynamics with respect to time reversal. If there is a magnetic field involved, a coefficient $L_{ij}$ will only equal $L_{ji}$ after the magnetic field has been reversed: time reversal causes the electrons in your electromagnet to go around the opposite way. A similar observation holds if Coriolis forces are a factor in a rotating system.

The equations (A.36) for $j_E$ and $j_I$ above can readily be converted into expressions for the heat flux density $q$ $\vphantom0\raisebox{1.5pt}{$=$}$ $j_E-{\mu}j_I$ and the current density $j$ $\vphantom0\raisebox{1.5pt}{$=$}$ $-ej_I$. If you do so, then differentiate out the derivatives, and compare with the thermoelectric equations (A.31) given earlier, you find that the Onsager relation $L_{12}$ $\vphantom0\raisebox{1.5pt}{$=$}$ $L_{21}$ translates into the second Kelvin relation

\begin{displaymath}
{\mathscr P}={\mathscr S}T
\end{displaymath}
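A sketch of that identification, shown here for reference (treating $\mu$ as locally constant when matching coefficients): converting (A.36) with $q = j_E - \mu j_I$, $j = -ej_I$ and $\varphi_\mu = \mu/(-e)$, and comparing against (A.31), gives

```latex
\sigma = \frac{e^2 L_{22}}{T}
\qquad
{\mathscr S}= -\frac{L_{21}-\mu L_{22}}{e T L_{22}}
\qquad
{\mathscr P}= -\frac{L_{12}-\mu L_{22}}{e L_{22}}
```

so that ${\mathscr P}-{\mathscr S}T = (L_{21}-L_{12})/(eL_{22})$, which vanishes exactly when $L_{12} = L_{21}$.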

That allows the Kelvin coefficient to be cleaned up into the first Kelvin relationship:

\begin{displaymath}
{\mathscr K}\equiv \frac{{\rm d}{\mathscr P}}{{\rm d}T} - {\mathscr S}
= T\frac{{\rm d}{\mathscr S}}{{\rm d}T}
= \frac{{\rm d}{\mathscr S}}{{\rm d}\ln T}
\end{displaymath}

It should be noted that while the second Kelvin relationship is named after Kelvin, he never gave a valid proof of the relationship. Neither did many other authors that tried. It was Onsager who first succeeded in giving a more or less convincing theoretical justification. Still, the most convincing support for the reciprocal relations remains the overwhelming experimental data. See Miller (Chem. Rev. 60, 15, 1960) for examples. Therefore, the reciprocal relationships are commonly seen as an additional axiom to be added to thermodynamics to allow quasi-equilibrium systems to be treated.