### 13.2 Maxwell’s Equations

Maxwell’s equations are commonly not covered in a typical engineering program. While these laws are not directly related to quantum mechanics, they do tend to pop up in nanotechnology. This section intends to give you some of the ideas. The description is based on the divergence and curl spatial derivative operators, and the related Gauss and Stokes theorems commonly found in calculus courses (Calculus III in the US system).

Skipping the first equation for now, the second of Maxwell’s equations comes directly out of the quantum mechanical description of the previous section. Consider the expression for the magnetic field ℬ derived (guessed) there, (13.3). If you take its divergence, (premultiply by ∇ ·), you get rid of the vector potential 𝒜, since the divergence of any curl is always zero, so you get

    ∇ · ℬ = 0    (13.4)

and that is the second of Maxwell’s four beautifully concise equations. (The compact modern notation using divergence and curl is really due to Heaviside and Gibbs, though.)

The first of Maxwell’s equations is a similar expression for the electric field ℰ, but its divergence is not zero:

    ∇ · ℰ = ρ/ε₀    (13.5)

where ρ is the electric charge per unit volume that is present and the constant ε₀ = 8.85 × 10⁻¹² C²/J m is called the permittivity of space.
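To connect this constant to the form of Coulomb’s law that engineers usually memorize, note that 1/4πε₀ is the familiar Coulomb constant. A one-line arithmetic check (plain Python, with my own variable names):

```python
import math

eps0 = 8.85e-12  # permittivity of space, C^2/(J m)

# 1/(4 pi eps0) is the Coulomb constant that multiplies q1 q2 / r^2
# in the usual engineering form of Coulomb's law:
coulomb_constant = 1 / (4 * math.pi * eps0)
print(coulomb_constant)  # roughly 9.0e9 J m / C^2
```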

What does it all mean? Well, the first thing to verify is that Maxwell’s first equation is just a very clever way to write Coulomb’s law for the electric field of a point charge. Consider therefore an electric point charge of strength q, and imagine this charge surrounded by a translucent sphere of radius r, as shown in figure 13.1. By symmetry, the electric field at all points on the spherical surface is radial, and everywhere has the same magnitude E; figure 13.1 shows it for eight selected points.

Now watch what happens if you integrate both sides of Maxwell’s first equation (13.5) over the interior of this sphere. Starting with the right hand side, since the charge density ρ is the charge per unit volume, by definition its integral over the volume is the charge q. So the right hand side integrates simply to q/ε₀. How about the left hand side? Well, the Gauss, or divergence, theorem of calculus says that the divergence of any vector, ℰ in this case, integrated over the volume of the sphere equals the radial electric field E integrated over the surface of the sphere. Since E is constant on the surface, and the surface of a sphere is just 4πr², that surface integral evaluates to 4πr²E. So in total, you get for the integrated first Maxwell’s equation that 4πr²E = q/ε₀. Take the 4πr² to the other side and there you have the Coulomb electric field of a point charge:

    E = q / (4πε₀r²)    (13.6)

Multiply by the charge −e of an electron and you have the electrostatic force on an electron in that field according to the Lorentz equation (13.1). Integrate with respect to r and you have the potential energy −qe/4πε₀r that has been used earlier to analyze atoms and molecules.
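As a worked example (the hydrogen numbers here are my own illustration, not from this section): take q to be one proton charge and evaluate the potential energy of the electron at the Bohr radius.

```python
import math

eps0 = 8.85e-12   # permittivity of space, C^2 / (J m)
e = 1.602e-19     # elementary charge, C
a0 = 0.529e-10    # Bohr radius, m

# Integrating the Coulomb force q1 q2 / (4 pi eps0 r'^2) in from infinity
# gives the potential energy -q1 q2 / (4 pi eps0 r).  For the hydrogen
# electron (q1 = e, q2 = -e) at the Bohr radius:
V = -e**2 / (4 * math.pi * eps0 * a0)

V_eV = V / e      # the same energy expressed in electron volts
print(V_eV)       # about -27.2 eV, twice the 13.6 eV hydrogen binding energy
```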

Of course, all this raises the question, why bother? If Maxwell’s first equation is just a rewrite of Coulomb’s law, why not simply stick with Coulomb’s law in the first place? Well, to describe the electric field at a given point using Coulomb’s law requires you to consider every charge everywhere else. In contrast, Maxwell’s equation only involves local quantities at the given point, to wit, the derivatives of the local electric field and the local charge per unit volume. It so happens that in numerical or analytical work, most of the time it is much more convenient to deal with local quantities, even if those are derivatives, than with global ones.

Of course, you can also integrate Maxwell’s first equation over more general regions than a sphere centered around a charge. For example, figure 13.2 shows a sphere with an off-center charge. But the electric field strength is no longer constant over the surface, and the divergence theorem now requires you to integrate the component of the electric field normal to the surface over the surface. Clearly, that does not have much intuitive meaning. However, if you are willing to loosen up a bit on mathematical preciseness, there is a better way to look at it. It is in terms of the electric field lines, the lines that everywhere trace the direction of the electric field. The left figure in figure 13.2 shows the field lines through the selected points; a single charge has radial field lines.

Assume that you draw the field lines densely, more like figure 13.3 say, and moreover, that you make the number of field lines coming out of a charge proportional to the strength of that charge. In that case, the local density of field lines at a point becomes a measure of the strength of the electric field at that point, and in those terms, Maxwell’s integrated first equation says that the net number of field lines leaving a region is proportional to the net charge inside that region. That remains true when you add more charges inside the region. In that case the field lines will no longer be straight, but the net number going out will still be a measure of the net charge inside.
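Here is a numerical sketch of that statement (the one-coulomb charge, unit sphere, and grid resolution are all made-up choices of mine): the net flux integral comes out as q/ε₀ for a charge anywhere inside the sphere, centered or not, and as zero for a charge outside, where whatever goes in comes back out.

```python
import math

eps0 = 8.85e-12
q = 1.0  # one coulomb, for convenience

def flux_through_unit_sphere(x0, n=200):
    """Integrate E . n over a unit sphere centered at the origin,
    for a point charge q located at (x0, 0, 0)."""
    total = 0.0
    dth = math.pi / n
    dph = 2 * math.pi / n
    for i in range(n):
        th = (i + 0.5) * dth
        sin_th = math.sin(th)
        for j in range(n):
            ph = (j + 0.5) * dph
            # point on the unit sphere; the outward normal is that same vector
            nx = sin_th * math.cos(ph)
            ny = sin_th * math.sin(ph)
            nz = math.cos(th)
            # vector from the charge to the surface point
            dx, dy, dz = nx - x0, ny, nz
            d3 = (dx*dx + dy*dy + dz*dz) ** 1.5
            En = q / (4 * math.pi * eps0) * (dx*nx + dy*ny + dz*nz) / d3
            total += En * sin_th * dth * dph  # surface element on a unit sphere
    return total

target = q / eps0
assert abs(flux_through_unit_sphere(0.0) - target) < 0.01 * target  # centered
assert abs(flux_through_unit_sphere(0.5) - target) < 0.01 * target  # off-center
assert abs(flux_through_unit_sphere(2.0)) < 0.01 * target           # outside
```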

Now consider the question why Maxwell’s second equation says that the divergence of the magnetic field is zero. For the electric field you can shove, say, some electrons in the region to create a net negative charge, or you can shove in some ionized molecules to create a net positive charge. But the magnetic equivalents to such particles, called “magnetic monopoles,” being separate magnetic north pole particles or magnetic south pole particles, simply do not exist, {N.31}. It might appear that your bar magnet has a north pole and a south pole, but if you take it apart into little pieces, you do not end up with north pole pieces and south pole pieces. Each little piece by itself is still a little magnet, with equally strong north and south poles. The only reason the combined magnet seems to have a north pole is that all the microscopic magnets of which it consists have their north poles preferentially pointed in that direction.

If all microscopic magnets have equal strength north and south poles, then the same number of magnetic field lines that come out of the north poles go back into the south poles, as figure 13.4 illustrates. So the net magnetic field lines leaving a given region will be zero; whatever goes out comes back in. True, if you enclose the north pole of a long bar magnet by an imaginary sphere, you can get a pretty good magnetic approximation of the electrical case of figure 13.1. But even then, if you look inside the magnet where it sticks through the spherical surface, the field lines will be found to go in towards the north pole, instead of away from it. You see why Maxwell’s second equation is also called “absence of magnetic monopoles.” And why, say, electrons can have a net negative charge, but have zero magnetic pole strength; their spin and orbital angular momenta produce equally strong magnetic north and south poles, a magnetic dipole (di meaning two).

You can get Maxwell’s third equation from the electric field ℰ derived in the previous section. If you take its curl, (premultiply by ∇ ×), you get rid of the potential φ, since the curl of any gradient is always zero, and the curl of 𝒜 is the magnetic field. So the third of Maxwell’s equations is:

    ∇ × ℰ = −∂ℬ/∂t    (13.7)

The curl, ∇ ×, is also often indicated as rot.

Now what does that one mean? Well, the first thing to verify in this case is that this is just a clever rewrite of Faraday’s law of induction, governing electric power generation. Assume that you want to create a voltage to drive some load (a bulb or whatever, don’t worry what the load is, just how to get the voltage for it). Just take a piece of copper wire and bend it into a circle, as shown in figure 13.5. If you can create a voltage difference between the ends of the wire you are in business; just hook your bulb or whatever to the ends of the wire and it will light up. But to get such a voltage, you will need an electric field as shown in figure 13.5, because the voltage difference between the ends is the integral of the electric field strength along the length of the wire. Now Stokes’ theorem of calculus says that the electric field strength along the wire integrated over the length of the wire equals the integral of the curl of the electric field strength integrated over the inside of the wire, in other words over the imaginary translucent circle in figure 13.5. So to get the voltage, you need a nonzero curl of the electric field on the translucent circle. And Maxwell’s third equation above says that this means a time-varying magnetic field on the translucent circle. Moving the end of a strong magnet closer to the circle should do it, as suggested by figure 13.5. You better not make that a big bulb unless you wrap the wire around a lot more times to form a spool, but anyway. {N.32}
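A minimal numerical version of this setup (the loop size and field ramp rate are invented for the example): represent the approaching magnet as a magnetic field through the loop that grows in time; the voltage around the loop is then minus the rate of change of the enclosed flux.

```python
import math

r_loop = 0.05                 # wire circle of radius 5 cm
area = math.pi * r_loop**2

def B(t):
    return 0.2 * t            # field through the loop ramping up at 0.2 T/s

def flux(t):
    return B(t) * area        # enclosed magnetic flux

# Integrated third equation (Faraday): loop voltage = -d(flux)/dt,
# evaluated here by a central finite difference.
dt = 1e-6
voltage = -(flux(0.3 + dt) - flux(0.3 - dt)) / (2 * dt)
print(voltage)  # about -1.6 millivolt for these numbers
```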

Maxwell’s fourth and final equation is a similar expression for the curl of the magnetic field:

    c²∇ × ℬ = j/ε₀ + ∂ℰ/∂t    (13.8)

where j is the electric current density, the charge flowing per unit cross sectional area, and c is the speed of light. (It is possible to rescale ℬ by a factor c to get the speed of light to show up equally in the equations for the curl of ℰ and the curl of ℬ, but then the Lorentz force law must be adjusted too.)

The big difference from the third equation is the appearance of the current density j. So, there are two ways to create a circulatory magnetic field, as shown in figure 13.6: (1) pass a current through the enclosed circle (the current density integrates over the area of the circle into the current through the circle), and (2) create a varying electric field over the circle, much like was done for the electric field in figure 13.5.
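For mechanism (1), a back-of-the-envelope example (the one-ampere current and one-centimeter distance are arbitrary choices): integrate the steady fourth equation over a disk centered on a straight wire. Stokes’ theorem turns the left hand side into c² times the field strength times the circumference 2πr, while the right hand side integrates to I/ε₀, so B = I/2πε₀c²r.

```python
import math

eps0 = 8.85e-12   # permittivity of space, C^2 / (J m)
c = 2.998e8       # speed of light, m/s

I = 1.0           # current through the wire, ampere
r = 0.01          # distance from the wire, 1 cm

# From c^2 * B * 2 pi r = I / eps0:
B = I / (2 * math.pi * eps0 * c**2 * r)
print(B)          # about 2e-5 tesla, comparable to Earth's magnetic field
```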

The fact that a current creates a surrounding magnetic field was already known as Ampere’s law when Maxwell did his analysis. Maxwell himself however added the time derivative of the electric field to the equation to have the mathematics make sense. The problem was that the divergence of any curl must be zero, and by itself, the divergence of the current density j in the right hand side of the fourth equation is not zero. Just like the divergence of the electric field is the net field lines coming out of a region per unit volume, the divergence of the current density is the net current coming out. And it is perfectly OK for a net charge to flow out of a region: it simply reduces the charge remaining within the region by that amount. This is expressed by the continuity equation:

    ∂ρ/∂t + ∇ · j = 0    (13.9)

So Maxwell’s fourth equation without the time derivative of the electric field is mathematically impossible. But after he added it, if you take the divergence of the total right hand side then you do indeed get zero as you should. To check that, use the continuity equation above and the first equation.
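Spelled out, that check is a three-line computation: take the divergence of both sides of the fourth equation, use the fact that the divergence of any curl is zero, and eliminate ∇ · ℰ with the first equation:

```latex
0 = \nabla\cdot\left(c^2\,\nabla\times\vec{\mathcal B}\right)
  = \nabla\cdot\left(\frac{\vec\jmath}{\epsilon_0}
      + \frac{\partial\vec{\mathcal E}}{\partial t}\right)
  = \frac{1}{\epsilon_0}\left(\nabla\cdot\vec\jmath
      + \frac{\partial\rho}{\partial t}\right)
```

and the final parenthetical expression is exactly the left hand side of the continuity equation, hence zero.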

In empty space, Maxwell’s equations simplify: there are no charges, so both the charge density and the current density will be zero. In that case, the solutions of Maxwell’s equations are simply combinations of “traveling waves.” A traveling wave takes the form

    ℰ = k̂ E₀ cos(ω(t − y/c) − φ)    ℬ = î (E₀/c) cos(ω(t − y/c) − φ)    (13.10)

where for simplicity, the y-axis of the coordinate system has been aligned with the direction in which the wave travels, and the z-axis with the amplitude of the electric field of the wave. Such a wave is called “linearly polarized” in the z-direction. The constant ω is the angular frequency of the wave, equal to 2π times its frequency in cycles per second, and is related to its wave length λ by ω = 2πc/λ. The constant φ is just a phase angle. For these simple waves, the magnetic and electric field must be normal to each other, as well as to the direction of wave propagation.
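To put numbers to the ω = 2πc/λ relation (the wave length below is a made-up example): for green light with a wave length of 500 nm,

```python
import math

c = 2.998e8          # speed of light, m/s

lam = 500e-9         # wave length of green light, 500 nm
freq = c / lam       # frequency in cycles per second
omega = 2 * math.pi * freq

# omega = 2 pi c / lam, consistent with the relation in the text:
assert abs(omega - 2 * math.pi * c / lam) < 1e-6 * omega
print(freq)          # about 6e14 cycles per second
```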

You can plug the above wave solution into Maxwell’s equations and so verify that it satisfies them all. With more effort and knowledge of Fourier analysis, you can show that these are the most general possible solutions that take this traveling wave form, and that any arbitrary solution is a combination of these waves (provided all directions of wave propagation, and of the electric field relative to it, are included).
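“Plugging in” can also be done numerically as a spot check. The sketch below (frequency, amplitude, and sample point are made-up values) takes a wave traveling along y with the electric field along z and the magnetic field along x; for such fields the third equation reduces to ∂Ez/∂y = −∂Bx/∂t and the fourth to −c²∂Bx/∂y = ∂Ez/∂t, which central differences confirm at an arbitrary point.

```python
import math

c = 2.998e8
E0, omega, phi = 1.0, 2 * math.pi * 1e9, 0.3   # a 1 GHz wave, say

def Ez(y, t):
    return E0 * math.cos(omega * (t - y / c) - phi)

def Bx(y, t):
    return (E0 / c) * math.cos(omega * (t - y / c) - phi)

# Check both reduced equations at an arbitrary point by central differences:
y, t = 0.17, 3.4e-9
dy, dt = 1e-6, 1e-15

dEz_dy = (Ez(y + dy, t) - Ez(y - dy, t)) / (2 * dy)
dBx_dt = (Bx(y, t + dt) - Bx(y, t - dt)) / (2 * dt)
dBx_dy = (Bx(y + dy, t) - Bx(y - dy, t)) / (2 * dy)
dEz_dt = (Ez(y, t + dt) - Ez(y, t - dt)) / (2 * dt)

assert abs(dEz_dy + dBx_dt) < 1e-3 * omega * E0 / c        # third equation
assert abs(-c**2 * dBx_dy - dEz_dt) < 1e-3 * omega * E0    # fourth equation
```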

The point is that the waves travel with the speed c. When Maxwell wrote down his equations, c was just a constant to him, but when the propagation speed of electromagnetic waves matched the experimentally measured speed of light, it was just too much of a coincidence and he correctly concluded that light must be traveling electromagnetic waves.

It was a great victory of mathematical analysis. Long ago, the Greeks had tried to use mathematics to make guesses about the physical world, and it was an abysmal failure. You do not want to hear about it. Only when the Renaissance started measuring how nature really works were the correct laws discovered, for people like Newton and others to put into mathematical form. But here Maxwell successfully amended Ampere’s measured law, just because the mathematics did not make sense. Moreover, by deriving how fast electromagnetic waves move, he discovered the very fundamental nature of the then mystifying physical phenomenon humans call light.

For those with a knowledge of partial differential equations, separate wave equations for the electric and magnetic fields and their potentials are derived in addendum {A.37}.

An electromagnetic field obviously contains energy; that is how the sun transports heat to our planet. The electromagnetic energy E_V within an otherwise empty volume V can be found as

    E_V = (ε₀/2) ∫_V (ℰ² + c²ℬ²) d³r    (13.11)

This is typically derived by comparing the energy from discharging a condenser to the electric field that it initially holds, and from comparing the energy from discharging a coil to the magnetic field it initially holds. That is too much detail for this book.

But at least the result can be made plausible. First note that the time derivative of the energy above can be written as

    dE_V/dt = −ε₀c² ∮_A (ℰ × ℬ) · n̂ dA

Here A is the surface of volume V, and n̂ is the unit vector normal to the surface element dA. To verify this expression, bring the time derivative inside the integral in (13.11), then get rid of the time derivatives using Maxwell’s third and fourth laws, use the standard vector identity [41, 20.40], and finally the divergence theorem.

Now suppose you have a finite amount of radiation in otherwise empty space. If the amount of radiation is finite, the field should disappear at infinity. So, taking the volume V to be all of space, the integral in the right hand side above will be zero. So E_V will be constant. That indicates that E_V should be at least a multiple of the energy. After all, what other scalar quantity than energy would be constant? And the factor ε₀ is needed because of units. That misses only the factor ½ in the expression for the energy.

For an arbitrary volume V, the surface integral must then be the energy outflow through the surface of the volume. That suggests that the energy flow rate per unit area is given by the so-called “Poynting vector”

    S = ε₀c² ℰ × ℬ    (13.12)

Unfortunately, this argument is flawed. You cannot deduce local values of the energy flow from its integral over an entire closed surface. In particular, you can find different vectors that describe the energy flow equally well without inconsistency. Just add an arbitrary solenoidal vector, a vector whose divergence is zero, to the Poynting vector. For example, adding a multiple of the magnetic field would do it. However, if you look at simple light waves like (13.10), the Poynting vector seems the intuitive choice. This paragraph was included because other books have Poynting vectors and you would be very disappointed if yours did not.
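For such a simple light wave the Poynting vector is easy to evaluate (the amplitude and frequency below are made-up example values): the electric and magnetic fields are perpendicular with B = E/c, so S points along the propagation direction with magnitude ε₀cE², and averaging over a period gives the familiar mean intensity ε₀cE₀²/2.

```python
import math

eps0, c = 8.85e-12, 2.998e8
E0, omega = 100.0, 2 * math.pi * 1e9   # made-up amplitude (100 V/m) and frequency

# For the wave (13.10), E and B are perpendicular with B = E/c, so the
# Poynting vector eps0 c^2 E x B has magnitude eps0 c E(t)^2.
# Average that over one full period:
n = 10_000
T = 2 * math.pi / omega
avg_S = sum(eps0 * c * (E0 * math.cos(omega * (i + 0.5) * T / n))**2
            for i in range(n)) / n

# The time average of cos^2 is 1/2:
assert abs(avg_S - eps0 * c * E0**2 / 2) < 1e-6 * eps0 * c * E0**2
print(avg_S)  # mean intensity in W/m^2
```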

You will usually not find Maxwell’s equations in the exact form described here. To explain what is going on inside materials, you would have to account for the electric and magnetic fields of every electron and proton (and neutron!) of the material. That is just an impossible task, so physicists have developed ways to average away all those effects by messing with Maxwell’s equations. But then the messed-up ℰ in one of Maxwell’s equations is no longer the same as the messed-up ℰ in another, and the same for ℬ. So physicists rename one messed-up ℰ as, maybe, the electric flux density D, and a messed-up magnetic field as, maybe, the auxiliary field H. And they define many other symbols, and even refer to the auxiliary field as being the magnetic field, all to keep engineers out of nanotechnology. Don’t let them! When you need to understand the messed-up Maxwell’s equations, Wikipedia has a list of the countless definitions.