Below are the simplest possible descriptions of various symbols, just to help you keep reading if you do not remember or know what they stand for.

Watch it. There are so many ad hoc usages of symbols that some will have been overlooked here. Always use common sense first in guessing what a symbol means in a given context.

A dot might indicate a multiplication, and also many more prosaic things (punctuation signs, decimal points, ...).

Multiplication symbol. May indicate:

Might be used to indicate a factorial. Example: $5!=1\times2\times3\times4\times5=120$.

The function that generalizes $n!$ to noninteger values of $n$ is called the gamma function; $n!=\Gamma(n+1)$. The gamma function generalization is due to, who else, Euler. (However, the fact that $n!=\Gamma(n+1)$ instead of $n!=\Gamma(n)$ is due to the idiocy of Legendre.) In Legendre-resistant notation,

n!=\int_0^{\infty} t^n e^{-t} { \rm d}{t}

Straightforward integration shows that $0!$ is 1 as it should be, and integration by parts shows that $(n+1)!=(n+1)\,n!$, which ensures that the integral also produces the correct value of $n!$ for every integer value of $n$ above 0. The integral, however, exists for any real value of $n$ above $-1$, not just integers. The values of the integral are always positive, tending to positive infinity both for $n\downarrow-1$ (because the integral then blows up at small values of $t$) and for $n\uparrow\infty$ (because the integral then blows up at medium-large values of $t$). In particular, Stirling’s formula says that for large positive $n$, $n!$ can be approximated as

n! \sim \sqrt{2\pi n} n^n e^{-n} \left[1 + \ldots\right]

where the value indicated by the dots becomes negligibly small for large $n$. The function $n!$ can be extended further to any complex value of $n$, except the negative integer values of $n$, where $n!$ is infinite; however, it is then no longer positive. Euler’s integral can be done for $n=-\frac12$ by making the change of variables $\sqrt{t}=u$, producing the integral $\int_0^\infty2e^{-u^2}{ \rm d}{u}$, or $\int_{-\infty}^{\infty}e^{-u^2}{ \rm d}{u}$, which equals $\sqrt{\int_{-\infty}^{\infty}e^{-x^2}{ \rm d}{x}\int_{-\infty}^{\infty}e^{-y^2}{ \rm d}{y}}$, and the integral under the square root can be done analytically using polar coordinates. The result is that

\left(-\frac12\right)! = \int_{-\infty}^{\infty}e^{-u^2}{ \rm d}{u} = \sqrt{\pi}

To get $\frac12!$, multiply by $\frac12$, since $n!=n(n-1)!$.
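These factorial facts are easy to check numerically. The sketch below (my own illustration, using Python's standard-library `math.gamma`) verifies that $n!=\Gamma(n+1)$ for small integers, that $(-\frac12)!=\sqrt{\pi}$ and $\frac12!=\sqrt{\pi}/2$, and that Stirling's formula gets close for moderately large $n$:

```python
import math

# n! = Gamma(n+1): check against Python's standard gamma function
for k in range(6):
    assert math.factorial(k) == round(math.gamma(k + 1))

# (-1/2)! = Gamma(1/2) = sqrt(pi), and (1/2)! = (1/2)(-1/2)! = sqrt(pi)/2
assert abs(math.gamma(0.5) - math.sqrt(math.pi)) < 1e-12
assert abs(math.gamma(1.5) - math.sqrt(math.pi) / 2) < 1e-12

# Stirling: n! ~ sqrt(2 pi n) n^n e^(-n); the ratio approaches 1 for large n
n = 20
stirling = math.sqrt(2 * math.pi * n) * n**n * math.exp(-n)
print(math.factorial(n) / stirling)  # about 1.004
```

The Stirling ratio exceeds 1 by roughly $1/12n$, the first of the neglected terms indicated by the dots.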

May indicate:

Summation symbol. Example: if in three dimensional space a vector $\vec f$ has components $f_1=2$, $f_2=1$, $f_3=4$, then $\sum_{\mbox{\scriptsize all }i} f_i$ stands for $2+1+4=7$.

Integration symbol, the continuous version of the summation symbol. For example,

\int_{\mbox{\scriptsize all }x} f(x){ \rm d}x

is the summation of $f(x){ \rm d}x$ over all little fragments ${\rm d}x$ that make up the entire $x$-range.
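As a small illustration of my own, summing $f(x)\,{\rm d}x$ over many little fragments ${\rm d}x$ approximates the integral; here $f(x)=x^2$ over the range from 0 to 1, whose exact integral is $\frac13$:

```python
# crude sum of f(x) dx over n little fragments of the x-range 0..1
def f(x):
    return x * x

n = 10000
dx = 1.0 / n
total = sum(f(i * dx) * dx for i in range(n))  # one term per fragment dx
print(total)  # approaches the exact integral 1/3 as dx shrinks
```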

May indicate:

Vector symbol. An arrow above a letter indicates it is a vector. A vector is a quantity that requires more than one number to be characterized. Typical vectors in physics include position $\vec r$, velocity $\vec v$, linear momentum $\vec p$, acceleration $\vec a$, force $\vec F$, moment $\vec M$, etcetera.

May indicate:

The spatial differentiation operator nabla. In Cartesian coordinates:

\nabla \equiv
{\hat\imath}\frac{\partial}{\partial x} +
{\hat\jmath}\frac{\partial}{\partial y} +
{\hat k}\frac{\partial}{\partial z}

Nabla can be applied to a scalar function $f$ in which case it gives a vector of partial derivatives called the gradient of the function:

{\rm grad} f = \nabla f =
{\hat\imath}\frac{\partial f}{\partial x} +
{\hat\jmath}\frac{\partial f}{\partial y} +
{\hat k}\frac{\partial f}{\partial z}.

Nabla can be applied to a vector in a dot product multiplication, in which case it gives a scalar function called the divergence of the vector:

{\rm div} \vec v = \nabla\cdot\vec v =
\frac{\partial v_x}{\partial x} +
\frac{\partial v_y}{\partial y} +
\frac{\partial v_z}{\partial z}

or in index notation

{\rm div} \vec v = \nabla\cdot\vec v =
\sum_{i=1}^3 \frac{\partial v_i}{\partial x_i}
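A quick numerical sketch of this sum (the field and the numbers are my own example): approximate each $\partial v_i/\partial x_i$ by a central finite difference and add them up.

```python
def v(x, y, z):
    # example field: v = (x*y, y*z, z*x), whose divergence is y + z + x
    return (x * y, y * z, z * x)

def div_v(x, y, z):
    h = 1e-5                      # small step for the finite differences
    p = [x, y, z]
    total = 0.0
    for i in range(3):            # the sum over i of dv_i/dx_i
        plus = p[:]
        minus = p[:]
        plus[i] += h
        minus[i] -= h
        total += (v(*plus)[i] - v(*minus)[i]) / (2 * h)
    return total

print(div_v(1.0, 2.0, 3.0))  # the exact divergence is y + z + x = 6
```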

Nabla can also be applied to a vector in a vectorial product multiplication, in which case it gives a vector function called the curl or rot of the vector. In index notation, the $i$-th component of this vector is

\left({\rm curl} \vec v\right)_i =
\left({\rm rot} \vec v\right)_i =
\frac{\partial v_{{\overline{\overline{\imath}}}}}{\partial x_{{\overline{\imath}}}} -
\frac{\partial v_{{\overline{\imath}}}}{\partial x_{{\overline{\overline{\imath}}}}}

where ${\overline{\imath}}$ is the index following $i$ in the sequence 123123..., and ${\overline{\overline{\imath}}}$ the one preceding it (or the second following it).

The operator $\nabla^2$ is called the Laplacian. In Cartesian coordinates:

\nabla^2 \equiv
\frac{\partial^2}{\partial x^2}+
\frac{\partial^2}{\partial y^2}+
\frac{\partial^2}{\partial z^2}

In non-Cartesian coordinates, don’t guess; look these operators up in a table book.

A superscript star normally indicates a complex conjugate. In the complex conjugate of a number, every ${\rm i}$ is changed into a $-{\rm i}$.

Less than.

Greater than.

Emphatic equals sign. Typically means “by definition equal” or “everywhere equal.”

Indicates approximately equal. Normally the approximation applies when something is small or large. Read it as “is approximately equal to.”

Proportional to. The two sides are equal except for some unknown constant factor.

(Gamma) May indicate:

(capital delta) May indicate:

(delta) May indicate:

(partial) Indicates a vanishingly small change or interval of the following variable. For example, $\partial f/\partial x$ is the ratio of a vanishingly small change in function $f$ divided by the vanishingly small change in variable $x$ that causes this change in $f$. Such ratios define derivatives, in this case the partial derivative of $f$ with respect to $x$.

(variant of epsilon) May indicate:

(eta) May be used to indicate a $y$-position.

(capital theta) Used in this book to indicate some function of $\theta$ to be determined.

(theta) May indicate:

(variant of theta) An alternate symbol for $\theta$.

(lambda) May indicate:

(xi) May indicate:

(pi) May indicate:

(rho) May indicate:

(tau) May indicate:

(capital phi) May indicate:

(phi) May indicate:

(variant of phi) May indicate:

(omega) May indicate:

May indicate:

May indicate:

May indicate:

The adjoint $A^H$ or $A^\dagger$ of a matrix is the complex-conjugate transpose of the matrix.

Alternatively, it is the matrix you get if you take it to the other side of an inner product. (While keeping the value of the inner product the same regardless of whatever two vectors or functions may be involved.)

“Hermitian” matrices are “self-adjoint;” they are equal to their adjoint. “Skew-Hermitian” matrices are the negative of their adjoint.

“Unitary” matrices are the inverse of their adjoint. Unitary matrices generalize rotations and reflections of vectors. Unitary operators preserve inner products.

Fourier transforms are unitary operators on account of the Parseval equality that says that inner products are preserved.

According to trigonometry, if the length of a segment of a circle is divided by its radius, it gives the total angular extent of the circle segment. More precisely, it gives the angle, in radians, between the line from the center to the start of the circle segment and the line from the center to the end of the segment. The generalization to three dimensions is called the “solid angle;” the total solid angle over which a segment of a spherical surface extends, measured from the center of the sphere, is the area of that segment divided by the square of the radius of the sphere.

May indicate:

May indicate:

A basis is a minimal set of vectors or functions that you can write all other vectors or functions in terms of. For example, the unit vectors ${\hat\imath}$, ${\hat\jmath}$, and ${\hat k}$ are a basis for normal three-dimensional space. Every three-dimensional vector can be written as a linear combination of the three.

May indicate:

Cauchy-Schwarz inequality
The Cauchy-Schwarz inequality describes a limitation on the magnitude of inner products. In particular, it says that for any vectors $\vec v$ and $\vec w$

\vert\vec v^H \vec w\vert \le \vert\vec v\vert \vert\vec w\vert

For example, if $\vec v$ and $\vec w$ are real vectors, the inner product is the dot product and we have

\vec v\cdot \vec w = \vert\vec v\vert \vert\vec w\vert\cos\theta

where $\vert\vec v\vert$ is the length of vector $\vec v$ and $\vert\vec w\vert$ that of $\vec w$, and $\theta$ is the angle between the two vectors. Since a cosine is less than one in magnitude, the Cauchy-Schwarz inequality is therefore true for real vectors.
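A quick numerical sanity check of the inequality, on randomly chosen real three-dimensional vectors (an illustration of mine, not from the text):

```python
import math
import random

# check |v . w| <= |v| |w| for many randomly chosen real vectors
random.seed(1)
for _ in range(1000):
    v = [random.uniform(-1, 1) for _ in range(3)]
    w = [random.uniform(-1, 1) for _ in range(3)]
    dot = sum(a * b for a, b in zip(v, w))
    bound = math.sqrt(sum(a * a for a in v)) * math.sqrt(sum(b * b for b in w))
    assert abs(dot) <= bound + 1e-12   # Cauchy-Schwarz, with roundoff slack
print("Cauchy-Schwarz held in every case")
```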

The cosine function, a periodic function oscillating between 1 and -1 as shown in [2, pp. 40-...].

The curl of a vector field $\vec{v}$ is defined as ${\rm {curl}}\;\vec{v}={\rm {rot}}\;\vec{v}=\nabla\times\vec{v}$.

${\rm d}$
Indicates a vanishingly small change or interval of the following variable. For example, ${\rm d}x$ can be thought of as a small segment of the $x$-axis.

A derivative of a function is the ratio of a vanishingly small change in a function divided by the vanishingly small change in the independent variable that causes the change in the function. The derivative of $f(x)$ with respect to $x$ is written as ${\rm d}f/{\rm d}x$, or also simply as $f'$. Note that the derivative of function $f(x)$ is again a function of $x$: a ratio $f'$ can be found at every point $x$. The derivative of a function $f(x,y,z)$ with respect to $x$ is written as $\partial{f}/\partial{x}$ to indicate that there are other variables, $y$ and $z$, that do not vary.

The determinant of a square matrix $A$ is a single number indicated by $\vert A\vert$. If this number is nonzero, $A\vec v$ can be any vector $\vec w$ for the right choice of $\vec v$. Conversely, if the determinant is zero, $A\vec v$ can only produce a very limited set of vectors. But if it can produce a vector $\vec w$, it can do so for multiple vectors $\vec v$.

There is a recursive algorithm that allows you to compute the determinant of an increasingly bigger matrix in terms of determinants of smaller matrices. For a $1\times 1$ matrix consisting of a single number, the determinant is simply that number:

\left\vert a_{11} \right\vert = a_{11}

(This determinant should not be confused with the absolute value of the number, which is written the same way. Since we normally do not deal with $1\times 1$ matrices, there is normally no confusion.) For $2\times 2$ matrices, the determinant can be written in terms of $1\times 1$ determinants:

\left\vert
\begin{array}{cc}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{array}
\right\vert
= a_{11} \left\vert a_{22} \right\vert - a_{12} \left\vert a_{21} \right\vert

so the determinant is $a_{11}a_{22}-a_{12}a_{21}$ in short. For $3\times 3$ matrices, we have

\left\vert
\begin{array}{ccc}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{array}
\right\vert
= a_{11}
\left\vert
\begin{array}{cc}
a_{22} & a_{23} \\
a_{32} & a_{33}
\end{array}
\right\vert
- a_{12}
\left\vert
\begin{array}{cc}
a_{21} & a_{23} \\
a_{31} & a_{33}
\end{array}
\right\vert
+ a_{13}
\left\vert
\begin{array}{cc}
a_{21} & a_{22} \\
a_{31} & a_{32}
\end{array}
\right\vert

and we already know how to work out those $2\times 2$ determinants, so we now know how to do $3\times 3$ determinants. Written out fully:

a_{11}a_{22}a_{33} - a_{11}a_{23}a_{32}
- a_{12}a_{21}a_{33} + a_{12}a_{23}a_{31}
+ a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31}
For $4\times 4$ determinants,

\left\vert
\begin{array}{cccc}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34} \\
a_{41} & a_{42} & a_{43} & a_{44}
\end{array}
\right\vert
= a_{11}
\left\vert
\begin{array}{ccc}
a_{22} & a_{23} & a_{24} \\
a_{32} & a_{33} & a_{34} \\
a_{42} & a_{43} & a_{44}
\end{array}
\right\vert
- a_{12}
\left\vert
\begin{array}{ccc}
a_{21} & a_{23} & a_{24} \\
a_{31} & a_{33} & a_{34} \\
a_{41} & a_{43} & a_{44}
\end{array}
\right\vert
+ a_{13}
\left\vert
\begin{array}{ccc}
a_{21} & a_{22} & a_{24} \\
a_{31} & a_{32} & a_{34} \\
a_{41} & a_{42} & a_{44}
\end{array}
\right\vert
- a_{14}
\left\vert
\begin{array}{ccc}
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33} \\
a_{41} & a_{42} & a_{43}
\end{array}
\right\vert

Etcetera. Note the alternating sign pattern of the terms.

As you might infer from the above, computing a good size determinant takes a large amount of work. Fortunately, it is possible to simplify the matrix to put zeros in suitable locations, and that can cut down the work of finding the determinant greatly. We are allowed to use the following manipulations without seriously affecting the computed determinant:

  1. We may “transpose” the matrix, i.e. change its columns into its rows.
  2. We can create zeros in a row by subtracting a suitable multiple of another row.
  3. We may also swap rows, as long as we remember that each time we swap two rows, it flips the sign of the computed determinant.
  4. We can also multiply an entire row by a constant, but that will multiply the computed determinant by the same constant.
Applying these tricks in a systematic way, called “Gaussian elimination” or “reduction to lower triangular form,” we can eliminate all matrix coefficients $a_{ij}$ for which $j$ is greater than $i$, and that makes evaluating the determinant pretty much trivial.
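The procedure can be sketched in a few lines of Python (a minimal illustration of my own; it happens to create the zeros below the main diagonal instead, which gives the same determinant):

```python
# Gaussian elimination for determinants: reduce to triangular form,
# tracking the sign flips from row swaps; the determinant is then the
# product of the main diagonal.
def det(a):
    a = [row[:] for row in a]      # work on a copy
    n = len(a)
    sign = 1.0
    for i in range(n):
        # swap up a row with the largest pivot (each swap flips the sign)
        p = max(range(i, n), key=lambda r: abs(a[r][i]))
        if a[p][i] == 0.0:
            return 0.0             # no nonzero pivot: determinant is zero
        if p != i:
            a[i], a[p] = a[p], a[i]
            sign = -sign
        # subtract multiples of row i to create zeros below the diagonal
        for r in range(i + 1, n):
            m = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= m * a[i][c]
    result = sign
    for i in range(n):
        result *= a[i][i]          # product of the main diagonal
    return result

print(det([[1.0, 2.0], [3.0, 4.0]]))     # a11 a22 - a12 a21 = -2
print(det([[2.0, 0.0, 1.0],
           [1.0, 3.0, 2.0],
           [0.0, 1.0, 1.0]]))            # works out to 3
```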

The divergence of a vector field $\vec{v}$ is defined as ${\rm {div}}\;\vec{v}=\nabla\cdot\vec{v}$.

May indicate:

$e^{{\rm i}a x}$
Assuming that $a$ is an ordinary real number, and $x$ a real variable, $e^{{\rm i}a x}$ is a complex function of magnitude one. The derivative of $e^{{\rm i}a x}$ with respect to $x$ is ${\rm i}ae^{{\rm i}a x}$.

A vector $\vec v$ is an eigenvector of a matrix $A$ if $\vec v$ is nonzero and $A\vec v=\lambda\vec v$ for some number $\lambda$ called the corresponding eigenvalue.
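For example (the matrix here is my own choosing), $\vec v=(1,1)$ is an eigenvector of the matrix below with eigenvalue $\lambda=3$:

```python
# check A v = lambda v for an example symmetric matrix
A = [[2.0, 1.0],
     [1.0, 2.0]]
v = [1.0, 1.0]
Av = [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]
lam = 3.0
assert Av == [lam * vi for vi in v]   # A v = lambda v, so v is an eigenvector
print(Av)  # [3.0, 3.0]
```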

exponential function
A function of the form $e^{\ldots}$, also written as $\exp(\ldots)$. See function and $e$.

May indicate:

May indicate:

A mathematical object that associates values with other values. A function $f(x)$ associates every value of $x$ with a value $f$. For example, the function $f(x)=x^2$ associates $x=0$ with $f=0$, $x=\frac12$ with $f=\frac14$, $x=1$ with $f=1$, $x=2$ with $f=4$, $x=3$ with $f=9$, and more generally, any arbitrary value of $x$ with the square of that value $x^2$. Similarly, function $f(x)=x^3$ associates any arbitrary $x$ with its cube $x^3$, $f(x)=\sin(x)$ associates any arbitrary $x$ with the sine of that value, etcetera.

One way of thinking of a function is as a procedure that allows you, whenever given a number, to compute another number.

A functional associates entire functions with single numbers. For example, the expectation energy is mathematically a functional: it associates any arbitrary wave function with a number: the value of the expectation energy if physics is described by that wave function.

May indicate:

The gradient of a scalar $f$ is defined as ${\rm {grad}}\;f=\nabla{f}$.

The imaginary part of a complex number. If $c=c_r+{{\rm i}}c_i$ with $c_r$ and $c_i$ real numbers, then $\Im(c)=c_i$. Note that $c-c^*=2{\rm i}\Im(c)$.

May indicate: Not to be confused with ${\rm i}$.

${\rm i}$
The standard square root of minus one: ${\rm i}=\sqrt{-1}$, ${\rm i}^2 = -1$, $1/{\rm i}=-{\rm i}$, ${\rm i}^*=-{\rm i}$.
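These rules can be checked with Python's built-in complex numbers, which write the square root of minus one as `1j`:

```python
# the basic identities of the imaginary unit i
i = 1j
assert i * i == -1            # i^2 = -1
assert 1 / i == -i            # 1/i = -i
assert i.conjugate() == -i    # i* = -i
print("all identities hold")
```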

index notation
A more concise and powerful way of writing vector and matrix components by using a numerical index to indicate the components. For Cartesian coordinates, we might number the coordinates $x$ as 1, $y$ as 2, and $z$ as 3. In that case, a sum like $v_x+v_y+v_z$ can be more concisely written as $\sum_i v_i$. And a statement like $v_x\ne0,v_y\ne0,v_z\ne0$ can be more compactly written as $v_i\ne0$. To really see how it simplifies the notations, have a look at the matrix entry. (And that one shows only 2 by 2 matrices. Just imagine 100 by 100 matrices.)

Emphatic “if.” Should be read as “if and only if.”

Integer numbers are the whole numbers: $\ldots,-2,-1,0,1,2,3,4,\ldots$.

(Of matrices.) If a matrix $A$ converts a vector $\vec v$ into a vector $\vec w$, then the inverse of the matrix, $A^{-1}$, converts $\vec w$ back into $\vec v$.

In other words, $A^{-1} A = A A^{-1} = I$, with $I$ the unit, or identity, matrix.

The inverse of a matrix only exists if the matrix is square and has nonzero determinant.

May indicate:

May indicate:

May indicate:

May indicate:

Indicates the final result of an approaching process. $\lim_{\varepsilon\to 0}$ indicates for practical purposes the value of the following expression when $\varepsilon$ is extremely small.

linear combination
A very generic concept indicating sums of objects times coefficients. For example, a position vector ${\skew0\vec r}$ is the linear combination $x{\hat\imath}+y{\hat\jmath}+z{\hat k}$ with the objects the unit vectors ${\hat\imath}$, ${\hat\jmath}$, and ${\hat k}$ and the coefficients the position coordinates $x$, $y$, and $z$.

A table of numbers.

As a simple example, a two-dimensional matrix $A$ is a table of four numbers called $a_{11}$, $a_{12}$, $a_{21}$, and $a_{22}$:

A =
\left(
\begin{array}{cc}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{array}
\right)

unlike a two-dimensional (ket) vector $\vec v$, which would consist of only two numbers $v_1$ and $v_2$ arranged in a column:

\vec v =
\left(
\begin{array}{c}
v_1 \\
v_2
\end{array}
\right)

(Such a vector can be seen as a “rectangular matrix” of size $2\times1$, but let’s not get into that.)

In index notation, a matrix $A$ is a set of numbers $\{a_{ij}\}$ indexed by two indices. The first index $i$ is the row number, the second index $j$ is the column number. A matrix turns a vector $\vec v$ into another vector $\vec w$ according to the recipe

w_i = \sum_{\mbox{{\scriptsize all }}j} a_{ij} v_j \quad \mbox{for all $i$}

where $v_j$ stands for “the $j$-th component of vector $\vec v$,” and $w_i$ for “the $i$-th component of vector $\vec w$.”

As an example, the product of $A$ and $\vec v$ above is by definition

\left(
\begin{array}{cc}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{array}
\right)
\left(
\begin{array}{c}
v_1 \\
v_2
\end{array}
\right)
=
\left(
\begin{array}{c}
a_{11} v_1 + a_{12} v_2 \\
a_{21} v_1 + a_{22} v_2
\end{array}
\right)

which is another two-dimensional ket vector.

Note that in matrix multiplications like the example above, in geometric terms we take dot products between the rows of the first factor and the column of the second factor.

To multiply two matrices together, just think of the columns of the second matrix as separate vectors. For example:

\left(
\begin{array}{cc}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{array}
\right)
\left(
\begin{array}{cc}
b_{11} & b_{12} \\
b_{21} & b_{22}
\end{array}
\right)
=
\left(
\begin{array}{cc}
a_{11} b_{11} + a_{12} b_{21} & a_{11} b_{12} + a_{12} b_{22} \\
a_{21} b_{11} + a_{22} b_{21} & a_{21} b_{12} + a_{22} b_{22}
\end{array}
\right)

which is another two-dimensional matrix. In index notation, the $ij$ component of the product matrix has value $\sum_k a_{ik}b_{kj}$.
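Both recipes, $w_i=\sum_j a_{ij}v_j$ for matrix times vector and $\sum_k a_{ik}b_{kj}$ for matrix times matrix, translate directly into code (the matrices below are example values of my own):

```python
# index-notation recipes: w_i = sum_j a_ij v_j and (AB)_ij = sum_k a_ik b_kj
A = [[1.0, 2.0],
     [3.0, 4.0]]
B = [[5.0, 6.0],
     [7.0, 8.0]]
v = [1.0, 1.0]

w = [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]
AB = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]

print(w)    # [3.0, 7.0]
print(AB)   # [[19.0, 22.0], [43.0, 50.0]]
```

Note how each entry of `w` and `AB` is a dot product of a row of the first factor with a column of the second factor, as the text says.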

The zero matrix is like the number zero; it does not change a matrix it is added to and turns whatever it is multiplied with into zero. A zero matrix is zero everywhere. In two dimensions:

\left(
\begin{array}{cc}
0 & 0 \\
0 & 0
\end{array}
\right)

A unit matrix is the equivalent of the number one for matrices; it does not change the quantity it is multiplied with. A unit matrix is one on its “main diagonal” and zero elsewhere. The 2 by 2 unit matrix is:

\left(
\begin{array}{cc}
1 & 0 \\
0 & 1
\end{array}
\right)

More generally the coefficients, $\{\delta_{ij}\}$, of a unit matrix are one if $i=j$ and zero otherwise.

The transpose of a matrix $A$, $A^T$, is what you get if you switch the two indices. Graphically, it turns its rows into its columns and vice versa. The Hermitian “adjoint” $A^H$ is what you get if you switch the two indices and then take the complex conjugate of every element. If you want to take a matrix to the other side of an inner product, you will need to change it to its Hermitian adjoint. “Hermitian matrices” are equal to their Hermitian adjoint, so this does nothing for them.

See also “determinant” and “eigenvector.”

May indicate:

May indicate:

May indicate: and maybe some other stuff.

Natural numbers are the numbers: $1,2,3,4,\ldots$.

A normal operator or matrix is one that has orthonormal eigenfunctions or eigenvectors. Since eigenvectors are not orthonormal in general, a normal operator or matrix is abnormal! Normal matrices are matrices that commute with their adjoint.

The opposite of a number $a$ is $-a$. In other words, it is the additive inverse.

perpendicular bisector
For two given points $P$ and $Q$, the perpendicular bisector consists of all points $R$ that are equally far from $P$ as from $Q$. In two dimensions, the perpendicular bisector is the line that passes through the point exactly halfway between $P$ and $Q$, and that is orthogonal to the line connecting $P$ and $Q$. In three dimensions, the perpendicular bisector is the plane that passes through the point exactly halfway between $P$ and $Q$, and that is orthogonal to the line connecting $P$ and $Q$. In vector notation, the perpendicular bisector of points $P$ and $Q$ is all points $R$ whose radius vector ${\skew0\vec r}$ satisfies the equation:

({\skew0\vec r}-{\skew0\vec r}_P)\cdot({\skew0\vec r}_Q-{\skew0\vec r}_P)
= {\textstyle\frac{1}{2}}
({\skew0\vec r}_Q-{\skew0\vec r}_P)\cdot({\skew0\vec r}_Q-{\skew0\vec r}_P)

(Note that the halfway point ${\skew0\vec r}-{\skew0\vec r}_P={\textstyle\frac{1}{2}}({\skew0\vec r}_Q-{\skew0\vec r}_P)$ is included in this formula, as is the halfway point plus any vector that is normal to $({\skew0\vec r}_Q-{\skew0\vec r}_P)$.)

phase angle
Any complex number can be written in “polar form” as $c=\vert c\vert e^{{\rm i}\alpha}$ where both the magnitude $\vert c\vert$ and the phase angle $\alpha$ are real numbers. Note that when the phase angle varies from zero to $2\pi$, the complex number $c$ varies from positive real to positive imaginary to negative real to negative imaginary and back to positive real. When the complex number is plotted in the complex plane, the phase angle is the direction of the number relative to the origin. The phase angle $\alpha$ is often called the argument, but so is about everything else in mathematics, so that is not very helpful.

In complex time-dependent waves of the form $e^{{\rm i}({\omega}t-\phi)}$, and its real equivalent $\cos({\omega}t-\phi)$, the phase angle $\phi$ gives the angular argument of the wave at time zero.
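Python's standard `cmath` module computes the polar form directly; `cmath.polar` returns the magnitude and the phase angle (the example number is my own):

```python
import cmath
import math

# polar form c = |c| e^(i alpha) of an example complex number
c = -1 + 1j
mag, alpha = cmath.polar(c)                    # magnitude and phase angle
assert abs(mag - math.sqrt(2)) < 1e-12
assert abs(alpha - 3 * math.pi / 4) < 1e-12    # direction relative to the origin

# the polar form rebuilds the original number
assert abs(mag * cmath.exp(1j * alpha) - c) < 1e-12
print(mag, alpha)
```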

May indicate:

May indicate:

The real part of a complex number. If $c=c_r+{{\rm i}}c_i$ with $c_r$ and $c_i$ real numbers, then $\Re(c)=c_r$. Note that $c+c^*=2\Re(c)$.

May indicate:

$\vec r$
The position vector. In Cartesian coordinates $(x,y,z)$ or $x{\hat\imath}+y{\hat\jmath}+z{\hat k}$. In spherical coordinates $r\hat\imath_r$. Its three Cartesian components may be indicated by $r_1,r_2,r_3$ or by $x,y,z$ or by $x_1,x_2,x_3$.

The reciprocal of a number $a$ is $1/a$. In other words, it is the multiplicative inverse.

The rot of a vector $\vec{v}$ is defined as ${\rm {curl}}\;\vec{v}={\rm {rot}}\;\vec{v}=\nabla\times\vec{v}$.

A quantity characterized by a single number.

The sine function, a periodic function oscillating between 1 and -1 as shown in [2, pp. 40-]. Good to remember: $\cos^2 \alpha + \sin^2 \alpha=1$.

Stokes' Theorem
This theorem, first derived by Kelvin and first published by someone else I cannot recall, says that for any reasonably smoothly varying vector $\vec v$,

\int_A \left(\nabla \times \vec v\right) { \rm d}A
\oint \vec v \cdot {\rm d}\vec r

where the first integral is over any smooth surface area $A$ and the second integral is over the edge of that surface. How did Stokes get his name on it? He tortured his students with it, that’s how!

Symmetries are operations under which an object does not change. For example, a human face is almost, but not completely, mirror symmetric: it looks almost the same in a mirror as when seen directly. The electrical field of a single point charge is spherically symmetric; it looks the same from whatever angle you look at it, just like a sphere does. A simple smooth glass (like a glass of water) is cylindrically symmetric; it looks the same whatever way you rotate it around its vertical axis.

May indicate:

triple product
A product of three vectors. There are two different versions:

May indicate:

May indicate:

May indicate:

$\vec v$
May indicate:

A quantity characterized by a list of numbers. A vector $\vec v$ in index notation is a set of numbers $\{v_i\}$ indexed by an index $i$. In normal three-dimensional Cartesian space, $i$ takes the values 1, 2, and 3, making the vector a list of three numbers, $v_1$, $v_2$, and $v_3$. These numbers are called the three components of $\vec v$.

vectorial product
A vectorial product, or cross product, is a product of vectors that produces another vector. If

\vec c=\vec a\times\vec b,

it means in index notation that the $i$-th component of vector $\vec c$ is

c_i = a_{{\overline{\imath}}} b_{{\overline{\overline{\imath}}}}
- a_{{\overline{\overline{\imath}}}} b_{{\overline{\imath}}}

where ${\overline{\imath}}$ is the index following $i$ in the sequence 123123..., and ${\overline{\overline{\imath}}}$ the one preceding it. For example, $c_1$ will equal $a_2b_3-a_3b_2$.
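The cyclic-index recipe translates directly into code (a sketch of my own, using 0-based indices 0, 1, 2 instead of 1, 2, 3):

```python
# c_i = a_ibar b_ibarbar - a_ibarbar b_ibar, with ibar the index following
# i in the cyclic sequence and ibarbar the one preceding it
def cross(a, b):
    c = [0.0, 0.0, 0.0]
    for i in range(3):
        ibar = (i + 1) % 3       # the index following i
        ibarbar = (i + 2) % 3    # the index preceding i
        c[i] = a[ibar] * b[ibarbar] - a[ibarbar] * b[ibar]
    return c

print(cross([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # [0.0, 0.0, 1.0]
```

The example verifies the familiar rule $\hat\imath\times\hat\jmath=\hat k$.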

May indicate:

$\vec w$
Generic vector.

Used in this book to indicate a function of $x$ to be determined.

May indicate:

Used in this book to indicate a function of $y$ to be determined.

May indicate:

Used in this book to indicate a function of $z$ to be determined.

May indicate: