Here, we discuss a beautifully straightforward way of calculating the effects of an incident magnetic field on a substance composed of atoms with net spin $s=\frac{1}{2}$ within the framework of statistical mechanics. As a side note on notation, most derivations denote the magnetic field by $\mathbf{H}$; however, I am not fond of this misuse of notation, for the "true" magnetic field is $\mathbf{B}$, with $\mathbf{H}$ merely a measure of the magnetic field caused by free currents, called the auxiliary field (which does not even have the same units as $\mathbf{B}$). Now, consider a system composed of weakly interacting spin-1/2 atoms (i.e., the spin-spin interactions between atoms can be neglected) at some temperature $T$. Then the probability of a single atom occupying a particular energy state is described by the Boltzmann distribution:
$$p(E_r,T)=\frac{\exp(-\beta E_r)}{\sum_r \exp(-\beta E_r)}$$
which we can obtain from a Taylor expansion of the natural log of the number of accessible states of the entire system of atoms, using the fundamental relation $\frac{\partial S}{\partial E}=\frac{1}{T}$ together with $\beta=\frac{1}{kT}$, where $k$ is the Boltzmann constant, $T$ is the temperature, and $S$ is the entropy. $E_r$ denotes the energy of the atom in a state $r$, which is either spin-up or spin-down. Recall from classical electrodynamics that the energy (or Hamiltonian, in our notation) of a dipole interacting with an external field is:
$$H=-\mu B$$
where $\mu$ is the magnetic moment. Generally, this is an inner product, $H=-\boldsymbol{\mu}\cdot\mathbf{B}$, but we are assuming that $\mathbf{B}$ is uniform in magnitude and direction, and for a spin-1/2 atom the moment is either aligned or anti-aligned with $\mathbf{B}$, so we may treat the product as a scalar. Since the atom occupies either a spin-up state or a spin-down state, the two possible energies are either positive or negative, corresponding to the magnetic moment being anti-aligned or aligned with the external field, respectively. Hence, the probability of being spin-up is:
$$p_{+}(E_r,T)=\frac{\exp(\beta \mu B)}{\exp(-\beta \mu B)+\exp(\beta \mu B)}$$
while spin-down is:
$$p_{-}(E_r,T)=\frac{\exp(-\beta \mu B)}{\exp(-\beta \mu B)+\exp(\beta \mu B)}$$
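As a quick numerical sanity check on these two expressions, here is a minimal Python sketch (not part of the original derivation; the values of $\mu B$ and $kT$ are arbitrary illustrative choices in shared energy units):

```python
import math

def spin_probs(mu_B, kT):
    """Boltzmann occupation probabilities for a spin-1/2 moment in a field.

    mu_B is the magnetic energy mu*B and kT the thermal energy, both in
    the same arbitrary units, so beta*mu*B = mu_B / kT.
    """
    x = mu_B / kT                      # beta * mu * B
    z = math.exp(-x) + math.exp(x)     # two-state partition function
    return math.exp(x) / z, math.exp(-x) / z

# Cold: alignment with the field is strongly favored.
p_up, p_down = spin_probs(mu_B=1.0, kT=0.1)
# Hot: the two orientations become nearly equally likely.
q_up, q_down = spin_probs(mu_B=1.0, kT=100.0)
```

The two probabilities always sum to one; only their ratio changes with temperature.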
In the regime of relatively low temperatures:
$$\bigg(\frac{\exp(\beta \mu B)}{\exp(-\beta \mu B)+\exp(\beta \mu B)}\bigg )>\bigg( \frac{\exp(-\beta \mu B)}{\exp(-\beta \mu B)+\exp(\beta \mu B)}\bigg)$$
and thus:
$$p_{+} > p_{-}$$
so the probability of the magnetic moment being aligned with the magnetic field is greater than that of being anti-aligned. However, if the temperature is high, then $\beta \to 0$, so
$$p_{+} \approx p_{-}$$
This behavior of the probabilities in the high-temperature regime can be attributed to the random thermal motion of the atoms: the higher the temperature, the higher the average kinetic energy of each atom, and hence the greater the resistance of the atoms to being aligned along the direction of spin dictated by the incident uniform magnetic field. Therefore, the preference of the magnetic moment to align with the magnetic field is fundamentally an effect of statistical mechanics! We can, however, do even better than simply calculating the probabilities of a particular atom occupying some particular state - we can go as far as calculating the magnetization of the material from the mean magnetic moment. From the definition of the mean:
$$\bar{\mu} = \sum_r p_r \mu_r=\mu \frac{\exp(\beta \mu B)-\exp(-\beta \mu B)}{\exp(\beta \mu B)+\exp(-\beta \mu B)}=\mu \tanh\bigg(\frac{\mu B}{kT}\bigg)$$
From the definition of the magnetization (dipole moment per unit volume), we can let $N$ be the number of atoms per unit volume. Hence:
$$M=N\mu \tanh\bigg(\frac{\mu B}{kT}\bigg)$$
So in the regime of high temperature, where the argument of $\tanh$ is small and $\tanh x \approx x$:
$$M\approx \frac{N\mu^2}{kT}B=\chi B$$
where $\chi$ is the magnetic susceptibility. However, by the shape of the hyperbolic tangent, in the regime of low temperature and/or strong magnetic field, $M$ saturates at $M=N \mu$. Very beautiful!
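Both regimes are easy to probe numerically. Below is a minimal Python sketch (again, not part of the derivation; the moment is taken to be roughly one Bohr magneton and the number density is an arbitrary illustrative value) that evaluates $M=N\mu\tanh(\mu B/kT)$ and checks both the Curie-law limit and saturation:

```python
import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
MU = 9.274e-24            # magnetic moment, J/T (~ one Bohr magneton)
N = 1e28                  # moments per cubic metre (illustrative)

def magnetization(B, T):
    """M = N * mu * tanh(mu * B / (k T)) for a spin-1/2 paramagnet."""
    return N * MU * math.tanh(MU * B / (K_B * T))

# High temperature, weak field: M is linear in B (Curie-law behavior).
m_weak = magnetization(B=0.01, T=300.0)
curie = N * MU**2 * 0.01 / (K_B * 300.0)   # N mu^2 B / (k T)
# Low temperature, strong field: M saturates at N * mu.
m_sat = magnetization(B=10.0, T=0.1)
```

At room temperature and 0.01 T the tanh is indistinguishable from its linear approximation, while at 0.1 K and 10 T the magnetization is essentially the saturation value $N\mu$.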

References:

Reif, Frederick. Fundamentals of Statistical and Thermal Physics. Calcutta: Levant, 2010. Print.

# Curtis Peterson

## Friday, April 28, 2017

### Proof: The Set of all Positive Integers Is the Same Size as the Set of all Integers

In the following discussion, I will provide a quick, painless, and cute proof that the set of all positive integers has the same number of elements (a property called cardinality) as the set of all integers. My goal is to make this as readable as possible for a general audience. Before we delve into our discussion, we must build up some preliminary knowledge. First, let us define a function:

Function: A relation between any two sets such that for every element in the domain, there exists a unique element in the codomain.

The domain is the 'set of all inputs', whereas the codomain is the 'set of all possible outputs' (the set of values actually attained is called the range). What our definition means, colloquially, is that a function takes any element in a set, say the set of all real numbers, and maps it onto only one element in another set - say the set of all real numbers, donkeys, neoconservatives, whatever we define our function to be a mapping between. We then ask ourselves: how can we compare the sizes of any two sets? As it turns out, two sets have the same size if there exists a function between them which is both one-to-one (injective) and onto (surjective) - such a function is called a bijection. Let us then define one-to-one and onto:

One-to-one: For any two elements in the domain, if a function maps these elements to the same element in the codomain, then the two elements in the domain are equal to each other.

Onto: For every element in the codomain, b, there exists an element, a, in the domain such that f(a)=b.

The reason why we seek a bijective function is that bijective functions have inverses which are also functions, i.e. every element in the codomain of some bijective function maps onto some element in the domain by the inverse of the bijective function. This is only possible if the codomain is the same size as the domain and vice-versa. Therefore, if we want to prove that two sets have the same size, we must find a bijective function that maps elements from one set to the other. If a function has an inverse that is also a function, we are guaranteed that the function is bijective. We can use this to prove all sorts of seemingly ridiculous statements, such as our original goal - so let's do it:

Theorem: The set of all positive integers has the same cardinality as the set of all integers.

Proof. Let us define a function whose domain is the set of all positive integers, such that $F(n)=\frac{n}{2}$ for all positive even integers and $F(n)=\frac{1-n}{2}$ for all positive odd integers. If we plug the positive even integers into the first piece, we generate the set of all positive integers, and if we plug the positive odd integers into the second piece, we generate the set of all integers less than or equal to zero. The inverse maps each positive integer $N$ back to the even number $F^{-1}(N)=2N$, and each integer $N \leq 0$ back to the odd number $F^{-1}(N)=1-2N$. Since we can define an inverse that is itself a function, $F$ is bijective. Since there exists a bijective function that maps elements between the set of all positive integers and the set of all integers, these two sets have the same cardinality.
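For the skeptical reader, the bijection and its inverse are easy to check by machine. Here is a minimal Python sketch (the names `F` and `F_inv` are my own labels, and the test windows are arbitrary finite samples):

```python
def F(n):
    """Map the positive integers onto all integers: even n goes to n/2,
    odd n goes to (1 - n)/2, covering zero and the negatives."""
    return n // 2 if n % 2 == 0 else (1 - n) // 2

def F_inv(N):
    """Inverse map: positive outputs came from evens, the rest from odds."""
    return 2 * N if N >= 1 else 1 - 2 * N

# The first 200 positive integers should cover -99 .. 100 exactly once.
image = {F(n) for n in range(1, 201)}
```

Floor division is exact here because $n$ is even in the first branch and $1-n$ is even in the second.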


## Tuesday, March 7, 2017

### The Bending of Electric Field Lines about Unlike Ponderous Media

Perhaps one of the coolest properties of electric fields, and amongst the most important, is that field lines can bend when passing between media with different dielectric constants. In the following discussion, we are going to use a rather simple method (that is, the method of images) to show that the electric field lines of a point charge in a dielectric of permittivity $\epsilon_1$ will bend according to the relative magnitude of the permittivity of another medium, $\epsilon_2$, that we allow them to pass through. The scenario that we are dealing with is depicted in the figure below.
There are five conditions that we need to satisfy to completely describe the potential and electric field of our setup in the figure above. First, we are dealing with an electrostatic situation, so
$$\nabla \times \mathbf{E} = 0 $$
Furthermore, in region I we have a charge, so we should expect that we have a point charge distribution existing within the dielectric material
$$\nabla \cdot \mathbf{E} = \frac{1}{\epsilon_1} \rho$$
while in region II we should observe no charge, i.e.
$$\nabla \cdot \mathbf{E} = 0$$
Our final conditions are what set this solution apart from a normal old method-of-images problem: instead of imposing that the potential be zero at the plane between the dielectric media, we impose continuity conditions between the displacement fields in the z-direction and the electric fields in the $\rho$-direction.
$$\mathbf{D}^{\text{perp.}}_{\epsilon_1} |_{z=0} = \mathbf{D}^{\text{perp.}}_{\epsilon_2} |_{z=0}$$
$$E^{\text{para.}}_{\epsilon_1}\hat{\rho} |_{z=0} = E^{\text{para.}}_{\epsilon_2} \hat{\rho} |_{z=0}$$
The first of these last two conditions is simply a statement asserting that there is no free surface charge at the interface, so there is no discontinuity in the normal component of the displacement field at the boundary between the two dielectric media. The very last condition asserts, as required by the curl-free condition, that there is no discontinuity in the electric field parallel to the plane splitting the two regions. These are both standard boundary conditions typical of many electrostatic problems. Starting off with the normal old method-of-images trick, we suppose that there exists a dummy charge, $q_2$, on the other side of the plane that makes the potential in region I
$$\phi_{\epsilon_1}=\frac{1}{4\pi\epsilon_1}\bigg( \frac{q}{r_1} + \frac{q_2}{r_2} \bigg)$$
where $r_1$ and $r_2$ are separation distances, not to be confused with an attempt to center either charge at the origin and use spherical coordinates. As a matter of fact, they are
$$r_1=\sqrt{\rho^2+(z-a)^2}$$
$$r_2=\sqrt{\rho^2+(z+a)^2}$$
And that is all we really need to do for region I. Now we would like to obtain a solution for the potential in region II. Our use of the method of images will already satisfy our third condition, as we can simply suppose that there exists an image charge in region I, such that
$$\phi_{\epsilon_2}=\frac{1}{4\pi \epsilon_2}\frac{q_3}{r_1}$$
i.e., the image charge is located in the same region as our real charge; however, it carries a charge value of $q_3$. What is left now is to show that our choice of potentials is not arbitrary and that we can, indeed, satisfy the boundary conditions: by the uniqueness theorem for Laplace's equation, if we succeed, we have found the one and only solution to the setup depicted in the figure at the beginning of our derivation. By our fourth condition, we require that
$$\epsilon_1 \mathbf{E}^{\text{perp.}}_{\epsilon_1} |_{z=0} = \epsilon_2 \mathbf{E}^{\text{perp.}}_{\epsilon_2} |_{z=0}$$
meaning that the z-derivatives of our potentials, weighted by the permittivities, must match at the plane:
$$\epsilon_1 \frac{\partial}{\partial z}\phi_{\epsilon_1} |_{z=0} = \epsilon_2 \frac{\partial}{\partial z} \phi_{\epsilon_2} |_{z=0}$$
Furthermore, utilizing our fifth condition
$$ \frac{\partial}{\partial \rho}\phi_{\epsilon_1} |_{z=0} = \frac{\partial}{\partial \rho} \phi_{\epsilon_2} |_{z=0}$$
Putting these two together yields the following relationships between our charges (the first from the parallel condition, the second from the perpendicular one)
$$\frac{1}{\epsilon_1}(q+q_2)=\frac{q_3}{\epsilon_2}$$
$$q-q_2=q_3$$
Rearranging our two relations between the charges, we obtain a relationship for our two image charges in terms of our original charge
$$q_2=\frac{\epsilon_1-\epsilon_2}{\epsilon_1+\epsilon_2}q$$
$$q_3=\frac{2\epsilon_2}{\epsilon_2+\epsilon_1}q$$
so our potentials are then
$$\phi_{\epsilon_1}=\frac{1}{4 \pi \epsilon_1}\bigg(\frac{q}{r_1}+\frac{(\epsilon_1-\epsilon_2)q}{(\epsilon_1+\epsilon_2)r_2}\bigg)$$
for region I, and
$$\phi_{\epsilon_2}=\frac{1}{4\pi \epsilon_2}\bigg( \frac{2\epsilon_2 q}{(\epsilon_2+\epsilon_1)r_1}\bigg)$$
for region II. Notice something fascinating that you may have already anticipated - the potential for region I looks a hell of a lot like that of a dipole. Depending on the relative magnitudes of $\epsilon_1$ and $\epsilon_2$, it does indeed act like one: for $\epsilon_2>\epsilon_1$, the image charge $q_2$ is negative, so the solution looks like two charges of opposite sign and unequal magnitude separated by a finite distance (a physical dipole), while for $\epsilon_2<\epsilon_1$, the solution looks like the potential of two charges of the same sign but different magnitudes. This is reflected in the following drawings of the electric fields produced by our image-charge solution, where the first drawing depicts $\epsilon_2>\epsilon_1$ and the second depicts $\epsilon_2<\epsilon_1$:
Now, the whole goal of our discussion was to show that the electric field lines of a point charge bend, and this is precisely what our solution displays. Because the potential in region II acts like that of a single point charge, whereas the potential in region I acts like that of two charges of equal or opposite sign, our continuity conditions guarantee that the field lines bleed into the second medium and bend at the boundary while remaining continuous.
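The whole construction can be spot-checked numerically. Below is a short Python sketch (all parameter values are arbitrary illustrative choices) that builds the two potentials with the image charges forced by the boundary conditions, $q_2=\frac{\epsilon_1-\epsilon_2}{\epsilon_1+\epsilon_2}q$ and $q_3=\frac{2\epsilon_2}{\epsilon_1+\epsilon_2}q$, and verifies by finite differences that both continuity conditions hold at $z=0$:

```python
import math

# Illustrative parameters in arbitrary units: charge q sits a height a
# above the interface, inside the dielectric of permittivity eps1.
q, a = 1.0, 1.0
eps1, eps2 = 2.0, 5.0
q2 = (eps1 - eps2) / (eps1 + eps2) * q   # image charge seen from region I
q3 = 2.0 * eps2 / (eps1 + eps2) * q      # effective charge seen from region II

def phi1(rho, z):
    """Potential in region I: real charge plus its image across the plane."""
    r1 = math.hypot(rho, z - a)
    r2 = math.hypot(rho, z + a)
    return (q / r1 + q2 / r2) / (4.0 * math.pi * eps1)

def phi2(rho, z):
    """Potential in region II: one effective charge at the real location."""
    r1 = math.hypot(rho, z - a)
    return (q3 / r1) / (4.0 * math.pi * eps2)

h, rho = 1e-6, 0.7
# Perpendicular condition: eps * dphi/dz agrees across z = 0.
d1 = eps1 * (phi1(rho, h) - phi1(rho, -h)) / (2.0 * h)
d2 = eps2 * (phi2(rho, h) - phi2(rho, -h)) / (2.0 * h)
# Parallel condition: dphi/drho agrees at z = 0.
t1 = (phi1(rho + h, 0.0) - phi1(rho - h, 0.0)) / (2.0 * h)
t2 = (phi2(rho + h, 0.0) - phi2(rho - h, 0.0)) / (2.0 * h)
```

The potential itself is also continuous across the plane, as it must be.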

References:

[1] Jackson, John David. Classical Electrodynamics. New York: John Wiley & Sons, 1962. Print.

[2] Griffiths, David J. Introduction to Electrodynamics. Noida, India: Pearson India Education Services, 2015. Print.


## Sunday, January 15, 2017

### A Quick Analytical Derivation of the Quantum Mechanical Simple Harmonic Oscillator

Here we present an alternative to the method many undergraduate texts use for solving the quantum mechanical simple harmonic oscillator (SHO) - that is, an approach devoid of matrix mechanics. Though the algebraic solution serves as a wonderful teaching tool, the analytical solution is enlightening in its own special way. Particularly, it gives the reader an idea of how solutions are manipulated to satisfy the condition that the quantum mechanical state reside in a Hilbert space with a convergent inner product. As it will turn out, our journey to finding this kind of solution is what actually leads to the quantization of the allowed energies of a Hooke's law potential. Furthermore, energy quantization is what leads to the existence of a non-zero lowest energy state for the SHO. Without further ado, the SHO Hamiltonian:
$$\hat{H}\psi = \frac{\hat{P}^2}{2m}\psi+\frac{m\omega^2}{2}\hat{x}^2\psi$$
Using the position space representations of the momentum and position operators,
$$\hat{P}= -i\hbar\frac{d}{dx}$$
$$\hat{x}=x$$
$$\hat{H}\psi = \frac{-\hbar^2}{2m}\frac{d^2}{dx^2}\psi + \frac{m\omega^2}{2}x^2\psi$$
This you have probably seen many times, but it doesn't hurt to go about it once more. Next, we make a quick substitution of our independent variable, x, to a dimensionless parameter, $\xi$. The power behind doing so will become apparent in a moment. Let
$$x = \bigg(\frac{\hbar}{m\omega}\bigg)^{\frac{1}{2}}\xi$$
Invoking the chain rule, we can change our x dependence in the derivative to $\xi$ dependence.
$$\frac{d}{dx} = \frac{\partial}{\partial \xi}\frac{d\xi}{dx} = \bigg(\frac{m\omega}{\hbar}\bigg)^{\frac{1}{2}}\frac{d}{d\xi}$$
We do this twice to get an expression for the second-order derivative in the Hamiltonian. After doing so, we replace $x$ with $\xi$ in the Hooke's law potential and cancel terms to obtain a new expression for the SHO Hamiltonian.
$$\hat{H}\psi = \frac{-\hbar \omega}{2}\frac{d^2}{d\xi^2}\psi + \frac{\hbar \omega}{2}\xi^2\psi = E\psi$$
To make this even more clean, we make another substitution. Let
$$K = \frac{2E}{\hbar \omega}$$
Then we can rearrange the SHO Hamiltonian to read
$$\frac{d^2}{d\xi^2}\psi+(K-\xi^2)\psi=0$$
That is the wonderful simplification that our variable substitution does for us! No immediate solution to this ordinary differential equation comes to mind right away, so let's try to get a ballpark solution and base an educated guess off of it. If we look at this equation as $\xi$ approaches infinity, then K becomes negligible, so
$$\frac{d^2}{d\xi^2}\psi-\xi^2\psi=0$$
Don't worry if the solution to this does not seem obvious. We can plug in
$$\psi \approx A\xi^ne^{-\frac{\xi^2}{2}} + B\xi^ne^{\frac{\xi^2}{2}}$$
and see that this does, indeed, satisfy the equation in the large-$\xi$ limit. This is our first instance of making sure that our solution resides in a Hilbert space with a convergent inner product. Because $e^{\frac{\xi^2}{2}}$ increases without bound, we let $B=0$; otherwise our wave function would not be normalizable. We now have an idea that our solution may look something like a Gaussian with dependence on powers of $\xi$. We are then justified in guessing that our solution is of the form
$$\psi = u(\xi)e^{\frac{-\xi^2}{2}}$$
Substituting this into the SHO Hamiltonian:
$$\frac{d^2}{d\xi^2}u(\xi)e^{-\frac{\xi^2}{2}}+(K-\xi^2)u(\xi)e^{-\frac{\xi^2}{2}}=0$$
Taking the second derivative, expanding, and canceling all of the $e^{-\frac{\xi^2}{2}}$ factors, we obtain a differential equation of the form
$$\frac{d^2}{d\xi^2}u(\xi)-2\xi\frac{d}{d\xi}u(\xi)+(K-1)u(\xi)=0$$
Unless you have studied this differential equation before, you will likely not be capable of guessing the answer. In these cases, it may be sensible to use the Method of Frobenius. That is, assume the solution to be a power series and see if we can obtain a tangible function from the convergence of the assumed series. Let
$$u(\xi) = \sum_{j=0}^\infty a_j \xi^j $$
Plugging this into our differential equation, we obtain
$$\sum_{j=0}^\infty a_j[j(j-1)\xi^{j-2}-2j\xi^j+(K-1)\xi^j]=0$$
Notice that the first term in this sum vanishes for $j=0$ and $j=1$. We can use this to our advantage by defining a dummy variable and re-indexing to obtain an equivalent summation with only powers of $\xi^j$. We know
$$\sum_{j=0}^\infty a_jj(j-1)\xi^{j-2}=\sum_{j=2}^\infty a_jj(j-1)\xi^{j-2}$$
Let us, then, define a dummy variable, k, in the following manner
$$j=k+2$$
Then
$$\sum_{j=2}^\infty a_jj(j-1)\xi^{j-2} = \sum_{k=0}^\infty a_{k+2}(k+2)(k+1)\xi^{k}$$
This, however, is only a dummy variable, so we can relabel $k$ as $j$ to obtain a summation equivalent to our original series. Our new series is then
$$\sum_{j=0}^\infty \xi^j [a_{j+2}(j+2)(j+1)+a_j(K-2j-1)]=0$$
From the linear independence of the powers of $\xi$, we know that each coefficient must vanish. We can ensure this by setting up the following recursion relation between $a_j$ and $a_{j+2}$ (this amounts to letting the coefficients inside the sum evaluate to zero)
$$a_{j+2}=\frac{2j+1-K}{(j+2)(j+1)}a_j$$
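The recursion is easy to play with numerically. Here is a minimal Python sketch (the seed values, the choices of $K$, and the number of terms are all arbitrary illustrative choices) that generates the coefficients $a_j$ from a chosen $K$ and the two seeds $a_0$ and $a_1$:

```python
def series_coefficients(K, a0, a1, n_terms=20):
    """Build a_j via a_{j+2} = (2j + 1 - K) * a_j / ((j + 2)(j + 1))."""
    a = [0.0] * n_terms
    a[0], a[1] = a0, a1
    for j in range(n_terms - 2):
        a[j + 2] = (2 * j + 1 - K) * a[j] / ((j + 2) * (j + 1))
    return a

# A generic K: the nonzero coefficients go on forever.
generic = series_coefficients(K=4.0, a0=1.0, a1=0.0)
# K = 2*2 + 1 with the odd seed zeroed: the even chain stops at xi^2,
# leaving the polynomial 1 - 2*xi^2.
special = series_coefficients(K=5.0, a0=1.0, a1=0.0)
```

Running this makes the dichotomy discussed below very concrete: for generic $K$ the coefficients never die off, while for special values of $K$ the series terminates.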
You can check that this does, indeed, make the coefficients in the sum go to zero, while leaving $\xi$ as a free variable. This is all very odd because we have obtained a solution without putting any restrictions on K (which is dependent on E). But wait a second - does our solution really reside in a Hilbert space? Let us expand our guessed solution out to a few terms
$$u(\xi) = \bigg[a_0+\frac{1-K}{2}a_0\xi^2+\frac{(1-K)(5-K)}{24}a_0\xi^4+...\bigg]+\bigg[a_1\xi+\frac{3-K}{6}a_1\xi^3+...\bigg]$$
You may have noticed a problem here. Our variable $\xi$ keeps increasing in power to infinity, and for large $j$ the recursion gives $a_{j+2}\approx \frac{2}{j}a_j$, the same growth as the series for $e^{\xi^2}$, so $u(\xi)e^{-\frac{\xi^2}{2}}$ blows up like $e^{\frac{\xi^2}{2}}$. This solution does not reside in our Hilbert space because it is not normalizable! No need to fret, however, as we can fix this. If we let
$$K=2j_{max}+1$$
Then our series will truncate at the highest integer of our choosing, $j_{max}$. What we have done here is not only make our solution finite. We have quantized the allowed energies! Recall
$$K=\frac{2E}{\hbar\omega}$$
Then
$$K=\frac{2E}{\hbar\omega}=2j_{max}+1$$
Solving for E
$$E=\hbar \omega\bigg(j_{max}+\frac{1}{2}\bigg)$$
If you have studied the quantum mechanical SHO, you will recognize this as the energy levels of the oscillator! We only have one more thing to do to make this physically realizable.
Observe that for any symmetric potential, the allowed wave functions have either even or odd parity. We can satisfy this condition by simply setting $a_0=0$ for odd solutions and $a_1=0$ for even solutions. The class of even or odd polynomials that our solution $u(\xi)$ generates are (up to normalization) the physicists' Hermite polynomials. We will now redefine $u(\xi)=AH_j(\xi)$, where $A$ is our normalization constant. For $j=0,1,2,3$, $H_j(\xi)$ is
$$H_0(\xi)=1$$
$$H_1(\xi)=2\xi$$
$$H_2(\xi)=4\xi^2-2$$
$$H_3(\xi)=8\xi^3-12\xi$$
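As a consistency check, the conventional physicists' Hermite polynomials satisfy $H_j''-2\xi H_j'+2jH_j=0$, which is exactly our differential equation with $K=2j+1$. A short Python sketch with hand-entered derivatives (the sample points are arbitrary):

```python
# The first four physicists' Hermite polynomials together with their
# first and second derivatives, entered by hand.
hermite = [
    (lambda x: 1.0,                   lambda x: 0.0,                 lambda x: 0.0),
    (lambda x: 2.0 * x,               lambda x: 2.0,                 lambda x: 0.0),
    (lambda x: 4.0 * x * x - 2.0,     lambda x: 8.0 * x,             lambda x: 8.0),
    (lambda x: 8.0 * x**3 - 12.0 * x, lambda x: 24.0 * x * x - 12.0, lambda x: 48.0 * x),
]

def residual(j, x):
    """H_j'' - 2 x H_j' + 2 j H_j, which should vanish identically."""
    H, dH, d2H = hermite[j]
    return d2H(x) - 2.0 * x * dH(x) + 2.0 * j * H(x)
```

The residual vanishes for every $j$ at every sample point, confirming that each polynomial solves the truncated equation.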
And here we are!
$$\psi(x)=AH_j(\xi)e^{-\frac{\xi^2}{2}}$$
I am not going to go further with finding the normalization constant, as it is a rather tedious and non-instructive task. If we transform the $\xi$ dependence back into x dependence and fill in our normalization constant
$$\psi_j(x)=\bigg(\frac{m\omega}{\hbar \pi(j!)^{2}2^{2j}}\bigg)^{\frac{1}{4}}H_j\bigg(\sqrt{\frac{m\omega}{\hbar}}x\bigg)\exp\bigg(\frac{-m\omega x^2}{2\hbar}\bigg)$$
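The quoted normalization can be spot-checked numerically. Here is a minimal Python sketch (setting $m\omega/\hbar=1$ for convenience, with an arbitrary integration window and step) that integrates $|\psi_j|^2$ for $j=0$ and $j=1$ by a simple Riemann sum:

```python
import math

def psi(j, x):
    """Normalized SHO eigenfunctions for j = 0, 1 with m*omega/hbar = 1."""
    hermites = {0: 1.0, 1: 2.0 * x}   # physicists' H_0 and H_1
    norm = (1.0 / (math.pi * math.factorial(j) ** 2 * 2 ** (2 * j))) ** 0.25
    return norm * hermites[j] * math.exp(-x * x / 2.0)

# Riemann sum of |psi_j|^2 over a window wide enough for the Gaussian tail.
dx = 1e-3
xs = [-10.0 + i * dx for i in range(20001)]
norm0 = sum(psi(0, x) ** 2 for x in xs) * dx
norm1 = sum(psi(1, x) ** 2 for x in xs) * dx
```

Both integrals come out to one, confirming that the prefactor $\big(\frac{m\omega}{\hbar\pi (j!)^2 2^{2j}}\big)^{1/4}$ is the correct normalization for these Hermite polynomials.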
Because our SHO Hamiltonian
$$\hat{H}\Psi = \frac{-\hbar^2}{2m}\frac{d^2}{dx^2}\Psi + \frac{m\omega^2}{2}x^2\Psi=i\hbar\partial_t\Psi$$
is separable, we can tack on the stationary state time dependence to get the temporal evolution.
$$\Psi_j(x,t)=\bigg(\frac{m\omega}{\hbar \pi(j!)^{2}2^{2j}}\bigg)^{\frac{1}{4}}H_j\bigg(\sqrt{\frac{m\omega}{\hbar}}x\bigg)\exp\bigg(\frac{-m\omega x^2}{2\hbar}\bigg)\exp\bigg(-i\omega\bigg[j+\frac{1}{2}\bigg] t\bigg)$$
And, finally, the wave function in its full glory - a general superposition of stationary states, with expansion coefficients $c_j$ determined by the initial conditions:
$$\Psi(x,t)=\sum_{j=0}^{\infty}c_j\bigg(\frac{m\omega}{\hbar \pi(j!)^{2}2^{2j}}\bigg)^{\frac{1}{4}}H_j\bigg(\sqrt{\frac{m\omega}{\hbar}}x\bigg)\exp\bigg(\frac{-m\omega x^2}{2\hbar}\bigg)\exp\bigg(-i\omega\bigg[j+\frac{1}{2}\bigg] t\bigg)$$
Beautiful.
