# Fermi–Dirac statistics

In statistical mechanics, Fermi-Dirac statistics is a particular case of particle statistics, developed by Enrico Fermi and Paul Dirac, that determines the statistical distribution of fermions over the energy states of a system in thermal equilibrium. In other words, it gives the probability that a given energy level is occupied by a fermion.

More generally, Fermi-Dirac statistics means that the total wavefunction of fermions must be antisymmetric under the exchange of any pair of fermions (that is, if one exchanges any two fermions, the wavefunction acquires an overall minus sign).

Fermions are indistinguishable particles that obey the Pauli exclusion principle: no more than one particle may occupy the same quantum state at the same time. Fermions have half-integer spin. Statistical thermodynamics is used to describe the behaviour of large numbers of particles. A collection of non-interacting fermions is called a Fermi gas.

F-D statistics was introduced in 1926 by Enrico Fermi and Paul Dirac. It was applied in 1926 by Ralph Fowler to describe the collapse of a star to a white dwarf, and in 1927 by Arnold Sommerfeld to electrons in metals. Pascual Jordan had developed the same statistics in 1925, which he called Pauli statistics, but his referee Max Born mislaid the paper for six months before finding it again. In the meantime it was independently discovered by Enrico Fermi and Paul Dirac.[1]

For F-D statistics, the expected number of particles in states with energy ${\displaystyle \epsilon _{i}}$ is

${\displaystyle n_{i}={\frac {g_{i}}{e^{(\epsilon _{i}-\mu )/kT}+1}}}$

where:

${\displaystyle n_{i}\ }$ is the number of particles in state i,
${\displaystyle \epsilon _{i}\ }$ is the energy of state i,
${\displaystyle g_{i}\ }$ is the degeneracy of energy level i (the number of states with energy ${\displaystyle \epsilon _{i}\ }$),
${\displaystyle \mu \ }$ is the chemical potential (sometimes the Fermi energy ${\displaystyle E_{F}\ }$ is used instead, as a low-temperature approximation),
${\displaystyle \ k\ }$ is Boltzmann's constant, and
${\displaystyle \ T\ }$ is absolute temperature.

In the case where ${\displaystyle \mu }$ is the Fermi energy ${\displaystyle E_{F}\ }$ and ${\displaystyle g_{i}=1\ }$, the function is called the Fermi function: ${\displaystyle F(E)=\left(1+e^{(E-E_{F})/kT}\right)^{-1}}$
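As a numerical illustration, the Fermi function can be evaluated directly. The sketch below is in Python, with energies in eV and Boltzmann's constant in eV/K; the function name `fermi` is illustrative, not from any particular library:

```python
import math

def fermi(E, E_F, T, k=8.617333262e-5):
    """Fermi function F(E) = 1 / (1 + e^{(E - E_F)/kT}).

    E and E_F in eV, T in kelvin; k is Boltzmann's constant in eV/K.
    """
    return 1.0 / (1.0 + math.exp((E - E_F) / (k * T)))

# A state exactly at the Fermi energy is half occupied at any temperature.
print(fermi(5.0, 5.0, 300.0))  # 0.5

# Well below E_F the state is almost certainly occupied; well above, almost empty.
print(fermi(4.0, 5.0, 300.0), fermi(6.0, 5.0, 300.0))
```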

## Historical note

Before the advent of Fermi-Dirac statistics, the understanding of some aspects of electron behaviour was rudimentary. It was difficult to explain, for example, why electrons in a metal can move freely to conduct an electric current, yet contribute almost nothing to the same metal's specific heat capacity, as if their number were for some unknown reason considerably reduced.

The difficulty encountered by the electronic theory of metals at that time came from treating all electrons as equivalent, as classical statistics does. In other words, it was believed that each electron contributes to the specific heat an amount of the order of the Boltzmann constant k. This statistical problem remained unsolved until the formulation of the Pauli exclusion principle and the Fermi-Dirac distribution in the mid-1920s.

## A derivation

Consider a single-particle state of a multiparticle system, whose energy is ${\displaystyle \epsilon }$. For example, if our system is some quantum gas in a box, then a state might be a particular single-particle wave function. Recall that, for a grand canonical ensemble in general, the grand partition function is

${\displaystyle Z\;=\sum _{s}e^{-(E(s)-\mu N(s))/kT}}$

where

${\displaystyle E(s)}$ is the energy of a state s,
${\displaystyle N(s)}$ is the number of particles possessed by the system when in the state s,
${\displaystyle \mu }$ denotes the chemical potential, and
s is an index that runs through all possible microstates of the system.

In the present context, we take our system to be a fixed single-particle state (not a particle). So our system has energy ${\displaystyle n\cdot \epsilon }$ when the state is occupied by n particles, and 0 when it is unoccupied. Take the remaining single-particle states to be the reservoir. Since the system and the reservoir occupy the same physical space, particles are clearly exchanged between the two (indeed, this is the very phenomenon we are investigating). This is why we use the grand partition function, which, via the chemical potential, accounts for the flow of particles between a system and its thermal reservoir.

For fermions, a state can only be either occupied by a single particle or unoccupied. Therefore our system has multiplicity two: occupied by one particle, or unoccupied, called ${\displaystyle s_{1}}$ and ${\displaystyle s_{2}}$ respectively. We see that ${\displaystyle E(s_{1})=\;\epsilon }$, ${\displaystyle N(s_{1})=\;1}$, and ${\displaystyle E(s_{2})=\;0}$, ${\displaystyle N(s_{2})=\;0}$. The partition function is therefore

${\displaystyle Z=\sum _{i=1}^{2}e^{-(E(s_{i})-\mu N(s_{i}))/kT}=e^{-(\epsilon -\mu )/kT}+1}$.

For a grand canonical ensemble, the probability that the system is in the microstate ${\displaystyle s_{\alpha }}$ is given by

${\displaystyle P(s_{\alpha })={\frac {e^{-(E(s_{\alpha })-\mu N(s_{\alpha }))/kT}}{Z}}}$.

Our state being occupied by a particle means the system is in microstate ${\displaystyle s_{1}}$, whose probability is

${\displaystyle {\bar {n}}=P(s_{1})={\frac {e^{-(E(s_{1})-\mu N(s_{1}))/kT}}{Z}}={\frac {e^{-(\epsilon -\mu )/kT}}{e^{-(\epsilon -\mu )/kT}+1}}={\frac {1}{e^{(\epsilon -\mu )/kT}+1}}}$.

${\displaystyle {\bar {n}}}$ is called the Fermi-Dirac distribution. For a fixed temperature T, ${\displaystyle {\bar {n}}(\epsilon )}$ is the probability that a state with energy ε will be occupied by a fermion. Notice that ${\displaystyle {\bar {n}}}$ is a decreasing function of ε, consistent with the expectation that higher-energy states are less likely to be occupied.
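This derivation can be checked numerically: summing the two microstates gives the same occupation as the closed form. A minimal Python sketch, with illustrative function names and arbitrary energy units:

```python
import math

def fd_from_partition(eps, mu, kT):
    """Occupation computed directly from the two-term grand partition sum:
    Z = e^{-(eps - mu)/kT} + 1, and nbar = e^{-(eps - mu)/kT} / Z."""
    boltzmann_factor = math.exp(-(eps - mu) / kT)
    Z = boltzmann_factor + 1.0
    return boltzmann_factor / Z

def fd_closed_form(eps, mu, kT):
    """The simplified Fermi-Dirac form, 1 / (e^{(eps - mu)/kT} + 1)."""
    return 1.0 / (math.exp((eps - mu) / kT) + 1.0)

# Both expressions agree for any energy.
print(fd_from_partition(1.0, 0.5, 0.25))
print(fd_closed_form(1.0, 0.5, 0.25))
```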

Note that if the energy level ε has degeneracy ${\displaystyle \;g_{\epsilon }}$, then we would make the simple modification:

${\displaystyle {\bar {n}}=g_{\epsilon }\cdot {\frac {1}{e^{(\epsilon -\mu )/kT}+1}}}$.

This number is then the expected number of particles in the totality of the states with energy ε.

At any temperature T, ${\displaystyle {\bar {n}}(\mu )={\frac {1}{2}}}$; that is, states whose energy is μ always have equal probability of being occupied or unoccupied.

In the limit ${\displaystyle T\rightarrow 0}$, ${\displaystyle {\bar {n}}}$ becomes a step function (see graph above). All states whose energy is below the chemical potential will be occupied with probability 1 and those states with energy above μ will be unoccupied. The chemical potential at zero temperature is called Fermi energy, denoted by ${\displaystyle E_{F}}$, i.e.

${\displaystyle E_{F}=\;\mu (T=0)}$.

It may be of interest to note that, in general, the chemical potential is temperature-dependent. However, for systems well below the Fermi temperature ${\displaystyle T_{F}={\frac {E_{F}}{k}}}$, it is often sufficient to use the approximation ${\displaystyle \mu \approx E_{F}}$.
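The low-temperature behaviour described above is easy to see numerically: as kT shrinks, the distribution sharpens toward a step at μ, while the occupation at μ itself stays exactly 1/2. A short Python sketch in arbitrary energy units:

```python
import math

def nbar(eps, mu, kT):
    """Fermi-Dirac occupation probability (arbitrary energy units)."""
    return 1.0 / (math.exp((eps - mu) / kT) + 1.0)

mu = 1.0
for kT in (0.5, 0.1, 0.01):
    # As kT decreases, states just below mu fill up and states just
    # above mu empty out, while nbar(mu) remains 1/2.
    print(kT, nbar(0.9, mu, kT), nbar(mu, mu, kT), nbar(1.1, mu, kT))
```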

## Another derivation

In the previous derivation, we have made use of the grand partition function (or Gibbs sum over states). Equivalently, the same result can be achieved by directly analyzing the multiplicities of the system.

Suppose there are two fermions placed in a system with four energy levels. There are six possible arrangements of such a system, which are shown in the diagram below.

   ε1   ε2   ε3   ε4
A  *    *
B  *         *
C  *              *
D       *    *
E       *         *
F            *    *


Each of these arrangements is called a microstate of the system. Assume that, at thermal equilibrium, each of these microstates will be equally likely, subject to the constraints that there be a fixed total energy and a fixed number of particles.

Depending on the values of the energy for each state, it may be that total energy for some of these six combinations is the same as others. Indeed, if we assume that the energies are multiples of some fixed value ε, the energies of each of the microstates become:

A: 3ε
B: 4ε
C: 5ε
D: 5ε
E: 6ε
F: 7ε

So if we know that the system has an energy of 5ε, we can conclude that it will be equally likely that it is in state C or state D. Note that if the particles were distinguishable (the classical case), there would be twelve microstates altogether, rather than six.
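The counting above can be reproduced in a few lines of Python by enumerating the ways to choose 2 of the 4 levels, taking ε = 1 as the energy unit:

```python
from itertools import combinations

# Energy levels eps1..eps4, as multiples of a unit energy (here 1, 2, 3, 4).
levels = [1, 2, 3, 4]

# Every arrangement of two indistinguishable fermions: choose 2 of 4 levels.
microstates = list(combinations(levels, 2))
print(len(microstates))  # 6 arrangements, corresponding to A..F

# Total energy of each microstate, sorted.
energies = sorted(sum(m) for m in microstates)
print(energies)  # [3, 4, 5, 5, 6, 7]

# Exactly two microstates (C and D in the diagram) share total energy 5.
print(sum(1 for m in microstates if sum(m) == 5))  # 2
```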

Now suppose we have a number of energy levels, labeled by index i, each level having energy ${\displaystyle \epsilon _{i}}$ and containing a total of ${\displaystyle n_{i}}$ particles. Suppose each level contains ${\displaystyle g_{i}}$ distinct sublevels, all of which have the same energy, and which are distinguishable. For example, two particles may have different momenta, in which case they are distinguishable from each other, yet they can still have the same energy. The value of ${\displaystyle g_{i}}$ associated with level i is called the "degeneracy" of that energy level. The Pauli exclusion principle states that only one fermion can occupy any such sublevel.

Let w(n, g) be the number of ways of distributing n particles among the g sublevels of an energy level. Clearly there are g ways of putting one particle into a level with g sublevels, so that w(1, g) = g, which we will write as:

${\displaystyle w(1,g)={\frac {g!}{1!(g-1)!}}}$

We can distribute 2 particles among g sublevels by putting one in the first sublevel and then distributing the remaining particle among the remaining (g − 1) sublevels, or by putting one in the second sublevel and distributing the remaining particle among the remaining (g − 2) sublevels, and so on, so that w(2, g) = w(1, g − 1) + w(1, g − 2) + ... + w(1, 1), or

${\displaystyle w(2,g)=\sum _{k=1}^{g-1}w(1,g-k)=\sum _{k=1}^{g-1}{\frac {(g-k)!}{1!(g-k-1)!}}=\sum _{j=1}^{g-1}{\frac {j!}{1!(j-1)!}}={\frac {g!}{2!(g-2)!}}}$ where ${\displaystyle j=g-k}$,

where we have used the following theorem involving binomial coefficients:

${\displaystyle \sum _{k=n}^{g}{\frac {k!}{n!(k-n)!}}={\frac {(g+1)!}{(n+1)!(g-n)!}}}$

Continuing this process, we can see that w(n, g) is just a binomial coefficient:

${\displaystyle w(n,g)={\frac {g!}{n!(g-n)!}}}$
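This binomial count, and the recursive sum used to build it, can be verified in Python with the standard library's `math.comb`:

```python
from math import comb

def w(n, g):
    """Number of ways to place n fermions in g distinguishable sublevels,
    with at most one fermion per sublevel: the binomial coefficient C(g, n)."""
    return comb(g, n)

# The recursive sum from above: w(2, g) = sum_{k=1}^{g-1} w(1, g - k).
g = 6
print(w(2, g))                                # 15
print(sum(w(1, g - k) for k in range(1, g)))  # 15
```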

The number of ways that a set of occupation numbers ${\displaystyle n_{i}}$ can be realized is the product of the ways that each individual energy level can be populated:

${\displaystyle W=\prod _{i}w(n_{i},g_{i})=\prod _{i}{\frac {g_{i}!}{n_{i}!(g_{i}-n_{i})!}}}$

Following the same procedure used in deriving the Maxwell-Boltzmann statistics, we wish to find the set of ${\displaystyle n_{i}}$ for which W is maximized, subject to the constraint that there be a fixed number of particles and a fixed energy. We constrain our solution using Lagrange multipliers, forming the function:

${\displaystyle f(n_{i})=\ln(W)+\alpha (N-\sum n_{i})+\beta (E-\sum n_{i}\epsilon _{i})}$

Again, using Stirling's approximation for the factorials, taking the derivative with respect to ${\displaystyle n_{i}}$, setting the result to zero, and solving for ${\displaystyle n_{i}}$ yields the Fermi-Dirac population numbers:

${\displaystyle n_{i}={\frac {g_{i}}{e^{\alpha +\beta \epsilon _{i}}+1}}}$

It can be shown thermodynamically that β = 1/kT, where k is Boltzmann's constant and T is the temperature, and that α = −μ/kT, where μ is the chemical potential, so that finally:

${\displaystyle n_{i}={\frac {g_{i}}{e^{(\epsilon _{i}-\mu )/kT}+1}}}$

Note that the above formula is sometimes written:

${\displaystyle n_{i}={\frac {g_{i}}{e^{\epsilon _{i}/kT}/z+1}}}$

where ${\displaystyle z=e^{\mu /kT}}$ is the fugacity.
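The two forms are algebraically identical, since dividing ${\displaystyle e^{\epsilon _{i}/kT}}$ by ${\displaystyle z=e^{\mu /kT}}$ reproduces ${\displaystyle e^{(\epsilon _{i}-\mu )/kT}}$. A quick Python check, with illustrative names and ${\displaystyle g_{i}=1}$:

```python
import math

def fd(eps, mu, kT):
    """Standard form: 1 / (e^{(eps - mu)/kT} + 1)."""
    return 1.0 / (math.exp((eps - mu) / kT) + 1.0)

def fd_fugacity(eps, mu, kT):
    """Fugacity form: 1 / (e^{eps/kT} / z + 1), with z = e^{mu/kT}."""
    z = math.exp(mu / kT)
    return 1.0 / (math.exp(eps / kT) / z + 1.0)

# Both forms give the same occupation number.
print(fd(1.0, 0.3, 0.2), fd_fugacity(1.0, 0.3, 0.2))
```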

## References

Carter, Ashley H. Classical and Statistical Thermodynamics. Prentice-Hall, 2001.

Griffiths, David J. Introduction to Quantum Mechanics, 2nd ed. Pearson Education, 2005.

Kittel, Charles, and Herbert Kroemer. Thermal Physics, 2nd ed. W. H. Freeman, 1980.

Pethick, C. J., and H. Smith. Bose-Einstein Condensation in Dilute Gases. Cambridge University Press, 2002.