# Cumulant

In probability theory and statistics, a random variable *X* has an expected value μ = *E*(*X*) and a variance σ^{2} = *E*((*X* − μ)^{2}). These are the first two **cumulants**: μ = κ_{1} and σ^{2} = κ_{2}.

The cumulants κ_{n} are defined by the **cumulant-generating function**, which is the logarithm of the moment-generating function:

*g*(*t*) = log *E*(e^{t·X}) = κ_{1}·*t* + κ_{2}·*t*^{2}/2! + κ_{3}·*t*^{3}/3! + ...

The derivative of the cumulant generating function is simply:

*g*'(*t*) = κ_{1} + κ_{2}·*t* + κ_{3}·*t*^{2}/2! + ...

so that the cumulants are the derivatives at *t* = 0:

- κ_{1} = μ = *g*'(0),
- κ_{2} = σ^{2} = *g*' '(0),
- κ_{n} = *g*^{(n)}(0).
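
As a concrete illustration, the following minimal sketch (assuming the Python library sympy; any computer algebra system would do) recovers the first cumulants of a Bernoulli distribution, treated in detail below, by differentiating *g*(*t*) = log *E*(e^{t·X}) at 0:

```python
import sympy as sp

t, p = sp.symbols('t p')

# Cumulant generating function of a Bernoulli(p) variable:
# g(t) = log E(e^{t X}) = log(1 - p + p*e^t)
g = sp.log(1 - p + p*sp.exp(t))

# kappa_n = g^(n)(0)
for n in range(1, 4):
    print(n, sp.factor(g.diff(t, n).subs(t, 0)))
# 1 p
# 2 -p*(p - 1)             i.e. p*(1 - p)
# 3 p*(p - 1)*(2*p - 1)    i.e. p*(1 - p)*(1 - 2*p)
```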

A distribution with given cumulants κ_{n} can be approximated through the Edgeworth series.


## Cumulants of some discrete probability distributions

- The constant random variable *X* = 1. The derivative of the cumulant generating function is *g*'(*t*) = 1. The first cumulant is κ_{1} = *g*'(0) = 1 and the other cumulants are zero, κ_{2} = κ_{3} = κ_{4} = ... = 0.

- The constant random variables *X* = μ. Every cumulant is just μ times the corresponding cumulant of the constant random variable *X* = 1. The derivative of the cumulant generating function is *g*'(*t*) = μ. The first cumulant is κ_{1} = *g*'(0) = μ and the other cumulants are zero, κ_{2} = κ_{3} = κ_{4} = ... = 0. So the derivative of a cumulant generating function generalises the real constants.

- The Bernoulli distributions (number of successes in one trial with probability *p* of success). The special case *p* = 1 is the constant random variable *X* = 1. The derivative of the cumulant generating function is *g*'(*t*) = ((*p*^{−1} − 1)·e^{−t} + 1)^{−1}. The first cumulants are κ_{1} = *g*'(0) = *p* and κ_{2} = *g*' '(0) = *p*·(1 − *p*). The cumulants satisfy the recursion formula

κ_{n+1} = *p*·(1 − *p*)·dκ_{n}/d*p*.

- The geometric distributions (number of failures before one success with probability *p* of success on each trial). The derivative of the cumulant generating function is *g*'(*t*) = ((1 − *p*)^{−1}·e^{−t} − 1)^{−1}. The first cumulants are κ_{1} = *g*'(0) = *p*^{−1} − 1, and κ_{2} = *g*' '(0) = κ_{1}·*p*^{−1}. Substituting *p* = (μ + 1)^{−1} gives *g*'(*t*) = ((μ^{−1} + 1)·e^{−t} − 1)^{−1} and κ_{1} = μ.

- The Poisson distributions. The derivative of the cumulant generating function is *g*'(*t*) = μ·e^{t}. All cumulants are equal to the parameter: κ_{1} = κ_{2} = κ_{3} = ... = μ.

- The binomial distributions (number of successes in *n* independent trials with probability *p* of success on each trial). The special case *n* = 1 is a Bernoulli distribution. Every cumulant is just *n* times the corresponding cumulant of the corresponding Bernoulli distribution. The derivative of the cumulant generating function is *g*'(*t*) = *n*·((*p*^{−1} − 1)·e^{−t} + 1)^{−1}. The first cumulants are κ_{1} = *g*'(0) = *n*·*p* and κ_{2} = *g*' '(0) = κ_{1}·(1 − *p*). Substituting *p* = μ·*n*^{−1} gives *g*'(*t*) = ((μ^{−1} − *n*^{−1})·e^{−t} + *n*^{−1})^{−1} and κ_{1} = μ. The limiting case *n*^{−1} = 0 is a Poisson distribution.

- The negative binomial distributions (number of failures before *n* successes with probability *p* of success on each trial). The special case *n* = 1 is a geometric distribution. Every cumulant is just *n* times the corresponding cumulant of the corresponding geometric distribution. The derivative of the cumulant generating function is *g*'(*t*) = *n*·((1 − *p*)^{−1}·e^{−t} − 1)^{−1}. The first cumulants are κ_{1} = *g*'(0) = *n*·(*p*^{−1} − 1), and κ_{2} = *g*' '(0) = κ_{1}·*p*^{−1}. Substituting *p* = (μ·*n*^{−1} + 1)^{−1} gives *g*'(*t*) = ((μ^{−1} + *n*^{−1})·e^{−t} − *n*^{−1})^{−1} and κ_{1} = μ. Comparing these formulas to those of the binomial distributions explains the name 'negative binomial distribution'. The limiting case *n*^{−1} = 0 is a Poisson distribution.

Introducing the variance-to-mean ratio, ε = μ^{−1}·σ^{2} = κ_{1}^{−1}·κ_{2}, the above probability distributions get a unified formula for the derivative of the cumulant generating function:

*g*'(*t*) = μ·(ε·e^{−t}− ε + 1)^{−1}.

The second derivative is

*g*' '(*t*) = *g*'(*t*)·(1 + (ε^{−1} − 1)·e^{t})^{−1}

confirming that the first cumulant is κ_{1} = *g* '(0) = μ and the second cumulant is κ_{2} = *g* ' '(0) = μ·ε.
The constant random variables *X* = μ have ε = 0. The binomial distributions have ε = 1 − *p* so that 0 < ε < 1. The Poisson distributions have ε = 1. The negative binomial distributions have ε = *p*^{−1} so that ε > 1. Note the analogy to the classification of conic sections by eccentricity: circles ε = 0, ellipses 0 < ε < 1, parabolas ε = 1, hyperbolas ε > 1.
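
These identifications can be verified symbolically. The following sketch (assuming the Python library sympy) substitutes each distribution's μ and ε into the unified formula and checks that it reduces to the specific *g*'(*t*) given in the list above:

```python
import sympy as sp

t, p, n, mu = sp.symbols('t p n mu', positive=True)

def g_prime(mean, eps):
    # Unified formula: g'(t) = mu*(eps*e^{-t} - eps + 1)^{-1}
    return mean / (eps*sp.exp(-t) - eps + 1)

# Poisson: mean mu, eps = 1, and g'(t) = mu*e^t
assert sp.simplify(g_prime(mu, 1) - mu*sp.exp(t)) == 0

# Binomial: mean n*p, eps = 1 - p
assert sp.simplify(g_prime(n*p, 1 - p)
                   - n/((1/p - 1)*sp.exp(-t) + 1)) == 0

# Negative binomial: mean n*(1/p - 1), eps = 1/p
assert sp.simplify(g_prime(n*(1/p - 1), 1/p)
                   - n/(sp.exp(-t)/(1 - p) - 1)) == 0

print("unified formula reproduces all three cases")
```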

## Cumulants of some continuous probability distributions

- For the normal distribution with expected value μ and variance σ^{2}, the derivative of the cumulant generating function is *g*'(*t*) = μ + σ^{2}·*t*. The cumulants are κ_{1} = μ, κ_{2} = σ^{2}, and κ_{3} = κ_{4} = ... = 0. The special case σ^{2} = 0 is the constant random variable *X* = μ.

- The cumulants of the uniform distribution on the interval [−1, 0] are κ_{n} = *B*_{n}/*n*, where *B*_{n} is the *n*th Bernoulli number.
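
The Bernoulli-number identity can be checked by series expansion. Below is a short sympy sketch (assuming sympy's convention *B*_{1} = −1/2) that compares the series coefficients of the cumulant generating function *g*(*t*) = log((1 − e^{−t})/*t*) with *B*_{n}/*n*:

```python
import sympy as sp

t = sp.Symbol('t')

# CGF of the uniform distribution on [-1, 0]:
# g(t) = log E(e^{t X}) = log((1 - e^{-t})/t)
g = sp.log((1 - sp.exp(-t))/t)

# kappa_n is n! times the coefficient of t^n in the series of g.
ser = sp.series(g, t, 0, 8).removeO()
for n in range(1, 8):
    kappa = ser.coeff(t, n)*sp.factorial(n)
    assert kappa == sp.bernoulli(n)/n
print("kappa_n = B_n/n checked for n = 1..7")
```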

## Some properties of cumulants

### Invariance and equivariance

The first cumulant is shift-equivariant; all of the others are shift-invariant. To state this less tersely, denote by κ_{n}(*X*) the *n*th cumulant of the probability distribution of the random variable *X*. The statement is that if *c* is constant then κ_{1}(*X* + *c*) = κ_{1}(*X*) + *c* and κ_{n}(*X* + *c*) = κ_{n}(*X*) for *n* ≥ 2, i.e., *c* is added to the first cumulant, but all higher cumulants are unchanged.

### Homogeneity

The *n*th cumulant is homogeneous of degree *n*, i.e. if *c* is any constant, then

κ_{n}(*c*·*X*) = *c*^{n}·κ_{n}(*X*).

### Additivity

If *X* and *Y* are independent random variables then κ_{n}(*X* + *Y*) = κ_{n}(*X*) + κ_{n}(*Y*).
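
Both additivity and homogeneity can be seen at the level of cumulant generating functions: for independent variables the CGFs add, and replacing *X* by *c*·*X* replaces *g*(*t*) by *g*(*c*·*t*). A minimal sympy sketch using Poisson CGFs (the parameter names are ours):

```python
import sympy as sp

t, c, a, b = sp.symbols('t c a b', positive=True)

# Cumulant generating functions of independent X ~ Poisson(a), Y ~ Poisson(b).
gX = a*(sp.exp(t) - 1)
gY = b*(sp.exp(t) - 1)

# Additivity: the CGF of X + Y is gX + gY, so every cumulant adds.
g_sum = gX + gY
# Homogeneity: the CGF of c*X is gX(c*t), so kappa_n(c*X) = c^n * kappa_n(X).
g_scaled = gX.subs(t, c*t)

for n in range(1, 4):
    print(n, g_sum.diff(t, n).subs(t, 0),             # a + b
          sp.factor(g_scaled.diff(t, n).subs(t, 0)))  # a*c**n
```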

### Cumulants and moments

The moment generating function is:

1 + Σ_{n≥1} μ′_{n}·*t*^{n}/*n*! = *E*(e^{t·X}) = e^{*g*(*t*)},

so the cumulant generating function is simply the logarithm of the moment generating function. The first cumulant is simply the expected value; the second and third cumulants are respectively the second and third central moments (the second central moment is the variance); but the higher cumulants are neither moments nor central moments, but rather more complicated polynomial functions of the moments.

The cumulants are related to the moments by the following recursion formula:

κ_{n} = μ′_{n} − Σ_{k=1}^{n−1} C(*n* − 1, *k* − 1)·κ_{k}·μ′_{n−k},

where C(*n* − 1, *k* − 1) denotes a binomial coefficient.

The *n*th moment μ′_{n} is an *n*th-degree polynomial in the first *n* cumulants, thus:

μ′_{1} = κ_{1}

μ′_{2} = κ_{2} + κ_{1}^{2}

μ′_{3} = κ_{3} + 3κ_{2}κ_{1} + κ_{1}^{3}

μ′_{4} = κ_{4} + 4κ_{3}κ_{1} + 3κ_{2}^{2} + 6κ_{2}κ_{1}^{2} + κ_{1}^{4}

μ′_{5} = κ_{5} + 5κ_{4}κ_{1} + 10κ_{3}κ_{2} + 10κ_{3}κ_{1}^{2} + 15κ_{2}^{2}κ_{1} + 10κ_{2}κ_{1}^{3} + κ_{1}^{5}

The coefficients are precisely those that occur in Faà di Bruno's formula.

The "prime" distinguishes the moments μ′_{n} from the central moments μ_{n}. To express the *central* moments as functions of the cumulants, just drop from these polynomials all terms in which κ_{1} appears as a factor:

### Cumulants and set-partitions

These polynomials have a remarkable combinatorial interpretation: the coefficients count certain partitions of sets. A general form of these polynomials is

μ′_{n} = Σ_{π} Π_{B ∈ π} κ_{|B|}

where

- π runs through the list of all partitions of a set of size *n*;

- "*B* ∈ π" means *B* is one of the "blocks" into which the set is partitioned; and

- |*B*| is the size of the set *B*.

Thus each monomial is a constant times a product of cumulants in which the sum of the indices is *n* (e.g., in the term κ_{3}κ_{2}^{2}κ_{1}, the sum of the indices is 3 + 2 + 2 + 1 = 8; this appears in the polynomial that expresses the 8th moment as a function of the first eight cumulants). Each term corresponds to a partition of the integer *n*. The *coefficient* in each term is the number of partitions of a set of *n* members that collapse to that partition of the integer *n* when the members of the set become indistinguishable.
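
The partition formula is short enough to execute directly. The sketch below (assuming sympy, whose `multiset_partitions` enumerates the partitions of a set; the function name is ours) rebuilds the moment polynomials above by summing products of κ_{|B|} over all set partitions:

```python
import sympy as sp
from sympy.utilities.iterables import multiset_partitions

def moment_polynomial(n):
    """mu'_n as a polynomial in kappa_1..kappa_n: sum over all partitions
    pi of an n-element set of the product of kappa_{|B|} over blocks B."""
    kappa = sp.symbols('kappa1:%d' % (n + 1))
    return sp.expand(sum(sp.Mul(*[kappa[len(block) - 1] for block in part])
                         for part in multiset_partitions(list(range(n)))))

print(moment_polynomial(3))  # kappa1**3 + 3*kappa1*kappa2 + kappa3
print(moment_polynomial(4))  # kappa1**4 + 6*kappa1**2*kappa2
                             # + 4*kappa1*kappa3 + 3*kappa2**2 + kappa4
```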

## Joint cumulants

The **joint cumulant** of several random variables *X*_{1}, ..., *X*_{n} is

κ(*X*_{1}, ..., *X*_{n}) = Σ_{π} (|π| − 1)!·(−1)^{|π|−1}·Π_{B ∈ π} *E*(Π_{i ∈ B} *X*_{i}),

where π runs through the list of all partitions of { 1, ..., *n* }, |π| is the number of blocks of π, and *B* runs through the list of all blocks of the partition π. For example,

κ(*X*, *Y*, *Z*) = *E*(*XYZ*) − *E*(*XY*)·*E*(*Z*) − *E*(*XZ*)·*E*(*Y*) − *E*(*YZ*)·*E*(*X*) + 2·*E*(*X*)·*E*(*Y*)·*E*(*Z*).

The joint cumulant of just one random variable is its expected value, and that of two random variables is their covariance. If some of the random variables are independent of all of the others, then the joint cumulant is zero. If all *n* random variables are the same, then the joint cumulant is the *n*th ordinary cumulant.

The combinatorial meaning of the expression of moments in terms of cumulants is easier to understand than that of cumulants in terms of moments:

*E*(*X*_{1}···*X*_{n}) = Σ_{π} Π_{B ∈ π} κ(*X*_{i} : *i* ∈ *B*).

For example:

*E*(*XYZ*) = κ(*X*, *Y*, *Z*) + κ(*X*, *Y*)·κ(*Z*) + κ(*X*, *Z*)·κ(*Y*) + κ(*Y*, *Z*)·κ(*X*) + κ(*X*)·κ(*Y*)·κ(*Z*).

Another important property of joint cumulants is multilinearity:

κ(*X* + *Y*, *Z*_{1}, ..., *Z*_{n}) = κ(*X*, *Z*_{1}, ..., *Z*_{n}) + κ(*Y*, *Z*_{1}, ..., *Z*_{n}).

Just as the second cumulant is simply the variance, the joint cumulant of just two random variables is just the covariance. The familiar identity

var(*X* + *Y*) = var(*X*) + 2·cov(*X*, *Y*) + var(*Y*)

generalizes to cumulants:

κ_{n}(*X* + *Y*) = Σ_{j=0}^{n} C(*n*, *j*)·κ(*X*, ..., *X*, *Y*, ..., *Y*),

where the *j*th term has *j* copies of *X* and *n* − *j* copies of *Y*, and C(*n*, *j*) is a binomial coefficient.
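
The partition definition above translates directly into code. The following sketch (assuming sympy; the function name is ours, and expectations are kept as uninterpreted symbols) expands the joint cumulant of three variables, reproducing the example given earlier:

```python
import sympy as sp
from sympy.utilities.iterables import multiset_partitions

def joint_cumulant(names):
    """Joint cumulant kappa(X_1, ..., X_n) as the partition sum
    sum_pi (|pi| - 1)! * (-1)^(|pi| - 1) * prod_{B in pi} E(prod_{i in B} X_i),
    with each expectation written as an uninterpreted symbol."""
    xs = names.split()
    total = 0
    for part in multiset_partitions(list(range(len(xs)))):
        term = sp.factorial(len(part) - 1) * (-1)**(len(part) - 1)
        for block in part:
            term *= sp.Symbol('E[' + '*'.join(xs[i] for i in block) + ']')
        total += term
    return sp.expand(total)

print(joint_cumulant('X Y Z'))
# E[X*Y*Z] - E[X*Y]*E[Z] - E[X*Z]*E[Y] - E[Y*Z]*E[X] + 2*E[X]*E[Y]*E[Z]
```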

### Conditional cumulants and the law of total cumulance

The law of total expectation and the law of total variance generalize naturally to conditional cumulants. The case *n* = 3, expressed in the language of (central) moments rather than that of cumulants, says

μ_{3}(*X*) = *E*(μ_{3}(*X* | *Y*)) + μ_{3}(*E*(*X* | *Y*)) + 3·cov(*E*(*X* | *Y*), var(*X* | *Y*)).

The general result stated below first appeared in 1969 in *The Calculation of Cumulants via Conditioning* by David R. Brillinger in volume 21 of *Annals of the Institute of Statistical Mathematics*, pages 215-218.

In general, we have

κ(*X*_{1}, ..., *X*_{n}) = Σ_{π} κ(κ(*X*_{π_{1}} | *Y*), ..., κ(*X*_{π_{b}} | *Y*))

where

- the sum is over all partitions π of the set { 1, ..., *n* } of indices, and

- π_{1}, ..., π_{b} are all of the "blocks" of the partition π; the expression κ(*X*_{π_{k}} | *Y*) indicates the joint cumulant, conditional on *Y*, of the random variables whose indices are in the block π_{k}.

## History

Cumulants were first introduced by the Danish astronomer, actuary, mathematician, and statistician Thorvald N. Thiele (1838–1910) in 1889. Thiele called them *half-invariants*. They were first called *cumulants* in a 1931 paper, *The derivation of the pattern formulae of two-way partitions from those of simpler patterns*, Proceedings of the London Mathematical Society, Series 2, v. 33, pp. 195-208, by the great statistical geneticist Sir Ronald Fisher and the statistician John Wishart, eponym of the Wishart distribution. The historian Stephen Stigler has said that the name *cumulant* was suggested to Fisher in a letter from Harold Hotelling. In an earlier paper, published in 1929, Fisher had called them *cumulative moment functions*.

## Formal cumulants

More generally, the cumulants of a sequence { *m*_{n} : *n* = 1, 2, 3, ... }, not necessarily the moments of any probability distribution, are given by

1 + Σ_{n≥1} *m*_{n}·*t*^{n}/*n*! = exp(Σ_{n≥1} κ_{n}·*t*^{n}/*n*!)

where the values of κ_{n} for *n* = 1, 2, 3, ... are found formally, i.e., by algebra alone, in disregard of questions of whether any series converges. All of the difficulties of the "problem of cumulants" are absent when one works formally. The simplest example is that the second cumulant of a probability distribution must always be nonnegative, and is zero only if all of the higher cumulants are zero. Formal cumulants are subject to no such constraints.

## Bell numbers

In combinatorics, the *n*th Bell number is the number of partitions of a set of size *n*. All of the cumulants of the sequence of Bell numbers are equal to 1. The Bell numbers are the moments of the Poisson distribution with expected value 1.

## Cumulants of a polynomial sequence of binomial type

For any sequence { κ_{n} : *n* = 1, 2, 3, ... } of scalars in a field of characteristic zero, considered as formal cumulants, there is a corresponding sequence { μ′_{n} : *n* = 1, 2, 3, ... } of formal moments, given by the polynomials above. For those polynomials, construct a polynomial sequence in the following way. Out of the polynomial

μ′_{3} = κ_{3} + 3κ_{2}κ_{1} + κ_{1}^{3}

make a new polynomial in these plus one additional variable *x*:

*p*_{3}(*x*) = κ_{3}·*x* + 3κ_{2}κ_{1}·*x*^{2} + κ_{1}^{3}·*x*^{3}

... and generalize the pattern. The pattern is that the numbers of blocks in the aforementioned partitions are the exponents on *x*. Each coefficient is a polynomial in the cumulants; these are the Bell polynomials, named after Eric Temple Bell.

This sequence of polynomials is of binomial type. In fact, no other sequences of binomial type exist; every polynomial sequence of binomial type is completely determined by its sequence of formal cumulants.

## Free cumulants

In the identity

*E*(*X*_{1}···*X*_{n}) = Σ_{π} Π_{B ∈ π} κ(*X*_{i} : *i* ∈ *B*)

one sums over *all* partitions of the set { 1, ..., *n* }. If instead, one sums only over the noncrossing partitions, then one gets **"free cumulants"** rather than conventional cumulants treated above. These play a central role in free probability theory. In that theory, rather than considering independence of random variables, defined in terms of Cartesian products of algebras of random variables, one considers instead **"freeness"** of random variables, defined in terms of free products of algebras rather than Cartesian products of algebras.

The ordinary cumulants of degree higher than 2 of the normal distribution are zero. The *free* cumulants of degree higher than 2 of the Wigner semicircle distribution are zero. This is one respect in which the role of the Wigner distribution in free probability theory is analogous to that of the normal distribution in conventional probability theory.
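
The restriction to noncrossing partitions is easy to experiment with. The sketch below (plain Python plus sympy's `multiset_partitions`; the helper names are ours) computes moments from free cumulants by summing over noncrossing partitions only. With κ_{2} = 1 and all other free cumulants zero, it produces the Catalan numbers, the even moments of the Wigner semicircle distribution on [−2, 2]:

```python
from sympy.utilities.iterables import multiset_partitions

def crosses(b1, b2):
    # Two blocks cross if a < c < b < d with a, b in one and c, d in the other.
    return any(a < c < b < d or c < a < d < b
               for a in b1 for b in b1 if a < b
               for c in b2 for d in b2 if c < d)

def is_noncrossing(part):
    return not any(crosses(part[i], part[j])
                   for i in range(len(part)) for j in range(i + 1, len(part)))

def free_moment(n, kappa):
    """Moment from free cumulants: sum over *noncrossing* partitions only."""
    total = 0
    for part in multiset_partitions(list(range(n))):
        if is_noncrossing(part):
            term = 1
            for block in part:
                term *= kappa.get(len(block), 0)
            total += term
    return total

# Wigner semicircle: free cumulants kappa_2 = 1, all others 0.
# Its even moments are the Catalan numbers 1, 2, 5, 14, ...
print([free_moment(2*k, {2: 1}) for k in range(1, 5)])   # [1, 2, 5, 14]
```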