Multivariate normal distribution

MVN redirects here. For the airport with that IATA code in Mount Vernon, Illinois, see Mount Vernon Airport.
Multivariate normal

[Plots of the probability density function and cumulative distribution function]

Parameters: location $\mu \in \mathbb{R}^N$ (real vector); covariance matrix $\Sigma$ (positive-definite real $N \times N$ matrix)
Support: $x \in \mathbb{R}^N$
Probability density function (pdf): $(2\pi)^{-N/2} \det(\Sigma)^{-1/2} \exp\left( -\tfrac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu) \right)$
Cumulative distribution function (cdf): no analytic expression
Mean: $\mu$
Median: $\mu$
Mode: $\mu$
Variance: $\Sigma$ (covariance matrix)
Skewness: 0
Excess kurtosis: 0
Entropy: $\tfrac{1}{2} \ln\left( (2\pi e)^N \det(\Sigma) \right)$
Moment-generating function (mgf): $\exp\left( \mu^T t + \tfrac{1}{2} t^T \Sigma t \right)$
Characteristic function: $\exp\left( i \mu^T t - \tfrac{1}{2} t^T \Sigma t \right)$

In probability theory and statistics, a multivariate normal distribution, also sometimes called a multivariate Gaussian distribution, is a generalization of the one-dimensional normal distribution (also called a Gaussian distribution) to higher dimensions. It is also closely related to the matrix normal distribution.

General case

A random vector $X = (X_1, \dots, X_N)^T$ follows a multivariate normal distribution if it satisfies the following equivalent conditions:

  • every linear combination $Y = a_1 X_1 + \cdots + a_N X_N$ of its components is normally distributed;
  • there is a random vector $Z = (Z_1, \dots, Z_M)^T$, whose components are independent standard normal random variables, a vector $\mu = (\mu_1, \dots, \mu_N)^T$ and an $N \times M$ matrix $A$ such that $X = AZ + \mu$;
  • there is a vector $\mu$ and a symmetric, positive semi-definite matrix $\Sigma$ such that the characteristic function of $X$ is $\varphi_X(u) = \exp\left( i \mu^T u - \tfrac{1}{2} u^T \Sigma u \right)$.

If $\Sigma$ is non-singular, then the distribution may be described by the following PDF:

$$ f_X(x) = \frac{1}{(2\pi)^{N/2} \det(\Sigma)^{1/2}} \exp\left( -\frac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu) \right) $$

where $\det(\Sigma)$ is the determinant of $\Sigma$. Note how the equation above reduces to that of the univariate normal distribution if $\Sigma$ is a scalar (i.e., a $1 \times 1$ matrix).

The vector $\mu$ in these conditions is the expected value of $X$ and the matrix $\Sigma = A A^T$ is the covariance matrix of the components $X_i$.

It is important to realize that the covariance matrix must be allowed to be singular (a case not covered by the formula above, which requires $\Sigma^{-1}$ to be defined). That case arises frequently in statistics; for example, in the distribution of the vector of residuals in ordinary linear regression problems. Note also that the $X_i$ are in general not independent; they can be seen as the result of applying the linear transformation $A$ to a collection of independent Gaussian variables $Z$.

That the distribution of a random vector $X$ is a multivariate normal distribution can be written in the following notation:

$$ X \sim N(\mu, \Sigma), $$

or to make it explicitly known that $X$ is $N$-dimensional,

$$ X \sim N_N(\mu, \Sigma). $$
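As a concrete illustration, here is a minimal NumPy sketch that evaluates the density formula above in the non-singular case. The particular `mu` and `Sigma` values are arbitrary examples, not anything prescribed by the text:

```python
import numpy as np

def mvn_pdf(x, mu, Sigma):
    """Density of N(mu, Sigma) at x, assuming Sigma is non-singular."""
    N = len(mu)
    diff = x - mu
    # Solve Sigma^{-1} (x - mu) without forming the explicit inverse.
    sol = np.linalg.solve(Sigma, diff)
    norm_const = (2 * np.pi) ** (N / 2) * np.sqrt(np.linalg.det(Sigma))
    return np.exp(-0.5 * diff @ sol) / norm_const

mu = np.array([0.0, 1.0])
Sigma = np.array([[2.0, 0.3],
                  [0.3, 0.5]])
print(mvn_pdf(np.array([0.5, 0.5]), mu, Sigma))
```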

Cumulative distribution function

The cumulative distribution function (cdf) $F(x)$ is defined as the probability that all values in a random vector $X$ are less than or equal to the corresponding values in vector $x$. Though there is no closed form for $F(x)$, there are a number of algorithms that estimate it numerically. For example, see MVNDST under [1] (includes FORTRAN code) or [2] (includes MATLAB code).
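Since $F(x)$ has no closed form, one simple (if crude) numerical approach is Monte Carlo integration. The sketch below, with arbitrary example parameters, estimates $P(X_1 \le b_1, X_2 \le b_2)$; dedicated routines such as the Genz-style algorithms referenced above are far more accurate, so this is only meant to show the idea:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])
b = np.array([0.5, 1.0])

# Estimate P(X <= b componentwise) as the fraction of samples
# falling in the lower-left orthant at b.
samples = rng.multivariate_normal(mu, Sigma, size=200_000)
estimate = np.mean(np.all(samples <= b, axis=1))
print(estimate)
```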

A counterexample

The fact that two random variables X and Y are each normally distributed does not imply that the pair (X, Y) has a joint normal distribution. A simple example is one in which X is standard normal and Y = X if |X| > 1 and Y = −X if |X| < 1.
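A quick simulation makes the failure of joint normality visible; this is a sketch of the example above (threshold 1, X standard normal). Y has the same marginal distribution as X, yet the pair (X, Y) lies entirely on the two lines y = x and y = −x, so it cannot have a bivariate normal density:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
y = np.where(np.abs(x) > 1.0, x, -x)  # Y = X if |X| > 1, else Y = -X

# Y is standard normal by symmetry: its sample moments match X's.
print(y.mean(), y.var())  # close to 0 and 1
# But every pair satisfies |Y| = |X|, i.e. (X, Y) is supported on
# the union of the lines y = x and y = -x:
print(np.all(np.isclose(np.abs(y), np.abs(x))))  # True
```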

Also see normally distributed and uncorrelated does not imply independent.

Normally distributed and independent

If X and Y are normally distributed and independent, then they are "jointly normally distributed", i.e., the pair (X, Y) has a bivariate normal distribution. There are of course also many bivariate normal distributions in which the components are correlated.

Bivariate case

In the 2-dimensional nonsingular case, the probability density function (with mean (0, 0)) is

$$ f(x, y) = \frac{1}{2\pi \sigma_x \sigma_y \sqrt{1 - \rho^2}} \exp\left( -\frac{1}{2(1 - \rho^2)} \left( \frac{x^2}{\sigma_x^2} + \frac{y^2}{\sigma_y^2} - \frac{2 \rho x y}{\sigma_x \sigma_y} \right) \right) $$

where $\rho$ is the correlation between $X$ and $Y$. In this case,

$$ \Sigma = \begin{bmatrix} \sigma_x^2 & \rho \sigma_x \sigma_y \\ \rho \sigma_x \sigma_y & \sigma_y^2 \end{bmatrix}. $$

Linear transformation

If $Y = BX$ is a linear transformation of $X \sim N(\mu, \Sigma)$, where $B$ is an $M \times N$ matrix, then $Y$ has a multivariate normal distribution with expected value $B\mu$ and variance $B \Sigma B^T$ (i.e., $Y \sim N(B\mu, B \Sigma B^T)$).

Corollary: any subset of the $X_i$ has a marginal distribution that is also multivariate normal. To see this, consider the following example: to extract the subset $(X_1, X_2, X_4)^T$, use

$$ B = \begin{bmatrix} 1 & 0 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 1 & \cdots & 0 \end{bmatrix} $$

which extracts the desired elements directly.
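The selection-matrix construction is easy to check numerically. In this sketch (the $\mu$ and $\Sigma$ values are illustrative, not from the text), the sample mean and covariance of $Y = BX$ agree with $B\mu$ and $B \Sigma B^T$:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, 2.0, 3.0, 4.0])
A = rng.standard_normal((4, 4))
Sigma = A @ A.T  # a random positive-definite covariance matrix

# B extracts the subvector (X1, X2, X4).
B = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]], dtype=float)

X = rng.multivariate_normal(mu, Sigma, size=200_000)
Y = X @ B.T

print(Y.mean(axis=0), B @ mu)  # sample mean vs. B mu
print(np.allclose(np.cov(Y.T), B @ Sigma @ B.T, atol=0.05))
```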

Geometric interpretation

The equidensity contours of a non-singular multivariate normal distribution are ellipsoids (i.e. linear transformations of hyperspheres) centered at the mean.[1] The directions of the principal axes of the ellipsoids are given by the eigenvectors of the covariance matrix $\Sigma$. The squared relative lengths of the principal axes are given by the corresponding eigenvalues.

If $\Sigma = U \Lambda U^T$ is an eigendecomposition where the columns of $U$ are unit eigenvectors and $\Lambda$ is a diagonal matrix of the eigenvalues, then we have

$$ X \sim N(\mu, \Sigma) \iff X \sim \mu + U \Lambda^{1/2} N(0, I) \iff X \sim \mu + U N(0, \Lambda). $$

Moreover, $U$ can be chosen to be a rotation matrix, as inverting an axis does not have any effect on $N(0, \Lambda)$, but inverting a column changes the sign of $U$'s determinant. The distribution $N(\mu, \Sigma)$ is in effect $N(0, I)$ scaled by $\Lambda^{1/2}$, rotated by $U$ and translated by $\mu$.

Conversely, any choice of $\mu$, full-rank matrix $U$, and positive diagonal entries $\Lambda_i$ yields a non-singular multivariate normal distribution. If any $\Lambda_i$ is zero and $U$ is square, the resulting covariance matrix $U \Lambda U^T$ is singular. Geometrically this means that every contour ellipsoid is infinitely thin and has zero volume in $n$-dimensional space, as at least one of the principal axes has length of zero.
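The scale-rotate-translate reading of the eigendecomposition can be exercised directly. The sketch below (example parameters) builds samples as $\mu + U \Lambda^{1/2} Z$ with $Z \sim N(0, I)$ and confirms that their covariance is $\Sigma$:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])
Sigma = np.array([[3.0, 1.0],
                  [1.0, 2.0]])

# Sigma = U diag(lam) U^T with orthonormal eigenvector columns in U.
lam, U = np.linalg.eigh(Sigma)

Z = rng.standard_normal((100_000, 2))       # N(0, I) samples
X = mu + Z @ (U @ np.diag(np.sqrt(lam))).T  # scale, rotate, translate

print(np.allclose(np.cov(X.T), Sigma, atol=0.05))
```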

Correlations and independence

In general, random variables may be uncorrelated but highly dependent. But if a random vector has a multivariate normal distribution then any two or more of its components that are uncorrelated are independent. This implies that any two or more of its components that are pairwise independent are independent.

But it is not true that two random variables that are (separately, marginally) normally distributed and uncorrelated are independent. Two random variables that are normally distributed may fail to be jointly normally distributed, i.e., the vector whose components they are may fail to have a multivariate normal distribution. For an example of two normally distributed random variables that are uncorrelated but not independent, see normally distributed and uncorrelated does not imply independent.

Higher moments

The $k$th-order moments of $X$ are defined by

$$ \mu_{1, \dots, N}(X) \overset{\mathrm{def}}{=} \mu_{r_1, \dots, r_N}(X) \overset{\mathrm{def}}{=} E\left[ \prod_{j=1}^{N} X_j^{r_j} \right] $$

where $r_1 + r_2 + \cdots + r_N = k$.

The central $k$th-order moments are given as follows.

(a) If $k$ is odd, $\mu_{1, \dots, N}(X - \mu) = 0$.

(b) If $k$ is even with $k = 2\lambda$, then

$$ \mu_{1, \dots, 2\lambda}(X - \mu) = \sum \left( \sigma_{ij} \sigma_{kl} \cdots \sigma_{XZ} \right) $$

where the sum is taken over all allocations of the set $\{1, \dots, 2\lambda\}$ into $\lambda$ (unordered) pairs, giving $(2\lambda - 1)!! = 1 \cdot 3 \cdot 5 \cdots (2\lambda - 1)$ terms in the sum, each being the product of $\lambda$ covariances. The covariances are determined by replacing the terms of the list $[1, \dots, 2\lambda]$ by the corresponding terms of the list consisting of $r_1$ ones, then $r_2$ twos, etc., after each of the possible allocations of the former list into pairs.

In particular, the 4th-order central moments are

$$ E\left[ (X_i - \mu_i)(X_j - \mu_j)(X_k - \mu_k)(X_l - \mu_l) \right] = \sigma_{ij} \sigma_{kl} + \sigma_{ik} \sigma_{jl} + \sigma_{il} \sigma_{jk}. $$

For fourth-order moments (four variables) there are three terms. For sixth-order moments there are 3 × 5 = 15 terms, and for eighth-order moments there are 3 × 5 × 7 = 105 terms. The sixth-order case expands in the same way, with one product of three covariances for each of the 15 ways of partitioning the six indices into pairs:

$$ E\left[ (X_i - \mu_i)(X_j - \mu_j)(X_k - \mu_k)(X_l - \mu_l)(X_m - \mu_m)(X_n - \mu_n) \right] = \sigma_{ij} \sigma_{kl} \sigma_{mn} + \sigma_{ij} \sigma_{km} \sigma_{ln} + \cdots \quad \text{(15 terms in total)}. $$
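The fourth-order identity is simple to check by Monte Carlo. This sketch (arbitrary example $\Sigma$) compares the sample moment $E[X_1 X_2 X_3 X_4]$ of a centered normal vector against $\sigma_{12}\sigma_{34} + \sigma_{13}\sigma_{24} + \sigma_{14}\sigma_{23}$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Sigma = A @ A.T  # example positive-definite covariance

X = rng.multivariate_normal(np.zeros(4), Sigma, size=2_000_000)
empirical = np.mean(X[:, 0] * X[:, 1] * X[:, 2] * X[:, 3])

s = Sigma
theoretical = s[0, 1]*s[2, 3] + s[0, 2]*s[1, 3] + s[0, 3]*s[1, 2]
print(empirical, theoretical)  # should agree up to Monte Carlo error
```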

Conditional distributions

If $\mu$ and $\Sigma$ are partitioned as follows

$$ \mu = \begin{bmatrix} \mu_1 \\ \mu_2 \end{bmatrix} \quad \text{with sizes} \quad \begin{bmatrix} q \times 1 \\ (N - q) \times 1 \end{bmatrix} $$

$$ \Sigma = \begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{bmatrix} \quad \text{with sizes} \quad \begin{bmatrix} q \times q & q \times (N - q) \\ (N - q) \times q & (N - q) \times (N - q) \end{bmatrix} $$

then the distribution of $x_1$ conditional on $x_2 = a$ is multivariate normal $x_1 \mid x_2 = a \sim N(\bar{\mu}, \bar{\Sigma})$ where

$$ \bar{\mu} = \mu_1 + \Sigma_{12} \Sigma_{22}^{-1} (a - \mu_2) $$

and covariance matrix

$$ \bar{\Sigma} = \Sigma_{11} - \Sigma_{12} \Sigma_{22}^{-1} \Sigma_{21}. $$

This matrix is the Schur complement of $\Sigma_{22}$ in $\Sigma$.

Note that knowing the value of $x_2$ to be $a$ alters the variance; perhaps more surprisingly, the mean is shifted by $\Sigma_{12} \Sigma_{22}^{-1} (a - \mu_2)$; compare this with the situation of not knowing the value of $a$, in which case $x_1$ would have distribution $N(\mu_1, \Sigma_{11})$.

The matrix $\Sigma_{12} \Sigma_{22}^{-1}$ is known as the matrix of regression coefficients.
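In code, the conditional mean and covariance are a few lines of linear algebra. The partition size and parameters below are illustrative examples:

```python
import numpy as np

def condition(mu, Sigma, q, a):
    """Parameters of x_1 | x_2 = a, where x_1 is the first q components."""
    mu1, mu2 = mu[:q], mu[q:]
    S11, S12 = Sigma[:q, :q], Sigma[:q, q:]
    S21, S22 = Sigma[q:, :q], Sigma[q:, q:]
    reg = S12 @ np.linalg.inv(S22)   # matrix of regression coefficients
    mu_bar = mu1 + reg @ (a - mu2)
    Sigma_bar = S11 - reg @ S21      # Schur complement of S22 in Sigma
    return mu_bar, Sigma_bar

mu = np.array([0.0, 1.0, 2.0])
Sigma = np.array([[2.0, 0.5, 0.3],
                  [0.5, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])
print(condition(mu, Sigma, 2, np.array([2.5])))
```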

Fisher information matrix

The Fisher information matrix (FIM) for a normal distribution takes a special formulation. The $(m, n)$ element of the FIM for $X \sim N(\mu(\theta), \Sigma(\theta))$ is

$$ \mathcal{I}_{m,n} = \frac{\partial \mu^T}{\partial \theta_m} \Sigma^{-1} \frac{\partial \mu}{\partial \theta_n} + \frac{1}{2} \operatorname{tr}\left( \Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_m} \Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_n} \right) $$

where

  • $\partial \mu / \partial \theta_m$ is the vector of partial derivatives of the mean with respect to the parameter $\theta_m$
  • $\partial \Sigma / \partial \theta_m$ is the matrix of partial derivatives of the covariance with respect to $\theta_m$
  • $\operatorname{tr}$ is the trace function
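As the simplest special case, if the parameters are the mean components themselves ($\theta = \mu$) with $\Sigma$ known and constant, then $\partial \mu / \partial \theta_m = e_m$ and $\partial \Sigma / \partial \theta_m = 0$, so the FIM reduces to $\Sigma^{-1}$. This sketch (example $\Sigma$, an assumption for illustration) evaluates the general formula numerically for that case:

```python
import numpy as np

Sigma = np.array([[2.0, 0.3],
                  [0.3, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)

# theta = mu, so dmu/dtheta_m = e_m and dSigma/dtheta_m = 0.
dmu = np.eye(2)  # column m is the derivative of mu w.r.t. theta_m
FIM = np.array([[dmu[:, m] @ Sigma_inv @ dmu[:, n]
                 for n in range(2)] for m in range(2)])

print(np.allclose(FIM, Sigma_inv))  # True: the FIM equals Sigma^{-1} here
```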

Kullback-Leibler divergence

The Kullback-Leibler divergence from $N_0(\mu_0, \Sigma_0)$ to $N_1(\mu_1, \Sigma_1)$ is:

$$ D_{\text{KL}}(N_0 \| N_1) = \frac{1}{2} \left( \operatorname{tr}\left( \Sigma_1^{-1} \Sigma_0 \right) + (\mu_1 - \mu_0)^T \Sigma_1^{-1} (\mu_1 - \mu_0) - N + \ln \frac{\det \Sigma_1}{\det \Sigma_0} \right). $$
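Written out in NumPy (example parameters; `slogdet` is used to keep the log-determinants numerically stable):

```python
import numpy as np

def kl_mvn(mu0, Sigma0, mu1, Sigma1):
    """KL divergence D(N0 || N1) between two multivariate normals."""
    N = len(mu0)
    Sigma1_inv = np.linalg.inv(Sigma1)
    diff = mu1 - mu0
    _, logdet0 = np.linalg.slogdet(Sigma0)
    _, logdet1 = np.linalg.slogdet(Sigma1)
    return 0.5 * (np.trace(Sigma1_inv @ Sigma0)
                  + diff @ Sigma1_inv @ diff
                  - N + logdet1 - logdet0)

print(kl_mvn(np.zeros(2), np.eye(2),
             np.array([1.0, 0.0]), 2 * np.eye(2)))
```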

Estimation of parameters

The derivation of the maximum-likelihood estimator of the covariance matrix of a multivariate normal distribution is perhaps surprisingly subtle and elegant. See estimation of covariance matrices.

In short, the probability density function (pdf) of an $N$-dimensional multivariate normal is

$$ f(x) = (2\pi)^{-N/2} \det(\Sigma)^{-1/2} \exp\left( -\frac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu) \right) $$

and the ML estimator of the covariance matrix is

$$ \hat{\Sigma} = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})(x_i - \bar{x})^T $$

which is simply the sample covariance matrix for sample size $n$. This is a biased estimator whose expectation is

$$ E[\hat{\Sigma}] = \frac{n - 1}{n} \Sigma. $$

An unbiased sample covariance is

$$ \hat{\Sigma} = \frac{1}{n - 1} \sum_{i=1}^{n} (x_i - \bar{x})(x_i - \bar{x})^T. $$
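The two estimators differ only in the normalization. In NumPy terms (a sketch with simulated data; the sample size and $\Sigma$ are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(0)
Sigma = np.array([[2.0, 0.4],
                  [0.4, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], Sigma, size=50)  # n = 50 samples

n = X.shape[0]
xbar = X.mean(axis=0)
centered = X - xbar

ml_cov = centered.T @ centered / n              # biased ML estimator (1/n)
unbiased_cov = centered.T @ centered / (n - 1)  # unbiased (1/(n-1))

# np.cov uses the unbiased normalization by default:
print(np.allclose(unbiased_cov, np.cov(X.T)))
```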

Entropy

The differential entropy of the multivariate normal distribution is[2]

$$ h(X) = \frac{1}{2} \ln\left( (2\pi e)^N \det(\Sigma) \right) $$

where $\det(\Sigma)$ is the determinant of the covariance matrix $\Sigma$.
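Numerically (a sketch; `slogdet` avoids overflow in the determinant for high dimensions):

```python
import numpy as np

def mvn_entropy(Sigma):
    """Differential entropy (in nats) of N(mu, Sigma); mu does not matter."""
    N = Sigma.shape[0]
    _, logdet = np.linalg.slogdet(Sigma)
    return 0.5 * (N * np.log(2 * np.pi * np.e) + logdet)

print(mvn_entropy(np.eye(2)))  # 2D standard normal: about 2.8379 nats
```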

Multivariate normality tests

Multivariate normality tests check a given set of data for similarity to the multivariate normal distribution. The null hypothesis is that the data were drawn from a multivariate normal distribution; a sufficiently small p-value therefore indicates non-normal data. Multivariate normality tests include the Cox-Small test [3] and Smith and Jain's adaptation [4] of the Friedman-Rafsky test.

Drawing values from the distribution

A widely used method for drawing a random vector $X$ from the $N$-dimensional multivariate normal distribution with mean vector $\mu$ and covariance matrix $\Sigma$ (required to be symmetric and positive-definite) works as follows (a code sketch follows the list):

  1. Compute the Cholesky decomposition (matrix square root) of $\Sigma$, that is, find the unique lower triangular matrix $A$ such that $A A^T = \Sigma$.
  2. Let $Z = (z_1, \dots, z_N)^T$ be a vector whose components are $N$ independent standard normal variates (which can be generated, for example, by using the Box-Muller transform).
  3. Let $X$ be $\mu + AZ$.
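A direct transcription of these three steps (example $\mu$ and $\Sigma$; NumPy's `cholesky` returns the lower-triangular factor required by step 1):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -1.0, 0.5])
Sigma = np.array([[2.0, 0.3, 0.1],
                  [0.3, 1.0, 0.2],
                  [0.1, 0.2, 1.5]])

A = np.linalg.cholesky(Sigma)  # step 1: lower triangular, A A^T = Sigma
z = rng.standard_normal(3)     # step 2: independent standard normals
x = mu + A @ z                 # step 3: x ~ N(mu, Sigma)
print(x)
```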

References

  1. Nikolaus Hansen. "The CMA Evolution Strategy: A Tutorial" (PDF).
  2. Gokhale, D. V.; et al. (1989). "Entropy Expressions and Their Estimators for Multivariate Distributions". IEEE Transactions on Information Theory. 35 (3): 688–692.
  3. Cox, D. R.; Small, N. J. H. (1978). "Testing multivariate normality". Biometrika. 65 (2): 263–272.
  4. Smith, Stephen P.; Jain, Anil K. (1988). "A test to determine the multivariate normality of a dataset". IEEE Transactions on Pattern Analysis and Machine Intelligence. 10 (5): 757–761. doi:10.1109/34.6789.
