Normally distributed and uncorrelated does not imply independent

In probability theory, two random variables being uncorrelated does not imply their independence. In some contexts, uncorrelatedness implies at least pairwise independence (as when the random variables involved have Bernoulli distributions).

It is sometimes mistakenly thought that one context in which uncorrelatedness implies independence is when the random variables involved are normally distributed. Here are the facts:


 * Suppose two random variables X and Y are jointly normally distributed, i.e., the random vector (X, Y) has a multivariate normal distribution.  Equivalently, for any two constant (i.e., non-random) scalars a and b, the random variable aX + bY is normally distributed.  In that case, if X and Y are uncorrelated, i.e., their covariance cov(X, Y) is zero, then they are independent.


 * But it is possible for two random variables X and Y to be so distributed jointly that each one alone is normally distributed, and they are uncorrelated, but they are not independent. Examples appear below.
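The first of these two facts can be checked numerically. The following Python sketch (our own illustration; the construction X = (U + V)/√2, Y = (U − V)/√2 does not appear in the text) builds a jointly normal, uncorrelated pair out of a shared source of randomness and confirms empirically that a joint tail probability factorizes, as independence requires.

```python
import math
import random

random.seed(0)
n = 200_000

# X and Y are built from the SAME pair of independent standard normals,
# so (X, Y) is jointly normal by construction (linear in (U, V)).
# cov(X, Y) = (Var(U) - Var(V)) / 2 = 0, so by the fact above they
# must be fully independent -- which the empirical frequencies confirm.
xs, ys = [], []
for _ in range(n):
    u, v = random.gauss(0, 1), random.gauss(0, 1)
    xs.append((u + v) / math.sqrt(2))
    ys.append((u - v) / math.sqrt(2))

cov = sum(x * y for x, y in zip(xs, ys)) / n
p_x = sum(x > 1 for x in xs) / n                       # ~ Pr(X > 1)
p_y = sum(y > 1 for y in ys) / n                       # ~ Pr(Y > 1)
p_joint = sum(x > 1 and y > 1 for x, y in zip(xs, ys)) / n

print(cov)                 # near 0
print(p_joint, p_x * p_y)  # nearly equal: the joint probability factorizes
```

Note that X and Y here are deterministic functions of the same pair (U, V), yet they are genuinely independent; joint normality is what makes zero covariance so strong.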

Examples

 * Suppose X has a normal distribution with expected value 0 and variance 1. Let W = 1 or −1, each with probability 1/2, and assume W is independent of X. Let Y = WX.  Then
 * X and Y are uncorrelated;
 * Both have the same normal distribution; and
 * X and Y are not independent.
 * Moreover, X and Y are not jointly normally distributed, since the distribution of X + Y concentrates positive probability at 0: Pr(X + Y = 0) = Pr(W = −1) = 1/2, whereas a normally distributed random variable places zero probability at any single point.
 * To see that X and Y are uncorrelated, consider
 * $$\begin{align} \operatorname{cov}(X,Y) & {} = E(XY) - E(X)E(Y) = E(XY) = E(E(XY\mid W)) \\ & {} = E(X^2)\Pr(W=1) + E(-X^2)\Pr(W=-1) \\ & {} = 1\cdot\frac12 + (-1)\cdot\frac12 = 0. \end{align}$$
 * To see that Y has the same normal distribution as X, consider
 * $$\begin{align} \Pr(Y \le x) & {} = E(\Pr(Y \le x\mid W)) \\ & {} = \Pr(X \le x)\Pr(W = 1) + \Pr(-X\le x)\Pr(W = -1) \\ & {} = \Phi(x) \cdot\frac12 + \Phi(x)\cdot\frac12 = \Phi(x) \end{align}$$
 * (since X and −X both have the same normal distribution).
 * To see that X and Y are not independent, observe that Pr(Y > 1 | X = 1/2) = 0 (since X = 1/2 forces Y = ±1/2), whereas Pr(Y > 1) > 0; independence would require these two probabilities to be equal.
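A quick Monte Carlo check of this example (a Python sketch; the sample size and seed are arbitrary choices of ours) confirms the claims at once:

```python
import random

random.seed(0)
n = 100_000

xs, ys = [], []
for _ in range(n):
    x = random.gauss(0, 1)
    w = random.choice([1, -1])   # W = ±1 with probability 1/2, independent of X
    xs.append(x)
    ys.append(w * x)             # Y = WX

# Uncorrelated: the sample covariance is near zero.
cov = sum(x * y for x, y in zip(xs, ys)) / n

# Not independent: |Y| = |X| in every sample, so X pins down |Y| exactly.
assert all(abs(y) == abs(x) for x, y in zip(xs, ys))

# Not jointly normal: X + Y has an atom at 0 (whenever W = -1),
# so about half the sampled sums are exactly zero.
frac_zero = sum(x + y == 0 for x, y in zip(xs, ys)) / n
print(cov, frac_zero)
```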


 * Suppose X has a normal distribution with expected value 0 and variance 1. Let


 * $$Y=\left\{\begin{matrix} -X & \mbox{if}\ \left|X\right| < c \\ X & \mbox{if}\ \left|X\right| \geq c \end{matrix}\right.$$


 * where c is a positive number to be specified below. If c is very small, then the correlation corr(X, Y) is near 1; if c is very large, then corr(X, Y) is near −1.  Since the correlation is a continuous function of c, the intermediate value theorem implies there is some particular value of c that makes the correlation 0.  That value is approximately 1.54.  In that case, X and Y are uncorrelated, but they are clearly not independent, since X completely determines Y.


 * To see that Y is normally distributed (indeed, that its distribution is the same as that of X), let us find its cumulative distribution function:


 * $$\begin{align} \Pr(Y \leq x) & {} = \Pr(\left|X\right| < c\mbox{ and }-X \leq x) + \Pr(\left|X\right| \geq c\mbox{ and }X \leq x) \\ & {} = \Pr(\left|X\right| < c\mbox{ and }X \leq x) + \Pr(\left|X\right| \geq c\mbox{ and }X \leq x) \end{align}$$


 * (This follows from the symmetry of the distribution of X and the symmetry of the condition that |X| < c.)


 * $$= \Pr(X \leq x).\,$$


 * Observe that the sum X + Y is nowhere near being normally distributed, since it has a substantial probability (about 0.88) of being equal to 0, whereas the normal distribution, being a continuous distribution, has no discrete part, i.e., does not concentrate more than zero probability at any single point. Consequently, X and Y are not jointly normally distributed, even though they are separately normally distributed.
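The value c ≈ 1.54 can be recovered numerically. The closed form cov(X, Y) = 3 − 4Φ(c) + 4cφ(c) used below is our own derivation (via integration by parts on the truncated second moment of X), not stated in the text; a bisection then finds its root, and a simulation exhibits the atom of X + Y at 0.

```python
import math
import random

# cov(X, Y) = E[XY] = E[X^2; |X| >= c] - E[X^2; |X| < c]
#           = 1 - 2 * (2*Phi(c) - 1 - 2*c*phi(c))
#           = 3 - 4*Phi(c) + 4*c*phi(c)        (our derivation)
def phi(t):                                     # standard normal density
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

def Phi(t):                                     # standard normal CDF
    return 0.5 * (1 + math.erf(t / math.sqrt(2)))

def cov(c):
    return 3 - 4 * Phi(c) + 4 * c * phi(c)

# cov is continuous, near +1 for small c and near -1 for large c,
# so bisection locates the unique sign change.
lo, hi = 0.1, 3.0
for _ in range(60):
    mid = (lo + hi) / 2
    if cov(mid) > 0:
        lo = mid
    else:
        hi = mid
c = (lo + hi) / 2
print(round(c, 3))          # approximately 1.54, as stated above

# The atom of X + Y at 0: X + Y = 0 exactly when |X| < c.
random.seed(0)
n = 100_000
frac_zero = sum(abs(random.gauss(0, 1)) < c for _ in range(n)) / n
print(round(frac_zero, 3))  # about 0.88, matching the probability above
```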