Cochran's theorem


In statistics, Cochran's theorem concerns the distributions of quadratic forms in independent standard normal random variables and is used to justify results relating to the analysis of variance.

Suppose U1, ..., Un are independent standard normally distributed random variables, and an identity of the form

U_1^2 + U_2^2 + \cdots + U_n^2 = Q_1 + Q_2 + \cdots + Q_k

can be written, where each Q_i is a sum of squares of linear combinations of the Us. Further suppose that

r_1 + r_2 + \cdots + r_k = n,

where r_i is the rank of Q_i. Cochran's theorem states that the Q_i are independent, and each Q_i has a chi-square distribution with r_i degrees of freedom.
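As an illustrative sketch (not part of the original statement), the theorem's conclusion can be checked by simulation for the simplest two-term decomposition of a sum of squared standard normals, splitting it into the spread about the mean and the squared mean term; the Monte Carlo averages should approach the degrees of freedom n − 1 and 1.

```python
import random

# Hedged simulation sketch: for independent standard normals U_1..U_n,
# split sum(U_i^2) into Q1 = sum((U_i - Ubar)^2) and Q2 = n * Ubar^2.
# Cochran's theorem says Q1 ~ chi-square(n - 1) and Q2 ~ chi-square(1),
# so their Monte Carlo means should be near n - 1 and 1 respectively.
random.seed(0)
n, reps = 6, 20000
q1_mean = q2_mean = 0.0
for _ in range(reps):
    u = [random.gauss(0.0, 1.0) for _ in range(n)]
    ubar = sum(u) / n
    q1_mean += sum((x - ubar) ** 2 for x in u) / reps
    q2_mean += n * ubar ** 2 / reps
print(round(q1_mean, 2), round(q2_mean, 2))  # near 5 and 1
```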

Cochran's theorem is the converse of Fisher's theorem.

Example

If X1, ..., Xn are independent normally distributed random variables with mean μ and standard deviation σ, then

U_i = \frac{X_i - \mu}{\sigma}

is standard normal for each i.

It is possible to write

\sum_{i=1}^n U_i^2 = \sum_{i=1}^n \left( \frac{X_i - \overline{X}}{\sigma} \right)^2 + n \left( \frac{\overline{X} - \mu}{\sigma} \right)^2

(here \overline{X} is the sample mean and summation is from 1 to n, that is, over the observations). To see this identity, multiply throughout by \sigma^2 and note that

\sum (X_i - \mu)^2 = \sum (X_i - \overline{X} + \overline{X} - \mu)^2,

and expand to give

\sum (X_i - \mu)^2 = \sum (X_i - \overline{X})^2 + \sum (\overline{X} - \mu)^2 + 2 \sum (X_i - \overline{X})(\overline{X} - \mu).

The third term is zero because it is equal to a constant times

\sum (X_i - \overline{X}) = 0,

and the second term is just n identical terms added together, giving n(\overline{X} - \mu)^2.
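This sum-of-squares identity is purely algebraic, so it holds exactly for any data set; a minimal numeric sanity check (the data and μ below are arbitrary choices, not from the article):

```python
import random

# Sketch verifying the identity
#   sum (X_i - mu)^2 = sum (X_i - xbar)^2 + n * (xbar - mu)^2
# numerically for arbitrary data; mu and the sample are illustrative.
random.seed(0)
mu = 1.5
xs = [random.gauss(mu, 2.0) for _ in range(50)]
n = len(xs)
xbar = sum(xs) / n

lhs = sum((x - mu) ** 2 for x in xs)
rhs = sum((x - xbar) ** 2 for x in xs) + n * (xbar - mu) ** 2
assert abs(lhs - rhs) < 1e-8  # equal up to floating-point rounding
```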

Combining the above results (and dividing by \sigma^2), we have

\sum_{i=1}^n U_i^2 = \frac{\sum (X_i - \overline{X})^2}{\sigma^2} + \frac{n (\overline{X} - \mu)^2}{\sigma^2} = Q_1 + Q_2.

Now the rank of Q2 is just 1 (it is the square of just one linear combination of the standard normal variables). The rank of Q1 can be shown to be n − 1, and thus the conditions for Cochran's theorem are met.

Cochran's theorem then states that Q1 and Q2 are independent, with chi-square distributions with n − 1 and 1 degrees of freedom respectively.

This shows that the sample mean and the sample variance are independent. Moreover,

(\overline{X} - \mu)^2 \sim \frac{\sigma^2}{n} \chi^2_1.

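A hedged numeric sketch of this independence: simulate many samples, record the sample mean and sample variance of each, and check that their sample correlation is near zero. (Zero correlation is necessary but not sufficient for independence, so this is only a sanity check, not a proof.)

```python
import math
import random

# Sanity-check sketch: for normal samples, the sample mean and sample
# variance are independent, so their Monte Carlo correlation should be
# near zero. Sample sizes and parameters below are arbitrary choices.
random.seed(2)
n, mu, sigma, reps = 8, 0.0, 1.0, 20000
means, variances = [], []
for _ in range(reps):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(xs) / n
    means.append(xbar)
    variances.append(sum((x - xbar) ** 2 for x in xs) / (n - 1))

mm = sum(means) / reps
mv = sum(variances) / reps
cov = sum((a - mm) * (b - mv) for a, b in zip(means, variances)) / reps
sd_m = math.sqrt(sum((a - mm) ** 2 for a in means) / reps)
sd_v = math.sqrt(sum((b - mv) ** 2 for b in variances) / reps)
corr = cov / (sd_m * sd_v)
print(round(corr, 3))  # near 0
```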
To estimate the variance \sigma^2, one estimator that is often used is

\widehat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^n (X_i - \overline{X})^2.

Cochran's theorem shows that

\frac{n \widehat{\sigma}^2}{\sigma^2} \sim \chi^2_{n-1},

which shows that the expected value of \widehat{\sigma}^2 is \sigma^2 (n - 1) / n.
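This bias can be seen directly by simulation; a small sketch with arbitrarily chosen σ² = 4 and n = 5, where the average of the estimator over many replications should approach σ²(n − 1)/n = 3.2 rather than 4:

```python
import random

# Hedged Monte Carlo sketch: with sigma^2 = 4 and n = 5, the mean of the
# biased estimator (1/n) * sum((X_i - xbar)^2) should approach
# sigma^2 * (n - 1) / n = 3.2 over many replications.
random.seed(1)
n, mu, sigma = 5, 0.0, 2.0
reps = 20000
acc = 0.0
for _ in range(reps):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(xs) / n
    acc += sum((x - xbar) ** 2 for x in xs) / n
print(round(acc / reps, 2))  # near 3.2, not 4
```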

Both these distributions are proportional to the true but unknown variance \sigma^2; thus their ratio does not depend on \sigma^2, and because they are statistically independent we have

\frac{n (\overline{X} - \mu)^2}{\frac{1}{n-1} \sum (X_i - \overline{X})^2} \sim F_{1, n-1},

where F_{1, n-1} is the F-distribution with 1 and n − 1 degrees of freedom (see also Student's t-distribution).
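As a final hedged sketch, the F-distributed ratio can be simulated and its Monte Carlo mean compared with the known mean of an F-distribution, df2/(df2 − 2); for the arbitrarily chosen n = 10 below that is 9/7 ≈ 1.286:

```python
import random

# Sketch: simulate T = n*(xbar - mu)^2 / (sum((x - xbar)^2) / (n - 1)),
# which by Cochran's theorem follows F(1, n - 1). For n = 10 the F(1, 9)
# mean is 9/7 ~ 1.286, which the Monte Carlo average should approach.
random.seed(3)
n, mu, sigma, reps = 10, 5.0, 3.0, 40000
acc = 0.0
for _ in range(reps):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(xs) / n
    num = n * (xbar - mu) ** 2
    den = sum((x - xbar) ** 2 for x in xs) / (n - 1)
    acc += num / den
print(round(acc / reps, 3))  # near 9/7
```

Note that σ cancels in the ratio, as the text states: changing sigma above leaves the distribution of the ratio unchanged.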


