Degrees of freedom (statistics)
In statistics, the term degrees of freedom has two distinct senses.
In fitting statistical models to data, the vector of residuals is often constrained to lie in a space of smaller dimension than the number of its components. That smaller dimension is the number of degrees of freedom for error.
Perhaps the simplest example is this. Suppose X1, ..., Xn are random variables, each with expected value μ, and let

\bar{X}_n = \frac{X_1 + \cdots + X_n}{n}

be the "sample mean". Then the quantities

\hat{\varepsilon}_i = X_i - \bar{X}_n

are residuals that may be considered estimates of the errors Xi − μ. The sum of the residuals (unlike the sum of the errors) is necessarily 0. That means they are constrained to lie in a space of dimension n − 1. If one knows the values of any n − 1 of the residuals, one can thus find the last one. One says that "there are n − 1 degrees of freedom for error."
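The sum-to-zero constraint can be checked numerically. The following is an illustrative sketch in plain Python with made-up data, not part of the original article:

```python
# Illustrative sketch (invented data): residuals about the sample mean
# sum to zero, so any n - 1 of them determine the last one.
xs = [4.0, 7.0, 1.0, 6.0, 2.0]        # a sample of n = 5 observations
mean = sum(xs) / len(xs)              # the sample mean
residuals = [x - mean for x in xs]    # estimates of the errors X_i - mu

# The residuals are constrained: their sum is (numerically) zero...
assert abs(sum(residuals)) < 1e-12

# ...so the last residual equals minus the sum of the first n - 1.
last = -sum(residuals[:-1])
assert abs(last - residuals[-1]) < 1e-12
```

Geometrically, the residual vector lives in the (n − 1)-dimensional hyperplane orthogonal to the all-ones vector.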
An only slightly less simple example is that of least squares estimation of a and b in the model

Y_i = a + b x_i + \varepsilon_i, \quad i = 1, \ldots, n,

where the xi are given, and the εi, and hence the Yi, are random. Let \hat{a} and \hat{b} be the least-squares estimates of a and b. Then the residuals

\hat{e}_i = y_i - (\hat{a} + \hat{b} x_i)

are constrained to lie within the space defined by the two equations

\sum_{i=1}^n \hat{e}_i = 0, \qquad \sum_{i=1}^n x_i \hat{e}_i = 0.
One says that there are n − 2 degrees of freedom for error.
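The two constraints are exactly the normal equations of least squares, and they can be verified directly. A minimal sketch in plain Python, with invented data:

```python
# Sketch (invented data): ordinary least squares for y_i = a + b*x_i + e_i,
# checking the two linear constraints that leave n - 2 degrees of freedom.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 2.9, 4.2, 4.8, 6.1]        # made-up observations

n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n

# Closed-form least-squares estimates of the slope b and the intercept a.
b_hat = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
a_hat = ybar - b_hat * xbar

residuals = [y - (a_hat + b_hat * x) for x, y in zip(xs, ys)]

# The normal equations force both sums to zero: n - 2 degrees of freedom remain.
assert abs(sum(residuals)) < 1e-9
assert abs(sum(x * e for x, e in zip(xs, residuals))) < 1e-9
```

With n = 5 observations and two fitted parameters, the residual vector is confined to a 3-dimensional subspace.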
The capital Y is used in specifying the model, and lower-case y in the definition of the residuals. That is because the former are hypothesized random variables and the latter are data.
Another simple and frequently seen example arises in multiple comparisons.
Parameters in probability distributions
The probability distributions of residuals are often parametrized by these numbers of degrees of freedom. Thus one speaks of a chi-square distribution with a specified number of degrees of freedom, an F-distribution with specified numbers of degrees of freedom in the numerator and the denominator, a Student's t-distribution, or a Wishart distribution.
In the familiar uses of these distributions, the number of degrees of freedom takes only integer values. The underlying mathematics in most cases allows for fractional degrees of freedom, which can arise in more sophisticated uses.
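As an illustration that integer values are not mathematically required, the chi-square density can be evaluated at a fractional number of degrees of freedom using only the gamma function. A small sketch in plain Python (the function name is my own, not from the article):

```python
import math

def chi2_pdf(x, k):
    """Chi-square density with k degrees of freedom; k may be fractional."""
    if x <= 0:
        return 0.0
    return (x ** (k / 2 - 1) * math.exp(-x / 2)
            / (2 ** (k / 2) * math.gamma(k / 2)))

# Sanity check: with k = 2 the density reduces to exp(-x/2)/2.
assert abs(chi2_pdf(1.0, 2) - math.exp(-0.5) / 2) < 1e-12

# Fractional degrees of freedom, e.g. k = 2.5, are equally well defined.
assert chi2_pdf(1.0, 2.5) > 0
```

The formula is the standard chi-square density; nothing in it forces k to be an integer, which is why fractional degrees of freedom appear in approximations such as Welch's t-test.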