Durbin-Watson statistic

The Durbin-Watson statistic is a test statistic used to detect the presence of autocorrelation in the residuals from a regression analysis. It is named after James Durbin and Geoffrey Watson.

If $$e_t$$ is the residual associated with the observation at time t, then the test statistic is $$d = \frac{\sum_{t=2}^T (e_t - e_{t-1})^2}{\sum_{t=1}^T e_t^2}$$. Its value always lies between 0 and 4.
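The statistic is straightforward to compute from a vector of residuals. The following sketch (the function name `durbin_watson` is our own, not from any particular library) implements the formula above directly:

```python
import numpy as np

def durbin_watson(residuals):
    """Compute the Durbin-Watson statistic d from a sequence of
    regression residuals e_1, ..., e_T."""
    e = np.asarray(residuals, dtype=float)
    # Numerator: sum of squared successive differences, t = 2..T
    num = np.sum(np.diff(e) ** 2)
    # Denominator: sum of squared residuals, t = 1..T
    den = np.sum(e ** 2)
    return num / den
```

For example, strongly alternating residuals such as `[1, -1, 1, -1]` give a value near 4, while slowly drifting residuals such as `[0.5, 0.4, 0.3, 0.2]` give a value near 0.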

A value of 2 indicates that no autocorrelation is detected in the sample. If the Durbin-Watson statistic is substantially less than 2, there is evidence of positive serial correlation; as a rough rule of thumb, a value below 1.5 may be cause for concern. Small values of d indicate that successive error terms are, on average, close in value to one another, i.e. positively correlated. Large values of d indicate that successive error terms are, on average, very different in value from one another, i.e. negatively correlated.

To test for positive autocorrelation at significance α, the test statistic d is compared to lower and upper critical values ($$d_{L,\alpha}$$ and $$d_{U,\alpha}$$):
 * If $$d < d_{L,\alpha}$$, there is statistical evidence that the error terms are positively autocorrelated.
 * If $$d > d_{U,\alpha}$$, there is statistical evidence that the error terms are not positively autocorrelated.
 * If $$d_{L,\alpha} < d < d_{U,\alpha}$$, the test is inconclusive.

To test for negative autocorrelation at significance α, the test statistic (4 − d) is compared to the same lower and upper critical values ($$d_{L,\alpha}$$ and $$d_{U,\alpha}$$):
 * If $$(4 - d) < d_{L,\alpha}$$, there is statistical evidence that the error terms are negatively autocorrelated.
 * If $$(4 - d) > d_{U,\alpha}$$, there is statistical evidence that the error terms are not negatively autocorrelated.
 * If $$d_{L,\alpha} < (4 - d) < d_{U,\alpha}$$, the test is inconclusive.
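The two one-sided tests above can be combined into a single decision rule. In this sketch the critical values passed in are placeholders; in practice $$d_{L,\alpha}$$ and $$d_{U,\alpha}$$ must be looked up in a table for the given α, sample size, and number of regressors:

```python
def dw_decision(d, d_L, d_U):
    """Classify a Durbin-Watson statistic d given tabulated critical
    values d_L (lower) and d_U (upper) for the chosen significance level."""
    # Test for positive autocorrelation using d itself.
    if d < d_L:
        return "positive autocorrelation"
    # Test for negative autocorrelation using 4 - d.
    if (4 - d) < d_L:
        return "negative autocorrelation"
    # Inconclusive region for either one-sided test.
    if d < d_U or (4 - d) < d_U:
        return "inconclusive"
    return "no evidence of autocorrelation"
```

With illustrative (not tabulated) critical values d_L = 1.2 and d_U = 1.6, a statistic of 1.0 signals positive autocorrelation, 3.1 signals negative autocorrelation, 1.4 is inconclusive, and 2.0 shows no evidence of autocorrelation.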

The critical values, $$d_{L,\alpha}$$ and $$d_{U,\alpha}$$, vary by level of significance (α), the number of observations, and the number of predictors in the regression equation. Their derivation is complex; statisticians typically obtain them from the appendices of statistical texts (e.g. tables of bounds for critical values of the Durbin-Watson statistic, indexed by K = number of regressors and N = number of observations).

Durbin h-statistic
The Durbin-Watson statistic is biased for autoregressive moving average models, so that autocorrelation is underestimated. For large samples, however, one can easily compute the asymptotically normally distributed h-statistic:


 * $$h=\left(1-\frac{1}{2} d\right) \sqrt{\frac{T}{1-T \cdot \widehat{\operatorname{Var}}(\hat\beta_1)}}$$, with the estimated variance $$\widehat{\operatorname{Var}}(\hat\beta_1)$$ of the regression coefficient of the lagged dependent variable, valid for $$T \cdot \widehat{\operatorname{Var}}(\hat\beta_1) < 1$$.
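The h-statistic is a simple transformation of d. A minimal sketch (the function name `durbin_h` is our own), guarding the condition under which h is defined:

```python
import math

def durbin_h(d, T, var_beta1):
    """Durbin h-statistic from the Durbin-Watson statistic d, the sample
    size T, and the estimated variance of the coefficient on the lagged
    dependent variable. Defined only when T * var_beta1 < 1."""
    if T * var_beta1 >= 1:
        raise ValueError("h is undefined when T * Var(beta_1_hat) >= 1")
    return (1 - d / 2) * math.sqrt(T / (1 - T * var_beta1))
```

Note that d = 2 maps to h = 0, mirroring the fact that a Durbin-Watson value of 2 indicates no autocorrelation.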

Durbin-Watson test for panel data
For panel data this statistic can be generalized as follows:


 * If $$e_{i,t}$$ is the residual from an OLS regression with fixed effects for each panel i, associated with the observation in panel i at time t, then the test statistic is $$d_{pd}=\frac{\sum_{i=1}^N \sum_{t=2}^T (e_{i,t} - e_{i,t-1})^2} {\sum_{i=1}^N \sum_{t=1}^T e_{i,t}^2}$$

This statistic can be compared with tabulated rejection values [see, for example, Bhargava et al. (1982), p. 537]. These values depend on T (the length of the balanced panel, i.e. the number of time periods the individuals were surveyed), K (the number of regressors), and N (the number of individuals in the panel).
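The panel version sums the same numerator and denominator over all panels, with the key point that successive differences are taken only within each panel, never across panel boundaries. A minimal sketch (the function name `durbin_watson_panel` is our own):

```python
import numpy as np

def durbin_watson_panel(residuals_by_panel):
    """Panel Durbin-Watson statistic d_pd. `residuals_by_panel` is a list
    of 1-D residual sequences, one per panel i. Differences e_{i,t} -
    e_{i,t-1} are formed within each panel only (t = 2..T)."""
    num = sum(np.sum(np.diff(np.asarray(e, dtype=float)) ** 2)
              for e in residuals_by_panel)
    den = sum(np.sum(np.asarray(e, dtype=float) ** 2)
              for e in residuals_by_panel)
    return num / den
```

With a single panel this reduces to the ordinary Durbin-Watson statistic, and replicating the same residual series across several panels leaves the value unchanged.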