# Coefficient of determination

In statistics, the **coefficient of determination** *R*^{2} is the proportion of variability in a data set that is accounted for by a statistical model. In this definition, the term "variability" is defined as the sum of squares. There are equivalent expressions for *R*^{2} based on an analysis of variance decomposition. A general version, based on comparing the variability of the estimation errors with the variability of the original values, is

$$R^{2} = 1 - \frac{SS_{\text{err}}}{SS_{\text{tot}}}.$$

Another version is common in statistics texts but holds only if the modelled values are obtained by ordinary least squares regression (which must include a fitted intercept or constant term): it is

$$R^{2} = \frac{SS_{\text{reg}}}{SS_{\text{tot}}}.$$

In the above definitions,

$$SS_{\text{tot}} = \sum_i (y_i - \bar{y})^2, \qquad SS_{\text{reg}} = \sum_i (\hat{y}_i - \bar{y})^2, \qquad SS_{\text{err}} = \sum_i (y_i - \hat{y}_i)^2,$$

where *y*_{i} and *ŷ*_{i} are the original data values and modelled values respectively, and *ȳ* is the mean of the observed data. That is, SS_{tot} is the total sum of squares, SS_{reg} is the regression sum of squares, and SS_{err} is the sum of squared errors. In some texts, the abbreviations SSR and SSE have the opposite meaning: SSR stands for the residual sum of squares (which then refers to the sum of squared errors above) and SSE stands for the explained sum of squares (another name for the regression sum of squares).

In the second definition, R^{2} is the ratio of the variability of the modelled values to the variability of the original data values. Another version of the definition, which again only holds if the modelled values are obtained by ordinary least squares regression, gives R^{2} as the square of the correlation coefficient between the original and modelled data values.
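
As an illustration, here is a minimal sketch (Python with NumPy; the data values are made up for the example) that computes the three sums of squares for a one-regressor ordinary least-squares fit and checks that the two definitions above agree:

```python
import numpy as np

# Made-up data values for illustration; a one-regressor OLS fit with an intercept.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.9, 4.2, 5.8, 8.1, 9.7, 12.3])

slope, intercept = np.polyfit(x, y, 1)   # ordinary least squares with a constant term
y_hat = intercept + slope * x            # modelled values
y_bar = y.mean()                         # mean of the observed data

ss_tot = np.sum((y - y_bar) ** 2)        # total sum of squares
ss_reg = np.sum((y_hat - y_bar) ** 2)    # regression (explained) sum of squares
ss_err = np.sum((y - y_hat) ** 2)        # sum of squared errors

r2_general = 1.0 - ss_err / ss_tot       # general definition
r2_ols = ss_reg / ss_tot                 # valid here because the fit is OLS with an intercept
print(r2_general, r2_ols)                # the two values agree for this fit
```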

R^{2} is a statistic that will give some information about the goodness of fit of a model. In regression, the R^{2} coefficient of determination is a statistical measure of how well the regression line approximates the real data points. An R^{2} of 1.0 indicates that the regression line perfectly fits the data.

In some (but not all) instances where *R*^{2} is used, the predictors are calculated by ordinary least-squares regression: that is, by minimising SS_{err}. In this case *R*^{2} increases as the number of variables in the model increases (*R*^{2} will not decrease). This illustrates a drawback to one possible use of *R*^{2}, where one might keep adding variables to the model until "there is no more improvement". This leads to the alternative approach of looking at the adjusted *R*^{2}. The explanation of this statistic is almost the same as that of *R*^{2}, but it penalizes the statistic as extra variables are included in the model. For cases other than fitting by ordinary least squares, the *R*^{2} statistic can be calculated as above and may still be a useful measure. However, the conclusion that *R*^{2} increases with extra variables no longer holds, although downward variations are usually small. If fitting is by weighted least squares or generalized least squares, alternative versions of *R*^{2} can be calculated appropriate to those statistical frameworks, while the "raw" *R*^{2} may still be useful if it is more easily interpreted. Values for *R*^{2} can be calculated for any type of predictive model, which need not have a statistical basis. The effect of adding variables is illustrated in the sketch below.
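
The following sketch (Python/NumPy, with synthetic data, and using the adjusted-*R*^{2} formula given later in this article) adds an irrelevant regressor to a fitted model: *R*^{2} does not decrease, while the adjusted *R*^{2} is penalized.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)    # the "true" relationship uses only x
noise = rng.normal(size=n)                # an irrelevant extra regressor

def r_squared(regressors, y):
    """R^2 for an OLS fit of y on the given regressors (intercept included)."""
    X = np.column_stack([np.ones(len(y))] + list(regressors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

def adjusted_r_squared(r2, n, p):
    """Adjusted R^2 with p regressors, not counting the constant term."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

r2_one = r_squared([x], y)                # model with the relevant regressor only
r2_two = r_squared([x, noise], y)         # same model plus the irrelevant regressor

print(r2_two >= r2_one)                   # True: R^2 cannot decrease
print(adjusted_r_squared(r2_one, n, 1))
print(adjusted_r_squared(r2_two, n, 2))   # typically smaller: the extra term is penalized
```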

## Explanation and interpretation of *R*^{2}

For expository purposes, consider a linear model of the form

$$Y_i = \beta_0 + \beta_1 X_{1,i} + \beta_2 X_{2,i} + \cdots + \beta_p X_{p,i} + \varepsilon_i,$$

where, for the *i*-th case, *Y*_{i} is the response variable, *X*_{1,i}, …, *X*_{p,i} are the *p* regressors, and ε_{i} is a mean-zero error term. The quantities β_{0}, …, β_{p} are unknown coefficients, whose values are estimated by least squares. The coefficient of determination *R*^{2} is a measure of the global fit of the model. Specifically, *R*^{2} is an element of [0, 1] and represents the proportion of variability in *Y*_{i} that may be attributed to some linear combination of the regressors (explanatory variables) in *X*.

More simply, *R*^{2} is often interpreted as the proportion of response variation "explained" by the regressors in the model. Thus, *R*^{2} = 1 indicates that the fitted model explains all variability in *y*, while *R*^{2} = 0 indicates no 'linear' relationship between the response variable and the regressors. An interior value such as *R*^{2} = 0.7 may be interpreted as follows: "Approximately seventy percent of the variation in the response variable can be explained by the explanatory variables. The remaining thirty percent can be attributed to unknown, lurking variables or inherent variability."

A caution that applies to *R*^{2}, as to other statistical descriptions of correlation and association, is that "correlation does not imply causation." In other words, while correlations may provide valuable clues regarding causal relationships among variables, a high correlation between two variables is not adequate evidence that changing one variable has caused, or could cause, changes in the other.

In the case of a single regressor, fitted by least squares, *R*^{2} is the square of the Pearson product-moment correlation coefficient relating the regressor and the response variable. More generally, *R*^{2} is the square of the correlation between the constructed predictor and the response variable.
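
A minimal numerical check of this relationship, assuming made-up single-regressor data fitted by least squares with an intercept:

```python
import numpy as np

# Made-up single-regressor data, fitted by least squares with an intercept.
x = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0])
y = np.array([3.1, 4.9, 7.2, 8.8, 11.4, 12.9, 15.3])

slope, intercept = np.polyfit(x, y, 1)
y_hat = intercept + slope * x

r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
pearson_r = np.corrcoef(x, y)[0, 1]

print(np.isclose(r2, pearson_r ** 2))   # True: R^2 equals the squared correlation coefficient
```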

## Inflation of *R*^{2}

In least squares regression, *R*^{2} is weakly increasing in the number of regressors in the model. As such, *R*^{2} cannot be used as a meaningful comparison of models with different numbers of independent variables. As a reminder of this, some authors denote *R*^{2} by *R*^{2}_{p}, where *p* is the number of columns in *X*.

Demonstration of this property is straightforward. To begin, recall that the objective of least squares regression is (in matrix notation)

$$\min_{b} \; SS_{\text{err}}(b), \qquad \text{where } SS_{\text{err}}(b) = (y - Xb)^{\mathsf{T}}(y - Xb).$$

The optimal value of the objective is weakly smaller as additional columns of *X* are added, because a less constrained minimization always attains a value at least as small as a more constrained one. Given this conclusion, and noting that SS_{tot} depends only on *y*, the non-decreasing property of *R*^{2} follows directly from the definition above.
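
This argument can be illustrated numerically. The sketch below (Python/NumPy, synthetic data) compares the minimised objective for a design matrix and for the same matrix with an extra, irrelevant column:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
y = rng.normal(size=n)
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])   # constant term plus one regressor
X2 = np.column_stack([X1, rng.normal(size=n)])           # the same columns plus one more

def min_ss_err(X, y):
    """Minimised sum of squared errors for an OLS fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

# X2 can reproduce any fit available to X1 by setting the extra coefficient to
# zero, so its minimised objective, and hence SS_err, is never larger.
print(min_ss_err(X2, y) <= min_ss_err(X1, y))   # True
```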

## Adjusted *R*^{2}

Adjusted *R*^{2} is a modification of *R*^{2} that adjusts for the number of explanatory terms in a model. Unlike *R*^{2}, the adjusted *R*^{2} increases only if the new term improves the model more than would be expected by chance. The adjusted *R*^{2} can be negative, and will always be less than or equal to *R*^{2}. The adjusted *R*^{2} is defined as

$$\bar{R}^{2} = 1 - (1 - R^{2})\,\frac{n - 1}{n - p - 1},$$

where *p* is the total number of regressors in the linear model (not counting the constant term), and *n* is the sample size.

The principle behind the adjusted *R*^{2} statistic can be seen by rewriting the ordinary *R*^{2} as

$$R^{2} = 1 - \frac{\text{VAR}_{\text{err}}}{\text{VAR}_{\text{tot}}},$$

where VAR_{err} = SS_{err}/*n* and VAR_{tot} = SS_{tot}/*n* are estimates of the variances of the errors and of the observations, respectively. In the adjusted *R*^{2}, these estimates are replaced by notionally "unbiased" versions: VAR_{err} = SS_{err}/(*n* − *p* − 1) and VAR_{tot} = SS_{tot}/(*n* − 1).
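
A small numerical check (Python/NumPy, synthetic data) that substituting the "unbiased" variance estimates reproduces the adjusted-*R*^{2} formula given above:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 30, 3                                              # sample size and number of regressors
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
y = X @ np.array([1.0, 0.5, -0.3, 0.8]) + rng.normal(size=n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
ss_err = np.sum((y - X @ beta) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_err / ss_tot

# Plain R^2 uses the variance estimates SS/n; the adjusted version swaps in the
# "unbiased" estimates SS_err/(n - p - 1) and SS_tot/(n - 1).
adj_from_variances = 1.0 - (ss_err / (n - p - 1)) / (ss_tot / (n - 1))
adj_from_formula = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

print(np.isclose(adj_from_variances, adj_from_formula))  # True: the two forms agree
```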

Adjusted *R*^{2} *does not have the same interpretation as R*^{2}. As such, care must be taken in interpreting and reporting this statistic. Adjusted *R*^{2} is particularly useful in the feature selection stage of model building.

Adjusted *R*^{2} is not always *better* than *R*^{2}: adjusted *R*^{2} will be more useful only if the *R*^{2} is calculated based on a sample, not the entire population. For example, if our unit of analysis is a state, and we have data for all counties, then adjusted *R*^{2} will not yield any more useful information than *R*^{2}.

## Notes on interpreting *R*^{2}

*R*^{2} does *NOT* tell whether:

- the independent variables are a true cause of the changes in the dependent variable
- omitted-variable bias exists
- the correct regression was used
- the most appropriate set of independent variables has been chosen
- there is collinearity present in the data
- the model might be improved by using transformed versions of the existing set of independent variables

## External links

- Adjusted R-Square Calculator
- R-squared is an often-misused criterion for goodness-of-fit, where bigger isn't always better.
- Rules for Cheaters: How to Get a High R squared