# Linear regression


## Overview

In statistics, linear regression is a regression method that models the relationship between a dependent variable Y, independent variables Xi, i = 1, ..., p, and a random term ε. The model can be written as

$Y=\beta _{0}+\beta _{1}X_{1}+\beta _{2}X_{2}+\cdots +\beta _{p}X_{p}+\varepsilon$

where $\beta _{0}$ is the intercept ("constant" term), the $\beta _{i}$ are the respective parameters of the independent variables, and $p$ is the number of parameters to be estimated in the linear regression. Linear regression can be contrasted with nonlinear regression.

This method is called "linear" because the relation of the response (the dependent variable $Y$ ) to the independent variables is assumed to be a linear function of the parameters. It is often erroneously thought that the reason the technique is called "linear regression" is that the graph of $Y=\beta _{0}+\beta x$ is a straight line or that $Y$ is a linear function of the X variables. But if the model is (for example)

$Y=\alpha +\beta x+\gamma x^{2}+\varepsilon$

the problem is still one of linear regression, because the model is linear in the parameters $\alpha$, $\beta$ and $\gamma$, even though the graph against $x$ by itself is not a straight line.
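This point can be illustrated numerically. The sketch below, using hypothetical data generated from a quadratic trend, fits the model above with an ordinary linear least-squares solve: the design matrix has columns $1$, $x$ and $x^{2}$, so the fit is linear in the parameters even though it is quadratic in $x$.

```python
import numpy as np

# Hypothetical data generated from a quadratic trend (illustrative only).
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 + 1.5 * x - 0.3 * x**2 + rng.normal(scale=0.5, size=x.size)

# Design matrix with columns 1, x, x^2: nonlinear in x, linear in the parameters.
X = np.column_stack([np.ones_like(x), x, x**2])

# One linear least-squares solve recovers (alpha, beta, gamma).
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
alpha, beta, gamma = coef
```

The fitted curve is a parabola, yet no nonlinear optimisation was needed.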

## Historical remarks

The earliest form of linear regression was the method of least squares, which was published by Legendre in 1805, and by Gauss in 1809. The term “least squares” is from Legendre’s term, moindres carrés. However, Gauss claimed that he had known the method since 1795.

Legendre and Gauss both applied the method to the problem of determining, from astronomical observations, the orbits of bodies about the sun. Euler had worked on the same problem (1748) without success. Gauss published a further development of the theory of least squares in 1821, including a version of the Gauss–Markov theorem.

## Notation and naming convention

In the notation below:

• a vector of variables is denoted using an arrow over the symbol, such as ${\vec {X}}$
• matrices are denoted using a bolded font, such as $\mathbf {X}$
• a vector of parameters ("constants") is a bolded β without subscript

The product of the matrix $\mathbf {X}$ and the parameter vector β is written $\mathbf {X} \beta$. The dependent variable, $Y$, is conventionally called the "response variable." The independent variables (in vector form) are called the explanatory variables or regressors. Other terms include "exogenous variables," "input variables," and "predictor variables."

A hat, ${\hat {}}$, over a variable denotes an estimated quantity; for example, ${\hat {\beta }}$ denotes the estimated value of the parameter vector β.

## The linear regression model

The linear regression model can be written in vector-matrix notation as

$\ Y=X\beta +\varepsilon .\,$

The term ε is the model's "error term" (a misnomer but a standard usage) and represents the unpredicted or unexplained variation in the response variable; it is conventionally called the "error" whether it is really a measurement error or not, and is assumed to be independent of ${\vec {X}}$. For simple linear regression, where there is only a single explanatory variable and two parameters, the above equation reduces to:

$y=a+bx+\varepsilon .\,$

An equivalent formulation that explicitly shows the linear regression as a model of conditional expectation can be given as

${\mbox{E}}(y|x)=\alpha +\beta x\,$

where the conditional distribution of y given x is the distribution of the error term shifted by the conditional mean $\alpha +\beta x$.

## Types of linear regression

There are many different approaches to solving the regression problem, that is, determining suitable estimates for the parameters.

### Least-squares analysis

Least-squares analysis was developed by Legendre and Gauss in the early nineteenth century (see the historical remarks above). This method uses the following Gauss-Markov assumptions:

• The random errors εi have expected value 0.
• The random errors εi are uncorrelated (this is weaker than an assumption of probabilistic independence).
• The random errors εi are homoscedastic, i.e., they all have the same variance.

(See also Gauss-Markov theorem). These assumptions imply that least-squares estimates of the parameters are optimal in a certain sense.

A linear regression with p parameters (including the regression intercept β1) and n data points (sample size), with $n\geq (p+1)$, allows construction of the following vectors and matrix:

${\begin{bmatrix}y_{1}\\y_{2}\\\vdots \\y_{n}\end{bmatrix}}={\begin{bmatrix}1&x_{12}&x_{13}&\dots &x_{1p}\\1&x_{22}&x_{23}&\dots &x_{2p}\\\vdots &\vdots &\vdots &&\vdots \\1&x_{n2}&x_{n3}&\dots &x_{np}\end{bmatrix}}{\begin{bmatrix}\beta _{1}\\\beta _{2}\\\vdots \\\beta _{p}\end{bmatrix}}+{\begin{bmatrix}\varepsilon _{1}\\\varepsilon _{2}\\\vdots \\\varepsilon _{n}\end{bmatrix}}$

or, in the vector-matrix notation above,

$\ y=\mathbf {X} \cdot \beta +\varepsilon .\,$

Each data point can be given as $({\vec {x}}_{i},y_{i})$, $i=1,2,\dots ,n$. For n = p, standard errors of the parameter estimates cannot be calculated. For n less than p, the parameters cannot be estimated at all.

The estimated values of the parameters can be given as

${\widehat {\beta }}=(\mathbf {X} ^{T}\mathbf {X} )^{-1}\mathbf {X} ^{T}{\vec {y}}$

Using the assumptions provided by the Gauss-Markov theorem, it is possible to analyse the results and determine whether or not the model determined using least-squares is valid. The number of degrees of freedom is given by n − p.
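The estimator above can be sketched in a few lines of numpy. The data below are hypothetical (n = 6 observations, p = 3 parameters, chosen only for illustration); the normal equations $\mathbf {X} ^{T}\mathbf {X} {\widehat {\beta }}=\mathbf {X} ^{T}{\vec {y}}$ are solved directly rather than by forming the inverse explicitly.

```python
import numpy as np

# Hypothetical data: n = 6 observations, p = 3 parameters (intercept + 2 regressors).
X = np.array([[1., 0., 1.],
              [1., 1., 2.],
              [1., 2., 1.],
              [1., 3., 4.],
              [1., 4., 3.],
              [1., 5., 5.]])
y = np.array([1.1, 2.8, 3.2, 6.1, 6.9, 9.0])

# Normal-equations estimate of beta; solving is numerically preferable
# to computing (X^T X)^{-1} and multiplying.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

n, p = X.shape
dof = n - p          # degrees of freedom, n - p
```

For larger or ill-conditioned problems, `np.linalg.lstsq` (a QR/SVD-based solver) gives the same estimate more stably.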

The residuals, representing the difference between the observations and the model's predictions, are required to analyse the regression. They are determined from

${\hat {\vec {\varepsilon }}}={\vec {y}}-\mathbf {X} {\hat {\beta }}\,$

The estimated standard deviation of the errors, ${\hat {\sigma }}$, is determined from

${{\hat {\sigma }}={\sqrt {\frac {{\hat {\vec {\varepsilon }}}^{T}{\hat {\vec {\varepsilon }}}}{n-p}}}={\sqrt {\frac {{\vec {y}}^{T}{\vec {y}}-{\hat {\vec {\beta }}}^{T}\mathbf {X} ^{T}{\vec {y}}}{n-p}}}}$

The variance in the errors can be described using the Chi-square distribution:

${\hat {\sigma }}^{2}\sim {\frac {\chi _{n-p}^{2}\ \sigma ^{2}}{n-p}}$

The $100(1-\alpha )\%$ confidence interval for the parameter $\beta _{i}$ is computed as follows:

${{\widehat {\beta }}_{i}\pm t_{{\frac {\alpha }{2}},n-p}{\hat {\sigma }}{\sqrt {(\mathbf {X} ^{T}\mathbf {X} )_{ii}^{-1}}}}$

where t follows the Student's t-distribution with $n-p$ degrees of freedom and $(\mathbf {X} ^{T}\mathbf {X} )_{ii}^{-1}$ denotes the entry in the $i^{th}$ row and $i^{th}$ column of the inverse of $\mathbf {X} ^{T}\mathbf {X}$.
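The residuals, ${\hat {\sigma }}$, and the parameter confidence intervals above can be sketched together. The data here are again hypothetical (a small simple-regression set, y roughly 1 + 2x), and `scipy.stats.t.ppf` supplies the t-distribution quantile.

```python
import numpy as np
from scipy import stats

# Hypothetical simple-regression data (intercept + one regressor).
X = np.array([[1., 0.], [1., 1.], [1., 2.], [1., 3.], [1., 4.], [1., 5.]])
y = np.array([0.9, 3.1, 5.0, 7.2, 8.8, 11.1])
n, p = X.shape

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat                      # residual vector y - X beta_hat

# Estimated error standard deviation with n - p degrees of freedom.
sigma_hat = np.sqrt(resid @ resid / (n - p))

# 100(1 - alpha)% confidence interval for each parameter beta_i.
alpha = 0.05
t_crit = stats.t.ppf(1 - alpha / 2, df=n - p)
se = sigma_hat * np.sqrt(np.diag(XtX_inv))    # sqrt of diagonal of (X^T X)^{-1}
ci_lower = beta_hat - t_crit * se
ci_upper = beta_hat + t_crit * se
```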

The $100(1-\alpha )\%$ mean response confidence interval for a prediction (interpolation or extrapolation) at a point ${\vec {x}}={\vec {x_{0}}}$ is given by:

${{\vec {x_{0}}}{\widehat {\beta }}\pm t_{{\frac {\alpha }{2}},n-p}{\hat {\sigma }}{\sqrt {{\vec {x_{0}}}(\mathbf {X} ^{T}\mathbf {X} )_{}^{-1}{\vec {x_{0}}}^{T}}}}$

where ${\vec {x_{0}}}=<1,x_{2},x_{3},...,x_{p}>$.

The $100(1-\alpha )\%$ predicted response confidence intervals for the data are given by:

${{\vec {x_{0}}}{\widehat {\beta }}\pm t_{{\frac {\alpha }{2}},n-p}{\hat {\sigma }}{\sqrt {1+{\vec {x_{0}}}(\mathbf {X} ^{T}\mathbf {X} )_{}^{-1}{\vec {x_{0}}}^{T}}}}$ .
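The two intervals differ only by the extra 1 under the square root, which makes the predicted-response interval strictly wider than the mean-response interval. A sketch, continuing the hypothetical data from the earlier examples, for a new point at $x=2.5$:

```python
import numpy as np
from scipy import stats

# Hypothetical fit (same style of synthetic data as the earlier sketches).
X = np.array([[1., 0.], [1., 1.], [1., 2.], [1., 3.], [1., 4.], [1., 5.]])
y = np.array([0.9, 3.1, 5.0, 7.2, 8.8, 11.1])
n, p = X.shape

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat
sigma_hat = np.sqrt(resid @ resid / (n - p))
t_crit = stats.t.ppf(0.975, df=n - p)         # 95% intervals

x0 = np.array([1., 2.5])                      # row vector <1, x_2> for the new point
y0 = x0 @ beta_hat                            # point prediction x0 beta_hat
leverage = x0 @ XtX_inv @ x0                  # x0 (X^T X)^{-1} x0^T

# Mean-response half-width (no "1+") vs. predicted-response half-width ("1+").
mean_half = t_crit * sigma_hat * np.sqrt(leverage)
pred_half = t_crit * sigma_hat * np.sqrt(1 + leverage)
```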

The regression sum of squares SSR is given by:

${{\mathit {SSR}}=\sum {\left({{\hat {y}}_{i}-{\bar {y}}}\right)^{2}}={\hat {\beta }}^{T}\mathbf {X} ^{T}{\vec {y}}-{\frac {1}{n}}\left({{\vec {y}}^{T}{\vec {u}}{\vec {u}}^{T}{\vec {y}}}\right)}$

where ${\bar {y}}={\frac {1}{n}}\sum y_{i}$ and ${\vec {u}}$ is an n by 1 unit vector (i.e. each element is 1). Note that the term ${\frac {1}{n}}y^{T}uu^{T}y$ is equivalent to ${\frac {1}{n}}(\sum y_{i})^{2}$ .
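These matrix identities, and the decomposition SSR + ESS = TSS that the following formulas state, can be checked numerically. The data below are hypothetical; note the identities hold exactly only when the model includes an intercept column.

```python
import numpy as np

# Hypothetical data; u would be the n-by-1 vector of ones.
X = np.array([[1., 0.], [1., 1.], [1., 2.], [1., 3.], [1., 4.], [1., 5.]])
y = np.array([0.9, 3.1, 5.0, 7.2, 8.8, 11.1])
n = len(y)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
y_hat = X @ beta_hat

correction = (y.sum() ** 2) / n               # (1/n) y^T u u^T y = (1/n)(sum y_i)^2
SSR = beta_hat @ X.T @ y - correction         # regression sum of squares
ESS = y @ y - beta_hat @ X.T @ y              # error sum of squares
TSS = y @ y - correction                      # total sum of squares
```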

The error sum of squares ESS is given by:

${{\mathit {ESS}}=\sum {\left({y_{i}-{\hat {y}}_{i}}\right)^{2}}={\vec {y}}^{T}{\vec {y}}-{\hat {\beta }}^{T}\mathbf {X} ^{T}{\vec {y}}}.$

The total sum of squares TSS is given by

${{\mathit {TSS}}=\sum {\left({y_{i}-{\bar {y}}}\right)^{2}}={\vec {y}}^{T}{\vec {y}}-{\frac {1}{n}}\left({{\vec {y}}^{T}{\vec {u}}{\vec {u}}^{T}{\vec {y}}}\right)={\mathit {SSR}}+{\mathit {ESS}}}.$

Pearson's coefficient of regression (the coefficient of determination), R², is then given as

${R^{2}={\frac {\mathit {SSR}}{\mathit {TSS}}}=1-{\frac {\mathit {ESS}}{\mathit {TSS}}}}.$

### Assessing the least-squares model

Once the above values have been computed, the model should be checked for two different things:

1. Whether the assumptions of least-squares are fulfilled and
2. Whether the model is valid

#### Checking model assumptions

The model assumptions are checked by calculating the residuals and plotting them. The residuals are calculated as follows:

${\hat {\vec {\varepsilon }}}={\vec {y}}-{\hat {\vec {y}}}={\vec {y}}-\mathbf {X} {\hat {\beta }}\,$

The following plots can be constructed to test the validity of the assumptions:

1. A normal probability plot of the residuals to test normality. The points should lie along a straight line.
2. A time series plot of the residuals, that is, plotting the residuals as a function of time.
3. Residuals against the explanatory variables, $\mathbf {X}$ .
4. Residuals against the fitted values, ${\hat {\vec {y}}}\,$ .
5. Residuals against the preceding residual.

Apart from the normal probability plot, there should be no noticeable pattern in any of these plots.

#### Checking model validity

The validity of the model can be checked using any of the following methods:

1. Using the confidence interval for each of the parameters, $\beta _{i}$ . If the confidence interval includes 0, the parameter can be removed from the model. Ideally, a new regression analysis excluding that parameter would then be performed, repeating until there are no more parameters to remove.
2. Calculating Pearson's coefficient of regression, R². The closer the value is to 1, the better the regression. This coefficient gives the fraction of the observed variation that can be explained by the explanatory variables.
3. Examining the observational and prediction confidence intervals. The smaller they are the better.
4. Computing the F-statistic.

### Modifications of least-squares analysis

There are various ways in which least-squares analysis can be modified, including

• weighted least squares, which is a generalisation of the least squares method
• polynomial fitting, which involves fitting a polynomial to the given data.

### Polynomial fitting

A polynomial fit is a specific type of multiple regression. The simple regression model (a first-order polynomial) can be trivially extended to higher orders. The regression model $y_{i}\,=\,\alpha _{0}+\alpha _{1}x_{i}+\alpha _{2}x_{i}^{2}+\cdots +\alpha _{m}x_{i}^{m}+\varepsilon _{i}\ (i=1,2,\dots ,n)$ is a system of polynomial equations of order m with polynomial coefficients $\{\alpha _{0},\dots ,\alpha _{m}\}$ . As before, we can express the model using data matrix $\mathbf {X}$ , target vector ${\vec {y}}$ and parameter vector ${\vec {\alpha }}$ . The ith row of $\mathbf {X}$ and ${\vec {y}}$ will contain the x and y value for the ith data sample. Then the model can be written as a system of linear equations:

${\begin{bmatrix}y_{1}\\y_{2}\\\vdots \\y_{n}\end{bmatrix}}={\begin{bmatrix}1&x_{1}&x_{1}^{2}&\dots &x_{1}^{m}\\1&x_{2}&x_{2}^{2}&\dots &x_{2}^{m}\\\vdots &\vdots &\vdots &&\vdots \\1&x_{n}&x_{n}^{2}&\dots &x_{n}^{m}\end{bmatrix}}{\begin{bmatrix}\alpha _{0}\\\alpha _{1}\\\alpha _{2}\\\vdots \\\alpha _{m}\end{bmatrix}}+{\begin{bmatrix}\varepsilon _{1}\\\varepsilon _{2}\\\vdots \\\varepsilon _{n}\end{bmatrix}}$

which, in pure matrix notation, remains as before,

$Y=\mathbf {X} {\vec {\alpha }}+\varepsilon ,\,$

and the vector of polynomial coefficients is

${\widehat {\vec {\alpha }}}=(\mathbf {X} ^{T}\mathbf {X} )^{-1}\;\mathbf {X} ^{T}Y.\,$

### Robust regression

A host of alternative approaches to the computation of regression parameters fall into the category known as robust regression. One technique minimizes the mean absolute error, or some other function of the residuals, instead of the mean squared error as in ordinary least squares. Robust regression is much more computationally intensive than least-squares regression and is somewhat more difficult to implement as well. While least-squares estimates are not very sensitive to violations of the normality assumption on the errors, this is not true when the variance or mean of the error distribution is unbounded, or when an analyst who can identify outliers is unavailable.

In the Stata community, "robust regression" means linear regression with Huber-White standard error estimates. This relaxes the assumption of homoscedasticity for the variance estimates only; the coefficients are still ordinary least squares (OLS) estimates.
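A minimal sketch of this idea, using hypothetical data: the OLS point estimates are kept unchanged, but the covariance of ${\widehat {\beta }}$ is estimated with the heteroskedasticity-consistent "sandwich" form $(\mathbf {X} ^{T}\mathbf {X} )^{-1}\mathbf {X} ^{T}\mathrm {diag} ({\hat {\varepsilon }}_{i}^{2})\mathbf {X} (\mathbf {X} ^{T}\mathbf {X} )^{-1}$ (the HC0 variant; Stata applies an additional finite-sample adjustment).

```python
import numpy as np

# Hypothetical data (illustrative only).
X = np.array([[1., 0.], [1., 1.], [1., 2.], [1., 3.], [1., 4.], [1., 5.]])
y = np.array([0.9, 3.1, 5.0, 7.2, 8.8, 11.1])
n, p = X.shape

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y                  # ordinary OLS estimates, unchanged
resid = y - X @ beta_hat

# "Meat" of the sandwich: X^T diag(e_i^2) X, built by scaling each row of X by e_i^2.
meat = X.T @ (resid[:, None] ** 2 * X)
robust_cov = XtX_inv @ meat @ XtX_inv         # sandwich covariance estimate
robust_se = np.sqrt(np.diag(robust_cov))      # Huber-White (HC0) standard errors
```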

## Applications of linear regression

### The trend line

For trend lines as used in technical analysis, see Trend lines (technical analysis)

A trend line represents a trend, the long-term movement in time series data after other components have been accounted for. It tells whether a particular data set (say GDP, oil prices or stock prices) has increased or decreased over a period of time. A trend line can simply be drawn by eye through a set of data points, but more properly its position and slope are calculated using statistical techniques like linear regression. Trend lines are typically straight lines, although some variations use higher-degree polynomials depending on the degree of curvature desired in the line.

Trend lines are sometimes used in business analytics to show changes in data over time. This has the advantage of being simple. Trend lines are often used to argue that a particular action or event (such as training, or an advertising campaign) caused observed changes at a point in time. This is a simple technique, and does not require a control group, experimental design, or a sophisticated analysis technique. However, it suffers from a lack of scientific validity in cases where other potential changes can affect the data.

### Examples

Linear regression is widely used in biological, behavioral and social sciences to describe relationships between variables. It ranks as one of the most important tools used in these disciplines.

#### Medicine

As one example, early evidence relating tobacco smoking to mortality and morbidity came from studies employing regression. Researchers usually include several variables in their regression analysis in an effort to remove factors that might produce spurious correlations. For the cigarette smoking example, researchers might include socio-economic status in addition to smoking to ensure that any observed effect of smoking on mortality is not due to some effect of education or income. However, it is never possible to include all possible confounding variables in a study employing regression. For the smoking example, a hypothetical gene might increase mortality and also cause people to smoke more. For this reason, randomized controlled trials are considered to be more trustworthy than a regression analysis.

#### Finance

Linear regression underlies the capital asset pricing model, and the concept of using Beta for analyzing and quantifying the systematic risk of an investment. This comes directly from the Beta coefficient of the linear regression model that relates the return on the investment to the return on all risky assets. 