# Nonlinear regression


## Overview

In statistics, nonlinear regression is the problem of inference for a model

$y = f(x,\theta) + \varepsilon$

based on multidimensional $x$, $y$ data, where $f$ is some nonlinear function with respect to unknown parameters $\theta$. At a minimum, we may wish to obtain the parameter values associated with the best-fitting curve (usually, by least squares). (See the article on curve fitting.) Statistical inference may also be needed, such as confidence intervals for the parameters, or a test of whether the fitted model agrees well with the data.

The scope of nonlinear regression is clarified by considering the case of polynomial regression, which actually is best not treated as a case of nonlinear regression. When $f$ takes a form such as

$f(x) = ax^{2} + bx + c,$

our function $f$ is nonlinear as a function of $x$, but it is linear as a function of the unknown parameters $a$, $b$, and $c$. The latter is the sense of "linear" in the context of statistical regression modeling. The appropriate computational procedures for polynomial regression are therefore procedures of (multiple) linear regression, with the two predictor variables $x$ and $x^{2}$, say. However, it is occasionally suggested that nonlinear regression is needed for fitting polynomials. Practical consequences of this misunderstanding include using a nonlinear optimization procedure when the solution is actually available in closed form, and forgoing linear-regression capabilities that are likely to be more comprehensive in some software than those for nonlinear regression.
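As a minimal sketch of this point, a quadratic can be fitted in closed form by ordinary linear least squares on the predictor columns $x^2$, $x$, and $1$. The data and true coefficients below are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical noise-free data generated from y = 2x^2 - 3x + 1.
x = np.linspace(-2.0, 2.0, 9)
y = 2.0 * x**2 - 3.0 * x + 1.0

# Design matrix with columns x^2, x, 1: the model is linear in (a, b, c).
X = np.column_stack([x**2, x, np.ones_like(x)])

# Closed-form (ordinary least squares) solution; no iterative optimizer needed.
(a, b, c), *_ = np.linalg.lstsq(X, y, rcond=None)
```

No starting values, iterations, or convergence checks are involved, which is exactly why treating polynomials as linear regression is preferable.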

## General

### Linearization

Some nonlinear regression problems can be linearized by a suitable transformation of the model formulation.

For example, consider the nonlinear regression problem (ignoring the error):

$y = ae^{bx}.\,\!$

Taking the logarithm of both sides gives

$\ln(y) = \ln(a) + bx,\,\!$

suggesting estimation of the unknown parameters by a linear regression of $\ln(y)$ on $x$, a computation that does not require iterative optimization. However, linearization should be used with caution. The influences of the data values will change, as will the error structure of the model and the interpretation of any inferential results; these may not be desired effects. On the other hand, depending on what the largest source of error is, linearization may make the errors approximately normally distributed, so the choice to linearize must be informed by modeling considerations.
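A minimal sketch of this linearization, assuming hypothetical noise-free data from $y = 3e^{0.5x}$ (so the transformed regression recovers the parameters exactly):

```python
import numpy as np

# Hypothetical noise-free data from y = 3 * exp(0.5 x), for illustration only.
x = np.linspace(0.0, 4.0, 10)
y = 3.0 * np.exp(0.5 * x)

# Regress ln(y) on x: the intercept estimates ln(a), the slope estimates b.
X = np.column_stack([np.ones_like(x), x])
(ln_a, b), *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
a = np.exp(ln_a)
```

With real noisy data, this log-scale fit minimizes squared errors in $\ln(y)$ rather than in $y$, which is the change of error structure cautioned about above.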

"Linearization" as used here is not to be confused with the local linearization involved in standard algorithms such as the Gauss-Newton algorithm. Similarly, the methodology of generalized linear models does not involve linearization for parameter estimation.

### Ordinary and weighted least squares

The best-fit curve is often taken to be the one that minimizes the sum of squared residuals (SSR). This is the ordinary least squares (OLS) approach. However, where different observations have different error variances, a weighted sum of squared residuals (SSWR) may be minimized instead; this is the weighted least squares (WLS) criterion. In practice, the variance may depend on the fitted mean, in which case the weights are recomputed on each iteration, giving an iteratively reweighted least squares algorithm.
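As a minimal sketch of the WLS criterion (shown on a straight-line model for brevity; the data and error standard deviations are hypothetical), the weights are the inverse error variances and the estimate solves the weighted normal equations:

```python
import numpy as np

# Hypothetical data lying exactly on y = 1 + 2x, with assumed known
# per-point error standard deviations sigma_i.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])
sigma = np.array([0.1, 0.2, 0.1, 0.3])

w = 1.0 / sigma**2                         # weights = inverse variances
X = np.column_stack([np.ones_like(x), x])  # columns: intercept, slope
W = np.diag(w)

# Weighted normal equations: (X^T W X) beta = X^T W y.
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

Points with smaller error variance receive larger weight and thus pull the fitted curve more strongly, which is the intended behavior when error variances differ.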

In general, there is no closed-form expression for the best-fitting parameters, as there is in linear regression, so numerical optimization algorithms are applied to determine them. Again in contrast to linear regression, there may be many local minima of the function to be minimized. In practice, initial guess values of the parameters are supplied to the optimization algorithm in an attempt to find the global minimum.
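As a minimal sketch of such an iterative fit, a bare-bones Gauss-Newton loop for the exponential model $y = ae^{bx}$ is shown below (hypothetical noise-free data with true parameters $(3, 0.5)$; starting guesses are assumed close enough to converge, which Gauss-Newton does not guarantee in general):

```python
import numpy as np

def gauss_newton_exp(x, y, a0, b0, iters=50):
    """Fit y ~ a*exp(b*x) by Gauss-Newton from starting guesses (a0, b0)."""
    a, b = a0, b0
    for _ in range(iters):
        f = a * np.exp(b * x)
        r = y - f                                   # residuals
        # Jacobian of f with respect to (a, b), evaluated at current estimates.
        J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        a, b = a + step[0], b + step[1]
        if np.max(np.abs(step)) < 1e-12:            # converged
            break
    return a, b

x = np.linspace(0.0, 2.0, 8)
y = 3.0 * np.exp(0.5 * x)                           # true (a, b) = (3, 0.5)
a_hat, b_hat = gauss_newton_exp(x, y, a0=2.0, b0=0.3)
```

Each iteration linearizes the model locally around the current parameter estimates; a poor starting guess can land the iteration in a different local minimum, or cause divergence, which is why guess values matter.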

### Monte Carlo estimation of errors

If the error of each data point is known, then the reliability of the parameters can be estimated by Monte Carlo simulation. Each data point is randomized according to its mean and standard deviation, the curve is fitted, and the parameters are recorded. The data points are then randomized again and new parameters determined. Repeating this many times generates many sets of parameters, from which the mean and standard deviation of each parameter can be calculated.
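This procedure can be sketched as follows (a straight-line fit is used so each refit is cheap; the data, error standard deviations, and random seed are hypothetical, for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: y = 1 + 2x with assumed known per-point error std devs.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_obs = 1.0 + 2.0 * x
sigma = np.full_like(x, 0.1)

X = np.column_stack([np.ones_like(x), x])
params = []
for _ in range(1000):
    # Randomize each data point according to its mean and standard deviation,
    # then refit and record the parameters.
    y_sim = rng.normal(y_obs, sigma)
    beta, *_ = np.linalg.lstsq(X, y_sim, rcond=None)
    params.append(beta)

params = np.asarray(params)
param_mean = params.mean(axis=0)   # parameter estimates
param_std = params.std(axis=0)     # Monte Carlo standard errors
```

The spread of the recorded parameter sets serves as an estimate of the parameter uncertainty implied by the stated measurement errors.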