Minimum mean square error



In statistics, minimum mean square error (MMSE) describes the estimator that attains the smallest possible mean squared error, that is, the smallest expected squared difference between the estimate and the true parameter value, among the estimators under consideration. MMSE estimators are therefore commonly described as optimal.

Let $\hat{\theta}$ be a point estimator of the parameter $\theta$. Its mean squared error is

$\mathrm{MSE}(\hat{\theta}) = \mathrm{E}\big[(\hat{\theta} - \theta)^2\big].$

Suppose there are two estimators $\hat{\theta}_1$ and $\hat{\theta}_2$ of the parameter $\theta$. Set $\mathrm{MSE}_1 = \mathrm{E}\big[(\hat{\theta}_1 - \theta)^2\big]$ and $\mathrm{MSE}_2 = \mathrm{E}\big[(\hat{\theta}_2 - \theta)^2\big]$ equal to the mean square errors of those two estimators. Then the relative efficiency of $\hat{\theta}_1$ and $\hat{\theta}_2$ can be defined as the ratio

$\mathrm{eff}(\hat{\theta}_1, \hat{\theta}_2) = \dfrac{\mathrm{MSE}_2}{\mathrm{MSE}_1},$

so that a value greater than one indicates that $\hat{\theta}_1$ has the smaller mean squared error.
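As a concrete illustration (not part of the original article), the following Python sketch uses a Monte Carlo simulation to estimate the mean squared errors of two estimators of a normal mean, the sample mean and the sample median, and their relative efficiency; the sample size, noise level, and true parameter value are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
theta, sigma, n, reps = 5.0, 2.0, 25, 100_000   # assumed true mean, noise sd, sample size, replications

# Draw many independent data sets and apply both estimators to each one.
samples = rng.normal(theta, sigma, size=(reps, n))
theta_hat_1 = samples.mean(axis=1)              # sample mean
theta_hat_2 = np.median(samples, axis=1)        # sample median

# Monte Carlo estimates of the mean squared errors.
mse_1 = np.mean((theta_hat_1 - theta) ** 2)
mse_2 = np.mean((theta_hat_2 - theta) ** 2)

print(f"MSE(mean)   = {mse_1:.4f}")
print(f"MSE(median) = {mse_2:.4f}")
# Relative efficiency of the mean with respect to the median; a value above 1
# means the sample mean has the smaller MSE (about pi/2 for normal data).
print(f"relative efficiency = {mse_2 / mse_1:.3f}")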

Operational Considerations

Unfortunately, the correct distribution from which to compute the mean squared error of an estimator is a point of contention between the Bayesian and frequentist schools of probability theory. Orthodox (frequentist) statistics employs a transformation of variables to obtain the probability distribution of the estimator from the sampling distribution of the data, so that the estimator's distribution does not depend on the actual data set obtained. This distribution correctly describes the variation of the estimator over all possible data sets.

Bayesian statistics instead holds that the correct distribution to use is the posterior distribution, which represents the probability an observer would assign to the parameter $\theta$ after observing the actual data set $D$:

$p(\theta \mid D, I) = \dfrac{p(D \mid \theta, I)\, p(\theta \mid I)}{p(D \mid I)},$

where $I$ represents some information the observer has about the nature of the parameter $\theta$. This distribution correctly describes the observer's state of knowledge about the parameter to be estimated after taking the observed data set into consideration.
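A minimal sketch of this Bayesian calculation, assuming normally distributed data with known standard deviation and a conjugate normal prior on the mean (the variable names and numerical values below are assumptions for illustration, not taken from the article):

import numpy as np

rng = np.random.default_rng(1)
theta_true, sigma, n = 5.0, 2.0, 25          # assumed true mean, known noise sd, sample size
data = rng.normal(theta_true, sigma, size=n) # the observed data set D

mu0, tau0 = 0.0, 10.0                        # assumed prior mean and prior standard deviation

# Conjugate normal-normal update: the posterior p(theta | D, I) is again normal.
post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
post_mean = post_var * (mu0 / tau0**2 + data.sum() / sigma**2)

print(f"posterior for theta: {post_mean:.3f} +/- {post_var**0.5:.3f}")

Under this model the posterior mean is the MMSE point estimate of $\theta$, since the posterior mean minimizes the expected squared error taken over the posterior distribution.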

These alternative viewpoints can sometimes, but not always, produce the same mean ± standard deviation answer, as, for example, in estimation of the mean of a normally distributed data set (Jaynes).
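As a worked sketch of that agreement (using standard notation that is not in the original article), suppose the data $x_1, \ldots, x_n$ are normally distributed with unknown mean $\theta$ and known variance $\sigma^2$, and the prior on $\theta$ is uniform. The posterior is then

$p(\theta \mid D, I) \propto \exp\!\left( -\dfrac{n(\theta - \bar{x})^2}{2\sigma^2} \right),$

a normal distribution with mean $\bar{x}$ and standard deviation $\sigma/\sqrt{n}$, so the Bayesian answer $\bar{x} \pm \sigma/\sqrt{n}$ coincides with the frequentist point estimate and its standard error.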

References


