Chapman–Robbins bound

In statistics, the Chapman–Robbins bound or Hammersley–Chapman–Robbins bound is a lower bound on the variance of estimators of a deterministic parameter. It is a generalization of the Cramér–Rao bound; compared to the Cramér–Rao bound, it is both tighter and applicable to a wider range of problems. However, it is usually more difficult to compute.

The bound was independently discovered by Hammersley in 1950 and by Chapman and Robbins in 1951.

Statement
Let $$\theta \in {\mathbb R}^n$$ be an unknown, deterministic parameter, and let $$X \in {\mathbb R}^k$$ be a random variable, interpreted as a measurement of $$\theta$$. Suppose the probability density function of $$X$$ is given by $$p(x;\theta)$$. It is assumed that $$p(x;\theta)$$ is well-defined and positive for all values of $$x$$ and $$\theta$$.

Suppose $$\delta(X)$$ is an unbiased estimator of a scalar function $$g(\theta)$$ of $$\theta$$, i.e.,
 * $$E_\theta\{\delta(X)\} = g(\theta)$$ for all $$\theta$$.

The Chapman–Robbins bound then states that
 * $$\mathrm{Var}(\delta(X)) \ge \sup_{\Delta \neq 0} \frac{\left[ g(\theta+\Delta) - g(\theta) \right]^2}{E_\theta \left[ \left( \tfrac{p(X;\theta+\Delta)}{p(X;\theta)} - 1 \right)^2 \right]},$$

where the supremum is taken over all $$\Delta \neq 0$$ and the expectation $$E_\theta$$ is taken with respect to the distribution of $$X$$ under $$\theta$$, i.e., with density $$p(x;\theta)$$.
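To make the bound concrete, it can be evaluated numerically by scanning a grid of offsets $$\Delta$$. The following minimal sketch is not from the source: it assumes, purely for illustration, a Gaussian location model $$X \sim N(\theta, \sigma^2)$$ with $$g(\theta) = \theta$$, and approximates the expectation in the denominator by numerical integration.

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

# Illustrative model (an assumption, not from the article):
# X ~ N(theta, sigma^2), estimating g(theta) = theta,
# so the Cramer-Rao bound is sigma^2.
theta, sigma = 0.0, 1.0

def cr_candidate(delta):
    """[g(theta+delta) - g(theta)]^2 / E_theta[(p(X;theta+delta)/p(X;theta) - 1)^2]."""
    def integrand(x):
        ratio = norm.pdf(x, theta + delta, sigma) / norm.pdf(x, theta, sigma)
        return (ratio - 1.0) ** 2 * norm.pdf(x, theta, sigma)
    denom, _ = integrate.quad(integrand, theta - 20 * sigma, theta + 20 * sigma)
    return delta ** 2 / denom

# The Chapman-Robbins bound is the supremum over nonzero offsets.
deltas = np.linspace(1e-3, 3.0, 300)
print(max(cr_candidate(d) for d in deltas))  # close to sigma^2 = 1 here
```

For this particular family the candidate value decreases as $$|\Delta|$$ grows, so the supremum is approached as $$\Delta \to 0$$ and coincides with the Cramér–Rao bound; families in which some finite $$\Delta$$ attains a larger value are exactly those in which Chapman–Robbins is strictly tighter.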

Relation to Cramér–Rao bound
As $$\Delta \rightarrow 0$$, the expression inside the supremum in the Chapman–Robbins bound converges to the Cramér–Rao bound, assuming the regularity conditions of the Cramér–Rao bound hold. Since a supremum is at least as large as any limiting value of its argument, it follows that, when both bounds exist, the Chapman–Robbins bound is always at least as tight as the Cramér–Rao bound; in many cases it is substantially tighter.
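To sketch why (assuming, for simplicity, scalar $$\theta$$, a density smooth in $$\theta$$, differentiable $$g$$, and enough regularity to expand under the expectation), a first-order Taylor expansion gives
 * $$\frac{p(x;\theta+\Delta)}{p(x;\theta)} - 1 = \Delta \, \frac{\partial \log p(x;\theta)}{\partial \theta} + o(\Delta),$$
so the denominator behaves like $$\Delta^2 I(\theta) + o(\Delta^2)$$, where $$I(\theta)$$ is the Fisher information, while the numerator is $$\Delta^2 g'(\theta)^2 + o(\Delta^2)$$. Hence
 * $$\frac{\left[ g(\theta+\Delta) - g(\theta) \right]^2}{E_\theta \left[ \left( \tfrac{p(X;\theta+\Delta)}{p(X;\theta)} - 1 \right)^2 \right]} \;\xrightarrow{\Delta \to 0}\; \frac{g'(\theta)^2}{I(\theta)},$$
which is the Cramér–Rao bound for unbiased estimation of $$g(\theta)$$.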

The Chapman–Robbins bound also holds under much weaker regularity conditions than the Cramér–Rao bound. For example, it makes no assumption about the differentiability of the probability density function $$p(x;\theta)$$ with respect to $$\theta$$. When $$p(x;\theta)$$ is non-differentiable in $$\theta$$, the Fisher information is not defined, and hence the Cramér–Rao bound does not exist. The worked example below illustrates such a case.
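As a worked illustration (a sketch; note it relaxes the positivity assumption of the Statement section, requiring only that the likelihood ratio be well-defined under $$p(\cdot;\theta)$$), consider a single observation $$X \sim U(0,\theta)$$ with $$g(\theta) = \theta$$. The support depends on $$\theta$$, the density is not differentiable in $$\theta$$, and the Cramér–Rao bound does not apply. Taking $$\Delta = -h$$ with $$0 < h < \theta$$, the likelihood ratio equals $$\theta/(\theta-h)$$ on $$[0, \theta-h]$$ and $$0$$ on $$(\theta-h, \theta]$$, so
 * $$E_\theta \left[ \left( \tfrac{p(X;\theta-h)}{p(X;\theta)} - 1 \right)^2 \right] = \frac{\theta-h}{\theta} \left( \frac{h}{\theta-h} \right)^2 + \frac{h}{\theta} = \frac{h}{\theta-h},$$
and the Chapman–Robbins bound gives
 * $$\mathrm{Var}(\delta(X)) \ge \sup_{0<h<\theta} \frac{h^2 (\theta-h)}{h} = \sup_{0<h<\theta} h(\theta-h) = \frac{\theta^2}{4}.$$
For comparison, the unbiased estimator $$\delta(X) = 2X$$ has variance $$\theta^2/3$$, consistent with the bound.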