# Order statistic

*Figure: probability distributions for the n = 5 order statistics of an exponential distribution with θ = 3.*

In statistics, the kth order statistic of a statistical sample is equal to its kth-smallest value. Together with rank statistics, order statistics are among the most fundamental tools in non-parametric statistics and inference.

Important special cases of the order statistics are the minimum and maximum value of a sample, and (with some qualifications discussed below) the sample median and other sample quantiles.

When using probability theory to analyse order statistics of random samples from a continuous distribution, the cumulative distribution function is used to reduce the analysis to the case of order statistics of the uniform distribution.

## Notation and examples

For example, suppose that four numbers are observed or recorded, resulting in a sample of size ${\displaystyle n=4}$. If the sample values are

6, 9, 3, 8,

they will usually be denoted

${\displaystyle x_{1}=6;\ \ x_{2}=9;\ \ x_{3}=3;\ \ x_{4}=8\,}$

where the subscript i in ${\displaystyle x_{i}}$ indicates simply the order in which the observations were recorded and is usually assumed not to be significant. A case when the order is significant is when the observations are part of a time series.

The order statistics would be denoted

${\displaystyle x_{(1)}=3;\ \ x_{(2)}=6;\ \ x_{(3)}=8;\ \ x_{(4)}=9\,}$

where the subscript (i) enclosed in parentheses indicates the ith order statistic of the sample.

The first order statistic (or smallest order statistic) is always the minimum of the sample, that is,

${\displaystyle X_{(1)}=\min\{\,X_{1},\ldots ,X_{n}\,\}}$

where, following a common convention, we use upper-case letters to refer to random variables, and lower-case letters (as above) to refer to their actual observed values.

Similarly, for a sample of size n, the nth order statistic (or largest order statistic) is the maximum, that is,

${\displaystyle X_{(n)}=\max\{\,X_{1},\ldots ,X_{n}\,\}.}$

The sample range is the difference between the maximum and minimum. It is clearly a function of the order statistics:

${\displaystyle {\rm {Range}}\{\,X_{1},\ldots ,X_{n}\,\}=X_{(n)}-X_{(1)}.}$

A similar important statistic in exploratory data analysis that is simply related to the order statistics is the sample interquartile range.

The sample median may or may not be an order statistic, since there is a single middle value only when the number ${\displaystyle n}$ of observations is odd. More precisely, if ${\displaystyle n=2m+1}$ for some ${\displaystyle m}$, then the sample median is ${\displaystyle X_{(m+1)}}$ and so is an order statistic. On the other hand, when ${\displaystyle n}$ is even, ${\displaystyle n=2m}$ and there are two middle values, ${\displaystyle X_{(m)}}$ and ${\displaystyle X_{(m+1)}}$, and the sample median is some function of the two (usually the average) and hence not an order statistic. Similar remarks apply to all sample quantiles.
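In code, the order statistics of a sample are obtained simply by sorting, and the usual sample median falls out of them; a minimal Python sketch (the helper names are illustrative, not standard):

```python
def order_statistics(sample):
    """Return the order statistics x_(1) <= ... <= x_(n) of a sample."""
    return sorted(sample)

def sample_median(sample):
    """The middle order statistic for odd n; the average of the
    two middle order statistics for even n."""
    xs = order_statistics(sample)
    n = len(xs)
    m = n // 2
    if n % 2 == 1:
        return xs[m]              # X_(m+1) in 1-based notation
    return (xs[m - 1] + xs[m]) / 2

print(order_statistics([6, 9, 3, 8]))  # [3, 6, 8, 9]
print(sample_median([6, 9, 3, 8]))     # 7.0, the average of x_(2)=6 and x_(3)=8
```

Note that for even n the returned value is an average of two order statistics and is generally not itself an order statistic, as discussed above.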

## Probabilistic analysis

Given any random variables ${\displaystyle X_{1},X_{2},\ldots ,X_{n}}$, the order statistics ${\displaystyle X_{(1)},X_{(2)},\ldots ,X_{(n)}}$ are also random variables, defined by sorting the values (realizations) of ${\displaystyle X_{1},X_{2},\ldots ,X_{n}}$ in increasing order.

When the random variables ${\displaystyle X_{1},X_{2},\ldots ,X_{n}}$ form a sample, they are independent and identically distributed (iid). This is the case treated below. In general, the random variables ${\displaystyle X_{1},X_{2},\ldots ,X_{n}}$ can arise by sampling from more than one population. Then they are independent but not necessarily identically distributed, and their joint probability distribution is given by the Bapat-Beg theorem.

From now on, we will assume that the random variables under consideration are continuous and, where convenient we will also assume that they have a density (that is, they are absolutely continuous). The peculiarities of the analysis of distributions assigning mass to points (in particular, discrete distributions) are discussed at the end.

### Distribution of each order statistic of an absolutely continuous distribution

Let ${\displaystyle X_{1},X_{2},\ldots ,X_{n}}$ be iid absolutely continuously distributed random variables, and ${\displaystyle X_{(1)},X_{(2)},\ldots ,X_{(n)}}$ be the corresponding order statistics. Let ${\displaystyle f(x)}$ be the probability density function and ${\displaystyle F(x)}$ be the cumulative distribution function of ${\displaystyle X_{i}}$. Then the probability density of the kth statistic can be found as follows.

${\displaystyle f_{X_{(k)}}(x)={d \over dx}F_{X_{(k)}}(x)={d \over dx}P\left(X_{(k)}\leq x\right)={d \over dx}P(\mathrm {at} \ \mathrm {least} \ k\ \mathrm {of} \ \mathrm {the} \ n\ X\mathrm {s} \ \mathrm {are} \leq x)}$
${\displaystyle ={d \over dx}P(\geq k\ \mathrm {successes} \ \mathrm {in} \ n\ \mathrm {trials} )={d \over dx}\sum _{j=k}^{n}{n \choose j}P(X_{1}\leq x)^{j}(1-P(X_{1}\leq x))^{n-j}}$
${\displaystyle ={d \over dx}\sum _{j=k}^{n}{n \choose j}F(x)^{j}(1-F(x))^{n-j}}$
${\displaystyle =\sum _{j=k}^{n}{n \choose j}\left(jF(x)^{j-1}f(x)(1-F(x))^{n-j}+F(x)^{j}(n-j)(1-F(x))^{n-j-1}(-f(x))\right)}$
${\displaystyle =\sum _{j=k}^{n}\left(n{n-1 \choose j-1}F(x)^{j-1}(1-F(x))^{n-j}-n{n-1 \choose j}F(x)^{j}(1-F(x))^{n-j-1}\right)f(x)}$
${\displaystyle =nf(x)\left(\sum _{j=k-1}^{n-1}{n-1 \choose j}F(x)^{j}(1-F(x))^{(n-1)-j}-\sum _{j=k}^{n}{n-1 \choose j}F(x)^{j}(1-F(x))^{(n-1)-j}\right)}$

and the sum above telescopes, so that all terms cancel except the first and the last:

${\displaystyle =nf(x)\left({n-1 \choose k-1}F(x)^{k-1}(1-F(x))^{(n-1)-(k-1)}-\underbrace {{n-1 \choose n}F(x)^{n}(1-F(x))^{(n-1)-n}} \right)}$

and the term over the underbrace is zero, so:

${\displaystyle =nf(x){n-1 \choose k-1}F(x)^{k-1}(1-F(x))^{(n-1)-(k-1)}}$
${\displaystyle ={n! \over (k-1)!(n-k)!}F(x)^{k-1}(1-F(x))^{n-k}f(x).}$
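The binomial-sum expression for ${\displaystyle F_{X_{(k)}}(x)}$ used at the start of this derivation can be checked numerically. The sketch below (names are illustrative) evaluates it for standard uniform samples, where ${\displaystyle F(x)=x}$, and compares it with a seeded Monte Carlo estimate:

```python
import random
from math import comb

def cdf_order_stat(k, n, F_x):
    """P(X_(k) <= x) = sum_{j=k}^{n} C(n,j) F(x)^j (1 - F(x))^(n-j)."""
    return sum(comb(n, j) * F_x**j * (1 - F_x)**(n - j)
               for j in range(k, n + 1))

# Standard uniform on [0,1]: F(x) = x. Take n = 5, k = 3, x = 0.5.
n, k, x = 5, 3, 0.5
exact = cdf_order_stat(k, n, x)   # (10 + 5 + 1) / 32 = 0.5

# Seeded simulation: how often is the 3rd smallest of 5 uniforms <= 0.5?
rng = random.Random(0)
trials = 100_000
hits = sum(sorted(rng.random() for _ in range(n))[k - 1] <= x
           for _ in range(trials))
print(exact, hits / trials)       # both close to 0.5
```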

### Probability distributions of order statistics

In this section we show that the order statistics of the uniform distribution on the unit interval have marginal distributions belonging to the Beta family. We also give a simple method to derive the joint distribution of any number of order statistics, and finally translate these results to arbitrary continuous distributions using the cdf.

We assume throughout this section that ${\displaystyle X_{1},X_{2},\ldots ,X_{n}}$ is a random sample drawn from a continuous distribution with cdf ${\displaystyle F_{X}}$. Denoting ${\displaystyle U_{i}=F_{X}(X_{i})}$ we obtain the corresponding random sample ${\displaystyle U_{1},\ldots ,U_{n}}$ from the standard uniform distribution. Note that the order statistics also satisfy ${\displaystyle U_{(i)}=F_{X}(X_{(i)})}$.

#### The order statistics of the uniform distribution

The probability of the order statistic ${\displaystyle U_{(k)}}$ falling in the interval ${\displaystyle [u,u+du]}$ is equal to

${\displaystyle {n! \over (k-1)!(n-k)!}u^{k-1}(1-u)^{n-k}du+O(du^{2}),}$

that is, the kth order statistic of the uniform distribution is a Beta random variable.

${\displaystyle U_{(k)}\sim B(k,n+1-k).}$

The proof of these statements is as follows. In order for ${\displaystyle U_{(k)}}$ to be between u and u+du, it is necessary that exactly k-1 elements of the sample are smaller than u, and that at least one is between u and u+du. The probability that more than one is in this latter interval is already ${\displaystyle O(du^{2})}$, so we have to calculate the probability that exactly k-1, 1 and n-k observations fall in the intervals ${\displaystyle (0,u)}$, ${\displaystyle (u,u+du)}$ and ${\displaystyle (u+du,1)}$ respectively. This equals (refer to multinomial distribution for details)

${\displaystyle {n! \over (k-1)!1!(n-k)!}u^{k-1}\cdot du\cdot (1-u-du)^{n-k}}$

and the result follows.
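The Beta claim implies in particular that ${\displaystyle E[U_{(k)}]=k/(n+1)}$, the mean of a ${\displaystyle B(k,n+1-k)}$ variable. A quick seeded simulation is consistent with this (a sanity-check sketch, not a proof):

```python
import random

rng = random.Random(42)
n, k, trials = 5, 2, 200_000

# Average the k-th order statistic of n standard uniforms over many samples.
total = 0.0
for _ in range(trials):
    u = sorted(rng.random() for _ in range(n))
    total += u[k - 1]

print(total / trials)   # close to k / (n + 1) = 2/6 ≈ 0.333
```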

#### Joint distributions

Similarly, for i < j, the joint probability density function of the two order statistics ${\displaystyle U_{(i)}<U_{(j)}}$ can be shown to be

${\displaystyle f_{U_{(i)},U_{(j)}}(u,v)du\,dv=n!{u^{i-1} \over (i-1)!}{(v-u)^{j-i-1} \over (j-i-1)!}{(1-v)^{n-j} \over (n-j)!}\,du\,dv}$

which is (up to terms of higher order than ${\displaystyle O(du\,dv)}$) the probability that i − 1, 1, j − 1 − i, 1 and n − j sample elements fall in the intervals ${\displaystyle (0,u)}$, ${\displaystyle (u,u+du)}$, ${\displaystyle (u+du,v)}$, ${\displaystyle (v,v+dv)}$, ${\displaystyle (v+dv,1)}$ respectively.

One reasons in an entirely analogous way to derive the higher-order joint distributions. Perhaps surprisingly, the joint density of the n order statistics turns out to be constant:

${\displaystyle f_{U_{(1)},U_{(2)},\ldots ,U_{(n)}}(u_{1},u_{2},\ldots ,u_{n})\,du_{1}\,\cdots \,du_{n}=n!\,du_{1}\cdots du_{n}.}$

One way to understand this is that the unordered sample does have constant density equal to 1, and that there are n! different permutations of the sample corresponding to the same sequence of order statistics. This is related to the fact that 1/n! is the volume of the region ${\displaystyle 0<u_{1}<u_{2}<\cdots <u_{n}<1}$.
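Equivalently, an iid uniform sample falls in the ordered region ${\displaystyle u_{1}<u_{2}<\cdots <u_{n}}$ with probability 1/n!. This too can be sanity-checked by a seeded simulation (an illustrative sketch):

```python
import random
from math import factorial

rng = random.Random(1)
n, trials = 4, 200_000

# Count how often an unsorted uniform sample happens to be in increasing order.
ordered = 0
for _ in range(trials):
    u = [rng.random() for _ in range(n)]
    if all(u[i] < u[i + 1] for i in range(n - 1)):
        ordered += 1

print(ordered / trials, 1 / factorial(n))  # both ≈ 1/24 ≈ 0.0417
```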

## Application: confidence intervals for quantiles

An interesting question is how well the order statistics perform as estimators of the quantiles of the underlying distribution.

### Estimating the median

The simplest case to consider is how well the sample median estimates the population median.

#### A small-sample-size example

As an example, consider a random sample of size 6. In that case, the sample median is usually defined as the midpoint of the interval delimited by the 3rd and 4th order statistics. However, we know from the preceding discussion that the probability that this interval actually contains the population median is

${\displaystyle {6 \choose 3}2^{-6}={5 \over 16}\approx 31\%.}$

Although the sample median is probably among the best distribution-free point estimates of the population median, this example illustrates that in absolute terms it is not a particularly good one. In this particular case, a better confidence interval for the median is the one delimited by the 2nd and 5th order statistics, which contains the population median with probability

${\displaystyle \left[{6 \choose 2}+{6 \choose 3}+{6 \choose 4}\right]2^{-6}={25 \over 32}\approx 78\%.}$

With such a small sample size, if one wants at least 95% confidence, one is reduced to saying that the median is between the minimum and the maximum of the 6 observations with probability 31/32 or approximately 97%. Size 6 is, in fact, the smallest sample size such that the interval determined by the minimum and the maximum is at least a 95% confidence interval for the population median.
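The coverage probabilities quoted above follow from the fact that, for a continuous distribution, the number of observations falling below the population median is Binomial(n, 1/2). A short sketch computing them (the function name is illustrative):

```python
from math import comb

def coverage(n, j, k):
    """P(X_(j) < population median < X_(k)), j < k, for iid draws from a
    continuous distribution: the event occurs exactly when the number i of
    observations below the median satisfies j <= i <= k - 1, and each such
    count has probability C(n, i) * 2^(-n)."""
    return sum(comb(n, i) for i in range(j, k)) / 2**n

n = 6
print(coverage(n, 3, 4))  # 5/16  ≈ 0.3125
print(coverage(n, 2, 5))  # 25/32 ≈ 0.78125
print(coverage(n, 1, 6))  # 31/32 ≈ 0.96875
```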

If the distribution is known to be symmetric and have finite variance (as is the case for the normal distribution) the population mean equals the median, and the sample mean has much better confidence intervals than the sample median. This is an illustration of the relative weakness of distribution-free statistical methods. On the other hand, using methods tailored to the wrong distribution can lead to large systematic errors in estimation.

## Computing order statistics

The problem of computing the kth smallest (or largest) element of a list is called the selection problem and is solved by a selection algorithm. Although this problem is difficult for very large lists, sophisticated selection algorithms have been created that can solve this problem in time proportional to the number of elements in the list, even if the list is totally unordered. If the data is stored in certain specialized data structures, this time can be brought down to O(log n).
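A standard expected-linear-time selection algorithm is quickselect; a compact (not in-place) Python sketch:

```python
import random

def quickselect(xs, k):
    """Return the k-th smallest element of xs (1-based), in expected
    O(len(xs)) time, by recursing into only one side of a random pivot."""
    assert 1 <= k <= len(xs)
    pivot = random.choice(xs)
    lo = [x for x in xs if x < pivot]
    eq = [x for x in xs if x == pivot]
    hi = [x for x in xs if x > pivot]
    if k <= len(lo):
        return quickselect(lo, k)
    if k <= len(lo) + len(eq):
        return pivot
    return quickselect(hi, k - len(lo) - len(eq))

print(quickselect([6, 9, 3, 8], 2))  # 6, i.e. x_(2) of the earlier example
```

Unlike a full sort, each recursive call discards the partition that cannot contain the answer, which is what brings the expected running time down to linear in the list length.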