# Rényi entropy

In information theory, the Rényi entropy, a generalisation of Shannon entropy, is one of a family of functionals for quantifying the diversity, uncertainty or randomness of a system. It is named after Alfréd Rényi.

The Rényi entropy of order ${\displaystyle \alpha }$, where ${\displaystyle \alpha \geq 0}$ and ${\displaystyle \alpha \neq 1}$, is defined as

${\displaystyle H_{\alpha }(X)={\frac {1}{1-\alpha }}\log {\Bigg (}\sum _{i=1}^{n}p_{i}^{\alpha }{\Bigg )}}$

where ${\displaystyle p_{i}}$ is the probability of the outcome ${\displaystyle x_{i}}$ from the set {x1, x2, ..., xn}. If the probabilities are all the same then all the Rényi entropies of the distribution are equal, with ${\displaystyle H_{\alpha }(X)=\log n}$. Otherwise the entropies are weakly decreasing as a function of α.
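The definition can be computed directly. The following Python sketch (the function name and example distributions are illustrative, not from the original text) evaluates ${\displaystyle H_{\alpha }}$ in nats and checks the two properties just stated:

```python
import math

def renyi_entropy(p, alpha):
    """Rényi entropy H_alpha of a probability vector p, in nats.

    Assumes alpha >= 0; alpha = 1 is handled as the Shannon limit.
    """
    if alpha == 1:
        # Shannon entropy: the alpha -> 1 limit of the general formula.
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return math.log(sum(pi ** alpha for pi in p if pi > 0)) / (1 - alpha)

# A uniform distribution has H_alpha = log n for every alpha:
uniform = [0.25] * 4
print(math.isclose(renyi_entropy(uniform, 0.5), math.log(4)))  # True
print(math.isclose(renyi_entropy(uniform, 2), math.log(4)))    # True

# A non-uniform one has entropies weakly decreasing in alpha:
skewed = [0.7, 0.1, 0.1, 0.1]
print(renyi_entropy(skewed, 0.5)
      >= renyi_entropy(skewed, 1)
      >= renyi_entropy(skewed, 2))  # True
```

Filtering out zero-probability outcomes also makes the ${\displaystyle \alpha =0}$ case return the logarithm of the support size, matching the Hartley entropy below.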

Some particular cases:

${\displaystyle H_{0}(X)=\log n=\log |X|,\,}$

which is the logarithm of the cardinality of X, sometimes called the Hartley entropy of X.

In the limit that ${\displaystyle \alpha }$ approaches 1, it can be shown that ${\displaystyle H_{\alpha }}$ converges to

${\displaystyle H_{1}(X)=-\sum _{i=1}^{n}p_{i}\log p_{i}}$

which is the Shannon entropy. Sometimes Rényi entropy refers only to the case ${\displaystyle \alpha =2}$,

${\displaystyle H_{2}(X)=-\log \sum _{i=1}^{n}p_{i}^{2}=-\log P(X=Y)}$

where Y is a random variable independent of X but identically distributed to X. As ${\displaystyle \alpha \rightarrow \infty }$, the limit exists:

${\displaystyle H_{\infty }(X)=-\log \max _{i}p_{i}}$

and this is called the min-entropy, because it is the smallest value of ${\displaystyle H_{\alpha }}$. These two latter cases are related by ${\displaystyle H_{\infty }\leq H_{2}\leq 2H_{\infty }}$, while on the other hand Shannon entropy can be arbitrarily high for a random variable X with fixed min-entropy.
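Both the collision-probability identity for ${\displaystyle H_{2}}$ and the bound ${\displaystyle H_{\infty }\leq H_{2}\leq 2H_{\infty }}$ are easy to verify numerically; a minimal Python sketch, using an arbitrary illustrative distribution:

```python
import math

p = [0.5, 0.25, 0.125, 0.125]    # an illustrative distribution

h2 = -math.log(sum(pi ** 2 for pi in p))   # collision entropy H_2
h_inf = -math.log(max(p))                  # min-entropy H_inf

# The bound H_inf <= H_2 <= 2 H_inf:
print(h_inf <= h2 <= 2 * h_inf)  # True

# H_2 = -log P(X = Y) for X, Y independent and identically distributed:
collision_prob = sum(pa * pb
                     for a, pa in enumerate(p)
                     for b, pb in enumerate(p) if a == b)
print(math.isclose(h2, -math.log(collision_prob)))  # True
```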

The Rényi entropies are important in ecology and statistics as indices of diversity. They also lead to a spectrum of indices of fractal dimension.

## Rényi relative information

As well as the absolute Rényi entropies, Rényi also defined a spectrum of generalised relative information gains (the negative of relative entropies), generalising the Kullback–Leibler divergence.

The Rényi generalised divergence of order α, where α > 0, of an approximating or prior distribution Q(x) from a "true" or updated distribution P(x) is defined to be:

${\displaystyle D_{\alpha }(P\|Q)={\frac {1}{\alpha -1}}\log {\Bigg (}\sum _{i=1}^{n}{\frac {p_{i}^{\alpha }}{q_{i}^{\alpha -1}}}{\Bigg )}={\frac {1}{\alpha -1}}\log \sum _{i=1}^{n}p_{i}^{\alpha }q_{i}^{1-\alpha }\,}$

Like the Kullback–Leibler divergence, the Rényi generalised divergences are always non-negative.

Some special cases:

- ${\displaystyle D_{0}(P\|Q)=-\log Q(\{i:p_{i}>0\})}$ : minus the log of the probability under Q of the set where pi > 0, i.e. of the support of P;
- ${\displaystyle D_{1/2}(P\|Q)=-2\log \sum _{i=1}^{n}{\sqrt {p_{i}q_{i}}}}$ : minus twice the logarithm of the Bhattacharyya coefficient;
- ${\displaystyle D_{1}(P\|Q)=\sum _{i=1}^{n}p_{i}\log {\frac {p_{i}}{q_{i}}}}$ : the Kullback–Leibler divergence;
- ${\displaystyle D_{2}(P\|Q)=\log {\Big \langle }{\frac {p_{i}}{q_{i}}}{\Big \rangle }\,}$ : the log of the expected value, under P, of the ratio of the probabilities;
- ${\displaystyle D_{\infty }(P\|Q)=\log \sup _{i}{\frac {p_{i}}{q_{i}}}}$ : the log of the maximum ratio of the probabilities.
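A short Python sketch (the function name and the distributions are illustrative) implementing the general formula and checking the Bhattacharyya special case together with non-negativity:

```python
import math

def renyi_divergence(p, q, alpha):
    """Rényi divergence D_alpha(P || Q) in nats, for alpha > 0, alpha != 1.

    Assumes q_i > 0 wherever p_i > 0 (absolute continuity).
    """
    s = sum(pi ** alpha * qi ** (1 - alpha) for pi, qi in zip(p, q) if pi > 0)
    return math.log(s) / (alpha - 1)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]

# D_{1/2} equals minus twice the log of the Bhattacharyya coefficient:
bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
print(math.isclose(renyi_divergence(p, q, 0.5), -2 * math.log(bc)))  # True

# The divergences are non-negative, and zero when P = Q:
print(renyi_divergence(p, q, 2) >= 0)               # True
print(math.isclose(renyi_divergence(p, p, 2), 0.0))  # True
```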

## Why α = 1 is special

The value α = 1, which gives the Shannon entropy and the Kullback–Leibler divergence, is special because it is only when α=1 that one can separate out variables A and X from a joint probability distribution, and write:

${\displaystyle H(A,X)=H(A)+\mathbb {E} _{p(a)}\{H(X|a)\}}$

for the absolute entropies, and

${\displaystyle D_{\mathrm {KL} }(p(x|a)p(a)\|m(x,a))=\mathbb {E} _{p(a)}\{D_{\mathrm {KL} }(p(x|a)\|m(x|a))\}+D_{\mathrm {KL} }(p(a)\|m(a)),}$

for the relative entropies.

The latter in particular means that if we seek a distribution p(x,a) which minimises its divergence from some underlying prior measure m(x,a), and we acquire new information which only affects the distribution of a, then the conditional distribution p(x|a) remains m(x|a), unchanged.
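The chain-rule decomposition of the Kullback–Leibler divergence can be checked numerically; in this Python sketch the joint distributions are arbitrary illustrative values on a 2 × 2 grid:

```python
import math

def kl(p, q):
    # Kullback-Leibler divergence between probability vectors, in nats.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Joint distributions p(a, x) and m(a, x), indexed [a][x].
p = [[0.2, 0.1], [0.3, 0.4]]
m = [[0.25, 0.25], [0.25, 0.25]]

p_a = [sum(row) for row in p]          # marginal p(a)
m_a = [sum(row) for row in m]          # marginal m(a)

# Left-hand side: KL divergence between the flattened joints.
lhs = kl([v for row in p for v in row], [v for row in m for v in row])

# Right-hand side: KL(p(a) || m(a)) + E_{p(a)} KL(p(x|a) || m(x|a)).
rhs = kl(p_a, m_a)
for a in range(2):
    p_x_given_a = [v / p_a[a] for v in p[a]]
    m_x_given_a = [v / m_a[a] for v in m[a]]
    rhs += p_a[a] * kl(p_x_given_a, m_x_given_a)

print(math.isclose(lhs, rhs))  # True
```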

The other Rényi entropies and divergences also satisfy the criteria of being positive and continuous, of being invariant under 1-to-1 co-ordinate transformations, and of combining additively when A and X are independent, so that if p(A,X) = p(A)p(X), then

${\displaystyle H_{\alpha }(A,X)=H_{\alpha }(A)+H_{\alpha }(X)\;}$

and

${\displaystyle D_{\alpha }(P(A)P(X)\|Q(A)Q(X))=D_{\alpha }(P(A)\|Q(A))+D_{\alpha }(P(X)\|Q(X)).}$

The stronger properties of the α = 1 quantities, which allow the definition of the conditional informations and mutual informations which are so important in communication theory, may be very important in other applications, or entirely unimportant, depending on those applications' requirements.

## References

A. Rényi (1961). "On measures of information and entropy". Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability 1960. pp. 547–561.