Degenerate distribution

[Plots of the probability mass function and cumulative distribution function for k0 = 0. The horizontal axis is the index i of ki; the function is defined only at integer indices, and the connecting lines do not indicate continuity.]

Parameters: ${\displaystyle k_{0}\in (-\infty ,\infty )}$. Support: ${\displaystyle k=k_{0}}$. PMF: 1 for k = k0, 0 otherwise. CDF: 0 for k < k0, 1 for k ≥ k0. Mean, median, and mode: k0. Variance, skewness, excess kurtosis, and entropy: 0. Moment-generating function: ${\displaystyle e^{k_{0}t}}$. Characteristic function: ${\displaystyle e^{ik_{0}t}}$.

In mathematics, a degenerate distribution is the probability distribution of a discrete random variable whose support consists of only one value. Examples include a two-headed coin and rolling a die whose sides all show the same number. While this distribution does not appear random in the everyday sense of the word, it does satisfy the definition of a random variable.

The degenerate distribution is localized at a point k0 on the real line. The probability mass function is given by:

${\displaystyle f(k;k_{0})=\left\{{\begin{matrix}1,&{\mbox{if }}k=k_{0}\\0,&{\mbox{if }}k\neq k_{0}\end{matrix}}\right.}$

The cumulative distribution function of the degenerate distribution is then:

${\displaystyle F(k;k_{0})=\left\{{\begin{matrix}1,&{\mbox{if }}k\geq k_{0}\\0,&{\mbox{if }}k<k_{0}\end{matrix}}\right.}$
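The two piecewise definitions above translate directly into code. The following is a minimal Python sketch; the function names `pmf` and `cdf` are illustrative, not taken from any particular library:

```python
def pmf(k, k0):
    """Degenerate distribution PMF: all probability mass sits at k0."""
    return 1.0 if k == k0 else 0.0

def cdf(k, k0):
    """Degenerate distribution CDF: a unit step located at k0."""
    return 1.0 if k >= k0 else 0.0
```

For example, with k0 = 0, `pmf(0, 0)` returns 1.0, while `cdf(-1, 0)` returns 0.0 and `cdf(5, 0)` returns 1.0.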

Constant random variable

In probability theory, a constant random variable is a discrete random variable that takes a constant value, regardless of any event that occurs. This is technically different from an almost surely constant random variable, which may take other values, but only on events with probability zero. Constant and almost surely constant random variables provide a way to deal with constant values in a probabilistic framework.

Let  X: Ω → R  be a random variable defined on a probability space  (Ω, P). Then  X  is an almost surely constant random variable if

${\displaystyle \Pr(X=c)=1,}$

and is furthermore a constant random variable if

${\displaystyle X(\omega )=c,\quad \forall \omega \in \Omega .}$

Note that a constant random variable is almost surely constant, but not necessarily vice versa, since if  X  is almost surely constant then there may exist  γ ∈ Ω  such that  X(γ) ≠ c  (but then necessarily Pr({γ}) = 0, in fact Pr(X ≠ c) = 0).
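This distinction can be made concrete on a finite sample space. The sketch below is a hypothetical example with arbitrary outcome labels and values, chosen only for illustration:

```python
from fractions import Fraction

# A two-outcome sample space where one outcome has probability zero.
c = 7
P = {"a": Fraction(1), "b": Fraction(0)}   # Pr({"a"}) = 1, Pr({"b"}) = 0
X = {"a": c, "b": 99}                      # X differs from c only on "b"

# X is almost surely constant: Pr(X = c) = 1 ...
pr_X_equals_c = sum(p for w, p in P.items() if X[w] == c)
# ... but X is not a constant random variable, since X("b") != c.
not_constant = any(X[w] != c for w in X)
```

Here `pr_X_equals_c` equals 1 even though `X` takes the value 99 on the probability-zero outcome "b".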

For practical purposes, the distinction between  X  being constant or almost surely constant is unimportant, since the probability mass function  f(x)  and cumulative distribution function  F(x)  of  X  do not depend on whether  X  is constant or 'merely' almost surely constant. In either case,

${\displaystyle f(x)={\begin{cases}1,&x=c,\\0,&x\neq c.\end{cases}}}$

and

${\displaystyle F(x)={\begin{cases}1,&x\geq c,\\0,&x<c.\end{cases}}}$

The function  F(x)  is a step function.
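As a quick numerical check of the moments, every draw from a constant random variable equals c, so the sample mean is c and the sample variance is 0. This sketch uses Python's standard `statistics` module:

```python
import statistics

c = 5.0
# Sampling a constant random variable: every outcome maps to c.
samples = [c for _ in range(1000)]

mean = statistics.mean(samples)            # equals c
variance = statistics.pvariance(samples)   # equals 0
```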