Cross-correlation

In statistics, the term cross-correlation is sometimes used to refer to the covariance cov(X, Y) between two random vectors X and Y, in order to distinguish that concept from the "covariance" of a random vector X, which is understood to be the matrix of covariances between the scalar components of X.

In signal processing, the cross-correlation (or sometimes "cross-covariance") is a measure of similarity of two signals, commonly used to find features in an unknown signal by comparing it to a known one. It is a function of the relative time between the signals, is sometimes called the sliding dot product, and has applications in pattern recognition and cryptanalysis.

For discrete functions $$f_i$$ and $$g_i$$ the cross-correlation is defined as


 * $$(f\star g)_i \ \stackrel{\mathrm{def}}{=}\ \sum_j f^*_j\,g_{i+j}$$

where the sum is over the appropriate values of the integer j and a superscript asterisk indicates the complex conjugate. For continuous functions f(x) and g(x) the cross-correlation is defined as


 * $$(f\star g)(x) \ \stackrel{\mathrm{def}}{=}\ \int f^*(t) g(x+t)\,dt$$

where the integral is over the appropriate values of t.
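As a concrete sketch (assuming NumPy is available), the discrete sum can be coded directly. NumPy's `np.correlate` uses the convention $$c_k = \sum_n a_{n+k}\,v^*_n$$, so it computes the same quantity when its arguments are swapped, which serves as a cross-check:

```python
import numpy as np

def cross_corr(f, g):
    """(f ★ g)_i = sum_j conj(f_j) * g_{i+j}, over every lag i
    with at least one overlapping term."""
    f = np.asarray(f, dtype=complex)
    g = np.asarray(g, dtype=complex)
    lags = range(-(len(f) - 1), len(g))   # i = -(len(f)-1), ..., len(g)-1
    return np.array([
        sum(np.conj(f[j]) * g[i + j]
            for j in range(len(f)) if 0 <= i + j < len(g))
        for i in lags
    ])

f = [1, 2, 3]
g = [0, 1, 0.5]
print(cross_corr(f, g))                        # values 0, 3, 3.5, 2, 0.5
# The same numbers via NumPy (note the swapped argument order):
print(np.correlate(g, f, mode="full"))
```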

The cross-correlation is similar in nature to the convolution of two functions. Whereas convolution involves reversing a signal, then shifting it and multiplying by another signal, correlation only involves shifting it and multiplying (no reversing).

In an autocorrelation, which is the cross-correlation of a signal with itself, there will always be a peak at a lag of zero.

If $$X$$ and $$Y$$ are two independent random variables with probability distributions f and g, respectively, then the probability distribution of the difference $$Y - X$$ is given by the cross-correlation $$f \star g$$. In contrast, the convolution $$f * g$$ gives the probability distribution of the sum $$X + Y$$.
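For discrete distributions this can be checked directly. The sketch below (assuming NumPy, with made-up probability mass functions) forms the distribution of the difference by brute-force enumeration and compares it with the cross-correlation of the two pmfs:

```python
import numpy as np

f = np.array([0.2, 0.5, 0.3])   # pmf of X on {0, 1, 2}
g = np.array([0.1, 0.6, 0.3])   # pmf of Y on {0, 1, 2}

# Brute force: P(Y - X = d) for d = -2, ..., 2
diff = np.zeros(5)
for x in range(3):
    for y in range(3):
        diff[(y - x) + 2] += f[x] * g[y]

# Cross-correlation of the pmfs gives the same distribution:
# np.correlate(g, f, "full") computes sum_j f_j g_{i+j} for real inputs
corr = np.correlate(g, f, mode="full")
print(np.allclose(diff, corr))   # True
```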

Explanation
For example, consider two real-valued functions f and g that differ only by a shift along the x-axis. One can calculate the cross-correlation to determine how much g must be shifted along the x-axis to make it identical to f. The formula essentially slides the g function along the x-axis, calculating the integral for each possible amount of sliding. When the functions match, the value of $$(f\star g)$$ is maximized. The reason for this is that when lumps (positive areas) are aligned, they contribute to making the integral larger. Likewise, when troughs (negative areas) align, they also make a positive contribution to the integral, because the product of two negative numbers is positive.

With complex-valued functions f and g, taking the conjugate of f ensures that aligned lumps (or aligned troughs) with imaginary components contribute positively to the integral.
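The shift-finding idea can be sketched numerically. Assuming NumPy, the example below correlates a noisy, delayed copy of a pulse against the original and reads the delay off the location of the correlation peak:

```python
import numpy as np

rng = np.random.default_rng(0)
pulse = np.exp(-0.5 * ((np.arange(100) - 50) / 5.0) ** 2)   # template f

shift = 17
g = np.roll(pulse, shift) + 0.02 * rng.standard_normal(100)  # shifted, noisy copy

# Full cross-correlation; lag i runs from -(len(f)-1) to len(g)-1
corr = np.correlate(g, pulse, mode="full")
lag = np.argmax(corr) - (len(pulse) - 1)   # convert array index to lag
print(lag)                                  # 17: the shift aligning f with g
```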

In econometrics, lagged cross-correlation is sometimes referred to as cross-autocorrelation (Campbell, Lo, and MacKinlay 1996).

Properties

 * The cross-correlation is related to the convolution by:


 * $$f(t)\star g(t) = f^*(-t)*g(t)$$

so that if either f or g is an even function


 * $$(f\star g) = f*g$$

Also: $$(f\star g)\star(f\star g)=(f\star f)\star (g\star g)$$
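Both of these identities can be checked numerically. In the sketch below (assuming NumPy), `np.correlate(b, a, mode="full")` plays the role of $$a \star b$$ for finite sequences, and random complex signals exercise the conjugation:

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.standard_normal(8) + 1j * rng.standard_normal(8)
g = rng.standard_normal(5) + 1j * rng.standard_normal(5)

def ccorr(a, b):
    """a ★ b for finite sequences, over the full range of lags."""
    return np.correlate(b, a, mode="full")

# f ★ g  ==  conj(f(-t)) * g   (cross-correlation as a convolution)
lhs = ccorr(f, g)
rhs = np.convolve(np.conj(f[::-1]), g)
print(np.allclose(lhs, rhs))   # True

# (f ★ g) ★ (f ★ g)  ==  (f ★ f) ★ (g ★ g)
c = ccorr(f, g)
print(np.allclose(ccorr(c, c), ccorr(ccorr(f, f), ccorr(g, g))))   # True
```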


 * In analogy with the convolution theorem, the cross-correlation satisfies


 * $$\mathcal{F}[f\star g]=(\mathcal{F}[f])^* \cdot (\mathcal{F}[g])$$

where $$\mathcal{F}$$ denotes the Fourier transform, and an asterisk again indicates the complex conjugate. Coupled with fast Fourier transform algorithms, this property is often exploited for the efficient numerical computation of cross-correlations.
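For periodic (circular) cross-correlation this recipe is a few lines of NumPy; the sketch below compares the FFT route against the direct circular sum:

```python
import numpy as np

rng = np.random.default_rng(2)
f = rng.standard_normal(64) + 1j * rng.standard_normal(64)
g = rng.standard_normal(64) + 1j * rng.standard_normal(64)

# Direct circular cross-correlation: (f ★ g)_i = sum_j conj(f_j) g_{(i+j) mod N}
N = len(f)
direct = np.array([
    sum(np.conj(f[j]) * g[(i + j) % N] for j in range(N)) for i in range(N)
])

# Via the Fourier transform: F[f ★ g] = conj(F[f]) · F[g]
fast = np.fft.ifft(np.conj(np.fft.fft(f)) * np.fft.fft(g))
print(np.allclose(direct, fast))   # True
```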


 * The cross-correlation is related to the spectral density; see the Wiener–Khinchin theorem.


 * The cross-correlation of a convolution of f and h with a function g is the cross-correlation of the kernel h with the cross-correlation of f and g:


 * $$(f * h) \star g = h \star (f \star g)$$
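As a numerical sanity check (a sketch assuming NumPy, using a real and symmetric kernel h, for which convolving and correlating with h coincide):

```python
import numpy as np

rng = np.random.default_rng(3)
f = rng.standard_normal(10)
g = rng.standard_normal(10)
h = np.array([1.0, 2.0, 1.0])   # real, symmetric smoothing kernel

# With a ★ b computed as np.correlate(b, a, "full") for finite sequences:
lhs = np.correlate(g, np.convolve(f, h), mode="full")   # (f * h) ★ g
rhs = np.convolve(h, np.correlate(g, f, mode="full"))   # h applied to f ★ g
print(np.allclose(lhs, rhs))   # True
```

Since h is symmetric, applying it by convolution or by correlation gives the same result, so the check does not depend on that distinction.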