# Convolution

For the usage in formal language theory, see convolution (computer science).

In mathematics and, in particular, functional analysis, convolution is a mathematical operator that takes two functions f and g and produces a third function that is typically viewed as a modified version of one of the original functions. Convolution is an extraordinarily useful mathematical tool with applications including statistics, computer vision, image and signal processing, and differential equations.

In electrical engineering, the third function is the output of a linear time-invariant system. The function being modified is the input, and the other is the system's time-response to a brief but strong impulse (see impulse response). At any given moment, the output is an accumulated effect of all the prior values of the input function, with the most recent values typically having the most influence (expressed as a multiplicative factor). The impulse response function provides that factor as a function of the elapsed time since each input value occurred.

*Figure (Convolution3.PNG): Visual explanation of convolution.* Make each waveform a function of the dummy variable $\tau$. Time-invert one of the waveforms and add $t$ so that it can slide back and forth along the $\tau$-axis while remaining stationary with respect to $t$. Finally, start the sliding function at negative infinity and slide it all the way to positive infinity. Wherever the two functions overlap, find the integral of their product. The resulting waveform (not shown here) is the convolution of the two functions. If the stationary waveform is a unit impulse, the end result is the original version of the sliding waveform, since it is time-inverted back again: the right edge hits the unit impulse first and the left edge last. This is also the reason for the time-inversion in general, as complex signals can be thought of as consisting of unit impulses.

## Definition

The convolution of $f\,$ and $g\,$ is written $f*g\,$ . It is defined as the integral of the product of the two functions after one is reversed and shifted. As such, it is a particular kind of integral transform:

$(f*g)(t)=\int _{-\infty }^{\infty }f(\tau )g(t-\tau )\,d\tau .$

While the symbol $t\,$ is used above, it need not represent the time domain. But in that context, the convolution formula can be described as a weighted average of the function $g\,$ at the moment $t.\,$ For $\tau >0,\,$ $f(\tau )\,$ is the weight/multiplier applied to the value that $g\,$ had $\tau \,$ seconds prior to that moment. (And for $\tau <0,\,$ it is the weight/multiplier applied to the value that $g\,$ has $|\tau |\,$ seconds after the moment $t.\,$) The time-reversal of one function relative to the other in the convolution integral leads to that interpretation. Conversely, defining a weighting function that way leads to the time-reversal. (See LTI system theory.)
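As a concrete illustration, the convolution integral can be approximated numerically by sampling both functions on a uniform grid and scaling a discrete convolution by the step size. The Gaussian and box functions below are illustrative choices, not taken from the text; this is a sketch using NumPy, assuming both functions decay to zero well inside the sampled interval.

```python
import numpy as np

# Sample the dummy variable tau on a uniform grid.
dt = 0.01
tau = np.arange(-5, 5, dt)

f = np.exp(-tau**2)                              # a Gaussian (example choice)
g = np.where((tau >= 0) & (tau < 1), 1.0, 0.0)   # a unit-width box (example choice)

# np.convolve computes sum_m f[m] * g[n - m]; multiplying by dt turns
# that Riemann sum into an approximation of the integral.
conv = np.convolve(f, g, mode="same") * dt
```

A quick sanity check: the total area under $f*g$ equals the product of the areas under $f$ and $g$, here $\sqrt{\pi}\cdot 1$.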

## Circular convolution

When $g(t)\,$ is periodic, with period $T,\,$ it can be shown that:

$(f*g)(t)=\int _{t_{o}}^{t_{o}+T}\left[\sum _{k=-\infty }^{\infty }f(\tau -kT)\right]g(t-\tau )\ d\tau ,\,$

where $t_{o}\,$ is an arbitrary parameter. The summation is called a periodic extension of the function $f.\,$ If $g\,$ is a periodic extension of another function, then $(f*g)(t)\,$ is known as a circular, cyclic, or periodic convolution. As shown here, it can be computed by integration over either an infinite or a finite domain.
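The discrete analogue of this is instructive: for two length-$N$ sequences, circular convolution sums over one period with indices taken modulo $N$. The sequences below are arbitrary examples, and the helper function is a direct transcription of the definition rather than an efficient implementation.

```python
import numpy as np

def circular_convolve(f, g):
    """Circular (cyclic) convolution of two equal-length sequences:
    c[n] = sum_m f[m] * g[(n - m) mod N]."""
    N = len(f)
    return np.array([sum(f[m] * g[(n - m) % N] for m in range(N))
                     for n in range(N)])

f = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([1.0, 0.0, 0.0, 1.0])
c = circular_convolve(f, g)   # → [3., 5., 7., 5.]
```

The same result can be obtained as the inverse DFT of the pointwise product of the two DFTs, which anticipates the convolution theorem discussed later.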

## Discrete convolution

For discrete functions, the convolution operation is given by:

$(f*g)(n)=\sum _{m=-\infty }^{\infty }{f[m]\cdot g[n-m]}\,$

When multiplying two polynomials, the coefficients of the product are given by the convolution of the original coefficient sequences, extended with zeros where necessary to avoid undefined terms.
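The polynomial-multiplication connection can be checked directly. The polynomials below are examples chosen for illustration; coefficients are listed lowest order first, which is an assumption of this sketch.

```python
import numpy as np

# (1 + 2x) * (3 + 4x + 5x^2), coefficients lowest order first.
p = [1, 2]
q = [3, 4, 5]

# The product's coefficients are the discrete convolution of p and q:
product = np.convolve(p, q)   # → [3, 10, 13, 10], i.e. 3 + 10x + 13x^2 + 10x^3
```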

Generalizing the above cases, the convolution can be defined for any two integrable functions defined on a locally compact topological group (see convolutions on groups below).

A different generalization is the convolution of distributions.

### Fast convolution algorithms

When $f[n]\,$ is a finite-duration function of length N, the formula above requires N arithmetic operations per output value and $N^{2}$ operations for N outputs. That can be significantly reduced with any of several fast algorithms. Digital signal processing and other applications typically use fast convolution algorithms to increase the speed of the convolution to O(N log N) complexity.

The most common fast convolution algorithms use fast Fourier transform (FFT) algorithms via the circular convolution theorem. Specifically, the circular convolution of two finite-length sequences is found by taking an FFT of each sequence, multiplying pointwise, and then performing an inverse FFT. Convolutions of the type defined above are then efficiently implemented using that technique in conjunction with zero-extension and/or discarding portions of the output.
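The FFT approach described above can be sketched in a few lines. Zero-extension is handled by passing the padded length to the FFT routines; the input sequences are arbitrary examples.

```python
import numpy as np

def fft_convolve(f, g):
    """Linear convolution via the circular convolution theorem:
    zero-pad both sequences to the full output length n = len(f)+len(g)-1,
    take FFTs, multiply pointwise, and invert."""
    n = len(f) + len(g) - 1
    F = np.fft.rfft(f, n)   # the length argument zero-pads up to n
    G = np.fft.rfft(g, n)
    return np.fft.irfft(F * G, n)

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.0, 1.0, 0.5])
out = fft_convolve(f, g)    # matches np.convolve(f, g)
```

For sequences this short the direct sum is faster; the O(N log N) payoff appears for long sequences, where libraries typically use exactly this technique.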

There are also many other fast convolution algorithms that do not employ FFTs per se, such as number-theoretic transform algorithms.

## Properties

### Commutativity

$f*g=g*f\,$

### Associativity

$f*(g*h)=(f*g)*h\,$

### Distributivity

$f*(g+h)=(f*g)+(f*h)\,$

### Identity element

$f*\delta =\delta *f=f\,$

where $\delta$ denotes the Dirac delta function.

### Associativity with scalar multiplication

$a(f*g)=(af)*g=f*(ag)\,$

for any real (or complex) number $a$.
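For finite sequences, all of these algebraic properties can be verified numerically; discrete convolution is exactly polynomial multiplication, which is commutative, associative, and distributive. The random test sequences below are illustrative, and the discrete identity element is the Kronecker delta rather than the Dirac delta.

```python
import numpy as np

rng = np.random.default_rng(0)
f, g, h = (rng.standard_normal(8) for _ in range(3))

conv = np.convolve  # full-mode discrete convolution

assert np.allclose(conv(f, g), conv(g, f))                     # commutativity
assert np.allclose(conv(f, conv(g, h)), conv(conv(f, g), h))   # associativity
assert np.allclose(conv(f, g + h), conv(f, g) + conv(f, h))    # distributivity
assert np.allclose(2.0 * conv(f, g), conv(2.0 * f, g))         # scalar assoc.

# Identity element: the discrete unit impulse (Kronecker delta).
delta = np.array([1.0])
assert np.allclose(conv(f, delta), f)
```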

### Differentiation rule

${\mathcal {D}}(f*g)={\mathcal {D}}f*g=f*{\mathcal {D}}g\,$

where ${\mathcal {D}}f$ denotes the derivative of $f$ or, in the discrete case, the difference operator ${\mathcal {D}}f(n)=f(n+1)-f(n)$. Consequently, convolution can be viewed as a "smoothing" operation: the convolution of $f$ and $g$ is differentiable at least as many times as the smoother of the two functions.
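The discrete form of this rule is easy to verify. A finite sequence is treated here as an infinite one that is zero outside its support, so one zero is padded on each side before differencing; the two short sequences are arbitrary examples.

```python
import numpy as np

def D(seq):
    """Difference operator D f(n) = f(n+1) - f(n), applied to a finite
    sequence viewed as zero outside its support (hence the zero padding)."""
    return np.diff(np.concatenate(([0.0], seq, [0.0])))

f = np.array([1.0, 2.0, 3.0])
g = np.array([4.0, 5.0, 6.0])

lhs  = D(np.convolve(f, g))    # D(f*g)
rhs1 = np.convolve(D(f), g)    # (Df)*g
rhs2 = np.convolve(f, D(g))    # f*(Dg)
# All three agree, as the differentiation rule predicts.
```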

### Convolution theorem

The convolution theorem states that

${\mathcal {F}}(f*g)=k\left[{\mathcal {F}}(f)\right]\cdot \left[{\mathcal {F}}(g)\right]$

where ${\mathcal {F}}(f)\,$ denotes the Fourier transform of $f$, and $k$ is a constant which depends upon the specific normalization of the Fourier transform (e.g., $k=1$ if ${\mathcal {F}}\left[f(x)\right]\equiv \int _{-\infty }^{\infty }f(x)\exp(\pm 2\pi ix\xi )dx$). Versions of this theorem also hold for the Laplace transform, two-sided Laplace transform, Z-transform and Mellin transform.
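In the discrete setting the theorem can be stated with the DFT (for which $k=1$ in NumPy's default normalization): the DFT of a linear convolution equals the pointwise product of the zero-padded DFTs of the inputs. The sequences below are illustrative examples.

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.5, -1.0, 2.0])

# Zero-pad both transforms to the full linear-convolution length so the
# underlying circular convolution coincides with the linear one.
n = len(f) + len(g) - 1
lhs = np.fft.fft(np.convolve(f, g))          # F(f*g)
rhs = np.fft.fft(f, n) * np.fft.fft(g, n)    # F(f) . F(g), with k = 1
# lhs and rhs agree to floating-point precision.
```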

## Convolution inverse

Many functions have an inverse element, $f^{(-1)}$, which satisfies the relationship:

$f^{(-1)}*f=\delta \,$

These functions form an abelian group, with the group operation being convolution.
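For a one-sided discrete sequence $f$ with $f[0]\neq 0$, the first $N$ terms of the inverse can be computed recursively by unwinding $(f*f^{(-1)})[n]=\delta [n]$. This is a sketch under those assumptions; the example sequence and the helper name are chosen for illustration.

```python
import numpy as np

def convolution_inverse(f, N):
    """First N terms of g with (f * g)[n] = delta[n], assuming f[0] != 0:
    g[n] = (delta[n] - sum_{k=1..n} f[k] * g[n-k]) / f[0]."""
    g = np.zeros(N)
    g[0] = 1.0 / f[0]
    for n in range(1, N):
        acc = sum(f[k] * g[n - k] for k in range(1, min(n, len(f) - 1) + 1))
        g[n] = -acc / f[0]
    return g

f = np.array([1.0, -0.5])           # example: delta minus a half-strength shift
g = convolution_inverse(f, 8)       # geometric sequence 0.5**n
```

Convolving `f` with `g` reproduces the discrete delta on the computed terms, as the group relation requires.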

## Convolutions on groups

If G is a suitable group endowed with a measure m (for instance, a locally compact Hausdorff topological group with the Haar measure) and if f and g are real or complex valued m-integrable functions on G, then we can define their convolution by

$(f*g)(x)=\int _{G}f(y)g(xy^{-1})\,dm(y)\,$

The circle group T with the Lebesgue measure is an immediate example. For a fixed g in L1(T), we have the following familiar operator acting on the Hilbert space L2(T):

$Tf(x)={\frac {1}{2\pi }}\int _{\mathbb {T} }f(y)g(x-y)dy.$

The operator T is compact. A direct calculation shows that its adjoint T* is convolution with

${\bar {g}}(-y).$

By the commutativity property cited above, T is normal, i.e. T*T = TT*. Also, T commutes with the translation operators. Consider the family S of operators consisting of all such convolutions and the translation operators. S is a commuting family of normal operators. According to spectral theory, there exists an orthonormal basis $\{h_{k}\}$ that simultaneously diagonalizes S. This characterizes convolutions on the circle. Specifically, we have

$h_{k}(x)=e^{ikx},\;$

which are precisely the characters of T. Each convolution is a compact multiplication operator in this basis. This can be viewed as a version of the convolution theorem discussed above.

The above example suggests that convolutions arise naturally in the context of harmonic analysis on groups. For more general groups it is also possible to state a convolution theorem, but it is much harder to phrase: it requires the representation theory of these groups together with the Peter-Weyl theorem. Such calculations are difficult to carry out without additional structure, and Lie groups turn out to be the setting in which they are usually done.

## Convolution of measures

If μ and ν are measures on the family of Borel subsets of the real line, then the convolution μ * ν is defined by

$(\mu *\nu )(A)=(\mu \times \nu )(\{\,(x,y)\in \mathbb {R} ^{2}\,:\,x+y\in A\,\}).$

If μ and ν are absolutely continuous with respect to Lebesgue measure, so that each has a density function, then the convolution μ * ν is also absolutely continuous, and its density function is the convolution of the two separate density functions.

If μ and ν are probability measures, then the convolution μ * ν is the probability distribution of the sum X + Y of two independent random variables X and Y whose respective distributions are μ and ν.
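A familiar finite example: the distribution of the sum of two independent fair dice is the convolution of their (uniform) probability mass functions, giving the triangular distribution on the sums 2 through 12. The dice example is an illustration, not from the text.

```python
import numpy as np

die = np.full(6, 1.0 / 6.0)      # P(face) for faces 1..6
total = np.convolve(die, die)    # P(sum) for sums 2..12 (length 11)

# total[k] is the probability of rolling a sum of k + 2; the mass
# peaks at a sum of 7 with probability 6/36.
```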

## Applications

Convolution and related operations are found in many applications of engineering and mathematics.

• In statistics, as noted above, a weighted moving average is a convolution.
• In probability theory, the probability distribution of the sum of two independent random variables is the convolution of their individual distributions.
• In optics, many kinds of "blur" are described by convolutions. A shadow (e.g. the shadow on the table when you hold your hand between the table and a light source) is the convolution of the shape of the light source that is casting the shadow and the object whose shadow is being cast. An out-of-focus photograph is the convolution of the sharp image with the shape of the iris diaphragm. The photographic term for this is bokeh.
• Similarly, in digital image processing, convolutional filtering plays an important role in many important algorithms in edge detection and related processes.
• In linear acoustics, an echo is the convolution of the original sound with a function representing the various objects that are reflecting it.
• In artificial reverberation (digital signal processing, pro audio), convolution is used to map the impulse response of a real room on a digital audio signal (see previous and next point for additional information).
• In electrical engineering and other disciplines, the output (response) of a (stationary, or time- or space-invariant) linear system is the convolution of the input (excitation) with the system's response to an impulse or Dirac delta function. See LTI system theory and digital signal processing.
• In time-resolved fluorescence spectroscopy, the excitation signal can be treated as a chain of delta pulses, and the measured fluorescence is a sum of exponential decays from each delta pulse.
• In physics, wherever there is a linear system with a "superposition principle", a convolution operation makes an appearance.
• This is also the source of the fundamental nonlinear term in the Navier–Stokes equations, the subject of a Clay Mathematics Institute Millennium Prize Problem and its associated million-dollar prize.
• In digital signal processing, frequency filtering can be performed by convolving the data with a filter in the time domain, which is equivalent to multiplying the Fourier transforms of the data and the filter in the frequency domain. 
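The last point can be made concrete with a simple moving-average (box) filter: convolving a noisy signal with a normalized box kernel in the time domain multiplies its spectrum by the kernel's low-pass frequency response. The signal, noise level, and kernel width below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * 3 * t) + 0.3 * rng.standard_normal(t.size)

# A normalized box kernel: each output sample averages 25 neighbors,
# attenuating high-frequency noise while passing the slow sinusoid.
kernel = np.ones(25) / 25.0
smoothed = np.convolve(signal, kernel, mode="same")
```

The smoothed output tracks the underlying sinusoid with markedly less noise, at the cost of some attenuation and edge effects from the finite kernel.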