Fourier analysis


 * See also: How it works (a basic explanation), below.

Fourier analysis, named after Joseph Fourier's introduction of the Fourier series, is the decomposition of a function in terms of a sum of sinusoidal functions (called basis functions) of different frequencies that can be recombined to obtain the original function. The recombination process is called Fourier synthesis (in which case, Fourier analysis refers specifically to the decomposition process).

The result of the decomposition is the amount (i.e. amplitude) and the phase to be imparted to each basis function (each frequency) in the reconstruction. It is therefore also a function (of frequency), whose value can be represented as a complex number, in either polar or rectangular coordinates. And it is referred to as the frequency domain representation of the original function. A useful analogy is the waveform produced by a musical chord and the set of musical notes (the frequency components) that it comprises.

The term Fourier transform can refer to either the frequency domain representation of a function or to the process/formula that "transforms" one function into the other. However, the transform is usually given a more specific name depending upon the domain and other properties of the function being transformed, as elaborated below. Moreover, the original concept of Fourier analysis has been extended over time to apply to more and more abstract and general situations, and the general field is often known as harmonic analysis. See also: List of Fourier-related transforms.

Applications
Fourier analysis has many scientific applications in physics, number theory, combinatorics, signal processing, probability theory, statistics, option pricing, cryptography, acoustics, oceanography, optics and diffraction, geometry, and other areas.

This wide applicability stems from many useful properties of the transforms:


 * The transforms are linear operators and, with proper normalization, are unitary as well (a property known as Parseval's theorem or, more generally, as the Plancherel theorem, and most generally via Pontryagin duality).
 * The transforms are invertible, and in fact the inverse transform has almost the same form as the forward transform.
 * The exponential basis functions are eigenfunctions of differentiation, which means that this representation transforms linear differential equations with constant coefficients into ordinary algebraic ones. (For example, in a linear time-invariant physical system, frequency is a conserved quantity, so the behavior at each frequency can be solved independently.)
 * By the convolution theorem, Fourier transforms turn the complicated convolution operation into simple multiplication, which means that they provide an efficient way to compute convolution-based operations such as polynomial multiplication and multiplying large numbers.
 * The discrete version of the Fourier transform (see below) can be evaluated quickly on computers using fast Fourier transform (FFT) algorithms.
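As a sketch of the last two points, the convolution theorem combined with the FFT yields a fast polynomial multiplier. This is a minimal NumPy example; the function name is illustrative:

```python
import numpy as np

# Multiply two polynomials via the convolution theorem: the coefficient
# sequence of the product is the convolution of the input coefficients,
# which becomes pointwise multiplication after an FFT.  Zero-padding to
# the full output length makes the circular convolution equal the
# linear one.
def poly_multiply(a, b):
    n = len(a) + len(b) - 1
    fa = np.fft.fft(a, n)             # forward transforms (zero-padded)
    fb = np.fft.fft(b, n)
    return np.fft.ifft(fa * fb).real  # inverse of the pointwise product

# (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
print(np.round(poly_multiply([1, 2], [3, 4])))  # → [ 3. 10.  8.]
```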

Variants of Fourier analysis
Fourier analysis has different forms, depending on certain properties of the function or data being analyzed. The resultant transforms can be seen as special cases or generalizations of each other. Four basic varieties are described in the sections below. The properties of discreteness and periodicity are duals: if the signal representation in one domain has either (or both) of those properties, then its transform representation in the other domain has the other property (or both).

Fourier transform

 * Main article: Fourier transform

Most often, the unqualified term Fourier transform refers to the transform of functions of a continuous real argument, representing any square-integrable function $$s \left( t \right)$$ as a linear combination of complex exponentials with frequencies $$\omega\,$$:



 * $$s \left( t \right) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} S\left( \omega\right) e^{i\omega t}\,d\omega.$$

The quantity $$S(\omega)\,$$ provides both the amplitude and initial phase (as a complex number) of the basis function $$e^{i\omega t}\,$$.

The function, $$S(\omega)\,$$, is the Fourier transform of $$s(t)\,$$, denoted by the operator $$\mathcal{F}\,$$:


 * $$S(\omega) = \left(\mathcal{F}s\right)(\omega)= \mathcal{F}\{s\}(\omega)\,$$

And the inverse transform (shown above) is written:


 * $$s(t) = \left(\mathcal{F}^{-1}S\right)(t)= \mathcal{F}^{-1}\{S\}(t)\,$$

Together the two functions are referred to as a transform pair. See continuous Fourier transform for more information, including:
 * formula for the forward transform
 * tabulated transforms of specific functions
 * discussion of the transform properties
 * various conventions for amplitude normalization and frequency scaling/units
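As a numerical illustration of this transform pair (under the symmetric $$1/\sqrt{2\pi}$$ normalization used above), the forward integral can be approximated by a Riemann sum and checked against the Gaussian, which is its own transform under this convention. A minimal sketch:

```python
import numpy as np

# Approximate the forward transform
#   S(w) = (1/sqrt(2*pi)) * integral of s(t) * exp(-i*w*t) dt
# by a Riemann sum, for the Gaussian s(t) = exp(-t^2/2).  Under this
# symmetric normalization its transform is again exp(-w^2/2).
t = np.linspace(-20, 20, 4001)   # wide enough that the tails are negligible
dt = t[1] - t[0]
s = np.exp(-t**2 / 2)

def fourier(w):
    return np.sum(s * np.exp(-1j * w * t)) * dt / np.sqrt(2 * np.pi)

for w in (0.0, 1.0, 2.0):
    print(w, fourier(w).real, np.exp(-w**2 / 2))   # the two columns agree
```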

Multi-dimensional version
The formulation for the Fourier transform given above applies in one dimension. The Fourier transform can, however, be generalized to arbitrary dimension $$n$$. Denoting the $$n$$-dimensional transform by $$\mathcal{F}_n$$, the corresponding synthesis (inverse) formula is:
 * $$s \left( \mathbf{x} \right) = \left( \mathcal{F}^{-1}_n S \right) \left( \mathbf{x} \right) = \frac{1}{(2\pi)^{n/2}} \int S \left( \boldsymbol{\omega} \right) e^{i \left\langle \boldsymbol{\omega} ,\mathbf{x} \right\rangle} \, d \boldsymbol{\omega},$$

where $$\mathbf{x}$$ and $$\boldsymbol{\omega}$$ are $$n$$-dimensional vectors, $$\left\langle \boldsymbol{\omega} ,\mathbf{x} \right\rangle$$ is the inner product of these two vectors, and the integration is performed over all $$n$$ dimensions.
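One practical consequence, sketched below with NumPy's discrete transforms, is that the multi-dimensional transform separates into one-dimensional transforms applied along each axis:

```python
import numpy as np

# A 2-D transform is a 1-D transform over the rows followed by a 1-D
# transform over the columns (separability); numpy's fft2 computes
# exactly this.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))

by_axes = np.fft.fft(np.fft.fft(x, axis=0), axis=1)
print(np.allclose(np.fft.fft2(x), by_axes))  # → True
```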

Fourier series

 * Main article: Fourier series

The continuous transform is itself a generalization of an earlier concept, the Fourier series, which was specific to periodic (or finite-domain) functions $$s \left( t \right)$$ (with period $$\tau \,$$) and represents such functions as a series of sinusoids:


 * $$s(t) = \sum_{k=-\infty}^{\infty} S_k \cdot e^{i \omega_k t}, \,$$

where $$\omega_k = 2\pi k / \tau \,$$, and $$S_k \,$$ is a (complex) amplitude.

For real-valued $$s(t)\,$$, an equivalent variation is:


 * $$s(t) = \frac{1}{2}a_0 + \sum_{k=1}^\infty\left[a_k\cdot \cos(\omega_k t)+b_k\cdot \sin(\omega_k t)\right],$$

where $$a_k = 2\cdot \operatorname{Re}\{S_k\} \,$$  and   $$b_k = -2\cdot \operatorname{Im}\{S_k\} \,$$.
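These coefficient formulas can be checked numerically. The sketch below estimates $$a_k$$ and $$b_k$$ for a square wave of period $$2\pi$$, whose known series is $$\frac{4}{\pi}\sum_{k \text{ odd}} \sin(kt)/k$$:

```python
import numpy as np

# Estimate the real-form Fourier series coefficients
#   a_k = (2/tau) * integral over one period of s(t)*cos(w_k t) dt
#   b_k = (2/tau) * integral over one period of s(t)*sin(w_k t) dt
# for a square wave of period tau = 2*pi.
tau = 2 * np.pi
t = np.linspace(0, tau, 100000, endpoint=False)
dt = t[1] - t[0]
s = np.sign(np.sin(t))

for k in (1, 2, 3):
    wk = 2 * np.pi * k / tau
    a_k = 2 * np.sum(s * np.cos(wk * t)) * dt / tau
    b_k = 2 * np.sum(s * np.sin(wk * t)) * dt / tau
    print(k, a_k, b_k)   # b_1 ≈ 4/pi, b_2 ≈ 0, b_3 ≈ 4/(3*pi); a_k ≈ 0
```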

Discrete-time Fourier transform

 * Main article: Discrete-time Fourier transform

For use on computers, both for scientific computation and digital signal processing, one must have functions, x[n], that are defined for discrete instead of continuous domains, again finite or periodic. A useful "discrete-time" function can be obtained by sampling a "continuous-time" function, x(t). And similar to the continuous Fourier transform, the function can be represented as a sum of complex sinusoids:


 * $$x[n] = \frac{1}{2 \pi}\int_{-\pi}^{\pi} X(\omega)\cdot e^{i \omega n} \, d \omega.$$

But in this case, the limits of integration need only span one period of the periodic function, $$X(\omega ) \,$$, which is derived from the samples by the discrete-time Fourier transform (DTFT):


 * $$X(\omega) = \sum_{n=-\infty}^{\infty} x[n] \,e^{-i \omega n}.\,$$
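For a finite-length sequence the infinite sum reduces to a finite one, so the DTFT can be evaluated directly at any frequency. A minimal sketch:

```python
import numpy as np

# Evaluate the DTFT  X(w) = sum_n x[n] * exp(-i*w*n)  of a short
# sequence on a dense grid of frequencies in [-pi, pi].
x = np.array([1.0, 1.0, 1.0, 1.0])        # 4-sample rectangular pulse
n = np.arange(len(x))
w = np.linspace(-np.pi, np.pi, 1001)

X = np.array([np.sum(x * np.exp(-1j * wi * n)) for wi in w])

# At w = 0 (the middle grid point) the DTFT is the sum of the samples.
print(np.abs(X[500]))   # → 4.0
```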

Discrete Fourier transform

 * Main article: Discrete Fourier transform

The DTFT is defined on a continuous domain, so despite its periodicity it still cannot be numerically evaluated for every unique frequency. But a very useful approximation can be made by evaluating it at regularly spaced intervals, with arbitrarily small spacing. Due to the periodicity, the number of unique coefficients (N) to be evaluated is always finite, leading to this simplification:


 * $$X[k] = X\left(\frac{2 \pi }{N} k\right)= \sum_{n=-\infty}^{\infty} x[n] \,e^{-i 2 \pi \frac{k}{N} n},$$    for $$k =  0, 1, \dots, N-1. \,$$

When the portion of x[n] between n=0 and n=N-1 is a good (or exact) representation of the entire x[n] sequence, it is useful to compute:


 * $$X[k] = \sum_{n=0}^{N-1} x[n] \,e^{-i 2 \pi \frac{k}{N} n},$$

which is called the discrete Fourier transform (DFT). Commonly, the length of the x[n] sequence is finite and a larger value of N is chosen; effectively, the x[n] sequence is padded with zero-valued samples, a technique referred to as zero padding.

The inverse DFT represents x[n] as the sum of complex sinusoids:


 * $$x[n] = \frac{1}{N} \sum_{k=0}^{N-1} X[k] e^{i 2 \pi \frac{k}{N} n}, \quad \quad n = 0, 1, \dots, N-1. \,$$

Note that this actually produces a periodic x[n]. If the original sequence was not periodic to begin with, this phenomenon is the time-domain consequence of approximating the continuous-domain DTFT function with the discrete-domain DFT function.

The DFT can be computed using a fast Fourier transform (FFT) algorithm, which makes it a practical and important transformation on computers.
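The DFT and inverse DFT sums above can also be implemented directly (an $$O(N^2)$$ sketch, for illustration only) and checked against a library FFT:

```python
import numpy as np

# Direct implementation of the DFT analysis and synthesis sums.
def dft(x):
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)           # one row of the sum per output bin k
    return (x * np.exp(-2j * np.pi * k * n / N)).sum(axis=1)

def idft(X):
    N = len(X)
    k = np.arange(N)
    n = k.reshape(-1, 1)
    return (X * np.exp(2j * np.pi * k * n / N)).sum(axis=1) / N

x = np.array([1.0, 2.0, 3.0, 4.0])
print(np.allclose(dft(x), np.fft.fft(x)))   # → True
print(np.allclose(idft(dft(x)), x))         # → True
```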

Fourier transforms on arbitrary locally compact abelian topological groups
The Fourier variants can also be generalized to Fourier transforms on arbitrary locally compact abelian topological groups, which are studied in harmonic analysis; there, the Fourier transform takes functions on a group to functions on the dual group. This treatment also allows a general formulation of the convolution theorem, which relates Fourier transforms and convolutions. See also the Pontryagin duality for the generalized underpinnings of the Fourier transform.

Time-frequency transforms
Time-frequency transforms such as the short-time Fourier transform, wavelet transforms, chirplet transforms, and the fractional Fourier transform try to obtain frequency information from a signal as a function of time (or whatever the independent variable is), although the ability to simultaneously resolve frequency and time is limited by a (mathematical) uncertainty principle.

Interpretation in terms of time and frequency
In terms of signal processing, the transform takes a time series representation of a signal function and maps it into a frequency spectrum, where $$\omega$$ is angular frequency. That is, it takes a function in the time domain into the frequency domain; it is a decomposition of a function into harmonics of different frequencies.

When the function f is a function of time and represents a physical signal, the transform has a standard interpretation as the frequency spectrum of the signal. The magnitude of the resulting complex-valued function F at frequency $$\omega$$ represents the amplitude of a frequency component whose initial phase is given by arctan(imaginary part / real part).

However, it is important to realize that Fourier transforms are not limited to functions of time, and temporal frequencies. They can equally be applied to analyze spatial frequencies, and indeed for nearly any function domain.
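In code, the magnitude and initial phase are read off directly from the complex transform value; the value below is arbitrary, chosen only for illustration:

```python
import numpy as np

# Magnitude and phase of a frequency component from a complex
# transform value F: magnitude = |F|, phase = arctan(Im/Re), i.e.
# the angle of the complex number.
F = 3.0 + 4.0j                       # a hypothetical transform value

magnitude = np.abs(F)                # sqrt(3^2 + 4^2) = 5.0
phase = np.arctan2(F.imag, F.real)   # same as np.angle(F)

print(magnitude, phase)              # → 5.0 0.9272952180016122
```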

Applications in signal processing
When processing signals, such as audio, radio waves, light waves, seismic waves, and even images, Fourier analysis can isolate individual components of a compound waveform, concentrating them for easier detection and/or removal. A large family of signal processing techniques consist of Fourier-transforming a signal, manipulating the Fourier-transformed data in a simple way, and reversing the transformation.

Some examples include:


 * Telephone dialing; the touch-tone signals for each telephone key, when pressed, are each a sum of two separate tones (frequencies).  Fourier analysis can be used to separate (or analyze) the telephone signal, to reveal the two component tones and therefore which button was pressed.
 * Removal of unwanted frequencies from an audio recording (used to eliminate hum from leakage of AC power into the signal, to eliminate the stereo subcarrier from FM radio recordings, or to create karaoke tracks with the vocals removed);
 * Noise gating of audio recordings to remove quiet background noise by eliminating Fourier components that do not exceed a preset amplitude;
 * Equalization of audio recordings with a series of bandpass filters;
 * Digital radio reception with no superheterodyne circuit, as in a modern cell phone or radio scanner;
 * Image processing to remove periodic or anisotropic artifacts such as jaggies from interlaced video, stripe artifacts from strip aerial photography, or wave patterns from radio frequency interference in a digital camera;
 * Cross correlation of similar images for co-alignment;
 * X-ray crystallography to reconstruct a protein's structure from its diffraction pattern;
 * Fourier transform ion cyclotron resonance mass spectrometry to determine the mass of ions from the frequency of cyclotron motion in a magnetic field.
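The "transform, manipulate, invert" pattern behind several of these examples can be sketched in a few lines; the sample rate and tone frequencies below are illustrative, not standard values:

```python
import numpy as np

# Remove an unwanted tone by Fourier-transforming the signal, zeroing
# the offending frequency bin, and inverse-transforming -- the pattern
# behind hum removal and similar filters.
fs = 1000                                   # sample rate (Hz), assumed
t = np.arange(fs) / fs                      # one second of samples
signal = np.sin(2 * np.pi * 50 * t)         # wanted 50 Hz component
hum = 0.5 * np.sin(2 * np.pi * 60 * t)      # unwanted 60 Hz "hum"

X = np.fft.rfft(signal + hum)
X[60] = 0                                   # bin 60 = 60 Hz at this length
cleaned = np.fft.irfft(X)

print(np.max(np.abs(cleaned - signal)))     # residual error is tiny
```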

Fourier transformation is also useful as a compact representation of a signal. For example, JPEG compression uses Fourier transformation of small square pieces of a digital image. The Fourier components of each square are rounded to lower arithmetic precision, and weak components are eliminated entirely, so that the remaining components can be stored very compactly. In image reconstruction, each Fourier-transformed image square is reassembled from the preserved approximate components, and then inverse-transformed to produce an approximation of the original image.
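A toy version of this transform-coding idea (using the FFT for simplicity; JPEG itself uses the closely related discrete cosine transform on 8×8 blocks) might look like:

```python
import numpy as np

# Transform an image block, keep only the strongest coefficients, and
# inverse-transform to get an approximate reconstruction.
rng = np.random.default_rng(1)
block = rng.random((8, 8))                   # stand-in for an image tile

F = np.fft.fft2(block)
threshold = np.quantile(np.abs(F), 0.75)     # keep the strongest quarter
F_compressed = np.where(np.abs(F) >= threshold, F, 0)

approx = np.fft.ifft2(F_compressed).real
print(np.mean(np.abs(approx - block)))       # error from discarded terms
```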

About notation
The Fourier transform is a mapping on a function space. This mapping is here denoted $$\mathcal{F}$$ and $$\mathcal{F}\{s\}$$ is used to denote the Fourier transform of the function s. This mapping is linear, which means that $$\mathcal{F}$$ can also be seen as a linear transformation on the function space and implies that the standard notation in linear algebra of applying a linear transformation to a vector (here the signal s) can be used to write $$\mathcal{F} s$$ instead of $$\mathcal{F}\{s\}$$. Since the result of applying the Fourier transform is again a function, we can be interested in the value of this function evaluated at the value $$\omega$$ for its variable, and this is denoted either as $$\mathcal{F}\{s\}(\omega)$$ or as $$(\mathcal{F} s)(\omega)$$. Notice that in the former case, it is implicitly understood that $$\mathcal{F}$$ is applied first to s and then the resulting function is evaluated at $$\omega$$, not the other way around.

In mathematics and various applied sciences it is often necessary to distinguish between a function s and the value of s when its variable equals t, denoted s(t). This means that a notation like $$\mathcal{F}\{s(t)\}$$ formally can be interpreted as the Fourier transform of the values of s at t, which must be considered as an ill-formed expression since it describes the Fourier transform of a function value rather than of a function. Despite this flaw, the previous notation appears frequently, often when a particular function or a function of a particular variable is to be transformed. For example, $$\mathcal{F}\{ \mathrm{rect}(t) \} = \mathrm{sinc}(\omega)$$ is sometimes used to express that the Fourier transform of a rectangular function is a sinc function, or $$\mathcal{F}\{s(t+t_{0})\} = \mathcal{F}\{s(t)\} e^{i \omega t_{0}}$$ is used to express the shift property of the Fourier transform. Notice that the last example is only correct under the assumption that the transformed function is a function of t, not of $$t_{0}$$. If possible, this informal usage of the $$\mathcal{F}$$ operator should be avoided, in particular when it is not perfectly clear which variable the transformed function depends on.

How it works (a basic explanation)
To measure the amplitude and phase of a particular frequency component, the transform process multiplies the original function (the one being analyzed) by a sinusoid with the same frequency (called a basis function). If the original function contains a component with the same shape (i.e. same frequency), its shape (but not its amplitude) is effectively squared. The complex numbers produced by the product of the original function and the basis function are subsequently summed into a single result. The contributions from the component that matches the basis function all have the same sign (or vector direction). The other components contribute values that alternate in sign (or vectors that rotate in direction) and tend to cancel out of the summation. The final value is therefore dominated by the component that matches the basis function. The stronger it is, the larger is the measurement. Repeating this measurement for all the basis functions produces the frequency-domain representation.
 * Squaring implies that, at every point of the product waveform, the contribution of the matching component is non-negative, even though the overall product might be negative at that point.
 * Squaring describes the case where the phases happen to match. What happens more generally is that a constant phase difference produces vectors at every point that are all aimed in the same direction, which is determined by the difference between the two phases.  To make that happen actually requires two sinusoidal basis functions, cosine and sine, which are combined into a basis function that is complex-valued (see Complex exponential).  The vector analogy refers to the polar coordinate representation.
 * Note that if the functions are continuous, rather than sets of discrete points, this step requires integral calculus or numerical integration. But the basic concept is just addition.
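The measurement described above amounts to correlating the signal with a complex-exponential basis function; a minimal sketch:

```python
import numpy as np

# Measure one frequency component by multiplying the signal by a
# complex exponential and summing.  Contributions from the matching
# component add coherently; other frequencies rotate in phase around
# the complex plane and cancel.
N = 1000
n = np.arange(N)
signal = 2.0 * np.cos(2 * np.pi * 5 * n / N + 0.3)   # amplitude 2, phase 0.3

k = 5                                                # matching basis frequency
measurement = np.sum(signal * np.exp(-2j * np.pi * k * n / N)) / N

# The factor of 2 accounts for the energy split between +k and -k.
print(2 * np.abs(measurement), np.angle(measurement))  # ≈ 2.0 and 0.3
```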