Monte Carlo methods in finance

In the field of mathematical finance, many problems, for instance the problem of finding the arbitrage-free value of a particular derivative, boil down to the computation of a particular integral. In many cases these integrals can be evaluated analytically, and in still more cases they can be evaluated using numerical integration, or computed by solving a partial differential equation (PDE), for example the Black-Scholes equation.

However, when the number of dimensions (or degrees of freedom) in the problem is large, PDEs and numerical integration become intractable, and in these cases Monte Carlo methods often give better results. For high-dimensional integrals, Monte Carlo methods converge to the solution more quickly than numerical integration methods, require less memory and are easier to program. The advantage Monte Carlo methods offer increases as the dimension of the problem grows.

Monte Carlo methods were first introduced to finance in 1977 by Phelim Boyle in his seminal paper "Options: A Monte Carlo Approach" in the Journal of Financial Economics.

This article discusses typical financial problems in which Monte Carlo methods are used. It also touches on the use of so-called "quasi-random" methods such as the use of Sobol sequences.

Monte Carlo methods
The fundamental theorem of arbitrage-free pricing states that the value of a derivative is equal to the discounted expected value of the derivative payoff where the expectation is taken under the risk-neutral measure [1]. An expectation is, in the language of pure mathematics, simply an integral with respect to the measure. Monte Carlo methods are ideally suited to evaluating difficult integrals (see also Monte Carlo method).

Suppose that our risk-neutral probability space is $$\mathbb{P}$$ and that we have a derivative H that depends on a set of underlying instruments $$S_1,\dots,S_n$$. Then, given a sample $$\omega$$ from the probability space, the value of the derivative is $$H(S_1(\omega),S_2(\omega),\dots,S_n(\omega)) =: H(\omega)$$. Today's value of the derivative is found by taking the expectation over all possible samples and discounting at the risk-free rate; i.e. the derivative has value:


 * $$ H_0 = {df}_T \int_{\omega} H(\omega) d\mathbb{P}(\omega) $$

where $${df}_T$$ is the discount factor corresponding to the risk-free rate to the final maturity date T years into the future.

Now suppose the integral is hard to compute. We can approximate the integral by generating sample paths and then taking an average. Suppose we generate N samples then


 * $$ H_0 \approx {df}_T \frac{1}{N} \sum_{\omega\in \text{SampleSet}} H(\omega)$$

which is much easier to compute.
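The discounted-average estimator above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation: it assumes a constant risk-free rate and, for the concrete example, risk-neutral Black-Scholes dynamics where a sample $$\omega$$ reduces to a single terminal stock price; all function and variable names here are illustrative.

```python
import math
import random

def mc_price(payoff, sample_omega, n_paths, df):
    """Estimate H_0 = df_T * E[H(omega)] by averaging payoffs over sampled omegas."""
    total = 0.0
    for _ in range(n_paths):
        total += payoff(sample_omega())
    return df * total / n_paths

# Example: a European call with strike K under risk-neutral Black-Scholes
# dynamics with constant r and sigma; a "sample" is the terminal stock price.
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
rng = random.Random(42)

def sample_terminal_price():
    z = rng.gauss(0.0, 1.0)  # one standard normal draw per path
    return S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)

price = mc_price(lambda s: max(s - K, 0.0), sample_terminal_price,
                 n_paths=100_000, df=math.exp(-r * T))
```

With these parameters the estimate should lie close to the analytic Black-Scholes value (about 10.45), up to Monte Carlo noise.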

Sample paths for standard models
In finance, underlying random variables (such as an underlying stock price) are usually assumed to follow a path that is a function of a Brownian motion [2]. For example, in the standard Black-Scholes model, the stock price evolves as


 * $$ dS = \mu(t) S dt + \sigma(t) S dW_t $$.

To sample a path following this distribution from time 0 to T, we chop the time interval into M units of length $$\delta t$$, and approximate the Brownian motion over each interval of length $$\delta t$$ by a single normal variable of mean 0 and variance $$\delta t$$. Taking $$\mu$$ and $$\sigma$$ constant for simplicity, this leads to a sample path of


 * $$ S(k\,\delta t) = S(0) \exp\left( \sum_{i=0}^{k-1} \left[ \left(\mu - \frac{\sigma^2}{2}\right)\delta t + \sigma\epsilon_i\sqrt{\delta t} \right] \right)$$

for each k between 1 and M. Here each $$\epsilon_i$$ is a draw from a standard normal distribution.
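The sampling scheme above can be sketched as follows, a minimal Python example assuming constant drift and volatility; names and parameters are illustrative.

```python
import math
import random

def sample_gbm_path(S0, mu, sigma, T, M, rng):
    """One sample path of geometric Brownian motion on the grid k*dt, k = 0..M.

    Each step applies the exact lognormal update
      S(t + dt) = S(t) * exp((mu - sigma^2/2) dt + sigma * sqrt(dt) * eps),
    where eps is a draw from a standard normal distribution.
    """
    dt = T / M
    path = [S0]
    log_s = math.log(S0)
    for _ in range(M):
        eps = rng.gauss(0.0, 1.0)
        log_s += (mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * eps
        path.append(math.exp(log_s))
    return path

rng = random.Random(0)
path = sample_gbm_path(100.0, 0.05, 0.2, T=1.0, M=252, rng=rng)
```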

Let us suppose that a derivative H pays the average value of S between 0 and T. Then a sample path $$\omega$$ corresponds to a set $$\{\epsilon_1,\dots,\epsilon_M\}$$ and


 * $$ H(\omega) = \frac{1}{M} \sum_{k=1}^{M} S(k\,\delta t) $$

We obtain the Monte Carlo value of this derivative by generating N lots of M normal variables, creating N sample paths and so N values of H, and then taking the average of those values. Commonly the derivative will depend on two or more (possibly correlated) underlyings. The method here can be extended to generate sample paths of several variables, where the normal variables building up the sample paths are appropriately correlated.
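Putting the pieces together, the average-value derivative described above can be priced with a short simulation loop. This is a sketch under the same assumptions as before (constant r and sigma, risk-neutral dynamics); the function name and parameters are illustrative.

```python
import math
import random

def asian_mc_price(S0, r, sigma, T, M, N, seed=0):
    """Monte Carlo value of a derivative paying the arithmetic average of S
    at the M monitoring dates k*dt, k = 1..M, discounted at the risk-free rate."""
    rng = random.Random(seed)
    dt = T / M
    drift = (r - 0.5 * sigma**2) * dt
    vol = sigma * math.sqrt(dt)
    total = 0.0
    for _ in range(N):          # N sample paths
        log_s = math.log(S0)
        running_sum = 0.0
        for _ in range(M):      # M normal draws per path
            log_s += drift + vol * rng.gauss(0.0, 1.0)
            running_sum += math.exp(log_s)
        total += running_sum / M
    return math.exp(-r * T) * total / N

price = asian_mc_price(100.0, 0.05, 0.2, T=1.0, M=12, N=20_000)
```

For this payoff the expectation is available in closed form (a discounted average of forward prices), which makes a useful sanity check on the simulation.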

It follows from the central limit theorem that quadrupling the number of sample paths approximately halves the error in the simulated price (i.e. the error is of order $$1/\sqrt{N}$$).

In practice Monte Carlo methods are used for European-style derivatives involving at least three variables (more direct methods involving numerical integration can usually be used for problems with only one or two underlyings). See Monte Carlo option model.

Greeks
Estimates for the "Greeks" of an option, i.e. the (mathematical) derivatives of option value with respect to input parameters, can be obtained by numerical differentiation. This can be a time-consuming process (an entire Monte Carlo run must be performed for each "bump", or small change, in an input parameter). Further, taking numerical derivatives tends to amplify the error (or noise) in the Monte Carlo value, making it necessary to simulate with a large number of sample paths. Practitioners regard these points as a key problem with using Monte Carlo methods.
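The bump-and-revalue approach can be sketched as below for the delta of a European call. One standard mitigation for the noise problem, used here, is to reuse the same random draws (common random numbers) for the bumped-up and bumped-down runs, so the difference is not swamped by independent simulation noise. Parameters and names are illustrative.

```python
import math
import random

def euro_call_mc(S0, K, r, sigma, T, N, seed):
    """European call value by Monte Carlo; a fixed seed reproduces the same draws."""
    rng = random.Random(seed)
    df = math.exp(-r * T)
    total = 0.0
    for _ in range(N):
        z = rng.gauss(0.0, 1.0)
        sT = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
        total += max(sT - K, 0.0)
    return df * total / N

# Central-difference delta: one full Monte Carlo run per bump.
# Using the same seed for both runs (common random numbers) makes the
# estimated difference far less noisy than with independent runs.
h = 0.01
up = euro_call_mc(100.0 + h, 100.0, 0.05, 0.2, 1.0, N=100_000, seed=7)
down = euro_call_mc(100.0 - h, 100.0, 0.05, 0.2, 1.0, N=100_000, seed=7)
delta = (up - down) / (2 * h)
```

For these parameters the Black-Scholes delta is about 0.64, so the estimate should land close to that.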

Variance reduction
Square-root convergence is slow, and so using the naive approach described above requires a very large number of sample paths (one million, say, for a typical problem) in order to obtain an accurate result. This state of affairs can be mitigated by variance reduction techniques. A simple technique is, for every sample path obtained, to take its antithetic path: given a path $$\{\epsilon_1,\dots,\epsilon_M\}$$, also take $$\{-\epsilon_1,\dots,-\epsilon_M\}$$. Not only does this reduce the number of normal draws needed to generate N paths, but, because the antithetic pairs are negatively correlated, it also reduces the variance of the estimate, improving the accuracy.
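The antithetic technique can be sketched as follows for a European call (a single time step, so each "path" is one normal draw); the averaging of each pair is the essential point. Names and parameters are illustrative.

```python
import math
import random

def euro_call_antithetic(S0, K, r, sigma, T, n_pairs, seed=1):
    """Antithetic variates: for each draw eps, also use -eps, and average the pair."""
    rng = random.Random(seed)
    df = math.exp(-r * T)
    drift = (r - 0.5 * sigma**2) * T
    vol = sigma * math.sqrt(T)
    total = 0.0
    for _ in range(n_pairs):
        eps = rng.gauss(0.0, 1.0)
        payoff_plus = max(S0 * math.exp(drift + vol * eps) - K, 0.0)
        payoff_minus = max(S0 * math.exp(drift - vol * eps) - K, 0.0)  # antithetic
        total += 0.5 * (payoff_plus + payoff_minus)
    return df * total / n_pairs

price = euro_call_antithetic(100.0, 100.0, 0.05, 0.2, 1.0, n_pairs=50_000)
```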

Secondly, it is also natural to use a control variate. Suppose that we wish to obtain the Monte Carlo value of a derivative H, but know the value of a similar derivative I analytically. Then H* = (value of H according to Monte Carlo) + (value of I analytically) - (value of I according to the same Monte Carlo paths) is a better estimate.
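As a concrete sketch of the H* formula, the example below prices a European call using the discounted stock price itself as the control variate I, since its expectation under the risk-neutral measure is known analytically ($$E[e^{-rT}S_T] = S_0$$). The choice of I and all parameters here are illustrative assumptions.

```python
import math
import random

def call_with_control_variate(S0, K, r, sigma, T, N, seed=3):
    """Price a call H with the discounted stock as control variate I.

    H* = (H by Monte Carlo) + (I analytically) - (I on the same paths),
    where E[e^{-rT} S_T] = S0 is known exactly under the risk-neutral measure.
    """
    rng = random.Random(seed)
    df = math.exp(-r * T)
    h_total = 0.0
    i_total = 0.0
    for _ in range(N):
        z = rng.gauss(0.0, 1.0)
        sT = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
        h_total += df * max(sT - K, 0.0)   # H on this path
        i_total += df * sT                 # I on the same path
    h_mc = h_total / N
    i_mc = i_total / N
    return h_mc + S0 - i_mc

price = call_with_control_variate(100.0, 100.0, 0.05, 0.2, 1.0, N=50_000)
```

Because H and I are computed on the same paths, their simulation errors largely cancel, so H* has lower variance than the plain estimate of H.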

American Options
Monte Carlo methods are harder to use with American options. This is because, in contrast to a partial differential equation, the Monte Carlo method really only estimates the option value assuming a given starting point and time. However, for early exercise, we would also need to know the option value at the intermediate times between the simulation start time and the option expiry time. In the Black-Scholes PDE approach these prices are easily obtained, because the PDE is solved backwards from the expiry date. In Monte Carlo this information is harder to obtain, but it can be done, for example, using the least-squares algorithm of Longstaff and Schwartz (see link to original paper).
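A compact sketch of the Longstaff-Schwartz least-squares approach for an American put is given below, assuming NumPy is available. The regression basis (a quadratic polynomial in S, fitted on in-the-money paths only) and all parameters are illustrative choices, not the only ones possible; the paper itself discusses basis selection at length.

```python
import numpy as np

def american_put_lsm(S0, K, r, sigma, T, M, N, seed=0):
    """American put via least-squares Monte Carlo (Longstaff-Schwartz).

    Working backwards from expiry, the continuation value at each exercise
    date is estimated by regressing realised discounted cash flows on a
    polynomial in the stock price, using in-the-money paths only.
    """
    rng = np.random.default_rng(seed)
    dt = T / M
    df = np.exp(-r * dt)
    # Simulate N geometric Brownian motion paths; S[:, k] is S at time (k+1)*dt.
    z = rng.standard_normal((N, M))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
    S = S0 * np.exp(log_paths)
    # Cash flows if held to expiry.
    cash = np.maximum(K - S[:, -1], 0.0)
    for k in range(M - 2, -1, -1):
        cash *= df                       # discount one step back to time (k+1)*dt
        itm = (K - S[:, k]) > 0.0
        if itm.sum() < 4:
            continue                     # too few points to fit the regression
        coeffs = np.polyfit(S[itm, k], cash[itm], 2)
        continuation = np.polyval(coeffs, S[itm, k])
        exercise = K - S[itm, k]
        exercise_now = exercise > continuation
        idx = np.where(itm)[0][exercise_now]
        cash[idx] = exercise[idx == idx] if False else exercise[exercise_now]
        # Exercising replaces all later cash flows on those paths.
    return df * np.mean(cash)            # discount the first step back to time 0

# Classic test case from the Longstaff-Schwartz paper: S0=36, K=40, r=6%,
# sigma=20%, T=1 year; the American put is worth roughly 4.48.
price = american_put_lsm(36.0, 40.0, 0.06, 0.2, 1.0, M=50, N=20_000)
```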

Quasi-random (low-discrepancy) methods
Instead of generating sample paths randomly, it is possible to systematically (and in fact completely deterministically, despite the "quasi-random" in the name) select points in a probability space so as to optimally "fill up" the space. The points are taken from a low-discrepancy sequence such as a Sobol sequence. Taking averages of derivative payoffs at points in a low-discrepancy sequence is often more efficient than taking averages of payoffs at random points.
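As a sketch, the European call example can be repriced with a Sobol sequence in place of pseudo-random draws, assuming SciPy's `scipy.stats.qmc` module is available; each low-discrepancy point in (0, 1) is mapped to a normal draw via the inverse normal CDF. Parameters are illustrative.

```python
import numpy as np
from scipy.stats import norm, qmc

# Price a European call using a (scrambled) Sobol sequence: the points are
# chosen deterministically to fill (0, 1) evenly, then mapped to standard
# normals with the inverse CDF.
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

sobol = qmc.Sobol(d=1, scramble=True, seed=0)
u = sobol.random(2**14).ravel()   # 16384 low-discrepancy points in (0, 1)
z = norm.ppf(u)                   # transform to standard normal draws

sT = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
price = np.exp(-r * T) * np.mean(np.maximum(sT - K, 0.0))
```

Powers of two are used for the sample size because Sobol sequences achieve their balance properties at those lengths. For smooth, moderate-dimensional integrands like this one, the quasi-random estimate typically converges noticeably faster than the plain Monte Carlo estimate with the same number of points.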