Monte Carlo Integration

At first sight it seems unlikely that statistical procedures can be used to provide numerical integration, but on closer examination it is quickly apparent that it is both possible and, in some instances, advantageous. We start with a simple example, which is essentially a problem in the field of geometry. The question is: what is the area of a circle, radius 0.5? Essentially this is a question of determining a value for Pi. One simple Monte Carlo approach would be to generate a large number, n, of pairs of random uniform values (x,y). Each pair, (x,y), locates a point within a unit square. The center of this square, (0.5,0.5), is the center of a circle, radius 0.5, so if the distance, d, from (x,y) to (0.5,0.5) is less than 0.5, the point lies within this circle - we can call this a success. We count the number of successes, s, and compute the ratio s/n. Because the unit square has area 1, this ratio estimates the area of the circle. Since this area is πr²=π/4 for r=0.5, we have π=4s/n as our estimate. For sufficiently large n this will converge to the true value of Pi. With a sample of 1 million points we obtained an estimate of π=3.142716 - quite reasonable, but this required a very large number of random samples. Although clearly not efficient, this example serves to illustrate that Monte Carlo methods can be used to obtain a two-dimensional integral - and it is easy to see how this idea could be generalized, in various forms, to more complex functions and higher dimensions.

Estimation of Pi

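As an illustrative sketch, the sampling procedure just described might be coded as follows (Python, standard library only; the sample size of 1 million matches the example above but is otherwise an arbitrary choice):

import random

def estimate_pi(n=1_000_000):
    # Sample n points uniformly in the unit square and count those falling
    # inside the circle of radius 0.5 centered at (0.5, 0.5)
    successes = 0
    for _ in range(n):
        x, y = random.random(), random.random()
        if (x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.25:   # distance to center < 0.5
            successes += 1
    return 4 * successes / n                         # s/n estimates pi/4

print(estimate_pi())   # typically close to 3.14 for large n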

A similar idea can be used to obtain estimated values for quite general integrals. The important result here is the mean value theorem for integrals (see Apostol, 1974, for a formal description of this theorem, [APO1]). Essentially what this states is that if you have a single-valued continuous function, f(x)>0, defined over some interval [a,b], there exists a value c ∈ [a,b], such that we can draw a horizontal line at some level f(c)>0 that represents the average value of this function over the range in question (see diagram, below). The area of the rectangle defined by the range [a,b] and this horizontal line, f(c), equals the area between the x-axis and the function, f(x), over the interval, i.e. it is the definite integral of f(x).

Mean value theorem for integrals

More formally we can write:
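∫[a,b] f(x) dx = (b − a) f(c),  for some c ∈ [a,b]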

Notice that the form of f(x) is quite general, and there is no requirement for the integral of f(x) to equal 1, at least not at this stage. We can use the above result, in conjunction with Monte Carlo procedures, to estimate this integral and to estimate the numerical value of more general integrals of the form:
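I = ∫[a,b] h(x) f(x) dx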

where h(x) can also be quite a general function, such as h(x)=x, or h(x)=x², etc., i.e. we can use the procedure to obtain estimates of the moments of probability density functions. To illustrate this idea we can take a very simple asymmetric pdf, such as the Exponential distribution, with mean λ=0.6 (see graph, below). Here we have plotted the distribution over the range [0,5]. We now generate a set of n random numbers, {x}, from a Uniform distribution on [0,5] and evaluate f(x) in each case. The average of these values, for reasonably large n, will give us a good estimate of f(c). We then multiply f(c) by the range, which is 5, to produce an estimate of the area under the curve, which should be very close to 1, and indeed this is the case. It is clear that f(c)≈0.2 in this example.

Exponential distribution, mean value 0.6

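One possible sketch of this calculation (Python again; the mean of 0.6 and the range [0,5] are those used above, while the sample size is an arbitrary choice):

import math
import random

def mc_integral(g, a, b, n=100_000):
    # Estimate the integral of g over [a, b] as (b - a) times the average
    # of g evaluated at n points drawn uniformly from [a, b]
    total = sum(g(a + (b - a) * random.random()) for _ in range(n))
    return (b - a) * total / n

mean = 0.6
def f(x):
    return math.exp(-x / mean) / mean    # Exponential pdf with mean 0.6

print(mc_integral(f, 0, 5))              # should be close to 1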

Now suppose we want to compute estimates of the mean and variance of this distribution in a similar manner. The procedure is the same, but f(x) is replaced with g1(x)=xf(x) and g2(x)=x²f(x) in order to estimate the first two crude moments, and thereby the mean and variance. If we proceed as before to estimate the mean value, we find that a single simulation run of, say, 1000 random uniform values will yield an estimate of the mean close to, but not exactly, 0.6. In fact, repeated runs of such simulations produce a frequency distribution of the parameter λ, as illustrated below (with a fitted Normal distribution also shown). The mean of these values is very close to the true value. Notice, however, that values as low as 0.54 and as high as 0.63 are observed, so a substantial range of estimates remains - the standard deviation of the values is approximately 0.02. With increasing sample size the estimate will converge to the true value, but only slowly.

Exponential distribution, Monte Carlo estimates of the mean value (100 simulations)

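A sketch of this moment estimation might look like the following (Python; the 100 runs of 1000 samples mirror the experiment described above, and the helper function repeats the one shown earlier):

import math
import random

mean = 0.6
def f(x):
    return math.exp(-x / mean) / mean        # Exponential pdf with mean 0.6

def mc_integral(g, a, b, n):
    # (b - a) times the average of g at n points drawn uniformly from [a, b]
    total = sum(g(a + (b - a) * random.random()) for _ in range(n))
    return (b - a) * total / n

# 100 independent runs, each using 1000 uniform samples, estimating the mean
runs = [mc_integral(lambda x: x * f(x), 0, 5, 1000) for _ in range(100)]
avg = sum(runs) / len(runs)
sd = (sum((r - avg) ** 2 for r in runs) / len(runs)) ** 0.5
print(avg, sd)          # average close to 0.6; run-to-run spread of roughly 0.02

# second crude moment from a single larger run, then variance = E[X^2] - E[X]^2
m2 = mc_integral(lambda x: x * x * f(x), 0, 5, 100_000)
print(m2 - avg ** 2)    # close to the true variance of 0.36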

As noted, this approach is not very efficient, but it demonstrates that Monte Carlo methods can be applied to a variety of integration problems, and points to a solution approach that can be applied to more complex problems. Indeed, such methods can be particularly effective if (a) the sampling algorithm can be made more efficient, and (b) the samples can be made to mimic the form of the target distribution, i.e. they represent samples distributed according to the target distribution rather than the Uniform distribution. Both of these objectives can be achieved, with some rather unexpected adjustments to the procedure described above that make use of the properties of ergodic Markov Chains. The procedure is known as the Markov Chain Monte Carlo (MCMC) method, and is described in the following subsection.

References

[APO1] Apostol T M (1974) Mathematical Analysis. Ch. 7-18. Addison-Wesley, Reading, Mass.