The term "Monte Carlo methods" dates back to the 1940s, when Stan Ulam and Nicholas Metropolis were investigating ways of tackling a number of problems in physics that were not amenable to conventional analytical or numerical methods. Ulam coined the term, and Metropolis and colleagues later developed the sampling algorithm underlying what is now known as Markov chain Monte Carlo (MCMC). The essential idea of Monte Carlo methods is that a wide range of problems can be addressed by a casino-like simulation procedure, in which a set of uniform random numbers is generated and then used to determine the attributes or behavior of a function, system, or process of interest. One of the first applications was numerical integration (see the Monte Carlo Integration section, below).

As computing power grew, a huge range of applications blossomed, and Monte Carlo methods rapidly became a central part of the discipline described as computational methods. In statistics, one of the most widespread uses of such methods is in estimation and inference for Bayesian models, using procedures that develop the ideas originally put forward by Metropolis, as noted above. The MCMC approach is also used in other areas of statistical analysis, including imputing missing values in datasets and drawing random samples from complex distributions.
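The core idea sketched above can be illustrated with a minimal example of Monte Carlo integration: uniform random points are drawn over an interval, the integrand is evaluated at each, and the scaled average estimates the integral. The function name `mc_integrate` and the choice of test integrand are illustrative assumptions, not part of the original text.

```python
import random

def mc_integrate(f, a, b, n=100_000, rng=None):
    """Estimate the integral of f over [a, b] by averaging f at
    uniformly sampled points and scaling by the interval width."""
    rng = rng or random.Random()
    total = 0.0
    for _ in range(n):
        # Draw a uniform random point in [a, b] and accumulate f there.
        total += f(rng.uniform(a, b))
    return (b - a) * total / n

# Illustrative check: the integral of x^2 over [0, 1] is exactly 1/3,
# so the estimate should land close to 0.3333 for large n.
estimate = mc_integrate(lambda x: x * x, 0.0, 1.0, rng=random.Random(42))
print(estimate)
```

The estimate converges at a rate of about 1/sqrt(n), so quadrupling the sample size roughly halves the typical error.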