
# Estimation and estimators

Estimators are statistics produced from samples that seek to approximate or estimate the value of a population parameter such as the mean or variance. A number of estimation procedures have been developed over the course of the development of statistical science. These commenced with basic descriptive statistics such as the mean, although the formulation of the least squares method by Gauss two centuries ago is generally regarded as the most important initial development in this field. Karl Pearson introduced the method of moments estimation (sometimes referred to as MOME), which enabled the parameters of selected probability distributions to be estimated from functions of sampled data. However, the principal development of the subject came from R. A. Fisher, with the formal analysis of estimation and bias and the introduction of the notion of maximum likelihood estimation (MLE), which is implemented in many statistical software packages today. The central idea behind the MLE approach is to select the estimated value of a parameter or parameters that maximizes the likelihood of observing the sample data.
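As an illustration of the MLE idea (not drawn from the source), the following sketch estimates the rate parameter of an exponential distribution. The data, the true rate of 2.0 and the grid resolution are all illustrative assumptions; for the exponential the maximum of the likelihood also has a closed form (the reciprocal of the sample mean), which lets us check the search.

```python
import math
import random

random.seed(42)

# Simulated sample from an exponential distribution (illustrative rate)
true_rate = 2.0
sample = [random.expovariate(true_rate) for _ in range(1000)]

def log_likelihood(rate, data):
    """Log-likelihood of an exponential sample: n*log(rate) - rate*sum(x)."""
    return len(data) * math.log(rate) - rate * sum(data)

# Select the parameter value that maximizes the likelihood of the data,
# here by a simple grid search over candidate rates 0.01 .. 10.00
candidates = [0.01 * k for k in range(1, 1001)]
mle_grid = max(candidates, key=lambda r: log_likelihood(r, sample))

# For the exponential, the MLE has a closed form: 1 / sample mean
mle_closed = len(sample) / sum(sample)

print(mle_grid, mle_closed)
```

Both estimates agree to within the grid resolution and lie close to the rate used to generate the data, which is the behavior the MLE principle leads one to expect.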

Fisher introduced the notion that some estimators were inherently better than others, in that they were: (a) unbiased; (b) efficient; and (c) consistent. MLE estimates, under a quite broad range of conditions, can be shown to be unbiased, consistent and the most efficient estimators possible. These terms are briefly explained below:

a) unbiased — an estimator is said to be unbiased if the expected value of the estimator equals the parameter in question. The arithmetic mean of a sample is an unbiased estimator of the population mean. In this context the expected value is defined as in the analysis of moments, as the sum or integral of xf(x). As we have seen in our earlier discussion of the sample variance, the statistic computed with divisor n is not an unbiased estimator of the population variance, and a correction factor of n/(n-1) is required to ensure the estimator is unbiased.

b) efficient — an efficient estimator is one that does not vary greatly from the true population value for a given sample size. In particular, estimators are sought that minimize this possible variation. A commonly used measure of efficiency is the mean squared error (i.e. the mean squared difference between the estimator and the true value, assuming the latter can be determined).

c) consistent — a consistent estimator is one that progressively approaches the population value as the sample size increases.
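The bias of the divisor-n variance, and its n/(n-1) correction, can be seen directly by simulation. This is a minimal sketch, not from the source; the population (Normal with mean 10 and variance 4), the sample size of 5 and the number of trials are all illustrative assumptions.

```python
import random
import statistics

random.seed(7)

true_mean, true_var = 10.0, 4.0  # illustrative population: Normal(10, sd=2)

def biased_var(data):
    """Sample variance with divisor n (biased downward)."""
    m = sum(data) / len(data)
    return sum((x - m) ** 2 for x in data) / len(data)

# Average the two variance estimators over many small samples (n = 5);
# the average approximates each estimator's expected value
trials = 20000
n = 5
biased_avg = 0.0
unbiased_avg = 0.0
for _ in range(trials):
    s = [random.gauss(true_mean, 2.0) for _ in range(n)]
    biased_avg += biased_var(s) / trials
    unbiased_avg += statistics.variance(s) / trials  # divisor n-1

print(biased_avg, unbiased_avg)
```

The divisor-n average settles near 4 × (n-1)/n = 3.2, while the divisor-(n-1) version settles near the true variance of 4, matching the correction factor described above.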

Other terms and approaches used in statistical estimation include:

BLUE: The least squares procedure of Gauss, applied to linear regression models with uncorrelated errors whose expected value is zero and whose variance is constant, is unbiased and is said to provide the Best Linear Unbiased Estimator (BLUE) of the unknown parameters of the regression model (a result often referred to as the Gauss-Markov Theorem).
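To make the least squares procedure concrete, here is a minimal sketch (not from the source) fitting a simple linear model y = a + bx by the closed-form normal equations. The model y = 3 + 2x, the noise level and the sample size are illustrative assumptions chosen so the errors have zero mean and constant variance, as the Gauss-Markov conditions require.

```python
import random

random.seed(1)

# Illustrative linear model: y = 3 + 2x + noise (zero-mean, constant variance)
xs = [i / 10 for i in range(100)]
ys = [3.0 + 2.0 * x + random.gauss(0.0, 0.5) for x in xs]

def ols_fit(x, y):
    """Ordinary least squares for y = a + b*x via the closed-form normal equations."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx          # slope
    a = my - b * mx        # intercept
    return a, b

a_hat, b_hat = ols_fit(xs, ys)
print(a_hat, b_hat)
```

The fitted intercept and slope land close to the values 3 and 2 used to generate the data; under the stated error conditions no other linear unbiased estimator has smaller variance than these least squares estimates.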

MDE: Minimum distance estimation (MDE) is a technique whereby a goodness-of-fit test is used as a means of judging the best parameter or parameters to choose in order to ensure that a sample frequency distribution (the empirical frequency distribution) matches a pre-determined population distribution, e.g. the Poisson or Normal distributions. Parameter selection is made such that the fit is as close as possible, i.e. such that the overall measure of the distance between the empirical and target distributions is minimized.
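A minimal MDE sketch (not from the source) might fit a Poisson parameter by minimizing the sum of squared differences between the empirical frequencies and the Poisson probabilities; the target rate of 4, the sample size and the squared-difference distance measure are all illustrative assumptions (other distance measures are used in practice).

```python
import math
import random
from collections import Counter

random.seed(3)

def poisson_draw(lam):
    """Sample a Poisson variate by Knuth's multiplication method."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Empirical frequency distribution of an illustrative Poisson(4) sample
sample = [poisson_draw(4.0) for _ in range(2000)]
n = len(sample)
empirical = {k: c / n for k, c in Counter(sample).items()}

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def distance(lam):
    """Sum of squared differences between empirical and Poisson frequencies."""
    ks = range(max(sample) + 1)
    return sum((empirical.get(k, 0.0) - poisson_pmf(k, lam)) ** 2 for k in ks)

# Choose the parameter that minimizes the distance, scanning a grid 0.05 .. 10.0
grid = [0.05 * i for i in range(1, 201)]
lam_hat = min(grid, key=distance)
print(lam_hat)
```

The minimum-distance estimate lands close to the rate of 4 used to generate the sample, which is what the MDE criterion is designed to achieve.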

More recently a wider range of estimation techniques and approaches has been adopted, most notably the use of Bayesian estimators and associated MCMC procedures (as, for example, are implemented in the BUGS project). The following sections provide a brief outline of maximum likelihood estimation and Bayesian estimation. Readers wishing to obtain a fuller understanding of these, and related methods, are recommended to study the references provided at the end of each subsection.
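As a small taste of the Bayesian approach (not from the source, and far simpler than the MCMC procedures mentioned above), the conjugate Beta-Binomial model gives a posterior in closed form. The uniform prior and the hypothetical data of 7 successes in 10 trials are illustrative assumptions.

```python
# Conjugate Beta-Binomial update: with a Beta(a, b) prior on a success
# probability and s successes in n trials, the posterior is Beta(a+s, b+n-s).
prior_a, prior_b = 1.0, 1.0      # uniform prior (illustrative)
successes, trials = 7, 10        # hypothetical observed data

post_a = prior_a + successes
post_b = prior_b + trials - successes

posterior_mean = post_a / (post_a + post_b)  # Bayesian point estimate
mle = successes / trials                     # maximum likelihood estimate

print(posterior_mean, mle)
```

The posterior mean (8/12 ≈ 0.667) is pulled slightly toward the prior mean of 0.5 relative to the MLE of 0.7, illustrating how Bayesian estimators combine prior information with the sample data.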