Power and robustness


Power

Power is a term used in a technical sense in statistics to refer to the probability that a test will reject a null hypothesis when it should, i.e. when the hypothesis is false. This is equivalent to requiring that the probability of a Type II error be small; with the probability of a Type II error denoted by β, the power of a test is 1-β. The desired power (often set at 0.8, or 80%), together with the acceptable level of Type I error (typically 0.05, i.e. 5%, or 0.01, i.e. 1%), can be used to help choose the type of test and the sample size. The Neyman-Pearson Lemma emphasizes the importance of tests based on the likelihood ratio as a means of ensuring that the power of a test is as great as possible. In general, parametric tests are regarded as more powerful (in this sense) than non-parametric tests carried out on datasets of similar size.
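The link between power, Type I error and sample size can be illustrated with a small sketch. As an assumed example (not from the text above), take a one-sided, one-sample z-test with known variance: the power for a standardized effect size d at sample size n is 1-Φ(z₁₋α - d√n), and inverting this gives the familiar sample-size formula n = ((z₁₋α + z₁₋β)/d)². The code below uses only the Python standard library:

```python
from math import ceil, sqrt
from statistics import NormalDist

Z = NormalDist()  # standard normal distribution

def power(n, effect, alpha=0.05):
    """Power of a one-sided one-sample z-test (known variance)
    for standardized effect size `effect` and sample size n."""
    z_crit = Z.inv_cdf(1 - alpha)              # rejection threshold under H0
    return 1 - Z.cdf(z_crit - effect * sqrt(n))

def required_n(effect, alpha=0.05, target_power=0.8):
    """Smallest n achieving at least the target power (1 - beta)."""
    z_alpha = Z.inv_cdf(1 - alpha)             # controls Type I error
    z_beta = Z.inv_cdf(target_power)           # controls Type II error
    return ceil(((z_alpha + z_beta) / effect) ** 2)

n = required_n(0.5)                 # a medium standardized effect
print(n, round(power(n, 0.5), 3))  # → 25 0.804
```

With α = 0.05 and a target power of 0.8, a medium effect (d = 0.5) needs roughly 25 observations; halving the effect size roughly quadruples the required n, which is why power calculations are done before data collection.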

Robustness

Robustness is a term that relates to how sensitive a descriptive statistic or test is to extreme values or to its underlying assumptions. Some descriptive statistics, such as the median, the inter-quartile range and the trimmed mean, are more robust than others, such as the arithmetic mean and the range. Likewise, a statistical test or procedure (e.g. regression) is described as robust if it is not especially sensitive to small changes in the data or assumptions, and in particular if the effect of outliers is not too great.
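The contrast between a robust and a non-robust statistic is easy to demonstrate. In this illustrative sketch (the data are invented for the example), a single extreme value drags the arithmetic mean far from the bulk of the data while leaving the median unchanged:

```python
from statistics import mean, median

data = [1, 2, 3, 4, 5, 6, 7, 8, 9]
tainted = data[:-1] + [900]   # one extreme outlier replaces the 9

print(mean(data), median(data))        # both 5
print(mean(tainted), median(tainted))  # mean jumps to 104; median stays 5
```

A single corrupted observation shifts the mean by nearly two orders of magnitude, whereas the median (and, similarly, the inter-quartile range) is unaffected until a substantial fraction of the data is contaminated.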