In the Binomial distribution there are only two possible outcomes, success and failure, with probabilities p and q=1-p. We could denote these outcomes as p1 and p2, with p1+p2=1, and the distribution for n trials determined by the Binomial expansion B(p1,p2,n)=(p1+p2)^n. The multinomial distribution extends this model to the case in which there are more than two mutually exclusive outcomes, for sampling with replacement (see the hypergeometric distribution for sampling without replacement).

The terms of the distribution are defined by the coefficients of the multinomial expansion:

(p1+p2+ ... +pn)^N

The distribution function is defined as:

P(n1,n2, ... ,nn) = [N!/(n1!n2! ... nn!)] p1^n1 p2^n2 ... pn^nn

where the ni are the number of occurrences of each event, with n1+n2+ ... +nn=N. Because of the difficulty of computing exact probabilities for the multinomial, researchers in the past sought effective approximations for the distribution. The most common approach was to consider the limiting form of the distribution as N tends to infinity, with the probabilities p1, p2, ..., pn remaining constant. As Johnson and Kotz [JOH1] have noted, the distribution can be approximated by the expression:

P(n1,n2, ... ,nn) ≈ [2πN]^(-(n-1)/2) [p1p2 ... pn]^(-1/2) e^(-X^2/2)

This expression is particularly interesting because the term X^2 is of the form:

X^2 = Σi (Oi - Ei)^2 / Ei

where Oi is the observed count in group i, and Ei is the expected count, obtained by multiplying the probability of being in that group by the total number of events, N. In 1900 Pearson showed that the distribution of X^2 is approximately chi-square with n-1 degrees of freedom, and on this basis proposed the expression as a general means of testing goodness-of-fit and conducting contingency table analysis. There are actually a number of assumptions involved in this approach, including: the approximation of the true multinomial distribution by the expression above; the approximation of the distribution of X^2 by the chi-square distribution; and the associated impact of the sizes of N and n on the accuracy of these approximations. It has been shown that the accuracy of the approximation decreases as the number of groups increases, and increases as the minimum count in the groups increases. In an investigation of five tests for multinomial proportions, Cai and Krishnamoorthy (2006, [CAI1]) found that the chi-square approximation was surprisingly good for a wide range of cases, notably when there are at least 3 cells and the cell probabilities are not too small, although the results were not as good as those produced by the test of Nass (1959, [NAS1]).
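As a minimal illustration (a Python sketch; the counts and probabilities below are assumed example data, not taken from the text), the exact multinomial probability and Pearson's X^2 statistic can be computed directly:

```python
from math import factorial, prod

def multinomial_pmf(counts, probs):
    """Exact multinomial probability: N!/(n1!n2!...) * p1^n1 * p2^n2 * ..."""
    N = sum(counts)
    coef = factorial(N)
    for n in counts:
        coef //= factorial(n)  # multinomial coefficient (always an exact integer)
    return coef * prod(p ** n for p, n in zip(probs, counts))

def pearson_x2(observed, probs):
    """Pearson's X^2 = sum((Oi - Ei)^2 / Ei), with Ei = N * pi."""
    N = sum(observed)
    return sum((o - N * p) ** 2 / (N * p) for o, p in zip(observed, probs))

# Assumed example data: 60 rolls of a fair die, i.e. n = 6 equiprobable groups
observed = [8, 12, 9, 11, 10, 10]
probs = [1 / 6] * 6

x2 = pearson_x2(observed, probs)
print(multinomial_pmf(observed, probs))
# 11.070 is the 0.95 quantile of chi-square with n-1 = 5 degrees of freedom
print(x2, "fit accepted at 5% level:", x2 < 11.070)
```

For larger problems the same computations are available in standard libraries, e.g. scipy.stats.multinomial and scipy.stats.chisquare.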

Key measures for the distribution are provided in the table below:

| Measure | Value |
|---|---|
| Mean, group i | Npi |
| Variance, group i | Npi(1-pi) |
| MGF | (p1e^t1 + p2e^t2 + ... + pne^tn)^N |

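The mean and variance in the table can be checked with a short simulation (an illustrative Python sketch; N, the probabilities, and the number of trials are assumed for the example):

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def sample_multinomial(N, probs):
    """One multinomial draw: counts of N independent trials over len(probs) groups."""
    groups = random.choices(range(len(probs)), weights=probs, k=N)
    return [groups.count(i) for i in range(len(probs))]

# Assumed example parameters
N, probs = 100, [0.2, 0.3, 0.5]
trials = [sample_multinomial(N, probs) for _ in range(5000)]

# Empirical mean and variance of the count in group 1; theory gives
# Np1 = 100 * 0.2 = 20 and Np1(1 - p1) = 100 * 0.2 * 0.8 = 16
counts1 = [t[0] for t in trials]
mean1 = sum(counts1) / len(counts1)
var1 = sum((c - mean1) ** 2 for c in counts1) / len(counts1)
print(round(mean1, 2), round(var1, 2))
```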
The Multinomial distribution has applications in a number of areas, most notably in random sampling, where data are grouped into a fixed number, n, of groups and the population distribution needs to be estimated, and in the analysis of contingency tables and goodness-of-fit. Truncated forms of the distribution (e.g. with certain groups not observed) and the negative form (the Negative Multinomial, matching the Negative Binomial described elsewhere in this Handbook) are discussed by Johnson and Kotz (1969, [JOH1]).
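For the sampling application, the population proportions are typically estimated by the sample fractions ni/N; a small sketch (with assumed example counts) including the usual large-sample standard errors, which follow from the variance Npi(1-pi):

```python
from math import sqrt

# Assumed example: a random sample of size N grouped into n = 4 categories
counts = [18, 32, 27, 23]
N = sum(counts)

# Maximum-likelihood estimates pi = ni/N, with large-sample standard
# errors sqrt(pi * (1 - pi) / N) since Var(ni/N) = pi(1 - pi)/N
estimates = [(n / N, sqrt((n / N) * (1 - n / N) / N)) for n in counts]
for p_hat, se in estimates:
    print(f"{p_hat:.3f} +/- {se:.3f}")
```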

References

[JOH1] Johnson N L, Kotz S (1969) Discrete distributions. Houghton Mifflin/J Wiley & Sons, New York

[CAI1] Cai Y, Krishnamoorthy K (2006) Exact Size and Power Properties of Five Tests for Multinomial Proportions. Communications in Statistics, Simulation and Computation, 35, 149–160

[NAS1] Nass C A G (1959) The chi-square test for small expectations in contingency tables, with special reference to accidents and absenteeism. Biometrika 46, 365–385

Mathworld/Weisstein E W: Multinomial Distribution: http://mathworld.wolfram.com/MultinomialDistribution.html