Contingency tables
The term contingency table appears somewhat outdated, as modern usage of the word contingency usually refers to planning for some kind of emergency or unexpected event. The term was introduced by Karl Pearson at the start of the 20th century to refer to crosstabulations of data that have been recorded as counts, for example a count of the number of children in a sample with fair or dark hair, tabulated against the color of their mother's hair (dark or fair). This would be a 2x2 contingency table, as illustrated below:
HAIR COLOR

                          Child
                  Fair        Dark        Total
Mother   Fair     a           b           g = a + b
         Dark     c           d           h = c + d
         Total    e = a + c   f = b + d   n = a + b + c + d
Typically the rows and the columns of such tables represent distinct, i.e. mutually exclusive, categories, and the cell entries are the recorded counts. Sometimes the rows or columns are purely nominal categories (possibly defined somewhat subjectively), such as "Fair" and "Dark", or they may be classes derived from a continuous variable, such as "Low", "Medium" and "High" frequency sounds.
A contingency table can be regarded as a form of two-dimensional frequency distribution. The row totals can be seen as one set of frequencies (the row-wise marginal distribution), and the column totals as a second set (the column-wise marginal distribution). If the rows and columns are independent then each individual row-column entry can be estimated from the product of the row and column marginal probabilities (e.g. for cell 1 in the example above, the expected value under the assumption of independence is E = (e/n x g/n) x n = eg/n). The difference between the expected entries, E, under the assumption of independence and the actual or observed cell entries, O, can be used to help determine whether or not there is some form of relationship between the row and column variables. If the result indicates that the assumption of independence does not hold, this suggests that some kind of relationship does exist, but it does not indicate either the nature of the relationship (causality) or its strength. A number of measures of the strength of the association have been devised, but all should be treated as 'indicative'.
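The expected counts under independence, and the resulting chi-square statistic, can be sketched in a few lines of Python. The counts below are purely illustrative (they are not taken from the text):

```python
# Expected cell counts under independence for a 2x2 contingency table.
# Counts are hypothetical, for illustration only.
a, b = 30, 10   # mother fair: child fair, child dark
c, d = 15, 45   # mother dark: child fair, child dark

g, h = a + b, c + d     # row totals
e, f = a + c, b + d     # column totals
n = a + b + c + d       # grand total

# Expected count for each cell: (row total x column total) / n
E = [[g * e / n, g * f / n],
     [h * e / n, h * f / n]]
O = [[a, b], [c, d]]

# Chi-square statistic: sum over all cells of (O - E)^2 / E
chi2 = sum((O[i][j] - E[i][j]) ** 2 / E[i][j]
           for i in range(2) for j in range(2))
```

A large value of the statistic relative to the chi-square distribution (with 1 degree of freedom for a 2x2 table) indicates that the independence assumption is implausible.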
A number of tests have been devised for analyzing data organized in this manner. The most widely used has been the chi-square contingency table test, which is essentially the same as the chi-square goodness-of-fit test discussed earlier. A similar test, which also uses the chi-square distribution and is now the recommended procedure, is the G-test. Both of these methods rely on an approximation to the exact test, which is based on the hypergeometric distribution and is known as Fisher's exact test. Information-theoretic equivalent tests may also be used, as for example described by Kullback (1959, Ch. 8 [KUL1]). Indeed, the G-test is the information-theoretic test of independence in a two-way classification table. For 2x2 tables, tables with small n, and tables with small values in individual cells (<5) that cannot be combined or grouped with other entries, the exact test should be used.
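All three tests mentioned above are available in SciPy (assumed here; the counts are again illustrative). The same function computes both the Pearson chi-square statistic and, via the lambda_ parameter, the log-likelihood-ratio (G-test) statistic:

```python
# Sketch: chi-square test, G-test and Fisher's exact test on a 2x2
# table using SciPy (assumed available); counts are illustrative only.
from scipy.stats import chi2_contingency, fisher_exact

table = [[30, 10],
         [15, 45]]

# Pearson chi-square contingency test (Yates' continuity correction
# disabled so the statistic matches the textbook formula above)
chi2, p_chi2, dof, expected = chi2_contingency(table, correction=False)

# G-test: the same function, using the log-likelihood-ratio statistic
g_stat, p_g, _, _ = chi2_contingency(table, correction=False,
                                     lambda_="log-likelihood")

# Fisher's exact test (based on the hypergeometric distribution),
# preferred for 2x2 tables with small n or small cell counts
odds_ratio, p_exact = fisher_exact(table)
```

For larger tables the same calls apply, but Fisher's exact test in SciPy is limited to the 2x2 case.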
Analysis of more complex contingency tables is possible, for example cases in which counts relating to three or more variables (dimensions) are recorded. For the class of tables in which the rows are a number of samples (representing data relating to one or more independent variables) and the columns are outcomes or responses (dependent variables), with the entries being counts, analysis techniques similar to those applied in analysis of variance may be used. Typically this will employ some form of log-linear model, and is discussed further under the topic Generalized Linear Models (or GLIM) and, more specifically, in the topic on Poisson regression.
References
[KUL1] Kullback S (1959) Information theory and statistics. John Wiley & Sons Inc.