Generalized Linear Models (GLIM)


GLIM was developed by a working group of the Royal Statistical Society under the chairmanship of John Nelder in the 1970s, and was implemented initially as a software package, also called GLIM (no longer available). It provides a form of unified statistical model, incorporating General Linear Models (GLMs) but also models that deal with count data, proportion data and binary data. GLIM is particularly useful where the errors are not Normally distributed (e.g. the distribution of errors is very skewed and/or peaky) and/or where the variance is not constant (as is the case with the Poisson and Gamma distributions). Somewhat confusingly, the R function that supports the GLIM functionality is glm().

In broad terms the GLIM approach consists of three components: a probability distribution (also known as the error structure) from the exponential family (which includes the Normal, Poisson and Binomial); a linear predictor (i.e. some form of linear model of the type described above, without the error component); and a link function, which associates the mean of the distribution with the linear predictor. If this link function is the Identity function, the GLIM is simply a general linear model (GLM), i.e. the GLM is a special case of the GLIM approach.

Expressed in matrix algebraic terms, the most general form of the standard linear model (the general linear model or GLM) can be represented by the matrix equation:

Y = XB + E

where Y is the matrix of response data, X the matrix of predictor values (the design matrix), B the matrix of parameters to be determined, and E the matrix of errors.
This is the expression we discussed in the introductory section on statistical modeling. The variant of this form most familiar in regression modeling has a single dependent variable, y, and a vector of parameters to be determined, β, with an error vector, ε. The matrix expression that models the observed data is then of the form:

y = Xβ + ε
and the expected value of y is:

E(y) = Xβ
The generalized linear model version of this is very similar, but includes a link function, g(), such that:

g(E(y)) = Xβ, i.e. E(y) = g⁻¹(Xβ)
The term Xβ is known as the linear predictor, and is essentially the linear regression function. If it is assumed that the data are drawn from a Normal distribution and the link function is the Identity function, the model is standard linear regression and can be fitted using least squares or maximum likelihood methods, as noted previously.
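As a minimal illustration of this special case (Normal errors, Identity link), the pure-Python sketch below fits a simple linear regression by ordinary least squares; the function name ols_fit and the sample data are illustrative, not taken from any library:

```python
# Ordinary least squares for simple linear regression: the GLIM with a
# Normal error structure and an Identity link. Pure-Python sketch.

def ols_fit(x, y):
    """Return (intercept, slope) minimising the sum of squared errors."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope

x = [1.0, 2.0, 3.0, 4.0]
y = [3.1, 4.9, 7.2, 8.8]   # roughly y = 1 + 2x, with small errors
b0, b1 = ols_fit(x, y)
```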

If the data are binary, the proportion, p, of 1s (or 0s) is most commonly modeled using logistic regression, for which the link function is the logit transform:

g(p) = ln(p/(1−p)) = Xβ
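The logit link g(p) = ln(p/(1−p)) and its inverse (which maps the linear-predictor scale back to a proportion in (0, 1)) can be sketched in a few lines of pure Python; the function names here are illustrative:

```python
import math

def logit(p):
    """Link function g(p) = ln(p / (1 - p))."""
    return math.log(p / (1.0 - p))

def inv_logit(eta):
    """Inverse link: p = 1 / (1 + exp(-eta))."""
    return 1.0 / (1.0 + math.exp(-eta))

p = 0.8
eta = logit(p)         # value on the linear-predictor scale
p_back = inv_logit(eta)  # recovers the original proportion
```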
If the data are counts, and the Poisson model is deemed appropriate, then Poisson regression may be used. In this case the link function is simply the (natural) log:

g(μ) = ln(μ)
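To illustrate, the sketch below fits a single-covariate Poisson regression with a log link by iteratively reweighted least squares (Newton scoring) in pure Python; the data and function name are illustrative, and in R the equivalent fit would be obtained with glm(y ~ x, family = poisson):

```python
import math

def fit_poisson(x, y, iters=25):
    """Fit ln(mu) = b0 + b1*x to counts y by Newton scoring."""
    b0 = math.log(sum(y) / len(y))  # start from the intercept-only fit
    b1 = 0.0
    for _ in range(iters):
        mu = [math.exp(b0 + b1 * xi) for xi in x]
        # score vector X'(y - mu) and information matrix X'WX, W = diag(mu)
        s0 = sum(yi - mi for yi, mi in zip(y, mu))
        s1 = sum(xi * (yi - mi) for xi, yi, mi in zip(x, y, mu))
        i00 = sum(mu)
        i01 = sum(xi * mi for xi, mi in zip(x, mu))
        i11 = sum(xi * xi * mi for xi, mi in zip(x, mu))
        det = i00 * i11 - i01 * i01
        # Newton step: increment = (X'WX)^{-1} X'(y - mu)
        b0 += (i11 * s0 - i01 * s1) / det
        b1 += (-i01 * s0 + i00 * s1) / det
    return b0, b1

x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 2.0, 4.0, 8.0]   # exactly 2**x, so the fit is mu = exp(x * ln 2)
b0, b1 = fit_poisson(x, y)
mu = [math.exp(b0 + b1 * xi) for xi in x]
```

For a Poisson model with an intercept, the fitted means reproduce the total observed count (sum(mu) equals sum(y)), which provides a simple check on the fit.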
The GLIM framework thus embraces a number of well-established regression models, but also offers an approach that allows these models to be extended. Since its inception the framework has itself been extended, for example by permitting the linear predictor to be generalized as a sum of smoothing functions (generalized additive models, or GAMs). For more details readers are referred to the definitive text on GLIMs by McCullagh and Nelder (1989, [MCC1]) and chapters 13-18 of Crawley (2007, [CRA1]).


Goodness of fit for GLIMs is generally tested using several different measures, perhaps the most important of which is the deviance. This is a form of comparison between the model applied and a saturated model, that is, a model that exactly fits the data. The formulas used for the deviance vary according to the family of distributions used for the error structure of the model, but for count data it is of the form:

D = 2 Σ O ln(O/E)
where O and E are the observed and expected counts. In this instance the deviance is thus very similar in concept to the Chi-square contingency table test and the G test, and both of these may also be provided as measures of goodness of fit.
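The count-data deviance described above, D = 2 Σ O ln(O/E), can be computed directly; the observed and expected counts below are illustrative (terms with O = 0 contribute zero, by convention):

```python
import math

def deviance(obs, exp):
    """Deviance for count data: D = 2 * sum(O * ln(O / E))."""
    return 2.0 * sum(o * math.log(o / e)
                     for o, e in zip(obs, exp) if o > 0)

obs = [10.0, 20.0, 30.0]   # observed counts
exp = [12.0, 18.0, 30.0]   # expected counts under the fitted model
d = deviance(obs, exp)     # small values indicate a good fit
```

A model that fits the data exactly (the saturated model, O = E throughout) has a deviance of zero.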


[CRA1] Crawley M J (2007) The R Book. J Wiley & Sons, Chichester, UK, 2nd ed 2015

[MCC1] McCullagh P, Nelder J (1989) Generalized Linear Models. 2nd ed., Chapman & Hall/CRC Press, Boca Raton, USA