Probability distributions


Let us now assume that we have a random variable, X, which takes a finite set of values {xi}, and a function, f(xi) ≥ 0 for i = 1, 2, 3, ..., that represents the probability of observing the specific values {xi}. In the frequentist model such a function might be based on observed or simulated frequencies of each value divided by the total number of observations (the sum of all the frequencies). We now consider an event, E, which is a subset of all the {xi}, and define the probability of this event, P(E), as:

P(E) = ∑ f(xi), where the summation extends over all xi in E
f() is called the probability function (or the discrete density function, or the probability mass function, pmf) of the random variable, X. Note that the sum of f(xi) over the entire sample space, S, is 1. The cumulative discrete density function is generally denoted with a capital F, and is simply the running sum of the probabilities f(xi) for each x-value, where the x-values are ordered by size. Thus

F(x) = ∑ f(xi), where the summation extends over all xi ≤ x
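A minimal sketch of these discrete definitions, assuming some illustrative die-roll counts (the data below are invented for the example, not taken from the text): the pmf is built from frequencies divided by the total, an event probability is a sum of f(xi) over a subset, and the cdf is a running sum over the size-ordered x-values.

```python
# Building a discrete pmf and cdf from observed frequencies (frequentist model).
# The counts below are illustrative data, not from the text.
from fractions import Fraction

counts = {1: 3, 2: 5, 3: 2, 4: 4, 5: 1, 6: 5}   # observed frequencies
total = sum(counts.values())

# pmf: frequency of each value divided by the total number of observations
pmf = {x: Fraction(c, total) for x, c in counts.items()}
assert sum(pmf.values()) == 1                    # sums to 1 over the sample space S

# event E is a subset of the {xi}; P(E) is the sum of f(xi) over E
E = {2, 4, 6}                                    # e.g. "result is even"
p_event = sum(pmf[x] for x in E)

# cdf: running sum of the pmf with the x-values ordered by size
cdf = {}
running = Fraction(0)
for x in sorted(pmf):
    running += pmf[x]
    cdf[x] = running
```

Exact fractions are used here so that the "sums to 1" property holds without floating-point slack; with floats the same checks would need a small tolerance.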
If X is a continuous random variable, i.e. if the values of x are not limited to discrete values but may take on an infinite set of values over a finite or infinite range, then very similar notation is used, but with integrals replacing summations. Note that sample data are always discrete; continuous distributions are essentially theoretical constructs. For continuous random variables the discrete probability mass function is replaced with its continuous equivalent, the probability density function (pdf), a function f(x) ≥ 0 satisfying:

∫ f(x)dx = 1, with the integral taken over the full range (domain) of x
The probability that x lies in the range [a,b] is thus:

P(a ≤ x ≤ b) = ∫ f(x)dx, with the integral taken from x = a to x = b
Clearly if a = b the integral is zero, so probabilities for continuous distributions are always defined over a range. If a is the lower limit and b the upper limit of the range over which f(x) is defined, the integral evaluates to 1. Also observe that in this notation the density function, f(x), can take any positive value for a given x, i.e. the y-axis values representing the magnitude of f(x) may exceed 1; what matters is that the area under the density curve across the full range, or domain, of x equals 1.
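The point that a density may exceed 1 while its total area remains 1 can be checked with a simple sketch: the uniform density on [0, 0.5] takes the value 2 everywhere on that interval, yet a crude Riemann-sum approximation of its integral still comes out at 1.

```python
# A density can exceed 1 pointwise as long as the area under it is 1.
# Example: the uniform density on [0, 0.5], where f(x) = 2 on the interval.
def f(x):
    return 2.0 if 0.0 <= x <= 0.5 else 0.0

# crude midpoint Riemann-sum approximation of the integral of f over its domain
n = 100_000
dx = 0.5 / n
area = sum(f((i + 0.5) * dx) * dx for i in range(n))
# area is (approximately) 1 even though f(x) = 2 > 1 on the whole interval
```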

The cumulative distribution function (cdf) is simply:

F(x) = ∫ f(t)dt, with the integral taken from the lower limit of the range up to x
If the cumulative distribution function is known as an analytic expression, the density, if it can be obtained, is simply the derivative dF(x)/dx.
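As a sketch of this cdf-to-pdf relationship, the standard normal cdf can be written analytically via the error function, and its numerical derivative can be compared against the known density; the distribution and the tolerance chosen here are illustrative.

```python
# Recovering the density from an analytic cdf by differentiation,
# checked numerically for the standard normal distribution.
import math

def Phi(x):
    # standard normal cdf, expressed analytically via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):
    # standard normal pdf, i.e. dPhi(x)/dx in closed form
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

# central-difference approximation of dF(x)/dx should match the pdf
h = 1e-5
x = 0.7
numeric = (Phi(x + h) - Phi(x - h)) / (2.0 * h)
# numeric is approximately equal to phi(x)
```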

The definitions above can be readily extended to two or more variables to form discrete and continuous multivariate distributions. The summations and integrations are simply extended over each of the n random variables, giving, for example, in the bivariate continuous case:

∫∫ f(x1,x2)dx1dx2 = 1, with the double integral taken over the full range of x1 and x2
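A small sketch of the discrete bivariate case, using a made-up joint pmf f(x, y): the double summation over both variables must equal 1, and summing the joint probabilities over one variable gives the marginal pmf of the other.

```python
# A made-up discrete bivariate pmf f(x, y); the joint probabilities
# must sum to 1 when the summation runs over both variables.
joint = {
    (0, 0): 0.1, (0, 1): 0.2,
    (1, 0): 0.3, (1, 1): 0.4,
}

total = sum(joint.values())              # double summation over x and y

# marginal pmf of x: sum the joint probabilities over the other variable
marginal_x = {}
for (x, y), p in joint.items():
    marginal_x[x] = marginal_x.get(x, 0.0) + p
```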
In a previous subsection we described joint, marginal and conditional distributions, using a two-way table of frequencies of height data. It can be argued that the sampled heights of fathers and sons derive from a population that has a continuous distribution of heights, over a finite positive range, [a,b], where a is the height of the shortest possible adult male and b that of the tallest; perhaps [0,3] would be a suitable range in meters. Thus the observed discrete distribution of the data can be seen as a sample from a continuous (bivariate) population distribution.

For more details on specific probability distributions, see the Probability Distributions topic later in this Handbook.