The t-test described above can be applied, with minor changes, to the difference between two mean values obtained from independent samples. Typically the hypothesis tested is that the difference between the two means equals some value, d; frequently the hypothesis is d=0 (i.e. the two means are equal). Initially we consider the case where the population standard deviations of the two distributions are equal:
Case 1: σ1=σ2
For this model (the homoskedastic version) the t-statistic to compute is:

t = (m1 − m2 − d) / (s√(1/n1 + 1/n2))
where m1 and m2 are the two sample means from samples of size n1 and n2, and s is an estimate of the standard deviation obtained from the pooled variance estimate:

s² = [(n1 − 1)s1² + (n2 − 1)s2²] / (n1 + n2 − 2)

where s1² and s2² are the two sample variances.
The computed value of t is then compared with the t-distribution, as before, but with n1+n2-2 degrees of freedom.
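As a concrete sketch of the computation above, the pooled (homoskedastic) t-statistic and its degrees of freedom can be written in plain Python (the function name `pooled_t` is illustrative, not from the source):

```python
import math

def pooled_t(sample1, sample2, d=0.0):
    """Two-sample t-statistic assuming equal population variances.

    Tests the hypothesis that mean1 - mean2 = d (d=0 by default).
    Returns (t, degrees of freedom).
    """
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    # unbiased sample variances
    v1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    # pooled variance estimate: weighted average of the two sample variances
    s2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    s = math.sqrt(s2)
    t = (m1 - m2 - d) / (s * math.sqrt(1 / n1 + 1 / n2))
    return t, n1 + n2 - 2
```

The returned t value would then be compared with the t-distribution on n1+n2−2 degrees of freedom, as described above.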
Case 2: σ1≠σ2
For this model (the heteroskedastic version), we just consider the case in which d=0. The t-statistic to compute is:

t = (m1 − m2) / √(s1²/n1 + s2²/n2)
This particular formulation is due to Welch (1947, [WEL1]) and differs from Student's original version as it does not use a pooled estimate of the variance. It requires a value for the degrees of freedom before the computed value of t can be compared with the t-distribution. The formula used for this is:

df = (s1²/n1 + s2²/n2)² / [ (s1²/n1)²/(n1 − 1) + (s2²/n2)²/(n2 − 1) ]
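The Welch statistic and its approximate degrees of freedom can be sketched in the same style (again, `welch_t` is an illustrative name):

```python
import math

def welch_t(sample1, sample2):
    """Welch's two-sample t-statistic for d=0, with unequal variances.

    Returns (t, approximate degrees of freedom); df is generally
    not an integer.
    """
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    # unbiased sample variances
    v1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    # squared standard error of the difference (no pooling)
    se2 = v1 / n1 + v2 / n2
    t = (m1 - m2) / math.sqrt(se2)
    # Welch's degrees-of-freedom approximation
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df
```

Note that when the two samples have equal sizes and equal sample variances, this df formula reduces to n1+n2−2, matching the homoskedastic case.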
Welch's version of the two-sample t-test is now implemented as the standard form in R. Other software packages may provide the Welch version and/or the original Student's form, or a variation in the computation of the p-value (e.g. due to Cochran), and may or may not be clear as to which version is being applied! Many software packages allow the input dataset items to be weighted or to have frequency values associated with them, and adjust the computations accordingly.
Note: if the samples are actually paired rather than independent, the difference between the pairs of values can be computed and then the mean of these differences tested using the t-test for a single mean.
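The paired case noted above amounts to reducing the data to differences and applying the single-mean t-test; a minimal sketch (the function name `paired_t` is illustrative):

```python
import math

def paired_t(before, after):
    """Paired t-test: test whether the mean of the pairwise
    differences is zero. Returns (t, degrees of freedom)."""
    # reduce the paired samples to a single sample of differences
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    m = sum(diffs) / n
    # sample standard deviation of the differences
    s = math.sqrt(sum((d - m) ** 2 for d in diffs) / (n - 1))
    # one-sample t-statistic for the mean difference, n-1 df
    t = m / (s / math.sqrt(n))
    return t, n - 1
```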