T-test
From DrugPedia: A Wikipedia for Drug discovery
A t-test is any statistical hypothesis test in which the test statistic has a Student's t distribution if the null hypothesis is true. It is applied when the population is assumed to be normally distributed but the sample sizes are small enough that the statistic on which inference is based is not normally distributed because it relies on an uncertain estimate of standard deviation rather than on a precisely known value.
Use
Among the most frequently used t tests are:
A test of the null hypothesis that the means of two normally distributed populations are equal. Given two data sets, each characterized by its mean, standard deviation and number of data points, we can use some kind of t test to determine whether the means are distinct, provided that the underlying distributions can be assumed to be normal. All such tests are usually called Student's t tests, though strictly speaking that name should only be used if the variances of the two populations are also assumed to be equal; the form of the test used when this assumption is dropped is sometimes called Welch's t test.
There are different versions of the t test depending on whether the two samples are unpaired, independent of each other (e.g., individuals randomly assigned into two groups, measured after an intervention and compared with the other group), or paired, so that each member of one sample has a unique relationship with a particular member of the other sample (e.g., the same people measured before and after an intervention, or IQ test scores of a husband and wife).
If the calculated p-value is below the threshold chosen for statistical significance (usually the 0.10, 0.05, or 0.01 level), then the null hypothesis, which usually states that the two groups do not differ, is rejected in favor of an alternative hypothesis, which typically states that the groups do differ.
A test of whether the mean of a normally distributed population has a value specified in a null hypothesis.
A test of whether the slope of a regression line differs significantly from 0.
Once a t value is determined, a p-value can be found using a table of values from Student's t-distribution.
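In practice, the table lookup can also be done by evaluating the t-distribution directly. A minimal sketch, assuming SciPy is available; the t value and degrees of freedom below are made-up numbers used only to show the call:

from scipy import stats

t_value = 2.31           # hypothetical t statistic
degrees_of_freedom = 10  # hypothetical degrees of freedom

# sf(x) is the survival function 1 - cdf(x); doubling it gives a two-tailed p-value
p_value = 2 * stats.t.sf(abs(t_value), degrees_of_freedom)
print(f"two-tailed p-value: {p_value:.4f}")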
Assumptions
Normal distribution of data, tested by using a normality test such as the Shapiro-Wilk or Kolmogorov-Smirnov test.
Equality of variances, tested by using either the F test, the more robust Levene's test, Bartlett's test, or the Brown-Forsythe test (a sketch of these checks follows this list).
Samples may be independent or dependent, depending on the hypothesis and the type of samples:
Independent samples are usually two randomly selected groups.
Dependent samples are either two groups matched on some variable (for example, age) or the same people being tested twice (called repeated measures).
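The assumption checks listed above map onto standard routines in a statistics library. A minimal sketch, assuming SciPy is available; the two sample arrays are invented purely for illustration:

import numpy as np
from scipy import stats

# hypothetical data: two independent groups of six measurements each
group1 = np.array([35.0, 50.0, 90.0, 78.0, 62.0, 55.0])
group2 = np.array([67.0, 46.0, 86.0, 91.0, 70.0, 58.0])

# normality of each group (Shapiro-Wilk); small p-values suggest non-normality
print("Shapiro-Wilk, group 1:", stats.shapiro(group1))
print("Shapiro-Wilk, group 2:", stats.shapiro(group2))

# equality of variances: Levene's test (robust) and Bartlett's test
print("Levene:", stats.levene(group1, group2))
print("Bartlett:", stats.bartlett(group1, group2))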
Since all calculations are done subject to the null hypothesis, it may be very difficult to come up with a reasonable null hypothesis that accounts for equal means in the presence of unequal variances. In the usual case, the null hypothesis is that the different treatments have no effect — this makes unequal variances untenable. In this case, one should forgo the ease of using this variant afforded by the statistical packages. See also Behrens-Fisher problem.
One scenario in which it would be plausible to have equal means but unequal variances is when the 'samples' represent repeated measurements of a single quantity, taken using two different methods. If systematic error is negligible (e.g. due to appropriate calibration) the effective population means for the two measurement methods are equal, but they may still have different levels of precision and hence different variances.
Determining type
For novices, the most difficult issue is often whether the samples are independent or dependent. Independent samples typically consist of two groups with no relationship. Dependent samples typically consist of a matched sample (or a "paired" sample) or one group that has been tested twice (repeated measures).
Dependent t-tests are also used for matched-paired samples, where two groups are matched on a particular variable. For example, if we examined the heights of men and women in a relationship, the two groups are matched on relationship status. This would call for a dependent t-test because it is a paired sample (one man paired with one woman). Alternatively, we might recruit 100 men and 100 women, with no relationship between any particular man and any particular woman; in this case we would use an independent samples test.
Another example of a matched sample would be to take two groups of students, match each student in one group with a student in the other group based on an achievement test result, and then examine how much each student reads. An example pair might be two students that scored 90 and 91, or two students that scored 45 and 40, on the same test. The hypothesis would be that students that did well on the test may or may not read more. Alternatively, we might recruit students with low scores and students with high scores in two groups and assess their reading amounts independently.
An example of a repeated measures t-test would be if one group were pre- and post-tested. (This example occurs in education quite frequently.) If a teacher wanted to examine the effect of a new set of textbooks on student achievement, (s)he could test the class at the beginning of the year (pretest) and at the end of the year (posttest). A dependent t-test would be used, treating the pretest and posttest as matched variables (matched by student).
Calculations
Independent one-sample t-test
This equation is used to compare one sample mean to a specific value \mu_0:

t = \frac{\bar{x} - \mu_0}{s / \sqrt{n}}

where s is the sample standard deviation and n is the sample size. The number of degrees of freedom used in this test is n − 1.
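As a rough illustration of the arithmetic, the statistic can be computed directly from the formula; the sample values and the reference value mu0 below are invented for the example:

import math

sample = [5.1, 4.9, 5.6, 5.3, 4.8, 5.2]  # hypothetical measurements
mu0 = 5.0                                # hypothesized population mean

n = len(sample)
x_bar = sum(sample) / n
# sample standard deviation, with n - 1 in the denominator
s = math.sqrt(sum((x - x_bar) ** 2 for x in sample) / (n - 1))

t = (x_bar - mu0) / (s / math.sqrt(n))
print(f"t = {t:.3f} with {n - 1} degrees of freedom")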
Independent two-sample t-test
Equal sample sizes, equal variance
This equation is only used when both:
- the two sample sizes (that is, the n or number of participants of each group) are equal;
- it can be assumed that the two distributions have the same variance.
Violations of these assumptions are discussed below.
The t statistic to test whether the means are different can be calculated as follows:

t = \frac{\bar{X}_1 - \bar{X}_2}{S_{X_1 X_2} \cdot \sqrt{2/n}}, \qquad S_{X_1 X_2} = \sqrt{\frac{s_{X_1}^2 + s_{X_2}^2}{2}}

Here S_{X_1 X_2} is the grand standard deviation (or pooled standard deviation), the subscripts 1 and 2 denote group one and group two, and s_{X_1}^2 and s_{X_2}^2 are the unbiased estimators of the two sample variances. The denominator of t is the standard error of the difference between the two means. For significance testing, the number of degrees of freedom for this test is 2n − 2, where n is the number of participants in each group.
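A minimal sketch of this calculation in Python; the two equally sized groups are invented for illustration:

import math
import statistics

group1 = [20.5, 22.1, 19.8, 21.0, 20.2, 21.7]
group2 = [23.0, 22.4, 24.1, 23.5, 22.8, 23.9]
n = len(group1)  # both groups must have the same size for this formula

# unbiased sample variances and the pooled (grand) standard deviation
s1_sq = statistics.variance(group1)
s2_sq = statistics.variance(group2)
pooled_sd = math.sqrt((s1_sq + s2_sq) / 2)

t = (statistics.mean(group1) - statistics.mean(group2)) / (pooled_sd * math.sqrt(2 / n))
print(f"t = {t:.3f} with {2 * n - 2} degrees of freedom")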
Unequal sample sizes, equal variance
This equation is used only when it can be assumed that the two distributions have the same variance. (When this assumption is violated, see below.) The t statistic to test whether the means are different can be calculated as follows:

t = \frac{\bar{X}_1 - \bar{X}_2}{S_{X_1 X_2} \cdot \sqrt{\frac{1}{n_1} + \frac{1}{n_2}}}, \qquad S_{X_1 X_2}^2 = \frac{(n_1 - 1)\, s_{X_1}^2 + (n_2 - 1)\, s_{X_2}^2}{n_1 + n_2 - 2}

where S_{X_1 X_2}^2 is the unbiased estimator of the common variance of the two samples, n_1 and n_2 are the numbers of participants in group one and group two, n_i − 1 is the number of degrees of freedom for either group, and the total sample size minus two (n_1 + n_2 − 2) is the total number of degrees of freedom, which is used in significance testing.
The statistical significance level associated with the t value calculated in this way is the probability that, under the null hypothesis of equal means, the absolute value of t could be that large or larger just by chance; in other words, it is a two-tailed test, testing whether the means are different when, if they are, either one may be the larger (see Press et al., 1999, p. 616).
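A minimal sketch of the pooled-variance calculation with unequal group sizes, together with a cross-check against the library routine that assumes equal variances; the data are invented and SciPy is assumed to be available:

import math
import statistics
from scipy import stats

group1 = [12.1, 13.4, 11.8, 12.9, 13.1]              # n1 = 5
group2 = [14.0, 13.2, 14.8, 13.9, 14.4, 13.7, 14.1]  # n2 = 7

n1, n2 = len(group1), len(group2)
s1_sq, s2_sq = statistics.variance(group1), statistics.variance(group2)

# pooled (unbiased) estimate of the common variance
pooled_var = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)
t = (statistics.mean(group1) - statistics.mean(group2)) / math.sqrt(pooled_var * (1 / n1 + 1 / n2))
df = n1 + n2 - 2
p = 2 * stats.t.sf(abs(t), df)  # two-tailed p-value
print(f"t = {t:.3f}, df = {df}, p = {p:.4f}")

# cross-check with SciPy's equal-variance two-sample test
print(stats.ttest_ind(group1, group2, equal_var=True))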
Dependent t-test
This equation is used when the samples are dependent; that is, when there is only one sample that has been tested twice (repeated measures) or when there are two samples that have been matched or "paired":

t = \frac{\bar{X}_D - \mu_0}{s_D / \sqrt{N}}

For this equation, the differences between all pairs must be calculated. The pairs are either one person's pre-test and post-test scores or pairs of persons matched into meaningful groups (for instance, drawn from the same family or age group: see the tables below). The average (\bar{X}_D) and standard deviation (s_D) of those differences are used in the equation. The constant \mu_0 is non-zero if you want to test whether the average of the differences is significantly different from \mu_0. The number of degrees of freedom used is N − 1. A worked sketch using the repeated-measures table follows the tables below.
Example of repeated measures:

| Number | Name     | Test 1 | Test 2 |
|--------|----------|--------|--------|
| 1      | Mike     | 35%    | 67%    |
| 2      | Melanie  | 50%    | 46%    |
| 3      | Melissa  | 90%    | 86%    |
| 4      | Mitchell | 78%    | 91%    |
Example of matched pairs:

| Pair | Name  | Age | Test |
|------|-------|-----|------|
| 1    | Jon   | 35  | 250  |
| 1    | Jane  | 36  | 340  |
| 2    | Jimmy | 22  | 460  |
| 2    | Jessy | 21  | 200  |
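A minimal worked sketch of the paired calculation, using the repeated-measures table above (the Test 1 and Test 2 scores of the four students); SciPy is assumed only for the cross-check:

import math
import statistics
from scipy import stats

test1 = [35, 50, 90, 78]  # Mike, Melanie, Melissa, Mitchell (Test 1, %)
test2 = [67, 46, 86, 91]  # the same students on Test 2 (%)

differences = [b - a for a, b in zip(test1, test2)]
N = len(differences)
mean_diff = statistics.mean(differences)
sd_diff = statistics.stdev(differences)  # uses N - 1 in the denominator

mu0 = 0  # test whether the average difference departs from zero
t = (mean_diff - mu0) / (sd_diff / math.sqrt(N))
print(f"t = {t:.3f} with {N - 1} degrees of freedom")

# cross-check with the paired-samples routine
print(stats.ttest_rel(test2, test1))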
Original source
This article was originally posted on Wikipedia.