There are two kinds of errors that might result when a significance test is
applied. They go by completely uninformative names:
- A "Type I error" is rejecting the null hypothesis when it is true.
- A "Type II error" is not rejecting the null hypothesis when it is false.
If the cutoff point for significance (the "alpha level", usually 5%) is moved
lower (say to 2%), a type I error becomes less likely, but a type II error becomes
more likely. If the cutoff is moved up (say to 10%), the reverse is true.
Example: Suppose a coin is flipped 10 times, and 8 heads result. With the
null hypothesis (N.H.) of a fair coin, the probability of a count of 8 or more
is [C(10,8)+C(10,9)+C(10,10)]/2^10 = 56/1024, or
about 5.5%. So:
- with a cutoff of 10%, we would reject the N.H.,
- but with a cutoff of 5%, we would not reject it.
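For readers who want to check the arithmetic, here is a short Python sketch that computes the tail probability exactly, using the standard library's math.comb:

```python
from math import comb

# P(8 or more heads in 10 flips of a fair coin)
# = [C(10,8) + C(10,9) + C(10,10)] / 2^10
tail = sum(comb(10, k) for k in range(8, 11)) / 2**10
print(f"P(X >= 8) = {tail:.4f}")   # 0.0547, i.e. about 5.5%
```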
Therefore:
- if the coin is fair,
  - the 10% test makes a type I error, but
  - the 5% test yields the correct answer;
- while if the coin is unfair,
  - the 10% test yields the correct answer, but
  - the 5% test makes a type II error.
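The trade-off described above can also be seen by simulation. The sketch below repeats the 10-flip experiment many times and estimates how often each cutoff commits each kind of error; the unfair coin's heads probability of 0.8 is an arbitrary choice for illustration, not something fixed by the example.

```python
import random
from math import comb

def p_value(heads, flips=10):
    """Exact P(count >= heads) for `flips` tosses of a fair coin."""
    return sum(comb(flips, k) for k in range(heads, flips + 1)) / 2**flips

def rejection_rate(true_p, alpha, trials=100_000):
    """Fraction of simulated 10-flip experiments in which the test rejects the N.H."""
    rejections = 0
    for _ in range(trials):
        heads = sum(random.random() < true_p for _ in range(10))
        if p_value(heads) <= alpha:
            rejections += 1
    return rejections / trials

# Fair coin: every rejection is a type I error.
print("type I rate,  alpha = 10%:", rejection_rate(0.5, 0.10))      # about 0.055
print("type I rate,  alpha =  5%:", rejection_rate(0.5, 0.05))      # about 0.011

# Unfair coin (heads probability 0.8, chosen only for illustration):
# every failure to reject is a type II error.
print("type II rate, alpha = 10%:", 1 - rejection_rate(0.8, 0.10))  # about 0.32
print("type II rate, alpha =  5%:", 1 - rejection_rate(0.8, 0.05))  # about 0.62
```

As the printed estimates suggest, lowering the cutoff from 10% to 5% makes a type I error less likely but a type II error more likely, exactly the trade-off described above.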