The power of a significance test is technically defined to be "the probability that it will reject a false null hypothesis" (or, in other words, the probability of not making a type II error).
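
To make that definition concrete, here is a minimal sketch (in Python, assuming NumPy and SciPy are available) that estimates the power of a one-sample t-test by simulation: it posits a hypothetical "true" effect size, generates many samples from that world, runs the test on each one, and counts how often the null hypothesis gets rejected. The function name `estimate_power` and the particular effect size, sample size, and significance level are illustrative assumptions, not part of the definition.

```python
import numpy as np
from scipy import stats

def estimate_power(effect_size=0.5, sample_size=30, alpha=0.05,
                   iters=10_000, seed=1):
    """Estimate the power of a one-sample t-test by simulation.

    effect_size: the assumed true mean (in standard-deviation units);
        this is the quantity we normally have no way of knowing.
    sample_size: number of observations per simulated experiment.
    alpha: significance threshold for rejecting the null hypothesis.
    iters: number of simulated experiments.
    """
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(iters):
        # Draw a sample from the hypothetical world where the true mean
        # is effect_size rather than 0.
        sample = rng.normal(loc=effect_size, scale=1.0, size=sample_size)
        # Test the (false, by construction) null hypothesis that the mean is 0.
        _, p_value = stats.ttest_1samp(sample, popmean=0.0)
        if p_value < alpha:
            rejections += 1
    # The fraction of rejections approximates the power under these assumptions.
    return rejections / iters

print(estimate_power())  # about 0.75 with the assumed values above
```

Notice that the answer depends entirely on the effect size we assumed: with a smaller effect, the estimated power drops.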

But it is usually difficult to compute that probability, because doing so requires knowing things we have no way of knowing, such as the true size of the effect we are trying to detect. So it is probably not useful to think of the power of a test as a number, but rather as a desirable quality of the test that we might want to increase. How, then, can we increase the power of a significance test? Here are two ways: