The power of a significance test is technically defined as
"the probability that it will reject a false null hypothesis"
(or, in other words, the probability of not making a type II
error).
But it is usually difficult to compute that probability, because doing
so requires specifying exactly how the null hypothesis is false -- that
is, the true value of the parameter -- which is precisely what we do not
know. So it is often more useful to think of the power of a test not as
a single number, but as a good quality of the test that we might want
to enhance. But how can we increase the power of a significance test?
Here are two ways:
- Pick a larger "alpha-level" (instead of the usual 5%), so that more often
an experimental result will be considered significant evidence against the
null hypothesis. But of course this would make a type I error (rejecting the
null hypothesis when it is true) more likely.
- Use a larger sample for the experiment. For example, in a z-test,
that would make the SE smaller -- because the denominator in the
formula SE = SD/sqrt(sample size) would be larger -- so the z-value would be
larger in absolute value -- because the denominator in the formula
z = (x - EV)/SE would be smaller -- so the p-value would be smaller, i.e.,
more likely to be significant. Of course, this might be giving a
"spurious power" to your test; the apparently convincing p-value may
discourage careful review of your experimental method for error or bias.
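Both effects can be checked with a short simulation. The sketch below assumes a one-sided z-test of H0: EV = 0 against one specific alternative value mu_alt; the function name and the particular parameter values are illustrative choices, not taken from the discussion above:

```python
import math
import random
from statistics import NormalDist

def power_sim(mu_alt, sd, n, alpha, trials=20000, seed=0):
    """Estimate the power of a one-sided z-test of H0: EV = 0
    against the specific alternative EV = mu_alt, by simulation."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha)  # one-sided critical value
    se = sd / math.sqrt(n)                    # SE = SD/sqrt(sample size)
    rejections = 0
    for _ in range(trials):
        # Draw a sample mean from its sampling distribution under the alternative.
        xbar = rng.gauss(mu_alt, se)
        z = (xbar - 0) / se                   # z = (x - EV)/SE, with EV = 0 under H0
        if z > z_crit:
            rejections += 1
    return rejections / trials

p_base = power_sim(mu_alt=0.5, sd=2.0, n=25, alpha=0.05)
p_more_n = power_sim(mu_alt=0.5, sd=2.0, n=100, alpha=0.05)      # larger sample
p_more_alpha = power_sim(mu_alt=0.5, sd=2.0, n=25, alpha=0.10)   # larger alpha
```

Holding everything else fixed, raising n from 25 to 100 or raising alpha from 0.05 to 0.10 each increases the estimated power -- the first by shrinking the SE, the second by lowering the bar for significance (at the cost of more type I errors).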