## What is a Power Analysis?

Power is the probability of correctly rejecting the null hypothesis when it is, in fact, false. In other words, we are trying to avoid making a Type II error. We conduct an a priori power analysis in order to determine how many subjects or participants we will need in our sample so that we can be reasonably confident that, when we say we find a statistically significant difference, that difference is real. Ideally, an a priori power analysis should be conducted for each hypothesis.

Power is determined by three factors: sample size, alpha level, and effect size.

A power analysis is a way of estimating the sample size you will need before beginning your study. Since the size of your sample will determine the power (sensitivity) of your test, it is important to understand the basics of power analysis.

Smaller samples will tend to have higher sampling error (some error is inherent in all samples) because sample statistics vary more from sample to sample. Generally, increasing the size of your sample will boost power.
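To make that relationship concrete, here is a minimal sketch (my own illustration, not from any particular software package) using the normal approximation to a two-sided, two-sample test; the effect size of 0.5 and the sample sizes are assumed values chosen for the example:

```python
from statistics import NormalDist
from math import sqrt

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample test for a
    standardized effect size d, via the normal approximation."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    # How far the alternative hypothesis sits from the rejection cutoff
    shift = d * sqrt(n_per_group / 2)
    return NormalDist().cdf(shift - z_crit)

# A moderate effect (d = 0.5) at two sample sizes:
print(approx_power(0.5, 20))   # roughly 0.35 -- underpowered
print(approx_power(0.5, 80))   # roughly 0.89 -- adequately powered
```

Quadrupling the per-group sample size here takes power from about 35% to about 89%, which is why sample size planning matters.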

In order to conduct an a priori power analysis, you first need to know:

The type of test you plan to use (independent-samples t-test, ANOVA, regression, etc.)

The significance level (alpha) you are using (usually .05)

The expected effect size

The desired level of power (commonly .80), which lets you solve for the sample size you need
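Putting those ingredients together, a minimal sketch of an a priori sample-size calculation (using the normal approximation; dedicated tools such as G*Power or statsmodels use exact t-distribution methods and will give slightly larger numbers):

```python
from statistics import NormalDist
from math import ceil

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided,
    two-sample comparison (normal approximation to the t-test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# Moderate effect (d = 0.5), alpha = .05, power = .80:
print(n_per_group(0.5))  # 63 per group (exact t-test methods give ~64)
```

Note how the required sample size grows quickly as the expected effect shrinks: halving the effect size roughly quadruples the n you need.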

There are several factors you must take into consideration:

The purpose of your study:

Are you aiming to describe, compare, explain, or predict?

The type of study you are conducting:

What type of study design will you use in your study? Will you be conducting an experiment or collecting self-report data from surveys?

Who your population of interest is:

Does your study design require a random sample?

What is the cost if your sample is not representative?

## What is statistical significance?

The probability value, or p-value, you get from your test is compared to a critical value, or alpha level, that you set ahead of time. Both range from zero to 1.0; the lower your resulting p-value, the less likely it is that you would see a difference this large by chance alone. Since we typically set alpha to .05, a p-value of less than .05 would be considered statistically significant.
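That comparison can be sketched in a few lines (an illustration assuming a standard normal null distribution; the observed test statistic of 2.3 is made up for the example):

```python
from statistics import NormalDist

def two_sided_p_value(z):
    """p-value for an observed z statistic in a two-sided test,
    assuming a standard normal null distribution."""
    return 2 * (1 - NormalDist().cdf(abs(z)))

alpha = 0.05
p = two_sided_p_value(2.3)
print(p)           # about 0.021
print(p < alpha)   # True -- statistically significant at the .05 level
```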

## What is effect size?

Even if your test results are significant, they may not be meaningful. This is where effect size comes in. Effect size tells us about the magnitude of the difference between groups. It is calculated by taking the difference between the means of the two groups you are comparing and dividing it by the pooled standard deviation (this is Cohen's d). Generally, the breakdown (Cohen, 1988) is as follows (though it can vary somewhat):

0.2 = small effect

0.5 = moderate effect

0.8 = large effect
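The calculation can be sketched as follows (the two groups of scores are made-up data for illustration):

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(group1, group2):
    """Cohen's d: difference in group means divided by the
    pooled standard deviation of the two groups."""
    n1, n2 = len(group1), len(group2)
    pooled_sd = sqrt(((n1 - 1) * stdev(group1) ** 2 +
                      (n2 - 1) * stdev(group2) ** 2) / (n1 + n2 - 2))
    return (mean(group1) - mean(group2)) / pooled_sd

# Hypothetical scores for a treatment and a control group:
treatment = [5.1, 6.2, 5.8, 6.5, 5.9]
control   = [4.8, 5.0, 5.4, 4.6, 5.2]
print(cohens_d(treatment, control))  # about 2.1 -- well past 0.8, a large effect
```

An estimate like this, taken from pilot data or prior studies, is what you would feed into the a priori power analysis as the expected effect size.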