A Type I error occurs when the null hypothesis (\(H_0\)) is true, but we reject it based on our sample data. The probability of making a Type I error is denoted by \(\alpha\) and is typically set at 0.05 or 0.01. In terms of conditional probability:
\[P(\text{Type I error}) = P(\text{reject } H_0 \mid H_0 \text{ is true}) = \alpha\]
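We can see this definition in action with a small simulation sketch (not part of the original slides): we repeatedly draw samples from a population where \(H_0: \mu = 0\) is actually true, run a one-sample t-test at \(\alpha = 0.05\), and count how often we wrongly reject \(H_0\). The sample size and number of simulations below are arbitrary choices for illustration; the observed rejection rate should come out close to \(\alpha\).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05
n_sims, n = 10_000, 30

false_rejections = 0
for _ in range(n_sims):
    sample = rng.normal(loc=0.0, scale=1.0, size=n)      # H0: mu = 0 is true
    _, p_value = stats.ttest_1samp(sample, popmean=0.0)
    if p_value < alpha:                                   # Type I error: rejecting a true H0
        false_rejections += 1

print(f"Estimated Type I error rate: {false_rejections / n_sims:.3f} (alpha = {alpha})")
```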
A Type II error occurs when the null hypothesis (\(H_0\)) is false, but we fail to reject it based on our sample data. The probability of making a Type II error is denoted by \(\beta\). The power of a test is \(1 - \beta\), which represents the probability of correctly rejecting a false null hypothesis. In terms of conditional probability:
\[P(\text{Type II error}) = P(\text{fail to reject } H_0 \mid H_0 \text{ is false}) = \beta\]
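A companion sketch (again an assumed setup, not from the slides) estimates \(\beta\) directly: here the true mean is \(\mu = 0.5\), so \(H_0: \mu = 0\) is false. Each time the test fails to reject \(H_0\) we commit a Type II error; the observed miss rate estimates \(\beta\) at this particular alternative, and \(1 - \beta\) estimates the power.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
alpha = 0.05
n_sims, n = 10_000, 30
true_mu = 0.5                                             # the alternative we simulate under

misses = 0
for _ in range(n_sims):
    sample = rng.normal(loc=true_mu, scale=1.0, size=n)
    _, p_value = stats.ttest_1samp(sample, popmean=0.0)   # still testing H0: mu = 0
    if p_value >= alpha:                                   # Type II error: failing to reject a false H0
        misses += 1

beta = misses / n_sims
print(f"Estimated beta: {beta:.3f}, estimated power: {1 - beta:.3f}")
```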
It is worth noting that, for a fixed sample size, the probabilities of Type I and Type II errors are inversely related: if we decrease the probability of a Type I error by lowering the significance level (\(\alpha\)), we increase the probability of a Type II error (\(\beta\)), and vice versa. It is therefore important to choose an appropriate significance level based on the consequences of making each type of error.
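A small illustrative calculation (an assumed one-sided z-test setup, not from the slides) makes the trade-off concrete: with the effect size and sample size held fixed, lowering \(\alpha\) raises the rejection threshold, which in turn raises \(\beta\). The effect size 0.5 and \(n = 30\) are arbitrary values chosen only to show the direction of the relationship.

```python
from scipy.stats import norm

effect_size, n = 0.5, 30                 # standardized effect and sample size
noncentrality = effect_size * n ** 0.5   # mean of the test statistic under H1

for alpha in (0.10, 0.05, 0.01):
    z_crit = norm.ppf(1 - alpha)                 # one-sided rejection threshold
    beta = norm.cdf(z_crit - noncentrality)      # P(fail to reject | H1 true)
    print(f"alpha = {alpha:.2f} -> beta = {beta:.3f}, power = {1 - beta:.3f}")
```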
The next few slides will show what typical Type I and Type II errors look like.