You calculate a P-value by computing the probability, under the null hypothesis, of obtaining data as extreme as or more extreme than what was actually observed in favor of the alternative. When you communicate a P-value, the reader can perform the test at whatever Type I error rate they would like: just compare the P-value to the desired Type I error rate, and if the P-value is smaller, reject the null hypothesis.
Formally, the P-value is the probability of getting data as extreme as or more extreme than the observed data in favor of the alternative, where the probability calculation is done assuming that the null is true. In other words, if we get a very large T statistic, the P-value answers the question: how likely would it be to get a statistic this large or larger if the null were actually true? If the answer is "very unlikely", that is, if the P-value is very small, it sheds doubt on the null being true, since we actually observed a statistic that extreme.
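As a minimal sketch with invented numbers: suppose a one-sided test produced a T statistic of 2.5 on 15 degrees of freedom (both values hypothetical, for illustration only). The P-value is the upper-tail probability of the t distribution, and the reader can then compare it to any Type I error rate they like:

pValue <- pt(2.5, df = 15, lower.tail = FALSE)  # P(T >= 2.5) if the null were true
pValue <= 0.05  # reject at a 5% Type I error rate?
[1] TRUE
pValue <= 0.01  # reject at a 1% Type I error rate?
[1] FALSE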
Idea: Suppose nothing is going on; how unusual is it to see the estimate we got?
Approach: Define the distribution of a summary statistic under the null hypothesis ("nothing is going on"), calculate the statistic from the data we have, and see how extreme the observed value is under that distribution.
For example, suppose we observe 7 successes out of 8 independent trials and want to test the null that the success probability is 0.5 against the alternative that it is greater. The P-value is the probability of 7 or more successes under the null:
choose(8, 7) * .5 ^ 8 + choose(8, 8) * .5 ^ 8  # P(X = 7) + P(X = 8) for X ~ Binomial(8, .5)
[1] 0.03515625
pbinom(6, size = 8, prob = .5, lower.tail = FALSE)  # equivalently, P(X > 6) = P(X >= 7)
[1] 0.03515625
Here, if we were testing that null against the one-sided alternative, we would reject at a 5% level. We would also reject at a 4% level, but we would not reject at a Type I error rate of 3%, since the P-value (about 0.035) is larger than 0.03.
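Equivalently, R's built-in exact binomial test reproduces this one-sided P-value:

binom.test(7, n = 8, p = 0.5, alternative = "greater")$p.value  # exact test of p = 0.5 vs p > 0.5
[1] 0.03515625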
As a second example, suppose a hospital had 10 infections over 100 person-days at risk (an observed rate of 0.1), and that a rate of 0.05 infections per person-day is an important benchmark. Could an observed rate this much larger than 0.05 be attributed to chance? Under the null the rate is 0.05, so over 100 person-days at risk the expected count is 0.05 × 100 = 5 infections, and the P-value is the probability of obtaining 10 or more infections:
ppois(9, 5, lower.tail = FALSE)  # P(X >= 10) for X ~ Poisson(5)
[1] 0.03182806
In R we plug in 9 (that is, 10 - 1) because with lower.tail = FALSE, ppois returns P(X > 9) = P(X >= 10); the second argument, 5, is the Poisson mean under the null. We use the upper tail because we want 10 or more infections, not fewer than 10.
So the results show that there is only about a 3% chance of seeing 10 or more infections if the true rate were the benchmark of 5 per 100 person-days at risk. Since the observed count would be that unlikely under the benchmark, this hospital should perhaps execute its planned quality control procedures.
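Equivalently, R's exact Poisson test takes the observed count, the person-days at risk, and the benchmark rate directly and gives the same P-value:

poisson.test(10, T = 100, r = 0.05, alternative = "greater")$p.value  # exact test of rate = 0.05 vs rate > 0.05
[1] 0.03182806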