At regular intervals I find myself reading a power/sample size calculation and wishing the researchers had done it differently. For example, they’ve quoted the effect size they can detect with 80% power and I want to know what it would be for 90% power, or vice versa.

Here are some conversion tables.

First, effect size: the power you have is the row, the power you want is the column, and the table entry is what you multiply the quoted effect size by, keeping the sample size fixed. The entries come from the usual normal approximation: the effect size detectable with power \(1-\beta\) at two-sided level \(\alpha\) is \(\delta=(z_{1-\alpha/2}+z_{1-\beta})\times \textrm{se}\), so at a fixed sample size the conversion factor is just a ratio of \(z_{1-\alpha/2}+z_{\textrm{power}}\) terms, and since \(\textrm{se}\propto 1/\sqrt{n}\), the sample-size factor is its square.

have <- c(0.8, 0.85, 0.9)
want <- c(0.5, 0.75, 0.8, 0.85, 0.9)
## factor to multiply the detectable effect size by, at the same sample size
ratio <- outer(have, want, function(h, w) (qnorm(.975) - qnorm(1 - w)) / (qnorm(.975) - qnorm(1 - h)))
rownames(ratio) <- have * 100
colnames(ratio) <- want * 100
round(ratio, 2)
##      50   75   80   85   90
## 80 0.70 0.94 1.00 1.07 1.16
## 85 0.65 0.88 0.93 1.00 1.08
## 90 0.60 0.81 0.86 0.92 1.00
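
As a quick sanity check, base R's power.t.test should reproduce the 80-to-90 factor almost exactly at a sample size like this, where the t and normal quantiles barely differ:

d80 <- power.t.test(n = 300, power = 0.80, sd = 1)$delta
d90 <- power.t.test(n = 300, power = 0.90, sd = 1)$delta
d90/d80  ## about 1.16, matching the 80-row, 90-column entry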

Now, sample size: same rows and columns, and (by the formula above) each entry is just the square of the corresponding effect-size factor.

round(ratio^2, 2)
##      50   75   80   85   90
## 80 0.49 0.88 1.00 1.14 1.34
## 85 0.43 0.77 0.87 1.00 1.17
## 90 0.37 0.66 0.75 0.85 1.00
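
Again this can be checked against power.t.test; the effect size of 0.25 standard deviations here is arbitrary, since only the ratio of the two sample sizes matters:

n80 <- power.t.test(delta = 0.25, power = 0.80, sd = 1)$n
n90 <- power.t.test(delta = 0.25, power = 0.90, sd = 1)$n
n90/n80  ## about 1.34, matching the 80-row, 90-column entry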

That is, if a sample size of 300 gives you 80% power for an effect size of 2 units, a sample size of \(300\times 1.34=\) 402 is needed for 90% power at that same effect size, or with the same sample size of 300 you have 90% power for a larger effect size of \(2\times 1.16=\) 2.32 units.
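
Or, reading the numbers straight off the unrounded matrix:

300 * ratio["80", "90"]^2  ## about 402: sample size for 90% power, same effect size
2 * ratio["80", "90"]      ## about 2.31: detectable effect at 90% power, same n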

These calculations are only for a two-sided 0.05 or one-sided 0.025 test, because that’s what people do. If you’re looking at a Bayesian sample size calculation or one for a genome-wide association study, you’ll need to actually do the work yourself.
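
For the fixed-threshold frequentist case, though, the work is only one line; here effect_ratio is just a name I'm making up, and the same normal approximation as above is assumed:

effect_ratio <- function(have, want, alpha = 0.05) {
  (qnorm(1 - alpha/2) + qnorm(want)) / (qnorm(1 - alpha/2) + qnorm(have))
}
effect_ratio(0.8, 0.9)                ## 1.157, the 80-to-90 factor from the table
effect_ratio(0.8, 0.9, alpha = 5e-8)  ## about 1.07 at a genome-wide threshold

At stricter thresholds the \(z_{1-\alpha/2}\) term dominates, so the conversion factors are closer to 1.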