This is problem set #3, in which we want you to integrate your knowledge of data wrangling with some basic simulation skills and some linear modeling.
For ease of reading, please separate your answers from our text by marking your text with the > character (indicating quotes).
library(tidyverse)
## ── Attaching packages ────────────────────
## ✔ ggplot2 3.0.0 ✔ purrr 0.2.5
## ✔ tibble 1.4.2 ✔ dplyr 0.7.6
## ✔ tidyr 0.8.2 ✔ stringr 1.3.1
## ✔ readr 1.1.1 ✔ forcats 0.3.0
## Warning: package 'ggplot2' was built under R version 3.4.4
## Warning: package 'tidyr' was built under R version 3.4.4
## Warning: package 'purrr' was built under R version 3.4.4
## Warning: package 'dplyr' was built under R version 3.4.4
## Warning: package 'stringr' was built under R version 3.4.4
## ── Conflicts ──── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
Let’s start by convincing ourselves that t-tests have the appropriate false positive rate. Run 10,000 t-tests with standard, normally-distributed data from a made-up 30-person, single-measurement experiment (the command for sampling from a normal distribution is rnorm). What’s the mean number of “significant” results?
First do this using a for loop.
p <- numeric(10000) #Create empty vector to hold the results
for (i in 1:10000) { #Run 10,000 simulated experiments
  p[i] <- t.test(rnorm(30))$p.value < 0.05 #Run a t-test and record whether p < .05
}
sum(p)/10000 #Calculate proportion of the 10,000 trials that were significant
## [1] 0.053
Next, do this using the replicate function:
p <- replicate(10000, t.test(rnorm(30))$p.value < 0.05) #Run the t-test 10,000 times, recording whether p < .05
sum(p)/10000 #Calculate proportion of 10,000 trials that were significant
## [1] 0.0487
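Since the tidyverse is loaded anyway, an equivalent purrr-based sketch of the same simulation (mean() of a logical vector gives the proportion directly) would be:
p <- map_lgl(1:10000, ~ t.test(rnorm(30))$p.value < 0.05) #Same simulation via purrr
mean(p) #Proportion of significant results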
Ok, that was a bit boring. Let’s try something more interesting: let’s implement a p-value sniffing simulation, in the style of Simmons, Nelson, & Simonsohn (2011).
Consider this scenario: you have done an experiment, again with 30 participants (one observation each, just for simplicity). The question is whether their performance is above chance. You aren’t going to check the p-value after every participant, but let’s say you run 30 and then test: if the p-value falls in the range .05 < p < .25, you optionally run 30 more participants, add those data, and test again. But if the original p-value is < .05, you call it a day, and if it is > .25, you also stop.
First, write a function that implements this sampling regime.
double.sample <- function() {
  x <- rnorm(30) #Initial sample of 30 observations
  sig <- t.test(x)$p.value #p value from the first test
  if (sig < .25 & sig > .05) { #If p is between .05 and .25, double the sample
    new <- c(x, rnorm(30)) #Add 30 more observations
    sig <- t.test(new)$p.value #Re-test on the combined data
  }
  return(sig) #Output the final p value
}
Now call this function 10k times and find out what happens.
output <- replicate(10000, double.sample()<0.05)
sum(output)/10000
## [1] 0.073
Is there an inflation of false positives? How bad is it?
Yes, there is an inflation of false positives: the actual Type 1 error rate is now around 0.07, noticeably higher than the nominal 0.05.
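As a rough check that this inflation is not just simulation noise, the binomial standard error of the estimated rate can be computed (a quick sketch; the exact 0.073 value will vary a little from run to run):
p_hat <- sum(output)/10000 #Estimated false positive rate from the run above
se <- sqrt(p_hat * (1 - p_hat) / 10000) #Binomial standard error with 10,000 simulations
c(p_hat - 2 * se, p_hat + 2 * se) #Approximate 95% interval; clearly above 0.05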
Now modify this code so that you can investigate this “double the sample” rule in a bit more depth. Let’s see what happens when you double the sample ANY time p > .05 (not just when p < .25), or when you do it only if p < .5 or < .75. How do these choices affect the false positive rate?
HINT: Try to do this by making the function double.sample take the upper p value as an argument, so that you can pass this through dplyr.
HINT 2: You may need more samples. Find out by looking at how the results change from run to run.
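One way to act on HINT 2, still using the original double.sample() above, is to repeat the whole 10,000-run simulation a few times and look at how much the estimate moves between runs (a minimal sketch):
runs <- replicate(5, mean(replicate(10000, double.sample() < 0.05))) #Five full repeats of the simulation
runs #If these vary a lot, increase the number of simulations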
double.sample <- function(max) { #Make the upper p value an argument
  x <- rnorm(30)
  sig <- t.test(x)$p.value
  if (sig < max & sig > .05) { #The argument "max" sets the upper bound for doubling
    new <- c(x, rnorm(30))
    sig <- t.test(new)$p.value
  }
  return(sig) #Output the final p value
}
#What happens when you double the sample ANY time p > .05
output1 <- replicate(100000, double.sample(1)<0.05)
sum(output1)/100000
## [1] 0.08416
#What happens when you double the sample only if p < .5
output2 <- replicate(100000, double.sample(.5)<0.05)
sum(output2)/100000
## [1] 0.07985
#What happens when you double the sample only if p < .75
output3 <- replicate(100000, double.sample(.75)<0.05)
sum(output3)/100000
## [1] 0.08171
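As the hint suggests, the upper cutoff can also be swept in a single dplyr/purrr pipeline; the sketch below assumes the double.sample() defined above, and the column name is just illustrative:
cutoffs <- tibble(max = c(0.25, 0.5, 0.75, 1)) %>%
  mutate(false.positive.rate = map_dbl(max, ~ mean(replicate(10000, double.sample(.x) < 0.05))))
cutoffs #One false positive rate per stopping rule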
What do you conclude on the basis of this simulation? How bad is this kind of data-dependent policy?
These simulations indicate that the Type 1 error rate is inflated beyond the nominal 5% whenever you decide whether to continue sampling based on the p-value you initially obtained. The looser the condition for resampling (e.g., doubling whenever .05 < p < 1 rather than only when .05 < p < .25), the more inflated the error rate becomes. While any one of these data-dependent policies may not seem catastrophic on its own, if many researchers adopt them they will cumulatively lead to (and have led to) pretty severe ramifications for the field.