This is problem set #3, in which we want you to integrate your knowledge of data wrangling with some basic simulation skills and some linear modeling.

For ease of reading, please separate your answers from our text by marking our text with the > character (indicating quotes).

library(tidyverse)
## ── Attaching packages ────────────────────────────────── tidyverse 1.2.1 ──
## ✔ ggplot2 3.1.0     ✔ purrr   0.2.5
## ✔ tibble  1.4.2     ✔ dplyr   0.7.7
## ✔ tidyr   0.8.2     ✔ stringr 1.3.1
## ✔ readr   1.1.1     ✔ forcats 0.3.0
## ── Conflicts ───────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag()    masks stats::lag()

Let’s start by convincing ourselves that t-tests have the appropriate false positive rate. Run 10,000 t-tests with standard, normally-distributed data from a made-up 30-person, single-measurement experiment (the command for sampling from a normal distribution is rnorm). What’s the mean number of “significant” results?

First do this using a for loop.

# Count how many of the 10,000 null t-tests come out "significant" at p < .05.
significance_test1 = 0

for (i in 1:10000) {
  p_value = t.test(rnorm(30))$p.value
  if (p_value < .05) significance_test1 = significance_test1 + 1
}
print(significance_test1)
## [1] 509

Approximately 5% of the tests (509/10,000) come out significant, matching the nominal false positive rate.

Next, do this using the replicate function:

significance_test2 = replicate(10000, t.test(rnorm(30))$p.value <.05)

sum(significance_test2)
## [1] 529

Again, approximately 5% of the tests (529/10,000) come out significant.
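
As an aside (a minimal sketch, not required by the prompt): because replicate returns a logical vector here, mean gives the false positive rate directly as a proportion rather than a count.

# Proportion of "significant" results; the exact value varies a bit run to run.
mean(replicate(10000, t.test(rnorm(30))$p.value < .05))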

Ok, that was a bit boring. Let’s try something more interesting - let’s implement a p-value sniffing simulation, in the style of Simmons, Nelson, & Simonsohn (2011).

Consider this scenario: you have done an experiment, again with 30 participants (one observation each, just for simplicity). The question is whether their performance is above chance. You aren’t going to check the p-value after every participant, but let’s say you test after running 30: if the p-value falls in the range .05 < p < .25, you optionally run 30 more participants, add those data, and test again. But if the original p-value is < .05, you call it a day, and if it is > .25, you also stop.

First, write a function that implements this sampling regime.

# Simulate one experiment with optional doubling of the sample:
# run 30 participants, and if .05 < p < .25, run 30 more and re-test.
double.sample <- function() {
  samples = rnorm(30)
  p_value = t.test(samples)$p.value
  if (p_value < .25 & p_value > .05) {
    samples = c(samples, rnorm(30))
    p_value = t.test(samples)$p.value
  }
  return(p_value)
}

Now call this function 10k times and find out what happens.

double_sample = replicate(10000, double.sample() < .05)
sum(double_sample)
## [1] 769

Is there an inflation of false positives? How bad is it?

Yes, there is fairly bad inflation: the false positive rate is now about 7.7% (769/10,000) rather than the nominal 5%.

Now modify this code so that you can investigate this “double the sample” rule in a bit more depth. Let’s see what happens when you double the sample ANY time p > .05 (not just when p < .25), or when you do it only if p < .5 or < .75. How do these choices affect the false positive rate?

HINT: Try to do this by making the function double.sample take the upper p value as an argument, so that you can pass this through dplyr.

HINT 2: You may need more samples. Find out by looking at how the results change from run to run.

# Same sampling regime, but with the upper p-value threshold for doubling
# the sample passed in as an argument.
double.sample = function(upper_p_value) {
  samples = rnorm(30)
  p_value = t.test(samples)$p.value
  if (p_value < upper_p_value & p_value > .05) {
    samples = c(samples, rnorm(30))
    p_value = t.test(samples)$p.value
  }
  return(p_value)
}
double_sample = replicate(10000, double.sample(.75) < .05)
sum(double_sample)
## [1] 781
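
To compare several doubling rules at once, as the prompt and the first hint suggest, one option is to map double.sample over a vector of upper thresholds using the already-loaded tidyverse. This is just a sketch: the helper name fp_rate, the particular thresholds, and n_sims are illustrative choices, not part of the assignment.

# Illustrative helper: estimated false positive rate for one doubling rule,
# based on n_sims simulated experiments.
fp_rate = function(upper_p_value, n_sims = 10000) {
  mean(replicate(n_sims, double.sample(upper_p_value) < .05))
}

# upper = 1 corresponds to doubling the sample ANY time p > .05.
tibble(upper = c(.25, .5, .75, 1)) %>%
  mutate(false_positive_rate = map_dbl(upper, fp_rate))

Per the second hint, rates estimated this way bounce around from run to run, so it can be worth increasing n_sims (or repeating the whole comparison) to check that the pattern is stable.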

What do you conclude on the basis of this simulation? How bad is this kind of data-dependent policy?

This kind of data-dependent policy substantially inflates the false positive rate: even a single optional doubling of the sample pushes it from the nominal 5% to roughly 7-8% in these simulations.