Psych 251 PS4: Simulation + Analysis

Author: Mike Frank

Published: December 31, 2018

This is problem set #4, in which we want you to integrate your knowledge of data wrangling with some basic simulation skills. It’s a short problem set to help consolidate your ggplot2 skills and then help you get your feet wet in testing statistical concepts through “making up data” rather than consulting a textbook or doing math.

For ease of reading, please separate your answers from our text by marking our text with the > character (indicating quotes).

library(tidyverse) 
── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
✔ dplyr     1.1.4     ✔ readr     2.1.5
✔ forcats   1.0.0     ✔ stringr   1.5.1
✔ ggplot2   3.5.1     ✔ tibble    3.2.1
✔ lubridate 1.9.3     ✔ tidyr     1.3.1
✔ purrr     1.0.2     
── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
✖ dplyr::filter() masks stats::filter()
✖ dplyr::lag()    masks stats::lag()
ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
library(knitr) 
library(readxl) 
library(dplyr)
library(ggplot2)

Part 1: ggplot practice

This part is a warmup; it should be relatively straightforward ggplot2 practice.

Load data from Frank, Vul, Saxe (2011, Infancy), a study in which we measured infants’ looking to hands in moving scenes. There were infants from 3 months all the way to about two years, and there were two movie conditions (Faces_Medium, in which kids played on a white background, and Faces_Plus, in which the backgrounds were more complex and the people in the videos were both kids and adults). An eye-tracker measured children’s attention to faces. This version of the dataset only gives two conditions and only shows the amount of looking at hands (other variables were measured as well).

fvs <- read_csv("data/FVS2011-hands.csv")
Rows: 232 Columns: 4
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr (1): condition
dbl (3): subid, age, hand.look

ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.

First, use ggplot to plot a histogram of the ages of children in the study. NOTE: this is a repeated measures design, so you can’t just take a histogram of every measurement.

# Keep one age per subject
fvs_unique <- fvs %>%
  distinct(subid, age) 

# Plot histogram
ggplot(fvs_unique, aes(x = age)) +
  geom_histogram(binwidth = 1, color = "black", fill = "skyblue") +
  scale_x_continuous(
    breaks = 0:max(fvs_unique$age),
    labels = 0:max(fvs_unique$age)
  ) +
  labs(title = "Histogram of Child Ages",
       x = "Age (in months)",
       y = "Count") +
  theme_classic()

Second, make a scatter plot showing hand looking as a function of age and condition. Add appropriate smoothing lines. Take the time to fix the axis labels and make the plot look nice.

ggplot(fvs, aes(x = age, y = hand.look, color = condition)) +
  geom_point(alpha = 0.6) +                            
  geom_smooth(method = "lm", se = TRUE) +  
  geom_vline(xintercept = 7, linetype = "dashed", linewidth = 0.5) +   
  scale_x_continuous(
    breaks = 0:max(fvs_unique$age),
    labels = 0:max(fvs_unique$age)
  ) +
  labs(title = "Hand Looking as a Function of Age",
       x = "Age (in months)",
       y = "Hand Looking Time",
       color = "Condition") +
  theme_classic()
`geom_smooth()` using formula = 'y ~ x'

What do you conclude from this pattern of data?

The data are fairly scattered overall. In general, older children show more hand-looking. For children under about 7 months (the dashed line in the plot), the Faces_Medium condition (stimuli on a plain white background) shows somewhat higher hand-looking than Faces_Plus (more complex backgrounds with both children and adults). After 7 months this trend appears to reverse, but the confidence bands of the two linear fits overlap considerably until around 12 months, at which point a clearer pattern emerges: children in the Faces_Plus condition show noticeably longer hand-looking times than those in Faces_Medium.

What statistical analyses would you perform here to quantify these differences?

To quantify the differences in hand-looking time as a function of age and condition, while accounting for the repeated-measures design with multiple observations per child, I would use a linear mixed-effects model. In this model, the dependent variable would be hand.look, with age, condition, and their interaction (age × condition) included as fixed effects. To account for the repeated measures within participants, I would include random intercepts for each child.
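A minimal sketch of that model, assuming the lme4 package is available (it is not loaded above); the object name hand_model is just illustrative:

library(lme4)

# hand.look predicted by age, condition, and their interaction (fixed effects),
# with a random intercept per child (subid) to account for the repeated measures
hand_model <- lmer(hand.look ~ age * condition + (1 | subid), data = fvs)
summary(hand_model)

With only one observation per condition per child, a random intercept is likely as much random-effect structure as these data can support.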

Part 2: Simulation

library(tidyverse)

Let’s start by convincing ourselves that t-tests have the appropriate false positive rate. Run 10,000 t-tests with standard, normally-distributed data from a made up 30-person, single-measurement experiment (the command for sampling from a normal distribution is rnorm).

The goal of these t-tests is to determine, based on 30 observations, whether the underlying distribution (in this case a normal distribution with mean 0 and standard deviation 1) has a mean that is different from 0. In reality, the mean is not different from 0 (we sampled from a mean-0 normal using rnorm), but sometimes the 30 observations we get in our experiment will suggest that the mean is higher or lower. In that case, we’ll get a “significant” result and incorrectly reject the null hypothesis of mean 0.

What’s the proportion of “significant” results (p < .05) that you see?

First do this using a for loop.

set.seed(123)  
n_sim <- 10000
n <- 30
p_values <- numeric(n_sim)

for (i in 1:n_sim) {
  data <- rnorm(n, mean = 0, sd = 1)  # simulate 30 observations under the null
  test <- t.test(data, mu = 0)        # one-sample t-test against a mean of 0
  p_values[i] <- test$p.value         # store the p-value for this simulation
}

Next, do this using the replicate function:

set.seed(123)  

p_values <- replicate(10000, {
  data <- rnorm(n, mean = 0, sd = 1)
  t.test(data, mu = 0)$p.value
})

# proportion of significant results
mean(p_values < 0.05)
[1] 0.0465

How does this compare to the intended false-positive rate of α = 0.05?

The result of 0.0465 is very close to the intended false-positive rate of α = 0.05. Out of 10,000 t-tests where the null hypothesis is true (the true mean is 0), roughly 4.65% of the tests were “significant” by chance, which is almost exactly what we would expect given the 5% Type I error rate.
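As a rough sanity check on run-to-run variation, the Monte Carlo standard error of the estimated rate with 10,000 simulations is about sqrt(.05 × .95 / 10000) ≈ 0.0022, so 0.0465 sits well within sampling noise of 0.05:

# approximate Monte Carlo standard error of the estimated false-positive rate
sqrt(0.05 * 0.95 / 10000)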

Ok, that was a bit boring. Let’s try something more interesting - let’s implement a p-value sniffing simulation, in the style of Simmons, Nelson, & Simonsohn (2011).

Consider this scenario: you have done an experiment, again with 30 participants (one observation each, just for simplicity). The question is whether the true mean is different from 0. You aren’t going to check the p-value every trial, but let’s say you run 30 - then if the p-value is within the range p < .25 and p > .05, you optionally run 30 more and add those data, then test again. But if the original p value is < .05, you call it a day, and if the original is > .25, you also stop.

First, write a function that implements this sampling regime.

double.sample <- function() {
  n1 <- 30
  n2 <- 30
  
  data1 <- rnorm(n1, mean = 0, sd = 1)
  p1 <- t.test(data1, mu = 0)$p.value
  
  if (p1 < 0.05) {
    return(p1)   
  } else if (p1 > 0.25) {
    return(p1)  
  } else {
    data2 <- rnorm(n2, mean = 0, sd = 1)
    combined <- c(data1, data2)
    p2 <- t.test(combined, mu = 0)$p.value
    return(p2)
  }
}

Now call this function 10k times and find out what happens.

set.seed(123)
p_values_seq <- replicate(10000, double.sample())

# proportion of significant results
mean(p_values_seq < 0.05)
[1] 0.0669

Is there an inflation of false positives? How bad is it?

Yes, there is an inflation of false positives. The simulated rate of 0.0669 means the false-positive rate increased from the intended 5% to about 6.7%: higher, but not drastically so. Optional stopping in this scenario modestly inflates the Type I error rate.

Now modify this code so that you can investigate this “double the sample” rule in a bit more depth. In the previous question, the researcher doubles the sample only when they think they got “close” to a significant result, i.e. when their not-significant p is less than 0.25. What if the researcher was more optimistic? See what happens in these 3 other scenarios:

  • The researcher doubles the sample whenever their p-value is not significant, but it’s less than 0.5.
  • The researcher doubles the sample whenever their p-value is not significant, but it’s less than 0.75.
  • The researcher doubles their sample whenever they get ANY p-value that is not significant.

How do these choices affect the false positive rate?

HINT: Try to do this by making the function double.sample take the upper p value as an argument, so that you can pass this through dplyr.

HINT 2: You may need more samples. Find out by looking at how the results change from run to run.

double.sample <- function(p_upper = 0.25) {
  n1 <- 30
  n2 <- 30
  
  data1 <- rnorm(n1, mean = 0, sd = 1)
  p1 <- t.test(data1, mu = 0)$p.value
  
  if (p1 < 0.05) {
    return(p1)  
  } else if (p1 > p_upper) {
    return(p1)   
  } else {
    data2 <- rnorm(n2, mean = 0, sd = 1)
    combined <- c(data1, data2)
    p2 <- t.test(combined, mu = 0)$p.value
    return(p2)
  }
}

set.seed(1)
n_sim <- 10000

# Thresholds to test
thresholds <- c(0.25, 0.5, 0.75, 1.0)  

results <- sapply(thresholds, function(th) {
  p_vals <- replicate(n_sim, double.sample(p_upper = th))
  mean(p_vals < 0.05)
})

results
[1] 0.0703 0.0795 0.0815 0.0839
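Following the first hint, the same sweep can also be written in a dplyr/purrr style (tidyverse is already loaded); sim_results is just an illustrative name, and the code reuses n_sim and double.sample() from above:

sim_results <- tibble(p_upper = c(0.25, 0.5, 0.75, 1.0)) %>%
  mutate(false_positive_rate = map_dbl(
    p_upper,
    ~ mean(replicate(n_sim, double.sample(p_upper = .x)) < 0.05)
  ))

sim_results

Per the second hint, increasing n_sim beyond 10,000 would shrink the run-to-run variability of these estimates.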

What do you conclude on the basis of this simulation? How bad is this kind of data-dependent policy?

The simulation shows that data-dependent sampling inflates the false-positive rate above the nominal 5%. The more aggressively the researcher adds data based on intermediate p-values, the higher the false-positive rate, reaching ~8.4% in the most extreme case. While the inflation is moderate, this kind of policy still biases results and undermines the reliability of p-values.