Psych 251 PS4: Simulation + Analysis

Author

Adani Abutto

Published

November 26, 2023

This is problem set #4, in which we want you to integrate your knowledge of data wrangling with some basic simulation skills. It’s a short problem set to help consolidate your ggplot2 skills and then help you get your feet wet in testing statistical concepts through “making up data” rather than consulting a textbook or doing math.

For ease of reading, please separate your answers from our text by marking our text with the > character (indicating quotes).

Part 1: ggplot practice

This part is a warmup; it should be relatively straightforward ggplot2 practice.

Load data from Frank, Vul, Saxe (2011, Infancy), a study in which we measured infants’ looking to hands in moving scenes. There were infants from 3 months all the way to about two years, and there were two movie conditions (Faces_Medium, in which kids played on a white background, and Faces_Plus, in which the backgrounds were more complex and the people in the videos were both kids and adults). An eye-tracker measured children’s attention to faces. This version of the dataset only gives two conditions and only shows the amount of looking at hands (other variables were measured as well).

library(tidyverse)
── Attaching packages ─────────────────────────────────────── tidyverse 1.3.1 ──
✔ ggplot2 3.4.2     ✔ purrr   1.0.2
✔ tibble  3.2.1     ✔ dplyr   1.1.4
✔ tidyr   1.3.0     ✔ stringr 1.5.1
✔ readr   2.1.2     ✔ forcats 0.5.1
── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
✖ dplyr::filter() masks stats::filter()
✖ dplyr::lag()    masks stats::lag()
fvs <- read_csv("data/FVS2011-hands.csv")
Rows: 232 Columns: 4
── Column specification ────────────────────────────────────────────────────────
Delimiter: ","
chr (1): condition
dbl (3): subid, age, hand.look

ℹ Use `spec()` to retrieve the full column specification for this data.
ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.

First, use ggplot to plot a histogram of the ages of children in the study. NOTE: this is a repeated measures design, so you can’t just take a histogram of every measurement.

fvs_wide <-
  fvs %>%
  pivot_wider(names_from = condition,
              values_from = hand.look) %>%
  arrange(subid) # pivot data set to wide format; one row/age value per child

fvs_wide %>%
  ggplot(aes(x = age)) +
  geom_histogram(binwidth = .5, fill = "#69b3a2", color = "#e9ecef") + # adjust bin width and colors
  ylab("Absolute Frequency") + # adjust y-axis title
  xlab("Age (in months)") + # adjust x-axis title
  scale_x_continuous(breaks = floor(min(fvs_wide$age)):ceiling(max(fvs_wide$age))) + # adjust x-axis ticks/breaks (a separate xlim() call would be overridden by this scale and trigger a warning)
  theme_classic() # remove gridlines
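As a design note, the pivot isn’t strictly necessary for deduplication: dplyr’s distinct() would also yield one row per child. A minimal alternative sketch (assuming each child’s age is identical across their two condition rows, as it is in this dataset):

fvs %>%
  distinct(subid, age) %>% # keep one (subid, age) row per child
  ggplot(aes(x = age)) +
  geom_histogram(binwidth = .5) + # same binning as above
  theme_classic()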

Second, make a scatter plot showing hand looking as a function of age and condition. Add appropriate smoothing lines. Take the time to fix the axis labels and make the plot look nice.

library(paletteer)

fvs %>%
  ggplot(aes(x = age, y = hand.look, color = condition)) +
  geom_point() + # add dots
  geom_smooth(method = "lm", alpha = .15) + # add linear fits with confidence bands, adjust band opacity
  ylim(0, max(fvs$hand.look)) + # adjust limits of y-axis
  scale_x_continuous(limits = c(0, max(fvs$age)),
                     breaks = floor(min(fvs$age)):ceiling(max(fvs$age))) + # set x-axis limits and ticks in a single scale call (a separate xlim() would be overridden)
  labs(colour = "Condition") + # adjust legend title
  ylab("Proportion of Hand Looking") + # adjust y-axis title
  xlab("Age (in months)") + # adjust x-axis title
  theme_classic() + # remove gridlines
  scale_color_manual(values = paletteer_d("ggsci::alternating_igv"),
                     labels = c("Faces Medium", "Faces Plus")) + # change color palette
  theme(legend.position = "top") # position legend at top of plot
`geom_smooth()` using formula = 'y ~ x'

What do you conclude from this pattern of data?

There is a positive relationship between age (in months) and children’s proportion of hand looking. The slopes also appear to differ between conditions (“Faces Medium” vs. “Faces Plus”), suggesting an interaction between age and condition.

What statistical analyses would you perform here to quantify these differences?

A linear regression with age, condition, and their interaction as predictors, and proportion of hand looking as the dependent variable:

summary(lm(hand.look ~ age * condition, # age * condition expands to both main effects plus their interaction
           data = fvs))

Call:
lm(formula = hand.look ~ age * condition, data = fvs)

Residuals:
      Min        1Q    Median        3Q       Max 
-0.190121 -0.036508 -0.001239  0.032091  0.239474 

Coefficients:
                         Estimate Std. Error t value Pr(>|t|)   
(Intercept)              0.022394   0.015417   1.453  0.14772   
age                      0.003143   0.001176   2.673  0.00807 **
conditionFaces_Plus     -0.028440   0.021680  -1.312  0.19089   
age:conditionFaces_Plus  0.003950   0.001653   2.389  0.01770 * 
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.05456 on 228 degrees of freedom
Multiple R-squared:  0.1871,    Adjusted R-squared:  0.1764 
F-statistic: 17.49 on 3 and 228 DF,  p-value: 2.955e-10

The above regression results indicate that age predicts children’s proportion of hand looking, and that the magnitude of this effect depends on condition: the significant age × condition interaction shows that the relationship between age and hand looking is stronger in the “Faces Plus” condition.
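To see the condition-specific slopes directly, one could also fit a separate regression per condition. A minimal sketch (assuming the broom package is installed, since library(tidyverse) does not attach it):

fvs %>%
  group_by(condition) %>%
  group_modify(~ broom::tidy(lm(hand.look ~ age, data = .x))) %>% # fit hand.look ~ age within each condition, tidy the coefficients
  filter(term == "age") # keep only the age slope for each condition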

Part 2: Simulation

library(tidyverse)

Let’s start by convincing ourselves that t-tests have the appropriate false positive rate. Run 10,000 t-tests with standard, normally distributed data from a made-up 30-person, single-measurement experiment (the command for sampling from a normal distribution is rnorm).

The goal of these t-tests is to determine, based on 30 observations, whether the underlying distribution (in this case a normal distribution with mean 0 and standard deviation 1) has a mean that is different from 0. In reality, the mean is not different from 0 (we sampled it using rnorm), but sometimes the 30 observations we get in our experiment will suggest that the mean is higher or lower. In this case, we’ll get a “significant” result and incorrectly reject the null hypothesis of mean 0.
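For instance, a single simulated experiment looks like this (one random draw, so the resulting p-value will differ from run to run):

t.test(rnorm(30)) # one made-up 30-person experiment; tests whether the sample mean differs from 0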

What’s the proportion of “significant” results (\(p < .05\)) that you see?

First do this using a for loop.

n <- 30    # sample size
k <- 10000 # number of simulations
n_sig_results <- 0 # counter for number of t-test results with p ≤ .05

for (i in 1:k) {
  if (t.test(rnorm(n))$p.value <= 0.05) {
    n_sig_results <- n_sig_results + 1 # add 1 to counter if t-test result is p ≤ .05
  }
}

print(paste("Proportion of significant results: ", n_sig_results / k)) # print proportion
[1] "Proportion of significant results:  0.0459"

Next, do this using the replicate function:

p_values <-
  replicate(k, # number of simulations
            t.test(rnorm(n))$p.value) # run t-tests, storing each p-value

print(paste("Proportion of significant results: ",
            mean(p_values <= 0.05))) # proportion of t-test results with p ≤ .05
[1] "Proportion of significant results:  0.0538"

How does this compare to the intended false-positive rate of \(\alpha=0.05\)?

In both cases, the simulated proportions are quite close to the nominal α = .05; they vary slightly above or below the “true” false positive rate from run to run (e.g., .0486, .0546), which is attributable to sampling variability.
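As a rough sanity check on that variability, the binomial standard error of the estimated rate follows from k = 10,000 simulations at α = .05:

sqrt(.05 * (1 - .05) / 10000) # ≈ .0022; most runs should land within about ±2 SEs, i.e., ±.004, of .05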

Ok, that was a bit boring. Let’s try something more interesting: let’s implement a p-value sniffing simulation, in the style of Simmons, Nelson, & Simonsohn (2011).

Consider this scenario: you have done an experiment, again with 30 participants (one observation each, just for simplicity). The question is whether the true mean is different from 0. You aren’t going to check the p-value every trial, but let’s say you run 30 participants; then, if the p-value falls in the range .05 < p < .25, you optionally run 30 more, add those data, and test again. But if the original p-value is < .05, you call it a day, and if the original is > .25, you also stop.

First, write a function that implements this sampling regime.

double.sample <- function(n = 30, p_lower_limit = .05, p_upper_limit = .25) {
  data <- rnorm(n = n) # draw n = 30 data points from a standard normal distribution
  p_value <- t.test(data)$p.value # grab p-value from t-test

  # if the p-value falls between .05 and .25, collect another n = 30
  # data points and test again; otherwise keep the initial p-value
  if (p_value > p_lower_limit & p_value < p_upper_limit) {
    data <- c(data, rnorm(n = n)) # append another n = 30 data points (rbind would produce a matrix, which t.test can't handle)
    p_value <- t.test(data)$p.value # grab p-value from t-test again
  }

  return(p_value) # return the (possibly re-tested) p-value
}

Now call this function 10k times and find out what happens.

k <- 10000

p_values <-
  replicate(n = k, # run 10,000 times
            expr = double.sample(), # call function
            simplify = TRUE)

print(paste("Proportion of significant results: ",
            mean(p_values <= 0.05))) # proportion of t-test results with p ≤ .05
[1] "Proportion of significant results:  0.0749"

Is there an inflation of false positives? How bad is it?

There is a modest inflation of false positives: the false positive rate hovers around .07, i.e., about .02 above the nominal α = .05. As before, the estimate varies slightly each time we run the simulation due to sampling variability.

(Though it’s worth noting that the instructions say to add n = 30 data points each time, not to literally double the sample; I ran my simulations based on this.)

Now modify this code so that you can investigate this “double the sample” rule in a bit more depth. In the previous question, the researcher doubles the sample only when they think they got “close” to a significant result, i.e. when their not-significant p is less than 0.25. What if the researcher was more optimistic? See what happens in these 3 other scenarios:

  • The researcher doubles the sample whenever their p-value is not significant, but it’s less than 0.5.
double.sample.flexible <- function(n = 30, p_lower_limit = .05, p_upper_limit = .5) {
  data <- rnorm(n = n) # draw n = 30 data points from a standard normal distribution
  p_value <- t.test(data)$p.value # grab p-value from t-test

  # if the p-value falls between .05 and the upper limit, double the
  # sample and test again; otherwise keep the initial p-value
  if (p_value > p_lower_limit & p_value < p_upper_limit) {
    data <- c(data, rnorm(n = n)) # append another n = 30 data points, doubling the sample
    p_value <- t.test(data)$p.value # grab p-value from t-test again
  }

  return(p_value) # return the (possibly re-tested) p-value
}

k <- 100000 # run 100,000 simulations instead, for more stable estimates

p_values <-
  replicate(n = k, # run 100,000 times
            expr = double.sample.flexible(), # call function with default upper limit of .5
            simplify = TRUE)

print(paste("Proportion of significant results: ",
            mean(p_values <= 0.05))) # proportion of t-test results with p ≤ .05
[1] "Proportion of significant results:  0.11908"
  • The researcher doubles the sample whenever their p-value is not significant, but it’s less than 0.75.
p_values <-
  replicate(n = k, # run 100,000 times
            expr = double.sample.flexible(p_upper_limit = .75), # raise the upper limit to .75
            simplify = TRUE)

print(paste("Proportion of significant results: ",
            mean(p_values <= 0.05))) # proportion of t-test results with p ≤ .05
[1] "Proportion of significant results:  0.12615"
  • The researcher doubles their sample whenever they get ANY p-value that is not significant.
p_values <-
  replicate(n = k, # run 100,000 times
            expr = double.sample.flexible(p_upper_limit = 1), # double the sample for ANY non-significant p-value
            simplify = TRUE)

print(paste("Proportion of significant results: ",
            mean(p_values <= 0.05))) # proportion of t-test results with p ≤ .05
[1] "Proportion of significant results:  0.12588"

How do these choices affect the false positive rate?

HINT: Try to do this by making the function double.sample take the upper p value as an argument, so that you can pass this through dplyr.

HINT 2: You may need more samples. Find out by looking at how the results change from run to run.
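Following HINT 1, one way to sweep the upper limit “through dplyr” is to map double.sample.flexible over a column of limits. A sketch (the grid of limits and k = 10,000 per cell are my choices here, not prescribed by the problem set):

tibble(p_upper_limit = c(.25, .5, .75, 1)) %>%
  mutate(false_positive_rate = map_dbl(
    p_upper_limit, # for each upper limit...
    ~ mean(replicate(10000, double.sample.flexible(p_upper_limit = .x)) <= .05) # ...estimate the false positive rate
  ))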

What do you conclude on the basis of this simulation? How bad is this kind of data-dependent policy?

I ran my simulations with 100,000 samples. The false positive rate tends to hover around .12, that is, about .07 above the nominal α = .05. In other words, we get more than twice as many false positives (≈12% rather than 5%) even when the bar for continuing to sample is “only” .05 < p < .50. The inflation does not grow substantially past this point, though: even if we loosen the bar for doubling our sample further (e.g., p < .75 or p < 1.0), the rate still hovers around .12.

Fundamentally, this policy is bad because doubling the sample and re-testing whenever the initial result is not significant gives the null hypothesis a second chance to be (incorrectly) rejected; every extra look at the data inflates the overall Type I error rate beyond the nominal α.