’This is problem set #4, in which we want you to integrate your knowledge of data wrangling with some basic simulation skills. It’s a short problem set to help consolidate your ggplot2 skills and then help you get your feet wet in testing statistical concepts through “making up data” rather than consulting a textbook or doing math.

For ease of reading, please separate your answers from our text by marking our text with the > character (indicating quotes).

Part 1: ggplot practice

This part is a warmup; it should be relatively straightforward ggplot2 practice.

Load data from Frank, Vul, Saxe (2011, Infancy), a study in which we measured infants’ looking to hands in moving scenes. There were infants from 3 months all the way to about two years, and there were two movie conditions (Faces_Medium, in which kids played on a white background, and Faces_Plus, in which the backgrounds were more complex and the people in the videos were both kids and adults). An eye-tracker measured children’s attention to faces. This version of the dataset only gives two conditions and only shows the amount of looking at hands (other variables were measured as well). ’

Step 1: Load packages

library(tidyverse) 
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
## ✓ ggplot2 3.3.2     ✓ purrr   0.3.4
## ✓ tibble  3.0.4     ✓ dplyr   1.0.2
## ✓ tidyr   1.1.2     ✓ stringr 1.4.0
## ✓ readr   1.4.0     ✓ forcats 0.5.0
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## x dplyr::filter() masks stats::filter()
## x dplyr::lag()    masks stats::lag()
library(knitr) 

Step 2: Load data

fvs <- read_csv("~/R Mac working folder/class/problem_sets/data/FVS2011-hands.csv")
## 
## ── Column specification ────────────────────────────────────────────────────────
## cols(
##   subid = col_double(),
##   age = col_double(),
##   condition = col_character(),
##   hand.look = col_double()
## )
view(fvs)
glimpse(fvs)
## Rows: 232
## Columns: 4
## $ subid     <dbl> 2, 93, 29, 76, 48, 101, 67, 73, 69, 84, 66, 81, 28, 82, 26,…
## $ age       <dbl> 3.156164, 5.030137, 5.852055, 5.852055, 6.082192, 6.147945,…
## $ condition <chr> "Faces_Medium", "Faces_Medium", "Faces_Medium", "Faces_Medi…
## $ hand.look <dbl> 0.03187500, 0.11885333, 0.09212000, 0.12961111, 0.01378571,…
unique(fvs$subid)
##   [1]   2  93  29  76  48 101  67  73  69  84  66  81  28  82  26  94  79 102
##  [19]  49 112  17  15  47  95  58  60  68  87   6  23 108   3  37  24  38 113
##  [37]  12  78  32  35  80  27 111  19 114  70  46 117  57 118  56  20  13  52
##  [55]  18 104  36  64  97  41   1  43  61  62 119  30   9  42 107 105  83 103
##  [73] 116  11   8   7  33  14  96 109  86  10  25  40  74  75 100  92  16  59
##  [91]  91   4  55  53  54 110  72  51  22  44  71  50  34  45  65  39 115  89
## [109]  21  90  88  85  99  31 106  77  63   5  98

‘First, use ggplot to plot a histogram of the ages of children in the study. NOTE: this is a repeated measures design, so you can’t just take a histogram of every measurement.’

df <- fvs %>%
  group_by(subid) %>%
  summarise(age = mean(age)) # one age per child; avoids duplicate ages across conditions and keeps children missing a condition (filtering on Faces_Medium would drop subids 5 and 105)
## `summarise()` ungrouping output (override with `.groups` argument)
ggplot(df, aes(x = age)) + geom_histogram()
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
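The `stat_bin()` message above suggests picking a binwidth explicitly; a minimal sketch, where the 2-month binwidth is an arbitrary choice:

# Same histogram with an explicit binwidth (2 months is an arbitrary choice)
ggplot(df, aes(x = age)) +
  geom_histogram(binwidth = 2, color = "white") +
  labs(x = "Age (months)", y = "Number of children")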

‘Second, make a scatter plot showing hand looking as a function of age and condition. Add appropriate smoothing lines. Take the time to fix the axis labels and make the plot look nice.’

ggplot(fvs, aes(x = age, y = hand.look, color = condition)) +
  geom_point() +
  geom_smooth(method = "lm") +
  labs(title = "Hand Looking ~ Age and Condition", x = "Age (Months)", y = "Hand Looking Time")
## `geom_smooth()` using formula 'y ~ x'
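A non-parametric smoother laid over the same data gives an informal check on how linear the age trend really is; a minimal sketch using the default loess smoother, with labels mirroring the plot above:

ggplot(fvs, aes(x = age, y = hand.look, color = condition)) +
  geom_point() +
  geom_smooth(method = "loess") + # loess instead of a straight line, as a linearity check
  labs(title = "Hand Looking ~ Age and Condition", x = "Age (Months)", y = "Hand Looking Time")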

‘What do you conclude from this pattern of data?’

ANSWER HERE There appears to be a positive linear relationship between age and hand looking: older children spend more time looking at hands. The slope also looks steeper in the Faces_Plus condition, suggesting that the more complex scenes amplify the increase in hand looking with age.

‘What statistical analyses would you perform here to quantify these differences?’

ANSWER HERE I would first look at the correlations between age and hand looking within each condition, and then fit a linear model with an interaction term, hand.look ~ condition * age, to test whether age, condition, and their interaction significantly predict hand looking. The interaction term is what tells us whether the age effect differs between the two conditions.

d.lm <- lm(hand.look ~ condition * age, fvs) # interaction model for the relationships described above
summary(d.lm)
## 
## Call:
## lm(formula = hand.look ~ condition * age, data = fvs)
## 
## Residuals:
##       Min        1Q    Median        3Q       Max 
## -0.190121 -0.036508 -0.001239  0.032091  0.239474 
## 
## Coefficients:
##                          Estimate Std. Error t value Pr(>|t|)   
## (Intercept)              0.022394   0.015417   1.453  0.14772   
## conditionFaces_Plus     -0.028440   0.021680  -1.312  0.19089   
## age                      0.003143   0.001176   2.673  0.00807 **
## conditionFaces_Plus:age  0.003950   0.001653   2.389  0.01770 * 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.05456 on 228 degrees of freedom
## Multiple R-squared:  0.1871, Adjusted R-squared:  0.1764 
## F-statistic: 17.49 on 3 and 228 DF,  p-value: 2.955e-10
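As a first pass at the correlations mentioned above, the age/hand-looking association can also be summarized within each condition; a minimal sketch (not part of the original analysis):

# Pearson correlation between age and hand looking, computed separately for each condition
fvs %>%
  group_by(condition) %>%
  summarise(r = cor(age, hand.look),
            p = cor.test(age, hand.look)$p.value)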

Part 2: Simulation

library(tidyverse)

’Let’s start by convincing ourselves that t-tests have the appropriate false positive rate. Run 10,000 t-tests with standard, normally-distributed data from a made up 30-person, single-measurement experiment (the command for sampling from a normal distribution is rnorm).

The goal of these t-tests is to determine, based on 30 observations, whether the underlying distribution (in this case a normal distribution with mean 0 and standard deviation 1) has a mean that is different from 0. In reality, the mean is not different from 0 (we sampled it using rnorm), but sometimes the 30 observations we get in our experiment will suggest that the mean is higher or lower. In this case, we’ll get a “significant” result and incorrectly reject the null hypothesis of mean 0.

What’s the proportion of “significant” results (\(p < .05\)) that you see?

First do this using a for loop.’

# Exploratory: independent two-group t-test on the real data
ttest <- t.test(fvs$hand.look ~ fvs$condition) # hand.look is numeric, condition is a two-level factor; tests whether mean hand looking differs between conditions (it does, so we can reject the null)

names(ttest)
##  [1] "statistic"   "parameter"   "p.value"     "conf.int"    "estimate"   
##  [6] "null.value"  "stderr"      "alternative" "method"      "data.name"
ttest$statistic
##         t 
## -2.613564
x <- rnorm(30, mean = 0, sd = 1) # re-run this chunk to see how a single simulated sample (and its histogram) changes from draw to draw

hist(x)

t.test(x)
## 
##  One Sample t-test
## 
## data:  x
## t = 0.48609, df = 29, p-value = 0.6306
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
##  -0.2485308  0.4034996
## sample estimates:
##  mean of x 
## 0.07748438
#solutions from julie:


set.seed(55)

nsamples <- 10000

participant_n <- 30

# Preallocate a numeric vector to hold one p-value per simulated experiment
for_p_values <- numeric(nsamples)

for (i in 1:nsamples) {
  for_samples <- rnorm(participant_n, mean = 0, sd = 1)
  for_p_values[i] <- t.test(for_samples)$p.value
}

for_prop_under05 <- sum(for_p_values < 0.05) / nsamples

for_prop_under05
## [1] 0.05

‘Next, do this using the replicate function:’

# Exploratory: the t statistics (not yet the p-values) from 10,000 simulated null experiments
ts <- replicate(10000, t.test(rnorm(30))$statistic)


range(ts)
## [1] -5.161283  4.169831
set.seed(55)

rep_p_values <- replicate(nsamples, t.test(rnorm(participant_n))$p.value)

rep_prop_under05 <- sum(rep_p_values < 0.05)/10000

rep_prop_under05
## [1] 0.05
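The ~5% rate is expected because p-values are uniformly distributed when the null hypothesis is true, which can be checked by plotting the simulated p-values; a quick sketch using the rep_p_values vector from above:

# Under the null, p-values are roughly uniform on [0, 1],
# so about 5% of them fall below .05 by chance
hist(rep_p_values, breaks = 20, main = "Simulated p-values under the null", xlab = "p-value")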

‘How does this compare to the intended false-positive rate of \(\alpha=0.05\)?’

ANSWER The observed proportion of significant results is about 0.05, which matches the intended false-positive rate of \(\alpha=0.05\). In other words, even with an adequate sample size and a correct analysis, about 5% of experiments on a true null will incorrectly reject it.

Ok, that was a bit boring. Let’s try something more interesting - let’s implement a p-value sniffing simulation, in the style of Simons, Nelson, & Simonsohn (2011).

Consider this scenario: you have done an experiment, again with 30 participants (one observation each, just for simplicity). The question is whether the true mean is different from 0. You aren’t going to check the p-value every trial, but let’s say you run 30 - then if the p-value is within the range p < .25 and p > .05, you optionally run 30 more and add those data, then test again. But if the original p value is < .05, you call it a day, and if the original is > .25, you also stop.

First, write a function that implements this sampling regime.

double.sample <- function () {
  sample_1 <- rnorm(participant_n)
  psniff_p_value <- t.test(sample_1)$p.value
  # if the first result is "close" (.05 < p < .25), run 30 more participants and re-test
  if (psniff_p_value > .05 & psniff_p_value < 0.25) {
    sample_2 <- c(sample_1, rnorm(participant_n))
    psniff_p_value2 <- t.test(sample_2)$p.value
    return(psniff_p_value2)
  } else {return(psniff_p_value)}
}

Now call this function 10k times and find out what happens.

set.seed(55)

nsamples2 <- 10000

psniff_pvalues <- replicate(nsamples2, double.sample())

psniff_prop_under05 <- sum(psniff_pvalues < 0.05)/10000

psniff_prop_under05
## [1] 0.0723
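One way to see what the optional stopping does is to look at the distribution of the resulting p-values, which would be roughly uniform under an honest fixed-n procedure; a quick sketch using the psniff_pvalues vector from above:

# Optional stopping shifts mass toward small p-values, so the histogram
# is no longer flat and more than 5% of simulations land below .05
hist(psniff_pvalues, breaks = 20, main = "p-values with optional doubling (.05 < p < .25)", xlab = "p-value")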

Is there an inflation of false positives? How bad is it?

ANSWER Yes. With this stopping rule the proportion of “significant” results rises from the nominal 0.05 to about 0.072, roughly a 45% relative increase in the false-positive rate.

Now modify this code so that you can investigate this “double the sample” rule in a bit more depth. In the previous question, the researcher doubles the sample only when they think they got “close” to a significant result, i.e. when their not-significant p is less than 0.25. What if the researcher was more optimistic? See what happens in these 3 other scenarios:

How do these choices affect the false positive rate?

HINT: Try to do this by making the function double.sample take the upper p value as an argument, so that you can pass this through dplyr.

HINT 2: You may need more samples. Find out by looking at how the results change from run to run.

set.seed(55)

# Same stopping rule, but now the researcher doubles the sample whenever .05 < p < .50
double.sample.50 <- function () {
  sample_1_50 <- rnorm(participant_n)
  psniff_p_value_50 <- t.test(sample_1_50)$p.value
  if (psniff_p_value_50 > .05 & psniff_p_value_50 < 0.5) {
    sample_2_50 <- c(sample_1_50, rnorm(participant_n))
    psniff_p_value2_50 <- t.test(sample_2_50)$p.value
    return(psniff_p_value2_50)
  } else {return(psniff_p_value_50)}
}

nsamples3 <- 10000

psniff_pvalues_50 <- replicate(nsamples3, double.sample.50())

psniff_prop_under05_50 <- sum(psniff_pvalues_50 < 0.05)/10000

psniff_prop_under05_50
## [1] 0.0809
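Following the hint, the same logic can be written once with the upper p threshold as an argument and then mapped over several thresholds with purrr. This is a minimal sketch; the function name double.sample.upper and the thresholds 0.25, 0.5, 0.75, and 1 are illustrative choices, since the assignment’s three exact scenarios are not reproduced above.

# Sketch: parameterized stopping rule; upper_p is the cutoff below which
# a non-significant first result triggers doubling the sample
double.sample.upper <- function(upper_p) {
  sample_1 <- rnorm(participant_n)
  p1 <- t.test(sample_1)$p.value
  if (p1 > .05 & p1 < upper_p) {
    return(t.test(c(sample_1, rnorm(participant_n)))$p.value)
  }
  p1
}

set.seed(55)
# Illustrative thresholds only; substitute the scenarios from the assignment
tibble(upper_p = c(0.25, 0.5, 0.75, 1)) %>%
  mutate(false_positive_rate = map_dbl(upper_p, function(u) {
    mean(replicate(nsamples2, double.sample.upper(u)) < .05)
  }))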

What do you conclude on the basis of this simulation? How bad is this kind of data-dependent policy?

ANSWER This kind of data-dependent stopping policy inflates the false-positive rate: the nominal 5% rises to about 7.2% when the researcher doubles the sample for p-values between .05 and .25, and to about 8.1% when the cutoff is raised to .50. The more “optimistic” the researcher is about what counts as “close” to significance, the worse the inflation, so even this seemingly mild form of flexible data collection meaningfully increases the rate of spurious findings.