Chapter 3 - Sampling the Imaginary

This chapter introduced the basic procedures for manipulating posterior distributions. Our fundamental tool is samples of parameter values drawn from the posterior distribution. These samples can be used to produce intervals, point estimates, posterior predictive checks, as well as other kinds of simulations. Posterior predictive checks combine uncertainty about parameters, as described by the posterior distribution, with uncertainty about outcomes, as described by the assumed likelihood function. These checks are useful for verifying that your software worked correctly. They are also useful for prospecting for ways in which your models are inadequate.
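
As a quick illustration of that last point (not part of the assignment), a posterior predictive check layers outcome uncertainty on top of parameter uncertainty. Using the chapter's globe-tossing example (6 water in 9 tosses with a flat prior, so the posterior is Beta(7, 4)):

# draws of p from the posterior (parameter uncertainty)
post_p <- rbeta( 1e4 , 7 , 4 )
# one simulated count of water per draw of p (outcome uncertainty)
ppc <- rbinom( 1e4 , size=9 , prob=post_p )
table(ppc) / 1e4   # implied predictive distribution over counts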

Place each answer inside the code chunk (grey box). Each code chunk should contain a text response or code that completes or answers the question or activity. Make sure to include plots when a question requests them. Problems are labeled Easy (E), Medium (M), and Hard (H).

Finally, upon completion, name your final output .html file as YourName_ANLY505-Year-Semester.html, publish the assignment to your R Pubs account, and submit the link to Canvas. Each question is worth 5 points.

Simulate the posterior distribution

p_grid <- seq( from=0 , to=1 , length.out=1000 )   # grid of candidate values for p
prior <- rep( 1 , 1000 )                           # flat prior
likelihood <- dbinom( 6 , size=9 , prob=p_grid )   # likelihood of 6 water in 9 tosses
posterior <- likelihood * prior
posterior <- posterior / sum(posterior)            # standardize so the posterior sums to 1
set.seed(100)
samples <- sample( p_grid , prob=posterior , size=1e4 , replace=TRUE )   # 10,000 posterior draws
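
As an optional sanity check, the sample mean should be close to the analytic posterior mean of a Beta(7, 4), which is 7/11, or about 0.636:

mean(samples)            # should be near 0.636
plot(density(samples))   # base-R density plot of the posterior samples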

Questions

Use the values in samples to answer the questions that follow.

3E1. How much posterior probability lies below p = 0.2?

mean(samples < 0.2)
## [1] 4e-04
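
As a cross-check, the same mass can be computed exactly from the grid posterior; the sample-based estimate should agree up to Monte Carlo error:

sum(posterior[p_grid < 0.2])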

3E2. How much posterior probability lies above p = 0.8?

mean(samples > 0.8)
## [1] 0.1116

3E3. How much posterior probability lies between p = 0.2 and p = 0.8?

mean(samples > 0.2 & samples < 0.8)
## [1] 0.888

3E4. 20% of the posterior probability lies below which value of p?

quantile(samples,probs = 0.2)
##       20% 
## 0.5185185

3E5. 20% of the posterior probability lies above which value of p?

quantile(samples,probs = 0.8)
##       80% 
## 0.7557558
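
Both quantiles can also be cross-checked against the grid posterior by accumulating mass until the target fraction is reached:

# first grid value at which 20% (or 80%) of the posterior mass is reached
p_grid[ min(which(cumsum(posterior) >= 0.2)) ]
p_grid[ min(which(cumsum(posterior) >= 0.8)) ]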

3E6. Which values of p contain the narrowest interval equal to 66% of the posterior probability?

library(rethinking)
HPDI(samples, prob = 0.66)
##     |0.66     0.66| 
## 0.5085085 0.7737738

3E7. Which values of p contain 66% of the posterior probability, assuming equal posterior probability both below and above the interval?

PI(samples, prob = 0.66)
##       17%       83% 
## 0.5025025 0.7697698
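
Since the HPDI is by construction the narrowest interval containing the requested mass, its width should not exceed the percentile interval's width; comparing the two makes this concrete:

diff( HPDI(samples, prob = 0.66) )   # width of the 66% HPDI
diff( PI(samples, prob = 0.66) )     # width of the 66% percentile interval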

3M1. Suppose the globe tossing data had turned out to be 8 water in 15 tosses. Construct the posterior distribution, using grid approximation. Plot the posterior. Use the same flat prior as before.

p_grid <- seq( from=0 , to=1 , length.out=1000 )
prior <- rep( 1 , 1000 )   # same flat prior as before

likelihood_m <- dbinom( 8 , size=15 , prob=p_grid )   # 8 water in 15 tosses
posterior_m <- likelihood_m * prior
posterior_m <- posterior_m / sum(posterior_m)

plot(x = p_grid, y = posterior_m, type = "l")
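
As an optional check on the grid approximation: with a flat prior, the posterior for 8 water in 15 tosses is analytically Beta(9, 8). Rescaling the grid posterior by the grid spacing turns it into a density that should match the Beta curve:

plot(p_grid, posterior_m / (p_grid[2] - p_grid[1]), type = "l")
curve(dbeta(x, 9, 8), add = TRUE, lty = 2)   # analytic posterior, dashed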

3M2. Draw 10,000 samples from the grid approximation from above. Then use the samples to calculate the 90% HPDI for p.

# note: these draws continue from the RNG state set by set.seed(100) above
samples_m <- sample( p_grid , prob=posterior_m , size=1e4 , replace=TRUE )

HPDI(samples_m, prob = 0.9)
##      |0.9      0.9| 
## 0.3293293 0.7167167

3M3. Construct a posterior predictive check for this model and data. This means simulate the distribution of samples, averaging over the posterior uncertainty in p. What is the probability of observing 8 water in 15 tosses?

# one simulated count of water in 15 tosses per posterior draw of p
nw <- rbinom(1e4, size = 15, prob = samples_m)

mean(nw == 8)
## [1] 0.1444
simplehist(nw)
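
For contrast (an optional aside), plugging a single point estimate of p into the likelihood ignores posterior uncertainty; here that gives a somewhat higher probability of 8/15 than the sample-based estimate above, which averages over uncertainty in p:

dbinom(8, size = 15, prob = mean(samples_m))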

3M4. Using the posterior distribution constructed from the new (8/15) data, now calculate the probability of observing 6 water in 9 tosses.

# same posterior for p, but now predict counts of water in 9 tosses
nw2 <- rbinom(1e4, size = 9, prob = samples_m)

mean(nw2 == 6)
## [1] 0.1751
simplehist(nw2)

3M5. Start over at 3M1, but now use a prior that is zero below p = 0.5 and a constant above p = 0.5. This corresponds to prior information that a majority of the Earth’s surface is water. Repeat each problem above and compare the inferences. Plot the posterior. What difference does the better prior make? If it helps, compare inferences (using both priors) to the true value p = 0.7.

p_grid <- seq( from=0 , to=1 , length.out=1000 )
prior2 <- ifelse(p_grid < 0.5, 0, 1)   # step prior: zero below p = 0.5, constant above

# Repeat 3M1
likelihood_m2 <- dbinom( 8 , size=15 , prob=p_grid )
posterior_m2 <- likelihood_m2 * prior2
posterior_m2 <- posterior_m2 / sum(posterior_m2)
plot(x = p_grid, y = posterior_m2, type = "l")

# Repeat 3M2
# as in 3M2, these draws continue from the RNG state set by set.seed(100) above
samples_m2 <- sample( p_grid , prob=posterior_m2 , size=1e4 , replace=TRUE )

HPDI(samples_m2, prob = 0.9)
##      |0.9      0.9| 
## 0.5005005 0.7117117
# Repeat 3M3
nw3 <- rbinom(1e4, size = 15, prob = samples_m2)
mean(nw3 == 8)
## [1] 0.1589
simplehist(nw3)

# Repeat 3M4

nw4 <- rbinom(1e4, size = 9, prob = samples_m2)
mean(nw4 == 6)
## [1] 0.2415
simplehist(nw4)

# Comparison:
# With the informative prior, the posterior is zero for p < 0.5, and the curve above
# p = 0.5 is correspondingly taller: the probability mass that the flat prior spread
# below 0.5 has been reallocated to values above 0.5.
# The 90% HPDIs under both priors contain the true value p = 0.7, but the HPDI under
# the informative prior is much narrower, so the better prior yields a more precise
# inference about p.
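
The narrowing claimed above can be quantified directly by comparing the 90% HPDI widths under the two priors:

diff( HPDI(samples_m, prob = 0.9) )    # flat prior
diff( HPDI(samples_m2, prob = 0.9) )   # informative (step) prior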

3M6. Suppose you want to estimate the Earth’s proportion of water very precisely. Specifically, you want the 99% percentile interval of the posterior distribution of p to be only 0.05 wide. This means the distance between the upper and lower bound of the interval should be 0.05. How many times will you have to toss the globe to do this?

library(ggplot2)

N_seq <- seq(from = 1000, to = 3000, length.out = 21)  # candidate numbers of tosses
intervals_diff <- rep(NA, 21)                          # width of the 99% interval for each N

p_grid <- seq( from=0 , to=1 , length.out=1000 )
prior2 <- ifelse(p_grid < 0.5, 0, 1)

true_p <- 0.7

for (i in seq_along(N_seq)){
  n <- N_seq[i]
  # treat the observed count as the expected count under the true p
  likelihood_m3 <- dbinom( round(n*true_p), size=n, prob=p_grid )
  posterior_m3 <- likelihood_m3 * prior2
  posterior_m3 <- posterior_m3 / sum(posterior_m3)
  # a large number of draws (1e8) keeps Monte Carlo error in the width small
  samples_3 <- sample(p_grid, prob=posterior_m3, size=1e8, replace=TRUE )
  interval <- PI(samples_3, prob=0.99)
  intervals_diff[i] <- diff( unname(interval) )
}

ggplot(data.frame(N_seq, intervals_diff), aes(N_seq, intervals_diff)) +
  geom_point() +
  labs(x = "number of tosses (N)", y = "width of 99% percentile interval")

data.frame(N_seq, intervals_diff)
##    N_seq intervals_diff
## 1   1000     0.07407407
## 2   1100     0.07107107
## 3   1200     0.06806807
## 4   1300     0.06506507
## 5   1400     0.06306306
## 6   1500     0.06106106
## 7   1600     0.05905906
## 8   1700     0.05705706
## 9   1800     0.05505506
## 10  1900     0.05405405
## 11  2000     0.05305305
## 12  2100     0.05105105
## 13  2200     0.05005005
## 14  2300     0.04904905
## 15  2400     0.04804805
## 16  2500     0.04704705
## 17  2600     0.04604605
## 18  2700     0.04604605
## 19  2800     0.04404404
## 20  2900     0.04404404
## 21  3000     0.04304304
# Based on the table and plot above, the 99% percentile interval first reaches a
# width of 0.05 at about N = 2200, so roughly 2,200 tosses are needed.
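
A rough analytic cross-check (an approximation, not part of the assignment): for large N the posterior for p is approximately normal with standard deviation sqrt(p(1-p)/N), so the 99% interval has width about 2 * 2.576 * sqrt(p(1-p)/N). Setting that width to 0.05 with p = 0.7:

# solve 2 * qnorm(0.995) * sqrt(p*(1-p)/N) = 0.05 for N, with p = 0.7
(2 * qnorm(0.995) / 0.05)^2 * 0.7 * 0.3   # roughly 2230 tosses, consistent with the table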