9E1. Which of the following is a requirement of the simple Metropolis algorithm?

  1. The parameters must be discrete.
  2. The likelihood function must be Gaussian.
  3. The proposal distribution must be symmetric.

I select #3. The simple Metropolis algorithm requires a symmetric proposal distribution: the probability of proposing B when at A must equal the probability of proposing A when at B. (Parameters need not be discrete, and the likelihood can be anything.)

9E2. Gibbs sampling is more efficient than the Metropolis algorithm. How does it achieve this extra efficiency? Are there any limitations to the Gibbs sampling strategy?

Gibbs sampling gains its efficiency through adaptive proposals. By exploiting conjugate pairs of priors and likelihoods, it can compute the full conditional distribution of each parameter and sample from it directly, merging the propose and accept/reject steps: every proposal is accepted. This wastes far fewer steps than the basic Metropolis algorithm.

The main limitations are that it requires conjugate priors, which we may not wish to use, and that it becomes inefficient in models with many (correlated) parameters, where it tends to get stuck in small regions of the posterior.
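As an illustration only, here is a toy Gibbs sampler for a normal model with conjugate priors; all hyperparameter choices (mu0, tau2, a0, b0) are arbitrary:

# Toy Gibbs sampler: normal data with a conjugate normal prior on mu
# and an inverse-gamma prior on sigma^2. Each parameter is drawn
# directly from its full conditional, so no proposal is ever rejected.
set.seed(1)
y <- rnorm( 50 , mean=3 , sd=2 )
n <- length(y)

mu0 <- 0 ; tau2 <- 100   # N(mu0, tau2) prior on mu (arbitrary)
a0 <- 2 ; b0 <- 2        # Inv-Gamma(a0, b0) prior on sigma^2 (arbitrary)

n_samples <- 2000
mu <- numeric(n_samples) ; sig2 <- numeric(n_samples)
mu[1] <- 0 ; sig2[1] <- 1
for ( i in 2:n_samples ) {
  # full conditional of mu is normal, thanks to conjugacy
  v <- 1 / ( n/sig2[i-1] + 1/tau2 )
  m <- v * ( sum(y)/sig2[i-1] + mu0/tau2 )
  mu[i] <- rnorm( 1 , m , sqrt(v) )
  # full conditional of sigma^2 is inverse-gamma, thanks to conjugacy
  sig2[i] <- 1 / rgamma( 1 , a0 + n/2 , b0 + sum((y - mu[i])^2)/2 )
}
mean( mu ) ; mean( sqrt(sig2) )   # close to the true values 3 and 2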

9E3. Which sort of parameters can Hamiltonian Monte Carlo not handle? Can you explain why?

Hamiltonian Monte Carlo cannot handle discrete parameters. The method is built on a physical metaphor: a frictionless particle glides over the (negative) log-posterior surface, and its path is determined by the gradient of that surface. The parameter space must therefore be differentiable. A discrete parameter has no gradient, so the particle has nothing to glide along.
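To make the role of the gradient concrete, here is a toy single leapfrog step for a one-dimensional standard normal target (an illustration of the idea only, not Stan's implementation):

# U(q) is the negative log-density of the target; the particle's motion
# requires grad_U, which exists only for continuous parameters.
U      <- function(q) 0.5 * q^2   # -log density of N(0,1), up to a constant
grad_U <- function(q) q           # its derivative

leapfrog <- function( q , p , eps ) {
  p <- p - eps/2 * grad_U(q)   # half step for momentum
  q <- q + eps * p             # full step for position
  p <- p - eps/2 * grad_U(q)   # half step for momentum
  list( q=q , p=p )
}

# For a discrete parameter, U(q) is a step function with no slope,
# so there is no grad_U for the particle to follow.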

9E4. Explain the difference between the effective number of samples, n_eff as calculated by Stan, and the actual number of samples.

The effective sample size, n_eff, is an estimate of the number of independent samples from the posterior distribution, in terms of estimating some function such as the posterior mean. In a Markov chain, successive samples are autocorrelated rather than independent, so n_eff is usually smaller than the actual number of samples.
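A quick demonstration with an autocorrelated AR(1) series, using the coda package (assumed installed) for the effective-size estimate:

library(coda)
set.seed(1)
chain <- as.numeric( arima.sim( list(ar=0.9) , n=1000 ) )   # highly autocorrelated draws
length(chain)                 # actual number of samples: 1000
effectiveSize( mcmc(chain) )  # effective number: far smaller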

9E5. Which value should Rhat approach, when a chain is sampling the posterior distribution correctly?

Rhat should approach 1.00, from above, when a chain is sampling the posterior distribution correctly.
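A toy version of the calculation behind it, comparing within-chain and between-chain variance (simplified: Stan's actual Rhat also splits chains and rank-normalizes):

rhat <- function( chains ) {   # chains: a matrix with one column per chain
  n <- nrow(chains)
  W <- mean( apply( chains , 2 , var ) )   # within-chain variance
  B <- n * var( colMeans(chains) )         # between-chain variance
  sqrt( ( (n-1)/n * W + B/n ) / W )
}
good <- cbind( rnorm(1000) , rnorm(1000) )    # chains exploring the same target
bad  <- cbind( rnorm(1000) , rnorm(1000,3) )  # chains stuck in different places
rhat( good )   # close to 1
rhat( bad )    # well above 1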

9E6. Sketch a good trace plot for a Markov chain, one that is effectively sampling from the posterior distribution. What is good about its shape? Then sketch a trace plot for a malfunctioning Markov chain. What about its shape indicates malfunction?

A healthy trace plot looks like stationary white noise, the classic "hairy caterpillar":

plot( y=rnorm(1000,0,1) , x=1:1000 , type="l" ,
      xlab="sample" , ylab="parameter value" )   # healthy: stationary, rapidly mixing

Its shape is good for two reasons: the path hovers around a stable central value (stationarity), and it zig-zags rapidly across the whole high-probability region, so successive samples are nearly independent (good mixing).
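For contrast, a malfunctioning chain can be sketched as a random walk: it drifts instead of hovering around a stable value, and successive samples are highly correlated.

plot( y=cumsum(rnorm(1000,0,1)) , x=1:1000 , type="l" ,
      xlab="sample" , ylab="parameter value" )   # wandering, non-stationary path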

9E7. Repeat the problem above, but now for a trace rank plot.

For a healthy chain, the rank histograms from all chains overlap and stay roughly uniform: every chain spends time in the same regions of the posterior. For a malfunctioning chain, one chain's histogram sits above or below the others for long stretches, meaning the chains disagree about where the posterior mass is.

trankplot( m9.1 )   # healthy: overlapping, roughly uniform rank histograms

trankplot( m9.2 )   # malfunctioning: one chain hoards the extreme ranks for long stretches
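The same idea can be sketched by hand: pool the chains, rank every sample, and histogram each chain's ranks (here chain2 is deliberately stuck in the wrong place):

chain1 <- rnorm(1000)
chain2 <- rnorm(1000,2)         # a chain stuck away from the other
r <- rank( c(chain1,chain2) )   # ranks over the pooled samples
par( mfrow=c(1,2) )
hist( r[1:1000] , main="chain 1 ranks" , xlab="rank" )     # hoards the low ranks
hist( r[1001:2000] , main="chain 2 ranks" , xlab="rank" )  # hoards the high ranks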

9M1. Re-estimate the terrain ruggedness model from the chapter, but now using a uniform prior for the standard deviation, sigma. The uniform prior should be dunif(0,1). Use ulam to estimate the posterior. Does the different prior have any detectible influence on the posterior distribution of sigma? Why or why not?

library(rethinking)

data(rugged)
d <- rugged
d$log_gdp <- log( d$rgdppc_2000 )
dd <- d[ complete.cases(d$rgdppc_2000) , ]

dd$log_gdp_std <- dd$log_gdp / mean(dd$log_gdp)
dd$rugged_std <- dd$rugged / max(dd$rugged)

dd$cid <- ifelse( dd$cont_africa==1 , 1 , 2 )

# trimmed data list, as ulam prefers
dat_slim <- list(
  log_gdp_std = dd$log_gdp_std ,
  rugged_std = dd$rugged_std ,
  cid = as.integer(dd$cid)
)

m9M1 <- ulam(
  alist(
    log_gdp_std ~ dnorm( mu , sigma ) ,
    mu <- a[cid] + b[cid]*( rugged_std - 0.215 ) ,
    a[cid] ~ dnorm( 1 , 0.1 ) ,
    b[cid] ~ dnorm( 0 , 0.3 ) ,
    sigma ~ dunif( 0 , 1 )   # the uniform prior the exercise asks for
  ) ,
  data=dat_slim , chains=4 )

precis( m9M1 , depth=2 )

The different prior has no detectable influence on the posterior for sigma. With 170 countries the likelihood dominates, and both dunif(0,1) and dexp(1) are effectively flat over the narrow region (around sigma = 0.1) where the likelihood concentrates.
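To check the influence directly, refit with the chapter's dexp(1) prior and compare the two posteriors side by side (the model name m9M1b is my own):

m9M1b <- ulam(
  alist(
    log_gdp_std ~ dnorm( mu , sigma ) ,
    mu <- a[cid] + b[cid]*( rugged_std - 0.215 ) ,
    a[cid] ~ dnorm( 1 , 0.1 ) ,
    b[cid] ~ dnorm( 0 , 0.3 ) ,
    sigma ~ dexp( 1 )   # the chapter's original prior
  ) ,
  data=dat_slim , chains=4 )

coeftab( m9M1 , m9M1b )   # the posteriors for sigma should be essentially identical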

9M2. Modify the terrain ruggedness model again. This time, change the prior for b[cid] to dexp(0.3). What does this do to the posterior distribution? Can you explain it?

m9M2 <- ulam(
  alist(
    log_gdp_std ~ dnorm( mu , sigma ) ,
    mu <- a[cid] + b[cid]*( rugged_std - 0.215 ) ,
    a[cid] ~ dnorm( 1 , 0.1 ) ,
    b[cid] ~ dexp( 0.3 ) ,   # the exponential prior on the slopes
    sigma ~ dexp( 1 )
  ) ,
  data=dat_slim , chains=4 )

precis( m9M2 , depth=2 )
pairs( m9M2 )

The exponential prior has support only on positive values, so it forces both slopes to be non-negative. The slope for Africa, b[1], was already positive and barely changes, but the slope outside Africa, b[2], was negative under the Gaussian prior and is now squeezed up against zero.
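A glance at the prior density itself makes the mechanism plain: dexp(0.3) puts zero mass on negative slopes.

curve( dexp( x , 0.3 ) , from=-1 , to=10 ,
       xlab="b" , ylab="prior density" )   # zero density for b < 0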

9M3. Re-estimate one of the Stan models from the chapter, but at different numbers of warmup iterations. Be sure to use the same number of sampling iterations in each case. Compare the n_eff values. How much warmup is enough?

m9M3 <- ulam(
  alist(
    log_gdp_std ~ dnorm( mu , sigma ) ,
    mu <- a[cid] + b[cid]*( rugged_std - 0.215 ) ,
    a[cid] ~ dnorm( 1 , 0.1 ) ,
    b[cid] ~ dnorm( 0 , 0.3 ) ,
    sigma ~ dexp( 1 )
  ) ,
  data=dat_slim , chains=1 , warmup=1000 , iter=2000 )

precis( m9M3 , depth=2 )
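To compare warmup settings, refit the same model with several warmup lengths while holding the number of sampling iterations fixed; since iter counts warmup plus sampling draws, iter = w + 1000 keeps 1000 sampling iterations in every run. A sketch (the vector of warmup values is an arbitrary choice):

warmups <- c( 5 , 10 , 100 , 500 , 1000 )
neff <- sapply( warmups , function(w) {
  m_w <- ulam(
    alist(
      log_gdp_std ~ dnorm( mu , sigma ) ,
      mu <- a[cid] + b[cid]*( rugged_std - 0.215 ) ,
      a[cid] ~ dnorm( 1 , 0.1 ) ,
      b[cid] ~ dnorm( 0 , 0.3 ) ,
      sigma ~ dexp( 1 )
    ) ,
    data=dat_slim , chains=1 , warmup=w , iter=w+1000 )
  precis( m_w , depth=2 )$n_eff
} )
colnames(neff) <- warmups
neff   # one column of n_eff values per warmup setting

For a simple, well-behaved model like this one, n_eff typically stops improving after a few hundred warmup iterations; very short warmups (5 or 10) leave the sampler poorly adapted and give much lower n_eff.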

9H1. Run the model below and then inspect the posterior distribution and explain what it is accomplishing.

mp <- ulam(
  alist(
    a ~ dnorm( 0 , 1 ) ,
    b ~ dcauchy( 0 , 1 )
  ) ,
  data=list(y=1) , chains=1 )

Compare the samples for the parameters a and b. Can you explain the different trace plots? If you are unfamiliar with the Cauchy distribution, you should look it up. The key feature to attend to is that it has no expected value. Can you connect this fact to the trace plot?
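Nothing in the model connects the data y to either parameter, so ulam simply samples from the two priors. One way to inspect the samples, using the usual rethinking helpers:

precis( mp )
traceplot( mp )

The trace for a looks like healthy white noise, hovering in a stable band around zero. The trace for b is punctuated by occasional extreme spikes. This is the Cauchy at work: its tails are so heavy that it has no expected value, so no matter how long the chain runs, rare but enormous draws keep arriving and the running mean of b never settles down. The spikes in the trace plot are exactly those draws.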