Chapter 8 - Conditional Manatees

This chapter introduced interactions, which allow for the association between a predictor and an outcome to depend upon the value of another predictor. While you can’t see them in a DAG, interactions can be important for making accurate inferences. Interactions can be difficult to interpret, and so the chapter also introduced triptych plots that help in visualizing the effect of an interaction. No new coding skills were introduced, but the statistical models considered were among the most complicated so far in the book.

Place each answer inside the code chunk (grey box). The code chunks should contain a text response or code that completes/answers the question or activity requested. Make sure to include plots if the question requests them.

Finally, upon completion, name your final output .html file as YourName_ANLY505-Year-Semester.html, publish the assignment to your RPubs account, and submit the link to Canvas. Each question is worth 5 points.

Questions

8-1. Recall the tulips example from the chapter. Suppose another set of treatments adjusted the temperature in the greenhouse over two levels: cold and hot. The data in the chapter were collected at the cold temperature. You find none of the plants grown under the hot temperature developed any blooms at all, regardless of the water and shade levels. Can you explain this result in terms of interactions between water, shade, and temperature?

This result implies a three-way interaction among water, shade, and temperature. Temperature moderates everything: when the greenhouse is hot, no combination of water and shade produces any blooms, so the effects of water, of shade, and of their two-way interaction all depend on temperature. Only at the cold temperature, where the chapter's data were collected, does the water-by-shade interaction matter for bloom size.

8-2. Can you invent a regression equation that would make the bloom size zero, whenever the temperature is hot?

Use 0 to denote the hot temperature and 1 to denote the cold temperature, and let \(T_i\) be the temperature of sample \(i\). Multiplying the entire linear model by \(T_i\) forces the mean bloom size to zero whenever the temperature is hot (\(T_i = 0\)), while at the cold temperature (\(T_i = 1\)) the factor changes nothing and the equation reduces to the original tulip model:

\[ \mu_i = T_i(\alpha + \beta_W W_i + \beta_S S_i + \beta_{WS} W_i S_i) \]
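A minimal quap sketch of this equation, assuming a hypothetical indicator column temp coded 1 = cold and 0 = hot (the names m_temp and temp are not in the chapter; temp is set to 1 for every row because all of the chapter's tulips data were collected at the cold temperature, so the sketch runs and reduces to the original model):

library(rethinking)
data("tulips")
d <- tulips
d$blooms_std <- d$blooms / max(d$blooms)
d$water_cent <- d$water - mean(d$water)
d$shade_cent <- d$shade - mean(d$shade)
d$temp <- 1L  # hypothetical indicator: 1 = cold (all chapter data), 0 = hot
m_temp <- quap(alist(
  blooms_std ~ dnorm(mu, sigma),
  mu <- temp * (a + bw * water_cent + bs * shade_cent + bws * water_cent * shade_cent),
  a ~ dnorm(0.5, 0.25),
  bw ~ dnorm(0, 0.25),
  bs ~ dnorm(0, 0.25),
  bws ~ dnorm(0, 0.25),
  sigma ~ dexp(1)
), data = d)

Any hot-house row (temp = 0) would contribute a mean bloom size of exactly zero, as the equation requires.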

8-3. Repeat the tulips analysis, but this time use priors that constrain the effect of water to be positive and the effect of shade to be negative. Use prior predictive simulation and visualize. What do these prior assumptions mean for the interaction prior, if anything?

The constraints are implemented below by giving both bw and bs log-normal priors, so their samples are strictly positive, and by entering the shade term with a minus sign, which makes the effect of water always positive and the effect of shade always negative. By themselves these constraints do not force a sign on the interaction bws, so its prior is left as a normal distribution centered on zero, and the prior predictive lines can still bend in either direction as shade changes.
data("tulips")
d = tulips
d$blooms_std = d$blooms / max(d$blooms)
d$water_cent = d$water - mean(d$water)
d$shade_cent = d$shade - mean(d$shade)
model = quap(alist(
  blooms_std ~ dnorm(mu, sigma),
  mu <- a + bw * water_cent - bs * shade_cent + bws * water_cent * shade_cent,
  a ~ dnorm(0.5, 0.25),
  bw ~ dlnorm(0, 0.25),
  bs ~ dlnorm(0, 0.25),
  bws ~ dnorm(0, 0.25),
  sigma ~ dexp(1)
), data=d)
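As a quick check on what these priors imply, parameter values can be drawn directly from the priors with extract.prior() (prior_samples is just an illustrative name; output not shown):

set.seed(8)
prior_samples <- extract.prior(model, n = 1000)
range(prior_samples$bw)                           # strictly positive: log-normal prior
range(-prior_samples$bs)                          # effect of shade: strictly negative
quantile(prior_samples$bws, c(0.05, 0.5, 0.95))   # interaction prior still straddles zero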

# Prior predictive simulation: draw parameter values from the priors and plot
# the implied regression lines in a triptych, one panel per shade level.
set.seed(8)
prior <- extract.prior(model)
par(mfrow = c(1, 3))
for (s in -1:1) {
  idx <- which(d$shade_cent == s)
  plot(d$water_cent[idx], d$blooms_std[idx], xlim = c(-1, 1), ylim = c(0, 1),
       xlab = "water", ylab = "blooms", pch = 15, col = rangi2, main = paste("shade =", s))
  mu <- link(model, post = prior, data = data.frame(shade_cent = s, water_cent = -1:1))
  for (i in 1:20) lines(-1:1, mu[i, ], col = col.alpha("black", 0.3))
}

# The same prior predictive lines, now with one panel per water level.
par(mfrow = c(1, 3))
for (w in -1:1) {
  idx <- which(d$water_cent == w)
  plot(d$shade_cent[idx], d$blooms_std[idx], xlim = c(-1, 1), ylim = c(0, 1),
       xlab = "shade", ylab = "blooms", pch = 15, col = rangi2, main = paste("water =", w))
  mu <- link(model, post = prior, data = data.frame(shade_cent = -1:1, water_cent = w))
  for (i in 1:20) lines(-1:1, mu[i, ], col = col.alpha("black", 0.3))
}
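To complete the repeated analysis, the posterior under the constrained priors can be summarized as well (output not shown):

precis(model)   # posterior means and 89% intervals for a, bw, bs, bws, sigma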

8-4. Return to the data(tulips) example in the chapter. Now include the bed variable as a predictor in the interaction model. Don’t interact bed with the other predictors; just include it as a main effect. Note that bed is categorical. So to use it properly, you will need to either construct dummy variables or rather an index variable, as explained in Chapter 5.

library(fastDummies)                          # dummy_cols() for indicator (dummy) columns
d <- dummy_cols(d, select_columns = "bed")    # adds bed_a, bed_b, bed_c
model_with_bed <- quap(alist(
  blooms_std ~ dnorm(mu, sigma),
  mu <- a + bw * water_cent - bs * shade_cent + bws * water_cent * shade_cent + bba * bed_a + bbb * bed_b + bbc * bed_c,
  a ~ dnorm(0.5, 0.25),
  bw ~ dnorm(0, 0.25),
  bs ~ dnorm(0, 0.25),
  bws ~ dnorm(0, 0.25),
  bba ~ dnorm(0, 0.25),
  bbb ~ dnorm(0, 0.25),
  bbc ~ dnorm(0, 0.25),
  sigma ~ dexp(1)
), data = d)
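Since the question allows either dummy variables or an index variable, here is an alternative sketch using an index variable (model_bed_idx and bed_idx are illustrative names; this assumes bed is stored as a factor with levels a, b, c, so that as.integer() yields codes 1, 2, 3). The WAIC comparison in 8-5 uses the dummy-variable model, model_with_bed:

d$bed_idx <- as.integer(d$bed)   # index variable: 1 = bed a, 2 = bed b, 3 = bed c
model_bed_idx <- quap(alist(
  blooms_std ~ dnorm(mu, sigma),
  mu <- a[bed_idx] + bw * water_cent - bs * shade_cent + bws * water_cent * shade_cent,
  a[bed_idx] ~ dnorm(0.5, 0.25),  # one intercept per bed
  bw ~ dnorm(0, 0.25),
  bs ~ dnorm(0, 0.25),
  bws ~ dnorm(0, 0.25),
  sigma ~ dexp(1)
), data = d)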

8-5. Use WAIC to compare the model from 8-4 to a model that omits bed. What do you infer from this comparison? Can you reconcile the WAIC results with the posterior distribution of the bed coefficients?

The model that includes bed has the lower WAIC (about -22 versus 11, a difference of roughly 33 with a standard error of about 11) and receives essentially all of the model weight, so it is expected to predict better out of sample. The posterior reconciles with this: the mean for bed a (bba ≈ -0.12) sits well below the means for beds b and c (both near zero), i.e. bed a produced noticeably fewer blooms. Because bed carries real information about blooms, including it improves expected predictive accuracy, which is exactly what the WAIC comparison shows.
# No-bed counterpart of model_with_bed: same priors, keeps the water-shade
# interaction, drops only the bed terms.
m.8.4 <- quap(alist(
  blooms_std ~ dnorm(mu, sigma),
  mu <- a + bw * water_cent - bs * shade_cent + bws * water_cent * shade_cent,
  a ~ dnorm(0.5, 0.25),
  bw ~ dnorm(0, 0.25),
  bs ~ dnorm(0, 0.25),
  bws ~ dnorm(0, 0.25),
  sigma ~ dexp(1)
), data = d)
# WAIC comparison; the printed output below uses `model` from 8-3 as the model without bed.
compare(model, model_with_bed)
##                     WAIC       SE    dWAIC      dSE    pWAIC       weight
## model_with_bed -22.08791 10.27436  0.00000       NA 10.42907 9.999999e-01
## model           10.67166 13.08499 32.75958 10.81961 10.31758 7.697478e-08
model_with_bed
## 
## Quadratic approximate posterior distribution
## 
## Formula:
## blooms_std ~ dnorm(mu, sigma)
## mu <- a + bw * water_cent - bs * shade_cent + bws * water_cent * 
##     shade_cent + bba * bed_a + bbb * bed_b + bbc * bed_c
## a ~ dnorm(0.5, 0.25)
## bw ~ dnorm(0, 0.25)
## bs ~ dnorm(0, 0.25)
## bws ~ dnorm(0, 0.25)
## bba ~ dnorm(0, 0.25)
## bbb ~ dnorm(0, 0.25)
## bbc ~ dnorm(0, 0.25)
## sigma ~ dexp(1)
## 
## Posterior means:
##            a           bw           bs          bws          bba          bbb 
##  0.393060477  0.207432180  0.113843391 -0.143895443 -0.121975673  0.001155767 
##          bbc        sigma 
##  0.013876796  0.108145372 
## 
## Log-likelihood: 21.69
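To reconcile the coefficients with the WAIC result more directly, the posterior intervals of the bed coefficients can be inspected (output not shown):

precis(model_with_bed, prob = 0.89)   # check whether the intervals for bba, bbb, bbc overlap zero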