Chapter 8 - Conditional Manatees

This chapter introduced interactions, which allow for the association between a predictor and an outcome to depend upon the value of another predictor. While you can’t see them in a DAG, interactions can be important for making accurate inferences. Interactions can be difficult to interpret, and so the chapter also introduced triptych plots that help in visualizing the effect of an interaction. No new coding skills were introduced, but the statistical models considered were among the most complicated so far in the book.

Place each answer inside the code chunk (grey box). The code chunks should contain a text response or code that completes/answers the question or activity requested. Problems are labeled Easy (E), Medium (M), and Hard (H).

Finally, upon completion, name your final output .html file as YourName_ANLY505-Year-Semester.html, publish the assignment to your RPubs account, and submit the link to Canvas. Each question is worth 5 points.

Questions

8E1. For each of the causal relationships below, name a hypothetical third variable that would lead to an interaction effect:

  1. Bread dough rises because of yeast.
  2. Education leads to higher income.
  3. Gasoline makes a car go.
#1. Time - how much the dough rises with yeast depends on how long it is left to rise; without yeast, extra time adds nothing.
#2. Occupation - the income return on education depends on occupation; occupations in demand turn education into higher income.
#3. Combustion (oxygen) - gasoline only makes a car go if it can combust, which requires oxygen; without oxygen, more gasoline does nothing.

8E2. Which of the following explanations invokes an interaction?

  1. Caramelizing onions requires cooking over low heat and making sure the onions do not dry out.
  2. A car will go faster when it has more cylinders or when it has a better fuel injector.
  3. Most people acquire their political beliefs from their parents, unless they get them instead from their friends.
  4. Intelligent animal species tend to be either highly social or have manipulative appendages (hands, tentacles, etc.).
#1. Caramelizing onions invokes an interaction: the outcome depends jointly on the heat level and on the onions staying moist.
#4. Intelligence in animal species depends on being highly social or on having manipulative appendages.

8E3. For each of the explanations in 8E2, write a linear model that expresses the stated relationship.

#1. μi = α + βh*xh + βd*xd + βhd*xh*xd   (heat xh, dryness xd, with interaction)
#2. μi = α + βc*xc + βi*xi               (cylinders xc, injector xi, no interaction)
#3. μi = α + βp*xp + βf*xf               (parents xp, friends xf, no interaction)
#4. μi = α + βs*xs + βm*xm + βsm*xs*xm   (sociality xs, appendages xm, with interaction)
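#As a sketch, model #1 could be written for quap using hypothetical standardized predictors h (heat) and d (dryness) and outcome y (caramelization):
flist1 <- alist(
  y ~ dnorm(mu, sigma),
  mu <- α + βh*h + βd*d + βhd*h*d,   # heat-by-dryness interaction
  α ~ dnorm(0, 1),
  βh ~ dnorm(0, 0.5),
  βd ~ dnorm(0, 0.5),
  βhd ~ dnorm(0, 0.5),
  sigma ~ dexp(1))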

8M1. Recall the tulips example from the chapter. Suppose another set of treatments adjusted the temperature in the greenhouse over two levels: cold and hot. The data in the chapter were collected at the cold temperature. You find none of the plants grown under the hot temperature developed any blooms at all, regardless of the water and shade levels. Can you explain this result in terms of interactions between water, shade, and temperature?

#No plants bloomed at the hot temperature regardless of water and shade, so temperature interacts with both of the other predictors: at the hot level, the effects of water, shade, and their interaction are all switched off. Equivalently, there is a three-way interaction between temperature, water, and shade in which hot temperature completely masks the effects of water and shade.

8M2. Can you invent a regression equation that would make the bloom size zero, whenever the temperature is hot?

#Let ht be an indicator that is 1 when the temperature is cold and 0 when it is hot, and multiply the entire linear model by it:
#μi = ht*(α + βw*xw + βs*xs + βws*xw*xs)
#When the temperature is hot, ht = 0 and therefore μi = 0 regardless of the water and shade levels; when it is cold, ht = 1 and the model reduces to the original water-shade interaction model.
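#As a minimal numeric check (hypothetical parameter values, not estimates from any fit), the indicator formulation zeroes the mean for every hot observation:
ht <- c(1, 1, 0, 0)       # 1 = cold, 0 = hot
xw <- c(-1, 1, -1, 1)     # centered water levels
xs <- c(0, 1, 0, 1)       # centered shade levels
a <- 0.36; bw <- 0.21; bs <- -0.11; bws <- -0.14   # hypothetical values
mu <- ht * (a + bw*xw + bs*xs + bws*xw*xs)
mu                        # the two hot rows are exactly 0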

8M4. Repeat the tulips analysis, but this time use priors that constrain the effect of water to be positive and the effect of shade to be negative. Use prior predictive simulation. What do these prior assumptions mean for the interaction prior, if anything?

library(rethinking)
data(tulips)
df <- tulips
df$blooms_std <- df$blooms / max(df$blooms)   # scale blooms to [0, 1]
df$shade_cent <- df$shade - mean(df$shade)    # center shade at 0
df$water_cent <- df$water - mean(df$water)    # center water at 0

m1 <- quap(
  alist(
    blooms_std ~ dnorm(mu, sigma),
    mu <- α + βs*shade_cent + βw*water_cent + βsw*shade_cent*water_cent,
    α ~ dnorm(0.5, 0.25),
    βs ~ dnorm(0, 0.25),     # not actually constrained negative; see the sketch below
    βw ~ dnorm(0.5, 0.25),   # centered above zero to favor a positive water effect
    βsw ~ dnorm(0, 0.25),
    sigma ~ dexp(1)
  ), data = df)
precis(m1)
##             mean         sd        5.5%       94.5%
## α      0.3579846 0.02392156  0.31975337  0.39621592
## βs    -0.1134625 0.02923083 -0.16017902 -0.06674600
## βw     0.2135665 0.02924716  0.16682391  0.26030912
## βsw   -0.1431598 0.03568369 -0.20018924 -0.08613039
## sigma  0.1248593 0.01694660  0.09777536  0.15194324
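#The Gaussian priors above only nudge the effects toward the desired signs. A sketch of truly sign-constrained priors, assuming dlnorm to force positivity (shade enters with a minus sign, so its effect is strictly negative); this is not the fit reported above:
m1b <- quap(
  alist(
    blooms_std ~ dnorm(mu, sigma),
    mu <- α + βw*water_cent - βs*shade_cent + βsw*shade_cent*water_cent,
    α ~ dnorm(0.5, 0.25),
    βw ~ dlnorm(0, 0.25),    # strictly positive effect of water
    βs ~ dlnorm(0, 0.25),    # strictly positive, subtracted, so shade's effect is strictly negative
    βsw ~ dnorm(0, 0.25),
    sigma ~ dexp(1)
  ), data = df)

#Prior predictive simulation: draw from the priors and plot the implied bloom-water lines at average shade (output not shown).
prior <- extract.prior(m1b)
mu_prior <- link(m1b, post = prior,
                 data = data.frame(water_cent = c(-1, 1), shade_cent = 0))
plot(NULL, xlim = c(-1, 1), ylim = c(-0.5, 1.5),
     xlab = "water (centered)", ylab = "blooms (std)")
for (i in 1:50) lines(c(-1, 1), mu_prior[i, ], col = col.alpha("black", 0.3))

#For the interaction prior, these assumptions imply it should lean negative: the interaction measures how shade changes the effect of water, and if shade can only hurt, more shade should weaken, not strengthen, the benefit of water.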

8H1. Return to the data(tulips) example in the chapter. Now include the bed variable as a predictor in the interaction model. Don’t interact bed with the other predictors; just include it as a main effect. Note that bed is categorical. So to use it properly, you will need to either construct dummy variables or, better, an index variable, as explained in Chapter 5.

df <- tulips
df$blooms_std <- df$blooms / max(df$blooms)
df$shade_cent <- df$shade - mean(df$shade)
df$water_cent <- df$water - mean(df$water)
df$bed2 <- coerce_index(df$bed)   # convert the bed factor to an integer index 1/2/3

m2 <- quap(
  alist(
    blooms_std ~ dnorm(mu, sigma),
    mu <- α + βs*shade_cent + βw*water_cent + βb*bed2 + βsw*shade_cent*water_cent,
    α ~ dnorm(0.5, 0.25),
    βs ~ dnorm(0, 0.25),
    βw ~ dnorm(0.5, 0.25),
    βb ~ dnorm(0, 0.25),
    βsw ~ dnorm(0, 0.25),
    sigma ~ dexp(1)
  ), data = df)
precis(m2)
##              mean         sd        5.5%       94.5%
## α      0.23314808 0.05525374  0.14484193  0.32145423
## βs    -0.11377538 0.02613938 -0.15555116 -0.07199959
## βw     0.21276770 0.02615116  0.17097310  0.25456230
## βb     0.06274481 0.02564353  0.02176149  0.10372812
## βsw   -0.14375318 0.03193065 -0.19478453 -0.09272183
## sigma  0.11150177 0.01517267  0.08725292  0.13575063
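#Note that m2 treats the bed index 1/2/3 as if it were a continuous predictor. A sketch of the index-variable approach from Chapter 5, with a separate intercept per bed (not refit here, output not shown):
m2b <- quap(
  alist(
    blooms_std ~ dnorm(mu, sigma),
    mu <- α[bed2] + βs*shade_cent + βw*water_cent + βsw*shade_cent*water_cent,
    α[bed2] ~ dnorm(0.5, 0.25),
    βs ~ dnorm(0, 0.25),
    βw ~ dnorm(0, 0.25),
    βsw ~ dnorm(0, 0.25),
    sigma ~ dexp(1)
  ), data = df)
precis(m2b, depth = 2)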

8H5. Consider the data(Wines2012) data table. These data are expert ratings of 20 different French and American wines by 9 different French and American judges. Your goal is to model score, the subjective rating assigned by each judge to each wine. I recommend standardizing it. In this problem, consider only variation among judges and wines. Construct index variables of judge and wine and then use these index variables to construct a linear regression model. Justify your priors. You should end up with 9 judge parameters and 20 wine parameters. How do you interpret the variation among individual judges and individual wines? Do you notice any patterns, just by plotting the differences? Which judges gave the highest/lowest ratings? Which wines were rated worst/best on average?

data(Wines2012)
df2 <- Wines2012
df2_list <- list(
  s = standardize(df2$score),       # standardized score
  wine = as.integer(df2$wine),      # wine index: 1..20
  judge = as.integer(df2$judge))    # judge index: 1..9

m3 <- ulam(
  alist(
    s ~ dnorm(mu, sigma),
    mu <- z[judge] + x[wine],   # judge effect plus wine quality
    x[wine] ~ dnorm(0, 0.5),    # with s standardized, keeps most wine effects within about 1 sd of the mean score
    z[judge] ~ dnorm(0, 0.5),   # same weakly informative scale for judge harshness/leniency
    sigma ~ dexp(1)),
  data = df2_list,
  chains = 4,
  cores = 4)
precis(m3, 2)
##               mean         sd       5.5%       94.5%    n_eff     Rhat4
## x[1]   0.123160478 0.26495921 -0.3027336  0.54554720 3773.856 0.9992403
## x[2]   0.089803498 0.26076553 -0.3189302  0.50745058 3515.583 0.9995571
## x[3]   0.223048876 0.24360218 -0.1622650  0.61014805 2848.927 0.9996039
## x[4]   0.464685331 0.26251208  0.0618846  0.89649113 3162.775 0.9994041
## x[5]  -0.106494367 0.25597473 -0.5070024  0.31383405 3010.210 0.9996581
## x[6]  -0.311303223 0.26120321 -0.7300867  0.10626072 3257.631 0.9990011
## x[7]   0.244765388 0.27137611 -0.2005353  0.67066335 3599.382 0.9985271
## x[8]   0.222554565 0.26513763 -0.2036469  0.64033915 3606.328 0.9993020
## x[9]   0.062872679 0.26117134 -0.3382704  0.48900075 2715.112 0.9996053
## x[10]  0.105071555 0.25455862 -0.3072796  0.51305758 2951.303 1.0001504
## x[11] -0.021627481 0.25868329 -0.4382061  0.39401895 3070.115 0.9992087
## x[12] -0.028998375 0.25733746 -0.4289797  0.38076725 3297.687 1.0001502
## x[13] -0.089729240 0.25332896 -0.4934478  0.30849230 2599.976 0.9991607
## x[14]  0.001572114 0.25523257 -0.4044483  0.41686955 2382.055 0.9999408
## x[15] -0.182554862 0.26472094 -0.6129995  0.23836050 3562.983 0.9990629
## x[16] -0.169612235 0.25850824 -0.5836658  0.24283009 3007.480 0.9998358
## x[17] -0.118992874 0.25872971 -0.5397862  0.29689481 3149.954 0.9991105
## x[18] -0.719166305 0.26493212 -1.1455351 -0.29536746 2991.224 0.9994182
## x[19] -0.134811010 0.25389011 -0.5301803  0.26629218 3056.438 0.9986677
## x[20]  0.314558608 0.26235444 -0.1130816  0.73478664 2803.316 0.9986021
## z[1]  -0.277792892 0.18876827 -0.5791392  0.02689093 2021.362 1.0003597
## z[2]   0.215977006 0.20167652 -0.1074170  0.52504070 2437.572 0.9997576
## z[3]   0.209161614 0.19730407 -0.1135915  0.51590851 2332.581 0.9997157
## z[4]  -0.542919170 0.19391167 -0.8585798 -0.23266652 2517.233 0.9990858
## z[5]   0.803357473 0.20184178  0.4780837  1.13313054 2598.175 0.9990431
## z[6]   0.475292331 0.19664675  0.1566101  0.78151782 2250.193 0.9991591
## z[7]   0.128994917 0.19566094 -0.1857785  0.44807428 2143.314 1.0005169
## z[8]  -0.654435242 0.20180247 -0.9840765 -0.33448568 1892.917 1.0020961
## z[9]  -0.347525695 0.20306702 -0.6822069 -0.02547097 2732.974 0.9985941
## sigma  0.847025127 0.04881537  0.7721509  0.92798504 2580.213 0.9999479
traceplot(m3)
## (traceplots of all m3 parameters, drawn across 2 pages)

#There is clear variation among both judges and wines. On average, wine 4 received the highest rating and wine 18 the lowest. Among judges, judge 5 gave the highest ratings on average and judge 8 the lowest, based on the posterior means and the traceplots. The judge effects are also estimated more precisely than the wine effects (narrower intervals), since each judge scored all 20 wines while each wine was scored by only 9 judges.
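#A quick way to see these patterns, a sketch whose output is not shown here, is to plot the marginal posterior intervals:
plot(precis(m3, depth = 2))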