Chapter 8 - Conditional Manatees

This chapter introduced interactions, which allow for the association between a predictor and an outcome to depend upon the value of another predictor. While you can’t see them in a DAG, interactions can be important for making accurate inferences. Interactions can be difficult to interpret, and so the chapter also introduced triptych plots that help in visualizing the effect of an interaction. No new coding skills were introduced, but the statistical models considered were among the most complicated so far in the book.

Place each answer inside the code chunk (grey box). Each code chunk should contain a text response or code that completes or answers the question or activity. Make sure to include plots when a question requests them. Problems are labeled Easy (E), Medium (M), and Hard (H).

Finally, upon completion, name your final output .html file as YourName_ANLY505-Year-Semester.html, publish the assignment to your RPubs account, and submit the link on Canvas. Each question is worth 5 points.

Questions

8E1. For each of the causal relationships below, name a hypothetical third variable that would lead to an interaction effect:

  1. Bread dough rises because of yeast.
  2. Education leads to higher income.
  3. Gasoline makes a car go.
# 1. Time. The effect of yeast on dough rising depends on how long the dough is given to rise; with no time, yeast has no effect.
# 2. Region. The income return to education differs by region, so the effect of education depends on where a person lives.
# 3. Engine type. The effect of gasoline on a car's speed depends on the engine, since engines differ in how efficiently they burn fuel.

8E2. Which of the following explanations invokes an interaction?

  1. Caramelizing onions requires cooking over low heat and making sure the onions do not dry out.
  2. A car will go faster when it has more cylinders or when it has a better fuel injector.
  3. Most people acquire their political beliefs from their parents, unless they get them instead from their friends.
  4. Intelligent animal species tend to be either highly social or have manipulative appendages (hands, tentacles, etc.).
# 1. Interaction: the effect of heat on caramelizing depends on whether the onions stay moist.
# 2. No interaction: "or" implies cylinders and fuel injectors contribute independently.
# 3. Interaction: the influence of parents on political beliefs depends on whether friends are the source instead.
# 4. No interaction: "or" implies sociality and manipulative appendages are independent routes to intelligence.

8E3. For each of the explanations in 8E2, write a linear model that expresses the stated relationship.

# 1. μi = α + βH*Hi + βD*Di + βHD*Hi*Di   (H = heat, D = moisture/dryness)

# 2. μi = α + βC*Ci + βQ*Qi               (C = cylinders, Q = fuel injector quality)

# 3. μi = α + βP*Pi + βF*Fi + βPF*Pi*Fi   (P = parents' beliefs, F = friends' beliefs)

# 4. μi = α + βS*Si + βA*Ai               (S = social, A = manipulative appendages)

8M1. Recall the tulips example from the chapter. Suppose another set of treatments adjusted the temperature in the greenhouse over two levels: cold and hot. The data in the chapter were collected at the cold temperature. You find none of the plants grown under the hot temperature developed any blooms at all, regardless of the water and shade levels. Can you explain this result in terms of interactions between water, shade, and temperature?

# There is a three-way interaction: the effects of water and shade on blooms (and their two-way interaction) all depend on temperature. Under the hot temperature, no combination of water and shade produces any blooms, so temperature conditions every other effect in the model.

8M2. Can you invent a regression equation that would make the bloom size zero, whenever the temperature is hot?

# Let Ti = 1 when the temperature is hot and Ti = 0 when cold. Start from the
# full three-way interaction model:
# μi = α + βW*Wi + βS*Si + βT*Ti + βWS*Wi*Si + βWT*Wi*Ti + βST*Si*Ti + βWST*Wi*Si*Ti

# To force μi = 0 whenever Ti = 1, set βT = −α, βWT = −βW, βST = −βS, and
# βWST = −βWS. Substituting these in gives:
# μi = α + βW*Wi + βS*Si − α*Ti + βWS*Wi*Si − βW*Wi*Ti − βS*Si*Ti − βWS*Wi*Si*Ti

# When Ti = 1 (hot), every term cancels and μi = 0 regardless of water and shade.
# When Ti = 0 (cold), the model reduces to the usual tulip model:
# μi = α + βW*Wi + βS*Si + βWS*Wi*Si
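
A quick numeric check of the cancellation, with arbitrary coefficient values chosen only for illustration:

a <- 0.5; bW <- 0.2; bS <- -0.1; bWS <- -0.15  # arbitrary illustrative values
mu <- function(W, S, T)
  a + bW*W + bS*S - a*T + bWS*W*S - bW*W*T - bS*S*T - bWS*W*S*T
mu(W = 1, S = -1, T = 1)  # 0: hot temperature kills all blooms
mu(W = 1, S = -1, T = 0)  # ordinary cold-temperature prediction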

8M4. Repeat the tulips analysis, but this time use priors that constrain the effect of water to be positive and the effect of shade to be negative. Use prior predictive simulation. What do these prior assumptions mean for the interaction prior, if anything? Visualize the prior simulation.

options(repos=structure(c(CRAN="http://cran.r-project.org")))
install.packages(c('devtools','coda','mvtnorm'))
library(devtools)
library(rethinking)

data(tulips)
d <- tulips
str(d)
## 'data.frame':    27 obs. of  4 variables:
##  $ bed   : Factor w/ 3 levels "a","b","c": 1 1 1 1 1 1 1 1 1 2 ...
##  $ water : int  1 1 1 2 2 2 3 3 3 1 ...
##  $ shade : int  1 2 3 1 2 3 1 2 3 1 ...
##  $ blooms: num  0 0 111 183.5 59.2 ...
d$blooms_std <- d$blooms / max(d$blooms)
d$water_cent <- d$water - mean(d$water)
d$shade_cent <- d$shade - mean(d$shade)

# Draws from the sign-constrained priors for prior predictive simulation:
# half-normal for the water effect (positive only) and negative half-normal for
# the shade effect (negative only). If water can only help and shade can only
# hurt, these assumptions also suggest the interaction prior should lean
# negative: more shade reduces the benefit of water.
bw_d <- abs(rnorm(nrow(d), 0, 0.25))
bs_d <- -abs(rnorm(nrow(d), 0, 0.25))
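
Visualizing the prior simulation, as the question asks: a minimal sketch using the draws above, with intercept draws added here for illustration. Each line is one prior draw of the implied bloom trend across water levels at average shade.

set.seed(505)
a_d <- rnorm(nrow(d), 0.5, 0.25)  # intercept draws matching the prior used below
plot(NULL, xlim = c(-1, 1), ylim = c(-0.5, 1.5),
     xlab = "water (centered)", ylab = "blooms (std)",
     main = "Prior predictive: water effect at mean shade")
abline(h = c(0, 1), lty = 2)  # blooms_std is only observable between 0 and 1
for (i in 1:nrow(d))
  abline(a = a_d[i], b = bw_d[i], col = col.alpha("black", 0.3))
# Every line slopes upward; repeating this with bs_d against shade_cent gives
# only downward slopes.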

m_tulip <- quap(
  alist(
    blooms_std ~ dnorm(mu, sigma),
    mu <- a + bw*water_cent + bs*shade_cent + bws*water_cent*shade_cent,
    a ~ dnorm(0.5, 0.25),
    bw ~ dnorm(0, 0.25),   # unconstrained fit; the sign constraints are
    bs ~ dnorm(0, 0.25),   # explored in the prior simulation above and the sketch below
    bws ~ dnorm(0, 0.25),
    sigma ~ dexp(1)
  ),
  data = d)

precis(m_tulip)
##             mean         sd        5.5%       94.5%
## a      0.3579854 0.02391543  0.31976392  0.39620687
## bw     0.2109937 0.02908645  0.16450799  0.25747951
## bs    -0.1163235 0.02908609 -0.16280868 -0.06983829
## bws   -0.1431639 0.03567461 -0.20017883 -0.08614899
## sigma  0.1248270 0.01693313  0.09776457  0.15188941
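
One way to encode the constraints directly in the model, rather than only in the prior simulation, is to give both coefficients log-normal priors (support only above zero) and subtract the shade term. The dlnorm parameters below are an assumption chosen so the prior median sits near the unconstrained estimates; this is a sketch, not the fit reported above.

m_tulip_con <- quap(
  alist(
    blooms_std ~ dnorm(mu, sigma),
    mu <- a + bw*water_cent - bs*shade_cent + bws*water_cent*shade_cent,
    a ~ dnorm(0.5, 0.25),
    bw ~ dlnorm(-1.5, 0.5),  # bw > 0; prior median ~0.22 (assumed values)
    bs ~ dlnorm(-1.5, 0.5),  # entered with a minus sign, so shade's effect is < 0
    bws ~ dnorm(0, 0.25),
    sigma ~ dexp(1)
  ),
  data = d)
precis(m_tulip_con)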

8H1. Return to the data(tulips) example in the chapter. Now include the bed variable as a predictor in the interaction model. Don’t interact bed with the other predictors; just include it as a main effect. Note that bed is categorical. So to use it properly, you will need to either construct dummy variables or, better, an index variable, as explained in Chapter 5.

data(tulips)
d <- tulips
str(d)
## 'data.frame':    27 obs. of  4 variables:
##  $ bed   : Factor w/ 3 levels "a","b","c": 1 1 1 1 1 1 1 1 1 2 ...
##  $ water : int  1 1 1 2 2 2 3 3 3 1 ...
##  $ shade : int  1 2 3 1 2 3 1 2 3 1 ...
##  $ blooms: num  0 0 111 183.5 59.2 ...
d$bed_id <- coerce_index(d$bed)  # integer index: a = 1, b = 2, c = 3
d$blooms_std <- d$blooms / max(d$blooms)
d$water_cent <- d$water - mean(d$water)
d$shade_cent <- d$shade - mean(d$shade)

m_bed <- quap(
  alist(
    blooms_std ~ dnorm(mu, sigma),
    # bb*bed_id treats the bed index as a single linear trend across beds;
    # see the index-variable alternative after the precis output below
    mu <- a + bb*bed_id + bw*water_cent + bs*shade_cent + bws*water_cent*shade_cent,
    a ~ dnorm(0.5, 0.25),
    bw ~ dnorm(0, 0.25),
    bs ~ dnorm(0, 0.25),
    bws ~ dnorm(0, 0.25),
    bb ~ dnorm(0, 0.25),
    sigma ~ dunif(0, 100)
  ),
  data = d
)
precis(m_bed, depth = 2)
##              mean         sd        5.5%       94.5%
## a      0.23320435 0.05535636  0.14473419  0.32167451
## bw     0.20729350 0.02619489  0.16542901  0.24915799
## bs    -0.11377077 0.02618974 -0.15562703 -0.07191451
## bws   -0.14374411 0.03199186 -0.19487329 -0.09261493
## bb     0.06272014 0.02569137  0.02166037  0.10377992
## sigma  0.11171887 0.01524582  0.08735311  0.13608463
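
Because bed is an unordered category, a common alternative is to give each bed its own intercept via the index variable, instead of a single slope on the index. A sketch under similar priors, not the fit reported above:

m_bed_idx <- quap(
  alist(
    blooms_std ~ dnorm(mu, sigma),
    mu <- a[bed_id] + bw*water_cent + bs*shade_cent + bws*water_cent*shade_cent,
    a[bed_id] ~ dnorm(0.5, 0.25),  # one intercept per bed
    bw ~ dnorm(0, 0.25),
    bs ~ dnorm(0, 0.25),
    bws ~ dnorm(0, 0.25),
    sigma ~ dexp(1)
  ),
  data = d
)
precis(m_bed_idx, depth = 2)  # depth = 2 shows the per-bed intercepts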

8H5. Consider the data(Wines2012) data table. These data are expert ratings of 20 different French and American wines by 9 different French and American judges. Your goal is to model score, the subjective rating assigned by each judge to each wine. I recommend standardizing it. In this problem, consider only variation among judges and wines. Construct index variables of judge and wine and then use these index variables to construct a linear regression model. Justify your priors. You should end up with 9 judge parameters and 20 wine parameters. Plot the parameter estimates. How do you interpret the variation among individual judges and individual wines? Do you notice any patterns, just by plotting the differences? Which judges gave the highest/lowest ratings? Which wines were rated worst/best on average?

data(Wines2012)
d <- Wines2012

dat_list <- list(
    S = standardize(d$score),
    jid = as.integer(d$judge),
    wid = as.integer(d$wine)
)

str(dat_list)
## List of 3
##  $ S  : num [1:180] -1.5766 -0.4505 -0.0751 0.3003 -2.3274 ...
##   ..- attr(*, "scaled:center")= num 14.2
##   ..- attr(*, "scaled:scale")= num 2.66
##  $ jid: int [1:180] 4 4 4 4 4 4 4 4 4 4 ...
##  $ wid: int [1:180] 1 3 5 7 9 11 13 15 17 19 ...
m1 <- ulam(
  alist(
    S ~ dnorm(mu, sigma),
    mu <- a[jid] + b[wid],
    # With S standardized, Normal(0, 0.5) priors keep each judge and wine
    # effect within roughly one standard deviation of the average score
    a[jid] ~ dnorm(0, 0.5),
    b[wid] ~ dnorm(0, 0.5),
    sigma ~ dexp(1)
  ), data = dat_list, chains = 4, cores = 4)

precis(m1, 2)
##               mean        sd        5.5%       94.5%    n_eff     Rhat4
## a[1]  -0.280279293 0.1970092 -0.59401257  0.03061632 1852.544 0.9988898
## a[2]   0.213441643 0.1987660 -0.10290650  0.53282260 2430.824 0.9997925
## a[3]   0.207782002 0.1929041 -0.09865005  0.52744665 2126.948 0.9997478
## a[4]  -0.542165830 0.2016152 -0.87056572 -0.22250249 2094.872 0.9996467
## a[5]   0.795170300 0.1958213  0.47718189  1.10294706 2511.004 0.9985919
## a[6]   0.474095831 0.1953750  0.16986678  0.78924699 2442.382 0.9993011
## a[7]   0.133784845 0.1942380 -0.18945440  0.44009274 2003.961 0.9990550
## a[8]  -0.661254295 0.1946964 -0.96843712 -0.34555853 1910.304 1.0011777
## a[9]  -0.347178012 0.1992486 -0.66447361 -0.02693751 2204.731 1.0002995
## b[1]   0.121304899 0.2607517 -0.30030711  0.52123280 2804.579 0.9987635
## b[2]   0.086643515 0.2537378 -0.32497474  0.48279984 2427.672 1.0002805
## b[3]   0.225494450 0.2623111 -0.18853211  0.64516896 3160.930 0.9982497
## b[4]   0.461647508 0.2511127  0.06988337  0.87119179 2689.558 0.9986025
## b[5]  -0.105607095 0.2630993 -0.52163472  0.32097132 2563.264 0.9989148
## b[6]  -0.317519779 0.2511956 -0.71580501  0.07981601 2657.997 0.9999066
## b[7]   0.252464901 0.2580456 -0.16449313  0.66227182 2562.037 1.0009856
## b[8]   0.236368453 0.2569902 -0.18180132  0.64124373 2551.966 0.9998378
## b[9]   0.070523314 0.2632676 -0.36422655  0.48288500 2847.798 0.9995231
## b[10]  0.097583294 0.2579403 -0.31191702  0.51398194 2463.123 0.9988098
## b[11] -0.002830291 0.2610895 -0.41276391  0.43006875 2740.401 0.9983831
## b[12] -0.017573540 0.2534127 -0.41253081  0.38500423 2407.083 0.9987265
## b[13] -0.083998139 0.2574041 -0.50608875  0.30614135 2431.861 0.9986805
## b[14]  0.005092856 0.2600505 -0.40181349  0.41376036 2952.535 0.9983762
## b[15] -0.182730573 0.2602398 -0.59117362  0.23446807 2621.526 0.9996818
## b[16] -0.168753180 0.2646760 -0.59950679  0.26446957 2679.255 0.9992320
## b[17] -0.117195449 0.2663923 -0.54571409  0.30483171 2926.500 0.9996650
## b[18] -0.726739575 0.2649194 -1.14151993 -0.29369156 2840.732 0.9998046
## b[19] -0.135625850 0.2591321 -0.55151544  0.27605313 2496.340 1.0000030
## b[20]  0.314679576 0.2609784 -0.09394318  0.72287067 2743.752 1.0004024
## sigma  0.846986961 0.0476450  0.77602492  0.92633532 2447.948 0.9988780
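
The question also asks for a plot of the estimates and an interpretation; a caterpillar plot of the precis output makes the pattern visible.

plot(precis(m1, depth = 2))  # judge effects a[1:9], wine effects b[1:20]

# The judge parameters spread out more than the wine parameters: who is judging
# matters more than which wine is judged. Judge 5 gave the highest ratings on
# average (a[5] ~ 0.80) and judge 8 the lowest (a[8] ~ -0.66). Among wines,
# wine 4 was rated best on average (b[4] ~ 0.46) and wine 18 was clearly the
# worst (b[18] ~ -0.73); most other wine effects overlap zero.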