title: "Assignment #5"
author: "Jiayuan Shi"
date: "2020-08-21"
output: html_document

Chapter 6 - The Haunted DAG & The Causal Terror

Multiple regression is no oracle, but only a golem. It is logical, but the relationships it describes are conditional associations, not causal influences. Therefore additional information, from outside the model, is needed to make sense of it. This chapter presented introductory examples of some common frustrations: multicollinearity, post-treatment bias, and collider bias. Solutions to these frustrations can be organized under a coherent framework in which hypothetical causal relations among variables are analyzed to cope with confounding. In all cases, causal models exist outside the statistical model and can be difficult to test. However, it is possible to reach valid causal inferences in the absence of experiments. This is good news, because we often cannot perform experiments, both for practical and ethical reasons.

Place each answer inside the code chunk (grey box). The code chunks should contain a text response or code that completes/answers the question or activity requested. Problems are labeled Easy (E), Medium (M), and Hard (H).

Finally, upon completion, name your final output .html file as: YourName_ANLY505-Year-Semester.html and publish the assignment to your R Pubs account and submit the link to Canvas. Each question is worth 5 points.

Questions

6E1. List three mechanisms by which multiple regression can produce false inferences about causal effects.

# Multicollinearity, post-treatment bias, and collider bias.

6E2. For one of the mechanisms in the previous problem, provide an example of your choice, perhaps from your own research.

# Multicollinearity generally occurs when two or more predictor variables are highly correlated with each other.
# Example: suppose a study models the number of calories students eat in a day (Y) using their height (X1) and weight (X2). Including all three variables in the multiple regression Y ~ X1 + X2 creates a multicollinearity problem, because the predictors height and weight are highly correlated with each other. A simulation sketch of this idea is below.
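# A minimal simulation sketch of this hypothetical example (all variable names and
# effect sizes below are made up for illustration, not taken from any real study):
set.seed(505)
N      <- 500
height <- rnorm(N, 170, 10)                        # height in cm
weight <- rnorm(N, -100 + 1.0 * height, 5)         # weight in kg, strongly driven by height
kcal   <- rnorm(N, 10 * weight, 100)               # daily calories, driven by weight

round(cor(height, weight), 2)                      # predictors are highly correlated
summary(lm(kcal ~ weight))$coefficients            # precise slope with a single predictor
summary(lm(kcal ~ height + weight))$coefficients   # standard errors inflate with both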

6E3. List the four elemental confounds. Can you explain the conditional dependencies of each?

# The Fork, X ← Z → Y. Z is a common cause of X and Y. X and Y are associated, but they are independent conditional on Z.
# The Pipe, X → Z → Y. X causes Z, which in turn causes Y. X and Y are independent, conditional on Z.
# The Collider, X → Z ← Y. X and Y jointly cause Z. There is no association between X and Y unless you condition on Z; conditioning on Z induces one.
# The Descendant, X → Z → Y with Z → D. The descendant D is a variable influenced by Z, so conditioning on D is like conditioning on Z, only weaker. A simulation check of two of these dependencies follows below.
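# A small simulation sketch (hypothetical data) checking two of the conditional
# dependencies above with simple regressions: in a fork, conditioning on Z removes
# the X-Y association; in a collider, conditioning on Z creates one.
set.seed(505)
N <- 1e4

# Fork: X <- Z -> Y
z <- rnorm(N)
x <- rnorm(N, z)
y <- rnorm(N, z)
round(coef(lm(y ~ x))["x"], 2)       # marginal association is present
round(coef(lm(y ~ x + z))["x"], 2)   # near zero once Z is conditioned on

# Collider: X -> Z <- Y
x <- rnorm(N)
y <- rnorm(N)
z <- rnorm(N, x + y)
round(coef(lm(y ~ x))["x"], 2)       # near zero: no marginal association
round(coef(lm(y ~ x + z))["x"], 2)   # spurious association appears after conditioning on Z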

6E4. How is a biased sample like conditioning on a collider? Think of the example at the open of the chapter.

# A biased sample is like conditioning on a collider: when the sample is selected on a variable that is a common effect of two causes, a spurious association between those causes appears within the sample, even if they are independent in the population.
# For example, suppose smoking and birth defects are unassociated, but both lead to babies with low birthweight: Smoking → low birthweight ← birth defects. A study that only samples low-birthweight babies effectively conditions on the collider, so smoking and birth defects will appear associated in that sample. A simulation sketch is below.
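# A hypothetical simulation sketch of the selection idea above (effect sizes made up):
# smoking and birth defects are generated independently, both lower birthweight, and
# analyzing only the low-birthweight babies induces an association between them.
set.seed(505)
N       <- 1e4
smoke   <- rbinom(N, 1, 0.3)
defect  <- rbinom(N, 1, 0.1)                                  # independent of smoking
bweight <- rnorm(N, 3.4 - 0.4 * smoke - 0.6 * defect, 0.5)    # birthweight in kg

round(cor(smoke, defect), 2)              # about zero in the full population
low <- bweight < 2.5                      # biased sample: low-birthweight babies only
round(cor(smoke[low], defect[low]), 2)    # a negative association appears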

6M1. Modify the DAG on page 186 to include the variable V, an unobserved cause of C and Y: C ← V → Y. Reanalyze the DAG. How many paths connect X to Y? Which must be closed? Which variables should you condition on now?

# After adding C ← V → Y, there are five paths connecting X to Y:
# (1) X → Y, the direct causal path; we want to keep this one open.
# (2) X ← U ← A → C → Y, an open backdoor path; this is the one that must be closed.
# (3) X ← U ← A → C ← V → Y, closed, because C is a collider on this path.
# (4) X ← U → B ← C → Y, closed, because B is a collider.
# (5) X ← U → B ← C ← V → Y, closed, because both B and C are colliders.
# Only path (2) is open and must be closed. U and V are unobserved, and conditioning on C would close path (2) but open path (3), since C is a collider there. So we should condition on A only; a dagitty check follows below.
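# We can check this path analysis with dagitty; U and V are declared unobserved, and
# the adjustment set returned should be { A } (dag_6m1 is just our name for the DAG).
library(dagitty)
dag_6m1 <- dagitty("dag {
  U [unobserved]
  V [unobserved]
  X -> Y
  X <- U <- A -> C -> Y
  U -> B <- C
  C <- V -> Y
}")
adjustmentSets(dag_6m1, exposure = "X", outcome = "Y")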

6M2. Sometimes, in order to avoid multicollinearity, people inspect pairwise correlations among predictors before including them in a model. This is a bad procedure, because what matters is the conditional association, not the association before the variables are included in the model. To highlight this, consider the DAG X → Z → Y. Simulate data from this DAG so that the correlation between X and Z is very large. Then include both in a model predicting Y. Do you observe any multicollinearity? Why or why not? What is different from the legs example in the chapter?

library(rethinking)
# Simulate data from X -> Z -> Y so that X and Z are strongly correlated
n    <- 1000
b_xz <- 0.9   # effect of X on Z in the simulation
b_zy <- 0.7   # effect of Z on Y in the simulation

set.seed(100)
x <- rnorm(n)
z <- rnorm(n, x * b_xz)
y <- rnorm(n, z * b_zy)

d <- data.frame(x, y, z)
cor(d)
##           x         y         z
## x 1.0000000 0.4562717 0.6924074
## y 0.4562717 1.0000000 0.6351279
## z 0.6924074 0.6351279 1.0000000
# Regress y on both x and z; inside alist(), b_xz and b_zy are model parameters
# (the coefficients of x and z in the linear model for y), not the constants above
m6.2 <- quap(
  alist(
    y ~ dnorm(mu, sigma),
    mu <- a + b_xz*x + b_zy*z,
    a ~ dnorm(0, 100),
    c(b_xz, b_zy) ~ dnorm(0, 100),
    sigma ~ dexp(1)
  ),
  data = d
)
## Caution, model may not have converged.
## Code 1: Maximum iterations reached.
precis(m6.2)
##               mean         sd        5.5%      94.5%
## a     -0.008135237 0.03332760 -0.06139918 0.04512871
## b_xz   0.041459808 0.04483542 -0.03019585 0.11311547
## b_zy   0.619099380 0.03398903  0.56477834 0.67342042
## sigma  1.053730409 0.02383294  1.01564077 1.09182005
plot(precis(m6.2))

post <- extract.samples(m6.2)
# joint posterior of the two coefficients, as in the legs example in the chapter
plot(b_zy ~ b_xz, post, col = col.alpha(rangi2, 0.1), pch = 16)

# Although x and z are highly correlated (r ≈ 0.69 in cor(d) above), we do not see the pathological multicollinearity of the legs example: both coefficients have small standard deviations. The coefficient on x (labeled b_xz in the model) is close to zero, ranging roughly from -0.03 to 0.11, because in the pipe X → Z → Y, X carries no additional information about Y once Z is known.
# The difference from the legs example is that beta(Z→Y) is estimated correctly here, whereas in the legs example the two coefficients could not be separated: their posteriors were enormous and strongly negatively correlated. Here Z mediates the effect of X, so the model correctly attributes the association to Z; the posterior correlation of the two coefficients (checked below) is far from -1.
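# One quick check of the contrast with the legs example (a sketch reusing the `post`
# samples extracted above): the posterior correlation between the two coefficients
# should come out well away from the near -1 seen for the two leg lengths.
round(cor(post$b_xz, post$b_zy), 2)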