6E1. List three mechanisms by which multiple regression can produce false inferences about causal effects.

# Multicollinearity, post-treatment bias, and collider bias.

6E2. For one of the mechanisms in the previous problem, provide an example of your choice, perhaps from your own research.

# Multicollinearity
# For example, a researcher wants to build a model to estimate an individual's happiness, using data on marriage, age, job industry, income, education, etc.
# If income and education are both included as predictors, there will be a multicollinearity problem: higher education generally leads to higher income,
# so the two predictors carry largely overlapping information and their individual coefficients become hard to estimate.
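# A minimal simulation sketch of this (hypothetical) example; the variable names and coefficients
# below are made up purely for illustration.
set.seed(6)
n <- 500
education <- rnorm(n)
income    <- rnorm(n, 0.9 * education, 0.2)   # income almost determined by education
happiness <- rnorm(n, 0.5 * income)
# With income alone the coefficient is precise; adding the highly correlated education
# predictor inflates the standard errors of both coefficients.
summary(lm(happiness ~ income))
summary(lm(happiness ~ income + education))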

6E3. List the four elemental confounds. Can you explain the conditional dependencies of each?

# Fork: X<-Z->Y. X and Y are associated through their common cause Z, but become independent once we condition on Z.
# Pipe: X->Z->Y. The influence of X on Y flows through Z; conditioning on Z blocks the path, so X and Y are independent conditional on Z.
# Collider: X->Z<-Y. There is no association between X and Y unless we condition on Z; conditioning on Z opens the path and lets information flow between X and Y.
# Descendant: a variable D caused by Z (Z->D). Conditioning on D is like conditioning (weakly) on Z itself, so it partially opens or closes whatever path Z sits on.
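# A quick simulation sketch of the collider case (the coefficients are arbitrary, just to show how
# conditioning on Z manufactures an X-Y association that does not exist marginally).
set.seed(6)
n <- 1000
x <- rnorm(n)
y <- rnorm(n)            # x and y are simulated independently
z <- rnorm(n, x + y)     # z is a common consequence (collider) of x and y
coef(lm(y ~ x))          # x coefficient near zero: no marginal association
coef(lm(y ~ x + z))      # conditioning on z induces a clearly negative x coefficient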

6E4. How is a biased sample like conditioning on a collider? Think of the example at the open of the chapter.

# Use the newsworthiness and trustworthiness of grant proposals as an example. Reviewers care about both criteria equally,
# add the two scores together, and fund only the top 10% of proposals by combined score.
# Conditioning on being funded, there is a negative association between newsworthiness and trustworthiness,
# even though the two are unrelated in the full pool of proposals.
# This is because any proposal that gets funded must score highly on at least one of the two criteria:
# if, for example, a funded proposal has low trustworthiness, it must have high newsworthiness, otherwise it would not have been funded.
# Selection is a common consequence of both traits, so selecting on it is conditioning on a collider.
# A biased sample works the same way: the sample has implicitly been conditioned on the selection variable.
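# A sketch along the lines of the simulation at the start of the chapter: newsworthiness and
# trustworthiness are generated independently, yet they are negatively correlated within the
# funded (top 10%) proposals.
set.seed(1914)
N <- 200                          # number of grant proposals
p <- 0.1                          # proportion to fund
nw <- rnorm(N)                    # newsworthiness
tw <- rnorm(N)                    # trustworthiness
s <- nw + tw                      # total score used for selection
q <- quantile(s, 1 - p)           # top 10% threshold
selected <- s >= q
cor(tw[selected], nw[selected])   # strongly negative among the funded proposals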

6M1. Modify the DAG on page 186 to include the variable V, an unobserved cause of C and Y: C ← V → Y. Reanalyze the DAG. How many paths connect X to Y? Which must be closed? Which variables should you condition on now?

# Aside from the direct path X->Y, there are now four paths from X to Y:
# (1) X<-U<-A->C->Y
# (2) X<-U<-A->C<-V->Y
# (3) X<-U->B<-C->Y
# (4) X<-U->B<-C<-V->Y
# Paths (3) and (4) pass through the collider B, so they are already closed as long as we do not condition on B.
# Path (2) now contains a collider at C (A->C<-V), so it too is closed as long as we do not condition on C.
# Path (1) is an open backdoor path through the fork at A and must be closed.
# Before adding V we could condition on either A or C; now conditioning on C would open path (2) through V,
# so we should condition on A only.
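# As a check, the implied adjustment sets can be computed with the dagitty package. The DAG below
# encodes the modified graph, with U and V marked unobserved; it should report { A } as the minimal
# adjustment set.
library(dagitty)
dag_6M1 <- dagitty("dag{
  U [unobserved]
  V [unobserved]
  X -> Y
  X <- U <- A -> C -> Y
  U -> B <- C
  C <- V -> Y
}")
adjustmentSets(dag_6M1, exposure = "X", outcome = "Y")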

6M2. Sometimes, in order to avoid multicollinearity, people inspect pairwise correlations among predictors before including them in a model. This is a bad procedure, because what matters is the conditional association, not the association before the variables are included in the model. To highlight this, consider the DAG X → Z → Y. Simulate data from this DAG so that the correlation between X and Z is very large. Then include both in a model predicting Y. Do you observe any multicollinearity? Why or why not? What is different from the legs example in the chapter?

n <- 1000
b_xz <- 0.9   # simulated effect of X on Z
b_zy <- 0.7   # simulated effect of Z on Y

set.seed(100)
x <- rnorm(n)
z <- rnorm(n, x * b_xz)   # Z caused by X
y <- rnorm(n, z * b_zy)   # Y caused by Z only (pipe: X -> Z -> Y)

d <- data.frame(x, y, z)
cor(d)
##           x         y         z
## x 1.0000000 0.4562717 0.6924074
## y 0.4562717 1.0000000 0.6351279
## z 0.6924074 0.6351279 1.0000000
# The predictors are clearly correlated (r(x, z) is about 0.69), and in a model predicting y from both x and z the
# posterior for the coefficient on x concentrates near zero (roughly -0.03 to 0.11), even though the simulated
# X -> Z effect is 0.9. So some multicollinearity is visible, but it is far less damaging than in the legs example:
# in this pipe, X is independent of Y conditional on Z, so x genuinely adds no information once z is known.
# The difference from the legs example is that the coefficient on z is still estimated correctly here (near the true 0.7),
# whereas in the legs example neither leg coefficient is identified on its own; only their sum is.
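# The model referred to above is not shown; here is a minimal sketch of one way to fit it with
# rethinking::quap (the priors are arbitrary choices for illustration). precis(m_xz) should show
# bx near zero and bz near the simulated 0.7.
library(rethinking)
m_xz <- quap(
  alist(
    y ~ dnorm(mu, sigma),
    mu <- a + bx * x + bz * z,
    a ~ dnorm(0, 1),
    bx ~ dnorm(0, 1),
    bz ~ dnorm(0, 1),
    sigma ~ dexp(1)
  ),
  data = d
)
precis(m_xz)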