Multiple regression is no oracle, but only a golem. It is logical, but the relationships it describes are conditional associations, not causal influences. Therefore additional information, from outside the model, is needed to make sense of it. This chapter presented introductory examples of some common frustrations: multicollinearity, post-treatment bias, and collider bias. Solutions to these frustrations can be organized under a coherent framework in which hypothetical causal relations among variables are analyzed to cope with confounding. In all cases, causal models exist outside the statistical model and can be difficult to test. However, it is possible to reach valid causal inferences in the absence of experiments. This is good news, because we often cannot perform experiments, both for practical and ethical reasons.
Place each answer inside the code chunk (grey box). Each code chunk should contain a text response or the code that completes or answers the question or activity requested. Problems are labeled Easy (E), Medium (M), and Hard (H).
Finally, upon completion, name your final output .html file as YourName_ANLY505-Year-Semester.html, publish the assignment to your RPubs account, and submit the link to Canvas. Each question is worth 5 points.
6E1. List three mechanisms by which multiple regression can produce false inferences about causal effects.
#Multicollinearity, post-treatment bias, collider bias
6E2. For one of the mechanisms in the previous problem, provide an example of your choice, perhaps from your own research.
#Multicollinearity is a very strong association between two or more predictor variables, so that each predictor carries little information beyond the others.
#For example, when tracking employees' health over 12 quarters, employee age (measured in quarters) and the quarter index increase together and are almost perfectly correlated (essentially the same variable). Including both as predictors in the same model produces multicollinearity: the coefficients become highly uncertain because the model cannot tell their effects apart. A small sketch follows below.
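#A minimal simulation sketch of this example (all numbers below are hypothetical, chosen only to show the symptom): the quarter index and age in quarters rise together, so the two predictors are almost perfectly correlated, and including both inflates the uncertainty of their coefficients.
set.seed(505)
quarter <- rep(1:12, each = 50)                                   # 12 quarters, 50 employee observations per quarter
age_q   <- 30 * 4 + quarter + rnorm(length(quarter), 0, 0.5)      # age in quarters moves with the quarter index (same baseline for simplicity)
health  <- rnorm(length(quarter), mean = 0.2 * quarter, sd = 1)   # health drifts slightly over time
cor(quarter, age_q)                                               # close to 1
summary(lm(health ~ quarter))$coefficients                        # precise slope with one predictor
summary(lm(health ~ quarter + age_q))$coefficients                # standard errors inflate with both predictors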
6E3. List the four elemental confounds. Can you explain the conditional dependencies of each?
#The fork: X <- Z -> Y. X _||_ Y | Z. Z is a common cause of X and Y, so they are associated; conditioning on Z removes the dependency between X and Y.
#The pipe: X -> Z -> Y. X _||_ Y | Z. Z mediates the influence of X on Y; conditioning on Z blocks the path and removes the dependency between X and Y.
#The collider: X -> Z <- Y. X _||_ Y, but not X _||_ Y | Z. X and Y are independent until we condition on Z; conditioning on Z creates a dependency between them.
#The descendant: a variable A caused by Z (e.g., X -> Z -> Y with Z -> A). Conditioning on A partially conditions on Z, so it has a weaker version of whatever effect conditioning on Z would have (here, partially blocking the pipe).
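#As a quick optional check of these (in)dependencies (not required by the question), dagitty can list the conditional independencies each elemental DAG implies; the object names below are just local labels.
library(dagitty)
fork     <- dagitty("dag{ X <- Z -> Y }")
pipe     <- dagitty("dag{ X -> Z -> Y }")
collider <- dagitty("dag{ X -> Z <- Y }")
descend  <- dagitty("dag{
X -> Z -> Y
Z -> A
}")
impliedConditionalIndependencies(fork)      # X _||_ Y | Z
impliedConditionalIndependencies(pipe)      # X _||_ Y | Z
impliedConditionalIndependencies(collider)  # X _||_ Y (marginally, not given Z)
impliedConditionalIndependencies(descend)   # X _||_ Y | Z, plus A's independencies given Z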
6E4. How is a biased sample like conditioning on a collider? Think of the example at the open of the chapter.
#The chapter opens with the question of whether newsworthy scientific studies are trustworthy. A grant panel weighs trustworthiness and newsworthiness equally, ranks the proposals by their combined score, and funds the top 10%. In the full pool there is no correlation at all between trustworthiness and newsworthiness, but proposals rarely score high on both, so among the funded proposals the most newsworthy studies tend to have below-average trustworthiness.
#The combined score is a collider, because it is caused by both trustworthiness and newsworthiness. Selecting only the funded proposals conditions on that collider and creates a negative dependency between the two. A biased sample works the same way: the selection process is influenced by several variables, and restricting the analysis to the selected cases amounts to conditioning on a collider, which induces spurious associations among its causes. A small simulation follows below.
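#A small simulation in the spirit of the chapter's grant-panel example (the specific numbers below are assumed, not taken from the text): trustworthiness and newsworthiness are generated independently, the top 10% of proposals by combined score are "funded", and within that selected sample the two become negatively correlated.
set.seed(1914)
n_prop <- 200
nw <- rnorm(n_prop)                          # newsworthiness
tw <- rnorm(n_prop)                          # trustworthiness, independent of nw
score    <- nw + tw                          # combined score: the collider
selected <- score >= quantile(score, 0.9)    # fund the top 10%
cor(nw, tw)                                  # about zero in the full pool
cor(nw[selected], tw[selected])              # clearly negative among funded proposals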
6M1. Modify the DAG on page 186 to include the variable V, an unobserved cause of C and Y: C ← V → Y. Reanalyze the DAG. How many paths connect X to Y? Which must be closed? Which variables should you condition on now?
#Adding V turns C into a collider on the path X <- U <- A -> C <- V -> Y. Five paths now connect X to Y: the direct causal path X -> Y, plus four backdoor paths through U (two passing through A and C, two passing through B). The only open backdoor path is X <- U <- A -> C -> Y, and it must be closed. Conditioning on C would close that path but open the collider at C, letting the unobserved V confound the estimate, so we should now condition on A only.
library(dagitty)
dag6.1.1 <- dagitty("dag{
U [unobserved]
V [unobserved]
X -> Y
X <- U <- A -> C -> Y
U -> B <- C <- V -> Y
}")
adjustmentSets(dag6.1.1, exposure = "X", outcome = "Y")
## { A }
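#To double-check the path count in the answer above, dagitty can also list every path between X and Y and whether each is currently open (optional).
paths(dag6.1.1, from = "X", to = "Y")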
6M2. Sometimes, in order to avoid multicollinearity, people inspect pairwise correlations among predictors before including them in a model. This is a bad procedure, because what matters is the conditional association, not the association before the variables are included in the model. To highlight this, consider the DAG X → Z → Y. Simulate data from this DAG so that the correlation between X and Z is very large. Then include both in a model predicting Y. Do you observe any multicollinearity? Why or why not? What is different from the legs example in the chapter?
#Simulate from the pipe X -> Z -> Y, making the association between X and Z very strong
N <- 1000
X <- rnorm(n = N, mean = 0, sd = 1)
Z <- rnorm(n = N, mean = X, sd = 0.1)   # Z is nearly a copy of X, so cor(X, Z) is about 0.99
Y <- rnorm(n = N, mean = Z, sd = 1)     # Y depends on X only through Z
d <- data.frame(X, Z, Y)
cor(d)
#Fit a model predicting Y from both X and Z
library(rethinking)
## Loading required package: rstan
## Loading required package: StanHeaders
## Loading required package: ggplot2
## rstan (Version 2.21.2, GitRev: 2e1f913d3ca3)
## For execution on a local, multicore CPU with excess RAM we recommend calling
## options(mc.cores = parallel::detectCores()).
## To avoid recompilation of unchanged Stan programs, we recommend calling
## rstan_options(auto_write = TRUE)
## Do not specify '-march=native' in 'LOCAL_CPPFLAGS' or a Makevars file
## Loading required package: parallel
## rethinking (Version 2.13)
##
## Attaching package: 'rethinking'
## The following object is masked from 'package:stats':
##
## rstudent
m6.1.2 <- quap(
  alist(
    Y ~ dnorm(mu, sigma),
    mu <- a + bX * X + bZ * Z,
    a ~ dnorm(10, 100),
    bX ~ dnorm(2, 10),
    bZ ~ dnorm(2, 10),
    sigma ~ dexp(1)
  ),
  data = d
)
precis(m6.1.2)
plot(precis(m6.1.2))
#With this simulation (X -> Z -> Y), the pairwise correlation between X and Z is close to 1, yet the model still behaves sensibly: the posterior mean for bZ lands near its true value and bX lands near zero, although both posteriors are wider than they would be with a single predictor because the two variables are nearly redundant. So we do not see the pathological multicollinearity of the legs example, where neither coefficient could be identified and only their sum was. The difference is the causal structure: here Z mediates the effect of X (a pipe), so once Z is in the model X carries no additional information about Y, and the model correctly assigns the association to Z. In the legs example the two predictors are correlated because of a common cause (height), each carries essentially the same information about the outcome, and the model has no way to decide how to split the association between them; see the contrast sketch below.
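#For contrast, a sketch with a legs-style structure (hypothetical data, analogous in spirit to the chapter's leg-length example): the two predictors are near-copies of a common cause that drives the outcome, so the model can pin down only their sum and leaves each coefficient wide and split between them.
H  <- rnorm(N)                        # common cause, playing the role of height
L1 <- rnorm(N, mean = H, sd = 0.1)    # first "leg"
L2 <- rnorm(N, mean = H, sd = 0.1)    # second "leg"
Y2 <- rnorm(N, mean = H, sd = 1)      # outcome driven by the common cause
m_legs <- quap(
  alist(
    Y2 ~ dnorm(mu, sigma),
    mu <- a + b1 * L1 + b2 * L2,
    a ~ dnorm(0, 1),
    b1 ~ dnorm(0, 10),
    b2 ~ dnorm(0, 10),
    sigma ~ dexp(1)
  ),
  data = data.frame(Y2, L1, L2)
)
precis(m_legs)   # b1 and b2 are each uncertain and roughly share the association; only their sum is well identified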