Multiple regression is no oracle, but only a golem. It is logical, but the relationships it describes are conditional associations, not causal influences. Therefore additional information, from outside the model, is needed to make sense of it. This chapter presented introductory examples of some common frustrations: multicollinearity, post-treatment bias, and collider bias. Solutions to these frustrations can be organized under a coherent framework in which hypothetical causal relations among variables are analyzed to cope with confounding. In all cases, causal models exist outside the statistical model and can be difficult to test. However, it is possible to reach valid causal inferences in the absence of experiments. This is good news, because we often cannot perform experiments, both for practical and ethical reasons.
Place each answer inside the code chunk (grey box). The code chunks should contain a text response or code that completes/answers the question or activity requested. Make sure to include plots if the question requests them. Problems are labeled Easy (E), Medium (M), and Hard (H).
Finally, upon completion, name your final output .html file as YourName_ANLY505-Year-Semester.html, publish the assignment to your RPubs account, and submit the link to Canvas. Each question is worth 5 points.
6E1. List three mechanisms by which multiple regression can produce false inferences about causal effects.
# Post-treatment bias
# Multicollinearity
# Collider bias
6E2. For one of the mechanisms in the previous problem, provide an example of your choice, perhaps from your own research.
# I recently ran into multicollinearity at work. I was building a model to
# predict return on investment (ROI) using a department's capital spend and
# total sales as the independent variables. Because capital spend and sales
# move together (more sales -> a larger budget for the department), the two
# predictors were strongly collinear. As a consequence, the coefficient
# estimates were unstable and I misinterpreted the statistical significance
# of my predictors. A small sketch of this situation follows.
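# A minimal sketch of that situation (all numbers are invented for
# illustration): spend closely tracks sales, so the two predictors are
# nearly redundant and the coefficient standard errors inflate.
set.seed(1)
n <- 100
sales <- rnorm(n, 100, 10)
spend <- rnorm(n, 0.5 * sales, 1)    # spend follows sales closely
roi <- rnorm(n, 0.02 * sales, 0.5)
summary(lm(roi ~ spend + sales))$coefficients
# Compare with the single-predictor fit, where the sales coefficient is
# estimated far more precisely:
summary(lm(roi ~ sales))$coefficients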
6E3. List the four elemental confounds. Can you explain the conditional dependencies of each?
# Fork: X <- Z -> Y. X and Y are associated through their common cause Z,
# but become independent once we condition on Z.
# Pipe: X -> Z -> Y. X and Y are associated; conditioning on Z blocks the
# path and makes them independent.
# Collider: X -> Z <- Y. X and Y are independent, but conditioning on Z opens
# the path and induces an association between them. The simulation below
# demonstrates this case.
# Descendant: D is a child of Z. Conditioning on D partially conditions on Z,
# weakly producing whatever effect conditioning on Z itself would produce
# (e.g., partially opening a collider or partially blocking a pipe).
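# A quick simulation of the collider, the least intuitive case (variable
# names are mine): X and Y are generated independently, and Z is their
# common consequence.
set.seed(11)
n <- 1000
x <- rnorm(n)
y <- rnorm(n)
z <- rnorm(n, x + y)       # collider: X -> Z <- Y
round(cor(x, y), 2)        # near 0: X and Y are marginally independent
coef(lm(y ~ x + z))        # conditioning on Z induces a negative slope on x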
6E4. How is a biased sample like conditioning on a collider? Think of the example at the open of the chapter.
# A biased sample over-represents or under-represents parts of the population
# it is drawn from. The example from the opening of the chapter is
# newsworthiness versus trustworthiness in scientific publication: reviewers
# weigh both criteria and fund the proposals with the highest total scores.
# Selecting on the total score is conditioning on a collider. Among funded
# proposals, newsworthiness and trustworthiness become negatively associated
# even if they are unrelated in the full pool, because a proposal that is
# weak on one dimension can only be funded by being strong on the other.
# A biased sample works the same way: inclusion in the sample is a common
# consequence of several variables, so restricting attention to the sample
# induces spurious associations among them. The simulation below makes this
# concrete.
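# A simulation along the lines of the chapter's opening example:
# newsworthiness and trustworthiness are generated independently, and
# proposals are selected on their combined score.
set.seed(1914)
N <- 200                  # number of grant proposals
p <- 0.1                  # proportion to select
nw <- rnorm(N)            # newsworthiness
tw <- rnorm(N)            # trustworthiness, independent of nw
s <- nw + tw              # total score
q <- quantile(s, 1 - p)   # funding threshold: top 10%
selected <- s >= q
cor(nw[selected], tw[selected])   # strongly negative among funded proposals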
6M1. Modify the DAG on page 186 to include the variable V, an unobserved cause of C and Y: C ← V → Y. Reanalyze the DAG. Draw the DAG. How many paths connect X to Y? Which must be closed? Which variables should you condition on now?
# In addition to the direct path X -> Y, there are now four backdoor paths,
# for five paths from X to Y in total:
# (1) X <- U <- A -> C -> Y: open, because A is a fork and the path contains
#     no collider. It must be closed; since U is unobserved, we condition on A.
# (2) X <- U <- A -> C <- V -> Y: C is a collider on this path, so it stays
#     closed as long as we do not condition on C.
# (3) X <- U -> B <- C -> Y: B is a collider, so this path is closed unless
#     we condition on B.
# (4) X <- U -> B <- C <- V -> Y: closed, with colliders at both B and C.
# In the original DAG we could condition on either A or C. With V added,
# conditioning on C would open path (2), so we should condition on A only.
# The dagitty sketch below confirms this.
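# A sketch of the modified DAG in dagitty; adjustmentSets() reports which
# observed variables must be conditioned on to estimate the effect of X on Y
# (the object name dag_6M1 is mine).
library(dagitty)
dag_6M1 <- dagitty("dag {
  U [unobserved]
  V [unobserved]
  A -> U
  A -> C
  U -> X
  U -> B
  C -> B
  C -> Y
  V -> C
  V -> Y
  X -> Y
}")
plot(graphLayout(dag_6M1))
adjustmentSets(dag_6M1, exposure = "X", outcome = "Y")
# expected output: { A }, i.e., condition on A only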
6M2. Sometimes, in order to avoid multicollinearity, people inspect pairwise correlations among predictors before including them in a model. This is a bad procedure, because what matters is the conditional association, not the association before the variables are included in the model. To highlight this, consider the DAG X → Z → Y. Simulate data from this DAG so that the correlation between X and Z is very large. Then include both in a model predicting Y. Do you observe any multicollinearity? Why or why not? What is different from the legs example in the chapter?
# There is no multicollinearity problem here. Even though X and Z are highly
# correlated (about 0.69 in this simulation), the posterior for bz lands
# near its simulated value of 0.7, and the posterior for bx is concentrated
# tightly around zero (roughly -0.03 to 0.11) rather than being inflated.
# In this pipe DAG, Z mediates the entire effect of X on Y, so once we
# condition on Z, X carries no additional information about Y; a bx near
# zero is the correct conditional answer. In the legs example, by contrast,
# the two predictors held nearly identical information about the outcome,
# so the model could not separate their effects and both posteriors became
# very wide.
# Load rethinking and its supporting packages.
library(processx)
library(rstan)
library(coda)
library(mvtnorm)
library(devtools)
library(loo)
library(dagitty)
library(rethinking)
# Simulate the pipe X -> Z -> Y with a strong X -> Z association.
n <- 1000
b_xz <- 0.9     # slope of X on Z
b_zy <- 0.7     # slope of Z on Y
set.seed(100)
x <- rnorm(n)
z <- rnorm(n, x * b_xz)   # shrink the residual sd here to push cor(x, z) higher still
y <- rnorm(n, z * b_zy)
d <- data.frame(x, y, z)
cor(d)
## x y z
## x 1.0000000 0.4562717 0.6924074
## y 0.4562717 1.0000000 0.6351279
## z 0.6924074 0.6351279 1.0000000
pairs(d)
model1 <- quap(
  alist(
    y ~ dnorm(mu, sigma),
    mu <- a + bx * x + bz * z,
    a ~ dnorm(0, 100),
    bx ~ dnorm(0, 100),
    bz ~ dnorm(0, 100),
    sigma ~ dexp(1)
  ), data = d)
precis(model1)
## mean sd 5.5% 94.5%
## a -0.008135237 0.03332760 -0.06139918 0.04512871
## bx 0.041459808 0.04483542 -0.03019585 0.11311547
## bz 0.619099380 0.03398903 0.56477834 0.67342042
## sigma 1.053730409 0.02383294 1.01564077 1.09182005
# As noted above, there is no multicollinearity: bx is estimated precisely
# near zero and bz near its simulated value. The difference from the legs
# example is the causal structure, not the priors: there, both leg lengths
# were nearly identical proxies for the same quantity, so neither
# coefficient could be pinned down individually. For contrast, the legs
# simulation is sketched below.
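# A sketch following the chapter's legs simulation, where the two predictors
# carry nearly identical information about the outcome.
set.seed(909)
N <- 100
height <- rnorm(N, 10, 2)
leg_prop <- runif(N, 0.4, 0.5)      # leg length as a proportion of height
leg_left <- leg_prop * height + rnorm(N, 0, 0.02)
leg_right <- leg_prop * height + rnorm(N, 0, 0.02)
d_legs <- data.frame(height, leg_left, leg_right)
model_legs <- quap(
  alist(
    height ~ dnorm(mu, sigma),
    mu <- a + bl * leg_left + br * leg_right,
    a ~ dnorm(10, 100),
    bl ~ dnorm(2, 10),
    br ~ dnorm(2, 10),
    sigma ~ dexp(1)
  ), data = d_legs)
precis(model_legs)
# bl and br should each show very wide intervals even though their sum is
# well determined, unlike bx and bz above, which are both estimated precisely.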