Week 6 - Good & Bad Controls

Multiple regression is no oracle, but only a golem. It is logical, but the relationships it describes are conditional associations, not causal influences. Therefore additional information, from outside the model, is needed to make sense of it. This chapter presented introductory examples of some common frustrations: multicollinearity, post-treatment bias, and collider bias. Solutions to these frustrations can be organized under a coherent framework in which hypothetical causal relations among variables are analyzed to cope with confounding. In all cases, causal models exist outside the statistical model and can be difficult to test. However, it is possible to reach valid causal inferences in the absence of experiments. This is good news, because we often cannot perform experiments, for both practical and ethical reasons.
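For instance, the collider problem can be seen in a few lines of simulation (a minimal sketch with made-up variables): X and Y are independent causes of Z, and conditioning on the collider Z manufactures an association between them.

set.seed(1)
N <- 1000
X <- rnorm(N)
Y <- rnorm(N)                 # X and Y are simulated independently
Z <- rnorm(N, X + Y)          # Z is a collider: X -> Z <- Y
coef(lm(Y ~ X))["X"]          # near zero: no association between X and Y
coef(lm(Y ~ X + Z))["X"]      # clearly negative: conditioning on Z opens the path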

Place each answer inside the code chunk (grey box). Each code chunk should contain a text response or code that completes or answers the question or activity requested. Make sure to include plots if the question requests them.

Finally, upon completion, name your final output .html file as: YourName_ANLY505-Year-Semester.html, publish the assignment to your RPubs account, and submit the link to Canvas. Each question is worth 5 points.

Questions

6-1. Use the Waffle House data, data(WaffleDivorce), to find the total causal influence of number of Waffle Houses on divorce rate. Justify your model or models with a causal graph (draw a DAG).

library(rethinking)
library(dagitty)

DAG <- dagitty("dag {
    A -> D
    A -> M -> D
    A <- S -> M
    S -> W -> D
  }")

coordinates(DAG) <- list(
  x = c(A = 1, S = 1, M = 2, W = 3, D = 3),
  y = c(A = 1, S = 3, M = 2, W = 3, D = 1)
)
drawdag(DAG)

adjustmentSets(DAG, exposure = "W", outcome = "D")
## { A, M }
## { S }
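Both sets close every backdoor path from W to D, all of which run through S (whether a state is in the South): W <- S -> A -> D, W <- S -> M -> D, and W <- S -> A -> M -> D. The model below conditions on the single variable S.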
data(WaffleDivorce)
data <- WaffleDivorce

model <- quap(
  alist(
    Divorce ~ dnorm(mu, sigma),
    mu <- a + bS * South + bW * WaffleHouses,
    a ~ dnorm(0, 0.2),
    c(bS, bW) ~ dnorm(0, 0.5),
    sigma ~ dexp(1)
  ),
  data = data
)
precis(model)
##             mean         sd        5.5%      94.5%
## a     0.24043329 0.20212206 -0.08259679 0.56346338
## bS    0.21513201 0.49546059 -0.57670970 1.00697373
## bW    0.06221587 0.01550205  0.03744059 0.08699115
## sigma 7.82361348 0.72301972  6.66808833 8.97913864
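With S in the model, the posterior mean for bW is about 0.06 with an 89% interval that excludes zero, hinting at a small positive total effect of the number of Waffle Houses on divorce rate on the raw scale. A caveat: the variables are left unstandardized here while the Normal(0, 0.2) and Normal(0, 0.5) priors were chosen for standardized data, so a reasonable robustness check is to re-fit with standardized variables. A sketch of that check follows, using rethinking's standardize(); output not shown.

# Sketch: same adjustment, but with the outcome and Waffle House counts standardized
# so the tight priors above sit on a sensible scale.
data$D <- standardize(data$Divorce)
data$W <- standardize(data$WaffleHouses)

model_std <- quap(
  alist(
    D ~ dnorm(mu, sigma),
    mu <- a + bS * South + bW * W,
    a ~ dnorm(0, 0.2),
    c(bS, bW) ~ dnorm(0, 0.5),
    sigma ~ dexp(1)
  ),
  data = data
)
precis(model_std)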

6-2. Build a series of models to test the implied conditional independencies of the causal graph you used in the previous problem. If any of the tests fail, how do you think the graph needs to be amended? Does the graph need more or fewer arrows? Feel free to nominate variables that aren’t in the data.

# Implied conditional independence:
impliedConditionalIndependencies(DAG)
## A _||_ W | S
## D _||_ S | A, M, W
## M _||_ W | S
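Each implied independence is checked with a regression: regress the first variable on the second while also conditioning on the separating set, and see whether the coefficient on the second variable is consistent with zero.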
# Check A _||_ W | S
model_1 <- quap(
  alist(
    MedianAgeMarriage ~ dnorm(mu, sigma),
    mu <- a + bS * South + bW * WaffleHouses,
    a ~ dnorm(0, 0.2),
    c(bS, bW) ~ dnorm(0, 0.5),
    sigma ~ dexp(1)
  ),
  data = data
)

precis(model_1)
##              mean         sd        5.5%      94.5%
## a      0.10712659 0.20030749 -0.21300346  0.4272566
## bS     0.08254234 0.49920381 -0.71528176  0.8803664
## bW     0.15553090 0.03841524  0.09413592  0.2169259
## sigma 19.75044536 1.57124094 17.23929886 22.2615919
# Check D _||_ S | A, M, W
model_2 <- quap(
  alist(
    Divorce ~ dnorm(mu, sigma),
    mu <- a + bA * MedianAgeMarriage + bS * South + bM * Marriage + bW * WaffleHouses,
    a ~ dnorm(0, 0.2),
    c(bA, bS, bM, bW) ~ dnorm(0, 0.5),
    sigma ~ dexp(1)
  ),
  data = data
)

precis(model_2)
##              mean         sd         5.5%      94.5%
## a     0.019665147 0.19996091 -0.299911004 0.33924130
## bA    0.168846042 0.04072966  0.103752186 0.23393990
## bS    0.406127638 0.41291918 -0.253796961 1.06605224
## bM    0.246454452 0.05151472  0.164123983 0.32878492
## bW    0.005314721 0.00405780 -0.001170428 0.01179987
## sigma 1.639864613 0.16147717  1.381792903 1.89793632
# Check M _||_ W | S
model_3 <- quap(
  alist(
    Marriage ~ dnorm(mu, sigma),
    mu <- a + bS * South + bW * WaffleHouses,
    a ~ dnorm(0, 0.2),
    c(bS, bW) ~ dnorm(0, 0.5),
    sigma ~ dexp(1)
  ),
  data = data
)

precis(model_3)
##             mean         sd        5.5%      94.5%
## a      0.1267978 0.20045205 -0.19356326  0.4471589
## bS     0.1059572 0.49881614 -0.69124734  0.9031617
## bW     0.1224477 0.03098108  0.07293393  0.1719614
## sigma 15.8854772 1.31519762 13.78353739 17.9874170
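# Taken at face value, the second independence (D _||_ S | A, M, W) holds: the
# interval for bS in model_2 comfortably includes zero. The other two appear to
# fail, since the intervals for bW in model_1 and model_3 exclude zero, which
# would suggest the graph needs more arrows: some link between Waffle Houses and
# age at marriage / marriage rate that S alone does not capture (perhaps an
# omitted variable such as state population). One caveat: these models use
# unstandardized variables with priors intended for standardized data, so the
# failed checks are worth re-running on standardized variables before amending
# the DAG.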

6-3. Consider your own research question. Draw a DAG to represent it. What are the testable implications of your DAG? Are there any variables you could condition on to close all backdoor paths? Are there unobserved variables that you have omitted? Would a reasonable colleague imagine additional threats to causal inference that you have ignored?

# We want to know how coffee consumption affects the price of coffee beans. There is a direct relationship: the higher the consumption of coffee, the higher the price of coffee beans. Over a longer horizon, however, coffee producers will notice the increased demand and can earn more profit by growing more beans, so there is also an indirect path: rising consumption leads producers to adjust production and sell more coffee beans, and the increased supply in turn affects the price.

# The DAG describing the relationships above:

DAG_2 <- dagitty("dag {
    Consumption -> Price
    Consumption -> Production -> Price
  }")

drawdag(DAG_2)

adjustmentSets(DAG_2, exposure = "Consumption", outcome = "Price")
##  {}
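Because this three-variable DAG is fully connected, it implies no conditional independencies, so there is nothing to test among these variables alone; and since the adjustment set is empty, no conditioning is needed to estimate the total effect of consumption on price.

# Confirms there are no testable implications: a fully connected DAG prints nothing here.
impliedConditionalIndependencies(DAG_2)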

6-4. For the DAG you made in the previous problem, can you write a data generating simulation for it? Can you design one or more statistical models to produce causal estimates? If so, try to calculate interesting counterfactuals. If not, use the simulation to estimate the size of the bias you might expect. Under what conditions would you, for example, infer the opposite of a true causal effect?

# Sample simulation:
N <- 100000
set.seed(123)
consumption <- rnorm(N, 10, 3)                                 # exogenous coffee consumption
production  <- consumption * rnorm(N, 0, 2) + rnorm(N, 10, 3)  # production responds to consumption
price       <- consumption - production + rnorm(N, 0, 2)       # price depends on both
data_2 <- data.frame(consumption, production, price)

# Conditioning on production estimates the DIRECT effect of consumption on price:
# production is a mediator on the indirect path, not a backdoor, so the empty
# adjustment set above means the total effect would need no adjustment at all.
model_4 <- quap(
  alist(
    price ~ dnorm(mu, sigma),
    mu <- a + bD * consumption + bS * production,
    a ~ dnorm(0, 1),
    c(bD, bS) ~ dnorm(0, 2),
    sigma ~ dexp(1)
  ),
  data = data_2
)

precis(model_4)
##                mean           sd        5.5%       94.5%
## a      0.0001343554 0.0222506146 -0.03542642  0.03569514
## bD     1.0010094439 0.0021106558  0.99763621  1.00438268
## bS    -1.0009017339 0.0002992973 -1.00138007 -1.00042340
## sigma  2.0022455987 0.0044770864  1.99509035  2.00940085
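Because the adjustment set for the total effect is empty, the total effect of consumption on price can be estimated without conditioning on production; comparing it to the mediator-adjusted estimate above shows how much of the effect is transmitted through production in this simulation. A quick sketch (plain lm() is used here for brevity; output not shown):

total_fit  <- lm(price ~ consumption, data = data_2)               # total effect: no adjustment needed
direct_fit <- lm(price ~ consumption + production, data = data_2)  # direct effect: production held fixed
coef(total_fit)["consumption"]
coef(direct_fit)["consumption"]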
# If production capacity is overbuilt, we could infer the opposite of the true causal effect. A very common phenomenon in supply chains is the bullwhip effect: a small change in consumption can trigger a large change in production across the supply chain. In that case the direct impact of consumption on price is limited, while the swing in production puts enormous pressure on price.