Chapter 14 - Adventures in Covariance

This chapter extended the basic multilevel strategy of partial pooling to slopes as well as intercepts. Accomplishing this meant modeling covariation in the statistical population of parameters. The LKJcorr prior was introduced as a convenient family of priors for correlation matrices. You saw how covariance models can be applied to causal inference, using instrumental variables and the front-door criterion. Gaussian processes represent a practical method of extending the varying effects strategy to continuous dimensions of similarity, such as spatial, network, phylogenetic, or any other abstract distance between entities in the data.

Place each answer inside a code chunk (grey box). Each code chunk should contain a text response or code that completes or answers the question or activity requested. Make sure to include plots if the question requests them.

Finally, upon completion, name your final output .html file YourName_ANLY505-Year-Semester.html, publish the assignment to your RPubs account, and submit the link to Canvas. Each question is worth 5 points.

Questions

14-1. Repeat the café robot simulation from the beginning of the chapter. This time, set rho to zero, so that there is no correlation between intercepts and slopes. Plot the posterior distribution. How does the posterior distribution of the correlation reflect this change in the underlying simulation?

## Repeating the café robot simulation from the chapter, with *rho* set to zero.


## Setting up the simulation parameters:


# Parameter "a" referencing for average cafe's morning wait time.

a <- 3.5 

# Parameter "b" referencing for average afternoon cafe wait time.

b <- (-1) 

# Parameter "sigma_a" referencing for standard deviation in intercepts.

sigma_a <- 1 # std dev in intercepts

# Parameter "sigma_b" referencing for standard deviation for slopes.

sigma_b <- 0.5 # std dev in slopes


# Parameter "rho" referencing for correlation between slopes and intercepts.

rho <- 0 # correlation between intercepts and slopes

# Paremeter "Mu".

Mu <- c(a, b)

# Covariance between intercepts and slopes, cov_ab:

cov_ab <- sigma_a * sigma_b * rho

## Building the covariance matrix Sigma:

Sigma <- matrix(c(sigma_a^2, cov_ab, cov_ab, sigma_b^2), ncol = 2)
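
## As a sanity check, the chapter builds the same covariance matrix by matrix
## multiplication, Sigma = diag(sigmas) %*% Rho %*% diag(sigmas). A minimal
## sketch of that equivalent construction (the names Rho_mat and Sigma_alt are
## illustrative, not from the chapter):

sigmas <- c(sigma_a, sigma_b)               # vector of standard deviations
Rho_mat <- matrix(c(1, rho, rho, 1), 2, 2)  # 2x2 correlation matrix (rho = 0 here)
Sigma_alt <- diag(sigmas) %*% Rho_mat %*% diag(sigmas)
all.equal(Sigma, Sigma_alt)                 # should be TRUE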


# Simulating observations:

N_cafes <- 20

set.seed(123)

library(MASS) # mvrnorm() comes from the MASS package

vary_effects <- mvrnorm(N_cafes, Mu, Sigma)

a_cafe <- vary_effects[, 1]

b_cafe <- vary_effects[, 2]

N_visits <- 10

afternoon <- rep(0:1, N_visits * N_cafes / 2)

cafe_id <- rep(1:N_cafes, each = N_visits)

mu <- a_cafe[cafe_id] + b_cafe[cafe_id] * afternoon

## Standard deviation within cafés:

sigma <- 0.5 # std dev within cafes

wait <- rnorm(N_visits * N_cafes, mu, sigma)


# Assembling the data frame:

dat <- data.frame(cafe = cafe_id, afternoon = afternoon, wait = wait)


## Fitting the chapter's model (m14.1) to the data simulated with rho = 0:

Rvsd_Cafe_Model <- ulam(
  alist(
    wait ~ normal(mu, sigma),
    mu <- a_cafe[cafe] + b_cafe[cafe] * afternoon,
    c(a_cafe, b_cafe)[cafe] ~ multi_normal(c(a, b), Rho, sigma_cafe),
    a ~ normal(5, 2),
    b ~ normal(-1, 0.5),
    sigma_cafe ~ exponential(1),
    sigma ~ exponential(1),
    Rho ~ lkj_corr(2)
  ),
  data = dat, chains = 6, cores = 6
)
## Running MCMC with 6 parallel chains, with 1 thread(s) per chain...
## All 6 chains finished successfully.
## Mean chain execution time: 1.2 seconds.
## Total execution time: 1.6 seconds.

## Plotting the posterior distribution of the correlation for the revised café model "Rvsd_Cafe_Model":

Post <- extract.samples(Rvsd_Cafe_Model)

## Posterior Distribution Plot:

library(ggdist) # stat_halfeye() comes from the ggdist package (ggplot2 assumed loaded)

ggplot() +
  stat_halfeye(aes(x = Post$Rho[, 1, 2])) +
  theme_bw() +
  labs(x = "Rho")

## Posterior vs. prior distribution of the correlation between intercepts and slopes, as in the chapter's example:

# Posterior distribution: solid line.

dens( Post$Rho[,1,2] , xlim=c(-1,1) ) 

# Prior distribution: dotted line.

R <- rlkjcorr( 1e4 , K=2 , eta=2 )   

dens( R[,1,2] , add=TRUE , lty=2 )

## Precis summary for the model:

precis(Rvsd_Cafe_Model, pars = "Rho[1,2]")
##            result
## mean    0.1779598
## sd      0.2979891
## 5.5%   -0.3019453
## 94.5%   0.6482488
## n_eff 219.8107436
## Rhat    1.0322603
## Findings: posterior distribution of the correlation for the revised café model:

# The posterior distribution of the correlation is centered near zero, with mass on both sides and only a slight lean toward positive values (posterior mean ≈ 0.18). Compared with the roughly symmetric dotted prior, the posterior is only modestly more concentrated, suggesting the data carry limited information about the correlation. The precis output agrees: the 89% interval runs from about -0.30 to 0.65 and comfortably includes zero, consistent with the underlying simulated correlation of zero.
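
## A quick numeric check of the claim above, using the samples already in
## "Post": the share of posterior mass on either side of zero (a minimal sketch).

mean(Post$Rho[, 1, 2] > 0)  # proportion of samples with positive correlation
mean(Post$Rho[, 1, 2] < 0)  # proportion with negative correlation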

14-2. Fit this multilevel model to the simulated café data: \[\begin{align} W_i &\sim \text{Normal}(\mu_i, \sigma) \\ \mu_i &= \alpha_{\text{CAFE}[i]} + \beta_{\text{CAFE}[i]} A_i \\ \alpha_{\text{CAFE}} &\sim \text{Normal}(\alpha, \sigma_\alpha) \\ \beta_{\text{CAFE}} &\sim \text{Normal}(\beta, \sigma_\beta) \\ \alpha &\sim \text{Normal}(0, 10) \\ \beta &\sim \text{Normal}(0, 10) \\ \sigma, \sigma_\alpha, \sigma_\beta &\sim \text{Exponential}(1) \end{align}\] Use WAIC to compare this model to the model from the chapter, the one that uses a multivariate Gaussian prior. Visualize the differences between the posterior means. Explain the result.

## Setting up the chapter's baseline café model and its parameters, for later comparison with the new multilevel model:


## Parameter setup for the chapter's café model (m14.1):


# Parameter "a" referencing for average cafe's morning wait time.

a <- 3.5 

# Parameter "b" referencing for average afternoon cafe wait time.

b <- (-1) 

# Parameter "sigma_a" referencing for standard deviation in intercepts.

sigma_a <- 1 # std dev in intercepts

# Parameter "sigma_b" referencing for standard deviation for slopes.

sigma_b <- 0.5 # std dev in slopes


# Parameter "rho" referencing for correlation between slopes and intercepts.

rho <- -0.7 # correlation between intercepts and slopes

# Paremeter "Mu".

Mu <- c(a, b)

# Covariance between intercepts and slopes, cov_ab:

cov_ab <- sigma_a * sigma_b * rho

## Building the covariance matrix Sigma:

Sigma <- matrix(c(sigma_a^2, cov_ab, cov_ab, sigma_b^2), ncol = 2)


# Simulating observations:

N_cafes <- 20

set.seed(123)

vary_effects <- mvrnorm(N_cafes, Mu, Sigma)

a_cafe <- vary_effects[, 1]

b_cafe <- vary_effects[, 2]

N_visits <- 10

afternoon <- rep(0:1, N_visits * N_cafes / 2)

cafe_id <- rep(1:N_cafes, each = N_visits)

mu <- a_cafe[cafe_id] + b_cafe[cafe_id] * afternoon

## Standard deviation within cafés:

sigma <- 0.5 # std dev within cafes

wait <- rnorm(N_visits * N_cafes, mu, sigma)

# Assembling the data frame:

dat <- data.frame(cafe = cafe_id, afternoon = afternoon, wait = wait)



## Fitting the chapter's baseline model (m14.1) with the parameters defined above:

Baseline_Cafe_Model <- ulam(
  alist(
    wait ~ normal(mu, sigma),
    mu <- a_cafe[cafe] + b_cafe[cafe] * afternoon,
    c(a_cafe, b_cafe)[cafe] ~ multi_normal(c(a, b), Rho, sigma_cafe),
    a ~ normal(5, 2),
    b ~ normal(-1, 0.5),
    sigma_cafe ~ exponential(1),
    sigma ~ exponential(1),
    Rho ~ lkj_corr(2)
  ),
  data = dat, chains = 4, cores = 4, log_lik = TRUE
)
## Running MCMC with 4 parallel chains, with 1 thread(s) per chain...
## All 4 chains finished successfully.
## Mean chain execution time: 1.3 seconds.
## Total execution time: 1.4 seconds.

## Fitting the revised multilevel model specified in 14-2, which replaces the correlation structure with independent priors for the intercepts and slopes:

Revised_Multilevel_Cafe_Model <- ulam(
  alist(
    wait ~ dnorm(mu, sigma),
    mu <- a_cafe[cafe] + b_cafe[cafe] * afternoon,
    a_cafe[cafe] ~ dnorm(a, sigma_alpha),
    b_cafe[cafe] ~ dnorm(b, sigma_beta),
    a ~ dnorm(0, 10),
    b ~ dnorm(0, 10),
    sigma ~ exponential(1),
    sigma_alpha ~ exponential(1),
    sigma_beta ~ exponential(1)
  ),
  data = dat, chains = 4, cores = 4, log_lik = TRUE
)
## Running MCMC with 4 parallel chains, with 1 thread(s) per chain...
## All 4 chains finished successfully.
## Mean chain execution time: 0.6 seconds.
## Total execution time: 0.9 seconds.

## Determination of the "WAIC" Table for the comparison of all the above derived models utlizing "WAIC" function.

Comp_Waic_Cafe_mdls <-
  compare(Baseline_Cafe_Model, Revised_Multilevel_Cafe_Model, func = WAIC) %>% 
  as_tibble(rownames = "Model") %>% 
  knitr::kable(digits = 2)

Comp_Waic_Cafe_mdls
| Model                         |   WAIC |    SE | dWAIC |  dSE | pWAIC | weight |
|-------------------------------|-------:|------:|------:|-----:|------:|-------:|
| Baseline_Cafe_Model           | 314.10 | 20.90 |  0.00 |   NA | 27.36 |   0.96 |
| Revised_Multilevel_Cafe_Model | 320.45 | 21.05 |  6.35 | 3.75 | 27.14 |   0.04 |
## Extracting posterior samples and computing posterior means of the intercepts and slopes for "Baseline_Cafe_Model" (the chapter's m14.1) and "Revised_Multilevel_Cafe_Model", to visualize the differences:


Post_Baseline_Cafe_Model <- extract.samples(Baseline_Cafe_Model)
a_Baseline <- apply(Post_Baseline_Cafe_Model$a_cafe, 2, mean)
b_Baseline <- apply(Post_Baseline_Cafe_Model$b_cafe, 2, mean)


Post_Revised_Multilevel_Cafe_Model <- extract.samples(Revised_Multilevel_Cafe_Model)
a_Rvsd_Mdl <- apply(Post_Revised_Multilevel_Cafe_Model$a_cafe, 2, mean)
b_Rvsd_Mdl <- apply(Post_Revised_Multilevel_Cafe_Model$b_cafe, 2, mean)
plot(a_Rvsd_Mdl, b_Rvsd_Mdl,
  xlab = "intercept", ylab = "slope",
  pch = 16, col = "red", ylim = c(min(b_Rvsd_Mdl) - 0.05, max(b_Rvsd_Mdl) + 0.05),
  xlim = c(min(a_Rvsd_Mdl) - 0.1, max(a_Rvsd_Mdl) + 0.1), cex = 2
)
points(a_Baseline, b_Baseline, pch = 1, cex = 2)

## Baseline & Revised Cafe Model Posterior means plot and WAIC model comparison result Findings:

# The WAIC table shows that the baseline model ("Baseline_Cafe_Model", the chapter's m14.1) fits slightly better: it has the lower WAIC and most of the Akaike weight (0.96). The difference is modest, though (dWAIC = 6.35 with dSE = 3.75), so the two models are not sharply distinguished. In the plot, the red points are the posterior means from the revised independent-priors model and the open circles those from the baseline model. Near the center of the intercept range (roughly 2.5 to 4.5), where most cafés sit, the two sets of estimates are nearly identical; they diverge mainly at the extremes. This is expected: the baseline model pools information through its assumed negative correlation between intercepts and slopes, shrinking cafés with large intercepts toward more strongly negative slopes (visible in the upper-left and lower-right corners of the plot) and cafés with small intercepts toward less strongly negative slopes, while the independent-priors model cannot share information across intercepts and slopes in this way.
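
## To quantify the differences visualized above: a minimal sketch of the
## average absolute gap between the two models' posterior means, per café.

mean(abs(a_Baseline - a_Rvsd_Mdl))  # typical intercept difference
mean(abs(b_Baseline - b_Rvsd_Mdl))  # typical slope difference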

14-3. Use WAIC to compare the Gaussian process model of Oceanic tools to the models fit to the same data in Chapter 11. Pay special attention to the effective numbers of parameters, as estimated by WAIC. Explain the result.

## Loading the "Kline" Oceanic tools data used in Chapter 11:

data(Kline)

dat <- Kline

str(dat)
## 'data.frame':    10 obs. of  5 variables:
##  $ culture    : Factor w/ 10 levels "Chuuk","Hawaii",..: 4 7 6 10 3 9 1 5 8 2
##  $ population : int  1100 1500 3600 4791 7400 8000 9200 13000 17500 275000
##  $ contact    : Factor w/ 2 levels "high","low": 2 2 2 1 1 1 1 2 1 2
##  $ total_tools: int  13 22 24 43 33 19 40 28 55 71
##  $ mean_TU    : num  3.2 4.7 4 5 5 4 3.8 6.6 5.4 6.6
# Data preparation:

dat$P <- scale(log(dat$population))

dat$contact_id <- ifelse(dat$contact == "high", 2, 1)


dat$society <- 1:10


data(islandsDistMatrix)


Data <- list(T = dat$total_tools, P = dat$population, cid = dat$contact_id)


## Refitting the Chapter 11 Oceanic tools model (m11.11):


Model_11.11 <- ulam(
  alist(
    T ~ dpois(lambda),
    lambda <- exp(a[cid]) * P^b[cid] / g,
    a[cid] ~ dnorm(1, 1),
    b[cid] ~ dexp(1),
    g ~ dexp(1)
  ),
  data = Data, chains = 4, cores = 4, log_lik = TRUE
)
## Running MCMC with 4 parallel chains, with 1 thread(s) per chain...
## All 4 chains finished successfully.
## Mean chain execution time: 0.5 seconds.
## Total execution time: 0.6 seconds.

## Fitting the Chapter 14 Gaussian process model (m14.8) to the same Oceanic tools data:


## Building the data list, including the island distance matrix:

Dat_list <- list(T = dat$total_tools, P = dat$population, society = dat$society, Dtamtrix = islandsDistMatrix)


# Fitting the Gaussian process model:

Rvsd_Model_14.8 <- ulam(
  alist(
    T ~ dpois(lambda),
    lambda <- (a * P^b / g) * exp(k[society]),
    vector[10]:k ~ multi_normal(0, SIGMA),
    matrix[10, 10]:SIGMA <- cov_GPL2(Dtamtrix, etasq, rhosq, 0.01),
    c(a, b, g) ~ dexp(1),
    etasq ~ dexp(2),
    rhosq ~ dexp(0.5)
  ),
  data = Dat_list, chains = 4, cores = 4, iter = 2000, log_lik = TRUE
)
## Running MCMC with 4 parallel chains, with 1 thread(s) per chain...
## All 4 chains finished successfully.
## Mean chain execution time: 3.8 seconds.
## Total execution time: 4.0 seconds.

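## For reference, the cov_GPL2 kernel used above builds the covariance from
## squared distances: K[i,j] = etasq * exp(-rhosq * D[i,j]^2), with 0.01 added
## to the diagonal for numerical stability. A minimal sketch of that kernel
## (gp_cov is a hypothetical helper; the etasq and rhosq values are
## illustrative, not posterior estimates):

gp_cov <- function(D, etasq, rhosq, delta = 0.01) {
  K <- etasq * exp(-rhosq * D^2)  # squared-distance (L2) decay
  diag(K) <- etasq + delta        # D[i,i] = 0, plus the stability term
  K
}
round(gp_cov(islandsDistMatrix, etasq = 0.2, rhosq = 0.5), 3)
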
## Determination of the "WAIC" Table for the comparison of all the above derived process models utlizing "WAIC" function.

Comp_Waic_Process_Mdls <-
  compare(Rvsd_Model_14.8, Model_11.11, func = WAIC) %>% 
  as_tibble(rownames = "Model") %>% 
  knitr::kable(digits = 2)

Comp_Waic_Process_Mdls
| Model           |  WAIC |    SE | dWAIC |  dSE | pWAIC | weight |
|-----------------|------:|------:|------:|-----:|------:|-------:|
| Rvsd_Model_14.8 | 67.83 |  2.31 |   0.0 |   NA |  4.18 |      1 |
| Model_11.11     | 80.23 | 11.29 |  12.4 | 11.4 |  4.92 |      0 |
## Findings: WAIC comparison of the two Oceanic tools models:
# The Gaussian process model ("Rvsd_Model_14.8") has the lower WAIC and essentially all of the Akaike weight, so it is the better-fitting model. The more interesting result is the effective number of parameters: pWAIC is lower for the Gaussian process model (about 4.2) than for the Chapter 11 model (about 4.9), even though the GP model contains many more actual parameters (one varying effect per society, plus the kernel parameters etasq and rhosq, on top of a, b, and g). This is a consequence of the adaptive regularization in the Gaussian process prior: the spatial covariance pools information across nearby societies and shrinks the varying effects strongly, so the model is effectively less flexible than its raw parameter count suggests.
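
## To see where the pWAIC penalty comes from, WAIC can also be inspected
## pointwise (one row per society); a minimal sketch:

WAIC(Rvsd_Model_14.8, pointwise = TRUE)
WAIC(Model_11.11, pointwise = TRUE)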

14-4. Let’s revisit the Bangladesh fertility data, data(bangladesh), from the practice problems for Chapter 13. Fit a model with both varying intercepts by district_id and varying slopes of urban by district_id. You are still predicting use.contraception. Inspect the correlation between the intercepts and slopes. Can you interpret this correlation, in terms of what it tells you about the pattern of contraceptive use in the sample? Plot the mean (or median) varying effect estimates for both the intercepts and slopes, by district. Then you can visualize the correlation and maybe more easily think through what it means to have a particular correlation. Plot the predicted proportion of women using contraception, with urban women on one axis and rural on the other. Explain the results.

set.seed(123)

## Loading the Bangladesh fertility data (from the Chapter 13 practice problems):

data(bangladesh)

dat <- bangladesh

str(dat)
## 'data.frame':    1934 obs. of  6 variables:
##  $ woman            : int  1 2 3 4 5 6 7 8 9 10 ...
##  $ district         : int  1 1 1 1 1 1 1 1 1 1 ...
##  $ use.contraception: int  0 0 0 0 0 0 0 0 0 0 ...
##  $ living.children  : int  4 1 3 4 1 1 4 4 2 4 ...
##  $ age.centered     : num  18.44 -5.56 1.44 8.44 -13.56 ...
##  $ urban            : int  1 1 1 1 1 1 1 1 1 1 ...
dat_list <- list(
  Cntrcptn = dat$use.contraception,
  did = as.integer(as.factor(dat$district)),
  urban = dat$urban
)

## Fitting a model with varying intercepts by district and varying slopes of urban by district, for the Bangladesh data:

Model_Frtlty <- ulam(
  alist(
    Cntrcptn ~ bernoulli(p),
    logit(p) <- a[did] + b[did] * urban,
    c(a, b)[did] ~ multi_normal(c(abar, bbar), Rho, Sigma),
    abar ~ normal(0, 1),
    bbar ~ normal(0, 0.5),
    Rho ~ lkj_corr(2),
    Sigma ~ exponential(1)
  ),
  data = dat_list, chains = 4, cores = 4, iter = 4000
)
## Running MCMC with 4 parallel chains, with 1 thread(s) per chain...
## All 4 chains finished successfully.
## Mean chain execution time: 44.0 seconds.
## Total execution time: 47.7 seconds.

## Model precis summary:

precis(Model_Frtlty)
##            mean         sd       5.5%      94.5%    n_eff    Rhat4
## abar -0.6840240 0.09937373 -0.8414603 -0.5276194 5599.716 1.000181
## bbar  0.6385474 0.16130081  0.3806824  0.8980084 3769.591 1.001031
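
## On the probability scale (a quick sketch): converting the posterior means of
## abar and bbar above with the inverse-logit gives the average district's
## predicted proportion of use.

inv_logit(-0.68)         # rural women in the average district: about 0.34
inv_logit(-0.68 + 0.64)  # urban women in the average district: about 0.49
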
## Precis of the posterior estimates "Rho" and "Sigma":

precis(Model_Frtlty, depth = 3, pars = c("Rho", "Sigma"))
##                mean         sd       5.5%      94.5%     n_eff    Rhat4
## Rho[1,1]  1.0000000 0.00000000  1.0000000  1.0000000       NaN      NaN
## Rho[1,2] -0.6481966 0.16896848 -0.8613399 -0.3476195 1512.2380 1.001886
## Rho[2,1] -0.6481966 0.16896848 -0.8613399 -0.3476195 1512.2380 1.001886
## Rho[2,2]  1.0000000 0.00000000  1.0000000  1.0000000       NaN      NaN
## Sigma[1]  0.5747337 0.09978873  0.4247389  0.7442693 2166.6985 1.000424
## Sigma[2]  0.7813049 0.19752602  0.4843640  1.1089269  899.9517 1.001565
## Plotting the varying-effect estimates to investigate the relationship between the intercepts and slopes:

Post <- extract.samples(Model_Frtlty)
a <- apply(Post$a, 2, mean)
b <- apply(Post$b, 2, mean)
plot(a, b, xlab = "a (intercept)", ylab = "b (urban slope)")
abline(h = 0, lty = 2)
abline(v = 0, lty = 2)
library(ellipse) # ellipse() below comes from the ellipse package
R <- apply(Post$Rho, 2:3, mean)
s <- apply(Post$Sigma, 2, mean)
S <- diag(s) %*% R %*% diag(s)
ll <- c(0.5, 0.67, 0.89, 0.97)
for (l in ll) {
  el <- ellipse(S, centre = c(mean(Post$abar), mean(Post$bbar)), level = l)
  lines(el, col = "blue", lwd = 0.5)
}

## Probability-scale plot, via the inverse-logit transformation of the estimates:

P_0 <- inv_logit(a)
P_1 <- inv_logit(a + b)
plot(P_0, P_1, xlim = c(0, 1), ylim = c(0, 1), xlab = "urban = 0", ylab = "urban = 1")
abline(h = 0.5, lty = 2)
abline(v = 0.5, lty = 2)

## Findings: posterior estimates, the varying-effects correlation, and the probability plot:

# The precis output shows a clearly negative correlation between the varying intercepts and the varying urban slopes (Rho[1,2] ≈ -0.65). A district's intercept is its (log-odds) rate of contraceptive use among rural women, and its slope is the change associated with urban residence. The negative correlation therefore means that districts with higher rural contraceptive use tend to have smaller urban slopes: where rural use is already high, living in an urban area adds little, while districts with low rural use show the largest urban increases. Because bbar is positive, urban areas have higher contraceptive use on average: on the probability scale, the average district's predicted use is about 0.34 for rural women and about 0.49 for urban women. The probability-scale plot shows the same pattern, with urban proportions clustered near 0.5 and rural proportions more variable and generally lower.
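
## A quick numeric follow-up to the probability-scale plot, using the
## district-level posterior means already computed (a minimal sketch):

mean(P_1 > P_0)  # share of districts with higher predicted urban use
mean(P_0)        # average predicted rural proportion across districts
mean(P_1)        # average predicted urban proportion across districts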

14-5. Now consider the predictor variables age.centered and living.children, also contained in data(bangladesh). Suppose that age influences contraceptive use (changing attitudes) and number of children (older people have had more time to have kids). Number of children may also directly influence contraceptive use. Draw a DAG that reflects these hypothetical relationships. Then build models needed to evaluate the DAG. You will need at least two models. Retain district and urban, as in 14-4. What do you conclude about the causal influence of age and children?

## Drawing the DAG reflecting the hypothesized relationships among age, number of living children, and contraceptive use:

library(dagitty)


Dag <- dagitty::dagitty(" dag {
                        Age_Centered -> Contraception
                        Living_Children -> Contraception
                        Age_Centered -> Living_Children
                        }")

drawdag(Dag)
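
## dagitty can confirm which adjustments each estimand needs: an empty set for
## the total effect of age, and {Living_Children} for its direct effect, which
## motivates the two models below (a sketch using the DAG defined above):

adjustmentSets(Dag, exposure = "Age_Centered", outcome = "Contraception", effect = "total")
adjustmentSets(Dag, exposure = "Age_Centered", outcome = "Contraception", effect = "direct")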

## Building the models that include the new *age* and *living children* predictors:


set.seed(1234)

## Loading the Bangladesh fertility data (from the Chapter 13 practice problems):

data(bangladesh)

dat <- bangladesh

## Adding the new predictors to the data list:

dat_list <- list(
  Cntrcptn = dat$use.contraception,
  did = as.integer(as.factor(dat$district)),
  urban = dat$urban,
  age = standardize(dat$age.centered),
  children = standardize(dat$living.children)
)


## Model 1: estimating the total effect of *age*, retaining the *urban* and *district* varying-effects structure from 14-4:

Model_1_Age <- ulam(
  alist(
    Cntrcptn ~ bernoulli(p),
    logit(p) <- a[did] + b[did] * urban + bA * age,
    c(a, b)[did] ~ multi_normal(c(abar, bbar), Rho, Sigma),
    abar ~ normal(0, 1),
    c(bbar, bA) ~ normal(0, 0.5),
    Rho ~ lkj_corr(2),
    Sigma ~ exponential(1)
  ),
  data = dat_list, chains = 2, cores = 4, log_lik = TRUE, iter = 2000
)
## Running MCMC with 2 chains, at most 4 in parallel, with 1 thread(s) per chain...
## Both chains finished successfully.
## Mean chain execution time: 21.6 seconds.
## Total execution time: 21.9 seconds.

## Determination of "Model_1_Age" model's precis:

precis(Model_1_Age)
##             mean         sd         5.5%      94.5%    n_eff     Rhat4
## abar -0.69013439 0.09945174 -0.850962385 -0.5312371 1299.117 1.0005364
## bA    0.08358689 0.04887309  0.005886026  0.1599228 3379.155 0.9992032
## bbar  0.64188194 0.16504341  0.376553570  0.8936463 1029.324 0.9999948
## Precis of the posterior estimates "Rho" and "Sigma" for Model 1:

precis(Model_1_Age, depth = 3, pars = c("Rho", "Sigma"))
##                mean         sd       5.5%      94.5%    n_eff     Rhat4
## Rho[1,1]  1.0000000 0.00000000  1.0000000  1.0000000      NaN       NaN
## Rho[1,2] -0.6546850 0.15908412 -0.8598429 -0.3803236 442.6730 0.9995798
## Rho[2,1] -0.6546850 0.15908412 -0.8598429 -0.3803236 442.6730 0.9995798
## Rho[2,2]  1.0000000 0.00000000  1.0000000  1.0000000      NaN       NaN
## Sigma[1]  0.5829248 0.09631096  0.4378659  0.7432145 552.3435 0.9996069
## Sigma[2]  0.7955594 0.19794895  0.4894764  1.1224184 279.8296 0.9994057
## Model 2: estimating the direct effects of *age* and *children*, again retaining the *urban* and *district* structure:


Model_2_Age_Children <- ulam(
  alist(
    Cntrcptn ~ bernoulli(p),
    logit(p) <- a[did] + b[did] * urban + bA * age + bC * children,
    c(a, b)[did] ~ multi_normal(c(abar, bbar), Rho, Sigma),
    abar ~ normal(0, 1),
    c(bbar, bA, bC) ~ normal(0, 0.5),
    Rho ~ lkj_corr(2),
    Sigma ~ exponential(1)
  ),
  data = dat_list, chains = 2, cores = 4, log_lik = TRUE, iter = 2000
)
## Running MCMC with 2 chains, at most 4 in parallel, with 1 thread(s) per chain...
## Both chains finished successfully.
## Mean chain execution time: 24.4 seconds.
## Total execution time: 24.7 seconds.

## Determination of "Model_2_Age_Children" model's precis:

precis(Model_2_Age_Children)
##            mean         sd       5.5%      94.5%    n_eff    Rhat4
## abar -0.7178324 0.10178245 -0.8816537 -0.5593799 1498.571 1.001117
## bC    0.5127975 0.07198108  0.3984615  0.6298332 2288.885 1.000595
## bA   -0.2643795 0.06942725 -0.3748423 -0.1528250 2349.998 1.000479
## bbar  0.6853259 0.16016314  0.4318945  0.9418312 1171.401 1.000037
## Precis of the posterior estimates "Rho" and "Sigma" for Model 2:

precis(Model_2_Age_Children, depth = 3, pars = c("Rho", "Sigma"))
##                mean         sd       5.5%      94.5%    n_eff     Rhat4
## Rho[1,1]  1.0000000 0.00000000  1.0000000  1.0000000      NaN       NaN
## Rho[1,2] -0.6319646 0.16156155 -0.8382028 -0.3351197 515.8645 0.9997978
## Rho[2,1] -0.6319646 0.16156155 -0.8382028 -0.3351197 515.8645 0.9997978
## Rho[2,2]  1.0000000 0.00000000  1.0000000  1.0000000      NaN       NaN
## Sigma[1]  0.6126084 0.09952783  0.4681074  0.7819966 609.9453 0.9996297
## Sigma[2]  0.7778562 0.19868474  0.4767733  1.0949399 267.3124 1.0074257
## WAIC comparison of "Model_1_Age" and "Model_2_Age_Children", via compare() with func = WAIC:

Comp_Waic_Age_Children <-
  compare(Model_1_Age, Model_2_Age_Children, func = WAIC) %>% 
  as_tibble(rownames = "Model") %>% 
  knitr::kable(digits = 2)

Comp_Waic_Age_Children
| Model                |    WAIC |    SE | dWAIC |   dSE | pWAIC | weight |
|----------------------|--------:|------:|------:|------:|------:|-------:|
| Model_2_Age_Children | 2412.73 | 30.42 |   0.0 |    NA | 55.09 |      1 |
| Model_1_Age          | 2466.42 | 28.09 |  53.7 | 14.21 | 52.91 |      0 |
## Findings: the causal influence of age and number of children:

# The precis outputs for both models again show a negative correlation between the district intercepts and urban slopes (Rho[1,2] ≈ -0.65 and -0.63), as in 14-4. For the causal question, the key contrast is in bA. In "Model_1_Age", which omits children and therefore estimates the total effect of age, bA is small and positive (≈ 0.08), so older women are slightly more likely to use contraception overall. In "Model_2_Age_Children", which conditions on the number of living children and estimates the direct effect of age, bA turns clearly negative (≈ -0.26) while bC is strongly positive (≈ 0.51). This pattern is consistent with the DAG: age increases contraceptive use mainly indirectly, through the number of children (older women have had more children, and women with more children are more likely to use contraception), while the direct effect of age, holding the number of children constant, is negative. The WAIC comparison clearly favors the model with both predictors (dWAIC = 53.7 with dSE = 14.21, and all of the Akaike weight), though WAIC speaks to predictive fit rather than settling the causal question.
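
## A minimal sketch contrasting the total effect of age ("Model_1_Age") with
## its direct effect ("Model_2_Age_Children") by overlaying the two bA
## posteriors:

Post_1 <- extract.samples(Model_1_Age)
Post_2 <- extract.samples(Model_2_Age_Children)
dens(Post_1$bA, xlim = c(-0.5, 0.3), xlab = "bA (effect of age)")  # solid: total effect
dens(Post_2$bA, add = TRUE, lty = 2)  # dashed: direct effect, children included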