Fitting a Linear Regression with JAGS and Stan

JAGS code

The analysis proceeds in several steps.
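The vectors y and x and the sample size n are created in an earlier simulation section. For completeness, here is a minimal sketch that re-creates comparable data, assuming the true values alpha = 40, beta = -0.5, sigma = 5 from the comparison tables below and n = 16; the seed and the x values are arbitrary choices for illustration:

# Hypothetical re-creation of the simulated data (an assumption, not the original script)
set.seed(42)                                     # arbitrary seed
n <- 16                                          # matches mu[1..16] in the Stan output below
x <- 1:n                                         # assumed covariate values
truth <- c(alpha = 40, beta = -0.5, sigma = 5)   # true values from the comparison tables
y <- rnorm(n, truth["alpha"] + truth["beta"] * x, truth["sigma"])
lm_fit <- lm(y ~ x)                              # frequentist fit used in the comparisons
lm_est <- c(coef(lm_fit), sigma = sigma(lm_fit)) # lm() estimates used below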

Step 1: Bundle data:

dataList <- list(y = y, x = x, n = n)
library(jagsUI)

Step 2: Write model file:

cat(file = "modeljags.txt", "
model {
# Priors
alpha ~ dnorm(0, 0.0001)
beta ~ dnorm(0, 0.0001)
sigma ~ dunif(0, 100) # May consider smaller range
tau <- pow(sigma, -2)
# Likelihood
for (i in 1:n) {
  y[i] ~ dnorm(mu[i], tau) # Response y distributed normally
  mu[i] <- alpha + beta * x[i] # Linear predictor
}
}
")

Step 3: Define initial values, choose the MCMC settings, and run:

# Function to generate starting values
# (lognormal init for sigma avoids values < 0)
inits <- function() {
  list(alpha = rnorm(1), beta = rnorm(1), sigma = rlnorm(1))
}
# Parameters to estimate
params <- c("alpha", "beta", "sigma")
# MCMC settings: na = adaptation, ni = total iterations, nb = burn-in, nc = chains, nt = thinning
na <- 1000 ; ni <- 6000 ; nb <- 1000 ; nc <- 4 ; nt <- 1
# Call JAGS (ART < 1 min), check convergence and summarize posteriors
modeljags <- jags(dataList, inits, params, "modeljags.txt", n.iter = ni,
  n.burnin = nb, n.chains = nc, n.thin = nt, n.adapt = na, parallel = TRUE)

Processing function input....... 

Done. 
 
Beginning parallel processing using 4 cores. Console output will be suppressed.

Parallel processing completed.

Calculating statistics....... 

Done. 
jagsUI::traceplot(modeljags) # not shown
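Besides the visual check with traceplots, convergence can be assessed numerically from the Rhat column of the summary matrix (a short sketch, assuming the column names shown in the print output below):

# Flag any parameters with Rhat above the usual 1.1 threshold
which(modeljags$summary[, "Rhat"] > 1.1)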

Obtain the estimates and compare them:

print(modeljags, 2)
JAGS output for model 'modeljags.txt', generated by jagsUI.
Estimates based on 4 chains of 6000 iterations,
adaptation = 1000 iterations (sufficient),
burn-in = 1000 iterations and thin rate = 1,
yielding 20000 total samples from the joint posterior. 
MCMC ran in parallel for 0.023 minutes at time 2025-04-30 05:56:14.09541.

          mean   sd  2.5%   50%  97.5% overlap0    f Rhat n.eff
alpha    40.72 2.70 35.46 40.74  46.10    FALSE 1.00    1 12792
beta     -0.69 0.28 -1.24 -0.69  -0.14    FALSE 0.99    1 20000
sigma     5.04 1.07  3.42  4.88   7.65    FALSE 1.00    1  9452
deviance 95.53 2.90 92.23 94.78 103.08    FALSE 1.00    1  7704

Successful convergence based on Rhat values (all < 1.1). 
Rhat is the potential scale reduction factor (at convergence, Rhat=1). 
For each parameter, n.eff is a crude measure of effective sample size. 

overlap0 checks if 0 falls in the parameter's 95% credible interval.
f is the proportion of the posterior with the same sign as the mean;
i.e., our confidence that the parameter is positive or negative.

DIC info: (pD = var(deviance)/2) 
pD = 4.2 and DIC = 99.75 
DIC is an estimate of expected predictive error (lower is better).
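The same information-criterion quantities can be read from the fitted object directly (a sketch, assuming jagsUI's standard fields):

modeljags$pD    # effective number of parameters, about 4.2 here
modeljags$DIC   # deviance information criterion, about 99.75 here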
# Save JAGS estimates
jags_est <- modeljags$summary[1:3, 1]
jags_est
     alpha       beta      sigma 
40.7188317 -0.6884598  5.0412043 
# Compare results of lm() and JAGS
comp <- cbind(truth = truth, lm = lm_est, JAGS = jags_est)
print(comp, 4)
      truth      lm    JAGS
alpha  40.0 40.7426 40.7188
beta   -0.5 -0.6907 -0.6885
sigma   5.0  4.5798  5.0412
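The point estimates agree closely, but the fitted object also holds the full posterior draws. A sketch of how to use them, assuming jagsUI's sims.list component:

# 95% credible interval for the slope from the raw posterior draws
quantile(modeljags$sims.list$beta, probs = c(0.025, 0.975))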

Another program: Stan

  1. Bundle the data:
dataList <- list(y = y, x = x, n = n)
  2. Write the model in the Stan language:
library(rstan)
Loading required package: StanHeaders

rstan version 2.32.6 (Stan version 2.32.2)
For execution on a local, multicore CPU with excess RAM we recommend calling
options(mc.cores = parallel::detectCores()).
To avoid recompilation of unchanged Stan programs, we recommend calling
rstan_options(auto_write = TRUE)
For within-chain threading using `reduce_sum()` or `map_rect()` Stan functions,
change `threads_per_chain` option:
rstan_options(threads_per_chain = 1)
Do not specify '-march=native' in 'LOCAL_CPPFLAGS' or a Makevars file

Attaching package: 'rstan'
The following object is masked from 'package:jagsUI':

    traceplot
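Following the recommendations in the start-up message, it is usually worth setting these two options once per session:

options(mc.cores = parallel::detectCores())  # run chains in parallel
rstan_options(auto_write = TRUE)             # cache compiled models on disk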
# Write text file with model description in Stan language
cat(file = "model.stan", # This line is R code
"data { // This is the first line of Stan code
  int <lower = 0> n; // Define the format of all data
  vector[n] y; // ... including the dimension of vectors
  vector[n] x; //
}
parameters { // Define format for all parameters
  real alpha;
  real beta;
  real <lower = 0> sigma; // sigma (sd) cannot be negative
}
transformed parameters {
  vector[n] mu;
  for (i in 1:n){
    mu[i] = alpha + beta * x[i]; // Calculate linear predictor
  }
}
model {
  // Priors
  alpha ~ normal(0, 100);
  beta ~ normal(0, 100);
  sigma ~ cauchy(0, 10);
  // Likelihood (could be vectorized to increase speed)
  for (i in 1:n){
    y[i] ~ normal(mu[i], sigma);
  }
}

" )

Specify the HMC settings:

# HMC settings: ni = iterations/chain, nb = warmup, nc = chains, nt = thinning
ni <- 3000 ; nb <- 1000 ; nc <- 4 ; nt <- 1
# Call Stan (ART 30/3 sec), assess convergence, save estimates
modelstan <- stan(file = "model.stan", data = dataList,
  chains = nc, iter = ni, warmup = nb, thin = nt)

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 2.1e-05 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.21 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 3000 [  0%]  (Warmup)
Chain 1: Iteration:  300 / 3000 [ 10%]  (Warmup)
Chain 1: Iteration:  600 / 3000 [ 20%]  (Warmup)
Chain 1: Iteration:  900 / 3000 [ 30%]  (Warmup)
Chain 1: Iteration: 1001 / 3000 [ 33%]  (Sampling)
Chain 1: Iteration: 1300 / 3000 [ 43%]  (Sampling)
Chain 1: Iteration: 1600 / 3000 [ 53%]  (Sampling)
Chain 1: Iteration: 1900 / 3000 [ 63%]  (Sampling)
Chain 1: Iteration: 2200 / 3000 [ 73%]  (Sampling)
Chain 1: Iteration: 2500 / 3000 [ 83%]  (Sampling)
Chain 1: Iteration: 2800 / 3000 [ 93%]  (Sampling)
Chain 1: Iteration: 3000 / 3000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 0.036 seconds (Warm-up)
Chain 1:                0.079 seconds (Sampling)
Chain 1:                0.115 seconds (Total)
Chain 1: 

[Sampling output for chains 2-4 is analogous and omitted; each chain finished in about 0.11-0.13 seconds.]
rstan::traceplot(modelstan) # not shown
'pars' not specified. Showing first 10 parameters by default.

print(modelstan, dig = 2)
Inference for Stan model: anon_model.
4 chains, each with iter=3000; warmup=1000; thin=1; 
post-warmup draws per chain=2000, total post-warmup draws=8000.

         mean se_mean   sd   2.5%    25%    50%    75%  97.5% n_eff Rhat
alpha   40.69    0.05 2.65  35.38  39.02  40.70  42.41  45.94  3064    1
beta    -0.69    0.00 0.27  -1.21  -0.86  -0.68  -0.51  -0.14  3030    1
sigma    4.94    0.02 1.00   3.40   4.23   4.79   5.51   7.26  3356    1
mu[1]   40.01    0.04 2.41  35.17  38.47  40.02  41.56  44.77  3130    1
mu[2]   39.32    0.04 2.18  34.94  37.91  39.33  40.72  43.61  3229    1
mu[3]   38.64    0.03 1.96  34.70  37.36  38.66  39.90  42.53  3382    1
mu[4]   37.95    0.03 1.76  34.45  36.82  37.97  39.07  41.45  3624    1
mu[5]   37.26    0.02 1.58  34.11  36.27  37.27  38.27  40.40  4024    1
mu[6]   36.58    0.02 1.43  33.72  35.68  36.58  37.48  39.43  4701    1
mu[7]   35.89    0.02 1.31  33.28  35.06  35.90  36.71  38.53  5835    1
mu[8]   35.21    0.01 1.25  32.72  34.40  35.21  35.98  37.71  7541    1
mu[9]   34.52    0.01 1.25  32.03  33.73  34.54  35.30  37.01  9250    1
mu[10]  33.84    0.01 1.30  31.23  33.00  33.85  34.67  36.43  9591    1
mu[11]  33.15    0.02 1.40  30.33  32.25  33.17  34.04  35.95  8204    1
mu[12]  32.46    0.02 1.55  29.37  31.47  32.48  33.46  35.56  6856    1
mu[13]  31.78    0.02 1.73  28.34  30.67  31.79  32.89  35.20  5881    1
mu[14]  31.09    0.03 1.93  27.31  29.85  31.09  32.33  34.88  5208    1
mu[15]  30.41    0.03 2.14  26.19  29.03  30.39  31.79  34.58  4746    1
mu[16]  29.72    0.04 2.37  25.05  28.20  29.74  31.25  34.30  4422    1
lp__   -31.68    0.03 1.30 -35.02 -32.29 -31.34 -30.72 -30.17  2661    1

Samples were drawn using NUTS(diag_e) at Wed Apr 30 05:57:27 2025.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).
# Save Stan estimates
stan_est <- summary(modelstan)$summary[1:3, 1]
# Compare results of lm(), JAGS, and Stan
comp <- cbind(truth = truth, lm = lm_est, JAGS = jags_est, Stan = stan_est)
print(comp, 4)
      truth      lm    JAGS    Stan
alpha  40.0 40.7426 40.7188 40.6921
beta   -0.5 -0.6907 -0.6885 -0.6857
sigma   5.0  4.5798  5.0412  4.9429
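Beyond the point estimates, the full posteriors from the two engines can be compared directly; a sketch using rstan::extract() and jagsUI's sims.list (both assumed to follow their packages' standard layouts):

# 95% credible intervals for the slope from both engines
quantile(modeljags$sims.list$beta, c(0.025, 0.975))
post <- rstan::extract(modelstan)
quantile(post$beta, c(0.025, 0.975))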