Linear regression in JAGS and Stan

JAGS code

The analysis proceeds in several steps.

Step 1: Bundle data:

dataList <- list(y = y, x = x, n = n)
library(jagsUI)

Step 2: Write model file:

cat(file = "modeljags.txt", "
model {
  # Priors
  alpha ~ dunif(-100, 100)
  beta ~ dunif(-100, 100)
  sigma ~ dunif(0, 100)  # May consider a smaller range
  tau <- pow(sigma, -2)
  # Likelihood
  for (i in 1:n) {
    y[i] ~ dnorm(mu[i], tau)     # Response y distributed normally
    mu[i] <- alpha + beta * x[i] # Linear predictor
  }
}
")

Step 3: Define initial values, set the MCMC settings, and run:

# Function to generate starting values
# Note lognormal init for sigma to avoid values < 0
inits <- function() {
  list(alpha = rnorm(1), beta = rnorm(1), sigma = rlnorm(1))
}
# Parameters to estimate
params <- c("alpha","beta", "sigma")
# MCMC settings 
na <- 1000 ; ni <- 6000 ; nb <- 1000 ; nc <- 4 ; nt <- 1
# Call JAGS (ART <1 min), check convergence and summarize posteriors
modeljags <- jags(dataList, inits, params, "modeljags.txt", n.iter = ni,
  n.burnin = nb, n.chains = nc, n.thin = nt, n.adapt = na, parallel = TRUE)

Processing function input....... 

Done. 
 
Beginning parallel processing using 4 cores. Console output will be suppressed.

Parallel processing completed.

Calculating statistics....... 

Done. 
jagsUI::traceplot(modeljags) # not shown
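By default every saved node is traceplotted; recent versions of jagsUI accept a parameters argument to restrict the plot (a sketch; check ?jagsUI::traceplot for your version):

jagsUI::traceplot(modeljags, parameters = c("alpha", "beta", "sigma"))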

Obtain the estimates and compare them with lm() and the truth:

print(modeljags, 2)
JAGS output for model 'modeljags.txt', generated by jagsUI.
Estimates based on 4 chains of 6000 iterations,
adaptation = 1000 iterations (sufficient),
burn-in = 1000 iterations and thin rate = 1,
yielding 20000 total samples from the joint posterior. 
MCMC ran in parallel for 0.013 minutes at time 2025-11-13 21:44:54.819453.

          mean   sd  2.5%   50%  97.5% overlap0    f Rhat n.eff
alpha    40.71 2.65 35.33 40.70  45.91    FALSE 1.00    1  1070
beta     -0.69 0.28 -1.23 -0.69  -0.13    FALSE 0.99    1  1029
sigma     5.04 1.08  3.43  4.88   7.65    FALSE 1.00    1  1686
deviance 95.53 2.90 92.20 94.76 103.07    FALSE 1.00    1   877

Successful convergence based on Rhat values (all < 1.1). 
Rhat is the potential scale reduction factor (at convergence, Rhat=1). 
For each parameter, n.eff is a crude measure of effective sample size. 

overlap0 checks if 0 falls in the parameter's 95% credible interval.
f is the proportion of the posterior with the same sign as the mean;
i.e., our confidence that the parameter is positive or negative.

DIC info: (pD = var(deviance)/2) 
pD = 4.2 and DIC = 99.71 
DIC is an estimate of expected predictive error (lower is better).
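Convergence and the derived quantities can also be checked programmatically from the fitted object; a minimal sketch, assuming jagsUI's standard $summary and $sims.list components:

# Flag parameters whose Rhat exceeds the usual 1.1 threshold
rhats <- modeljags$summary[, "Rhat"]
rhats[rhats > 1.1]                 # Empty if all chains converged
# Reproduce the 'f' statistic for beta from the raw draws
mean(modeljags$sims.list$beta < 0) # Posterior Pr(beta < 0)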
# Save JAGS estimates
jags_est <- modeljags$summary[1:3, 1]
jags_est
     alpha       beta      sigma 
40.7087568 -0.6877259  5.0445288 
# Compare results of lm() and JAGS
comp <- cbind(truth = truth, lm = lm_est, JAGS = jags_est)
print(comp, 4)
      truth      lm    JAGS
alpha  40.0 40.7426 40.7088
beta   -0.5 -0.6907 -0.6877
sigma   5.0  4.5798  5.0445

Stan code

  1. Bundle data:
dataList <- list(y = y, x = x, n = n)
  2. Write the model in Stan:
library(rstan)
Loading required package: StanHeaders

rstan version 2.32.6 (Stan version 2.32.2)
For execution on a local, multicore CPU with excess RAM we recommend calling
options(mc.cores = parallel::detectCores()).
To avoid recompilation of unchanged Stan programs, we recommend calling
rstan_options(auto_write = TRUE)
For within-chain threading using `reduce_sum()` or `map_rect()` Stan functions,
change `threads_per_chain` option:
rstan_options(threads_per_chain = 1)
Do not specify '-march=native' in 'LOCAL_CPPFLAGS' or a Makevars file

Attaching package: 'rstan'
The following object is masked from 'package:jagsUI':

    traceplot
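Following the advice in the startup message, these two options can be set once per session:

options(mc.cores = parallel::detectCores()) # Run the chains in parallel
rstan_options(auto_write = TRUE)            # Cache the compiled model on disk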
# Write text file with model description in Stan language
cat(file = "model.stan", # This line is R code
"data { // This is the first line of Stan code
  int <lower = 0> n; // Define the format of all data
  vector[n] y; // ... including the dimension of vectors
  vector[n] x; //
}
parameters { // Define format for all parameters
  real alpha;
  real beta;
  real <lower = 0> sigma; // sigma (sd) cannot be negative
}
transformed parameters {
  vector[n] mu;
  for (i in 1:n){
    mu[i] = alpha + beta * x[i]; // Calculate linear predictor
  }
}
model {
  // Priors
  alpha ~ normal(0, 100);
  beta ~ normal(0, 100);
  sigma ~ cauchy(0, 10);
  // Likelihood (could be vectorized to increase speed)
  for (i in 1:n){
    y[i] ~ normal(mu[i], sigma);
  }
}

" )

Set the HMC settings and call Stan:

# HMC settings
ni <- 3000 ; nb <- 1000 ; nc <- 4 ; nt <- 1
# Call Stan (ART 30/3 sec), assess convergence, save estimates

modelstan <- stan(file = "model.stan", data = dataList,
  chains = nc, iter = ni, warmup = nb, thin = nt)

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 2.6e-05 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.26 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 3000 [  0%]  (Warmup)
Chain 1: Iteration:  300 / 3000 [ 10%]  (Warmup)
Chain 1: Iteration:  600 / 3000 [ 20%]  (Warmup)
Chain 1: Iteration:  900 / 3000 [ 30%]  (Warmup)
Chain 1: Iteration: 1001 / 3000 [ 33%]  (Sampling)
Chain 1: Iteration: 1300 / 3000 [ 43%]  (Sampling)
Chain 1: Iteration: 1600 / 3000 [ 53%]  (Sampling)
Chain 1: Iteration: 1900 / 3000 [ 63%]  (Sampling)
Chain 1: Iteration: 2200 / 3000 [ 73%]  (Sampling)
Chain 1: Iteration: 2500 / 3000 [ 83%]  (Sampling)
Chain 1: Iteration: 2800 / 3000 [ 93%]  (Sampling)
Chain 1: Iteration: 3000 / 3000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 0.052 seconds (Warm-up)
Chain 1:                0.082 seconds (Sampling)
Chain 1:                0.134 seconds (Total)
Chain 1: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
Chain 2: 
Chain 2: Gradient evaluation took 9e-06 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.09 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2: 
Chain 2: 
Chain 2: Iteration:    1 / 3000 [  0%]  (Warmup)
Chain 2: Iteration:  300 / 3000 [ 10%]  (Warmup)
Chain 2: Iteration:  600 / 3000 [ 20%]  (Warmup)
Chain 2: Iteration:  900 / 3000 [ 30%]  (Warmup)
Chain 2: Iteration: 1001 / 3000 [ 33%]  (Sampling)
Chain 2: Iteration: 1300 / 3000 [ 43%]  (Sampling)
Chain 2: Iteration: 1600 / 3000 [ 53%]  (Sampling)
Chain 2: Iteration: 1900 / 3000 [ 63%]  (Sampling)
Chain 2: Iteration: 2200 / 3000 [ 73%]  (Sampling)
Chain 2: Iteration: 2500 / 3000 [ 83%]  (Sampling)
Chain 2: Iteration: 2800 / 3000 [ 93%]  (Sampling)
Chain 2: Iteration: 3000 / 3000 [100%]  (Sampling)
Chain 2: 
Chain 2:  Elapsed Time: 0.043 seconds (Warm-up)
Chain 2:                0.09 seconds (Sampling)
Chain 2:                0.133 seconds (Total)
Chain 2: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3).
Chain 3: 
Chain 3: Gradient evaluation took 7e-06 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.07 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3: 
Chain 3: 
Chain 3: Iteration:    1 / 3000 [  0%]  (Warmup)
Chain 3: Iteration:  300 / 3000 [ 10%]  (Warmup)
Chain 3: Iteration:  600 / 3000 [ 20%]  (Warmup)
Chain 3: Iteration:  900 / 3000 [ 30%]  (Warmup)
Chain 3: Iteration: 1001 / 3000 [ 33%]  (Sampling)
Chain 3: Iteration: 1300 / 3000 [ 43%]  (Sampling)
Chain 3: Iteration: 1600 / 3000 [ 53%]  (Sampling)
Chain 3: Iteration: 1900 / 3000 [ 63%]  (Sampling)
Chain 3: Iteration: 2200 / 3000 [ 73%]  (Sampling)
Chain 3: Iteration: 2500 / 3000 [ 83%]  (Sampling)
Chain 3: Iteration: 2800 / 3000 [ 93%]  (Sampling)
Chain 3: Iteration: 3000 / 3000 [100%]  (Sampling)
Chain 3: 
Chain 3:  Elapsed Time: 0.045 seconds (Warm-up)
Chain 3:                0.08 seconds (Sampling)
Chain 3:                0.125 seconds (Total)
Chain 3: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4).
Chain 4: 
Chain 4: Gradient evaluation took 5e-06 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.05 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4: 
Chain 4: 
Chain 4: Iteration:    1 / 3000 [  0%]  (Warmup)
Chain 4: Iteration:  300 / 3000 [ 10%]  (Warmup)
Chain 4: Iteration:  600 / 3000 [ 20%]  (Warmup)
Chain 4: Iteration:  900 / 3000 [ 30%]  (Warmup)
Chain 4: Iteration: 1001 / 3000 [ 33%]  (Sampling)
Chain 4: Iteration: 1300 / 3000 [ 43%]  (Sampling)
Chain 4: Iteration: 1600 / 3000 [ 53%]  (Sampling)
Chain 4: Iteration: 1900 / 3000 [ 63%]  (Sampling)
Chain 4: Iteration: 2200 / 3000 [ 73%]  (Sampling)
Chain 4: Iteration: 2500 / 3000 [ 83%]  (Sampling)
Chain 4: Iteration: 2800 / 3000 [ 93%]  (Sampling)
Chain 4: Iteration: 3000 / 3000 [100%]  (Sampling)
Chain 4: 
Chain 4:  Elapsed Time: 0.043 seconds (Warm-up)
Chain 4:                0.078 seconds (Sampling)
Chain 4:                0.121 seconds (Total)
Chain 4: 
rstan::traceplot(modelstan) # not shown
'pars' not specified. Showing first 10 parameters by default.

print(modelstan, dig = 2)
Inference for Stan model: anon_model.
4 chains, each with iter=3000; warmup=1000; thin=1; 
post-warmup draws per chain=2000, total post-warmup draws=8000.

         mean se_mean   sd   2.5%    25%    50%    75%  97.5% n_eff Rhat
alpha   40.69    0.05 2.65  35.38  39.02  40.70  42.41  45.94  3064    1
beta    -0.69    0.00 0.27  -1.21  -0.86  -0.68  -0.51  -0.14  3030    1
sigma    4.94    0.02 1.00   3.40   4.23   4.79   5.51   7.26  3356    1
mu[1]   40.01    0.04 2.41  35.17  38.47  40.02  41.56  44.77  3130    1
mu[2]   39.32    0.04 2.18  34.94  37.91  39.33  40.72  43.61  3229    1
mu[3]   38.64    0.03 1.96  34.70  37.36  38.66  39.90  42.53  3382    1
mu[4]   37.95    0.03 1.76  34.45  36.82  37.97  39.07  41.45  3624    1
mu[5]   37.26    0.02 1.58  34.11  36.27  37.27  38.27  40.40  4024    1
mu[6]   36.58    0.02 1.43  33.72  35.68  36.58  37.48  39.43  4701    1
mu[7]   35.89    0.02 1.31  33.28  35.06  35.90  36.71  38.53  5835    1
mu[8]   35.21    0.01 1.25  32.72  34.40  35.21  35.98  37.71  7541    1
mu[9]   34.52    0.01 1.25  32.03  33.73  34.54  35.30  37.01  9250    1
mu[10]  33.84    0.01 1.30  31.23  33.00  33.85  34.67  36.43  9591    1
mu[11]  33.15    0.02 1.40  30.33  32.25  33.17  34.04  35.95  8204    1
mu[12]  32.46    0.02 1.55  29.37  31.47  32.48  33.46  35.56  6856    1
mu[13]  31.78    0.02 1.73  28.34  30.67  31.79  32.89  35.20  5881    1
mu[14]  31.09    0.03 1.93  27.31  29.85  31.09  32.33  34.88  5208    1
mu[15]  30.41    0.03 2.14  26.19  29.03  30.39  31.79  34.58  4746    1
mu[16]  29.72    0.04 2.37  25.05  28.20  29.74  31.25  34.30  4422    1
lp__   -31.68    0.03 1.30 -35.02 -32.29 -31.34 -30.72 -30.17  2661    1

Samples were drawn using NUTS(diag_e) at Thu Nov 13 21:45:53 2025.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).
stan_est <- summary(modelstan)$summary[1:3,1]
comp <- cbind(truth = truth, lm = lm_est, JAGS = jags_est, Stan = stan_est)
print(comp, 4)
      truth      lm    JAGS    Stan
alpha  40.0 40.7426 40.7088 40.6921
beta   -0.5 -0.6907 -0.6877 -0.6857
sigma   5.0  4.5798  5.0445  4.9429
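For quantities beyond the printed summary, the post-warmup draws can be pulled out directly; a minimal sketch using rstan::extract():

draws <- rstan::extract(modelstan) # Named list of posterior draws
mean(draws$beta < 0)               # Posterior Pr(beta < 0), analogous to jagsUI's f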

Example with males and females (values changed slightly)

set.seed(5)
n <- 16      # Number of years
a <- 35      # Intercept (females are the reference level)
b <- -0.4    # Slope (females)
b2 <- 10     # Difference between male and female intercepts
b3 <- -0.2   # Difference between male and female slopes
sigma2 <- 12 # Residual variance
# Save true values for later comparisons
truth2 <- c(alpha = a, beta = b, beta2 = b2, beta3 = b3, sigma = sqrt(sigma2))

# See Fig. 5–3 (left) for a variant of this plot
x <- c(1:16, 1:16)         # Values of covariate year, once per sex
sex <- rep(0:1, each = 16) # Indicator: 0 = female, 1 = male

# Assemble data set
y <- a + b*x + b2*sex + b3*x*sex + rnorm(n*2, mean = 0, sd = sqrt(sigma2))


model <- lm(y ~ x*as.factor(sex))
summary(model)

Call:
lm(formula = y ~ x * as.factor(sex))

Residuals:
    Min      1Q  Median      3Q     Max 
-7.1028 -1.9189 -0.0291  2.0087  6.0748 

Coefficients:
                  Estimate Std. Error t value Pr(>|t|)    
(Intercept)        35.5145     1.7549  20.237  < 2e-16 ***
x                  -0.5321     0.1815  -2.932  0.00664 ** 
as.factor(sex)1     8.5756     2.4818   3.455  0.00177 ** 
x:as.factor(sex)1   0.1557     0.2567   0.607  0.54887    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 3.346 on 28 degrees of freedom
Multiple R-squared:  0.7475,    Adjusted R-squared:  0.7205 
F-statistic: 27.63 on 3 and 28 DF,  p-value: 1.62e-08
lm_est <- c(coef(model), sigma = sigma(model))
library(ggplot2)
df <- data.frame(x = x + 1989, y = y, sex = as.factor(sex))
ggplot(df, aes(x = x, y = y, col = sex)) + # df$x already holds the year
  geom_point(size = 1.5) +
  ylab("Prop. occupied (%)") +
  xlab("Year") + theme_classic()

Stan model

library(rstan)
dataList <- list(y = y, x = x, n = n*2, x2=sex)

# Write text file with model description in Stan language
cat(file = "model.stan", # This line is R code
"data { // This is the first line of Stan code
  int <lower = 0> n; // Define the format of all data
  vector[n] y; // ... including the dimension of vectors
  vector[n] x; //
  array[n] int<lower=0, upper=1> x2;
}
parameters { // Define format for all parameters
  real alpha;
  real beta;
  real beta2;
  real beta3;
  real <lower = 0> sigma; // sigma (sd) cannot be negative
}
transformed parameters {
  vector[n] mu;
  for (i in 1:n){
    mu[i] = alpha + beta * x[i] + beta2 * x2[i] + beta3 * x[i] * x2[i] ; // Calculate linear predictor
  }
}
model {
  // Priors
  alpha ~ normal(0, 100);
  beta ~ normal(0, 100);
  beta2 ~ normal(0, 100);
  beta3 ~ normal(0, 100);
  sigma ~ cauchy(0, 10);
  // Likelihood (could be vectorized to increase speed)
  for (i in 1:n){
    y[i] ~ normal(mu[i], sigma);
  }
}

" )
# HMC settings
ni <- 3000 ; nb <- 1000 ; nc <- 4 ; nt <- 1
# Call Stan (ART 30/3 sec), assess convergence, save estimates

modelstan <- stan(file = "model.stan", data = dataList,
  chains = nc, iter = ni, warmup = nb, thin = nt)

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
Chain 1: 
Chain 1: Gradient evaluation took 4.6e-05 seconds
Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.46 seconds.
Chain 1: Adjust your expectations accordingly!
Chain 1: 
Chain 1: 
Chain 1: Iteration:    1 / 3000 [  0%]  (Warmup)
Chain 1: Iteration:  300 / 3000 [ 10%]  (Warmup)
Chain 1: Iteration:  600 / 3000 [ 20%]  (Warmup)
Chain 1: Iteration:  900 / 3000 [ 30%]  (Warmup)
Chain 1: Iteration: 1001 / 3000 [ 33%]  (Sampling)
Chain 1: Iteration: 1300 / 3000 [ 43%]  (Sampling)
Chain 1: Iteration: 1600 / 3000 [ 53%]  (Sampling)
Chain 1: Iteration: 1900 / 3000 [ 63%]  (Sampling)
Chain 1: Iteration: 2200 / 3000 [ 73%]  (Sampling)
Chain 1: Iteration: 2500 / 3000 [ 83%]  (Sampling)
Chain 1: Iteration: 2800 / 3000 [ 93%]  (Sampling)
Chain 1: Iteration: 3000 / 3000 [100%]  (Sampling)
Chain 1: 
Chain 1:  Elapsed Time: 0.203 seconds (Warm-up)
Chain 1:                0.35 seconds (Sampling)
Chain 1:                0.553 seconds (Total)
Chain 1: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 2).
Chain 2: 
Chain 2: Gradient evaluation took 1.1e-05 seconds
Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds.
Chain 2: Adjust your expectations accordingly!
Chain 2: 
Chain 2: 
Chain 2: Iteration:    1 / 3000 [  0%]  (Warmup)
Chain 2: Iteration:  300 / 3000 [ 10%]  (Warmup)
Chain 2: Iteration:  600 / 3000 [ 20%]  (Warmup)
Chain 2: Iteration:  900 / 3000 [ 30%]  (Warmup)
Chain 2: Iteration: 1001 / 3000 [ 33%]  (Sampling)
Chain 2: Iteration: 1300 / 3000 [ 43%]  (Sampling)
Chain 2: Iteration: 1600 / 3000 [ 53%]  (Sampling)
Chain 2: Iteration: 1900 / 3000 [ 63%]  (Sampling)
Chain 2: Iteration: 2200 / 3000 [ 73%]  (Sampling)
Chain 2: Iteration: 2500 / 3000 [ 83%]  (Sampling)
Chain 2: Iteration: 2800 / 3000 [ 93%]  (Sampling)
Chain 2: Iteration: 3000 / 3000 [100%]  (Sampling)
Chain 2: 
Chain 2:  Elapsed Time: 0.191 seconds (Warm-up)
Chain 2:                0.321 seconds (Sampling)
Chain 2:                0.512 seconds (Total)
Chain 2: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 3).
Chain 3: 
Chain 3: Gradient evaluation took 1.6e-05 seconds
Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0.16 seconds.
Chain 3: Adjust your expectations accordingly!
Chain 3: 
Chain 3: 
Chain 3: Iteration:    1 / 3000 [  0%]  (Warmup)
Chain 3: Iteration:  300 / 3000 [ 10%]  (Warmup)
Chain 3: Iteration:  600 / 3000 [ 20%]  (Warmup)
Chain 3: Iteration:  900 / 3000 [ 30%]  (Warmup)
Chain 3: Iteration: 1001 / 3000 [ 33%]  (Sampling)
Chain 3: Iteration: 1300 / 3000 [ 43%]  (Sampling)
Chain 3: Iteration: 1600 / 3000 [ 53%]  (Sampling)
Chain 3: Iteration: 1900 / 3000 [ 63%]  (Sampling)
Chain 3: Iteration: 2200 / 3000 [ 73%]  (Sampling)
Chain 3: Iteration: 2500 / 3000 [ 83%]  (Sampling)
Chain 3: Iteration: 2800 / 3000 [ 93%]  (Sampling)
Chain 3: Iteration: 3000 / 3000 [100%]  (Sampling)
Chain 3: 
Chain 3:  Elapsed Time: 0.187 seconds (Warm-up)
Chain 3:                0.358 seconds (Sampling)
Chain 3:                0.545 seconds (Total)
Chain 3: 

SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 4).
Chain 4: 
Chain 4: Gradient evaluation took 1.1e-05 seconds
Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds.
Chain 4: Adjust your expectations accordingly!
Chain 4: 
Chain 4: 
Chain 4: Iteration:    1 / 3000 [  0%]  (Warmup)
Chain 4: Iteration:  300 / 3000 [ 10%]  (Warmup)
Chain 4: Iteration:  600 / 3000 [ 20%]  (Warmup)
Chain 4: Iteration:  900 / 3000 [ 30%]  (Warmup)
Chain 4: Iteration: 1001 / 3000 [ 33%]  (Sampling)
Chain 4: Iteration: 1300 / 3000 [ 43%]  (Sampling)
Chain 4: Iteration: 1600 / 3000 [ 53%]  (Sampling)
Chain 4: Iteration: 1900 / 3000 [ 63%]  (Sampling)
Chain 4: Iteration: 2200 / 3000 [ 73%]  (Sampling)
Chain 4: Iteration: 2500 / 3000 [ 83%]  (Sampling)
Chain 4: Iteration: 2800 / 3000 [ 93%]  (Sampling)
Chain 4: Iteration: 3000 / 3000 [100%]  (Sampling)
Chain 4: 
Chain 4:  Elapsed Time: 0.181 seconds (Warm-up)
Chain 4:                0.358 seconds (Sampling)
Chain 4:                0.539 seconds (Total)
Chain 4: 
rstan::traceplot(modelstan) # not shown
'pars' not specified. Showing first 10 parameters by default.

print(modelstan, dig = 2)
Inference for Stan model: anon_model.
4 chains, each with iter=3000; warmup=1000; thin=1; 
post-warmup draws per chain=2000, total post-warmup draws=8000.

         mean se_mean   sd   2.5%    25%    50%    75%  97.5% n_eff Rhat
alpha   35.50    0.04 1.84  31.95  34.23  35.51  36.75  39.08  2343    1
beta    -0.53    0.00 0.19  -0.91  -0.66  -0.53  -0.40  -0.16  2330    1
beta2    8.59    0.05 2.59   3.36   6.85   8.65  10.34  13.55  2487    1
beta3    0.15    0.01 0.27  -0.36  -0.03   0.15   0.34   0.70  2422    1
sigma    3.47    0.01 0.48   2.69   3.13   3.43   3.76   4.60  3475    1
mu[1]   34.97    0.03 1.67  31.73  33.82  34.99  36.10  38.25  2399    1
mu[2]   34.44    0.03 1.51  31.52  33.39  34.45  35.46  37.41  2484    1
mu[3]   33.91    0.03 1.36  31.32  32.97  33.91  34.82  36.60  2619    1
mu[4]   33.38    0.02 1.22  31.03  32.55  33.38  34.19  35.77  2831    1
mu[5]   32.85    0.02 1.09  30.74  32.11  32.84  33.58  35.00  3193    1
mu[6]   32.32    0.02 0.99  30.40  31.66  32.30  32.97  34.26  3834    1
mu[7]   31.79    0.01 0.91  30.01  31.19  31.78  32.40  33.61  5006    1
mu[8]   31.26    0.01 0.87  29.53  30.69  31.25  31.83  32.99  7140    1
mu[9]   30.73    0.01 0.88  28.97  30.16  30.72  31.31  32.47  8882    1
mu[10]  30.20    0.01 0.92  28.31  29.61  30.19  30.80  32.00  9205    1
mu[11]  29.67    0.01 1.00  27.64  29.03  29.68  30.32  31.60  8088    1
mu[12]  29.14    0.01 1.10  26.90  28.43  29.16  29.88  31.27  6385    1
mu[13]  28.61    0.02 1.23  26.09  27.81  28.62  29.42  30.99  5157    1
mu[14]  28.08    0.02 1.37  25.28  27.19  28.10  29.00  30.73  4448    1
mu[15]  27.55    0.02 1.52  24.42  26.55  27.58  28.58  30.48  3953    1
mu[16]  27.02    0.03 1.69  23.54  25.91  27.04  28.16  30.26  3613    1
mu[17]  43.71    0.02 1.68  40.48  42.59  43.70  44.81  47.04  6079    1
mu[18]  43.34    0.02 1.52  40.42  42.32  43.31  44.33  46.33  6188    1
mu[19]  42.96    0.02 1.37  40.32  42.05  42.94  43.86  45.70  6352    1
mu[20]  42.58    0.02 1.23  40.19  41.77  42.57  43.39  45.05  6630    1
mu[21]  42.21    0.01 1.11  40.07  41.47  42.20  42.93  44.42  6953    1
mu[22]  41.83    0.01 1.00  39.89  41.15  41.84  42.49  43.86  7450    1
mu[23]  41.45    0.01 0.93  39.65  40.83  41.46  42.06  43.32  8041    1
mu[24]  41.08    0.01 0.89  39.32  40.49  41.07  41.67  42.85  8581    1
mu[25]  40.70    0.01 0.90  38.92  40.11  40.69  41.31  42.47  8853    1
mu[26]  40.33    0.01 0.94  38.44  39.70  40.32  40.96  42.16  8921    1
mu[27]  39.95    0.01 1.02  37.89  39.29  39.95  40.63  41.96  8557    1
mu[28]  39.57    0.01 1.12  37.29  38.84  39.57  40.32  41.78  7982    1
mu[29]  39.20    0.01 1.25  36.66  38.39  39.21  40.02  41.65  7492    1
mu[30]  38.82    0.02 1.39  35.99  37.94  38.83  39.73  41.56  7160    1
mu[31]  38.44    0.02 1.54  35.29  37.46  38.47  39.44  41.52  6916    1
mu[32]  38.07    0.02 1.70  34.63  36.97  38.09  39.18  41.46  6702    1
lp__   -54.24    0.03 1.70 -58.38 -55.15 -53.90 -52.99 -51.98  2657    1

Samples were drawn using NUTS(diag_e) at Thu Nov 13 21:46:50 2025.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).
stan_est <- summary(modelstan)$summary[1:5,1]
stan_est
     alpha       beta      beta2      beta3      sigma 
35.4984666 -0.5300711  8.5914974  0.1536086  3.4744988
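As in the first example, the estimates can be set beside the truth and the lm() fit (not run here; rows are matched by position, with names taken from truth2):

comp2 <- cbind(truth = truth2, lm = lm_est, Stan = stan_est)
print(comp2, 4)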