The following document presents the use of Stan.
To use it from within R, the "rstan" package is all that is needed.
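A minimal setup sketch (assuming rstan is already installed; the auto_write and mc.cores settings follow rstan's usual startup recommendations and are not part of the original script):
library(rstan)
rstan_options(auto_write = TRUE)             # cache compiled models between runs
options(mc.cores = parallel::detectCores())  # run the chains in parallel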
The package was run on four different examples:
Normal likelihood and normal prior, with a single observation.
stancode <- 'data {real y_mean;} parameters {real y;} model {y ~ normal(y_mean,1);}'
mod <- stan_model(model_code = stancode)
fit <- sampling(mod, data = list(y_mean = 0))
##
## SAMPLING FOR MODEL '73fc79f8b1915e8208c736914c86d1a1' NOW (CHAIN 1).
## Chain 1:
## Chain 1: Gradient evaluation took 0 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1:
## Chain 1:
## Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup)
## Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup)
## Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup)
## Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup)
## Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup)
## Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup)
## Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling)
## Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling)
## Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling)
## Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling)
## Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling)
## Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
## Chain 1:
## Chain 1: Elapsed Time: 0.016 seconds (Warm-up)
## Chain 1: 0.016 seconds (Sampling)
## Chain 1: 0.032 seconds (Total)
## Chain 1:
##
## SAMPLING FOR MODEL '73fc79f8b1915e8208c736914c86d1a1' NOW (CHAIN 2).
## Chain 2:
## Chain 2: Gradient evaluation took 0 seconds
## Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 2: Adjust your expectations accordingly!
## Chain 2:
## Chain 2:
## Chain 2: Iteration: 1 / 2000 [ 0%] (Warmup)
## Chain 2: Iteration: 200 / 2000 [ 10%] (Warmup)
## Chain 2: Iteration: 400 / 2000 [ 20%] (Warmup)
## Chain 2: Iteration: 600 / 2000 [ 30%] (Warmup)
## Chain 2: Iteration: 800 / 2000 [ 40%] (Warmup)
## Chain 2: Iteration: 1000 / 2000 [ 50%] (Warmup)
## Chain 2: Iteration: 1001 / 2000 [ 50%] (Sampling)
## Chain 2: Iteration: 1200 / 2000 [ 60%] (Sampling)
## Chain 2: Iteration: 1400 / 2000 [ 70%] (Sampling)
## Chain 2: Iteration: 1600 / 2000 [ 80%] (Sampling)
## Chain 2: Iteration: 1800 / 2000 [ 90%] (Sampling)
## Chain 2: Iteration: 2000 / 2000 [100%] (Sampling)
## Chain 2:
## Chain 2: Elapsed Time: 0 seconds (Warm-up)
## Chain 2: 0.015 seconds (Sampling)
## Chain 2: 0.015 seconds (Total)
## Chain 2:
##
## SAMPLING FOR MODEL '73fc79f8b1915e8208c736914c86d1a1' NOW (CHAIN 3).
## Chain 3:
## Chain 3: Gradient evaluation took 0 seconds
## Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 3: Adjust your expectations accordingly!
## Chain 3:
## Chain 3:
## Chain 3: Iteration: 1 / 2000 [ 0%] (Warmup)
## Chain 3: Iteration: 200 / 2000 [ 10%] (Warmup)
## Chain 3: Iteration: 400 / 2000 [ 20%] (Warmup)
## Chain 3: Iteration: 600 / 2000 [ 30%] (Warmup)
## Chain 3: Iteration: 800 / 2000 [ 40%] (Warmup)
## Chain 3: Iteration: 1000 / 2000 [ 50%] (Warmup)
## Chain 3: Iteration: 1001 / 2000 [ 50%] (Sampling)
## Chain 3: Iteration: 1200 / 2000 [ 60%] (Sampling)
## Chain 3: Iteration: 1400 / 2000 [ 70%] (Sampling)
## Chain 3: Iteration: 1600 / 2000 [ 80%] (Sampling)
## Chain 3: Iteration: 1800 / 2000 [ 90%] (Sampling)
## Chain 3: Iteration: 2000 / 2000 [100%] (Sampling)
## Chain 3:
## Chain 3: Elapsed Time: 0.018 seconds (Warm-up)
## Chain 3: 0.009 seconds (Sampling)
## Chain 3: 0.027 seconds (Total)
## Chain 3:
##
## SAMPLING FOR MODEL '73fc79f8b1915e8208c736914c86d1a1' NOW (CHAIN 4).
## Chain 4:
## Chain 4: Gradient evaluation took 0 seconds
## Chain 4: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 4: Adjust your expectations accordingly!
## Chain 4:
## Chain 4:
## Chain 4: Iteration: 1 / 2000 [ 0%] (Warmup)
## Chain 4: Iteration: 200 / 2000 [ 10%] (Warmup)
## Chain 4: Iteration: 400 / 2000 [ 20%] (Warmup)
## Chain 4: Iteration: 600 / 2000 [ 30%] (Warmup)
## Chain 4: Iteration: 800 / 2000 [ 40%] (Warmup)
## Chain 4: Iteration: 1000 / 2000 [ 50%] (Warmup)
## Chain 4: Iteration: 1001 / 2000 [ 50%] (Sampling)
## Chain 4: Iteration: 1200 / 2000 [ 60%] (Sampling)
## Chain 4: Iteration: 1400 / 2000 [ 70%] (Sampling)
## Chain 4: Iteration: 1600 / 2000 [ 80%] (Sampling)
## Chain 4: Iteration: 1800 / 2000 [ 90%] (Sampling)
## Chain 4: Iteration: 2000 / 2000 [100%] (Sampling)
## Chain 4:
## Chain 4: Elapsed Time: 0.005 seconds (Warm-up)
## Chain 4: 0 seconds (Sampling)
## Chain 4: 0.005 seconds (Total)
## Chain 4:
print(fit)
## Inference for Stan model: 73fc79f8b1915e8208c736914c86d1a1.
## 4 chains, each with iter=2000; warmup=1000; thin=1;
## post-warmup draws per chain=1000, total post-warmup draws=4000.
##
## mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat
## y -0.01 0.03 1.02 -2.06 -0.67 -0.02 0.67 2 1461 1
## lp__ -0.52 0.02 0.76 -2.74 -0.67 -0.23 -0.05 0 1517 1
##
## Samples were drawn using NUTS(diag_e) at Fri Jan 25 05:10:46 2019.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
summary(fit)
## $summary
## mean se_mean sd 2.5% 25% 50%
## y -0.008610042 0.02668554 1.0201688 -2.062664 -0.6709301 -0.01671636
## lp__ -0.520279145 0.01957470 0.7625094 -2.743548 -0.6737909 -0.22682023
## 75% 97.5% n_eff Rhat
## y 0.67450663 2.0032935877 1461.477 1.001855
## lp__ -0.05153447 -0.0005177675 1517.400 1.000925
##
## $c_summary
## , , chains = chain:1
##
## stats
## parameter mean sd 2.5% 25% 50%
## y -0.04657824 1.0429379 -2.097752 -0.7474129 -0.02678476
## lp__ -0.54440060 0.8224671 -2.770167 -0.6745145 -0.25450238
## stats
## parameter 75% 97.5%
## y 0.66013731 2.0487291830
## lp__ -0.06026975 -0.0002807073
##
## , , chains = chain:2
##
## stats
## parameter mean sd 2.5% 25% 50%
## y 0.01691413 1.0050039 -2.128561 -0.6414465 0.006456592
## lp__ -0.50465440 0.7146636 -2.668720 -0.6544870 -0.230774658
## stats
## parameter 75% 97.5%
## y 0.71474233 1.996288026
## lp__ -0.05170444 -0.001138555
##
## , , chains = chain:3
##
## stats
## parameter mean sd 2.5% 25% 50%
## y 0.01751481 1.0177801 -1.875796 -0.6213283 -0.02873719
## lp__ -0.51757359 0.7515088 -2.788371 -0.6707985 -0.21845533
## stats
## parameter 75% 97.5%
## y 0.67716442 2.0901378677
## lp__ -0.05141108 -0.0004117112
##
## , , chains = chain:4
##
## stats
## parameter mean sd 2.5% 25% 50%
## y -0.02229086 1.014647 -2.063807 -0.6487604 0.008412678
## lp__ -0.51448799 0.758023 -2.571124 -0.6987987 -0.208079192
## stats
## parameter 75% 97.5%
## y 0.63838474 1.821086886
## lp__ -0.04601569 -0.000664757
plot(fit, ci_level = 0.95, outer_level = 0.999)
## ci_level: 0.95 (95% intervals)
## outer_level: 0.999 (99.9% intervals)
plot(fit, show_density=TRUE, ci_level=0.95, outer_level=0.999)
## ci_level: 0.95 (95% intervals)
## outer_level: 0.999 (99.9% intervals)
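As a quick check (a sketch, not part of the original script), the posterior draws of y can be extracted and compared with the N(0, 1) distribution implied by y ~ normal(y_mean, 1) with y_mean = 0:
draws <- rstan::extract(fit)$y      # 4000 post-warmup draws of y
c(mean(draws), sd(draws))           # should be close to 0 and 1
hist(draws, freq = FALSE, breaks = 40)
curve(dnorm(x, 0, 1), add = TRUE)   # theoretical density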
Generating the data
n = 100
alpha = -2
beta = 2
sigma = 1
x = sort(runif(n, 0, 10))
y = rnorm(n, mean = alpha + beta * x, sd = sigma)
plot(x, y)
lines(x, alpha + beta * x)
data <- list(y = y, x = x, N = n)
Code for the rstan model
model = "data {
int<lower=1> N;
vector[N] y;
vector[N] x;
}
parameters {
real alpha;
real beta;
real<lower=0> sigma;
}
model {
alpha ~ normal(0, 100);
beta ~ normal(0, 100);
sigma ~ gamma(0.01, 0.001);
y ~ normal(alpha + beta * x, sigma^-2);
}
generated quantities {
vector[N] y_rep;
for(n in 1:N) {
y_rep[n] = normal_rng(alpha + beta * x[n], sigma^-2);
}
}"
data <- list(y = y, x = x, N = n)
Results
fit <- stan(model_code = model,
data = data, iter = 1000, warmup = 100, chains = 2)
##
## SAMPLING FOR MODEL 'ef0eeae4bad4842686a70167ddd3c7bd' NOW (CHAIN 1).
## Chain 1:
## Chain 1: Gradient evaluation took 0 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1:
## Chain 1:
## Chain 1: WARNING: There aren't enough warmup iterations to fit the
## Chain 1: three stages of adaptation as currently configured.
## Chain 1: Reducing each adaptation stage to 15%/75%/10% of
## Chain 1: the given number of warmup iterations:
## Chain 1: init_buffer = 15
## Chain 1: adapt_window = 75
## Chain 1: term_buffer = 10
## Chain 1:
## Chain 1: Iteration: 1 / 1000 [ 0%] (Warmup)
## Chain 1: Iteration: 100 / 1000 [ 10%] (Warmup)
## Chain 1: Iteration: 101 / 1000 [ 10%] (Sampling)
## Chain 1: Iteration: 200 / 1000 [ 20%] (Sampling)
## Chain 1: Iteration: 300 / 1000 [ 30%] (Sampling)
## Chain 1: Iteration: 400 / 1000 [ 40%] (Sampling)
## Chain 1: Iteration: 500 / 1000 [ 50%] (Sampling)
## Chain 1: Iteration: 600 / 1000 [ 60%] (Sampling)
## Chain 1: Iteration: 700 / 1000 [ 70%] (Sampling)
## Chain 1: Iteration: 800 / 1000 [ 80%] (Sampling)
## Chain 1: Iteration: 900 / 1000 [ 90%] (Sampling)
## Chain 1: Iteration: 1000 / 1000 [100%] (Sampling)
## Chain 1:
## Chain 1: Elapsed Time: 0.016 seconds (Warm-up)
## Chain 1: 0.081 seconds (Sampling)
## Chain 1: 0.097 seconds (Total)
## Chain 1:
##
## SAMPLING FOR MODEL 'ef0eeae4bad4842686a70167ddd3c7bd' NOW (CHAIN 2).
## Chain 2:
## Chain 2: Gradient evaluation took 0 seconds
## Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 2: Adjust your expectations accordingly!
## Chain 2:
## Chain 2:
## Chain 2: WARNING: There aren't enough warmup iterations to fit the
## Chain 2: three stages of adaptation as currently configured.
## Chain 2: Reducing each adaptation stage to 15%/75%/10% of
## Chain 2: the given number of warmup iterations:
## Chain 2: init_buffer = 15
## Chain 2: adapt_window = 75
## Chain 2: term_buffer = 10
## Chain 2:
## Chain 2: Iteration: 1 / 1000 [ 0%] (Warmup)
## Chain 2: Iteration: 100 / 1000 [ 10%] (Warmup)
## Chain 2: Iteration: 101 / 1000 [ 10%] (Sampling)
## Chain 2: Iteration: 200 / 1000 [ 20%] (Sampling)
## Chain 2: Iteration: 300 / 1000 [ 30%] (Sampling)
## Chain 2: Iteration: 400 / 1000 [ 40%] (Sampling)
## Chain 2: Iteration: 500 / 1000 [ 50%] (Sampling)
## Chain 2: Iteration: 600 / 1000 [ 60%] (Sampling)
## Chain 2: Iteration: 700 / 1000 [ 70%] (Sampling)
## Chain 2: Iteration: 800 / 1000 [ 80%] (Sampling)
## Chain 2: Iteration: 900 / 1000 [ 90%] (Sampling)
## Chain 2: Iteration: 1000 / 1000 [100%] (Sampling)
## Chain 2:
## Chain 2: Elapsed Time: 0 seconds (Warm-up)
## Chain 2: 0.094 seconds (Sampling)
## Chain 2: 0.094 seconds (Total)
## Chain 2:
plot(fit, ci_level = 0.95, outer_level = 0.999)
## 'pars' not specified. Showing first 10 parameters by default.
## ci_level: 0.95 (95% intervals)
## outer_level: 0.999 (99.9% intervals)
plot(fit, show_density=TRUE, ci_level=0.95, outer_level=0.999)
## 'pars' not specified. Showing first 10 parameters by default.
## ci_level: 0.95 (95% intervals)
## outer_level: 0.999 (99.9% intervals)
sample = extract(fit)
library(bayesplot)  # mcmc_areas() below comes from the bayesplot package
p1 <- mcmc_areas(as.data.frame(sample), pars = c("alpha", "beta", "sigma"))
p1
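Since the generated quantities block produces replicated data y_rep, a posterior predictive check can also be sketched with bayesplot (an illustration, not part of the original script):
y_rep <- sample$y_rep               # matrix of draws x N replicated datasets
ppc_dens_overlay(y, y_rep[1:50, ])  # overlay 50 replicated densities on the observed y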
Fitting the data
load("C:/Users/Rafael/Desktop/Pesquisa/Bases/homicidio.RData")
y = homicidio$X__2
x = homicidio$X__1
lm1 <- lm(y ~ x)
summary(lm1)
##
## Call:
## lm(formula = y ~ x)
##
## Residuals:
## Min 1Q Median 3Q Max
## -2.2456 -0.7098 0.2115 0.5496 1.8874
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 15.7631 2.1774 7.239 4.29e-06 ***
## x 1.1262 0.2375 4.742 0.000315 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.173 on 14 degrees of freedom
## Multiple R-squared: 0.6163, Adjusted R-squared: 0.5889
## F-statistic: 22.49 on 1 and 14 DF, p-value: 0.0003151
plot(x, y)
lines(x, 15.7 + 1.13 * x)
n = length(y)
Code for the rstan model
model = "data {
int<lower=1> N;
vector[N] y;
vector[N] x;
}
parameters {
real alpha;
real beta;
real<lower=0> sigma;
}
model {
alpha ~ normal(0, 100);
beta ~ normal(0, 100);
sigma ~ gamma(0.01, 0.001);
y ~ normal(alpha + beta * x, sigma^-2);
}
generated quantities {
vector[N] y_rep;
for(n in 1:N) {
y_rep[n] = normal_rng(alpha + beta * x[n], sigma^-2);
}
}"
data <- list(y = y, x = x, N = n)
Results
fit <- stan(model_code = model,
data = data, iter = 1000, warmup = 100, chains = 2)
##
## SAMPLING FOR MODEL '4b8266bfd982d6c642f4fe958ce7655b' NOW (CHAIN 1).
## Chain 1:
## Chain 1: Gradient evaluation took 0 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1:
## Chain 1:
## Chain 1: WARNING: There aren't enough warmup iterations to fit the
## Chain 1: three stages of adaptation as currently configured.
## Chain 1: Reducing each adaptation stage to 15%/75%/10% of
## Chain 1: the given number of warmup iterations:
## Chain 1: init_buffer = 15
## Chain 1: adapt_window = 75
## Chain 1: term_buffer = 10
## Chain 1:
## Chain 1: Iteration: 1 / 1000 [ 0%] (Warmup)
## Chain 1: Iteration: 100 / 1000 [ 10%] (Warmup)
## Chain 1: Iteration: 101 / 1000 [ 10%] (Sampling)
## Chain 1: Iteration: 200 / 1000 [ 20%] (Sampling)
## Chain 1: Iteration: 300 / 1000 [ 30%] (Sampling)
## Chain 1: Iteration: 400 / 1000 [ 40%] (Sampling)
## Chain 1: Iteration: 500 / 1000 [ 50%] (Sampling)
## Chain 1: Iteration: 600 / 1000 [ 60%] (Sampling)
## Chain 1: Iteration: 700 / 1000 [ 70%] (Sampling)
## Chain 1: Iteration: 800 / 1000 [ 80%] (Sampling)
## Chain 1: Iteration: 900 / 1000 [ 90%] (Sampling)
## Chain 1: Iteration: 1000 / 1000 [100%] (Sampling)
## Chain 1:
## Chain 1: Elapsed Time: 0.016 seconds (Warm-up)
## Chain 1: 0.128 seconds (Sampling)
## Chain 1: 0.144 seconds (Total)
## Chain 1:
##
## SAMPLING FOR MODEL '4b8266bfd982d6c642f4fe958ce7655b' NOW (CHAIN 2).
## Chain 2:
## Chain 2: Gradient evaluation took 0 seconds
## Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 2: Adjust your expectations accordingly!
## Chain 2:
## Chain 2:
## Chain 2: WARNING: There aren't enough warmup iterations to fit the
## Chain 2: three stages of adaptation as currently configured.
## Chain 2: Reducing each adaptation stage to 15%/75%/10% of
## Chain 2: the given number of warmup iterations:
## Chain 2: init_buffer = 15
## Chain 2: adapt_window = 75
## Chain 2: term_buffer = 10
## Chain 2:
## Chain 2: Iteration: 1 / 1000 [ 0%] (Warmup)
## Chain 2: Iteration: 100 / 1000 [ 10%] (Warmup)
## Chain 2: Iteration: 101 / 1000 [ 10%] (Sampling)
## Chain 2: Iteration: 200 / 1000 [ 20%] (Sampling)
## Chain 2: Iteration: 300 / 1000 [ 30%] (Sampling)
## Chain 2: Iteration: 400 / 1000 [ 40%] (Sampling)
## Chain 2: Iteration: 500 / 1000 [ 50%] (Sampling)
## Chain 2: Iteration: 600 / 1000 [ 60%] (Sampling)
## Chain 2: Iteration: 700 / 1000 [ 70%] (Sampling)
## Chain 2: Iteration: 800 / 1000 [ 80%] (Sampling)
## Chain 2: Iteration: 900 / 1000 [ 90%] (Sampling)
## Chain 2: Iteration: 1000 / 1000 [100%] (Sampling)
## Chain 2:
## Chain 2: Elapsed Time: 0 seconds (Warm-up)
## Chain 2: 0.156 seconds (Sampling)
## Chain 2: 0.156 seconds (Total)
## Chain 2:
#sampling(fit, iter = 300, data = data)
plot(fit, ci_level = 0.95, outer_level = 0.999)
## 'pars' not specified. Showing first 10 parameters by default.
## ci_level: 0.95 (95% intervals)
## outer_level: 0.999 (99.9% intervals)
plot(fit, show_density=TRUE, ci_level=0.95, outer_level=0.999)
## 'pars' not specified. Showing first 10 parameters by default.
## ci_level: 0.95 (95% intervals)
## outer_level: 0.999 (99.9% intervals)
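As a sanity check (an illustrative step, not in the original script), the posterior summaries can be printed and compared with the lm() estimates above (intercept about 15.76, slope about 1.13):
print(fit, pars = c("alpha", "beta", "sigma"))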
Loading the data
load("C:/Users/Rafael/Desktop/Pesquisa/Bases/enemAMOSTRA.Rda")
y = enem$NU_NOTA_REDACAO
x = enem$NU_NOTA_MT
lm1 <- lm(y ~ x)
summary(lm1)
##
## Call:
## lm(formula = y ~ x)
##
## Residuals:
## Min 1Q Median 3Q Max
## -598.34 -84.79 -3.69 88.03 321.00
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 353.68468 32.02075 11.045 <2e-16 ***
## x 0.52649 0.05585 9.426 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 127.3 on 348 degrees of freedom
## Multiple R-squared: 0.2034, Adjusted R-squared: 0.2011
## F-statistic: 88.86 on 1 and 348 DF, p-value: < 2.2e-16
plot(x, y)
lines(x, 354 + 0.52 * x)
n = length(y)
Creating the model in rstan
model = "data {
int<lower=1> N;
vector[N] y;
vector[N] x;
}
parameters {
real alpha;
real beta;
real<lower=0> sigma;
}
model {
alpha ~ normal(0, 100);
beta ~ normal(0, 100);
sigma ~ gamma(0.01, 0.001);
y ~ normal(alpha + beta * x, sigma^-2);
}
generated quantities {
vector[N] y_rep;
for(n in 1:N) {
y_rep[n] = normal_rng(alpha + beta * x[n], sigma^-2);
}
}"
data <- list(y = y, x = x, N = n)
Results
fit <- stan(model_code = model,
data = data, iter = 1000, warmup = 100, chains = 2)
##
## SAMPLING FOR MODEL '034b453fe732b8400474d3a94aab892c' NOW (CHAIN 1).
## Chain 1:
## Chain 1: Gradient evaluation took 0 seconds
## Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 1: Adjust your expectations accordingly!
## Chain 1:
## Chain 1:
## Chain 1: WARNING: There aren't enough warmup iterations to fit the
## Chain 1: three stages of adaptation as currently configured.
## Chain 1: Reducing each adaptation stage to 15%/75%/10% of
## Chain 1: the given number of warmup iterations:
## Chain 1: init_buffer = 15
## Chain 1: adapt_window = 75
## Chain 1: term_buffer = 10
## Chain 1:
## Chain 1: Iteration: 1 / 1000 [ 0%] (Warmup)
## Chain 1: Iteration: 100 / 1000 [ 10%] (Warmup)
## Chain 1: Iteration: 101 / 1000 [ 10%] (Sampling)
## Chain 1: Iteration: 200 / 1000 [ 20%] (Sampling)
## Chain 1: Iteration: 300 / 1000 [ 30%] (Sampling)
## Chain 1: Iteration: 400 / 1000 [ 40%] (Sampling)
## Chain 1: Iteration: 500 / 1000 [ 50%] (Sampling)
## Chain 1: Iteration: 600 / 1000 [ 60%] (Sampling)
## Chain 1: Iteration: 700 / 1000 [ 70%] (Sampling)
## Chain 1: Iteration: 800 / 1000 [ 80%] (Sampling)
## Chain 1: Iteration: 900 / 1000 [ 90%] (Sampling)
## Chain 1: Iteration: 1000 / 1000 [100%] (Sampling)
## Chain 1:
## Chain 1: Elapsed Time: 0.203 seconds (Warm-up)
## Chain 1: 5.92 seconds (Sampling)
## Chain 1: 6.123 seconds (Total)
## Chain 1:
##
## SAMPLING FOR MODEL '034b453fe732b8400474d3a94aab892c' NOW (CHAIN 2).
## Chain 2:
## Chain 2: Gradient evaluation took 0 seconds
## Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 0 seconds.
## Chain 2: Adjust your expectations accordingly!
## Chain 2:
## Chain 2:
## Chain 2: WARNING: There aren't enough warmup iterations to fit the
## Chain 2: three stages of adaptation as currently configured.
## Chain 2: Reducing each adaptation stage to 15%/75%/10% of
## Chain 2: the given number of warmup iterations:
## Chain 2: init_buffer = 15
## Chain 2: adapt_window = 75
## Chain 2: term_buffer = 10
## Chain 2:
## Chain 2: Iteration: 1 / 1000 [ 0%] (Warmup)
## Chain 2: Iteration: 100 / 1000 [ 10%] (Warmup)
## Chain 2: Iteration: 101 / 1000 [ 10%] (Sampling)
## Chain 2: Iteration: 200 / 1000 [ 20%] (Sampling)
## Chain 2: Iteration: 300 / 1000 [ 30%] (Sampling)
## Chain 2: Iteration: 400 / 1000 [ 40%] (Sampling)
## Chain 2: Iteration: 500 / 1000 [ 50%] (Sampling)
## Chain 2: Iteration: 600 / 1000 [ 60%] (Sampling)
## Chain 2: Iteration: 700 / 1000 [ 70%] (Sampling)
## Chain 2: Iteration: 800 / 1000 [ 80%] (Sampling)
## Chain 2: Iteration: 900 / 1000 [ 90%] (Sampling)
## Chain 2: Iteration: 1000 / 1000 [100%] (Sampling)
## Chain 2:
## Chain 2: Elapsed Time: 0.202 seconds (Warm-up)
## Chain 2: 6.702 seconds (Sampling)
## Chain 2: 6.904 seconds (Total)
## Chain 2:
## Warning: There were 657 transitions after warmup that exceeded the maximum treedepth. Increase max_treedepth above 10. See
## http://mc-stan.org/misc/warnings.html#maximum-treedepth-exceeded
## Warning: There were 1 chains where the estimated Bayesian Fraction of Missing Information was low. See
## http://mc-stan.org/misc/warnings.html#bfmi-low
## Warning: Examine the pairs() plot to diagnose sampling problems
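Given these warnings, the HMC diagnostics and trace plots can be inspected directly (a sketch, not part of the original script):
check_hmc_diagnostics(fit)                          # divergences, tree depth, E-BFMI
traceplot(fit, pars = c("alpha", "beta", "sigma"))  # visual check of mixing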
#sampling(fit, iter = 300, data = data)
plot(fit, ci_level = 0.95, outer_level = 0.999)
## 'pars' not specified. Showing first 10 parameters by default.
## ci_level: 0.95 (95% intervals)
## outer_level: 0.999 (99.9% intervals)
plot(fit, show_density=TRUE, ci_level=0.95, outer_level=0.999)
## 'pars' not specified. Showing first 10 parameters by default.
## ci_level: 0.95 (95% intervals)
## outer_level: 0.999 (99.9% intervals)
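A natural follow-up, suggested by the treedepth warning, is to re-run the model with a longer warmup and a larger tree depth; rescaling the ENEM scores (which are in the hundreds) would likely also help the sampler. A possible call (the settings are illustrative, not from the original script):
fit <- stan(model_code = model, data = data,
            iter = 2000, warmup = 1000, chains = 2,
            control = list(max_treedepth = 15, adapt_delta = 0.95))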