The demonstration here comes from John K. Kruschke, Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan, 2015, Elsevier, Chapter 10, Section 10.2, EXAMPLE: TWO FACTORIES OF COINS.
Define the model
Notice that the likelihood function is the same for both models; only the priors for the parameter θ differ. For m=1 the prior is concentrated around ω1=0.25, and for m=2 the prior is concentrated around ω2=0.75. The concentration level is the same in both cases, κ=12. The resulting beta distribution parameters are (α1=3.5, β1=8.5) and (α2=8.5, β2=3.5), respectively.
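As a quick sanity check (an illustrative snippet, not part of the JAGS model itself), the mode–concentration pairs (ω, κ) can be converted to beta shape parameters and the two priors plotted in base R:
# Convert (omega, kappa) to beta shapes: alpha = omega*(kappa-2)+1, beta = (1-omega)*(kappa-2)+1
omega <- c(0.25, 0.75)
kappa <- 12
(alpha <- omega * (kappa - 2) + 1)        # 3.5 and 8.5
(beta  <- (1 - omega) * (kappa - 2) + 1)  # 8.5 and 3.5
# Plot the two prior densities for theta
thetaGrid <- seq(0, 1, length.out = 201)
plot(thetaGrid, dbeta(thetaGrid, alpha[1], beta[1]), type = "l",
     xlab = "theta", ylab = "prior density")
lines(thetaGrid, dbeta(thetaGrid, alpha[2], beta[2]), lty = 2)
legend("top", legend = c("m = 1", "m = 2"), lty = 1:2)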
modelString="
model {
for (i in 1:Ntotal) {
y[i]~dbern(theta)
}
theta~dbeta(omega[m]*(kappa-2)+1,(1-omega[m])*(kappa-2)+1)
omega[1]<-.25
omega[2]<-.75
kappa<-12
m~dcat(mPriorProb[])
mPriorProb[1]<-.5
mPriorProb[2]<-.5
}
"
writeLines(modelString,con="Tempmodel.txt")
Create a list of data corresponding to 6 successes out of 9 trials.
y<-c(rep(0,3),rep(1,6))
(Ntotal<-length(y))
## [1] 9
(dataList<-list(y=y,Ntotal=Ntotal))
## $y
## [1] 0 0 0 1 1 1 1 1 1
##
## $Ntotal
## [1] 9
Send model to JAGS.
library(rjags)
## Warning: package 'rjags' was built under R version 4.0.4
## Loading required package: coda
## Warning: package 'coda' was built under R version 4.0.4
## Linked to JAGS 4.3.0
## Loaded modules: basemod,bugs
jagsModel <- jags.model(file = "Tempmodel.txt", data = dataList, n.chains = 4, n.adapt = 500)
## Compiling model graph
## Resolving undeclared variables
## Allocating nodes
## Graph information:
## Observed stochastic nodes: 9
## Unobserved stochastic nodes: 2
## Total graph size: 26
##
## Initializing model
names(jagsModel)
## [1] "ptr" "data" "model" "state" "nchain" "iter"
## [7] "sync" "recompile"
Burn in.
update(jagsModel, n.iter = 600)
Generate MCMC trajectories.
codaSamples<-coda.samples(jagsModel,variable.names=c("m"),thin=1,n.iter=5000)
list.samplers(jagsModel)
## $`bugs::ConjugateBeta`
## [1] "theta"
##
## $`base::Finite`
## [1] "m"
head(codaSamples)
## [[1]]
## Markov Chain Monte Carlo (MCMC) output:
## Start = 601
## End = 607
## Thinning interval = 1
## m
## [1,] 2
## [2,] 2
## [3,] 2
## [4,] 1
## [5,] 1
## [6,] 2
## [7,] 2
##
## [[2]]
## Markov Chain Monte Carlo (MCMC) output:
## Start = 601
## End = 607
## Thinning interval = 1
## m
## [1,] 2
## [2,] 2
## [3,] 2
## [4,] 2
## [5,] 2
## [6,] 2
## [7,] 2
##
## [[3]]
## Markov Chain Monte Carlo (MCMC) output:
## Start = 601
## End = 607
## Thinning interval = 1
## m
## [1,] 2
## [2,] 2
## [3,] 2
## [4,] 2
## [5,] 2
## [6,] 2
## [7,] 2
##
## [[4]]
## Markov Chain Monte Carlo (MCMC) output:
## Start = 601
## End = 607
## Thinning interval = 1
## m
## [1,] 2
## [2,] 2
## [3,] 2
## [4,] 2
## [5,] 2
## [6,] 2
## [7,] 2
##
## attr(,"class")
## [1] "mcmc.list"
Analyze convergence.
summary(codaSamples)
##
## Iterations = 601:5600
## Thinning interval = 1
## Number of chains = 4
## Sample size per chain = 5000
##
## 1. Empirical mean and standard deviation for each variable,
## plus standard error of the mean:
##
## Mean SD Naive SE Time-series SE
## 1.821600 0.382859 0.002707 0.005159
##
## 2. Quantiles for each variable:
##
## 2.5% 25% 50% 75% 97.5%
## 1 2 2 2 2
plot(codaSamples)
The trace plots are not very informative because each chain simply switches between the values 1 and 2.
autocorr.plot(codaSamples,ask=F)
effectiveSize(codaSamples)
## m
## 5539.666
We see that the autocorrelation decays to zero only after about 5 lags. This is confirmed by the effective sample size, which is well below the 20,000 saved draws.
Rerun the chains with thinning parameter equal to 5.
codaSamples<-coda.samples(jagsModel, variable.names = c("m"), thin = 5, n.iter = 5000)
plot(codaSamples)
autocorr.plot(codaSamples,ask=F)
effectiveSize(codaSamples)
## m
## 3775.556
lapply(codaSamples,length)
## [[1]]
## [1] 1000
##
## [[2]]
## [1] 1000
##
## [[3]]
## [1] 1000
##
## [[4]]
## [1] 1000
Now the autocorrelation function is not significant.
The effective sample size is about 3,776, but this is out of a total of 4,000 saved draws instead of 20,000. Thinning shrinks the saved sample, so when we apply thinning we need to run the chains longer.
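For example, to keep about 20,000 saved draws with thin = 5, each chain needs five times as many iterations (a sketch; the exact numbers are illustrative):
# With thin = 5, ask for 25,000 iterations per chain so that 4 chains still save 4 x 5,000 = 20,000 draws
codaSamplesLong <- coda.samples(jagsModel, variable.names = c("m"), thin = 5, n.iter = 25000)
effectiveSize(codaSamplesLong)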
The potential scale reduction factor (shrink factor) confirms convergence.
gelman.diag(codaSamples)
## Potential scale reduction factors:
##
## Point est. Upper C.I.
## m 1 1
gelman.plot(codaSamples)
Now analyze the results.
Look at the chain means.
(means<-lapply(codaSamples,mean))
## [[1]]
## [1] 1.836
##
## [[2]]
## [1] 1.803
##
## [[3]]
## [1] 1.839
##
## [[4]]
## [1] 1.814
Means are calculated as
Mean = 1·P(m=1) + 2·P(m=2) = 1·(1 − P(m=2)) + 2·P(m=2) = 1 + P(m=2).
From this, P(m=2) = Mean − 1.
Find posterior probabilities of m=2 for each of the 4 chains and their average.
(prob.2<-lapply(means,function(z) z-1))
## [[1]]
## [1] 0.836
##
## [[2]]
## [1] 0.803
##
## [[3]]
## [1] 0.839
##
## [[4]]
## [1] 0.814
mean(unlist(prob.2))
## [1] 0.823
This means that the posterior probability of m=1 is 1 − 0.823 = 0.177.
Observing 6 successes out of 9 trials is more consistent with the model whose prior is concentrated around 0.75.
Find the fraction of time each chain spent in each state of m.
lapply(codaSamples,function(z) sum(z==2)/length(z))
## [[1]]
## [1] 0.836
##
## [[2]]
## [1] 0.803
##
## [[3]]
## [1] 0.839
##
## [[4]]
## [1] 0.814
This is a caveat of using a hierarchical model for model comparison: if one of the models is a strong leader, the sample for the underdog becomes too short, which makes its estimates less stable.
One obvious way to overcome this is to sacrifice efficiency and run the chains longer.
It may also help to adjust the prior probabilities of the competing models so that the chains visit both models often enough; the modified prior odds can then be divided out to recover the answer under the original 50/50 prior.
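A sketch of the second idea, assuming the same data as above; only the model prior is re-weighted toward the underdog, and the modified prior odds are divided out afterwards (file and object names here are illustrative):
modelString2 <- "
model {
  for (i in 1:Ntotal) {
    y[i] ~ dbern(theta)
  }
  theta ~ dbeta(omega[m]*(kappa-2)+1, (1-omega[m])*(kappa-2)+1)
  omega[1] <- .25
  omega[2] <- .75
  kappa <- 12
  m ~ dcat(mPriorProb[])
  mPriorProb[1] <- .75    # up-weight model 1 a priori so it gets visited more often
  mPriorProb[2] <- .25
}
"
writeLines(modelString2, con = "TempModel2.txt")
jagsModel2 <- jags.model("TempModel2.txt", data = dataList, n.chains = 4, n.adapt = 500)
update(jagsModel2, n.iter = 600)
codaSamples2 <- coda.samples(jagsModel2, variable.names = c("m"), thin = 5, n.iter = 25000)
p2 <- mean(unlist(codaSamples2) == 2)       # posterior P(m=2) under the 75/25 prior
BF21 <- (p2 / (1 - p2)) / (0.25 / 0.75)     # Bayes factor for m=2 vs m=1
BF21 / (1 + BF21)                           # posterior P(m=2) under the original 50/50 prior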
source("../DBDA2Eprograms/DBDA2E-utilities.R")
##
## *********************************************************************
## Kruschke, J. K. (2015). Doing Bayesian Data Analysis, Second Edition:
## A Tutorial with R, JAGS, and Stan. Academic Press / Elsevier.
## *********************************************************************
## Warning: package 'runjags' was built under R version 4.0.4
library(rstan)
## Warning: package 'StanHeaders' was built under R version 4.0.4
## Warning: package 'ggplot2' was built under R version 4.0.4
options(mc.cores = parallel::detectCores())
rstan_options(auto_write = TRUE)
HDIofMCMC
## function (sampleVec, credMass = 0.95)
## {
## sortedPts = sort(sampleVec)
## ciIdxInc = ceiling(credMass * length(sortedPts))
## nCIs = length(sortedPts) - ciIdxInc
## ciWidth = rep(0, nCIs)
## for (i in 1:nCIs) {
## ciWidth[i] = sortedPts[i + ciIdxInc] - sortedPts[i]
## }
## HDImin = sortedPts[which.min(ciWidth)]
## HDImax = sortedPts[which.min(ciWidth) + ciIdxInc]
## HDIlim = c(HDImin, HDImax)
## return(HDIlim)
## }
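As a quick illustration of the function just displayed (simulated draws, not data from the example):
set.seed(1)
fakeSample <- rbeta(10000, 8.5, 3.5)        # draws from the beta(8.5, 3.5) distribution used earlier
HDIofMCMC(fakeSample, credMass = 0.95)      # 95% highest-density interval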
Consider a simple problem of estimating the parameter of a binomial distribution.
The model specification is similar to that in JAGS.
# Specify model:
modelString = "
data {
int<lower=0> N ;
int y[N] ; //length-N vector of integers
}
parameters {
real<lower=0,upper=1> theta ;
}
model {
theta ~ beta(1,1) ;
y ~ bernoulli(theta) ;
}
" # close quote for modelString
Stan is based on a C++ library, so the model description must first be translated into C++ and then compiled into an executable dynamic shared object (DSO). This is done by stan_model().
# Translate model to C++ and compile to DSO:
stanDso <- stan_model(model_code=modelString)
Data are specified similarly to JAGS.
# Specify data:
N = 50 ; z = 10
y = c(rep(1,z),rep(0,N-z))
dataList = list(
y = y ,
N = N
)
Running MCMC is done by sampling(). The argument warmup has the same meaning as "burn-in". The argument iter is the total number of steps per chain, including the warmup period. The total number of steps saved from the posterior distribution is
chains × (iter − warmup) / thin.
For the call below this is 3 × (1000 − 200) / 1 = 2400 saved draws. The argument init is not used in the following call, which lets Stan use its default random initialization.
# Generate posterior sample:
stanFit <- sampling( object=stanDso ,
data = dataList ,
chains = 3 ,
iter = 1000 ,
warmup = 200 ,
thin = 1 )
class(stanFit)
## [1] "stanfit"
## attr(,"package")
## [1] "rstan"
Use the shinystan application to explore the MCMC object.
library(shinystan)
## Warning: package 'shinystan' was built under R version 4.0.4
## Loading required package: shiny
## Warning: package 'shiny' was built under R version 4.0.4
##
## This is shinystan version 2.5.0
Run the application with launch_shinystan().
#Not evaluated in Rmd
#launch_shinystan(stanFit)
Or create a shinystan object and save it, so it can be loaded and launched later.
myFirstStanFit <- as.shinystan(stanFit)
save(myFirstStanFit, file=("firststanfit.Rdata"))
load("firststanfit.Rdata")
#launch_shinystan(myFirstStanFit)
Use standard graphs from Stan or from [DBDA2E].
#openGraph()
traceplot(stanFit,pars=c("theta"))
#saveGraph()
#openGraph()
plot(stanFit,pars=c("theta"))
## ci_level: 0.8 (80% intervals)
## outer_level: 0.95 (95% intervals)
#saveGraph()
# Make graphs:
# For consistency with JAGS-oriented functions in DBDA2E collection,
# convert stan format to coda format. This excludes warmup and thinned steps.
mcmcCoda = mcmc.list( lapply( 1:ncol(stanFit) ,
function(x) { mcmc(as.array(stanFit)[,x,]) } ) )
diagMCMC( mcmcCoda , parName=c("theta") )
#saveGraph()
Therapeutic touch is a nursing technique in which the practitioner manually manipulates the “energy field” of a patient who is suffering from a disease. The practitioner holds her or his hands near but not actually touching the patient, and “re-patterns” the energy field to relieve congestion and restore balance, allowing the body to heal. (Rosa et al. 1998) reported that therapeutic touch has been widely taught and widely used in nursing colleges and hospitals despite there being little if any evidence of its efficacy.
(Rosa et al. 1998) investigated a key claim of practitioners of therapeutic touch, namely, that the practitioners can sense a body’s energy field. If this is true, then practitioners should be able to sense which of their hands is near another person’s hand, even without being able to see their hands. The practitioner sat with her hands extended through cutouts in a cardboard screen, which prevented the practitioner from seeing the experimenter. On each trial, the experimenter flipped a coin and held her hand a few centimeters above one or the other of the practitioner’s hands, as dictated by the flip of the coin. The practitioner then responded with her best guess regarding which of her hands was being hovered over.
Each trial was scored as correct or wrong. The experimenter (and co-author of the article) was 9 years old at the time. Each practitioner was tested for 10 trials. There were a total of 21 practitioners in the study, seven of whom were tested twice approximately a year apart. The retests were counted by the authors as separate subjects, yielding 28 nominal subjects.
The proportions correct for the 28 subjects are shown in Figure 9.9 of the book.
Chance performance (guessing) is 0.50.
The questions are: how much did the group as a whole differ from chance performance, and how much did any individual differ from chance performance?
This hierarchical model is appropriate for these data because it estimates the underlying ability of each subject while simultaneously estimating the modal ability of the group and the consistency of the group. Moreover, the distribution of proportions correct across subjects can be meaningfully described as a beta distribution. With 28 subjects, there are a total of 30 parameters being estimated: the 28 individual θ's plus the group-level ω and κ.
The example runs with the scripts "Jags-Ydich-XnomSsubj-MbernBetaOmegaKappa.R" and "Stan-Ydich-XnomSsubj-MbernBetaOmegaKappa.R" given in the book.
#in JAGS
source("../DBDA2Eprograms/Jags-Ydich-XnomSsubj-MbernBetaOmegaKappa.R")
##
## *********************************************************************
## Kruschke, J. K. (2015). Doing Bayesian Data Analysis, Second Edition:
## A Tutorial with R, JAGS, and Stan. Academic Press / Elsevier.
## *********************************************************************
show(genMCMC)
## function (data, sName = "s", yName = "y", numSavedSteps = 50000,
## saveName = NULL, thinSteps = 1)
## {
## require(rjags)
## y = data[, yName]
## s = as.numeric(data[, sName])
## if (any(y != 0 & y != 1)) {
## stop("All y values must be 0 or 1.")
## }
## Ntotal = length(y)
## Nsubj = length(unique(s))
## dataList = list(y = y, s = s, Ntotal = Ntotal, Nsubj = Nsubj)
## modelString = "\n model {\n for ( i in 1:Ntotal ) {\n y[i] ~ dbern( theta[s[i]] )\n }\n for ( sIdx in 1:Nsubj ) {\n theta[sIdx] ~ dbeta( omega*(kappa-2)+1 , (1-omega)*(kappa-2)+1 ) \n }\n omega ~ dbeta( 1 , 1 ) # broad uniform\n # omega ~ dbeta( 5001 , 15001 ) # Skeptical prior for ESP\n kappa <- kappaMinusTwo + 2\n # kappaMinusTwo ~ dgamma( 0.01 , 0.01 ) # mean=1 , sd=10 (generic vague)\n kappaMinusTwo ~ dgamma( 1.105125 , 0.1051249 ) # mode=1 , sd=10 \n # kappaMinusTwo ~ dgamma( 36 , 0.12 ) # mode=300 , sd=50 : skeptical \n }\n "
## writeLines(modelString, con = "TEMPmodel.txt")
## initsList = function() {
## thetaInit = rep(0, Nsubj)
## for (sIdx in 1:Nsubj) {
## includeRows = (s == sIdx)
## yThisSubj = y[includeRows]
## resampledY = sample(yThisSubj, replace = TRUE)
## thetaInit[sIdx] = sum(resampledY)/length(resampledY)
## }
## thetaInit = 0.001 + 0.998 * thetaInit
## meanThetaInit = mean(thetaInit)
## kappaInit = 100
## return(list(theta = thetaInit, omega = meanThetaInit,
## kappaMinusTwo = kappaInit - 2))
## }
## parameters = c("theta", "omega", "kappa")
## adaptSteps = 500
## burnInSteps = 500
## nChains = 4
## nIter = ceiling((numSavedSteps * thinSteps)/nChains)
## jagsModel = jags.model("TEMPmodel.txt", data = dataList,
## inits = initsList, n.chains = nChains, n.adapt = adaptSteps)
## cat("Burning in the MCMC chain...\n")
## update(jagsModel, n.iter = burnInSteps)
## cat("Sampling final MCMC chain...\n")
## codaSamples = coda.samples(jagsModel, variable.names = parameters,
## n.iter = nIter, thin = thinSteps)
## if (!is.null(saveName)) {
## save(codaSamples, file = paste(saveName, "Mcmc.Rdata",
## sep = ""))
## }
## return(codaSamples)
## }
#in STAN
source("../DBDA2Eprograms/Stan-Ydich-XnomSsubj-MbernBetaOmegaKappa.R")
##
## *********************************************************************
## Kruschke, J. K. (2015). Doing Bayesian Data Analysis, Second Edition:
## A Tutorial with R, JAGS, and Stan. Academic Press / Elsevier.
## *********************************************************************
show(genMCMC)
## function (data, sName = "s", yName = "y", numSavedSteps = 50000,
## saveName = NULL, thinSteps = 1)
## {
## require(rjags)
## require(rstan)
## y = as.numeric(data[, yName])
## s = as.numeric(data[, sName])
## if (any(y != 0 & y != 1)) {
## stop("All y values must be 0 or 1.")
## }
## Ntotal = length(y)
## Nsubj = length(unique(s))
## dataList = list(y = y, s = s, Ntotal = Ntotal, Nsubj = Nsubj)
## modelString = "\n data {\n int<lower=1> Nsubj ;\n int<lower=1> Ntotal ;\n int<lower=0,upper=1> y[Ntotal] ;\n int<lower=1> s[Ntotal] ; // notice Ntotal not Nsubj\n }\n parameters {\n real<lower=0,upper=1> theta[Nsubj] ; // individual prob correct\n real<lower=0,upper=1> omega ; // group mode\n real<lower=0> kappaMinusTwo ; // group concentration minus two\n }\n transformed parameters {\n real<lower=0> kappa ; \n kappa <- kappaMinusTwo + 2 ;\n }\n model {\n omega ~ beta( 1 , 1 ) ;\n kappaMinusTwo ~ gamma( 0.01 , 0.01 ) ; // mean=1 , sd=10 (generic vague)\n // kappaMinusTwo ~ gamma( 1.105125 , 0.1051249 ) ; # mode=1 , sd=10 \n theta ~ beta( omega*(kappa-2)+1 , (1-omega)*(kappa-2)+1 ) ; // vectorized\n for ( i in 1:Ntotal ) {\n y[i] ~ bernoulli( theta[s[i]] ) ;\n }\n }\n "
## initsList = function() {
## thetaInit = rep(0, Nsubj)
## for (sIdx in 1:Nsubj) {
## includeRows = (s == sIdx)
## yThisSubj = y[includeRows]
## resampledY = sample(yThisSubj, replace = TRUE)
## thetaInit[sIdx] = sum(resampledY)/length(resampledY)
## }
## thetaInit = 0.001 + 0.998 * thetaInit
## meanThetaInit = mean(thetaInit)
## kappaInit = 100
## return(list(theta = thetaInit, omega = meanThetaInit,
## kappaMinusTwo = kappaInit - 2))
## }
## parameters = c("theta", "omega", "kappa")
## burnInSteps = 500
## nChains = 4
## stanDso <- stan_model(model_code = modelString)
## stanFit <- sampling(object = stanDso, data = dataList, chains = nChains,
## iter = (ceiling(numSavedSteps/nChains) * thinSteps +
## burnInSteps), warmup = burnInSteps, thin = thinSteps,
## init = initsList)
## codaSamples = mcmc.list(lapply(1:ncol(stanFit), function(x) {
## mcmc(as.array(stanFit)[, x, ])
## }))
## if (!is.null(saveName)) {
## save(codaSamples, file = paste(saveName, "Mcmc.Rdata",
## sep = ""))
## save(stanFit, file = paste(saveName, "StanFit.Rdata",
## sep = ""))
## save(stanDso, file = paste(saveName, "StanDso.Rdata",
## sep = ""))
## }
## return(codaSamples)
## }
Here we call Stan directly.
Read the data.
Create the data list for the model.
myData = read.csv("TherapeuticTouchData.csv")
y = as.numeric(myData$y)
s = as.numeric(factor(myData$s)) # ensures consecutive integer levels
# (factor() is needed here: under R >= 4.0 the s column is read as character,
#  and as.numeric() on a character vector would produce NAs, which Stan rejects)
# Do some checking that data make sense:
if ( any( y!=0 & y!=1 ) ) { stop("All y values must be 0 or 1.") }
Ntotal = length(y)
Nsubj = length(unique(s))
# Specify the data in a list, for later shipment to JAGS:
dataList = list(
y = y ,
s = s ,
Ntotal = Ntotal ,
Nsubj = Nsubj
)
Describe the model.
# THE MODEL.
modelString = "
data {
int<lower=1> Nsubj ;
int<lower=1> Ntotal ;
int<lower=0,upper=1> y[Ntotal] ;
int<lower=1> s[Ntotal] ; // notice Ntotal not Nsubj
}
parameters {
real<lower=0,upper=1> theta[Nsubj] ; // individual prob correct
real<lower=0,upper=1> omega ; // group mode
real<lower=0> kappaMinusTwo ; // group concentration minus two
}
transformed parameters {
real<lower=0> kappa ;
kappa = kappaMinusTwo + 2 ;
}
model {
omega ~ beta( 1 , 1 ) ;
kappaMinusTwo ~ gamma( 0.01 , 0.01 ) ; // mean=1 , sd=10 (generic vague)
// kappaMinusTwo ~ gamma( 1.105125 , 0.1051249 ) ; # mode=1 , sd=10
theta ~ beta( omega*(kappa-2)+1 , (1-omega)*(kappa-2)+1 ) ; // vectorized
for ( i in 1:Ntotal ) {
y[i] ~ bernoulli( theta[s[i]] ) ;
}
}
" # close quote for modelString
Initialize the chains.
# INITIALIZE THE CHAINS.
# Initial values of MCMC chains based on data:
initsList = function() {
thetaInit = rep(0,Nsubj)
for ( sIdx in 1:Nsubj ) { # for each subject
includeRows = ( s == sIdx ) # identify rows of this subject
yThisSubj = y[includeRows] # extract data of this subject
resampledY = sample( yThisSubj , replace=TRUE ) # resample
thetaInit[sIdx] = sum(resampledY)/length(resampledY)
}
thetaInit = 0.001+0.998*thetaInit # keep away from 0,1
meanThetaInit = mean( thetaInit )
kappaInit = 100 # lazy, start high and let burn-in find better value
return( list( theta=thetaInit , omega=meanThetaInit ,
kappaMinusTwo=kappaInit-2 ) )
}
Run the chains.
# RUN THE CHAINS
parameters = c( "theta","omega","kappa") # The parameters to be monitored
burnInSteps = 500 # Number of steps to burn-in the chains
nChains = 4 # nChains should be 2 or more for diagnostics
numSavedSteps=50000
thinSteps=1
# Translate to C++ and compile to DSO:
stanDso <- stan_model( model_code=modelString )
# Get MC sample of posterior:
startTime = proc.time()
stanFit <- sampling( object=stanDso ,
data = dataList ,
#pars = parameters , # optional
chains = nChains ,
iter = ( ceiling(numSavedSteps/nChains)*thinSteps
+burnInSteps ) ,
warmup = burnInSteps ,
thin = thinSteps ,
init = initsList ) # optional
stopTime = proc.time()
duration = stopTime - startTime
show(duration)
Check this useful resource if we see a message about divergences at the end of the simulation process.
show(stanFit)
#launch_shinystan(stanFit)      # not evaluated in Rmd
load("TouchAnalysisShiny.Rdata")
#launch_shinystan(TouchAnalysis) # not evaluated in Rmd
names(stanFit)
summary(stanFit)
plot(stanFit)
rstan::traceplot(stanFit,pars=c("omega","kappa"), ncol=1, inc_warmup=F)
stan_scat(stanFit, pars=c("omega","kappa"))
stan_dens(stanFit)
stan_ac(stanFit, separate_chains = T)
stan_diag(stanFit,information = "sample",chain=0)
stan_diag(stanFit,information = "stepsize",chain = 0)
stan_diag(stanFit,information = "treedepth",chain = 0)
stan_diag(stanFit,information = "divergence",chain = 0)
Extract MCMC trajectories of hyperparameters ω,κ. Does the pair of hyperparameters ω,κ show any dependence?
extracted <- rstan::extract(stanFit, pars = c("omega", "kappa"))
OmegaKappa <- cbind(Omega = extracted$omega, Kappa = extracted$kappa)
head(OmegaKappa)
plot(rank(OmegaKappa[,"Omega"]),rank(OmegaKappa[,"Kappa"]))
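A numeric summary of the same dependence, using the matrix built above:
# Rank (Spearman) correlation between the omega and kappa draws
cor(OmegaKappa[, "Omega"], OmegaKappa[, "Kappa"], method = "spearman")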
Extract parameters θ for each tested individual.
Thetas <- rstan::extract(stanFit, pars = names(stanFit))  # all monitored quantities
Thetas <- matrix(unlist(Thetas), ncol = 32, byrow = F)    # 28 thetas + omega, kappaMinusTwo, kappa, lp__
colnames(Thetas) <- names(stanFit)
Thetas <- Thetas[, -(29:32)]                              # keep only the 28 individual thetas
head(Thetas)
Going back to the questions of the example.
How much did the group as a whole differ from chance performance?
(sigmas<-apply(Thetas,2,sd))
hist(as.vector(Thetas)-.5)
qqnorm(as.vector(Thetas))
qqline(as.vector(Thetas))
t.test(as.vector(Thetas),mu=0.5)
The number of degrees of freedom is so large that the t-test becomes very sensitive.
The distribution has fat tails because the individual standard deviations are not constant.
The Bayesian approach is based on the HDI.
library(HDInterval)
hdi(as.vector(Thetas))
What makes the HDI so much wider than the confidence interval?
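One illustrative comparison, assuming the Thetas matrix built above: the t-test interval is for the mean of the very large pooled sample of draws, while the HDI covers the spread of the individual-level θ values.
diff(hdi(as.vector(Thetas)))                         # width of the 95% HDI of the pooled theta draws
diff(t.test(as.vector(Thetas), mu = 0.5)$conf.int)   # width of the 95% t-test confidence interval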
How much did any individual differ from chance performance?
apply(Thetas,2,function(z) hdi(z))
The HDIs for all individuals contain 0.5.
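A quick programmatic check of that claim:
hdis <- apply(Thetas, 2, hdi)              # 2 x 28 matrix of lower and upper HDI limits
sum(hdis[1, ] < 0.5 & hdis[2, ] > 0.5)     # number of individual HDIs that contain 0.5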
apply(Thetas,2,function(z) t.test(as.vector(z),mu=0.5)$p.value)
The frequentist null-hypothesis testing (FNP) approach rejects all of the null hypotheses.