Traditional time series models can rely on assumptions about dependence structure that poorly describe real time series. For example, time series are rarely strictly periodic, and data are often non-Gaussian. Bayesian state-space models estimate latent states together with parameters, so there is broad flexibility in their posterior covariance structure.
Resources
Data file
combinedLevelChem2007.csv
Software
- RStudio

source('../clarkFunctions2021.r')

Readings

Objectives
- understand what makes a state-space model hierarchical and why that matters for time-series data (think conditional independence)
- understand how the relative magnitudes of process and observation error affect estimates and predictions, and why prior distributions should be informative
- understand differences in scale among multivariate time series and how to accommodate them
State-space models for time series
Hierarchical modeling allows for conditional independence of observations, taking up the dependence at a second, latent stage: the underlying process. These latent states are random and, therefore, must be estimated.
There have been many efforts to handle the combination of process and observation error in time series. Classical state-space models include the Kalman filter (Harvey 1989), an updating procedure whereby the prior estimate of the state is ‘corrected’ by how well it can predict the next observation. It is carried out as an iterative procedure. Perhaps I would like to estimate the underlying state \(x_t\), given that \(y_t\) is observed. I might want to estimate the transition parameters in a function \(f(x_t)\) that describes the process. I might want to predict \(y_{t+k}\) for some time \(k\) in the future. Why predict \(y_{t+k}\), rather than just \(x_{t+k}\)? I could be interested in both, but I may eventually observe \(y_t\), whereas I will never observe \(x_t\). To compare observation with prediction, I need \(y_t\). The Kalman filter is not ideal for nonlinear models or for non-Gaussian errors. In this section I describe how the Bayesian framework is used for time series data with underlying states. Here I summarize basic elements of the approach from Carlin et al. (1992) and Calder et al. (2003), with ecological examples in Clark and Bjornstad (2004).
Conditional independence in time series data
The independence assumption allows us to treat the likelihood, or data model, for multiple observations, \([x_1, x_2] = [x_1|x_2][x_2]\), as a product of (independent) marginal distributions \([x_1] [x_2]\). This assumption implies that \([x_1|x_2] = [x_1]\): the marginal distribution of \(x_1\) does not change when \(x_2\) changes. Time series data usually cannot be assumed to be independent, but this dependence can be taken up at a second stage.
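A quick numerical illustration of what dependence does to marginal and conditional distributions (a sketch of my own, assuming the MASS package is installed; it is not part of clarkFunctions2021.r): for a correlated pair, conditioning on one variable shifts the distribution of the other, so the marginal cannot stand in for the conditional.

library(MASS)                               # for mvrnorm
set.seed(1)
S <- matrix( c(1, .8, .8, 1), 2, 2)         # correlated pair, rho = 0.8
x <- mvrnorm(10000, mu = c(0, 0), Sigma = S)
mean( x[,1] )                               # marginal mean of x1, near 0
mean( x[ x[,2] > 1, 1] )                    # mean of x1 given x2 > 1, well above 0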
Three state variables arranged in a time series.
Before considering the likelihood explicitly, recall the joint probability, now in the context of a time series. I begin with three states, having joint probability \(\left[ x_1, x_2, x_3 \right]\). For most models I assume that observations are independent and, thus, the likelihood is
\[ \left[ x_1, x_2, x_3 \right] = \left[ x_1 \right]\left[ x_2 \right]\left[ x_3 \right] \] The requirement for independent samples motivates much of experimental design. So what do I mean here by the term independent?
There is a common misconception that ‘independent’ means ‘dissimilar’. Here’s a definition from a recent text referring to the assumption of independence: ‘This assumption is violated when the value of one observation tends to be too similar to the values of other observations.’ There are many definitions like this in the literature. If I consider a simple regression model, the \(Y\)’s corresponding to similar values of \(X\) will be ‘similar’. If this were not the case, why would I fit a regression? Clearly, definitions like this miss the point.
The importance of independent observations results from the fact that the likelihood is used to represent the true joint distribution. A likelihood could be equal to the joint distribution only in the special case where a conditional distribution is equal to a marginal distribution, i.e., \([x_1 | x_2] = [x_1]\). If samples are not independent, then
\[ \left[ x_1, x_2, x_3 \right] \neq \left[ x_1 \right]\left[ x_2 \right]\left[ x_3 \right] \] because marginal distributions are not the same as conditional distributions.
The joint distribution on the left is hard to think about, because it is an \(n = 3\)-dimensional problem. I am not very good at thinking in \(n\) dimensions. What if \(n = 100\)? The traditional likelihood (the assumption of independence) reduces an \(n\)-dimensional distribution to a product of \(n\) univariate distributions. A hierarchical model accomplishes the same dimension reduction, but without ignoring the joint distribution. It does so by factoring the joint distribution into univariate conditional distributions. When I factor the joint distribution
\[ \begin{aligned} \left[ x_1, x_2, x_3 \right] &= \left[ x_3 | x_2 , x_1 \right]\left[ x_2, x_1 \right] \\ &= \left[ x_3 | x_2, x_1 \right] \left[ x_2 | x_1 \right]\left[x_1 \right] \end{aligned} \] I get a product of three univariate distributions.
Despite the apparent resolution of the dimensionality challenge, I still have a problem: defining the relationship between \(x_3\) and the two things that affect it, \(x_2\) and \(x_1\). If this does not seem hard to you, consider that \(n\) values of \(x\) would require me to specify the distribution \(\left[x_n | x_1, \dots, x_{n-1} \right]\). How do I specify how \(n-1\) different states combine into one effect?
Although the joint distribution is high-dimensional, the conditional distributions could be simple. In the state-space model, where the subscript on \(x\) is time, a given state \(x_t\) may conditionally depend only on \(x_{t-1}\) and \(x_{t+1}\). This is equivalent to saying that, given \(x_t\), the values \(x_{t-1}\) and \(x_{t+1}\) are conditionally independent of each other–the past is conditionally independent of the future (given \(x_t\)). This is the Markov property. Where this holds, I can simplify,
\[ \left[x_t | x_1, \dots, x_{t-1}, x_{t+1}, \dots, x_T \right] = \left[x_t | x_{t-1}, x_{t+1} \right] \]
The Markov property: there are direct links only between sequential values. Given \(x_t\), the future is independent of the past.
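As a numerical check of the Markov factorization (a sketch assuming the mvtnorm package; not a course function), a three-step Gaussian random walk starting from zero with step variance s has a multivariate normal joint distribution whose density matches the product of univariate conditionals:

library(mvtnorm)                                  # for dmvnorm
s   <- 2                                          # step variance
Sig <- s*matrix( c(1,1,1, 1,2,2, 1,2,3), 3, 3)    # implied joint covariance, Cov(x_i, x_j) = s*min(i,j)
xv  <- c(.3, -.5, .1)                             # an arbitrary point
dmvnorm(xv, mean = rep(0,3), sigma = Sig)         # joint density [x1, x2, x3]
dnorm(xv[1], 0, sqrt(s)) *                        # [x1]
  dnorm(xv[2], xv[1], sqrt(s)) *                  # [x2 | x1]
  dnorm(xv[3], xv[2], sqrt(s))                    # [x3 | x2] = [x3 | x2, x1] under the Markov property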
Conditional independence allows me to simplify in a hierarchical context. It allows me to sample conditionally, using Gibbs sampling. I can achieve conditional independence if I treat the data as a separate stage in the model, and I treat all of the links as stochastic. I let \(y_t\) represent the observed value at time \(t\), and I assume that it results from observation error on the true \(x_t\). With this second stage, the combined process and observation model is now,
\[ [\mathbf{x}] \left[ \mathbf{y} | \mathbf{x} \right] = [x_1] \prod_{t=2}^T \left[x_t | x_{t-1} \right] \prod_{t=1}^T \left[y_t | x_t \right] \]
The random walk with data \(y_t\) treated as a separate stage in the model.
I use a simple random walk to demonstrate.
Exercise 1. We have previously included dependence structure in the Gaussian distribution. Write the joint distribution for \(\left[ x_1, x_2, x_3 \right]\) where the marginal distribution of each \(x_i\) is normal and the joint distribution is multivariate normal. Under what circumstances would they be equivalent?
A random walk model
Consider the pair of equations
\[ \begin{aligned} x_{t+1} &= f_t(x_t) + \epsilon_t \\ y_t &= g_t(x_t) + \eta_t \end{aligned} \] The first equation is the process model, system equation, or evolution equation. The process model includes a deterministic component \(f_t(x_t)\), which encapsulates understanding of a process. The stochasticity in this equation is the second term, the process error, \(\epsilon_t\). It affects the process, because variability introduced at one time step affects the trajectory of change in the future. Here I assume this is additive, but I could have used some other assumption. This stochasticity in the first equation makes the process flexible to model misspecification.
As a specific example, a random walk could look like this,
\[ \begin{aligned} f_t(x_t) &= x_t \\ \epsilon_t & \sim N(0,\sigma^2) \end{aligned} \]
If the observations contain no additive or multiplicative bias, a data model might look like this,
\[ y_t \sim N(x_t,\tau^2) \]
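To connect this pair of equations to the factored distribution \([\mathbf{x}][\mathbf{y}|\mathbf{x}]\) above, a small helper of my own (illustrative code, not from clarkFunctions2021.r) evaluates the log of that product for the random walk, conditioning on \(x_1\) and skipping missing observations:

# log of the factored joint density for the Gaussian random walk, conditional on x[1]:
# sum of log [x_t | x_{t-1}, sigma^2] plus sum of log [y_t | x_t, tau^2]
rwLogDensity <- function(x, y, sig, tau){
  logProcess <- sum( dnorm( x[-1], x[-length(x)], sqrt(sig), log=T ) )
  logObs     <- sum( dnorm( y, x, sqrt(tau), log=T ), na.rm=T )   # NA where y is missing
  logProcess + logObs
}

It can be evaluated on the simulated series created below.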
Simulated data
As before, it is a good idea to check algorithms against simulated data, where parameter values are known. Before going further, I simulate a data set with missing values:
nt <- 200 # no. time intervals
time <- c(1:nt) # sample times
nm <- 20 # number of missing values
wm <- sort(sample(c(2:nt),nm)) # which are missing
sig <- 5 # process err variance
tau <- 20 # obs error variance
x <- rep(10,nt)
for(t in 2:nt)x[t] <- x[t-1] + rnorm(1,0,sqrt(sig))
y <- rnorm(nt,x,sqrt(tau)) # simulate observations
y[wm] <- NA # some are missing
plot(time, y, col='grey')
lines(time, x)
points( time[wm], x[wm], pch=16)

Observed (dots) and latent states (lines). Missing values are shown as solid dots.
Note that the loop to simulate time series data runs from index 2, ..., nt.
The line in the figure shows the ‘true’ latent states through time. These values would not be observed. They are not ‘real’. They are a device that allows for conditional independence in observations and a breakdown of how I think my uncertainty in the model and in the data could affect what is observed.
Run this code a number of times to get a feel for the dynamics of a random walk. The autocorrelation in the series makes these random sequences appear as though there is some underlying, non-random control. A better way to appreciate the random walk is to look at the differenced series,
par(mfrow=c(2,1), mar=c(2,4,1,3), bty='n')
plot(time, x, type='l')
plot(time[-1], diff(x), type='l')
abline(h=0,lty=2)
abline(h=sqrt(sig)*1.96*c(-1,1),lty=2) # Gaussian approximation
abline(h=quantile(diff(x),c(.025,.975)),col=2) # from time series

Random walk above with differenced series below.
An analyst looking at the plot of \(x_t\) could fail to appreciate that the increments are uncorrelated, as shown in the lower plot. I have drawn dashed lines to bound 95% of the expected values.
The random walk with data, process, and parameters. The square boxes (observations and prior parameter values) anchor the fit. All arrows are stochastic.
Fitting the model
As in all hierarchical models, the state-space model uses distributions to connect states.
Here I fit the random walk model to simulated data. I specify prior distributions and set up a simple MCMC algorithm using Gibbs sampling.
First I place inverse gamma prior distributions on the variance parameters,
\[ \begin{aligned} \sigma^2 &\sim IG(s_1, s_2) \\ \tau^2 &\sim IG(t_1,t_2) \end{aligned} \]
I want these prior distributions to be centered on values that I believe to be accurate. Assuming I don’t know the values used to simulate the data, my prior distributions for sig and tau are centered on 8 and 15, respectively. In addition to mean values, I need to specify weights. I decide to place more weight on the observation error, proportional to the number of observations (nt), and less weight (2) on the process error. Together, these two values determine the balance of weight assigned to the underlying process model and to the observed values. Here it is in R:
sp <- varPrior(mu=8, wt=2) # weak prior
tp <- varPrior(15,nt) # wt comparable to data
sg <- 1/rgamma(1,sp[1],sp[2]) # initialize values
tg <- 1/rgamma(1,tp[1],tp[2])

With these prior distributions on parameters, the posterior distribution for the random walk model is
\[ \begin{aligned} \left[ \mathbf{x}, \sigma^2, \tau^2 | \mathbf{y}, s_1, s_2, t_1, t_2 \right] & \propto N(x_1|x_0,\sigma^2) \\ & \times \prod_{t=2}^T N \left(x_t | x_{t-1}, \sigma^2 \right) \\ & \times \prod_{t=1}^T N\left(y_t | x_t, \tau^2 \right) \\ & \times IG(\sigma^2 | s_1, s_2) IG(\tau^2| t_1,t_2) \end{aligned} \] Note that the time index for states is \(2, \dots, T\), whereas for observations it is \(1, \dots, T\). Each \(x_t\) can be viewed as the prior mean for \(x_{t+1}\). For this reason, I explicitly specify a prior distribution for \(x_1\), centered on a value \(x_0\).
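The function varPrior comes from clarkFunctions2021.r and is not listed here; presumably it converts a prior mean and a weight into the two inverse gamma parameters. A minimal sketch of one common parameterization, which may differ from the author’s, treats the weight as a prior sample size:

# one way to build an inverse gamma prior from a mean and a weight
# (illustrative; the course function varPrior() may be parameterized differently)
igPrior <- function(mu, wt){
  s1 <- wt/2 + 1           # shape: wt acts like a prior sample size
  s2 <- mu*(s1 - 1)        # rate chosen so the prior mean s2/(s1 - 1) equals mu
  c(s1, s2)
}
igPrior(8, 2)              # compare with sp above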
I have missing values in the data. I need an index for the locations of these missing values and to initialize them to reasonable values. I use this function:
tmp <- initialStatesSS(y)
xg <- tmp$x
notMiss <- tmp$notMiss
miss <- tmp$miss
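initialStatesSS is also provided in clarkFunctions2021.r and is not listed here. A stand-in that captures the idea, under my assumption of its behavior (interpolate across gaps to get starting values for the latent states), might look like this:

# illustrative stand-in for initialStatesSS(): starting values for latent states,
# interpolating across missing observations (not the course implementation)
initStates <- function(y){
  miss    <- which( is.na(y) )
  notMiss <- which( !is.na(y) )
  x <- approx( notMiss, y[notMiss], xout = 1:length(y), rule = 2 )$y
  list( x = x, miss = miss, notMiss = notMiss )
}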
A Gibbs sampler

A Gibbs sampler for the state-space model is arranged such that the sample for each latent state is updated before the next is drawn. For this example, I will simply omit a prior for \(x_1\). Once each of the states has been updated, I update the variance parameters. The basic algorithm is the following:
- Sample each latent state conditional on all others,
\[ \begin{aligned} x_t | x_{t-1}, x_{t+1}, y_t, \sigma^2, \tau^2 &\propto [x_t|x_{t-1}, \sigma^2] \\ & \times [x_{t+1} | x_t, \sigma^2] \\ & \times [y_t | x_t, \tau^2 ] \end{aligned} \] This conditional distribution is shown in the Appendix.
Sampling the latent state \(x_t\) engages only the arrows that connect it to \(x_{t-1}, x_{t+1}\), and \(y_t\).
- Sample the process variance,
\[ \sigma^2 | \mathbf{x}, s_1, s_2 \propto \left( \prod_{t=2}^T [x_t | x_{t-1}, \sigma^2] \right) [\sigma^2 | s_1, s_2] \]
Sampling the process variance involves only the latent states and the prior distribution.
- Sample the observation variance from the conditional distribution,
\[ \tau^2 | \mathbf{x}, \mathbf{y}, t_1, t_2 \propto \left( \prod_{t=1}^T [y_t | x_t, \tau^2] \right) [\tau^2 | t_1, t_2] \]
Sampling the observation variance involves only the observations and the prior distribution.
- Repeat until a long converged sequence is available for analysis.
I will need to store the samples. I define a relatively small number of total steps and establish some matrices to hold samples:
ng <- 1000 # no. Gibbs steps
vgibbs <- matrix(0,ng,2) # store variances
colnames(vgibbs) <- c('sigma','tau')
xgibbs <- matrix(0,ng,nt) # store latent states

Here is a loop that executes the Gibbs sampling algorithm:
for(g in 1:ng){
xg <- updateSSRW(states=xg,y,missing=wm,tg=tg,sg=sg) # latent states
sg <- updateVariance(xg[-1],xg[-nt],sp[1],sp[2]) # process var
tg <- updateVariance(y[-wm],xg[-wm],tp[1],tp[2]) # obs variance
vgibbs[g,] <- c(sg,tg) # store values
xgibbs[g,] <- xg
}

Note the three pieces of the algorithm: i) the latent states, ii) the process variance, and iii) the observation variance.
Here are plots of samples and densities:
.processPars(vgibbs,xtrue=c(sig,tau),CPLOT=T,DPLOT=T)

Posterior estimates of variances for the random walk example.
## $summary
## estimate se 0.025 0.975 true value
## sigma 7.403 2.061 4.504 12.79 5
## tau 15.320 1.021 13.450 17.34 20
Here are estimates of latent states:
xpost <- t( apply(xgibbs,2,quantile,c(.5,.025,.975)) ) #95% CI
shadeInterval(time,xpost[,2:3],add=F) #credible interval
lines(time,x,lwd=2) #true values
points(time,y,lwd=2,col='blue',cex=.5, pch=16) #obs values
points(time[wm],x[wm],col='orange',cex=.5, pch=16) # missing

Posterior estimates of states as a 95% CI (shaded), true values (black line), observations (blue), and missing values (orange).
Exercise 2. Use the foregoing code for the following:

- Conduct simulation experiments to determine how different values of sig and tau affect the behavior of x. What is the effect of sig if you hold tau constant, and vice versa?
- Determine how these same parameters affect the standard deviation of x and y.
- Determine how the length of the series nt affects the standard deviation in x and in the differenced sequence of x.
A state-space model with covariates
Now suppose that the process is driven by covariates that change over time,
\[ x_t \sim N(x_{t-1} + \mathbf{z}'_{t-1} \boldsymbol{\beta}, \sigma^2) \]
The model with covariates. There is no arrow between the latent states, unless it is included within the design vector.
The second term looks like a regression with a design vector \(\mathbf{z}_t\). At each time step there is an effect that comes from covariates and from the process error. Here is a simulation from this model:
nt <- 300 # no. time intervals
time <- c(1:nt) # sample times
wm <- 30:60 # a sequence of missing values
sig <- .01 # process err variance
tau <- .1 # obs error variance
x <- rep(0,nt)
q <- 3
z <- matrix( rnorm(q*(nt - 1), .1), nt-1, q) # design matrix
z[,1] <- 1
beta <- matrix(runif(q, -.01,.01)) # coefficient vector
zb <- z%*%beta
for(t in 2:nt)x[t] <- x[t-1] + rnorm(1, zb[t-1], sqrt(sig))
y <- rnorm(nt,x,sqrt(tau)) # observations
y[wm] <- NA # missing
plot(time,y, col='grey')
lines(time,x)

Observed (dots) and latent states (line).
I now need to estimate the coefficients in \(\boldsymbol{\beta}\) together with the latent states and the variance parameters. In the above code the coefficients are the vector beta. Here are some prior distributions, initial states, and matrices to hold samples.
sp <- varPrior(2, 2) # weak prior
tp <- varPrior(6, 2) # wt comparable to data
sg <- 1/rgamma(1,sp[1],sp[2]) # initialize values for variances, coefficients
tg <- 1/rgamma(1,tp[1],tp[2])
bg <- beta*0
tmp <- initialStatesSS(y) # initial states
xg <- tmp$x
notMiss <- tmp$notMiss
miss <- tmp$miss
ng <- 1000
vgibbs <- matrix(0,ng,2)
xgibbs <- matrix(0,ng,nt)
bgibbs <- matrix(0,ng,q)
colnames(vgibbs) <- c('sigma','tau')
colnames(bgibbs) <- paste('beta',c(0:2),sep='')

Note the additional step in the Gibbs sampler to update coefficients. Otherwise it looks like the previous algorithm:
for(g in 1:ng){
bg <- updateSSbeta(xg, z=z, sg, priorB=matrix(0,q),
priorIVB=diag(.001,q), addStates=T)
zb <- z%*%bg
xg <- updateSSRW(states=xg,yy=y,zb=zb,missing=wm,tg=tg,sg=sg)
sg <- updateVariance(xg[-1],xg[-nt] + zb,sp[1],sp[2])
tg <- updateVariance(y[-wm],xg[-wm],tp[1],tp[2])
vgibbs[g,] <- c(sg,tg)
xgibbs[g,] <- xg
bgibbs[g,] <- bg
}
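The function updateSSbeta comes from clarkFunctions2021.r and is not listed here; presumably it performs the standard conjugate Gaussian regression draw for the coefficients, applied to the differenced states. A sketch of such a draw, with prior mean priorB and prior inverse covariance priorIVB (my own illustration, not the course function, whose arguments differ), could look like this:

# sketch of a conjugate draw for beta in x_t = x_{t-1} + z_{t-1}'beta + eps_t
# (standard Gaussian regression update on the differenced states;
#  not the course function updateSSbeta)
drawBeta <- function(z, dx, sg, priorB, priorIVB){
  # z: design matrix for times 1..(T-1); dx = x[2:T] - x[1:(T-1)]
  V <- solve( crossprod(z)/sg + priorIVB )                  # conditional covariance
  b <- V%*%( crossprod(z, dx)/sg + priorIVB%*%priorB )      # conditional mean
  b + t( chol(V) )%*%rnorm( ncol(z) )                       # one multivariate normal draw
}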
par(mar=c(3,3,1,1))
.processPars(bgibbs,xtrue=beta,CPLOT=T,DPLOT=T)

## $summary
## estimate se 0.025 0.975 true value
## beta0 -0.005178 0.01717 -0.03991 0.02797 0.0000531
## beta1 -0.011450 0.03059 -0.07089 0.04697 0.0095030
## beta2 -0.003022 0.03133 -0.05716 0.05815 -0.0028650
Posterior estimates of coefficients.
Here are variances:
.processPars(vgibbs,xtrue=c(sig,tau),CPLOT=T,DPLOT=T)

Variance estimates for model with covariates.
## $summary
## estimate se 0.025 0.975 true value
## sigma 0.07668 0.05637 0.05318 0.1003 0.01
## tau 0.15530 0.04627 0.12360 0.1913 0.10
In addition to variances there are now plots for estimates of the coefficients. Here are plots of latent states:
xpost <- t( apply(xgibbs,2,quantile,c(.5,.025,.975)) )
shadeInterval(time,xpost[,2:3],add=F) # credible interval
lines(time,x,lwd=2, col = 'brown') # true values
lines(time,xpost[,1],lwd=2) # posterior mean
points(time,y,lwd=2,cex=.5, col='blue', pch=16) # obs values
points(time[wm],x[wm],col='orange',cex=.5, pch=16) # missing values

Posterior estimates of states as a 95% CI (shaded), true values (brown line), observations (blue), and missing values (orange).
Note the effect of a prolonged period of missing data. Experiment with some different values for nt, sig, and tau.
A state-space model in jags
Here is a version of the same model in jags. Note that I simply condition on the first observation, which has a prior distribution centered on the observed value.
cat( "model {
mu[1] ~ dnorm(y[1],10)
xg[1] <- mu[1]
for(t in 2:nt) {
mu[t] <- xg[t-1] + b1 + b2*z[t-1,2] + b3*z[t-1,3]
xg[t] ~ dnorm(mu[t],procErr)
y[t] ~ dnorm(xg[t],obsErr)
}
procErr ~ dunif(0,10)
obsErr ~ dunif(0,10)
sigma <- 1/procErr
tau <- 1/obsErr
b1 ~ dunif(-1,1)
b2 ~ dunif(-1,1)
b3 ~ dunif(-1,1)
} ", fill=T,file="stateSpaceModel.txt" )Here is the analysis:
library(rjags)
ssData <- list(y = y, z = z, nt = nt )
parNames <- c('b1','b2','b3', 'sigma', 'tau')
ssFit <- jags.model(data=ssData, file="stateSpaceModel.txt")

## Compiling model graph
## Resolving undeclared variables
## Allocating nodes
## Graph information:
## Observed stochastic nodes: 268
## Unobserved stochastic nodes: 336
## Total graph size: 2406
##
## Initializing model
update(ssFit)
ssFit <- coda.samples(ssFit, variable.names = parNames, n.iter=5000)
par(mar=c(3,3,1,1))
plot( as.mcmc(ssFit) )

Exercise 3. Working in your group, analyze the following model in jags. Start by simulating the data set. Then write jags code to fit the model.
\[ \begin{aligned} x_t &\sim N(\alpha + \beta x_{t-1}, \sigma^2) \\ y_t &\sim Poi( exp(x_t) ) \end{aligned} \]
- Simulate data: Change the block of code above from a Gaussian data model to Poisson. What link function is this?
- Change the jags code: In addition to the code for the Gaussian model in this example, look back at the implementation of the Poisson in unit 8.
- Run the model using the same code as shown above.
Stream chemistry application
A former student wanted to know whether total dissolved nitrogen (TDN) and dissolved organic nitrogen (DON) can be explained by fluctuations in temperature, precipitation, and chlorine concentration ([Cl]), and whether these relationships differ among coastal streams. We will answer these questions using a hierarchical state-space model.
The model I use is:
\[ \begin{aligned} x_{j,t+1} &= x_{j,t} + \mathbf{z}_{j,t} \beta + \epsilon_{j,t} \\ y_{j,t} &= x_{j,t} + \nu_{j,t} \end{aligned} \]
First I read the data and set up a design matrix for the sites and several covariates:
data <- read.csv('combinedLevelChem2007.csv', header=T)
nt <- nrow(data)
formula <- as.formula( ~ prec + Temp*Site + Cl*Site)
tmp <- model.frame(formula, data, na.action=na.pass)
z <- model.matrix(formula, tmp)
slevs <- sort( unique( data$Site ) )
sl <- match(data$Site,slevs)
ns <- length(slevs) - 1
ngroup <- ns + 1
ynames <- c('TDN','DON')
q <- ncol(z)
y <- data[,ynames[1]] # use TDN as response
xnames <- colnames(z)

There are four sites. Use the function colSums to determine the number of observations by site.
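For example, either of the following counts observations by site (a sketch assuming the column data$Site holds the site labels):

colSums( model.matrix( ~ Site - 1, data) )   # sums of the site indicator columns (rows with missing Site, if any, are dropped)
table( data$Site )                           # equivalent tally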
Exercise 4: What model is being fitted in the above block of code? What will the interaction terms tell us?
Multiple time series
This example is complicated by the fact that there are not one, but four time series. I will stack them up, but I will need to identify the beginning and end of each series and model them accordingly. In the code that follows I make vectors first and last, holding these row indices for the stacked data and design matrix z:
last <- which( diff(sl) == 1 ) # first and last row for each site
first <- c(1, last + 1)
last <- c(last,nt)

Missing covariates, missing states
I have written a function to indicate the locations of missing values in y and z, both in the full data set and by sites:
v <- getMissX(z, y, first, last)
prows <- v$prows # t rows by site
rrows <- v$rrows # t+1 rows by site
missz <- v$missx
missy <- v$missy # missing y index in stacked data
missY <- v$missY # missing y by site as a list

I need prior distributions and initial values for parameters and missing values. In the code below, the initial values for missing states come from the function initialStatesSS.
priorx <- colMeans(z,na.rm=T) # prior mean for missing covariates
z[missz] <- priorx[missz[,2]] # initial missing z
sigM <- .1 # prior mean variances
tauM <- .1
sp <- varPrior(sigM, nt)
tp <- varPrior(tauM, nt/5) # weak prior
sg <- 1/rgamma(1,sp[1],sp[2]) # initialize values
tg <- 1/rgamma(1,tp[1],tp[2])
bg <- matrix(0,q,1)
tmp <- initialStatesSS(y) # initial missing y
xg <- tmp$x

Gibbs sampling
In the Gibbs sampler that follows, note two new steps, i) a loop to estimate latent variables by site, and ii) imputation for missing \(z\):
ng <- 2000
vgibbs <- matrix(0, ng, 2)
xgibbs <- matrix(0, ng, nt)
bgibbs <- matrix(0, ng, q)
zgibbs <- matrix(0, ng, length(missz))
colnames(vgibbs) <- c('sigma','tau')
colnames(bgibbs) <- xnames
for(g in 1:ng){
# parameters and mean vector
bg <- .updateBeta(z[prows,], xg[rrows] - xg[prows], sigma=sg,
priorIVB=diag(.001,q), priorB=matrix(0,q))
mu <- z%*%bg
# latent variables by site (there are ngroup sites)
for(j in 1:ngroup){
jj <- first[j]:last[j]
xg[jj] <- updateSSRW(states=xg[jj], yy=y[jj],
zb=mu[jj], missing=missY[[j]], tg=tg, sg=sg)
}
# variances
sg <- updateVariance(xg[rrows],xg[prows] + mu[prows],sp[1],sp[2])
tg <- updateVariance(y[-missy],xg[-missy],tp[1],tp[2])
# missing observations
z[missz] <- imputeX( missx = missz, xx = z[prows,],
yy = xg[prows] - xg[rrows],
bg = bg, sg, priorx )[missz]
vgibbs[g,] <- c(sg,tg)
xgibbs[g,] <- xg
bgibbs[g,] <- bg
zgibbs[g,] <- z[missz]
}

Here are latent states:
keep <- 200:ng
xpost <- t( apply(xgibbs[keep,],2,quantile,c(.5,.025,.975)) )
ylim <- range(xpost)
par(mfrow=c(2,2),bty='n', mar=c(3,2,2,1))
for(j in 1:4){
ij <- first[j]:last[j]
shadeInterval(ij,xpost[ij,2:3],add=F,ylim=ylim,xlab='Weeks')
points(ij,y[ij],lwd=2,cex=.1, col='blue') #obs
wm <- missY[[j]]
points(ij[wm],xpost[ij[wm],1],col='orange',cex=.5,pch=16) #missing
title(slevs[j])
}

Here are the missing z's:
# variables that are missing;
zvars <- table(missz[,2])
zpost <- t( apply(zgibbs[keep,],2,quantile,c(.5,.025,.975)) )
zvar <- 3 # I choose variable 3: temperature
par(mfrow=c(2,2),bty='n', mar=c(3,2,2,1))
for(j in 1:4){
ww <- which(missz[,2] == zvar & missz[,1] >=
first[j] & missz[,1] <= last[j])
if(length(ww) == 0)next
zq <- zpost[ww,]
mj <- missz[ww, 1]
zj <- matrix(z[,zvar], nt, 3)
zj[,2:3] <- 0
zj[ mj,] <- zq
ij <- first[j]:last[j]
ylim <- range(zj[ij,])
shadeInterval(ij,zj[ij,2:3], add=F, ylim = ylim, xlab='Weeks', col='brown')
lines(ij, zj[ij,1])
segments(mj, zq[,2], mj, zq[,3], lwd = 1, col='brown')
points(mj, zq[,1], col='brown')
if(j == 1)title(colnames(z)[zvar])
}

Here are some parameter estimates:
.processPars(bgibbs,xtrue=bg*0,CPLOT=T)

## $summary
## estimate se 0.025 0.975 true value
## (Intercept) -3.842e-02 0.0433300 -1.306e-01 0.0444000 0
## prec -7.096e-05 0.0008793 -1.778e-03 0.0016410 0
## Temp 3.510e-03 0.0024810 -9.190e-04 0.0087370 0
## SiteMidpoint -2.347e-02 0.0568800 -1.307e-01 0.0892800 0
## SiteOutflow -5.892e-02 0.0576200 -1.733e-01 0.0501900 0
## SitePump station 3.202e-02 0.0595200 -7.712e-02 0.1605000 0
## Cl -2.478e-04 0.0001371 -5.042e-04 0.0000309 0
## Temp:SiteMidpoint 1.097e-03 0.0031970 -5.371e-03 0.0072190 0
## Temp:SiteOutflow 1.226e-03 0.0032590 -4.966e-03 0.0076890 0
## Temp:SitePump station -3.677e-03 0.0033260 -1.074e-02 0.0020680 0
## SiteMidpoint:Cl 1.384e-04 0.0001525 -1.584e-04 0.0004335 0
## SiteOutflow:Cl 2.884e-04 0.0001392 7.438e-06 0.0005532 0
## SitePump station:Cl 3.958e-04 0.0002127 -2.015e-05 0.0008170 0
.processPars(bgibbs[keep,],xtrue=bg*0,DPLOT=T, burnin = 200)

## $summary
## estimate se 0.025 0.975 true value
## (Intercept) -3.837e-02 0.0425000 -1.276e-01 4.245e-02 0
## prec -8.889e-05 0.0008744 -1.781e-03 1.571e-03 0
## Temp 3.511e-03 0.0024440 -8.706e-04 8.684e-03 0
## SiteMidpoint -2.328e-02 0.0571500 -1.341e-01 8.849e-02 0
## SiteOutflow -5.962e-02 0.0578000 -1.745e-01 5.120e-02 0
## SitePump station 3.158e-02 0.0590800 -7.781e-02 1.579e-01 0
## Cl -2.478e-04 0.0001370 -5.041e-04 3.585e-05 0
## Temp:SiteMidpoint 1.080e-03 0.0032110 -5.479e-03 7.177e-03 0
## Temp:SiteOutflow 1.277e-03 0.0032490 -4.928e-03 7.775e-03 0
## Temp:SitePump station -3.669e-03 0.0033290 -1.076e-02 2.167e-03 0
## SiteMidpoint:Cl 1.384e-04 0.0001531 -1.585e-04 4.343e-04 0
## SiteOutflow:Cl 2.881e-04 0.0001396 -6.947e-07 5.564e-04 0
## SitePump station:Cl 3.967e-04 0.0002134 -2.239e-05 8.124e-04 0
.processPars(vgibbs[keep,],xtrue=c(sigM,tauM),DPLOT=T, burnin = 200) # compare prior mean

## $summary
## estimate se 0.025 0.975 true value
## sigma 0.1069 0.002854 0.1015 0.1123 0.1
## tau 0.1704 0.008172 0.1548 0.1869 0.1
Here are some site differences for temperature and Cl effects:
vars <- c('Temp', 'Cl')
par(mfrow=c(2,1),bty='n', mar=c(4,4,2,2))
sites <- unique(data$Site)
for(k in 1:2){
wk <- grep(vars[k],colnames(bgibbs))
bk <- bgibbs[keep,wk]
xlim <- range(bk)
ymax <- 8/diff(xlim)
plot(NULL, xlim=xlim,ylim=c(0,ymax),xlab=vars[k],ylab='Density')
for(j in 1:ncol(bk)){
dk <- density(bk[,j])
lines(dk[[1]],dk[[2]],col=j, lwd=2)
wmax <- which.max(dk[[2]])
text( dk[[1]][wmax], 1.2*dk[[2]][wmax], sites[j] ,col=j)
}
}

Can you interpret these site differences?
Exercise 5. The following questions can be answered with this example:

- Before running any more analysis, say what you think should be the variance on the process and the observations. Justify your answer.
- How does the prior on \(\sigma^2, \tau^2\) affect estimates of \(\boldsymbol{\beta}\)? Explain how you manipulated prior weight to arrive at your answer.
- Repeat the analysis with DON as the response variable and interpret results.
- What is the answer to the motivating question: can total dissolved nitrogen (TDN) and dissolved organic nitrogen (DON) be explained by temperature, precipitation, and chlorine concentration ([Cl]), and does it differ among coastal streams?
Reprise
State-space models handle serial dependence hierarchically. Hierarchical models do not require marginal independence, only conditional independence. Modeling latent states allows for dependence structure that could be quite different from what could be achieved with standard models (e.g., autoregressive). The flexibility of hierarchical modeling makes it attractive for time series data.
Informative prior distributions are important for variances. Examples here show how they affect estimates of covariates and of latent states. The relative magnitudes of process and observation error determine the extent to which data versus process dominate the fit.
Appendix
The random walk has a conditionally normal posterior distribution that can be written this way:
\[ \begin{aligned} x_t | x_{t-1}, x_{t+1}, y_t, \sigma^2, \tau^2 &\propto N(x_t|x_{t-1}, \sigma^2) \times N(x_{t+1} | x_t, \sigma^2) \times N(y_t | x_t, \tau^2 ) \\ &\propto exp \left[ -\frac{(x_t - x_{t-1})^2}{2\sigma^2} - \frac{(x_{t+1} - x_{t})^2}{2\sigma^2} - \frac{(y_t - x_t)^2}{2\tau^2} \right] \end{aligned} \] Because this is a product of normals, it will be normal, with this form,
\[ \begin{aligned} x_t &\sim N(Vv, V) \\ V &= \left( \frac{2}{\sigma^2} + \frac{1}{\tau^2} \right)^{-1} \\ v &= \frac{x_{t-1} + x_{t+1}}{\sigma^2} + \frac{y_t}{\tau^2} \end{aligned} \] Recall, \(v\) is a weighted sum of the previous and next values in the series \((t-1, t+1)\) and the observation \(y_t\). The variances act as weights. The fitted latent states will be pulled toward the data if the observation variance \(\tau^2\) is small relative to \(\sigma^2\). They will be pulled toward the mean of the previous and next values if \(\tau^2\) is large relative to \(\sigma^2\).
Here is the function:
updateSSRW <- function( states, yy, zb=rep(0,length(yy)), missing, tg, sg){
# state-space random walk
# update continuous states, random walk
# missing times, obs y, obs error tg, process error sg
# zb = z%*%beta
nn <- length(states)
for(t in 1:nn){
VI <- 0
v <- 0
if(!t %in% missing){ # observation
v <- yy[t]/tg
VI <- 1/tg
}
if(t < nn){ # t+1 term excluded for last
v <- v + (states[t+1] - zb[t])/sg
VI <- VI + 1/sg
}
if(t > 1){ # t-1 term excluded for 1st
v <- v + (states[t-1] + zb[t-1])/sg
VI <- VI + 1/sg
}
V <- 1/VI
states[t] <- rnorm( 1, V*v, sqrt(V) )
}
states
}

The conditional distribution for the process variance looks like this:
\[ \begin{aligned} \left[ \sigma^2 | \mathbf{x}, s_1, s_2 \right] &\propto \left( \prod_{t=2}^T N(x_t | x_{t-1}, \sigma^2) \right) IG(\sigma^2 | s_1, s_2 ) \\ &\propto \left( \sigma^2 \right)^{-(T-1)/2}exp \left[ -\frac{1}{2\sigma^2} \sum_{t=2}^T (x_t - x_{t-1})^2 \right] \times \left( \sigma^2 \right)^{-(s_1 + 1)} exp(-s_2/\sigma^2) \\ &= \left( \sigma^2 \right)^{-(T-1)/2 - s_1 - 1} exp \left[ -\frac{1}{2\sigma^2} \sum_{t=2}^T (x_t - x_{t-1})^2 - \frac{s_2}{\sigma^2} \right] \\ &= IG \left( \sigma^2 \bigg| (T-1)/2 + s_1, \frac{1}{2}\sum_{t=2}^T (x_t - x_{t-1})^2 + s_2 \right) \end{aligned} \]
The conditional distribution for the observation variance looks like this: \[ \begin{aligned} \left[ \tau^2 | \mathbf{x}, \mathbf{y}, t_1, t_2 \right] &\propto \left( \prod_{t=1}^T N(y_t | x_t, \tau^2) \right) IG(\tau^2 | t_1, t_2 ) \\ &\propto \left( \tau^2 \right)^{-T/2}exp \left[ -\frac{1}{2\tau^2} \sum_{t=1}^T (y_t - x_t)^2 \right] \times \left( \tau^2 \right)^{-(t_1 + 1)} exp(-t_2/\tau^2) \\ &= IG \left( \tau^2 \bigg| T/2 + t_1, \frac{1}{2}\sum_{t=1}^T (y_t - x_t)^2 + t_2 \right) \end{aligned} \]
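The course function updateVariance presumably draws from inverse gamma conditionals of this form; a minimal sketch consistent with the derivations above (illustrative code, not the course implementation) is:

# draw a variance from its inverse gamma conditional distribution, given values w,
# their fitted means mu, and an IG(a, b) prior, matching the derivations above
drawVariance <- function(w, mu, a, b){
  n  <- length(w)
  a1 <- a + n/2
  b1 <- b + 0.5*sum( (w - mu)^2 )
  1/rgamma(1, a1, b1)
}
# e.g., process variance for the random walk:  drawVariance(x[-1], x[-nt], sp[1], sp[2])
# observation variance (skipping missing y):   drawVariance(y[-wm], x[-wm], tp[1], tp[2])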