library(fpp3)
## Warning: package 'fpp3' was built under R version 3.6.3
## -- Attaching packages ------------------------------------------------ fpp3 0.4.0 --
## v tibble 3.1.1 v tsibble 1.1.1
## v dplyr 1.0.6 v tsibbledata 0.4.0
## v tidyr 1.1.3 v feasts 0.2.2
## v lubridate 1.7.4 v fable 0.3.0
## v ggplot2 3.3.5
## Warning: package 'tibble' was built under R version 3.6.3
## Warning: package 'dplyr' was built under R version 3.6.3
## Warning: package 'tidyr' was built under R version 3.6.3
## Warning: package 'fable' was built under R version 3.6.3
## -- Conflicts ----------------------------------------------------- fpp3_conflicts --
## x lubridate::date() masks base::date()
## x dplyr::filter() masks stats::filter()
## x tsibble::intersect() masks base::intersect()
## x tsibble::interval() masks lubridate::interval()
## x dplyr::lag() masks stats::lag()
## x tsibble::new_interval() masks lubridate::new_interval()
## x tsibble::setdiff() masks base::setdiff()
## x tsibble::union() masks base::union()
library(tidyverse)
## -- Attaching packages ------------------------------------------- tidyverse 1.2.1 --
## v readr 1.3.1 v stringr 1.4.0
## v purrr 0.3.2 v forcats 0.4.0
## -- Conflicts ---------------------------------------------- tidyverse_conflicts() --
## x lubridate::as.difftime() masks base::as.difftime()
## x lubridate::date() masks base::date()
## x dplyr::filter() masks stats::filter()
## x tsibble::intersect() masks lubridate::intersect(), base::intersect()
## x tsibble::interval() masks lubridate::interval()
## x dplyr::lag() masks stats::lag()
## x tsibble::new_interval() masks lubridate::new_interval()
## x tsibble::setdiff() masks lubridate::setdiff(), base::setdiff()
## x tsibble::union() masks lubridate::union(), base::union()
library(gridExtra)
##
## Attaching package: 'gridExtra'
## The following object is masked from 'package:dplyr':
##
## combine
Figure 9.32 shows the ACFs for 36 random numbers, 360 random numbers and 1,000 random numbers.
a. Explain the differences among these figures. Do they all indicate that the data are white noise?
The most obvious difference is the sample size behind each ACF (36, 360 and 1,000 random numbers): as the series gets longer, the estimated autocorrelations shrink toward zero and the critical-value bounds tighten. All three figures are consistent with white noise, since every autocorrelation coefficient lies within the bounds and close to zero.
Figure 9.32: Left: ACF for a white noise series of 36 numbers. Middle: ACF for a white noise series of 360 numbers. Right: ACF for a white noise series of 1,000 numbers.
b. Why are the critical values at different distances from the mean of zero? Why are the autocorrelations different in each figure when they each refer to white noise?
The critical values sit at different distances from the mean of zero because, for white noise, the autocorrelations are expected to lie within ±2/√T, where T is the length of the series. As T grows (36, 360, 1,000), the bounds narrow, so the critical values move closer to zero. The autocorrelations themselves differ across the figures because each is estimated from a different random sample; with more observations the estimates are more precise and therefore closer to their true value of zero.
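To see this concretely, we can simulate white noise of each length and plot the ACFs side by side (a quick sketch; the seed is arbitrary, so the individual spikes will differ from Figure 9.32):
# Sketch: white noise of increasing length; the +-2/sqrt(T)
# bounds visibly tighten as T grows.
set.seed(30)
wn <- function(n) tsibble(idx = seq_len(n), y = rnorm(n), index = idx)
p_wn1 <- wn(36) %>% ACF(y) %>% autoplot() + labs(title = "T = 36")
p_wn2 <- wn(360) %>% ACF(y) %>% autoplot() + labs(title = "T = 360")
p_wn3 <- wn(1000) %>% ACF(y) %>% autoplot() + labs(title = "T = 1000")
grid.arrange(p_wn1, p_wn2, p_wn3, nrow = 1)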
A classic example of a non-stationary series are stock prices. Plot the daily closing prices for Amazon stock (contained in gafa_stock), along with the ACF and PACF. Explain how each plot shows that the series is non-stationary and should be differenced.
gafa_stock %>%
distinct(Symbol)
## # A tibble: 4 x 1
## Symbol
## <chr>
## 1 AAPL
## 2 AMZN
## 3 FB
## 4 GOOG
As observed in the plots below, the closing prices show a clear trend. This is also evident in the ACF plot: the autocorrelations are large, positive, and decay only slowly as the lags increase, which is characteristic of a trended, non-stationary series. The PACF shows a single large spike near 1 at lag 1, consistent with random-walk behaviour. The series is therefore non-stationary and should be differenced.
amazon <- gafa_stock %>%
filter(Symbol == "AMZN")
p1_a <- amazon %>%
autoplot(Close) +
labs(title="Daily Closing Stock Price (Amazon) ")
p2_a <- amazon %>%
ACF(Close) %>%
autoplot() + labs(title="Correlation of Daily Closing Stock Price (Amazon) ")
## Warning: Provided data has an irregular interval, results should be treated with
## caution. Computing ACF by observation.
p3_a <- amazon %>%
PACF(Close) %>%
autoplot() + labs(title="Partial Autocorrelation Daily Closing Stock Price (Amazon) ")
## Warning: Provided data has an irregular interval, results should be treated with
## caution. Computing ACF by observation.
grid.arrange(p1_a, p2_a, p3_a, nrow = 2)
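As a quick check (not part of the original output), taking a first difference removes the trend; the ACF of the differenced series should look much closer to white noise:
# Sketch: ACF of the first-differenced closing price.
amazon %>%
ACF(difference(Close)) %>%
autoplot() + labs(title="ACF of Differenced Closing Price (Amazon)")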
For the following series, find an appropriate Box-Cox transformation and order of differencing in order to obtain stationary data.
a. Turkish GDP from global_economy.
As evident in the plots below, the Turkish GDP data is non-stationary. We'll apply an appropriate Box-Cox transformation and then determine the order of differencing needed to obtain stationary data.
turk <- global_economy %>%
filter(Country == "Turkey")
p1_t <- turk %>%
autoplot(GDP) +
labs(title="Turkey GDP")
p2_t <- turk %>%
ACF(GDP) %>%
autoplot() + labs(title="Correlation of Turkey GDP ")
p3_t <- turk %>%
PACF(GDP) %>%
autoplot() + labs(title="Partial Autocorrelation of Turkey GDP ")
grid.arrange(p1_t, p2_t, p3_t, nrow = 2)
The guerrero feature suggests a Box-Cox transformation with lambda ≈ 0.16.
lambda_t <- turk %>%
features(GDP,features=guerrero) %>%
pull(lambda_guerrero)
lambda_t
## [1] 0.1572187
After the transformation, we determine that the appropriate order of differencing is 1.
turk %>%
features(box_cox(GDP, lambda_t), unitroot_ndiffs)
## # A tibble: 1 x 2
## Country ndiffs
## <fct> <int>
## 1 Turkey 1
As observed below, all autocorrelations in the ACF and PACF plots now lie within the critical-value bounds: the Box-Cox transformation followed by first-order differencing has made the Turkish GDP series stationary.
p4_t <- turk %>%
autoplot(difference(box_cox(GDP, lambda_t), 1)) +
labs(title="Turkey GDP After Transformation")
p5_t <- turk %>%
ACF(difference(box_cox(GDP, lambda_t), 1)) %>%
autoplot() + labs(title="Correlation of Turkey GDP After Transformation ")
p6_t <- turk %>%
PACF(difference(box_cox(GDP, lambda_t), 1)) %>%
autoplot() + labs(title="Partial Autocorrelation of Turkey GDP After Transformation ")
grid.arrange(p4_t, p5_t, p6_t, nrow = 2)
## Warning: Removed 1 row(s) containing missing values (geom_path).
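As an extra confirmation (not in the original output), a KPSS unit-root test on the transformed, differenced series should fail to reject stationarity (p-value above 0.05):
# Sketch: KPSS test on the transformed, differenced series.
turk %>%
features(difference(box_cox(GDP, lambda_t), 1), unitroot_kpss)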
b. Accommodation takings in the state of Tasmania from aus_accommodation.
As observed below, the quarterly Tasmanian takings data show clear trend and seasonality, so we apply unitroot_nsdiffs() to determine whether seasonal differencing is needed.
tas <- aus_accommodation %>%
filter(State == "Tasmania")
p1_ta <- tas %>%
autoplot(Takings) +
labs(title="Accomodation Takings in Tasmania")
p2_ta <- tas %>%
ACF(Takings) %>%
autoplot() + labs(title="Correlation of Takings in Tasmania ")
p3_ta <- tas %>%
PACF(Takings) %>%
autoplot() + labs(title="Partial Autocorrelation of Takings in Tasmania ")
grid.arrange(p1_ta, p2_ta, p3_ta, nrow = 2)
The guerrero feature suggests a Box-Cox transformation with lambda ≈ -0.05.
lambda_ta <- tas %>%
features(Takings,features=guerrero) %>%
pull(lambda_guerrero)
lambda_ta
## [1] -0.04884781
After the transformation, we determine that the data needs 1 order of seasonal differencing.
tas %>%
features(box_cox(Takings, lambda_ta), unitroot_nsdiffs)
## # A tibble: 1 x 2
## State nsdiffs
## <chr> <int>
## 1 Tasmania 1
And after applying unitroot_ndiffs(), we can see that no additional first-order differencing is needed.
tas %>%
features(difference(box_cox(Takings, lambda_ta), 4), unitroot_ndiffs)
## # A tibble: 1 x 2
## State ndiffs
## <chr> <int>
## 1 Tasmania 0
After applying the transformation and the seasonal difference, only a few autocorrelations fall outside the 95% limits (about 3 in the ACF and 2 in the PACF). Since roughly 5% of spikes can be expected to exceed the bounds by chance, the transformed series appears close to stationary.
p4_ta <- tas %>%
autoplot(difference(box_cox(Takings, lambda_ta), 4)) +
labs(title="Accomodation Takings in Tasmania After Transformation")
p5_ta <- tas %>%
ACF(difference(box_cox(Takings, lambda_ta), 4)) %>%
autoplot() + labs(title="Correlation of Takings After Tranformation ")
p6_ta <- tas %>%
PACF(difference(box_cox(Takings, lambda_ta), 4)) %>%
autoplot() + labs(title="Partial Autocorrelation of Takings After Transformation ")
grid.arrange(p4_ta, p5_ta, p6_ta, nrow = 2)
## Warning: Removed 4 row(s) containing missing values (geom_path).
c. Monthly sales from souvenirs.
As observed below, the monthly souvenir sales show strong seasonality, so we apply unitroot_nsdiffs() to determine whether seasonal differencing is needed.
p1_s <- souvenirs %>%
autoplot(Sales) +
labs(title="Monthly Sales")
p2_s <- souvenirs %>%
ACF(Sales) %>%
autoplot() + labs(title="Correlation of Monthly Sales ")
p3_s <- souvenirs %>%
PACF(Sales) %>%
autoplot() + labs(title="Partial Autocorrelation of Monthly Sales ")
grid.arrange(p1_s, p2_s, p3_s, nrow = 2)
The guerrero feature suggests a Box-Cox transformation with lambda ≈ 0.002, which is essentially a log transformation.
lambda_s <- souvenirs %>%
features(Sales,features=guerrero) %>%
pull(lambda_guerrero)
lambda_s
## [1] 0.002118221
After the transformation, we determine that the data needs 1 order of seasonal differencing.
souvenirs %>%
features(box_cox(Sales, lambda_s), unitroot_nsdiffs)
## # A tibble: 1 x 1
## nsdiffs
## <int>
## 1 1
And after applying unitroot_ndiffs(), we can see that no additional first-order differencing is needed.
souvenirs %>%
features(difference(box_cox(Sales, lambda_s), 12), unitroot_ndiffs)
## # A tibble: 1 x 1
## ndiffs
## <int>
## 1 0
After applying the transformation and the seasonal difference, only two autocorrelations fall outside the 95% limits in the ACF and PACF plots, which suggests the data are now stationary.
p4_s <- souvenirs %>%
autoplot(difference(box_cox(Sales, lambda_s), 12)) +
labs(title="Monthly Sales After Transformation")
p5_s <- souvenirs %>%
ACF(difference(box_cox(Sales, lambda_s), 12)) %>%
autoplot() + labs(title="Correlation of Monthly Sales After Tranformation ")
p6_s <- souvenirs %>%
PACF(difference(box_cox(Sales, lambda_s), 12)) %>%
autoplot() + labs(title="Partial Autocorrelation of Monthly Sales After Transformation ")
grid.arrange(p4_s, p5_s, p6_s, nrow = 2)
## Warning: Removed 12 row(s) containing missing values (geom_path).
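Since the suggested lambda is essentially zero, the Box-Cox transformation reduces to a log. As a quick equivalence check (not in the original output), using log(Sales) directly gives a near-identical result:
# Sketch: with lambda ~ 0, box_cox() is effectively log().
souvenirs %>%
features(difference(log(Sales), 12), unitroot_ndiffs)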
For your retail data (from Exercise 8 in Section 2.10), find the appropriate order of differencing (after transformation if necessary) to obtain stationary data.
set.seed(123)
myseries <- aus_retail %>%
filter(`Series ID` == sample(aus_retail$`Series ID`,1))
As observed below, the monthly turnover data show clear trend and seasonality, so we apply unitroot_nsdiffs() to determine whether seasonal differencing is needed.
p1_m <- myseries %>%
autoplot(Turnover) +
labs(title="Monthly Turnover")
p2_m <- myseries %>%
ACF(Turnover) %>%
autoplot() + labs(title="Correlation of Monthly Turnover ")
p3_m <- myseries %>%
PACF(Turnover) %>%
autoplot() + labs(title="Partial Autocorrelation of Monthly Turnover ")
grid.arrange(p1_m, p2_m, p3_m, nrow = 2)
The guerrero feature suggests a Box-Cox transformation with lambda ≈ 0.22.
lambda_m <- myseries %>%
features(Turnover,features=guerrero) %>%
pull(lambda_guerrero)
lambda_m
## [1] 0.222814
After the transformation, we determine that the data needs 1 order of seasonal differencing.
myseries %>%
features(box_cox(Turnover, lambda_m), unitroot_nsdiffs)
## # A tibble: 1 x 3
## State Industry nsdiffs
## <chr> <chr> <int>
## 1 Victoria Household goods retailing 1
And after applying unitroot_ndiffs(), we can see that no additional first-order differencing is needed.
myseries %>%
features(difference(box_cox(Turnover, lambda_m), 12), unitroot_ndiffs)
## # A tibble: 1 x 3
## State Industry ndiffs
## <chr> <chr> <int>
## 1 Victoria Household goods retailing 0
After applying the transformation and the seasonal difference, the ACF and PACF plots look quite different from the original data. However, several autocorrelations still sit well outside the limits, so the series does not yet appear fully stationary; despite unitroot_ndiffs() reporting 0, an additional first difference may be worth trying.
p4_m <- myseries %>%
autoplot(difference(box_cox(Turnover, lambda_m), 12)) +
labs(title="Monthly Turnover After Transformation")
p5_m <- myseries %>%
ACF(difference(box_cox(Turnover, lambda_m), 12)) %>%
autoplot() + labs(title="Correlation of Monthly Turnover After Tranformation ")
p6_m <- myseries %>%
PACF(difference(box_cox(Turnover, lambda_m), 12)) %>%
autoplot() + labs(title="Partial Autocorrelation of Monthly Turnover After Transformation ")
grid.arrange(p4_m, p5_m, p6_m, nrow = 2)
## Warning: Removed 12 row(s) containing missing values (geom_path).
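As a follow-up experiment (not in the original output), we can layer a first difference on top of the seasonal difference and re-inspect the ACF:
# Sketch: seasonal difference followed by a first difference.
myseries %>%
ACF(difference(difference(box_cox(Turnover, lambda_m), 12))) %>%
autoplot() + labs(title="ACF After Seasonal + First Differencing")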
Simulate and plot some data from simple ARIMA models.
a. Use the following R code to generate data from an AR(1) model with
ϕ1=0.6 and σ2=1. The process starts with y1=0.
y <- numeric(100)
e <- rnorm(100)
for(i in 2:100)
y[i] <- 0.6*y[i-1] + e[i]
sim <- tsibble(idx = seq_len(100), y = y, index = idx)
b. Produce a time plot for the series. How does the plot change as you change
ϕ1?
We’ll create additional variables y2 and y3 to visualize what happens to the data as we change ϕ1.
sim %>% autoplot(y)
set.seed(123)
y2 <- numeric(100)
y3 <- numeric(100)
e2 <- rnorm(100)
e3 <- rnorm(100)
for(i in 2:100){
y2[i] <- 0.1*y2[i-1] + e2[i]
y3[i] <- 0.9*y3[i-1] + e3[i]
}
sim <- tsibble(idx = seq_len(100), y = y, y2 = y2, y3 = y3, index = idx)
plot1 <- sim %>% autoplot(y2) + labs( title = "Phi = 0.1")
plot2 <- sim %>% autoplot(y) + labs( title = "Phi = 0.6")
plot3 <- sim %>% autoplot(y3) + labs( title = "Phi = 0.9")
grid.arrange(plot1, plot2, plot3, nrow = 2)
As observed above, as ϕ1 approaches 0, yt behaves more and more like white noise; as ϕ1 approaches 1, the fluctuations become larger and more persistent (at ϕ1=1 the process becomes a random walk).
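To make it easy to experiment with other values of ϕ1, a small helper function (hypothetical, not in the original) wraps the simulation above:
# Hypothetical helper: simulate an AR(1) series of length n
# with coefficient phi and sigma^2 = 1.
sim_ar1 <- function(phi, n = 100, seed = 123) {
set.seed(seed)
y <- numeric(n)
e <- rnorm(n)
for (i in 2:n) y[i] <- phi * y[i - 1] + e[i]
tsibble(idx = seq_len(n), y = y, index = idx)
}
sim_ar1(0.3) %>% autoplot(y) + labs(title = "Phi = 0.3")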
c. Write your own code to generate data from an MA(1) model with θ1=0.6 and σ2=1.
set.seed(123)
y4 <- numeric(100)
e4 <- rnorm(100)
for(i in 2:100)
y4[i] <- 0.6*e4[i-1] + e4[i]
sim2 <- tsibble(idx = seq_len(100), y4 = y4, index = idx)
d. Produce a time plot for the series. How does the plot change as you change θ1?
set.seed(123)
y5 <- numeric(100)
y6 <- numeric(100)
e5 <- rnorm(100)
e6 <- rnorm(100)
for(i in 2:100){
y5[i] <- 0.1*e5[i-1] + e5[i]
y6[i] <- 0.9*e6[i-1] + e6[i]
}
sim2 <- tsibble(idx = seq_len(100), y4 = y4, y5 = y5, y6 = y6, index = idx)
plot4 <- sim2 %>% autoplot(y5) + labs( title = "Theta = 0.1")
plot5 <- sim2 %>% autoplot(y4) + labs( title = "Theta = 0.6")
plot6 <- sim2 %>% autoplot(y6) + labs( title = "Theta = 0.9")
grid.arrange(plot4, plot5, plot6, nrow = 2)
In this case, as θ1 decreases, less weight is placed on the previous error term, so the most recent shock dominates and the series looks more like plain white noise. Changing θ1 mainly changes the scale of the series, not its overall pattern.
e. Generate data from an ARMA(1,1) model with ϕ1=0.6, θ1=0.6 and σ2=1.
set.seed(123)
y7 <- (numeric(100))
e7 <- rnorm(100)
for(i in 2:100)
y7[i] <- 0.6*y7[i-1] + 0.6*e7[i-1] + e7[i]
sim3 <- tsibble(idx = seq_len(100), y7 = y7, index = idx)
f. Generate data from an AR(2) model with ϕ1=-0.8, ϕ2=0.3 and σ2=1. (Note that these parameters will give a non-stationary series.)
set.seed(123)
y8 <- (numeric(100))
e8 <- rnorm(100)
for(i in 3:100)
y8[i] <- -0.8*y8[i-1] + 0.3*y8[i-2] + e8[i]
sim4 <- tsibble(idx = seq_len(100), y8 = y8, index = idx)
g. Graph the latter two series and compare them.
plot7 <- sim3 %>% autoplot(y7) + labs( title = "ARMA(1,1) Phi = 0.6 and Theta = 0.6")
plot8 <- sim4 %>% autoplot(y8) + labs( title = "AR(2) Phi1 = -0.8 and Phi2 = 0.3")
grid.arrange(plot7, plot8, nrow = 2)
The ARMA(1,1) series looks stationary: it fluctuates around a constant mean with roughly constant variance. The AR(2) series does not. Its negative coefficient ϕ1=-0.8 causes consecutive values to alternate in sign, and because the parameters violate the stationarity conditions, the oscillations grow in amplitude as time progresses.
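As a quick visual check (not in the original output), the ACFs of the two simulated series should make the contrast clearer: the ARMA(1,1) autocorrelations die out quickly, while the AR(2) autocorrelations alternate in sign and persist:
# Sketch: ACFs of the two simulated series.
p_acf7 <- sim3 %>% ACF(y7) %>% autoplot() + labs(title = "ACF: ARMA(1,1)")
p_acf8 <- sim4 %>% ACF(y8) %>% autoplot() + labs(title = "ACF: AR(2)")
grid.arrange(p_acf7, p_acf8, nrow = 1)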
Consider aus_airpassengers, the total number of passengers (in millions) from Australian air carriers for the period 1970-2011.
autoplot(aus_airpassengers)
## Plot variable not specified, automatically selected `.vars = Passengers`
a. Use ARIMA() to find an appropriate ARIMA model. What model was selected? Check that the residuals look like white noise. Plot forecasts for the next 10 periods.
The model selected for the data is ARIMA(0,2,1). The residual time plot shows a slight increase in variability from around 1989 onward, but the ACF plot shows no significant autocorrelations, so the residuals look like white noise.
fit <- aus_airpassengers %>%
model(ARIMA(Passengers))
report(fit)
## Series: Passengers
## Model: ARIMA(0,2,1)
##
## Coefficients:
## ma1
## -0.8963
## s.e. 0.0594
##
## sigma^2 estimated as 4.308: log likelihood=-97.02
## AIC=198.04 AICc=198.32 BIC=201.65
fit %>% gg_tsresiduals()
fit %>% forecast(h=10) %>%
autoplot(aus_airpassengers)
b. Write the model in terms of the backshift operator.
ARIMA(0,2,1): (1−B)^2 yt = (1 + θ1B)ϵt. With the fitted coefficient θ1 = −0.8963 this becomes (1−B)^2 yt = (1 − 0.8963B)ϵt, which expands to yt = 2yt−1 − yt−2 + ϵt − 0.8963ϵt−1.
c. Plot forecasts from an ARIMA(0,1,0) model with drift and compare these to part a.
fit2 <- aus_airpassengers %>%
model(ARIMA(Passengers ~ 1 + pdq(0,1,0)))
report(fit2)
## Series: Passengers
## Model: ARIMA(0,1,0) w/ drift
##
## Coefficients:
## constant
## 1.4191
## s.e. 0.3014
##
## sigma^2 estimated as 4.271: log likelihood=-98.16
## AIC=200.31 AICc=200.59 BIC=203.97
fit2 %>% forecast(h=10) %>%
autoplot(aus_airpassengers)
The forecasts look very similar to those in part (a): both trend upward.
d. Plot forecasts from an ARIMA(2,1,2) model with drift and compare these to parts a and c. Remove the constant and see what happens.
fit3 <- aus_airpassengers %>%
model(ARIMA(Passengers ~ 1 + pdq(2,1,2)))
report(fit3)
## Series: Passengers
## Model: ARIMA(2,1,2) w/ drift
##
## Coefficients:
## ar1 ar2 ma1 ma2 constant
## -0.5518 -0.7327 0.5895 1.0000 3.2216
## s.e. 0.1684 0.1224 0.0916 0.1008 0.7224
##
## sigma^2 estimated as 4.031: log likelihood=-96.23
## AIC=204.46 AICc=206.61 BIC=215.43
fit3 %>% forecast(h=10) %>%
autoplot(aus_airpassengers)
The forecasts from this ARIMA(2,1,2) model with drift also look very similar to those in parts (a) and (c).
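To compare the three fitted models directly (a sketch, not part of the original output), we can refit them in one model() call and overlay the point forecasts:
# Sketch: overlay forecasts from parts (a), (c) and (d).
aus_airpassengers %>%
model(
"ARIMA(0,2,1)" = ARIMA(Passengers ~ pdq(0,2,1)),
"ARIMA(0,1,0) w/ drift" = ARIMA(Passengers ~ 1 + pdq(0,1,0)),
"ARIMA(2,1,2) w/ drift" = ARIMA(Passengers ~ 1 + pdq(2,1,2))
) %>%
forecast(h = 10) %>%
autoplot(aus_airpassengers, level = NULL)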
fit4 <- aus_airpassengers %>%
model(ARIMA(Passengers ~ 0 + pdq(2,1,2)))
## Warning: 1 error encountered for ARIMA(Passengers ~ 0 + pdq(2, 1, 2))
## [1] non-stationary AR part from CSS
report(fit4)
## Series: Passengers
## Model: NULL model
## NULL model
When we remove the constant, estimation fails with a "non-stationary AR part from CSS" error and a NULL model is returned.
e. Plot forecasts from an ARIMA(0,2,1) model with a constant. What happens?
fit5 <- aus_airpassengers %>%
model(ARIMA(Passengers ~ 1 + pdq(0,2,1)))
## Warning: Model specification induces a quadratic or higher order polynomial trend.
## This is generally discouraged, consider removing the constant or reducing the number of differences.
report(fit5)
## Series: Passengers
## Model: ARIMA(0,2,1) w/ poly
##
## Coefficients:
## ma1 constant
## -1.0000 0.0571
## s.e. 0.0585 0.0213
##
## sigma^2 estimated as 3.855: log likelihood=-95.1
## AIC=196.21 AICc=196.79 BIC=201.63
fit5 %>% forecast(h=10) %>%
autoplot(aus_airpassengers)
R warns us that including a constant with two orders of differencing induces a quadratic or higher-order polynomial trend, which is generally discouraged. This is evident in the forecast plot: the trend line is steeper and the forecasts climb higher than in the previous graphs.
For the United States GDP series (from global_economy):
As we can see below, the graph of US GDP curves upward, suggesting exponential growth. A Box-Cox transformation chosen with the guerrero feature may help straighten the line.
us <- global_economy %>%
filter(Country == "United States")
autoplot(us)
## Plot variable not specified, automatically selected `.vars = GDP`
a. if necessary, find a suitable Box-Cox transformation for the data;
The guerrero feature suggests a Box-Cox transformation with lambda ≈ 0.28.
lambda_us <- us %>%
features(GDP,features=guerrero) %>%
pull(lambda_guerrero)
lambda_us
## [1] 0.2819443
Below, we can see that the transformation has helped straighten the line.
us %>% autoplot(box_cox(GDP, lambda_us))
b. fit a suitable ARIMA model to the transformed data using ARIMA();
The ARIMA() function suggests an ARIMA(1,1,0) model with drift.
fit_us <- us %>%
model(ARIMA(box_cox(GDP, lambda_us)))
report(fit_us)
## Series: GDP
## Model: ARIMA(1,1,0) w/ drift
## Transformation: box_cox(GDP, lambda_us)
##
## Coefficients:
## ar1 constant
## 0.4586 118.1822
## s.e. 0.1198 9.5047
##
## sigma^2 estimated as 5479: log likelihood=-325.32
## AIC=656.65 AICc=657.1 BIC=662.78
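Written out using the coefficients from the report above (with wt denoting the Box-Cox transformed GDP), the fitted ARIMA(1,1,0) with drift is: (1 − 0.4586B)(1 − B) wt = 118.18 + ϵt.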
c. try some other plausible models by experimenting with the orders chosen;
fit_others <- us %>%
model(
"ARIMA(2,1,2)" = ARIMA(box_cox(GDP, lambda_us) ~ 1 + pdq(2,1,2)),
"ARIMA(0,1,0)" = ARIMA(box_cox(GDP, lambda_us) ~ 1 + pdq(0,1,0)),
"ARIMA(1,1,0)" = ARIMA(box_cox(GDP, lambda_us) ~ 1 + pdq(1,1,0))
)
As observed below, the model with the lowest AIC is the ARIMA(1,1,0) with drift, as it was previously suggested by the ARIMA() function.
glance(fit_others) %>%
arrange (AIC) %>%
select(.model:BIC)
## # A tibble: 3 x 6
## .model sigma2 log_lik AIC AICc BIC
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 ARIMA(1,1,0) 5479. -325. 657. 657. 663.
## 2 ARIMA(2,1,2) 5734. -325. 662. 664. 674.
## 3 ARIMA(0,1,0) 6774. -332. 668. 668. 672.
d. choose what you think is the best model and check the residual diagnostics;
The residual time plot looks centered on zero with roughly constant variance, and the ACF plot shows no significant autocorrelations, so the residuals look like white noise.
fit_us %>% gg_tsresiduals()
e. produce forecasts of your fitted model. Do the forecasts look reasonable?
The forecasts look reasonable as they are following the trend contained within the data.
fit_us %>%
forecast(h=5) %>%
autoplot(us)
f. compare the results with what you would obtain using ETS() (with no transformation).
From the graph below, we can observe that the prediction intervals for the ETS() model are much wider than for the ARIMA() model. The report shows an AIC of about 3190 for the ETS() model versus 656 for our best ARIMA() model, but note that these AIC values are not directly comparable: the two model classes have different likelihoods, and the ARIMA model was fitted to Box-Cox transformed data.
fit_ets <- us %>%
model(ETS(GDP))
fit_ets %>%
forecast(h=5) %>%
autoplot(us)
report(fit_ets)
## Series: GDP
## Model: ETS(M,A,N)
## Smoothing parameters:
## alpha = 0.9990876
## beta = 0.5011949
##
## Initial states:
## l b
## 448093333334 64917355687
##
## sigma^2: 7e-04
##
## AIC AICc BIC
## 3190.787 3191.941 3201.089
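Because the AIC values are not comparable across model classes, a fairer comparison (a sketch, not in the original output; it assumes a training cut-off at 2010, leaving 2011-2017 as the test set) is to hold out the last few years and compare forecast accuracy:
# Sketch: holdout comparison of ARIMA vs ETS on US GDP.
train_us <- us %>% filter(Year <= 2010)
train_us %>%
model(
arima = ARIMA(box_cox(GDP, lambda_us)),
ets = ETS(GDP)
) %>%
forecast(h = 7) %>%
accuracy(us)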