Autocorrelation Function (ACF) and Partial ACF

Autocorrelation measures the linear relationship between lagged values of a time series. The ACF plot displays the autocorrelation coefficients at successive lags: \(r_1\) measures the relationship between \(y_t\) and \(y_{t-1}\), \(r_2\) measures the relationship between \(y_t\) and \(y_{t-2}\), and so on.

Partial autocorrelations measure the relationship between \(y_t\) and \(y_{t-k}\) after removing the effects of lags 1, 2, …, (k−1). The first partial autocorrelation is therefore identical to the first autocorrelation, because there is nothing between them to remove. Each partial autocorrelation can be estimated as the last coefficient in an autoregressive model of the corresponding order.

When a series has a trend, the ACF decreases only gradually as the lag increases. Because the housing starts series is autocorrelated, it is not white noise. A time series is white noise if all its values are independently and identically distributed with a mean of 0 and a constant variance. The blue lines in the plot indicate significance: spikes that exceed the lines above or below imply that the current level of housing starts is significantly autocorrelated with its lagged values.
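
As a minimal sketch, assuming the housing starts series is stored in time_ser[, 1] (the object that appears in the test output later in this section), the two plots can be produced with:

acf(time_ser[, 1], lag.max = 24)   # slow decay of the ACF indicates a trend
pacf(time_ser[, 1], lag.max = 24)  # spikes beyond the blue bands are significant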

PACF

The partial autocorrelation function measures the correlation between the current value and a lagged value after eliminating the correlation contributed by the intermediate lags. In simple terms, the PACF removes the indirect autocorrelation passed through the shorter lags.

Based on the PACF plot below, we will include 3 lags. Adding more lags consumes degrees of freedom and reduces power, as each lag adds another regressor to the model.

Portmanteau tests for autocorrelation

ARIMA models explain or capture serial correlation present within a time series.

We test whether the first h autocorrelations are significantly different from what would be expected from a white noise process. A test for a whole group of autocorrelations is called a portmanteau test; here we use the Ljung-Box test.

H0: the data points are i.i.d., i.e., there is no autocorrelation up to lag h.

Ha: the data points are not i.i.d.; they are serially correlated.
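
As a sketch, the test can be run with stats::Box.test; this call, on the differenced series with h = 10 lags, matches the Box-Ljung output shown further below:

Box.test(diff(time_ser[, 1]), lag = 10, type = "Ljung-Box")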

Stationarity

A time series is stationary if its properties do not depend on the time at which we observe the data. A series with trend or seasonality is therefore non-stationary, as trend and seasonality change the values of its parameters at different points in time. A white noise series, by contrast, is stationary, as it looks the same at any time. Generally, stationary time series have no predictable patterns in the long run. Their time plots are approximately horizontal with constant mean and variance, albeit possibly cyclical.

Strictly Stationary Series

When the joint distribution of the elements x_(t_1), …, x_(t_n) is equal to that of x_(t_1 + m), …, x_(t_n + m) for all t_i and m, the time series model {x_t} is strictly stationary. In other words, the distribution of the series is unchanged by an arbitrary shift in time.

Differencing to make a stationary time series

We can make a non-stationary time series stationary by differencing consecutive observations. Logarithmic transformations and differencing can stabilize the variance and the mean of a time series, respectively. Furthermore, differencing can eliminate seasonality and trend.
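
A minimal sketch of both operations in R, assuming the series is in time_ser[, 1] (log_ts and d1 are illustrative names):

log_ts <- log(time_ser[, 1])  # log transform stabilises the variance
d1 <- diff(log_ts)            # first difference stabilises the mean (removes trend)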

We can also use the ACF plot to identify a non-stationary time series. For stationary data, the ACF drops to zero relatively quickly; on the contrary, the ACF of a unit root (non-stationary) series decreases only gradually, and its value of r_1 is often large and positive.

## 
##  Box-Ljung test
## 
## data:  diff(time_ser[, 1])
## X-squared = 70.293, df = 10, p-value = 3.892e-11

The null hypothesis is that the series is i.i.d., i.e., has no serial correlation.

The ACF of the differenced housing starts does not look like that of a white noise series. There are autocorrelations lying outside the 95% limits, and the Ljung-Box Q* statistic has a very small p-value of 3.892e-11 (for h = 10). This suggests that the monthly change in US housing starts is not a purely random amount: it is correlated with the changes of previous months.

Random Walk Model

Used for non-stationary economic and financial data, random walk models exhibit long periods of apparent trends and can change direction unpredictably. The forecasts therefore equal the last observation, as the series is equally likely to move up or down from there.

Let a time series be {w_t : t = 1, …, n}. If the elements of the series, w_i, are independent and identically distributed (i.i.d.) with zero mean, variance σ^2 and no serial correlation (i.e. Cor(w_i, w_j) = 0, ∀ i ≠ j), then the time series is discrete white noise (DWN).

In particular, if the values w_i are drawn from a normal distribution (i.e. w_t ∼ N(0, σ^2)), then the series is known as Gaussian white noise.

In a random walk, each term x_t is the previous term x_(t−1) plus a stochastic white noise term w_t:

x_t = x_(t−1) + w_t
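
A short simulation illustrates the definitions above: cumulatively summing Gaussian white noise produces a random walk (w and x are illustrative names):

set.seed(1)
w <- rnorm(500)  # Gaussian white noise, w_t ~ N(0, 1)
x <- cumsum(w)   # random walk: x_t = x_(t-1) + w_t
plot.ts(x)       # long, unpredictable swings typical of a random walk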

An extension of the random walk is the autoregressive model, which incorporates terms further back in time: the AR model is linearly dependent on several previous terms.

Autoregressive Model of order p

A time series model, {x_t}, is an autoregressive model of order p, AR(p), if:

x_t = α_1 * x_(t−1) +… + α_p * x_(t−p) + w_t, where {w_t} is white noise.

Moving Average Model of order q

An MA model is a linear combination of past white noise terms.

Intuitively, this means that the MA model sees such random white noise “shocks” directly at each current value of the model. This is in contrast to an AR(p) model, where the white noise “shocks” are only seen indirectly, via regression onto previous terms of the series.

A time series model, {x_t}, is a moving average model of order q, MA(q), if:

x_t = w_t + β_1 * w_(t−1) +…+ β_q * w_(t−q), where {w_t} is white noise

Autoregressive Moving Average Model of order p, q

A time series model, {x_t}, is an autoregressive moving average model of order p,q, ARMA(p,q), if:

x_t = α_1 * x_(t−1) + … + α_p * x_(t−p) + w_t + β_1 * w_(t−1) + … + β_q * w_(t−q)

Where {w_t} is white noise with E(w_t) = 0 and variance σ^2.
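
As a sketch, arima.sim can simulate such a process; the coefficients here are illustrative, not estimates from the housing data:

set.seed(1)
y <- arima.sim(model = list(ar = c(0.5, -0.25), ma = c(0.4, 0.2)), n = 500)  # ARMA(2,2)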

The AR component uses the series' own past behaviour as input and so attempts to capture market-participant effects, such as momentum and mean reversion in stock trading.

The MA component is used to characterise "shock" information to a series, such as a surprise earnings announcement or an unexpected event (such as the BP Deepwater Horizon oil spill).

Applying a difference operator to a random walk series {x_t} gives a white noise series {w_t}; more generally, differencing a non-stationary series can yield a stationary one.

∇x_t=x_t − x_(t−1) = w_t

An ARIMA model differences the series d times to make it stationary.
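
In R, the difference operator is diff(); its differences argument applies the operator d times (w and d2 are illustrative names):

w <- diff(time_ser[, 1])                    # ∇x_t = x_t − x_(t−1)
d2 <- diff(time_ser[, 1], differences = 2)  # apply the operator d = 2 times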

Unit root or non-stationarity tests

We can use statistical hypothesis tests for unit roots to objectively determine whether a series requires differencing. In our analysis, we use the Augmented Dickey-Fuller (ADF) test.

The time series is modeled as z_t = α * z_(t−1) + w_t, where w_t is discrete white noise. The null hypothesis is that α = 1, i.e. the data have a unit root and are non-stationary; the alternative hypothesis is that α < 1, and we look for evidence against the null. Consequently, large p-values (e.g., > 0.05) suggest that differencing is required.
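
The ADF outputs below match tseries::adf.test; a sketch of the two calls:

library(tseries)
adf.test(time_ser[, 1])        # levels: p = 0.3791, cannot reject the unit root
adf.test(diff(time_ser[, 1]))  # first difference: p = 0.01, reject the unit root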

## 
##  Augmented Dickey-Fuller Test
## 
## data:  time_ser[, 1]
## Dickey-Fuller = -2.4706, Lag order = 7, p-value = 0.3791
## alternative hypothesis: stationary

The test statistic (-2.4706) is larger than the 5% critical value, and the p-value of 0.3791 is large, so we fail to reject the null hypothesis: the data are non-stationary. We can difference the data once (first difference) and apply the test again. Occasionally the differenced data will still not appear stationary, and it may then be necessary to difference a second time to obtain a stationary series.

## 
##  Augmented Dickey-Fuller Test
## 
## data:  diff(time_ser[, 1])
## Dickey-Fuller = -7.7858, Lag order = 7, p-value = 0.01
## alternative hypothesis: stationary

This time, the test statistic (-7.7858) is far below the 5% critical value, well within the range we would expect for stationary data, so we can conclude that the differenced data are stationary.

Non Seasonal ARIMA Models

Combining autoregression with differencing and a moving average model yields a non-seasonal AutoRegressive Integrated Moving Average (ARIMA) model. An ARIMA(0,0,0) series is a white noise series.

AR – the p parameter denotes the number of previous time steps the current value of our variable depends on linearly. For example, if at time t our variable X_t depends linearly on X_(t-1) and X_(t-2), we have 2 AR terms and hence p = 2.

MA – the q parameter gives the number of lagged forecast errors (white noise "shocks") entering the model. For example, if the current value depends on the shocks of the previous 2 time steps, w_(t-1) and w_(t-2), then the number of MA terms is q = 2.
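
A sketch of fitting a non-seasonal ARIMA in R; this call reproduces the ARIMA(3,1,2) output shown later in this section (fit is an illustrative name):

fit <- arima(train_set_diff[, 1], order = c(3, 1, 2))  # p = 3 AR terms, d = 1, q = 2 MA terms
fit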

Checking for stationarity of the predictors through plots

Graphically, all the predictors are non-stationary. Another check is the Augmented Dickey-Fuller unit root test, whose null hypothesis is that the series has a unit root, i.e. is non-stationary.

The order of integration is another concept closely associated with stationarity. It gives the number of times we must difference a series to make it stationary. An I(0) series has order 0: it requires no differencing and is already stationary. A series has order 1, or is I(1), if it is non-stationary to begin with and its first difference is stationary. An I(0) series frequently crosses its mean, whereas I(1) and I(2) series can wander far from their mean value and rarely cross it.
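
As a sketch, the forecast package can estimate the order of integration directly:

library(forecast)
ndiffs(time_ser[, 1])  # estimated number of differences required for stationarity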

All the series are stationary after differencing up to 3 times.

Variable                     statistic    p.value  parameter  method                        alternative
Housing_Starts               -7.785817    0.01     7          Augmented Dickey-Fuller Test  stationary
Income                       -15.757528   0.01     7          Augmented Dickey-Fuller Test  stationary
Federal_funds_rate           -7.247981    0.01     7          Augmented Dickey-Fuller Test  stationary
Yield_spread                 -7.480514    0.01     7          Augmented Dickey-Fuller Test  stationary
Securitized_consumer_loans   -12.691731   0.01     7          Augmented Dickey-Fuller Test  stationary
Unemployment_rate            -4.733799    0.01     7          Augmented Dickey-Fuller Test  stationary
CPI                          -15.485510   0.01     7          Augmented Dickey-Fuller Test  stationary
Private_house_completed      -6.393007    0.01     7          Augmented Dickey-Fuller Test  stationary
Mortgage_rate                -7.348586    0.01     7          Augmented Dickey-Fuller Test  stationary
Real_estate_loans            -13.229879   0.01     7          Augmented Dickey-Fuller Test  stationary
House_supply                 -8.092979    0.01     7          Augmented Dickey-Fuller Test  stationary

New dataframe with the differenced variables

Autoregressive Integrated Moving Average with Explanatory Variable (ARIMAX) Model

The standard ARIMA model forecasts solely from the past values of the housing starts and has no covariates. It assumes that future values depend linearly on past values and past stochastic shocks. The ARIMAX model combines ARIMA with multivariate regression: covariates appear on the right-hand side of the model. Below is an ARIMAX model, where z_t is a covariate at time t and a is its coefficient:

x_t = a * z_t + α_1 * x_(t−1) + … + α_p * x_(t−p) + w_t + β_1 * w_(t−1) + … + β_q * w_(t−q)

Where {w_t} is white noise with E(w_t) = 0 and variance σ^2
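
A sketch of the fit whose output follows, using stats::arima with the covariates supplied through xreg (fit_arimax is an illustrative name):

fit_arimax <- arima(train_set_diff[, 1], order = c(2, 1, 3),
                    xreg = train_set_diff[, c(9, 13, 16, 18, 19, 21)])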

## 
## Call:
## arima(x = train_set_diff[, 1], order = c(2, 1, 3), xreg = train_set_diff[, c(9, 
##     13, 16, 18, 19, 21)])
## 
## Coefficients:
##          ar1      ar2      ma1     ma2      ma3    mortgR  income_d2
##       1.5159  -0.8454  -1.9201  1.4622  -0.3158  -48.1082    -0.0054
## s.e.  0.1036   0.1034   0.1205  0.1749   0.0801   15.0446     0.0392
##       sec_conL_d2   CPI_d3  pvt_house_comp_d1  real_estL_d2
##            1.2251  -7.1930             0.1023        0.0096
## s.e.       0.5557   5.5589             0.0453        0.2274
## 
## sigma^2 estimated as 11168:  log likelihood = -2462.35,  aic = 4948.69
## 
## Training set error measures:
##                     ME     RMSE      MAE        MPE     MAPE      MASE
## Training set -4.840134 105.5491 79.58181 -0.8584944 5.634637 0.9146047
##                    ACF1
## Training set 0.00869421
## 
## Call:
## arima(x = test_set_diff[, 1], order = c(2, 1, 3), xreg = test_set_diff[, c(9, 
##     13, 16, 18, 19, 21)])
## 
## Coefficients:
##           ar1      ar2      ma1      ma2     ma3    mortgR  income_d2
##       -0.4512  -0.0537  -0.2699  -0.1091  0.0569  -25.0188     0.0309
## s.e.   1.6926   0.6366   1.6928   1.0203  0.3803   32.7015     0.0431
##       sec_conL_d2    CPI_d3  pvt_house_comp_d1  real_estL_d2
##           -0.0964  -10.1882             0.0746       -1.3180
## s.e.       0.2249    7.3425             0.0520        0.5985
## 
## sigma^2 estimated as 4294:  log likelihood = -566.01,  aic = 1156.02
## 
## Training set error measures:
##                    ME     RMSE      MAE      MPE     MAPE      MASE
## Training set 14.00983 65.20375 50.13054 1.256665 5.168606 0.7151391
##                     ACF1
## Training set -0.05640147

y_t = c − 0.4143 * ε_(t-1) − 56.5011 * mortgR + 0.017 * income_d2 + 0.1658 * sec_conL_d2 − 6.9073 * CPI_d3 + 0.0931 * pvt_house_comp_d1 − 0.0927 * real_estL_d2

ARIMA(0,1,1) is an MA(1) model on the differenced series; the coefficient of ε_(t-1) tells how quickly the forecasts converge to the mean. In the plot of forecasts, once the blue line turns horizontal, the forecasts have converged to the mean.

Plot: ARIMA errors, which should resemble a white noise series.

## 
##  Ljung-Box test
## 
## data:  Residuals from ARIMA(2,1,3)
## Q* = 19.555, df = 13, p-value = 0.1069
## 
## Model df: 11.   Total lags used: 24

H_0: no autocorrelation in the residuals.

The Ljung-Box test result is insignificant (the p-value of 0.1069 is large). Thus, we conclude that the residuals are not serially correlated, which supports accurate coverage of the prediction intervals.

The time plot and histogram of the residuals show that the variance of the residuals is almost constant.
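
The diagnostics above can be reproduced with forecast::checkresiduals, which runs the Ljung-Box test and plots the residuals, their ACF and their histogram (fit_arimax is the hypothetical object from the sketch earlier):

checkresiduals(fit_arimax)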

## [1] 0.3570764
## [1] 0.3439583

Maximum Likelihood Estimation in ARIMA models

R estimates the ARIMA model using maximum likelihood estimation (MLE). This approach finds the parameter values that maximize the probability of obtaining the observed data.

R will report the value of the log likelihood of the data; that is, the logarithm of the probability of the observed data coming from the estimated model. For given values of p,d and q, R will try to maximise the log likelihood when finding parameter estimates.
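
As a sketch, the log likelihood and AIC of a fitted model can be extracted directly (fit is the illustrative ARIMA object from earlier):

logLik(fit)  # log likelihood of the fitted ARIMA
AIC(fit)     # penalised fit: -2 log L + 2 * (number of parameters)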

## 
## Call:
## arima(x = train_set_diff[, 1], order = c(3, 1, 2))
## 
## Coefficients:
##          ar1      ar2      ar3      ma1     ma2
##       1.0488  -0.4336  -0.1437  -1.4270  0.8473
## s.e.  0.1111   0.0777   0.0629   0.0954  0.0705
## 
## sigma^2 estimated as 11872:  log likelihood = -2493.05,  aic = 4998.11
## 
## Training set error measures:
##                     ME     RMSE      MAE        MPE     MAPE      MASE
## Training set -2.885814 108.8237 82.18266 -0.6313271 5.767501 0.9444952
##                      ACF1
## Training set -0.008670964

Again, the ARIMA errors should resemble a white noise series.

## 
##  Ljung-Box test
## 
## data:  Residuals from ARIMA(3,1,2)
## Q* = 28.328, df = 19, p-value = 0.07732
## 
## Model df: 5.   Total lags used: 24

After fitting the ARIMA(4,2,4) model, the estimated model is:

y_t = c + 0.0022 * y_(t-1) + 0.5383 * y_(t-2) − 0.5267 * y_(t-3) − 0.1795 * y_(t-4) − 1.3846 * ε_(t-1) − 0.1697 * ε_(t-2) + 1.3837 * ε_(t-3) − 0.8293 * ε_(t-4)

where ε_t is white noise with standard deviation of sqrt(10340) = 101.6858.

The ARCH-LM test with q lags checks for the presence of ARCH effects at lags 1 to q. It tests whether the coefficients α_1, …, α_q in the equation below are jointly zero:

x^2_t = α_0 + α_1 * x^2_(t-1) + … + α_q * x^2_(t-q) + ϵ_t
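
A hand-rolled sketch of the ARCH-LM test at q = 4 lags, assuming the ARIMA residuals are stored in resid (the object name used in the Box-Ljung output later); Z, lmfit and LM are illustrative names:

q <- 4
r2 <- resid^2                             # squared residuals
Z <- embed(r2, q + 1)                     # column 1 is x^2_t; the rest are its q lags
lmfit <- lm(Z[, 1] ~ Z[, -1])             # regress on the lagged squares
LM <- nrow(Z) * summary(lmfit)$r.squared  # LM statistic, chi-squared with q df under H0
pchisq(LM, df = q, lower.tail = FALSE)    # small p-value: ARCH effects present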

## [1] 0.3922781
## [1] 0.3885831
## ARCH heteroscedasticity test for residuals 
## alternative: heteroscedastic 
## 
## Portmanteau-Q test: 
##      order    PQ  p.value
## [1,]     4  42.2 1.54e-08
## [2,]     8  43.6 6.88e-07
## [3,]    12  78.2 8.94e-12
## [4,]    16 107.1 1.67e-15
## [5,]    20 108.8 3.30e-14
## [6,]    24 126.5 6.66e-16
## Lagrange-Multiplier test: 
##      order    LM  p.value
## [1,]     4 162.4 0.00e+00
## [2,]     8  79.0 2.23e-14
## [3,]    12  50.3 5.46e-07
## [4,]    16  30.0 1.21e-02
## [5,]    20  22.8 2.48e-01
## [6,]    24  16.9 8.15e-01

H_0: no ARCH effects are present.

The Portmanteau-Q statistics are significant at every order shown, and the Lagrange multiplier statistics are significant up to order 16, so we reject the null hypothesis and conclude that ARCH effects are present. This implies that the data are conditionally heteroskedastic.

Generalized Autoregressive Conditional Heteroskedasticity (GARCH)

Generalized Autoregressive Conditional Heteroskedastic (GARCH) models are useful for analysing and forecasting volatility in time series data. A univariate GARCH(1,1) helps model volatility and its clustering.

Financial time series possess the property of volatility clustering, wherein the volatility of the variable changes over time. Technically, this behaviour is called conditional heteroskedasticity. Because ARMA models do not account for volatility clustering (they are not conditionally heteroskedastic), we need ARCH and GARCH models for such predictions.

These include the Autoregressive Conditional Heteroskedastic (ARCH) model and the Generalised Autoregressive Conditional Heteroskedastic (GARCH) model. Different forms of volatility, such as sell-offs during a financial crisis, can cause serially correlated heteroskedasticity. Thus, the time_ser data are conditionally heteroskedastic.

Most GARCH models are estimated by maximum likelihood, typically on returns data such as the relative loss or profit from trading a stock over a day.

Now, I have fit a GARCH-type model under the null hypothesis that the returns are a GARCH white noise process.

The low p-values give reason to reject this hypothesis, so we should add ARMA modelling of the mean as well.

We have fit GARCH models, starting with a GARCH(1,1) model with Gaussian innovations. GARCH(1,1) considers a single autoregressive lag and a single moving average lag. The model is:

ϵ_t = σ_t * w_t

σ^2_t = α_0 + α_1 * ϵ^2_(t−1) + β_1 * σ^2_(t−1)
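
A sketch of the fit, assuming the fGarch package (the garchFit output below matches this call; g11 is an illustrative name):

library(fGarch)
g11 <- garchFit(~ arma(2, 2) + garch(1, 1), data = train_set_diff[, 1], trace = FALSE)
summary(g11)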

Note that we require alpha_1 + beta_1 < 1; otherwise the series becomes unstable.

The persistence of a GARCH model signifies the rate at which large volatilities decay after a shock. The key statistic in GARCH(1,1) is the sum of two parameters: alpha1 and beta1.

Ideally, alpha_1 + beta_1 < 1, in which case volatility shocks decay exponentially. If alpha_1 + beta_1 > 1, the volatility predictions are explosive; if alpha_1 + beta_1 = 1, shocks to volatility persist indefinitely (an integrated GARCH).
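
Using the coefficient estimates from the training-set garchFit output below, the persistence works out to roughly 0.98, so volatility shocks decay slowly but the model remains stable:

alpha1 <- 0.087524
beta1  <- 0.897112
alpha1 + beta1  # 0.984636 < 1: high persistence, slow exponential decay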

In the output from garchFit, the normalized log-likelihood is the log-likelihood divided by n. The AIC and BIC values have also been normalized by dividing by n.

The diagnostics imply that the standardised residuals and their squares are i.i.d. and that the model accommodates the ARCH effects.

H_0: the white noise innovation process is Gaussian.

Only the Ljung-Box test, with a p-value of 0.921266, is consistent with Gaussian innovations; all the other normality tests have very low p-values, so we reject the null hypothesis.

The QQ-plot of the standardised residuals suggests that the fitted standardised skew-t conditional distribution is a decent fit.

Since ARIMA models the data linearly, the forecast interval width is constant, as the model does not incorporate new information or recent changes. To model non-linearity or clusters of volatility, we have to use ARCH/GARCH methods, which reflect the more recent fluctuations in the series. The ACF and PACF of the residuals can confirm whether the residuals are predictable, i.e. not white noise. Residuals of a strict white noise series are i.i.d. normally distributed with zero mean, and the PACF and ACF of the squared residuals have no significant lags. A strict white noise series cannot be predicted, either linearly or non-linearly. Below, the ACF plots of the residuals and squared residuals of the ARIMA(4,2,4) model show a cluster of volatility.

## 
##  Box-Ljung test
## 
## data:  resid
## X-squared = 6.1791, df = 10, p-value = 0.8

H_0: no autocorrelation.

We fail to reject the null hypothesis that the residuals of the ARIMA(4,2,4) model are not serially correlated, i.e. we conclude that the residuals follow a white noise process.

## 
##  Box-Ljung test
## 
## data:  resid^2
## X-squared = 55.432, df = 10, p-value = 2.623e-08

H_0: no autocorrelation.

Here, the tiny p-value means we reject the null hypothesis that the squared residuals of the ARIMA(4,2,4) model are not serially correlated. We conclude that the squared residuals do not follow a white noise process and are autocorrelated, so the time series exhibits conditional heteroskedasticity. Now, I have fit a GARCH(1,1) model.

## 
## Title:
##  GARCH Modelling 
## 
## Call:
##  garchFit(formula = ~arma(2, 2) + garch(1, 1), data = train_set_diff[, 
##     1]) 
## 
## Mean and Variance Equation:
##  data ~ arma(2, 2) + garch(1, 1)
## <environment: 0x563fc25419e8>
##  [data = train_set_diff[, 1]]
## 
## Conditional Distribution:
##  norm 
## 
## Coefficient(s):
##          mu          ar1          ar2          ma1          ma2  
##   8.5617895    1.0000000   -0.0073407   -0.4019233    0.0581610  
##       omega       alpha1        beta1  
## 183.6894964    0.0875240    0.8971120  
## 
## Std. Errors:
##  based on Hessian 
## 
## Error Analysis:
##          Estimate  Std. Error  t value Pr(>|t|)    
## mu       8.561789   12.020834    0.712  0.47631    
## ar1      1.000000          NA       NA       NA    
## ar2     -0.007341          NA       NA       NA    
## ma1     -0.401923          NA       NA       NA    
## ma2      0.058161          NA       NA       NA    
## omega  183.689496   73.978572    2.483  0.01303 *  
## alpha1   0.087524    0.031136    2.811  0.00494 ** 
## beta1    0.897112    0.027595   32.510  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Log Likelihood:
##  -2476.519    normalized:  -6.05506 
## 
## Description:
##  Sun Apr 21 15:50:18 2019 by user:  
## 
## 
## Standardised Residuals Tests:
##                                 Statistic p-Value     
##  Jarque-Bera Test   R    Chi^2  2.840468  0.2416575   
##  Shapiro-Wilk Test  R    W      0.9961509 0.4264433   
##  Ljung-Box Test     R    Q(10)  7.461145  0.6813097   
##  Ljung-Box Test     R    Q(15)  16.91151  0.3241817   
##  Ljung-Box Test     R    Q(20)  21.58734  0.3633155   
##  Ljung-Box Test     R^2  Q(10)  29.78674  0.0009282047
##  Ljung-Box Test     R^2  Q(15)  53.69362  2.95528e-06 
##  Ljung-Box Test     R^2  Q(20)  60.61724  5.713165e-06
##  LM Arch Test       R    TR^2   32.9226   0.0009953035
## 
## Information Criterion Statistics:
##      AIC      BIC      SIC     HQIC 
## 12.14924 12.22775 12.14849 12.18030
## 
## Title:
##  GARCH Modelling 
## 
## Call:
##  garchFit(formula = ~arma(2, 2) + garch(1, 1), data = test_set_diff[, 
##     1]) 
## 
## Mean and Variance Equation:
##  data ~ arma(2, 2) + garch(1, 1)
## <environment: 0x563fc0249488>
##  [data = test_set_diff[, 1]]
## 
## Conditional Distribution:
##  norm 
## 
## Coefficient(s):
##         mu         ar1         ar2         ma1         ma2       omega  
##   25.13945     0.61516     0.36854    -0.49843    -0.17914  2027.88963  
##     alpha1       beta1  
##    0.12774     0.40943  
## 
## Std. Errors:
##  based on Hessian 
## 
## Error Analysis:
##         Estimate  Std. Error  t value Pr(>|t|)  
## mu       25.1395     11.4771    2.190   0.0285 *
## ar1       0.6152      0.3603    1.708   0.0877 .
## ar2       0.3685      0.3565    1.034   0.3012  
## ma1      -0.4984      0.3837   -1.299   0.1939  
## ma2      -0.1791      0.3885   -0.461   0.6447  
## omega  2027.8896   1456.7677    1.392   0.1639  
## alpha1    0.1277      0.1523    0.839   0.4017  
## beta1     0.4094      0.3772    1.085   0.2777  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Log Likelihood:
##  -571.0781    normalized:  -5.598805 
## 
## Description:
##  Sun Apr 21 15:50:18 2019 by user:  
## 
## 
## Standardised Residuals Tests:
##                                 Statistic p-Value   
##  Jarque-Bera Test   R    Chi^2  0.7342273 0.6927309 
##  Shapiro-Wilk Test  R    W      0.9942315 0.9463657 
##  Ljung-Box Test     R    Q(10)  15.53304  0.1138027 
##  Ljung-Box Test     R    Q(15)  24.71459  0.05391022
##  Ljung-Box Test     R    Q(20)  30.02547  0.06944194
##  Ljung-Box Test     R^2  Q(10)  13.09079  0.2186401 
##  Ljung-Box Test     R^2  Q(15)  22.09223  0.1054082 
##  Ljung-Box Test     R^2  Q(20)  28.00402  0.1093043 
##  LM Arch Test       R    TR^2   16.46355  0.1709148 
## 
## Information Criterion Statistics:
##      AIC      BIC      SIC     HQIC 
## 11.35447 11.56035 11.34332 11.43784

## [1] 0.3697003
## [1] 0.3591915