This document examines Extreme Value Theory (EVT) for risk factors (Value at Risk and CVaR) in R. By simulating data from the extreme value distributions (Frechet, Gumbel, and Weibull), checking the simulated data with an Anderson-Darling normality test, and applying the Block Maxima and Peaks-Over-Threshold methods of EVT, this document determines how different methods of calculating risk factors, and different estimation windows, influence model results.
Extreme Value Theory, initially developed by Fisher, Tippett, and Gnedenko, demonstrates that the distribution of the block maxima of a sample of independent, identically distributed (iid) variables converges to one of three extreme value distributions.
Statisticians have recently renewed their interest in modeling extreme values, and Extreme Value Theory has proven useful in a variety of risk-factor settings. After the financial market instabilities of 1999-2008, extreme value analysis gained credibility relative to earlier Value-at-Risk analysis. Extreme values represent the extreme fluctuations of a system, and EVT offers the ability to model the relationships among the probability of extreme events, their magnitude, the damage they cause, and the cost of protection against them.
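As a quick illustration of the theorem (base R only; the exponential data below is a simulation assumed purely for demonstration), the maxima of many iid blocks pile up in a GEV shape, here the Gumbel type, since the exponential distribution lies in the Gumbel domain of attraction:

set.seed(123)
# 1000 block maxima, each the maximum of a block of 100 iid exponentials
maxima <- replicate(1000, max(rexp(100)))
hist(maxima, breaks = 30, main = "Block maxima of iid exponential samples")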
# Set Working Directory
# setwd(" ")
# Required Packages
# if (!require("dplyr")) { install.packages("dplyr"); require("dplyr") }
# if (!require("magrittr")) { install.packages("magrittr"); require("magrittr")}
# if (!require("ggplot2")) { install.packages("ggplot2"); require("ggplot2") }
# if (!require("tseries")) { install.packages("tseries"); require("tseries") }
# if (!require("vars")) { install.packages("vars"); require("vars") }
# if (!require("evd")) { install.packages("evd"); require("evd") }
# if (!require("evir")) { install.packages("evir"); require("evir") }
# if (!require("POT")) { install.packages("POT"); require("POT") }
# if (!require("fBasics")) { install.packages("fBasics"); require("fBasics") }
# if (!require("fExtremes")) { install.packages("fExtremes"); require("fExtremes") }
# if (!require("quantmod")) { install.packages("quantmod"); require("quantmod") }
# if (!require("PerformanceAnalytics")) { install.packages("PerformanceAnalytics"); require("PerformanceAnalytics") }
# if (!require("rugarch")) { install.packages("rugarch"); require("rugarch") }
# if (!require("nortest")) { install.packages("nortest"); require("nortest") }
# if (!require("fGarch")) { install.packages("fGarch"); require("fGarch") }
library(dplyr)
library(magrittr)
library(ggplot2)
library(tseries)
library(vars)
library(evd)
library(evir)
library(POT)
library(fBasics)
library(fExtremes)
library(quantmod)
library(PerformanceAnalytics)
library(rugarch)
library(nortest)
library(fGarch)
Extreme value theorists have developed the Generalized Extreme Value (GEV) distribution, which contains a family of continuous probability distributions: the Gumbel, Frechet, and Weibull distributions (also known as the Type I, II, and III extreme value distributions).
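For orientation, the three types can be written in a single parametric form. The GEV distribution function with location $\mu$, scale $\sigma > 0$, and shape $\xi$ is

$$G_{\xi}(x) = \exp\left\{-\left[1 + \xi \frac{x - \mu}{\sigma}\right]^{-1/\xi}\right\}, \qquad 1 + \xi \frac{x - \mu}{\sigma} > 0,$$

where $\xi > 0$ gives the Frechet type, $\xi < 0$ the Weibull type, and the limit $\xi \to 0$ the Gumbel type, $G_0(x) = \exp\{-e^{-(x-\mu)/\sigma}\}$.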
x <- seq(-10, 10, by=0.1)
# Gumbel density written out directly; Frechet and Weibull densities via
# evir::dgev, where xi > 0 gives the Frechet type and xi < 0 the Weibull type
Gumbel_density <- exp(-x-exp(-x))
Frechet_density <- dgev(x, xi=0.8, mu=0)
Weibull_density <- dgev(x, xi=-0.3, mu=0)
plot(c(x,x,x), c(Gumbel_density,Frechet_density, Weibull_density),
type='n', xlab="x", ylab=" ",las=1)
lines(x, Gumbel_density, type='l', lty=1, col='green')
lines(x, Weibull_density, type='l', lty=2, col='blue')
lines(x, Frechet_density, type='l', lty=3, col='red')
legend('topright', legend=c('Gumbel','Weibull','Frechet'), lty=c(1,2,3), col=c('green','blue','red'))
A Gumbel extreme value distribution dataframe is created from the Gumbel density above. The first six values are displayed, and the Gumbel distribution is plotted.
GumbelDistribution <- data.frame(x, Gumbel_density)
names(GumbelDistribution) <- c("Time", "Observations")
head(GumbelDistribution)
## Time Observations
## 1 -10.0 0
## 2 -9.9 0
## 3 -9.8 0
## 4 -9.7 0
## 5 -9.6 0
## 6 -9.5 0
plot(GumbelDistribution, col="green", pch=19, cex=0.8,
main="Plot of Extreme Value Distribution - Gumbel")
A Frechet extreme value distribution dataframe is created from the Frechet density above. The first six values are displayed, and the Frechet distribution is plotted.
FrechetDistribution <- data.frame(x, Frechet_density)
names(FrechetDistribution) <- c("Time", "Observations")
head(FrechetDistribution)
## Time Observations
## 1 -10.0 0
## 2 -9.9 0
## 3 -9.8 0
## 4 -9.7 0
## 5 -9.6 0
## 6 -9.5 0
plot(FrechetDistribution, col="blue", pch=19, cex=0.8,
main="Plot of Extreme Value Distribution - Frechet")
A Weibull extreme value distribution dataframe is created from the Weibull density above. The first six values are displayed, and the Weibull distribution is plotted.
WeibullDistribution <- data.frame(x, Weibull_density)
names(WeibullDistribution) <- c("Time", "Observations")
head(WeibullDistribution)
## Time Observations
## 1 -10.0 1.919718e-43
## 2 -9.9 2.338991e-42
## 3 -9.8 2.726787e-41
## 4 -9.7 3.042965e-40
## 5 -9.6 3.252047e-39
## 6 -9.5 3.329823e-38
plot(WeibullDistribution, col="red", pch=19, cex=0.8,
main="Plot of Extreme Value Distribution - Weibull")
Before examining Extreme Value Theory estimations of the extreme distributions, a pre-EVT estimation is performed using a vector autoregression (VAR) model from the vars package, standing in for traditional Value-at-Risk analysis. The objective is to compare these baseline results with the EVT estimation results. All three extreme distributions are examined.
A VAR estimation is attempted for the Gumbel extreme value distribution. The Gumbel distribution is converted to a time series and fitted with a second-order VAR model. The following diagnostics are run: an ARCH test, a normality test, and an inspection of the fitted values. The resulting prediction of the next 8 values suggests that traditional VAR-based estimation is inadequate for analyzing Gumbel extreme value distributions.
gumbel_var_ts <- ts(GumbelDistribution)
var.2c <- VAR(gumbel_var_ts, p = 2, type = "const")
# Diagnostic Testing
# Arch Test
arch.test(var.2c)
##
## ARCH (multivariate)
##
## data: Residuals of VAR object var.2c
## Chi-squared = 353.09, df = 45, p-value < 2.2e-16
# Normality Test
normality.test(var.2c)
## $JB
##
## JB-Test (multivariate)
##
## data: Residuals of VAR object var.2c
## Chi-squared = 162900, df = 4, p-value < 2.2e-16
##
##
## $Skewness
##
## Skewness only (multivariate)
##
## data: Residuals of VAR object var.2c
## Chi-squared = 4064.3, df = 2, p-value < 2.2e-16
##
##
## $Kurtosis
##
## Kurtosis only (multivariate)
##
## data: Residuals of VAR object var.2c
## Chi-squared = 158840, df = 2, p-value < 2.2e-16
# fitted values
head(fitted(var.2c))
## Time Observations
## 1 -9.8 0.0002691024
## 2 -9.7 0.0002695511
## 3 -9.6 0.0002699998
## 4 -9.5 0.0002704486
## 5 -9.4 0.0002708973
## 6 -9.3 0.0002713460
# Predict the values for the next 8 days
var.2c.prd <- predict(var.2c, n.ahead = 8, ci = 0.9)
plot(var.2c.prd)
Next, a VAR estimation is attempted for the Frechet extreme value distribution. The Frechet distribution is converted to a time series and fitted with a second-order VAR model. The same diagnostics are run: an ARCH test, a normality test, and an inspection of the fitted values. The resulting prediction of the next 8 values suggests that traditional VAR-based estimation is also inadequate for analyzing Frechet extreme value distributions.
frechet_var_ts <- ts(FrechetDistribution)
var.2c <- VAR(frechet_var_ts, p = 2, type = "const")
# Diagnostic Testing
# Arch Test
arch.test(var.2c)
##
## ARCH (multivariate)
##
## data: Residuals of VAR object var.2c
## Chi-squared = 259.12, df = 45, p-value < 2.2e-16
# Normality Test
normality.test(var.2c)
## $JB
##
## JB-Test (multivariate)
##
## data: Residuals of VAR object var.2c
## Chi-squared = 217100, df = 4, p-value < 2.2e-16
##
##
## $Skewness
##
## Skewness only (multivariate)
##
## data: Residuals of VAR object var.2c
## Chi-squared = 5976.6, df = 2, p-value < 2.2e-16
##
##
## $Kurtosis
##
## Kurtosis only (multivariate)
##
## data: Residuals of VAR object var.2c
## Chi-squared = 211130, df = 2, p-value < 2.2e-16
# fitted values
head(fitted(var.2c))
## Time Observations
## 1 -9.8 0.001076024
## 2 -9.7 0.001078705
## 3 -9.6 0.001081386
## 4 -9.5 0.001084068
## 5 -9.4 0.001086749
## 6 -9.3 0.001089430
# Predict the values for the next 8 days
var.2c.prd <- predict(var.2c, n.ahead = 8, ci = 0.9)
plot(var.2c.prd)
Finally, a VAR estimation is attempted for the Weibull extreme value distribution. The Weibull distribution is converted to a time series and fitted with a second-order VAR model. The same diagnostics are run: an ARCH test, a normality test, and an inspection of the fitted values. The resulting prediction of the next 8 values suggests that traditional VAR-based estimation is inadequate for all three types of extreme value distributions, including the Weibull.
weibull_var_ts <- ts(WeibullDistribution)
var.2c <- VAR(weibull_var_ts, p = 2, type = "const")
# Diagnostic Testing
# Arch Test
arch.test(var.2c)
##
## ARCH (multivariate)
##
## data: Residuals of VAR object var.2c
## Chi-squared = 365.21, df = 45, p-value < 2.2e-16
# Normality Test
normality.test(var.2c)
## $JB
##
## JB-Test (multivariate)
##
## data: Residuals of VAR object var.2c
## Chi-squared = 162820, df = 4, p-value < 2.2e-16
##
##
## $Skewness
##
## Skewness only (multivariate)
##
## data: Residuals of VAR object var.2c
## Chi-squared = 4015.9, df = 2, p-value < 2.2e-16
##
##
## $Kurtosis
##
## Kurtosis only (multivariate)
##
## data: Residuals of VAR object var.2c
## Chi-squared = 158810, df = 2, p-value < 2.2e-16
# fitted values
head(fitted(var.2c))
## Time Observations
## 1 -9.8 0.0002644864
## 2 -9.7 0.0002647294
## 3 -9.6 0.0002649724
## 4 -9.5 0.0002652154
## 5 -9.4 0.0002654584
## 6 -9.3 0.0002657013
# Predict the values for the next 8 days
var.2c.prd <- predict(var.2c, n.ahead = 8, ci = 0.9)
plot(var.2c.prd)
Traditional VAR-based estimation of the three extreme value distributions has proven unreliable. Therefore, Conditional VaR (CVaR) is next reviewed for its analytical capabilities with extreme value distributions. When VaR fails, CVaR is often useful: CVaR is the average of the extreme losses in the “tail” of the distribution, i.e., the mean loss beyond the VaR quantile. The objective is to compare CVaR estimation results with EVT estimation results. Once again, all three extreme distributions are examined.
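As a minimal worked sketch of this idea (the normally distributed returns below are a hypothetical illustration, not data from this document), VaR at the 95% level is the 5% quantile of the returns, and CVaR is the average of the returns at or below that quantile:

set.seed(42)
returns <- rnorm(10000, mean = 0, sd = 0.02)       # hypothetical daily returns
var_95 <- unname(quantile(returns, probs = 0.05))  # 95% VaR: the 5% quantile
cvar_95 <- mean(returns[returns <= var_95])        # 95% CVaR: mean loss beyond VaR
c(VaR = var_95, CVaR = cvar_95)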
A CVaR estimation is attempted for the Gumbel extreme value distribution. The Gumbel distribution time series is processed through CVaR estimation. Two diagnostics are run: a Conditional Drawdown test (the mean of the worst 5% of drawdowns) and an Expected Shortfall (also known as CVaR) test. The result for the Gumbel extreme distribution is that CVaR is capable of determining a Value-at-Risk point in the data.
# The conditional drawdown is the mean of the worst 5% of drawdowns (p = 0.95)
CDD(gumbel_var_ts, weights = NULL, geometric = TRUE, invert = TRUE, p = 0.95)
## Time Observations
## Conditional Drawdown 5% 9.2 0
# Calculate Expected Shortfall (also known as Conditional Value at Risk)
# using the modified Cornish-Fisher calculation for non-normal distributions
ES(ts(GumbelDistribution[1:201, 2, drop = FALSE]), p=.95, method="modified")
## Observations
## ES -0.3634259
A CVaR estimation is attempted for the Frechet extreme value distribution. The Frechet distribution time series is processed through CVaR estimation. The same two diagnostics are run: a Conditional Drawdown test (the mean of the worst 5% of drawdowns) and an Expected Shortfall (also known as CVaR) test. The result for the Frechet extreme distribution is that CVaR is capable of determining a Value-at-Risk point in the data.
# The conditional drawdown is the mean of the worst 5% of drawdowns (p = 0.95)
CDD(frechet_var_ts, weights = NULL, geometric = TRUE, invert = TRUE, p = 0.95)
## Time Observations
## Conditional Drawdown 5% 9.2 0
# Calculate Expected Shortfall (also known as Conditional Value at Risk)
# using the modified Cornish-Fisher calculation for non-normal distributions
ES(ts(FrechetDistribution[1:201, 2, drop = FALSE]), p=.95, method="modified")
## Observations
## ES -0.6492217
A CVaR estimation is attempted for the Weibull extreme value distribution. The Weibull distribution time series is processed through CVaR estimation. The same two diagnostics are run: a Conditional Drawdown test (the mean of the worst 5% of drawdowns) and an Expected Shortfall (also known as CVaR) test. The result for the Weibull extreme distribution is that CVaR is capable of determining a Value-at-Risk point in the data.
# The conditional drawdown is the mean of the worst 5% of drawdowns (p = 0.95)
CDD(weibull_var_ts, weights = NULL, geometric = TRUE, invert = TRUE, p = 0.95)
## Time Observations
## Conditional Drawdown 5% 9.2 0
# Calculate Expected Shortfall (also known as Conditional Value at Risk)
# using the modified Cornish-Fisher calculation for non-normal distributions
ES(ts(WeibullDistribution[1:201, 2, drop = FALSE]), p=.95, method="modified")
## Observations
## ES -0.4096232
The Block Maxima approach is the most basic method of Extreme Value Theory. It divides the observation period into non-overlapping periods of equal size and restricts attention to the maximum observation in each period. Under domain-of-attraction conditions, the resulting observations approximately follow an extreme value distribution $G_{\xi}$ for some real shape parameter $\xi$. Parametric statistical methods for the extreme value distributions are then applied to those observations.
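The mechanics can be sketched in a few lines; the simulated iid data and the block size of 50 below are assumptions chosen purely for illustration (fgev comes from the evd package loaded above):

set.seed(1)
z <- rnorm(1000)                                # simulated iid observations
blocks <- split(z, ceiling(seq_along(z) / 50))  # non-overlapping blocks of 50
maxima <- sapply(blocks, max)                   # keep each block's maximum
fgev(maxima)                                    # fit the GEV to the block maxima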
Before creating a Block Maxima estimation of the Gumbel extreme value data, a histogram of the distribution’s observations is presented and an Anderson-Darling normality test is applied to the data. The Block Maxima estimation is then made by fitting the GEV distribution to the Gumbel block-maxima data, the results are displayed, and the density of the Gumbel data is plotted. The extremes are extracted and plotted, and then presented in a final histogram.
hist(GumbelDistribution$Observations)
cat("Anderson - Darling Test of Gumbel Extreme Value Distribution")
## Anderson - Darling Test of Gumbel Extreme Value Distribution
ad.test(GumbelDistribution$Observations)
##
## Anderson-Darling normality test
##
## data: GumbelDistribution$Observations
## A = 39.336, p-value < 2.2e-16
cat("Fitting the EV distribution to the block maxima data")
## Fitting the EV distribution to the block maxima data
GumbelGEV <- gev(as.numeric(GumbelDistribution$Observations[1:200]))
fgev(GumbelGEV$data, std.err = FALSE)
##
## Call: fgev(x = GumbelGEV$data, std.err = FALSE)
## Deviance: -1221.733
##
## Estimates
## loc scale shape
## 0.0004608 0.0014843 1.0017600
##
## Optimization Information
## Convergence: successful
## Function Evaluations: 98
## Gradient Evaluations: 14
plot(density(GumbelDistribution$Observations), xlab="", main="Maxima of Gumbel Distribution", lwd=2)
extremes <- GumbelDistribution %>%
group_by(Observations) %>%
summarize(max_x=max(Time), min_x=min(Time), n=n())
extremes %>%
ggplot(aes(x=max_x, y=Observations)) +
geom_point() +
xlab('Time') +
ylab('Observations') +
ggtitle("Gumbel Distribution Extremes")
extremes$max_x %>%
hist(main="Histogram of Gumbel Distribution Extremes")
Before creating a Block Maxima estimation of the Frechet extreme value data, a histogram of the distribution’s observations is presented and an Anderson-Darling normality test is applied to the data. The Block Maxima estimation is then made by fitting the GEV distribution to the Frechet block-maxima data, the results are displayed, and the density of the Frechet data is plotted. The extremes are extracted and plotted, and then presented in a final histogram.
hist(FrechetDistribution$Observations)
cat("Anderson - Darling Test of Frechet Extreme Value Distribution")
## Anderson - Darling Test of Frechet Extreme Value Distribution
ad.test(FrechetDistribution$Observations)
##
## Anderson-Darling normality test
##
## data: FrechetDistribution$Observations
## A = 37.254, p-value < 2.2e-16
cat("Fitting the EV distribution to the block maxima data")
## Fitting the EV distribution to the block maxima data
FrechetGEV <- gev(as.numeric(FrechetDistribution$Observations[1:200]))
fgev(FrechetGEV$data, std.err = FALSE)
##
## Call: fgev(x = FrechetGEV$data, std.err = FALSE)
## Deviance: -1082.196
##
## Estimates
## loc scale shape
## 0.002289 0.003840 1.196843
##
## Optimization Information
## Convergence: successful
## Function Evaluations: 351
## Gradient Evaluations: 26
plot(density(FrechetDistribution$Observations), xlab="", main="Maxima of Frechet Distribution", lwd=2)
extremes <- FrechetDistribution %>%
group_by(Observations) %>%
summarize(max_x=max(Time), min_x=min(Time), n=n())
extremes %>%
ggplot(aes(x=max_x, y=Observations)) +
geom_point() +
xlab('Time') +
ylab('Observations') +
ggtitle("Frechet Distribution Extremes")
extremes$max_x %>%
hist(main="Histogram of Frechet Distribution Extremes")
Before creating a Block Maxima estimation of the Weibull extreme value data, a histogram of the distribution’s observations is presented and an Anderson-Darling normality test is applied to the data. The Block Maxima estimation is then made by fitting the GEV distribution to the Weibull block-maxima data, the results are displayed, and the density of the Weibull data is plotted. The extremes are extracted and plotted, and then presented in a final histogram.
hist(WeibullDistribution$Observations)
cat("Anderson - Darling Test of Weibull Extreme Value Distribution")
## Anderson - Darling Test of Weibull Extreme Value Distribution
ad.test(WeibullDistribution$Observations)
##
## Anderson-Darling normality test
##
## data: WeibullDistribution$Observations
## A = 44.371, p-value < 2.2e-16
cat("Fitting the EV distribution to the block maxima data")
## Fitting the EV distribution to the block maxima data
WeibullGEV <- gev(as.numeric(WeibullDistribution$Observations[1:200]))
fgev(WeibullGEV$data)
##
## Call: fgev(x = WeibullGEV$data)
## Deviance: -1754.925
##
## Estimates
## loc scale shape
## 0.0008346 0.0019415 2.0281387
##
## Standard Errors
## loc scale shape
## 1.999e-06 1.999e-06 1.592e-01
##
## Optimization Information
## Convergence: successful
## Function Evaluations: 174
## Gradient Evaluations: 22
plot(density(WeibullDistribution$Observations), xlab="", main="Maxima of Weibull Distribution", lwd=2)
extremes <- WeibullDistribution %>%
group_by(Observations) %>%
summarize(max_x=max(Time), min_x=min(Time), n=n())
extremes %>%
ggplot(aes(x=max_x, y=Observations)) +
geom_point() +
xlab('Time') +
ylab('Observations') +
ggtitle("Weibull Distribution Extremes")
extremes$max_x %>%
hist(main="Histogram of Weibull Distribution Extremes")
In the Peaks-Over-Threshold (POT) approach of EVT, the observations that exceed a certain high threshold are selected. The probability distribution of the exceedances over that threshold is approximately a generalized Pareto distribution (GPD).
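A minimal sketch of these mechanics follows; the simulated data and the 95th-percentile threshold are assumptions chosen purely for illustration (fitgpd comes from the POT package loaded above):

set.seed(2)
z <- abs(rnorm(2000))                 # simulated observations
u <- quantile(z, probs = 0.95)        # a high threshold: the 95th percentile
length(z[z > u])                      # number of exceedances retained
fitgpd(z, thresh = u, est = "mle")    # fit the GPD to the exceedances of u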
To create a Peaks-Over-Threshold estimation of the Gumbel distribution, a threshold is first determined using the POT package in R. A maximum likelihood estimate (MLE) is then created by fitting a generalized Pareto distribution, and a 95% confidence interval for the scale parameter is derived from the fit. The resulting estimation is diagnosed graphically by plotting the fitted model. Finally, predictions based on the data are made using the Generalized Auto-Regressive Conditional Heteroskedasticity (GARCH) method.
# Determine threshold
par(mfrow=c(1,2))
tcplot(GumbelDistribution$Observations, u.range = c(0.3, 0.35))
# Fit the Generalized Pareto Distribution
mle <- fitgpd(GumbelDistribution$Observations, thresh = .35, shape = 0, est = "mle")
# Confidence Intervals
gpd.fiscale(mle, conf = 0.95)
## conf.inf.scale conf.sup.scale
## 0.002667761 0.022090121
# graphic diagnostics for the fitted model
par(mfrow=c(1,2))
plot(mle, npy = 1, which=1)
plot(mle, npy = 1, which=4)
# Fit models with Generalized Auto-Regressive Conditional Heteroskedasticity (GARCH)
gumbel.fitted.model <- garchFit(formula=~ arma(1, 0) + garch(1, 1),
data=GumbelDistribution$Observations,
cond.dist="norm", trace=FALSE)
# Produce forecasts
model.forecast <- fGarch::predict(object=gumbel.fitted.model, n.ahead=1)
cat("Forecasts for Gumbel Distribution observations")
## Forecasts for Gumbel Distribution observations
model.forecast
## meanForecast meanError standardDeviation
## 1 3.453016e-05 0.0001002756 0.0001002756
A Peaks-Over-Threshold estimation of the Frechet distribution follows the same steps: a threshold is examined with the POT package, a maximum likelihood estimate (MLE) is created by fitting a generalized Pareto distribution, and a 95% confidence interval for the scale parameter is derived from the fit. The resulting estimation is diagnosed graphically by plotting the fitted model. Finally, predictions based on the data are made using the GARCH method.
# Determine threshold
par(mfrow=c(1,2))
tcplot(FrechetDistribution$Observations, u.range = c(0.4, 0.45))
# Fit the Generalized Pareto Distribution
mle <- fitgpd(FrechetDistribution$Observations, thresh = .35, shape = 0, est = "mle")
# Confidence Intervals
gpd.fiscale(mle, conf = 0.95)
## conf.inf.scale conf.sup.scale
## 0.02435046 0.13408869
# graphic diagnostics for the fitted model
par(mfrow=c(1,2))
plot(mle, npy = 1, which=1)
plot(mle, npy = 1, which=4)
# Fit models with Generalized Auto-Regressive Conditional Heteroskedasticity (GARCH)
frechet.fitted.model <- garchFit(formula=~ arma(1, 0) + garch(1, 1),
data=FrechetDistribution$Observations,
cond.dist="norm", trace=FALSE)
# Produce forecasts
model.forecast <- fGarch::predict(object=frechet.fitted.model, n.ahead=1)
cat("Forecasts for Frechet Distribution observations")
## Forecasts for Frechet Distribution observations
model.forecast
## meanForecast meanError standardDeviation
## 1 0.007469903 0.00117062 0.00117062
To create a Peaks-Over-Threshold estimation of the Weibull distribution, a threshold is determined using the POT package in R. A maximum likelihood estimate (MLE) is then created by fitting a generalized Pareto distribution, and a 95% confidence interval for the scale parameter is derived from the fit. The resulting estimation is diagnosed graphically by plotting the fitted model. Finally, predictions based on the data are made using the GARCH method.
# Determine threshold
par(mfrow=c(1,2))
tcplot(WeibullDistribution$Observations, u.range = c(0.3, 0.35))
# Fit the Generalized Pareto Distribution
mle <- fitgpd(WeibullDistribution$Observations, thresh = .35, shape = 0, est = "mle")
# Confidence Intervals
gpd.fiscale(mle, conf = 0.95)
## conf.inf.scale conf.sup.scale
## 0.008982837 0.037852127
# graphic diagnostics for the fitted model
par(mfrow=c(1,2))
plot(mle, npy = 1, which=1)
plot(mle, npy = 1, which=4)
# Fit models with Generalized Auto-Regressive Conditional Heteroskedasticity (GARCH)
weibull.fitted.model <- garchFit(formula=~ arma(1, 0) + garch(1, 1),
data=WeibullDistribution$Observations,
cond.dist="norm", trace=FALSE)
# Produce forecasts
model.forecast <- fGarch::predict(object=weibull.fitted.model, n.ahead=1)
cat("Forecasts for Weibull Distribution observations")
## Forecasts for Weibull Distribution observations
model.forecast
## meanForecast meanError standardDeviation
## 1 1.795185e-06 0.0001072333 0.0001072333