Value at Risk (VaR) is a measure of risk that focuses on the potential losses of an investment over a short period of time. Investment firms and banks constantly monitor the VaR of their portfolios in order to be prepared in case a bad scenario materializes. VaR is usually estimated daily using a small probability such as 5%, 1%, or 0.1%.
Very bad short-term scenarios have a certain probability p of happening in one day. For example, if the probability of losing 10% or MORE of an investment in one single day is 5%, then we say that the 5% Value at Risk (5%VaR) is equal to 10%. We can also express VaR in money if we know the initial investment. Using the same example, if my initial investment is $1,000,000 and the 5%VaR = 10%, then the 5%VaR is equal to $100,000.00. VaR can be reported as a positive or negative amount; this is just a matter of preference. But we have to be aware that VaR is a measure of potential loss, in % or in $, with a certain probability.
p%VaR is the MINIMUM % loss of an investment with a probability of p. In other words, there is a probability p that I can lose %VaR or MORE. Another way to define p%VaR is the following: p%VaR is the MAXIMUM % loss of an investment with a probability of (1-p). In other words, there is a probability (1-p) that I can have a return of %VaR or HIGHER (a less negative or positive return).
Then, we can say that p%VaR is the best scenario that can happen once the very bad scenario with probability p has happened. Equivalently, p%VaR is the worst scenario that can happen when the bad scenario does not happen, which occurs with probability (1-p).
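To make this definition concrete, here is a minimal sketch of how a historical p%VaR can be read off as the p-quantile of a return series. The returns here are simulated, so the numbers are purely illustrative:
# Illustrative only: simulated daily returns (not the S&P500 data used below)
set.seed(1)
sim_returns <- rnorm(1000, mean = 0.0005, sd = 0.015)
p <- 0.05
VaR_p <- quantile(sim_returns, probs = p) # with probability p we lose this much or more in a day
VaR_p
1e6 * VaR_p # the same VaR expressed in dollars for a $1,000,000 investment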
Let’s illustrate VaR using real data: the daily returns of the US S&P500 index from January 2019 to April 21, 2021.
# Set the working directory
setwd("~/Desktop/Financial Econometrics II ")
library(quantmod)
SPindex<-getSymbols("^GSPC", from="2019-01-01", to="2021-04-21",auto.assign=FALSE)[,6]
SPreturn <- na.omit(diff(log(SPindex)))
names(SPreturn)<-c("return")
hist(SPreturn, breaks=40)
The height of each bar represents the frequency, in number of days, with which a specific daily return range has occurred.
If the daily return behaved like a normally distributed variable, then on about 2.5% of the days the S&P500 return would be less than its mean minus 2 times its standard deviation. Remember that for a normally distributed variable, the range of plus/minus 2 standard deviations around the mean contains about 95% of the values.
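We can verify this 2.5% figure directly with the normal CDF (a quick check, not part of the original calculation):
pnorm(-2) # ~0.0228: probability of being more than 2 standard deviations below the mean
1 - 2 * pnorm(-2) # ~0.9545: probability of being within plus/minus 2 standard deviations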
# Number of days for the S&P returns:
N = nrow(SPreturn)
N
## [1] 578
# 2.5% of this number of days is about:
n25 <- 0.025*nrow(SPreturn)
n25
## [1] 14.45
Now we can look at the real data and see on how many days the S&P500 return has been below its mean minus 2 times its standard deviation (volatility) of daily returns; in other words, we identify how many days the S&P500 has delivered extreme returns below the mean minus 2 times its volatility:
# We calculate the mean and standard deviation of returns:
mean_return=mean(SPreturn$return)
mean_return
## [1] 0.0008636299
volatility = sd(SPreturn$return)
volatility
## [1] 0.01571448
# We calculate the expected minimum return of the 95% C.I.
# assuming normality (the 2.5 percentile)
min95ci_returns = mean_return-2*volatility
min95ci_returns
## [1] -0.03056534
# We select the days with returns less than the minimum of the 95% C.I.
extremes <- SPreturn[SPreturn$return<min95ci_returns,]
extremes
## return
## 2020-02-24 -0.03408808
## 2020-02-25 -0.03074790
## 2020-02-27 -0.04516814
## 2020-03-05 -0.03451078
## 2020-03-09 -0.07901041
## 2020-03-11 -0.05010289
## 2020-03-12 -0.09994485
## 2020-03-16 -0.12765220
## 2020-03-18 -0.05322227
## 2020-03-20 -0.04432762
## 2020-03-27 -0.03426785
## 2020-04-01 -0.04514636
## 2020-04-21 -0.03115512
## 2020-06-11 -0.06075269
## 2020-09-03 -0.03575759
## 2020-10-28 -0.03592554
nextremes = nrow(extremes)
# Number of extreme returns:
nextremes
## [1] 16
percextremes = (nextremes / N) * 100
percextremes
## [1] 2.768166
In this case, we have 16 extreme days with returns below the minimum of the 95% C.I. of daily returns. It seems that we have had more extreme returns than expected: they represent about 2.768166% of the total number of days, which is greater than the expected 2.5%.
Then, 2.768166% of the time, the S&P500 has experienced a loss of 3.0565337% or MORE. If we believe that in the future the S&P500 will behave in a similar manner, then we can say that there is a probability of 2.768166% that I can lose 3.0565337% or MORE in only ONE DAY.
We can also say that there is a probability of 97.231834% (100% - 2.768166%) that the MAXIMUM loss I can experience in ONE DAY is 3.0565337%. Then, the 2.768166%VaR of the daily S&P500 returns is -3.0565337%.
VaR1 <- min95ci_returns*100
VaR1
## [1] -3.056534
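If we want this VaR in money terms, we can apply the percentage VaR to a hypothetical position, for example a $1,000,000 investment:
# Dollar VaR for a hypothetical $1,000,000 position
1e6 * min95ci_returns # about -$30,565 in one day, with probability ~2.77%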
In turbulent periods in financial markets it is likely that a bad scenario with probability p will happen. Once this scenario happens, the p%VaR is the best outcome, so by itself the p%VaR is not a very realistic loss estimate. We need to be more conservative and estimate what the expected loss might be once the bad scenario with probability p happens. This conservative estimate is given by the Expected Shortfall.
Expected Shortfall (ES) with a probability p refers to the expected average loss of an investment in ONE period (in this case, one day) given that the bad scenario with probability p happens. Its absolute value is therefore always greater than or equal to the absolute value of p%VaR. Let's calculate the ES using the previous example of the S&P500 daily returns.
To estimate the ES at the 2.7681661% probability, we just calculate the average of the days when the S&P500 returns have been less than -3.0565337%. We can do this as follows:
extremes
## return
## 2020-02-24 -0.03408808
## 2020-02-25 -0.03074790
## 2020-02-27 -0.04516814
## 2020-03-05 -0.03451078
## 2020-03-09 -0.07901041
## 2020-03-11 -0.05010289
## 2020-03-12 -0.09994485
## 2020-03-16 -0.12765220
## 2020-03-18 -0.05322227
## 2020-03-20 -0.04432762
## 2020-03-27 -0.03426785
## 2020-04-01 -0.04514636
## 2020-04-21 -0.03115512
## 2020-06-11 -0.06075269
## 2020-09-03 -0.03575759
## 2020-10-28 -0.03592554
ES <- mean(extremes$return) * 100
ES
## [1] -5.261127
The Expected Shortfall at the 2.7681661% probability is the average of these extreme returns, which is -5.261127%.
Research what kurtosis is and briefly explain it. How can you use kurtosis to identify a variable with an extreme (fat-tail) probability distribution?
Kurtosis is a measure of the "tailedness" of a probability distribution: it describes the shape of the distribution, including the peakedness around the mean and, more importantly, the heaviness of its tails. A variable whose excess kurtosis is well above 0 has a fat-tail distribution.
Calculate the kurtosis of S&P500 daily returns. Do you see a sign of a fat-tail distribution of returns?
Yes: the excess kurtosis is 15.97, far above the value of 0 for a normal distribution, which means that the data present fat tails.
# The kurtosis() function below does not run unless the timeSeries package is loaded.
library(timeSeries)
k_SP <- kurtosis(SPreturn$return)
k_SP
## [1] 15.97941
## attr(,"method")
## [1] "excess"
If daily returns were normally distributed, what would be the 1% VaR of $1 million for the next future day after the sample? Hint: the function qnorm(0.01) gives you the z value, which represents the # of standard deviations you need to move from the mean to the left to arrive at the 1st percentile.
# "z" is the number of standard deviations I have to move below the mean to reach the 1st percentile.
z1 = qnorm(0.01)
VaR_1 <- (mean_return + z1*volatility)* 100
VaR_1
## [1] -3.569372
INTERPRETATION
Then, 1% of the time, the S&P500 would experience a loss of 3.569372% or MORE. If we believe that in the future the S&P500 will behave in a similar manner, then we can say that there is a probability of 1% that I can lose 3.569372% or MORE in only ONE DAY.
We can also say that there is a probability of 99% (100% - 1%) that the MAXIMUM loss I can experience in ONE DAY is 3.569372%. Then, the 1%VaR of daily S&P500 returns is -3.569372%.
If daily returns were normally distributed, estimate the Expected Shortfall (ES) of the previous 1%VaR for the next future day. Hint: Value at Risk gives you the minimum loss given a certain probability. Expected Shortfall provides a more conservative measure of risk, since it is the expected loss in case you are in the 1% probability of "bad times". You can approximate ES by calculating the VaR at a probability of (p/2) to get a middle value in the loss area. In this case, 1%ES ≈ 0.5%VaR.
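For reference, the normal distribution also has an exact closed-form expected shortfall, ES_p = mu - sigma * dnorm(qnorm(p)) / p. A quick sketch, using the mean_return and volatility computed above, to compare against the p/2 shortcut used below:
# Exact lower-tail ES for a normal distribution (closed form), as a cross-check
p <- 0.01
ES_exact <- (mean_return - volatility * dnorm(qnorm(p)) / p) * 100
ES_exact # about -4.1%, somewhat more negative than the 0.5%VaR approximation below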
# We approximate the 1% ES with the 0.5% VaR (the 0.5 percentile under normality)
z2 = qnorm(0.01/2)
ES_1 <- (mean_return + z2*volatility)*100
ES_1
## [1] -3.96142
INTERPRETATION
The Expected Shortfall at the 1% probability, approximated by the 0.5%VaR, is -3.96142%. As expected, the 1%ES is more negative than the 1%VaR (-3.96142% vs -3.569372%).
Considering that the distribution of real returns is closer to a fat-tail distribution (with more-than-normal extreme values), it is better to use the t distribution rather than the normal distribution. There is a known relationship between the kurtosis and the degrees of freedom of the t distribution that can be used to model fat-tail distributions.
A t-distribution with low degrees of freedom shows fatter tails compared with a t distribution with higher degrees of freedom (or compared to a normal distribution).
To estimate the degrees of freedom of a t distribution that models a fat-tail distribution, we use the (excess) kurtosis of the distribution as follows: degrees of freedom (df) = 6/Kurtosis + 4.
df = (6 /15.97941) + 4
df
## [1] 4.375483
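As a sanity check, a t distribution with df > 4 degrees of freedom has excess kurtosis 6/(df - 4), so we can invert the calculation and recover the kurtosis we started from:
# Theoretical excess kurtosis of a t distribution with df degrees of freedom
6 / (df - 4) # ~15.98, the sample excess kurtosis used above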
You have to estimate the degrees of freedom and then use the t distribution to estimate the 1% VaR of $1 million for the next future day after the sample. Compare both estimations of VaR, the one calculated assuming a normal distribution and this one. Briefly report this comparison.
t = qt(0.01,df)
pVaR = (mean_return + t * volatility)*100
pVaR
## [1] -5.532152
INTERPRETATION of 1% VaR
Then, 1% of the time, the S&P500 would experience a loss of 5.532152% or MORE. If we believe that in the future the S&P500 will behave in a similar manner, then we can say that there is a probability of 1% that I can lose 5.532152% or MORE in only ONE DAY.
We can also say that there is a probability of 99% (100% - 1%) that the MAXIMUM loss I can experience in ONE DAY is 5.532152%. Then, the 1%VaR of daily S&P500 returns under the t distribution is -5.532152%. Compared with the 1%VaR of -3.569372% obtained under normality, the t-distribution estimate is considerably more negative, reflecting the fat tails of the return distribution.
Work with the same dataset (daily S&P500 returns).
Run a GARCH(1,1) model. INTERPRET the model and briefly explain your results.
library(fGarch)
garch.fit <- garchFit(~garch(1,1), data = SPreturn$return, trace = F)
summary(garch.fit)
##
## Title:
## GARCH Modelling
##
## Call:
## garchFit(formula = ~garch(1, 1), data = SPreturn$return, trace = F)
##
## Mean and Variance Equation:
## data ~ garch(1, 1)
## <environment: 0x7fa5533f8750>
## [data = SPreturn$return]
##
## Conditional Distribution:
## norm
##
## Coefficient(s):
## mu omega alpha1 beta1
## 1.3127e-03 5.2453e-06 3.0325e-01 6.9993e-01
##
## Std. Errors:
## based on Hessian
##
## Error Analysis:
## Estimate Std. Error t value Pr(>|t|)
## mu 1.313e-03 3.103e-04 4.230 2.34e-05 ***
## omega 5.245e-06 1.694e-06 3.097 0.00196 **
## alpha1 3.033e-01 5.460e-02 5.554 2.79e-08 ***
## beta1 6.999e-01 3.938e-02 17.775 < 2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Log Likelihood:
## 1836.4 normalized: 3.177163
##
## Description:
## Thu Apr 29 14:51:41 2021 by user:
##
##
## Standardised Residuals Tests:
## Statistic p-Value
## Jarque-Bera Test R Chi^2 206.0941 0
## Shapiro-Wilk Test R W 0.9593287 1.507463e-11
## Ljung-Box Test R Q(10) 20.22171 0.0272241
## Ljung-Box Test R Q(15) 22.49806 0.09539405
## Ljung-Box Test R Q(20) 28.27344 0.1030829
## Ljung-Box Test R^2 Q(10) 2.923816 0.9831605
## Ljung-Box Test R^2 Q(15) 7.782097 0.9322211
## Ljung-Box Test R^2 Q(20) 12.32116 0.9045786
## LM Arch Test R TR^2 4.038401 0.9827334
##
## Information Criterion Statistics:
## AIC BIC SIC HQIC
## -6.340485 -6.310315 -6.340580 -6.328721
We can also run the same GARCH(1,1) model using the ugarchfit function from the rugarch package.
library(FinTS)
library(rugarch)
# GARCH(1,1) specification
Espec2=ugarchspec(variance.model= list(model= "sGARCH", garchOrder= c(1, 1),
submodel= NULL, external.regressors= NULL, variance.targeting= FALSE),
mean.model= list(armaOrder= c(0, 0), include.mean= TRUE, archm= FALSE,
archpow= 0, arfima= FALSE, external.regressors= NULL, archex= FALSE),
distribution.model= "norm", start.pars= list(), fixed.pars= list())
# Estimate the GARCH(1,1)
garch1<- ugarchfit(spec=Espec2, data=SPreturn$return)
garch1
##
## *---------------------------------*
## * GARCH Model Fit *
## *---------------------------------*
##
## Conditional Variance Dynamics
## -----------------------------------
## GARCH Model : sGARCH(1,1)
## Mean Model : ARFIMA(0,0,0)
## Distribution : norm
##
## Optimal Parameters
## ------------------------------------
## Estimate Std. Error t value Pr(>|t|)
## mu 0.001309 0.000319 4.10148 0.000041
## omega 0.000005 0.000006 0.82425 0.409801
## alpha1 0.298173 0.059558 5.00644 0.000001
## beta1 0.700826 0.061089 11.47229 0.000000
##
## Robust Standard Errors:
## Estimate Std. Error t value Pr(>|t|)
## mu 0.001309 0.000509 2.57407 0.010051
## omega 0.000005 0.000031 0.17401 0.861855
## alpha1 0.298173 0.159277 1.87204 0.061200
## beta1 0.700826 0.238665 2.93644 0.003320
##
## LogLikelihood : 1836.377
##
## Information Criteria
## ------------------------------------
##
## Akaike -6.3404
## Bayes -6.3102
## Shibata -6.3405
## Hannan-Quinn -6.3286
##
## Weighted Ljung-Box Test on Standardized Residuals
## ------------------------------------
## statistic p-value
## Lag[1] 1.433 0.2313
## Lag[2*(p+q)+(p+q)-1][2] 1.867 0.2858
## Lag[4*(p+q)+(p+q)-1][5] 2.253 0.5599
## d.o.f=0
## H0 : No serial correlation
##
## Weighted Ljung-Box Test on Standardized Squared Residuals
## ------------------------------------
## statistic p-value
## Lag[1] 0.2289 0.6323
## Lag[2*(p+q)+(p+q)-1][5] 0.8167 0.8996
## Lag[4*(p+q)+(p+q)-1][9] 1.5227 0.9537
## d.o.f=2
##
## Weighted ARCH LM Tests
## ------------------------------------
## Statistic Shape Scale P-Value
## ARCH Lag[3] 1.696e-05 0.500 2.000 0.9967
## ARCH Lag[5] 1.088e+00 1.440 1.667 0.7071
## ARCH Lag[7] 1.312e+00 2.315 1.543 0.8582
##
## Nyblom stability test
## ------------------------------------
## Joint Statistic: 0.872
## Individual Statistics:
## mu 0.02911
## omega 0.12447
## alpha1 0.15201
## beta1 0.25776
##
## Asymptotic Critical Values (10% 5% 1%)
## Joint Statistic: 1.07 1.24 1.6
## Individual Statistic: 0.35 0.47 0.75
##
## Sign Bias Test
## ------------------------------------
## t-value prob sig
## Sign Bias 1.6711 0.09524 *
## Negative Sign Bias 0.6838 0.49436
## Positive Sign Bias 0.4015 0.68821
## Joint Effect 4.4676 0.21520
##
##
## Adjusted Pearson Goodness-of-Fit Test:
## ------------------------------------
## group statistic p-value(g-1)
## 1 20 34.60 0.015619
## 2 30 51.17 0.006737
## 3 40 56.74 0.032961
## 4 50 76.33 0.007485
##
##
## Elapsed time : 0.2390001
We can plot the volatility estimations over time according to the GARCH(1,1) model:
plot(garch1,which=3)
This plot is very similar to the plot we did with rolling standard deviations. The difference is that in this plot, the volatility is estimated according to the GARCH(1,1) model, instead of using rolling windows.
I indicated which=3 in the plot to see the conditional volatility. There are 12 different plots we can do. You can run in your console: plot(garch1), and you can select which plot you want.
INTERPRETATION OF THE MODEL
The beta1 coefficient is 0.700826 and it is significantly different from zero. This means that about 70.08% of yesterday's return variance is passed to today's return variance, in addition to the impact of yesterday's squared error. Since this coefficient is about 0.7, this is a sign of volatility clustering.
In this case, after accounting for yesterday's variance, yesterday's squared error impacts today's return variance in a positive and significant way, but only about 29.82% of yesterday's squared error is passed to today's variance.
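We can quantify this persistence directly from the fitted coefficients. A quick sketch using the rugarch fit garch1 from above; the half-life formula, ln(0.5)/ln(alpha1+beta1), is the standard persistence measure for a GARCH(1,1):
# Volatility persistence implied by the fitted GARCH(1,1)
cf <- coef(garch1)
persistence <- cf["alpha1"] + cf["beta1"]
persistence # ~0.999: shocks to volatility decay very slowly
log(0.5) / log(persistence) # implied half-life of a volatility shock, in days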
We can do a prediction with the GARCH(1,1) model for the next 10 daily returns, and we will see that the return predictions are the same for all future days and equal to the mu coefficient. We do this by applying the predict function to the object garch.fit, which was generated with the garchFit function (from the fGarch package):
garch.fit.pred<-predict(garch.fit, n.ahead=10, plot=TRUE)
The predict function plots the historical values of the daily returns of the series (the S&P500 returns) and, for the future, it plots the predicted returns for the next days. We see the same prediction for every horizon, equal to the mu coefficient. We also see the 95% confidence interval for the future daily returns of the S&P500, plotted in blue.
garch.fit.pred$standardDeviation
## [1] 0.007937469 0.008273401 0.008597223 0.008910258 0.009213609 0.009508206
## [7] 0.009794842 0.010074198 0.010346868 0.010613369
We can also use the rugarch package to do the predictions. We use the ugarchforecast function, applied to the fitted object returned by ugarchfit:
garch1.prediction <- ugarchforecast(garch1,n.ahead= 10)
garch1.prediction is a special R class, a uGARCHforecast object, which stores the information about the GARCH prediction.
For this type of R object, we use the @ character to access each element stored in it.
The forecast for the daily return is stored in:
garch1.prediction@forecast$seriesFor
## 2021-04-20
## T+1 0.00130949
## T+2 0.00130949
## T+3 0.00130949
## T+4 0.00130949
## T+5 0.00130949
## T+6 0.00130949
## T+7 0.00130949
## T+8 0.00130949
## T+9 0.00130949
## T+10 0.00130949
The forecast for the standard deviation of returns (volatility) is stored in:
garch1.prediction@forecast$sigmaFor
## 2021-04-20
## T+1 0.007918091
## T+2 0.008244561
## T+3 0.008558279
## T+4 0.008860598
## T+5 0.009152650
## T+6 0.009435388
## T+7 0.009709625
## T+8 0.009976064
## T+9 0.010235313
## T+10 0.010487905
We can plot the historic and future volatility:
plot(garch1.prediction,which = 3)
With this GARCH(1,1) model, forecast both volatility and 1% VaR of $1 million for 1 future day (tomorrow).
# Here I calculate how many standard deviations I should move below the mean to be at the
# 1st percentile, using the t distribution with the degrees of freedom estimated above
t2 = qt(0.01, df)
# Then I calculate the VaR for the coming day, using the first forecasted sigma:
VaRgarch <- mean_return + (t2 * garch1.prediction@forecast$sigmaFor[1])
VaRgarch
## [1] -0.0274465
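The question asks for the VaR of $1 million, so we convert the percentage VaR into dollars:
# Dollar 1% VaR for the $1,000,000 position, for the next day
1e6 * VaRgarch # about -$27,447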
Extend the dataset 10 days in the future. Do a within-sample forecast and an out-of-sample forecast for all days of the year (before and after TODAY) for daily volatility and 1%VaR for 1 day.
# Then I calculate the VaR for the next 10 days
VaRgarch10 = mean_return + (t2 *garch1.prediction@forecast$sigmaFor)
VaRgarch10
## 2021-04-20
## T+1 -0.02744651
## T+2 -0.02861376
## T+3 -0.02973542
## T+4 -0.03081632
## T+5 -0.03186051
## T+6 -0.03287141
## T+7 -0.03385191
## T+8 -0.03480453
## T+9 -0.03573144
## T+10 -0.03663455
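A quick base-R sketch to visualize this 10-day-ahead VaR path (a hypothetical plotting choice, not part of the original output):
# Plot the 10-day-ahead 1% VaR path
plot(1:10, as.numeric(VaRgarch10), type = "b",
     xlab = "Days ahead", ylab = "1% VaR (daily return)")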
Do a time-series line plot of volatility and VaR for the whole year:
garch1.predictionyear <- ugarchforecast(garch1,n.ahead= 240)
plot(garch1.predictionyear,which = 3)
# Then I calculate the VaR for the next 240 trading days (about one year)
VaRgarchyear = mean_return + (t2 *garch1.predictionyear@forecast$sigmaFor)
VaRgarchyear = ts(VaRgarchyear)
VaRgarchyear
## Time Series:
## Start = 1
## End = 240
## Frequency = 1
## 2021-04-20
## [1,] -0.02744651
## [2,] -0.02861376
## [3,] -0.02973542
## [4,] -0.03081632
## [5,] -0.03186051
## [6,] -0.03287141
## [7,] -0.03385191
## [8,] -0.03480453
## [9,] -0.03573144
## [10,] -0.03663455
## [11,] -0.03751555
## [12,] -0.03837592
## [13,] -0.03921699
## [14,] -0.04003996
## [15,] -0.04084589
## [16,] -0.04163576
## [17,] -0.04241044
## [18,] -0.04317075
## [19,] -0.04391740
## [20,] -0.04465108
## [21,] -0.04537241
## [22,] -0.04608194
## [23,] -0.04678022
## [24,] -0.04746772
## [25,] -0.04814491
## [26,] -0.04881221
## [27,] -0.04947000
## [28,] -0.05011866
## [29,] -0.05075854
## [30,] -0.05138995
## [31,] -0.05201320
## [32,] -0.05262858
## [33,] -0.05323636
## [34,] -0.05383678
## [35,] -0.05443009
## [36,] -0.05501652
## [37,] -0.05559628
## [38,] -0.05616958
## [39,] -0.05673661
## [40,] -0.05729755
## [41,] -0.05785257
## [42,] -0.05840186
## [43,] -0.05894555
## [44,] -0.05948382
## [45,] -0.06001679
## [46,] -0.06054461
## [47,] -0.06106741
## [48,] -0.06158531
## [49,] -0.06209845
## [50,] -0.06260693
## [51,] -0.06311086
## [52,] -0.06361036
## [53,] -0.06410553
## [54,] -0.06459646
## [55,] -0.06508325
## [56,] -0.06556599
## [57,] -0.06604477
## [58,] -0.06651967
## [59,] -0.06699078
## [60,] -0.06745818
## [61,] -0.06792193
## [62,] -0.06838213
## [63,] -0.06883882
## [64,] -0.06929210
## [65,] -0.06974201
## [66,] -0.07018863
## [67,] -0.07063201
## [68,] -0.07107223
## [69,] -0.07150933
## [70,] -0.07194337
## [71,] -0.07237441
## [72,] -0.07280250
## [73,] -0.07322769
## [74,] -0.07365003
## [75,] -0.07406958
## [76,] -0.07448637
## [77,] -0.07490046
## [78,] -0.07531188
## [79,] -0.07572069
## [80,] -0.07612692
## [81,] -0.07653061
## [82,] -0.07693181
## [83,] -0.07733055
## [84,] -0.07772688
## [85,] -0.07812082
## [86,] -0.07851241
## [87,] -0.07890169
## [88,] -0.07928870
## [89,] -0.07967346
## [90,] -0.08005601
## [91,] -0.08043638
## [92,] -0.08081460
## [93,] -0.08119071
## [94,] -0.08156472
## [95,] -0.08193667
## [96,] -0.08230659
## [97,] -0.08267450
## [98,] -0.08304043
## [99,] -0.08340441
## [100,] -0.08376647
## [101,] -0.08412662
## [102,] -0.08448489
## [103,] -0.08484131
## [104,] -0.08519590
## [105,] -0.08554868
## [106,] -0.08589968
## [107,] -0.08624891
## [108,] -0.08659640
## [109,] -0.08694217
## [110,] -0.08728624
## [111,] -0.08762863
## [112,] -0.08796936
## [113,] -0.08830845
## [114,] -0.08864592
## [115,] -0.08898178
## [116,] -0.08931606
## [117,] -0.08964877
## [118,] -0.08997994
## [119,] -0.09030957
## [120,] -0.09063768
## [121,] -0.09096430
## [122,] -0.09128943
## [123,] -0.09161310
## [124,] -0.09193531
## [125,] -0.09225609
## [126,] -0.09257545
## [127,] -0.09289340
## [128,] -0.09320997
## [129,] -0.09352515
## [130,] -0.09383897
## [131,] -0.09415145
## [132,] -0.09446259
## [133,] -0.09477240
## [134,] -0.09508091
## [135,] -0.09538812
## [136,] -0.09569405
## [137,] -0.09599871
## [138,] -0.09630211
## [139,] -0.09660426
## [140,] -0.09690518
## [141,] -0.09720487
## [142,] -0.09750336
## [143,] -0.09780064
## [144,] -0.09809673
## [145,] -0.09839165
## [146,] -0.09868540
## [147,] -0.09897799
## [148,] -0.09926943
## [149,] -0.09955974
## [150,] -0.09984892
## [151,] -0.10013699
## [152,] -0.10042395
## [153,] -0.10070981
## [154,] -0.10099459
## [155,] -0.10127829
## [156,] -0.10156092
## [157,] -0.10184249
## [158,] -0.10212301
## [159,] -0.10240249
## [160,] -0.10268094
## [161,] -0.10295836
## [162,] -0.10323477
## [163,] -0.10351017
## [164,] -0.10378457
## [165,] -0.10405798
## [166,] -0.10433040
## [167,] -0.10460185
## [168,] -0.10487234
## [169,] -0.10514186
## [170,] -0.10541043
## [171,] -0.10567806
## [172,] -0.10594475
## [173,] -0.10621051
## [174,] -0.10647535
## [175,] -0.10673927
## [176,] -0.10700228
## [177,] -0.10726439
## [178,] -0.10752560
## [179,] -0.10778593
## [180,] -0.10804537
## [181,] -0.10830394
## [182,] -0.10856164
## [183,] -0.10881848
## [184,] -0.10907446
## [185,] -0.10932960
## [186,] -0.10958389
## [187,] -0.10983734
## [188,] -0.11008996
## [189,] -0.11034175
## [190,] -0.11059272
## [191,] -0.11084288
## [192,] -0.11109223
## [193,] -0.11134078
## [194,] -0.11158853
## [195,] -0.11183549
## [196,] -0.11208167
## [197,] -0.11232706
## [198,] -0.11257168
## [199,] -0.11281552
## [200,] -0.11305860
## [201,] -0.11330093
## [202,] -0.11354249
## [203,] -0.11378331
## [204,] -0.11402338
## [205,] -0.11426271
## [206,] -0.11450131
## [207,] -0.11473918
## [208,] -0.11497632
## [209,] -0.11521274
## [210,] -0.11544844
## [211,] -0.11568343
## [212,] -0.11591771
## [213,] -0.11615129
## [214,] -0.11638418
## [215,] -0.11661637
## [216,] -0.11684787
## [217,] -0.11707868
## [218,] -0.11730881
## [219,] -0.11753827
## [220,] -0.11776705
## [221,] -0.11799517
## [222,] -0.11822262
## [223,] -0.11844941
## [224,] -0.11867554
## [225,] -0.11890102
## [226,] -0.11912585
## [227,] -0.11935004
## [228,] -0.11957359
## [229,] -0.11979650
## [230,] -0.12001877
## [231,] -0.12024042
## [232,] -0.12046144
## [233,] -0.12068184
## [234,] -0.12090161
## [235,] -0.12112078
## [236,] -0.12133933
## [237,] -0.12155727
## [238,] -0.12177461
## [239,] -0.12199135
## [240,] -0.12220749