Examples by Prof. Jacob Escobar, EGADE Business School, Tec de Monterrey
#install.packages("quantmod")
library(quantmod)
## Loading required package: xts
## Loading required package: zoo
##
## Attaching package: 'zoo'
## The following objects are masked from 'package:base':
##
## as.Date, as.Date.numeric
## Loading required package: TTR
## Registered S3 method overwritten by 'quantmod':
## method from
## as.zoo.data.frame zoo
library(ggplot2)
#install.packages("gridExtra")
library(gridExtra)
library(scales)
Let's read financial data and consolidate it as we did before:
tickers <- c("AAPL", "FB", "GS", "MCD", "SBUX", "WMT", "DIS", "TSLA")
start <- "2016-12-31"
end <- "2019-12-31"
n <- length(tickers)
p <- getSymbols(Symbols = tickers[1], src = "yahoo",
from = start, to = end,
auto.assign = F)[, 6]
## 'getSymbols' currently uses auto.assign=TRUE by default, but will
## use auto.assign=FALSE in 0.5-0. You will still be able to use
## 'loadSymbols' to automatically load data. getOption("getSymbols.env")
## and getOption("getSymbols.auto.assign") will still be checked for
## alternate defaults.
##
## This message is shown once per session and may be disabled by setting
## options("getSymbols.warning4.0"=FALSE). See ?getSymbols for details.
for (i in 2:n){
p <- merge(p, getSymbols(Symbols = tickers[i], src = "yahoo",
from = start, to = end,
auto.assign=F)[, 6])
}
# fixed = TRUE so the "." is matched literally rather than as a regex wildcard
names(p) <- gsub(".Adjusted", "", names(p), fixed = TRUE)
head(p)
## AAPL FB GS MCD SBUX WMT DIS TSLA
## 2017-01-03 27.37236 116.86 222.4307 106.5132 50.46368 62.29127 101.5844 43.398
## 2017-01-04 27.34173 118.69 223.8671 106.3885 51.04719 62.65419 102.8867 45.398
## 2017-01-05 27.48076 120.67 222.2005 106.5844 51.47568 62.79028 102.8293 45.350
## 2017-01-06 27.78713 123.41 225.4969 107.5283 52.08654 61.92839 104.3615 45.802
## 2017-01-09 28.04165 124.90 223.6461 107.2344 53.06208 62.33665 103.7677 46.256
## 2017-01-10 28.06993 124.35 223.3515 107.0742 52.77034 61.90118 103.7869 45.974
We will calculate simple returns from the stock prices we read into p. We could use simple returns or log-returns; each has its advantages.
When calculating returns, dividends should also be taken into account. This is why we are using the Adj.Close column as our prices instead of just Close. Hence, whenever we use \(p_t\) in the next equations we really mean \(p_t + d_t\).
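Using the adjusted price \(\tilde{p}_t = p_t + d_t\), the two return definitions are:
\[r_t = \frac{\tilde{p}_t}{\tilde{p}_{t-1}} - 1 \qquad \qquad r_t^{log} = \ln\left(\frac{\tilde{p}_t}{\tilde{p}_{t-1}}\right)\]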
# Simple returns
ret <- p / lag(p) - 1
# Log-returns (in case you want to use these instead; note that using
# log-returns would also change how we calculate the cross-sectional portfolio
# returns and their average time-series return)
# ret <- log(p / lag(p))
# Removing the first row since it contains NA's: a return can't be calculated
# for the first day
ret <- ret[-1, ]
tail(ret)
## AAPL FB GS MCD SBUX
## 2019-12-20 -0.0020713211 0.0011647336 -0.004392398 0.0004060122 -0.0006778257
## 2019-12-23 0.0163184591 -0.0005817256 0.000698863 -0.0047682110 -0.0026000166
## 2019-12-24 0.0009505315 -0.0051411293 0.003579552 0.0023954854 0.0032868479
## 2019-12-26 0.0198404948 0.0130167612 0.005654293 0.0019830283 -0.0049706210
## 2019-12-27 -0.0003796512 0.0014919535 -0.002378764 0.0056328640 0.0005677508
## 2019-12-30 0.0059352179 -0.0177318688 -0.003728540 -0.0063581219 -0.0078293380
## WMT DIS TSLA
## 2019-12-20 0.00174891708 0.0049949438 0.003836217
## 2019-12-23 -0.01047473755 -0.0149782947 0.033605441
## 2019-12-24 0.00403258307 0.0042162015 0.014383867
## 2019-12-26 0.00008360765 0.0028219700 0.013380376
## 2019-12-27 0.00058567803 0.0003431915 -0.001299578
## 2019-12-30 -0.00158860111 -0.0135848782 -0.036432852
Handling of NAs: if there are any more NAs, we could remove them or define an imputation method, such as the average, mode, a regression, interpolation, or the previous value.
# Counting if there are any more NAs
# colSums(is.na(ret))
apply(is.na(ret), 2, sum)
## AAPL FB GS MCD SBUX WMT DIS TSLA
## 0 0 0 0 0 0 0 0
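If any NAs did remain, instead of removing them (as in the commented line below) we could impute them; zoo, loaded above as a quantmod dependency, offers ready-made options. A minimal sketch:
# Hypothetical imputation alternatives (not needed for this data set):
# ret <- na.locf(ret) # carry the previous value forward
# ret <- na.approx(ret) # linear interpolation between neighbors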
# If there were still NAs and we wanted to remove them:
# ret <- na.omit(ret)
Notice that p and ret are xts objects; however, some algebraic calculations are easier with matrices, so we will also save a copy of ret as a matrix in retm.
retm <- as.matrix(ret)
head(retm)
## AAPL FB GS MCD SBUX
## 2017-01-04 -0.001119085 0.015659772 0.006457745 -0.001170409 0.011562930
## 2017-01-05 0.005085195 0.016682079 -0.007444639 0.001841289 0.008394057
## 2017-01-06 0.011148307 0.022706605 0.014834984 0.008855544 0.011866866
## 2017-01-09 0.009159673 0.012073559 -0.008207245 -0.002732807 0.018729370
## 2017-01-10 0.001008357 -0.004403555 -0.001317474 -0.001494576 -0.005498163
## 2017-01-11 0.005373367 0.013992747 0.013150894 0.005239331 0.003800790
## WMT DIS TSLA
## 2017-01-04 0.005826097 0.0128205574 0.0460850741
## 2017-01-05 0.002172129 -0.0005584491 -0.0010573374
## 2017-01-06 -0.013726583 0.0149003890 0.0099669244
## 2017-01-09 0.006592453 -0.0056890735 0.0099122968
## 2017-01-10 -0.006985730 0.0001844697 -0.0060965495
## 2017-01-11 0.004396685 0.0097804655 -0.0006090399
For the purpose of showcasing the properties of VaR aggregation, we will also construct an Equally-Weighted Portfolio to compare the VaR of each stock to that of a portfolio.
# Portfolio Weights
wts <- matrix(rep(1/n, n), nrow = n, ncol = 1)
wts
## [,1]
## [1,] 0.125
## [2,] 0.125
## [3,] 0.125
## [4,] 0.125
## [5,] 0.125
## [6,] 0.125
## [7,] 0.125
## [8,] 0.125
The time series of portfolio returns is given by: \(port.ret = retm*wts\)
* retm has m observations (rows) of n stocks (columns)
* wts has the weights of n stocks (rows) for 1 portfolio (column)
* retm times wts gives us port.ret, with m observations (rows) of 1 portfolio (column)
# Removing row names from "retm" (matrix) to avoid conflict with rownames from
# "ret" (xts)
rownames(retm) <- NULL
# Portfolio Returns (considering discrete returns)
ret$EqPort <- retm %*% wts
# Portfolio Returns (considering continuous returns)
# ret$EqPort <- log(exp(retm) %*% wts)
head(ret)
## AAPL FB GS MCD SBUX
## 2017-01-04 -0.001119085 0.015659772 0.006457745 -0.001170409 0.011562930
## 2017-01-05 0.005085195 0.016682079 -0.007444639 0.001841289 0.008394057
## 2017-01-06 0.011148307 0.022706605 0.014834984 0.008855544 0.011866866
## 2017-01-09 0.009159673 0.012073559 -0.008207245 -0.002732807 0.018729370
## 2017-01-10 0.001008357 -0.004403555 -0.001317474 -0.001494576 -0.005498163
## 2017-01-11 0.005373367 0.013992747 0.013150894 0.005239331 0.003800790
## WMT DIS TSLA EqPort
## 2017-01-04 0.005826097 0.0128205574 0.0460850741 0.012015335
## 2017-01-05 0.002172129 -0.0005584491 -0.0010573374 0.003139290
## 2017-01-06 -0.013726583 0.0149003890 0.0099669244 0.010069130
## 2017-01-09 0.006592453 -0.0056890735 0.0099122968 0.004979778
## 2017-01-10 -0.006985730 0.0001844697 -0.0060965495 -0.003075403
## 2017-01-11 0.004396685 0.0097804655 -0.0006090399 0.006890655
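As a quick sanity check of the dimensions involved in the matrix product (a minimal sketch):
dim(retm) # m x n: observations by stocks
dim(wts) # n x 1: one weight per stock
dim(retm %*% wts) # m x 1: one portfolio return per day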
From this point forward, ret includes a column of portfolio returns but retm doesn't. This will be handy because we don't want to include the portfolio column in the algebraic calculations that come later.
Value-at-Risk (VaR): "What is the loss level that will not be exceeded over a time horizon h, with x% probability (confidence level)?"
Flexibility in the definition: both the time horizon h and the confidence level x% are parameters we choose to suit the application.
(Figure: VaR examples.)
Conditional Value-at-Risk (CVaR): "What is the expected loss, given that the loss is greater than the VaR level or threshold?"
CVaR synonyms:
* Expected Shortfall (ES)
* Expected Tail-Loss (ETL)
* Tail VaR (TVaR)
Regulatory Capital: regulators traditionally use VaR to set the capital that banks are required to keep:
* Market risk: VaR with a 1- or 10-day horizon and a 99% confidence level (although regulators are switching to CVaR with a 97.5% confidence level)
* Credit risk: VaR with a 1-year horizon and a 99.9% confidence level
VaR Models:
* Parametric or Variance-Covariance method: assumes that returns are (jointly) normally distributed.
* Historical simulation: assumes that asset returns will follow the same distribution as in the past; it uses the historical returns as the return scenarios.
* Monte Carlo simulation: assumes that asset returns will be randomly distributed but that their covariance structure will remain; it uses simulated returns as the return scenarios.
We will assume a 99% confidence level, hence an alpha of 1% and a holding period of 1 day.
# Daily 99% VaR (Alpha: 1%, Holding Period: 1 day)
alpha <- 0.01
hp <- 1
Since we are using an equally-weighted portfolio, we will assume $10,000 USD invested in each stock (position value), for a total portfolio value of \(10{,}000 \times n\).
pv <- c(rep(10000, n), 10000*n)
pv
## [1] 10000 10000 10000 10000 10000 10000 10000 10000 80000
Since we have daily data and the holding period hp is one day, scaling the data to the holding period has no effect here, but I still include it in the equations below in case we want to change hp.
# Returns
ret.avg <- apply(ret, 2, mean, na.rm = T) * hp
print(percent( ret.avg , accuracy = 0.01))
## AAPL FB GS MCD SBUX WMT DIS TSLA EqPort
## "0.14%" "0.09%" "0.01%" "0.08%" "0.08%" "0.09%" "0.05%" "0.13%" "0.08%"
# Volatilities
vol <- apply(ret, 2, sd, na.rm = T) * sqrt(hp)
print(percent( vol , accuracy = 0.01))
## AAPL FB GS MCD SBUX WMT DIS TSLA EqPort
## "1.56%" "1.83%" "1.46%" "1.04%" "1.27%" "1.21%" "1.25%" "3.07%" "0.95%"
# Correlation Matrix (observed / historical data)
cor_h <- cor(retm, use = "complete.obs", method = "pearson")
cor_h
## AAPL FB GS MCD SBUX WMT DIS
## AAPL 1.0000000 0.4676309 0.4682445 0.22072613 0.2940210 0.2185306 0.3452444
## FB 0.4676309 1.0000000 0.3346612 0.18912753 0.2161217 0.1322379 0.2517008
## GS 0.4682445 0.3346612 1.0000000 0.19608580 0.2591777 0.2493367 0.3584405
## MCD 0.2207261 0.1891275 0.1960858 1.00000000 0.3734352 0.2810496 0.2178987
## SBUX 0.2940210 0.2161217 0.2591777 0.37343518 1.0000000 0.2663478 0.3188720
## WMT 0.2185306 0.1322379 0.2493367 0.28104963 0.2663478 1.0000000 0.2584451
## DIS 0.3452444 0.2517008 0.3584405 0.21789870 0.3188720 0.2584451 1.0000000
## TSLA 0.3134455 0.2572807 0.2270695 0.09321883 0.1415719 0.1389395 0.1638064
## TSLA
## AAPL 0.31344549
## FB 0.25728066
## GS 0.22706952
## MCD 0.09321883
## SBUX 0.14157185
## WMT 0.13893947
## DIS 0.16380639
## TSLA 1.00000000
# Covariance Matrix (observed / historical data)
cov_h <- cov(retm, use = "complete.obs", method = "pearson") * hp
cov_h
## AAPL FB GS MCD SBUX
## AAPL 0.00024190261 0.00013291123 0.00010647350 0.00003586926 0.00005796425
## FB 0.00013291123 0.00033394611 0.00008941123 0.00003611115 0.00005006084
## GS 0.00010647350 0.00008941123 0.00021374558 0.00002995319 0.00004802946
## MCD 0.00003586926 0.00003611115 0.00002995319 0.00010916837 0.00004945671
## SBUX 0.00005796425 0.00005006084 0.00004802946 0.00004945671 0.00016066570
## WMT 0.00004128403 0.00002935240 0.00004427762 0.00003566818 0.00004100721
## DIS 0.00006707234 0.00005745388 0.00006545792 0.00002843804 0.00005048641
## TSLA 0.00014950650 0.00014418600 0.00010180876 0.00002986963 0.00005503212
## WMT DIS TSLA
## AAPL 0.00004128403 0.00006707234 0.00014950650
## FB 0.00002935240 0.00005745388 0.00014418600
## GS 0.00004427762 0.00006545792 0.00010180876
## MCD 0.00003566818 0.00002843804 0.00002986963
## SBUX 0.00004100721 0.00005048641 0.00005503212
## WMT 0.00014753634 0.00003921159 0.00005175507
## DIS 0.00003921159 0.00015602464 0.00006274876
## TSLA 0.00005175507 0.00006274876 0.00094049286
Now that we have the covariance matrix cov_h, we will confirm the portfolio volatility by calculating it via matrix multiplication. It should match the standard deviation of its time-series returns (which we calculated above).
\[port.vol = \sqrt{wts^t * cov_h * wts}\]
# Confirming Portfolio Volatility, in percentage
sqrt(t(wts) %*% cov_h %*% wts)*100
## [,1]
## [1,] 0.9491112
The Parametric (Variance-Covariance) method assumes a specific distribution for returns, typically the Normal Distribution.
\[VaR = \mu + z_\alpha * \sigma\]
Where:
* \(\mu\) is the expected return, scaled to the holding period
* \(z_\alpha\) is the Standard Normal quantile at \(\alpha\) (in R: qnorm( ))
* \(\sigma\) is the volatility, scaled to the holding period
Additional notes:
* The equations above return VaR in %; for VaR in $, just multiply by the position value.
* It is very important for the returns and volatility to be scaled to the holding period of the VaR, if they are not already. In our case we have daily returns and we will calculate daily VaR, but if we wanted a 10-day VaR we would have to multiply returns (\(\mu\)) by \(10\) and volatility (\(\sigma\)) by \(\sqrt{10}\).
Parametric VaR expressed as returns:
var.p <- ret.avg + qnorm(alpha)*vol
print(percent( var.p , accuracy = 0.01))
## AAPL FB GS MCD SBUX WMT DIS TSLA
## "-3.48%" "-4.16%" "-3.39%" "-2.35%" "-2.87%" "-2.74%" "-2.85%" "-7.00%"
## EqPort
## "-2.12%"
Compare the Parametric VaR in returns of EqPort vs. the weighted average of the stocks, to appreciate the diversification effect:
# since this is an equally-weighted portfolio, the mean works fine too
# mean(var.p[1:n])
print(percent( var.p[1:n] %*% wts , accuracy = 0.01))
## [1] "-3.60%"
Parametric VaR expressed as position value:
var.p.pv <- var.p*pv
print(number( var.p.pv , accuracy = 0.01, big.mark = ","))
## AAPL FB GS MCD SBUX WMT
## "-347.76" "-415.98" "-339.09" "-234.90" "-287.19" "-273.56"
## DIS TSLA EqPort
## "-285.19" "-700.15" "-1,698.61"
Compare the Parametric VaR in position value of EqPort vs. the sum of the individual stocks, to appreciate the diversification effect:
print(number( sum(var.p.pv[1:n]) , accuracy = 0.01, big.mark = ","))
## [1] "-2,883.82"
Let’s look at some graphs:
g <- list()
g[[1]] <- ggplot(data.frame(x = c(-4, 4)), aes(x = x)) +
stat_function(fun = dnorm, args = list(mean = ret.avg[1], sd = vol[1]),
color = "blue") +
geom_vline(xintercept = ret.avg[1] + var.p[1], color = "red") +
labs(title = paste(names(vol)[1],
"VaR =", percent(var.p[1], accuracy = 0.01))) +
scale_x_continuous(labels = scales::percent_format(accuracy = 0.01),
limits = c(-0.08, 0.08))
for (i in 2:ncol(ret)) {
g[[i]] <- ggplot(data.frame(x = c(-4, 4)), aes(x = x)) +
stat_function(fun = dnorm, args = list(mean = ret.avg[i], sd = vol[i]),
color = "blue") +
geom_vline(xintercept = ret.avg[i] + var.p[i], color = "red") +
labs(title = paste(names(vol)[i],
"VaR =", percent(var.p[i], accuracy = 0.01))) +
scale_x_continuous(labels = scales::percent_format(accuracy = 0.01),
limits = c(-0.08, 0.08))
}
do.call("grid.arrange", c(g, nrow = 3))Conditional VaR (CVaR) also known as Expected Shortfall (ES) or Expected Tail-Loss (ETL)
CVaR expressed in returns:
cvar.p <- ret.avg - dnorm(qnorm(alpha))*vol/alpha
print(percent( cvar.p , accuracy = 0.01))
## AAPL FB GS MCD SBUX WMT DIS TSLA
## "-4.00%" "-4.78%" "-3.89%" "-2.70%" "-3.30%" "-3.15%" "-3.28%" "-8.04%"
## EqPort
## "-2.44%"
Compare the Parametric CVaR in returns of EqPort vs. the weighted average of the stocks, to appreciate the effect of diversification:
print(percent( cvar.p[1:n] %*% wts , accuracy = 0.01))
## [1] "-4.14%"
CVaR expressed as position value:
cvar.p.pv <- cvar.p*pv
print(number( cvar.p.pv , accuracy = 0.01, big.mark = ","))
## AAPL FB GS MCD SBUX WMT
## "-400.46" "-477.91" "-388.63" "-270.31" "-330.14" "-314.72"
## DIS TSLA EqPort
## "-327.52" "-804.07" "-1,955.90"
Compare the Parametric CVaR in position value of EqPort vs. the sum of the stocks, to appreciate the effect of diversification:
print(number( sum(cvar.p.pv[1:n]) , accuracy = 0.01, big.mark = ","))
## [1] "-3,313.76"
We will consolidate the results in var so that we can compare the different methods.
var <- data.frame(t(percent(var.p, accuracy = 0.01)))
rownames(var) <- "Parametric VaR (Normal)"
temp <- data.frame(t(percent(cvar.p, accuracy = 0.01)))
rownames(temp) <- "Parametric CVaR (Normal)"
var <- rbind(var, temp)
head(var)
## AAPL FB GS MCD SBUX WMT DIS
## Parametric VaR (Normal) -3.48% -4.16% -3.39% -2.35% -2.87% -2.74% -2.85%
## Parametric CVaR (Normal) -4.00% -4.78% -3.89% -2.70% -3.30% -3.15% -3.28%
## TSLA EqPort
## Parametric VaR (Normal) -7.00% -2.12%
## Parametric CVaR (Normal) -8.04% -2.44%
var.pv <- data.frame(t(number(var.p.pv, accuracy = 0.01, big.mark = ",")))
rownames(var.pv) <- "Parametric VaR (Normal)"
temp <- data.frame(t(number(cvar.p.pv, accuracy = 0.01, big.mark = ",")))
rownames(temp) <- "Parametric CVaR (Normal)"
var.pv <- rbind(var.pv, temp)
head(var.pv)
## AAPL FB GS MCD SBUX WMT
## Parametric VaR (Normal) -347.76 -415.98 -339.09 -234.90 -287.19 -273.56
## Parametric CVaR (Normal) -400.46 -477.91 -388.63 -270.31 -330.14 -314.72
## DIS TSLA EqPort
## Parametric VaR (Normal) -285.19 -700.15 -1,698.61
## Parametric CVaR (Normal) -327.52 -804.07 -1,955.90
In Historical Simulation, the historical returns themselves are the scenarios.
VaR:
* Sort the return scenarios (P&L) from worst to best and assign each one a percentile
* Find the return at the \(\alpha\) percentile, interpolating between scenarios if necessary
CVaR:
* Calculate the average of the losses of greater magnitude than the VaR threshold
# Build P&L table
m <- nrow(ret)
pnl <- matrix(0:(m-1)/(m-1)*100, nrow = m, ncol = 1)
colnames(pnl) <- "percentile"
head(pnl, 10)
## percentile
## [1,] 0.0000000
## [2,] 0.1331558
## [3,] 0.2663116
## [4,] 0.3994674
## [5,] 0.5326232
## [6,] 0.6657790
## [7,] 0.7989348
## [8,] 0.9320905
## [9,] 1.0652463
## [10,] 1.1984021
for (i in 1:(n+1)) {
pnl <- cbind(pnl, sort(coredata(ret[, i]), decreasing = F))
}
colnames(pnl)[2:(n+2)] <- names(ret)
head(pnl, 10)
## percentile AAPL FB GS MCD SBUX
## [1,] 0.0000000 -0.09960743 -0.18960922 -0.07455634 -0.05041699 -0.09243699
## [2,] 0.1331558 -0.06633068 -0.07505498 -0.05265967 -0.04771962 -0.09071895
## [3,] 0.2663116 -0.05811951 -0.07253236 -0.04964071 -0.04067326 -0.04337695
## [4,] 0.3994674 -0.05234792 -0.06769679 -0.04715822 -0.03488914 -0.04227894
## [5,] 0.5326232 -0.05037417 -0.06334331 -0.04477822 -0.03264851 -0.04046311
## [6,] 0.6657790 -0.04777774 -0.05719197 -0.04291711 -0.03219230 -0.04002735
## [7,] 0.7989348 -0.04632610 -0.05408385 -0.04203201 -0.03207253 -0.03063995
## [8,] 0.9320905 -0.04622050 -0.04898161 -0.04188920 -0.03058726 -0.02981579
## [9,] 1.0652463 -0.04398871 -0.04773000 -0.04181277 -0.02975759 -0.02945759
## [10,] 1.1984021 -0.04339032 -0.04740385 -0.03885179 -0.02880584 -0.02851332
## WMT DIS TSLA EqPort
## [1,] -0.10183243 -0.05294866 -0.13901539 -0.03959486
## [2,] -0.04650891 -0.04941129 -0.13613715 -0.03451806
## [3,] -0.04201768 -0.04498681 -0.12971118 -0.03219931
## [4,] -0.03843967 -0.04374383 -0.08928303 -0.03176267
## [5,] -0.03611261 -0.03879227 -0.08628984 -0.03110577
## [6,] -0.03272405 -0.03713304 -0.08234810 -0.03105598
## [7,] -0.03125744 -0.03679855 -0.08218818 -0.03098266
## [8,] -0.02977208 -0.03447680 -0.07843569 -0.02903137
## [9,] -0.02787720 -0.03255070 -0.07665304 -0.02894003
## [10,] -0.02752104 -0.03240735 -0.07624230 -0.02816653
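The 1% mark falls between the 8th and 9th sorted scenarios (percentiles 0.93% and 1.07%), so we interpolate linearly between them:
\[VaR = r_{(8)} + \left(r_{(9)} - r_{(8)}\right)\frac{1\% - pct_{(8)}}{pct_{(9)} - pct_{(8)}}\]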
# Interpolate rows 8 & 9 since the 1% is between them
var.h.pnl <- pnl[8,-1]+(pnl[9,-1]-pnl[8,-1])*(alpha*100-pnl[8,1])/(pnl[9,1]-pnl[8,1])
print(percent( var.h.pnl , accuracy = 0.01))
## AAPL FB GS MCD SBUX WMT DIS TSLA
## "-4.51%" "-4.83%" "-4.19%" "-3.02%" "-2.96%" "-2.88%" "-3.35%" "-7.75%"
## EqPort
## "-2.90%"
One way of calculating VaR without having to explicitly construct the P&L table is to use the quantile( ) function (the PERCENTILE( ) function in Excel), as below; its default interpolation uses the same percentile scheme we built by hand, which is why the results match:
VaR Calculation in returns for a single asset:
# EqPort's H VaR
print(percent( quantile(ret$EqPort, alpha) , accuracy = 0.01))
## 1%
## "-2.90%"
VaR Calculation in returns for all assets in the table:
# Historical VaR
var.h <- apply(ret, 2, quantile, alpha, na.rm = T)
print(percent( var.h , accuracy = 0.01))
## AAPL FB GS MCD SBUX WMT DIS TSLA
## "-4.51%" "-4.83%" "-4.19%" "-3.02%" "-2.96%" "-2.88%" "-3.35%" "-7.75%"
## EqPort
## "-2.90%"
Compare the Historical VaR in returns of EqPort vs. the weighted average of the stocks, to appreciate the diversification effect:
print(percent( var.h[1:n] %*% wts , accuracy = 0.01))
## [1] "-4.19%"
VaR in position value:
var.h.pv <- var.h*pv
print(number( var.h.pv , accuracy = 0.01, big.mark = ","))
## AAPL FB GS MCD SBUX WMT
## "-450.82" "-483.43" "-418.50" "-301.64" "-296.33" "-288.06"
## DIS TSLA EqPort
## "-334.94" "-775.27" "-2,318.78"
Compare the Historical VaR in position value of EqPort vs. the sum of the individual stocks, to appreciate the diversification effect:
print(number( sum(var.h.pv[1:n]) , accuracy = 0.01, big.mark = ","))
## [1] "-3,349.00"
# Find the days with the lowest returns
lowestRets <- ret
for (i in 1:ncol(ret)) {
lowestRets[, i] <- ifelse(coredata(ret[, i]) <= var.h[i], ret[, i], NA)
}
# Lowest Return Scenarios: 8 / 752 in each case
print( apply(!is.na(lowestRets), 2, sum) )
## AAPL FB GS MCD SBUX WMT DIS TSLA EqPort
## 8 8 8 8 8 8 8 8 8
# Which lowest returns contribute to the EqPort lowest returns?
lowestRets[!is.na(lowestRets$EqPort), ]
## AAPL FB GS MCD SBUX WMT
## 2018-02-05 NA NA -0.04203201 -0.03264851 NA -0.04201768
## 2018-02-08 NA NA NA -0.04067326 NA NA
## 2018-10-10 -0.04632610 NA NA NA NA NA
## 2018-11-12 -0.05037417 NA -0.07455634 NA NA NA
## 2019-01-03 -0.09960743 NA NA NA -0.04337695 NA
## 2019-05-13 -0.05811951 NA NA NA NA NA
## 2019-08-05 -0.05234792 NA NA NA NA -0.03272405
## 2019-08-14 NA NA -0.04188920 NA NA NA
## DIS TSLA EqPort
## 2018-02-05 -0.03679855 NA -0.03451806
## 2018-02-08 NA -0.08628984 -0.03959486
## 2018-10-10 -0.03447680 NA -0.02903137
## 2018-11-12 NA NA -0.03110577
## 2019-01-03 NA NA -0.03176267
## 2019-05-13 NA NA -0.03105598
## 2019-08-05 NA NA -0.03098266
## 2019-08-14 NA NA -0.03219931
# since the lowestRets series has NA's by design,
# we ignore those warnings in the graphs
options(warn = -1)
# Historical VaR Graphs: Return Scenarios
g <- list()
g[[1]] <- ggplot(ret, aes_string(x = index(ret), y = names(ret)[1])) +
geom_col(position = "identity") +
geom_col(aes_string(y = lowestRets[, 1]), color = "red",
position = "identity") +
labs(title = paste(names(ret)[1], "Historical Returns"),
x = "Date / Scenarios", y = "Returns") +
ylim(c(-0.10,0.10))
for (i in 2:ncol(ret)) {
g[[i]] <- ggplot(ret, aes_string(x = index(ret),
y = names(ret)[i])) +
geom_col(position = "identity") +
geom_col(aes_string(y = lowestRets[, i]), color = "red",
position = "identity") +
labs(title = paste(names(ret)[i], "Historical Returns"),
x = "Date / Scenarios", y = "Returns") +
ylim(c(-0.10,0.10))
}
do.call("grid.arrange", c(g, nrow = 3))options(warn = 0) # we restore the warnings parameter for other use cases# we ignore warnings due to NA's in the graphs, since they are by design
options(warn = -1)
# Historical VaR Graphs: Histograms
g <- list()
g[[1]] <- ggplot(ret, aes_string(x = names(ret)[1])) +
geom_histogram(aes(y = ..density..), position = "identity", bins = 30,
fill = "cornflowerblue") +
stat_function(fun = dnorm, args = list(mean = ret.avg[1], sd = vol[1]),
color = "gray40") +
geom_vline(xintercept = var.h[1], color = "red") +
geom_vline(xintercept = ret.avg[1] + var.p[1], color = "gray40") +
labs(title = paste(names(ret)[1], "Returns Histogram: Historical vs Normal")) +
scale_x_continuous(labels = scales::percent_format(accuracy = 0.01),
limits = c(-0.08, 0.08))
for (i in 2:ncol(ret)) {
g[[i]] <- ggplot(ret, aes_string(x = names(ret)[i])) +
geom_histogram(aes(y = ..density..), position = "identity", bins = 30,
fill = "cornflowerblue") +
stat_function(fun = dnorm, args = list(mean = ret.avg[i], sd = vol[i]),
color = "gray40") +
geom_vline(xintercept = var.h[i], color = "red") +
geom_vline(xintercept = ret.avg[i] + var.p[i], color = "gray40") +
labs(title = paste(names(ret)[i], "Returns Histogram: Historical vs Normal")) +
scale_x_continuous(labels = scales::percent_format(accuracy = 0.01),
limits = c(-0.08, 0.08))
}
do.call("grid.arrange", c(g, nrow = 3))options(warn = 0) # we restore the warnings parameter for other use casesSince we have a data frame of lowestRets already, we can calculate the CVaR as the average of those returns for each stock:
print(percent( apply(lowestRets, 2, mean, na.rm = T) , accuracy = 0.01))
## AAPL FB GS MCD SBUX WMT DIS TSLA
## "-5.84%" "-7.86%" "-4.95%" "-3.76%" "-5.12%" "-4.48%" "-4.23%" "-10.29%"
## EqPort
## "-3.25%"
This is the same as just averaging the returns of the Profit & Loss table pnl that are below the VaR threshold; in our case the threshold lies between rows 8 and 9:
print(percent( apply(pnl[1:8, -1], 2, mean) , accuracy = 0.01 ))
## AAPL FB GS MCD SBUX WMT DIS TSLA
## "-5.84%" "-7.86%" "-4.95%" "-3.76%" "-5.12%" "-4.48%" "-4.23%" "-10.29%"
## EqPort
## "-3.25%"
A faster approach to calculating Historical CVaR is to use the quantile( ) function as well, without having to sort the P&L or filter for the lowest returns. In this case, we request all the losses below the VaR threshold and average them.
First, let’s look at how the quantile( ) function can return several values, one for each of the quantiles we request:
print( quantile(ret$EqPort, c(0.01, 0.02, 0.03, 0.04, 0.05)) )
## 1% 2% 3% 4% 5%
## -0.02898479 -0.02355609 -0.02044101 -0.01785292 -0.01636502
We can specify the quantiles from 0 to alpha, in steps of 1/(m-1), so that they match exactly the percentiles we assigned to the returns below alpha:
print( quantile(ret$EqPort, seq(from = 0, to = alpha, by = 1/(m-1))) )
## 0% 0.1331558% 0.2663116% 0.3994674% 0.5326232% 0.665779%
## -0.03959486 -0.03451806 -0.03219931 -0.03176267 -0.03110577 -0.03105598
## 0.7989348% 0.9320905%
## -0.03098266 -0.02903137
Calculating the average of the previous vector gives us the CVaR for the portfolio in returns:
# Confirming EqPort's CVaR:
mean(quantile(ret$EqPort, seq(0, alpha, 1/(m-1))))
## [1] -0.03253134
CVaR in returns for all assets in the table:
cvar.h <- colMeans(apply(ret, 2, quantile, probs = seq(0, alpha, 1/(m-1))))
print(percent( cvar.h , accuracy = 0.01))
## AAPL FB GS MCD SBUX WMT DIS TSLA
## "-5.84%" "-7.86%" "-4.95%" "-3.76%" "-5.12%" "-4.48%" "-4.23%" "-10.29%"
## EqPort
## "-3.25%"
Compare the Historical CVaR in returns of EqPort vs. the weighted average of the stocks, to appreciate the effect of diversification:
print(percent( cvar.h[1:n] %*% wts , accuracy = 0.01))
## [1] "-5.82%"
CVaR in position value:
cvar.h.pv <- cvar.h * pv
print(number( cvar.h.pv , accuracy = 0.01, big.mark = ","))
## AAPL FB GS MCD SBUX WMT
## "-583.88" "-785.62" "-494.54" "-376.50" "-512.20" "-448.33"
## DIS TSLA EqPort
## "-422.86" "-1,029.26" "-2,602.51"
Compare the Historical CVaR in position value of EqPort vs. the sum of the stocks, to appreciate the effect of diversification:
print(number( sum(cvar.h.pv[1:n]) , accuracy = 0.01, big.mark = ","))
## [1] "-4,653.19"
temp <- data.frame(t(percent(var.h, accuracy = 0.01)))
rownames(temp) <- "Historical VaR"
var <- rbind(var, temp)
temp <- data.frame(t(percent(cvar.h, accuracy = 0.01)))
rownames(temp) <- "Historical CVaR"
var <- rbind(var, temp)
var
## AAPL FB GS MCD SBUX WMT DIS
## Parametric VaR (Normal) -3.48% -4.16% -3.39% -2.35% -2.87% -2.74% -2.85%
## Parametric CVaR (Normal) -4.00% -4.78% -3.89% -2.70% -3.30% -3.15% -3.28%
## Historical VaR -4.51% -4.83% -4.19% -3.02% -2.96% -2.88% -3.35%
## Historical CVaR -5.84% -7.86% -4.95% -3.76% -5.12% -4.48% -4.23%
## TSLA EqPort
## Parametric VaR (Normal) -7.00% -2.12%
## Parametric CVaR (Normal) -8.04% -2.44%
## Historical VaR -7.75% -2.90%
## Historical CVaR -10.29% -3.25%
temp <- data.frame(t(number(var.h.pv, accuracy = 0.01, big.mark = ",")))
rownames(temp) <- "Historical VaR"
var.pv <- rbind(var.pv, temp)
temp <- data.frame(t(number(cvar.h.pv, accuracy = 0.01, big.mark = ",")))
rownames(temp) <- "Historical CVaR"
var.pv <- rbind(var.pv, temp)
var.pv
## AAPL FB GS MCD SBUX WMT
## Parametric VaR (Normal) -347.76 -415.98 -339.09 -234.90 -287.19 -273.56
## Parametric CVaR (Normal) -400.46 -477.91 -388.63 -270.31 -330.14 -314.72
## Historical VaR -450.82 -483.43 -418.50 -301.64 -296.33 -288.06
## Historical CVaR -583.88 -785.62 -494.54 -376.50 -512.20 -448.33
## DIS TSLA EqPort
## Parametric VaR (Normal) -285.19 -700.15 -1,698.61
## Parametric CVaR (Normal) -327.52 -804.07 -1,955.90
## Historical VaR -334.94 -775.27 -2,318.78
## Historical CVaR -422.86 -1,029.26 -2,602.51
For Monte Carlo simulation we need a distribution from which to draw random shocks. The z-stat and the t-stat of a variable are calculated in the same way; the difference is the distribution assumed:
\[standardized.score = \frac{x - \overline{x}}{\frac{s}{\sqrt{n}}}\]
If the population's distribution seems to be approximately Normal, then:
* For a large enough sample size (typically \(n \ge 30\)), we can use the Standard Normal Distribution.
* For small samples (typically \(n < 30\)), it is preferred to use the t-Student Distribution.
The t-Student Distribution tends to have heavier tails than the Standard Normal Distribution, especially with small degrees of freedom df, and it approaches the Standard Normal Distribution as the degrees of freedom increase.
The recommended degrees of freedom are sample size m minus 1.
\[df = m - 1\]
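A quick comparison of the 1% quantiles illustrates the heavier tails (a sketch):
qnorm(0.01) # about -2.33
qt(0.01, df = 2) # about -6.96: a much heavier left tail
qt(0.01, df = 751) # about -2.33: practically Normal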
##### Normal vs t-Student Distributions #####
ggplot(data.frame(x = c(-4, 4)), aes(x = x)) +
stat_function(fun = dnorm, color = "blue") +
stat_function(fun = dt, args = list(df = 2), color = "red") +
labs(title = "Distributions: Normal Standard vs t-Student") +
annotate(geom = "text", x = 1, y = dnorm(x = 1),label = "Normal Standard",
hjust = 0, vjust = -0.2, size = 4, color = "blue") +
annotate(geom = "text", x = 1, y = dt(x = 1, df = 2), label = "t-Student (df = 2)",
hjust = -0.2, vjust = -0.2, size = 4, color = "red")
Generation of random numbers following a t-Student Distribution with m - 1 degrees of freedom. Given the size of m, though, this is very close to a Normal Distribution.
# Assuming t-Student Distribution with theoretical df
m <- 10000 # scenarios
e <- matrix(rt(m * n, m - 1), ncol = n)
colnames(e) <- paste0("e", 1:n)
head(e)
## e1 e2 e3 e4 e5 e6
## [1,] 0.64180290 -0.2527287 -0.2364694 -0.7350341 0.6828597 1.19737101
## [2,] -0.55239927 -0.6693039 1.4679676 0.8423540 0.6443562 -0.98844373
## [3,] 0.40414997 1.0182924 0.2198988 0.3593175 2.8926933 1.91139515
## [4,] 2.38249837 1.6886697 -0.2719359 -2.0061997 0.9709093 -1.34988452
## [5,] -0.09978281 1.6467859 -0.1624723 -0.5367579 -1.0087590 -1.11748029
## [6,] 0.39558058 0.4263109 0.9052249 0.3529220 1.1347307 0.07390858
## e7 e8
## [1,] -0.4854847 -1.88173223
## [2,] 0.9639110 0.04418334
## [3,] -1.0545441 -0.30950269
## [4,] 1.4864202 -1.23588713
## [5,] 0.5284560 0.92272819
## [6,] 1.8347406 0.57180559
Given that the random numbers in e were generated independently of each other, their theoretical correlation is 0. As we can see below, the sample correlations are statistically insignificant.
round(cor(e), 2)
## e1 e2 e3 e4 e5 e6 e7 e8
## e1 1.00 0.01 0.01 -0.01 0.00 -0.02 0.01 0.01
## e2 0.01 1.00 -0.01 0.00 -0.01 0.00 0.02 -0.03
## e3 0.01 -0.01 1.00 0.01 -0.02 -0.01 0.00 0.01
## e4 -0.01 0.00 0.01 1.00 0.00 0.01 0.00 0.00
## e5 0.00 -0.01 -0.02 0.00 1.00 0.01 0.02 0.01
## e6 -0.02 0.00 -0.01 0.01 0.01 1.00 -0.01 -0.02
## e7 0.01 0.02 0.00 0.00 0.02 -0.01 1.00 0.00
## e8 0.01 -0.03 0.01 0.00 0.01 -0.02 0.00 1.00
By multiplying e by the Cholesky factor a, we apply the historical covariance structure (variances and correlations) to the randomly generated numbers.
ret.mc are our Monte Carlo simulated returns. Each of them is a potential 1-day return and they form all of our simulated scenarios.
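Why this works: in R, chol(cov_h) returns the upper-triangular factor \(A\) such that \(A^T A = \Sigma\). Since the rows of e have (approximately) unit variances and zero correlations,
\[Cov(eA) = A^T \, Cov(e) \, A \approx A^T I A = \Sigma\]
so the rows of e %*% a inherit the historical covariance structure.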
a <- chol(cov_h)
ret.mc <- data.frame(e %*% a)
head(ret.mc)
## AAPL FB GS MCD SBUX WMT
## 1 0.009982101 0.001402251 0.0008889338 -0.006422680 0.006694497 0.013671194
## 2 -0.008591587 -0.015531833 0.0136919969 0.007954051 0.010021481 -0.006709542
## 3 0.006285833 0.019902166 0.0075246802 0.005803698 0.037174494 0.028829414
## 4 0.037055518 0.047636895 0.0160672486 -0.013302766 0.013772106 -0.012650016
## 5 -0.001551944 0.025747822 0.0003924930 -0.004131687 -0.012246576 -0.015475294
## 6 0.006152551 0.010266662 0.0150887473 0.005773855 0.017603358 0.006533375
## DIS TSLA
## 1 -0.001584435 -0.047162416
## 2 0.012441413 -0.003642248
## 3 0.001034270 0.004705062
## 4 0.025331496 -0.007484764
## 5 0.002712663 0.028935232
## 6 0.027732372 0.026166266
The correlation of the simulated returns should closely match the correlation of the historical returns, given a sufficiently large number of scenarios (m).
cor_mc <- cor(ret.mc)
round(cor_mc, 2)
## AAPL FB GS MCD SBUX WMT DIS TSLA
## AAPL 1.00 0.47 0.48 0.21 0.29 0.20 0.35 0.32
## FB 0.47 1.00 0.33 0.19 0.21 0.12 0.27 0.24
## GS 0.48 0.33 1.00 0.20 0.24 0.24 0.36 0.24
## MCD 0.21 0.19 0.20 1.00 0.37 0.28 0.22 0.10
## SBUX 0.29 0.21 0.24 0.37 1.00 0.27 0.33 0.15
## WMT 0.20 0.12 0.24 0.28 0.27 1.00 0.25 0.12
## DIS 0.35 0.27 0.36 0.22 0.33 0.25 1.00 0.17
## TSLA 0.32 0.24 0.24 0.10 0.15 0.12 0.17 1.00
round(cor_h, 2)
## AAPL FB GS MCD SBUX WMT DIS TSLA
## AAPL 1.00 0.47 0.47 0.22 0.29 0.22 0.35 0.31
## FB 0.47 1.00 0.33 0.19 0.22 0.13 0.25 0.26
## GS 0.47 0.33 1.00 0.20 0.26 0.25 0.36 0.23
## MCD 0.22 0.19 0.20 1.00 0.37 0.28 0.22 0.09
## SBUX 0.29 0.22 0.26 0.37 1.00 0.27 0.32 0.14
## WMT 0.22 0.13 0.25 0.28 0.27 1.00 0.26 0.14
## DIS 0.35 0.25 0.36 0.22 0.32 0.26 1.00 0.16
## TSLA 0.31 0.26 0.23 0.09 0.14 0.14 0.16 1.00
Having simulated the returns of each individual stock, we now use them to calculate the simulated portfolio return, as before.
\[port.ret = ret.mc*wts\]
ret.mc$EqPort <- as.matrix(ret.mc[, 1:n]) %*% as.matrix(wts)
head(ret.mc$EqPort)
## [,1]
## [1,] -0.002816319
## [2,] 0.001204217
## [3,] 0.013907452
## [4,] 0.013303215
## [5,] 0.003047839
## [6,] 0.014414648
Similar to Historical Simulation, once we have all the simulated scenarios we just select the percentile (quantile) that matches the confidence level we want (\(\alpha = 1 - conf.level\)).
Monte Carlo VaR in returns for a single asset:
quantile(ret.mc$EqPort, alpha)
## 1%
## -0.022242
The Monte Carlo VaR in returns for all the assets in the table:
var.mc <- apply(ret.mc, 2, quantile, alpha)
print(percent( var.mc , accuracy = 0.01))
## AAPL FB GS MCD SBUX WMT DIS TSLA
## "-3.65%" "-4.21%" "-3.45%" "-2.43%" "-2.90%" "-2.81%" "-2.91%" "-6.96%"
## EqPort
## "-2.22%"
Compare the Monte Carlo VaR in returns of EqPort vs. the weighted average of the stocks, to appreciate the diversification effect:
print(percent( var.mc[1:n] %*% wts , accuracy = 0.01))
## [1] "-3.67%"
The Monte Carlo VaR in position value for all the assets in the table:
var.mc.pv <- var.mc * pv
print(number( var.mc.pv , accuracy = 0.01, big.mark = ","))
## AAPL FB GS MCD SBUX WMT
## "-365.24" "-420.88" "-345.09" "-243.03" "-290.35" "-281.26"
## DIS TSLA EqPort
## "-291.29" "-696.36" "-1,779.36"
Compare the Monte Carlo VaR in position value of EqPort vs. the sum of the individual stocks, to appreciate the diversification effect:
print(number( sum(var.mc.pv[1:n]) , accuracy = 0.01, big.mark = ","))
## [1] "-2,933.50"
Just as in Historical Simulation, the CVaR of a Monte Carlo Simulation is calculated as the average of the returns in the scenarios below the VaR threshold.
CVaR in returns for a single asset:
mean(quantile(ret.mc$EqPort, seq(0, alpha, 1/(m-1))))
## [1] -0.02578798
The Monte Carlo CVaR in returns for all the assets in the table:
cvar.mc <- colMeans(apply(ret.mc, 2, quantile, probs = seq(0, alpha, 1/(m-1))))
print(percent( cvar.mc , accuracy = 0.01))
## AAPL FB GS MCD SBUX WMT DIS TSLA
## "-4.25%" "-4.92%" "-3.91%" "-2.80%" "-3.29%" "-3.20%" "-3.39%" "-7.93%"
## EqPort
## "-2.58%"
Compare the Monte Carlo CVaR in returns of EqPort vs. the weighted average of the individual stocks, to appreciate the diversification effect:
print(percent( cvar.mc[1:n] %*% wts , accuracy = 0.01))
## [1] "-4.21%"
The Monte Carlo CVaR in position value for all the assets in the table:
cvar.mc.pv <- cvar.mc * pv
print(number( cvar.mc.pv , accuracy = 0.01, big.mark = ","))
## AAPL FB GS MCD SBUX WMT
## "-425.12" "-491.82" "-391.18" "-279.91" "-328.71" "-320.13"
## DIS TSLA EqPort
## "-339.27" "-792.77" "-2,063.04"
Compare the Monte Carlo CVaR in position value of EqPort vs. the sum of the individual stocks, to appreciate the diversification effect:
print(number( sum(cvar.mc.pv[1:n]) , accuracy = 0.01, big.mark = ","))
## [1] "-3,368.91"
temp <- data.frame(t(percent(var.mc, accuracy = 0.01)))
rownames(temp) <- "MonteCarlo VaR (t-Student: df=obs-1)"
var <- rbind(var, temp)
temp <- data.frame(t(percent(cvar.mc, accuracy = 0.01)))
rownames(temp) <- "MonteCarlo CVar (t-Student: df=obs-1)"
var <- rbind(var, temp)
var
## AAPL FB GS MCD SBUX WMT
## Parametric VaR (Normal) -3.48% -4.16% -3.39% -2.35% -2.87% -2.74%
## Parametric CVaR (Normal) -4.00% -4.78% -3.89% -2.70% -3.30% -3.15%
## Historical VaR -4.51% -4.83% -4.19% -3.02% -2.96% -2.88%
## Historical CVaR -5.84% -7.86% -4.95% -3.76% -5.12% -4.48%
## MonteCarlo VaR (t-Student: df=obs-1) -3.65% -4.21% -3.45% -2.43% -2.90% -2.81%
## MonteCarlo CVaR (t-Student: df=obs-1) -4.25% -4.92% -3.91% -2.80% -3.29% -3.20%
## DIS TSLA EqPort
## Parametric VaR (Normal) -2.85% -7.00% -2.12%
## Parametric CVaR (Normal) -3.28% -8.04% -2.44%
## Historical VaR -3.35% -7.75% -2.90%
## Historical CVaR -4.23% -10.29% -3.25%
## MonteCarlo VaR (t-Student: df=obs-1) -2.91% -6.96% -2.22%
## MonteCarlo CVaR (t-Student: df=obs-1) -3.39% -7.93% -2.58%
# Consolidating results
temp <- data.frame(t(number(var.mc.pv, accuracy = 0.01, big.mark = ",")))
rownames(temp) <- "MonteCarlo VaR (t-Student: df=obs-1)"
var.pv <- rbind(var.pv, temp)
temp <- data.frame(t(number(cvar.mc.pv, accuracy = 0.01, big.mark = ",")))
rownames(temp) <- "MonteCarlo CVar (t-Student: df=obs-1)"
var.pv <- rbind(var.pv, temp)
var.pv
## AAPL FB GS MCD SBUX
## Parametric VaR (Normal) -347.76 -415.98 -339.09 -234.90 -287.19
## Parametric CVaR (Normal) -400.46 -477.91 -388.63 -270.31 -330.14
## Historical VaR -450.82 -483.43 -418.50 -301.64 -296.33
## Historical CVaR -583.88 -785.62 -494.54 -376.50 -512.20
## MonteCarlo VaR (t-Student: df=obs-1) -365.24 -420.88 -345.09 -243.03 -290.35
## MonteCarlo CVaR (t-Student: df=obs-1) -425.12 -491.82 -391.18 -279.91 -328.71
## WMT DIS TSLA EqPort
## Parametric VaR (Normal) -273.56 -285.19 -700.15 -1,698.61
## Parametric CVaR (Normal) -314.72 -327.52 -804.07 -1,955.90
## Historical VaR -288.06 -334.94 -775.27 -2,318.78
## Historical CVaR -448.33 -422.86 -1,029.26 -2,602.51
## MonteCarlo VaR (t-Student: df=obs-1) -281.26 -291.29 -696.36 -1,779.36
## MonteCarlo CVaR (t-Student: df=obs-1) -320.13 -339.27 -792.77 -2,063.04
# Assuming a t-Student Distribution with far fewer df than theory suggests (e.g. 10)
m <- 10000
df <- 10 # instead of "m - 1"
e2 <- matrix(rt(m * n, df), ncol = n)
ret.mc2 <- data.frame(e2 %*% a)
head(ret.mc2)
## AAPL FB GS MCD SBUX WMT
## 1 0.033927507 0.02467964 0.006708318 -0.009028744 0.016255133 0.0232808422
## 2 -0.016949893 -0.03894882 -0.011954232 -0.004631043 -0.006730007 -0.0118432940
## 3 0.008487302 0.01891013 -0.010906794 0.001256555 0.015238065 0.0077382268
## 4 0.009394396 0.01449151 0.012820671 -0.013151768 0.007534240 0.0485675145
## 5 0.010030564 0.02077379 0.016392544 0.012563881 -0.006236016 -0.0008458853
## 6 0.027884628 0.03865441 0.010000024 0.016871259 0.011970151 -0.0219722645
## DIS TSLA
## 1 0.009794037 0.0506017914
## 2 0.018323441 -0.0416669081
## 3 -0.010743346 0.0229576974
## 4 0.034923569 -0.0008591072
## 5 0.011474003 -0.0100330628
## 6 -0.002051527 0.0149876165
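One caveat (an observation, not part of the original workflow): a t-Student variable with df degrees of freedom has variance \(df/(df-2)\), so with df = 10 the raw shocks have variance 1.25 rather than 1, which slightly inflates the simulated volatilities on top of the fatter tails. To isolate the tail-shape effect only, we could standardize the shocks first; a minimal sketch:
# Hypothetical rescaling to unit variance before applying the Cholesky factor:
# e2 <- e2 * sqrt((df - 2) / df)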
cor_mc2 <- cor(ret.mc2)
round(cor_mc2, 2)
## AAPL FB GS MCD SBUX WMT DIS TSLA
## AAPL 1.00 0.46 0.47 0.22 0.30 0.21 0.34 0.31
## FB 0.46 1.00 0.34 0.20 0.21 0.12 0.25 0.26
## GS 0.47 0.34 1.00 0.21 0.26 0.24 0.36 0.22
## MCD 0.22 0.20 0.21 1.00 0.36 0.27 0.22 0.10
## SBUX 0.30 0.21 0.26 0.36 1.00 0.26 0.32 0.14
## WMT 0.21 0.12 0.24 0.27 0.26 1.00 0.24 0.14
## DIS 0.34 0.25 0.36 0.22 0.32 0.24 1.00 0.15
## TSLA 0.31 0.26 0.22 0.10 0.14 0.14 0.15 1.00
round(cor_h, 2)
## AAPL FB GS MCD SBUX WMT DIS TSLA
## AAPL 1.00 0.47 0.47 0.22 0.29 0.22 0.35 0.31
## FB 0.47 1.00 0.33 0.19 0.22 0.13 0.25 0.26
## GS 0.47 0.33 1.00 0.20 0.26 0.25 0.36 0.23
## MCD 0.22 0.19 0.20 1.00 0.37 0.28 0.22 0.09
## SBUX 0.29 0.22 0.26 0.37 1.00 0.27 0.32 0.14
## WMT 0.22 0.13 0.25 0.28 0.27 1.00 0.26 0.14
## DIS 0.35 0.25 0.36 0.22 0.32 0.26 1.00 0.16
## TSLA 0.31 0.26 0.23 0.09 0.14 0.14 0.16 1.00
ret.mc2$EqPort <- as.matrix(ret.mc2[, 1:n]) %*% as.matrix(wts)
head(ret.mc2$EqPort)
## [,1]
## [1,] 0.019527315
## [2,] -0.014300094
## [3,] 0.006617230
## [4,] 0.014215129
## [5,] 0.006764977
## [6,] 0.012043037
# EqPort's MC VaR
print(percent( quantile(ret.mc2$EqPort, alpha) , accuracy = 0.01))
## 1%
## "-2.50%"
# MC VaR in returns for each asset
var.mc2 <- apply(ret.mc2, 2, quantile, alpha)
print(percent( var.mc2 , accuracy = 0.01))
## AAPL FB GS MCD SBUX WMT DIS TSLA
## "-4.10%" "-4.83%" "-4.00%" "-2.88%" "-3.43%" "-3.39%" "-3.32%" "-8.48%"
## EqPort
## "-2.50%"
# MC VaR in position value for each asset
var.mc2.pv <- var.mc2 * pv
print(number( var.mc2.pv , accuracy = 0.01, big.mark = ","))
## AAPL FB GS MCD SBUX WMT
## "-409.62" "-483.47" "-399.96" "-287.69" "-342.92" "-339.37"
## DIS TSLA EqPort
## "-332.37" "-847.52" "-2,000.35"
# EqPort's CVaR:
print(percent( mean(quantile(ret.mc2$EqPort, seq(0, alpha, 1/(m-1)))) , accuracy = 0.01))
## [1] "-2.93%"
# MC CVaR in returns for each asset:
cvar.mc2 <- colMeans(apply(ret.mc2, 2, quantile, probs = seq(0, alpha, 1/(m-1))))
print(percent( cvar.mc2 , accuracy = 0.01))
## AAPL FB GS MCD SBUX WMT DIS TSLA
## "-5.16%" "-5.91%" "-4.83%" "-3.42%" "-4.06%" "-4.10%" "-4.10%" "-10.17%"
## EqPort
## "-2.93%"
# MC CVaR in position value for each asset
cvar.mc2.pv <- cvar.mc2 * pv
print(number( cvar.mc2.pv , accuracy = 0.01, big.mark = ","))
## AAPL FB GS MCD SBUX WMT
## "-515.68" "-590.97" "-483.16" "-341.52" "-406.35" "-409.51"
## DIS TSLA EqPort
## "-409.50" "-1,017.45" "-2,341.22"
temp <- data.frame(t(percent(var.mc2, accuracy = 0.01)))
rownames(temp) <- "MonteCarlo VaR (t-Student: df=10)"
var <- rbind(var, temp)
temp <- data.frame(t(percent(cvar.mc2, accuracy = 0.01)))
rownames(temp) <- "MonteCarlo CVar (t-Student: df=10)"
var <- rbind(var, temp)
var
## AAPL FB GS MCD SBUX WMT
## Parametric VaR (Normal) -3.48% -4.16% -3.39% -2.35% -2.87% -2.74%
## Parametric CVaR (Normal) -4.00% -4.78% -3.89% -2.70% -3.30% -3.15%
## Historical VaR -4.51% -4.83% -4.19% -3.02% -2.96% -2.88%
## Historical CVaR -5.84% -7.86% -4.95% -3.76% -5.12% -4.48%
## MonteCarlo VaR (t-Student: df=obs-1) -3.65% -4.21% -3.45% -2.43% -2.90% -2.81%
## MonteCarlo CVaR (t-Student: df=obs-1) -4.25% -4.92% -3.91% -2.80% -3.29% -3.20%
## MonteCarlo VaR (t-Student: df=10) -4.10% -4.83% -4.00% -2.88% -3.43% -3.39%
## MonteCarlo CVaR (t-Student: df=10) -5.16% -5.91% -4.83% -3.42% -4.06% -4.10%
## DIS TSLA EqPort
## Parametric VaR (Normal) -2.85% -7.00% -2.12%
## Parametric CVaR (Normal) -3.28% -8.04% -2.44%
## Historical VaR -3.35% -7.75% -2.90%
## Historical CVaR -4.23% -10.29% -3.25%
## MonteCarlo VaR (t-Student: df=obs-1) -2.91% -6.96% -2.22%
## MonteCarlo CVaR (t-Student: df=obs-1) -3.39% -7.93% -2.58%
## MonteCarlo VaR (t-Student: df=10) -3.32% -8.48% -2.50%
## MonteCarlo CVaR (t-Student: df=10) -4.10% -10.17% -2.93%
# Consolidating results
temp <- data.frame(t(number(var.mc2.pv, accuracy = 0.01, big.mark = ",")))
rownames(temp) <- "MonteCarlo VaR (t-Student: df=10)"
var.pv <- rbind(var.pv, temp)
temp <- data.frame(t(number(cvar.mc2.pv, accuracy = 0.01, big.mark = ",")))
rownames(temp) <- "MonteCarlo CVar (t-Student: df=10)"
var.pv <- rbind(var.pv, temp)
var.pv
## AAPL FB GS MCD SBUX
## Parametric VaR (Normal) -347.76 -415.98 -339.09 -234.90 -287.19
## Parametric CVaR (Normal) -400.46 -477.91 -388.63 -270.31 -330.14
## Historical VaR -450.82 -483.43 -418.50 -301.64 -296.33
## Historical CVaR -583.88 -785.62 -494.54 -376.50 -512.20
## MonteCarlo VaR (t-Student: df=obs-1) -365.24 -420.88 -345.09 -243.03 -290.35
## MonteCarlo CVaR (t-Student: df=obs-1) -425.12 -491.82 -391.18 -279.91 -328.71
## MonteCarlo VaR (t-Student: df=10) -409.62 -483.47 -399.96 -287.69 -342.92
## MonteCarlo CVaR (t-Student: df=10) -515.68 -590.97 -483.16 -341.52 -406.35
## WMT DIS TSLA EqPort
## Parametric VaR (Normal) -273.56 -285.19 -700.15 -1,698.61
## Parametric CVaR (Normal) -314.72 -327.52 -804.07 -1,955.90
## Historical VaR -288.06 -334.94 -775.27 -2,318.78
## Historical CVaR -448.33 -422.86 -1,029.26 -2,602.51
## MonteCarlo VaR (t-Student: df=obs-1) -281.26 -291.29 -696.36 -1,779.36
## MonteCarlo CVaR (t-Student: df=obs-1) -320.13 -339.27 -792.77 -2,063.04
## MonteCarlo VaR (t-Student: df=10) -339.37 -332.37 -847.52 -2,000.35
## MonteCarlo CVaR (t-Student: df=10) -409.51 -409.50 -1,017.45 -2,341.22
What is the difference between the Parametric, Historical and Monte Carlo methods for calculating VaR?
What are the similarities between Historical and Monte Carlo Simulation?
What is the difference between VaR, CVaR and ES (Expected Shortfall)?
In Historical Simulation: if we add the VaR in $ of the individual stocks, or take the weighted average of their VaR in returns, to obtain the VaR of a portfolio, we are implicitly assuming perfect correlation among all the assets (\(\rho = 1\)), which is almost certainly wrong.
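To see why, consider two assets: the portfolio variance is
\[\sigma_P^2 = w_1^2\sigma_1^2 + w_2^2\sigma_2^2 + 2 w_1 w_2 \rho \sigma_1 \sigma_2\]
and only when \(\rho = 1\) does it collapse to \(\sigma_P = w_1\sigma_1 + w_2\sigma_2\), making the portfolio VaR the weighted sum of the individual VaRs; for any \(\rho < 1\) the portfolio VaR is smaller.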
In Monte Carlo Simulation: if we use the random numbers exactly as we generate them, even with appropriate distributions, we are assuming those variables are uncorrelated (\(\rho = 0\)), unless we use a method such as multiplying by the Cholesky factor of the covariance matrix to purposely generate correlated random numbers.
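A minimal sketch of the difference, reusing e, a, vol and wts from above: leaving the shocks uncorrelated understates the portfolio volatility here, because all the historical correlations are positive.
# Portfolio volatility if the shocks were (wrongly) left uncorrelated:
sd(e %*% diag(vol[1:n]) %*% wts)
# vs. with the Cholesky-correlated shocks (close to the historical 0.95%):
sd(e %*% a %*% wts)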