About the tidyquant package

The tidyquant package has made financial analysis in R considerably easier. In a nutshell, it brings the PerformanceAnalytics package, which contains functions that compute many of the widely used performance metrics, into the tidyverse framework. As a result, financial analyses become easier and more flexible: it is much simpler to manipulate financial data the way we want and to visualize both the data and the performance metrics.

To learn more about the tidyquant package, read the introduction to tidyquant by Matt Dancho.
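In practice, a tidyquant analysis is a short pipe: tq_get() downloads price data as a tibble, tq_transmute() applies a PerformanceAnalytics function such as periodReturn to turn prices into returns, and tq_performance() applies a metric function such as SharpeRatio to those returns. Below is a minimal sketch of that pipeline, using only functions and arguments that appear in the exercises that follow (the single ticker is just an illustration):

library(tidyquant)

# Prices -> quarterly returns -> Sharpe ratios, all within tidy tibbles
"AMZN" %>%
    tq_get(get = "stock.prices", from = "1990-01-01", to = "2017-12-31") %>%
    tq_transmute(select = adjusted, mutate_fun = periodReturn,
                 period = "quarterly", col_rename = "Ra") %>%
    tq_performance(Ra = Ra, Rb = NULL, performance_fun = SharpeRatio)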

Suppose that you are considering investing in two stocks: Amazon and General Electric. As a prudent investor, you analyze the historical performance of the two stocks over the period 1990-01-01 to 2017-12-31.

Q1 Which of the two stocks would have been your best performer during the study period?

library(tidyquant)
library(ggplot2)

# Import stock prices
stock_prices <- c("AMZN", "GE") %>%
    tq_get(get  = "stock.prices",
           from = "1990-01-01",
           to   = "2017-12-31") %>%
    group_by(symbol)
stock_prices
## # A tibble: 12,248 x 8
## # Groups:   symbol [2]
##    symbol date        open  high   low close   volume adjusted
##    <chr>  <date>     <dbl> <dbl> <dbl> <dbl>    <dbl>    <dbl>
##  1 AMZN   1997-05-15  2.44  2.5   1.93  1.96 72156000     1.96
##  2 AMZN   1997-05-16  1.97  1.98  1.71  1.73 14700000     1.73
##  3 AMZN   1997-05-19  1.76  1.77  1.62  1.71  6106800     1.71
##  4 AMZN   1997-05-20  1.73  1.75  1.64  1.64  5467200     1.64
##  5 AMZN   1997-05-21  1.64  1.65  1.38  1.43 18853200     1.43
##  6 AMZN   1997-05-22  1.44  1.45  1.31  1.40 11776800     1.40
##  7 AMZN   1997-05-23  1.41  1.52  1.33  1.5  15937200     1.5 
##  8 AMZN   1997-05-27  1.51  1.65  1.46  1.58  8697600     1.58
##  9 AMZN   1997-05-28  1.62  1.64  1.53  1.53  4574400     1.53
## 10 AMZN   1997-05-29  1.54  1.54  1.48  1.51  3472800     1.51
## # ... with 12,238 more rows

# Plot adjusted closing prices over time, one panel per stock
stock_prices %>%
  ggplot(aes(x = date, y = adjusted)) +
  geom_line() +
  facet_wrap(~symbol)


# Index each stock's closing price to its own first observation so the two can be compared on one scale
stock_prices %>%
  group_by(symbol) %>%
  mutate(close = close / close[1]) %>%
  ungroup() %>%
  ggplot(aes(x = date, y = close, col = symbol)) +
  geom_line()


# Cumulative growth factor of each stock over the full period
stock_prices %>%
  group_by(symbol) %>%
  mutate(close = close / close[1]) %>%
  summarise(last = last(close))
## # A tibble: 2 x 2
##   symbol   last
##   <chr>   <dbl>
## 1 AMZN   597.  
## 2 GE       3.14

Q1. Amazon was by far the better performer: by the end of the period its price had grown to roughly 597 times its first observed close, versus about 3.1 times for GE.

Q2 Calculate quarterly returns, instead of monthly.

Hint: You should revise the period argument in the tq_transmute function. To learn more about the tq_transmute function, Google the tidyquant package, find its manual (usually posted as a PDF file on the CRAN website), and search for the term tq_transmute within the manual.

tq_transmute(mutate_fun = periodReturn, period = "quarterly")

Q3 Which of the two stocks would you expect to deliver the most stable returns quarter after quarter, without worrying too much about fluctuations?

Hint: Examine the density plot. Note that density is mapped to the vertical axis, while the return percentage is mapped to the horizontal axis. The thinner and taller a distribution is, the smaller the range of likely return percentages; the thicker and shorter it is, the wider that range. Hence, it is more difficult to make predictions for a thicker, shorter distribution than for a thinner, taller one. (A numeric check of the spread is sketched after the density plot below.)

Note that although Amazon's price progressed almost every quarter over the period, its quarterly returns fluctuate far more than GE's, whose distribution is thinner and taller.

Q4 Which of the two stocks would you expect to deliver large positive returns most frequently?

Hint: As described in the Q3 hint, return percentages are mapped to the horizontal axis, with positive returns to the right and negative returns to the left. A distribution skewed far to the right therefore suggests the possibility of large positive returns (a positive skewness), and a thicker right tail indicates a larger probability of large positive returns. (The skewness is also computed numerically in the sketch after the density plot below.)

# Calculate quarterly returns
stock_returns_quarterly <- stock_prices %>%
    tq_transmute(select     = adjusted,
                 mutate_fun = periodReturn,
                 period     = "quarterly",
                 col_rename = "Ra")
stock_returns_quarterly
## # A tibble: 195 x 3
## # Groups:   symbol [2]
##    symbol date           Ra
##    <chr>  <date>      <dbl>
##  1 AMZN   1997-06-30 -0.213
##  2 AMZN   1997-09-30  1.81 
##  3 AMZN   1997-12-31  0.157
##  4 AMZN   1998-03-31  0.420
##  5 AMZN   1998-06-30  1.33 
##  6 AMZN   1998-09-30  0.119
##  7 AMZN   1998-12-31  1.88 
##  8 AMZN   1999-03-31  0.608
##  9 AMZN   1999-06-30 -0.273
## 10 AMZN   1999-09-30  0.278
## # ... with 185 more rows

# Line chart of quarterly returns, one panel per stock
stock_returns_quarterly %>%
  ggplot(aes(x = date, y = Ra)) +
  geom_line() +
  facet_wrap(~symbol)


# Density plot of quarterly returns
stock_returns_quarterly %>%
  ggplot(aes(x = Ra, fill = symbol)) +
  geom_density(alpha = 0.3)
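The visual impressions behind Q3 and Q4 can also be checked numerically: the standard deviation of the quarterly returns measures how wide each density is, and skewness() from PerformanceAnalytics (loaded with tidyquant) measures how heavy the right tail is. A quick sketch using the Ra column created above:

# Numeric check: spread (Q3) and right-tail heaviness (Q4) of quarterly returns per stock
stock_returns_quarterly %>%
  summarise(sd_Ra   = sd(Ra),
            skew_Ra = skewness(Ra))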

Q5 Which of the two stocks would provide the larger expected return adjusted for risk (in terms of standard deviation)?

Hint: Return adjusted for risk is reflected in the Sharpe ratio. The code below computes the Sharpe ratio adjusted for three different measures of risk: 1) standard deviation, 2) expected shortfall (the average of the 5% worst losses), and 3) value at risk (the loss that is not exceeded at the 95% confidence level).

# Retrieve performance metrics
stock_returns_quarterly %>%
    tq_performance(Ra = Ra,
                   Rb = NULL, # SharpeRatio does not require a benchmark return Rb
                   performance_fun = SharpeRatio)
## # A tibble: 2 x 4
## # Groups:   symbol [2]
##   symbol `ESSharpe(Rf=0%,p=95%)` `StdDevSharpe(Rf=0%,~ `VaRSharpe(Rf=0%,p~
##   <chr>                    <dbl>                 <dbl>               <dbl>
## 1 AMZN                     0.136                 0.343               0.758
## 2 GE                       0.157                 0.330               0.233
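As a sanity check on the StdDevSharpe column above, the same quantity can be computed by hand as the mean quarterly return divided by its standard deviation (the risk-free rate Rf is 0% here); a minimal sketch:

# Manual Sharpe ratio with Rf = 0: mean return over standard deviation of returns
stock_returns_quarterly %>%
  summarise(StdDevSharpe = mean(Ra) / sd(Ra))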

library(tidyr)
# Reshape the three Sharpe ratio columns to long format and plot them
stock_returns_quarterly %>%
  tq_performance(Ra = Ra,
                 Rb = NULL, # SharpeRatio does not require a benchmark return Rb
                 performance_fun = SharpeRatio) %>%
  gather(SharpeRatio, measure, 2:4) %>%
  ggplot(aes(x = symbol, y = measure, fill = SharpeRatio)) +
  geom_col(show.legend = FALSE) +
  facet_wrap(~ SharpeRatio) +
  coord_flip() +
  labs(title = "Sharpe Ratio",
       x = NULL,
       y = NULL)

Q6 Calculate the downside risk measures by revising the code below. Which of the two stocks has the greater downside risk? Discuss HistoricalES(95%), HistoricalVaR(95%), and SemiDeviation.

Hint: You should revise the performance_fun argument in the tq_performance function. To learn more about the tq_performance function, Google the tidyquant package, find its manual (usually posted as a PDF file on the CRAN website), and search for the term tq_performance within the manual. A list of options for the performance_fun argument is returned by tq_performance_fun_options(), as shown at the end of this section.

# Retrieve downside risk measures
stock_returns_quarterly %>%
    tq_performance(Ra = Ra,
                   Rb = NULL, # Calculating downside risk measures doesn't require Rb
                   performance_fun = table.DownsideRisk) %>%
  t()
##                                          [,1]      [,2]     
## symbol                                   "AMZN"    "GE"     
## DownsideDeviation(0%)                    "0.1410"  "0.0806" 
## DownsideDeviation(MAR=3.33333333333333%) "0.1447"  "0.0841" 
## DownsideDeviation(Rf=0%)                 "0.1410"  "0.0806" 
## GainDeviation                            "0.3819"  "0.0815" 
## HistoricalES(95%)                        "-0.4538" "-0.2838"
## HistoricalVaR(95%)                       "-0.2930" "-0.2097"
## LossDeviation                            "0.1622"  "0.0984" 
## MaximumDrawdown                          "0.9307"  "0.7725" 
## ModifiedES(95%)                          "-1.0000" "-0.2837"
## ModifiedVaR(95%)                         "-0.1791" "-0.1905"
## SemiDeviation                            "0.2109"  "0.1009"
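The whole table is convenient, but a single downside measure can also be requested directly by passing one of the functions listed further below as performance_fun; for example, a minimal sketch that returns only the semi-deviation:

# Semi-deviation alone: the volatility of returns that fall below the mean return
stock_returns_quarterly %>%
    tq_performance(Ra = Ra,
                   Rb = NULL,
                   performance_fun = SemiDeviation)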

# See options for the `performance_fun` argument
tq_performance_fun_options()
## $table.funs
##  [1] "table.AnnualizedReturns" "table.Arbitrary"        
##  [3] "table.Autocorrelation"   "table.CAPM"             
##  [5] "table.CaptureRatios"     "table.Correlation"      
##  [7] "table.Distributions"     "table.DownsideRisk"     
##  [9] "table.DownsideRiskRatio" "table.DrawdownsRatio"   
## [11] "table.HigherMoments"     "table.InformationRatio" 
## [13] "table.RollingPeriods"    "table.SFM"              
## [15] "table.SpecificRisk"      "table.Stats"            
## [17] "table.TrailingPeriods"   "table.UpDownRatios"     
## [19] "table.Variability"      
## 
## $CAPM.funs
##  [1] "CAPM.alpha"       "CAPM.beta"        "CAPM.beta.bear"  
##  [4] "CAPM.beta.bull"   "CAPM.CML"         "CAPM.CML.slope"  
##  [7] "CAPM.dynamic"     "CAPM.epsilon"     "CAPM.jensenAlpha"
## [10] "CAPM.RiskPremium" "CAPM.SML.slope"   "TimingRatio"     
## [13] "MarketTiming"    
## 
## $SFM.funs
## [1] "SFM.alpha"       "SFM.beta"        "SFM.CML"         "SFM.CML.slope"  
## [5] "SFM.dynamic"     "SFM.epsilon"     "SFM.jensenAlpha"
## 
## $descriptive.funs
## [1] "mean"           "sd"             "min"            "max"           
## [5] "cor"            "mean.geometric" "mean.stderr"    "mean.LCL"      
## [9] "mean.UCL"      
## 
## $annualized.funs
## [1] "Return.annualized"        "Return.annualized.excess"
## [3] "sd.annualized"            "SharpeRatio.annualized"  
## 
## $VaR.funs
## [1] "VaR"  "ES"   "ETL"  "CDD"  "CVaR"
## 
## $moment.funs
##  [1] "var"              "cov"              "skewness"        
##  [4] "kurtosis"         "CoVariance"       "CoSkewness"      
##  [7] "CoSkewnessMatrix" "CoKurtosis"       "CoKurtosisMatrix"
## [10] "M3.MM"            "M4.MM"            "BetaCoVariance"  
## [13] "BetaCoSkewness"   "BetaCoKurtosis"  
## 
## $drawdown.funs
## [1] "AverageDrawdown"   "AverageLength"     "AverageRecovery"  
## [4] "DrawdownDeviation" "DrawdownPeak"      "maxDrawdown"      
## 
## $Bacon.risk.funs
## [1] "MeanAbsoluteDeviation" "Frequency"             "SharpeRatio"          
## [4] "MSquared"              "MSquaredExcess"        "HurstIndex"           
## 
## $Bacon.regression.funs
##  [1] "CAPM.alpha"       "CAPM.beta"        "CAPM.epsilon"    
##  [4] "CAPM.jensenAlpha" "SystematicRisk"   "SpecificRisk"    
##  [7] "TotalRisk"        "TreynorRatio"     "AppraisalRatio"  
## [10] "FamaBeta"         "Selectivity"      "NetSelectivity"  
## 
## $Bacon.relative.risk.funs
## [1] "ActivePremium"    "ActiveReturn"     "TrackingError"   
## [4] "InformationRatio"
## 
## $Bacon.drawdown.funs
## [1] "PainIndex"     "PainRatio"     "CalmarRatio"   "SterlingRatio"
## [5] "BurkeRatio"    "MartinRatio"   "UlcerIndex"   
## 
## $Bacon.downside.risk.funs
##  [1] "DownsideDeviation"     "DownsidePotential"    
##  [3] "DownsideFrequency"     "SemiDeviation"        
##  [5] "SemiVariance"          "UpsideRisk"           
##  [7] "UpsidePotentialRatio"  "UpsideFrequency"      
##  [9] "BernardoLedoitRatio"   "DRatio"               
## [11] "Omega"                 "OmegaSharpeRatio"     
## [13] "OmegaExcessReturn"     "SortinoRatio"         
## [15] "M2Sortino"             "Kappa"                
## [17] "VolatilitySkewness"    "AdjustedSharpeRatio"  
## [19] "SkewnessKurtosisRatio" "ProspectRatio"        
## 
## $misc.funs
## [1] "KellyRatio"   "Modigliani"   "UpDownRatios"