This vignette is a short illustration of the factor-based (smart beta) strategies elaborated in the book “Demystifying Shariah-compliant Equity Investments”, which is currently under review. The book is co-authored with Dawood Ashraf (Islamic Development Bank Institute), Kris Boudt (Vrije Universiteit Brussel), Mulazim Ali Khokhar (Vrije Universiteit Brussel & Sukkur IBA University) and Muhammad Wajid Raza (Shaheed Benazir Bhutto University Sharingal, Dir.). We demonstrate factor-based (smart beta) equity portfolio allocation using public data for the factors, Shariah-compliance data from Ideal Ratings and open-source R code. We illustrate techniques ranging from ranking methods to tilting the portfolio weights, which may help better exploit the factor exposure or diversify the weights. We consider the Dow Jones Industrial Average (DJIA) as the reference universe.
The document, following the book, implements R code for a portfolio of five securities as of 30-Sep-2020 and uses asset turnover (ATO) as the single-factor strategy and fundamental value weights as the multi-factor strategy. However, the code is provided in such a way that the reader may choose the portfolio constituents, the date of execution and the factors (or strategy) used for the portfolio weights. We implement the ranking methods and portfolio allocation techniques in a single cross-section. This makes it simple to understand the mathematical equations, the corresponding functions and their effects on portfolio weights at a point in time. The application of these functions and procedures in a time series will be provided in the next Rpubs of this series.
Note: R is an open-source programming language widely used for data science. There are many packages that help read input, implement functions, visualize output and transform results for further use. We use the “data.table” package, which is elegant, fast and requires minimal syntax. The R documentation for “data.table” may be found at https://cran.r-project.org/web/packages/data.table/data.table.pdf.
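As a minimal, self-contained illustration of data.table’s DT[i, j, by] syntax (the toy table below is hypothetical and not part of the book’s data):
library("data.table")
# toy table: filter rows (i), compute an aggregate (j), grouped by a column (by)
DT <- data.table(Ticker = c("A", "A", "B", "B"), Price = c(10, 12, 20, 22))
DT[Price > 10, .(mean_price = mean(Price)), by = Ticker]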
For the strategies discussed in the book we provide reference data at monthly frequency for the period from 2012 to 2020. The data come mostly from public sources, except for the Shariah-compliance screening, which was provided by Ideal Ratings at quarterly frequency based on the firms’ quarterly financials.
A glimpse of the data, its dimensions and variable types is shown below.
# load the data using data.table package
library("data.table")
data <- fread("merged_DJIA_data.csv")
# Structure of data
library("dplyr")
glimpse(data)
Rows: 4,428
Columns: 22
$ Date        <IDate> 2012-01-31, 2012-01-31, 2012-01-31, 2012-01-31, 2012-01-…
$ Ticker      <chr> "MMM", "AA.3", "AXP", "AAPL", "T", "BAC", "BA", "CAT", "CV…
$ DJIA        <int> 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0…
$ AAOIFI      <int> 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0…
$ DJIslamic   <int> 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0…
$ FTSEIslamic <int> 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0…
$ Price       <dbl> 86.710, 10.160, 50.140, 456.480, 29.410, 7.130, 74.180, 10…
$ MC          <dbl> 60260.85, 10814.43, 58362.96, 425537.05, 174298.72, 75121.…
$ TR          <dbl> 6.0932, 17.4566, 6.6780, 12.7111, -1.2897, 28.2374, 1.1316…
$ `Low-risk`  <dbl> 0.019489156, 0.004465364, 0.003106981, 0.012509670, 0.0339…
$ MoM         <dbl> 0.01133101, -0.38136722, 0.17244383, 0.34527920, 0.1321807…
$ BV          <dbl> 15420.0, 13789.0, 18794.0, 90054.0, 105534.0, 211704.0, 35…
$ Sale        <dbl> 7089.0, 5989.0, 8317.0, 46333.0, 32503.0, 31191.0, 19555.0…
$ CF          <dbl> 1271.0, 174.0, 1377.0, 13785.0, -2105.0, 2836.0, 1821.0, 2…
$ DV          <dbl> 1555.0, 131.0, 861.0, 0.0, 10172.0, 1738.0, 1244.0, 1159.0…
$ ROA         <dbl> 0.0301745951, -0.0047607178, 0.0077737272, 0.0942018013, -…
$ ATO         <dbl> -0.0102790990, -0.0128269782, -0.0005440108, 0.0911677378,…
$ Accruals    <dbl> 0.33002286, 0.27753740, -0.35000693, 0.39981685, 0.2774551…
$ Levrage     <dbl> 0.16589701, 0.23357428, 0.41082061, 0.00000000, 0.23952076…
$ CFY         <dbl> 0.021091638, 0.016089620, 0.023593731, 0.032394359, -0.012…
$ EY          <dbl> 0.015831174, -0.017661594, 0.020423913, 0.030700030, -0.03…
$ STP         <dbl> 0.11763857, 0.55379731, 0.14250477, 0.10888124, 0.18647871…
As proposed in the book, we consider the Dow Jones Industrial Average (DJIA) as the reference universe. In this section we show the historical cardinality of the DJIA portfolio and its Shariah-compliance. The Shariah-screening standards include the Accounting and Auditing Organization for Islamic Financial Institutions (AAOIFI), and the Shariah boards of S&P Dow Jones (DJIslamic) and the Financial Times Stock Exchange (FTSEIslamic). The Shariah-compliance screening is provided by Ideal Ratings.
The code for the cardinality of the DJIA portfolio and its Shariah-compliant variants under AAOIFI, DJIslamic and FTSEIslamic, and the corresponding graph, is shown below.
# Cardinality of DJIA and its Shariah-compliant variants
cardinality = data[,lapply(.SD, sum),
.SDcol=c("DJIA", "AAOIFI", "DJIslamic", "FTSEIslamic"),
by=Date]
# load ggplot
library("ggplot2")
# reshape the data from wide to long
cardinality <- melt(cardinality, id.vars = "Date",
variable.name="Portfolio", value.name = "Cardinality")
# plot
ggplot(cardinality, aes(x=Date, y=Cardinality, col=Portfolio)) +
geom_line() +
theme_minimal() +
theme(legend.position = "bottom")
The choice of portfolio constituents, the cross-section (the date of the calculations for portfolio allocation) and the factor-based strategy(ies) is controlled by three variables, namely ‘port_constituents’, ‘cross_section’ and ‘sf_name’ (‘mf_names’), respectively. ‘sf_name’ holds the single-factor name and ‘mf_names’ holds the multiple factor names. The reader may re-define these variables; we keep the choices proposed in the book.
The book selects Apple, Caterpillar, Chevron, IBM and UnitedHealth as a portfolio of five securities as of 30-Sep-2020 and uses asset turnover (ATO) as the single-factor strategy and fundamental value weights (FVW) as the multi-factor strategy. The choice was deliberate and addresses some technical issues that are discussed in the book. The book also considers the price-based portfolio allocation of the DJIA as the benchmark strategy.
Following the choice of the single-factor strategy in the book, we demonstrate asset turnover (ATO) as the single-factor strategy. The choice of ATO and the five securities was made to demonstrate how to tackle short selling and low cardinality for Shariah-compliant investors.
# data chunk for all calculations, it includes
# Portfolio constituents for analysis
# Date of calculations (cross section)
port_constituents <- c("AAPL", "CAT", "CVX", "IBM", "UNH")
cross_section <- as.Date("2020-09-30")
# Choose a factor
sf_name <- "ATO"
# Filtered data for selected single factor strategy
sf_data <- data [ Date == cross_section & Ticker %in% port_constituents, ]
sf_data <- sf_data[,.SD, .SDcol = c("Ticker", "Price",sf_name)]
#Print
kable(sf_data)
| Ticker | Price | ATO |
|---|---|---|
| AAPL | 115.81 | 0.0116776 |
| CAT | 149.15 | -0.0018093 |
| CVX | 72.00 | 0.0362913 |
| IBM | 121.67 | -0.0036044 |
| UNH | 311.77 | 0.0181483 |
The book considers fundamental value, profitability, quality and value based multi-factor strategies, and it includes pure-factor, composite-tilt, sequential-tilt and hybrid-tilt approaches for multi-factor portfolio construction.
For the purpose of demonstration we select fundamental value (or “size”), a multi-factor portfolio allocation first demonstrated by Arnott, Hsu, and Moore (2005).
For the fundamental value weighted strategy we construct four factors and represent their combination as the fundamental value (multi-factor) composite. The four factors are sales, operating cash flow, book value and dividends.
The choice of factors, the date of the cross-section and the portfolio constituents is given as
# Choose a factor
mf_names <- c("Sale", "CF", "BV", "DV")
# Filtered data for selected multi-factor strategy
mf_data <- data [ Date == cross_section & Ticker %in% port_constituents, ]
mf_data <- mf_data[,.SD, .SDcol = c("Ticker", "Price", mf_names)]
#Print
kable(mf_data)
| Ticker | Price | Sale | CF | BV | DV |
|---|---|---|---|---|---|
| AAPL | 115.81 | 64698 | 15375 | 65339 | 14081 |
| CAT | 149.15 | 9881 | 1261 | 14949 | 1683 |
| CVX | 72.00 | 23997 | 3810 | 131774 | 7186 |
| IBM | 121.67 | 17559 | 3381 | 21208 | 4343 |
| UNH | 311.77 | 65115 | 3891 | 65231 | 3400 |
Note: The reader may change the portfolio constituents, the date and the factors in this sub-section, and thereby change the strategy for further evaluation and implementation. The following sub-sections use the data selected here.
This section should be read in connection with Section 4.6.2 of the book. For simplicity, we show the code and calculations for single-factor ranking methods. This makes it easy to see the effect of different ranking methods on the portfolio weights in a cross-section. For constituent \(i\) the corresponding weight \(w_{i,t}\) in the portfolio at time \(t\) may be defined as
\[ w_{i,t} = \frac{Rank_{i,t}}{\sum_{j=1}^{N} {Rank_{j,t}}},\] where \(j\) represents portfolio constituents.
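As a minimal sketch of this weighting rule (the helper below is hypothetical and not used in the chunks that follow, which compute the weights inline with data.table):
# weights proportional to ranks, as in the equation above
rank_to_weights <- function(rank) rank / sum(rank, na.rm = TRUE)
rank_to_weights(c(1, 2, 3, 4))  # 0.1 0.2 0.3 0.4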
The choice of ranking method affects the final allocation (weight) of each security in the portfolio at a point in time. We demonstrate the effect using five different ranking methods.
Short selling: The Shariah-compliance standards prohibit investors from short selling. Thus, an Islamic investor would like to have positive weights in the portfolio allocation. In this regard, we demonstrate some statistical methods to transform and normalize the factor values in order to retain the portfolio composition and avoid short selling.
The naive ranking method simply takes the factor values in a cross-section as the ranks of the securities. The R code for this approach is
# Filtered data for selected single factor strategy
sf_data <- data [ Date == cross_section & Ticker %in% port_constituents, ]
sf_data <- sf_data[,.SD, .SDcol = c("Ticker", "Price",sf_name)]
# Rank
sf_data[, Rank:=.SD, .SDcol=sf_name]
# Weights
# Note: Benchmark is Price
# BW are benchmark (price) weights
# FW are factor-ranked weights
# AW are active weights
sf_data[, BW:= Price/sum(Price)]
sf_data[, FW:= .SD/sum(.SD), .SDcol="Rank"]
sf_data[, AW:= (FW - BW)]
# Summary
sf_data <- cbind(sf_data[, 1], round(sf_data[, -1], 4))
sf_data <- rbind(sf_data, data.table(Ticker="Total", t(colSums(sf_data[, -1]))))
kable(sf_data)
| Ticker | Price | ATO | Rank | BW | FW | AW |
|---|---|---|---|---|---|---|
| AAPL | 115.81 | 0.0117 | 0.0117 | 0.1503 | 0.1924 | 0.0420 |
| CAT | 149.15 | -0.0018 | -0.0018 | 0.1936 | -0.0298 | -0.2234 |
| CVX | 72.00 | 0.0363 | 0.0363 | 0.0935 | 0.5978 | 0.5044 |
| IBM | 121.67 | -0.0036 | -0.0036 | 0.1579 | -0.0594 | -0.2173 |
| UNH | 311.77 | 0.0181 | 0.0181 | 0.4047 | 0.2990 | -0.1057 |
| Total | 770.40 | 0.0607 | 0.0607 | 1.0000 | 1.0000 | 0.0000 |
To achieve positive-only weights one could ignore negative factor values. This is the simplest method to achieve positive-only weights: \[Rank_{i,t} = \max(0, V_{i,t}),\] where \(V_{i,t}\) is the factor value of security \(i\) at time \(t\).
The positive-only ranks and weights can be implemented in R as
# Filtered data for selected single factor strategy
sf_data <- data [ Date == cross_section & Ticker %in% port_constituents, ]
sf_data <- sf_data[,.SD, .SDcol = c("Ticker", "Price",sf_name)]
# Rank
sf_data[, Rank := ifelse(.SD < 0, 0, .SD), .SDcol=sf_name, by=Ticker]
# Weights
# BW are benchmark (price) weights
# FW are factor-ranked weights
# AW are active weights
sf_data[, BW:= round(Price/sum(Price), 4)]
sf_data[, FW:= round(.SD/sum(.SD),4), .SDcol="Rank"]
sf_data[, AW:= round((FW - BW),4)]
# Summary
sf_data <- cbind(sf_data[, 1], round(sf_data[, -1], 4))
sf_data <- rbind(sf_data, data.table(Ticker="Total", t(colSums(sf_data[, -1]))))
kable(sf_data)
| Ticker | Price | ATO | Rank | BW | FW | AW |
|---|---|---|---|---|---|---|
| AAPL | 115.81 | 0.0117 | 0.0117 | 0.1503 | 0.1766 | 0.0263 |
| CAT | 149.15 | -0.0018 | 0.0000 | 0.1936 | 0.0000 | -0.1936 |
| CVX | 72.00 | 0.0363 | 0.0363 | 0.0935 | 0.5489 | 0.4554 |
| IBM | 121.67 | -0.0036 | 0.0000 | 0.1579 | 0.0000 | -0.1579 |
| UNH | 311.77 | 0.0181 | 0.0181 | 0.4047 | 0.2745 | -0.1302 |
| Total | 770.40 | 0.0607 | 0.0661 | 1.0000 | 1.0000 | 0.0000 |
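As a side note, the row-wise ifelse() with by=Ticker in the chunk above could be replaced by a single vectorized line; a sketch of the equivalent Rank assignment is
# equivalent positive-only rank, Rank = max(0, V), in one vectorized call
sf_data[, Rank := pmax(0, .SD[[1]]), .SDcols = sf_name]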
Note: The positive-only normalization may exclude some securities altogether. When the number of securities to choose from is already low, these exclusions may reduce the set of constituents considerably. To tackle this issue one needs methods that transform the data in a way that reduces or eliminates such exclusions.
For a Shariah-compliant investor, when the universe of securities is small, it is costly to drop all securities with negative factor values. The objective is to capture most of the factor strategy; to do so, we re-formulate or normalize the rank in such a way that most securities remain selected and negative-valued securities receive the smallest weights. This can be achieved as \[ Rank_{i,t} = V_{i,t} - \min_{j}(V_{j,t}),\] where \(V_{i,t}\) is the factor value of security \(i\) at time \(t\).
# Filtered data for selected single factor strategy
sf_data <- data [ Date == cross_section & Ticker %in% port_constituents, ]
sf_data <- sf_data[,.SD, .SDcol = c("Ticker", "Price",sf_name)]
# Rank
sf_data[, Rank:=(.SD - min(.SD)), .SDcol=sf_name]
# Weights
# BW are benchmark (price) weights
# FW are factor-ranked weights
# AW are active weights
sf_data[, BW:= round(Price/sum(Price), 4)]
sf_data[, FW:= round(.SD/sum(.SD),4), .SDcol="Rank"]
sf_data[, AW:= round((FW - BW),4)]
# Summary
sf_data <- cbind(sf_data[, 1], round(sf_data[, -1], 4))
sf_data <- rbind(sf_data, data.table(Ticker="Total", t(colSums(sf_data[, -1]))))
kable(sf_data)
| Ticker | Price | ATO | Rank | BW | FW | AW |
|---|---|---|---|---|---|---|
| AAPL | 115.81 | 0.0117 | 0.0153 | 0.1503 | 0.1941 | 0.0438 |
| CAT | 149.15 | -0.0018 | 0.0018 | 0.1936 | 0.0228 | -0.1708 |
| CVX | 72.00 | 0.0363 | 0.0399 | 0.0935 | 0.5068 | 0.4133 |
| IBM | 121.67 | -0.0036 | 0.0000 | 0.1579 | 0.0000 | -0.1579 |
| UNH | 311.77 | 0.0181 | 0.0218 | 0.4047 | 0.2763 | -0.1284 |
| Total | 770.40 | 0.0607 | 0.0788 | 1.0000 | 1.0000 | 0.0000 |
The quantile method may provide a better solution to the possibility of negative and non-normal factor values in the cross-section. The method divides the range of the sorted factor values in a cross-section into ‘n’ smaller intervals, say deciles (10), quartiles (4) or percentiles (100). Each interval is then assigned a rank from 1 to ‘n’ in increasing or decreasing order, as chosen. The intervals and the corresponding ranks/scores form a dictionary, and the securities are then ranked using this dictionary. The percentile method is more precise than the quartile and decile methods because it uses more intervals. The choice of quantile depends on the number of securities in the portfolio.
For demonstration purposes, we choose the decile method (\(n=10\)) for normalization and weight allocation. There are certain steps to follow when implementing the decile method for normalized ranking. The steps can be summarized as
# Filtered data for selected single factor strategy
sf_data <- data [ Date == cross_section & Ticker %in% port_constituents, ]
sf_data <- sf_data[,.SD, .SDcol = c("Ticker", "Price",sf_name)]
# Decile-method
# 1. Parameters
max_value <- max(sf_data[,.SD, .SDcol= sf_name]) # maximum value in cross section
min_value <- min(sf_data[,.SD, .SDcol= sf_name]) # minimum value in cross section
rangeV <- max_value - min_value # Range
fracV <- rangeV/10 # fraction
# 2. Dictionary
# A series of ranges
Range = seq(min_value, max_value, by=fracV)
# Decile rank
library("dplyr")
Rank = ntile(Range, 10)
dictionary <- cbind(Range, Rank)
#kable(dictionary, align = c('c', 'l'))
# 3. Rank
sf_data[, Rank:= ifelse(.SD < dictionary[2,"Range"], 1,
ifelse(.SD < dictionary[3,"Range"], 2,
ifelse(.SD < dictionary[4,"Range"], 3,
ifelse(.SD < dictionary[5,"Range"], 4,
ifelse(.SD < dictionary[6,"Range"], 5,
ifelse(.SD < dictionary[7,"Range"], 6,
ifelse(.SD < dictionary[8,"Range"], 7,
ifelse(.SD < dictionary[9,"Range"], 8,
ifelse(.SD < dictionary[10,"Range"],9,
ifelse(.SD <= dictionary[11,"Range"], 10, NA)))))))))),
.SDcol = sf_name, by = Ticker]
# Weights
# BW are benchmark (price) weights
# FW are factor-ranked weights
# AW are active weights
sf_data[, BW:= round(Price/sum(Price), 4)]
sf_data[, FW:= round(.SD/sum(.SD),4), .SDcol="Rank"]
sf_data[, AW:= round((FW - BW),4)]
# Summary
sf_data <- cbind(sf_data[, 1], round(sf_data[, -1], 4))
sf_data <- rbind(sf_data, data.table(Ticker="Total", t(colSums(sf_data[, -1]))))
# Print
kables(list(
kable(dictionary, caption = "Step 1 to 2: Ranks"),
kable(matrix(" ", ncol=5), caption = ""),
kable(sf_data, caption= "Step 3: Decile ranked portfolio weights")
))
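As a side note, the ranking step inside the chunk above could be written more compactly with base R’s findInterval(); a sketch, re-using the Range vector defined above, is
# same decile ranks as the nested ifelse(), for values inside [min_value, max_value];
# rightmost.closed = TRUE assigns the maximum value to the top decile (rank 10)
sf_data[, Rank := findInterval(.SD[[1]], Range, rightmost.closed = TRUE), .SDcols = sf_name]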
# kable(list(dictionary, matrix(numeric(), ncol=3), sf_data),
#       booktabs = TRUE, valign = 't')
An alternate method for decile scoring may be written as \[Rank_{i, t} = Rank_{lb, t} + \left[(Rank_{ub, t} - Rank_{lb, t}) \frac{V_{i,t} - \min_{j}(V_{j,t})}{\max_{j}(V_{j,t}) - \min_{j}(V_{j,t})}\right],\] where \(Rank_{lb, t}\) is the lower bound and \(Rank_{ub, t}\) the upper bound of the ranks at time \(t\). In our case the lower bound is 1 and the upper bound is 10. We do not set the lower bound to zero because we want the lowest-ranked security to still receive some weight in the portfolio.
The code for this alternate (rescaled) decile method is
# Filtered data for selected single factor strategy
sf_data <- data [ Date == cross_section & Ticker %in% port_constituents, ]
sf_data <- sf_data[,.SD, .SDcol = c("Ticker", "Price",sf_name)]
# Decile-method
# 1. Parameters
max_value <- max(sf_data[,.SD, .SDcol= sf_name]) # maximum value in cross section
min_value <- min(sf_data[,.SD, .SDcol= sf_name]) # minimum value in cross section
# Alternate function for Rank
# lb_rank is lower bound of ranks
# ub_rank is upper bound of ranks
reScale <- function (x, lb_rank, ub_rank, min_value, max_value) {
rank = lb_rank + (ub_rank - lb_rank) * (x - min_value)/(max_value - min_value)
return(ceiling(rank))
}
# Rank
sf_data[, Rank:= reScale(.SD, 1, 10, min_value, max_value), .SDcol = sf_name, by = Ticker]
# Weights
# BW are benchmark (price) weights
# FW are factor-ranked weights
# AW are active weights
sf_data[, BW:= round(Price/sum(Price), 4)]
sf_data[, FW:= round(.SD/sum(.SD),4), .SDcol="Rank"]
sf_data[, AW:= round((FW - BW),4)]
# Summary
sf_data <- cbind(sf_data[, 1], round(sf_data[, -1], 4))
sf_data <- rbind(sf_data, data.table(Ticker="Total", t(colSums(sf_data[, -1]))))
kable(sf_data)
| Ticker | Price | ATO | Rank | BW | FW | AW |
|---|---|---|---|---|---|---|
| AAPL | 115.81 | 0.0117 | 5 | 0.1503 | 0.2083 | 0.0580 |
| CAT | 149.15 | -0.0018 | 2 | 0.1936 | 0.0833 | -0.1103 |
| CVX | 72.00 | 0.0363 | 10 | 0.0935 | 0.4167 | 0.3232 |
| IBM | 121.67 | -0.0036 | 1 | 0.1579 | 0.0417 | -0.1162 |
| UNH | 311.77 | 0.0181 | 6 | 0.4047 | 0.2500 | -0.1547 |
| Total | 770.40 | 0.0607 | 24 | 1.0000 | 1.0000 | 0.0000 |
The adaptive normalization method, as defined in S&P momentum indices (2022), involves three simple steps:
1. Calculate Z-scores in a cross-section.
2. Winsorize the Z-scores between -3 and 3.
3. Normalize/transform the winsorized Z-scores to positive values.
Step 1: Z-scores
The Z-scoring in a cross-section at time \(t\) may be defined as
\[Z_{i,t} = \frac{V_{i,t} - \mu_{t}}{\sigma_{t}},\] where \(\mu_t\) and \(\sigma_t\) are the cross-sectional mean and standard deviation of the factor values.
Step 2: Winsorize
The winsorization truncates the Z-scores between -3 and 3, i.e. \(-3 \leq Z_{i,t} \leq 3\). The winsorization may be defined as
\[\tilde Z_{i,t} = max(-3, min(Z_{i,t},3)).\]
Step 3: Normalization
The normalization function \(S\) may be defined as \[S_{i} = S(\tilde Z_i),\] where \(S(\tilde Z_i)\) is
- if \(\tilde Z_i > 0\), \(S_i = 1 + \tilde Z_i\)
- if \(\tilde Z_i < 0\), \(S_i = \frac{1}{1 - \tilde Z_i}\)
- if \(\tilde Z_i = 0\), \(S_i = 1.\)
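As a compact, self-contained sketch of steps 2 and 3 (hypothetical helpers, equivalent to the definitions above but not used in the chunk below):
# winsorize a vector of Z-scores at +/- 3
winsorize <- function(z, cap = 3) pmax(-cap, pmin(z, cap))
# piecewise normalization S(.) in one vectorized line (S(0) = 1)
S_score <- function(z) ifelse(z > 0, 1 + z, 1 / (1 - z))
S_score(winsorize(c(-4.2, -1, 0, 2, 3.7)))  # 0.25 0.50 1.00 3.00 4.00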
The Z-score ranking in R is implemented as
# Filtered data for selected single factor strategy
sf_data <- data [ Date == cross_section & Ticker %in% port_constituents, ]
sf_data <- sf_data[,.SD, .SDcol = c("Ticker", "Price",sf_name)]
# Z-score
sf_data[, paste0("Z(", sf_name, ")"):=
(.SD-lapply(.SD, mean, na.rm=T))/lapply(.SD, sd, na.rm=T), .SDcol = sf_name]
# Winsorize
sf_data[, paste0("Z(", sf_name, ")"):=
ifelse(.SD < -3, -3, ifelse(.SD > 3, 3, .SD) ),
.SDcol = paste0("Z(", sf_name, ")"),
by = Ticker]
# Rank
# Normalization function taken from S&P Momentum Factor index (2021)
# It maps each of the Z scores to a positive real number
SnPScore <- function(ZScore){
idx.grtr <- which(ZScore>0)
idx.less <- which(ZScore<0)
idx.zero <- which(ZScore==0)
ZScore[idx.grtr] <- 1 + ZScore[idx.grtr]
ZScore[idx.less] <- (1/(1-ZScore[idx.less]))
ZScore[idx.zero] <- 1 # S = 1 when Z = 0, as in the definition above; NAs are left unchanged
return(ZScore)
}
sf_data[, paste0("S(", sf_name, ")"):= lapply(.SD, SnPScore), .SDcol = paste0("Z(", sf_name, ")")]
# Weights
# BW are benchmark (price) weights
# FW are factor-ranked weights
# AW are active weights
sf_data[, BW:= Price/sum(Price)]
sf_data[, FW:= .SD/sum(.SD), .SDcol=paste0("S(", sf_name, ")")]
sf_data[, AW:= (FW - BW)]
# Summary
sf_data <- rbind(sf_data, data.table(Ticker="Total", t(colSums(sf_data[, -1]))))
sf_data <- cbind(sf_data[, 1] , round(sf_data[, -1], 4))
kable(sf_data)
| Ticker | Price | ATO | Z(ATO) | S(ATO) | BW | FW | AW |
|---|---|---|---|---|---|---|---|
| AAPL | 115.81 | 0.0117 | -0.0284 | 0.9724 | 0.1503 | 0.1656 | 0.0153 |
| CAT | 149.15 | -0.0018 | -0.8561 | 0.5388 | 0.1936 | 0.0918 | -0.1018 |
| CVX | 72.00 | 0.0363 | 1.4821 | 2.4821 | 0.0935 | 0.4228 | 0.3294 |
| IBM | 121.67 | -0.0036 | -0.9663 | 0.5086 | 0.1579 | 0.0866 | -0.0713 |
| UNH | 311.77 | 0.0181 | 0.3687 | 1.3687 | 0.4047 | 0.2331 | -0.1715 |
| Total | 770.40 | 0.0607 | 0.0000 | 5.8705 | 1.0000 | 1.0000 | 0.0000 |
This section should be read along with Chapter 5 and Section 5.3 of the book. As explained in the book, the ranking methods may address a variety of issues, such as the handling of negative factor values for long-only portfolios and abnormal cross-sectional distributions of factor values. However, addressing portfolio allocation issues such as concentration risk and factor exposure may require more than enhanced factor ranking. The purpose of this section is to explain and implement enhancements to factor-based portfolio allocation strategies. These enhancements may be designed to increase the degree of factor exposure, to increase diversification, or to balance the two.
The portfolio allocation strategies explained in the book include
- heuristics-based: price, market-cap and equal weights,
- pure factor based,
- composite tilt to benchmark weights,
- sequential tilt to benchmark weights,
- hybrid tilt (a convex combination of pure factor and tilt strategies).
The single-factor strategies are restricted to pure factor, benchmark tilt and their hybrids, while multi-factor strategies may also include sequential tilt. For the sake of demonstration we take one multi-factor strategy and demonstrate the R-code implementation of all the strategies discussed. From this section onwards we only consider the adaptive, or Z-score based, normalization for ranking individual factors in the multi-factor strategy, because we are interested in long-only portfolio allocations.
Suppose there are \(N\) securities in the reference portfolio. The pure factor strategy then takes the simple average of the normalized Z-scores \(S_{i,k,t}\) over the \(K\) factors as the rank of each security, i.e.
\[Rank_{i,t} = \bar{S}_{i,t} = \frac{1}{K}\sum_{k=1}^K S_{i,k,t}.\]
The weight of each security \(w_{i,t}\) is then assigned in proportion to its normalized rank \(\bar{S}_{i,t}\) relative to the sum of the normalized ranks of all securities in the portfolio in cross-section \(t\), i.e. \[w_{i,t} = \frac{\bar{S}_{i,t}}{\sum_{j=1}^N \bar{S}_{j,t}},\] where \(j\) indexes the securities in the portfolio.
The effects of the pure (composite) factor weighted portfolio may be seen as
# Filetered data for selected multi-factor strategy
mf_data <- data [ Date == cross_section & Ticker %in% port_constituents, ]
mf_data <- mf_data[,.SD, .SDcol = c("Ticker", "Price", mf_names)]
# Z-scores
mf_data [, paste0("Z(", mf_names, ")"):=
(.SD-lapply(.SD, mean, na.rm=T))/lapply(.SD, sd, na.rm=T),
.SDcol=mf_names]
# Winsorization
mf_data[, paste0("Z(", mf_names, ")"):= ifelse(.SD < -3, -3, ifelse(.SD > 3, 3, .SD) ),
.SDcol = paste0("Z(", mf_names, ")") , by = Ticker]
# Normalization
col_names_Z <- paste0("Z(", mf_names, ")")
mf_data [, paste0("S(", col_names_Z, ")"):=lapply(.SD, SnPScore),
.SDcol=col_names_Z]
# Rank
# PFR is pure factor rank
mf_data [, PFR:=rowMeans(.SD),.SDcol=paste0("S(", col_names_Z, ")")]
# Weights
# BW are benchmark (price) weights
# PFW are pure factor-ranked weights
# AW are active weights
mf_data [, PFW:=.SD/sum(.SD),.SDcol="PFR"]
mf_data [, BW:=.SD/sum(.SD),.SDcol="Price"]
mf_data [, AW:=(PFW - BW)]
# Summary
mf_data <- rbind(mf_data, data.table(Ticker="Total", t(colSums(mf_data[, -1]))))
mf_data <- cbind(mf_data[, 1] , round(mf_data[, -1], 4))
kable(mf_data [, .SD, .SDcol=c("Ticker", "Price", paste0("S(", col_names_Z, ")"),
"PFR", "PFW", "BW", "AW")])| Ticker | Price | S(Z(Sale)) | S(Z(CF)) | S(Z(BV)) | S(Z(DV)) | PFR | PFW | BW | AW |
|---|---|---|---|---|---|---|---|---|---|
| AAPL | 115.81 | 2.0681 | 2.7558 | 1.1206 | 2.6318 | 2.1441 | 0.3715 | 0.1503 | 0.2212 |
| CAT | 149.15 | 0.5025 | 0.5666 | 0.5109 | 0.5221 | 0.5255 | 0.0911 | 0.1936 | -0.1025 |
| CVX | 72.00 | 0.6849 | 0.7636 | 2.5418 | 1.2152 | 1.3014 | 0.2255 | 0.0935 | 0.1320 |
| IBM | 121.67 | 0.5876 | 0.7214 | 0.5484 | 0.7305 | 0.6470 | 0.1121 | 0.1579 | -0.0458 |
| UNH | 311.77 | 2.0838 | 0.7721 | 1.1183 | 0.6399 | 1.1535 | 0.1999 | 0.4047 | -0.2048 |
| Total | 770.40 | 5.9270 | 5.5795 | 5.8400 | 5.7395 | 5.7715 | 1.0000 | 1.0000 | 0.0000 |
To construct a composite tilt, Bender and Wang (2015) and Russell (2017b) suggest multiplying the average of the normalized multi-factor scores by the benchmark weight in the cross-section to determine the composite tilt smart beta portfolio weights. In this way, smart beta portfolios tilt toward the desirable factors, as stocks with higher factor scores gain more total and active weight than stocks with lower factor ranks.
Let \(w_{i,t}^B\) be the benchmark (price) weights; the composite tilt portfolio weights may then be defined as
\[w_{i,t} = \frac{\bar{S}_{i,t}\, w_{i,t}^B}{\sum_{j=1}^N \bar{S}_{j,t}\, w_{j,t}^B}, \] where \(j\) indexes the securities in the portfolio. The effects of the composite tilt on the portfolio weights are
# Filtered data for selected multi-factor strategy
mf_data <- data [ Date == cross_section & Ticker %in% port_constituents, ]
mf_data <- mf_data[,.SD, .SDcol = c("Ticker", "Price", mf_names)]
# Z scores
mf_data [, paste0("Z(", mf_names, ")"):=
(.SD - lapply(.SD, mean, na.rm=T))/lapply(.SD, sd, na.rm=T),
.SDcol=mf_names]
# Winsorization
mf_data[, paste0("Z(", mf_names, ")"):= ifelse(.SD < -3, -3, ifelse(.SD > 3, 3, .SD) ),
.SDcol = paste0("Z(", mf_names, ")") , by = Ticker]
# Normalization
col_names_Z <- paste0("Z(", mf_names, ")")
mf_data [, paste0("S(", col_names_Z, ")"):=lapply(.SD, SnPScore),
.SDcol=col_names_Z]
# Pure factor Rank
mf_data [, PFR:=rowMeans(.SD),.SDcol=paste0("S(", col_names_Z, ")")]
# Weights
# BW are benchmark (price) weights
# FW are factor-ranked weights
# AW are active weights
# CTR is composite tilt rank
# CTW is composite tilt weights
mf_data [, BW:=.SD/sum(.SD), .SDcol="Price"]
mf_data [, CTR:=Reduce('*', .SD), .SDcol=c("BW", "PFR")] # composite tilt
mf_data [, CTW:=.SD/sum(.SD),.SDcol="CTR"]
mf_data [, AW:=(CTW - BW)]
# Summary
mf_data <- rbind(mf_data, data.table(Ticker="Total", t(colSums(mf_data[, -1]))))
mf_data <- cbind(mf_data[, 1] , round(mf_data[, -1], 4))
kable(mf_data[, .SD, .SDcol=c("Ticker",paste0("S(", col_names_Z, ")"), "BW", "CTR", "CTW", "AW")])
| Ticker | S(Z(Sale)) | S(Z(CF)) | S(Z(BV)) | S(Z(DV)) | BW | CTR | CTW | AW |
|---|---|---|---|---|---|---|---|---|
| AAPL | 2.0681 | 2.7558 | 1.1206 | 2.6318 | 0.1503 | 0.3223 | 0.2892 | 0.1388 |
| CAT | 0.5025 | 0.5666 | 0.5109 | 0.5221 | 0.1936 | 0.1017 | 0.0913 | -0.1023 |
| CVX | 0.6849 | 0.7636 | 2.5418 | 1.2152 | 0.0935 | 0.1216 | 0.1091 | 0.0157 |
| IBM | 0.5876 | 0.7214 | 0.5484 | 0.7305 | 0.1579 | 0.1022 | 0.0917 | -0.0663 |
| UNH | 2.0838 | 0.7721 | 1.1183 | 0.6399 | 0.4047 | 0.4668 | 0.4188 | 0.0141 |
| Total | 5.9270 | 5.5795 | 5.8400 | 5.7395 | 1.0000 | 1.1147 | 1.0000 | 0.0000 |
Sequential tilt portfolio allocation, as opposed to composite tilt, tilts the benchmark weights by the rankings of the individual factors: the benchmark weight of each stock is multiplied by each of its factor scores to determine its weight in the factor-based portfolio.
Let \(w_{i,t}^B\) be the benchmark (price) weights; the sequential tilt with \(K\) factors may then be defined as \[w_{i,t} = \frac{\prod_{k=1}^K S_{i,k,t}\, w_{i,t}^B}{\sum_{j=1}^N \prod_{k=1}^K S_{j,k,t}\, w_{j,t}^B},\] where \(j\) indexes the securities in the portfolio.
# Filtered data for selected multi-factor strategy
mf_data <- data [ Date == cross_section & Ticker %in% port_constituents, ]
mf_data <- mf_data[,.SD, .SDcol = c("Ticker", "Price", mf_names)]
# Z scores
mf_data [, paste0("Z(", mf_names, ")"):=
(.SD - lapply(.SD, mean, na.rm=T))/lapply(.SD, sd, na.rm=T),
.SDcol=mf_names]
# Winsorization
mf_data[, paste0("Z(", mf_names, ")"):= ifelse(.SD < -3, -3, ifelse(.SD > 3, 3, .SD) ),
.SDcol = paste0("Z(", mf_names, ")") , by = Ticker]
# Normalization
col_names_Z <- paste0("Z(", mf_names, ")")
mf_data [, paste0("S(", col_names_Z, ")"):=lapply(.SD, SnPScore),
.SDcol=col_names_Z]
# Weights:
# BW are benchmark (price) weights
# FW are factor-ranked weights
# AW are active weights
# STR is Sequential Tilt rank
# STW is Sequential Tilt weights
mf_data [, BW:=.SD/sum(.SD),.SDcol="Price"]
mf_data [, STR:=Reduce('*', .SD), .SDcol=c("BW",paste0("S(", col_names_Z, ")"))]
mf_data [, STW:=.SD/sum(.SD),.SDcol="STR"]
mf_data [, AW:= (STW - BW)]
# Summary
mf_data <- rbind(mf_data, data.table(Ticker="Total", t(colSums(mf_data[, -1]))))
mf_data <- cbind(mf_data[, 1] , round(mf_data[, -1], 4))
kable(mf_data[, .SD, .SDcol=c("Ticker",paste0("S(", col_names_Z, ")"), "BW", "STR", "STW", "AW")])
| Ticker | S(Z(Sale)) | S(Z(CF)) | S(Z(BV)) | S(Z(DV)) | BW | STR | STW | AW |
|---|---|---|---|---|---|---|---|---|
| AAPL | 2.0681 | 2.7558 | 1.1206 | 2.6318 | 0.1503 | 2.5269 | 0.7933 | 0.6430 |
| CAT | 0.5025 | 0.5666 | 0.5109 | 0.5221 | 0.1936 | 0.0147 | 0.0046 | -0.1890 |
| CVX | 0.6849 | 0.7636 | 2.5418 | 1.2152 | 0.0935 | 0.1510 | 0.0474 | -0.0461 |
| IBM | 0.5876 | 0.7214 | 0.5484 | 0.7305 | 0.1579 | 0.0268 | 0.0084 | -0.1495 |
| UNH | 2.0838 | 0.7721 | 1.1183 | 0.6399 | 0.4047 | 0.4660 | 0.1463 | -0.2584 |
| Total | 5.9270 | 5.5795 | 5.8400 | 5.7395 | 1.0000 | 3.1853 | 1.0000 | 0.0000 |
A hybrid portfolio, as explained earlier, is a convex combination of a benchmark-dependent and a benchmark-independent portfolio. For simplicity of exposition, we construct three 50/50 portfolios that combine the pure factor portfolio, which reflects the highest level of factor desirability, with the benchmark and with the two benchmark-dependent strategies. That is, for the hybrid portfolios the first half is always the pure factor portfolio, while the second half may be the benchmark, the composite-tilt portfolio or the sequential-tilt portfolio.
Let \(w^F_{i,t}\) be the pure-factor weights and \(w^{ST}_{i,t}\) the sequentially tilted weights; the hybrid tilt weights \(w^H_{i,t}\) may then be defined as \[w^H_{i,t} = \alpha w^F_{i,t} + (1 - \alpha) w^{ST}_{i,t},\] where \(\alpha \in [0, 1]\). We use \(\alpha = 0.5\) below.
# Filtered data for selected multi-factor strategy
mf_data <- data [ Date == cross_section & Ticker %in% port_constituents, ]
mf_data <- mf_data[,.SD, .SDcol = c("Ticker", "Price", mf_names)]
# Z scores
mf_data [, paste0("Z(", mf_names, ")"):=
(.SD - lapply(.SD, mean, na.rm=T))/lapply(.SD, sd, na.rm=T),
.SDcol=mf_names]
# Winsorization
mf_data[, paste0("Z(", mf_names, ")"):= ifelse(.SD < -3, -3, ifelse(.SD > 3, 3, .SD) ),
.SDcol = paste0("Z(", mf_names, ")") , by = Ticker]
# Normalization
col_names_Z <- paste0("Z(", mf_names, ")")
mf_data [, paste0("S(", col_names_Z, ")"):=lapply(.SD, SnPScore),
.SDcol=col_names_Z]
# pure factor rank (PFR)
mf_data [, PFR:=rowMeans(.SD),.SDcol=paste0("S(", col_names_Z, ")")]
# Weights:
# BW are benchmark (price) weights
# FW are factor-ranked weights
# STW are sequentially tilted factor-based weights
# HFW are hybrid factor-based weights
# PFW is pure factor weights
# AW are active weights
mf_data [, PFW:=.SD/sum(.SD),.SDcol="PFR"]
mf_data [, BW:=.SD/sum(.SD),.SDcol="Price"]
mf_data [, STR:=Reduce('*', .SD), .SDcol=c("BW",paste0("S(", col_names_Z, ")"))]
mf_data [, STW:=.SD/sum(.SD),.SDcol="STR"]
mf_data$HFW <- mf_data$PFW*0.5 + mf_data$STW*0.5
mf_data [, AW:= (HFW - BW)]
# Summary
mf_data <- rbind(mf_data, data.table(Ticker="Total", t(colSums(mf_data[, -1]))))
mf_data <- cbind(mf_data[, 1] , round(mf_data[, -1], 4))
kable(mf_data[, .SD, .SDcol=c("Ticker",paste0("S(", col_names_Z, ")"), "BW", "PFR", "STR", "PFW", "STW","HFW","AW")])
| Ticker | S(Z(Sale)) | S(Z(CF)) | S(Z(BV)) | S(Z(DV)) | BW | PFR | STR | PFW | STW | HFW | AW |
|---|---|---|---|---|---|---|---|---|---|---|---|
| AAPL | 2.0681 | 2.7558 | 1.1206 | 2.6318 | 0.1503 | 2.1441 | 2.5269 | 0.3715 | 0.7933 | 0.5824 | 0.4321 |
| CAT | 0.5025 | 0.5666 | 0.5109 | 0.5221 | 0.1936 | 0.5255 | 0.0147 | 0.0911 | 0.0046 | 0.0478 | -0.1458 |
| CVX | 0.6849 | 0.7636 | 2.5418 | 1.2152 | 0.0935 | 1.3014 | 0.1510 | 0.2255 | 0.0474 | 0.1364 | 0.0430 |
| IBM | 0.5876 | 0.7214 | 0.5484 | 0.7305 | 0.1579 | 0.6470 | 0.0268 | 0.1121 | 0.0084 | 0.0603 | -0.0977 |
| UNH | 2.0838 | 0.7721 | 1.1183 | 0.6399 | 0.4047 | 1.1535 | 0.4660 | 0.1999 | 0.1463 | 0.1731 | -0.2316 |
| Total | 5.9270 | 5.5795 | 5.8400 | 5.7395 | 1.0000 | 5.7715 | 3.1853 | 1.0000 | 1.0000 | 1.0000 | 0.0000 |
We saw that sequentially tilted portfolio weights may be highly exposed to, or skewed by, the factor values; for example, Apple receives nearly 80 percent of the sequentially tilted portfolio. This factor exposure may hurt portfolio diversification (constituent weights) in a cross-section. The balance between factor exposure and diversification depends on the investment strategy, and we presented the hybrid strategy as a tool to strike this balance.
There are methods in the literature that may help quantify and standardize portfolio diversification and active factor exposure for comparison. The book illustrates the (inverse) Herfindahl–Hirschman index (Herfindahl 1950), also known as effective N (eN), as a measure of diversification, and the active factor exposure of Russell (2017a) as a measure of factor exposure or tilt.
The diversification of a portfolio in a cross-section, or effective N (eN), may be defined as \[eN(w) = \frac{1}{\sum_{i=1}^N w_{i}^2}. \]
The active factor exposure (AFE) may be defined as \[AFE_z(w) = \sum_{i=1}^N (w_i - w_i^B)\, Z_i, \] where \(w_i^B\) are the benchmark weights and \(Z_i\) the Z-scores in the cross-section.
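As a minimal sketch of these two measures (hypothetical helper functions; the chunk below computes them inline instead):
# effective N (inverse Herfindahl) and active factor exposure
effective_N <- function(w) 1 / sum(w^2)
active_factor_exposure <- function(w, w_bench, z) sum((w - w_bench) * z)
effective_N(rep(1/5, 5))  # equal weights over 5 securities give eN = 5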
# Portfolio constituents, cross-section date and factors
port_constituents <- c("AAPL", "CAT", "CVX", "IBM", "UNH")
cross_section <- as.Date("2020-09-30")
factor_names <- mf_names
# Select data
FEeN_data <- data [ Date == cross_section & Ticker %in% port_constituents, ]
FEeN_data <- FEeN_data[,.SD, .SDcol = c("Ticker", "Price", "MC", factor_names)]
# Z scores
FEeN_data [, paste0("Z(", factor_names, ")"):=
(.SD - lapply(.SD, mean, na.rm=T))/lapply(.SD, sd, na.rm=T),
.SDcol=factor_names]
# Winsorization
FEeN_data[, paste0("Z(", factor_names, ")"):= ifelse(.SD < -3, -3, ifelse(.SD > 3, 3, .SD) ),
.SDcol = paste0("Z(", factor_names, ")") , by = Ticker]
# Normalization
col_names_Z <- paste0("Z(", factor_names, ")")
FEeN_data [, paste0("S(", col_names_Z, ")"):=lapply(.SD, SnPScore),
.SDcol=col_names_Z]
# Ranks
# pure factor rank (PFR)
FEeN_data [, PFR:=rowMeans(.SD),.SDcol=paste0("S(", col_names_Z, ")")]
# Composite factor rank (CTR)
# Sequential tilt rank (STR)
FEeN_data [, BW:=.SD/sum(.SD),.SDcol="Price"]
FEeN_data [, CTR:=Reduce('*', .SD), .SDcol=c("BW", "PFR")] # composite tilt
FEeN_data [, STR:=Reduce('*', .SD), .SDcol=c("BW", paste0("S(", col_names_Z, ")"))]
# Weights
FEeN_data [, MCW:=.SD/sum(.SD),.SDcol="MC"]
FEeN_data [, EW:=1/length(port_constituents),]
FEeN_data [, PFW:=.SD/sum(.SD),.SDcol="PFR"]
FEeN_data [, CTW:=.SD/sum(.SD),.SDcol="CTR"]
FEeN_data [, STW:=.SD/sum(.SD),.SDcol="STR"]
FEeN_data$HFW <- FEeN_data$PFW*0.5 + FEeN_data$STW*0.5
# Effective N and Active Factor Exposure
weights_names <- c("BW", "MCW", "EW","PFW", "CTW", "STW", "HFW")
#Effective N (eN)
weights <- FEeN_data[,.SD,.SDcol=weights_names]
eN <- round(1/colSums(weights^2), 2)
# Active Factor Exposure (AFE)
# Take Z score as average z score of the multi factor strategy
Z <- rowMeans(FEeN_data[, .SD, .SDcol=paste0("Z(", mf_names, ")")])
BW <- t(t(FEeN_data$BW))
AFE <- round(colSums((weights - BW)*Z), 2)
AFE[1:3] <- 0 # AFE does not apply to BW, MCW and EW
# Results
results <- data.table(c("Benchmark", "Market-cap", "Equal-weight", "Pure-factor",
"Composite-tilt", "Sequential-tilt", "Hybrid-weight"),
t(t(eN)), t(t(AFE)))
colnames(results) <- c("Strategy","eN", "AFE")
kable(results)
| Strategy | eN | AFE |
|---|---|---|
| Benchmark | 3.88 | 0.00 |
| Market-cap | 1.68 | 0.00 |
| Equal-weight | 5.00 | 0.00 |
| Pure-factor | 4.01 | 0.39 |
| Composite-tilt | 3.48 | 0.29 |
| Sequential-tilt | 1.53 | 0.96 |
| Hybrid-weight | 2.54 | 0.67 |
The results show that the sequentially tilted portfolio captures the most active factor exposure (0.96), while its diversification (effective N, the inverse of the Herfindahl index) is the lowest at 1.53 out of a 5-security portfolio. In contrast, the equal-weight (EW) strategy has the highest diversification in terms of portfolio allocation. We also showed that factor exposure and diversification may be balanced with the hybrid strategy.
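To make the last point concrete, a minimal sketch (not part of the book’s code) re-uses the FEeN_data table and the Z vector created in the chunk above to sweep the hybrid mixing parameter \(\alpha\) and recompute eN and AFE:
# sweep alpha in the hybrid weights w = alpha*PFW + (1 - alpha)*STW
alphas <- seq(0, 1, by = 0.25)
sweep_eN_AFE <- sapply(alphas, function(a) {
w <- a * FEeN_data$PFW + (1 - a) * FEeN_data$STW      # convex combination
c(eN  = 1 / sum(w^2),                                 # effective N
AFE = sum((w - FEeN_data$BW) * Z))                    # active factor exposure
})
colnames(sweep_eN_AFE) <- paste0("alpha=", alphas)
round(sweep_eN_AFE, 2)
At \(\alpha = 1\) this reproduces the pure-factor row of the table above (eN 4.01, AFE 0.39) and at \(\alpha = 0\) the sequential-tilt row (eN 1.53, AFE 0.96); intermediate values trade off exposure against diversification.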
The vignette provides hands-on experience of implementing factor-based (smart beta) equity portfolio allocation strategies for Shariah-compliant investors using public data and open-source R code. The document, following the book, implements asset turnover (ATO) as the single-factor strategy and fundamental value weights as the multi-factor strategy for a portfolio of five securities as of 30-Sep-2020. Implementing the smart beta strategies in a single cross-section with a five-security portfolio makes it simple to understand the corresponding functions and their effects on portfolio weights. We nevertheless provide the complete data from 2012 to 2020 at monthly frequency for all 30 Dow Jones Industrial Average (DJIA) constituents. Further, the code is provided in such a way that the reader may choose the portfolio constituents, the date of execution and the factors (or strategy) used for the portfolio weights.
The vignette illustrates simple ranking and factor value weighted portfolio allocations along with their enhancements. We illustrated five ranking methods and demonstrated how these methods may affect the portfolio allocation strategy (for example, by avoiding short selling). We also illustrated five portfolio weighting methods and showed how factor tilted (smart beta) portfolios may exploit factor exposures. Finally, we illustrated methods to measure the active factor exposure and the diversification of a smart beta portfolio, and a hybrid strategy to balance exposure against concentration of weights in a portfolio.
The application of these functions and procedures for portfolio allocation in a time series may be released in an upcoming Rpubs document, along with comparative performance analytics.
This vignette is a short illustration of the smart beta strategies discussed in the book “Demystifying Shariah-compliant Equity Investments”. The book is co-authored with Dawood Ashraf (Islamic Development Bank Institute), Kris Boudt (Vrije Universiteit Brussel), Mulazim Ali Khokhar (Vrije Universiteit Brussel & Sukkur IBA University) and Muhammad Wajid Raza (Shaheed Benazir Bhutto University Sharingal, Dir.) and is under an international peer-review process through the Islamic Development Bank Institute. We did not perform the Shariah-compliance screening of the securities ourselves; the screening data were provided by Ideal Ratings. A time series implementation of the strategies and a comparative analysis of portfolio performance will be provided in another supplementary Rpubs document.
We express our sincere gratitude to Ideal Ratings for providing access to their rating data.