Portfolio Optimization with PortfolioAnalytics
Previously prepared TXG asset returns (2016~2018 Tech/Growth stocks)
Structure of our returns data
str(txg_returns)
## An 'xts' object on 2016-01-05/2018-05-24 containing:
## Data: num [1:602, 1:15] -0.02506 -0.01957 -0.0422 0.00529 0.01619 ...
## - attr(*, "dimnames")=List of 2
## ..$ : NULL
## ..$ : chr [1:15] "AAPL" "ADBE" "AMZN" "CHKP" ...
## Indexed by objects of class: [Date] TZ: UTC
## xts Attributes:
## List of 3
## $ src : chr "yahoo"
## $ updated : POSIXct[1:1], format: "2018-05-25 16:22:17"
## $ ret_type: chr "discrete"
Stock list: 15 stocks with a technology focus (cloud computing, cyber security, data-driven businesses), along with a few promising-looking traditional-sector players
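For reference, a return series like this can be built with quantmod and PerformanceAnalytics. This is a minimal, hypothetical sketch (only a subset of the 15 tickers is shown, and the original preparation code may have differed):
library(quantmod)
library(PerformanceAnalytics)
# Subset of the 15 assets, for illustration only
tickers <- c('AAPL', 'ADBE', 'AMZN', 'CHKP', 'MSFT', 'NVDA')
# Pull adjusted closes from Yahoo (matching src 'yahoo' above)
prices <- do.call(merge, lapply(tickers, function(t) {
  Ad(getSymbols(t, src = 'yahoo', from = '2016-01-01',
                to = '2018-05-25', auto.assign = FALSE))
}))
colnames(prices) <- tickers
# Discrete (simple) daily returns, matching ret_type 'discrete' above
txg_returns_sketch <- na.omit(Return.calculate(prices, method = 'discrete'))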
Benchmark Portfolio
For comparison with the optimal portfolio(s) we eventually develop, we’ll start by creating a benchmark portfolio in which every asset is given equal weight unconditionally.
# Create a vector of equal weights
equal_weights <- rep(1 / ncol(txg_returns), ncol(txg_returns))
# Compute the benchmark returns
txg_benchmark <- Return.portfolio(R = txg_returns,
weights = equal_weights
#rebalance_on = 'quarters'
)
colnames(txg_benchmark) <- 'benchmark'
# Plot the benchmark returns
plot(txg_benchmark)
# Benchmark mean
mean(txg_benchmark)
## [1] 0.00144597
# Benchmark standard deviation
sd(txg_benchmark)
## [1] 0.01109487
# Plot benchmark monthly mean
benchmark_monthly_mean <- apply.monthly(txg_benchmark, mean)
plot(benchmark_monthly_mean)
Just for reference, but good to know. Hopefully we can do better than this (we certainly can).
Next I’m going to mess around with a few portfolio types/constraints, for example prioritizing return maximization vs. risk minimization, restricting to long investments only, per-asset weight constraints, etc.
To start I’ll be using PortfolioAnalytics’ random optimization method with a set of 20,000 (or fewer, more like 5,000) random portfolios, primarily because I can run those optimizations in a matter of minutes. If I were to use a package like DEoptim every time, each exploratory optimization would take half an hour or more on my laptop, and for this first investigative section that’s not worth the time.
I’ll use the DEoptim and pso methods once I set the specs for my final optimized portfolio below.
Also note that the specifications and optimization runs below are just a sample of the different constraints, targets, etc. that I’ve tried; for the sake of presentation there’s really no reason to display all of the trial and error.
Specification ‘zero’
Say we just want to see what an optimized portfolio would look like with shorting allowed and weights summing to zero, and a modest target for our risk objective:
Constraints: WeightSum~0, box -20~20%
Objectives: Target Mean Returns = 0.2%, Target SD = 1.5%
# Create portfolio specification
txg_spec <- portfolio.spec(colnames(txg_returns))
## Add Constraints
# Add a weight sum constraint such that the weights sum to ~0
txg_spec <- add.constraint(portfolio = txg_spec, type = 'weight_sum',
min_sum = -0.01, max_sum = 0.01)
# Add a box constraint such that no asset can have a weight that is
# greater than max% or less than min%
txg_spec <- add.constraint(portfolio = txg_spec, type = 'box',
min = -0.2, max = 0.2)
## Add Objectives
# Add an objective to maximize portfolio return with a target of 0.0020
txg_spec <- add.objective(portfolio = txg_spec, type = 'return',
name = 'mean', target = 0.0020)
# Add an objective to minimize portfolio standard deviation with a
# target of 0.015
txg_spec <- add.objective(portfolio = txg_spec,
type = 'risk',
name = 'StdDev',
target = 0.015)
# Print the portfolio specification
txg_spec
## **************************************************
## PortfolioAnalytics Portfolio Specification
## **************************************************
##
## Call:
## portfolio.spec(assets = colnames(txg_returns))
##
## Number of assets: 15
## Asset Names
## [1] "AAPL" "ADBE" "AMZN" "CHKP" "CYBR" "FDX" "ILMN" "ISRG" "MSFT" "NVDA"
## More than 10 assets, only printing the first 10
##
## Constraints
## Enabled constraint types
## - weight_sum
## - box (with shorting)
##
## Objectives:
## Enabled objective names
## - mean
## - StdDev
Run random optimization
# Random portfolios
rp <- random_portfolios(txg_spec, 5000, 'sample')
# Run optimization - method = random
txg_opt_rand <- optimize.portfolio(R = txg_returns,
portfolio = txg_spec,
optimize_method = 'random',
rp = rp,
trace = TRUE)
## Warning: executing %dopar% sequentially: no parallel backend registered
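That %dopar% warning just means the search ran sequentially. If you want to speed up the random-portfolio evaluation, you can register a parallel backend first; a minimal sketch assuming the doParallel package is installed:
library(doParallel)
# Register a parallel backend so foreach/%dopar% calls inside
# PortfolioAnalytics can run across multiple cores
registerDoParallel(cores = 2)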
# Plot
plot(txg_opt_rand, main = 'Random Optimized Portfolio',
     risk.col = 'StdDev', neighbors = 10)
First of all, these are unnecessarily low returns, which means we should consider relaxing our constraints or targeting a portfolio with a positive weight sum.
It also looks like our optimal portfolio is far to the right, meaning that the target we set for standard deviation, our risk metric, could probably be decreased without a significant corresponding drop in returns. In other words, we have a pretty low-risk portfolio. In fact, looking at the plot above, we could feasibly expect similar average return levels for portfolios with SDs in the 0.013~0.014 range as for portfolios in the 0.015~0.020 range (given the specs we set).
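We can check that reading of the plot numerically. Since we ran with trace = TRUE, extractStats() returns the objective measures for every random portfolio; a sketch (the exact column names depend on the objectives in the spec):
# Objective measures for all traced random portfolios
stats <- extractStats(txg_opt_rand)
# Average 'mean' objective within two StdDev bands
mean(stats[stats[, 'StdDev'] >= 0.013 & stats[, 'StdDev'] <= 0.014, 'mean'])
mean(stats[stats[, 'StdDev'] >= 0.015 & stats[, 'StdDev'] <= 0.020, 'mean'])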
Specifications ‘A’
Now suppose we need to find an optimal portfolio given some more realistic constraints.
Constraints: Long-Only, WeightSum~1, box 1~20%
Objectives: Target Mean = 0.15%, Target SD = 1%
# Create the portfolio specification
txg_spec_A <- portfolio.spec(colnames(txg_returns))
## Add Constraints
# Weight sum constraint such that the weights sum to ~1
txg_spec_A <- add.constraint(portfolio = txg_spec_A, type = 'weight_sum',
min_sum = 0.99, max_sum = 1.01)
# Box constraint such that no asset can have a weight that is
# greater than 20% or less than 1%
txg_spec_A <- add.constraint(portfolio = txg_spec_A, type = 'box',
min = 0.01, max = 0.2)
## Add Objectives
# Objective to maximize portfolio return with a target of 0.0015
txg_spec_A <- add.objective(portfolio = txg_spec_A, type = 'return',
name = 'mean', target = 0.0015)
# Objective to minimize portfolio standard deviation with a
# target of 0.010
txg_spec_A <- add.objective(portfolio = txg_spec_A,
type = 'risk',
name = 'StdDev',
target = 0.010)
txg_spec_A
## **************************************************
## PortfolioAnalytics Portfolio Specification
## **************************************************
##
## Call:
## portfolio.spec(assets = colnames(txg_returns))
##
## Number of assets: 15
## Asset Names
## [1] "AAPL" "ADBE" "AMZN" "CHKP" "CYBR" "FDX" "ILMN" "ISRG" "MSFT" "NVDA"
## More than 10 assets, only printing the first 10
##
## Constraints
## Enabled constraint types
## - weight_sum
## - box
##
## Objectives:
## Enabled objective names
## - mean
## - StdDev
# Random portfolios
rp <- random_portfolios(txg_spec_A, 5000, 'sample')
# Run optimization - method = random
txg_opt_base_rand <- optimize.portfolio(R = txg_returns,
portfolio = txg_spec_A,
optimize_method = 'random',
rp = rp,
trace = TRUE)
# Plot
plot(txg_opt_base_rand, main = 'Random Optimized Base Portfolio',
     risk.col = 'StdDev', neighbors = 10)
So above we have what essentially amounts to a ‘full-investment’ portfolio, with weights summing to approximately 1 and no shorts allowed. This gives us less flexibility, so we don’t see quite as many portfolios with returns in the ~0.16%+ range; however, the potential returns don’t fall by much, and we can still achieve close to the same mean returns with lower risk.
Above I’ve also set the target StdDev to 0.010 in order to show the impact on the resulting optimal portfolio. Perhaps you can begin to see how optimal portfolios follow the ‘efficient frontier’, with movement along that boundary determined by the relative importance given to maximizing returns vs. minimizing standard deviation (see the sketch below).
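If you want to see that boundary explicitly rather than inferring it from the random-portfolio cloud, PortfolioAnalytics can trace it directly. A sketch, assuming the ROI solver packages are installed (note the frontier is driven by the constraints, not the targets we set above):
# Trace and chart the mean-StdDev efficient frontier for spec A
ef <- create.EfficientFrontier(R = txg_returns, portfolio = txg_spec_A,
                               type = 'mean-StdDev', n.portfolios = 25)
chart.EfficientFrontier(ef, match.col = 'StdDev', type = 'l')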
In the first specification we had strong returns with the risk metric relaxed. Above, we found a ‘compromise’ optimal portfolio, if you will, with moderate targets for mean and sd. And below, we can specifically prioritize minimizing standard deviation. For example:
## Modify Objectives
# Objective to maximize portfolio return
txg_spec_A <- add.objective(portfolio = txg_spec_A, type = 'return',
name = 'mean')
# Objective to minimize portfolio standard deviation with a
# target of 0.005
txg_spec_A <- add.objective(portfolio = txg_spec_A,
type = 'risk',
name = 'StdDev',
target = 0.005)
# Random portfolios
rp <- random_portfolios(txg_spec_A, 5000, 'sample')
# Run optimization - method = random
txg_opt_base_rand_minSD <- optimize.portfolio(R = txg_returns,
portfolio = txg_spec_A,
optimize_method = 'random',
rp = rp,
trace = TRUE)
# Plot
plot(txg_opt_base_rand_minSD, main = 'Random Optimized Base Portfolio',
     risk.col = 'StdDev', neighbors = 10)
The following is an optimization using our full-investment parameters, but with no specific targets for returns or risk:
This is probably what comes to mind when one envisions the efficient frontier and the ‘optimum’ portfolio, but of course ‘optimum’ is really an arbitrary term (as is ‘efficient’, if you want to get philosophical) and depends on your goals.
Below we have an optimization that specifically maximizes returns. Sort of the other end of the spectrum compared to the above.
# Create portfolio specification
txg_spec_returns <- portfolio.spec(colnames(txg_returns))
## Add Constraints
# Add a weight sum constraint such that the weights sum to ~1
txg_spec_returns <- add.constraint(portfolio = txg_spec_returns,
type = 'weight_sum',
min_sum = 0.99, max_sum = 1.01)
# Add a box constraint such that no asset can have a weight that is
# greater than max% or less than min%
txg_spec_returns <- add.constraint(portfolio = txg_spec_returns,
type = 'box',
min = 0.01, max = 0.2)
## Add Objectives
# Add an objective to maximize portfolio return with a target of 0.0025
txg_spec_returns <- add.objective(portfolio = txg_spec_returns,
type = 'return',
name = 'mean',
target = 0.0025)
# Add an objective to minimize portfolio standard deviation
txg_spec_returns <- add.objective(portfolio = txg_spec_returns,
type = 'risk',
                                  name = 'StdDev')
We can better compare the asset weights we’ve generated with the charts below.
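(The run that produces txg_opt_rand_returns, charted below, isn’t echoed above; a sketch following the same pattern as the earlier optimizations:)
# Run the random optimization for the return-maximizing spec
rp <- random_portfolios(txg_spec_returns, 5000, 'sample')
txg_opt_rand_returns <- optimize.portfolio(R = txg_returns,
                                           portfolio = txg_spec_returns,
                                           optimize_method = 'random',
                                           rp = rp,
                                           trace = TRUE)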
par(mfrow = c(3, 1))
chart.Weights(txg_opt_rand_returns, main = 'Long-Only, Volatility-Relaxed')
chart.Weights(txg_opt_base_rand, main = 'Long-Only, Volatility-Restricted (sd~0.010)')
chart.Weights(txg_opt_base_rand_minSD, main = 'Long-Only, Min. Volatility')
So let’s point out the similarities and differences between the sets of weights we’ve generated above. Note that the ‘Restricted’ and ‘Minimal’ weights follow a more similar pattern when contrasted with the less restricted ‘Relaxed’ set; in particular, the latter two prefer CHKP (Check Point Software Technologies, Ltd.) and UNH (UnitedHealth Group Inc.). On the other hand, ISRG (Intuitive Surgical, Inc.) is preferred by all three portfolios. It’s interesting to see that MSFT (Microsoft) is the heaviest asset in the intermediate portfolio, but essentially absent in the two extremes.
Note that some of this variation is caused by chance. These are drawn from a set of randomly generated portfolios, and individual portfolios with similar return-to-risk ratios (physically close to each other on the scatter plots above) don’t necessarily have similar-looking asset-weight distributions.
Also, do note that even the ‘relaxed’ specification has constraints, most importantly a permitted range of weights (between 1% and 20% in this case). Were I to remove all constraints and just have R return the portfolio with the highest mean returns possible (given this is a single-period optimization, and ignoring diversification considerations), I would get a portfolio made up almost entirely of NVIDIA stock, which defeats the purpose of creating a portfolio.
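A quick way to check that intuition: rank the assets by raw mean daily return over the sample. If the claim holds, NVDA should sit at the top (a one-line sketch):
# Assets ranked by mean daily return, highest first
sort(colMeans(txg_returns), decreasing = TRUE)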
Alright, I think you get the idea. We have a spectrum of ‘optimal’ portfolios to choose from, and we decide which optimum is the one we want based on our tolerance for risk and the minimum returns we’re looking for, among other factors such as whether we want to allow shorting.
But this is all single-period optimization work. In order to arrive at a more robust solution, we’d like to perform backchecking against historical returns by rebalancing.
Risk-Budgeting, Backchecking, Advanced Optimization
Portfolio Specification ‘final’
First off, I’m going to define one last portfolio specification object that we’ll use for some serious optimization. In this specification I will include shorting and a specific risk-budget while sticking to Mean-Risk objectives.
Parameters
Constraints: WeightSum, Box
Objectives: Mean, StdDev, RiskBudget (ES)
I am not using a metric like quadratic utility as an objective here: assuming quadratic utility is a simplification that is clearly not precise in many cases, and all adding this objective really does in PortfolioAnalytics is add mean-variance objectives. I did run the optimization with QU as well (not included here), but it turned out to be too restrictive for my taste. We have been using mean-risk optimization via mean and StdDev objectives, which are simple and prove to be practically as effective as more complicated strategies in the majority of investment situations. For our purposes this technique should be a sufficient proxy for risk-aversion and utility-seeking preferences. The same essentially applies to using expected shortfall (ES) as our risk-budget measure: it may not be perfect, but it is sufficient for our purposes, and there is no all-around superior, simple alternative.
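For reference, here’s what that ES measure looks like for a single asset, computed with PerformanceAnalytics at the same confidence level and with the Boudt et al. data cleaning used in the risk budget below (a sketch, not part of the original spec):
# Cornish-Fisher ('modified') expected shortfall for one asset
ES(txg_returns[, 'AAPL'], p = 0.925, method = 'modified', clean = 'boudt')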
txg_spec_final <- portfolio.spec(colnames(txg_returns))
## Add Constraints
# Add a weight sum constraint such that the weights sum to ~1
txg_spec_final <- add.constraint(portfolio = txg_spec_final,
type = 'weight_sum',
min_sum = 0.99, max_sum = 1.01)
# Add a box constraint such that no asset can have a weight that is
# greater than 20% or less than -20%
txg_spec_final <- add.constraint(portfolio = txg_spec_final,
type = 'box',
min = -0.2, max = 0.2)
## Add Objectives
# Add an objective to maximize quadratic utility, b = 0.25
#txg_spec_final_qu <- add.objective(portfolio = txg_spec_final,
# type = 'quadratic_utility',
# risk_aversion = 0.25)
# Add an objective to maximize portfolio return with a target of 0.0015
txg_spec_final <- add.objective(portfolio = txg_spec_final,
type = 'return',
name = 'mean',
target = 0.0015)
# Add an objective to minimize portfolio standard deviation
txg_spec_final <- add.objective(portfolio = txg_spec_final,
type = 'risk',
name = 'StdDev')
# Add an objective to limit risk contribution
txg_spec_final_rb <- add.objective(portfolio = txg_spec_final,
type = 'risk_budget', name = 'ES',
arguments = list(p=0.925, clean = 'boudt'),
max_prisk = 0.3)
# Print the portfolio specification
#txg_spec_final
Random Optimization
Showing StdDev and ES measures of risk
(As well as comparing RiskBudget (…_final_rb) spec vs. pure Mean/SD (…_final) spec)
# Random portfolios (Sample)
rp <- random_portfolios(txg_spec_final_rb, 5000, 'sample')
# Random portfolios for the pure Mean/SD spec
rpsd <- random_portfolios(txg_spec_final, 5000, 'sample')
txg_opt_sample <- optimize.portfolio(R = txg_returns,
portfolio = txg_spec_final_rb,
optimize_method = 'random',
rp = rp,
trace = TRUE)
# _final (pure Mean/SD) spec
txg_opt_sample_sd <- optimize.portfolio(R = txg_returns,
portfolio = txg_spec_final,
optimize_method = 'random',
rp = rpsd,
trace = TRUE)
plot(txg_opt_sample,
     main = 'txg_opt_final_rb Optimized Portfolio - Random (sample)',
     risk.col = 'StdDev', neighbors = 10)
plot(txg_opt_sample,
     main = 'txg_opt_final_rb Optimized Portfolio - Random (sample)',
     risk.col = 'ES', neighbors = 10)
# _final (pure Mean/SD) spec
plot(txg_opt_sample_sd,
     main = 'txg_opt_final Optimized Portfolio - Random (sample)',
     risk.col = 'StdDev', neighbors = 10)
PSO (Particle Swarm Optimization)
# Random portfolios (Sample)
rp <- random_portfolios(txg_spec_final_rb, 5000, 'sample')
txg_opt_pso <- optimize.portfolio(R = txg_returns,
portfolio = txg_spec_final_rb,
optimize_method = 'pso',
rp = rp,
momentFUN = 'set.portfolio.moments',
method = 'boudt', k = 3,
                                  trace = TRUE)
txg_returns_pso <- Return.portfolio(R = txg_returns,
                                    weights = extractWeights(txg_opt_pso))
colnames(txg_returns_pso) <- 'pso'
Comparison to benchmark
If you’ll recall the benchmark portfolio we created with equal weights..
txg_benchmark_cs <- apply(txg_benchmark, 2, cumsum)
txg_returns_pso_cs <- apply(txg_returns_pso, 2, cumsum)
txg_bench_cumret <- as.numeric(txg_benchmark_cs)
txg_pso_cumret <- as.numeric(txg_returns_pso_cs)
par(mfrow = c(1, 1))
plot(index(txg_returns), txg_pso_cumret, type = 's', col = 'blue', xlab = 'Date', ylab = 'Returns')
lines(index(txg_returns), txg_bench_cumret, type = 's', col = 'black')
title('PSO Optimized Returns vs. Benchmark')
legend('bottomright', legend = c('PSO_Returns', 'Benchmark'),
       lty = c(1, 1), col = c('blue', 'black'))
Random Optimization (boudt moments)
Run the optimization with Boudt statistical factor model moments:
# Random portfolios (Sample)
rp <- random_portfolios(txg_spec_final_rb, 5000, 'sample')
txg_opt_rand_bdt <- optimize.portfolio(R = txg_returns,
portfolio = txg_spec_final_rb,
optimize_method = 'random',
rp = rp,
momentFUN = 'set.portfolio.moments',
method = 'boudt', k = 3,
trace = TRUE)
plot(txg_opt_rand_bdt,
main = 'txg_opt_final_rb Optimized Portfolio - Random - boudt moments',
     risk.col = 'StdDev', neighbors = 10)
DEoptim Optimization
# Random portfolios (Sample)
rp <- random_portfolios(txg_spec_final_rb, 5000, 'sample')
txg_opt_de <- optimize.portfolio(R = txg_returns,
portfolio = txg_spec_final_rb,
optimize_method = 'DEoptim',
rp = rp,
momentFUN = 'set.portfolio.moments',
method = 'boudt', k = 3,
                                 trace = TRUE)
plot(txg_opt_de,
     main = 'txg_opt_final_rb Optimized Portfolio - DEoptim',
     risk.col = 'StdDev', neighbors = 10)
For this DEoptim optimized portfolio, let’s check on the percent contribution to risk per asset:
# Plot percent contribution to risk
chart.RiskBudget(txg_opt_de, risk.type = 'percentage',
main = '% Risk Contribution - DEoptim Portfolio',
                 col = 'blue')
We can also look at the risk-reward scatter with individual assets’ ratios plotted as well:
# Plot Mean Returns vs. ES with assets shown
chart.RiskReward(txg_opt_de, chart.assets = TRUE)
You can visualize what kind of portfolio we might have if we added weight to assets off to the right of our plot above. Sadly, with the exception of NVIDIA, the more volatile assets don’t seem to make up for their risk with a corresponding increase in expected returns (not in a linear sense, anyway).
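The same point can be made numerically by comparing each asset’s mean return to its volatility; a sketch (a crude per-asset mean/SD ratio, ignoring correlations):
# Per-asset mean, volatility, and a simple return-to-risk ratio
asset_stats <- data.frame(mean = colMeans(txg_returns),
                          sd = apply(txg_returns, 2, sd))
asset_stats$ratio <- asset_stats$mean / asset_stats$sd
asset_stats[order(-asset_stats$ratio), ]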
Portfolio Comparisons
Set up a dataframe to plot the benchmark, PSO, and DEoptim cumulative returns together with ggplot2. Chances are I won’t echo the code for creating our cumret (cumulative returns) matrices; it’s not super interesting or relevant to our ultimate goal here.
cumrets <- cbind(txg_pso_cumret, txg_de_cumret,
txg_rand_cumret, txg_bench_cumret)
cumrets <- as.data.frame(cumrets)
colnames(cumrets) <- c('PSO', 'DEoptim', 'Random', 'Benchmark')
cumrets$Date <- index(txg_returns)
cumrets_melt <- melt(cumrets, id = c('Date'))
colnames(cumrets_melt) <- c('Date', 'Portfolio', 'Returns')
cumret_plot <- ggplot(cumrets_melt, aes(x = Date, y = Returns, col = Portfolio)) +
geom_line(linetype = 1, alpha = 0.6, size = 1.3) +
scale_color_manual(values = c('blue', 'green', 'red', 'black')) +
ylab('Cumulative Returns') +
ggtitle('Cumul.Returns Comparison')
cumret_plot
The PSO portfolio doesn’t perform as well in this case in terms of mean and cumulative returns, but this may actually flip if we change our target mean returns, or even modify the sample size when constructing our random-portfolio set. For instance, if we remove the ‘target’ argument from our returns objective, the simple random optimization yields higher cumulative returns, while our two other global solvers dip below the benchmark (of course, there are corresponding drops in our StdDev and ES risk measures, so depending on the investment strategy the optimizations with lower cumulative historical returns may be the preferred choice). On the other hand, when I was testing different optimization strategies with an rp sample size of 50~500 instead of 5,000+, the PSO method produced significantly higher cumulative returns, with simple random performing near the benchmark and DEoptim displaying fairly low returns (without much compensation in reduced standard deviation, mind you).
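For the curious, here’s a sketch of the ‘no return target’ variation described above (I’ve dropped the risk-budget objective for brevity, and txg_spec_nt is a hypothetical name):
# Rebuild the spec with a plain mean objective (no target)
txg_spec_nt <- portfolio.spec(colnames(txg_returns))
txg_spec_nt <- add.constraint(txg_spec_nt, type = 'weight_sum',
                              min_sum = 0.99, max_sum = 1.01)
txg_spec_nt <- add.constraint(txg_spec_nt, type = 'box', min = -0.2, max = 0.2)
txg_spec_nt <- add.objective(txg_spec_nt, type = 'return', name = 'mean')
txg_spec_nt <- add.objective(txg_spec_nt, type = 'risk', name = 'StdDev')
txg_opt_nt <- optimize.portfolio(R = txg_returns, portfolio = txg_spec_nt,
                                 optimize_method = 'random',
                                 rp = random_portfolios(txg_spec_nt, 5000, 'sample'),
                                 trace = TRUE)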
I think this is more evidence that, compared to truly important steps in portfolio optimization such as individual asset analysis and investment-strategy considerations, the optimization method is relatively arbitrary, given the amount of random variation and the generally similar estimated return/risk measures across several run-throughs. On the other hand, rp sample size and the choice of whether or not to include rebalancing do play a role when selecting the right portfolio.
Also keep in mind that this ‘benchmark’ is just one arbitrary possible portfolio, in which the asset weights all happen to be equal. More than anything else it’s included as a sanity check to confirm that we’re not producing wildly high or low cumulative return series, which would be evidence of potential oddities in the data or errors during the optimization process.
OK, I kinda got sidetracked this morning comparing different optimization methods ^^;; The above is just one plotted comparison of cumulative returns over time (there are other metrics worth comparing, of course). I ended up testing a number of other combinations of constraints, objectives, opt_methods, etc., and my conclusion so far is that for this kind of simple optimization of a portfolio of asset returns, what matters more than the flavor of global solver is the set of constraints/objectives and/or targets we choose, as well as taking our time when selecting the assets to include in the first place.
Anyway, now I’ll move on to what I originally intended to continue with: backchecking through rebalancing.
rp <- random_portfolios(txg_spec_final_rb, 5000, 'sample')
# Run a rebalancing optimization (note: this would really benefit from longer-term data!)
txg_opt_rebal <- optimize.portfolio.rebalancing(R = txg_returns,
portfolio = txg_spec_final_rb,
optimize_method = 'random',
rp = rp,
rebalance_on = 'quarters',
training_period = 12,
rolling_window = 8
)
# Print the results
print(txg_opt_rebal)
## **************************************************
## PortfolioAnalytics Optimization with Rebalancing
## **************************************************
##
## Call:
## optimize.portfolio.rebalancing(R = txg_returns, portfolio = txg_spec_final_rb,
## optimize_method = "random", rp = rp, rebalance_on = "quarters",
## training_period = 12, rolling_window = 8)
##
## Number of rebalancing dates: 10
## First rebalance date:
## [1] "2016-03-31 JST"
## Last rebalance date:
## [1] "2018-05-24 JST"
##
## Annualized Portfolio Rebalancing Return:
## [1] 0.2072883
##
## Annualized Portfolio Standard Deviation:
## [1] 0.1804005
# Chart the weights
chart.Weights(txg_opt_rebal, main = 'Weights - Random Rebal (Quarters)',
              colorset = colorRamps::primary.colors())
There is some variation in the weighting of assets over time, which is a hallmark of the type of portfolio we’ve chosen to work with (assets picked for potential future growth plus strong IT-sector names). Across multiple optimization runs, the most stable players seem to be Apple, FedEx, Microsoft, ROSS, and Amazon: household names that have recently shown strong returns, and whose weights wouldn’t be so volatile in a rebalanced optimized portfolio over time. Some of the other assets, however, are stocks with less continuity in their rolling returns and volatility, and would contribute to a rebalanced portfolio with many period-to-period changes. Note, however, that the assets with consistent weights over time will differ depending on the portfolio that happens to be selected from the set of random portfolios (which I regenerate each time I run this code).
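One way to quantify that ‘stable vs. jumpy’ observation is to look at how much each asset’s weight moves across the rebalance dates; a sketch:
# Standard deviation of each asset's weight across rebalance dates
# (lower = steadier allocation over time)
rebal_weights <- extractWeights(txg_opt_rebal)
sort(apply(rebal_weights, 2, sd))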
Below we visualize the change over time of assets’ percent risk contribution:
# Chart the percentage contribution to risk
chart.RiskBudget(txg_opt_rebal, match.col = 'ES',
risk.type = 'percentage',
                 colorset = colorRamps::primary.colors())
Again, some decent volatility here. The choice between a portfolio that is rebalanced and one that is not is primarily one of investment strategy. For example, if we choose some growth assets on a hunch that they will show higher returns at time t+1 despite their low performance today (t), then maybe we shouldn’t rebalance, as that would run contrary to our original incentive for choosing the assets. However, if we want to play it a little safer, given the generally accepted observation that asset returns display serial correlation, we could rebalance on, say, quarters, in order to reevaluate our asset weights each quarter and adjust for performance over time. It is my understanding that rebalancing is generally embraced as a wise strategy for the average stock-based portfolio, as you can most often decrease risk without any theoretical drop in average returns over time (again, investment strategy should be considered, and there is always risk due to randomness and the unknown, or what we term ‘chance’).
Below is a similar rebalanced optimization using DEoptim
txg_returns_rand_rebal <- Return.portfolio(R = txg_returns,
weights = extractWeights(txg_opt_rebal))
colnames(txg_returns_rand_rebal) <- 'rand_rebal'
# Create a rebalanced portfolio using DEoptim
rp <- random_portfolios(txg_spec_final_rb, 5000, 'sample')
# Run a rebalancing optimization
txg_opt_rebal_de <- optimize.portfolio.rebalancing(R = txg_returns,
portfolio = txg_spec_final_rb,
optimize_method = 'DEoptim',
rp = rp,
rebalance_on = 'quarters',
training_period = 12,
rolling_window = 8
)
txg_returns_de_rebal <- Return.portfolio(R = txg_returns,
                                         weights = extractWeights(txg_opt_rebal_de))
colnames(txg_returns_de_rebal) <- 'de_rebal'
Comparison of optimized returns
# Combine the returns
opt_rets <- cbind(txg_benchmark, txg_returns_rand, txg_returns_rand_rebal, txg_returns_de, txg_returns_de_rebal, txg_returns_pso)
# Compute annualized returns
table.AnnualizedReturns(R = opt_rets)
## benchmark rand rand_rebal de de_rebal
## Annualized Return 0.4171 0.5695 0.2073 0.5293 0.4199
## Annualized Std Dev 0.1761 0.2260 0.1804 0.2356 0.1749
## Annualized Sharpe (Rf=0%) 2.3682 2.5205 1.1490 2.2468 2.4008
## pso
## Annualized Return 0.1950
## Annualized Std Dev 0.2758
## Annualized Sharpe (Rf=0%) 0.7070
# Chart the performance summary
charts.PerformanceSummary(R = opt_rets, main = 'Returns Performance')
So what’s interesting to see is which rebalanced optimized portfolios perform well, and which drop below the benchmark. Rebalanced portfolios are often associated with lower returns and lower volatility, which may or may not be the case above, but the hit to annualized returns may not be justified, seeing as we can get a pretty similar annualized StdDev just by going with the equally-weighted, unchanging benchmark portfolio. Of course, if an investor is more interested in reducing risk and doesn’t care so much about the returns, perhaps a rebalanced portfolio would be preferable in any case (though if we were really going for a subtle-returns, well-hedged and diversified portfolio, we would definitely choose a different set of assets, something closer to the DVA portfolio I created in the previous module).
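Drawdowns are another useful lens on the same comparison (the performance summary above charts them; this pulls out the single worst drawdown per portfolio). A short sketch using PerformanceAnalytics:
# Maximum drawdown for each portfolio's return series
maxDrawdown(opt_rets)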
As mentioned before, the single-period and even the simple random optimized portfolios are still looking strong in terms of annualized returns and Sharpe ratio, with volatility no worse than the DEoptim or PSO portfolios.
Again, restating (more or less word for word) some of my conclusions from above: compared to truly important steps in portfolio optimization such as individual asset analysis and investment-strategy considerations, the solver used is relatively arbitrary, given the amount of random variation and the generally similar estimated return/risk measures across several run-throughs. On the other hand, rp sample size and the choice of whether or not to include rebalancing do play a role when selecting the right portfolio.
The asset selection and portfolio specification (constraints/objectives) steps are perhaps the most significant in determining the final optimized portfolio, but these may be out of the hands of the party actually carrying out the optimization. In that case it is all the more crucial that we do our individual asset analysis and look carefully at the relationships between assets in advance of running our optimizations (Modules 1 & 3).