Summary

Applying Modern Portfolio Theory to a mock portfolio of equities, translating monetary policy implications into market outcomes, and asking how portfolio construction should adapt to different policy regimes.

Overview: I constructed the classical efficient frontier, identified the optimal portfolio allocations, and then layered on macro models to estimate each asset’s sensitivity to interest rates, inflation expectations, and the U.S. dollar. I then used the individual betas to project portfolio performance under monetary policy scenarios ranging from aggressive easing to stagflation and recession. I also included a risk analysis showing maximum losses and gains over given periods based on historical market data.


Part I: Portfolio Optimization

Asset Universe

library(quantmod)
library(PerformanceAnalytics)
library(ggplot2)
library(corrplot)
library(tidyr)
library(dplyr)
library(scales)
library(knitr)
library(RColorBrewer)
library(gridExtra)
if (!requireNamespace("moments", quietly = TRUE)) install.packages("moments")

set.seed(2026)

The portfolio universe spans names across technology, fintech, minerals, energy, healthcare, logistics, and emerging markets:

Ticker Company / Fund Sector Macro Exposure
AMZN Amazon Tech / Consumer Growth, consumer spending
HOOD Robinhood Markets Fintech Rate-sensitive, retail flows
UUUU Energy Fuels Uranium / Energy Commodity, energy policy
COHR Coherent Corp Photonics / Tech Capex cycle, AI demand
PLTR Palantir Technologies Data Analytics Gov’t spending, growth
NVDA NVIDIA Semiconductors AI capex, global trade
BLK BlackRock Asset Management AUM/rates, financial conditions
ZETA Zeta Global AdTech Ad spending, consumer
GRAB Grab Holdings SE Asia Ride-Hailing EM growth, USD exposure
SOXL Direxion 3x Semis ETF Leveraged ETF Amplified semi/rate beta
NKE Nike Consumer / Apparel Consumer confidence, FX
JBLU JetBlue Airways Airlines Oil, consumer, rates
NIO NIO Inc EV / China China policy, FX, EV demand
UPS United Parcel Service Logistics GDP, trade volumes
MP MP Materials Rare Earths Industrial policy, China
JNJ Johnson & Johnson Healthcare Defensive, rate-insensitive
AMKR Amkor Technology Semiconductor Capex cycle, chip demand, AI adoption
EL Estee Lauder Consumer / Beauty Luxury demand, consumer spending, FX

Data Collection

tickers <- c("AMZN", "HOOD", "UUUU", "COHR", "PLTR", "NVDA", "BLK", "ZETA",
             "GRAB", "SOXL", "NKE", "JBLU", "NIO", "UPS", "MP", "JNJ")

benchmark   <- "SPY"
start_date  <- "2024-01-01"
end_date    <- Sys.Date()
risk_free   <- 0.0418
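# Note (sketch, not part of the original workflow): the risk-free rate above is a
# hard-coded snapshot; it could instead be pulled from FRED at run time, e.g.
#   rf <- getSymbols("DGS3MO", src = "FRED", auto.assign = FALSE)  # 3-month Treasury yield, in percent
#   risk_free <- as.numeric(last(na.omit(rf))) / 100
# (left commented out here so the reported results stay tied to the 4.18% assumption)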

# Equity data
all_tickers <- c(tickers, benchmark)
prices_list <- list()
for (tkr in all_tickers) {
  tryCatch({
    tmp <- getSymbols(tkr, src = "yahoo", from = start_date, to = end_date, auto.assign = FALSE)
    prices_list[[tkr]] <- Ad(tmp)
  }, error = function(e) { cat("Failed:", tkr, "\n") })
}
prices <- do.call(merge, prices_list)
colnames(prices) <- names(prices_list)
prices <- na.omit(prices)

# Macro factor proxies
macro_symbols <- c("SHY", "TLT", "TIP", "UUP")
macro_list <- list()
for (tkr in macro_symbols) {
  tryCatch({
    tmp <- getSymbols(tkr, src = "yahoo", from = start_date, to = end_date, auto.assign = FALSE)
    macro_list[[tkr]] <- Ad(tmp)
  }, error = function(e) { cat("Failed:", tkr, "\n") })
}
macro_prices <- do.call(merge, macro_list)
colnames(macro_prices) <- names(macro_list)
macro_prices <- na.omit(macro_prices)
returns_all  <- Return.calculate(prices, method = "log")[-1, ]
returns_port <- returns_all[, tickers]
returns_bm   <- returns_all[, benchmark]
macro_returns <- Return.calculate(macro_prices, method = "log")[-1, ]

n_assets <- ncol(returns_port)
mu      <- colMeans(returns_port, na.rm = TRUE) * 252
sigma   <- apply(returns_port, 2, sd, na.rm = TRUE) * sqrt(252)
cov_mat <- cov(returns_port, use = "pairwise.complete.obs") * 252

Sample period: 526 trading days from January 02, 2024 to February 05, 2026.
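
For reproducibility, the sample-period line above can be generated directly from the return series rather than typed by hand; a minimal sketch, assuming the returns_port object from the code above:

cat(sprintf("Sample period: %d trading days from %s to %s.\n",
            nrow(returns_port),
            format(start(returns_port), "%B %d, %Y"),
            format(end(returns_port), "%B %d, %Y")))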

Individual Asset Performance

asset_stats <- data.frame(
  Ticker = tickers,
  `Ann. Return (%)` = round(mu * 100, 2),
  `Ann. Volatility (%)` = round(sigma * 100, 2),
  `Sharpe Ratio` = round((mu - risk_free) / sigma, 3),
  check.names = FALSE
)
asset_stats <- asset_stats[order(-asset_stats$`Sharpe Ratio`), ]
kable(asset_stats, row.names = FALSE, caption = "Individual Asset Statistics (Annualized)")
Individual Asset Statistics (Annualized)
Ticker Ann. Return (%) Ann. Volatility (%) Sharpe Ratio
PLTR 98.85 64.05 1.478
HOOD 85.00 68.33 1.183
NVDA 61.09 50.20 1.134
COHR 76.94 66.21 1.099
JNJ 21.97 17.49 1.017
MP 50.71 75.77 0.614
UUUU 49.95 74.98 0.610
AMZN 18.99 31.33 0.473
BLK 15.36 23.94 0.467
ZETA 30.93 75.10 0.356
SOXL 31.49 110.26 0.248
GRAB 11.00 43.84 0.156
JBLU 6.55 70.29 0.034
UPS -9.26 30.35 -0.443
NIO -27.99 65.27 -0.493
NKE -23.52 37.71 -0.735

Correlation Structure

cor_mat <- cor(returns_port, use = "pairwise.complete.obs")
corrplot(cor_mat, method = "color", type = "upper", order = "hclust",
         col = colorRampPalette(c("#2166AC", "#F7F7F7", "#B2182B"))(200),
         tl.col = "black", tl.cex = 0.75, cl.cex = 0.8,
         addCoef.col = "black", number.cex = 0.5, mar = c(0, 0, 1, 0))
Pairwise correlation matrix ordered by hierarchical clustering.


Efficient Frontier

n_sims <- 50000
mc_results <- matrix(NA, nrow = n_sims, ncol = n_assets + 3)
colnames(mc_results) <- c(tickers, "Return", "Volatility", "Sharpe")

for (i in 1:n_sims) {
  w <- runif(n_assets); w <- w / sum(w)
  port_ret <- sum(w * mu)
  port_vol <- sqrt(t(w) %*% cov_mat %*% w)
  mc_results[i, ] <- c(w, port_ret, port_vol, (port_ret - risk_free) / port_vol)
}

mc_df <- as.data.frame(mc_results)
tangency <- mc_df[which.max(mc_df$Sharpe), ]
min_var  <- mc_df[which.min(mc_df$Volatility), ]

w_eq      <- rep(1/n_assets, n_assets)
eq_ret    <- sum(w_eq * mu)
eq_vol    <- as.numeric(sqrt(t(w_eq) %*% cov_mat %*% w_eq))
eq_sharpe <- (eq_ret - risk_free) / eq_vol
ggplot(mc_df, aes(x = Volatility * 100, y = Return * 100, color = Sharpe)) +
  geom_point(alpha = 0.12, size = 0.4) +
  scale_color_gradientn(colors = c("#2166AC", "#67A9CF", "#F7F7F7", "#EF8A62", "#B2182B"), name = "Sharpe\nRatio") +
  geom_point(aes(x = tangency$Volatility * 100, y = tangency$Return * 100), color = "red", size = 4.5, shape = 18) +
  annotate("text", x = tangency$Volatility * 100 + 1.5, y = tangency$Return * 100,
           label = "Max Sharpe", fontface = "bold", size = 3.5, color = "red") +
  geom_point(aes(x = min_var$Volatility * 100, y = min_var$Return * 100), color = "blue", size = 4.5, shape = 18) +
  annotate("text", x = min_var$Volatility * 100 + 1.5, y = min_var$Return * 100,
           label = "Min Variance", fontface = "bold", size = 3.5, color = "blue") +
  geom_point(aes(x = eq_vol * 100, y = eq_ret * 100), color = "darkgreen", size = 4.5, shape = 18) +
  annotate("text", x = eq_vol * 100 + 1.5, y = eq_ret * 100,
           label = "Equal Weight", fontface = "bold", size = 3.5, color = "darkgreen") +
  geom_point(data = data.frame(Vol = sigma * 100, Ret = mu * 100, Tk = tickers),
             aes(x = Vol, y = Ret), inherit.aes = FALSE, color = "black", size = 2.5, shape = 17) +
  geom_text(data = data.frame(Vol = sigma * 100, Ret = mu * 100, Tk = tickers),
            aes(x = Vol, y = Ret, label = Tk), inherit.aes = FALSE, size = 2.5, vjust = -1) +
  labs(title = "Efficient Frontier - Monte Carlo Simulation",
       x = "Annualized Volatility (%)", y = "Annualized Expected Return (%)") +
  theme_minimal(base_size = 13) +
  theme(plot.title = element_text(face = "bold"), panel.grid.minor = element_blank())
Efficient frontier from 50,000 Monte Carlo portfolios.


Optimal Portfolio Weights

tang_w_df <- data.frame(Ticker = tickers, `Weight (%)` = round(as.numeric(tangency[1:n_assets]) * 100, 2), check.names = FALSE)
kable(tang_w_df[order(-tang_w_df$`Weight (%)`), ], row.names = FALSE, caption = "Maximum Sharpe Ratio Portfolio Weights")
Maximum Sharpe Ratio Portfolio Weights
Ticker Weight (%)
JNJ 13.09
MP 12.32
PLTR 12.13
HOOD 11.44
NVDA 10.83
COHR 10.21
UPS 6.08
AMZN 5.22
GRAB 4.04
JBLU 3.17
UUUU 3.16
ZETA 3.16
NKE 2.11
NIO 1.52
BLK 1.24
SOXL 0.27
mv_w_df <- data.frame(Ticker = tickers, `Weight (%)` = round(as.numeric(min_var[1:n_assets]) * 100, 2), check.names = FALSE)
kable(mv_w_df[order(-mv_w_df$`Weight (%)`), ], row.names = FALSE, caption = "Minimum Variance Portfolio Weights")
Minimum Variance Portfolio Weights
Ticker Weight (%)
GRAB 17.03
BLK 15.04
JNJ 14.41
NKE 12.30
AMZN 8.30
COHR 5.01
MP 4.99
NIO 4.46
UUUU 4.26
JBLU 3.84
ZETA 3.31
UPS 2.65
NVDA 2.02
HOOD 1.22
PLTR 0.93
SOXL 0.23

Backtested Performance

tang_w <- as.numeric(tangency[1:n_assets])
mv_w   <- as.numeric(min_var[1:n_assets])

bt <- merge(
  Return.portfolio(returns_port, weights = tang_w, rebalance_on = "months"),
  Return.portfolio(returns_port, weights = mv_w, rebalance_on = "months"),
  Return.portfolio(returns_port, weights = w_eq, rebalance_on = "months"),
  returns_bm
)
colnames(bt) <- c("Max Sharpe", "Min Variance", "Equal Weight", "S&P 500")
bt <- na.omit(bt)
cum_ret <- cumprod(1 + bt) - 1
cum_df <- data.frame(Date = index(cum_ret), coredata(cum_ret))
cum_long <- pivot_longer(cum_df, cols = -Date, names_to = "Portfolio", values_to = "Return")
cum_long$Portfolio <- gsub("\\.", " ", cum_long$Portfolio)

ggplot(cum_long, aes(x = Date, y = Return * 100, color = Portfolio)) +
  geom_line(linewidth = 0.9) +
  scale_color_manual(values = c("Max Sharpe" = "#B2182B", "Min Variance" = "#2166AC",
                                "Equal Weight" = "#4DAF4A", "S&P 500" = "grey40")) +
  geom_hline(yintercept = 0, linetype = "dashed", color = "grey50") +
  labs(title = "Cumulative Returns - Portfolio Comparison", x = "", y = "Cumulative Return (%)") +
  theme_minimal(base_size = 13) +
  theme(plot.title = element_text(face = "bold"), panel.grid.minor = element_blank(), legend.position = "bottom")
Cumulative returns of optimized portfolios versus benchmarks.


perf_summary <- function(ret_series, name) {
  ann_ret <- as.numeric(Return.annualized(ret_series, scale = 252))
  ann_vol <- as.numeric(StdDev.annualized(ret_series, scale = 252))
  data.frame(Portfolio = name,
    `Ann. Return (%)` = round(ann_ret * 100, 2),
    `Ann. Volatility (%)` = round(ann_vol * 100, 2),
    `Sharpe Ratio` = round((ann_ret - risk_free) / ann_vol, 3),
    `Max Drawdown (%)` = round(as.numeric(maxDrawdown(ret_series)) * 100, 2),
    `Total Return (%)` = round((as.numeric(tail(cumprod(1 + ret_series), 1)) - 1) * 100, 2),
    check.names = FALSE)
}
perf_table <- rbind(perf_summary(bt[,1], "Max Sharpe"), perf_summary(bt[,2], "Min Variance"),
                    perf_summary(bt[,3], "Equal Weight"), perf_summary(bt[,4], "S&P 500"))
kable(perf_table, row.names = FALSE, caption = "Backtest Performance Summary")
Backtest Performance Summary
Portfolio Ann. Return (%) Ann. Volatility (%) Sharpe Ratio Max Drawdown (%) Total Return (%)
Max Sharpe 52.27 31.39 1.532 32.30 140.12
Min Variance 14.17 22.21 0.450 29.16 31.79
Equal Weight 26.61 31.48 0.713 37.42 63.50
S&P 500 18.74 16.08 0.905 19.21 43.01
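
As an extension of the summary table (not part of the original backtest), benchmark-relative statistics can be computed directly with PerformanceAnalytics; a sketch, using the bt object built above with the S&P 500 in column 4:

te <- as.numeric(TrackingError(bt[, 1:3], bt[, 4], scale = 252))
ir <- as.numeric(InformationRatio(bt[, 1:3], bt[, 4], scale = 252))
rel_stats <- rbind(`Tracking Error (ann.)` = round(te, 3),
                   `Information Ratio`     = round(ir, 3))
colnames(rel_stats) <- colnames(bt)[1:3]
kable(rel_stats, caption = "Benchmark-Relative Statistics vs. S&P 500")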

Risk Analysis

roll_vol <- merge(
  rollapply(bt[,1], 63, sd, fill = NA) * sqrt(252),
  rollapply(bt[,2], 63, sd, fill = NA) * sqrt(252),
  rollapply(bt[,3], 63, sd, fill = NA) * sqrt(252),
  rollapply(bt[,4], 63, sd, fill = NA) * sqrt(252))
colnames(roll_vol) <- c("Max Sharpe", "Min Variance", "Equal Weight", "S&P 500")
rv_df <- data.frame(Date = index(roll_vol), coredata(roll_vol))
rv_long <- pivot_longer(rv_df, cols = -Date, names_to = "Portfolio", values_to = "Volatility")
rv_long$Portfolio <- gsub("\\.", " ", rv_long$Portfolio)

ggplot(na.omit(rv_long), aes(x = Date, y = Volatility * 100, color = Portfolio)) +
  geom_line(linewidth = 0.7) +
  scale_color_manual(values = c("Max Sharpe" = "#B2182B", "Min Variance" = "#2166AC",
                                "Equal Weight" = "#4DAF4A", "S&P 500" = "grey40")) +
  labs(title = "63-Day Rolling Annualized Volatility", x = "", y = "Volatility (%)") +
  theme_minimal(base_size = 13) +
  theme(plot.title = element_text(face = "bold"), panel.grid.minor = element_blank(), legend.position = "bottom")
63-day rolling annualized volatility.


dd_all <- merge(Drawdowns(bt[,1]), Drawdowns(bt[,2]), Drawdowns(bt[,3]), Drawdowns(bt[,4]))
colnames(dd_all) <- c("Max Sharpe", "Min Variance", "Equal Weight", "S&P 500")
dd_df <- data.frame(Date = index(dd_all), coredata(dd_all))
dd_long <- pivot_longer(dd_df, cols = -Date, names_to = "Portfolio", values_to = "Drawdown")
dd_long$Portfolio <- gsub("\\.", " ", dd_long$Portfolio)

ggplot(dd_long, aes(x = Date, y = Drawdown * 100, color = Portfolio)) +
  geom_line(linewidth = 0.6) +
  scale_color_manual(values = c("Max Sharpe" = "#B2182B", "Min Variance" = "#2166AC",
                                "Equal Weight" = "#4DAF4A", "S&P 500" = "grey40")) +
  labs(title = "Portfolio Drawdowns", x = "", y = "Drawdown (%)") +
  theme_minimal(base_size = 13) +
  theme(plot.title = element_text(face = "bold"), panel.grid.minor = element_blank(), legend.position = "bottom")
Portfolio drawdowns over the sample period.



Risk Analytics Dashboard

This dashboard builds a risk profile for each portfolio strategy, modeled after institutional risk reporting standards.

library(moments)  # for skewness and kurtosis

compute_risk_dashboard <- function(ret_series, name, portfolio_value = 10000) {
  r <- as.numeric(na.omit(ret_series))
  n_days <- length(r)
  
  # Returns
  ann_ret      <- (prod(1 + r))^(252 / n_days) - 1
  ann_vol      <- sd(r) * sqrt(252)
  sharpe       <- (ann_ret - risk_free) / ann_vol
  downside_dev <- sqrt(mean(pmin(r, 0)^2)) * sqrt(252)
  sortino      <- (ann_ret - risk_free) / downside_dev
  
  # Drawdown
  cum     <- cumprod(1 + r)
  peak    <- cummax(cum)
  dd      <- (cum - peak) / peak
  max_dd  <- min(dd)
  
  # Beta to SPY
  common  <- merge(ret_series, returns_bm)
  common  <- na.omit(common)
  beta    <- cov(as.numeric(common[,1]), as.numeric(common[,2])) / var(as.numeric(common[,2]))
  
  # VaR and CVaR
  var_95  <- quantile(r, 0.05)
  var_99  <- quantile(r, 0.01)
  cvar_95 <- mean(r[r <= var_95])
  
  # Distribution
  skew <- skewness(r)
  kurt <- kurtosis(r)
  
  # Win rate
  win_rate <- mean(r > 0)
  
  data.frame(
    Metric = c("Annualized Return", "Annualized Volatility", "Sharpe Ratio",
               "Sortino Ratio", "Maximum Drawdown", "Beta to SPY",
               "VaR 95% (Daily)", "VaR 99% (Daily)",
               paste0("VaR 95% ($", formatC(portfolio_value, format = "d", big.mark = ","), " portfolio)"),
               "CVaR 95% (Daily)",
               "Trading Days", "Win Rate", "Best Day", "Worst Day",
               "Avg Daily Return", "Skewness", "Kurtosis"),
    Value = c(
      paste0(ifelse(ann_ret >= 0, "+", ""), sprintf("%.2f%%", ann_ret * 100)),
      sprintf("%.2f%%", ann_vol * 100),
      sprintf("%.2f", sharpe),
      sprintf("%.2f", sortino),
      sprintf("%.2f%%", max_dd * 100),
      sprintf("%.2f", beta),
      sprintf("%.2f%%", var_95 * 100),
      sprintf("%.2f%%", var_99 * 100),
      paste0("-$", formatC(abs(var_95) * portfolio_value, format = "f", digits = 2, big.mark = ",")),
      sprintf("%.2f%%", cvar_95 * 100),
      as.character(n_days),
      sprintf("%.2f%%", win_rate * 100),
      paste0("+", sprintf("%.2f%%", max(r) * 100)),
      sprintf("%.2f%%", min(r) * 100),
      sprintf("%.2f%%", mean(r) * 100),
      sprintf("%.2f", skew),
      sprintf("%.2f", kurt)
    ),
    Description = c(
      "Geometric annualized return",
      "Standard deviation of returns x sqrt(252)",
      paste0("(Return - ", risk_free * 100, "% RFR) / Volatility"),
      "Return / Downside deviation",
      "Largest peak-to-trough decline",
      "Covariance with SPY / Variance of SPY",
      "95% confidence daily loss threshold",
      "99% confidence daily loss threshold",
      "95% VaR in dollar terms",
      "Expected loss beyond VaR 95%",
      "Number of trading days in dataset",
      "Percentage of positive return days",
      "Best single-day return",
      "Worst single-day return",
      "Mean daily return",
      "Return distribution asymmetry",
      "Return distribution tail heaviness"
    ),
    stringsAsFactors = FALSE
  )
}

# Compute for each portfolio
risk_tang <- compute_risk_dashboard(bt[, "Max Sharpe"], "Max Sharpe")
risk_mv   <- compute_risk_dashboard(bt[, "Min Variance"], "Min Variance")
risk_eq   <- compute_risk_dashboard(bt[, "Equal Weight"], "Equal Weight")

kable(risk_tang, row.names = FALSE, caption = "Risk Analytics: Maximum Sharpe Ratio Portfolio")
Risk Analytics: Maximum Sharpe Ratio Portfolio
Metric Value Description
Annualized Return +52.27% Geometric annualized return
Annualized Volatility 31.39% Standard deviation of returns x sqrt(252)
Sharpe Ratio 1.53 (Return - 4.18% RFR) / Volatility
Sortino Ratio 2.27 Return / Downside deviation
Maximum Drawdown -32.30% Largest peak-to-trough decline
Beta to SPY 1.55 Covariance with SPY / Variance of SPY
VaR 95% (Daily) -2.92% 95% confidence daily loss threshold
VaR 99% (Daily) -5.23% 99% confidence daily loss threshold
VaR 95% ($10,000 portfolio) -$292.24 95% VaR in dollar terms
CVaR 95% (Daily) -4.49% Expected loss beyond VaR 95%
Trading Days 525 Number of trading days in dataset
Win Rate 58.10% Percentage of positive return days
Best Day +12.68% Best single-day return
Worst Day -7.90% Worst single-day return
Avg Daily Return 0.19% Mean daily return
Skewness 0.05 Return distribution asymmetry
Kurtosis 7.07 Return distribution tail heaviness
kable(risk_mv, row.names = FALSE, caption = "Risk Analytics: Minimum Variance Portfolio")
Risk Analytics: Minimum Variance Portfolio
Metric Value Description
Annualized Return +14.17% Geometric annualized return
Annualized Volatility 22.21% Standard deviation of returns x sqrt(252)
Sharpe Ratio 0.45 (Return - 4.18% RFR) / Volatility
Sortino Ratio 0.64 Return / Downside deviation
Maximum Drawdown -29.16% Largest peak-to-trough decline
Beta to SPY 1.11 Covariance with SPY / Variance of SPY
VaR 95% (Daily) -2.08% 95% confidence daily loss threshold
VaR 99% (Daily) -3.67% 99% confidence daily loss threshold
VaR 95% ($10,000 portfolio) -$207.57 95% VaR in dollar terms
CVaR 95% (Daily) -3.17% Expected loss beyond VaR 95%
Trading Days 525 Number of trading days in dataset
Win Rate 53.33% Percentage of positive return days
Best Day +10.02% Best single-day return
Worst Day -8.18% Worst single-day return
Avg Daily Return 0.06% Mean daily return
Skewness -0.08 Return distribution asymmetry
Kurtosis 10.66 Return distribution tail heaviness
kable(risk_eq, row.names = FALSE, caption = "Risk Analytics: Equal Weight Portfolio")
Risk Analytics: Equal Weight Portfolio
Metric Value Description
Annualized Return +26.61% Geometric annualized return
Annualized Volatility 31.48% Standard deviation of returns x sqrt(252)
Sharpe Ratio 0.71 (Return - 4.18% RFR) / Volatility
Sortino Ratio 1.00 Return / Downside deviation
Maximum Drawdown -37.42% Largest peak-to-trough decline
Beta to SPY 1.65 Covariance with SPY / Variance of SPY
VaR 95% (Daily) -3.27% 95% confidence daily loss threshold
VaR 99% (Daily) -5.34% 99% confidence daily loss threshold
VaR 95% ($10,000 portfolio) -$326.76 95% VaR in dollar terms
CVaR 95% (Daily) -4.77% Expected loss beyond VaR 95%
Trading Days 525 Number of trading days in dataset
Win Rate 56.76% Percentage of positive return days
Best Day +13.02% Best single-day return
Worst Day -10.41% Worst single-day return
Avg Daily Return 0.11% Mean daily return
Skewness -0.21 Return distribution asymmetry
Kurtosis 8.10 Return distribution tail heaviness

Market Scenarios & Stress Impact

This section estimates how each portfolio would perform under various market-wide moves, then maps those scenarios to historical crisis events. This framework mirrors scenario analysis used in the stress testing of systemically important financial institutions.

# Portfolio beta to SPY for each strategy
beta_tang <- cov(as.numeric(na.omit(bt[,1])), as.numeric(na.omit(returns_bm[index(bt)]))) / 
             var(as.numeric(na.omit(returns_bm[index(bt)])))
beta_mv   <- cov(as.numeric(na.omit(bt[,2])), as.numeric(na.omit(returns_bm[index(bt)]))) / 
             var(as.numeric(na.omit(returns_bm[index(bt)])))
beta_eq   <- cov(as.numeric(na.omit(bt[,3])), as.numeric(na.omit(returns_bm[index(bt)]))) / 
             var(as.numeric(na.omit(returns_bm[index(bt)])))

starting_nav <- 10000  # hypothetical $10,000 portfolio

market_moves <- c(-0.20, -0.15, -0.10, -0.05, -0.03, 0.00, 0.03, 0.05, 0.10, 0.15, 0.20)
scenario_names <- c("Market Crash", "Severe Correction", "Correction", "Flash Crash",
                     "Minor Pullback", "Flat Market", "Minor Rally", "Moderate Rally",
                     "Rally", "Strong Rally", "Bull Run")

# Build scenario table for tangency portfolio (primary)
port_impact <- market_moves * beta_tang
resulting_nav <- starting_nav * (1 + port_impact)
pnl_impact <- resulting_nav - starting_nav

scenario_table <- data.frame(
  Scenario = scenario_names,
  `Market Move` = paste0(ifelse(market_moves >= 0, "+", ""), sprintf("%.1f%%", market_moves * 100)),
  `Portfolio Impact` = paste0(ifelse(port_impact >= 0, "+", ""), sprintf("%.2f%%", port_impact * 100)),
  `Resulting NAV` = paste0("$", formatC(resulting_nav, format = "f", digits = 2, big.mark = ",")),
  `P&L Impact` = paste0(ifelse(pnl_impact >= 0, "+$", "-$"), formatC(abs(pnl_impact), format = "f", digits = 2, big.mark = ",")),
  check.names = FALSE
)

kable(scenario_table, row.names = FALSE, 
      caption = paste0("Market Scenarios: Max Sharpe Portfolio (Beta = ", 
                       sprintf("%.2f", beta_tang), ", $10,000 starting NAV)"))
Market Scenarios: Max Sharpe Portfolio (Beta = 1.55, $10,000 starting NAV)
Scenario Market Move Portfolio Impact Resulting NAV P&L Impact
Market Crash -20.0% -30.98% $6,901.60 -$3,098.40
Severe Correction -15.0% -23.24% $7,676.20 -$2,323.80
Correction -10.0% -15.49% $8,450.80 -$1,549.20
Flash Crash -5.0% -7.75% $9,225.40 -$774.60
Minor Pullback -3.0% -4.65% $9,535.24 -$464.76
Flat Market +0.0% +0.00% $10,000.00 +$0.00
Minor Rally +3.0% +4.65% $10,464.76 +$464.76
Moderate Rally +5.0% +7.75% $10,774.60 +$774.60
Rally +10.0% +15.49% $11,549.20 +$1,549.20
Strong Rally +15.0% +23.24% $12,323.80 +$2,323.80
Bull Run +20.0% +30.98% $13,098.40 +$3,098.40

Historical Market Events Reference

Mapping historical crisis events to their equivalent portfolio impact, using each portfolio’s estimated market beta to project the outcome:

hist_events <- data.frame(
  Event = c("COVID Crash", "2022 Bear Market", "Flash Crash", "Brexit Vote",
            "Trade War Selloff", "SVB Banking Crisis", "2018 Vol Spike"),
  Date = c("Mar 2020", "2022", "Aug 2015", "Jun 2016", "Dec 2018", "Mar 2023", "Feb 2018"),
  `S&P 500 Decline` = c("-34.00%", "-25.00%", "-11.00%", "-6.00%", "-20.00%", "-8.00%", "-10.00%"),
  check.names = FALSE
)

sp500_declines <- c(-0.34, -0.25, -0.11, -0.06, -0.20, -0.08, -0.10)

hist_events$`Max Sharpe Impact` <- sprintf("%.2f%%", sp500_declines * beta_tang * 100)
hist_events$`Min Variance Impact` <- sprintf("%.2f%%", sp500_declines * beta_mv * 100)
hist_events$`Equal Weight Impact` <- sprintf("%.2f%%", sp500_declines * beta_eq * 100)

kable(hist_events, row.names = FALSE, 
      caption = "Historical Market Events: Estimated Portfolio Impact by Strategy")
Historical Market Events: Estimated Portfolio Impact by Strategy
Event Date S&P 500 Decline Max Sharpe Impact Min Variance Impact Equal Weight Impact
COVID Crash Mar 2020 -34.00% -52.67% -37.64% -56.07%
2022 Bear Market 2022 -25.00% -38.73% -27.68% -41.23%
Flash Crash Aug 2015 -11.00% -17.04% -12.18% -18.14%
Brexit Vote Jun 2016 -6.00% -9.30% -6.64% -9.89%
Trade War Selloff Dec 2018 -20.00% -30.98% -22.14% -32.98%
SVB Banking Crisis Mar 2023 -8.00% -12.39% -8.86% -13.19%
2018 Vol Spike Feb 2018 -10.00% -15.49% -11.07% -16.49%

The historical events table illustrates that during market stress, the minimum-variance portfolio consistently limits losses relative to the tangency portfolio. In a COVID-equivalent event, the min-variance strategy would have reduced losses by approximately 29% compared to the max-Sharpe allocation. The table also shows how market shocks are amplified by leverage and by highly concentrated portfolios.
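
The roughly 29% figure follows directly from the two market betas; a quick check using the betas estimated above (values in the comments are approximate):

# COVID-equivalent shock: -34% market move scaled by each portfolio's beta
covid_move <- -0.34
loss_tang  <- covid_move * beta_tang   # ~ -52.7%
loss_mv    <- covid_move * beta_mv     # ~ -37.6%
round(1 - loss_mv / loss_tang, 3)      # ~ 0.29, i.e. the min-variance loss is ~29% smaller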

Comparative Scenario Summary

# Side-by-side scenario comparison across all three portfolios
comp_moves <- c(-0.20, -0.10, -0.05, 0.00, 0.05, 0.10, 0.20)
comp_names <- c("Market Crash (-20%)", "Correction (-10%)", "Flash Crash (-5%)",
                "Flat Market", "Rally (+5%)", "Strong Rally (+10%)", "Bull Run (+20%)")

comp_table <- data.frame(
  Scenario = comp_names,
  `Max Sharpe` = sprintf("%+.2f%%", comp_moves * beta_tang * 100),
  `Min Variance` = sprintf("%+.2f%%", comp_moves * beta_mv * 100),
  `Equal Weight` = sprintf("%+.2f%%", comp_moves * beta_eq * 100),
  `S&P 500` = sprintf("%+.2f%%", comp_moves * 100),
  check.names = FALSE
)

kable(comp_table, row.names = FALSE,
      caption = "Market Scenario Impact: All Portfolios vs. S&P 500 Benchmark")
Market Scenario Impact: All Portfolios vs. S&P 500 Benchmark
Scenario Max Sharpe Min Variance Equal Weight S&P 500
Market Crash (-20%) -30.98% -22.14% -32.98% -20.00%
Correction (-10%) -15.49% -11.07% -16.49% -10.00%
Flash Crash (-5%) -7.75% -5.54% -8.25% -5.00%
Flat Market +0.00% +0.00% +0.00% +0.00%
Rally (+5%) +7.75% +5.54% +8.25% +5.00%
Strong Rally (+10%) +15.49% +11.07% +16.49% +10.00%
Bull Run (+20%) +30.98% +22.14% +32.98% +20.00%

Part II: Macro Factor & Monetary Policy Analysis

This section extends the classical Markowitz framework by examining how monetary policy and macroeconomic conditions affect portfolio returns. The approach is based on the monetary transmission mechanism: specifically, how changes in interest rates, inflation expectations, and financial conditions affect equity markets.

Macro Factor Proxies

We use four ETFs as tradeable proxies for key macro variables:

ETF Proxy For Monetary Policy Relevance
SHY (1-3Y Treasury) Short-term interest rates Direct Fed policy transmission
TLT (20+Y Treasury) Long-duration rates / term premium Expectations channel, QE effects
TIP (TIPS ETF) Inflation expectations Inflation targeting, real rates
UUP (US Dollar Index) USD strength International spillovers, trade
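
Before estimating the factor model below, it is worth checking how correlated these proxies are with one another and with the market factor, since collinear regressors make the individual betas less stable; a quick sketch using the return series built earlier:

factor_check <- merge(returns_bm, macro_returns)   # SPY plus the four macro proxies
colnames(factor_check)[1] <- "SPY"
round(cor(coredata(na.omit(factor_check))), 2)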

Multi-Factor Regression

We estimate a multi-factor model for each asset to identify the drivers of performance:

\[R_{i,t} = \alpha_i + \beta_{i,\text{mkt}} R_{\text{SPY},t} + \beta_{i,\text{SHY}} R_{\text{SHY},t} + \beta_{i,\text{TLT}} R_{\text{TLT},t} + \beta_{i,\text{TIP}} R_{\text{TIP},t} + \beta_{i,\text{UUP}} R_{\text{UUP},t} + \varepsilon_{i,t}\]

The regression model decomposes asset returns into five systematic risk dimensions: equity market beta (SPY), short-term rate sensitivity (SHY), duration risk (TLT), inflation expectations (TIP), and currency fluctuations (UUP). By isolating these betas, we can identify how an asset is likely to perform during specific macroeconomic shifts, such as rising interest rates or a strengthening dollar.

The model is an application of Arbitrage Pricing Theory (APT), which holds that an asset’s expected return is a linear function of its exposures (betas) to a set of systematic risk factors.

common_dates <- index(returns_port)[index(returns_port) %in% index(macro_returns)]
rp <- returns_port[common_dates, ]
mr <- macro_returns[common_dates, ]
rb <- returns_bm[common_dates, ]

rb_df <- data.frame(Date = index(rb), SPY = as.numeric(rb))
mr_df <- data.frame(Date = index(mr), coredata(mr))
merged_df <- merge(rb_df, mr_df, by = "Date")
factors <- xts(merged_df[, -1], order.by = merged_df$Date)
colnames(factors) <- c("SPY", colnames(mr))

factor_labels <- c("Market (SPY)", "Short Rate (SHY)", "Duration (TLT)",
                    "Inflation (TIP)", "USD (UUP)")

beta_matrix <- matrix(NA, nrow = n_assets, ncol = 6)
rownames(beta_matrix) <- tickers
colnames(beta_matrix) <- c(factor_labels, "R_squared")

for (i in 1:n_assets) {
  reg_data <- merge(rp[, i], factors)
  reg_data <- na.omit(reg_data)
  colnames(reg_data)[1] <- "Y"
  fit <- lm(Y ~ SPY + SHY + TLT + TIP + UUP, data = as.data.frame(reg_data))
  beta_matrix[i, ] <- c(coef(fit)[-1], summary(fit)$r.squared)
}

beta_display <- data.frame(
  Ticker = tickers,
  `Market` = round(beta_matrix[,1], 3),
  `Short Rate` = round(beta_matrix[,2], 3),
  `Duration` = round(beta_matrix[,3], 3),
  `Inflation` = round(beta_matrix[,4], 3),
  `USD` = round(beta_matrix[,5], 3),
  `R-sq` = paste0(round(beta_matrix[,6] * 100, 1), "%"),
  check.names = FALSE
)
kable(beta_display, row.names = FALSE, caption = "Multi-Factor Regression: Asset-Level Macro Betas")
Multi-Factor Regression: Asset-Level Macro Betas
Ticker Market Short Rate Duration Inflation USD R-sq
AMZN 1.417 -1.432 -0.089 -0.261 0.306 55.4%
HOOD 2.554 -4.865 -0.676 2.018 -0.768 38.6%
UUUU 1.161 -4.945 -0.891 3.065 -1.212 8.5%
COHR 2.531 -1.329 0.204 -1.042 0.103 37.8%
PLTR 2.165 0.989 -0.560 -0.202 -0.682 29.3%
NVDA 2.145 -0.275 0.043 -1.625 -0.018 46.8%
BLK 1.041 0.394 -0.012 0.277 -0.113 50.1%
ZETA 1.912 -5.752 0.080 1.719 -0.720 19.1%
GRAB 1.179 -0.065 -0.350 -0.022 -0.550 19%
SOXL 5.607 -4.291 -0.472 0.007 -0.518 67.4%
NKE 0.914 -0.030 0.727 -1.427 0.194 17.2%
JBLU 1.530 -2.065 0.369 -1.740 0.324 13.2%
NIO 0.949 1.317 0.181 0.017 -1.049 7.5%
UPS 0.718 0.477 0.087 -0.294 -0.265 14.6%
MP 1.208 2.345 -0.382 0.670 -0.725 7.2%
JNJ -0.001 0.812 -0.065 0.820 -0.198 7.1%
beta_df <- as.data.frame(beta_matrix[, 1:5])
colnames(beta_df) <- c("Market\n(SPY)", "Short Rate\n(SHY)", "Long Rate\n(TLT)",
                        "Inflation\n(TIP)", "USD\n(UUP)")
beta_df$Ticker <- rownames(beta_df)
beta_long <- pivot_longer(beta_df, cols = -Ticker, names_to = "Factor", values_to = "Beta")

ggplot(beta_long, aes(x = Factor, y = Ticker, fill = Beta)) +
  geom_tile(color = "white", linewidth = 0.5) +
  geom_text(aes(label = sprintf("%+.2f", Beta)), size = 3) +
  scale_fill_gradient2(low = "#2166AC", mid = "#F7F7F7", high = "#B2182B", midpoint = 0, name = "Beta") +
  labs(title = "Macro Factor Sensitivity - Asset-Level Betas",
       subtitle = "Multi-factor regression coefficients", x = "", y = "") +
  theme_minimal(base_size = 12) +
  theme(plot.title = element_text(face = "bold"), panel.grid = element_blank())
Macro factor sensitivity heatmap. Red indicates positive exposure; blue indicates negative.


The heatmap reveals substantial cross-sectional variation in macro sensitivity. Growth names carry higher market betas and greater rate sensitivity, while more traditional defensive names show lower factor loadings across the board. International names also exhibit greater sensitivity to the U.S. dollar.
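
As a quick check of the dollar claim, the assets can be ranked by their UUP beta, where the most negative loadings mark the names most hurt by a stronger dollar; a minimal sketch using the beta_matrix estimated above:

usd_beta <- sort(beta_matrix[, "USD (UUP)"])
round(usd_beta, 3)   # e.g. UUUU and NIO sit at the negative end, AMZN and JBLU at the positive end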

Portfolio-Level Factor Exposure

compute_port_betas <- function(w, bm) { colSums(w * bm[, 1:5]) }

tang_betas <- compute_port_betas(tang_w, beta_matrix)
mv_betas   <- compute_port_betas(mv_w, beta_matrix)
eq_betas   <- compute_port_betas(w_eq, beta_matrix)

port_beta_df <- data.frame(
  Portfolio = c("Max Sharpe", "Min Variance", "Equal Weight"),
  Market = round(c(tang_betas[1], mv_betas[1], eq_betas[1]), 3),
  `Short Rate` = round(c(tang_betas[2], mv_betas[2], eq_betas[2]), 3),
  Duration = round(c(tang_betas[3], mv_betas[3], eq_betas[3]), 3),
  Inflation = round(c(tang_betas[4], mv_betas[4], eq_betas[4]), 3),
  USD = round(c(tang_betas[5], mv_betas[5], eq_betas[5]), 3),
  check.names = FALSE
)
kable(port_beta_df, row.names = FALSE, caption = "Portfolio-Level Factor Exposures")
Portfolio-Level Factor Exposures
Portfolio Market Short Rate Duration Inflation USD
Max Sharpe 1.567 -0.646 -0.186 0.151 -0.365
Min Variance 1.115 -0.382 -0.022 0.043 -0.255
Equal Weight 1.689 -1.170 -0.113 0.124 -0.368

Monetary Policy Scenario Analysis

We define six scenarios reflecting plausible paths for Federal Reserve policy over the next 12 months. Each scenario specifies an expected rate path, inflation and GDP growth estimates, and implied returns on our macro factor proxies.

scenarios <- data.frame(
  Scenario = c("Baseline: Soft Landing", "Aggressive Easing (-200bp)",
               "Hawkish Hold / Re-Tightening", "Stagflation",
               "Recession + Emergency Cuts", "Goldilocks"),
  SPY  = c(0.10, 0.06, 0.02, -0.08, -0.25, 0.15),
  SHY  = c(0.03, 0.06, 0.01,  0.02,  0.08, 0.04),
  TLT  = c(0.04, 0.15, -0.08, -0.05, 0.20, 0.08),
  TIP  = c(0.03, 0.04, -0.02,  0.08, 0.05, 0.02),
  UUP  = c(-0.02, -0.06, 0.04, 0.01, -0.08, -0.03),
  FFR_Path = c("4.0% -> 3.50%", "4.50% -> 2.50%", "4.50% -> 5.00%",
               "4.50% -> 4.00%", "4.50% -> 1.00%", "4.50% -> 3.00%"),
  Inflation = c("2.5%", "2.0%", "3.5%+", "4.0%+", "1.5%", "2.0%"),
  GDP = c("2.0%", "1.0%", "1.5%", "-0.5%", "-3.0%", "3.0%+"),
  stringsAsFactors = FALSE
)

scen_display <- scenarios[, c("Scenario", "FFR_Path", "Inflation", "GDP")]
colnames(scen_display) <- c("Scenario", "FFR Path", "Inflation", "GDP Growth")
kable(scen_display, row.names = FALSE, caption = "Monetary Policy Scenario Definitions")
Monetary Policy Scenario Definitions
Scenario FFR Path Inflation GDP Growth
Baseline: Soft Landing 4.0% -> 3.50% 2.5% 2.0%
Aggressive Easing (-200bp) 4.50% -> 2.50% 2.0% 1.0%
Hawkish Hold / Re-Tightening 4.50% -> 5.00% 3.5%+ 1.5%
Stagflation 4.50% -> 4.00% 4.0%+ -0.5%
Recession + Emergency Cuts 4.50% -> 1.00% 1.5% -3.0%
Goldilocks 4.50% -> 3.00% 2.0% 3.0%+

Expected Returns by Scenario

Using the portfolio-level factor betas, we project expected returns under each policy regime:

\[E[R_{\text{portfolio}} | \text{Scenario}_s] = \sum_{f} \beta_{\text{portfolio},f} \cdot E[R_{f} | \text{Scenario}_s]\]

scenario_results <- data.frame(Scenario = scenarios$Scenario, MaxSharpe = NA, MinVar = NA, EqualWeight = NA)

for (s in 1:nrow(scenarios)) {
  fr <- as.numeric(scenarios[s, c("SPY", "SHY", "TLT", "TIP", "UUP")])
  scenario_results$MaxSharpe[s]   <- sum(tang_betas * fr)
  scenario_results$MinVar[s]      <- sum(mv_betas * fr)
  scenario_results$EqualWeight[s] <- sum(eq_betas * fr)
}

scen_out <- data.frame(
  Scenario = scenarios$Scenario,
  `Max Sharpe (%)` = round(scenario_results$MaxSharpe * 100, 2),
  `Min Variance (%)` = round(scenario_results$MinVar * 100, 2),
  `Equal Weight (%)` = round(scenario_results$EqualWeight * 100, 2),
  check.names = FALSE
)
kable(scen_out, row.names = FALSE, caption = "Expected Portfolio Returns Under Each Scenario")
Expected Portfolio Returns Under Each Scenario
Scenario Max Sharpe (%) Min Variance (%) Equal Weight (%)
Baseline: Soft Landing 14.17 10.56 14.04
Aggressive Easing (-200bp) 5.53 5.77 4.13
Hawkish Hold / Re-Tightening 2.21 0.92 1.39
Stagflation -12.05 -9.48 -14.67
Recession + Emergency Cuts -44.39 -29.11 -50.28
Goldilocks 20.83 15.87 21.11
scen_long <- pivot_longer(scenario_results, cols = -Scenario, names_to = "Portfolio", values_to = "Expected_Return")
scen_long$Portfolio <- gsub("MaxSharpe", "Max Sharpe", scen_long$Portfolio)
scen_long$Portfolio <- gsub("MinVar", "Min Variance", scen_long$Portfolio)
scen_long$Portfolio <- gsub("EqualWeight", "Equal Weight", scen_long$Portfolio)
scen_long$Scenario <- factor(scen_long$Scenario, levels = rev(scenarios$Scenario))

ggplot(scen_long, aes(x = Expected_Return * 100, y = Scenario, fill = Portfolio)) +
  geom_bar(stat = "identity", position = position_dodge(width = 0.7), width = 0.6) +
  scale_fill_manual(values = c("Max Sharpe" = "#B2182B", "Min Variance" = "#2166AC", "Equal Weight" = "#4DAF4A")) +
  geom_vline(xintercept = 0, linetype = "dashed", color = "grey40") +
  labs(title = "Expected Portfolio Returns Under Monetary Policy Scenarios",
       subtitle = "Projected via multi-factor beta decomposition",
       x = "Expected Annual Return (%)", y = "") +
  theme_minimal(base_size = 12) +
  theme(plot.title = element_text(face = "bold"), panel.grid.minor = element_blank(), legend.position = "bottom")
Expected portfolio returns under six monetary policy scenarios.



Stress Testing (CCAR/DFAST-Inspired)

We estimate instantaneous portfolio impacts under severe but plausible macro shocks:

stress_shocks <- data.frame(
  Shock = c("FFR +100bp surprise (hawkish)", "FFR -100bp surprise (dovish)",
            "10Y yield +150bp (term premium)", "10Y yield -100bp (flight to safety)",
            "Breakeven inflation +200bp", "USD appreciation +10%",
            "USD depreciation -10%", "Equity market -20% (severe)",
            "Equity market -35% (2008-style)", "Combined: rates up + equities down"),
  SPY  = c(-0.05, 0.04, -0.08, 0.03, -0.06, -0.03, 0.03, -0.20, -0.35, -0.15),
  SHY  = c(-0.02, 0.02, -0.01, 0.01, -0.01, 0.005, -0.005, 0.01, 0.03, -0.01),
  TLT  = c(-0.06, 0.05, -0.12, 0.08, -0.04, 0.02, -0.02, 0.05, 0.10, -0.08),
  TIP  = c(-0.02, 0.02, -0.04, 0.03, 0.06, 0.01, -0.01, 0.02, 0.04, -0.02),
  UUP  = c(0.03, -0.03, 0.02, -0.01, 0.01, 0.10, -0.10, 0.03, 0.05, 0.05),
  stringsAsFactors = FALSE
)

stress_results <- data.frame(Shock = stress_shocks$Shock)
for (s in 1:nrow(stress_shocks)) {
  fs <- as.numeric(stress_shocks[s, c("SPY", "SHY", "TLT", "TIP", "UUP")])
  stress_results$`Max Sharpe`[s]   <- round(sum(tang_betas * fs) * 100, 2)
  stress_results$`Min Variance`[s] <- round(sum(mv_betas * fs) * 100, 2)
  stress_results$`Equal Weight`[s] <- round(sum(eq_betas * fs) * 100, 2)
}
kable(stress_results, row.names = FALSE, caption = "Stress Test: Estimated Portfolio Impact (%)")
Stress Test: Estimated Portfolio Impact (%)
Shock Max Sharpe Min Variance Equal Weight
FFR +100bp surprise (hawkish) -6.82 -5.53 -6.78
FFR -100bp surprise (dovish) 5.44 4.44 5.21
10Y yield +150bp (term premium) -10.99 -8.96 -12.22
10Y yield -100bp (flight to safety) 3.38 3.17 3.73
Breakeven inflation +200bp -7.47 -6.21 -8.14
USD appreciation +10% -8.90 -6.09 -9.44
USD depreciation -10% 8.90 6.09 9.44
Equity market -20% (severe) -33.71 -23.47 -36.38
Equity market -35% (2008-style) -59.87 -41.48 -65.11
Combined: rates up + equities down -23.50 -17.53 -25.35
stress_long <- pivot_longer(stress_results, cols = -Shock, names_to = "Portfolio", values_to = "Impact")
stress_long$Shock <- factor(stress_long$Shock, levels = rev(stress_shocks$Shock))

ggplot(stress_long, aes(x = Impact, y = Shock, fill = Portfolio)) +
  geom_bar(stat = "identity", position = position_dodge(width = 0.7), width = 0.6) +
  scale_fill_manual(values = c("Max Sharpe" = "#B2182B", "Min Variance" = "#2166AC", "Equal Weight" = "#4DAF4A")) +
  geom_vline(xintercept = 0, linetype = "dashed", color = "grey40") +
  labs(title = "Stress Testing: Portfolio Impact of Macro Shocks",
       subtitle = "CCAR/DFAST-inspired scenario analysis", x = "Estimated Impact (%)", y = "") +
  theme_minimal(base_size = 11) +
  theme(plot.title = element_text(face = "bold"), panel.grid.minor = element_blank(), legend.position = "bottom")
Estimated portfolio impact of instantaneous macro shocks.



Forward-Looking Capital Market Assumptions

We construct 12-month forward expected returns by assigning subjective probabilities to each monetary policy scenario, informed by the FOMC dot plot, Fed Funds futures, and the Summary of Economic Projections:

scenario_probs <- c(0.30, 0.20, 0.15, 0.5, 0.15, 0.5)

prob_df <- data.frame(
  Scenario = scenarios$Scenario,
  `Probability` = paste0(scenario_probs * 100, "%"),
  check.names = FALSE
)
kable(prob_df, row.names = FALSE, caption = "Scenario Probability Weights")
Scenario Probability Weights
Scenario Probability
Baseline: Soft Landing 30%
Aggressive Easing (-200bp) 20%
Hawkish Hold / Re-Tightening 15%
Stagflation 50%
Recession + Emergency Cuts 15%
Goldilocks 50%
pw_tang <- sum(scenario_probs * scenario_results$MaxSharpe) * 100
pw_mv   <- sum(scenario_probs * scenario_results$MinVar) * 100
pw_eq   <- sum(scenario_probs * scenario_results$EqualWeight) * 100
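
The probability-weighted forward returns computed above (pw_tang, pw_mv, pw_eq) are not displayed elsewhere; a small sketch to surface them, keeping in mind they inherit whatever scenario probabilities were specified:

pw_table <- data.frame(
  Portfolio = c("Max Sharpe", "Min Variance", "Equal Weight"),
  `Probability-Weighted Return (%)` = round(c(pw_tang, pw_mv, pw_eq), 2),
  check.names = FALSE
)
kable(pw_table, row.names = FALSE, caption = "Probability-Weighted 12-Month Expected Returns")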

Asset-Level Forward Returns

asset_fwd <- data.frame(Ticker = tickers, Historical = mu, Forward = NA, Fwd_Sharpe = NA)

for (i in 1:n_assets) {
  fwd_ret <- 0
  for (s in 1:nrow(scenarios)) {
    fr <- as.numeric(scenarios[s, c("SPY", "SHY", "TLT", "TIP", "UUP")])
    fwd_ret <- fwd_ret + scenario_probs[s] * sum(beta_matrix[i, 1:5] * fr)
  }
  asset_fwd$Forward[i] <- fwd_ret
  asset_fwd$Fwd_Sharpe[i] <- fwd_ret / sigma[i]
}

asset_fwd_display <- data.frame(
  Ticker = tickers,
  `Historical (%)` = round(asset_fwd$Historical * 100, 2),
  `Scenario-Adjusted (%)` = round(asset_fwd$Forward * 100, 2),
  `Forward Sharpe` = round(asset_fwd$Fwd_Sharpe, 3),
  check.names = FALSE
)
asset_fwd_display <- asset_fwd_display[order(-asset_fwd_display$`Forward Sharpe`), ]
kable(asset_fwd_display, row.names = FALSE, caption = "Historical vs. Forward-Looking Expected Returns")
Historical vs. Forward-Looking Expected Returns
Ticker Historical (%) Scenario-Adjusted (%) Forward Sharpe
JNJ 21.97 11.28 0.645
BLK 15.36 9.23 0.386
MP 50.71 24.65 0.325
NIO -27.99 17.58 0.269
PLTR 98.85 12.26 0.191
UPS -9.26 5.57 0.184
GRAB 11.00 3.68 0.084
NKE -23.52 -1.72 -0.046
SOXL 31.49 -5.57 -0.051
COHR 76.94 -4.09 -0.062
NVDA 61.09 -3.89 -0.078
UUUU 49.95 -7.61 -0.102
HOOD 85.00 -8.56 -0.125
ZETA 30.93 -13.63 -0.182
AMZN 18.99 -6.78 -0.216
JBLU 6.55 -17.60 -0.250

Forward-Looking Efficient Frontier

Using the scenario-adjusted expected returns in place of historical means, we re-run the Monte Carlo optimization to see how the efficient frontier shifts under current policy expectations:

mu_fwd <- asset_fwd$Forward

mc_fwd <- matrix(NA, nrow = n_sims, ncol = n_assets + 3)
for (i in 1:n_sims) {
  w <- runif(n_assets); w <- w / sum(w)
  pr <- sum(w * mu_fwd); pv <- sqrt(t(w) %*% cov_mat %*% w)
  mc_fwd[i, ] <- c(w, pr, pv, (pr - risk_free) / pv)
}
colnames(mc_fwd) <- c(tickers, "Return", "Volatility", "Sharpe")
mc_fwd_df <- as.data.frame(mc_fwd)

fwd_tangency <- mc_fwd_df[which.max(mc_fwd_df$Sharpe), ]

ggplot() +
  geom_point(data = mc_df, aes(x = Volatility * 100, y = Return * 100),
             alpha = 0.08, size = 0.3, color = "grey60") +
  geom_point(data = mc_fwd_df, aes(x = Volatility * 100, y = Return * 100),
             alpha = 0.08, size = 0.3, color = "#EF8A62") +
  geom_point(aes(x = tangency$Volatility * 100, y = tangency$Return * 100),
             color = "grey40", size = 4, shape = 18) +
  annotate("text", x = tangency$Volatility * 100 + 1.5, y = tangency$Return * 100,
           label = "Historical Tangency", fontface = "bold", size = 3, color = "grey40") +
  geom_point(aes(x = fwd_tangency$Volatility * 100, y = fwd_tangency$Return * 100),
             color = "#B2182B", size = 4, shape = 18) +
  annotate("text", x = fwd_tangency$Volatility * 100 + 1.5, y = fwd_tangency$Return * 100,
           label = "Forward Tangency", fontface = "bold", size = 3, color = "#B2182B") +
  labs(title = "Historical vs. Forward-Looking Efficient Frontier",
       subtitle = "Grey = historical returns | Orange = scenario-adjusted forward expectations",
       x = "Annualized Volatility (%)", y = "Expected Return (%)") +
  theme_minimal(base_size = 13) +
  theme(plot.title = element_text(face = "bold"), panel.grid.minor = element_blank())
Historical (grey) vs. forward-looking (orange) efficient frontiers. The shift reflects current monetary policy pricing.


Comparison: The forward-looking frontier shifts meaningfully from the historical one, reflecting the market’s current pricing of the Fed’s easing cycle. The forward portfolio reallocates toward assets with favorable rate sensitivity and away from those most vulnerable to a hawkish surprise.
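
To make that reallocation concrete, the forward tangency weights can be compared against the historical ones; a minimal sketch, using the tangency and fwd_tangency rows from the two Monte Carlo runs:

fwd_w <- as.numeric(fwd_tangency[1:n_assets])
shift_df <- data.frame(
  Ticker = tickers,
  `Historical Wt (%)` = round(tang_w * 100, 2),
  `Forward Wt (%)`    = round(fwd_w * 100, 2),
  `Shift (pp)`        = round((fwd_w - tang_w) * 100, 2),
  check.names = FALSE
)
kable(shift_df[order(-shift_df$`Shift (pp)`), ], row.names = FALSE,
      caption = "Tangency Weights: Forward-Looking vs. Historical")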



References

  • Adrian, T., & Shin, H. S. (2010). Financial Intermediaries and Monetary Economics. Federal Reserve Bank of New York Staff Reports, No. 398.
  • Borio, C., & Zhu, H. (2012). Capital Regulation, Risk-Taking and Monetary Policy. Journal of Financial Stability, 8(4), 236-251.
  • Markowitz, H. (1952). Portfolio Selection. The Journal of Finance, 7(1), 77-91.
  • Michaud, R. O. (1989). The Markowitz Optimization Enigma. Financial Analysts Journal, 45(1), 31-42.
  • Sharpe, W. F. (1966). Mutual Fund Performance. The Journal of Business, 39(1), 119-138.
  • Board of Governors of the Federal Reserve System. (2024). Comprehensive Capital Analysis and Review (CCAR) Assessment Framework.
  • Federal Open Market Committee. (2024). Summary of Economic Projections.
  • OpenAI ChatGPT (GPT-4). Large language model used for R script formatting and visualization styling.

Analysis conducted in R 4.5.2 using quantmod, PerformanceAnalytics, and ggplot2.