ADEC7406 Module 2 Discussion

Author

Fabian Yang

Published

February 4, 2026

Question 1 Answer: Benchmark models and accuracy metrics

Data

For this discussion I replaced the original monthly FRED series with the Industrial Production Index (INDPRO), also from FRED. The data are monthly, so they can be evaluated with the benchmark forecasting models and decomposed into trend and seasonal components.

library(fredr)
library(forecast)
library(ggplot2)
library(dplyr)
library(tidyr)
library(knitr)
library(kableExtra)
library(writexl)
# fredr requires a FRED API key; set it once per session with fredr_set_key("...")
indpro_raw <- fredr(
  series_id = "INDPRO",
  observation_start = as.Date("2000-01-01"),
  observation_end   = as.Date("2024-12-01")
)

df <- indpro_raw |>
  transmute(date, INDPRO = value) |>
  arrange(date)

y_ts <- ts(
  df$INDPRO,
  start = c(lubridate::year(min(df$date)), lubridate::month(min(df$date))),
  frequency = 12
)

I reserve the last 24 months as the test set.

h <- 24
train_ts <- head(y_ts, length(y_ts) - h)
test_ts  <- tail(y_ts, h)

Models (AVG, NAIVE, SNAIVE)

fc_avg    <- meanf(train_ts,  h = h, level = c(80, 95))
fc_naive  <- naive(train_ts,  h = h, level = c(80, 95))
fc_snaive <- snaive(train_ts, h = h, level = c(80, 95))

Plots: AVG, NAIVE, and SNAIVE forecasts

autoplot(fc_avg) +
  autolayer(test_ts, series = "Actual") +
  ggtitle("AVG (Mean) Forecast with 80% and 95% Prediction Intervals") +
  xlab("Time") + ylab("INDPRO")

autoplot(fc_naive) +
  autolayer(test_ts, series = "Actual") +
  ggtitle("NAIVE Forecast with 80% and 95% Prediction Intervals") +
  xlab("Time") + ylab("INDPRO")

autoplot(fc_snaive) +
  autolayer(test_ts, series = "Actual") +
  ggtitle("SNAIVE Forecast with 80% and 95% Prediction Intervals") +
  xlab("Time") + ylab("INDPRO")

Accuracy metric table (ME, MPE, RMSE, MAE, MAPE)

# Row 2 of accuracy() holds the test-set (out-of-sample) measures.
acc_avg    <- accuracy(fc_avg,    test_ts)[2, c("ME","MPE","RMSE","MAE","MAPE")]
acc_naive  <- accuracy(fc_naive,  test_ts)[2, c("ME","MPE","RMSE","MAE","MAPE")]
acc_snaive <- accuracy(fc_snaive, test_ts)[2, c("ME","MPE","RMSE","MAE","MAPE")]

acc <- rbind(
  AVG    = acc_avg,
  NAIVE  = acc_naive,
  SNAIVE = acc_snaive
) |>
  as.data.frame() |>
  tibble::rownames_to_column("Model")

kable(acc, digits = 3,
      caption = "Accuracy metrics (ME, MPE, RMSE, MAE, MAPE)") |>
  kable_styling(full_width = FALSE)

Accuracy metrics (ME, MPE, RMSE, MAE, MAPE)

Model       ME      MPE    RMSE     MAE    MAPE
AVG      3.655    3.636   3.695   3.655   3.636
NAIVE    0.669    0.663   0.862   0.778   0.773
SNAIVE  -0.545   -0.545   0.842   0.688   0.687

How these measures differ:

ME is the average forecast error in the original units and captures bias. MPE is the average percentage error and captures bias relative to the magnitude of the series. RMSE squares the errors before averaging, so large mistakes receive extra weight. MAE averages the absolute errors and is easy to interpret in the original units. MAPE scales the absolute errors by the actual values and is scale-free, but it can behave poorly when the actual values are near zero. On this test set, SNAIVE has the lowest RMSE, MAE, and MAPE, with NAIVE close behind, while AVG is far worse because the historical mean sits well below the recent level of the series; SNAIVE's negative ME indicates that it over-forecasts slightly on average.
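As a quick check on these definitions, the following sketch recomputes the five metrics by hand from the NAIVE test-set errors, using only objects already defined above; the values should reproduce the NAIVE row of the accuracy table.

# Minimal sketch of the formulas behind the metrics, applied to NAIVE.
e <- as.numeric(test_ts) - as.numeric(fc_naive$mean)   # forecast errors
p <- 100 * e / as.numeric(test_ts)                     # percentage errors
c(ME   = mean(e),
  MPE  = mean(p),
  RMSE = sqrt(mean(e^2)),
  MAE  = mean(abs(e)),
  MAPE = mean(abs(p)))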

Forecast table with point estimates and prediction intervals

# Collect point forecasts and 80%/95% interval bounds into a tidy table
# (uses the global horizon h defined above).
make_fc_tbl <- function(fc_obj, model_name, start_date) {
  tibble(
    .model = model_name,
    Month = seq.Date(from = as.Date(start_date), by = "month", length.out = h),
    point = as.numeric(fc_obj$mean),
    lo80  = as.numeric(fc_obj$lower[, "80%"]),
    hi80  = as.numeric(fc_obj$upper[, "80%"]),
    lo95  = as.numeric(fc_obj$lower[, "95%"]),
    hi95  = as.numeric(fc_obj$upper[, "95%"])
  )
}

test_start_date <- df$date[(length(df$date) - h + 1)]

fc_tbl <- bind_rows(
  make_fc_tbl(fc_avg,    "AVG",    test_start_date),
  make_fc_tbl(fc_naive,  "NAIVE",  test_start_date),
  make_fc_tbl(fc_snaive, "SNAIVE", test_start_date)
)

kable(fc_tbl, digits = 2, caption = "Forecast point estimates and 80%/95% prediction intervals") |>
  kable_styling(full_width = FALSE)
Forecast point estimates and 80%/95% prediction intervals
.model Month point lo80 hi80 lo95 hi95
AVG 2023-01-01 96.78 90.55 103.01 87.23 106.33
AVG 2023-02-01 96.78 90.55 103.01 87.23 106.33
AVG 2023-03-01 96.78 90.55 103.01 87.23 106.33
AVG 2023-04-01 96.78 90.55 103.01 87.23 106.33
AVG 2023-05-01 96.78 90.55 103.01 87.23 106.33
AVG 2023-06-01 96.78 90.55 103.01 87.23 106.33
AVG 2023-07-01 96.78 90.55 103.01 87.23 106.33
AVG 2023-08-01 96.78 90.55 103.01 87.23 106.33
AVG 2023-09-01 96.78 90.55 103.01 87.23 106.33
AVG 2023-10-01 96.78 90.55 103.01 87.23 106.33
AVG 2023-11-01 96.78 90.55 103.01 87.23 106.33
AVG 2023-12-01 96.78 90.55 103.01 87.23 106.33
AVG 2024-01-01 96.78 90.55 103.01 87.23 106.33
AVG 2024-02-01 96.78 90.55 103.01 87.23 106.33
AVG 2024-03-01 96.78 90.55 103.01 87.23 106.33
AVG 2024-04-01 96.78 90.55 103.01 87.23 106.33
AVG 2024-05-01 96.78 90.55 103.01 87.23 106.33
AVG 2024-06-01 96.78 90.55 103.01 87.23 106.33
AVG 2024-07-01 96.78 90.55 103.01 87.23 106.33
AVG 2024-08-01 96.78 90.55 103.01 87.23 106.33
AVG 2024-09-01 96.78 90.55 103.01 87.23 106.33
AVG 2024-10-01 96.78 90.55 103.01 87.23 106.33
AVG 2024-11-01 96.78 90.55 103.01 87.23 106.33
AVG 2024-12-01 96.78 90.55 103.01 87.23 106.33
NAIVE 2023-01-01 99.77 98.32 101.21 97.56 101.97
NAIVE 2023-02-01 99.77 97.73 101.81 96.64 102.89
NAIVE 2023-03-01 99.77 97.27 102.27 95.94 103.59
NAIVE 2023-04-01 99.77 96.88 102.65 95.35 104.18
NAIVE 2023-05-01 99.77 96.54 102.99 94.83 104.70
NAIVE 2023-06-01 99.77 96.23 103.30 94.36 105.17
NAIVE 2023-07-01 99.77 95.95 103.58 93.93 105.61
NAIVE 2023-08-01 99.77 95.68 103.85 93.52 106.01
NAIVE 2023-09-01 99.77 95.44 104.10 93.14 106.39
NAIVE 2023-10-01 99.77 95.20 104.33 92.79 106.75
NAIVE 2023-11-01 99.77 94.98 104.55 92.45 107.09
NAIVE 2023-12-01 99.77 94.77 104.77 92.12 107.41
NAIVE 2024-01-01 99.77 94.56 104.97 91.81 107.72
NAIVE 2024-02-01 99.77 94.37 105.17 91.51 108.03
NAIVE 2024-03-01 99.77 94.18 105.36 91.22 108.31
NAIVE 2024-04-01 99.77 93.99 105.54 90.94 108.60
NAIVE 2024-05-01 99.77 93.82 105.72 90.67 108.87
NAIVE 2024-06-01 99.77 93.64 105.89 90.40 109.13
NAIVE 2024-07-01 99.77 93.48 106.06 90.15 109.39
NAIVE 2024-08-01 99.77 93.31 106.22 89.90 109.64
NAIVE 2024-09-01 99.77 93.15 106.38 89.65 109.88
NAIVE 2024-10-01 99.77 93.00 106.54 89.41 110.12
NAIVE 2024-11-01 99.77 92.84 106.69 89.18 110.35
NAIVE 2024-12-01 99.77 92.70 106.84 88.95 110.58
SNAIVE 2023-01-01 100.19 94.48 105.89 91.46 108.91
SNAIVE 2023-02-01 100.81 95.10 106.51 92.09 109.53
SNAIVE 2023-03-01 101.39 95.69 107.09 92.67 110.11
SNAIVE 2023-04-01 101.44 95.74 107.14 92.72 110.16
SNAIVE 2023-05-01 101.33 95.63 107.04 92.61 110.05
SNAIVE 2023-06-01 101.02 95.32 106.72 92.30 109.74
SNAIVE 2023-07-01 101.22 95.52 106.93 92.50 109.94
SNAIVE 2023-08-01 101.10 95.39 106.80 92.38 109.82
SNAIVE 2023-09-01 101.29 95.59 106.99 92.57 110.01
SNAIVE 2023-10-01 101.25 95.55 106.96 92.53 109.97
SNAIVE 2023-11-01 100.96 95.25 106.66 92.23 109.68
SNAIVE 2023-12-01 99.77 94.06 105.47 91.05 108.49
SNAIVE 2024-01-01 100.19 92.12 108.25 87.85 112.52
SNAIVE 2024-02-01 100.81 92.74 108.87 88.47 113.14
SNAIVE 2024-03-01 101.39 93.33 109.45 89.06 113.72
SNAIVE 2024-04-01 101.44 93.38 109.50 89.11 113.77
SNAIVE 2024-05-01 101.33 93.27 109.40 89.00 113.67
SNAIVE 2024-06-01 101.02 92.95 109.08 88.68 113.35
SNAIVE 2024-07-01 101.22 93.16 109.29 88.89 113.56
SNAIVE 2024-08-01 101.10 93.03 109.16 88.76 113.43
SNAIVE 2024-09-01 101.29 93.23 109.36 88.96 113.62
SNAIVE 2024-10-01 101.25 93.19 109.32 88.92 113.59
SNAIVE 2024-11-01 100.96 92.89 109.02 88.62 113.29
SNAIVE 2024-12-01 99.77 91.70 107.83 87.43 112.10

For submission, I paste screenshots of the two tables above (accuracy metrics and forecast intervals) as images.

Excel recreation check

To recreate the accuracy metrics in Excel, I export the actual values and each model’s point forecasts. In Excel, I compute the error (actual - forecast) for each model and then apply the standard formulas for ME, MPE, RMSE, MAE, and MAPE. A screenshot comparing the Excel output with the R accuracy table should show the same values; an R cross-check of the same arithmetic appears after the export code below.

excel_out <- tibble(
  Month = seq.Date(from = as.Date(test_start_date), by = "month", length.out = h),
  actual = as.numeric(test_ts),
  AVG = as.numeric(fc_avg$mean),
  NAIVE = as.numeric(fc_naive$mean),
  SNAIVE = as.numeric(fc_snaive$mean)
)

write_xlsx(
  list(
    Data_for_Excel = excel_out,
    Accuracy_from_R = acc
  ),
  path = "excel_accuracy_check.xlsx"
)
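Before opening Excel, the same arithmetic can be cross-checked in R from the exported tibble. This is only a sketch that mirrors the spreadsheet formulas (error per model, then the five summary statistics); its output should agree with the accuracy table produced by accuracy().

# Sketch: recompute the metrics from excel_out the way the spreadsheet would.
manual_acc <- excel_out |>
  pivot_longer(c(AVG, NAIVE, SNAIVE), names_to = "Model", values_to = "fc") |>
  mutate(e = actual - fc, p = 100 * e / actual) |>
  group_by(Model) |>
  summarise(ME   = mean(e),
            MPE  = mean(p),
            RMSE = sqrt(mean(e^2)),
            MAE  = mean(abs(e)),
            MAPE = mean(abs(p)))
manual_acc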

Question 2 Answer: Additive vs multiplicative decomposition

Because INDPRO is observed monthly, it can be decomposed into trend, seasonal, and remainder components. I compare an additive decomposition (constant seasonal amplitude) with a multiplicative decomposition (seasonal amplitude that scales with the level of the series), implemented via a log transformation.

# Additive: components add in the original units.
add_stl <- stl(train_ts, s.window = "periodic")
# Multiplicative: STL on the log scale, so components multiply on the original scale.
mul_stl <- stl(log(train_ts), s.window = "periodic")

autoplot(add_stl) + ggtitle("Additive decomposition via STL")

autoplot(mul_stl) + ggtitle("Multiplicative decomposition via STL on log scale")

If the seasonal fluctuations grow as the level of the series rises, the multiplicative decomposition is more appropriate. The log-scale STL plot supports multiplicative seasonality when the seasonal component looks more stable over time after the transformation; if the seasonal amplitude is roughly constant on the original scale, the additive decomposition is adequate.
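One way to make this visual judgment numeric (a heuristic I am adding here, not part of the assignment) is the seasonal-strength measure 1 - Var(remainder) / Var(seasonal + remainder), applied to both STL fits above; the scale on which the strength is higher captures the seasonal pattern more cleanly.

# Heuristic sketch: seasonal strength on each scale, using the STL fits above.
# Higher values mean less of the seasonal pattern leaks into the remainder.
seasonal_strength <- function(fit) {
  seas <- fit$time.series[, "seasonal"]
  rem  <- fit$time.series[, "remainder"]
  max(0, 1 - var(rem) / var(seas + rem))
}
c(additive = seasonal_strength(add_stl),
  multiplicative_log = seasonal_strength(mul_stl))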

Decomposition can also be used to seasonally adjust the series, forecast the adjusted component, and then re-seasonalize the forecasts. In practice, for short-horizon monthly forecasting, SNAIVE already captures the repeating seasonal pattern well, so explicit decomposition is most useful when I want to interpret the trend and seasonal structure or when I plan to model the seasonally adjusted series with another method.
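A minimal sketch of that workflow, reusing the additive STL fit and the 24-month horizon from above (the naive method for the adjusted series is only an illustrative choice):

# Sketch: decomposition-based forecasting of the hold-out period.
sa    <- seasadj(add_stl)   # seasonally adjusted training series (trend + remainder)
fc_sa <- naive(sa, h = h)   # forecast of the adjusted component only

# forecast() applied to the STL fit forecasts the seasonally adjusted series
# (here with a naive method) and adds the seasonal component back automatically.
fc_dcmp <- forecast(add_stl, method = "naive", h = h)
accuracy(fc_dcmp, test_ts)[2, c("ME","MPE","RMSE","MAE","MAPE")]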