Intro to wired

Author

Giancarlo Vercellino

Published

February 6, 2026

“Everything is deeply intertwined” - Ted Nelson

“To see what is in front of one’s nose needs a constant struggle” - George Orwell

“Correlation does not imply causation, but it does waggle its eyebrows suggestively” - Randall Munroe

What you can do with wired

wired builds joint, multi-horizon probabilistic forecasts for multiple time series by first learning a calibrated per-series predictive mixture and then imposing adaptive cross-series dependence with a Gaussian or Student-t copula. You choose a transform (additive, multiplicative, or log_multiplicative), and wired estimates time-varying correlations (static, EWMA, rolling, or regime-switching), stabilizes them with shrinkage and PD repair, and simulates coherent scenarios.

The wired package works this way:

  • Data preparation. Before fitting, wired converts your data frame ts_set (with N rows) to the horizon-aligned, transformed scale and computes the effective length L = ⌊(N - future)/future⌋. If L ≤ 0, the requested future is impossible for the available history and the function errors. With backtesting (n_testing > 0), wired also requires each expanding split to have at least min_train = 20 rows; it enforces min_win = ⌊L/n_testing⌋ ≥ min_train. If that check fails, the error message reports the current L, min_win, and min_train, and suggests concrete bounds: the maximum feasible future for your n_testing, and the maximum feasible n_testing for your future. If you set n_testing = 0, feasibility reduces to L ≥ min_train. What to do: decrease future, decrease n_testing, or provide more rows so that L (and thus min_win) clears min_train. For multi-horizon runs, size your data for the largest horizon you request. A minimal sketch of this feasibility arithmetic appears right after this list.
  • Per-horizon target construction. wired starts with multiple, equally spaced time series (ts_set) and a forecast horizon future = H. You choose a transform mode (additive, multiplicative, or log_multiplicative) which is used consistently for both the marginal models and the dependence layer. You also set options for the dependence metric (Kendall, Spearman, or Pearson), the time adaptation scheme (static, EWMA, rolling, or regime), shrinkage toward identity, and the copula family (Gaussian or t with t_df > 2). Basic checks ensure sufficient aligned history and positivity of the last level when multiplicative or log modes are used.
  • Probabilistic marginal mix. For every series at each horizon, wired fits a calibrated mixture. The mixer evaluates simple predictors (naive–PERT, ARIMA, EWMA-Gaussian, historical bootstrap, drift+residual bootstrap, vol-scaled naive, robust median+MAD with Laplace/Normal, and shrunk quantile) on expanding windows, computes CRPS against realized outcomes, and predicts the next CRPS with a robust trend. Those predicted CRPS values are turned into softmax weights, producing a stable mixture with consistent r/q/p/dfun. The engine returns marginals on the transformed scale and, via analytic mappings, on the original level scale. A sketch of the CRPS-to-weight step follows this list.
    • naive predictor: last horizon-aligned move wrapped in a PERT distribution (min/mode/max from recent moves); a fast, heavy-tailed option.

    • ARIMA predictor: forecast::auto.arima() on the horizon-aligned series; draws via simulate() or a normal approximation.

    • EWMA Gaussian predictor: exponential moving average for mean/variance, modeled as Gaussian; adapts quickly to level/vol shifts.

    • historical bootstrap predictor: resamples past horizon-aligned moves (optionally age-decayed) to form an empirical predictive.

    • drift residual bootstrap predictor: linear trend on horizon-aligned moves plus bootstrapped residuals; captures slow drift with robust noise.

    • volatility scaled naive predictor: centers at last move, scales by recent rolling SD; quick volatility-aware Gaussian.

    • robust median mad predictor: Laplace (default) or Normal with center equal to the median and scale equal to the MAD; resilient to outliers.

    • shrunk quantile predictor: quantile regression over time (a few taus) with interpolation to a full predictive; stable tails.

  • Adaptive dependence estimation. Using the same transform mode, wired forms an aligned matrix of historical moves and estimates cross-series correlation. Depending on your choice, it uses all rows (static), an exponentially weighted covariance (EWMA), the most recent window (rolling), or a regime approach that splits calm versus stress states by quantiles and blends them with a logistic weight. The raw estimate is then stabilized by shrinking toward the identity and repaired to be positive definite via eigenvalue flooring, yielding a well-behaved correlation matrix R; a sketch of this stabilization (together with copula sampling) follows this list.
  • Copula synthesis & joint sampling. With R fixed, wired samples dependent uniforms by drawing a multivariate normal (Gaussian copula) or a scale-mixture t (t-copula) and mapping through the appropriate CDF. Those uniforms are passed through each series’ mixture quantile to generate coherent multivariate draws. The engine exposes samplers on both scales: transformed draws for analysis in the modeling space, and level draws for direct interpretation and downstream use.
  • Multi-horizon wrapper & outputs. The top-level wired function orchestrates one engine per horizon, seeds them deterministically, and provides a convenience sampler that stacks level-scale draws into a 3-D array with dimensions draws × series × horizons. Alongside the array, you get per-horizon objects (marginals, R, samplers, and metadata) for detailed inspection. In practice this enables coherent scenario generation, aggregation and portfolio views, stress testing with t-copula tails, and fast integration into planning and simulation workflows. The final sketch below shows how to rebuild such an array by hand from the per-horizon samplers.
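
To make the feasibility arithmetic concrete, here is a minimal sketch of the checks described above. The helper check_feasibility and its messages are illustrative, not wired’s internal API; only the formulas and the min_train = 20 default come from the description.

check_feasibility <- function(N, future, n_testing, min_train = 20) {
  L <- floor((N - future) / future)        # effective horizon-aligned length
  if (L <= 0)
    stop("future is too large for the available history (L <= 0)")
  if (n_testing > 0) {
    min_win <- floor(L / n_testing)        # rows per expanding backtest split
    if (min_win < min_train)
      stop(sprintf("min_win = %d < min_train = %d: lower future or n_testing",
                   min_win, min_train))
  } else if (L < min_train) {
    stop(sprintf("L = %d < min_train = %d: provide more rows", L, min_train))
  }
  invisible(L)
}

check_feasibility(N = 1000, future = 10, n_testing = 3)  # passes: L = 99, min_win = 33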
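
The CRPS-to-weight step can be pictured in three lines: lower predicted CRPS should mean higher weight, so negate the scores and pass them through a softmax. The numbers below are made up for illustration, and wired’s internal scaling may differ.

crps_pred <- c(naive = 0.42, arima = 0.35, ewma = 0.38, boot = 0.50)  # toy predicted CRPS
scores <- -(crps_pred - min(crps_pred))  # negate; centering aids numerical stability
w <- exp(scores) / sum(exp(scores))      # softmax: weights are positive and sum to 1
round(w, 3)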
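
For intuition, here is an illustrative version of the stabilization and sampling steps, not wired’s internal code: shrink a raw correlation matrix toward the identity, floor its eigenvalues to repair positive definiteness, then map multivariate-normal draws through pnorm() to get the dependent uniforms that feed the marginal quantile functions (MASS::mvrnorm is used here for brevity).

library(MASS)  # for mvrnorm()

R_raw <- cor(matrix(rnorm(200), 50, 4), method = "spearman")  # toy raw estimate
alpha <- 0.05
R <- (1 - alpha) * R_raw + alpha * diag(4)     # shrink toward identity

eig <- eigen(R, symmetric = TRUE)              # eigenvalue flooring = PD repair
R <- eig$vectors %*% diag(pmax(eig$values, 1e-8)) %*% t(eig$vectors)
R <- cov2cor(R)                                # renormalize to unit diagonal

Z <- mvrnorm(1000, mu = rep(0, 4), Sigma = R)  # Gaussian-copula normal draws
U <- pnorm(Z)                                  # dependent uniforms in (0, 1)
# each column U[, j] would then go through series j's mixture quantile function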
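
And once you have a fit (as in the demo below), you can rebuild the stacked array yourself from the per-horizon samplers; a small sketch, assuming each rfun_level() returns a draws × series matrix:

draws <- 100
arr <- simplify2array(lapply(fit$res_by_h, function(e) e$rfun_level(draws)))
dim(arr)   # draws x series x horizons, e.g. 100 x 4 x 10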

The process flow in wired

Eight simple predictors walk into a bar …

In this mini demo we’ll fit wired on four synthetic series, let the marginal “mixologist” blend eight simple predictors, and then use a copula to make them socialize politely across horizons. Short, fast, and no hangovers.

set.seed(42)
n <- 1000
A <- cumsum(rnorm(n, sd = 1.0)) + 100
B <- cumsum(rnorm(n, sd = 2)) +  95
C <- cumsum(rnorm(n, sd = 0.5)) + 155
D <- cumsum(rnorm(n, sd = 3)) + 255

ts_set <- data.frame(A = A, B = B, C = C, D = D)

1) Fit wired (aka “shake well and serve”)

We’ll use multiplicative mode (percent-ish changes), rolling Spearman for dependence, and a Gaussian copula. We keep the number of simulation draws small so your laptop fan doesn’t file a complaint.

library(wired)

fit <- wired(
  ts_set      = ts_set,
  future      = 10,                     # horizons h = 1, ..., 10
  mode        = "multiplicative",      # transform for marginals + dependence
  dep_metric  = "spearman",            # rank dependence
  corr_adapt  = "rolling",             # time-adaptive correlation
  roll_window = 50,
  copula      = "gaussian",
  shrink_alpha = 0.05,                 # mild shrinkage = fewer regrets
  n_testing   = 3,
  seed        = 123
)

names(fit$res_by_h)    # "h1" "h2" "h3" ... "h10"
 [1] "h1"  "h2"  "h3"  "h4"  "h5"  "h6"  "h7"  "h8"  "h9"  "h10"

2) Peek inside a horizon (h = 1)

Each horizon keeps its own engine: marginals on both scales, a correlation matrix R, and samplers. Think “tiny factory”—no safety goggles required.

h1 <- fit$res_by_h$h1
colnames(h1$rfun_level(3))   # series names on the level scale
[1] "A" "B" "C" "D"
h1$R                         # copula correlation (neatly PD & shrink-repaired)
            A          B          C           D
A  1.00000000 0.05516812 0.06032503 -0.26119590
B  0.05516812 1.00000000 0.01638536  0.06070701
C  0.06032503 0.01638536 1.00000000  0.34043312
D -0.26119590 0.06070701 0.34043312  1.00000000
h1$marginals_level$A$qfun(c(0.1, 0.5, 0.9))  # quick taste of A's level quantiles
[1] 72.36530 73.98207 75.60982

3) Coherent scenarios (because forecasts should get along)

Generate 200 joint scenarios at horizon 2 and compute medians for a quick vibes check.

Xh2 <- fit$res_by_h$h2$rfun_level(200)  # 200 x 4
apply(Xh2, 2, median)
        A         B         C         D 
 74.32777  85.25184 153.24836 189.15804 

Aggregate with toy weights (portfolio-ish) and check the 10/50/90% quantiles: a tiny risk dashboard that doesn’t ask for a meeting.

w <- c(A = 0.4, B = 0.35, C = 0.2, D = 0.05)
last <- as.numeric(ts_set[nrow(ts_set),])
expected_growth <- sweep(Xh2, 2, last, "/") - 1  # divide each row by its series' last level
agg_h2 <- as.numeric(expected_growth %*% w)
quantile(agg_h2, c(.1, .5, .9))

4) Rolling vs static dependence (two moods, one dataset)

Let’s refit with static correlation and compare the horizon-1 matrices. If they match exactly, buy a lottery ticket.

fit_static <- wired(
  ts_set      = ts_set,
  future      = 1,
  mode        = "multiplicative",
  dep_metric  = "spearman",
  corr_adapt  = "static",
  shrink_alpha = 0.05,
  n_testing   = 20,
  seed        = 123
)
R_roll <- fit$res_by_h$h1$R
R_stat <- fit_static$res_by_h$h1$R
round(R_roll, 3); round(R_stat, 3)
       A     B     C      D
A  1.000 0.055 0.060 -0.261
B  0.055 1.000 0.016  0.061
C  0.060 0.016 1.000  0.340
D -0.261 0.061 0.340  1.000
       A      B      C      D
A  1.000 -0.007 -0.016  0.006
B -0.007  1.000 -0.019  0.060
C -0.016 -0.019  1.000 -0.012
D  0.006  0.060 -0.012  1.000

5) Stress mode: bring in the t-copula

When markets get dramatic, the t-copula adds tail dependence, like switching from decaf to double espresso.

## Compare Gaussian vs t-copula at h = 1 on the same data/settings

# Fit both; keep it light
fit_g <- wired(
  ts_set, future = 1,
  mode = "multiplicative",
  dep_metric = "kendall",
  corr_adapt = "ewma", ewma_lambda = 0.2,
  copula = "gaussian",
  shrink_alpha = 0.05,
  n_testing = 20, seed = 123
)
fit_t <- wired(
  ts_set, future = 1,
  mode = "multiplicative",
  dep_metric = "kendall",
  corr_adapt = "ewma", ewma_lambda = 0.2,
  copula = "t", t_df = 7,
  shrink_alpha = 0.05,
  n_testing = 20, seed = 123
)

Xg <- fit_g$res_by_h$h1$rfun_level(500)  # draws x series
Xt <- fit_t$res_by_h$h1$rfun_level(500)

Do two series jump together more often? Let’s have a look.

coex_rate <- function(X, probs = 0.95) {
  # per-series upper quantile at the requested probability level
  q <- apply(X, 2, quantile, probs = probs)
  # all unordered pairs of series, then the fraction of draws
  # where both members of a pair exceed their own quantile
  pairs <- combn(ncol(X), 2)
  out <- apply(pairs, 2, function(ix) {
    mean(X[, ix[1]] > q[ix[1]] & X[, ix[2]] > q[ix[2]])
  })
  setNames(out, apply(pairs, 2, function(ix) paste0(colnames(X)[ix], collapse = "-")))
}

r95_g <- coex_rate(Xg, 0.95)
r95_t <- coex_rate(Xt, 0.95)
r99_g <- coex_rate(Xg, 0.99)
r99_t <- coex_rate(Xt, 0.99)

round(rbind(gauss_95 = r95_g, t_95 = r95_t, gauss_99 = r99_g, t_99 = r99_t), 4)
           A-B A-C A-D   B-C   B-D   C-D
gauss_95 0.000   0   0 0.000 0.004 0.012
t_95     0.002   0   0 0.006 0.010 0.006
gauss_99 0.000   0   0 0.000 0.000 0.002
t_99     0.000   0   0 0.000 0.004 0.002

6) Minimal plots (tiny but telling)

A few boxplots per horizon give fast visual reassurance that the uncertainty is, indeed, uncertain.

X_list <- lapply(1:3, function(h) fit$res_by_h[[paste0("h", h)]]$rfun_level(200))
yl     <- range(unlist(X_list), finite = TRUE)
yticks <- pretty(yl)

op <- par(no.readonly = TRUE)  # save graphics settings so we can restore them

layout(matrix(1:3, nrow = 1))
par(mar = c(3, 3, 2, 1), oma = c(0, 2, 0, 0))

for (h in 1:3) {
  bxdat <- as.data.frame(X_list[[h]])
  boxplot(bxdat, main = paste("Horizon", h), las = 2, ylim = yl, yaxt = "n")
  axis(2, at = yticks, labels = (h == 1))
}

par(op)  # restore the previous graphics settings

Conclusion

wired is like a well-run jam session: eight simple predictors each bring their riff, the mixer keeps them in key, and the copula sets the groove so they play together; out come coherent scenarios you can actually use. It’s fast enough for everyday forecasting, flexible enough for stressy days (hello, t-tails), and tidy enough to drop straight into dashboards or risk reports. Mix, couple, sample, then take a bow and let the scenarios do the encore. (yeah, that’s a joke)

Enzoi!