One of the keynote lectures at last week's R in Finance conference focused on parallel computing. It was an excellent lecture delivered by Professor Norman S. Matloff from UC Davis. The lecture focused on the challenges that parallel computing faces in time series analysis, which is recursive in nature. Nonetheless, it also stressed the power of R for parallel computing and how far the current libraries have advanced to make it practical. The lecture slides should be uploaded to the online program. In this vignette, I will illustrate the usage of the mclapply function from the parallel package, which I find super friendly to deploy.

To get started, I will take a look at the SPY ETF along with AAPL:

library(quantmod)
P1 <- get(getSymbols("SPY",from = "1990-01-01"))[,6]   # adjusted close prices for SPY
P2 <- get(getSymbols("AAPL",from = "1990-01-01"))[,6]  # adjusted close prices for AAPL
P <- merge(P1,P2)
R <- na.omit(P/lag(P)) - 1                             # daily simple returns
names(R) <- c("SPY","AAPL")

In particular, I will test the computation time needed to estimate AAPL’s beta with the SPY ETF. To do so, I create a function named beta.f that takes i as its main argument. The function randomly samples 50% of the data using a fixed seed i and computes the market beta for AAPL.

beta.f <- function(i) {
  set.seed(i)                                        # fix the seed for reproducibility
  R.i <- R[sample(1:nrow(R),floor(0.5*nrow(R)) ),]   # randomly sample 50% of the observations
  lm.i <- lm(AAPL~SPY,data = R.i)                    # regress AAPL returns on SPY returns
  beta.i <- summary(lm.i)$coefficients["SPY",1]      # slope coefficient = market beta
  return(beta.i)
}

I run the computation twice over a sequence of i integers - once using lapply and once using mclapply. The latter is called in the same fashion as the former, making it extremely easy to implement:

library(parallel)
N <- 10^2                                              # number of bootstrap replications
f1 <- function() mean(unlist(lapply(1:N, beta.f)))     # serial version
f2 <- function() mean(unlist(mclapply(1:N, beta.f)))   # parallel version
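Note that, unless told otherwise, mclapply uses two worker processes (the default of its mc.cores argument) and relies on forking, which is unavailable on Windows, where it effectively falls back to serial execution. As a minimal sketch (not part of the original benchmark), one could detect the available cores with detectCores and pass the count explicitly; the name f2.all below is hypothetical:

n.cores <- detectCores()   # number of logical cores reported by the system
# same parallel computation as f2, but spread across all detected cores
f2.all <- function() mean(unlist(mclapply(1:N, beta.f, mc.cores = n.cores)))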

In order to compare the computation time that each of f1 and f2 takes to run, I refer to the microbenchmark library to achieve a robust perspective. The main function from the library is microbenchmark, whose main arguments are the expressions we would like to evaluate - in our case, calls to f1 and f2. Additionally, we can set the times argument to determine how many times each expression should be run. This, hence, provides multiple perspectives on the computation time needed to run each function.

library(microbenchmark)
ds.time <- microbenchmark(Regular = f1(),Parallel = f2(),times = 100)
ds.time
Unit: milliseconds
     expr      min       lq     mean   median       uq      max neval
  Regular 644.6834 676.5123 707.6287 691.5217 724.9808 933.8577   100
 Parallel 386.6154 412.8983 436.8464 423.9566 444.7628 673.1075   100

We observe that, on average, the mclapply version runs significantly faster than the one based on lapply. Additionally, one can refer to the autoplot function from ggplot2 to display the distribution of run times for each function, by simply running the following command:

library(ggplot2)
autoplot(ds.time)

Summary

Overall, this vignette demonstrates the reduction in computation time from using parallel computing for a specific task. Nevertheless, readers are advised to read further on the topic in order to understand whether (and under what conditions) parallel computing improves performance. Check the following notes by Josh Errickson for further reading on the topic.
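As a quick, hedged illustration of that caveat, one could rerun the benchmark on a task that is too cheap to amortize the cost of forking the worker processes; the sqrt.f function below is a hypothetical example, not part of the vignette, and on such a trivial task the parallel version may well come out slower:

# a trivial per-iteration task: the overhead of spawning workers can easily
# exceed the cost of the computation itself
sqrt.f <- function(i) sqrt(i)
microbenchmark(Regular  = unlist(lapply(1:N, sqrt.f)),
               Parallel = unlist(mclapply(1:N, sqrt.f)),
               times = 10)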
