1. Jump Threading

Idea

When several conditionals test the same boolean condition, merging their bodies avoids evaluating the condition more than once. Likewise, when a second if statement tests the exact negation of the first condition, it can be folded into an else branch of the first if.

Code Examples

Unoptimized Code

jump <- function(n){
  evens <- 0
  evens2 <- 0
  odds <- 0
  for (i in seq_len(n)) {
    if (i %% 2 == 0) { # same logical as next if condition (can be merged)
      evens <- evens + 1
    }
    if (i %% 2 == 0) {
      evens2 <- evens2 + 1
    }
    if (!(i %% 2 == 0)) { # exact negation as previous if (can be an else)
      odds <- odds + 1
    }
  }
}

Proposed Optimized Code

jump_opt <- function(n) {
  evens <- 0
  evens2 <- 0
  odds <- 0
  for (i in seq_len(n)) {
    if (i %% 2 == 0) { # merged
      evens <- evens + 1
      evens2 <- evens2 + 1
    } else { # converted to else
      odds <- odds + 1
    }
  }
}

Benchmark

library("ggplot2")
library("microbenchmark")
n <- 100000
autoplot(microbenchmark(jump(n), jump_opt(n)))

Drawbacks

To-Do

2. Inline Expansion

Idea

Inline expansion replaces a function call site with the body of the called function. This eliminates the overhead of the call and of returning from the function, and it also saves the cost of pushing and popping variables on the stack during the call.

Code Examples

Unoptimized Code

cubed <- function(x){
  x*x*x
}

inline <- function(n) {
  to_cubes <- 0
  for(i in seq_len(n)) {
    to_cubes <- to_cubes +  cubed(i)
  }
}

Proposed Optimized Code

inline_opt <- function(n) {
  to_cubes <- 0
  for (i in seq_len(n)) {
    to_cubes <- to_cubes + (i * i * i) # function inlined
  }
}

Benchmark

library("ggplot2")
library("microbenchmark")
n <- 1000
autoplot(microbenchmark(inline(n), inline_opt(n)))

Drawbacks

To-Do

3. Memory Pre-Allocation

Idea

As a general rule of thumb, we should take care of memory management in any programming language. When a vector is grown inside a loop, R has to request a larger block of memory and copy the existing elements across before the program can continue, and this can happen on every iteration. Pre-allocating the vector to its required length avoids these repeated allocations.

Code Examples

Unoptimized Code

pre_mem <- function(n) {
  vec <- NULL
  for (i in seq_len(n)) {
    vec[i] <- i
  }
}

Proposed Optimized Code

pre_mem_opt <- function(n) {
  vec <- vector(mode = "numeric", length = n)
  for (i in seq_len(n)) {
    vec[i] <- i
  }
}

pre_mem_opt_unknown_type <- function(n) {
  vec <- vector(length = n)
  for (i in seq_len(n)) {
    vec[i] <- i
  }  
}

Benchmark

library("ggplot2")
library("microbenchmark")
n <- 100000
autoplot(microbenchmark(pre_mem(n), pre_mem_opt(n), pre_mem_opt_unknown_type(n)))

Drawbacks

To-Do

4. Vectorization

Idea

A golden rule in R programming is to access the underlying C/Fortran routines as quickly as possible; the fewer function calls required to achieve this, the better. Many R functions are therefore vectorised, that is, their inputs and/or outputs naturally work with vectors, reducing the number of function calls required.

Code Examples

Unoptimized Code

non_vectorized <- function(n) {
  v1 <- seq_len(n)
  v2 <- length(seq.int(n+2, n+1000, 2)) # a single number
  res <- vector(length = length(v1))
  for (i in seq_len(n)) {
    res[i] <- v1[i] + v2 # element-wise addition in an explicit loop
  }
}

Proposed Optimized Code

vectorized <- function(n) {
  v1 <- seq_len(n)
  v2 <- length(seq.int(n+2, n+1000, 2)) # a single number
  res <- v1 + v2 # vectorised addition; v2 is recycled
}

Benchmark

library("ggplot2")
library("microbenchmark")
n <- 10000
autoplot(microbenchmark(non_vectorized(n), vectorized(n)))

Drawbacks

To-Do

5. Caching

Idea

The idea of caching is similar to dynamic programming: the results of repeated sub-problems of a recursion are stored (cached) and simply looked up each time instead of being recomputed.

Code Examples

Unoptimized Code

library("memoise")

fibR <- function(x) {
  if (x == 0) return(0)
  if (x == 1) return(1)
  Recall(x - 1) + Recall(x - 2)
}

Proposed Optimized Code

mem_fibR <- memoise::memoise(fibR)
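
Note that fibR recurses via Recall(), which always refers to fibR itself, so mem_fibR only caches the result of the outermost call; the recursive sub-calls are not memoised. Below is a minimal sketch of a variant (the name mem_fib_full is hypothetical, and it assumes the memoise package) in which the recursion goes through the memoised wrapper, so every sub-problem is cached:

library("memoise")

# Hypothetical variant: the recursion calls the memoised wrapper itself,
# so fib(0) .. fib(x) are all cached, not just the top-level result.
mem_fib_full <- memoise::memoise(function(x) {
  if (x == 0) return(0)
  if (x == 1) return(1)
  mem_fib_full(x - 1) + mem_fib_full(x - 2)
})

mem_fib_full(20) # 6765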

Benchmark

library("ggplot2")
library("microbenchmark")
n <- 20
autoplot(microbenchmark(fibR(n), mem_fibR(n)))

Drawbacks

To-Do

6. Reversing elements

Idea

Instead of nesting a function inside another function, it is generally faster to use a single more specific function. The rev() function provides a reversed version of its argument. If you wish to sort in decreasing order, sort(x, decreasing = TRUE) is around 10% faster than rev(sort(x)).

Code Examples

Unoptimized Code

reverse_fun <- function(n) {
  vec <- sample(seq_len(100000), n)
  rev(sort(vec))
}

Proposed Optimized Code

reverse_fun_opt <- function(n) {
  vec <- sample(seq_len(100000), n)
  sort(vec, decreasing = TRUE)
}

Benchmark

library("ggplot2")
library("microbenchmark")
n <- 1000
autoplot(microbenchmark(reverse_fun(n), reverse_fun_opt(n), times = 10))

Drawbacks

To-Do

7. Efficient Sorting

Idea

There are currently three sorting algorithms, c("shell", "quick", "radix"), that can be specified in the sort() function, with radix being a new addition in R 3.3. Typically radix (a non-default option) is the most computationally efficient option for most situations; it is around 20% faster when sorting a large vector of doubles.

Code Examples

Unoptimized Code

sort_fun <- function(n) {
  vec <- sample(seq_len(1000), n)
  sort(vec)
}

Proposed Optimized Code

sort_fun_opt <- function(n) {
  vec <- sample(seq_len(1000), n)
  sort(vec, method = "radix")
}

Benchmark

library("ggplot2")
library("microbenchmark")
n <- 10
autoplot(microbenchmark(sort_fun(n), sort_fun_opt(n)))

Drawbacks

To-Do

8. min/max using which function

Idea

Instead of nesting a function inside another function, it is generally faster to use a single more specific function. Below is a similar analogy for finding the positions of the minimum and maximum values in a vector.

Code Examples

Unoptimized Code

which_fun <- function(n) {
  vec <- sample(seq_len(100000), n)
  which(vec == min(vec))
  which(vec == max(vec))
}

Proposed Optimized Code

which_fun_opt <- function (n) {
  vec <- sample(seq_len(100000), n)
  which.min(vec)
  which.max(vec)
}

Benchmark

library("ggplot2")
library("microbenchmark")
n <- 1000
autoplot(microbenchmark(which_fun(n), which_fun_opt(n)))

Drawbacks

To-Do

9. Row and Column Operations

Idea

Instead of using the apply() family of functions on the rows and columns of a matrix/data frame for simple operations, it is often more efficient to use a single specialised function. Below is a similar analogy for calculating the row means and column sums of a matrix.

Code Examples

Unoptimized Code

rowCol_fun <- function () {
  mat <- matrix(c(1:81), nrow = 9, ncol = 9)
  apply(mat, 1, mean)
  apply(mat, 2, sum)
}

Proposed Optimized Code

rowCol_fun_opt <- function() {
  mat <- matrix(c(1:81), nrow = 9, ncol = 9)
  rowMeans(mat)
  colSums(mat)
}

For more row and column operations, look up the matrixStats package.
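
A brief sketch of the same pattern with that package (this assumes matrixStats is installed; rowMedians() is one of its specialised row/column helpers):

library("matrixStats")

mat <- matrix(c(1:81), nrow = 9, ncol = 9)
rowMedians(mat) # per-row medians without apply(mat, 1, median)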

Benchmark

library("ggplot2")
library("microbenchmark")
autoplot(microbenchmark(rowCol_fun(), rowCol_fun_opt()))

Drawbacks

To-Do

10. Detect NA values

Idea

Instead of nesting a function inside another function, it is generally faster to use a single more specific function. Below is a similar analogy for detecting NA values.

Code Examples

Unoptimized Code

detectNA <- function(n) {
  val <- sample(seq_len(100000), n)
  val <- append(val, NA, as.integer(n/2))
  any(is.na(val))
}

Proposed Optimized Code

detectNA_opt <- function(n) {
  val <- sample(seq_len(100000), n)
  val <- append(val, NA, as.integer(n/2))
  anyNA(val)  
}

Benchmark

library("ggplot2")
library("microbenchmark")
n <- 10000
autoplot(microbenchmark(detectNA(n), detectNA_opt(n)))

Drawbacks

To-Do

11. Matrix vs Data Frames (Lists)

Idea

This optimization is based on the idea that it takes more time to process a vector containing mixed data types than one with a homogeneous data type. Since a matrix holds a single, homogeneous data type while a data frame (a list of columns) can accommodate varying data types, lists/data frames should be converted to matrices whenever possible.

Code Examples

Unoptimized Code

df_fun <- function(){
  mtcars[4, ]
}

Proposed Optimized Code

mat_fun <- function(){
  mat <- matrix(1:352, nrow = 32, ncol = 11)
  mat[4, ]
}

Benchmark

library("ggplot2")
library("microbenchmark")
autoplot(microbenchmark(df_fun(), mat_fun()))

Drawbacks

To-Do

12. Use of Native Binary Formats for I/O

Idea

Using a data format that is native to R should intuitively result in faster reading/loading times. As an added benefit, files stored in native formats such as .rds are also quite heavily compressed, resulting in more efficient use of disk space.

Code Examples

Unoptimized Code

mtcars$cyl <- as.factor(mtcars$cyl)

io_fun <- function() {
  write.csv(mtcars, file = "mtcars.csv")
  read.csv("mtcars.csv")
}

Proposed Optimized Code

io_opt_fun <- function() {
  saveRDS(mtcars, file = "mtcars.rds")  
  readRDS("mtcars.rds")
}
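
To check the compression claim from the Idea above, the sizes of the two files written by these functions can be compared directly; a minimal sketch using base R's file.size() and the file names from the code above:

io_fun()                # writes mtcars.csv
io_opt_fun()            # writes mtcars.rds
file.size("mtcars.csv") # plain-text size in bytes
file.size("mtcars.rds") # compressed binary size, typically smaller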

Benchmark

library("ggplot2")
library("microbenchmark")
autoplot(microbenchmark(io_fun(), io_opt_fun()))

Drawbacks

To-Do

13. Efficient Column Extraction

Idea

The idea would be to replace the different one-column extraction alternatives with the .subset2 function.

Code Examples

Unoptimized Code

col_ext_fun <- function(){
  mtcars[, 11]
  mtcars$carb
  mtcars[[c(11)]]
  mtcars[[11]]  
}

Proposed Optimized Code

col_ext_op_fun <- function(){
  .subset2(mtcars, 11)
  .subset2(mtcars, 11)
  .subset2(mtcars, 11)
  .subset2(mtcars, 11)
}
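
As a quick sanity check (a one-line sketch), .subset2() returns the same column as the standard extraction forms, just without the S3 dispatch:

identical(.subset2(mtcars, 11), mtcars[[11]]) # TRUE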

Benchmark

library("ggplot2")
library("microbenchmark")
autoplot(microbenchmark(
  mtcars[, 11],
  mtcars$carb,
  mtcars[[c(11)]],
  mtcars[[11]],
  .subset2(mtcars, 11)
))

Drawbacks

To-Do

14. Efficient Value Extraction

Idea

The idea would be to replace the different one-value extraction alternatives with the .subset2 function.

Code Examples

Unoptimized Code

val_ext_fun <- function(){
  mtcars[32, 11]
  mtcars$carb[32]
  mtcars[[c(11, 32)]]
  mtcars[[11]][32]
}

Proposed Optimized Code

val_ext_op_fun <- function(){
  .subset2(mtcars, 11)[32]
  .subset2(mtcars, 11)[32]
  .subset2(mtcars, 11)[32]
  .subset2(mtcars, 11)[32]
}

Benchmark

library("ggplot2")
library("microbenchmark")
autoplot(microbenchmark(
  mtcars[32, 11],
  mtcars$carb[32],
  mtcars[[c(11, 32)]],
  mtcars[[11]][32],
  .subset2(mtcars, 11)[32],
  times = 1000L
))

Drawbacks

To-Do

External packages that can be explored for further speed-ups

For efficient programming

  • compiler

  • memoise

For efficient data carpentry

  • tibble

  • tidyr

  • stringr

  • readr

  • dplyr

  • data.table

For efficient input/output

  • rio

  • readr

  • data.table

  • feather

For efficient optimization

  • profvis

  • Rcpp

For efficient set-up and hardware

  • benchmarkme

For efficient collaboration

  • lubridate

For efficient workflow

  • DiagrammeR

Some additional comments

  • if is faster than ifelse, but the speed boost of if isn’t particularly interesting since it isn’t something that can easily be harnessed through vectorization. That is to say, if is only advantageous over ifelse when the cond/test argument is of length 1 (see the sketch at the end of these comments).

  • A factor is just a vector of integers with associated levels. Occasionally we want to convert a factor into its numerical equivalent. The most efficient way of doing this (especially for long factors) is:

    as.numeric(levels(f))[f]

  • The non-vectorised logical operators && and || only evaluate their second operand if needed. This is efficient and leads to neater code. Care must be taken not to use && or || on vectors, since they only evaluate the first element of each vector, giving an incorrect answer; the short-circuit behaviour is illustrated in the sketch below.
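
A minimal sketch illustrating the comments above (it assumes the microbenchmark package, as used throughout; the example values are arbitrary):

library("microbenchmark")

x <- 5 # a length-1 condition: `if` is cheaper than `ifelse`
microbenchmark(
  if (x > 0) "pos" else "neg",
  ifelse(x > 0, "pos", "neg")
)

f <- factor(c("10", "20", "20", "30"))
as.numeric(levels(f))[f]    # 10 20 20 30 (efficient)
as.numeric(as.character(f)) # same result, but slower for long factors

FALSE && stop("not evaluated") # && short-circuits: returns FALSE, stop() never runs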