Getting data into R

Ben Bond-Lamberty

2022-04-07

Outline

How do we get our data into R?

Today we’re talking specifically about tabular data.

Comma-separated value (CSV) files

These are plain-text files in which columns are separated by commas as delimiters:

## X,Y,Reference
## 1,2,Alice
## 4,5,"Bob, Carol, and Dave"

read.csv

read.csv as a workhorse tool

This function is the principal means of reading tabular data into R.

Unless colClasses is specified, all columns are read as character columns and then converted using type.convert to logical, integer, numeric, complex or (depending on as.is) factor as appropriate. Quotes are (by default) interpreted in all fields, so a column of values like “42” will result in an integer column.

A field or line is ‘blank’ if it contains nothing (except whitespace if no separator is specified) before a comment character or the end of the field or line.
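
For example, a quick sketch of the quote behavior (this uses read.csv’s text argument, which comes up again later in these slides; the tiny data set is made up):

x <- read.csv(text = c("X", '"42"'))
str(x)   # the quoted "42" still ends up as an integer column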

read.csv

Demonstrate it
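
Something like this, for example (a sketch; the file name is the small example file used elsewhere in these slides):

dat <- read.csv("test-files/basic-file.csv")
dat
str(dat)   # check what column types R guessed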

Caution: file paths

Do not use absolute file paths.

If the first line of your R script is setwd("C:") I will come into your office and SET YOUR COMPUTER ON FIRE 🔥.

https://www.tidyverse.org/blog/2017/12/workflow-vs-script/
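
A sketch of the idea, with a made-up file name: keep paths relative to the project (or working) directory, so the script runs on someone else’s machine too.

dat <- read.csv("data/my-measurements.csv")                    # good: relative path
# dat <- read.csv("C:/Users/me/Desktop/my-measurements.csv")   # bad: absolute path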

Caution: Excel

Microsoft Excel does not handle CSVs well in some circumstances. It mucks with significant digits, and will mangle (“reformat”) dates.

Use a dedicated CSV editor, not Excel, if at all possible.

read.csv

read.csv is a front-end for the more general read.table:

> read.csv
function (file, header = TRUE, sep = ",", quote = "\"", dec = ".", 
    fill = TRUE, comment.char = "", ...) 
read.table(file = file, header = header, sep = sep, quote = quote, 
    dec = dec, fill = fill, comment.char = comment.char, ...)

read.csv: crucial parameters

## # This is a header line
## X,Y,Reference
## 1,2,Alice
## 4,5,"Bob, Carol, and Dave"
read.csv("test-files/file-with-header.csv", skip = 1)
##   X Y            Reference
## 1 1 2                Alice
## 2 4 5 Bob, Carol, and Dave

read.csv: crucial parameters

read.csv("test-files/file-with-header.csv", comment.char = "#")
##   X Y            Reference
## 1 1 2                Alice
## 2 4 5 Bob, Carol, and Dave

read.csv: crucial parameters

## X,Y,Z
## 1,,3
## 4,5,6
read.csv("test-files/missing-values.csv")
##   X  Y Z
## 1 1 NA 3
## 2 4  5 6
read.csv("test-files/missing-values.csv", na.strings = "4")
##    X  Y Z
## 1  1 NA 3
## 2 NA  5 6


readr::read_csv

Some advantages: it’s much faster on large files, it returns a tibble, it never converts strings to factors, and it prints the column types it guessed so you can check them.
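
For example, a quick sketch using the same example file as elsewhere in these slides:

library(readr)
dat <- read_csv("test-files/basic-file.csv")
dat   # a tibble; the guessed column types are printed as the file is read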

Fancier things: reading from online

read.csv("https://raw.githubusercontent.com/bpbond/R-workshops/main/test-files/basic-file.csv")
##   X Y            Reference
## 1 1 2                Alice
## 2 4 5 Bob, Carol, and Dave

Fancier things: reading from memory

my_data <- c("A,B", "1,2", "3,4")
read.csv(text = my_data)
##   A B
## 1 1 2
## 2 3 4

Skipping a units line

This is fairly common and seems like a pain:

## X,Y,Z
## ,cm,m2/ha
## 1,2,3
## 4,5,6

The units row forces the affected columns to be read as character; we then need to drop that row and convert each of those columns back to numeric. What a pain!

library(dplyr)   # for glimpse()
x <- read.csv("test-files/units-line.csv")
glimpse(x)
## Rows: 3
## Columns: 3
## $ X <int> NA, 1, 4
## $ Y <chr> "cm", "2", "5"
## $ Z <chr> "m2/ha", "3", "6"
x <- x[-1,]
x$Y <- as.numeric(x$Y)
x$Z <- as.numeric(x$Z)
glimpse(x)
## Rows: 2
## Columns: 3
## $ X <int> 1, 4
## $ Y <dbl> 2, 5
## $ Z <dbl> 3, 6

Skipping a units line

A slicker way is to read the file in as raw text, delete the problematic line(s), and then call read.csv directly on the in-memory text:

x_raw <- readLines("test-files/units-line.csv")
print(x_raw)
## [1] "X,Y,Z"     ",cm,m2/ha" "1,2,3"     "4,5,6"
read.csv(text = x_raw[-2])
##   X Y Z
## 1 1 2 3
## 2 4 5 6

Skipping columns

Set the corresponding colClasses entries to "NULL" (the character string, not R’s NULL object).
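
For example, a sketch that drops the Reference column from the example file used earlier:

read.csv("test-files/basic-file.csv",
         colClasses = c("integer", "integer", "NULL"))   # "NULL" = skip this column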

read_csv is easier here…
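
One way readr makes this easier is cols_only(), which keeps just the columns you name; a sketch:

library(readr)
read_csv("test-files/basic-file.csv",
         col_types = cols_only(X = col_integer(), Y = col_integer()))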

Other delimiters

What if your data are tab-delimited (or something else)?

See read.delim and the more general read.table
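
A quick sketch (these file names are hypothetical):

read.delim("test-files/tab-separated.txt")                               # tab-delimited; sep = "\t" by default
read.table("test-files/pipe-separated.txt", sep = "|", header = TRUE)    # any other delimiter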