This notebook explores statistical learning techniques to conduct land cover classification from multispectral imagery.
Novel machine learning techniques promise high predictive performance because they seem able to represent nonlinear relationships and higher-order interactions between predictors better than traditional linear models (Brenning, 2005).
Nevertheless, this flexibility also brings the risk of over-fitting the training data. Since nearby spatial observations tend to be more similar than distant ones, traditional random cross-validation is unable to detect this over-fitting whenever spatial observations are close to each other (Brenning, 2005).
This notebook illustrates a case study focusing on land cover classification in Montes de Maria, in the Colombian Caribbean region.
I will use a spatial and spectral subset of a Landsat 8 scene collected in 2013. The original subset of seven multispectral bands has been previously enriched with three spectral indices: (i) the normalized difference vegetation index (ndvi); (ii) the enhanced vegetation index (evi); and (iii) the normalized difference water index (ndwi).
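These indices follow standard formulas. As a reminder, here is a minimal sketch (illustrative only, not the exact code of the previous notebook) of how they can be computed from reflectance layers:
# standard spectral index formulas (NDWI after McFeeters);
# inputs are reflectance layers, e.g. RasterLayer objects
spectral_indices <- function(blue, green, red, nir) {
  ndvi <- (nir - red) / (nir + red)
  evi  <- 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1)
  ndwi <- (green - nir) / (green + nir)
  list(ndvi = ndvi, evi = evi, ndwi = ndwi)
}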
I will also use a point sample drawn from an existing 2010-2012 land cover dataset produced by IDEAM as the reference dataset for two key steps in the classification: training the classifier and assessing its thematic accuracy.
Much of the code here is borrowed from the vignette written by Alexander Brenning and Patrick Schratz (Brenning and Schratz, 2018).
Let’s use the raster library to create a raster object from an enriched Landsat image processed in a previous notebook.
rm(list=ls())
library(sp)
library(raster)
library(terra)
landsat13 <- brick("./landsat/montes_toa13.TIF")
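If you prefer the newer terra API, the same file could be read as a SpatRaster instead (shown only as an alternative; the rest of this notebook uses the raster API):
# terra equivalent of brick(); not used below
landsat13_t <- rast("./landsat/montes_toa13.TIF")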
Let’s check what landsat13 is:
landsat13
## class : RasterBrick
## dimensions : 3392, 3409, 11563328, 10 (nrow, ncol, ncell, nlayers)
## resolution : 30, 30 (x, y)
## extent : 422745, 525015, 1019775, 1121535 (xmin, xmax, ymin, ymax)
## crs : +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs
## source : /Users/ivanlizarazo/Documents/ivan/PR/datos/landsat/montes_toa13.TIF
## names : ultra.blue, blue, green, red, NIR, SWIR1, SWIR2, ndvi, evi, ndwi
## min values : 0.0734600, 0.0544000, 0.0310400, 0.0173000, -0.0034800, -0.0034400, -0.0013000, -1.3357314, -0.2695947, -0.7850666
## max values : 0.3157600, 0.3356600, 0.3658800, 0.4511000, 0.7654000, 1.2107000, 1.2107000, 0.8741745, 0.9665707, 1.1886792
Let’s read the sampled land cover data:
library(sf)
## Linking to GEOS 3.8.1, GDAL 3.1.4, PROJ 6.3.1
lc_samples <- st_read("./montes_lc_samples.shp")
## Reading layer `montes_lc_samples' from data source `/Users/ivanlizarazo/Documents/ivan/PR/datos/montes_lc_samples.shp' using driver `ESRI Shapefile'
## Simple feature collection with 2108 features and 12 fields
## geometry type: POINT
## dimension: XY
## bbox: xmin: 424287.7 ymin: 1021479 xmax: 524824 ymax: 1120175
## CRS: unknown
head(lc_samples)
Let’s add the location of each site to the attribute table:
coordenadas <- st_coordinates(lc_samples)
lc_samples$x <- coordenadas[,1]
lc_samples$y <- coordenadas[,2]
Let’s drop the geometry:
lc_samples.df = st_drop_geometry(lc_samples)
class(lc_samples.df)
## [1] "data.frame"
summary(lc_samples.df)
## class id ultrablue blue
## Min. :1.000 Min. : 1.0 Min. :0.07874 Min. :0.05928
## 1st Qu.:3.000 1st Qu.: 527.8 1st Qu.:0.08472 1st Qu.:0.06660
## Median :4.000 Median :1054.5 Median :0.08708 Median :0.06932
## Mean :4.389 Mean :1054.5 Mean :0.08921 Mean :0.07233
## 3rd Qu.:6.000 3rd Qu.:1581.2 3rd Qu.:0.09063 3rd Qu.:0.07420
## Max. :7.000 Max. :2108.0 Max. :0.15110 Max. :0.15144
## NA's :90 NA's :90
## green red nir swir1
## Min. :0.03680 Min. :0.02070 Min. :0.0124 Min. :0.00292
## 1st Qu.:0.05436 1st Qu.:0.03406 1st Qu.:0.2042 1st Qu.:0.10909
## Median :0.05994 Median :0.03946 Median :0.2373 Median :0.13047
## Mean :0.06321 Mean :0.04674 Mean :0.2260 Mean :0.12877
## 3rd Qu.:0.06833 3rd Qu.:0.05143 3rd Qu.:0.2735 3rd Qu.:0.15716
## Max. :0.17032 Max. :0.19724 Max. :0.4128 Max. :0.34942
## NA's :90 NA's :90 NA's :90 NA's :90
## swir2 ndvi evi ndwi
## Min. :0.00140 Min. :-0.5657 Min. :-0.1144 Min. :-0.7347
## 1st Qu.:0.04138 1st Qu.: 0.5612 1st Qu.: 0.3772 1st Qu.:-0.6422
## Median :0.05395 Median : 0.7023 Median : 0.4974 Median :-0.5929
## Mean :0.06071 Mean : 0.5959 Mean : 0.4543 Mean :-0.4978
## 3rd Qu.:0.07441 3rd Qu.: 0.7629 3rd Qu.: 0.5939 3rd Qu.:-0.5036
## Max. :0.26806 Max. : 0.8440 Max. : 0.8503 Max. : 0.6536
## NA's :90 NA's :90 NA's :90 NA's :90
## x y
## Min. :424288 Min. :1021479
## 1st Qu.:460348 1st Qu.:1059379
## Median :471777 Median :1081489
## Mean :479000 Mean :1078870
## 3rd Qu.:502587 3rd Qu.:1098816
## Max. :524824 Max. :1120175
##
lc_samples.df$class <- as.factor(lc_samples.df$class)
head(lc_samples.df)
Let’s remove rows with missing values:
lc_samples.df2 <- na.omit(lc_samples.df)
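A quick sanity check confirms that the 90 incomplete rows reported in the summary above were dropped (2108 - 90 = 2018 complete samples remain):
# number of rows removed; expect 90, matching the NA counts above
nrow(lc_samples.df) - nrow(lc_samples.df2)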
Note that the data set includes the following variables:
Response
class: response variable (factor) with 7 levels: ground truth information. These are the land cover categories encoded in the class attribute:
1. Urbano
2. Erial
3. Cultivo
4. Pasto
5. Bosque
6. Arbustal
7. Agua
Predictors
ultrablue, blue, green, red, nir, swir1, swir2: Landsat 8 reflectance bands
ndvi, evi, ndwi: spectral indices derived from those bands
Others
id: site identifier; not to be used as predictor
x, y: x/y location; not to be used as predictors
All but the first two variables (i.e. class and id) and the last two variables (i.e. x and y) of the data set are predictors; their names are used to construct a formula object:
(predictors <- colnames(lc_samples.df2)[3:12])
## [1] "ultrablue" "blue" "green" "red" "nir" "swir1"
## [7] "swir2" "ndvi" "evi" "ndwi"
(fo <- as.formula(paste("class ~", paste(predictors, collapse = "+"))))
## class ~ ultrablue + blue + green + red + nir + swir1 + swir2 +
## ndvi + evi + ndwi
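As an aside, base R’s reformulate() builds the same formula object without pasting strings:
# equivalent formula construction with stats::reformulate
fo2 <- reformulate(predictors, response = "class")
Let’s also define a helper function that computes per-class producer’s and user’s accuracies, plus the overall accuracy and the kappa statistic, from a confusion matrix: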
ovAcc <- function(conmat)
{
# number of total cases/samples
n = sum(conmat)
# number of correctly classified cases per class
diag = diag(conmat)
# Overall Accuracy
OA = sum(diag) / n
#
# predicted cases per class (rows of conmat are predictions)
rowsums = apply(conmat, 1, sum)
p = rowsums / n
# observed (true) cases per class (columns of conmat are observations)
colsums = apply(conmat, 2, sum)
q = colsums / n
expAccuracy = sum(p*q)
kappa = (OA - expAccuracy) / (1 - expAccuracy)
#
# Producer accuracy
PA <- diag / colsums
# User accuracy
UA <- diag / rowsums
outAcc <- data.frame(producerAccuracy = PA, userAccuracy = UA)
print(outAcc)
#
global_acc = data.frame(overallAccuracy=OA, overallKappa=kappa)
print(global_acc)
}
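As a quick sanity check of ovAcc(), consider a tiny hand-made confusion matrix (the numbers are invented for illustration): with 80 of 100 cases on the diagonal and symmetric marginals, the expected chance agreement is 0.5, so kappa = (0.8 - 0.5) / (1 - 0.5) = 0.6.
# toy 2x2 confusion matrix: 80 correct cases, 20 errors
toy <- matrix(c(40, 10, 10, 40), nrow = 2,
              dimnames = list(pred = 1:2, obs = 1:2))
ovAcc(toy)  # expect overallAccuracy 0.8 and overallKappa 0.6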
We will take a look at several classification methods with varying degrees of computational complexity and flexibility. This will give us an idea of how different models are handled by {sperrorest}, depending on the characteristics of their fitting and prediction methods. Background information about these methods can be found in Brenning and Schratz (2018).
Linear discriminant analysis (LDA) is simple and fast, and often performs surprisingly well if the problem at hand is linear enough.
As a start, let’s fit a model with all predictors and using all available data:
library(MASS)
##
## Attaching package: 'MASS'
## The following objects are masked from 'package:terra':
##
## area, select
## The following objects are masked from 'package:raster':
##
## area, select
fit <- lda(fo, data = lc_samples.df2)
Predict the land cover class with the fitted model and calculate the misclassification error rate (MER) on the training sample:
pred <- predict(fit, newdata = lc_samples.df2)$class
summary(pred)
## 1 2 3 4 5 6 7
## 141 91 489 443 338 312 204
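The MER is simply the proportion of training samples whose predicted class differs from the observed one (it equals one minus the overall accuracy computed below):
# misclassification error rate on the training sample
mean(pred != lc_samples.df2$class)
A confusion matrix gives a more detailed picture: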
conmat <- table(pred = pred, obs = lc_samples.df2$class)
conmat
## obs
## pred 1 2 3 4 5 6 7
## 1 88 16 19 2 0 0 16
## 2 9 34 9 12 5 3 19
## 3 16 30 215 56 75 51 46
## 4 18 33 86 154 48 67 37
## 5 1 10 13 40 140 89 45
## 6 1 14 1 78 77 138 3
## 7 2 8 5 1 2 2 184
lda_acc <- ovAcc(conmat)
## producerAccuracy userAccuracy
## 1 0.6518519 0.6241135
## 2 0.2344828 0.3736264
## 3 0.6178161 0.4396728
## 4 0.4489796 0.3476298
## 5 0.4034582 0.4142012
## 6 0.3942857 0.4423077
## 7 0.5257143 0.9019608
## overallAccuracy overallKappa
## 1 0.4722498 0.3716037
Classification and regression trees (CART) take a completely different approach: they recursively split the data using yes/no questions on the predictor variables, which is why CART is referred to as a binary partitioning technique.
Fit a model with all predictors and default settings:
library("rpart")
fit <- rpart(fo, data = lc_samples.df2)
# plot the trained tree
library(rpart.plot)
rpart.plot(fit)
Again, predict the land cover category with the fitted model:
pred <- predict(fit, newdata = lc_samples.df2, type = "class")
Here the predict call is slightly different. Again, we could calculate a confusion matrix as well as accuracy metrics.
conmat <- table(pred = pred, obs = lc_samples.df2$class)
conmat
## obs
## pred 1 2 3 4 5 6 7
## 1 93 33 13 11 2 1 22
## 2 0 0 0 0 0 0 0
## 3 23 33 256 77 46 36 62
## 4 14 29 57 147 72 92 27
## 5 0 2 8 16 110 51 10
## 6 2 11 0 87 86 158 1
## 7 3 37 14 5 31 12 228
cart_acc <- ovAcc(conmat)
## producerAccuracy userAccuracy
## 1 0.6888889 0.5314286
## 2 0.0000000 NaN
## 3 0.7356322 0.4803002
## 4 0.4285714 0.3356164
## 5 0.3170029 0.5583756
## 6 0.4514286 0.4579710
## 7 0.6514286 0.6909091
## overallAccuracy overallKappa
## 1 0.4915758 0.3925346
Bagging, bundling and random forests build upon the CART technique by fitting many trees on bootstrap resamples of the original data set. They differ in that random forest also samples from the predictors, and bundling adds an ancillary classifier for improved classification.
We will use the random forest algorithm here, as implemented in the {ranger} package.
library("ranger")
fit <- ranger(fo, data = lc_samples.df2, importance="impurity")
fit
## Ranger result
##
## Call:
## ranger(fo, data = lc_samples.df2, importance = "impurity")
##
## Type: Classification
## Number of trees: 500
## Sample size: 2018
## Number of independent variables: 10
## Mtry: 3
## Target node size: 1
## Variable importance mode: impurity
## Splitrule: gini
## OOB prediction error: 47.67 %
fit$variable.importance
## ultrablue blue green red nir swir1 swir2 ndvi
## 224.0947 166.7347 172.6372 154.3002 158.6667 214.4635 181.7056 134.1226
## evi ndwi
## 125.5365 165.9601
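Let’s visualize the importances with a quick base-graphics sketch:
# sort importances and draw a horizontal bar plot
imp <- sort(fit$variable.importance)
par(mar = c(4, 7, 2, 1))  # widen the left margin for the variable names
barplot(imp, horiz = TRUE, las = 1, main = "Variable importance (Gini impurity)")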
Again, predict the land cover category with the fitted model:
pred <- predict(fit, data = lc_samples.df2, type = "response")
conmat <- table(pred = pred$predictions, obs = lc_samples.df2$class)
conmat
## obs
## pred 1 2 3 4 5 6 7
## 1 135 0 0 0 0 0 0
## 2 0 145 0 0 0 0 0
## 3 0 0 348 0 0 0 0
## 4 0 0 0 343 0 0 0
## 5 0 0 0 0 347 0 0
## 6 0 0 0 0 0 350 0
## 7 0 0 0 0 0 0 350
rf_acc <- ovAcc(conmat)
## producerAccuracy userAccuracy
## 1 1 1
## 2 1 1
## 3 1 1
## 4 1 1
## 5 1 1
## 6 1 1
## 7 1 1
## overallAccuracy overallKappa
## 1 1 1
Isn’t this amazing? Not a single grid cell was misclassified by the random forest classifier! Too good to be true? Recall that ranger itself reported an out-of-bag prediction error of 47.67% above. We’ll see…
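Before moving on, note that the fitted ranger object also stores an out-of-bag confusion matrix, computed for each sample using only the trees that did not see it; it paints a far more sober picture than the table above:
# OOB confusion matrix (true classes vs. out-of-bag predictions)
fit$confusion.matrix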
Note that the sperrorest package provides a customizable toolkit for cross-validation (and bootstrap) estimation using a variety of spatial resampling schemes. In case you are interested in exploring that toolkit, you may walk through the simple crop classification case study in Brenning and Schratz (2018). In this tutorial, we want to check the thematic accuracy of the best model when applied to classify the enriched Landsat 8 data set over Montes de Maria.
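For orientation, here is a minimal sketch of what a spatial cross-validation of the {ranger} model could look like with sperrorest; the fold and repetition counts are arbitrary choices, and the call follows the package documentation:
library(sperrorest)
# 5-fold spatial CV with folds built by k-means clustering of the
# x/y coordinates, repeated 3 times
res <- sperrorest(fo, data = lc_samples.df2, coords = c("x", "y"),
                  model_fun = ranger,
                  pred_fun = function(object, newdata) {
                    predict(object, data = newdata)$predictions
                  },
                  smp_fun = partition_kmeans,
                  smp_args = list(repetition = 1:3, nfold = 5))
summary(res$error_rep)  # spatially cross-validated error estimates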
Now that we have a trained random forest classification model (fit), we can use it to make predictions, that is, to classify all cells in the landsat13 raster object.
Important: The layer names in the raster object must exactly match those that were used to train the model. This is the case if the same raster object was used (via extract) to obtain the values to fit the model. Otherwise you need to set the matching names, as we do below.
The classification step may take several minutes (depending on your computer CPU and memory):
fo
## class ~ ultrablue + blue + green + red + nir + swir1 + swir2 +
## ndvi + evi + ndwi
names(landsat13) <- c("ultrablue", "blue", "green", "red",
"nir", "swir1", "swir2", "ndvi", "evi", "ndwi")
classified <- predict(landsat13, fit,
fun = function(model, ...) predict(model, ...)$predictions)
## Predicting.. Progress: 48%. Estimated remaining time: 34 seconds.
## Predicting.. Progress: 91%. Estimated remaining time: 5 seconds.
## Aggregating predictions.. Progress: 44%. Estimated remaining time: 39 seconds.
## Aggregating predictions.. Progress: 87%. Estimated remaining time: 9 seconds.
## Predicting.. Progress: 48%. Estimated remaining time: 33 seconds.
## Predicting.. Progress: 93%. Estimated remaining time: 4 seconds.
## Aggregating predictions.. Progress: 40%. Estimated remaining time: 46 seconds.
## Aggregating predictions.. Progress: 78%. Estimated remaining time: 17 seconds.
## Predicting.. Progress: 43%. Estimated remaining time: 41 seconds.
## Predicting.. Progress: 91%. Estimated remaining time: 6 seconds.
## Aggregating predictions.. Progress: 40%. Estimated remaining time: 45 seconds.
## Aggregating predictions.. Progress: 80%. Estimated remaining time: 15 seconds.
## Predicting.. Progress: 66%. Estimated remaining time: 16 seconds.
## Aggregating predictions.. Progress: 66%. Estimated remaining time: 16 seconds.
Let’s check the output:
classified
## class : RasterLayer
## dimensions : 3392, 3409, 11563328 (nrow, ncol, ncell)
## resolution : 30, 30 (x, y)
## extent : 422745, 525015, 1019775, 1121535 (xmin, xmax, ymin, ymax)
## crs : +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs
## source : memory
## names : layer
## values : 1, 7 (min, max)
## attributes :
## ID value
## from: 1 1
## to : 7 7
Let’s plot the results:
plot(classified)
There is a single layer in the classified object which represents the land cover class at every pixel.
Let’s save the classified image:
writeRaster(classified, filename="./landsat/montes_RF_05.04.TIF",
datatype="INT1U", overwrite=TRUE)
unique(classified$layer)
## [1] 1 2 3 4 5 6 7
# https://colorbrewer2.org/#type=sequential&scheme=YlGn&n=5
library(dplyr)
##
## Attaching package: 'dplyr'
## The following object is masked from 'package:MASS':
##
## select
## The following objects are masked from 'package:terra':
##
## collapse, desc, intersect, near, select, union
## The following objects are masked from 'package:raster':
##
## intersect, select, union
## The following objects are masked from 'package:stats':
##
## filter, lag
## The following objects are masked from 'package:base':
##
## intersect, setdiff, setequal, union
library(RColorBrewer)
lc_colors <- c('#fc4e2a','#f7fcb9' , '#99d8c9','#006837',
'#41ae76', '#d9f0a3',
'#addd8e', '#2b8cbe') %>%
colorRampPalette()
plot(classified, legend = TRUE, axes = TRUE, col = lc_colors(7))  # one color per class
This section discusses how to assess the accuracy of the model to get an idea of how reliable the classified map might be.
A first task is to visually evaluate the quality of the classification. To do so, we select a known area and take a closer look at it.
# This is Maria La Baja extent
N <- 1120000
S <- 1100000
W <- 450000
E <- 481000
Easting <- c(W,W,E,E,W)
Northing <- c(N,S,S,N,N)
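These vectors trace the corners of the rectangle; they can be used, for instance, to outline the area on the classified map:
# outline the Maria La Baja rectangle on the classified map
plot(classified, legend = FALSE, col = lc_colors(7))
lines(Easting, Northing, col = "red", lwd = 2)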
Next let’s crop landsat13 and the classified map to the region of interest:
print(M <- matrix(c(W,S,E,N), nrow = 2, ncol = 2))
## [,1] [,2]
## [1,] 450000 481000
## [2,] 1100000 1120000
ext.region <- extent(M)
region.brick <- crop(landsat13, ext.region)
region.lc <- crop(classified, ext.region)
region.lc
## class : RasterLayer
## dimensions : 667, 1033, 689011 (nrow, ncol, ncell)
## resolution : 30, 30 (x, y)
## extent : 450015, 481005, 1099995, 1120005 (xmin, xmax, ymin, ymax)
## crs : +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs
## source : memory
## names : layer
## values : 1, 7 (min, max)
## attributes :
## ID value
## from: 1 1
## to : 7 7
new_colors <- c('#fc4e2a','#f7fcb9' , '#99d8c9','#006837','#41ae76', '#d9f0a3',
'#addd8e', '#2b8cbe') %>%
colorRampPalette()
par(mfrow=c(1,2))
plotRGB(region.brick, r = 5, g = 6, b = 4, stretch = "lin")
plot(region.lc, legend=FALSE, axes=FALSE, col=lc_colors(7))
Thematic accuracy assessment, in the strict sense of the term, requires: (i) validation data collected in the field (or any independent dataset); and (ii) validation data not used to train the prediction model.
Let’s read the data:
ref_data <- vect("./shapes/MontesLC_UTM18N.shp")
What is ref_data?
ref_data
## class : SpatVector
## geometry : polygons
## dimensions : 3426, 5 (geometries, attributes)
## extent : 422788.8, 524926.8, 1019785, 1121539 (xmin, xmax, ymin, ymax)
## coord. ref. : +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs
## names : OBJECTID CODIGO LEYENDA3N LC Code
## type : <num> <num> <chr> <num> <num>
## values : 1.599e+05 3.112e+04 3.1.1. Bosque denso 311 5
## 1.599e+05 233 2.3.3. Pastos enmal~ 233 4
## 1.612e+05 244 2.4.4. Mosaico de p~ 244 4
Let’s obtain a stratified random sample from the reference data. Note that, in real-life projects, this sample is usually obtained through fieldwork.
set.seed(15)
# generate stratified random samples from the polygons
# change size parameter as wanted
ptsamp <- spatSample(ref_data, size = 300, method = "random",
                     strata = ref_data$Code, chess = "")
What is ptsamp?
ptsamp
## class : SpatVector
## geometry : points
## dimensions : 1821, 1 (geometries, attributes)
## extent : 423650.5, 524836, 1020962, 1120232 (xmin, xmax, ymin, ymax)
## coord. ref. : +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs
## names : pol.id
## type : <num>
## values : 1
## 1
## 1
Note that the attribute pol.id holds the reference land cover codes.
ptsamp$reference <- as.factor(ptsamp$pol.id)
Now we can extract the cell values from the classified image. We will use the extract function from terra.
First, we need a SpatRaster object:
classified2 <- rast(classified)
classified2
## class : SpatRaster
## dimensions : 3392, 3409, 1 (nrow, ncol, nlyr)
## resolution : 30, 30 (x, y)
## extent : 422745, 525015, 1019775, 1121535 (xmin, xmax, ymin, ymax)
## coord. ref. : +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs
## source : memory
## name : layer
## min value : 1
## max value : 7
Then, the extraction task:
prediction <- terra::extract(classified2, ptsamp)
prediction
ptsamp$prediction <- as.factor(prediction$layer)
# Quick check for the extracted values
ptsamp$prediction[500:515]
## [1] "3" "3" "5" "3" "3" "3" "3" "5" "4" "3" "2" "3" "3" "3" "3" "3"
ptsamp$reference[500:515]
## [1] "3" "3" "3" "3" "3" "3" "3" "3" "3" "3" "3" "3" "3" "3" "3" "3"
Then, create the error matrix:
(conmat <- table(ptsamp$prediction, ptsamp$reference))
##
## 1 2 3 4 5 6 7
## 1 82 7 6 5 1 1 8
## 2 8 35 6 11 2 3 6
## 3 15 13 216 57 13 26 21
## 4 11 31 29 103 28 61 15
## 5 1 6 16 37 132 64 16
## 6 3 9 6 71 102 140 7
## 7 9 16 15 7 19 4 227
## NA 72 3 6 9 3 1 0
Let’s remove the last row, which corresponds to sample points without a prediction (NA cells in the classified map):
(conmat2 <- conmat[-8, ])
##
## 1 2 3 4 5 6 7
## 1 82 7 6 5 1 1 8
## 2 8 35 6 11 2 3 6
## 3 15 13 216 57 13 26 21
## 4 11 31 29 103 28 61 15
## 5 1 6 16 37 132 64 16
## 6 3 9 6 71 102 140 7
## 7 9 16 15 7 19 4 227
Compute the overall accuracy and the “kappa” statistic.
Overall accuracy:
rf_acc <- ovAcc(conmat2)
## producerAccuracy userAccuracy
## 1 0.6356589 0.7454545
## 2 0.2991453 0.4929577
## 3 0.7346939 0.5983380
## 4 0.3539519 0.3705036
## 5 0.4444444 0.4852941
## 6 0.4682274 0.4142012
## 7 0.7566667 0.7643098
## overallAccuracy overallKappa
## 1 0.5414013 0.4533355
Note that the random forest classification (overall accuracy of about 0.54) has a higher accuracy than the CART classification from the previous notebook. In this experiment, the classes 1 Urbano, 3 Cultivo and 7 Agua have an almost acceptable thematic accuracy.
However, note that the {ranger} classification is not as good as it seemed when evaluating the thematic accuracy of the fitted model on its own training data. So, it seems that spatial dependence does matter. Are you aware of any classification algorithm that takes into account spatial relationships between land cover categories?
Brenning, A. 2005. “Spatial Prediction Models for Landslide Hazards: Review, Comparison and Evaluation”. Natural Hazards and Earth System Science 5 (6): 853–62. https://doi.org/10.5194/nhess-5-853-2005.
Brenning, A. and Schratz, P. 2018. “Spatial Modeling Using Statistical Learning Techniques”. Available at https://cran.r-project.org/web/packages/sperrorest/vignettes/spatial-modeling-use-case.html
Cite as: Lizarazo, I., 2021. A tutorial on pixel-based land cover classification using random forests in R. Available at https://rpubs.com/ials2un/rf_landcover.
sessionInfo()
## R version 4.0.4 (2021-02-15)
## Platform: x86_64-apple-darwin17.0 (64-bit)
## Running under: macOS Mojave 10.14.6
##
## Matrix products: default
## BLAS: /Library/Frameworks/R.framework/Versions/4.0/Resources/lib/libRblas.dylib
## LAPACK: /Library/Frameworks/R.framework/Versions/4.0/Resources/lib/libRlapack.dylib
##
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] RColorBrewer_1.1-2 dplyr_1.0.4 ranger_0.12.1 rpart.plot_3.0.9
## [5] rpart_4.1-15 MASS_7.3-53.1 sf_0.9-7 terra_1.0-10
## [9] raster_3.4-5 sp_1.4-5
##
## loaded via a namespace (and not attached):
## [1] Rcpp_1.0.6 highr_0.8 pillar_1.5.1 bslib_0.2.4
## [5] compiler_4.0.4 jquerylib_0.1.3 class_7.3-18 tools_4.0.4
## [9] digest_0.6.27 jsonlite_1.7.2 evaluate_0.14 lifecycle_1.0.0
## [13] tibble_3.1.0 lattice_0.20-41 pkgconfig_2.0.3 rlang_0.4.10
## [17] Matrix_1.3-2 DBI_1.1.1 yaml_2.2.1 rgdal_1.5-23
## [21] xfun_0.21 e1071_1.7-4 stringr_1.4.0 knitr_1.31
## [25] generics_0.1.0 sass_0.3.1 vctrs_0.3.6 tidyselect_1.1.0
## [29] classInt_0.4-3 grid_4.0.4 glue_1.4.2 R6_2.5.0
## [33] fansi_0.4.2 rmarkdown_2.7 purrr_0.3.4 magrittr_2.0.1
## [37] ellipsis_0.3.1 codetools_0.2-18 htmltools_0.5.1.1 units_0.6-7
## [41] assertthat_0.2.1 utf8_1.1.4 KernSmooth_2.23-18 stringi_1.5.3
## [45] crayon_1.4.1