1. Introduction

This notebook illustrates how to conduct land cover classification from multispectral imagery using the terra library. It aims to help Percepcion Remota students at UNAL get started with remote sensing image analysis in R.

I will use a spatial and spectral subset of a Landsat 8 scene collected in 2013. The subset is a seven-band image which covers the area known as Montes de Maria, in the Colombian Caribbean region.

I will also use an existing 2010-2012 land cover dataset, produced by IDEAM, as the reference dataset for two key steps in the classification process: (i) obtaining reliable training samples to build the model; and (ii) evaluating the thematic accuracy of the results.

Many code lines have been taken from the excellent resources written by Ani Ghosh and Robert Hijmans.

2. Reading input data

2.1 Reading the multispectral image

Let’s use the terra library to create a SpatRaster object from a Landsat multispectral TIF file obtained in a previous notebook.

rm(list=ls())  # clear the workspace
library(sp)
library(terra)
# read the multispectral image as a SpatRaster
landsat13 <- rast("./landsat/montes-landsat.TIF")

Let’s check what landsat13 is:

landsat13
## class       : SpatRaster 
## dimensions  : 3392, 3409, 7  (nrow, ncol, nlyr)
## resolution  : 30, 30  (x, y)
## extent      : 422745, 525015, 1019775, 1121535  (xmin, xmax, ymin, ymax)
## coord. ref. : +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs 
## source      : montes-landsat.TIF 
## names       : ultra-blue,  blue, green,   red,   NIR, SWIR1, ... 
## min values  :          0,     0,     0,     0,     0,     0, ... 
## max values  :      20788, 21783, 23294, 27555, 43270, 65535, ...
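
The printed summary confirms the band order: ultra-blue, blue, green, red, NIR, SWIR1, SWIR2. A natural color composite is a quick sanity check that the bands are what we expect (a minimal sketch):

# natural color composite: red = band 4, green = band 3, blue = band 2
plotRGB(landsat13, r = 4, g = 3, b = 2, stretch = "lin")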

Now, let’s load a shapefile representing the study area:

library(raster)
# shapefile() from the raster package reads the data as a (sp) Spatial object
montes  <- shapefile("./shapes/MontesdeMaria_UTM18N.shp")

Let’s plot the data to check the extent of the image:

plot(landsat13[[5]], main="Landsat 8 - NIR - in 2013")
plot(montes, add=TRUE)

First of all, it is important to get rid of the nodata values, stored here as zeros (they correspond to pixels that lie outside the scene boundaries).

#  Set all values in landsat13 that are == 0  to be NA
landsat13[ landsat13 == 0 ] <- NA
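
To see how much of the image was masked, we can count the NA cells (a quick sketch using terra's global function):

# fraction of NA cells in the NIR band after masking
global(is.na(landsat13[[5]]), "mean")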

Let’s check the change:

plot(landsat13[[5]], main="Landsat 8 - NIR - in 2013")
plot(montes, add=TRUE)

2.2 Reading reference data

Training and validation data are essential for succeeding in land cover classification. The higher the quality of the reference data, the higher the quality of the classification.

In this notebook, I use a high-quality land cover dataset produced by IDEAM to derive both training and validation data.

Let’s read the land cover data:

# vect() from terra reads the data as a SpatVector
ref_data <- vect("./shapes/MontesLC_UTM18N.shp")

What is ref_data?

ref_data
##  class       : SpatVector 
##  geometry    : polygons 
##  dimensions  : 3426, 5  (geometries, attributes)
##  extent      : 422788.8, 524926.8, 1019785, 1121539  (xmin, xmax, ymin, ymax)
##  coord. ref. : +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs 
##  names       :  OBJECTID    CODIGO            LEYENDA3N    LC  Code
##  type        :     <num>     <num>                <chr> <num> <num>
##  values      : 1.599e+05 3.112e+04  3.1.1. Bosque denso   311     5
##                1.599e+05       233 2.3.3. Pastos enmal~   233     4
##                1.612e+05       244 2.4.4. Mosaico de p~   244     4

Note that the LC attribute holds the original land cover codes. These are Level 3 Land Cover codes according to the Corine Land Cover Classification System used in Colombia.

unique(ref_data$LC)
##  [1] 311 233 244 243 512 111 323 322 315 231 314 232 411 242 514 313 331 333 212
## [20] 334 413 211 112 223 224 513 521 312 131 121 132 245 522 421 523 511
unique(ref_data$LEYENDA3N)
##  [1] "3.1.1. Bosque denso"                                    
##  [2] "2.3.3. Pastos enmalezados"                              
##  [3] "2.4.4. Mosaico de pastos con espacios naturales"        
##  [4] "2.4.3. Mosaico de cultivos, pastos y espacios naturales"
##  [5] "5.1.2. Lagunas, lagos y cienagas naturales"             
##  [6] "1.1.1. Tejido urbano continuo"                          
##  [7] "3.2.3. Vegetacion secundaria o en transicion"           
##  [8] "3.2.2. Arbustal"                                        
##  [9] "3.1.5. Plantacion forestal"                             
## [10] "2.3.1. Pastos limpios"                                  
## [11] "3.1.4. Bosque de galeria y ripario"                     
## [12] "2.3.2. Pastos arbolados"                                
## [13] "4.1.1. Zonas Pantanosas"                                
## [14] "2.4.2. Mosaico de pastos y cultivos"                    
## [15] "5.1.4. Cuerpos de agua artificiales"                    
## [16] "3.1.3. Bosque fragmentado"                              
## [17] "3.3.1. Zonas arenosas naturales"                        
## [18] "3.3.3. Tierras desnudas y degradadas"                   
## [19] "2.1.2. Cereales"                                        
## [20] "3.3.4. Zonas quemadas"                                  
## [21] "4.1.3. Vegetacion acuatica sobre cuerpos de agua"       
## [22] "2.1.1. Otros cultivos transitorios"                     
## [23] "1.1.2. Tejido urbano discontinuo"                       
## [24] "2.2.3. Cultivos permanentes arboreos"                   
## [25] "2.2.4. Cultivos agroforestales"                         
## [26] "5.1.3. Canales"                                         
## [27] "5.2.1. Lagunas costeras"                                
## [28] "3.1.2. Bosque abierto"                                  
## [29] "1.3.1. Zonas de extraccion minera"                      
## [30] "1.2.1. Zonas industriales o comerciales"                
## [31] "1.3.2. Zona de disposicion de residuos"                 
## [32] "2.4.5. Mosaico de cultivos con espacios naturales"      
## [33] "5.2.2. Mares y oceanos"                                 
## [34] "4.2.1. Pantanos costeros"                               
## [35] "5.2.3. Estanques para acuicultura marina"               
## [36] "5.1.1. Rios (50 m)"

For this notebook, I use generalized land cover categories which are stored in the Code attribute:

unique(ref_data$Code)
## [1] 5 4 7 1 6 2 3

These are the generalized categories:

    1. Urbano
    2. Erial
    3. Cultivo
    4. Pasto
    5. Bosque
    6. Arbustal
    7. Agua
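
Since the generalized codes 1 to 7 follow the order of this list, a small lookup table is handy for labeling outputs later on (a minimal sketch):

# lookup: generalized land cover codes and their names
class_names <- c("Urbano", "Erial", "Cultivo", "Pasto", "Bosque", "Arbustal", "Agua")
# number of reference polygons per generalized class
table(factor(ref_data$Code, levels = 1:7, labels = class_names))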

3. Data pre-processing

3.1 Computing TOA reflectance values

Let’s convert the “raw” digital values into TOA reflectance values:

# Review the landsat metadata to obtain the conversion parameters
toa13 <-  -0.1 + landsat13*2.0E-5
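
The constants above are the standard Landsat 8 Level-1 rescaling coefficients (REFLECTANCE_MULT_BAND_x = 2.0E-05 and REFLECTANCE_ADD_BAND_x = -0.1 in the MTL metadata file). Note that this reflectance is not corrected for the solar angle; if needed, a sun-elevation correction can be sketched as follows (the SUN_ELEVATION value below is hypothetical; read the actual one from your MTL file):

# TOA reflectance corrected for sun angle: rho = rho' / sin(sun_elevation)
sun_elev <- 60.0   # hypothetical SUN_ELEVATION (degrees) from the MTL file
toa13_corrected <- toa13 / sin(sun_elev * pi / 180)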

3.2 Computing spectral indices

Let’s define a function to compute the normalized difference vegetation index (NDVI):

vi <- function(img, k, i) {
  bk <- img[[k]]
  bi <- img[[i]]
  vi <- (bk - bi) / (bk + bi)
  return(vi)
}
# For Landsat 8: NIR = 5, red = 4.
ndvi <- vi(toa13, 5, 4)
plot(ndvi, col=rev(terrain.colors(10)), main = "Montes-NDVI in 2013")

Now, let’s compute the enhanced vegetation index (EVI), defined as EVI = 2.5 * (NIR - Red) / (NIR + 6 * Red - 7.5 * Blue + 1).

vi2 <- function(img, m, k, i) {
  bm <- img[[m]]
  bk <- img[[k]]
  bi <- img[[i]]
  vi2 <- 2.5 * ((bm - bk) / (bm + 6* bk - 7.5* bi + 1))
  return(vi2)
}
# For Landsat 8: NIR = 5, red = 4, blue=2.
evi <- vi2(toa13, 5, 4, 2)
plot(evi, col=rev(terrain.colors(10)), main = "Montes- EVI in 2013")

It seems that EVI provides more contrast, and thus more information, than NDVI over this area. Why? You may review the literature to find a reason.

par(mfrow=c(1,2))
hist(ndvi, main="NDVI")
hist(evi, main="EVI")

# correlation plot between indices
indices <- c(ndvi,evi)
names(indices) <- c("ndvi", "evi")
pairs(indices)

Let’s compute the normalized difference water index (NDWI), defined as NDWI = (Green - NIR) / (Green + NIR).

wi <- function(img, j, k) {
  bj <- img[[j]]
  bk <- img[[k]]
  wi <- (bj - bk) / (bj + bk)
  return(wi)
}
# For Landsat 8: NIR = 5, green = 3.
ndwi <- wi(toa13, 3, 5)
plot(ndwi, col=rev(terrain.colors(10)), main = "Montes- NDWI in 2013")

3.3 Adding the spectral indices to the TOA image

Let’s combine toa13 with the three spectral indices: the NDVI, the EVI, and the NDWI:

toa13 <- c(toa13, ndvi, evi, ndwi)

Now we have a 10-band multispectral image:

nlyr(toa13)
## [1] 10
names(toa13) <- c("ultra-blue", "blue", "green",      
                  "red","NIR","SWIR1","SWIR2","ndvi", "evi", "ndwi")   

4. Data processing

4.1 Getting training data

Let’s sample the land cover polygons using the spatSample function from terra. Note that, when the strata parameter is set to a field name in a SpatVector of polygons, size samples are taken from each stratum.

While this function takes much more time than the spsample approach shown in the terra tutorial, it seems a better option because it ensures that every class in the reference dataset is sampled.

set.seed(63)
# generate stratified random samples from the polygons
# change size parameter as wanted
ptsamp <- spatSample(ref_data, size=350, method="random",
                     strata=ref_data$Code, chess="")
unique(ptsamp$pol.id)
## [1] 1 2 3 4 5 6 7
# convert to sp objects so we can use sp::over() for the point-in-polygon overlay
nptsamp <- as(ptsamp, "Spatial")
nref_data <- as(ref_data, "Spatial")
# attach the generalized land cover code of the enclosing polygon to each point
ptsamp$class <- over(nptsamp, nref_data)$Code
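
In recent versions of terra, the round trip through sp can be avoided, since extract also performs point-in-polygon overlays between two SpatVector objects (a sketch, assuming your terra version supports it):

# pure-terra alternative to the sp::over() overlay above
ptsamp$class <- extract(ref_data, ptsamp)$Code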

What is ptsamp?

ptsamp
##  class       : SpatVector 
##  geometry    : points 
##  dimensions  : 2096, 2  (geometries, attributes)
##  extent      : 424229, 524609.3, 1021068, 1119840  (xmin, xmax, ymin, ymax)
##  coord. ref. : +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs 
##  names       : pol.id class
##  type        :  <num> <num>
##  values      :      1     1
##                     1     1
##                     1     1

Now the attribute class holds the land cover codes.

Once we have the training sites, we can extract the cell values from the landsat TOA layers. These band values from the toa13 object will be the predictor variables. The “class” values from the ptsamp object will be the response variable.

# We use the x-y coordinates to extract the spectral values for the locations
xy <- as.matrix(geom(ptsamp)[,c('x','y')])
df <- extract(toa13, xy)[,-1]
# Quick check for the extracted values
head(df)
sampdata <- data.frame(class = as.factor(ptsamp$class), df)
# Quick check of the first rows of the output
head(sampdata)
# ... and of the last rows
tail(sampdata)
summary(sampdata$class)
##   1   2   3   4   5   6   7 
## 202 144 350 350 350 350 350

4.2 Training the classifier

Now we will train the classification algorithm using the sampdata data.frame.

There are several classification algorithms to choose from. In this notebook we will use CART (Classification and Regression Trees), a well-known decision tree classifier. For more information on how the CART model works, see the rpart package documentation.

library(rpart)
# Train the model
cartmodel <- rpart(as.factor(class)~., data = sampdata, method = 'class', minsplit = 3)

One of the primary reasons for choosing a CART model is to have a closer look at the classification model. Unlike other models, CART provides a very simple way of inspecting and plotting the model structure.

# print the trained model
print(cartmodel)
## n=2003 (93 observations deleted due to missingness)
## 
## node), split, n, loss, yval, (yprob)
##       * denotes terminal node
## 
##   1) root 2003 1653 7 (0.069 0.071 0.17 0.17 0.17 0.17 0.17)  
##     2) SWIR1>=0.07054 1765 1418 6 (0.077 0.065 0.2 0.19 0.18 0.2 0.097)  
##       4) ndwi< -0.6815695 162   39 3 (0 0.0062 0.76 0.043 0.08 0.11 0) *
##       5) ndwi>=-0.6815695 1603 1274 6 (0.085 0.07 0.14 0.2 0.19 0.21 0.11)  
##        10) green>=0.06141 815  635 4 (0.16 0.11 0.2 0.22 0.069 0.076 0.17)  
##          20) blue>=0.08302 175   86 1 (0.51 0.22 0.074 0.057 0.011 0.017 0.11)  
##            40) SWIR1< 0.21825 114   30 1 (0.74 0.11 0.096 0.0088 0.0088 0 0.044) *
##            41) SWIR1>=0.21825 61   35 2 (0.082 0.43 0.033 0.15 0.016 0.049 0.25) *
##          21) blue< 0.08302 640  470 4 (0.062 0.081 0.23 0.27 0.084 0.092 0.18)  
##            42) SWIR1>=0.12508 544  381 4 (0.066 0.083 0.24 0.3 0.079 0.1 0.13)  
##              84) SWIR1< 0.16894 339  239 3 (0.088 0.062 0.29 0.24 0.074 0.091 0.15)  
##               168) green>=0.06773 135   83 3 (0.16 0.1 0.39 0.074 0.03 0.03 0.21) *
##               169) green< 0.06773 204  134 4 (0.039 0.034 0.24 0.34 0.1 0.13 0.11) *
##              85) SWIR1>=0.16894 205  122 4 (0.029 0.12 0.16 0.4 0.088 0.12 0.083) *
##            43) SWIR1< 0.12508 96   50 7 (0.042 0.073 0.19 0.073 0.11 0.031 0.48) *
##        11) green< 0.06141 788  521 6 (0.0089 0.029 0.075 0.18 0.32 0.34 0.047)  
##          22) SWIR2< 0.04767 354  197 5 (0.0056 0.025 0.093 0.13 0.44 0.23 0.071) *
##          23) SWIR2>=0.04767 434  250 6 (0.012 0.032 0.06 0.23 0.22 0.42 0.028) *
##     3) SWIR1< 0.07054 238   60 7 (0.0084 0.12 0 0.0042 0.11 0.0042 0.75)  
##       6) red>=0.06337 29    4 2 (0.034 0.86 0 0 0 0 0.1) *
##       7) red< 0.06337 209   34 7 (0.0048 0.019 0 0.0048 0.13 0.0048 0.84) *

The printed decision tree is hard to read. Let’s try another way:

# Plot the trained classification tree
par(xpd = NA) # otherwise on some devices the text is clipped
plot(cartmodel, uniform=TRUE, main="Classification Tree")
text(cartmodel, cex=0.8 ,digits=3, col="blue")
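
If you have the rpart.plot package installed, it draws a more readable tree (an optional sketch; argument values are illustrative):

# optional: a prettier rendering of the same tree
library(rpart.plot)
rpart.plot(cartmodel, type = 2, extra = 104, box.palette = "Greens")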

The plot shows the different splitting rules used to predict the outcome (here, land cover classes). For example, the top split sends observations with SWIR1 TOA reflectance < 0.07054 to the right branch, where a further split based on red TOA reflectance assigns them either to land cover class 2 (Erial) or 7 (Agua). When SWIR1 TOA reflectance is >= 0.07054 and ndwi < -0.6816, observations are assigned to land cover class 3 (Cultivo).

Note that we trained the CART model without making any effort to parameterize or “tune” the decision tree. In a real-life project, this task is needed to obtain accurate results.

See ?rpart.control to set different parameters for building the model.

You can print/plot more about the cartmodel created in the previous example. For example, you can use plotcp(cartmodel) to learn about the cost-complexity parameter (the cp argument in rpart).
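
As an illustration, here is a hedged sketch of how such control parameters could be passed (the values are illustrative, not tuned):

# illustrative (untuned) control parameters; see ?rpart.control
ctrl <- rpart.control(minsplit = 10,  # minimum observations to attempt a split
                      cp = 0.005,     # complexity parameter threshold
                      maxdepth = 10)  # maximum tree depth
cartmodel2 <- rpart(as.factor(class) ~ ., data = sampdata,
                    method = 'class', control = ctrl)
plotcp(cartmodel2)   # inspect the cost-complexity curve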

4.3 Conducting the classification step

Now that we have our trained classification model (cartmodel), we can use it to make predictions, that is, to classify all cells in the toa13 SpatRaster.

Important: The layer names in the SpatRaster should exactly match those that were used to train the model. This will be the case if the same SpatRaster object was used (via extract) to obtain the values to fit the model. Otherwise you need to specify the matching names.

The classification step may take several minutes (depending on your computer CPU and memory):

classified <- predict(toa13, cartmodel, na.rm = TRUE)

Let’s check the output:

classified
## class       : SpatRaster 
## dimensions  : 3392, 3409, 7  (nrow, ncol, nlyr)
## resolution  : 30, 30  (x, y)
## extent      : 422745, 525015, 1019775, 1121535  (xmin, xmax, ymin, ymax)
## coord. ref. : +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs 
## source      : memory 
## names       :         X1,         X2,         X3,         X4,         X5,         X6, ... 
## min values  : 0.00000000, 0.00617284, 0.00000000, 0.00000000, 0.00000000, 0.00000000, ... 
## max values  :  0.7368421,  0.8620690,  0.7592593,  0.4048780,  0.4435028,  0.4239631, ...

Let’s plot the results:

plot(classified)

There are seven layers in the classified object, each layer representing the probability of a particular land cover class.

In order to obtain a single classified image, we assign each pixel to the land cover class with the highest probability.

The terra library provides the app function, which applies a function across the layers of a SpatRaster, cell by cell. We can use it to create the final single-layer product.

Be aware that sometimes, when applying the app function, errors arise due to cells where all values are NA. As a workaround, let’s create a new band that flags those cells:

# duplicate one probability layer to create an eighth band
classified$X8 <- classified$X7
# set the new band to 1 wherever the predictions are NA, so that
# which.max returns 8 (a NoData class) for those cells
classified$X8[is.na(classified)] <- 1.0
classified[[8]]
## class       : SpatRaster 
## dimensions  : 3392, 3409, 1  (nrow, ncol, nlyr)
## resolution  : 30, 30  (x, y)
## extent      : 422745, 525015, 1019775, 1121535  (xmin, xmax, ymin, ymax)
## coord. ref. : +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs 
## source      : spat_KUZn4sxaJwkzsZu.tif 
## name        : X8 
## min value   :  0 
## max value   :  1

Now, let’s apply the app function from terra:

# for each cell, return the index (1-8) of the layer with the highest probability
lulc <- app(classified, fun = which.max)

Let’s save the result to disk, just in case anything goes wrong.

Let’s check what lulc is:

lulc
## class       : SpatRaster 
## dimensions  : 3392, 3409, 1  (nrow, ncol, nlyr)
## resolution  : 30, 30  (x, y)
## extent      : 422745, 525015, 1019775, 1121535  (xmin, xmax, ymin, ymax)
## coord. ref. : +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs 
## source      : memory 
## name        : lyr.1 
## min value   :     1 
## max value   :     8
writeRaster(lulc, filename="./landsat/montes_LC_MAX.TIF", 
            datatype="INT1U", overwrite=TRUE)
unique(lulc$lyr.1)
##      lyr.1
## [1,]     1
## [2,]     2
## [3,]     3
## [4,]     4
## [5,]     5
## [6,]     6
## [7,]     7
## [8,]     8
These class values correspond to:

    1. Urbano
    2. Erial
    3. Cultivo
    4. Pasto
    5. Bosque
    6. Arbustal
    7. Agua
    8. NoData
# https://colorbrewer2.org/#type=sequential&scheme=YlGn&n=5
library(dplyr)
## 
## Attaching package: 'dplyr'
## The following objects are masked from 'package:raster':
## 
##     intersect, select, union
## The following objects are masked from 'package:terra':
## 
##     collapse, desc, intersect, near, select, union
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union
library(RColorBrewer)
green_colors <- c('#fc4e2a','#f7fcb9' , '#99d8c9','#006837','#41ae76', '#d9f0a3',
                  '#addd8e', '#2b8cbe','white') %>%
  colorRampPalette()
plot(lulc,  legend=TRUE, axes=TRUE,  maxcell=5000000, col=green_colors(9))

5. Accuracy evaluation

This section discusses how to assess the accuracy of the model to get an idea of how reliable the classified map might be.

5.1 Qualitative evaluation

A first task is to visually evaluate the quality of the classification. To do so, we select a known area and take a closer look at it.

# This is Maria La Baja extent
N <- 1120000
S <- 1100000
W <- 450000
E <- 481000
Easting <- c(W,W,E,E,W)
Northing <- c(N,S,S,N,N)

Next, let’s crop toa13 to the region of interest.

print(M <- matrix(c(W,S,E,N), nrow = 2, ncol = 2))
##         [,1]    [,2]
## [1,]  450000  481000
## [2,] 1100000 1120000
ext.region <- extent(M)
region.brick <- crop(toa13, ext.region)
region.lc <- crop(lulc, ext.region)
region.lc
## class       : SpatRaster 
## dimensions  : 667, 1033, 1  (nrow, ncol, nlyr)
## resolution  : 30, 30  (x, y)
## extent      : 450015, 481005, 1099995, 1120005  (xmin, xmax, ymin, ymax)
## coord. ref. : +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs 
## source      : memory 
## name        : lyr.1 
## min value   :     1 
## max value   :     7
new_colors <- c('#fc4e2a','#f7fcb9' , '#99d8c9','#006837','#41ae76', '#d9f0a3',
                  '#addd8e', '#2b8cbe')  %>%
                        colorRampPalette()
par(mfrow=c(1,2))
plotRGB(region.brick, r = 5, g = 6, b = 4, stretch = "lin")
plot(region.lc,  legend=FALSE, axes=FALSE,  col=new_colors(7))

5.2 Quantitative assessment

To evaluate the model, we can use k-fold cross-validation (you can also do a single fold). In this technique, the data used to fit the model are split into k groups (typically 5). In turn, each group is used for model testing, while the remaining data are used for model training (fitting).

set.seed(63)
# number of folds
k <- 5
# randomly assign each training sample to one of the k folds
j <- sample(rep(1:k, each = round(nrow(sampdata) / k)))
table(j)
## j
##   1   2   3   4   5 
## 419 419 419 419 419

Now we train and test the model five times, each time computing the predictions and storing them, together with the actual values, in a list. Later we use the list to compute the final accuracy.

x <- list()
for (k in 1:5) {
    train <- sampdata[j!= k, ]
    test <- sampdata[j == k, ]
    cart <- rpart(as.factor(class)~., data=train, method = 'class',
                  minsplit = 5)
    pclass <- predict(cart, test)
    # assign each observation to the class with maximum probability
    pclass <- apply(pclass, 1, which.max)
    # bind the reference and predicted classes side by side
    x[[k]] <- cbind(test$class, as.integer(pclass))
}

Now let’s combine the five list elements into a single data.frame using do.call, and compute a confusion matrix.

class <- c("Urbano", "Erial", "Cultivo", "Pasto", "Bosque", "Arbustal", "Agua")
classdf <- data.frame(classvalue = 1:7,
                      classnames = class)
y <- do.call(rbind, x)
y <- data.frame(y)
colnames(y) <- c('observed', 'predicted')
# confusion matrix: rows = observed (reference), columns = predicted
conmat <- table(y)
print(conmat)
##         predicted
## observed   1   2   3   4   5   6   7
##        1  82   7  14  58   1  33   7
##        2  12  50  13  37   9  12  11
##        3  11   2 169  89  28  28  23
##        4   2   8  29 153  41 108   9
##        5   1   1  23  44 146 102  33
##        6   0   3  31  59  79 174   4
##        7   6  17  30  43  34  12 208

Note that the procedure explained above is just an evaluation of the model on (subsets of) the training data.

Thematic accuracy assessment, in the strict sense of the term, requires: (i) validation data collected in the field (or any independent dataset); and (ii) validation data not used to train the prediction model.
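
As an illustration only (it still reuses the IDEAM polygons, so it is not truly independent field data), here is a hedged sketch of checking the map against a fresh sample that was not used for training:

set.seed(99)
# draw a fresh stratified sample, separate from the training points
valsamp <- spatSample(ref_data, size = 100, method = "random",
                      strata = ref_data$Code)
# reference class at the validation points (vector-on-vector extract,
# available in newer terra versions)
obs  <- extract(ref_data, valsamp)$Code
# mapped class at the validation points
pred <- extract(lulc, valsamp)$lyr.1
table(observed = obs, predicted = pred)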

Questions

Question 1: Comment on the misclassification between the different classes.

Question 2: Can you think of ways to improve the accuracy?

Compute the overall accuracy and the “kappa” statistic.

Overall accuracy:

# number of total cases/samples
n <- sum(conmat)
n
## [1] 2096
# number of correctly classified cases per class (the diagonal of conmat)
diag <- diag(conmat)
# Overall Accuracy
OA <- sum(diag) / n
OA
## [1] 0.4685115

Kappa index, computed as (OA - expAccuracy) / (1 - expAccuracy), where expAccuracy is the agreement expected by chance:

# observed (true) cases per class
rowsums <- apply(conmat, 1, sum)
p <- rowsums / n
# predicted cases per class
colsums <- apply(conmat, 2, sum)
q <- colsums / n
expAccuracy <- sum(p*q)
kappa <- (OA - expAccuracy) / (1 - expAccuracy)
kappa
## [1] 0.3680144

Producer and user accuracy:

# Producer accuracy: correct cases / reference (row) totals
PA <- diag / rowsums
# User accuracy: correct cases / predicted (column) totals
UA <- diag / colsums
outAcc <- data.frame(producerAccuracy = PA, userAccuracy = UA)
outAcc

Note that the highest accuracy in this experiment corresponds to class 7 (Agua).

In a technical report, it is important to describe the meaning of these accuracy metrics (producer's accuracy relates to omission errors; user's accuracy relates to commission errors) and to discuss their implications.

6. Questions & further tasks

Question 3: Perform the classification using a Random Forest classifier from the randomForest package (a minimal starter sketch follows after Question 4).

Question 4: Plot the results of the rpart and Random Forest classifiers side by side.
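
A minimal starting point for Question 3, assuming the randomForest package is installed (the ntree value is illustrative):

library(randomForest)
# randomForest does not accept NAs in the predictors
sampcc <- na.omit(sampdata)
rfmodel <- randomForest(class ~ ., data = sampcc, ntree = 250)
# classify the raster with the trained forest
rf_lulc <- predict(toa13, rfmodel, na.rm = TRUE)
plot(rf_lulc)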

Summary: I illustrated here how to conduct land cover classification from multispectral images. I did it using the TOA reflectance values, two vegetation indices, and a water index. How much did the spectral indices contribute to the classification decision rules?

As discussed in class, there are additional spectral transforms that can be used to enrich the information content of the multispectral imagery. You are free to try them in your land cover classification experiments.
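
For instance, a normalized difference moisture index (NDMI) can be computed by reusing the vi() helper defined earlier (a minimal sketch; in the toa13 stack, NIR is layer 5 and SWIR1 is layer 6):

# NDMI = (NIR - SWIR1) / (NIR + SWIR1)
ndmi <- vi(toa13, 5, 6)
plot(ndmi, main = "Montes - NDMI in 2013")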

I used here the CART classification algorithm to start the machine learning path. I did not attempt to parameterize the classification model. As a result, several land cover classes were not correctly represented in the output classification. You are free to parameterize your own land cover classification model.

7. Reproducibility

Cite as: Lizarazo, I, 2021. A tutorial on land cover classification from satellite imagery using R. Available at https://rpubs.com/ials2un/cart_landsat8.

sessionInfo()
## R version 4.0.4 (2021-02-15)
## Platform: x86_64-apple-darwin17.0 (64-bit)
## Running under: macOS Mojave 10.14.6
## 
## Matrix products: default
## BLAS:   /Library/Frameworks/R.framework/Versions/4.0/Resources/lib/libRblas.dylib
## LAPACK: /Library/Frameworks/R.framework/Versions/4.0/Resources/lib/libRlapack.dylib
## 
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
## [1] RColorBrewer_1.1-2 dplyr_1.0.4        rpart_4.1-15       raster_3.4-5      
## [5] terra_1.0-10       sp_1.4-5          
## 
## loaded via a namespace (and not attached):
##  [1] Rcpp_1.0.6        bslib_0.2.4       compiler_4.0.4    pillar_1.4.7     
##  [5] jquerylib_0.1.3   highr_0.8         tools_4.0.4       digest_0.6.27    
##  [9] jsonlite_1.7.2    evaluate_0.14     lifecycle_1.0.0   tibble_3.0.6     
## [13] lattice_0.20-41   pkgconfig_2.0.3   rlang_0.4.10      DBI_1.1.1        
## [17] yaml_2.2.1        rgdal_1.5-23      xfun_0.21         stringr_1.4.0    
## [21] knitr_1.31        generics_0.1.0    vctrs_0.3.6       sass_0.3.1       
## [25] grid_4.0.4        tidyselect_1.1.0  glue_1.4.2        R6_2.5.0         
## [29] rmarkdown_2.7     purrr_0.3.4       magrittr_2.0.1    codetools_0.2-18 
## [33] htmltools_0.5.1.1 ellipsis_0.3.1    assertthat_0.2.1  stringi_1.5.3    
## [37] crayon_1.4.1