library(ggplot2)
setwd("d:/Cursos/Datacamp/Unsupervised Learning")

Unsupervised learning in R

k-means clustering

The original DataCamp exercise supplies some two-dimensional data in a variable called x and shows a scatter plot of it; in these notes the four numeric columns of the built-in iris data are used instead.

In this exercise, your task is to create a k-means model of the data using 3 clusters, then to look at the structure of the resulting model using the summary() function.

# Create the k-means model: km.out
km.out <- kmeans(iris[,-5], 3, nstart=20)

# Inspect the result
summary(km.out)
##              Length Class  Mode   
## cluster      150    -none- numeric
## centers       12    -none- numeric
## totss          1    -none- numeric
## withinss       3    -none- numeric
## tot.withinss   1    -none- numeric
## betweenss      1    -none- numeric
## size           3    -none- numeric
## iter           1    -none- numeric
## ifault         1    -none- numeric

Results of kmeans()

The kmeans() function produces several outputs. In the video, we discussed one output of modeling, the cluster membership.

In this exercise, you will access the cluster component directly. This is useful anytime you need the cluster membership for each observation of the data used to build the clustering model. A future exercise will show an example of how this cluster membership might be used to help communicate the results of k-means modeling.

k-means models also have a print method that gives a human-friendly summary of the basic modeling results. It is invoked by calling print() or by simply typing the name of the model.

# Print the cluster membership component of the model
km.out$cluster
##   [1] 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
##  [36] 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 2 2 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
##  [71] 2 2 2 2 2 2 2 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 2 1 1 1
## [106] 1 2 1 1 1 1 1 1 2 2 1 1 1 1 2 1 2 1 2 1 1 2 2 1 1 1 1 1 2 1 1 1 1 2 1
## [141] 1 1 2 1 1 1 2 1 1 2
# Print the km.out object
km.out
## K-means clustering with 3 clusters of sizes 38, 62, 50
## 
## Cluster means:
##   Sepal.Length Sepal.Width Petal.Length Petal.Width
## 1     6.850000    3.073684     5.742105    2.071053
## 2     5.901613    2.748387     4.393548    1.433871
## 3     5.006000    3.428000     1.462000    0.246000
## 
## Clustering vector:
##   [1] 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
##  [36] 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 2 2 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
##  [71] 2 2 2 2 2 2 2 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 2 1 1 1
## [106] 1 2 1 1 1 1 1 1 2 2 1 1 1 1 2 1 2 1 2 1 1 2 2 1 1 1 1 1 2 1 1 1 1 2 1
## [141] 1 1 2 1 1 1 2 1 1 2
## 
## Within cluster sum of squares by cluster:
## [1] 23.87947 39.82097 15.15100
##  (between_SS / total_SS =  88.4 %)
## 
## Available components:
## 
## [1] "cluster"      "centers"      "totss"        "withinss"    
## [5] "tot.withinss" "betweenss"    "size"         "iter"        
## [9] "ifault"

Visualizing and interpreting results of kmeans()

One of the more intuitive ways to interpret the results of k-means models is by plotting the data as a scatter plot and using color to label the samples’ cluster membership. In this exercise, you will use the standard plot() function to accomplish this.

To create a scatter plot, you can pass data with two features (i.e. columns) to plot() with an extra argument col = km.out$cluster, which sets the color of each point in the scatter plot according to its cluster membership.

# Scatter plot of Petal.Length vs. Petal.Width, colored by cluster membership
plot(Petal.Length ~ Petal.Width, data = iris, col = km.out$cluster,
     main = "k-means with 3 clusters",
     xlab = "", ylab = "")

Handling random algorithms

In the video, you saw how kmeans() randomly initializes the centers of clusters. This random initialization can result in assigning observations to different cluster labels. Also, the random initialization can result in finding different local minima for the k-means algorithm. This exercise will demonstrate both results.

At the top of each plot, the measure of model quality—the total within-cluster sum of squares—is displayed. Look for the model(s) with the lowest error; those are the better models.

Because kmeans() initializes observations to random clusters, it is important to set the random number generator seed for reproducibility.

# Set up 2 x 3 plotting grid
par(mfrow = c(2, 3))
x <- readr::read_csv('xds.csv', col_names = F)
## Parsed with column specification:
## cols(
##   X1 = col_double(),
##   X2 = col_double()
## )
# Set seed
set.seed(1)

for(i in 1:6) {
  # Run kmeans() on x with three clusters and one start
  km.out <- kmeans(x, 3, nstart=1)
  
  # Plot clusters
  plot(x, col = km.out$cluster, 
       main = km.out$tot.withinss, 
       xlab = "", ylab = "")
}


### Selecting number of clusters

The k-means algorithm assumes the number of clusters as part of the input. If you know the number of clusters in advance (e.g. due to certain business constraints) this makes setting the number of clusters easy. However, as you saw in the video, if you do not know the number of clusters and need to determine it, you will need to run the algorithm multiple times, each time with a different number of clusters. From this, you can observe how a measure of model quality changes with the number of clusters.

In this exercise, you will run kmeans() multiple times to see how model quality changes as the number of clusters changes. Plots displaying this information help to determine the number of clusters and are often referred to as scree plots.

The ideal plot will have an elbow where the quality measure improves more slowly as the number of clusters increases. This indicates that the quality of the model is no longer improving substantially as the model complexity (i.e. number of clusters) increases. In other words, the elbow indicates the number of clusters inherent in the data.

# Initialize total within sum of squares error: wss
wss <- 0

# For 1 to 15 cluster centers
for (i in 1:15) {
  km.out <- kmeans(x, centers = i, nstart=20)
  # Save total within sum of squares to wss variable
  wss[i] <- km.out$tot.withinss
}

# Plot total within sum of squares vs. number of clusters
plot(1:15, wss, type = "b", 
     xlab = "Number of Clusters", 
     ylab = "Within groups sum of squares")

# Set k equal to the number of clusters corresponding to the elbow location
k <- 2

Practical matters: working with real data

Dealing with real data is often more challenging than dealing with synthetic data. Synthetic data helps with learning new concepts and techniques, but the next few exercises will deal with data that is closer to the type of real data you might find in your professional or academic pursuits.

The first challenge with the Pokemon data is that there is no pre-determined number of clusters. You will determine the appropriate number of clusters, keeping in mind that with real data the elbow in the scree plot is often less sharp than with synthetic data. Use your judgement to choose the number of clusters.

The second part of this exercise includes plotting the outcomes of the clustering on two dimensions, or features, of the data. These features were chosen somewhat arbitrarily for this exercise. Think about how you would use plotting and clustering to communicate interesting groups of Pokemon to other people.

An additional note: this exercise uses the iter.max argument to kmeans(). As you’ve seen, kmeans() is an iterative algorithm, repeating until some stopping criterion is reached. The default maximum number of iterations for kmeans() is 10, which is not always enough for the algorithm to converge, so we’ll set it to 50 here. To see what happens when kmeans() does not converge, try running the example with a lower limit (e.g. 3), as in the quick sketch below. This is another example of the kind of issue you may run into with real data.
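The following minimal sketch (using randomly generated toy data rather than the pokemon dataset, which is loaded further down) shows the kind of warning kmeans() produces when iter.max is too small.

# Toy data (an assumption for illustration only, not the pokemon data)
set.seed(1)
toy <- matrix(rnorm(2000), ncol = 2)

# With iter.max = 1 the algorithm may stop before its stopping criterion is met,
# in which case R prints a warning such as "did not converge in 1 iteration"
km.short <- kmeans(toy, centers = 5, nstart = 1, iter.max = 1)
km.short$iter  # number of iterations actually performed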

The pokemon dataset, which contains observations of 800 Pokemon characters on 6 dimensions (i.e. features), is available in your workspace.

  • Using kmeans() with nstart = 20, determine the total within sum of square errors for different numbers of clusters (between 1 and 15).
  • Pick an appropriate number of clusters based on these results from the first instruction and assign that number to k.
  • Create a k-means model using k clusters and assign it to the km.out variable.
  • Create a scatter plot of Defense vs. Speed, showing cluster membership for each observation.
pokemon <- readr::read_csv('Pokemon1.csv')
## Parsed with column specification:
## cols(
##   Number = col_integer(),
##   Name = col_character(),
##   Type1 = col_character(),
##   Type2 = col_character(),
##   Total = col_integer(),
##   HitPoints = col_integer(),
##   Attack = col_integer(),
##   Defense = col_integer(),
##   SpecialAttack = col_integer(),
##   SpecialDefense = col_integer(),
##   Speed = col_integer(),
##   Generation = col_integer(),
##   Legendary = col_character()
## )
numcols <- c('HitPoints', 'Attack', 'Defense', 'SpecialAttack', 'SpecialDefense', 'Speed')
# Initialize total within sum of squares error: wss
wss <- 0

# Look over 1 to 15 possible clusters
for (i in 1:15) {
  # Fit the model: km.out
  km.out <- kmeans(pokemon[,numcols], centers = i, nstart = 20, iter.max = 50)
  # Save the within cluster sum of squares
  wss[i] <- km.out$tot.withinss
}

# Produce a scree plot
plot(1:15, wss, type = "b", 
     xlab = "Number of Clusters", 
     ylab = "Within groups sum of squares")

# Select number of clusters
k <- 3

# Build model with k clusters: km.out
km.out <- kmeans(pokemon[,numcols], centers = k, nstart = 20, iter.max = 50)

# View the resulting model
km.out
## K-means clustering with 3 clusters of sizes 355, 270, 175
## 
## Cluster means:
##   HitPoints   Attack   Defense SpecialAttack SpecialDefense    Speed
## 1  54.68732 56.93239  53.64507      52.02254       53.04789 53.58873
## 2  81.90370 96.15926  77.65556     104.12222       86.87778 94.71111
## 3  79.30857 97.29714 108.93143      66.71429       87.04571 57.29143
## 
## Clustering vector:
##   [1] 1 1 2 2 1 1 2 2 2 1 1 3 2 1 1 1 1 1 1 2 1 1 2 2 1 1 1 2 1 2 1 2 1 3 1
##  [36] 1 3 1 1 2 1 2 1 2 1 1 1 2 1 1 2 1 3 1 2 1 1 1 2 1 2 1 2 1 2 1 1 3 1 2
##  [71] 2 2 1 3 3 1 1 2 1 2 1 3 3 1 2 1 3 3 1 2 1 1 2 1 3 1 3 1 3 1 2 2 2 3 1
## [106] 3 1 3 1 2 1 2 1 3 3 3 1 1 3 1 3 1 3 3 3 1 2 1 3 1 2 2 2 2 2 2 3 3 3 1
## [141] 3 3 3 1 1 2 2 2 1 1 3 1 3 2 2 3 2 2 2 1 1 2 2 2 2 2 1 1 3 1 1 2 1 1 3
## [176] 1 1 1 2 1 1 1 1 2 1 2 1 1 1 1 1 1 2 1 1 2 2 3 1 1 3 2 1 1 2 1 1 1 1 1
## [211] 3 2 3 1 3 2 1 1 2 1 3 1 3 3 3 1 3 1 3 3 3 3 3 1 1 3 1 3 1 3 1 1 2 1 2
## [246] 3 1 2 2 2 1 3 2 2 1 1 3 1 1 1 3 2 2 2 3 1 1 3 3 2 2 2 1 1 2 2 1 1 2 2
## [281] 1 1 3 3 1 1 1 1 1 1 1 1 1 1 1 2 1 1 2 1 1 1 3 1 1 2 2 1 1 1 3 1 1 2 1
## [316] 2 1 1 1 2 1 3 1 3 1 1 1 3 1 3 1 3 3 3 1 1 2 1 2 2 1 1 1 1 1 1 3 1 2 2
## [351] 1 2 1 2 3 3 1 2 1 1 1 2 1 2 1 3 2 2 2 2 3 1 3 1 3 1 3 1 3 1 3 1 2 1 3
## [386] 1 2 2 1 3 3 1 2 2 1 1 2 2 1 1 2 1 3 3 3 1 1 3 2 2 1 3 3 2 3 3 3 2 2 2
## [421] 2 2 2 3 2 2 2 2 2 2 3 2 1 3 3 1 1 2 1 1 2 1 1 2 1 1 1 1 1 1 2 1 2 1 3
## [456] 1 3 1 3 3 3 2 1 3 1 1 2 1 2 1 3 2 1 2 1 2 2 2 2 1 2 1 1 2 1 3 1 1 1 1
## [491] 3 1 1 2 2 1 1 2 2 1 3 1 3 1 2 3 1 2 1 1 2 3 2 2 3 3 3 2 2 2 2 3 2 3 2
## [526] 2 2 2 3 3 2 2 2 2 2 2 2 3 2 2 2 2 2 2 2 2 3 2 2 2 2 2 2 2 1 1 2 1 1 2
## [561] 1 1 2 1 1 1 1 3 1 2 1 2 1 2 1 2 1 3 1 1 2 1 2 1 3 3 1 2 1 2 3 3 1 3 3
## [596] 1 1 2 3 3 1 1 2 1 1 2 1 2 1 2 2 1 1 2 1 3 2 2 1 3 1 3 2 1 3 1 3 1 2 1
## [631] 3 1 2 1 2 1 1 3 1 1 2 1 2 1 1 2 1 2 2 1 3 1 3 1 2 3 1 2 1 3 1 3 3 1 1
## [666] 2 1 2 1 1 2 1 3 3 1 3 2 1 2 3 1 2 3 1 3 1 3 3 1 3 1 3 2 3 1 1 2 1 2 2
## [701] 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 3 3 1 1 2 1 1 2 1 1 1 1 2 1 1 1
## [736] 1 2 1 1 2 1 2 1 3 2 1 2 2 1 3 2 3 1 3 1 2 1 3 1 3 1 3 1 2 1 2 1 3 1 2
## [771] 2 2 2 3 1 2 2 3 1 3 1 1 1 1 3 3 3 3 1 3 1 2 2 2 3 3 2 2 2 2
## 
## Within cluster sum of squares by cluster:
## [1]  812079.9 1018348.0  709020.5
##  (between_SS / total_SS =  40.8 %)
## 
## Available components:
## 
## [1] "cluster"      "centers"      "totss"        "withinss"    
## [5] "tot.withinss" "betweenss"    "size"         "iter"        
## [9] "ifault"
# Plot of Defense vs. Speed by cluster membership
plot(pokemon[, c("Defense", "Speed")],
     col = km.out$cluster,
     main = paste("k-means clustering of Pokemon with", k, "clusters"),
     xlab = "Defense", ylab = "Speed")

Hierarchical clustering

Hierarchical clustering with results

In this exercise, you will create your first hierarchical clustering model using the hclust() function.

We have created some data that has two dimensions and placed it in a variable called x (the code below uses only the first 50 observations of x). Your task is to create a hierarchical clustering model of x. Remember from the video that the first step of hierarchical clustering is determining the similarity between observations, which you will do with the dist() function.

You will look at the structure of the resulting model using the summary() function.

# Create hierarchical clustering model: hclust.out
hclust.out <- hclust(dist(x[1:50,]))

# Inspect the result
summary(hclust.out)
##             Length Class  Mode     
## merge       98     -none- numeric  
## height      49     -none- numeric  
## order       50     -none- numeric  
## labels       0     -none- NULL     
## method       1     -none- character
## call         2     -none- call     
## dist.method  1     -none- character

Now that you’ve made your first hierarchical clustering model, let’s learn how to use it to solve problems.

### Cutting the tree

Remember from the video that cutree() is the R function that cuts a hierarchical model. The h and k arguments to cutree() allow you to cut the tree based on a certain height h or a certain number of clusters k.

In this exercise, you will use cutree() to cut the hierarchical model you created earlier based on each of these two criteria. The hclust.out model is available in your workspace.

  • Cut the hclust.out model at height 7.
  • Cut the hclust.out model to create 3 clusters.

# Cut by height
cutree(hclust.out,h=7)
##  [1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
## [36] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
# Cut by number of clusters
cutree(hclust.out, k=3)
##  [1] 1 2 3 1 3 2 1 2 3 2 1 3 2 2 2 3 2 2 2 1 2 2 2 1 1 2 2 2 3 2 1 3 3 2 1
## [36] 2 2 2 2 2 3 2 3 2 2 3 2 3 2 1
plot(hclust.out)
# abline() has no "k" argument (the original call only triggered a warning);
# use h to mark the height at which the tree is cut
abline(h = 7, col = "red")

If you’re wondering what the output means, remember, there are n observations in the original dataset x. The output of each cutree() call represents the cluster assignments for each observation in the original dataset. Great work!

Linking clusters in hierarchical clustering

  • How is the distance between two clusters determined? What are the rules?
  • Four common methods determine which clusters should be linked (a toy sketch follows this list)
  • Complete: computes the pairwise similarity between all observations in cluster 1 and cluster 2, and uses the largest of these similarities
  • Single: same as above, but uses the smallest of the similarities
  • Average: same as above, but uses the average of the similarities
  • Centroid: finds the centroid of cluster 1 and the centroid of cluster 2, and uses the similarity between the two centroids
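As a toy illustration (hypothetical one-dimensional data, not part of the course), the sketch below reduces the pairwise distances between two small clusters to a single between-cluster distance under each rule. Note that R’s hclust() works with distances (dissimilarities) rather than similarities.

# Two tiny clusters of one-dimensional points (hypothetical example)
c1 <- c(1, 2, 3)
c2 <- c(8, 9)

# All pairwise distances between observations in cluster 1 and cluster 2
d <- abs(outer(c1, c2, "-"))

max(d)                            # Complete linkage: largest pairwise distance
min(d)                            # Single linkage: smallest pairwise distance
mean(d)                           # Average linkage: mean pairwise distance
dist(rbind(mean(c1), mean(c2)))   # Centroid linkage: distance between centroids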

Practical matters

  • Data on different scales can cause undesirable results in clustering methods
  • The solution is to scale the data so that all features have the same mean and standard deviation (see the sketch after this list)
  • Subtract the mean of a feature from all its observations
  • Divide each feature by its standard deviation
  • The normalized features then have a mean of zero and a standard deviation of one
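The sketch below (using the built-in mtcars data as a stand-in) shows that this manual standardization is exactly what scale() does.

# Standardize manually: subtract column means, then divide by column standard deviations
m <- as.matrix(mtcars)
manual <- sweep(m, 2, colMeans(m), "-")
manual <- sweep(manual, 2, apply(m, 2, sd), "/")

# scale() performs the same standardization
all.equal(manual, scale(m), check.attributes = FALSE)

# Normalized features have mean 0 and standard deviation 1
round(colMeans(scale(m)), 10)
apply(scale(m), 2, sd)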

Linkage methods

In this exercise, you will produce hierarchical clustering models using different linkages and plot the dendrogram for each, observing the overall structure of the trees.

You’ll be asked to interpret the results in the next exercise.

  • Produce three hierarchical clustering models on x using the “complete”, “average”, and “single” linkage methods, respectively.
  • Plot a dendrogram for each model, using titles of “Complete”, “Average”, and “Single”, respectively.
# Cluster using complete linkage: hclust.complete
hclust.complete <- hclust(dist(x[1:50,]), method = 'complete')

# Cluster using average linkage: hclust.average
hclust.average <- hclust(dist(x[1:50,]), method='average')

# Cluster using single linkage: hclust.single
hclust.single <- hclust(dist(x[1:50,]),method='single')

# Plot dendrogram of hclust.complete
plot(hclust.complete, main='Complete')

# Plot dendrogram of hclust.average
plot(hclust.average, main='Average')

# Plot dendrogram of hclust.single
plot(hclust.single, main='Single')

Before moving on, make sure to toggle through the plots to compare and contrast the three dendrograms you created. You’ll learn about the implications of these differences in the next exercise. Whether you want balanced or unbalanced trees for your hierarchical clustering model depends on the context of the problem you’re trying to solve. Balanced trees are essential if you want a roughly equal number of observations assigned to each cluster. On the other hand, if you want to detect outliers, for example, an unbalanced tree is more desirable, because pruning an unbalanced tree can result in most observations assigned to one cluster and only a few observations assigned to the others.

Practical matters: scaling

Recall from the video that clustering real data may require scaling the features if they have different distributions. So far in this chapter, you have been working with synthetic data that did not need scaling.

In this exercise, you will go back to working with “real” data, the pokemon dataset introduced in the first chapter. You will observe the distribution (mean and standard deviation) of each feature, scale the data accordingly, then produce a hierarchical clustering model using the complete linkage method.

  1. Observe the mean of each variable in pokemon using the colMeans() function.
  2. Observe the standard deviation of each variable using the apply() and sd() functions. Since the variables are the columns of your matrix, make sure to specify 2 as the MARGIN argument to apply().
  3. Scale the pokemon data using the scale() function and store the result in pokemon.scaled.
  4. Create a hierarchical clustering model of the pokemon.scaled data using the complete linkage method. Manually specify the method argument and store the result in hclust.pokemon.
# Use a plain data frame so that row names can be set (tibbles don't support them)
poknum <- as.data.frame(pokemon[, numcols])
row.names(poknum) <- pokemon$Name
# View column means
colMeans(poknum)
##      HitPoints         Attack        Defense  SpecialAttack SpecialDefense 
##       69.25875       79.00125       73.84250       72.82000       71.90250 
##          Speed 
##       68.27750
# View column standard deviations
apply(poknum,2,sd)
##      HitPoints         Attack        Defense  SpecialAttack SpecialDefense 
##       25.53467       32.45737       31.18350       32.72229       27.82892 
##          Speed 
##       29.06047
# Scale the data
pokemon.scaled <- scale(poknum)
#normalize data
source('d:/cursos/Giufunctions/utils.r')
pokemon.norm <- normalizeDf(poknum)
# Create hierarchical clustering model: hclust.pokemon
hclust.pokemon <- hclust(dist(pokemon.scaled), method='complete')
plot(hclust.pokemon)

hclust.norm <- hclust(dist(pokemon.norm), method='complete')
plot(hclust.norm)

Let’s quickly recap what we just did. You first checked whether the column means and standard deviations vary. Because they do, you scaled the data, computed the Euclidean distance matrix from the scaled data, and passed it into the hclust() function.

Comparing kmeans() and hclust()

Comparing k-means and hierarchical clustering, you’ll see the two methods produce different cluster memberships. This is because the two algorithms make different assumptions about how the data is generated. In a more advanced course, we could choose to use one model over another based on the quality of the models’ assumptions, but for now, it’s enough to observe that they are different.

This exercise will have you compare results from the two models on the pokemon dataset to see how they differ. In these notes, the results from running k-means clustering on the pokemon data (for 3 clusters) are stored as km.out (the original exercise calls this object km.pokemon). The hierarchical clustering model you created in the previous exercise is still available as hclust.pokemon.

  1. Using cutree() on hclust.pokemon, assign cluster membership to each observation. Assume three clusters and assign the result to a vector called cut.pokemon.
  2. Using table(), compare cluster membership between the two clustering methods. Recall that the different components of k-means model objects can be accessed with the $ operator.
# Apply cutree() to hclust.pokemon: cut.pokemon
cut.pokemon <- cutree(hclust.pokemon, k = 3)

# Compare methods
table(km.out$cluster, cut.pokemon)
##    cut.pokemon
##       1   2   3
##   1 350   5   0
##   2 267   3   0
##   3 171   3   1

Looking at the table, it looks like the hierarchical clustering model assigns most of the observations to cluster 1, while the k-means algorithm distributes the observations relatively evenly among all clusters. It’s important to note that there’s no consensus on which method produces better clusters. The job of the analyst in unsupervised clustering is to observe the cluster assignments and make a judgment call as to which method provides more insights into the data.

Principal Component Analysis (PCA)

Dimensionality Reduction

  • A popular method is PCA (principal component analysis)
  • There are three goals when finding a lower-dimensional representation of the features (see the sketch after this list)
  • Find linear combinations of the variables to create the principal components
  • Maintain as much of the variance in the data as possible
  • The principal components are uncorrelated (i.e. orthogonal to each other)
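A minimal sketch of these three properties (using the built-in iris measurements as stand-in data, since the pokemon PCA only comes later):

# PCA on the four iris measurements, scaled to unit variance
pr.iris <- prcomp(iris[, -5], scale. = TRUE)

pr.iris$rotation          # the linear combinations (loadings) of the original variables
round(cor(pr.iris$x), 3)  # off-diagonal entries are ~0: components are uncorrelated
pr.iris$sdev^2            # variance retained by each component, in decreasing order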

PCA using prcomp()

In this exercise, you will create your first PCA model and observe the diagnostic results.

We will reuse the Pokemon data from earlier; the six numeric features are stored in the variable poknum. Your task is to create a PCA model of the data, then to inspect the resulting model using the summary() function.

  • Create a PCA model of the data in poknum, setting scale to TRUE. Store the result in pr.out.
  • Inspect the result with the summary() function.
# Perform scaled PCA: pr.out
pr.out <- prcomp(poknum, scale=T)
# Inspect model output
summary(pr.out)
## Importance of components:
##                           PC1    PC2    PC3    PC4     PC5     PC6
## Standard deviation     1.6466 1.0457 0.8825 0.8489 0.65463 0.51681
## Proportion of Variance 0.4519 0.1822 0.1298 0.1201 0.07142 0.04451
## Cumulative Proportion  0.4519 0.6342 0.7640 0.8841 0.95549 1.00000

The first two principal components describe around 63% of the variance.

Additional results of PCA

PCA models in R produce additional diagnostic and output components:

  • center: the column means used to center the data, or FALSE if the data weren’t centered
  • scale: the column standard deviations used to scale the data, or FALSE if the data weren’t scaled
  • rotation: the directions of the principal component vectors in terms of the original features/variables. This information allows you to define new data in terms of the original principal components
  • x: the value of each observation in the original dataset projected to the principal components

You can access these in the same way as other model components. For example, use pr.out$rotation to access the rotation component.
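As a sketch of how these pieces fit together (using the pr.out model and poknum data from above): standardizing the original columns with the stored center and scale and multiplying by the rotation matrix reproduces the scores in pr.out$x, which is also what predict(pr.out, newdata) does for new observations.

# Reconstruct the principal component scores from center, scale, and rotation
scores <- scale(as.matrix(poknum),
                center = pr.out$center,
                scale  = pr.out$scale) %*% pr.out$rotation
all.equal(scores, pr.out$x, check.attributes = FALSE)  # TRUE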

dim(pr.out$rotation)
## [1] 6 6
dim(pr.out$x)
## [1] 800   6
biplot(pr.out)

Variance explained

The second common plot type for understanding PCA models is a scree plot. A scree plot shows the variance explained as the number of principal components increases. Sometimes the cumulative variance explained is plotted as well.

In this and the next exercise, you will prepare data from the pr.out model you created at the beginning of the chapter for use in a scree plot. Preparing the data for plotting is required because there is not a built-in function in R to create this type of plot.

# Variability of each principal component: pr.var
pr.var <- pr.out$sdev^2

# Variance explained by each principal component: pve
pve <- pr.var / sum(pr.var)
pve
## [1] 0.45190665 0.18225358 0.12979086 0.12011089 0.07142337 0.04451466

Visualize variance explained

Now you will create a scree plot showing the proportion of variance explained by each principal component, as well as the cumulative proportion of variance explained.

Recall from the video that these plots can help to determine the number of principal components to retain. One way to determine the number of principal components to retain is by looking for an elbow in the scree plot showing that as the number of principal components increases, the rate at which variance is explained decreases substantially. In the absence of a clear elbow, you can use the scree plot as a guide for setting a threshold.

# Plot variance explained for each principal component
plot(pve, xlab = "Principal Component",
     ylab = "Proportion of Variance Explained",
     ylim = c(0, 1), type = "b")

# Plot cumulative proportion of variance explained
plot(cumsum(pve), xlab = "Principal Component",
     ylab = "Cumulative Proportion of Variance Explained",
     ylim = c(0, 1), type = "b")

Notice that when the number of principal components is equal to the number of original features in the data, the cumulative proportion of variance explained is 1.

Practical issues: scaling

You saw in the video that scaling your data before doing PCA changes the results of the PCA modeling. Here, you will perform PCA with and without scaling, then visualize the results using biplots.

Sometimes scaling is appropriate when the variances of the variables are substantially different. This is commonly the case when variables have different units of measurement, for example, degrees Fahrenheit (temperature) and miles (distance). Making the decision to use scaling is an important step in performing a principal component analysis.

# Mean of each variable
colMeans(poknum)
##      HitPoints         Attack        Defense  SpecialAttack SpecialDefense 
##       69.25875       79.00125       73.84250       72.82000       71.90250 
##          Speed 
##       68.27750
# Standard deviation of each variable
apply(poknum, 2, sd)
##      HitPoints         Attack        Defense  SpecialAttack SpecialDefense 
##       25.53467       32.45737       31.18350       32.72229       27.82892 
##          Speed 
##       29.06047
# PCA model with scaling: pr.with.scaling
pr.with.scaling <- prcomp(poknum,scale=T)

# PCA model without scaling: pr.without.scaling
pr.without.scaling <- prcomp(poknum)

# Create biplots of both for comparison
par(mfrow=c(1,2))
biplot(pr.with.scaling)
biplot(pr.without.scaling)

Case Study

  • A complete analysis using unsupervised learning
  • Reinforces all the concepts we have learned
  • Adds steps not covered before (like preparing the data and selecting good features for supervised learning)
  • Emphasizes creativity
  • The data describe human breast masses that were or were not malignant
  • 10 features are measured for each cell nucleus
  • Each feature is a summary statistic of the cells in that mass
  • Includes the diagnosis (target) - it can be used for supervised learning but will not be used during the unsupervised analysis

  • Overall steps:
  • Download and prepare the data
  • Exploratory data analysis (EDA)
  • Perform PCA and interpret the results
  • Complete two types of clustering
  • Understand and compare the two types
  • Combine PCA and clustering

Preparing the data

Unlike prior chapters, where we prepared the data for you for unsupervised learning, the goal of this chapter is to step you through a more realistic and complete workflow.

Recall from the video that the first step is to download and prepare the data.

url <- "http://s3.amazonaws.com/assets.datacamp.com/production/course_1903/datasets/WisconsinCancer.csv"

# Download the data: wisc.df
wisc.df <- read.csv(url)

# Convert the features of the data: wisc.data
wisc.data <- as.matrix(wisc.df[,3:32])

# Set the row names of wisc.data
row.names(wisc.data) <- wisc.df$id
head(wisc.df)
##         id diagnosis radius_mean texture_mean perimeter_mean area_mean
## 1   842302         M       17.99        10.38         122.80    1001.0
## 2   842517         M       20.57        17.77         132.90    1326.0
## 3 84300903         M       19.69        21.25         130.00    1203.0
## 4 84348301         M       11.42        20.38          77.58     386.1
## 5 84358402         M       20.29        14.34         135.10    1297.0
## 6   843786         M       12.45        15.70          82.57     477.1
##   smoothness_mean compactness_mean concavity_mean concave.points_mean
## 1         0.11840          0.27760         0.3001             0.14710
## 2         0.08474          0.07864         0.0869             0.07017
## 3         0.10960          0.15990         0.1974             0.12790
## 4         0.14250          0.28390         0.2414             0.10520
## 5         0.10030          0.13280         0.1980             0.10430
## 6         0.12780          0.17000         0.1578             0.08089
##   symmetry_mean fractal_dimension_mean radius_se texture_se perimeter_se
## 1        0.2419                0.07871    1.0950     0.9053        8.589
## 2        0.1812                0.05667    0.5435     0.7339        3.398
## 3        0.2069                0.05999    0.7456     0.7869        4.585
## 4        0.2597                0.09744    0.4956     1.1560        3.445
## 5        0.1809                0.05883    0.7572     0.7813        5.438
## 6        0.2087                0.07613    0.3345     0.8902        2.217
##   area_se smoothness_se compactness_se concavity_se concave.points_se
## 1  153.40      0.006399        0.04904      0.05373           0.01587
## 2   74.08      0.005225        0.01308      0.01860           0.01340
## 3   94.03      0.006150        0.04006      0.03832           0.02058
## 4   27.23      0.009110        0.07458      0.05661           0.01867
## 5   94.44      0.011490        0.02461      0.05688           0.01885
## 6   27.19      0.007510        0.03345      0.03672           0.01137
##   symmetry_se fractal_dimension_se radius_worst texture_worst
## 1     0.03003             0.006193        25.38         17.33
## 2     0.01389             0.003532        24.99         23.41
## 3     0.02250             0.004571        23.57         25.53
## 4     0.05963             0.009208        14.91         26.50
## 5     0.01756             0.005115        22.54         16.67
## 6     0.02165             0.005082        15.47         23.75
##   perimeter_worst area_worst smoothness_worst compactness_worst
## 1          184.60     2019.0           0.1622            0.6656
## 2          158.80     1956.0           0.1238            0.1866
## 3          152.50     1709.0           0.1444            0.4245
## 4           98.87      567.7           0.2098            0.8663
## 5          152.20     1575.0           0.1374            0.2050
## 6          103.40      741.6           0.1791            0.5249
##   concavity_worst concave.points_worst symmetry_worst
## 1          0.7119               0.2654         0.4601
## 2          0.2416               0.1860         0.2750
## 3          0.4504               0.2430         0.3613
## 4          0.6869               0.2575         0.6638
## 5          0.4000               0.1625         0.2364
## 6          0.5355               0.1741         0.3985
##   fractal_dimension_worst  X
## 1                 0.11890 NA
## 2                 0.08902 NA
## 3                 0.08758 NA
## 4                 0.17300 NA
## 5                 0.07678 NA
## 6                 0.12440 NA
# Create diagnosis vector
diagnosis <- as.numeric(wisc.df$diagnosis == 'M')

Exploratory data analysis

The first step of any data analysis, unsupervised or supervised, is to familiarize yourself with the data.

The variables you created before, wisc.data and diagnosis, are still available in your workspace. Explore the data to answer the following questions:

  1. How many observations are in this dataset?
  2. How many variables/features in the data are suffixed with _mean?
  3. How many of the observations have a malignant diagnosis?
#1 how many obs?
dim(wisc.data)
## [1] 569  30
#2 how many vars with mean?
meancols <- colnames(wisc.data[,grep('_mean',colnames(wisc.data))])
print(meancols)
##  [1] "radius_mean"            "texture_mean"          
##  [3] "perimeter_mean"         "area_mean"             
##  [5] "smoothness_mean"        "compactness_mean"      
##  [7] "concavity_mean"         "concave.points_mean"   
##  [9] "symmetry_mean"          "fractal_dimension_mean"
print(length(meancols))
## [1] 10
#3. how many obs are malign?
length(diagnosis[diagnosis == 1])
## [1] 212

Performing PCA

The next step in your analysis is to perform PCA on wisc.data.

You saw in the last chapter that it’s important to check if the data need to be scaled before performing PCA. Recall two common reasons for scaling data:

  • The input variables use different units of measurement.
  • The input variables have significantly different variances.
# Check column means and standard deviations
colMeans(wisc.data)
##             radius_mean            texture_mean          perimeter_mean 
##            1.412729e+01            1.928965e+01            9.196903e+01 
##               area_mean         smoothness_mean        compactness_mean 
##            6.548891e+02            9.636028e-02            1.043410e-01 
##          concavity_mean     concave.points_mean           symmetry_mean 
##            8.879932e-02            4.891915e-02            1.811619e-01 
##  fractal_dimension_mean               radius_se              texture_se 
##            6.279761e-02            4.051721e-01            1.216853e+00 
##            perimeter_se                 area_se           smoothness_se 
##            2.866059e+00            4.033708e+01            7.040979e-03 
##          compactness_se            concavity_se       concave.points_se 
##            2.547814e-02            3.189372e-02            1.179614e-02 
##             symmetry_se    fractal_dimension_se            radius_worst 
##            2.054230e-02            3.794904e-03            1.626919e+01 
##           texture_worst         perimeter_worst              area_worst 
##            2.567722e+01            1.072612e+02            8.805831e+02 
##        smoothness_worst       compactness_worst         concavity_worst 
##            1.323686e-01            2.542650e-01            2.721885e-01 
##    concave.points_worst          symmetry_worst fractal_dimension_worst 
##            1.146062e-01            2.900756e-01            8.394582e-02
apply(wisc.data,2, sd)
##             radius_mean            texture_mean          perimeter_mean 
##            3.524049e+00            4.301036e+00            2.429898e+01 
##               area_mean         smoothness_mean        compactness_mean 
##            3.519141e+02            1.406413e-02            5.281276e-02 
##          concavity_mean     concave.points_mean           symmetry_mean 
##            7.971981e-02            3.880284e-02            2.741428e-02 
##  fractal_dimension_mean               radius_se              texture_se 
##            7.060363e-03            2.773127e-01            5.516484e-01 
##            perimeter_se                 area_se           smoothness_se 
##            2.021855e+00            4.549101e+01            3.002518e-03 
##          compactness_se            concavity_se       concave.points_se 
##            1.790818e-02            3.018606e-02            6.170285e-03 
##             symmetry_se    fractal_dimension_se            radius_worst 
##            8.266372e-03            2.646071e-03            4.833242e+00 
##           texture_worst         perimeter_worst              area_worst 
##            6.146258e+00            3.360254e+01            5.693570e+02 
##        smoothness_worst       compactness_worst         concavity_worst 
##            2.283243e-02            1.573365e-01            2.086243e-01 
##    concave.points_worst          symmetry_worst fractal_dimension_worst 
##            6.573234e-02            6.186747e-02            1.806127e-02
# Execute PCA, scaling if appropriate: wisc.pr
wisc.pr <- prcomp(wisc.data, scale=T)

# Look at summary of results
summary(wisc.pr)
## Importance of components:
##                           PC1    PC2     PC3     PC4     PC5     PC6
## Standard deviation     3.6444 2.3857 1.67867 1.40735 1.28403 1.09880
## Proportion of Variance 0.4427 0.1897 0.09393 0.06602 0.05496 0.04025
## Cumulative Proportion  0.4427 0.6324 0.72636 0.79239 0.84734 0.88759
##                            PC7     PC8    PC9    PC10   PC11    PC12
## Standard deviation     0.82172 0.69037 0.6457 0.59219 0.5421 0.51104
## Proportion of Variance 0.02251 0.01589 0.0139 0.01169 0.0098 0.00871
## Cumulative Proportion  0.91010 0.92598 0.9399 0.95157 0.9614 0.97007
##                           PC13    PC14    PC15    PC16    PC17    PC18
## Standard deviation     0.49128 0.39624 0.30681 0.28260 0.24372 0.22939
## Proportion of Variance 0.00805 0.00523 0.00314 0.00266 0.00198 0.00175
## Cumulative Proportion  0.97812 0.98335 0.98649 0.98915 0.99113 0.99288
##                           PC19    PC20   PC21    PC22    PC23   PC24
## Standard deviation     0.22244 0.17652 0.1731 0.16565 0.15602 0.1344
## Proportion of Variance 0.00165 0.00104 0.0010 0.00091 0.00081 0.0006
## Cumulative Proportion  0.99453 0.99557 0.9966 0.99749 0.99830 0.9989
##                           PC25    PC26    PC27    PC28    PC29    PC30
## Standard deviation     0.12442 0.09043 0.08307 0.03987 0.02736 0.01153
## Proportion of Variance 0.00052 0.00027 0.00023 0.00005 0.00002 0.00000
## Cumulative Proportion  0.99942 0.99969 0.99992 0.99997 1.00000 1.00000

Interpreting PCA results

Now you’ll use some visualizations to better understand your PCA model. You were introduced to one of these visualizations, the biplot, in an earlier chapter.

# Create a biplot of wisc.pr
biplot(wisc.pr)

# Scatter plot observations by components 1 and 2
plot(wisc.pr$x[, c(1, 2)], col = (diagnosis + 1), 
     xlab = "PC1", ylab = "PC2")

# Repeat for components 1 and 3
plot(wisc.pr$x[,c(1,3)], col = (diagnosis + 1), 
     xlab = "PC1", ylab = "PC3")

# Do additional data exploration of your choosing below (optional)

Because principal component 2 explains more variance in the original data than principal component 3, the first plot has a cleaner cut separating the two subgroups.

Variance explained

In this exercise, you will produce scree plots showing the proportion of variance explained as the number of principal components increases. The data from PCA must be prepared for these plots, as there is not a built-in function in R to create them directly from the PCA model.

As you look at these plots, ask yourself if there’s an elbow in the amount of variance explained that might lead you to pick a natural number of principal components. If an obvious elbow does not exist, as is typical in real-world datasets, consider how else you might determine the number of principal components to retain based on the scree plot.

# Set up 1 x 2 plotting grid
par(mfrow = c(1, 2))

# Calculate variability of each component
pr.var <- wisc.pr$sdev^2

# Variance explained by each principal component: pve
pve <- pr.var/sum(pr.var)

# Plot variance explained for each principal component
plot(pve, xlab = "Principal Component", 
     ylab = "Proportion of Variance Explained", 
     ylim = c(0, 1), type = "b")

# Plot cumulative proportion of variance explained
plot(cumsum(pve), xlab = "Principal Component", 
     ylab = "Cumulative Proportion of Variance Explained", 
     ylim = c(0, 1), type = "b")

Before moving on, answer the following question: what is the minimum number of principal components needed to explain 80% of the variance in the data? Write it down, as you may need it in the next exercise; a quick way to compute it is sketched below.
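A minimal sketch answering that question in code, using the pve vector computed above:

# Smallest number of principal components whose cumulative proportion of
# variance explained reaches 80%
which(cumsum(pve) >= 0.80)[1]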

### Communicating PCA results

  1. For the first principal component, what is the component of the loading vector for the feature concave.points_mean? -0.26085376
  2. What is the minimum number of principal components required to explain 80% of the variance of the data? 5
wisc.pr$rotation[1:10,1:2]
##                                PC1         PC2
## radius_mean            -0.21890244  0.23385713
## texture_mean           -0.10372458  0.05970609
## perimeter_mean         -0.22753729  0.21518136
## area_mean              -0.22099499  0.23107671
## smoothness_mean        -0.14258969 -0.18611302
## compactness_mean       -0.23928535 -0.15189161
## concavity_mean         -0.25840048 -0.06016536
## concave.points_mean    -0.26085376  0.03476750
## symmetry_mean          -0.13816696 -0.19034877
## fractal_dimension_mean -0.06436335 -0.36657547

Hierarchical clustering of case data

The goal of this exercise is to do hierarchical clustering of the observations. Recall from Chapter 2 that this type of clustering does not assume in advance the number of natural groups that exist in the data.

As part of the preparation for hierarchical clustering, the distances between all pairs of observations are computed. Furthermore, there are different ways to link clusters together, with single, complete, and average being the most common linkage methods.

# Scale the wisc.data data: data.scaled
data.scaled <- scale(wisc.data)

# Calculate the (Euclidean) distances: data.dist
data.dist <- dist(data.scaled)

# Create a hierarchical clustering model: wisc.hclust
wisc.hclust <- hclust(data.dist, method='complete')
plot(wisc.hclust)

### Selecting number of clusters

In this exercise, we will compare the outputs from your hierarchical clustering model to the actual diagnoses. Normally when performing unsupervised learning like this, a target variable isn’t available. We do have it with this dataset, however, so it can be used to check the performance of the clustering model.

When performing supervised learning—that is, when you’re trying to predict some target variable of interest and that target variable is available in the original data—using clustering to create new features may or may not improve the performance of the final model. This exercise will help you determine if, in this case, hierarchical clustering provides a promising new feature.

# Cut tree so that it has 4 clusters: wisc.hclust.clusters
wisc.hclust.clusters <- cutree(wisc.hclust, k=4)

# Compare cluster membership to actual diagnoses
table(wisc.hclust.clusters,diagnosis)
##                     diagnosis
## wisc.hclust.clusters   0   1
##                    1  12 165
##                    2   2   5
##                    3 343  40
##                    4   0   2

Four clusters were picked after some exploration. Before moving on, you may want to explore how different numbers of clusters affect the ability of the hierarchical clustering to separate the different diagnoses; a quick sketch of that exploration follows.
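A minimal sketch of that exploration (not part of the original exercise), tabulating the cut against the diagnoses for several values of k:

# Compare cluster membership to diagnosis for different numbers of clusters
for (n.clust in 2:6) {
  cat("Number of clusters:", n.clust, "\n")
  print(table(cutree(wisc.hclust, k = n.clust), diagnosis))
}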

k-means clustering and comparing results

As you now know, there are two main types of clustering: hierarchical and k-means.

In this exercise, we will create a k-means clustering model on the Wisconsin breast cancer data and compare the results to the actual diagnoses and the results of your hierarchical clustering model. Take some time to see how each clustering model performs in terms of separating the two diagnoses and how the clustering models compare to each other.

# Create a k-means model on wisc.data: wisc.km
wisc.km <- kmeans(scale(wisc.data), centers = 2, nstart = 20)

# Compare k-means to actual diagnoses
table(wisc.km$cluster, diagnosis)
##    diagnosis
##       0   1
##   1  14 175
##   2 343  37
# Compare k-means to hierarchical clustering
table(wisc.km$cluster,wisc.hclust.clusters)
##    wisc.hclust.clusters
##       1   2   3   4
##   1 160   7  20   2
##   2  17   0 363   0

Looking at the second table you generated, it looks like clusters 1, 2, and 4 from the hierarchical clustering model can be interpreted as the equivalent of cluster 1 from the k-means algorithm, and cluster 3 can be interpreted as the equivalent of cluster 2.

### Clustering on PCA results

In this final exercise, you will put together several steps you used earlier and, in doing so, you will experience some of the creativity that is typical in unsupervised learning.

Recall from earlier exercises that the PCA model required significantly fewer features to describe 80% and 95% of the variability of the data. In addition to normalizing data and potentially avoiding overfitting, PCA also uncorrelates the variables, sometimes improving the performance of other modeling techniques.

Let’s see if PCA improves or degrades the performance of hierarchical clustering.

# Create a hierarchical clustering model: wisc.pr.hclust
wisc.pr.hclust <- hclust(dist(wisc.pr$x[, 1:7]), method = 'complete')

# Cut model into 4 clusters: wisc.pr.hclust.clusters
wisc.pr.hclust.clusters <- cutree(wisc.pr.hclust,k=4)

# Compare to actual diagnoses
table(wisc.pr.hclust.clusters, diagnosis)
##                        diagnosis
## wisc.pr.hclust.clusters   0   1
##                       1   5 113
##                       2 350  97
##                       3   2   0
##                       4   0   2
table(wisc.hclust.clusters, diagnosis)
##                     diagnosis
## wisc.hclust.clusters   0   1
##                    1  12 165
##                    2   2   5
##                    3 343  40
##                    4   0   2
#sum(apply(tab, 1, min))
# Compare to k-means and hierarchical
table(wisc.km$cluster,diagnosis)
##    diagnosis
##       0   1
##   1  14 175
##   2 343  37
table(wisc.pr.hclust.clusters,wisc.km$cluster)
##                        
## wisc.pr.hclust.clusters   1   2
##                       1 115   3
##                       2  70 377
##                       3   2   0
##                       4   2   0
  • The 2-cluster k-means model does the best job of separating the two diagnoses (see the sketch below)
  • The whole point of this comparison is to see whether the clustering results could be useful as features in a supervised learning process
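One way to back up that judgement (a sketch, not part of the original exercise) is to compute, for each model, the share of observations that fall in their cluster’s majority diagnosis; higher values indicate a cleaner separation.

# Proportion of observations assigned to their cluster's majority diagnosis
cluster_purity <- function(clusters, truth) {
  tab <- table(clusters, truth)
  sum(apply(tab, 1, max)) / sum(tab)
}

cluster_purity(wisc.km$cluster, diagnosis)          # 2-cluster k-means
cluster_purity(wisc.hclust.clusters, diagnosis)     # 4-cluster hclust on scaled data
cluster_purity(wisc.pr.hclust.clusters, diagnosis)  # 4-cluster hclust on first 7 PCs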