Introduction

This is Part III of the hierarchical clustering analysis. In this part, I add an outlier to the data set and observe whether it changes the clustering results.

Note that you can find the other parts of this series via the links below:

Part I covers question 1, Part II covers question 2, Part III covers question 3, and Part IV covers question 4.

Data:

The data file ramusbonelength.txt records the length of the ramus bone for 20 boys at ages 8, 8.5, 9, and 9.5 years.

Objective:

Carry out different hierarchical cluster analysis techniques on the ramus bone length data set, cluster the boys based on their recorded bone growth, and compare the results.

Research Questions

  1. Perform hierarchical clustering using the single, average, and complete methods to obtain k = 4 groups. Compare the results. For each unique clustering, make a plot of the first two principal components of each individual and indicate the cluster to which each point belongs.

  2. One way in which hierarchical methods are compared is their robustness to error. Add noise to the bone data by adding independent normal random variables with mean 0 and standard deviation 0.25 to each measurement. Repeat the analyses from part (1). Which of the methods, if any, had their clustering change due to the added noise?

  3. Another way in which hierarchical models are compared is their robustness to outliers. Add the observation x21 = (47.7, 48.8, 45.7, 45.3)^T to the data set and rerun the analyses from part (1). Which of the methods, if any, had their clustering change due to the outlier?

  4. Run a k-means analysis for k = 4 groups on the ramus bone data. Perform the analysis three times, using a different random seed each time to initialize the procedure. Do the three k-means analyses return the same grouping? How do the groupings from the k-means procedure compare to those from the hierarchical methods?

PART III

Data Preparation

library(tidyverse)   # Data manipulation. 
## -- Attaching packages --------------------------------------- tidyverse 1.3.1 --
## v ggplot2 3.3.5     v purrr   0.3.4
## v tibble  3.1.4     v dplyr   1.0.7
## v tidyr   1.1.3     v stringr 1.4.0
## v readr   2.0.1     v forcats 0.5.1
## -- Conflicts ------------------------------------------ tidyverse_conflicts() --
## x dplyr::filter() masks stats::filter()
## x dplyr::lag()    masks stats::lag()
library(factoextra)  # Clustering Visualization
## Welcome! Want to learn more? See two factoextra-related books at https://goo.gl/ve3WBa

Load the Dataset

# Clear the workspace: 
rm(list = ls())

# Load data
ramus <- read.table("ramusbonelength.txt", header = TRUE)

head(ramus)
##   Individual X8yr X8.5yr X9yr X9.5yr
## 1          1 47.8   48.8 49.0   49.7
## 2          2 46.4   47.3 47.7   48.4
## 3          3 46.3   46.8 47.8   48.5
## 4          4 45.1   45.3 46.1   47.2
## 5          5 47.6   48.5 48.9   49.3
## 6          6 52.5   53.2 53.3   53.7
nrow(ramus)
## [1] 20

Add the Outlier and Create a New Data Set

#Outlier observation x21
new <- c(21, 47.7, 48.8, 45.7, 45.3)

#Add the outlier to the original data
newramus <- rbind(ramus, new)

#Number of rows of the new data set
nrow(newramus)
## [1] 21

I added observation x21 = (47.7, 48.8, 45.7, 45.3) as an outlier to compare how robust the hierarchical methods are to outliers.



Normalize the data set

#Create a scaling function (standardize to mean 0, sd 1)
scaled <- function(x, na.rm = FALSE) (x - mean(x, na.rm = na.rm)) / sd(x, na.rm = na.rm)

#Apply the scaling to each measurement column
newramusdf <- newramus %>% mutate_at(c("X8yr", "X8.5yr", "X9yr", "X9.5yr"), scaled, na.rm = TRUE)

head(newramusdf)
##   Individual       X8yr     X8.5yr       X9yr     X9.5yr
## 1          1 -0.3289306 -0.3165917 -0.4821698 -0.4879474
## 2          2 -0.8977870 -0.9209939 -0.9506124 -0.9247003
## 3          3 -0.9384196 -1.1224614 -0.9145784 -0.8911040
## 4          4 -1.4260108 -1.7268636 -1.5271571 -1.3278569
## 5          5 -0.4101958 -0.4374721 -0.5182039 -0.6223329
## 6          6  1.5808017  1.4563217  1.0672941  0.8559078
# Drop the "Individual" column
newdf <- newramusdf[, 2:5]
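For reference, the same standardization can also be done with base R's scale() function. The short cross-check below is an addition to the original analysis, included only to confirm that the custom scaled() function behaves as expected.

# Cross-check (added): scale() centers each column to mean 0 and divides by
# its standard deviation, matching the custom scaled() function above.
newdf_check <- as.data.frame(scale(newramus[, 2:5]))
all.equal(newdf_check, newdf, check.attributes = FALSE)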

Create functions for data visualizations

# Create a dendrogram plotting function:

dend_func <- (function(x) {fviz_dend(x, 
          k = 4,   
          cex = 0.5, 
          rect = TRUE, 
          rect_fill = TRUE, 
          horiz = FALSE, 
          palette = "jco", 
          rect_border = "jco", 
          color_labels_by_k = TRUE) })
# Create a theme for the dendrogram plots:
theme1<- theme_gray() + 
         theme(plot.margin = unit(rep(0.7, 4), "cm"))  
# Create a cluster plotting function:
clust_func<-(function(x){fviz_cluster(list(data = newdf, cluster = paste0("Group", x)), 
                         alpha = 1, 
                         colors = x, 
                         labelsize = 9, 
                         ellipse.type = "norm")})
# Create a theme for the cluster plots:
theme2<-  theme(legend.position = c(0.1, 0.8)) + 
          theme(plot.margin = unit(rep(0.5, 4), "cm"))   

Distance matrix computation and visualization

# Compute distances: 
dd <- dist(newdf, method = "euclidean")

# Visualize the dissimilarity: 
fviz_dist(dd, lab_size = 6)


Eigenvalues for the PC Analysis

#Find Correlation Matrix
S <- cor(newdf)

#Find EigenValues
eig <- eigen(S)

#Find the cumulative proportion of variance explained
cumsum(eig$values)/sum(eig$values)
## [1] 0.9001048 0.9879579 0.9960001 1.0000000
# Scores on the first two principal components
pc1 <- as.matrix(newdf) %*% eig$vectors[, 1]
pc2 <- as.matrix(newdf) %*% eig$vectors[, 2]
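As a sanity check added here (not part of the original analysis), the same component scores can be obtained with base R's prcomp() applied to the already standardized data. Individual components may come out with flipped signs relative to the eigen() version, which affects neither the plots nor the clusters, and the cumulative proportion row should match the cumulative eigenvalue proportions computed above.

# Cross-check (added): prcomp on the standardized data reproduces the
# principal component scores up to a possible sign flip per component.
pca <- prcomp(newdf, center = FALSE, scale. = FALSE)
head(pca$x[, 1:2])
summary(pca)$importance["Cumulative Proportion", ]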




Single Linkage

Dendrogram Plot

singleOutlier <- hclust(dd, method = "single")

dend_func(singleOutlier) -> basic_plot

basic_plot + theme1 + 
  labs(title = "Hierarchical Clustering with Single Linkage Method")

Adding observation X21 changed the cluster assignments of observations X9 and X12: X21 formed its own cluster, while X9 and X12 were merged into the red cluster.
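This change can also be checked numerically. Below is a minimal sketch, added for illustration and not part of the original analysis, that reruns the same linkage on the original 20 observations (scaled in the same way) and cross-tabulates the two labelings. compare_with_original is a hypothetical helper introduced here; because cutree numbers clusters arbitrarily, look for rows of the table that split across columns rather than for identical labels.

# Hypothetical helper (added): cluster the original 20 observations with the
# same linkage method and cross-tabulate against the clustering that
# includes the outlier X21.
compare_with_original <- function(method, groups_with_outlier) {
  olddf <- ramus %>%
    mutate_at(c("X8yr", "X8.5yr", "X9yr", "X9.5yr"), scaled, na.rm = TRUE) %>%
    select(-Individual)
  oldgroups <- cutree(hclust(dist(olddf, method = "euclidean"), method = method), k = 4)
  # Rows: clusters without the outlier; columns: clusters with the outlier
  table(original = oldgroups, with_outlier = groups_with_outlier[1:20])
}

compare_with_original("single", sgroup)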

PCA Plot

sgroup <- cutree(singleOutlier, k = 4)

plot(pc1, pc2, ylim = c(-2,2))

text(x = pc1, y = pc2+.1, labels = rownames(newramus))

points(pc1, pc2, col = sgroup, pch = 19)

Cluster Plot

clust_func(sgroup)-> cluster_plot

cluster_plot + theme2 + 
  labs(title = "Cluster based on Hierarchical Clustering") 
## Too few points to calculate an ellipse
## Too few points to calculate an ellipse

These plots confirm that observations 18 and 4 are separated by the largest distance.
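This can be verified directly from the distance matrix; the short check below is an addition, not part of the original write-up.

# Which pair of observations is separated by the largest distance?
dmat <- as.matrix(dd)
max(dmat)
which(dmat == max(dmat), arr.ind = TRUE)  # the symmetric matrix reports the pair twice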





Complete Linkage

Dendrogram Plot

completeOutlier <- hclust(dd, method = "complete")

dend_func(completeOutlier) -> basic_plot

basic_plot + theme1 + 
  labs(title = "Hierarchical Clustering with Complete Linkage Method")

The new observation (X21) was absorbed into the yellow cluster and did not dramatically change the overall clustering.
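The same numerical check used in the single-linkage section (the hypothetical compare_with_original helper sketched there) can be applied to the complete-linkage result:

# Cross-tabulate the complete-linkage clusters with and without the outlier
compare_with_original("complete", cgroup)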

PCA Plot

cgroup <- cutree(completeOutlier, k = 4)

plot(pc1, pc2, ylim = c(-2,2))

text(x = pc1, y = pc2+.1, labels = rownames(newramus))

points(pc1, pc2, col = cgroup, pch = 19)

Cluster Plot

clust_func(cgroup)-> cluster_plot

cluster_plot + theme2 + 
  labs(title = "Cluster based on Hierarchical Clustering") 





Average Linkage

Dendrogram Plot

aveOutlier <- hclust(dd, method = "average")

dend_func(aveOutlier) -> basic_plot

basic_plot + theme1 + 
  labs(title = "Hierarchical Clustering with Average Linkage Method")

The new observation (X21) formed its own cluster, and observation 20 moved into the yellow cluster. The rest of the observations remained in their original clusters.
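And for the average-linkage result, using the same hypothetical helper:

# Cross-tabulate the average-linkage clusters with and without the outlier
compare_with_original("average", agroup)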

PCA Plot

agroup <- cutree(aveOutlier, k = 4)

plot(pc1, pc2, ylim = c(-2,2))

text(x = pc1, y = pc2+.1, labels = rownames(newramus))

points(pc1, pc2, col = agroup, pch = 19)

Cluster Plot

clust_func(agroup)-> cluster_plot

cluster_plot + theme2 + 
  labs(title = "Cluster based on Hierarchical Clustering") 
## Too few points to calculate an ellipse

