Reading

For hierarchical clustering and exploratory data analysis, read Chapter 12, “Cluster Analysis”, from An Introduction to Statistical Learning with Applications in R by Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani (pp. 385-399).

Remember this is just a starting point; explore the reading list, practicals, and lectures for more ideas.

Reference: Gareth James, Daniela Witten, Trevor Hastie, and Robert Tibshirani (2013). An Introduction to Statistical Learning with Applications in R. https://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf

R Markdown

This is an R Markdown document. Markdown is a simple formatting syntax for authoring HTML, PDF, and MS Word documents. For more details on using R Markdown see http://rmarkdown.rstudio.com.

library(readr)
mydata <- read_csv('customer_segmentation.csv')
## Rows: 45 Columns: 13
## ── Column specification ────────────────────────────────────────────────────────
## Delimiter: ","
## dbl (13): ID, Gender, Age, Buy_Avocado, Number_Avocado, Organic_Conventional...
## 
## ℹ Use `spec()` to retrieve the full column specification for this data.
## ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.

Standardizing the data

In the following step, you will standardize your data (i.e., transform it so that each variable has a mean of 0 and a standard deviation of 1). You can use the scale function from base R, a generic function whose default method centers and/or scales the columns of a numeric matrix.
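
As a quick illustration (a minimal sketch on a made-up vector, not the assignment data), scale returns values with mean 0 and standard deviation 1:

x <- c(2, 4, 6, 8, 10)  # toy numeric vector (illustrative only)
z <- scale(x)           # center and scale: z-scores
mean(z)                 # 0 after centering
sd(z)                   # 1 after scaling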

Building the distance matrix and plotting the tree (dendrogram)

Hierarchical clustering (using the function hclust) is an informative way to visualize the data.

We will see whether we can discover subgroups among the variables or among the observations.

use <- scale(mydata[,-1], center = TRUE, scale = TRUE)  # standardize every column except ID
d <- dist(use)           # Euclidean distance matrix between observations
seg.hclust <- hclust(d)  # apply hierarchical clustering (complete linkage by default)
library(ggplot2) # needs no introduction
plot(seg.hclust)         # plot the dendrogram
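
hclust uses complete linkage by default. If you want to experiment, other linkage rules can be compared on the same distance matrix; a minimal sketch (the object names seg.single and seg.average are illustrative, not part of the assignment):

seg.single  <- hclust(d, method = "single")   # nearest-neighbour (single) linkage
seg.average <- hclust(d, method = "average")  # average linkage
plot(seg.average)                             # compare this dendrogram with the default one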

Identifying the cluster membership of each observation

Imagine your goal is to find profitable customers to target. Using this algorithm, you can now see how many customers fall into each cluster.

groups.3 <- cutree(seg.hclust, 3)  # cut the dendrogram into 3 clusters
table(groups.3)  # a good first step: use the table function to see how many observations are in each cluster
## groups.3
##  1  2  3 
## 13 31  1
#In the following step, we will find the members in each cluster or group.
mydata$ID[groups.3 == 1]
##  [1]  1  7  9 10 16 18 25 26 27 30 39 42 44
mydata$ID[groups.3 == 2]
##  [1]  2  3  4  5  6  8 11 12 13 14 15 17 19 20 21 22 23 24 28 31 32 33 34 35 36
## [26] 37 38 40 41 43 45
mydata$ID[groups.3 == 3]
## [1] 29
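
cutree can also cut the dendrogram at a chosen height rather than into a fixed number of clusters. A minimal sketch (the height of 5 and the name groups.h are illustrative, not part of the assignment):

groups.h <- cutree(seg.hclust, h = 5)  # cut the tree at height 5 on the dendrogram's y-axis
table(groups.h)                        # cluster sizes at that height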

Identifying common features of each cluster using the aggregate function

#?aggregate
aggregate(mydata, list(groups.3), median)
##   Group.1 ID Gender Age Buy_Avocado Number_Avocado Organic_Conventional
## 1       1 25      1   1           3              4                    2
## 2       2 22      2   1           1              3                    2
## 3       3 29      1   1           4              4                    0
##   Purchase_Satisfaction Price_Importance Quality_Importance Winter_Likelihood
## 1                     3                2                  1                 4
## 2                     2                2                  1                 2
## 3                     0                1                  1                 5
##   Spring_Likelihood Summer_Likelihood Fall_Likelihood
## 1                 3                 2               3
## 2                 1                 1               2
## 3                 5                 5               5
aggregate(mydata, list(groups.3), mean)
##   Group.1       ID   Gender      Age Buy_Avocado Number_Avocado
## 1       1 22.61538 1.461538 1.384615    2.846154       3.846154
## 2       2 22.96774 1.709677 1.806452    1.483871       2.483871
## 3       3 29.00000 1.000000 1.000000    4.000000       4.000000
##   Organic_Conventional Purchase_Satisfaction Price_Importance
## 1             1.846154              2.461538         1.769231
## 2             1.516129              1.935484         1.645161
## 3             0.000000              0.000000         1.000000
##   Quality_Importance Winter_Likelihood Spring_Likelihood Summer_Likelihood
## 1           1.461538          4.000000          2.923077          2.307692
## 2           1.096774          1.774194          1.387097          1.225806
## 3           1.000000          5.000000          5.000000          5.000000
##   Fall_Likelihood
## 1        3.461538
## 2        1.548387
## 3        5.000000
aggregate(mydata[,-1], list(groups.3), median)  # same summary, excluding the ID column
##   Group.1 Gender Age Buy_Avocado Number_Avocado Organic_Conventional
## 1       1      1   1           3              4                    2
## 2       2      2   1           1              3                    2
## 3       3      1   1           4              4                    0
##   Purchase_Satisfaction Price_Importance Quality_Importance Winter_Likelihood
## 1                     3                2                  1                 4
## 2                     2                2                  1                 2
## 3                     0                1                  1                 5
##   Spring_Likelihood Summer_Likelihood Fall_Likelihood
## 1                 3                 2               3
## 2                 1                 1               2
## 3                 5                 5               5
aggregate(mydata[,-1], list(groups.3), mean)
##   Group.1   Gender      Age Buy_Avocado Number_Avocado Organic_Conventional
## 1       1 1.461538 1.384615    2.846154       3.846154             1.846154
## 2       2 1.709677 1.806452    1.483871       2.483871             1.516129
## 3       3 1.000000 1.000000    4.000000       4.000000             0.000000
##   Purchase_Satisfaction Price_Importance Quality_Importance Winter_Likelihood
## 1              2.461538         1.769231           1.461538          4.000000
## 2              1.935484         1.645161           1.096774          1.774194
## 3              0.000000         1.000000           1.000000          5.000000
##   Spring_Likelihood Summer_Likelihood Fall_Likelihood
## 1          2.923077          2.307692        3.461538
## 2          1.387097          1.225806        1.548387
## 3          5.000000          5.000000        5.000000
cluster_means <- aggregate(mydata[,-1], list(groups.3), mean)  # save the cluster profiles for export
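
Since dplyr is loaded later in this document, the same per-cluster profiles can also be computed with group_by and summarise. This is just an equivalent sketch, not a required step:

library(dplyr)
mydata %>%
  mutate(cluster = groups.3) %>%   # attach each observation's cluster membership
  group_by(cluster) %>%
  summarise(across(-ID, mean))     # per-cluster mean of every variable except ID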

Exporting cluster analysis results as CSV files from RStudio Cloud

write.csv(groups.3, "clusterID.csv")           # cluster membership of each observation
write.csv(cluster_means, "cluster_means.csv")  # per-cluster variable means
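
write.csv produces CSV files, which Excel opens directly. If you prefer a native .xlsx workbook, the writexl package is one option; a minimal sketch (assumes writexl is installed, which this document does not do):

#install.packages('writexl')
library(writexl)
write_xlsx(cluster_means, "cluster_means.xlsx")  # write a true Excel workbook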

Downloading your solutions manually

First, select the files (“clusterID.csv” & “cluster_means.csv”) by putting a checkmark before each file.

Second, click the gear icon on the right side of the Files pane and export the data.

Finding means or medians of each variable (factor) for each cluster

Imagine your goal is to find profitable customers to target. Using the mean function or the median function, you can now see the characteristics of each subgroup. This is where your domain expertise comes in.

Discussion Questions for you

  1. How many observations do we have in each cluster?

Answer: Your answer here:

  2. We can look at the medians (or means) for the variables in each cluster. Why is this important?

Answer: Your answer here:

  3. Should the mean or the median be used when analyzing the differences among clusters? Why?

Answer: Your answer here:

  4. Now we need to understand the common characteristics of each cluster. Our goal is to build a targeting strategy using the profile of each cluster. What summary measures of each cluster are appropriate in a descriptive sense?

Answer: Your answer here:

  5. What are the major differences between K-means clustering (https://rpubs.com/utjimmyx/kmeans) and hierarchical clustering? Which one do you like better? Why? You may refer to the assigned readings.

Answer: Your answer here:

  6. Do a keyword search using “cluster analysis.” How many relevant job titles are there?

Answer: Your answer here:

Principal Component Analysis (PCA)

Intro

Principal Component Analysis (PCA) summarizes the features in a dataset as a small number of uncorrelated components, and it can be used in conjunction with cluster analysis.

PCA is also a popular machine learning technique for feature selection. Imagine you have more than 100 features or factors; it is useful to select the most important ones for further analysis.

The basic idea when using PCA as a tool for feature selection is to select variables according to the magnitude (from largest to smallest in absolute value) of their coefficients (loadings), as sketched below.
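
For instance, once the prcomp object pca has been created (as in the code further below), the variables can be ranked by the absolute size of their PC1 loadings. This is a minimal sketch of the idea, not a required step:

# assumes pca <- prcomp(mydata[,-1], scale = TRUE) has already been run (see below)
loadings_pc1 <- pca$rotation[, 1]           # loadings of each variable on PC1
sort(abs(loadings_pc1), decreasing = TRUE)  # largest absolute loadings first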

#install.packages('dplyr')
library(dplyr) # sane data manipulation
## 
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union
library(tidyr) # sane data munging
library(ggplot2) # needs no introduction
library(ggfortify) # super-helpful for plotting non-"standard" stats objects

#identifying your working directory
getwd() #confirm your working directory is accurate
## [1] "/cloud/project"
library(readr)

# mydata <- read_csv('Segmentation.csv')   # alternative dataset (commented out)

mydata <- read_csv('customer_segmentation.csv')
## Rows: 45 Columns: 13
## ── Column specification ────────────────────────────────────────────────────────
## Delimiter: ","
## dbl (13): ID, Gender, Age, Buy_Avocado, Number_Avocado, Organic_Conventional...
## 
## ℹ Use `spec()` to retrieve the full column specification for this data.
## ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
# read the csv file; the same data can also be downloaded from my GitHub site

#Open the data. Note that some students will see an Excel option in "Import Dataset";
#those who do not will need to save the original data as a csv and import it as a text file.
#rm(list = ls()) #used to clean your working environment
fit <- kmeans(na.omit(mydata[,-1]), 3, iter.max = 1000)
# exclude the first column since it is an ID rather than a variable
# the second argument, 3, is the number of clusters you want
table(fit$cluster)
## 
##  1  2  3 
## 11 15 19
barplot(table(fit$cluster), col="#336699") #plot
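
Note that kmeans starts from random centroids, so cluster labels and sizes can vary between runs. A common safeguard, shown here as a sketch (the seed value and the name fit_stable are illustrative, not part of the original analysis), is to set a seed and use several random starts:

set.seed(123)  # fix the random number generator so the result is reproducible
fit_stable <- kmeans(na.omit(mydata[,-1]), 3, iter.max = 1000, nstart = 25)  # keep the best of 25 random starts
table(fit_stable$cluster)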

pca <- prcomp(mydata[,-1], scale = TRUE)  # principal component analysis
pca_data <- mutate(fortify(pca), col = fit$cluster)
# attach the k-means cluster membership of each observation - see the last column

ggplot(pca_data) +
  geom_point(aes(x = PC1, y = PC2, fill = factor(col)),
             size = 3, col = "#7f7f7f", shape = 21) +
  theme_bw(base_family = "Helvetica")

autoplot(fit, data=mydata[,-1], frame=TRUE, frame.type='norm')

names(pca)
## [1] "sdev"     "rotation" "center"   "scale"    "x"
pca$center
##                Gender                   Age           Buy_Avocado 
##              1.622222              1.666667              1.933333 
##        Number_Avocado  Organic_Conventional Purchase_Satisfaction 
##              2.911111              1.577778              2.044444 
##      Price_Importance    Quality_Importance     Winter_Likelihood 
##              1.666667              1.200000              2.488889 
##     Spring_Likelihood     Summer_Likelihood       Fall_Likelihood 
##              1.911111              1.622222              2.177778
pca$scale
##                Gender                   Age           Buy_Avocado 
##             0.5346574             0.9293204             0.8893307 
##        Number_Avocado  Organic_Conventional Purchase_Satisfaction 
##             0.8480518             0.5430925             0.8244986 
##      Price_Importance    Quality_Importance     Winter_Likelihood 
##             0.7071068             0.4045199             1.4866408 
##     Spring_Likelihood     Summer_Likelihood       Fall_Likelihood 
##             1.0833916             1.0507333             1.3019023
pca$rotation
##                               PC1         PC2         PC3          PC4
## Gender                 0.16357157  0.14585533 -0.13507031 -0.702398055
## Age                    0.21150650  0.10239097  0.61846271  0.023021365
## Buy_Avocado           -0.37947936 -0.07740873 -0.06795683 -0.174803938
## Number_Avocado        -0.39321572  0.18943028 -0.04174530 -0.207845830
## Organic_Conventional  -0.06350491  0.67673866  0.23418918  0.040665426
## Purchase_Satisfaction -0.07975294  0.62979582 -0.16616074 -0.004464391
## Price_Importance      -0.04917651  0.08690228 -0.65411635  0.286756835
## Quality_Importance    -0.21028092  0.16755935  0.03089091  0.452309568
## Winter_Likelihood     -0.39176398 -0.04591197  0.05788791 -0.254151045
## Spring_Likelihood     -0.40487110 -0.07878982  0.23489852  0.117173328
## Summer_Likelihood     -0.31169131 -0.12795353  0.14963879  0.172834281
## Fall_Likelihood       -0.39787592 -0.10692231  0.02650637 -0.188864015
##                                PC5         PC6         PC7        PC8
## Gender                 0.093708734 -0.22309236  0.54127465 -0.2016211
## Age                   -0.375298925  0.06744487  0.32655118  0.4802340
## Buy_Avocado           -0.007270042 -0.24852963 -0.10725577  0.5689419
## Number_Avocado        -0.007618302 -0.10683779  0.12981420  0.2699770
## Organic_Conventional  -0.306451536  0.06922649 -0.08014864 -0.3496704
## Purchase_Satisfaction  0.440092533  0.25341095 -0.16437768  0.2770619
## Price_Importance      -0.506634167  0.21925082  0.38153328  0.1193273
## Quality_Importance     0.072876283 -0.75581009  0.20594683 -0.1429481
## Winter_Likelihood     -0.274368062  0.07373871 -0.15987403 -0.2269846
## Spring_Likelihood      0.099312655  0.14939788  0.16900625 -0.1328025
## Summer_Likelihood      0.369019607  0.38971591  0.51985919 -0.1071962
## Fall_Likelihood       -0.281707434  0.06734127 -0.16195243 -0.1333654
##                               PC9        PC10        PC11         PC12
## Gender                 0.08511169 -0.15371803 -0.05309453  0.120376234
## Age                    0.27258334 -0.03268424  0.03815282 -0.047654124
## Buy_Avocado           -0.37100767 -0.51990902 -0.08225620  0.055606359
## Number_Avocado        -0.20431603  0.77772143  0.06297776 -0.113055277
## Organic_Conventional  -0.47668390 -0.15005736  0.02086718  0.062573290
## Purchase_Satisfaction  0.43985706 -0.11795378  0.01524409  0.008111108
## Price_Importance       0.05749632 -0.05743372 -0.08757661  0.033198085
## Quality_Importance     0.24776817 -0.06761078  0.09827529 -0.084505604
## Winter_Likelihood      0.31671748 -0.12762799 -0.27928793 -0.651207907
## Spring_Likelihood      0.11770590  0.05523517 -0.62858373  0.520153055
## Summer_Likelihood     -0.20340870 -0.18475959  0.34176075 -0.268519596
## Fall_Likelihood        0.31325099 -0.04868682  0.61406667  0.434058585
dim(pca$x)
## [1] 45 12
biplot(pca, scale=0)

pca$rotation <- -pca$rotation  # flip the signs of the loadings (the sign of each PC is arbitrary)
pca$x <- -pca$x                # flip the scores to match
biplot(pca, scale = 0)         # same biplot, mirrored

pca$sdev
##  [1] 2.1979207 1.2445198 1.1048917 1.0706341 0.9355258 0.8870451 0.7863149
##  [8] 0.6113287 0.5271726 0.3773493 0.3139625 0.2834165
pca.var <- pca$sdev^2  # variance explained by each component
pca.var
##  [1] 4.83085520 1.54882945 1.22078574 1.14625733 0.87520858 0.78684897
##  [7] 0.61829114 0.37372274 0.27791098 0.14239246 0.09857248 0.08032494
pve <- pca.var/sum(pca.var)  # proportion of variance explained
pve
##  [1] 0.402571266 0.129069121 0.101732145 0.095521444 0.072934048 0.065570748
##  [7] 0.051524262 0.031143561 0.023159248 0.011866038 0.008214373 0.006693745
plot(pve, xlab="Principal Component", ylab="Proportion of Variance Explained", ylim=c(0,1),type='b')

plot(cumsum(pve), xlab="Principal Component", ylab="Cumulative Proportion of Variance Explained", ylim=c(0,1),type='b')
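
The same proportions can be read directly from summary, which reports the standard deviation, proportion of variance, and cumulative proportion for each component:

summary(pca)  # importance table: Standard deviation, Proportion of Variance, Cumulative Proportion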

write.csv(pca_data, "pca_data.csv")
#save your cluster solutions in the working directory
#We want to examine the cluster memberships for each observation - see last column of pca_data

References

Cluster analysis - reading (pp. 385-399). https://www.statlearning.com/

Hint: you can download the free version of this book from this website.

Comparison of similarity coefficients used for cluster analysis with dominant markers in maize (Zea mays L.). https://www.scielo.br/scielo.php?script=sci_arttext&pid=S1415-47572004000100014&lng=en&nrm=iso

Principal Component Methods in R: Practical Guide http://www.sthda.com/english/articles/31-principal-component-methods-in-r-practical-guide/118-principal-component-analysis-in-r-prcomp-vs-princomp/

Principal component analysis - reading (pp. 404-405). https://www.statlearning.com/

Hint: you can download the free version from this website.

Penn State STAT 505, Lesson 11.4: https://online.stat.psu.edu/stat505/lesson/11/11.4