library(ggplot2)
library(ggrepel)
library(patchwork)
library(knitr)
Activity 3.2 - PC concepts
SUBMISSION INSTRUCTIONS
- Render to html
- Publish your html to RPubs
- Submit a link to your published solutions
Question 1
The data set we will analyze for this question covers the 10 events in the Men's 2024 Olympic Decathlon in Paris.
decathlon <- read.csv("/Users/uj2116bi/Desktop/DSCI_415/Activities/Data/mens_decathlon_paris2024.csv")
head(decathlon)
Athlete Medal Nation Overall X100m LongJump ShotPut HighJump
1 Markus Rooth Gold Norway 8796 10.71 7.80 15.25 1.99
2 Leo Neugebauer Silver Germany 8748 10.67 7.98 16.55 2.05
3 Lindon Victor Bronze Grenada 8711 10.56 7.48 15.71 2.02
4 Sven Roosen None Netherlands 8607 10.52 7.56 15.10 1.87
5 Janek Õiglane None Estonia 8572 10.89 7.25 14.58 1.99
6 Johannes Erm None Estonia 8569 10.64 7.66 14.61 2.08
X400m X110mHurdle Discus PoleVault Javelin X1500m
1 47.69 14.25 49.80 5.3 66.87 279.6
2 47.70 14.51 53.33 5.0 56.64 284.7
3 47.84 14.62 53.91 4.9 68.22 283.5
4 46.40 13.99 46.88 4.7 63.72 258.5
5 48.02 14.45 43.49 5.3 71.89 265.6
6 47.19 14.35 46.29 4.6 59.58 259.7
For the purposes of this question, assume we have 10-dimensional data - that is, ignore the Overall column.
A)
Explain why we need to scale this data set before performing PCA.
We need to scale the data before performing PCA because each decathlon event is recorded in different units and on a different numeric range. For example, 100m sprint times are measured in seconds (~10–11), whereas javelin throws are measured in meters (~60–80). PCA finds directions of maximum variance, so without scaling the events with the largest numeric spread would dominate the leading principal components and distort the analysis. Scaling (standardizing each column) ensures every event contributes on an equal footing.
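As a quick illustration (a sketch, not required by the prompt, using the decathlon data frame loaded above), the per-event standard deviations differ by orders of magnitude, so an unscaled PCA would be driven mostly by the events recorded on the largest scales (such as the 1500m time in seconds):
# Spread of each event on its original scale
events <- decathlon[, setdiff(names(decathlon)[sapply(decathlon, is.numeric)], "Overall")]
round(apply(events, 2, sd), 2)
# Proportion of variance captured by PC1 without vs. with scaling
pca_raw <- prcomp(events, scale. = FALSE)
pca_std <- prcomp(events, scale. = TRUE)
pca_raw$sdev[1]^2 / sum(pca_raw$sdev^2)  # proportion on PC1, unscaled
pca_std$sdev[1]^2 / sum(pca_std$sdev^2)  # proportion on PC1, scaled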
B)
Use svd() to find the first 2 principal component scores and their loadings. Full credit will only be granted if you use the svd() ingredients u, d, and v. What percent of the overall variability do the first two PCs explain?
# Keep only the 10 event columns (numeric columns, minus Overall)
X <- decathlon[, sapply(decathlon, is.numeric)]
X$Overall <- NULL
X_scaled <- scale(X)
# SVD of the scaled data: X_scaled = U D V'
svd_res <- svd(X_scaled)
U <- svd_res$u
D <- diag(svd_res$d)
V <- svd_res$v
# First two PC scores are the first two columns of U D; the loadings are the columns of V
PC_scores <- U[, 1:2] %*% D[1:2, 1:2]
# Proportion of total variance explained by each PC
var_explained <- svd_res$d^2 / sum(svd_res$d^2)
cumsum(var_explained)[1:2]
[1] 0.2913494 0.4841500
The first two PCs together explain about 48.4% of the total variability (roughly 29.1% from PC1 and 19.3% from PC2).
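As an optional cross-check (a sketch, not part of the required svd() solution), prcomp() on the same scaled matrix should reproduce these proportions of variance, and its scores should match the U D scores up to sign flips of whole columns:
# Cross-check the SVD results with prcomp() (data is already centered and scaled)
pca_check <- prcomp(X_scaled, center = FALSE, scale. = FALSE)
round(pca_check$sdev^2 / sum(pca_check$sdev^2), 4)   # proportions of variance, cf. var_explained
round(head(cbind(PC_scores, pca_check$x[, 1:2])), 3) # scores side by side (signs may flip)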
C)
Find and print the loadings. Based on the loadings alone, if the first two PCs are plotted in a 2D plane as shown below, which of the four quadrants will the medalists be in? Explain your reasoning. 
loadings <- V[, 1:2]
colnames(loadings) <- c("PC1", "PC2")
rownames(loadings) <- colnames(X)
loadings
PC1 PC2
X100m 0.446612410 0.01678592
LongJump -0.493037935 -0.06565628
ShotPut -0.309431542 0.53134100
HighJump -0.150386291 -0.05665757
X400m 0.485973869 0.14847674
X110mHurdle 0.345270647 0.31685991
Discus -0.139145881 0.59570522
PoleVault 0.019829831 0.48083261
Javelin 0.252821246 0.03137029
X1500m 0.005588625 -0.01948870
PC1 has positive loadings on the running events (100m, 400m, 110m hurdles) and negative loadings on the jumps and throws, so it mainly separates "sprinters" (positive PC1) from "power/strength" athletes (negative PC1). PC2 has its largest positive loadings on shot put, discus, and pole vault, so it mainly measures the technical/power events. Medalists should appear in quadrant 1 because they tend to be strong across many events, especially the sprints, hurdles, and technical events.
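Before plotting, here is a quick numeric check of this intuition (a small sketch using the PC scores computed in part B; part D verifies the same thing graphically):
# Which quadrant does each medalist's (PC1, PC2) score fall in?
medal_rows <- decathlon$Medal != "None"
data.frame(Athlete = decathlon$Athlete[medal_rows],
           PC1 = round(PC_scores[medal_rows, 1], 2),
           PC2 = round(PC_scores[medal_rows, 2], 2))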
D)
Add the PCs to the decathlon data set and create a scatterplot of these PCs, with the points labeled by the athletes’ names. Color-code the points on whether or not the athlete was a medalist. Use the ggrepel package for better labeling. Verify that your intuition from C) is correct.
library(ggplot2)
library(ggrepel)
# Add PC columns to decathlon
decathlon$PC1 <- PC_scores[,1]
decathlon$PC2 <- PC_scores[,2]
decathlon$Medalist <- decathlon$Medal != "None"
# Plot PCA of 2024 athletes only
ggplot(decathlon, aes(x = PC1, y = PC2, color = Medalist, label = Athlete)) +
  geom_point(size = 3) +
  geom_text_repel() +
  theme_minimal() +
  coord_fixed() +
  labs(title = "Decathlon PCA: First Two PCs (2024 Athletes)")
E)
Canadian Damian Warner won the gold medal in the decathlon in the 2020 Tokyo games. He began the 2024 decathlon but bowed out after three straight missed pole vault attempts.
These are his results in the 10 events in 2020:
warner <- c(10.12, 8.24, 14.8, 2.02, 47.48, 13.46, 48.67, 4.9, 63.44, 271.08)
Would this have won a medal if it had happened in 2024? To answer this, we will compute his PCs with respect to the 2024 athletes and add his point to the plot to see how his 2020 gold-medal performance compares to the 2024 athletes. To do this:
- Find the mean vector from the 2024 athletes. Call it mean_vec_24.
- Find the standard deviation vector from the 2024 athletes. Call it sd_vec_24.
- Standardize Warner’s 2020 results with respect to the 2024 athletes: (warner - mean_vec_24)/sd_vec_24
- Find Warner’s PC coordinates using the 2024 loadings.
- Add his point to the scatterplot.
Do you think his 2020 performance would have won a medal if it had happened in 2024?
# Standardize Warner's 2020 results using the 2024 means and SDs
mean_vec_24 <- colMeans(X)
sd_vec_24 <- apply(X, 2, sd)
warner_scaled <- (warner - mean_vec_24) / sd_vec_24
# Compute PC coordinates for Warner
warner_PC <- as.numeric(warner_scaled %*% V[,1:2])
# Plot PCA including Warner 2020; put his point in a one-row data frame so
# ggrepel has matching data (avoids the length-1 aesthetics warning)
warner_df <- data.frame(PC1 = warner_PC[1], PC2 = warner_PC[2], Athlete = "Warner 2020")
ggplot(decathlon, aes(x = PC1, y = PC2, color = Medalist, label = Athlete)) +
  geom_point(size = 3) +
  geom_text_repel() +
  geom_point(data = warner_df, aes(x = PC1, y = PC2),
             color = "red", size = 4, inherit.aes = FALSE) +
  geom_text_repel(data = warner_df, aes(x = PC1, y = PC2, label = Athlete),
                  color = "red", inherit.aes = FALSE) +
  theme_minimal() +
  coord_fixed() +
  labs(title = "Decathlon PCA: First Two PCs with Damian Warner 2020")
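To supplement the plot, here is a quick numeric comparison (a sketch; it uses the PC1, PC2, and Medalist columns added in part D):
# Warner's 2020 PC coordinates next to the 2024 medalists'
medalist_pcs <- decathlon[decathlon$Medalist, c("Athlete", "PC1", "PC2")]
rbind(medalist_pcs,
      data.frame(Athlete = "Warner 2020", PC1 = warner_PC[1], PC2 = warner_PC[2]))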
Question 2
Below is a screenshot of a conversation between me and chatbot Claude:
After looking at the graphs, I grew skeptical. So I said:

Behold, Claude’s three data sets which I’ve called claudeA, claudeB, and claudeC:
claudeA <- read.csv('/Users/uj2116bi/Desktop/DSCI_415/Activities/Data/claude_dataA.csv')
claudeB <- read.csv('/Users/uj2116bi/Desktop/DSCI_415/Activities/Data/claude_dataB.csv')
claudeC <- read.csv('/Users/uj2116bi/Desktop/DSCI_415/Activities/Data/claude_dataC.csv')
Each data set has an X and a Y column which represent 2-dimensional variables that we need to rotate.
A)
Scale each data set and plot them side-by-side using the patchwork package. Make sure the aspect ratio of each graph is 1 (i.e., make the height and width of each graph equal). At this point, explain why you think I was skeptical. Specifically, do you think the percent variability explained by the first PC of each data set appears to exceed or fall short of the variability I asked it to?
library(patchwork)
A_scaled <- scale(claudeA)
B_scaled <- scale(claudeB)
C_scaled <- scale(claudeC)
pA <- ggplot(as.data.frame(A_scaled), aes(X, Y)) +
  geom_point() + coord_fixed() + ggtitle("Claude A")
pB <- ggplot(as.data.frame(B_scaled), aes(X, Y)) +
  geom_point() + coord_fixed() + ggtitle("Claude B")
pC <- ggplot(as.data.frame(C_scaled), aes(X, Y)) +
  geom_point() + coord_fixed() + ggtitle("Claude C")
pA + pB + pC
B)
Use SVD to find the first PC for each data set, and find the actual percent of total variability explained by each PC using aggregation methods.
# Dataset A
svdA <- svd(A_scaled)
varA <- svdA$d^2 / sum(svdA$d^2)
varA[1]
[1] 0.9756201
# Dataset B
svdB <- svd(B_scaled)
varB <- svdB$d^2 / sum(svdB$d^2)
varB[1]
[1] 0.9591807
# Dataset C
svdC <- svd(C_scaled)
varC <- svdC$d^2 / sum(svdC$d^2)
varC[1]
[1] 0.9949096
So the first PC actually explains about 97.6% of the total variability for claudeA, 95.9% for claudeB, and 99.5% for claudeC.
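As an optional sanity check (a sketch, not part of the prompt): with only two standardized variables, the correlation matrix has eigenvalues 1 + |r| and 1 - |r|, so the first PC explains (1 + |r|)/2 of the total variance. Computing this directly from cor() should reproduce the SVD-based values above:
# For two standardized variables, PC1 explains (1 + |r|) / 2 of the total variance
sapply(list(A = claudeA, B = claudeB, C = claudeC),
       function(d) (1 + abs(cor(d$X, d$Y))) / 2)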