In this analysis, we perform Principal Component Analysis (PCA) on the mtcars dataset, which contains 11 numeric specifications for 32 car models. PCA reduces the dimensionality of the dataset while retaining as much variance as possible, making visualization and further analysis easier.
# Load necessary libraries
library(ggplot2)
# Load the mtcars dataset
data(mtcars)
head(mtcars)
## mpg cyl disp hp drat wt qsec vs am gear carb
## Mazda RX4 21.0 6 160 110 3.90 2.620 16.46 0 1 4 4
## Mazda RX4 Wag 21.0 6 160 110 3.90 2.875 17.02 0 1 4 4
## Datsun 710 22.8 4 108 93 3.85 2.320 18.61 1 1 4 1
## Hornet 4 Drive 21.4 6 258 110 3.08 3.215 19.44 1 0 3 1
## Hornet Sportabout 18.7 8 360 175 3.15 3.440 17.02 0 0 3 2
## Valiant 18.1 6 225 105 2.76 3.460 20.22 1 0 3 1
summary(mtcars)
## mpg cyl disp hp
## Min. :10.40 Min. :4.000 Min. : 71.1 Min. : 52.0
## 1st Qu.:15.43 1st Qu.:4.000 1st Qu.:120.8 1st Qu.: 96.5
## Median :19.20 Median :6.000 Median :196.3 Median :123.0
## Mean :20.09 Mean :6.188 Mean :230.7 Mean :146.7
## 3rd Qu.:22.80 3rd Qu.:8.000 3rd Qu.:326.0 3rd Qu.:180.0
## Max. :33.90 Max. :8.000 Max. :472.0 Max. :335.0
## drat wt qsec vs
## Min. :2.760 Min. :1.513 Min. :14.50 Min. :0.0000
## 1st Qu.:3.080 1st Qu.:2.581 1st Qu.:16.89 1st Qu.:0.0000
## Median :3.695 Median :3.325 Median :17.71 Median :0.0000
## Mean :3.597 Mean :3.217 Mean :17.85 Mean :0.4375
## 3rd Qu.:3.920 3rd Qu.:3.610 3rd Qu.:18.90 3rd Qu.:1.0000
## Max. :4.930 Max. :5.424 Max. :22.90 Max. :1.0000
## am gear carb
## Min. :0.0000 Min. :3.000 Min. :1.000
## 1st Qu.:0.0000 1st Qu.:3.000 1st Qu.:2.000
## Median :0.0000 Median :4.000 Median :2.000
## Mean :0.4062 Mean :3.688 Mean :2.812
## 3rd Qu.:1.0000 3rd Qu.:4.000 3rd Qu.:4.000
## Max. :1.0000 Max. :5.000 Max. :8.000
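The summary shows no obvious anomalies. As a quick sanity check (a small added sketch, not part of the original analysis), we can confirm there are no missing values before scaling and running PCA; mtcars has none, so this should return 0.
# Confirm the dataset has no missing values before running PCA
sum(is.na(mtcars))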
Data Preparation
Before applying PCA, we scale the data so that every feature contributes equally to the result; without scaling, variables measured on large scales such as disp and hp would dominate the principal components.
# Scale the data
scaled_data <- scale(mtcars)
head(scaled_data)
## mpg cyl disp hp drat
## Mazda RX4 0.1508848 -0.1049878 -0.57061982 -0.5350928 0.5675137
## Mazda RX4 Wag 0.1508848 -0.1049878 -0.57061982 -0.5350928 0.5675137
## Datsun 710 0.4495434 -1.2248578 -0.99018209 -0.7830405 0.4739996
## Hornet 4 Drive 0.2172534 -0.1049878 0.22009369 -0.5350928 -0.9661175
## Hornet Sportabout -0.2307345 1.0148821 1.04308123 0.4129422 -0.8351978
## Valiant -0.3302874 -0.1049878 -0.04616698 -0.6080186 -1.5646078
## wt qsec vs am gear
## Mazda RX4 -0.610399567 -0.7771651 -0.8680278 1.1899014 0.4235542
## Mazda RX4 Wag -0.349785269 -0.4637808 -0.8680278 1.1899014 0.4235542
## Datsun 710 -0.917004624 0.4260068 1.1160357 1.1899014 0.4235542
## Hornet 4 Drive -0.002299538 0.8904872 1.1160357 -0.8141431 -0.9318192
## Hornet Sportabout 0.227654255 -0.4637808 -0.8680278 -0.8141431 -0.9318192
## Valiant 0.248094592 1.3269868 1.1160357 -0.8141431 -0.9318192
## carb
## Mazda RX4 0.7352031
## Mazda RX4 Wag 0.7352031
## Datsun 710 -1.1221521
## Hornet 4 Drive -1.1221521
## Hornet Sportabout -0.5030337
## Valiant -1.1221521
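As a quick verification (an optional sketch, not in the original analysis), each column of scaled_data should now have a mean of approximately 0 and a standard deviation of 1.
# Verify that scaling produced columns with mean ~0 and standard deviation ~1
round(colMeans(scaled_data), 10)
apply(scaled_data, 2, sd)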
Perform PCA
We now run prcomp() on the scaled data to compute the principal components.
# Perform PCA (the data are already centred and scaled, so the center and
# scale. arguments are redundant here, but harmless)
pca_result <- prcomp(scaled_data, center = TRUE, scale. = TRUE)
summary(pca_result)
## Importance of components:
## PC1 PC2 PC3 PC4 PC5 PC6 PC7
## Standard deviation 2.5707 1.6280 0.79196 0.51923 0.47271 0.46000 0.3678
## Proportion of Variance 0.6008 0.2409 0.05702 0.02451 0.02031 0.01924 0.0123
## Cumulative Proportion 0.6008 0.8417 0.89873 0.92324 0.94356 0.96279 0.9751
## PC8 PC9 PC10 PC11
## Standard deviation 0.35057 0.2776 0.22811 0.1485
## Proportion of Variance 0.01117 0.0070 0.00473 0.0020
## Cumulative Proportion 0.98626 0.9933 0.99800 1.0000
Importance of Components
The following table shows the importance of each principal component, including the standard deviation, proportion of variance, and cumulative proportion.
# Importance of components
importance <- pca_result$sdev^2 / sum(pca_result$sdev^2)
importance_df <- data.frame(
  PC = 1:length(importance),
  StandardDeviation = pca_result$sdev,
  ProportionVariance = importance,
  CumulativeProportion = cumsum(importance)
)
print(importance_df)
## PC StandardDeviation ProportionVariance CumulativeProportion
## 1 1 2.5706809 0.600763659 0.6007637
## 2 2 1.6280258 0.240951627 0.8417153
## 3 3 0.7919579 0.057017934 0.8987332
## 4 4 0.5192277 0.024508858 0.9232421
## 5 5 0.4727061 0.020313737 0.9435558
## 6 6 0.4599958 0.019236011 0.9627918
## 7 7 0.3677798 0.012296544 0.9750884
## 8 8 0.3505730 0.011172858 0.9862612
## 9 9 0.2775728 0.007004241 0.9932655
## 10 10 0.2281128 0.004730495 0.9979960
## 11 11 0.1484736 0.002004037 1.0000000
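A common informal rule is to retain enough components to reach a chosen share of cumulative variance. The sketch below assumes a 90% threshold purely for illustration; with the cumulative proportions in the table above, it selects the first four components.
# Select the smallest number of components reaching 90% cumulative variance
threshold <- 0.90  # assumed cutoff, for illustration only
n_components <- which(importance_df$CumulativeProportion >= threshold)[1]
n_components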
Visualization of PCA Results
Biplot of PCA
We visualize the PCA results with a biplot, which displays both the car models (as scores) and the original variables (as loading arrows) in the space of the first two principal components.
# Create a biplot of the PCA results
biplot(pca_result, main = "PCA Biplot of mtcars Dataset")
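To make the biplot easier to interpret (a small supplementary step, not part of the original output), we can inspect the loadings of the original variables on the first two components; variables with large loadings of the same sign point in similar directions in the biplot.
# Variable loadings on the first two principal components
round(pca_result$rotation[, 1:2], 3)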
Explained Variance Plot
To better understand the variance explained by each principal component, we create a scree plot using ggplot().
# Scree plot of explained variance
explained_variance <- pca_result$sdev^2 / sum(pca_result$sdev^2)
variance_df <- data.frame(PC = 1:length(explained_variance), Variance = explained_variance)
ggplot(variance_df, aes(x = PC, y = Variance)) +
  geom_line() +
  geom_point() +
  labs(title = "Explained Variance by Principal Components",
       x = "Principal Component",
       y = "Variance Explained") +
  theme_minimal()
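As a complementary view (optional; it simply reuses the variance_df data frame defined above), we can also plot the cumulative proportion of variance.
# Cumulative proportion of variance explained
variance_df$Cumulative <- cumsum(variance_df$Variance)
ggplot(variance_df, aes(x = PC, y = Cumulative)) +
  geom_line() +
  geom_point() +
  labs(title = "Cumulative Variance Explained by Principal Components",
       x = "Principal Component",
       y = "Cumulative Proportion of Variance") +
  theme_minimal()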
PCA Scatter Plot
Finally, we create a scatter plot of the first two principal components using ggplot(), labelling each point with its car model.
# Create a data frame with the PCA results
pca_data <- data.frame(pca_result$x[, 1:2], Car = rownames(mtcars))
# Scatter plot of the first two principal components
ggplot(pca_data, aes(x = PC1, y = PC2, label = Car)) +
  geom_point(size = 3) +
  geom_text(vjust = 1.5, hjust = 1.5) +
  labs(title = "PCA of mtcars Dataset",
       x = "Principal Component 1",
       y = "Principal Component 2") +
  theme_minimal()
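As an optional variation (a sketch; colouring by cyl is our choice and not part of the original analysis), mapping cylinder count to colour makes the groupings in the score plot easier to see.
# Colour the PCA scatter plot by number of cylinders
pca_data$cyl <- factor(mtcars$cyl)
ggplot(pca_data, aes(x = PC1, y = PC2, colour = cyl, label = Car)) +
  geom_point(size = 3) +
  geom_text(vjust = 1.5, hjust = 1.5, show.legend = FALSE) +
  labs(title = "PCA of mtcars Dataset, Coloured by Cylinder Count",
       x = "Principal Component 1",
       y = "Principal Component 2") +
  theme_minimal()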
Conclusion
The Principal Component Analysis (PCA) conducted on the mtcars dataset successfully identified key patterns and relationships among various car specifications. By transforming the original 11 features into a smaller set of principal components, we captured a significant amount of the dataset’s variance, with the first two components explaining approximately 84% of the total variance.
The biplot visualization highlighted how different car models are distributed in the new feature space, revealing clusters that may indicate similarities in performance and specifications. The explained variance plot underscored the diminishing returns of subsequent components, confirming the effectiveness of dimensionality reduction through PCA.
Overall, this analysis provides valuable insights into the underlying structure of the mtcars dataset, allowing for a clearer understanding of the relationships between different car features. This simplified representation can facilitate further analyses, such as clustering or regression, ultimately aiding in informed decision-making in automotive research and development.
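As one possible follow-up (a minimal sketch, not part of the original analysis; the seed and the choice of three clusters are assumptions for illustration), the principal component scores can be passed directly to k-means clustering.
# Example follow-up: k-means clustering on the first two principal components
set.seed(42)                                      # assumed seed, for reproducibility
km <- kmeans(pca_result$x[, 1:2], centers = 3)    # three clusters chosen for illustration
table(Cluster = km$cluster, Cylinders = mtcars$cyl)  # compare clusters with cylinder counts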