The goal of this exercise is to predict the manner in which participants performed the exercise; the data are explained further in the Background section.
This project addresses the following requirements: 1) how the predictive model is built, 2) how cross-validation is conducted, 3) the expected out-of-sample error, and 4) the rationale behind the model construction.
Using devices such as Jawbone Up, Nike FuelBand, and Fitbit, it is now possible to collect a large amount of data about personal activity relatively inexpensively. These types of devices are part of the quantified self movement – a group of enthusiasts who take measurements about themselves regularly to improve their health, to find patterns in their behavior, or because they are tech geeks. One thing that people regularly do is quantify how much of a particular activity they do, but they rarely quantify how well they do it. In this project, the goal is to use data from accelerometers on the belt, forearm, arm, and dumbbell of 6 participants. They were asked to perform barbell lifts correctly and incorrectly in 5 different ways. More information is available from the website here: http://groupware.les.inf.puc-rio.br/har (see the section on the Weight Lifting Exercise Dataset).
The training data for this project are available here: https://d396qusza40orc.cloudfront.net/predmachlearn/pml-training.csv
The test data are available here: https://d396qusza40orc.cloudfront.net/predmachlearn/pml-testing.csv
To make this analysis reproducible, the seed is set to 9876.
set.seed(9876)
Download and Load the Data
trainURL <- "https://d396qusza40orc.cloudfront.net/predmachlearn/pml-training.csv"
testURL <- "https://d396qusza40orc.cloudfront.net/predmachlearn/pml-testing.csv"
download.file(trainURL, destfile = "pml-training.csv", method = "curl")
download.file(testURL, destfile = "pml-testing.csv", method = "curl")
training <- read.csv("pml-training.csv")
testing <- read.csv("pml-testing.csv")
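Note that this dataset is known to encode missing values not only as NA but also as empty strings and "#DIV/0!". A stricter read (a sketch, not run here; the analysis below keeps the simple read and lets the near-zero-variance filter catch the leftover text columns) would mark all three as NA up front:
## sketch: treat "NA", "", and "#DIV/0!" as missing on read
training <- read.csv("pml-training.csv", na.strings = c("NA", "", "#DIV/0!"))
testing <- read.csv("pml-testing.csv", na.strings = c("NA", "", "#DIV/0!"))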
Remove Incomplete Columns
## count the non-NA observations in each column
NAChecker <- function(x) {
    unlist(apply(x, 2, function(col) sum(!is.na(col))))
}
NDataPoints <- NAChecker(training)
## extract the names of the columns with complete data
CompleteVariable <- c()
for (i in seq_along(NDataPoints)) {
    if (NDataPoints[[i]] == nrow(training)) {
        CompleteVariable <- c(CompleteVariable, names(training)[i])
    }
}
## build a new data set containing only the complete columns
trainingSet <- training[, names(training) %in% CompleteVariable]
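The loop above can be collapsed into a single vectorized expression that produces the same trainingSet:
## equivalent one-liner: keep only the columns with no missing values
trainingSet <- training[, colSums(is.na(training)) == 0]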
library(caret)
## Loading required package: lattice
## Loading required package: ggplot2
## identify which variables are near zero variance
nzv <- nearZeroVar(trainingSet, saveMetrics = TRUE)
## keep the variables whose near-zero-variance flag is FALSE
myVar <- rownames(nzv[nzv$nzv == FALSE, ])
print(myVar)
## [1] "X" "user_name" "raw_timestamp_part_1"
## [4] "raw_timestamp_part_2" "cvtd_timestamp" "num_window"
## [7] "roll_belt" "pitch_belt" "yaw_belt"
## [10] "total_accel_belt" "gyros_belt_x" "gyros_belt_y"
## [13] "gyros_belt_z" "accel_belt_x" "accel_belt_y"
## [16] "accel_belt_z" "magnet_belt_x" "magnet_belt_y"
## [19] "magnet_belt_z" "roll_arm" "pitch_arm"
## [22] "yaw_arm" "total_accel_arm" "gyros_arm_x"
## [25] "gyros_arm_y" "gyros_arm_z" "accel_arm_x"
## [28] "accel_arm_y" "accel_arm_z" "magnet_arm_x"
## [31] "magnet_arm_y" "magnet_arm_z" "roll_dumbbell"
## [34] "pitch_dumbbell" "yaw_dumbbell" "total_accel_dumbbell"
## [37] "gyros_dumbbell_x" "gyros_dumbbell_y" "gyros_dumbbell_z"
## [40] "accel_dumbbell_x" "accel_dumbbell_y" "accel_dumbbell_z"
## [43] "magnet_dumbbell_x" "magnet_dumbbell_y" "magnet_dumbbell_z"
## [46] "roll_forearm" "pitch_forearm" "yaw_forearm"
## [49] "total_accel_forearm" "gyros_forearm_x" "gyros_forearm_y"
## [52] "gyros_forearm_z" "accel_forearm_x" "accel_forearm_y"
## [55] "accel_forearm_z" "magnet_forearm_x" "magnet_forearm_y"
## [58] "magnet_forearm_z" "classe"
library(dplyr)
##
## Attaching package: 'dplyr'
##
## The following objects are masked from 'package:stats':
##
## filter, lag
##
## The following objects are masked from 'package:base':
##
## intersect, setdiff, setequal, union
## drop the first six variables (row index, user name, timestamps, window),
## which are bookkeeping fields rather than sensor measurements
myVar <- myVar[-(1:6)]
trainingData <- select(trainingSet, one_of(myVar))
inTrain <- createDataPartition(y=trainingData$classe, p=0.6, list=FALSE)
## create one set for training and another for validation
trainingPart <- trainingData[inTrain,]
validationPart <- trainingData[-inTrain,]
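A quick sanity check (a sketch) confirms the 60/40 split sizes and that createDataPartition preserved the class proportions:
## sketch: check split sizes and class balance
dim(trainingPart); dim(validationPart)
round(prop.table(table(trainingPart$classe)), 3)
round(prop.table(table(validationPart$classe)), 3)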
library(corrplot)
## calculate variables' correlations
varCorr <- round(cor(trainingPart[sapply(trainingPart, is.numeric)]), 4)
## chart the correlations
par(ps=5)
corrplot.mixed(varCorr, order = "hclust", tl.col = "black", diag = "n",
               tl.pos = "lt", lower = "circle", upper = "number",
               tl.cex = 1.5, mar = c(1, 0, 1, 0))
The darker dots on the chart reflect stronger correlations between variables. Given the strong correlations among several variables, the number of predictors in the model could be further reduced by running a principal component analysis.
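caret's findCorrelation() can quantify this: it flags predictors involved in highly correlated pairs (a sketch; the 0.8 cutoff is illustrative):
## sketch: count predictors with an absolute pairwise correlation above 0.8
highCorr <- findCorrelation(varCorr, cutoff = 0.8)
length(highCorr)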
reduced <- preProcess(trainingPart[,-53], method = "pca")
trainingPCA <- predict(reduced, trainingPart[,-53])
validationPCA <- predict(reduced, validationPart[,-53])
print(reduced)
##
## Call:
## preProcess.default(x = trainingPart[, -53], method = "pca")
##
## Created from 11776 samples and 52 variables
## Pre-processing: principal component signal extraction, scaled, centered
##
## PCA needed 24 components to capture 95 percent of the variance
With the help of PCA, the number of components has been reduced to 24 while retaining 95 percent of the variance.
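The 95 percent threshold is preProcess()'s default (thresh = 0.95); retaining more variance is a one-argument change, at the cost of more components (a sketch, not used below):
## sketch: keep enough components to capture 99% of the variance instead
reduced99 <- preProcess(trainingPart[, -53], method = "pca", thresh = 0.99)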
## train a random forest model using the PCA data
modelRF <- train(trainingPart$classe ~ ., method = "rf", data = trainingPCA,
                 trControl = trainControl(method = "cv", number = 4),
                 ntree = 100, importance = TRUE)
## plot the result
par(ps=5)
varImpPlot(modelRF$finalModel, sort = TRUE, type = 1, pch = 19, col = 12, cex = 1,
           main = "Importance of Principal Components in Random Forest Model")
Validate the Model with the Validation Data Set
modelRFVal <- predict(modelRF, validationPCA)
accuracy <- confusionMatrix(validationPart$classe, modelRFVal)
accuracy$table
##           Reference
## Prediction    A    B    C    D    E
##          A 2210    7    9    5    1
##          B   42 1447   23    4    2
##          C    6   32 1305   21    4
##          D    2    2   63 1215    4
##          E    0    7   26   10 1399
Calculate the Accuracy of the Model
## postResample(pred, obs) returns the accuracy as its first element
modelRFAcc <- round(postResample(modelRFVal, validationPart$classe)[[1]], 4)
modelRFAcc
## [1] 0.9656
The model delivers 96.56% accuracy on the validation set.
1-modelRFAcc
## [1] 0.0344
The expected out-of-sample error for this model is 3.44%.
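The same estimate can be read directly off the confusion matrix as one minus the proportion of predictions on the diagonal:
## out-of-sample error as 1 - (correct predictions / total predictions)
1 - sum(diag(accuracy$table)) / sum(accuracy$table)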
## build a random forest model using the training data set directly (no PCA)
modelRF2 <- train(classe ~ ., method = "rf", data = trainingPart,
                  trControl = trainControl(method = "cv", number = 4),
                  ntree = 100, importance = TRUE)
## plot the result
par(ps=5)
varImpPlot(modelRF2$finalModel, sort = TRUE, type = 1, pch=19, col=12, cex=1, main="Importance of Predictor Variables in Random Forest Model")
Validate the Model with the Validation Data Set
modelRF2Val <- predict(modelRF2, validationPart)
accuracy2 <- confusionMatrix(validationPart$classe, modelRF2Val)
accuracy2$table
##           Reference
## Prediction    A    B    C    D    E
##          A 2228    1    3    0    0
##          B   20 1494    4    0    0
##          C    0   12 1349    7    0
##          D    0    0   14 1271    1
##          E    0    2    5    7 1428
Calculate the Accuracy of the Model
modelRF2Acc <- round(postResample(modelRF2Val, validationPart$classe)[[1]], 4)
modelRF2Acc
## [1] 0.9903
The model achieves 99.03% accuracy.
1-modelRF2Acc
## [1] 0.0097
The expected out-of-sample error for this model is 0.97%.
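confusionMatrix() also reports a 95% confidence interval around the accuracy, which brackets this error estimate (a sketch):
## sketch: accuracy with its 95% confidence interval
accuracy2$overall[c("Accuracy", "AccuracyLower", "AccuracyUpper")]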
modelRF2Test <- predict(modelRF2, testing)
modelRF2Test
## [1] B A B A A E D B A A B C B A E E A B B B
## Levels: A B C D E
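For the course submission, each of the 20 predictions is typically written to its own text file. A minimal sketch follows (the helper name pml_write_files mirrors the course's suggested template; any equivalent writer works):
## sketch: write one file per test-case prediction
pml_write_files <- function(x) {
    for (i in seq_along(x)) {
        write.table(x[i], file = paste0("problem_id_", i, ".txt"),
                    quote = FALSE, row.names = FALSE, col.names = FALSE)
    }
}
pml_write_files(as.character(modelRF2Test))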
The project started by cleaning the training dataset, removing variables with incomplete observations, to reduce the number of variables and speed up model training. PCA further reduced the number of components used to build the random forest model, at the cost of lower accuracy. The random forest model without PCA delivered higher accuracy, 99.03%, although it took slightly longer to build. Given the reasonable incremental training time, the final test predictions employed the random forest model without PCA.