Thomas J James
3/5/2021
We first load the R packages used throughout the analysis.
library(corrplot)
library(caret)        # also loads lattice and ggplot2
library(rpart.plot)
library(randomForest)
trainUrl <-"https://d396qusza40orc.cloudfront.net/predmachlearn/pml-training.csv"
testUrl <- "https://d396qusza40orc.cloudfront.net/predmachlearn/pml-testing.csv"
trainFile <- "./data/pml-training.csv"
testFile <- "./data/pml-testing.csv"
if (!file.exists("./data")) {
  dir.create("./data")
}
if (!file.exists(trainFile)) {
  # method = "curl" requires the curl binary; on Windows the default method also works
  download.file(trainUrl, destfile = trainFile, method = "curl")
}
if (!file.exists(testFile)) {
  download.file(testUrl, destfile = testFile, method = "curl")
}
After downloading the data from the data source, we can read the two CSV files into data frames.
trainRaw <- read.csv("./data/pml-training.csv")
testRaw <- read.csv("./data/pml-testing.csv")
dim(trainRaw)
## [1] 19622   160
dim(testRaw)
## [1]  20 160
The training data set contains 19622 observations and 160 variables, while the testing data set contains 20 observations and 160 variables. The “classe” variable in the training set is the outcome to predict.
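Before cleaning, it can help to glance at the outcome distribution; the following one-liner (a quick check added here, its output is not part of the original report) tabulates the five classes.
table(trainRaw$classe)  # counts of the five activity classes A-E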
In this step, we clean the data, removing columns riddled with missing values as well as some uninformative variables.
sum(complete.cases(trainRaw))
## [1] 406
Only 406 of the 19622 observations are complete cases, so we drop sparse columns rather than rows. First, we remove every column that contains NA values.
trainRaw <- trainRaw[, colSums(is.na(trainRaw)) == 0]
testRaw <- testRaw[, colSums(is.na(testRaw)) == 0]
Next, we drop bookkeeping columns (the row index X, timestamps, and window fields) that carry no accelerometer signal.
classe <- trainRaw$classe
trainRemove <- grepl("^X|timestamp|window", names(trainRaw))  # flag index, timestamp, and window columns
trainRaw <- trainRaw[, !trainRemove]
trainCleaned <- trainRaw[, sapply(trainRaw, is.numeric)]  # keep numeric predictors only
trainCleaned$classe <- classe  # re-attach the outcome
testRemove <- grepl("^X|timestamp|window", names(testRaw))
testRaw <- testRaw[, !testRemove]
testCleaned <- testRaw[, sapply(testRaw, is.numeric)]
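To verify the result, one can inspect the dimensions of the cleaned frames (a quick check; its output matches the counts stated below).
dim(trainCleaned)
dim(testCleaned)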
Now, the cleaned training data set contains 19622 observations and 53 variables, while the testing data set contains 20 observations and 53 variables. The “classe” variable is still in the cleaned training set.
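As an additional safeguard not in the original pipeline, caret's nearZeroVar can confirm that no near-constant predictors remain; a minimal sketch:
nzv <- nearZeroVar(trainCleaned, saveMetrics = TRUE)
nzv[nzv$nzv, ]  # rows flagged as near-zero-variance, if any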
Then, we split the cleaned training set into a pure training data set (70%) and a validation data set (30%); we will use the validation set to estimate the out-of-sample error in later steps.
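The splitting code is not shown in the original report; a minimal sketch using caret's createDataPartition (the seed value is an arbitrary choice, assumed here for reproducibility) would be:
set.seed(22519)  # arbitrary seed, assumed for reproducibility
inTrain <- createDataPartition(trainCleaned$classe, p = 0.70, list = FALSE)
trainData <- trainCleaned[inTrain, ]   # 70% for model fitting (13737 rows)
testData <- trainCleaned[-inTrain, ]   # 30% held out for validation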
We fit a predictive model for activity recognition using the random forest algorithm, because it automatically selects important variables and is generally robust to correlated covariates and outliers. We apply 5-fold cross-validation when training the model.
controlRf <- trainControl(method = "cv", number = 5)
modelRf <- train(classe ~ ., data = trainData, method = "rf", trControl = controlRf, ntree = 250)
modelRf
## Random Forest
##
## 13737 samples
## 52 predictor
## 5 classes: 'A', 'B', 'C', 'D', 'E'
##
## No pre-processing
## Resampling: Cross-Validated (5 fold)
## Summary of sample sizes: 10988, 10989, 10989, 10991, 10991
## Resampling results across tuning parameters:
##
## mtry Accuracy Kappa
## 2 0.9912654 0.9889499
## 27 0.9916291 0.9894104
## 52 0.9842766 0.9801110
##
## Accuracy was used to select the optimal model using the largest value.
## The final value used for the model was mtry = 27.
Then, we estimate the performance of the model on the validation data set.
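The call producing the statistics below is not shown in the original; a minimal sketch, assuming the held-out labels are compared against the model's predictions with caret's confusionMatrix:
predictRf <- predict(modelRf, testData)
confusionMatrix(factor(testData$classe), predictRf)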
## Confusion Matrix and Statistics
##
## Reference
## Prediction A B C D E
## A 1669 5 0 0 0
## B 2 1130 4 0 0
## C 3 3 1019 10 4
## D 0 1 3 954 2
## E 0 0 0 0 1076
##
## Overall Statistics
##
## Accuracy : 0.9937
## 95% CI : (0.9913, 0.9956)
## No Information Rate : 0.2845
## P-Value [Acc > NIR] : < 2.2e-16
##
## Kappa : 0.992
##
## Mcnemar's Test P-Value : NA
##
## Statistics by Class:
##
## Class: A Class: B Class: C Class: D Class: E
## Sensitivity 0.9970 0.9921 0.9932 0.9896 0.9945
## Specificity 0.9988 0.9987 0.9959 0.9988 1.0000
## Pos Pred Value 0.9970 0.9947 0.9808 0.9937 1.0000
## Neg Pred Value 0.9988 0.9981 0.9986 0.9980 0.9988
## Prevalence 0.2845 0.1935 0.1743 0.1638 0.1839
## Detection Rate 0.2836 0.1920 0.1732 0.1621 0.1828
## Detection Prevalence 0.2845 0.1930 0.1766 0.1631 0.1828
## Balanced Accuracy 0.9979 0.9954 0.9945 0.9942 0.9972
accuracy <- postResample(predictRf, factor(testData$classe))
accuracy
##  Accuracy     Kappa
## 0.9937128 0.9920477
oose <- 1 - as.numeric(confusionMatrix(factor(testData$classe), predictRf)$overall[1])
oose
## [1] 0.006287171
So, the estimated accuracy of the model on the validation set is 99.37% and the estimated out-of-sample error is 0.63%.
Now, we apply the model to the original testing data set downloaded from the data source. We remove the problem_id column first.
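The prediction step itself is not shown in the original; a minimal sketch, assuming problem_id is the only non-predictor column left in the cleaned testing set and is dropped by name:
result <- predict(modelRf, testCleaned[, names(testCleaned) != "problem_id"])
result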
## [1] B A B A A E D B A A B C B A E E A B B B
## Levels: A B C D E