Using devices such as the Jawbone Up, Nike FuelBand, and Fitbit, it is now possible to collect a large amount of data about personal activity relatively inexpensively. These types of devices are part of the quantified self movement – a group of enthusiasts who take measurements about themselves regularly to improve their health, to find patterns in their behavior, or because they are tech geeks. One thing that people regularly do is quantify how much of a particular activity they do, but they rarely quantify how well they do it. In this project, the goal is to use data from accelerometers on the belt, forearm, arm, and dumbbell of 6 participants who were asked to perform barbell lifts correctly and incorrectly in 5 different ways. More information is available from the website here: http://groupware.les.inf.puc-rio.br/har (see the section on the Weight Lifting Exercise Dataset).
The training data for this project are available here:
https://d396qusza40orc.cloudfront.net/predmachlearn/pml-training.csv
The test data are available here:
https://d396qusza40orc.cloudfront.net/predmachlearn/pml-testing.csv
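The original report does not show its setup chunk. A minimal sketch of the assumed setup follows; the package list is inferred from the functions used below, and the local file names and na.strings are assumptions. The data frames are named training and testing because the cleaning code below refers to those names.
library(caret)          # createDataPartition, train, confusionMatrix, nearZeroVar
library(randomForest)   # backend for caret's method = "rf"
library(rpart)          # classification trees
library(rpart.plot)     # prp() for fast tree plots
library(corrplot)       # correlation plot
# Assumed local file names after downloading the two CSVs above;
# na.strings is an assumption so that empty and "#DIV/0!" cells become NA
training <- read.csv("pml-training.csv", na.strings = c("NA", "#DIV/0!", ""))
testing  <- read.csv("pml-testing.csv",  na.strings = c("NA", "#DIV/0!", ""))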
First, we remove all columns that contain missing values (NA).
trainRaw <- training[, colSums(is.na(training)) == 0]
testRaw <- testing[, colSums(is.na(testing)) == 0]
Next, we drop columns that do not contribute to the accelerometer measurements (the row index, timestamps, and window indicators) and keep only the numeric predictors plus the outcome.
classe <- trainRaw$classe
trainRemove <- grepl("^X|timestamp|window", names(trainRaw))
trainRaw <- trainRaw[, !trainRemove]
trainCleaned <- trainRaw[, sapply(trainRaw, is.numeric)]
trainCleaned$classe <- classe
testRemove <- grepl("^X|timestamp|window", names(testRaw))
testRaw <- testRaw[, !testRemove]
testCleaned <- testRaw[, sapply(testRaw, is.numeric)]
Then we verify whether there are any near-zero-variance (NZV) variables.
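The code that produced the output below is not shown in the original; a plausible reconstruction (an assumption) using caret's nearZeroVar() is:
nzvTrain <- nearZeroVar(trainCleaned, saveMetrics = TRUE)
nzvTrain[nzvTrain$nzv, ]   # NZV candidates in the cleaned training set
nzvTest  <- nearZeroVar(testCleaned, saveMetrics = TRUE)
nzvTest[nzvTest$nzv, ]     # NZV candidates in the cleaned testing set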
## [1] freqRatio percentUnique zeroVar nzv
## <0 rows> (or 0-length row.names)
## [1] freqRatio percentUnique zeroVar nzv
## <0 rows> (or 0-length row.names)
There are no NZV variables that we “missed”.
Next, we verify whether the column names of the two cleaned data frames are identical:
n1 <- names(trainCleaned)
n2 <- names(testCleaned)
identical(n1, n2)
## [1] FALSE
They are not… so we look for the one(s) that are not identical:
which(n1 != n2)
## [1] 53
n1[which(n1 != n2)]; n2[which(n1 != n2)]
## [1] "classe"
## [1] "problem_id"
That is no problem, because the difference does not concern a predictor: position 53 holds the outcome “classe” in the training set and the identifier “problem_id” in the testing set. Our data are now ready to use, so we split the training data.
Then, we split the cleaned training set into pure training sets (60%, 65%, 70%) and corresponding validation sets (40%, 35%, 30%). We will use the validation sets to estimate out-of-sample performance later, and the three different splits let us check how robust the results are to the splitting proportion p.
set.seed(22519) # for reproducibility
inTrain1 <- createDataPartition(trainCleaned$classe, p=0.6, list=F)
trainData1 <- trainCleaned[inTrain1, ]
testData1 <- trainCleaned[-inTrain1, ]
inTrain2 <- createDataPartition(trainCleaned$classe, p=0.65, list=F)
trainData2 <- trainCleaned[inTrain2, ]
testData2 <- trainCleaned[-inTrain2, ]
inTrain3 <- createDataPartition(trainCleaned$classe, p=0.7, list=F)
trainData3 <- trainCleaned[inTrain3, ]
testData3 <- trainCleaned[-inTrain3, ]
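createDataPartition() stratifies on classe, so the class proportions should be nearly identical across the three splits. A quick check (a sketch; its output is not shown in the original report) could be:
# Compare class proportions in the full cleaned set and the three training splits
round(rbind(
  full   = prop.table(table(trainCleaned$classe)),
  split1 = prop.table(table(trainData1$classe)),
  split2 = prop.table(table(trainData2$classe)),
  split3 = prop.table(table(trainData3$classe))
), 3)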
Now that the data are split, we can turn to the predictive models.
We fit a predictive model for activity recognition using the random forest algorithm, because it automatically selects important variables and is generally robust to correlated covariates and outliers. We use 5-fold cross-validation when training the algorithm.
set.seed(22519)
controlRf <- trainControl(method="cv", 5)
modelRf1 <- train(classe ~ ., data=trainData1, method="rf", trControl=controlRf, ntree=250)
modelRf2 <- train(classe ~ ., data=trainData2, method="rf", trControl=controlRf, ntree=250)
modelRf3 <- train(classe ~ ., data=trainData3, method="rf", trControl=controlRf, ntree=250)
modelRf1
## Random Forest
##
## 11776 samples
## 52 predictor
## 5 classes: 'A', 'B', 'C', 'D', 'E'
##
## No pre-processing
## Resampling: Cross-Validated (5 fold)
## Summary of sample sizes: 9420, 9420, 9422, 9421, 9421
## Resampling results across tuning parameters:
##
## mtry Accuracy Kappa
## 2 0.9881117 0.9849608
## 27 0.9887057 0.9857138
## 52 0.9839503 0.9796981
##
## Accuracy was used to select the optimal model using the largest value.
## The final value used for the model was mtry = 27.
modelRf2
## Random Forest
##
## 12757 samples
## 52 predictor
## 5 classes: 'A', 'B', 'C', 'D', 'E'
##
## No pre-processing
## Resampling: Cross-Validated (5 fold)
## Summary of sample sizes: 10206, 10206, 10206, 10205, 10205
## Resampling results across tuning parameters:
##
## mtry Accuracy Kappa
## 2 0.9897309 0.9870089
## 27 0.9898880 0.9872081
## 52 0.9833824 0.9789787
##
## Accuracy was used to select the optimal model using the largest value.
## The final value used for the model was mtry = 27.
modelRf3
## Random Forest
##
## 13737 samples
## 52 predictor
## 5 classes: 'A', 'B', 'C', 'D', 'E'
##
## No pre-processing
## Resampling: Cross-Validated (5 fold)
## Summary of sample sizes: 10988, 10990, 10990, 10991, 10989
## Resampling results across tuning parameters:
##
## mtry Accuracy Kappa
## 2 0.9894445 0.9866470
## 27 0.9892984 0.9864629
## 52 0.9849308 0.9809380
##
## Accuracy was used to select the optimal model using the largest value.
## The final value used for the model was mtry = 2.
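The tuning summaries above show which mtry value was selected for each split. To see which sensor variables drive the predictions, one could additionally inspect variable importance via caret (a sketch; this output is not part of the original report):
varImp(modelRf1)   # scaled variable importance of the p = 60% model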
Then, we estimate the performance of each model on its corresponding validation data set.
predictRf1 <- predict(modelRf1, testData1)
confusionMatrix(testData1$classe, predictRf1)
## Confusion Matrix and Statistics
##
## Reference
## Prediction A B C D E
## A 2230 1 0 0 1
## B 7 1506 5 0 0
## C 0 7 1358 3 0
## D 0 2 19 1264 1
## E 2 1 3 6 1430
##
## Overall Statistics
##
## Accuracy : 0.9926
## 95% CI : (0.9905, 0.9944)
## No Information Rate : 0.2854
## P-Value [Acc > NIR] : < 2.2e-16
##
## Kappa : 0.9906
## Mcnemar's Test P-Value : NA
##
## Statistics by Class:
##
## Class: A Class: B Class: C Class: D Class: E
## Sensitivity 0.9960 0.9927 0.9805 0.9929 0.9986
## Specificity 0.9996 0.9981 0.9985 0.9967 0.9981
## Pos Pred Value 0.9991 0.9921 0.9927 0.9829 0.9917
## Neg Pred Value 0.9984 0.9983 0.9958 0.9986 0.9997
## Prevalence 0.2854 0.1933 0.1765 0.1622 0.1825
## Detection Rate 0.2842 0.1919 0.1731 0.1611 0.1823
## Detection Prevalence 0.2845 0.1935 0.1744 0.1639 0.1838
## Balanced Accuracy 0.9978 0.9954 0.9895 0.9948 0.9984
accuracy1 <- postResample(predictRf1, testData1$classe)
accuracy1 <- round(accuracy1, digits = 4)
oose1 <- 1 - as.numeric(confusionMatrix(testData1$classe, predictRf1)$overall[1])
oose1 <- round(oose1, digits = 4)
So, the estimated accuracy of the model is 0.9926 and the estimated out-of-sample error is 0.0074.
predictRf2 <- predict(modelRf2, testData2)
confusionMatrix(testData2$classe, predictRf2)
## Confusion Matrix and Statistics
##
## Reference
## Prediction A B C D E
## A 1950 3 0 0 0
## B 13 1314 1 0 0
## C 0 2 1194 1 0
## D 0 3 23 1099 0
## E 0 1 3 9 1249
##
## Overall Statistics
##
## Accuracy : 0.9914
## 95% CI : (0.9889, 0.9935)
## No Information Rate : 0.2859
## P-Value [Acc > NIR] : < 2.2e-16
##
## Kappa : 0.9891
## Mcnemar's Test P-Value : NA
##
## Statistics by Class:
##
## Class: A Class: B Class: C Class: D Class: E
## Sensitivity 0.9934 0.9932 0.9779 0.9910 1.0000
## Specificity 0.9994 0.9975 0.9995 0.9955 0.9977
## Pos Pred Value 0.9985 0.9895 0.9975 0.9769 0.9897
## Neg Pred Value 0.9974 0.9984 0.9952 0.9983 1.0000
## Prevalence 0.2859 0.1927 0.1779 0.1615 0.1819
## Detection Rate 0.2840 0.1914 0.1739 0.1601 0.1819
## Detection Prevalence 0.2845 0.1934 0.1744 0.1639 0.1838
## Balanced Accuracy 0.9964 0.9953 0.9887 0.9932 0.9988
accuracy2 <- postResample(predictRf2, testData2$classe)
accuracy2 <- round(accuracy2, digits = 4)
oose2 <- 1 - as.numeric(confusionMatrix(testData2$classe, predictRf2)$overall[1])
oose2 <- round(oose2, digits = 4)
So, the estimated accuracy of the model is 0.9914 and the estimated out-of-sample error is 0.0086.
predictRf3 <- predict(modelRf3, testData3)
confusionMatrix(testData3$classe, predictRf3)
## Confusion Matrix and Statistics
##
## Reference
## Prediction A B C D E
## A 1672 1 0 0 1
## B 16 1119 4 0 0
## C 0 6 1019 1 0
## D 0 0 23 941 0
## E 0 0 0 3 1079
##
## Overall Statistics
##
## Accuracy : 0.9907
## 95% CI : (0.9879, 0.993)
## No Information Rate : 0.2868
## P-Value [Acc > NIR] : < 2.2e-16
##
## Kappa : 0.9882
## Mcnemar's Test P-Value : NA
##
## Statistics by Class:
##
## Class: A Class: B Class: C Class: D Class: E
## Sensitivity 0.9905 0.9938 0.9742 0.9958 0.9991
## Specificity 0.9995 0.9958 0.9986 0.9953 0.9994
## Pos Pred Value 0.9988 0.9824 0.9932 0.9761 0.9972
## Neg Pred Value 0.9962 0.9985 0.9944 0.9992 0.9998
## Prevalence 0.2868 0.1913 0.1777 0.1606 0.1835
## Detection Rate 0.2841 0.1901 0.1732 0.1599 0.1833
## Detection Prevalence 0.2845 0.1935 0.1743 0.1638 0.1839
## Balanced Accuracy 0.9950 0.9948 0.9864 0.9956 0.9992
accuracy3 <- postResample(predictRf3, testData3$classe)
accuracy3 <- round(accuracy3, digits = 4)
oose3 <- 1 - as.numeric(confusionMatrix(testData3$classe, predictRf3)$overall[1])
oose3 <- round(oose3, digits = 4)
So, the estimated accuracy of the model is 0.9907 and the estimated out-of-sample error is 0.0093.
The results are very robust with respect to the splitting factor p. The slightly best accuracy is attained by modelRf1. (We will see in the appendix that the different versions lead to different trees with the same result.)
Now, we apply the models to the original testing data set downloaded from the data source, removing the problem_id column first.
result1 <- predict(modelRf1, testCleaned[, -length(names(testCleaned))])
result2 <- predict(modelRf2, testCleaned[, -length(names(testCleaned))])
result3 <- predict(modelRf3, testCleaned[, -length(names(testCleaned))])
result1
## [1] B A B A A E D B A A B C B A E E A B B B
## Levels: A B C D E
result2
## [1] B A B A A E D B A A B C B A E E A B B B
## Levels: A B C D E
result3
## [1] B A B A A E D B A A B C B A E E A B B B
## Levels: A B C D E
identical(result1, result2) && identical(result2, result3)   # identical() only compares two objects at a time
## [1] TRUE
All results are identical; any of them can be used for answering the quiz questions.
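If the predictions need to be submitted, one could write them out, for example one text file per test case (a sketch, not part of the original report; the file names are assumptions):
# Write each predicted class to its own file, e.g. problem_id_1.txt, problem_id_2.txt, ...
for (i in seq_along(result1)) {
  writeLines(as.character(result1[i]), paste0("problem_id_", i, ".txt"))
}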
The following graphics are based on splitting version 1 (p = 60%).
cmrf <- confusionMatrix(predictRf1, testData1$classe)
plot(cmrf$table, col = cmrf$byClass, main = paste("Random Forest Confusion Matrix: Accuracy =", round(cmrf$overall['Accuracy'], 4)))
For all variants (p = 60%, p = 65%, and p = 70%) we also plot a fast classification tree:
treeModel <- rpart(classe ~ ., data=trainData1, method="class")
prp(treeModel) # fast plot
treeModel <- rpart(classe ~ ., data=trainData2, method="class")
prp(treeModel) # fast plot
treeModel <- rpart(classe ~ ., data=trainData3, method="class")
prp(treeModel) # fast plot
It is interesting to see that different trees lead to the same result.
corrPlot <- cor(trainData1[, -length(names(trainData1))])
corrplot(corrPlot, method="color")
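As a complement to the correlation plot, one could list the most strongly correlated predictors with caret's findCorrelation() (a sketch; the 0.8 cutoff is an arbitrary assumption):
highCorr <- findCorrelation(corrPlot, cutoff = 0.8)   # column indices of highly correlated predictors
names(trainData1)[highCorr]                           # their names in the training data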