Using devices such as Jawbone Up, Nike FuelBand, and Fitbit, it is now possible to collect a large amount of data about personal activity relatively inexpensively. These types of devices are part of the quantified self movement: a group of enthusiasts who take measurements about themselves regularly to improve their health, to find patterns in their behavior, or because they are tech geeks. One thing that people regularly quantify is how much of a particular activity they do, but they rarely quantify how well they do it. In this project, your goal will be to use data from accelerometers on the belt, forearm, arm, and dumbbell of 6 participants, who were asked to perform barbell lifts correctly and incorrectly in 5 different ways.
More information is available from the website here: http://groupware.les.inf.puc-rio.br/har (see the section on the Weight Lifting Exercise Dataset).
The goal of your project is to predict the manner in which they did the exercise. This is the “classe” variable in the training set. You may use any of the other variables to predict with. You should create a report describing how you built your model, how you used cross validation, what you think the expected out of sample error is, and why you made the choices you did. You will also use your prediction model to predict 20 different test cases.
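The analysis below assumes the two CSV files are already in the working directory. If they are not, they can be fetched first; the URLs below are the standard course locations and are an assumption that should be verified:
trainURL <- "https://d396qusza40orc.cloudfront.net/predmachlearn/pml-training.csv"
testURL  <- "https://d396qusza40orc.cloudfront.net/predmachlearn/pml-testing.csv"
# Download only if the files are not already present
if (!file.exists("pml-training.csv")) download.file(trainURL, "pml-training.csv")
if (!file.exists("pml-testing.csv"))  download.file(testURL,  "pml-testing.csv")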
library(knitr)
library(randomForest)
## randomForest 4.6-12
## Type rfNews() to see new features/changes/bug fixes.
library(caret)
## Loading required package: lattice
## Loading required package: ggplot2
##
## Attaching package: 'ggplot2'
## The following object is masked from 'package:randomForest':
##
## margin
library(rpart)
library(rpart.plot)
library(corrplot)
## Warning: package 'corrplot' was built under R version 3.2.5
library(gbm)
## Loading required package: survival
##
## Attaching package: 'survival'
## The following object is masked from 'package:caret':
##
## cluster
## Loading required package: splines
## Loading required package: parallel
## Loaded gbm 2.1.1
# Load the training and testing CSV files
trainData <- read.csv("pml-training.csv", header = TRUE)
testData <- read.csv("pml-testing.csv", header = TRUE)
dim(trainData)
## [1] 19622 160
The raw dataset contained 19622 rows of data with 160 variables. Many variables were mostly missing or not useful as predictors, so we remove them from the raw dataset (trainData) to produce a cleaned training dataset (train).
# First pass: keep only the columns with no NA values
train_filter <- colnames(trainData)[colSums(is.na(trainData)) == 0]
train <- trainData[, train_filter]
# Several summary columns were read as text rather than NA, so select the
# 52 raw sensor measurements plus "classe" by their positions in trainData
train <- trainData[, c(8:11, 37:49, 60:68, 84:86, 102, 113:124, 140, 151:160)]
dim(train)
## [1] 19622 53
After cleaning, the dataset still contains 19622 rows but only 53 variables.
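As a side note, the same 53 columns can be reached without hard-coded indices by reading the file so that empty strings and division errors become NA; this is a sketch of the alternative, not the approach used above:
# Alternative cleaning sketch: treat "NA", "" and "#DIV/0!" as missing on read
trainAlt <- read.csv("pml-training.csv", na.strings = c("NA", "", "#DIV/0!"))
trainAlt <- trainAlt[, colSums(is.na(trainAlt)) == 0]  # drop summary columns
trainAlt <- trainAlt[, -(1:7)]  # drop id, name, timestamp and window columns
dim(trainAlt)  # expected to match the 19622 x 53 result above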
Partitioning the training data into two sets, 60% for TrainSet and 40% for TestSet:
# Stratified 60/40 split on the outcome classe
inTrain <- createDataPartition(train$classe, p = 0.6, list = FALSE)
TrainSet <- train[inTrain, ]
TestSet <- train[-inTrain, ]
dim(TrainSet)
## [1] 11776 53
dim(TestSet)
## [1] 7846 53
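Since createDataPartition produces a stratified split on classe, the class proportions in the two partitions should be nearly identical, which can be verified with:
# Class proportions in each partition (should be nearly identical)
round(prop.table(table(TrainSet$classe)), 3)
round(prop.table(table(TestSet$classe)), 3)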
I am going to explore the data with two models, a Random Forest and a Generalized Boosted Model (GBM). As shown below, the Random Forest method produced the better results.
A confusion matrix is listed at the end of each analysis to show the accuracy of each model.
set.seed(1777)
# Fit a random forest with 500 trees on the training partition
RandomForest <- randomForest(classe ~ ., data = TrainSet, ntree = 500, importance = TRUE)
RandomForest
##
## Call:
## randomForest(formula = classe ~ ., data = TrainSet, ntree = 500, importance = TRUE)
## Type of random forest: classification
## Number of trees: 500
## No. of variables tried at each split: 7
##
## OOB estimate of error rate: 0.62%
## Confusion matrix:
## A B C D E class.error
## A 3346 2 0 0 0 0.0005973716
## B 14 2256 9 0 0 0.0100921457
## C 0 12 2040 2 0 0.0068159688
## D 0 0 22 1905 3 0.0129533679
## E 0 0 3 6 2156 0.0041570439
# OOB and per-class error rates as trees are added
plot(RandomForest)
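Because the model was fitted with importance = TRUE, the most influential predictors can also be inspected, for example:
# Top 10 predictors by mean decrease in accuracy and in Gini impurity
varImpPlot(RandomForest, n.var = 10, main = "Random Forest Variable Importance")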
predictRandomForest <- predict(RandomForest, newdata = TestSet)
CMRandForest <- confusionMatrix(predictRandomForest, TestSet$classe)
CMRandForest
## Confusion Matrix and Statistics
##
## Reference
## Prediction A B C D E
## A 2229 12 0 0 0
## B 2 1503 7 0 1
## C 1 3 1360 19 0
## D 0 0 1 1267 11
## E 0 0 0 0 1430
##
## Overall Statistics
##
## Accuracy : 0.9927
## 95% CI : (0.9906, 0.9945)
## No Information Rate : 0.2845
## P-Value [Acc > NIR] : < 2.2e-16
##
## Kappa : 0.9908
## Mcnemar's Test P-Value : NA
##
## Statistics by Class:
##
## Class: A Class: B Class: C Class: D Class: E
## Sensitivity 0.9987 0.9901 0.9942 0.9852 0.9917
## Specificity 0.9979 0.9984 0.9964 0.9982 1.0000
## Pos Pred Value 0.9946 0.9934 0.9834 0.9906 1.0000
## Neg Pred Value 0.9995 0.9976 0.9988 0.9971 0.9981
## Prevalence 0.2845 0.1935 0.1744 0.1639 0.1838
## Detection Rate 0.2841 0.1916 0.1733 0.1615 0.1823
## Detection Prevalence 0.2856 0.1928 0.1763 0.1630 0.1823
## Balanced Accuracy 0.9983 0.9943 0.9953 0.9917 0.9958
set.seed(12345)
# 5-fold cross-validation (one repeat) to tune the boosted model
traincontrolGBM <- trainControl(method = "repeatedcv", number = 5, repeats = 1)
modelGBM <- train(classe ~ ., data = TrainSet, method = "gbm", trControl = traincontrolGBM, verbose = FALSE)
## Loading required package: plyr
modelGBM$finalModel
## A gradient boosted model with multinomial loss function.
## 150 iterations were performed.
## There were 52 predictors of which 45 had non-zero influence.
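The train object also records the cross-validated accuracy across the default gbm tuning grid (boosting iterations by interaction depth), which can be visualized with:
# Resampled accuracy across the gbm tuning grid
plot(modelGBM)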
predictGBM <- predict(modelGBM, newdata = TestSet)
confMatGBM <- confusionMatrix(predictGBM, TestSet$classe)
confMatGBM
## Confusion Matrix and Statistics
##
## Reference
## Prediction A B C D E
## A 2198 64 0 2 3
## B 22 1407 26 6 17
## C 8 43 1325 45 17
## D 1 4 15 1216 22
## E 3 0 2 17 1383
##
## Overall Statistics
##
## Accuracy : 0.9596
## 95% CI : (0.955, 0.9638)
## No Information Rate : 0.2845
## P-Value [Acc > NIR] : < 2.2e-16
##
## Kappa : 0.9489
## Mcnemar's Test P-Value : 1.284e-12
##
## Statistics by Class:
##
## Class: A Class: B Class: C Class: D Class: E
## Sensitivity 0.9848 0.9269 0.9686 0.9456 0.9591
## Specificity 0.9877 0.9888 0.9826 0.9936 0.9966
## Pos Pred Value 0.9696 0.9520 0.9214 0.9666 0.9843
## Neg Pred Value 0.9939 0.9826 0.9933 0.9894 0.9908
## Prevalence 0.2845 0.1935 0.1744 0.1639 0.1838
## Detection Rate 0.2801 0.1793 0.1689 0.1550 0.1763
## Detection Prevalence 0.2889 0.1884 0.1833 0.1603 0.1791
## Balanced Accuracy 0.9862 0.9578 0.9756 0.9696 0.9778
The accuracies of the two classification models above, evaluated on the held-out TestSet, are:
Random Forest: 0.9927 GBM: 0.9596
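These figures can be read directly from the two confusionMatrix objects:
# Overall accuracy of each model on TestSet
CMRandForest$overall["Accuracy"]
confMatGBM$overall["Accuracy"]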
The Random Forest gave an accuracy on the TestSet of 99.27%, which is more accurate than what I got from the GBM. The expected out-of-sample error is therefore 100% - 99.27% = 0.73%, consistent with the OOB error estimate of 0.62% reported above.
# Apply the final Random Forest model to the 20 graded test cases
predictTEST <- predict(RandomForest, newdata = testData)
predictTEST
## 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
## B A B A A E D B A A B C B A E E A B B B
## Levels: A B C D E
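If individual answer files are needed for submission, a minimal sketch follows; the file-name pattern is an assumption:
# Write one answer file per test case (assumed naming convention)
for (i in seq_along(predictTEST)) {
  writeLines(as.character(predictTEST[i]), paste0("problem_id_", i, ".txt"))
}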