Using devices such as the Jawbone Up, Nike FuelBand, and Fitbit, it is now possible to collect a large amount of data about personal activity relatively inexpensively. These types of devices are part of the quantified self movement - a group of enthusiasts who take measurements about themselves regularly to improve their health, to find patterns in their behavior, or because they are tech geeks. One thing people regularly quantify is how much of a particular activity they do, but they rarely quantify how well they do it. In this project, the goal is to use data from accelerometers on the belt, forearm, arm, and dumbbell of 6 participants, who were asked to perform barbell lifts correctly and incorrectly in 5 different ways. More information is available from the website here: http://groupware.les.inf.puc-rio.br/har (see the section on the Weight Lifting Exercise Dataset).
The outcome is the categorical “classe” variable, which has 5 possible values: Class A (exactly according to the specification), Class B (throwing the elbows to the front), Class C (lifting the dumbbell only halfway), Class D (lowering the dumbbell only halfway), and Class E (throwing the hips to the front).
This dataset contains 160 variables. The first 7 will be removed entirely, because they hold only an observation index, the participant's user name, timestamps, and window indicators - none of which are useful as predictors.
To reduce computation time, variables containing at least one NA will be removed (67 variables).
Some variables are read in as factors with a huge number of levels (e.g., 323 or 401) and consist largely of blank observations. These variables will be removed as well.
Note: the following code assumes the pml-training.csv and pml-testing.csv files are in your working directory.
#loading
training <- read.csv("pml-training.csv", stringsAsFactors = TRUE) #factors are needed below; R >= 4.0 reads strings as character by default
test <- read.csv("pml-testing.csv", stringsAsFactors = TRUE)
#removing useless variables from the predictors
training <- training[,8:160]
test <- test[,8:160]
#removing variables with NAs
noNaIndex <- which(colSums(is.na(training))==0) #creating index of variables without NAs
training <- training[,noNaIndex]
test <- test[,noNaIndex]
#getting rid of "factor variables" for the Random Forest model
trainingFinal <- training[,sapply(training, is.numeric)]
testFinal <- test[,sapply(test, is.numeric)]
#removing the problem_id column at the end of the test set
testFinal <- testFinal[, names(testFinal) != "problem_id"]
#keeping the outcome (a factor) before freeing memory
classe <- training$classe
rm(training, test)
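As a quick sanity check (an optional step, not part of the original analysis, assuming the code above ran unchanged), the two predictor sets should now contain the same columns and no missing values:
#optional sanity check: matching predictor names and no NAs left
stopifnot(identical(names(trainingFinal), names(testFinal)))
stopifnot(!anyNA(trainingFinal), !anyNA(testFinal))
dim(trainingFinal) #observations x remaining predictors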
The objective of the model is clearly multi-class classification. The model to be used is a random forest, for several reasons: random forests handle a large number of correlated numeric predictors well, require little preprocessing or parameter tuning, and provide a built-in (out-of-bag) estimate of the generalization error.
library(caret)
library(randomForest)
set.seed(343) # reproducibility
#note: trControl is a caret argument that randomForest() silently ignores,
#so the error estimates below are randomForest's out-of-bag (OOB) estimates
rfModel <- randomForest(y = classe, x = trainingFinal,
                        trControl = trainControl(method = "cv", 10),
                        ntree = 250, do.trace = TRUE)
#The model
rfModel$call
## randomForest(x = trainingFinal, y = classe, ntree = 250, do.trace = T,
## trControl = trainControl(method = "cv", 10))
rfModel$type
## [1] "classification"
rfModel$confusion
## A B C D E class.error
## A 5578 2 0 0 0 0.0003584229
## B 12 3782 3 0 0 0.0039504872
## C 0 11 3409 2 0 0.0037989480
## D 0 0 23 3191 2 0.0077736318
## E 0 0 2 6 3599 0.0022179096
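For reference, the overall out-of-bag error rate implied by this matrix can be computed directly (a small worked example, not in the original analysis): the off-diagonal counts sum to 63 out of 19622 observations.
#overall OOB error rate from the counts in the confusion matrix
counts <- rfModel$confusion[, 1:5] #drop the class.error column
1 - sum(diag(counts)) / sum(counts) #roughly 0.0032, i.e. about 0.32%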
There are many methods for cross-validation, but the method selected for this project is k-fold cross-validation with k = 10. The reasoning is explained in Section 5.1.4 of An Introduction to Statistical Learning. Very briefly: the Validation Set Approach (where the data is split roughly 50/50) is computationally simple to execute (only one model has to be trained), yet it can overestimate the misclassification rate. Leave-One-Out cross-validation has the smallest bias of the validation methods, yet it is computationally expensive and its error estimate has very high variance. These two are the extremes (k = 2 and k = n, respectively). A more balanced approach is to split the data into k folds, hold out each fold in turn as the validation set while training on the remaining k - 1 folds, and then average the misclassification rates of the k trained models. The advantage of 10-fold cross-validation is that it is not very computationally demanding and scores well on the bias-variance trade-off. Please see the chapter for further detail.
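As an aside, here is a minimal sketch of how the same 10-fold cross-validation could be run explicitly through caret (randomForest() itself silently ignores the trControl argument, so the error estimates reported above are out-of-bag rather than cross-validated). Be aware that train() also tunes mtry by default, so this is considerably slower than the single randomForest() fit:
#explicit 10-fold CV via caret (slow: refits the forest for every fold and mtry value)
cvModel <- train(x = trainingFinal, y = classe, method = "rf",
                 trControl = trainControl(method = "cv", number = 10),
                 ntree = 250)
cvModel$results #cross-validated accuracy for each mtry candidate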
Returning to the fitted model, the per-class accuracies (1 - class.error from the confusion matrix above) and their mean:
#per-class accuracy and overall mean accuracy
accuracy <- 1 - rfModel$confusion[, "class.error"]
accuracy
##         A         B         C         D         E
## 0.9996416 0.9960495 0.9962011 0.9922264 0.9977821
mean(accuracy)
## [1] 0.9963801
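Finally, the fitted model can be applied to the 20 held-out test cases (output not shown):
#predicting classe for the 20 test cases
predict(rfModel, newdata = testFinal)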