Background

Using devices such as Jawbone Up, Nike FuelBand, and Fitbit, it is now possible to collect a large amount of data about personal activity relatively inexpensively. These types of devices are part of the quantified self movement - a group of enthusiasts who take measurements about themselves regularly to improve their health, to find patterns in their behavior, or because they are tech geeks. One thing that people regularly do is quantify how much of a particular activity they do, but they rarely quantify how well they do it. In this project, your goal will be to use data from accelerometers on the belt, forearm, arm, and dumbbell of 6 participants. They were asked to perform barbell lifts correctly and incorrectly in 5 different ways. More information is available from the website here: http://groupware.les.inf.puc-rio.br/har (see the section on the Weight Lifting Exercise Dataset).

Data

[1] Training data: [https://d396qusza40orc.cloudfront.net/predmachlearn/pml-training.csv]

[2] Test data: [https://d396qusza40orc.cloudfront.net/predmachlearn/pml-testing.csv]

The data for this project come from this source: [http://groupware.les.inf.puc-rio.br/har]. If you use the document you create for this class for any purpose please cite them as they have been very generous in allowing their data to be used for this kind of assignment.

What you should submit

The goal of your project is to predict the manner in which the participants did the exercise. This is the “classe” variable in the training set. You may use any of the other variables to predict with. You should create a report describing how you built your model, how you used cross-validation, what you think the expected out-of-sample error is, and why you made the choices you did. You will also use your prediction model to predict 20 different test cases.

Your submission should consist of a link to a Github repo with your R markdown and compiled HTML file describing your analysis. Please constrain the text of the writeup to < 2000 words and the number of figures to be less than 5. It will make it easier for the graders if you submit a repo with a gh-pages branch so the HTML page can be viewed online (and you always want to make it easy on graders :-). You should also apply your machine learning algorithm to the 20 test cases available in the test data above. Please submit your predictions in appropriate format to the programming assignment for automated grading. See the programming assignment for additional details.

Introduction

Reproducibility

The pseudo-random number generator seed was set to 3500 for all code in this report. To run the code you will need to download and install the caret, rpart, rpart.plot, and randomForest R packages.

How the model was built

For this data set, “participants were asked to perform one set of 10 repetitions of the Unilateral Dumbbell Biceps Curl in five different fashions: exactly according to the specification (Class A), throwing the elbows to the front (Class B), lifting the dumbbell only halfway (Class C), lowering the dumbbell only halfway (Class D) and throwing the hips to the front (Class E).” Class A corresponds to the correct execution of the exercise, while the other four classes correspond to common mistakes. Two models, a decision tree and a random forest, will be fitted on subsamples of the training data and compared; the more accurate one will be applied to the 20 test cases.

Cross-validation

Cross-validation will be performed by subsampling our training data set randomly without replacement into 2 subsamples: subTraining data (70% of the original training data set) and subTesting data (30%). Our models will be fitted on the subTraining data set and tested on the subTesting data set. Once the most accurate model is chosen, it will be tested on the original testing data set.

Expected out-of-sample error

The expected out-of-sample error corresponds to the quantity 1 - accuracy computed on the cross-validation (subTesting) data, where accuracy is the proportion of correctly classified observations over the total number of observations in that set. Equivalently, it is the expected proportion of misclassified observations in the original testing data set.
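
As a minimal sketch (using caret's confusionMatrix; here fit stands for any fitted model and subTesting for the cross-validation subsample created later in this report):

pred <- predict(fit, subTesting, type = "class")   # class predictions on the hold-out subsample
cm <- confusionMatrix(pred, subTesting$classe)     # caret confusion matrix
1 - as.numeric(cm$overall["Accuracy"])             # estimated out-of-sample error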

Observations

Our outcome variable “classe” is an unordered factor variable, so we can choose 1-accuracy as our error measure. We have a large sample size (N = 19622) in the training data set, which allows us to split it into subTraining and subTesting sets for cross-validation. Features containing missing values will be discarded, as will irrelevant features; all other features will be kept as relevant variables. Decision tree and random forest algorithms are known for their ability to detect the features that are important for classification [2]. Feature selection is therefore inherent to the modelling step and not strictly necessary during data preparation, so this report contains no separate feature selection section.

Data preparation

Packages, Libraries, Seed

Installing packages, loading libraries, and setting the seed for reproducibility:

# install.packages(c("caret", "randomForest", "rpart", "rpart.plot"))  # run once if not yet installed
library(caret)
library(randomForest)
library(rpart)
library(rpart.plot)
set.seed(3500)

Loading data sets and preliminary cleaning

First we want to load the data sets into R and make sure that missing values are coded correctly. Irrelevant variables will be deleted. Results will be hidden from the report for clarity and space considerations.

Loading the training and testing data sets into R, replacing all missing values ("NA", "#DIV/0!", and empty fields) with NA:

testingset <- read.csv("pml-testing.csv", na.strings=c("NA","#DIV/0!", ""))
trainingset <- read.csv("pml-training.csv", na.strings=c("NA","#DIV/0!", ""))

Delete columns that contain missing values:

trainingset <- trainingset[,colSums(is.na(trainingset)) == 0]
testingset <- testingset[,colSums(is.na(testingset)) == 0]

Remove irrelevant variables (the first seven columns: row index, user name, timestamps, and window indicators):

trainingset <- trainingset[,-c(1:7)]
testingset <- testingset[,-c(1:7)]

Partitioning the training data set to allow cross-validation

After cleaning, the training data set contains 53 variables (19622 observations) and the testing data set contains 53 variables (20 observations). The training data set is partitioned without replacement into subTraining (70%) and subTesting (30%):

subsamples <- createDataPartition(y=trainingset$classe, p=0.70, list=FALSE)
subTraining <- trainingset[subsamples, ] 
subTesting <- trainingset[-subsamples, ]

Classe Summary

summary(subTraining$classe)
##    A    B    C    D    E 
## 3906 2658 2396 2252 2525
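
The five classes are of comparable size, with A the most frequent, which supports using accuracy (and 1 - accuracy as the error) to compare models. The proportions could be checked with, for example:

round(prop.table(table(subTraining$classe)), 3)   # class proportions in the subTraining set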

Model 1: Decision Tree

model1 <- rpart(classe ~ ., data=subTraining, method="class")
prediction1 <- predict(model1, subTesting, type = "class")
rpart.plot(model1, main="Classification Tree", extra=102, under=TRUE, faclen=0)

Test results:

confusionMatrix(prediction1, subTesting$classe)
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction    A    B    C    D    E
##          A 1495  196   34   96   24
##          B   68  651  107   75  105
##          C   42  114  765  120   79
##          D   52   70   81  565   69
##          E   17  108   39  108  805
## 
## Overall Statistics
##                                           
##                Accuracy : 0.7274          
##                  95% CI : (0.7159, 0.7388)
##     No Information Rate : 0.2845          
##     P-Value [Acc > NIR] : < 2.2e-16       
##                                           
##                   Kappa : 0.6539          
##  Mcnemar's Test P-Value : < 2.2e-16       
## 
## Statistics by Class:
## 
##                      Class: A Class: B Class: C Class: D Class: E
## Sensitivity            0.8931   0.5716   0.7456  0.58610   0.7440
## Specificity            0.9169   0.9252   0.9269  0.94473   0.9434
## Pos Pred Value         0.8103   0.6471   0.6830  0.67503   0.7474
## Neg Pred Value         0.9557   0.9000   0.9452  0.92096   0.9424
## Prevalence             0.2845   0.1935   0.1743  0.16381   0.1839
## Detection Rate         0.2540   0.1106   0.1300  0.09601   0.1368
## Detection Prevalence   0.3135   0.1709   0.1903  0.14223   0.1830
## Balanced Accuracy      0.9050   0.7484   0.8363  0.76541   0.8437

Model 2: Random Forest

model2 <- randomForest(classe ~. , data=subTraining, method="class")
prediction2 <- predict(model2, subTesting, type = "class")
confusionMatrix(prediction2, subTesting$classe)
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction    A    B    C    D    E
##          A 1670    4    0    0    0
##          B    4 1134    8    0    0
##          C    0    1 1018    9    0
##          D    0    0    0  954    0
##          E    0    0    0    1 1082
## 
## Overall Statistics
##                                          
##                Accuracy : 0.9954         
##                  95% CI : (0.9933, 0.997)
##     No Information Rate : 0.2845         
##     P-Value [Acc > NIR] : < 2.2e-16      
##                                          
##                   Kappa : 0.9942         
##  Mcnemar's Test P-Value : NA             
## 
## Statistics by Class:
## 
##                      Class: A Class: B Class: C Class: D Class: E
## Sensitivity            0.9976   0.9956   0.9922   0.9896   1.0000
## Specificity            0.9991   0.9975   0.9979   1.0000   0.9998
## Pos Pred Value         0.9976   0.9895   0.9903   1.0000   0.9991
## Neg Pred Value         0.9991   0.9989   0.9984   0.9980   1.0000
## Prevalence             0.2845   0.1935   0.1743   0.1638   0.1839
## Detection Rate         0.2838   0.1927   0.1730   0.1621   0.1839
## Detection Prevalence   0.2845   0.1947   0.1747   0.1621   0.1840
## Balanced Accuracy      0.9983   0.9965   0.9951   0.9948   0.9999
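
As noted in the Observations section, random forests compute variable importance internally. Although not part of the model comparison in this report, the fitted model2 could optionally be inspected with, for example:

importance(model2)    # Gini-based importance of each predictor
varImpPlot(model2)    # plot the most important variables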

Results

The random forest algorithm performed better than the decision tree.

Accuracy for the random forest model was 0.995 (95% CI: (0.993, 0.997)), compared to 0.727 (95% CI: (0.716, 0.739)) for the decision tree model. The random forest model is therefore chosen. Since its accuracy on the cross-validation (subTesting) data is above 99%, the expected out-of-sample error is estimated as 1 - 0.995 = 0.005 (about 0.5%), so we expect very few, if any, of the 20 test cases to be misclassified.

Submission

Predict the outcome levels on the original testing data set using the random forest model:

predictfinal <- predict(model2, testingset, type="class")
predictfinal
##  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 
##  B  A  B  A  A  E  D  B  A  A  B  C  B  A  E  E  A  B  B  B 
## Levels: A B C D E
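
For the automated grading, each of the 20 predictions can be written to its own text file. A possible helper for this (the function name pml_write_files and the file-naming scheme are illustrative, not part of the analysis above):

pml_write_files <- function(x) {
  # write one file per prediction: problem_id_1.txt ... problem_id_20.txt
  for (i in seq_along(x)) {
    filename <- paste0("problem_id_", i, ".txt")
    write.table(x[i], file = filename, quote = FALSE, row.names = FALSE, col.names = FALSE)
  }
}
pml_write_files(predictfinal)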