Project Assignment

Background

Using devices such as Jawbone Up, Nike FuelBand, and Fitbit, it is now possible to collect a large amount of data about personal activity relatively inexpensively. These types of devices are part of the quantified self movement – a group of enthusiasts who take measurements about themselves regularly to improve their health, to find patterns in their behavior, or because they are tech geeks. One thing that people regularly do is quantify how much of a particular activity they do, but they rarely quantify how well they do it. In this project, your goal will be to use data from accelerometers on the belt, forearm, arm, and dumbbell of 6 participants. They were asked to perform barbell lifts correctly and incorrectly in 5 different ways. More information is available from the website here: http://groupware.les.inf.puc-rio.br/har (see the section on the Weight Lifting Exercise Dataset).

Data

The training data for this project are available here:

https://d396qusza40orc.cloudfront.net/predmachlearn/pml-training.csv

The test data are available here:

https://d396qusza40orc.cloudfront.net/predmachlearn/pml-testing.csv

The data for this project come from this source: http://groupware.les.inf.puc-rio.br/har. If you use the document you create for this class for any purpose, please cite them, as they have been very generous in allowing their data to be used for this kind of assignment.

What you should submit

The goal of your project is to predict the manner in which the participants did the exercise. This is the “classe” variable in the training set. You may use any of the other variables to predict with. You should create a report describing how you built your model, how you used cross-validation, what you think the expected out-of-sample error is, and why you made the choices you did. You will also use your prediction model to predict 20 different test cases.

  1. Your submission should consist of a link to a Github repo with your R markdown and compiled HTML file describing your analysis. Please constrain the text of the writeup to < 2000 words and the number of figures to fewer than 5. It will make it easier for the graders if you submit a repo with a gh-pages branch so the HTML page can be viewed online (and you always want to make it easy on graders :-).
  2. You should also apply your machine learning algorithm to the 20 test cases available in the test data above. Please submit your predictions in the appropriate format to the programming assignment for automated grading. See the programming assignment for additional details.

Reproducibility

Due to security concerns with the exchange of R code, your code will not be run during the evaluation by your classmates. Please be sure that if they download the repo, they will be able to view the compiled HTML version of your analysis.

Project Code

Prepare the R Environment and Data

library("caret") 
## Warning: package 'caret' was built under R version 3.1.2
## Loading required package: lattice
## Loading required package: ggplot2
set.seed(12345) #set seed in random number generator for the sake of reproducibility.

# Download the training and testing data if not already done.
#download.file("https://d396qusza40orc.cloudfront.net/predmachlearn/pml-training.csv", destfile="./pml-training.csv")
#download.file("https://d396qusza40orc.cloudfront.net/predmachlearn/pml-testing.csv", destfile="./pml-testing.csv")

# Load the training and testing data,
# treating invalid strings ("NA", "#DIV/0!", "") as missing values (NA)
trainingdata <- read.csv("./pml-training.csv",na.strings=c("NA","#DIV/0!",""))
testingdata <- read.csv("./pml-testing.csv",na.strings=c("NA","#DIV/0!",""))
dim(trainingdata)
## [1] 19622   160
dim(testingdata)
## [1]  20 160
# Drop any columns that contain NAs in testingdata (applied to both sets)
training <- trainingdata[,colSums(is.na(testingdata))==0]
testing <- testingdata[,colSums(is.na(testingdata))==0]

# Delete irrelevant columns [X, user_name, raw_timestamp_part_1, raw_timestamp_part_2, cvtd_timestamp, new_window, num_window]
training  <-training[,-c(1:7)]
testing <-testing[,-c(1:7)]

# Take a look at the data after cleaning
dim(training)
## [1] 19622    53
dim(testing)
## [1] 20 53
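
As a quick sanity check (optional; this assumes the standard column layout of the dataset, with training ending in classe and testing ending in problem_id), the retained predictor columns should be identical across the two sets:

# Sanity check (assumption: the first 52 columns are the shared predictors).
# Should return TRUE.
all.equal(colnames(training)[1:52], colnames(testing)[1:52])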

Cross Validation

Split the original training data into subTraining (75%) and subTesting (25%) for cross-validation. Fit each model on the subTraining data and then predict on the subTesting data. The prediction accuracy on subTesting estimates each model's out-of-sample accuracy.

# Divide training data to subtraining and subtesting (75% subtraining, 25% subtesting)
inTrain <- createDataPartition(y=training$classe, p=0.75, list=FALSE)
subTraining <- training[inTrain,]
subTesting <- training[-inTrain,]
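
This report uses a single 75/25 hold-out split. As an alternative (a sketch only, not run here), caret can perform k-fold cross-validation internally during training; for example, 5-fold:

# Alternative (not run): 5-fold cross-validation handled by caret itself.
# ctrl <- trainControl(method = "cv", number = 5)
# model_cv <- train(classe ~ ., data = subTraining, method = "rpart", trControl = ctrl)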

Decision Tree algorithm

library("rpart")
model_dt <- rpart(classe ~., data=subTraining, method="class")
pred_dt  <- predict(model_dt, subTesting, type="class")
res_dt <- confusionMatrix(pred_dt,subTesting$classe)
res_dt
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction    A    B    C    D    E
##          A 1260  156   33   40   23
##          B   52  555   73   52   52
##          C   24  136  575   83   95
##          D   40   33  150  513   89
##          E   19   69   24  116  642
## 
## Overall Statistics
##                                        
##                Accuracy : 0.723        
##                  95% CI : (0.71, 0.735)
##     No Information Rate : 0.284        
##     P-Value [Acc > NIR] : <2e-16       
##                                        
##                   Kappa : 0.649        
##  Mcnemar's Test P-Value : <2e-16       
## 
## Statistics by Class:
## 
##                      Class: A Class: B Class: C Class: D Class: E
## Sensitivity             0.903    0.585    0.673    0.638    0.713
## Specificity             0.928    0.942    0.917    0.924    0.943
## Pos Pred Value          0.833    0.708    0.630    0.622    0.738
## Neg Pred Value          0.960    0.904    0.930    0.929    0.936
## Prevalence              0.284    0.194    0.174    0.164    0.184
## Detection Rate          0.257    0.113    0.117    0.105    0.131
## Detection Prevalence    0.308    0.160    0.186    0.168    0.177
## Balanced Accuracy       0.916    0.763    0.795    0.781    0.828
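
For interpretation, the single tree can also be drawn. A minimal sketch, assuming the rpart.plot package is installed (kept commented out to stay under the figure limit):

# Optional: visualize the fitted tree.
# library("rpart.plot")
# rpart.plot(model_dt)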

Bagging algorithm

Bootstrap aggregating (bagging) method: fit multiple trees on bootstrap resamples of the training data and combine their predictions by majority vote.

library("ipred")
## Warning: package 'ipred' was built under R version 3.1.2
# Fit a bagged ensemble of classification trees and evaluate on the hold-out set
model_bagging <- bagging(classe ~ ., data=subTraining)
pred_bagging <- predict(model_bagging, subTesting)
res_bagging <- confusionMatrix(pred_bagging, subTesting$classe)
res_bagging
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction    A    B    C    D    E
##          A 1391    6    0    0    0
##          B    3  936    3    1    1
##          C    1    7  847   10    4
##          D    0    0    5  793    6
##          E    0    0    0    0  890
## 
## Overall Statistics
##                                         
##                Accuracy : 0.99          
##                  95% CI : (0.987, 0.993)
##     No Information Rate : 0.284         
##     P-Value [Acc > NIR] : <2e-16        
##                                         
##                   Kappa : 0.988         
##  Mcnemar's Test P-Value : NA            
## 
## Statistics by Class:
## 
##                      Class: A Class: B Class: C Class: D Class: E
## Sensitivity             0.997    0.986    0.991    0.986    0.988
## Specificity             0.998    0.998    0.995    0.997    1.000
## Pos Pred Value          0.996    0.992    0.975    0.986    1.000
## Neg Pred Value          0.999    0.997    0.998    0.997    0.997
## Prevalence              0.284    0.194    0.174    0.164    0.184
## Detection Rate          0.284    0.191    0.173    0.162    0.181
## Detection Prevalence    0.285    0.192    0.177    0.164    0.181
## Balanced Accuracy       0.998    0.992    0.993    0.992    0.994
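
By default, ipred's bagging() grows 25 bootstrap trees. As a sketch (parameter values are illustrative, not tuned), the number of replicates can be raised and an out-of-bag error estimate requested:

# Optional (not run): more bootstrap replicates plus an out-of-bag estimate.
# model_bagging50 <- bagging(classe ~ ., data = subTraining, nbagg = 50, coob = TRUE)
# model_bagging50$err  # out-of-bag misclassification error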

Random Forest (improved bagging) algorithm

Random forests improve on bagging as follows:

  1. Bootstrap the samples;
  2. At each split, consider only a random subset of the candidate variables;
  3. Grow multiple trees and vote on the predicted class.

Pros:

  • Accuracy

Cons:

  • Slower to train
  • Less interpretable
  • Prone to overfitting, so cross-validation is important (see the note below)
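
One mitigation worth noting: randomForest keeps an internal out-of-bag (OOB) error estimate, computed on the samples left out of each bootstrap, which acts as a built-in cross-check. A minimal sketch, to be run after model_rf is fitted below:

# Optional (after fitting model_rf below): inspect the OOB estimate.
# print(model_rf)  # reports the OOB estimate of the error rate
# plot(model_rf)   # OOB error versus the number of trees
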
# Use randomForest model to train and predict
#install.packages("randomForest")
library("randomForest") #Random forest for classification and regression
## Warning: package 'randomForest' was built under R version 3.1.2
## randomForest 4.6-10
## Type rfNews() to see new features/changes/bug fixes.
model_rf <- randomForest(classe ~., data=subTraining, na.action=na.omit)
pred_rf <- predict(model_rf, subTesting, type="class")
# Summarize randomForest results. 
res_rf <- confusionMatrix(pred_rf,subTesting$classe)
res_rf
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction    A    B    C    D    E
##          A 1395    7    0    0    0
##          B    0  938    3    0    0
##          C    0    4  850    7    1
##          D    0    0    2  797    4
##          E    0    0    0    0  896
## 
## Overall Statistics
##                                         
##                Accuracy : 0.994         
##                  95% CI : (0.992, 0.996)
##     No Information Rate : 0.284         
##     P-Value [Acc > NIR] : <2e-16        
##                                         
##                   Kappa : 0.993         
##  Mcnemar's Test P-Value : NA            
## 
## Statistics by Class:
## 
##                      Class: A Class: B Class: C Class: D Class: E
## Sensitivity             1.000    0.988    0.994    0.991    0.994
## Specificity             0.998    0.999    0.997    0.999    1.000
## Pos Pred Value          0.995    0.997    0.986    0.993    1.000
## Neg Pred Value          1.000    0.997    0.999    0.998    0.999
## Prevalence              0.284    0.194    0.174    0.164    0.184
## Detection Rate          0.284    0.191    0.173    0.163    0.183
## Detection Prevalence    0.286    0.192    0.176    0.164    0.183
## Balanced Accuracy       0.999    0.994    0.996    0.995    0.997
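
It can also be informative to see which sensor measurements drive the forest. randomForest provides variable-importance measures; a minimal sketch (commented out to stay under the figure limit):

# Optional: inspect which predictors matter most to the forest.
# importance(model_rf)   # mean decrease in Gini per predictor
# varImpPlot(model_rf)   # dot chart of variable importance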

Select the model

Compare the accuracy of the trained models on the subTesting data and choose the model with the highest accuracy: randomForest. We observe a slight accuracy improvement for random forest (99.4%) over generic bagging (99.0%), and both substantially outperform the single decision tree (72.3%).

df_res <- data.frame(res_dt$overall, res_bagging$overall, res_rf$overall)
df_res
##                res_dt.overall res_bagging.overall res_rf.overall
## Accuracy            7.229e-01              0.9904         0.9943
## Kappa               6.486e-01              0.9879         0.9928
## AccuracyLower       7.101e-01              0.9873         0.9918
## AccuracyUpper       7.354e-01              0.9929         0.9962
## AccuracyNull        2.845e-01              0.2845         0.2845
## AccuracyPValue      0.000e+00              0.0000         0.0000
## McnemarPValue       4.774e-26                 NaN            NaN

Final Prediction Results

Apply the trained randomForest model to the testing data to obtain the final predictions. The model achieved 99.4% accuracy (95% CI: 0.992–0.996) on subTesting, so the expected out-of-sample error is about 0.6%; over 20 test cases that is roughly 20 × 0.006 ≈ 0.1 expected errors, i.e., most likely all 20 predictions are correct.

# Predict testing results using trained model
res <- predict(model_rf, testing, type="class")
res
##  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 
##  B  A  B  A  A  E  D  B  A  A  B  C  B  A  E  E  A  B  B  B 
## Levels: A B C D E
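
For submission to the automated grader, each prediction is typically written to its own text file. A minimal sketch; the helper name write_prediction_files and the file-name convention are assumptions of ours, not taken from the course materials:

# Write each of the 20 predictions to its own file for submission.
# (The problem_id_<i>.txt file-name convention is an assumption.)
write_prediction_files <- function(x) {
  for (i in seq_along(x)) {
    write.table(x[i], file = paste0("problem_id_", i, ".txt"),
                quote = FALSE, row.names = FALSE, col.names = FALSE)
  }
}
write_prediction_files(res)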