Practical Machine Learning: Project Report

Course project for Practical Machine Learning, offered by the Johns Hopkins Bloomberg School of Public Health on Coursera.

GitHub: https://github.com/andresbmendoza/JohnsHopkins_PracticalMachineLearning.git

Background

Using devices such as Jawbone Up, Nike FuelBand, and Fitbit, it is now possible to collect a large amount of data about personal activity relatively inexpensively. These types of devices are part of the quantified self movement - a group of enthusiasts who take measurements about themselves regularly to improve their health, to find patterns in their behavior, or because they are tech geeks. One thing that people regularly do is quantify how much of a particular activity they do, but they rarely quantify how well they do it. In this project, our goal is to use data from accelerometers on the belt, forearm, arm, and dumbbell of 6 participants, who were asked to perform barbell lifts correctly and incorrectly in 5 different ways. More information is available from the website here: http://groupware.les.inf.puc-rio.br/har (see the section on the Weight Lifting Exercise Dataset).

Data Sources

  • The training data for this project is available here:

https://d396qusza40orc.cloudfront.net/predmachlearn/pml-training.csv

  • The test data is available here:

https://d396qusza40orc.cloudfront.net/predmachlearn/pml-testing.csv

The data for this project comes from this original source: http://groupware.les.inf.puc-rio.br/har. If you use the document you create for this class for any purpose, please cite the authors, as they have been very generous in allowing their data to be used for this kind of assignment.

Intended Results

The goal of this project is to predict the manner in which the participants did the exercise. This is the “classe” variable in the training set. You may use any of the other variables to predict with. You should create a report describing how you built your model, how you used cross-validation, what you think the expected out-of-sample error is, and why you made the choices you did. You will also use your prediction model to predict 20 different test cases.

This assignment has two parts:

  1. Your submission should consist of a link to a Github repo with your R markdown and compiled HTML file describing your analysis. Please constrain the text of the writeup to < 2000 words and the number of figures to be less than 5. It will make it easier for the graders if you submit a repo with a gh-pages branch so the HTML page can be viewed online (and you always want to make it easy on graders :-).
  2. You should also apply your machine learning algorithm to the 20 test cases available in the test data above. Please submit your predictions in appropriate format to the programming assignment for automated grading. See the programming assignment for additional details.

Reproducibility

To reproduce these results, you will need a certain set of packages, and you must set the pseudo-random seed to the same value used here.

The following libraries were used for this project. If any of them is not installed, install it first with install.packages("Pack_Name"); a small helper for this is sketched below:
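
# Install any required packages that are not already present (a convenience
# sketch; the package names match the library() calls below)
pkgs <- c("ggplot2", "RColorBrewer", "rattle", "caret",
          "rpart", "rpart.plot", "randomForest")
missing <- pkgs[!pkgs %in% rownames(installed.packages())]
if (length(missing) > 0) install.packages(missing)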

library(ggplot2)
library(RColorBrewer)
library(rattle)
## Loading required package: tibble
## Loading required package: bitops
## Rattle: A free graphical interface for data science with R.
## Version 5.4.0 Copyright (c) 2006-2020 Togaware Pty Ltd.
## Type 'rattle()' to shake, rattle, and roll your data.
library(caret)
## Loading required package: lattice
library(rpart)
library(rpart.plot)
library(randomForest)
## randomForest 4.6-14
## Type rfNews() to see new features/changes/bug fixes.
## 
## Attaching package: 'randomForest'
## The following object is masked from 'package:rattle':
## 
##     importance
## The following object is masked from 'package:ggplot2':
## 
##     margin

Getting The Data

Creating the Data Folder.

# Create a ./Data directory inside the working directory if it does not exist
WD <- getwd()
dataDir <- file.path(WD, "Data")
if (!dir.exists(dataDir)) dir.create(dataDir)

Download the training and test data into the ‘Data’ folder:

TrainFile <- "./Data/pml_Training.csv"
TestFile  <- "./Data/pml_Testing.csv"
TrainUrl <-"https://d396qusza40orc.cloudfront.net/predmachlearn/pml-training.csv"
TestUrl <- "https://d396qusza40orc.cloudfront.net/predmachlearn/pml-testing.csv"


if (!file.exists(TrainFile)) {
  download.file(TrainUrl, destfile = TrainFile)
}
if (!file.exists(TestFile)) {
  download.file(TestUrl, destfile = TestFile)
}

Loading Data

After downloading, the CSV files are loaded into the environment with the read.csv command, creating data frames that can be manipulated.

TrainRaw <- read.csv(TrainFile)
TestRaw <- read.csv(TestFile)
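
Note that the raw CSVs encode missing values in more than one way (literal "NA", empty strings, and the spreadsheet marker "#DIV/0!"). An optional variant (not the version used for the results below) normalizes all of these to NA at load time:

# Optional: treat "NA", empty strings, and "#DIV/0!" as missing on load
TrainRaw <- read.csv(TrainFile, na.strings = c("NA", "", "#DIV/0!"))
TestRaw  <- read.csv(TestFile,  na.strings = c("NA", "", "#DIV/0!"))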

The training data set contains 19622 observations and 160 variables, while the testing data set contains 20 observations and 160 variables.

The variable named classe in the training data is the outcome variable we want to predict.
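
As a quick sanity check, the distribution of the outcome classes can be inspected (output omitted here):

table(TrainRaw$classe)  # counts of observations per class A-E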

Cleaning Data Process

The data set has to be cleaned: observations with missing values need to be handled, and variables with very little variability (variance near zero) can be excluded from the model.

The nearZeroVar function identifies variables that add little or no variation to the data:

NZV <- nearZeroVar(TrainRaw, saveMetrics = TRUE)
Training <- TrainRaw[, !NZV$nzv]
Testing <- TestRaw[, !NZV$nzv]
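
Because saveMetrics = TRUE was used, NZV also records the diagnostics behind each decision, so the flagged variables can be reviewed before they are dropped (output omitted):

# Variables flagged as near-zero-variance, with their diagnostics
head(NZV[NZV$nzv, c("freqRatio", "percentUnique", "zeroVar", "nzv")])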

Removing identifier columns (row index, subject name, and timestamps), which add no predictive value to the models:

Training <- Training[, -c(1:5)]
Testing <- Testing[, -c(1:5)]

Removing columns with NA values:

isNAcol <- colSums(is.na(Training)) == 0
Training <- Training[, isNAcol]
Testing <- Testing[, isNAcol]

After cleaning, the data sets contain:

  • Training: 19622 observations and 54 variables.
  • Testing: 20 observations and 54 variables.
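
These counts can be verified directly:

dim(Training)                      # expected: 19622 54
dim(Testing)                       # expected: 20 54
sum(colSums(is.na(Training)) > 0)  # expected: 0 (no NA columns remain)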

Partitioning the Training Set and Creating the Validation Data

The training set will be split into two subsets:

  • A training data set (70% of the rows), used to fit the models.
  • A validation data set (30% of the rows), used as a hold-out set to estimate the out-of-sample error.

set.seed(56789)
inTrain <- createDataPartition(Training$classe, p = 0.70, list = FALSE)
Validation <- Training[-inTrain, ]  # take the hold-out rows first...
Training <- Training[inTrain, ]     # ...then overwrite Training with the 70% split
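
A quick check (a minimal sketch) confirms that the two subsets are disjoint and together cover the full cleaned training set:

# Every row of the cleaned training data ends up in exactly one subset
stopifnot(nrow(Training) + nrow(Validation) == nrow(TrainRaw))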

Our models will be built with:

  • Training data: 13737 observations.
  • Validation data: 5885 observations.
  • Testing data: 20 observations.

Predictive Data Modelling for Activity Recognition

Decision Tree

Fitting a predictive model based on the decision tree algorithm:

FitTree <- rpart(classe ~ ., data = Training, method = "class")
prp(FitTree, main ="Decision Tree: Activity Recognition", box.palette = "BlGnYl")

Evaluating the model’s performance on the validation data set:

PredictTree <- predict(FitTree, Validation, type = "class")
confusionMatrix(Validation$classe, PredictTree)
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction    A    B    C    D    E
##          A 1073   31    7   58   23
##          B  176  405   77   87   37
##          C   35   31  578   43   38
##          D   84   17   99  439   33
##          E   58   58   37   76  511
## 
## Overall Statistics
##                                           
##                Accuracy : 0.7312          
##                  95% CI : (0.7174, 0.7447)
##     No Information Rate : 0.3469          
##     P-Value [Acc > NIR] : < 2.2e-16       
##                                           
##                   Kappa : 0.6572          
##                                           
##  Mcnemar's Test P-Value : < 2.2e-16       
## 
## Statistics by Class:
## 
##                      Class: A Class: B Class: C Class: D Class: E
## Sensitivity            0.7525  0.74723   0.7243   0.6245   0.7960
## Specificity            0.9557  0.89437   0.9556   0.9316   0.9340
## Pos Pred Value         0.9002  0.51790   0.7972   0.6533   0.6905
## Neg Pred Value         0.8791  0.95885   0.9350   0.9232   0.9611
## Prevalence             0.3469  0.13184   0.1941   0.1710   0.1562
## Detection Rate         0.2610  0.09852   0.1406   0.1068   0.1243
## Detection Prevalence   0.2900  0.19022   0.1764   0.1635   0.1800
## Balanced Accuracy      0.8541  0.82080   0.8400   0.7780   0.8650
accuracy.Tree <- postResample(PredictTree, Validation$classe)
OSE.Tree <- 1 - as.numeric(confusionMatrix(Validation$classe, PredictTree)$overall[1])
  • The estimated accuracy of the decision tree model is 73.12%.
  • The estimated out-of-sample error is 26.88%.

Predictive Data by Random Forest Algorithm

Fitting a predictive model based on the random forest algorithm. We will use 5-fold cross-validation when training the model:

FitRF <- train(classe ~ ., data = Training, method = "rf",
               trControl = trainControl(method = "cv", number = 5, allowParallel = TRUE),
               ntree = 250)  # 5-fold cross-validation
FitRF
## Random Forest 
## 
## 13737 samples
##    53 predictor
##     5 classes: 'A', 'B', 'C', 'D', 'E' 
## 
## No pre-processing
## Resampling: Cross-Validated (5 fold) 
## Summary of sample sizes: 10988, 10990, 10991, 10990, 10989 
## Resampling results across tuning parameters:
## 
##   mtry  Accuracy   Kappa    
##    2    0.9933033  0.9915283
##   27    0.9971614  0.9964095
##   53    0.9938853  0.9922657
## 
## Accuracy was used to select the optimal model using the largest value.
## The final value used for the model was mtry = 27.
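
As an optional diagnostic (not part of the original write-up), caret's varImp function can rank the predictors driving the forest:

# Plot the 10 most important predictors in the fitted random forest
plot(varImp(FitRF), top = 10)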

The performance of the model on the validation data set:

PredictRF <- predict(FitRF, Validation)
confusionMatrix(Validation$classe, PredictRF)
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction    A    B    C    D    E
##          A 1192    0    0    0    0
##          B    0  782    0    0    0
##          C    0    0  725    0    0
##          D    0    0    0  672    0
##          E    0    0    0    0  740
## 
## Overall Statistics
##                                      
##                Accuracy : 1          
##                  95% CI : (0.9991, 1)
##     No Information Rate : 0.29       
##     P-Value [Acc > NIR] : < 2.2e-16  
##                                      
##                   Kappa : 1          
##                                      
##  Mcnemar's Test P-Value : NA         
## 
## Statistics by Class:
## 
##                      Class: A Class: B Class: C Class: D Class: E
## Sensitivity              1.00   1.0000   1.0000   1.0000     1.00
## Specificity              1.00   1.0000   1.0000   1.0000     1.00
## Pos Pred Value           1.00   1.0000   1.0000   1.0000     1.00
## Neg Pred Value           1.00   1.0000   1.0000   1.0000     1.00
## Prevalence               0.29   0.1902   0.1764   0.1635     0.18
## Detection Rate           0.29   0.1902   0.1764   0.1635     0.18
## Detection Prevalence     0.29   0.1902   0.1764   0.1635     0.18
## Balanced Accuracy        1.00   1.0000   1.0000   1.0000     1.00
accuracyRF <- postResample(PredictRF, Validation$classe)
OSE.RF <- 1 - as.numeric(confusionMatrix(Validation$classe, PredictRF)$overall[1])
  • The estimated random forest accuracy on the validation set is 100%.
  • The estimated out-of-sample error is 0%.

A perfect hold-out accuracy is unusually optimistic; a more conservative estimate is the 5-fold cross-validated accuracy reported above (99.72% for mtry = 27), which still implies an expected out-of-sample error below 0.3% on new data.

Predicting The Outcome from the Test set

Applying the random forest model to the testing data set (with the problem_id column removed):

predict(FitRF, Testing[, -length(names(Testing))])
##  [1] B A B A A E D B A A B C B A E E A B B B
## Levels: A B C D E
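
For the automated grader, a common approach is to write each of the 20 predictions to its own text file. The sketch below assumes the standard submission script's file-name pattern (problem_id_N.txt):

answers <- predict(FitRF, Testing[, -length(names(Testing))])

# Write each prediction to problem_id_1.txt ... problem_id_20.txt
for (i in seq_along(answers)) {
  write.table(answers[i], file = paste0("problem_id_", i, ".txt"),
              quote = FALSE, row.names = FALSE, col.names = FALSE)
}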