This file was produced during a homework assignment for Coursera’s MOOC Practical Machine Learning from Johns Hopkins Bloomberg School of Public Health.

For more information about the MOOCs that make up this Specialization, please visit: https://www.coursera.org/specialization/jhudatascience/

The scripts were produced, tested, and executed solely on Windows 10 Pro with RStudio Version 0.99.486.

Background

Using devices such as Jawbone Up, Nike FuelBand, and Fitbit, it is now possible to collect a large amount of data about personal activity relatively inexpensively. These types of devices are part of the quantified self movement - a group of enthusiasts who take measurements about themselves regularly to improve their health, to find patterns in their behavior, or because they are tech geeks. One thing that people regularly do is quantify how much of a particular activity they do, but they rarely quantify how well they do it.

In this project, our goal will be to use data from accelerometers on the belt, forearm, arm, and dumbbell of 6 participants. They were asked to perform barbell lifts correctly and incorrectly in 5 different ways.

More information is available from the website here: http://groupware.les.inf.puc-rio.br/har (see the section on the Weight Lifting Exercise Dataset).

Data Sources

The training and test data are retrieved from the URLs given in the LOAD THE DATA section below; the data originate from the source referenced above. If you use the document you create for this class for any purpose, please cite the original authors, as they have been very generous in allowing their data to be used for this kind of assignment.

Intended Results

The goal of this project is to predict the manner in which the participants did the exercise. This is the “classe” variable in the training set.

You may use any of the other variables to predict with. You should create a report describing how you built your model, how you used cross validation, what you think the expected out of sample error is, and why you made the choices you did. You will also use your prediction model to predict 20 different test cases.

  1. Your submission should consist of a link to a Github repo with your R markdown and compiled HTML file describing your analysis. Please constrain the text of the writeup to < 2000 words and the number of figures to be less than 5. It will make it easier for the graders if you submit a repo with a gh-pages branch so the HTML page can be viewed online.

  2. You should also apply your machine learning algorithm to the 20 test cases available in the test data above. Please submit your predictions in appropriate format to the programming assignment for automated grading. See the programming assignment for additional details.

Reproducibility

In order to reproduce these results, you need a certain set of packages, and you must set the pseudo-random seed equal to the one I have used. Note: to install, for instance, the rattle package in R, run this command: install.packages("rattle"). The following libraries were used for this project; you should install and load them in your working environment.
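
As a convenience, the sketch below installs any of these packages that are missing; the package names simply mirror the library() calls that follow.

# Install any of the required packages that are not yet present
pkgs <- c("rattle", "caret", "rpart", "rpart.plot", "corrplot",
          "randomForest", "RColorBrewer")
missing <- pkgs[!pkgs %in% rownames(installed.packages())]
if (length(missing) > 0) install.packages(missing)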

library(rattle)
## Warning: package 'rattle' was built under R version 4.3.3
## Loading required package: tibble
## Loading required package: bitops
## Rattle: A free graphical interface for data science with R.
## Version 5.5.1 Copyright (c) 2006-2021 Togaware Pty Ltd.
## Type 'rattle()' to shake, rattle, and roll your data.
library(caret)
## Warning: package 'caret' was built under R version 4.3.3
## Loading required package: ggplot2
## Warning: package 'ggplot2' was built under R version 4.3.3
## Loading required package: lattice
## Warning: package 'lattice' was built under R version 4.3.3
library(rpart)
## Warning: package 'rpart' was built under R version 4.3.3
library(rpart.plot)
## Warning: package 'rpart.plot' was built under R version 4.3.3
library(corrplot)
## corrplot 0.92 loaded
library(randomForest)
## Warning: package 'randomForest' was built under R version 4.3.3
## randomForest 4.7-1.1
## Type rfNews() to see new features/changes/bug fixes.
## 
## Attaching package: 'randomForest'
## The following object is masked from 'package:ggplot2':
## 
##     margin
## The following object is masked from 'package:rattle':
## 
##     importance
library(RColorBrewer)

Set the seed

set.seed(237568)

LOAD THE DATA

trainUrl <-"https://d396qusza40orc.cloudfront.net/predmachlearn/pml-training.csv"
testUrl <- "https://d396qusza40orc.cloudfront.net/predmachlearn/pml-testing.csv"
trainFile <- "./data/pml-training.csv"
testFile  <- "./data/pml-testing.csv"
if (!file.exists("./data")) {
  dir.create("./data")
}
if (!file.exists(trainFile)) {
  # Note: method = "curl" requires the curl binary to be installed;
  # on Windows, dropping the method argument and using the default also works.
  download.file(trainUrl, destfile = trainFile, method = "curl")
}
if (!file.exists(testFile)) {
  download.file(testUrl, destfile = testFile, method = "curl")
}

Reading the data

trainRaw <- read.csv(trainFile)
testRaw <- read.csv(testFile)
dim(trainRaw)
## [1] 19622   160
dim(testRaw)
## [1]  20 160
  • The training data contains 19622 observations and 160 variables.
  • The test data contains 20 observations and 160 variables.

The variable classe in the training set is the outcome to predict.
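
As a quick sanity check, one can tabulate the outcome before any cleaning (output not shown here):

# Distribution of the outcome classes in the raw training data
table(trainRaw$classe)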

CLEANING THE DATA

We will get rid of:

  • variables with missing values
  • meaningless variables

  1. Remove variables with near-zero variance
NZV <- nearZeroVar(trainRaw, saveMetrics = TRUE)
head(NZV, 20)
##                        freqRatio percentUnique zeroVar   nzv
## X                       1.000000  100.00000000   FALSE FALSE
## user_name               1.100679    0.03057792   FALSE FALSE
## raw_timestamp_part_1    1.000000    4.26562022   FALSE FALSE
## raw_timestamp_part_2    1.000000   85.53154622   FALSE FALSE
## cvtd_timestamp          1.000668    0.10192641   FALSE FALSE
## new_window             47.330049    0.01019264   FALSE  TRUE
## num_window              1.000000    4.37264295   FALSE FALSE
## roll_belt               1.101904    6.77810621   FALSE FALSE
## pitch_belt              1.036082    9.37722964   FALSE FALSE
## yaw_belt                1.058480    9.97349913   FALSE FALSE
## total_accel_belt        1.063160    0.14779329   FALSE FALSE
## kurtosis_roll_belt   1921.600000    2.02323922   FALSE  TRUE
## kurtosis_picth_belt   600.500000    1.61553358   FALSE  TRUE
## kurtosis_yaw_belt      47.330049    0.01019264   FALSE  TRUE
## skewness_roll_belt   2135.111111    2.01304658   FALSE  TRUE
## skewness_roll_belt.1  600.500000    1.72255631   FALSE  TRUE
## skewness_yaw_belt      47.330049    0.01019264   FALSE  TRUE
## max_roll_belt           1.000000    0.99378249   FALSE FALSE
## max_picth_belt          1.538462    0.11211905   FALSE FALSE
## max_yaw_belt          640.533333    0.34654979   FALSE  TRUE
training01 <- trainRaw[, !NZV$nzv]
testing01 <- testRaw[, !NZV$nzv]
dim(training01)
## [1] 19622   100
dim(testing01)
## [1]  20 100
  2. Remove columns that do not contribute to the accelerometer measurements
regex <- grepl("^X|timestamp|user_name", names(training01))
training <- training01[, !regex]
testing <- testing01[, !regex]
dim(training)
## [1] 19622    95
dim(testing)
## [1] 20 95
  3. Remove columns containing NA values
cond <- (colSums(is.na(training)) == 0)
training <- training[, cond]
testing <- testing[, cond]
dim(training)
## [1] 19622    54
dim(testing)
## [1] 20 54

Now, the cleaned training data set contains 19622 observations and 54 variables, while the testing data set contains 20 observations and 54 variables.
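
As a brief sanity check, one can confirm that no missing values remain (output not shown; both sums should be 0):

# Verify that the cleaned sets contain no NA values
sum(is.na(training))
sum(is.na(testing))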

CORRELATION

We generate a correlation matrix to visualize how the predictors in the training data set relate to one another.

corrplot(cor(training[, -length(names(training))]),
         method = "color",
         tl.cex = 0.5)
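
If the plot suggests strongly related predictors, caret's findCorrelation can list candidates for removal. The sketch below is illustrative only: the 0.90 cutoff is an assumption, and no columns are actually dropped in the modelling that follows.

# Identify predictors with pairwise correlation above 0.90
predictors <- training[, -length(names(training))]  # drop the outcome column
highCorr <- findCorrelation(cor(predictors), cutoff = 0.90)
names(predictors)[highCorr]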

PARTITIONING THE DATASET

We split the cleaned training set into a pure training data set (70%) and a validation data set (30%). We will use the validation data set to conduct cross-validation in future steps.

set.seed(237568)
inTrain <- createDataPartition(training$classe, p = 0.70, list = FALSE)
validation <- training[-inTrain, ]
training <- training[inTrain, ]
dim(validation)
## [1] 5885   54
dim(training)
## [1] 13737    54
dim(testing)
## [1] 20 54

The dataset now consists of 54 variables, with the observations divided as follows:

  • Training data: 13737 observations
  • Validation data: 5885 observations
  • Testing data: 20 observations

DATA MODELLING

Decision Tree

We fit a predictive model for activity recognition using the Decision Tree algorithm.

modelTree <- rpart(classe ~ ., data = training, method = "class")
prp(modelTree)

Now, we estimate the performance of the model on the validation data set.

predictTree <- predict(modelTree, validation, type = "class")
confusionMatrix(as.factor(validation$classe), predictTree)
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction    A    B    C    D    E
##          A 1488   51   18   95   22
##          B  190  669   87  146   47
##          C   41   33  868   59   25
##          D   57   89  147  631   40
##          E   61  113   86  132  690
## 
## Overall Statistics
##                                           
##                Accuracy : 0.7385          
##                  95% CI : (0.7271, 0.7497)
##     No Information Rate : 0.3121          
##     P-Value [Acc > NIR] : < 2.2e-16       
##                                           
##                   Kappa : 0.6684          
##                                           
##  Mcnemar's Test P-Value : < 2.2e-16       
## 
## Statistics by Class:
## 
##                      Class: A Class: B Class: C Class: D Class: E
## Sensitivity            0.8100   0.7005   0.7197   0.5936   0.8374
## Specificity            0.9541   0.9047   0.9662   0.9309   0.9225
## Pos Pred Value         0.8889   0.5874   0.8460   0.6546   0.6377
## Neg Pred Value         0.9171   0.9397   0.9304   0.9122   0.9721
## Prevalence             0.3121   0.1623   0.2049   0.1806   0.1400
## Detection Rate         0.2528   0.1137   0.1475   0.1072   0.1172
## Detection Prevalence   0.2845   0.1935   0.1743   0.1638   0.1839
## Balanced Accuracy      0.8820   0.8026   0.8430   0.7623   0.8800
accuracy <- postResample(validation$classe, predictTree)
ose <- 1 - as.numeric(confusionMatrix(as.factor(validation$classe), predictTree)$overall[1]) 

print("Accuracy:")
## [1] "Accuracy:"
print(accuracy)
##  Accuracy     Kappa 
## 0.7384877 0.6684439
print("ose:")
## [1] "ose:"
print(ose)
## [1] 0.2615123

Random Forest

We fit a predictive model for activity recognition using the Random Forest algorithm, because it automatically selects important variables and is robust to correlated covariates and outliers in general.

We will use 5-fold cross validation when applying the algorithm.

modelRF <- train(classe ~ ., data = training, method = "rf",
                 trControl = trainControl(method = "cv", number = 5),
                 ntree = 250)
modelRF
## Random Forest 
## 
## 13737 samples
##    53 predictor
##     5 classes: 'A', 'B', 'C', 'D', 'E' 
## 
## No pre-processing
## Resampling: Cross-Validated (5 fold) 
## Summary of sample sizes: 10990, 10989, 10991, 10991, 10987 
## Resampling results across tuning parameters:
## 
##   mtry  Accuracy   Kappa    
##    2    0.9930845  0.9912518
##   27    0.9975975  0.9969612
##   53    0.9949042  0.9935535
## 
## Accuracy was used to select the optimal model using the largest value.
## The final value used for the model was mtry = 27.
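
Before validating, it can be useful to inspect which predictors the forest leans on; caret's varImp gives a scaled importance ranking (output omitted here):

# Scaled variable importance for the tuned random forest
varImp(modelRF)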

Now we will estimate the performance of the model on the validation dataset.

predictRF <- predict(modelRF, validation)
confusionMatrix(as.factor(validation$classe), predictRF)
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction    A    B    C    D    E
##          A 1673    1    0    0    0
##          B    5 1133    1    0    0
##          C    0    1 1025    0    0
##          D    0    0    1  961    2
##          E    0    0    0    0 1082
## 
## Overall Statistics
##                                           
##                Accuracy : 0.9981          
##                  95% CI : (0.9967, 0.9991)
##     No Information Rate : 0.2851          
##     P-Value [Acc > NIR] : < 2.2e-16       
##                                           
##                   Kappa : 0.9976          
##                                           
##  Mcnemar's Test P-Value : NA              
## 
## Statistics by Class:
## 
##                      Class: A Class: B Class: C Class: D Class: E
## Sensitivity            0.9970   0.9982   0.9981   1.0000   0.9982
## Specificity            0.9998   0.9987   0.9998   0.9994   1.0000
## Pos Pred Value         0.9994   0.9947   0.9990   0.9969   1.0000
## Neg Pred Value         0.9988   0.9996   0.9996   1.0000   0.9996
## Prevalence             0.2851   0.1929   0.1745   0.1633   0.1842
## Detection Rate         0.2843   0.1925   0.1742   0.1633   0.1839
## Detection Prevalence   0.2845   0.1935   0.1743   0.1638   0.1839
## Balanced Accuracy      0.9984   0.9985   0.9989   0.9997   0.9991
accuracy <- postResample( validation$classe, predictRF)
ose <- 1 - as.numeric(confusionMatrix(as.factor(validation$classe), predictRF)$overall[1])

print("accuracy:")
## [1] "accuracy:"
print(accuracy)
##  Accuracy     Kappa 
## 0.9981308 0.9976356
print("OSE:")
## [1] "OSE:"
print(ose)
## [1] 0.001869159

Random Forests yield a better result, with an accuracy of 99.81% and an estimated out-of-sample error of 0.19%.

We now apply the Random Forest model to the original testing dataset, after first removing the problem_id column.

predict(modelRF, testing[, -length(names(testing))])
##  [1] B A B A A E D B A A B C B A E E A B B B
## Levels: A B C D E
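
For readability, each prediction can be paired with its test-case identifier; a small sketch, assuming the problem_id column is the last column of the cleaned testing set:

# Pair each predicted classe with its problem_id
data.frame(problem_id = testing$problem_id,
           predicted  = predict(modelRF, testing[, -length(names(testing))]))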

Generating the files for the assignment

The following function generates one file per prediction, to be submitted for the assignment.

# Write one text file per prediction for submission.
# Note: the output path below is machine-specific; adjust it to your own environment.
pml_write_files <- function(x) {
  n <- length(x)
  for (i in 1:n) {
    filename <- paste0("G:/My Drive/PERSONAL DEVELOPMENT/CAPSTONE PROJECTS/JOHN HOPKINS - DATA/MACHINE LEARNING/ProjectSolutions", i, ".txt")
    write.table(x[i], file = filename, quote = FALSE, row.names = FALSE, col.names = FALSE)
  }
}
pml_write_files(predict(modelRF, testing[, -length(names(testing))]))