INTRODUCTION


>> BACKGROUND

Using devices such as Jawbone Up, Nike FuelBand, and Fitbit, it is now possible to collect a large amount of data about personal activity relatively inexpensively. These types of devices are part of the quantified self movement - a group of enthusiasts who take measurements about themselves regularly to improve their health, to find patterns in their behavior, or because they are tech geeks. One thing that people regularly do is quantify how much of a particular activity they do, but they rarely quantify how well they do it.

In this project, the goal is to use data from accelerometers on the belt, forearm, arm, and dumbbell of 6 participants, who were asked to perform barbell lifts correctly and incorrectly in 5 different ways. More information is available from the website here: http://groupware.les.inf.puc-rio.br/har (see the section on the Weight Lifting Exercise Dataset).

The data has 5 classes to classify. Each one represents a manner of doing the exercise: one is the correct way and the other four are wrong ways:

  • A: exactly according to the specification.
  • B: throwing the elbows to the front.
  • C: lifting the dumbbell only halfway.
  • D: lowering the dumbbell only halfway.
  • E: throwing the hips to the front.


>> DATA SOURCE

This work was first developed in the paper:

Velloso, E.; Bulling, A.; Gellersen, H.; Ugulino, W.; Fuks, H. Qualitative Activity Recognition of Weight Lifting Exercises. Proceedings of 4th International Conference in Cooperation with SIGCHI (Augmented Human ’13). Stuttgart, Germany: ACM SIGCHI, 2013.

The research group generously offers the training and testing data, respectively, at the following links:

https://d396qusza40orc.cloudfront.net/predmachlearn/pml-training.csv

https://d396qusza40orc.cloudfront.net/predmachlearn/pml-testing.csv


>> GOALS OF THE PROJECT

The goal is to predict the manner in which the participants did the exercise; this is the “classe” variable in the training set. The deliverables are to:

  • Create a report describing how the model was built.
  • Explain how cross-validation was used.
  • Estimate the expected out-of-sample error.
  • Justify the modelling choices made.
  • Use the prediction model to predict 20 different test cases.


DATA PREPARATION


>> SETTING THE REQUIRED ENVIRONMENT

# Set working directory (assumes the WDIR_PRACTICALMACHINELEARNING
# environment variable points to the project folder)
setwd(Sys.getenv("WDIR_PRACTICALMACHINELEARNING"))

# Load required packages, installing caret if it is not available
if (!require(caret)) {
  install.packages("caret", dependencies = TRUE)
  require(caret)
}

# Fix the random seed for reproducibility
set.seed(123456)


>> GETTING DATA

# URLs to download train and test files
train.url = 'https://d396qusza40orc.cloudfront.net/predmachlearn/pml-training.csv'
test.url = 'https://d396qusza40orc.cloudfront.net/predmachlearn/pml-testing.csv'

# Create the data directory if it does not exist
if (!file.exists("./data")) {
  dir.create("./data")
}

# Set the filenames to the data sources
train.file = "./data/pml-training.csv"
test.file = "./data/pml-testing.csv"

# Download data
if (!file.exists(train.file)) {
  download.file(train.url, destfile=train.file)
}

if (!file.exists(test.file)) {
  download.file(test.url, destfile=test.file)
}  


>> LOAD DATA

# Load the data into two data frames
train = read.csv(file=train.file, stringsAsFactors = F, na.strings=c("NA", "NULL",'', ' '))
test = read.csv(file=test.file, stringsAsFactors = F, na.strings=c("NA", "NULL",'', ' '))

Examining the dataset dimensions:

dim(train)
## [1] 19622   160
dim(test)
## [1]  20 160

The training set contains 19622 rows and 160 variables, and the test set contains 20 rows and 160 variables. Now, let’s do some exploration of the dataset, checking each variable’s type and its first values.

str(train)
## 'data.frame':    19622 obs. of  160 variables:
##  $ X                       : int  1 2 3 4 5 6 7 8 9 10 ...
##  $ user_name               : chr  "carlitos" "carlitos" "carlitos" "carlitos" ...
##  $ raw_timestamp_part_1    : int  1323084231 1323084231 1323084231 1323084232 1323084232 1323084232 1323084232 1323084232 1323084232 1323084232 ...
##  $ raw_timestamp_part_2    : int  788290 808298 820366 120339 196328 304277 368296 440390 484323 484434 ...
##  $ cvtd_timestamp          : chr  "05/12/2011 11:23" "05/12/2011 11:23" "05/12/2011 11:23" "05/12/2011 11:23" ...
##  $ new_window              : chr  "no" "no" "no" "no" ...
##  $ num_window              : int  11 11 11 12 12 12 12 12 12 12 ...
##  $ roll_belt               : num  1.41 1.41 1.42 1.48 1.48 1.45 1.42 1.42 1.43 1.45 ...
##  $ pitch_belt              : num  8.07 8.07 8.07 8.05 8.07 8.06 8.09 8.13 8.16 8.17 ...
##  $ yaw_belt                : num  -94.4 -94.4 -94.4 -94.4 -94.4 -94.4 -94.4 -94.4 -94.4 -94.4 ...
##  $ total_accel_belt        : int  3 3 3 3 3 3 3 3 3 3 ...
##  $ kurtosis_roll_belt      : chr  NA NA NA NA ...
##  $ kurtosis_picth_belt     : chr  NA NA NA NA ...
##  $ kurtosis_yaw_belt       : chr  NA NA NA NA ...
##  $ skewness_roll_belt      : chr  NA NA NA NA ...
##  $ skewness_roll_belt.1    : chr  NA NA NA NA ...
##  $ skewness_yaw_belt       : chr  NA NA NA NA ...
##  $ max_roll_belt           : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ max_picth_belt          : int  NA NA NA NA NA NA NA NA NA NA ...
##  $ max_yaw_belt            : chr  NA NA NA NA ...
##  $ min_roll_belt           : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ min_pitch_belt          : int  NA NA NA NA NA NA NA NA NA NA ...
##  $ min_yaw_belt            : chr  NA NA NA NA ...
##  $ amplitude_roll_belt     : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ amplitude_pitch_belt    : int  NA NA NA NA NA NA NA NA NA NA ...
##  $ amplitude_yaw_belt      : chr  NA NA NA NA ...
##  $ var_total_accel_belt    : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ avg_roll_belt           : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ stddev_roll_belt        : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ var_roll_belt           : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ avg_pitch_belt          : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ stddev_pitch_belt       : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ var_pitch_belt          : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ avg_yaw_belt            : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ stddev_yaw_belt         : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ var_yaw_belt            : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ gyros_belt_x            : num  0 0.02 0 0.02 0.02 0.02 0.02 0.02 0.02 0.03 ...
##  $ gyros_belt_y            : num  0 0 0 0 0.02 0 0 0 0 0 ...
##  $ gyros_belt_z            : num  -0.02 -0.02 -0.02 -0.03 -0.02 -0.02 -0.02 -0.02 -0.02 0 ...
##  $ accel_belt_x            : int  -21 -22 -20 -22 -21 -21 -22 -22 -20 -21 ...
##  $ accel_belt_y            : int  4 4 5 3 2 4 3 4 2 4 ...
##  $ accel_belt_z            : int  22 22 23 21 24 21 21 21 24 22 ...
##  $ magnet_belt_x           : int  -3 -7 -2 -6 -6 0 -4 -2 1 -3 ...
##  $ magnet_belt_y           : int  599 608 600 604 600 603 599 603 602 609 ...
##  $ magnet_belt_z           : int  -313 -311 -305 -310 -302 -312 -311 -313 -312 -308 ...
##  $ roll_arm                : num  -128 -128 -128 -128 -128 -128 -128 -128 -128 -128 ...
##  $ pitch_arm               : num  22.5 22.5 22.5 22.1 22.1 22 21.9 21.8 21.7 21.6 ...
##  $ yaw_arm                 : num  -161 -161 -161 -161 -161 -161 -161 -161 -161 -161 ...
##  $ total_accel_arm         : int  34 34 34 34 34 34 34 34 34 34 ...
##  $ var_accel_arm           : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ avg_roll_arm            : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ stddev_roll_arm         : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ var_roll_arm            : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ avg_pitch_arm           : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ stddev_pitch_arm        : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ var_pitch_arm           : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ avg_yaw_arm             : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ stddev_yaw_arm          : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ var_yaw_arm             : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ gyros_arm_x             : num  0 0.02 0.02 0.02 0 0.02 0 0.02 0.02 0.02 ...
##  $ gyros_arm_y             : num  0 -0.02 -0.02 -0.03 -0.03 -0.03 -0.03 -0.02 -0.03 -0.03 ...
##  $ gyros_arm_z             : num  -0.02 -0.02 -0.02 0.02 0 0 0 0 -0.02 -0.02 ...
##  $ accel_arm_x             : int  -288 -290 -289 -289 -289 -289 -289 -289 -288 -288 ...
##  $ accel_arm_y             : int  109 110 110 111 111 111 111 111 109 110 ...
##  $ accel_arm_z             : int  -123 -125 -126 -123 -123 -122 -125 -124 -122 -124 ...
##  $ magnet_arm_x            : int  -368 -369 -368 -372 -374 -369 -373 -372 -369 -376 ...
##  $ magnet_arm_y            : int  337 337 344 344 337 342 336 338 341 334 ...
##  $ magnet_arm_z            : int  516 513 513 512 506 513 509 510 518 516 ...
##  $ kurtosis_roll_arm       : chr  NA NA NA NA ...
##  $ kurtosis_picth_arm      : chr  NA NA NA NA ...
##  $ kurtosis_yaw_arm        : chr  NA NA NA NA ...
##  $ skewness_roll_arm       : chr  NA NA NA NA ...
##  $ skewness_pitch_arm      : chr  NA NA NA NA ...
##  $ skewness_yaw_arm        : chr  NA NA NA NA ...
##  $ max_roll_arm            : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ max_picth_arm           : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ max_yaw_arm             : int  NA NA NA NA NA NA NA NA NA NA ...
##  $ min_roll_arm            : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ min_pitch_arm           : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ min_yaw_arm             : int  NA NA NA NA NA NA NA NA NA NA ...
##  $ amplitude_roll_arm      : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ amplitude_pitch_arm     : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ amplitude_yaw_arm       : int  NA NA NA NA NA NA NA NA NA NA ...
##  $ roll_dumbbell           : num  13.1 13.1 12.9 13.4 13.4 ...
##  $ pitch_dumbbell          : num  -70.5 -70.6 -70.3 -70.4 -70.4 ...
##  $ yaw_dumbbell            : num  -84.9 -84.7 -85.1 -84.9 -84.9 ...
##  $ kurtosis_roll_dumbbell  : chr  NA NA NA NA ...
##  $ kurtosis_picth_dumbbell : chr  NA NA NA NA ...
##  $ kurtosis_yaw_dumbbell   : chr  NA NA NA NA ...
##  $ skewness_roll_dumbbell  : chr  NA NA NA NA ...
##  $ skewness_pitch_dumbbell : chr  NA NA NA NA ...
##  $ skewness_yaw_dumbbell   : chr  NA NA NA NA ...
##  $ max_roll_dumbbell       : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ max_picth_dumbbell      : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ max_yaw_dumbbell        : chr  NA NA NA NA ...
##  $ min_roll_dumbbell       : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ min_pitch_dumbbell      : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ min_yaw_dumbbell        : chr  NA NA NA NA ...
##  $ amplitude_roll_dumbbell : num  NA NA NA NA NA NA NA NA NA NA ...
##   [list output truncated]

The first seven columns contain the row index, user name, timestamps, and time-window markers; the outcome target is named classe. Apparently there are many variables dominated by missing values (NA); this condition will be verified later.
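
As a quick check before cleaning (a sketch; output omitted), we can look at the outcome distribution and count how many columns are predominantly NA, anticipating the 50% threshold used below:

# Distribution of the outcome variable
table(train$classe)

# Fraction of missing values per column, and how many columns are at least half NA
na.frac = colSums(is.na(train)) / nrow(train)
table(na.frac >= 0.5)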


DATA CLEANING


>> REMOVE IDs VARIABLES

The first procedure is to remove the first seven columns, because they contain only identifiers, user names, and timestamps, and the relationship with the classe outcome should not depend on them.

train.tidy = train[, -(1:7)]


>> REMOVE VARIABLES WITH TOO MANY MISSING VALUES

Second, I remove variables in which 50% or more of the values are missing.

NA.predominance = colSums(is.na(train.tidy))/nrow(train.tidy)
NA.predominance = (NA.predominance >= 0.5)

train.tidy = train.tidy[, !(NA.predominance)]
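
As an optional sanity check (output omitted), we can count how many of the remaining columns still contain any NA values:

# Number of remaining columns with at least one NA
sum(colSums(is.na(train.tidy)) > 0)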


>> REMOVE VARIABLES OF LOW VARIANCE

Third, I remove variables with near-zero variance; such variables barely change across the observations and therefore have little influence on the outcome.

nzv = nearZeroVar(train.tidy)

# print the number of variables flagged as near-zero variance
print(length(nzv))
## [1] 0

As shown, no remaining variable has near-zero variance, so nothing is removed in this step.
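
For more detail, nearZeroVar can also return the metrics it bases the decision on (saveMetrics is a documented caret argument); a short sketch:

# Frequency ratio and percentage of unique values for each remaining variable
nzv.metrics = nearZeroVar(train.tidy, saveMetrics = TRUE)
head(nzv.metrics)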


>> REMOVE HIGHLY CORRELATED VARIABLES

The fourth procedure is to remove highly correlated variables, which compresses the data and helps avoid overfitting.

outcome = which(names(train.tidy)=="classe")
correlations = abs(cor(train.tidy[, -outcome]))
highCorrFeat = findCorrelation(correlations, .90)

# print the features that are highly correlated with other features in the dataset
print(names(train.tidy[, -outcome])[highCorrFeat])
## [1] "accel_belt_z"     "roll_belt"        "accel_belt_y"    
## [4] "accel_belt_x"     "gyros_dumbbell_x" "gyros_dumbbell_z"
## [7] "gyros_arm_x"
# exclude the features above
train.tidy <- cbind(train.tidy[, -outcome][, -highCorrFeat], "classe"=train.tidy$classe)
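
To see why a feature gets flagged, we can inspect its strongest pairwise correlations. An illustrative sketch for accel_belt_z, one of the flagged features above (the correlations matrix was computed before the exclusion, so it is still available):

# Top absolute correlations of accel_belt_z; the first entry is the feature
# with itself (correlation 1)
sort(correlations["accel_belt_z", ], decreasing = TRUE)[1:4]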


>> THE NEW DATA SET

The tidy train set at the end of the cleaning process has a reduced dimension:

dim(train.tidy)
## [1] 19622    46

Only 46 of the initial 160 variable columns remain.


>> DATA SPLITTING

Prior to any modelling, I split the current train.tidy dataset into a training set and a validation set; the validation set allows me to verify the performance of multiple models and choose the best one.

inTrain = createDataPartition(y=train.tidy$classe, p=.75, list=F)
training = train.tidy[inTrain,]
validation = train.tidy[-inTrain,]
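
Since createDataPartition samples in a stratified way on the outcome, both parts should preserve the class proportions; a quick check (output omitted):

# Compare class proportions in the two splits
round(prop.table(table(training$classe)), 3)
round(prop.table(table(validation$classe)), 3)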


BUILDING THE MODELS

I am going to try three modelling techniques: Random Forest, Gradient Boosting Machine, and K-Nearest Neighbors. From these, I choose the best model by testing each one on the validation set. I also save the models after the training phase so they can be reloaded rather than retrained.

# Create the models storage directory
if (!file.exists("./models")) {
  dir.create("./models")
}


>> RANDOM FOREST

I apply Random Forest to model the data. The same method was used by the research group that owns the data; as they report, they chose it because of the characteristic noise in the sensor data. Random Forest handles non-linear data well and is resistant to overfitting.

rf.file = "./models/rf.fit.rda"
if (!file.exists(rf.file)){
  rf.fit = train(classe ~ ., method="rf", data=training, trControl=trainControl(method="oob"), ntree=100, importance=T)
  save(rf.fit, file=rf.file)
}else{
  load(rf.file)
}

rf.validation = predict(rf.fit, newdata=validation)
# confusionMatrix expects the predictions first, then the reference labels
rf.acc = confusionMatrix(rf.validation, factor(validation$classe))$overall[1]
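
Beyond the overall accuracy, the full confusion matrix gives the per-class sensitivity and specificity on the validation set (output omitted here):

# Detailed per-class validation performance of the Random Forest model
confusionMatrix(rf.validation, factor(validation$classe))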

It is possible to check the most important features discovered by the Random Forest method, as shown below.

par(ps=7)
# varImpPlot comes from the randomForest package
randomForest::varImpPlot(rf.fit$finalModel)


>> GRADIENT BOOSTING MACHINE

The GBM method tries to find an optimal linear combination of trees for the given data. This method usually achieves better accuracy with fewer trees than Random Forest; nevertheless, it is more susceptible to overfitting the data.

gradbm.file = "./models/gradbm.fit.rda"
if (!file.exists(gradbm.file)){
  gradbm.fit = train(classe ~ ., method="gbm", data=training)
  save(gradbm.fit, file=gradbm.file)
}else{
  load(gradbm.file)
}

gradbm.validation = predict(gradbm.fit, newdata=validation)
gradbm.acc = confusionMatrix(gradbm.validation, factor(validation$classe))$overall[1]


>> K-NEAREST NEIGHBORS

This method computes the conditional probability of a class j for a given observation as the fraction of its k nearest neighbors that belong to class j. A smaller k increases the variance of the estimated function, and hence the chance of overfitting the data. For this case, I use the default k value of 5, as documented in the caret package.
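
As a minimal illustration of the idea (a toy sketch using caret’s knn3 on the built-in iris data, not part of the project pipeline): the predicted probability of class j is the fraction of the k nearest neighbors belonging to class j.

# Toy KNN fit with k = 5; predict() returns class probabilities by default
toy.fit = knn3(Species ~ ., data = iris, k = 5)
predict(toy.fit, newdata = iris[1:3, ])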

knn.file = "./models/knn.fit.rda"

if (!file.exists(knn.file)){
  knn.fit = train(classe ~ ., method="knn", data=training)
  save(knn.fit, file=knn.file)
}else{
  load(knn.file)
}

knn.validation = predict(knn.fit, newdata=validation)
knn.acc = confusionMatrix(knn.validation, factor(validation$classe))$overall[1]


CHOOSING THE BEST MODEL

accuracies = data.frame(rf.acc, gradbm.acc, knn.acc)
print(accuracies)
##             rf.acc gradbm.acc   knn.acc
## Accuracy 0.9949021  0.9584013 0.9584013

All three models present good performance and could be chosen as the final model; however, Random Forest achieved the best accuracy on the validation set, almost 99.5%, hence it will be the final model.

finalModel = rf.fit$finalModel


>> OUT-OF-SAMPLE ERROR

The expected out-of-sample error is estimated as 1 minus the accuracy of the chosen model on the validation set:

oos.error = 1-accuracies$rf.acc[1]
names(oos.error) <- "out-of-sample error"
print(oos.error)
## out-of-sample error 
##         0.005097879


PREDICTING TEST CASES

To predict the test cases, I keep only the columns that remain in train.tidy. No centering or scaling is applied, because the final model was trained on the raw, unscaled values.

test.tidy <- test[, which(names(test) %in% names(train.tidy))]

test.predictions = predict(finalModel, newdata=test.tidy)
print(test.predictions)
##  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 
##  B  A  B  A  A  E  D  B  A  A  B  C  B  A  E  E  A  B  B  B 
## Levels: A B C D E
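
For the course submission, each of the 20 predictions can be written to its own text file. A minimal sketch (the helper name and filename pattern are assumptions, following the common course convention):

# Hypothetical helper: writes each prediction to problem_id_<i>.txt
pml_write_files = function(x) {
  for (i in seq_along(x)) {
    filename = paste0("problem_id_", i, ".txt")
    write.table(x[i], file=filename, quote=FALSE, row.names=FALSE, col.names=FALSE)
  }
}
pml_write_files(as.character(test.predictions))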