Using devices such as Jawbone Up, Nike FuelBand, and Fitbit, it is now possible to collect a large amount of data about personal activity relatively inexpensively. These types of devices are part of the quantified self movement - a group of enthusiasts who take measurements about themselves regularly to improve their health, to find patterns in their behavior, or because they are tech geeks. One thing that people regularly do is quantify how much of a particular activity they do, but they rarely quantify how well they do it.
In this project, we will use data from accelerometers on the belt, forearm, arm, and dumbbell of 6 participants to predict the manner in which they performed a weight-lifting exercise - a Human Activity Recognition (HAR) task.
The activity classes are 'A', 'B', 'C', 'D', and 'E': class 'A' corresponds to performing the Unilateral Dumbbell Biceps Curl exactly according to the specification, while classes 'B' through 'E' correspond to common mistakes (throwing the elbows to the front, lifting the dumbbell only halfway, lowering the dumbbell only halfway, and throwing the hips to the front, respectively).
We will train a classification model on the correctly labeled training set and use it to predict the class of 20 unlabeled test instances.
library(caret)
library(rpart)
library(rpart.plot)
library(randomForest)
library(corrplot)
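If any of these packages are not yet installed, they can first be fetched from CRAN (an optional convenience step, not part of the original analysis):
install.packages(c("caret", "rpart", "rpart.plot", "randomForest", "corrplot"))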
trainUrl <-"https://d396qusza40orc.cloudfront.net/predmachlearn/pml-training.csv"
testUrl <- "https://d396qusza40orc.cloudfront.net/predmachlearn/pml-testing.csv"
trainFile <- "./data/pml-training.csv"
testFile <- "./data/pml-testing.csv"
if (!file.exists("./data")) {
dir.create("./data")
}
if (!file.exists(trainFile)) {
download.file(trainUrl, destfile=trainFile, method="curl")
}
if (!file.exists(testFile)) {
download.file(testUrl, destfile=testFile, method="curl")
}
After downloading the data from the data source, we can read the two csv files into two data frames.
trainRaw <- read.csv("./data/pml-training.csv")
testRaw <- read.csv("./data/pml-testing.csv")
dim(trainRaw)
## [1] 19622 160
dim(testRaw)
## [1] 20 160
The training data set contains 19622 observations and 160 variables, while the testing data set contains 20 observations and 160 variables. The “classe” variable in the training set is the outcome to predict.
In this step, we will clean the data, removing columns with missing values as well as some uninformative variables.
sum(complete.cases(trainRaw))
## [1] 406
Since only 406 of the 19622 observations are complete cases, we remove the columns that contain NA values rather than dropping rows.
trainRaw <- trainRaw[, colSums(is.na(trainRaw)) == 0]
testRaw <- testRaw[, colSums(is.na(testRaw)) == 0]
Next, we get rid of some columns that do not contribute to the accelerometer measurements, such as the row index (X) and the timestamp and window fields.
classe <- trainRaw$classe # save the outcome before keeping only numeric columns
trainRemove <- grepl("^X|timestamp|window", names(trainRaw))
trainRaw <- trainRaw[, !trainRemove]
trainCleaned <- trainRaw[, sapply(trainRaw, is.numeric)]
trainCleaned$classe <- classe
testRemove <- grepl("^X|timestamp|window", names(testRaw))
testRaw <- testRaw[, !testRemove]
testCleaned <- testRaw[, sapply(testRaw, is.numeric)]
Now, the cleaned training data set contains 19622 observations and 53 variables, while the testing data set contains 20 observations and 53 variables. The “classe” variable is still in the cleaned training set.
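As a quick sanity check (an added sketch, not part of the original run), we can confirm the dimensions and verify that the predictor columns of the two cleaned sets line up:
dim(trainCleaned)   # expected: 19622 53
dim(testCleaned)    # expected: 20 53
setdiff(names(trainCleaned), names(testCleaned))  # expected: "classe"
setdiff(names(testCleaned), names(trainCleaned))  # expected: "problem_id"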
Then, we can split the cleaned training set into a pure training set (60%) and a validation set (40%). We will use the validation set to estimate the out-of-sample error in a later step.
set.seed(12345) # for reproducibility
inTrain <- createDataPartition(trainCleaned$classe, p=0.60, list=F)
trainData <- trainCleaned[inTrain, ]
testData <- trainCleaned[-inTrain, ]
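A quick check of the split sizes (added here as a sketch; the counts follow from a stratified 60/40 split of the 19622 rows):
dim(trainData)  # expected: 11776 53
dim(testData)   # expected: 7846 53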
We fit a predictive model for activity recognition using the Random Forest algorithm, because it automatically selects important variables and is robust to correlated covariates and outliers in general. We will use 5-fold cross-validation when applying the algorithm.
controlRf <- trainControl(method="cv", 5)
modelRf <- train(classe ~ ., data=trainData, method="rf", trControl=controlRf, ntree=200)
modelRf
## Random Forest
##
## 11776 samples
## 52 predictor
## 5 classes: 'A', 'B', 'C', 'D', 'E'
##
## No pre-processing
## Resampling: Cross-Validated (5 fold)
## Summary of sample sizes: 9420, 9421, 9420, 9420, 9423
## Resampling results across tuning parameters:
##
## mtry Accuracy Kappa Accuracy SD Kappa SD
## 2 0.9872629 0.9838860 0.002118167 0.002680896
## 27 0.9883672 0.9852835 0.002789434 0.003531359
## 52 0.9847160 0.9806639 0.004324208 0.005475037
##
## Accuracy was used to select the optimal model using the largest value.
## The final value used for the model was mtry = 27.
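To see which predictors drive the model, we could also inspect variable importance (an optional addition, not part of the original output):
importance <- varImp(modelRf)  # caret's scaled variable importance
plot(importance, top = 20)     # plot the 20 most important predictors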
Then, we estimate the performance of the model on the validation data set.
predictRf <- predict(modelRf, testData)
confusionMatrix(testData$classe, predictRf)
## Confusion Matrix and Statistics
##
## Reference
## Prediction A B C D E
## A 2230 2 0 0 0
## B 10 1503 5 0 0
## C 0 6 1357 5 0
## D 0 0 16 1267 3
## E 0 3 3 5 1431
##
## Overall Statistics
##
## Accuracy : 0.9926
## 95% CI : (0.9905, 0.9944)
## No Information Rate : 0.2855
## P-Value [Acc > NIR] : < 2.2e-16
##
## Kappa : 0.9906
## Mcnemar's Test P-Value : NA
##
## Statistics by Class:
##
## Class: A Class: B Class: C Class: D Class: E
## Sensitivity 0.9955 0.9927 0.9826 0.9922 0.9979
## Specificity 0.9996 0.9976 0.9983 0.9971 0.9983
## Pos Pred Value 0.9991 0.9901 0.9920 0.9852 0.9924
## Neg Pred Value 0.9982 0.9983 0.9963 0.9985 0.9995
## Prevalence 0.2855 0.1930 0.1760 0.1628 0.1828
## Detection Rate 0.2842 0.1916 0.1730 0.1615 0.1824
## Detection Prevalence 0.2845 0.1935 0.1744 0.1639 0.1838
## Balanced Accuracy 0.9976 0.9952 0.9905 0.9946 0.9981
accuracy <- postResample(predictRf, testData$classe)
accuracy
## Accuracy Kappa
## 0.9926077 0.9906485
oose <- 1 - as.numeric(confusionMatrix(testData$classe, predictRf)$overall[1])
oose
## [1] 0.007392302
In summary, we built a classifier for Human Activity Recognition (HAR) from the body-sensor data and were able to predict the five activity classes with an estimated accuracy of ~99.26% and an estimated out-of-sample error of ~0.74%. This is very good accuracy, on par with the benchmark results from the "HAR Dataset for benchmarking" published by the Groupware@LES project on Human Activity Recognition.
For more details, see: http://groupware.les.inf.puc-rio.br/har
Now, we apply the model to the original testing data set downloaded from the data source. We remove the problem_id column first.
result <- predict(modelRf, testCleaned[, -length(names(testCleaned))])
result
## [1] B A B A A E D B A A B C B A E E A B B B
## Levels: A B C D E
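To prepare these predictions for submission, one common approach is to write each one to its own text file (a hypothetical helper, not part of the original analysis):
pml_write_files <- function(x) {
  # hypothetical helper: writes each prediction to problem_id_<i>.txt
  for (i in seq_along(x)) {
    filename <- paste0("problem_id_", i, ".txt")
    write.table(x[i], file = filename, quote = FALSE,
                row.names = FALSE, col.names = FALSE)
  }
}
pml_write_files(result)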
1. Correlation Matrix Visualization
corrPlot <- cor(trainData[, -length(names(trainData))])
corrplot(corrPlot, method="color")
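If the plot reveals strongly correlated predictors, caret's findCorrelation can flag candidates for removal (an optional sketch; the 0.75 cutoff is an assumption):
highCorr <- findCorrelation(corrPlot, cutoff = 0.75)  # indices of highly correlated columns
names(trainData)[highCorr]                            # their names (classe is the last column, so indices align)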
2. Decision Tree Visualization
treeModel <- rpart(classe ~ ., data=trainData, method="class")
prp(treeModel) # fast plot
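For a more polished rendering, rpart.plot from the same package offers finer control over the styling (an optional alternative to the quick prp plot):
rpart.plot(treeModel)  # more detailed default styling than prp()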