Human Activity Recognition (HAR) is a research area that focuses on predicting which activity was performed at a specific point in time. A group of researchers proposed an HAR-based approach whose main idea is to predict "how well" a workout exercise was performed rather than which exercise. People often make mistakes while exercising, and such mistakes can reduce the training effect or lead to injury.
The information provided by such predictions is useful for building context-aware systems that can specify the correct execution of an exercise, detect execution mistakes, and give the user feedback on the quality of execution.
This report describes a prediction exercise for determining how correctly a weight lifting exercise was performed. The data come from accelerometers mounted on the belt, forearm, arm, and dumbbell of six participants who performed a set of exercises.
Prediction models were built from the training data set provided for this project using three machine learning techniques: gradient boosting (gbm), decision trees (rpart), and random forests (rf). The training data was split into two partitions: 75% was used to train the models and the remaining 25% was held out to test them. Each model's accuracy was estimated on the held-out partition, and the most accurate model was then used to process the separate testing data set that was also provided for this project.
The objective of this study is to predict the manner in which the participants performed a particular weight lifting exercise. The classifications for "how well" the exercise was done are defined as follows:
A - Exactly according to the specification.
B - Throwing the elbows to the front.
C - Lifting the dumbbell only halfway.
D - Lowering the dumbbell only halfway.
E - Throwing the hips to the front.
Class A corresponds to the specified execution of the exercise, while the other four classes correspond to common mistakes.
# Load required libraries
library(data.table)
library(caret)
library(randomForest)
library(rpart)
library(rpart.plot)
library(lattice)
library(ggplot2)
# Create a data directory
workingDir <- getwd()
dataDir <- "predict"
if (!file.exists(dataDir)) dir.create(dataDir)
destDir <- paste0(workingDir, "/", dataDir)
# Download the training and test data
trainDataUrl <- "https://d396qusza40orc.cloudfront.net/predmachlearn/pml-training.csv"
testDataUrl <- "https://d396qusza40orc.cloudfront.net/predmachlearn/pml-testing.csv"
trainDataFileName <- paste0(destDir, "/pml-training.csv")
testDataFileName <- paste0(destDir, "/pml-testing.csv")
download.file(trainDataUrl, trainDataFileName)
download.file(testDataUrl, testDataFileName)
# Read the training and test data and identify NAs
trainData <- read.csv(trainDataFileName, na.strings=c("NA","#DIV/0!", ""))
testData <- read.csv(testDataFileName, na.strings=c("NA","#DIV/0!", ""))
# Inspect the structure of the data in order to proceed with cleaning
str(trainData)
str(testData)
# Clean the training and test data
# a) Remove unnecessary columns 1 - 7 (user info, timestamps, window)
# b) Remove columns with mostly NAs
trainData <- trainData[, -c(1:7)]
trainData <- trainData[,colSums(is.na(trainData)) == 0]
testData <- testData[, -c(1:7)]
testData <- testData[,colSums(is.na(testData)) == 0]
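Because the NA-only columns are dropped independently from the training and test sets, a quick, optional sanity check (not part of the original analysis) is to confirm that the cleaned test set still contains every predictor used for training; note that classe appears only in the training data and problem_id only in the test data.
# Optional sanity check: every training predictor should also exist in the
# cleaned test set ("classe" is the outcome and exists only in trainData)
all(setdiff(names(trainData), "classe") %in% names(testData))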
# Create training data partitions for training and testing the prediction models
inTrain <- createDataPartition(y=trainData$classe, p=0.75, list=FALSE)
inTrain_Train <- trainData[inTrain, ]
inTrain_Test <- trainData[-inTrain, ]
# Produce prediction models using three methods
# a) gbm: gradient boosting method
# b) class: decision tree
# c) rf: random forest
set.seed(1234)
modelGBM <- train(classe ~ ., data = inTrain_Train, method = "gbm")
modelCLASS <- rpart(classe ~ ., data = inTrain_Train, method = "class")
modelRF <- train(classe ~ ., data = inTrain_Train, method = "rf")
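Since rpart.plot is already loaded, the fitted decision tree can optionally be visualized before evaluating the models; this is an illustrative addition rather than one of the original model-building steps.
# Optional: visualize the fitted decision tree (illustrative only)
rpart.plot(modelCLASS, main = "Decision tree for classe")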
# Predict with each of the produced models and evaluate model accuracy
predictGBM <- predict(modelGBM, inTrain_Test)
confusionMatrix(predictGBM, inTrain_Test$classe)
predictCLASS <- predict(modelCLASS, inTrain_Test, type = "class")
confusionMatrix(predictCLASS, inTrain_Test$classe)
predictRF <- predict(modelRF, inTrain_Test)
confusionMatrix(predictRF, inTrain_Test$classe)
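As a rough estimate of the expected out-of-sample error, one minus the validation accuracy can be pulled directly from each confusion matrix; the object names below are the ones created above.
# Estimated out-of-sample error = 1 - accuracy on the held-out partition
1 - confusionMatrix(predictGBM, inTrain_Test$classe)$overall["Accuracy"]
1 - confusionMatrix(predictCLASS, inTrain_Test$classe)$overall["Accuracy"]
1 - confusionMatrix(predictRF, inTrain_Test$classe)$overall["Accuracy"]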
# Using the Random Forest model (the most accurate of the three models),
# make the final prediction with the provided downloaded test dataset
predictfinal <- predict(modelRF, testData)
predictfinal
Given the accuracy estimates above, the random forest model was selected for the final prediction.
Exploratory data analysis and the creation of a tidy data set were essential preliminary steps. The training and test data sets encoded missing values in several forms ("NA", "#DIV/0!", and empty strings) and contained a number of mostly empty columns that could have caused prediction problems had they not been removed.
Caret's train() function took a long time to run (roughly 1.5 hours or more) and consumed a large amount of memory; rpart() was far more resource efficient. The takeaway from this experience is to be careful about which package to use, how many predictors to include, and how much training data to apply.
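If this analysis were rerun, one way to reduce the runtime would be to replace caret's default bootstrap resampling with a small number of cross-validation folds. The sketch below is an alternative call, not what was actually run for the results above; ctrl and modelRF_cv are hypothetical names introduced only for illustration.
# Sketch (not what was run above): 5-fold cross-validation instead of
# caret's default 25 bootstrap resamples, which shortens training considerably
ctrl <- trainControl(method = "cv", number = 5)
modelRF_cv <- train(classe ~ ., data = inTrain_Train, method = "rf", trControl = ctrl)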
My hypothesis was that a decision tree would be the best fit for this classification problem; it turned out to be the worst of the three. It is therefore important to build more than one model in order to uncover potential deficiencies in the prediction approach or in the data itself.
str(trainData)
'data.frame': 19622 obs. of 160 variables:
$ X : int 1 2 3 4 5 6 7 8 9 10 ...
$ user_name : Factor w/ 6 levels "adelmo","carlitos",..: 2 2 2 2 2 2 2 2 2 2 ...
$ raw_timestamp_part_1 : int 1323084231 1323084231 1323084231 1323084232 1323084232 1323084232 1323084232 1323084232 1323084232 1323084232 ...
$ raw_timestamp_part_2 : int 788290 808298 820366 120339 196328 304277 368296 440390 484323 484434 ...
$ cvtd_timestamp : Factor w/ 20 levels "02/12/2011 13:32",..: 9 9 9 9 9 9 9 9 9 9 ...
$ new_window : Factor w/ 2 levels "no","yes": 1 1 1 1 1 1 1 1 1 1 ...
$ num_window : int 11 11 11 12 12 12 12 12 12 12 ...
$ roll_belt : num 1.41 1.41 1.42 1.48 1.48 1.45 1.42 1.42 1.43 1.45 ...
$ pitch_belt : num 8.07 8.07 8.07 8.05 8.07 8.06 8.09 8.13 8.16 8.17 ...
$ yaw_belt : num -94.4 -94.4 -94.4 -94.4 -94.4 -94.4 -94.4 -94.4 -94.4 -94.4 ...
$ total_accel_belt : int 3 3 3 3 3 3 3 3 3 3 ...
$ kurtosis_roll_belt : num NA NA NA NA NA NA NA NA NA NA ...
$ kurtosis_picth_belt : num NA NA NA NA NA NA NA NA NA NA ...
$ kurtosis_yaw_belt : logi NA NA NA NA NA NA ...
$ skewness_roll_belt : num NA NA NA NA NA NA NA NA NA NA ...
$ skewness_roll_belt.1 : num NA NA NA NA NA NA NA NA NA NA ...
$ skewness_yaw_belt : logi NA NA NA NA NA NA ...
$ max_roll_belt : num NA NA NA NA NA NA NA NA NA NA ...
$ max_picth_belt : int NA NA NA NA NA NA NA NA NA NA ...
$ max_yaw_belt : num NA NA NA NA NA NA NA NA NA NA ...
$ min_roll_belt : num NA NA NA NA NA NA NA NA NA NA ...
$ min_pitch_belt : int NA NA NA NA NA NA NA NA NA NA ...
$ min_yaw_belt : num NA NA NA NA NA NA NA NA NA NA ...
$ amplitude_roll_belt : num NA NA NA NA NA NA NA NA NA NA ...
$ amplitude_pitch_belt : int NA NA NA NA NA NA NA NA NA NA ...
$ amplitude_yaw_belt : num NA NA NA NA NA NA NA NA NA NA ...
$ var_total_accel_belt : num NA NA NA NA NA NA NA NA NA NA ...
$ avg_roll_belt : num NA NA NA NA NA NA NA NA NA NA ...
$ stddev_roll_belt : num NA NA NA NA NA NA NA NA NA NA ...
$ var_roll_belt : num NA NA NA NA NA NA NA NA NA NA ...
$ avg_pitch_belt : num NA NA NA NA NA NA NA NA NA NA ...
$ stddev_pitch_belt : num NA NA NA NA NA NA NA NA NA NA ...
$ var_pitch_belt : num NA NA NA NA NA NA NA NA NA NA ...
$ avg_yaw_belt : num NA NA NA NA NA NA NA NA NA NA ...
$ stddev_yaw_belt : num NA NA NA NA NA NA NA NA NA NA ...
$ var_yaw_belt : num NA NA NA NA NA NA NA NA NA NA ...
$ gyros_belt_x : num 0 0.02 0 0.02 0.02 0.02 0.02 0.02 0.02 0.03 ...
$ gyros_belt_y : num 0 0 0 0 0.02 0 0 0 0 0 ...
$ gyros_belt_z : num -0.02 -0.02 -0.02 -0.03 -0.02 -0.02 -0.02 -0.02 -0.02 0 ...
$ accel_belt_x : int -21 -22 -20 -22 -21 -21 -22 -22 -20 -21 ...
$ accel_belt_y : int 4 4 5 3 2 4 3 4 2 4 ...
$ accel_belt_z : int 22 22 23 21 24 21 21 21 24 22 ...
$ magnet_belt_x : int -3 -7 -2 -6 -6 0 -4 -2 1 -3 ...
$ magnet_belt_y : int 599 608 600 604 600 603 599 603 602 609 ...
$ magnet_belt_z : int -313 -311 -305 -310 -302 -312 -311 -313 -312 -308 ...
$ roll_arm : num -128 -128 -128 -128 -128 -128 -128 -128 -128 -128 ...
$ pitch_arm : num 22.5 22.5 22.5 22.1 22.1 22 21.9 21.8 21.7 21.6 ...
$ yaw_arm : num -161 -161 -161 -161 -161 -161 -161 -161 -161 -161 ...
$ total_accel_arm : int 34 34 34 34 34 34 34 34 34 34 ...
$ var_accel_arm : num NA NA NA NA NA NA NA NA NA NA ...
$ avg_roll_arm : num NA NA NA NA NA NA NA NA NA NA ...
$ stddev_roll_arm : num NA NA NA NA NA NA NA NA NA NA ...
$ var_roll_arm : num NA NA NA NA NA NA NA NA NA NA ...
$ avg_pitch_arm : num NA NA NA NA NA NA NA NA NA NA ...
$ stddev_pitch_arm : num NA NA NA NA NA NA NA NA NA NA ...
$ var_pitch_arm : num NA NA NA NA NA NA NA NA NA NA ...
$ avg_yaw_arm : num NA NA NA NA NA NA NA NA NA NA ...
$ stddev_yaw_arm : num NA NA NA NA NA NA NA NA NA NA ...
$ var_yaw_arm : num NA NA NA NA NA NA NA NA NA NA ...
$ gyros_arm_x : num 0 0.02 0.02 0.02 0 0.02 0 0.02 0.02 0.02 ...
$ gyros_arm_y : num 0 -0.02 -0.02 -0.03 -0.03 -0.03 -0.03 -0.02 -0.03 -0.03 ...
$ gyros_arm_z : num -0.02 -0.02 -0.02 0.02 0 0 0 0 -0.02 -0.02 ...
$ accel_arm_x : int -288 -290 -289 -289 -289 -289 -289 -289 -288 -288 ...
$ accel_arm_y : int 109 110 110 111 111 111 111 111 109 110 ...
$ accel_arm_z : int -123 -125 -126 -123 -123 -122 -125 -124 -122 -124 ...
$ magnet_arm_x : int -368 -369 -368 -372 -374 -369 -373 -372 -369 -376 ...
$ magnet_arm_y : int 337 337 344 344 337 342 336 338 341 334 ...
$ magnet_arm_z : int 516 513 513 512 506 513 509 510 518 516 ...
$ kurtosis_roll_arm : num NA NA NA NA NA NA NA NA NA NA ...
$ kurtosis_picth_arm : num NA NA NA NA NA NA NA NA NA NA ...
$ kurtosis_yaw_arm : num NA NA NA NA NA NA NA NA NA NA ...
$ skewness_roll_arm : num NA NA NA NA NA NA NA NA NA NA ...
$ skewness_pitch_arm : num NA NA NA NA NA NA NA NA NA NA ...
$ skewness_yaw_arm : num NA NA NA NA NA NA NA NA NA NA ...
$ max_roll_arm : num NA NA NA NA NA NA NA NA NA NA ...
$ max_picth_arm : num NA NA NA NA NA NA NA NA NA NA ...
$ max_yaw_arm : int NA NA NA NA NA NA NA NA NA NA ...
$ min_roll_arm : num NA NA NA NA NA NA NA NA NA NA ...
$ min_pitch_arm : num NA NA NA NA NA NA NA NA NA NA ...
$ min_yaw_arm : int NA NA NA NA NA NA NA NA NA NA ...
$ amplitude_roll_arm : num NA NA NA NA NA NA NA NA NA NA ...
$ amplitude_pitch_arm : num NA NA NA NA NA NA NA NA NA NA ...
$ amplitude_yaw_arm : int NA NA NA NA NA NA NA NA NA NA ...
$ roll_dumbbell : num 13.1 13.1 12.9 13.4 13.4 ...
$ pitch_dumbbell : num -70.5 -70.6 -70.3 -70.4 -70.4 ...
$ yaw_dumbbell : num -84.9 -84.7 -85.1 -84.9 -84.9 ...
$ kurtosis_roll_dumbbell : num NA NA NA NA NA NA NA NA NA NA ...
$ kurtosis_picth_dumbbell : num NA NA NA NA NA NA NA NA NA NA ...
$ kurtosis_yaw_dumbbell : logi NA NA NA NA NA NA ...
$ skewness_roll_dumbbell : num NA NA NA NA NA NA NA NA NA NA ...
$ skewness_pitch_dumbbell : num NA NA NA NA NA NA NA NA NA NA ...
$ skewness_yaw_dumbbell : logi NA NA NA NA NA NA ...
$ max_roll_dumbbell : num NA NA NA NA NA NA NA NA NA NA ...
$ max_picth_dumbbell : num NA NA NA NA NA NA NA NA NA NA ...
$ max_yaw_dumbbell : num NA NA NA NA NA NA NA NA NA NA ...
$ min_roll_dumbbell : num NA NA NA NA NA NA NA NA NA NA ...
$ min_pitch_dumbbell : num NA NA NA NA NA NA NA NA NA NA ...
$ min_yaw_dumbbell : num NA NA NA NA NA NA NA NA NA NA ...
$ amplitude_roll_dumbbell : num NA NA NA NA NA NA NA NA NA NA ...
[list output truncated]
str(testData)
'data.frame': 20 obs. of 160 variables:
$ X : int 1 2 3 4 5 6 7 8 9 10 ...
$ user_name : Factor w/ 6 levels "adelmo","carlitos",..: 6 5 5 1 4 5 5 5 2 3 ...
$ raw_timestamp_part_1 : int 1323095002 1322673067 1322673075 1322832789 1322489635 1322673149 1322673128 1322673076 1323084240 1322837822 ...
$ raw_timestamp_part_2 : int 868349 778725 342967 560311 814776 510661 766645 54671 916313 384285 ...
$ cvtd_timestamp : Factor w/ 11 levels "02/12/2011 13:33",..: 5 10 10 1 6 11 11 10 3 2 ...
$ new_window : Factor w/ 1 level "no": 1 1 1 1 1 1 1 1 1 1 ...
$ num_window : int 74 431 439 194 235 504 485 440 323 664 ...
$ roll_belt : num 123 1.02 0.87 125 1.35 -5.92 1.2 0.43 0.93 114 ...
$ pitch_belt : num 27 4.87 1.82 -41.6 3.33 1.59 4.44 4.15 6.72 22.4 ...
$ yaw_belt : num -4.75 -88.9 -88.5 162 -88.6 -87.7 -87.3 -88.5 -93.7 -13.1 ...
$ total_accel_belt : int 20 4 5 17 3 4 4 4 4 18 ...
$ kurtosis_roll_belt : logi NA NA NA NA NA NA ...
$ kurtosis_picth_belt : logi NA NA NA NA NA NA ...
$ kurtosis_yaw_belt : logi NA NA NA NA NA NA ...
$ skewness_roll_belt : logi NA NA NA NA NA NA ...
$ skewness_roll_belt.1 : logi NA NA NA NA NA NA ...
$ skewness_yaw_belt : logi NA NA NA NA NA NA ...
$ max_roll_belt : logi NA NA NA NA NA NA ...
$ max_picth_belt : logi NA NA NA NA NA NA ...
$ max_yaw_belt : logi NA NA NA NA NA NA ...
$ min_roll_belt : logi NA NA NA NA NA NA ...
$ min_pitch_belt : logi NA NA NA NA NA NA ...
$ min_yaw_belt : logi NA NA NA NA NA NA ...
$ amplitude_roll_belt : logi NA NA NA NA NA NA ...
$ amplitude_pitch_belt : logi NA NA NA NA NA NA ...
$ amplitude_yaw_belt : logi NA NA NA NA NA NA ...
$ var_total_accel_belt : logi NA NA NA NA NA NA ...
$ avg_roll_belt : logi NA NA NA NA NA NA ...
$ stddev_roll_belt : logi NA NA NA NA NA NA ...
$ var_roll_belt : logi NA NA NA NA NA NA ...
$ avg_pitch_belt : logi NA NA NA NA NA NA ...
$ stddev_pitch_belt : logi NA NA NA NA NA NA ...
$ var_pitch_belt : logi NA NA NA NA NA NA ...
$ avg_yaw_belt : logi NA NA NA NA NA NA ...
$ stddev_yaw_belt : logi NA NA NA NA NA NA ...
$ var_yaw_belt : logi NA NA NA NA NA NA ...
$ gyros_belt_x : num -0.5 -0.06 0.05 0.11 0.03 0.1 -0.06 -0.18 0.1 0.14 ...
$ gyros_belt_y : num -0.02 -0.02 0.02 0.11 0.02 0.05 0 -0.02 0 0.11 ...
$ gyros_belt_z : num -0.46 -0.07 0.03 -0.16 0 -0.13 0 -0.03 -0.02 -0.16 ...
$ accel_belt_x : int -38 -13 1 46 -8 -11 -14 -10 -15 -25 ...
$ accel_belt_y : int 69 11 -1 45 4 -16 2 -2 1 63 ...
$ accel_belt_z : int -179 39 49 -156 27 38 35 42 32 -158 ...
$ magnet_belt_x : int -13 43 29 169 33 31 50 39 -6 10 ...
$ magnet_belt_y : int 581 636 631 608 566 638 622 635 600 601 ...
$ magnet_belt_z : int -382 -309 -312 -304 -418 -291 -315 -305 -302 -330 ...
$ roll_arm : num 40.7 0 0 -109 76.1 0 0 0 -137 -82.4 ...
$ pitch_arm : num -27.8 0 0 55 2.76 0 0 0 11.2 -63.8 ...
$ yaw_arm : num 178 0 0 -142 102 0 0 0 -167 -75.3 ...
$ total_accel_arm : int 10 38 44 25 29 14 15 22 34 32 ...
$ var_accel_arm : logi NA NA NA NA NA NA ...
$ avg_roll_arm : logi NA NA NA NA NA NA ...
$ stddev_roll_arm : logi NA NA NA NA NA NA ...
$ var_roll_arm : logi NA NA NA NA NA NA ...
$ avg_pitch_arm : logi NA NA NA NA NA NA ...
$ stddev_pitch_arm : logi NA NA NA NA NA NA ...
$ var_pitch_arm : logi NA NA NA NA NA NA ...
$ avg_yaw_arm : logi NA NA NA NA NA NA ...
$ stddev_yaw_arm : logi NA NA NA NA NA NA ...
$ var_yaw_arm : logi NA NA NA NA NA NA ...
$ gyros_arm_x : num -1.65 -1.17 2.1 0.22 -1.96 0.02 2.36 -3.71 0.03 0.26 ...
$ gyros_arm_y : num 0.48 0.85 -1.36 -0.51 0.79 0.05 -1.01 1.85 -0.02 -0.5 ...
$ gyros_arm_z : num -0.18 -0.43 1.13 0.92 -0.54 -0.07 0.89 -0.69 -0.02 0.79 ...
$ accel_arm_x : int 16 -290 -341 -238 -197 -26 99 -98 -287 -301 ...
$ accel_arm_y : int 38 215 245 -57 200 130 79 175 111 -42 ...
$ accel_arm_z : int 93 -90 -87 6 -30 -19 -67 -78 -122 -80 ...
$ magnet_arm_x : int -326 -325 -264 -173 -170 396 702 535 -367 -420 ...
$ magnet_arm_y : int 385 447 474 257 275 176 15 215 335 294 ...
$ magnet_arm_z : int 481 434 413 633 617 516 217 385 520 493 ...
$ kurtosis_roll_arm : logi NA NA NA NA NA NA ...
$ kurtosis_picth_arm : logi NA NA NA NA NA NA ...
$ kurtosis_yaw_arm : logi NA NA NA NA NA NA ...
$ skewness_roll_arm : logi NA NA NA NA NA NA ...
$ skewness_pitch_arm : logi NA NA NA NA NA NA ...
$ skewness_yaw_arm : logi NA NA NA NA NA NA ...
$ max_roll_arm : logi NA NA NA NA NA NA ...
$ max_picth_arm : logi NA NA NA NA NA NA ...
$ max_yaw_arm : logi NA NA NA NA NA NA ...
$ min_roll_arm : logi NA NA NA NA NA NA ...
$ min_pitch_arm : logi NA NA NA NA NA NA ...
$ min_yaw_arm : logi NA NA NA NA NA NA ...
$ amplitude_roll_arm : logi NA NA NA NA NA NA ...
$ amplitude_pitch_arm : logi NA NA NA NA NA NA ...
$ amplitude_yaw_arm : logi NA NA NA NA NA NA ...
$ roll_dumbbell : num -17.7 54.5 57.1 43.1 -101.4 ...
$ pitch_dumbbell : num 25 -53.7 -51.4 -30 -53.4 ...
$ yaw_dumbbell : num 126.2 -75.5 -75.2 -103.3 -14.2 ...
$ kurtosis_roll_dumbbell : logi NA NA NA NA NA NA ...
$ kurtosis_picth_dumbbell : logi NA NA NA NA NA NA ...
$ kurtosis_yaw_dumbbell : logi NA NA NA NA NA NA ...
$ skewness_roll_dumbbell : logi NA NA NA NA NA NA ...
$ skewness_pitch_dumbbell : logi NA NA NA NA NA NA ...
$ skewness_yaw_dumbbell : logi NA NA NA NA NA NA ...
$ max_roll_dumbbell : logi NA NA NA NA NA NA ...
$ max_picth_dumbbell : logi NA NA NA NA NA NA ...
$ max_yaw_dumbbell : logi NA NA NA NA NA NA ...
$ min_roll_dumbbell : logi NA NA NA NA NA NA ...
$ min_pitch_dumbbell : logi NA NA NA NA NA NA ...
$ min_yaw_dumbbell : logi NA NA NA NA NA NA ...
$ amplitude_roll_dumbbell : logi NA NA NA NA NA NA ...
[list output truncated]
> predictGBM <- predict(modelGBM, inTrain_Test)
> confusionMatrix(predictGBM, inTrain_Test$classe)
Confusion Matrix and Statistics
Reference
Prediction A B C D E
A 1377 34 0 0 1
B 13 895 16 6 6
C 4 20 826 26 5
D 0 0 12 765 9
E 1 0 1 7 880
Overall Statistics
Accuracy : 0.9672
95% CI : (0.9618, 0.972)
No Information Rate : 0.2845
P-Value [Acc > NIR] : < 2.2e-16
Kappa : 0.9585
Mcnemar's Test P-Value : NA
Statistics by Class:
Class: A Class: B Class: C Class: D Class: E
Sensitivity 0.9871 0.9431 0.9661 0.9515 0.9767
Specificity 0.9900 0.9896 0.9864 0.9949 0.9978
Pos Pred Value 0.9752 0.9562 0.9376 0.9733 0.9899
Neg Pred Value 0.9948 0.9864 0.9928 0.9905 0.9948
Prevalence 0.2845 0.1935 0.1743 0.1639 0.1837
Detection Rate 0.2808 0.1825 0.1684 0.1560 0.1794
Detection Prevalence 0.2879 0.1909 0.1796 0.1603 0.1813
Balanced Accuracy 0.9886 0.9664 0.9762 0.9732 0.9872
>
> predictCLASS <- predict(modelCLASS, inTrain_Test, type = "class")
> confusionMatrix(predictCLASS, inTrain_Test$classe)
Confusion Matrix and Statistics
Reference
Prediction A B C D E
A 1251 207 13 93 39
B 41 535 78 29 57
C 35 88 688 139 104
D 43 75 51 492 43
E 25 44 25 51 658
Overall Statistics
Accuracy : 0.739
95% CI : (0.7265, 0.7512)
No Information Rate : 0.2845
P-Value [Acc > NIR] : < 2.2e-16
Kappa : 0.6682
Mcnemar's Test P-Value : < 2.2e-16
Statistics by Class:
Class: A Class: B Class: C Class: D Class: E
Sensitivity 0.8968 0.5638 0.8047 0.6119 0.7303
Specificity 0.8997 0.9482 0.9096 0.9483 0.9638
Pos Pred Value 0.7804 0.7230 0.6528 0.6989 0.8194
Neg Pred Value 0.9564 0.9006 0.9566 0.9257 0.9407
Prevalence 0.2845 0.1935 0.1743 0.1639 0.1837
Detection Rate 0.2551 0.1091 0.1403 0.1003 0.1342
Detection Prevalence 0.3269 0.1509 0.2149 0.1436 0.1637
Balanced Accuracy 0.8982 0.7560 0.8571 0.7801 0.8470
>
> predictRF <- predict(modelRF, inTrain_Test)
> confusionMatrix(predictRF, inTrain_Test$classe)
Confusion Matrix and Statistics
Reference
Prediction A B C D E
A 1394 5 0 0 0
B 0 943 2 0 0
C 0 1 851 8 0
D 0 0 2 796 2
E 1 0 0 0 899
Overall Statistics
Accuracy : 0.9957
95% CI : (0.9935, 0.9973)
No Information Rate : 0.2845
P-Value [Acc > NIR] : < 2.2e-16
Kappa : 0.9946
Mcnemar's Test P-Value : NA
Statistics by Class:
Class: A Class: B Class: C Class: D Class: E
Sensitivity 0.9993 0.9937 0.9953 0.9900 0.9978
Specificity 0.9986 0.9995 0.9978 0.9990 0.9998
Pos Pred Value 0.9964 0.9979 0.9895 0.9950 0.9989
Neg Pred Value 0.9997 0.9985 0.9990 0.9981 0.9995
Prevalence 0.2845 0.1935 0.1743 0.1639 0.1837
Detection Rate 0.2843 0.1923 0.1735 0.1623 0.1833
Detection Prevalence 0.2853 0.1927 0.1754 0.1631 0.1835
Balanced Accuracy 0.9989 0.9966 0.9965 0.9945 0.9988
> predictfinal <- predict(modelRF, testData)
> predictfinal
[1] B A B A A E D B A A B C B A E E A B B B
Levels: A B C D E
> sessionInfo()
R version 3.3.0 (2016-05-03)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1
locale:
[1] LC_COLLATE=English_United States.1252 LC_CTYPE=English_United States.1252
[3] LC_MONETARY=English_United States.1252 LC_NUMERIC=C
[5] LC_TIME=English_United States.1252
attached base packages:
[1] parallel splines stats graphics grDevices utils datasets methods base
other attached packages:
[1] plyr_1.8.3 gbm_2.1.1 survival_2.39-4 rpart.plot_1.5.3 rpart_4.1-10
[6] randomForest_4.6-12 caret_6.0-68 ggplot2_2.1.0 lattice_0.20-33 data.table_1.9.6
loaded via a namespace (and not attached):
[1] Rcpp_0.12.5 compiler_3.3.0 nloptr_1.0.4 iterators_1.0.8 class_7.3-14
[6] tools_3.3.0 digest_0.6.9 lme4_1.1-12 nlme_3.1-128 gtable_0.2.0
[11] mgcv_1.8-12 Matrix_1.2-6 foreach_1.4.3 yaml_2.1.13 SparseM_1.7
[16] e1071_1.6-7 stringr_1.0.0 MatrixModels_0.4-1 stats4_3.3.0 grid_3.3.0
[21] nnet_7.3-12 rmarkdown_0.9.6 minqa_1.2.4 reshape2_1.4.1 car_2.1-2
[26] magrittr_1.5 htmltools_0.3.5 scales_0.4.0 codetools_0.2-14 MASS_7.3-45
[31] pbkrtest_0.4-6 colorspace_1.2-6 quantreg_5.24 stringi_1.0-1 munsell_0.4.3
[36] chron_2.3-47
Velloso, E.; Bulling, A.; Gellersen, H.; Ugulino, W.; Fuks, H. Qualitative Activity Recognition of Weight Lifting Exercises. http://groupware.les.inf.puc-rio.br/har