1 Disclaimer

This is an R Markdown document created by Rongbin Ye for the final project of the JHU-Coursera course Practical Machine Learning. As part of the Data Science concentration, this document was written by Rongbin Ye independently and in accordance with the honor code of Johns Hopkins University. Unauthorized use is prohibited.

2 Executive Summary

Using devices such as Jawbone Up, Nike FuelBand, and Fitbit, it is now possible to collect a large amount of data about personal activity relatively inexpensively. These types of devices are part of the quantified self movement – a group of enthusiasts who take measurements about themselves regularly to improve their health, to find patterns in their behavior, or because they are tech geeks. One thing that people regularly do is quantify how much of a particular activity they do, but they rarely quantify how well they do it.

In this project, I use data from accelerometers on the belt, forearm, arm, and dumbbell of 6 participants. They were asked to perform barbell lifts correctly and incorrectly in 5 different ways. More information is available from the website here: http://groupware.les.inf.puc-rio.br/har (see the section on the Weight Lifting Exercise Dataset).

The core business problem is to help clients who use the equipment determine which class their posture belongs to. This is a typical classification question. The expected output is a predictive model that provides this information, with a focus on achieving the highest precision.

The data are cleansed by treating missing values, wrong data types, and the overfitting risk of an overpowered feature set. The major techniques used here are standardization and feature reduction by dropping variables. Three models are tested: a support vector machine, a random forest, and a neural-network multinomial logistic regression. Comparing accuracy, kappa, and per-class performance, the support vector machine (svmPoly) performs essentially on par with the random forest while resting on fewer assumptions, and is therefore chosen for prediction.

This report builds an algorithm capable of detecting users' postures effectively, which directly solves the business problem.

3 Analysis

3.1 Data & Libraries

3.1.1 Import Libraries

# For Data Cleaning
library(readr)
library(tidyverse)
# For Model Training
library(caret)
## for random forest - ensemble
library(rpart)
## for multinomial logistic regression (neural-network based)
library(nnet)
library(factoextra)
library(ggfortify)
library(kernlab)
library(rsample)
## Measurements
library(MLmetrics)
## for computation
library(MASS)
library(foreach)
library(doParallel)
library(e1071)

3.1.2 Load Data
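
Below is a minimal sketch of the load step, assuming the standard course download URLs and the belllift_train / belllift_test object names used throughout this report:

# Read the training and test sets with readr (assumed course URLs)
belllift_train <- read_csv("https://d396qusza40orc.cloudfront.net/predmachlearn/pml-training.csv")
belllift_test <- read_csv("https://d396qusza40orc.cloudfront.net/predmachlearn/pml-testing.csv")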

3.2 Exploratory Data Analysis (EDA)
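
A quick preview of the raw training tibble; the call below is an assumed reconstruction of the chunk behind this output:

# Preview the first ten rows of the raw training data
head(belllift_train, 10)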

## # A tibble: 10 x 160
##       X1 user_name raw_timestamp_p~ raw_timestamp_p~ cvtd_timestamp new_window
##    <dbl> <chr>                <dbl>            <dbl> <chr>          <chr>     
##  1     1 carlitos        1323084231           788290 05/12/2011 11~ no        
##  2     2 carlitos        1323084231           808298 05/12/2011 11~ no        
##  3     3 carlitos        1323084231           820366 05/12/2011 11~ no        
##  4     4 carlitos        1323084232           120339 05/12/2011 11~ no        
##  5     5 carlitos        1323084232           196328 05/12/2011 11~ no        
##  6     6 carlitos        1323084232           304277 05/12/2011 11~ no        
##  7     7 carlitos        1323084232           368296 05/12/2011 11~ no        
##  8     8 carlitos        1323084232           440390 05/12/2011 11~ no        
##  9     9 carlitos        1323084232           484323 05/12/2011 11~ no        
## 10    10 carlitos        1323084232           484434 05/12/2011 11~ no        
## # ... with 154 more variables: num_window <dbl>, roll_belt <dbl>,
## #   pitch_belt <dbl>, yaw_belt <dbl>, total_accel_belt <dbl>,
## #   kurtosis_roll_belt <dbl>, kurtosis_picth_belt <dbl>,
## #   kurtosis_yaw_belt <lgl>, skewness_roll_belt <dbl>,
## #   skewness_roll_belt.1 <dbl>, skewness_yaw_belt <lgl>, max_roll_belt <dbl>,
## #   max_picth_belt <dbl>, max_yaw_belt <dbl>, min_roll_belt <dbl>,
## #   min_pitch_belt <dbl>, min_yaw_belt <dbl>, amplitude_roll_belt <dbl>,
## #   amplitude_pitch_belt <dbl>, amplitude_yaw_belt <dbl>,
## #   var_total_accel_belt <dbl>, avg_roll_belt <dbl>, stddev_roll_belt <dbl>,
## #   var_roll_belt <dbl>, avg_pitch_belt <dbl>, stddev_pitch_belt <dbl>,
## #   var_pitch_belt <dbl>, avg_yaw_belt <dbl>, stddev_yaw_belt <dbl>,
## #   var_yaw_belt <dbl>, gyros_belt_x <dbl>, gyros_belt_y <dbl>,
## #   gyros_belt_z <dbl>, accel_belt_x <dbl>, accel_belt_y <dbl>,
## #   accel_belt_z <dbl>, magnet_belt_x <dbl>, magnet_belt_y <dbl>,
## #   magnet_belt_z <dbl>, roll_arm <dbl>, pitch_arm <dbl>, yaw_arm <dbl>,
## #   total_accel_arm <dbl>, var_accel_arm <dbl>, avg_roll_arm <dbl>,
## #   stddev_roll_arm <dbl>, var_roll_arm <dbl>, avg_pitch_arm <dbl>,
## #   stddev_pitch_arm <dbl>, var_pitch_arm <dbl>, avg_yaw_arm <dbl>,
## #   stddev_yaw_arm <dbl>, var_yaw_arm <dbl>, gyros_arm_x <dbl>,
## #   gyros_arm_y <dbl>, gyros_arm_z <dbl>, accel_arm_x <dbl>, accel_arm_y <dbl>,
## #   accel_arm_z <dbl>, magnet_arm_x <dbl>, magnet_arm_y <dbl>,
## #   magnet_arm_z <dbl>, kurtosis_roll_arm <dbl>, kurtosis_picth_arm <dbl>,
## #   kurtosis_yaw_arm <dbl>, skewness_roll_arm <dbl>, skewness_pitch_arm <dbl>,
## #   skewness_yaw_arm <dbl>, max_roll_arm <dbl>, max_picth_arm <dbl>,
## #   max_yaw_arm <dbl>, min_roll_arm <dbl>, min_pitch_arm <dbl>,
## #   min_yaw_arm <dbl>, amplitude_roll_arm <dbl>, amplitude_pitch_arm <dbl>,
## #   amplitude_yaw_arm <dbl>, roll_dumbbell <dbl>, pitch_dumbbell <dbl>,
## #   yaw_dumbbell <dbl>, kurtosis_roll_dumbbell <dbl>,
## #   kurtosis_picth_dumbbell <dbl>, kurtosis_yaw_dumbbell <lgl>,
## #   skewness_roll_dumbbell <dbl>, skewness_pitch_dumbbell <dbl>,
## #   skewness_yaw_dumbbell <lgl>, max_roll_dumbbell <dbl>,
## #   max_picth_dumbbell <dbl>, max_yaw_dumbbell <dbl>, min_roll_dumbbell <dbl>,
## #   min_pitch_dumbbell <dbl>, min_yaw_dumbbell <dbl>,
## #   amplitude_roll_dumbbell <dbl>, amplitude_pitch_dumbbell <dbl>,
## #   amplitude_yaw_dumbbell <dbl>, total_accel_dumbbell <dbl>,
## #   var_accel_dumbbell <dbl>, avg_roll_dumbbell <dbl>,
## #   stddev_roll_dumbbell <dbl>, var_roll_dumbbell <dbl>, ...

3.2.1 EDA: Misinterpreted Data

# Irrelevant Data: Timestamps, ID
belllift_train <- belllift_train %>%
  dplyr::select(- c("raw_timestamp_part_1","raw_timestamp_part_2", "cvtd_timestamp", "X1", "num_window"))

belllift_test <- belllift_test %>%
  dplyr::select(-c("raw_timestamp_part_1","raw_timestamp_part_2", "cvtd_timestamp", "X1","num_window"))

# The Wrong Data Type
## The ID column (X1) was already dropped above

## user_name should be a factor
belllift_train$user_name <- as.factor(belllift_train$user_name)
belllift_test$user_name <- as.factor(belllift_test$user_name)

## The target variable should be a factor
belllift_train$classe <- as.factor(belllift_train$classe)

3.2.2 EDA: Imbalanced Data

The new_window column is highly imbalanced. Since the test set contains only new_window == "no" rows, the training rows with new_window == "no" are the relevant ones; they are previewed below (see the filter sketch after the tables).

# The imbalance of data
table(belllift_train$new_window)
## 
##    no   yes 
## 19216   406
table(belllift_test$new_window)
## 
## no 
## 20
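
The preview below comes from the following filter (a sketch; the result is printed rather than reassigned, so the full training set is kept for the trimming step):

# Preview the training rows whose window type matches the test set
belllift_train %>% filter(new_window == "no")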
## # A tibble: 19,216 x 155
##    user_name new_window roll_belt pitch_belt yaw_belt total_accel_belt
##    <fct>     <chr>          <dbl>      <dbl>    <dbl>            <dbl>
##  1 carlitos  no              1.41       8.07    -94.4                3
##  2 carlitos  no              1.41       8.07    -94.4                3
##  3 carlitos  no              1.42       8.07    -94.4                3
##  4 carlitos  no              1.48       8.05    -94.4                3
##  5 carlitos  no              1.48       8.07    -94.4                3
##  6 carlitos  no              1.45       8.06    -94.4                3
##  7 carlitos  no              1.42       8.09    -94.4                3
##  8 carlitos  no              1.42       8.13    -94.4                3
##  9 carlitos  no              1.43       8.16    -94.4                3
## 10 carlitos  no              1.45       8.17    -94.4                3
## # ... with 19,206 more rows, and 149 more variables: kurtosis_roll_belt <dbl>,
## #   kurtosis_picth_belt <dbl>, kurtosis_yaw_belt <lgl>,
## #   skewness_roll_belt <dbl>, skewness_roll_belt.1 <dbl>,
## #   skewness_yaw_belt <lgl>, max_roll_belt <dbl>, max_picth_belt <dbl>,
## #   max_yaw_belt <dbl>, min_roll_belt <dbl>, min_pitch_belt <dbl>,
## #   min_yaw_belt <dbl>, amplitude_roll_belt <dbl>, amplitude_pitch_belt <dbl>,
## #   amplitude_yaw_belt <dbl>, var_total_accel_belt <dbl>, avg_roll_belt <dbl>,
## #   stddev_roll_belt <dbl>, var_roll_belt <dbl>, avg_pitch_belt <dbl>,
## #   stddev_pitch_belt <dbl>, var_pitch_belt <dbl>, avg_yaw_belt <dbl>,
## #   stddev_yaw_belt <dbl>, var_yaw_belt <dbl>, gyros_belt_x <dbl>,
## #   gyros_belt_y <dbl>, gyros_belt_z <dbl>, accel_belt_x <dbl>,
## #   accel_belt_y <dbl>, accel_belt_z <dbl>, magnet_belt_x <dbl>,
## #   magnet_belt_y <dbl>, magnet_belt_z <dbl>, roll_arm <dbl>, pitch_arm <dbl>,
## #   yaw_arm <dbl>, total_accel_arm <dbl>, var_accel_arm <dbl>,
## #   avg_roll_arm <dbl>, stddev_roll_arm <dbl>, var_roll_arm <dbl>,
## #   avg_pitch_arm <dbl>, stddev_pitch_arm <dbl>, var_pitch_arm <dbl>,
## #   avg_yaw_arm <dbl>, stddev_yaw_arm <dbl>, var_yaw_arm <dbl>,
## #   gyros_arm_x <dbl>, gyros_arm_y <dbl>, gyros_arm_z <dbl>, accel_arm_x <dbl>,
## #   accel_arm_y <dbl>, accel_arm_z <dbl>, magnet_arm_x <dbl>,
## #   magnet_arm_y <dbl>, magnet_arm_z <dbl>, kurtosis_roll_arm <dbl>,
## #   kurtosis_picth_arm <dbl>, kurtosis_yaw_arm <dbl>, skewness_roll_arm <dbl>,
## #   skewness_pitch_arm <dbl>, skewness_yaw_arm <dbl>, max_roll_arm <dbl>,
## #   max_picth_arm <dbl>, max_yaw_arm <dbl>, min_roll_arm <dbl>,
## #   min_pitch_arm <dbl>, min_yaw_arm <dbl>, amplitude_roll_arm <dbl>,
## #   amplitude_pitch_arm <dbl>, amplitude_yaw_arm <dbl>, roll_dumbbell <dbl>,
## #   pitch_dumbbell <dbl>, yaw_dumbbell <dbl>, kurtosis_roll_dumbbell <dbl>,
## #   kurtosis_picth_dumbbell <dbl>, kurtosis_yaw_dumbbell <lgl>,
## #   skewness_roll_dumbbell <dbl>, skewness_pitch_dumbbell <dbl>,
## #   skewness_yaw_dumbbell <lgl>, max_roll_dumbbell <dbl>,
## #   max_picth_dumbbell <dbl>, max_yaw_dumbbell <dbl>, min_roll_dumbbell <dbl>,
## #   min_pitch_dumbbell <dbl>, min_yaw_dumbbell <dbl>,
## #   amplitude_roll_dumbbell <dbl>, amplitude_pitch_dumbbell <dbl>,
## #   amplitude_yaw_dumbbell <dbl>, total_accel_dumbbell <dbl>,
## #   var_accel_dumbbell <dbl>, avg_roll_dumbbell <dbl>,
## #   stddev_roll_dumbbell <dbl>, var_roll_dumbbell <dbl>,
## #   avg_pitch_dumbbell <dbl>, stddev_pitch_dumbbell <dbl>,
## #   var_pitch_dumbbell <dbl>, avg_yaw_dumbbell <dbl>,
## #   stddev_yaw_dumbbell <dbl>, ...

3.2.3 EDA: Missing Data & Trimming

For this dataset, most of the columns record three-dimensional positions for different body parts, such as the hand and wrist, so missing data is the key data problem to treat. In this case, instead of using Principal Component Analysis, I decided to directly drop the columns with too much sparsity, like kurtosis_roll_belt. Indeed, keeping these variables would likely harm the performance of the models.

# Drop columns that are at least 60% NA
bellift_train_c <- belllift_train                        # working copy of the training set
na_fraction <- colMeans(is.na(belllift_train))           # share of NAs in each column
bellift_train_c <- bellift_train_c[, na_fraction < 0.6]  # keep only well-populated columns

# Capture the shape of the trimmed trains
dim(bellift_train_c)
## [1] 19622    55

The test data must be trimmed in the same manner so that the model keeps functioning normally.

# Filter the test data down to the same predictors
# Selectors: the 54 predictor columns (classe is the 55th)
selector_name <- colnames(bellift_train_c)[1:54]
bellift_test_c <- belllift_test %>%
  dplyr::select(all_of(selector_name))
dim(bellift_test_c)
## [1] 20 54

3.2.4 EDA: Imbalanced Target?

This check verifies that the target variable classe is distributed reasonably evenly across its five classes.

# train dataset observation
table(bellift_train_c$classe)
## 
##    A    B    C    D    E 
## 5580 3797 3422 3216 3607
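
For a proportional view of the same table, the counts can be normalized; a small optional check:

# Share of each classe in the training data
round(prop.table(table(bellift_train_c$classe)), 3)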

3.3 Data Wrangling

3.3.1 Standardization

The other manipulation needed is standardization. On one hand, it mitigates the potential problem of heteroskedasticity; on the other hand, standardized data is more digestible for the models.

Because the columns differ both in scale and in how much data was originally missing, a straightforward transformation is preferable: the remaining missing values are imputed with 0, and every numeric column is standardized (centered and scaled) with scale().

# Standard (z-score) normalization
bellift_train_c[is.na(bellift_train_c)] <- 0   # impute remaining NAs with 0
# standardize all numeric columns
numeric_train <- bellift_train_c %>% keep(is.numeric)
bellift_train_c[colnames(numeric_train)] <- map(bellift_train_c[colnames(numeric_train)], scale)
# NB: the test set is scaled with its own statistics here; reusing the training
# means and standard deviations would be more rigorous
bellift_test_c[colnames(numeric_train)] <- map(bellift_test_c[colnames(numeric_train)], scale)
# create a validation set for model comparison
sample.set <- createDataPartition(bellift_train_c$classe, p = 0.75, list = FALSE)
bellift_train_c2 <- bellift_train_c[sample.set, ]
bellift_valid <- bellift_train_c[-sample.set, ]

3.4 Train Model

3.4.1 Model Selection

Based on the instructions, there are five postures to identify, yet the test set has no classe column. Therefore, for the rigor of the study, a tuning and validation process on the training data is necessary.

Considering the properties of the response (a five-level factor), I decided to try three methods:

  1. Support Vector Machine: one of the most classic and powerful classifiers. As it handles both numeric and categorical variables, it should be effective here; given the five classes, a polynomial-kernel SVM (svmPoly) is used.
  2. Random Forest: one of the most powerful ensemble classifiers in machine learning. Interpretability might be an issue, but the project asks only for the predictions, not an interpretation, so a random forest should provide credible predictions on the test data.
  3. Neural-Network Multinomial Logistic Regression: one of the most complex yet effective classifiers for this case. Through supervised learning, a neural network can disentangle the information from a machine's perspective; the large amount of numeric data and the low requirement for interpretability provide a solid foundation for introducing it here.

The performance of the three models is evaluated and compared in order to pick the best model for the test cases. There are three major criteria:

  1. Accuracy: the basic standard for measuring model performance.

  2. Kappa statistic: accuracy adjusted for the probability of a correct prediction by chance alone; kappa gives a balanced view of performance across true positives and true negatives (see the sketch after this list).

  3. Per-class metrics: since the prediction covers multiple classes, it is important to examine performance broken down by class.
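
For reference, kappa can be computed by hand from a square confusion matrix; a minimal sketch (kappa_stat is an illustrative helper, not part of caret):

# Cohen's kappa for a confusion matrix m (rows = predicted, columns = actual)
kappa_stat <- function(m) {
  p_o <- sum(diag(m)) / sum(m)                   # observed accuracy
  p_e <- sum(rowSums(m) * colSums(m)) / sum(m)^2 # accuracy expected by chance
  (p_o - p_e) / (1 - p_e)
}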

# Use all but one core for parallel training
numcore <- detectCores() - 1
registerDoParallel(numcore)

3.4.2 Model 1: Support Vector Machine

# Training control: 5-fold cross-validation
svmTrainControl <- trainControl(method = "cv", number = 5, verboseIter = FALSE)
# Look up the tunable parameters
modelLookup("svmLinear")
##       model parameter label forReg forClass probModel
## 1 svmLinear         C  Cost   TRUE     TRUE      TRUE
# Train the models (NB: caret expects tuneGrid, capital G; as spelled below, the custom grid is silently ignored and caret's default svmPoly grid is searched)
svm.mod <- train(classe ~ ., data = bellift_train_c2, method = "svmPoly",
                 trControl = svmTrainControl,
                 tunegrid = data.frame(degree = 1, scale = 1, C = c(.1, .5, 1.5)),
                 metric = "Accuracy", preProc = c("center", "scale"),
                 na.action = na.omit)
svm_new.mod <- svm(classe ~ ., data = bellift_train_c2,
                   type = "C-classification", kernel = "radial",
                   gamma = 0.1, cost = 1)

3.4.3 Model 2: Random Forest with Cross Validation

3.4.3.1 Create Control Terms & Autotuning

# Look Up 
modelLookup("rf")
##   model parameter                         label forReg forClass probModel
## 1    rf      mtry #Randomly Selected Predictors   TRUE     TRUE      TRUE
# Create a search grid based on the available parameters.
grid <- expand.grid(.mtry = c(2,3,4))

# Control Object: 5 fold cross validation with the 'best' performing configuration.
ctrl <-
  trainControl(method = "cv",
               number = 5,
               selectionFunction = "best")

3.4.3.2 Random Forest with Cross Validation

# Train Random Forest Model
rf.mod <-
  train(
    classe ~ .,
    na.action = na.exclude,
    data = bellift_train_c2,
    method = "rf",
    metric = "kappa",  # caret expects "Kappa" (capital K), hence the fallback warning below
    trControl = ctrl,
    tuneGrid = grid
  )
## Warning in train.default(x, y, weights = w, ...): The metric "kappa" was not in
## the result set. Accuracy will be used instead.
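
The cross-validated tuning results and the winning mtry can then be read off the fitted object (output not shown):

# Inspect the mtry value selected by cross-validation
rf.mod$bestTune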

3.4.4 Model 3: Multinomial Model

3.4.4.1 Neural-Network-Driven Multinomial Logistic Regression

# Neural-network multinomial logistic regression, up to 300 iterations
multi.mod <-
    nnet::multinom(
      classe ~ .,
      data = bellift_train_c2,
      maxit = 300
    )
## # weights:  300 (236 variable)
## initial  value 23687.707195 
## iter  10 value 16740.628125
## iter  20 value 14118.572855
## iter  30 value 13380.592287
## iter  40 value 12771.432624
## iter  50 value 11973.394788
## iter  60 value 11201.685421
## iter  70 value 10476.206015
## iter  80 value 10111.120707
## iter  90 value 9967.302863
## iter 100 value 9660.543390
## iter 110 value 9438.686149
## iter 120 value 9276.169619
## iter 130 value 9146.268702
## iter 140 value 9017.775809
## iter 150 value 8892.954596
## iter 160 value 8810.121805
## iter 170 value 8760.201091
## iter 180 value 8734.300677
## iter 190 value 8725.011987
## iter 200 value 8720.293926
## iter 210 value 8716.657666
## iter 220 value 8713.086024
## iter 230 value 8707.009666
## iter 240 value 8705.841070
## iter 250 value 8705.299973
## iter 260 value 8705.011990
## final  value 8705.007519 
## converged
# Stop the parallel workers
stopImplicitCluster()

3.5 Performance Evaluation

3.5.1 Accuracy Comparison

# Ground-truth labels from the validation split
test <- bellift_valid$classe
# Validation-set predictions and accuracy for each model
svm_predict <- predict(svm.mod, newdata = bellift_valid, type = "raw")
svm_accuracy <- mean(test == svm_predict)
rf_predict <- predict(rf.mod, newdata = bellift_valid, type = "raw")
rf_accuracy <- mean(test == rf_predict)

multi_predict <- predict(multi.mod, newdata = bellift_valid, type = "class")
multi_accuracy <- mean(test == multi_predict)

all_accuracy <- rbind(svm_accuracy, rf_accuracy)
all_accuracy <- rbind(all_accuracy, multi_accuracy)
colnames(all_accuracy) = "Accuracy"
all_accuracy
##                 Accuracy
## svm_accuracy   0.9944943
## rf_accuracy    0.9949021
## multi_accuracy 0.7769168

3.5.2 Kappa Performances

# svm performance
  svm.matrix <- confusionMatrix(svm_predict, test)
  svm.kappa <- as.numeric(svm.matrix$overall["Kappa"])

# random forest performance
  rf.matrix <- confusionMatrix(rf_predict, test)
  rf.kappa <- as.numeric(rf.matrix$overall["Kappa"])

# multinomial model performance
  multi.matrix <- confusionMatrix(multi_predict, test)
  multi.kappa <- as.numeric(multi.matrix$overall["Kappa"])

  all_kappa <- rbind(svm.kappa, rf.kappa, multi.kappa)
  colnames(all_kappa) <- "Kappa"
  all_kappa
##                 Kappa
## svm.kappa   0.9930360
## rf.kappa    0.9935508
## multi.kappa 0.7172414

3.5.3 Breakdown by Classes
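
The per-class metrics live in the byClass slot of each caret confusion matrix; a minimal sketch of the breakdown (output omitted here):

# Per-class sensitivity, specificity, and balanced accuracy for each model
round(svm.matrix$byClass[, c("Sensitivity", "Specificity", "Balanced Accuracy")], 4)
round(rf.matrix$byClass[, c("Sensitivity", "Specificity", "Balanced Accuracy")], 4)
round(multi.matrix$byClass[, c("Sensitivity", "Specificity", "Balanced Accuracy")], 4)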

3.5.4 Final Selection

all_relevant <- cbind(all_accuracy, all_kappa)
all_relevant
##                 Accuracy     Kappa
## svm_accuracy   0.9944943 0.9930360
## rf_accuracy    0.9949021 0.9935508
## multi_accuracy 0.7769168 0.7172414

Joining these three criteria together, I decided to use the support vector machine in this case. The SVM and the random forest perform almost identically (the random forest is marginally ahead on both metrics), and both clearly outperform the multinomial model. On these criteria, the support vector machine reaches an accuracy of 0.9944943 and a kappa of 0.993036.

Furthermore, by Occam's razor, the SVM involves relatively few hypotheses and, unlike the random forest, is not driven by a random process that could introduce unexpected variation.

Considering all of this, the SVM model is the one used to predict on the test set.

3.6 Predictions

# Prediction Machine
final_prediction <- predict(svm.mod, newdata = bellift_test_c)
final <- bellift_test_c
final$classe <- final_prediction
final$classe
##  [1] A A B A A E D B A A B C B A E E A B A B
## Levels: A B C D E

4 Summary

In this project, I examined the barbell-lifting dataset and built three machine learning models accordingly. Five-fold cross-validation was applied in the training process to control the resubstitution error and improve model performance. After tuning the models and generating predictions, the three were compared and the best model was selected to predict the results for the test set.

This study still can be improved by:

  1. Better data collection. Instead of a hodgepodge of every type of wearable equipment, it would be helpful to group similar devices together, thus avoiding the tremendous amount of inherently missing data. For example, watch data could carry one label and helmet data another.

  2. Watching for overfitting. As one can see, the accuracy of two of the models is so high that overfitting on the training set is a real concern. Fortunately, the validation result was not adversely affected, and the final predictions reached 85% accuracy on the test set. Even though this is not technically an instance of overfitting, one should keep an eye on this issue if the model is applied to other data.

4.1 Final Outcome

table(final$classe)
## 
## A B C D E 
## 9 6 1 1 3

4.2 Conclusion

  1. Of the three models considered, the support vector machine delivers accuracy and kappa essentially on par with the random forest while resting on fewer assumptions, and is therefore selected as the main model in this case.
  2. The test predictions are stored in the data frame final; the summary table is provided above.
  3. For multinomial classification, model selection should rest on both overall performance and performance broken down by class.
  4. The svmPoly model performed well and reached 85% accuracy on the final prediction set.

4.3 Meaning for Business - More Interaction

For these wearable devices, building a proper model to detect which postures (actions) users are performing will add value to the content they already offer. This added interactivity will boost their attractiveness to Generation Z (McKinsey & Co., 2018), the population born between 1995 and 2000, a generation that cares more about immersive experiences and DIY products.

For example, businesses can develop gym products and game products. Recently, Ring Fit Adventure, a physical exercise game with wearable sensors on the Switch platform, has been a remarkable success. Users play by moving, and the Joy-Con controllers feed data back to direct users toward the right posture. Utilizing this technology, software and hardware manufacturers could enable more interactivity with users to create greater fun.

4.3.1 References

  1. McKinsey & Company. 2018. 'True Gen': Generation Z and its implications for companies. Retrieved 01/03/2020 from: https://www.mckinsey.com/industries/consumer-packaged-goods/our-insights/true-gen-generation-z-and-its-implications-for-companies