The UC Irvine Machine Learning Repository contains a data set related to glass identification. The data consist of 214 glass samples labeled as one of seven class categories. There are nine predictors, including the refractive index and percentages of eight elements: Na, Mg, Al, Si, K, Ca, Ba, and Fe. The data can be accessed via: library(mlbench), data(Glass), str(Glass).
library(knitr)
library(psych)
library(mlbench)
library(ggplot2)
library(reshape2)
library(corrplot)
library(caret)
library(e1071)
library(DMwR)
data(Glass)
str(Glass)
## 'data.frame': 214 obs. of 10 variables:
## $ RI : num 1.52 1.52 1.52 1.52 1.52 ...
## $ Na : num 13.6 13.9 13.5 13.2 13.3 ...
## $ Mg : num 4.49 3.6 3.55 3.69 3.62 3.61 3.6 3.61 3.58 3.6 ...
## $ Al : num 1.1 1.36 1.54 1.29 1.24 1.62 1.14 1.05 1.37 1.36 ...
## $ Si : num 71.8 72.7 73 72.6 73.1 ...
## $ K : num 0.06 0.48 0.39 0.57 0.55 0.64 0.58 0.57 0.56 0.57 ...
## $ Ca : num 8.75 7.83 7.78 8.22 8.07 8.07 8.17 8.24 8.3 8.4 ...
## $ Ba : num 0 0 0 0 0 0 0 0 0 0 ...
## $ Fe : num 0 0 0 0 0 0.26 0 0 0 0.11 ...
## $ Type: Factor w/ 6 levels "1","2","3","5",..: 1 1 1 1 1 1 1 1 1 1 ...
a. Using visualizations, explore the predictor variables to understand their distributions as well as the relationships between predictors.
kable(head(Glass, 10))
RI | Na | Mg | Al | Si | K | Ca | Ba | Fe | Type |
---|---|---|---|---|---|---|---|---|---|
1.52101 | 13.64 | 4.49 | 1.10 | 71.78 | 0.06 | 8.75 | 0 | 0.00 | 1 |
1.51761 | 13.89 | 3.60 | 1.36 | 72.73 | 0.48 | 7.83 | 0 | 0.00 | 1 |
1.51618 | 13.53 | 3.55 | 1.54 | 72.99 | 0.39 | 7.78 | 0 | 0.00 | 1 |
1.51766 | 13.21 | 3.69 | 1.29 | 72.61 | 0.57 | 8.22 | 0 | 0.00 | 1 |
1.51742 | 13.27 | 3.62 | 1.24 | 73.08 | 0.55 | 8.07 | 0 | 0.00 | 1 |
1.51596 | 12.79 | 3.61 | 1.62 | 72.97 | 0.64 | 8.07 | 0 | 0.26 | 1 |
1.51743 | 13.30 | 3.60 | 1.14 | 73.09 | 0.58 | 8.17 | 0 | 0.00 | 1 |
1.51756 | 13.15 | 3.61 | 1.05 | 73.24 | 0.57 | 8.24 | 0 | 0.00 | 1 |
1.51918 | 14.04 | 3.58 | 1.37 | 72.08 | 0.56 | 8.30 | 0 | 0.00 | 1 |
1.51755 | 13.00 | 3.60 | 1.36 | 72.99 | 0.57 | 8.40 | 0 | 0.11 | 1 |
summary(Glass)
## RI Na Mg Al
## Min. :1.511 Min. :10.73 Min. :0.000 Min. :0.290
## 1st Qu.:1.517 1st Qu.:12.91 1st Qu.:2.115 1st Qu.:1.190
## Median :1.518 Median :13.30 Median :3.480 Median :1.360
## Mean :1.518 Mean :13.41 Mean :2.685 Mean :1.445
## 3rd Qu.:1.519 3rd Qu.:13.82 3rd Qu.:3.600 3rd Qu.:1.630
## Max. :1.534 Max. :17.38 Max. :4.490 Max. :3.500
## Si K Ca Ba
## Min. :69.81 Min. :0.0000 Min. : 5.430 Min. :0.000
## 1st Qu.:72.28 1st Qu.:0.1225 1st Qu.: 8.240 1st Qu.:0.000
## Median :72.79 Median :0.5550 Median : 8.600 Median :0.000
## Mean :72.65 Mean :0.4971 Mean : 8.957 Mean :0.175
## 3rd Qu.:73.09 3rd Qu.:0.6100 3rd Qu.: 9.172 3rd Qu.:0.000
## Max. :75.41 Max. :6.2100 Max. :16.190 Max. :3.150
## Fe Type
## Min. :0.00000 1:70
## 1st Qu.:0.00000 2:76
## Median :0.00000 3:17
## Mean :0.05701 5:13
## 3rd Qu.:0.10000 6: 9
## Max. :0.51000 7:29
#using library(psych)
describe(Glass)
## vars n mean sd median trimmed mad min max range skew kurtosis
## RI 1 214 1.52 0.00 1.52 1.52 0.00 1.51 1.53 0.02 1.60 4.72
## Na 2 214 13.41 0.82 13.30 13.38 0.64 10.73 17.38 6.65 0.45 2.90
## Mg 3 214 2.68 1.44 3.48 2.87 0.30 0.00 4.49 4.49 -1.14 -0.45
## Al 4 214 1.44 0.50 1.36 1.41 0.31 0.29 3.50 3.21 0.89 1.94
## Si 5 214 72.65 0.77 72.79 72.71 0.57 69.81 75.41 5.60 -0.72 2.82
## K 6 214 0.50 0.65 0.56 0.43 0.17 0.00 6.21 6.21 6.46 52.87
## Ca 7 214 8.96 1.42 8.60 8.74 0.66 5.43 16.19 10.76 2.02 6.41
## Ba 8 214 0.18 0.50 0.00 0.03 0.00 0.00 3.15 3.15 3.37 12.08
## Fe 9 214 0.06 0.10 0.00 0.04 0.00 0.00 0.51 0.51 1.73 2.52
## Type* 10 214 2.54 1.71 2.00 2.31 1.48 1.00 6.00 5.00 1.04 -0.29
## se
## RI 0.00
## Na 0.06
## Mg 0.10
## Al 0.03
## Si 0.05
## K 0.04
## Ca 0.10
## Ba 0.03
## Fe 0.01
## Type* 0.12
#using library(ggplot2) && library(reshape2)
ggplot(melt(Glass, id.vars=c('Type')), aes(x=value)) +
geom_histogram(bins=50) +
facet_wrap(~variable, scale="free")
#using library(corrplot)
corrplot(cor(Glass[,1:9]), order = "hclust")
From the correlation plot above, we can see a strong correlation between RI and Ca.
Let's verify that numerically:
#using library(caret)
correlation <- cor(Glass[,-10])
highCorr <- findCorrelation(correlation, cutoff = .80)
print(paste0("Number of predictors flagged for removal (pairwise Pearson correlation > 0.80): ", length(highCorr)))
## [1] "Number of predictors flagged for removal (pairwise Pearson correlation > 0.80): 1"
cor(Glass[,c('RI','Ca')])
## RI Ca
## RI 1.0000000 0.8104027
## Ca 0.8104027 1.0000000
b. Do there appear to be any outliers in the data? Are any predictors skewed?
for(i in 1:9) {
print(paste0("These are the outlier values for predictor Variable: ", colnames(Glass[i])))
print(paste0(boxplot(Glass[i],plot=FALSE)$out))
}
## [1] "These are the outlier values for predictor Variable: RI"
## [1] "1.52667" "1.5232" "1.51215" "1.52725" "1.5241" "1.52475" "1.53125"
## [8] "1.53393" "1.52664" "1.52739" "1.52777" "1.52614" "1.52369" "1.51115"
## [15] "1.51131" "1.52315" "1.52365"
## [1] "These are the outlier values for predictor Variable: Na"
## [1] "11.45" "10.73" "11.23" "11.02" "11.03" "17.38" "15.79"
## [1] "These are the outlier values for predictor Variable: Mg"
## character(0)
## [1] "These are the outlier values for predictor Variable: Al"
## [1] "0.29" "0.47" "0.47" "0.51" "3.5" "3.04" "3.02" "0.34" "2.38" "2.79"
## [11] "2.68" "2.54" "2.34" "2.66" "2.51" "2.42" "2.74" "2.88"
## [1] "These are the outlier values for predictor Variable: Si"
## [1] "70.57" "69.81" "70.16" "74.45" "69.89" "70.48" "70.7" "74.55" "75.41"
## [10] "70.26" "70.43" "75.18"
## [1] "These are the outlier values for predictor Variable: K"
## [1] "1.68" "6.21" "6.21" "1.76" "1.46" "2.7" "1.41"
## [1] "These are the outlier values for predictor Variable: Ca"
## [1] "11.64" "10.79" "13.24" "13.3" "16.19" "11.52" "10.99" "14.68" "14.96"
## [10] "14.4" "11.14" "13.44" "5.87" "11.41" "11.62" "11.53" "11.32" "12.24"
## [19] "12.5" "11.27" "10.88" "11.22" "6.65" "5.43" "5.79" "6.47"
## [1] "These are the outlier values for predictor Variable: Ba"
## [1] "0.09" "0.11" "0.69" "0.14" "0.11" "3.15" "0.27" "0.09" "0.06" "0.15"
## [11] "2.2" "0.24" "1.19" "1.63" "1.68" "0.76" "0.64" "0.4" "1.59" "1.57"
## [21] "0.61" "0.81" "0.66" "0.64" "0.53" "0.63" "0.56" "1.71" "0.67" "1.55"
## [31] "1.38" "2.88" "0.54" "1.06" "1.59" "1.64" "1.57" "1.67"
## [1] "These are the outlier values for predictor Variable: Fe"
## [1] "0.26" "0.3" "0.31" "0.32" "0.34" "0.28" "0.29" "0.28" "0.35" "0.37"
## [11] "0.51" "0.28"
Yes, the boxplot output above shows outliers for most of the predictor variables; only Mg has none.
#using library(e1071)
apply(Glass[,-10], 2, skewness)
## RI Na Mg Al Si K Ca
## 1.6027151 0.4478343 -1.1364523 0.8946104 -0.7202392 6.4600889 2.0184463
## Ba Fe
## 3.3686800 1.7298107
Yes, several predictors are skewed; K, Ba, and Ca show the strongest skew, so a transformation is warranted.
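To make that ranking explicit, we can sort the absolute skewness values (a small optional sketch):
#rank predictors by absolute skewness; K, Ba, and Ca top the list
sort(abs(apply(Glass[,-10], 2, skewness)), decreasing = TRUE)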
c. Are there any relevant transformations of one or more predictors that might improve the classification model?
TheTransform <- apply(Glass[,-10], 2, BoxCoxTrans)
TheTransform
## $RI
## Box-Cox Transformation
##
## 214 data points used to estimate Lambda
##
## Input data summary:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 1.511 1.517 1.518 1.518 1.519 1.534
##
## Largest/Smallest: 1.02
## Sample Skewness: 1.6
##
## Estimated Lambda: -2
##
##
## $Na
## Box-Cox Transformation
##
## 214 data points used to estimate Lambda
##
## Input data summary:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 10.73 12.91 13.30 13.41 13.82 17.38
##
## Largest/Smallest: 1.62
## Sample Skewness: 0.448
##
## Estimated Lambda: -0.1
## With fudge factor, Lambda = 0 will be used for transformations
##
##
## $Mg
## Box-Cox Transformation
##
## 214 data points used to estimate Lambda
##
## Input data summary:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.000 2.115 3.480 2.685 3.600 4.490
##
## Lambda could not be estimated; no transformation is applied
##
##
## $Al
## Box-Cox Transformation
##
## 214 data points used to estimate Lambda
##
## Input data summary:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.290 1.190 1.360 1.445 1.630 3.500
##
## Largest/Smallest: 12.1
## Sample Skewness: 0.895
##
## Estimated Lambda: 0.5
##
##
## $Si
## Box-Cox Transformation
##
## 214 data points used to estimate Lambda
##
## Input data summary:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 69.81 72.28 72.79 72.65 73.09 75.41
##
## Largest/Smallest: 1.08
## Sample Skewness: -0.72
##
## Estimated Lambda: 2
##
##
## $K
## Box-Cox Transformation
##
## 214 data points used to estimate Lambda
##
## Input data summary:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.0000 0.1225 0.5550 0.4971 0.6100 6.2100
##
## Lambda could not be estimated; no transformation is applied
##
##
## $Ca
## Box-Cox Transformation
##
## 214 data points used to estimate Lambda
##
## Input data summary:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 5.430 8.240 8.600 8.957 9.172 16.190
##
## Largest/Smallest: 2.98
## Sample Skewness: 2.02
##
## Estimated Lambda: -1.1
##
##
## $Ba
## Box-Cox Transformation
##
## 214 data points used to estimate Lambda
##
## Input data summary:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.000 0.000 0.000 0.175 0.000 3.150
##
## Lambda could not be estimated; no transformation is applied
##
##
## $Fe
## Box-Cox Transformation
##
## 214 data points used to estimate Lambda
##
## Input data summary:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.00000 0.00000 0.00000 0.05701 0.10000 0.51000
##
## Lambda could not be estimated; no transformation is applied
Reducing the influence of the outliers identified above (for example, with a spatial sign transformation) may improve the model, and transformations such as the log or Box-Cox would address the skewness. Note that a Box-Cox lambda could not be estimated for Mg, K, Ba, or Fe because those predictors contain zeros, and Box-Cox requires strictly positive values.
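As a sketch of how these steps could be chained, caret's preProcess() can estimate Box-Cox transformations, center and scale the predictors, and apply a spatial sign transformation to dampen the outliers; the object names below are illustrative:
#using library(caret): estimate the transformations on the predictors only
pp <- preProcess(Glass[,-10], method = c("BoxCox", "center", "scale", "spatialSign"))
Glass_trans <- predict(pp, Glass[,-10])
apply(Glass_trans, 2, skewness) #skewness should shrink for the transformed predictors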
The soybean data can also be found at the UC Irvine Machine Learning Repository. Data were collected to predict disease in 683 soybeans. The 35 predictors are mostly categorical and include information on the environmental conditions (e.g., temperature, precipitation) and plant conditions (e.g., leaf spots, mold growth). The outcome labels consist of 19 distinct classes. The data can be loaded via:
#using library(mlbench)
data(Soybean)
## See ?Soybean for details
Description: There are 19 classes, only the first 15 of which have been used in prior work. The folklore seems to be that the last four classes are unjustified by the data since they have so few examples. There are 35 categorical attributes, some nominal and some ordered; the value "dna" means does not apply. Attribute values are encoded numerically, with the first value encoded as "0," the second as "1," and so forth. The result is a data frame with 683 observations on 36 variables: the nominal class plus the 35 categorical attributes.
a. Investigate the frequency distributions for the categorical predictors. Are any of the distributions degenerate in the ways discussed earlier in this chapter?
kable(head(Soybean))
Class | date | plant.stand | precip | temp | hail | crop.hist | area.dam | sever | seed.tmt | germ | plant.growth | leaves | leaf.halo | leaf.marg | leaf.size | leaf.shread | leaf.malf | leaf.mild | stem | lodging | stem.cankers | canker.lesion | fruiting.bodies | ext.decay | mycelium | int.discolor | sclerotia | fruit.pods | fruit.spots | seed | mold.growth | seed.discolor | seed.size | shriveling | roots |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
diaporthe-stem-canker | 6 | 0 | 2 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 2 | 2 | 0 | 0 | 0 | 1 | 1 | 3 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 |
diaporthe-stem-canker | 4 | 0 | 2 | 1 | 0 | 2 | 0 | 2 | 1 | 1 | 1 | 1 | 0 | 2 | 2 | 0 | 0 | 0 | 1 | 0 | 3 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 |
diaporthe-stem-canker | 3 | 0 | 2 | 1 | 0 | 1 | 0 | 2 | 1 | 2 | 1 | 1 | 0 | 2 | 2 | 0 | 0 | 0 | 1 | 0 | 3 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 |
diaporthe-stem-canker | 3 | 0 | 2 | 1 | 0 | 1 | 0 | 2 | 0 | 1 | 1 | 1 | 0 | 2 | 2 | 0 | 0 | 0 | 1 | 0 | 3 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 |
diaporthe-stem-canker | 6 | 0 | 2 | 1 | 0 | 2 | 0 | 1 | 0 | 2 | 1 | 1 | 0 | 2 | 2 | 0 | 0 | 0 | 1 | 0 | 3 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 |
diaporthe-stem-canker | 5 | 0 | 2 | 1 | 0 | 3 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 2 | 2 | 0 | 0 | 0 | 1 | 0 | 3 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 |
str(Soybean)
## 'data.frame': 683 obs. of 36 variables:
## $ Class : Factor w/ 19 levels "2-4-d-injury",..: 11 11 11 11 11 11 11 11 11 11 ...
## $ date : Factor w/ 7 levels "0","1","2","3",..: 7 5 4 4 7 6 6 5 7 5 ...
## $ plant.stand : Ord.factor w/ 2 levels "0"<"1": 1 1 1 1 1 1 1 1 1 1 ...
## $ precip : Ord.factor w/ 3 levels "0"<"1"<"2": 3 3 3 3 3 3 3 3 3 3 ...
## $ temp : Ord.factor w/ 3 levels "0"<"1"<"2": 2 2 2 2 2 2 2 2 2 2 ...
## $ hail : Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 2 1 1 ...
## $ crop.hist : Factor w/ 4 levels "0","1","2","3": 2 3 2 2 3 4 3 2 4 3 ...
## $ area.dam : Factor w/ 4 levels "0","1","2","3": 2 1 1 1 1 1 1 1 1 1 ...
## $ sever : Factor w/ 3 levels "0","1","2": 2 3 3 3 2 2 2 2 2 3 ...
## $ seed.tmt : Factor w/ 3 levels "0","1","2": 1 2 2 1 1 1 2 1 2 1 ...
## $ germ : Ord.factor w/ 3 levels "0"<"1"<"2": 1 2 3 2 3 2 1 3 2 3 ...
## $ plant.growth : Factor w/ 2 levels "0","1": 2 2 2 2 2 2 2 2 2 2 ...
## $ leaves : Factor w/ 2 levels "0","1": 2 2 2 2 2 2 2 2 2 2 ...
## $ leaf.halo : Factor w/ 3 levels "0","1","2": 1 1 1 1 1 1 1 1 1 1 ...
## $ leaf.marg : Factor w/ 3 levels "0","1","2": 3 3 3 3 3 3 3 3 3 3 ...
## $ leaf.size : Ord.factor w/ 3 levels "0"<"1"<"2": 3 3 3 3 3 3 3 3 3 3 ...
## $ leaf.shread : Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...
## $ leaf.malf : Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...
## $ leaf.mild : Factor w/ 3 levels "0","1","2": 1 1 1 1 1 1 1 1 1 1 ...
## $ stem : Factor w/ 2 levels "0","1": 2 2 2 2 2 2 2 2 2 2 ...
## $ lodging : Factor w/ 2 levels "0","1": 2 1 1 1 1 1 2 1 1 1 ...
## $ stem.cankers : Factor w/ 4 levels "0","1","2","3": 4 4 4 4 4 4 4 4 4 4 ...
## $ canker.lesion : Factor w/ 4 levels "0","1","2","3": 2 2 1 1 2 1 2 2 2 2 ...
## $ fruiting.bodies: Factor w/ 2 levels "0","1": 2 2 2 2 2 2 2 2 2 2 ...
## $ ext.decay : Factor w/ 3 levels "0","1","2": 2 2 2 2 2 2 2 2 2 2 ...
## $ mycelium : Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...
## $ int.discolor : Factor w/ 3 levels "0","1","2": 1 1 1 1 1 1 1 1 1 1 ...
## $ sclerotia : Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...
## $ fruit.pods : Factor w/ 4 levels "0","1","2","3": 1 1 1 1 1 1 1 1 1 1 ...
## $ fruit.spots : Factor w/ 4 levels "0","1","2","4": 4 4 4 4 4 4 4 4 4 4 ...
## $ seed : Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...
## $ mold.growth : Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...
## $ seed.discolor : Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...
## $ seed.size : Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...
## $ shriveling : Factor w/ 2 levels "0","1": 1 1 1 1 1 1 1 1 1 1 ...
## $ roots : Factor w/ 3 levels "0","1","2": 1 1 1 1 1 1 1 1 1 1 ...
ggplot(melt(Soybean, id.vars=c('Class')), aes(x=value)) +
geom_bar() +
facet_wrap(~variable, scale="free")
## Warning: attributes are not identical across measure variables; they will be
## dropped
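Frequency tables for a few nearly constant predictors make the degeneracy concrete (the three columns chosen anticipate the near-zero variance check below):
#level counts for the suspiciously concentrated predictors
summary(Soybean[, c("leaf.mild", "mycelium", "sclerotia")])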
Identify near-zero variance predictors with caret:
#using caret library
nearZeroVar(Soybean)
## [1] 19 26 28
According to the output above, the degenerate distributions are columns 19 (leaf.mild), 26 (mycelium), and 28 (sclerotia).
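If we choose to drop these degenerate predictors, a minimal sketch (Soybean_filtered is an illustrative name):
#using caret library
nzv <- nearZeroVar(Soybean)
Soybean_filtered <- Soybean[, -nzv]
ncol(Soybean_filtered) #33 columns remain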
b. Roughly 18% of the data are missing. Are there particular predictors that are more likely to be missing? Is the pattern of missing data related to the classes?
#How many missing values in every column
the.na.Soybean <- apply(Soybean, 2, function(x){sum(is.na(x))})
the.na.Soybean
## Class date plant.stand precip temp
## 0 1 36 38 30
## hail crop.hist area.dam sever seed.tmt
## 121 16 1 121 121
## germ plant.growth leaves leaf.halo leaf.marg
## 112 16 0 84 84
## leaf.size leaf.shread leaf.malf leaf.mild stem
## 84 100 84 108 16
## lodging stem.cankers canker.lesion fruiting.bodies ext.decay
## 121 38 38 106 38
## mycelium int.discolor sclerotia fruit.pods fruit.spots
## 38 38 38 84 106
## seed mold.growth seed.discolor seed.size shriveling
## 92 92 106 92 106
## roots
## 31
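Converting these counts to sorted proportions makes it easier to see which predictors are most likely to be missing; a small sketch:
#proportion missing per column, sorted; hail, sever, seed.tmt, and lodging
#top the list at roughly 17.7% (121/683)
na_prop <- sort(colMeans(is.na(Soybean)), decreasing = TRUE)
round(head(na_prop, 10), 3)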
#Check on different Class values
cl_values <- unique(Soybean$Class)
cl_values
## [1] diaporthe-stem-canker charcoal-rot
## [3] rhizoctonia-root-rot phytophthora-rot
## [5] brown-stem-rot powdery-mildew
## [7] downy-mildew brown-spot
## [9] bacterial-blight bacterial-pustule
## [11] purple-seed-stain anthracnose
## [13] phyllosticta-leaf-spot alternarialeaf-spot
## [15] frog-eye-leaf-spot diaporthe-pod-&-stem-blight
## [17] cyst-nematode 2-4-d-injury
## [19] herbicide-injury
## 19 Levels: 2-4-d-injury alternarialeaf-spot anthracnose ... rhizoctonia-root-rot
Num_na <- apply(Soybean, 1, function(x){sum(is.na(x))})
class_soybean <- Soybean$Class
soybean_na_df <- data.frame(class_soybean, Num_na)
kable(head(soybean_na_df,14))
class_soybean | Num_na |
---|---|
diaporthe-stem-canker | 0 |
diaporthe-stem-canker | 0 |
diaporthe-stem-canker | 0 |
diaporthe-stem-canker | 0 |
diaporthe-stem-canker | 0 |
diaporthe-stem-canker | 0 |
diaporthe-stem-canker | 0 |
diaporthe-stem-canker | 0 |
diaporthe-stem-canker | 0 |
diaporthe-stem-canker | 0 |
charcoal-rot | 0 |
charcoal-rot | 0 |
charcoal-rot | 0 |
charcoal-rot | 0 |
results <- aggregate(soybean_na_df$Num_na, by=list(class_soybean=soybean_na_df$class_soybean), FUN=sum)
names(results)[2] <- "NA_Per_Class"
kable(results[order(results[,"NA_Per_Class"]),])
class_soybean | NA_Per_Class | |
---|---|---|
2 | alternarialeaf-spot | 0 |
3 | anthracnose | 0 |
4 | bacterial-blight | 0 |
5 | bacterial-pustule | 0 |
6 | brown-spot | 0 |
7 | brown-stem-rot | 0 |
8 | charcoal-rot | 0 |
11 | diaporthe-stem-canker | 0 |
12 | downy-mildew | 0 |
13 | frog-eye-leaf-spot | 0 |
15 | phyllosticta-leaf-spot | 0 |
17 | powdery-mildew | 0 |
18 | purple-seed-stain | 0 |
19 | rhizoctonia-root-rot | 0 |
14 | herbicide-injury | 160 |
10 | diaporthe-pod-&-stem-blight | 177 |
9 | cyst-nematode | 336 |
1 | 2-4-d-injury | 450 |
16 | phytophthora-rot | 1214 |
c. Develop a strategy for handling missing data, either by eliminating predictors or imputation.
We see from the table above that the class phytophthora-rot has by far the most missing values; a few of its rows are shown below, and imputation is a better option than dropping them.
head(Soybean[Soybean$Class=='phytophthora-rot',],14)
## Class date plant.stand precip temp hail crop.hist area.dam sever
## 31 phytophthora-rot 0 1 2 1 1 1 1 1
## 32 phytophthora-rot 1 1 2 1 <NA> 3 1 <NA>
## 33 phytophthora-rot 2 1 2 2 <NA> 2 1 <NA>
## 34 phytophthora-rot 1 1 2 0 0 2 1 2
## 35 phytophthora-rot 2 1 2 2 <NA> 2 1 <NA>
## 36 phytophthora-rot 3 1 2 1 <NA> 2 1 <NA>
## 37 phytophthora-rot 0 1 1 1 0 1 1 1
## 38 phytophthora-rot 3 1 2 0 0 2 1 2
## 39 phytophthora-rot 2 1 1 1 <NA> 0 1 <NA>
## 40 phytophthora-rot 2 1 2 0 0 1 1 2
## 41 phytophthora-rot 2 1 2 1 <NA> 1 1 <NA>
## 42 phytophthora-rot 1 1 2 1 <NA> 1 1 <NA>
## 43 phytophthora-rot 0 1 2 1 0 3 1 1
## 44 phytophthora-rot 0 1 1 1 1 2 1 2
## seed.tmt germ plant.growth leaves leaf.halo leaf.marg leaf.size leaf.shread
## 31 0 0 1 1 0 2 2 0
## 32 <NA> <NA> 1 1 0 2 2 0
## 33 <NA> <NA> 1 1 <NA> <NA> <NA> <NA>
## 34 1 1 1 1 0 2 2 0
## 35 <NA> <NA> 1 1 <NA> <NA> <NA> <NA>
## 36 <NA> <NA> 1 1 <NA> <NA> <NA> <NA>
## 37 0 0 1 1 0 2 2 0
## 38 1 1 1 1 0 2 2 0
## 39 <NA> <NA> 1 1 0 2 2 0
## 40 0 1 1 1 0 2 2 0
## 41 <NA> <NA> 1 1 0 2 2 0
## 42 <NA> <NA> 1 1 <NA> <NA> <NA> <NA>
## 43 0 0 1 1 0 2 2 0
## 44 1 0 1 1 0 2 2 0
## leaf.malf leaf.mild stem lodging stem.cankers canker.lesion fruiting.bodies
## 31 0 0 1 0 1 2 0
## 32 0 0 1 <NA> 2 2 <NA>
## 33 <NA> <NA> 1 <NA> 3 2 <NA>
## 34 0 0 1 0 2 2 0
## 35 <NA> <NA> 1 <NA> 2 2 <NA>
## 36 <NA> <NA> 1 <NA> 3 2 <NA>
## 37 0 0 1 0 1 2 0
## 38 0 0 1 0 2 2 0
## 39 0 0 1 <NA> 2 2 <NA>
## 40 0 0 1 0 1 2 0
## 41 0 0 1 <NA> 2 2 <NA>
## 42 <NA> <NA> 1 <NA> 1 2 <NA>
## 43 0 0 1 0 1 2 0
## 44 0 0 1 1 2 2 0
## ext.decay mycelium int.discolor sclerotia fruit.pods fruit.spots seed
## 31 1 0 0 0 3 4 0
## 32 0 0 0 0 <NA> <NA> <NA>
## 33 0 0 0 0 <NA> <NA> <NA>
## 34 0 0 0 0 3 4 0
## 35 0 0 0 0 <NA> <NA> <NA>
## 36 0 0 0 0 <NA> <NA> <NA>
## 37 0 0 0 0 3 4 0
## 38 0 0 0 0 3 4 0
## 39 0 0 0 0 <NA> <NA> <NA>
## 40 0 0 0 0 3 4 0
## 41 0 0 0 0 <NA> <NA> <NA>
## 42 0 0 0 0 <NA> <NA> <NA>
## 43 0 0 0 0 3 4 0
## 44 1 0 0 0 3 4 0
## mold.growth seed.discolor seed.size shriveling roots
## 31 0 0 0 0 0
## 32 <NA> <NA> <NA> <NA> 1
## 33 <NA> <NA> <NA> <NA> 1
## 34 0 0 0 0 0
## 35 <NA> <NA> <NA> <NA> 1
## 36 <NA> <NA> <NA> <NA> 1
## 37 0 0 0 0 0
## 38 0 0 0 0 0
## 39 <NA> <NA> <NA> <NA> 1
## 40 0 0 0 0 0
## 41 <NA> <NA> <NA> <NA> 1
## 42 <NA> <NA> <NA> <NA> 1
## 43 0 0 0 0 0
## 44 0 0 0 0 0
The DMwR package provides an imputation function, knnImputation(), which fills in each missing value using the k nearest neighbors of the affected case; we will use it here.
See https://www.rdocumentation.org/packages/DMwR/versions/0.4.1/topics/knnImputation
#using the DMwR package for imputation with the 10 nearest neighbors (k=10)
imputed_data <- knnImputation(Soybean,k=10)
head(imputed_data,14)
## Class date plant.stand precip temp hail crop.hist area.dam
## 1 diaporthe-stem-canker 6 0 2 1 0 1 1
## 2 diaporthe-stem-canker 4 0 2 1 0 2 0
## 3 diaporthe-stem-canker 3 0 2 1 0 1 0
## 4 diaporthe-stem-canker 3 0 2 1 0 1 0
## 5 diaporthe-stem-canker 6 0 2 1 0 2 0
## 6 diaporthe-stem-canker 5 0 2 1 0 3 0
## 7 diaporthe-stem-canker 5 0 2 1 0 2 0
## 8 diaporthe-stem-canker 4 0 2 1 1 1 0
## 9 diaporthe-stem-canker 6 0 2 1 0 3 0
## 10 diaporthe-stem-canker 4 0 2 1 0 2 0
## 11 charcoal-rot 6 0 0 2 0 1 3
## 12 charcoal-rot 4 0 0 1 1 1 3
## 13 charcoal-rot 3 0 0 1 0 1 2
## 14 charcoal-rot 6 0 0 1 1 3 3
## sever seed.tmt germ plant.growth leaves leaf.halo leaf.marg leaf.size
## 1 1 0 0 1 1 0 2 2
## 2 2 1 1 1 1 0 2 2
## 3 2 1 2 1 1 0 2 2
## 4 2 0 1 1 1 0 2 2
## 5 1 0 2 1 1 0 2 2
## 6 1 0 1 1 1 0 2 2
## 7 1 1 0 1 1 0 2 2
## 8 1 0 2 1 1 0 2 2
## 9 1 1 1 1 1 0 2 2
## 10 2 0 2 1 1 0 2 2
## 11 1 1 0 1 1 0 2 2
## 12 1 1 1 1 1 0 2 2
## 13 1 0 0 1 1 0 2 2
## 14 1 1 0 1 1 0 2 2
## leaf.shread leaf.malf leaf.mild stem lodging stem.cankers canker.lesion
## 1 0 0 0 1 1 3 1
## 2 0 0 0 1 0 3 1
## 3 0 0 0 1 0 3 0
## 4 0 0 0 1 0 3 0
## 5 0 0 0 1 0 3 1
## 6 0 0 0 1 0 3 0
## 7 0 0 0 1 1 3 1
## 8 0 0 0 1 0 3 1
## 9 0 0 0 1 0 3 1
## 10 0 0 0 1 0 3 1
## 11 0 0 0 1 0 0 3
## 12 0 0 0 1 1 0 3
## 13 0 0 0 1 0 0 3
## 14 0 0 0 1 0 0 3
## fruiting.bodies ext.decay mycelium int.discolor sclerotia fruit.pods
## 1 1 1 0 0 0 0
## 2 1 1 0 0 0 0
## 3 1 1 0 0 0 0
## 4 1 1 0 0 0 0
## 5 1 1 0 0 0 0
## 6 1 1 0 0 0 0
## 7 1 1 0 0 0 0
## 8 1 1 0 0 0 0
## 9 1 1 0 0 0 0
## 10 1 1 0 0 0 0
## 11 0 0 0 2 1 0
## 12 0 0 0 2 1 0
## 13 0 0 0 2 1 0
## 14 0 0 0 2 1 0
## fruit.spots seed mold.growth seed.discolor seed.size shriveling roots
## 1 4 0 0 0 0 0 0
## 2 4 0 0 0 0 0 0
## 3 4 0 0 0 0 0 0
## 4 4 0 0 0 0 0 0
## 5 4 0 0 0 0 0 0
## 6 4 0 0 0 0 0 0
## 7 4 0 0 0 0 0 0
## 8 4 0 0 0 0 0 0
## 9 4 0 0 0 0 0 0
## 10 4 0 0 0 0 0 0
## 11 4 0 0 0 0 0 0
## 12 4 0 0 0 0 0 0
## 13 4 0 0 0 0 0 0
## 14 4 0 0 0 0 0 0
Finally, let's confirm that no missing values remain:
anyNA(imputed_data)
## [1] FALSE
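For completeness, the prompt's other option, eliminating predictors, could look like the hedged sketch below, which drops any predictor with more than 15% missing values (the threshold is arbitrary and purely illustrative):
#keep Class plus the predictors with at most 15% missing values
keep <- colMeans(is.na(Soybean)) <= 0.15
Soybean_reduced <- Soybean[, keep]
ncol(Soybean_reduced)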