Step 1: Collecting Data

The data come from the UCI Machine Learning Repository and are also available in R through the mlbench package as the HouseVotes84 dataset. Each cell has been pre-processed into a "Yes" or "No" vote, and some cells are missing values that indicate neither a "Yes" nor a "No".

Step 2: Exploring and preparing the data

The data contain 435 observations (rows), one per House representative, and 17 variables (columns): the party affiliation (Class) and 16 features recording each representative's position on a different bill.

library(e1071)    # provides naiveBayes()
library(mlbench)  # provides the HouseVotes84 dataset
data("HouseVotes84")

We randomize the dataset by row so that observations with the same class label are not clustered together.

HouseVotes84 <- HouseVotes84[sample(nrow(HouseVotes84)), ]  # shuffle the rows
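
Because sample() reshuffles differently on every run, a seed can be set first if a reproducible split is wanted; a minimal sketch (the seed value 123 is arbitrary):

set.seed(123)                                               # arbitrary seed, for reproducibility only
HouseVotes84 <- HouseVotes84[sample(nrow(HouseVotes84)), ]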

The whole dataset is split by rows: about 75% of the rows become the training set and the remaining 25% become the test set, keeping all columns.

house_vote_train <- HouseVotes84[1:326, ]
house_vote_test  <- HouseVotes84[327:435, ]
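
The cutoff of 326 rows is simply about 75% of the 435 observations; the same split can also be derived from the data, which keeps working if the dataset size changes (a minimal sketch, equivalent to the hard-coded indices above):

n_train <- round(0.75 * nrow(HouseVotes84))   # 326 of 435 rows
house_vote_train <- HouseVotes84[1:n_train, ]
house_vote_test  <- HouseVotes84[(n_train + 1):nrow(HouseVotes84), ]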

Similarly, the "Class" feature will be used as the target label for classification. We split the "Class" vector the same way, taking about 75% as the training labels and the remaining 25% as the test labels. We can also use prop.table() to convert the class counts into proportions and check that the fractions of the two class levels are similar across the training and test labels.

train_labels <- HouseVotes84[1:326, ]$Class
test_labels  <- HouseVotes84[327:435, ]$Class
prop.table(table(train_labels))
train_labels
  democrat republican 
 0.5828221  0.4171779 
prop.table(table(test_labels))
test_labels
  democrat republican 
  0.706422   0.293578 

Step 3: Training a model on the data

Using the training set and its labels, we train the model by calling naiveBayes(), which generates a Naive Bayes model.

house_vote_model <- naiveBayes(house_vote_train, train_labels)
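
The fitted object stores the class priors and one table of conditional probabilities per feature, so we can peek at what the model actually learned; a quick sketch (assuming the vote columns keep their default mlbench names V1-V16):

house_vote_model$apriori       # class counts used for the prior probabilities
house_vote_model$tables$V1     # P(vote on bill V1 | party), one row per class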

Using the model we just generated with the Naive Bayes algorithm, along with the test set, we can predict the class label of each observation (row) in the test set.

test_pred <- predict(house_vote_model, house_vote_test)
head(test_pred)
[1] democrat   democrat   republican democrat   democrat   republican
Levels: democrat republican
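
By default predict() returns the most likely class for each row; passing type = "raw" instead returns the posterior probability of each class, which shows how confident the model is in each prediction (a quick sketch):

test_prob <- predict(house_vote_model, house_vote_test, type = "raw")
head(test_prob)                # one row per test observation: P(democrat), P(republican)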

Step 4: Evaluating model performance

Now we can compare the predicted labels with the actual test labels and see how accurately the Naive Bayes model has learned from the data.

library(gmodels)
CrossTable(test_pred, test_labels,
           prop.chisq = FALSE, prop.t = FALSE, prop.r = FALSE,
           dnn = c('predicted', 'actual'))

 
   Cell Contents
|-------------------------|
|                       N |
|           N / Col Total |
|-------------------------|

 
Total Observations in Table:  109 

 
             | actual 
   predicted |   democrat | republican |  Row Total | 
-------------|------------|------------|------------|
    democrat |         75 |          1 |         76 | 
             |      0.974 |      0.031 |            | 
-------------|------------|------------|------------|
  republican |          2 |         31 |         33 | 
             |      0.026 |      0.969 |            | 
-------------|------------|------------|------------|
Column Total |         77 |         32 |        109 | 
             |      0.706 |      0.294 |            | 
-------------|------------|------------|------------|

 

As it turns out, 2 observations were misclassified as republican and 1 was misclassified as democrat. The accuracy of this Naive Bayes model is quite high: Accuracy = 106/109 * 100 ≈ 97.2%.
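
The same accuracy can also be computed directly from the predictions rather than read off the cross table; a minimal sketch:

conf_mat <- table(predicted = test_pred, actual = test_labels)
sum(diag(conf_mat)) / sum(conf_mat)   # proportion of correct predictions, ~0.972
mean(test_pred == test_labels)        # equivalent one-liner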

Step 5: Improving model performance

One way to try to improve the model is Laplace smoothing: adding the same small count to every feature-class combination in the frequency tables, so that a feature level never observed for a class does not produce a zero probability that overrules the evidence of all the other features. Using laplace = 1 here, however, did not improve the model; the accuracy dropped slightly to 96.3% (105/109 * 100).
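
To see why the smoothing matters, here is a toy calculation with made-up counts: suppose no republican in the training set had voted "yes" on some bill (the numbers below are purely illustrative, not taken from this dataset):

yes_count   <- 0      # hypothetical: zero "yes" votes for this class
class_count <- 100    # hypothetical number of training rows in this class
k           <- 1      # the laplace value
n_levels    <- 2      # possible vote values: "y" / "n"
yes_count / class_count                          # 0 -- a single zero wipes out all other evidence
(yes_count + k) / (class_count + k * n_levels)   # ~0.0098 -- small but non-zero after smoothing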

house_vote_model2 <- naiveBayes(house_vote_train, train_labels, laplace = 1)
test_pred2 <- predict(house_vote_model2, house_vote_test)
CrossTable(test_pred2, test_labels,
           prop.chisq = FALSE, prop.t = FALSE, prop.r = FALSE,
           dnn = c('predicted', 'actual'))

 
   Cell Contents
|-------------------------|
|                       N |
|           N / Col Total |
|-------------------------|

 
Total Observations in Table:  109 

 
             | actual 
   predicted |   democrat | republican |  Row Total | 
-------------|------------|------------|------------|
    democrat |         74 |          1 |         75 | 
             |      0.961 |      0.031 |            | 
-------------|------------|------------|------------|
  republican |          3 |         31 |         34 | 
             |      0.039 |      0.969 |            | 
-------------|------------|------------|------------|
Column Total |         77 |         32 |        109 | 
             |      0.706 |      0.294 |            | 
-------------|------------|------------|------------|

 