Step 1: collecting data

The data has been collected and is ready to be analyzed.

launch <- read.csv("http://www.sci.csueastbay.edu/~esuess/classes/Statistics_6620/Presentations/ml10/challenger.csv")

Step 2: exploring and preparing data

# examine the launch data
str(launch)
'data.frame':   23 obs. of  4 variables:
 $ distress_ct         : int  0 1 0 0 0 0 0 0 1 1 ...
 $ temperature         : int  66 70 69 68 67 72 73 70 57 63 ...
 $ field_check_pressure: int  50 50 50 50 50 50 100 100 200 200 ...
 $ flight_num          : int  1 2 3 4 5 6 7 8 9 10 ...

First, recode the distress_ct variable into 0 and 1, with 1 representing at least one failure during a launch.

launch$distress_ct = ifelse(launch$distress_ct<1,0,1)
launch$distress_ct
 [1] 0 1 0 0 0 0 0 0 1 1 1 0 0 1 0 0 0 0 0 0 1 0 1
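
Since the data set is tiny, it is worth checking the class balance after recoding (a quick check, not in the original analysis); from the output above there are 16 zeros and 7 ones.

table(launch$distress_ct) # tabulate 0 = no distress, 1 = at least one distress event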

Set up training and test data sets

indx = sample(1:nrow(launch), as.integer(0.9*nrow(launch)))
indx # randomize rows, save 90% of the data into the index
 [1] 22 16 14 12 18  1 23  9  3  4 21 15 19 13 17  7 11  2  6 20
launch_train = launch[indx,]
launch_test = launch[-indx,]
launch_train_labels = launch[indx,1] # the first column, distress_ct, is the categorical dependent variable
launch_test_labels = launch[-indx,1] 
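
Note that sample() draws a different split on each run, so the indices above (and all downstream results) will vary. For a reproducible split, a seed can be set before sampling (the value 123 is arbitrary):

set.seed(123) # fix the random number generator so sample() returns the same rows every run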

Check if there are any missing values:

library(Amelia)
missmap(launch, main = "Missing values vs observed")

Number of missing values in each column

sapply(launch,function(x) sum(is.na(x)))
         distress_ct          temperature field_check_pressure           flight_num 
                   0                    0                    0                    0 

Number of unique values in each column

sapply(launch, function(x) length(unique(x)))
         distress_ct          temperature field_check_pressure           flight_num 
                   2                   16                    3                   23 

Step 3: training a model on the data

Fit the logistic regression model with all predictor variables:

model <- glm(distress_ct ~.,family=binomial(link='logit'),data=launch_train)
model

Call:  glm(formula = distress_ct ~ ., family = binomial(link = "logit"), 
    data = launch_train)

Coefficients:
         (Intercept)           temperature  field_check_pressure            flight_num  
           12.815439             -0.212388              0.004734              0.021998  

Degrees of Freedom: 19 Total (i.e. Null);  16 Residual
Null Deviance:      24.43 
Residual Deviance: 17.19    AIC: 25.19
summary(model)

Call:
glm(formula = distress_ct ~ ., family = binomial(link = "logit"), 
    data = launch_train)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-1.1195  -0.6415  -0.4842   0.3850   1.9638  

Coefficients:
                      Estimate Std. Error z value Pr(>|z|)  
(Intercept)          12.815439   8.023342   1.597   0.1102  
temperature          -0.212388   0.113289  -1.875   0.0608 .
field_check_pressure  0.004734   0.018554   0.255   0.7986  
flight_num            0.021998   0.188979   0.116   0.9073  
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 24.435  on 19  degrees of freedom
Residual deviance: 17.189  on 16  degrees of freedom
AIC: 25.189

Number of Fisher Scoring iterations: 5
anova(model, test="Chisq")
Analysis of Deviance Table

Model: binomial, link: logit

Response: distress_ct

Terms added sequentially (first to last)

                     Df Deviance Resid. Df Resid. Dev Pr(>Chi)   
NULL                                    19     24.435            
temperature           1   6.6572        18     17.777 0.009875 **
field_check_pressure  1   0.5742        17     17.203 0.448577   
flight_num            1   0.0138        16     17.189 0.906545   
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Only temperature is significant in both the glm and anova output.
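
Because logistic regression coefficients are on the log-odds scale, converting them to odds ratios can aid interpretation (a sketch, not part of the original output; confint() profiles the likelihood for its intervals):

exp(cbind(OR = coef(model), confint(model))) # odds ratios with 95% confidence intervals

For example, exp(-0.212388) ≈ 0.81, so each additional degree of temperature multiplies the odds of distress by about 0.81.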

Drop the insignificant predictors (alpha = 0.10):

model <- glm(distress_ct~temperature,family=binomial(link='logit'),data=launch_train)
model

Call:  glm(formula = distress_ct ~ temperature, family = binomial(link = "logit"), 
    data = launch_train)

Coefficients:
(Intercept)  temperature  
     13.535       -0.209  

Degrees of Freedom: 19 Total (i.e. Null);  18 Residual
Null Deviance:      24.43 
Residual Deviance: 17.78    AIC: 21.78
summary(model)

Call:
glm(formula = distress_ct ~ temperature, family = binomial(link = "logit"), 
    data = launch_train)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-1.0690  -0.7770  -0.4267   0.4543   2.1226  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)  
(Intercept)  13.5354     7.1173   1.902   0.0572 .
temperature  -0.2090     0.1035  -2.019   0.0434 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 24.435  on 19  degrees of freedom
Residual deviance: 17.777  on 18  degrees of freedom
AIC: 21.777

Number of Fisher Scoring iterations: 5
anova(model, test="Chisq")
Analysis of Deviance Table

Model: binomial, link: logit

Response: distress_ct

Terms added sequentially (first to last)

            Df Deviance Resid. Df Resid. Dev Pr(>Chi)   
NULL                           19     24.435            
temperature  1   6.6572        18     17.777 0.009875 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

From the anova test, adding temperature significantly reduces the deviance (p ≈ 0.0099), so the model fits better than the null model.
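
As an illustration (not in the original analysis), the reduced model can be used to predict the distress probability at a hypothetical cold launch temperature, say 31 degrees F:

predict(model, newdata = data.frame(temperature = 31), type = "response") # predicted P(distress) at 31 F

With the coefficients above, the linear predictor is 13.535 - 0.209*31 ≈ 7.06, so the predicted probability is essentially 1.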

Step 4: evaluating model performance

Check Accuracy

fitted.results <- predict(model,newdata=launch_test,type='response')
fitted.results <- ifelse(fitted.results > 0.5,1,0)
misClassificError <- mean(fitted.results != launch_test$distress_ct)
print(paste('Accuracy',1-misClassificError))
[1] "Accuracy 1"

The misclassification error is 0, which indicates our model performs very well, although the test set contains only 3 observations.
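
A confusion matrix makes the same point explicit (a minimal sketch using base R's table()):

table(Predicted = fitted.results, Actual = launch_test$distress_ct) # cross-tabulate predictions against the true labels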

Step 5: improving model performance

ROC Method:

Because this data set is so small, it is possible that the test data set does not contain both 0 and 1 values. If this happens, the code below will not run. And since the test data set is so small, the ROC curve is not very useful here, but the code is provided.

library(ROCR)
p <- predict(model, newdata=launch_test, type="response")
pr <- prediction(p,launch_test_labels)
prf <- performance(pr, measure = "tpr", x.measure = "fpr")
plot(prf)

auc <- performance(pr, measure = "auc")
auc <- auc@y.values[[1]]
auc
[1] 1
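
One way to make this chunk robust to a one-class test set (a sketch, assuming the objects created above):

if (length(unique(launch_test_labels)) == 2) {
  pr <- prediction(p, launch_test_labels) # ROCR requires both classes to be present
  prf <- performance(pr, measure = "tpr", x.measure = "fpr")
  plot(prf)
} else {
  message("Test set contains only one class; skipping the ROC curve.")
}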