Predicting Tire Rating at Tire Rack

Project Objective

To develop a logistic regression model to predict the likelihood of a customer purchasing a tire again based on its Wet and Noise performance ratings.

Question 1 & 2: Develop the Model & Assess Predictor Significance

Step 1: Install and load the libraries

#install.packages("readxl")
#install.packages("Hmisc")
#install.packages("pscl")
#if(!require(pROC)) install.packages("pROC")

library(readxl) #allows us to import excel files
## Warning: package 'readxl' was built under R version 4.4.2
library(Hmisc) #allows us to call the correlation function
## Warning: package 'Hmisc' was built under R version 4.4.2
## 
## Attaching package: 'Hmisc'
## The following objects are masked from 'package:base':
## 
##     format.pval, units
library(pscl) #allows us to call the pseudo R-square package to evaluate our model
## Warning: package 'pscl' was built under R version 4.4.2
## Classes and Methods for R originally developed in the
## Political Science Computational Laboratory
## Department of Political Science
## Stanford University (2002-2015),
## by and under the direction of Simon Jackman.
## hurdle and zeroinfl functions by Achim Zeileis.
library(pROC) #allows us to run the area under the curve (AUC) package to get the plot and AUC score
## Warning: package 'pROC' was built under R version 4.4.2
## Type 'citation("pROC")' for a citation.
## 
## Attaching package: 'pROC'
## The following objects are masked from 'package:stats':
## 
##     cov, smooth, var

Step 2: Import and clean the data

TireRatings_df <- read_excel(file.choose()) #interactively select the tire ratings Excel file
Tire_df <- subset(TireRatings_df, select = -c(Tire)) #drop irrelevant column
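
Before moving on, a couple of quick checks can confirm the import worked and that no values are missing; this is a minimal sketch and its output is not shown in the original document.

str(Tire_df)              #confirm the columns imported as numeric
colSums(is.na(Tire_df))   #confirm there are no missing values to handle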

Step 3: Preview the data

head(Tire_df)
## # A tibble: 6 × 4
##     Wet Noise Buy_Again Purchase
##   <dbl> <dbl>     <dbl>    <dbl>
## 1   8     7.2       6.1        0
## 2   8     7.2       6.6        1
## 3   7.6   7.5       6.9        1
## 4   6.6   5.4       6.6        0
## 5   5.8   6.3       4          0
## 6   6.3   5.7       4.5        0

Data Description: A description of the features is presented in the table below; the descriptions are inferred from the column names and the rating values shown in Step 4.
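
Feature      Description
Wet          Customer rating of the tire's wet-surface (traction) performance
Noise        Customer rating of the tire's road-noise performance
Buy_Again    Customer rating of how likely they would be to buy the same tire again (not used as a predictor in this model)
Purchase     Binary outcome: 1 if the customer would purchase the tire again, 0 otherwise (the response variable)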

Step 4: Summarize the data (i.e., descriptive statistics)

summary(Tire_df)
##       Wet            Noise         Buy_Again        Purchase     
##  Min.   :4.300   Min.   :3.600   Min.   :1.400   Min.   :0.0000  
##  1st Qu.:6.450   1st Qu.:6.000   1st Qu.:3.850   1st Qu.:0.0000  
##  Median :7.750   Median :7.100   Median :6.150   Median :0.0000  
##  Mean   :7.315   Mean   :6.903   Mean   :5.657   Mean   :0.4412  
##  3rd Qu.:8.225   3rd Qu.:7.925   3rd Qu.:7.400   3rd Qu.:1.0000  
##  Max.   :9.200   Max.   :8.900   Max.   :8.900   Max.   :1.0000
Interpretation: The Wet, Noise, and Buy_Again ratings have means of 7.3, 6.9, and 5.7, respectively. The mean of Purchase (0.4412) indicates that about 44% of the 68 tires in the sample (30 tires) are ones customers would purchase again.
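
To see the class balance behind that mean directly, the outcome can be tabulated; this is a small sketch whose output is not part of the original document.

table(Tire_df$Purchase)               #counts of tires customers would not (0) and would (1) buy again
prop.table(table(Tire_df$Purchase))   #the same counts expressed as proportions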

Step 5: Feature selection (i.e., correlation analysis)

corr <- rcorr(as.matrix(Tire_df))
corr
##            Wet Noise Buy_Again Purchase
## Wet       1.00  0.76      0.91     0.74
## Noise     0.76  1.00      0.83     0.72
## Buy_Again 0.91  0.83      1.00     0.83
## Purchase  0.74  0.72      0.83     1.00
## 
## n= 68 
## 
## 
## P
##           Wet Noise Buy_Again Purchase
## Wet            0     0         0      
## Noise      0         0         0      
## Buy_Again  0   0               0      
## Purchase   0   0     0
Interpretation: All pairwise correlations are positive and statistically significant (the reported p-values round to 0). Purchase is strongly correlated with Wet (0.74), Noise (0.72), and Buy_Again (0.83), which supports using the two performance ratings as predictors; Wet and Noise are also correlated with each other (0.76), so their individual effects should be read with that in mind.

Step 6: Build the logistic regression

model <- glm(Purchase ~ Wet + Noise, data = Tire_df, family = binomial)
summary(model)
## 
## Call:
## glm(formula = Purchase ~ Wet + Noise, family = binomial, data = Tire_df)
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)   
## (Intercept) -39.4982    12.4779  -3.165  0.00155 **
## Wet           3.3745     1.2641   2.670  0.00760 **
## Noise         1.8163     0.8312   2.185  0.02887 * 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 93.325  on 67  degrees of freedom
## Residual deviance: 27.530  on 65  degrees of freedom
## AIC: 33.53
## 
## Number of Fisher Scoring iterations: 8
Interpretation: Both predictors are statistically significant at the 0.05 level (Wet: p = 0.0076; Noise: p = 0.0289), and their positive coefficients indicate that higher Wet and Noise ratings increase the log-odds, and therefore the probability, that a customer would purchase the tire again.
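
Because the estimates are on the log-odds scale, exponentiating them gives odds ratios that are often easier to interpret; for example, exp(3.3745) is roughly 29, meaning each additional point of Wet rating multiplies the odds of a repeat purchase by about 29, holding Noise constant. The lines below are a minimal sketch whose output is not part of the original document.

exp(coef(model))      #coefficients expressed as odds ratios
exp(confint(model))   #95% confidence intervals on the odds-ratio scale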

Question 3: Overall Model Significance

Likelihood Ratio Test

# Fit a null model
null_model <- glm(Purchase ~ 1, data = Tire_df, family = binomial)

# Perform a likelihood ratio test
anova(null_model, model, test = "Chisq")
## Analysis of Deviance Table
## 
## Model 1: Purchase ~ 1
## Model 2: Purchase ~ Wet + Noise
##   Resid. Df Resid. Dev Df Deviance  Pr(>Chi)    
## 1        67     93.325                          
## 2        65     27.530  2   65.795 5.162e-15 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Interpretation: The likelihood ratio test is highly significant (chi-square = 65.795 on 2 degrees of freedom, p ≈ 5.2e-15), so the model with Wet and Noise fits significantly better than the intercept-only model; the overall model is statistically significant.
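
The same test statistic can be reproduced by hand from the deviances in the model summary: the drop in deviance is 93.325 - 27.530 = 65.795 on 2 degrees of freedom. The lines below are a minimal sketch of that calculation.

G2 <- 93.325 - 27.530                    #null deviance minus residual deviance
pchisq(G2, df = 2, lower.tail = FALSE)   #p-value of the likelihood ratio test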

Pseudo R-squared

pR2(model)
## fitting null model for pseudo-r2
##         llh     llhNull          G2    McFadden        r2ML        r2CU 
## -13.7649516 -46.6623284  65.7947536   0.7050093   0.6199946   0.8305269
Interpretation: McFadden's pseudo R-squared is about 0.705, which indicates a very strong fit for a logistic regression (McFadden values in the 0.2 to 0.4 range are already considered excellent).
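
McFadden's measure can also be computed directly from the log-likelihoods reported above; the line below is a minimal sketch of that calculation.

1 - as.numeric(logLik(model)) / as.numeric(logLik(null_model))   #McFadden pseudo R-squared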

Area Under the Curve (AUC)

roc_curve <- roc(Tire_df$Purchase, fitted(model))
## Setting levels: control = 0, case = 1
## Setting direction: controls < cases
plot(roc_curve)

auc(roc_curve)
## Area under the curve: 0.9741
Interpretation: The area under the ROC curve is 0.9741, indicating excellent discrimination: the model assigns a higher predicted probability to a randomly chosen purchaser than to a randomly chosen non-purchaser about 97% of the time.
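
To complement the AUC, a confusion matrix shows how the fitted probabilities translate into classifications; the 0.5 cutoff used below is an assumption for illustration and is not part of the original analysis.

predicted_class <- ifelse(fitted(model) >= 0.5, 1, 0)           #classify each tire at an assumed 0.5 cutoff
table(Actual = Tire_df$Purchase, Predicted = predicted_class)   #confusion matrix of actual vs. predicted purchases
mean(predicted_class == Tire_df$Purchase)                       #overall classification accuracy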

Question 4 & 5: Predicting with New Information

# Given the following new tire information
new_data <- data.frame(Wet = 7, Noise = 7)

# Predict the probability that a customer would purchase this tire again
prob <- predict(model, newdata = new_data, type = "response")
prob * 100
##        1 
## 4.058753
Interpretation: A tire rated 7 for both Wet and Noise performance has a predicted probability of only about 4.1% of being purchased again, so a repeat purchase is very unlikely at those ratings.

For the second prediction, the purchase probability for a tire rated 8 on both Wet and Noise is computed by hand below from the estimated coefficients and the logistic function.

# Coefficients from the model
intercept <- -39.4982
wet_coeff <- 3.3745
noise_coeff <- 1.8163

# Predictor values
wet_rating <- 8
noise_rating <- 8

# Compute the logit
logit <- intercept + (wet_coeff * wet_rating) + (noise_coeff * noise_rating)

# Compute the probability using the logistic function
probability <- exp(logit) / (1 + exp(logit))

# Print the result
probability
## [1] 0.8837262
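
The hand-computed probability of about 0.884 can be cross-checked with predict(), which uses the unrounded coefficients, so a small difference from the value above is expected; this call is a sketch and is not part of the original output.

predict(model, newdata = data.frame(Wet = 8, Noise = 8), type = "response")   #model-based probability for Wet = 8, Noise = 8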