Problem 10

This question should be answered using the Weekly data set, which is part of the ISLR package. This data is similar in nature to the Smarket data from this chapter’s lab, except that it contains 1,089 weekly returns for 21 years, from the beginning of 1990 to the end of 2010.

library(ISLR)
attach(Weekly)

(a) Produce some numerical and graphical summaries of the Weekly data. Do there appear to be any patterns?

We first use the names function to see which variables the data set contains. We then use the pairs function to look for relationships between variables; the scatterplot matrix is hard to read at first glance, so we also compute the correlation matrix. The correlations between the lag variables and today's return are all close to zero, so there is no appreciable linear relationship among the return variables. The only substantial correlation is between ‘Volume’ and ‘Year’ (about 0.84). Finally, plotting ‘Volume’ shows that the average number of shares traded increases over time.

names(Weekly)
## [1] "Year"      "Lag1"      "Lag2"      "Lag3"      "Lag4"      "Lag5"     
## [7] "Volume"    "Today"     "Direction"
pairs(Weekly)

cor(Weekly[,-9])
##               Year         Lag1        Lag2        Lag3         Lag4
## Year    1.00000000 -0.032289274 -0.03339001 -0.03000649 -0.031127923
## Lag1   -0.03228927  1.000000000 -0.07485305  0.05863568 -0.071273876
## Lag2   -0.03339001 -0.074853051  1.00000000 -0.07572091  0.058381535
## Lag3   -0.03000649  0.058635682 -0.07572091  1.00000000 -0.075395865
## Lag4   -0.03112792 -0.071273876  0.05838153 -0.07539587  1.000000000
## Lag5   -0.03051910 -0.008183096 -0.07249948  0.06065717 -0.075675027
## Volume  0.84194162 -0.064951313 -0.08551314 -0.06928771 -0.061074617
## Today  -0.03245989 -0.075031842  0.05916672 -0.07124364 -0.007825873
##                Lag5      Volume        Today
## Year   -0.030519101  0.84194162 -0.032459894
## Lag1   -0.008183096 -0.06495131 -0.075031842
## Lag2   -0.072499482 -0.08551314  0.059166717
## Lag3    0.060657175 -0.06928771 -0.071243639
## Lag4   -0.075675027 -0.06107462 -0.007825873
## Lag5    1.000000000 -0.05851741  0.011012698
## Volume -0.058517414  1.00000000 -0.033077783
## Today   0.011012698 -0.03307778  1.000000000
plot(Volume)
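
To make the upward drift in trading volume easier to see, a smoothed trend can be overlaid (a sketch; the spline overlay and axis labels are our additions, not part of the exercise):

# Sketch: re-plot Volume with a smoothing spline to highlight the trend
plot(Volume, xlab = "Week index", ylab = "Volume")
lines(smooth.spline(seq_along(Volume), Volume), col = "red", lwd = 2)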

(b) Use the full data set to perform a logistic regression with ‘Direction’ as the response and the five lag variables plus ‘Volume’ as predictors. Use the summary function to print the results. Do any of the predictors appear to be statistically significant? If so, which ones?

After fitting the logistic regression model and viewing the results with the summary function, only ‘Lag2’ appears to have a statistically significant relationship with ‘Direction’, with a p-value of 0.0296. The p-values for the other predictors are all much larger, so there is no evidence of a relationship between them and ‘Direction’.

glm.fit = glm(Direction~Lag1+Lag2+Lag3+Lag4+Lag5+Volume, data = Weekly, family = binomial)
summary(glm.fit)
## 
## Call:
## glm(formula = Direction ~ Lag1 + Lag2 + Lag3 + Lag4 + Lag5 + 
##     Volume, family = binomial, data = Weekly)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.6949  -1.2565   0.9913   1.0849   1.4579  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)   
## (Intercept)  0.26686    0.08593   3.106   0.0019 **
## Lag1        -0.04127    0.02641  -1.563   0.1181   
## Lag2         0.05844    0.02686   2.175   0.0296 * 
## Lag3        -0.01606    0.02666  -0.602   0.5469   
## Lag4        -0.02779    0.02646  -1.050   0.2937   
## Lag5        -0.01447    0.02638  -0.549   0.5833   
## Volume      -0.02274    0.03690  -0.616   0.5377   
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 1496.2  on 1088  degrees of freedom
## Residual deviance: 1486.4  on 1082  degrees of freedom
## AIC: 1500.4
## 
## Number of Fisher Scoring iterations: 4

(c) Compute the confusion matrix and overall fraction of correct predictions. Explain what the confusion matrix is telling you about the types of mistakes made by logistic regression.

Accuracy Rate of this model is 56.11% based on the confusion matrix. The matrix also shows what kinds of mistakes the model makes: it predicts ‘Up’ for most weeks, so it correctly identifies 557 of the 605 weeks the market went up (about 92%) but only 54 of the 484 weeks it went down (about 11%). Most of its errors are therefore false ‘Up’ predictions.

glm.probs = predict(glm.fit, type = 'response')
glm.pred = rep("Down",1089)
glm.pred[glm.probs>0.5]="Up"
table(glm.pred,Direction)
##         Direction
## glm.pred Down  Up
##     Down   54  48
##     Up    430 557
mean(glm.pred==Direction)
## [1] 0.5610652
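
The types of mistakes can be quantified directly from the counts in the table above:

# Fraction of actual Up weeks predicted correctly
557 / (557 + 48)
## [1] 0.9206612
# Fraction of actual Down weeks predicted correctly
54 / (54 + 430)
## [1] 0.1115702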

(d) Now fit the logistic regression model using a training data period from 1990 to 2008, with ‘Lag2’ as the only predictor. Compute the confusion matrix and the overall fraction of correct predictions for the held out data (that is, the data from 2009 and 2010).

Accuracy Rate of this model is 62.5% based on the confusion matrix.

train = (Year<2009)
Weekly.2009 = Weekly[!train,]
dim(Weekly)
## [1] 1089    9
dim(Weekly.2009)
## [1] 104   9
Direction.2009 = Direction[!train]
glm.fits = glm(Direction~Lag2, data = Weekly, subset = train, family = binomial)
glm.probs = predict(glm.fits, Weekly.2009, type = 'response')
glm.pred = rep("Down",104)
glm.pred[glm.probs>0.5]="Up"
table(glm.pred,Direction.2009)
##         Direction.2009
## glm.pred Down Up
##     Down    9  5
##     Up     34 56
mean(glm.pred==Direction.2009)
## [1] 0.625
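
Equivalently, the test error rate on the held-out weeks is one minus the accuracy:

# Test error rate for the 2009-2010 weeks
mean(glm.pred != Direction.2009)
## [1] 0.375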

(e) Repeat (d) using LDA.

The prior probabilities show that the market went down in 44.77% of the training weeks and up in 55.23% of them. The group means show that ‘Lag2’ tends to be slightly negative in weeks when the market declines and positive in weeks when it rises.

Accuracy Rate of this model is 62.5% based on the confusion matrix.

library(MASS)
lda.fit = lda(Direction~Lag2, data = Weekly, subset = train)
lda.fit
## Call:
## lda(Direction ~ Lag2, data = Weekly, subset = train)
## 
## Prior probabilities of groups:
##      Down        Up 
## 0.4477157 0.5522843 
## 
## Group means:
##             Lag2
## Down -0.03568254
## Up    0.26036581
## 
## Coefficients of linear discriminants:
##            LD1
## Lag2 0.4414162
plot(lda.fit)

lda.pred = predict(lda.fit, Weekly.2009)
names(lda.pred)
## [1] "class"     "posterior" "x"
lda.class = lda.pred$class
table(lda.class,Direction.2009)
##          Direction.2009
## lda.class Down Up
##      Down    9  5
##      Up     34 56
mean(lda.class==Direction.2009)
## [1] 0.625
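
As a sanity check, applying a 50% threshold to the posterior probabilities returned by predict reproduces the class counts above:

# Number of test weeks where the posterior probability of Down exceeds 0.5
sum(lda.pred$posterior[, "Down"] > 0.5)
## [1] 14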

(f) Repeat (d) using QDA.

As with LDA, the prior probabilities show the market went down in 44.77% of the training weeks and up in 55.23%, and the group means show ‘Lag2’ tends to be negative in down weeks and positive in up weeks.

Accuracy Rate of this model is 58.65% based on the confusion matrix. Note that QDA predicts ‘Up’ for every week in the test period, so its accuracy is simply the fraction of weeks the market went up.

qda.fit = qda(Direction~Lag2, data = Weekly, subset = train)
qda.fit
## Call:
## qda(Direction ~ Lag2, data = Weekly, subset = train)
## 
## Prior probabilities of groups:
##      Down        Up 
## 0.4477157 0.5522843 
## 
## Group means:
##             Lag2
## Down -0.03568254
## Up    0.26036581
qda.pred = predict(qda.fit, Weekly.2009)
names(qda.pred)
## [1] "class"     "posterior"
qda.class = qda.pred$class
table(qda.class,Direction.2009)
##          Direction.2009
## qda.class Down Up
##      Down    0  0
##      Up     43 61
mean(qda.class==Direction.2009)
## [1] 0.5865385

(g) Repeat (d) using KNN with K = 1.

Accuracy Rate of this model is 50% based on the confusion matrix, which is no better than random guessing.

library(class)
train.X = as.matrix(Lag2[train])
test.X = as.matrix(Lag2[!train])
train.Direction = Direction[train]
length(train.X)
## [1] 985
length(test.X)
## [1] 104
length(train.Direction)
## [1] 985
set.seed(1)
knn.pred = knn(train.X,test.X,train.Direction, k=1)
table(knn.pred,Direction.2009)
##         Direction.2009
## knn.pred Down Up
##     Down   21 30
##     Up     22 31
mean(knn.pred==Direction.2009)
## [1] 0.5

(h) Which of these methods appears to provide the best results on this data?

The methods with the highest accuracy rates are Logistic Regression and Linear Discriminant Analysis, both at 62.5%.

(i) Experiment with different combinations of predictors, including possible transformations and interactions, for each of the methods. Report the variables, method, and associated confusion matrix that appears to provide the best results on the held out data. Note that you should also experiment with values for K in the KNN classifier.

After experimenting with different combinations of predictors, including possible transformations and interactions for each method, the best results on the held-out data come from Logistic Regression and LDA with ‘Lag2’ and the interaction ‘Lag2:Lag5’ as predictors. Both models had a 63.46% accuracy rate.

Logistic Regression with ‘Lag2’ and the interaction ‘Lag2:Lag5’

glm.fit = glm(Direction~Lag2:Lag5+Lag2, data=Weekly,family=binomial, subset=train)
glm.probs = predict(glm.fit, Weekly.2009, type = "response")
glm.pred = rep("Down", length(glm.probs))
glm.pred[glm.probs > 0.5] = "Up"
Direction.2009 = Direction[!train]
table(glm.pred, Direction.2009)
##         Direction.2009
## glm.pred Down Up
##     Down    9  4
##     Up     34 57
mean(glm.pred==Direction.2009)
## [1] 0.6346154

KNN with K = 200

train.X = as.matrix(Lag2[train])
test.X = as.matrix(Lag2[!train])
train.Direction = Direction[train]

set.seed(1)
knn.pred = knn(train.X,test.X,train.Direction,k=200)
table(knn.pred,Direction.2009)
##         Direction.2009
## knn.pred Down Up
##     Down    2  0
##     Up     41 61
mean(knn.pred==Direction.2009)
## [1] 0.6057692
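
Rather than spot-checking a single large K, several values can be scanned in a loop (a sketch; no single K is guaranteed to beat the logistic/LDA models above):

# Try a range of K values on the same train/test split
for (k in c(1, 5, 10, 50, 100, 200)) {
  set.seed(1)
  pred = knn(train.X, test.X, train.Direction, k = k)
  cat("K =", k, "accuracy =", round(mean(pred == Direction.2009), 4), "\n")
}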

LDA with ‘Lag2’ and the interaction ‘Lag2:Lag5’

lda.fit = lda(Direction~Lag2:Lag5+Lag2, data=Weekly, subset=train)
lda.pred = predict(lda.fit, Weekly.2009)
table(lda.pred$class, Direction.2009)
##       Direction.2009
##        Down Up
##   Down    9  4
##   Up     34 57
mean(lda.pred$class==Direction.2009)
## [1] 0.6346154

QDA with ‘Lag2’ and the interaction ‘Lag2:Lag5’

qda.fit = qda(Direction~Lag2:Lag5+Lag2, data=Weekly, subset=train)
qda.pred = predict(qda.fit, Weekly.2009)
table(qda.pred$class, Direction.2009)
##       Direction.2009
##        Down Up
##   Down    5 13
##   Up     38 48
mean(qda.pred$class==Direction.2009)
## [1] 0.5096154

Problem 11

In this problem, you will develop a model to predict whether a given car gets high or low gas mileage based on the Auto data set.

library(ISLR)
attach(Auto)

(a) Create a binary variable, ‘mpg01’, that contains a 1 if ‘mpg’ contains a value above its median, and a 0 if ‘mpg’ contains a value below its median. You can compute the median using the median() function. Note you may find it helpful to use the data.frame() function to create a single data set containing both ‘mpg01’ and the other Auto variables.

mpg01 = rep(0, length(mpg))
mpg01[mpg > median(mpg)] = 1
Auto = data.frame(Auto, mpg01)
summary(Auto)
##       mpg          cylinders      displacement     horsepower        weight    
##  Min.   : 9.00   Min.   :3.000   Min.   : 68.0   Min.   : 46.0   Min.   :1613  
##  1st Qu.:17.00   1st Qu.:4.000   1st Qu.:105.0   1st Qu.: 75.0   1st Qu.:2225  
##  Median :22.75   Median :4.000   Median :151.0   Median : 93.5   Median :2804  
##  Mean   :23.45   Mean   :5.472   Mean   :194.4   Mean   :104.5   Mean   :2978  
##  3rd Qu.:29.00   3rd Qu.:8.000   3rd Qu.:275.8   3rd Qu.:126.0   3rd Qu.:3615  
##  Max.   :46.60   Max.   :8.000   Max.   :455.0   Max.   :230.0   Max.   :5140  
##                                                                                
##   acceleration        year           origin                      name    
##  Min.   : 8.00   Min.   :70.00   Min.   :1.000   amc matador       :  5  
##  1st Qu.:13.78   1st Qu.:73.00   1st Qu.:1.000   ford pinto        :  5  
##  Median :15.50   Median :76.00   Median :1.000   toyota corolla    :  5  
##  Mean   :15.54   Mean   :75.98   Mean   :1.577   amc gremlin       :  4  
##  3rd Qu.:17.02   3rd Qu.:79.00   3rd Qu.:2.000   amc hornet        :  4  
##  Max.   :24.80   Max.   :82.00   Max.   :3.000   chevrolet chevette:  4  
##                                                  (Other)           :365  
##      mpg01    
##  Min.   :0.0  
##  1st Qu.:0.0  
##  Median :0.5  
##  Mean   :0.5  
##  3rd Qu.:1.0  
##  Max.   :1.0  
## 
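
Equivalently, the indicator can be built in a single vectorized line (same result as the two-step assignment above):

# One-line alternative for creating the binary indicator
mpg01 = as.numeric(mpg > median(mpg))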

(b) Explore the data graphically in order to investigate the association between mpg01 and the other features. Which of the other features seem most likely to be useful in predicting mpg01? Scatterplots and boxplots may be useful tools to answer this question. Describe your findings.

We use the pairs function to look for variables related to ‘mpg01’. The scatterplot matrix is hard to read at first glance, so we also compute the correlation matrix. ‘mpg01’ has moderate positive correlations with ‘acceleration’ (0.35), ‘year’ (0.43), and ‘origin’ (0.51), and strong negative correlations with ‘horsepower’ (-0.67), ‘displacement’ (-0.75), ‘weight’ (-0.76), and ‘cylinders’ (-0.76). The strongly negatively correlated variables look most useful for predicting ‘mpg01’; boxplots of two of them are shown after the correlation matrix.

pairs(Auto)

cor(Auto[,-9])
##                     mpg  cylinders displacement horsepower     weight
## mpg           1.0000000 -0.7776175   -0.8051269 -0.7784268 -0.8322442
## cylinders    -0.7776175  1.0000000    0.9508233  0.8429834  0.8975273
## displacement -0.8051269  0.9508233    1.0000000  0.8972570  0.9329944
## horsepower   -0.7784268  0.8429834    0.8972570  1.0000000  0.8645377
## weight       -0.8322442  0.8975273    0.9329944  0.8645377  1.0000000
## acceleration  0.4233285 -0.5046834   -0.5438005 -0.6891955 -0.4168392
## year          0.5805410 -0.3456474   -0.3698552 -0.4163615 -0.3091199
## origin        0.5652088 -0.5689316   -0.6145351 -0.4551715 -0.5850054
## mpg01         0.8369392 -0.7591939   -0.7534766 -0.6670526 -0.7577566
##              acceleration       year     origin      mpg01
## mpg             0.4233285  0.5805410  0.5652088  0.8369392
## cylinders      -0.5046834 -0.3456474 -0.5689316 -0.7591939
## displacement   -0.5438005 -0.3698552 -0.6145351 -0.7534766
## horsepower     -0.6891955 -0.4163615 -0.4551715 -0.6670526
## weight         -0.4168392 -0.3091199 -0.5850054 -0.7577566
## acceleration    1.0000000  0.2903161  0.2127458  0.3468215
## year            0.2903161  1.0000000  0.1815277  0.4299042
## origin          0.2127458  0.1815277  1.0000000  0.5136984
## mpg01           0.3468215  0.4299042  0.5136984  1.0000000
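
As the question suggests, boxplots show the separation directly; here is a sketch for two of the strongly correlated predictors (any of the others could be substituted):

# Boxplots of two strong predictors split by mpg01
par(mfrow = c(1, 2))
boxplot(weight ~ mpg01, data = Auto, main = "weight vs. mpg01")
boxplot(horsepower ~ mpg01, data = Auto, main = "horsepower vs. mpg01")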

(c) Split the data into a training set and a test set.

train = (year %% 2 == 0)
Auto.train = Auto[train,]
Auto.test = Auto[!train,] # !train (not -train) selects the complement of a logical index

(d) Perform LDA on the training data in order to predict ‘mpg01’ using the variables that seemed most associated with ‘mpg01’ in (b). What is the test error of the model obtained?

This model has an Accuracy Rate of 91.56%, i.e., a test error of about 8.44%.

lda.fit = lda(mpg01~cylinders+displacement+horsepower+weight+acceleration+year+origin, data=Auto.train)
lda.pred = predict(lda.fit, Auto.test)
table(lda.pred$class, Auto.test$mpg01)
##    
##       0   1
##   0 169   7
##   1  26 189
mean(lda.pred$class==Auto.test$mpg01)
## [1] 0.915601
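
The test error the question asks for is one minus this accuracy:

# LDA test error rate
mean(lda.pred$class != Auto.test$mpg01)
## [1] 0.08439898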

(e) Perform QDA on the training data in order to predict ‘mpg01’ using the variables that seemed most associated with ‘mpg01’ in (b). What is the test error of the model obtained?

This model has an Accuracy Rate of 90.28%, i.e., a test error of about 9.72%.

qda.fit = qda(mpg01~cylinders+displacement+horsepower+weight+acceleration+year+origin, data=Auto.train)
qda.pred = predict(qda.fit, Auto.test)
table(qda.pred$class, Auto.test$mpg01)
##    
##       0   1
##   0 176  19
##   1  19 177
mean(qda.pred$class==Auto.test$mpg01)
## [1] 0.9028133

(f) Perform logistic regression on the training data in order to predict ‘mpg01’ using the variables that seemed most associated with ‘mpg01’ in (b). What is the test error of the model obtained?

This model has an Accuracy Rate of 91.56%, i.e., a test error of about 8.44%.

glm.fits = glm(mpg01~cylinders+displacement+horsepower+weight+acceleration+year+origin, data = Auto.train, family = binomial)
glm.probs = predict(glm.fits, Auto.test, type = 'response')
glm.pred = rep("Down",391)
glm.pred[glm.probs>0.5]="Up"
table(glm.pred,Auto.test$mpg01)
##         
## glm.pred   0   1
##     Down 175  13
##     Up    20 183
(175+183)/391
## [1] 0.915601

(g) Perform KNN on the training data, with several values of K, in order to predict ‘mpg01’. Use only the variables that seemed most associated with ‘mpg01’ in (b). What test errors do you obtain? Which value of K seems to perform the best on this data set?

k=1 has the best Accuracy Rate at 92.84%. Larger values of k performed worse on this split (87.72% at k=10 and 88.24% at k=100), so k=1 appears to perform best on this data set.

k = 1 This model has an Accuracy Rate of 92.84%

train.X = cbind(cylinders,displacement,horsepower,weight,acceleration,year,origin)[train,]
test.X = cbind(cylinders,displacement,horsepower,weight,acceleration,year,origin)[!train,]

set.seed(1)
knn.pred = knn(train.X,test.X,Auto.train$mpg01,k=1)
table(knn.pred,Auto.test$mpg01)
##         
## knn.pred   0   1
##        0 178  11
##        1  17 185
mean(knn.pred==Auto.test$mpg01)
## [1] 0.9283887

k = 10 This model has an Accuracy Rate of 87.72%

train.X = cbind(cylinders,displacement,horsepower,weight,acceleration,year,origin)[train,]
test.X = cbind(cylinders,displacement,horsepower,weight,acceleration,year,origin)[!train,]

set.seed(1)
knn.pred = knn(train.X,test.X,Auto.train$mpg01,k=10)
table(knn.pred,Auto.test$mpg01)
##         
## knn.pred   0   1
##        0 157  10
##        1  38 186
mean(knn.pred==Auto.test$mpg01)
## [1] 0.8772379

k = 100 This model has an Accuracy Rate of 88.24%

train.X = cbind(cylinders,displacement,horsepower,weight,acceleration,year,origin)[train,]
test.X = cbind(cylinders,displacement,horsepower,weight,acceleration,year,origin)[!train,]

set.seed(1)
knn.pred = knn(train.X,test.X,Auto.train$mpg01,k=100)
table(knn.pred,Auto.test$mpg01)
##         
## knn.pred   0   1
##        0 160  11
##        1  35 185
mean(knn.pred==Auto.test$mpg01)
## [1] 0.8823529
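
A fuller scan over K (a sketch, reusing train.X and test.X from above) avoids spot-checking only three values:

# Scan a range of K values for the mpg01 classifier
for (k in c(1, 3, 5, 10, 25, 50, 100)) {
  set.seed(1)
  pred = knn(train.X, test.X, Auto.train$mpg01, k = k)
  cat("K =", k, "accuracy =", round(mean(pred == Auto.test$mpg01), 4), "\n")
}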

Problem 13

Using the Boston data set, fit classification models in order to predict whether a given suburb has a crime rate above or below the median. Explore logistic regression, LDA, and KNN models using various subsets of the predictors. Describe your findings.

We use the pairs function to look for variables related to ‘crime01’. The scatterplot matrix is hard to read at first glance, so we also compute the correlation matrix. ‘crime01’ has moderately strong positive correlations with ‘indus’ (0.60), ‘tax’ (0.61), ‘age’ (0.61), ‘rad’ (0.62), and ‘nox’ (0.72), and a moderately strong negative correlation with ‘dis’ (-0.62).

Logistic Regression model had a 90.91% Accuracy Rate. LDA model had an 89.33% Accuracy Rate. The KNN models had Accuracy Rates of 15.42% (k=1), 78.66% (k=10), and 69.17% (k=100).

Based on all the models, it appears that Logistic Regression with the variables ‘dis’, ‘rad’, ‘tax’, ‘age’, ‘indus’, and ‘nox’ is the best model.

attach(Boston)
crime01 = rep(0, length(crim))
crime01[crim > median(crim)] = 1
Boston= data.frame(Boston,crime01)
summary(Boston)
##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08205   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv          crime01   
##  Min.   : 1.73   Min.   : 5.00   Min.   :0.0  
##  1st Qu.: 6.95   1st Qu.:17.02   1st Qu.:0.0  
##  Median :11.36   Median :21.20   Median :0.5  
##  Mean   :12.65   Mean   :22.53   Mean   :0.5  
##  3rd Qu.:16.95   3rd Qu.:25.00   3rd Qu.:1.0  
##  Max.   :37.97   Max.   :50.00   Max.   :1.0
pairs(Boston)

cor(Boston)
##                crim          zn       indus         chas         nox
## crim     1.00000000 -0.20046922  0.40658341 -0.055891582  0.42097171
## zn      -0.20046922  1.00000000 -0.53382819 -0.042696719 -0.51660371
## indus    0.40658341 -0.53382819  1.00000000  0.062938027  0.76365145
## chas    -0.05589158 -0.04269672  0.06293803  1.000000000  0.09120281
## nox      0.42097171 -0.51660371  0.76365145  0.091202807  1.00000000
## rm      -0.21924670  0.31199059 -0.39167585  0.091251225 -0.30218819
## age      0.35273425 -0.56953734  0.64477851  0.086517774  0.73147010
## dis     -0.37967009  0.66440822 -0.70802699 -0.099175780 -0.76923011
## rad      0.62550515 -0.31194783  0.59512927 -0.007368241  0.61144056
## tax      0.58276431 -0.31456332  0.72076018 -0.035586518  0.66802320
## ptratio  0.28994558 -0.39167855  0.38324756 -0.121515174  0.18893268
## black   -0.38506394  0.17552032 -0.35697654  0.048788485 -0.38005064
## lstat    0.45562148 -0.41299457  0.60379972 -0.053929298  0.59087892
## medv    -0.38830461  0.36044534 -0.48372516  0.175260177 -0.42732077
## crime01  0.40939545 -0.43615103  0.60326017  0.070096774  0.72323480
##                  rm         age         dis          rad         tax    ptratio
## crim    -0.21924670  0.35273425 -0.37967009  0.625505145  0.58276431  0.2899456
## zn       0.31199059 -0.56953734  0.66440822 -0.311947826 -0.31456332 -0.3916785
## indus   -0.39167585  0.64477851 -0.70802699  0.595129275  0.72076018  0.3832476
## chas     0.09125123  0.08651777 -0.09917578 -0.007368241 -0.03558652 -0.1215152
## nox     -0.30218819  0.73147010 -0.76923011  0.611440563  0.66802320  0.1889327
## rm       1.00000000 -0.24026493  0.20524621 -0.209846668 -0.29204783 -0.3555015
## age     -0.24026493  1.00000000 -0.74788054  0.456022452  0.50645559  0.2615150
## dis      0.20524621 -0.74788054  1.00000000 -0.494587930 -0.53443158 -0.2324705
## rad     -0.20984667  0.45602245 -0.49458793  1.000000000  0.91022819  0.4647412
## tax     -0.29204783  0.50645559 -0.53443158  0.910228189  1.00000000  0.4608530
## ptratio -0.35550149  0.26151501 -0.23247054  0.464741179  0.46085304  1.0000000
## black    0.12806864 -0.27353398  0.29151167 -0.444412816 -0.44180801 -0.1773833
## lstat   -0.61380827  0.60233853 -0.49699583  0.488676335  0.54399341  0.3740443
## medv     0.69535995 -0.37695457  0.24992873 -0.381626231 -0.46853593 -0.5077867
## crime01 -0.15637178  0.61393992 -0.61634164  0.619786249  0.60874128  0.2535684
##               black      lstat       medv     crime01
## crim    -0.38506394  0.4556215 -0.3883046  0.40939545
## zn       0.17552032 -0.4129946  0.3604453 -0.43615103
## indus   -0.35697654  0.6037997 -0.4837252  0.60326017
## chas     0.04878848 -0.0539293  0.1752602  0.07009677
## nox     -0.38005064  0.5908789 -0.4273208  0.72323480
## rm       0.12806864 -0.6138083  0.6953599 -0.15637178
## age     -0.27353398  0.6023385 -0.3769546  0.61393992
## dis      0.29151167 -0.4969958  0.2499287 -0.61634164
## rad     -0.44441282  0.4886763 -0.3816262  0.61978625
## tax     -0.44180801  0.5439934 -0.4685359  0.60874128
## ptratio -0.17738330  0.3740443 -0.5077867  0.25356836
## black    1.00000000 -0.3660869  0.3334608 -0.35121093
## lstat   -0.36608690  1.0000000 -0.7376627  0.45326273
## medv     0.33346082 -0.7376627  1.0000000 -0.26301673
## crime01 -0.35121093  0.4532627 -0.2630167  1.00000000
train = 1:(dim(Boston)[1]/2)
test = (dim(Boston)[1]/2 + 1):dim(Boston)[1]
Boston.train = Boston[train, ]
Boston.test = Boston[test, ]
crime01.train = crime01[train]
crime01.test = crime01[test]

Logistic regression

glm.fits = glm(crime01~dis+rad+tax+age+indus+nox, data = Boston.train, family = binomial)
glm.probs = predict(glm.fits, Boston.test, type = 'response')
glm.pred = rep("Down",253)
glm.pred[glm.probs>0.5]="Up"
table(glm.pred,crime01.test)
##         crime01.test
## glm.pred   0   1
##     Down  75   8
##     Up    15 155
(75+155)/253
## [1] 0.9090909

LDA

lda.fit = lda(crime01~dis+rad+tax+age+indus+nox, data=Boston.train)
lda.pred = predict(lda.fit, Boston.test)
table(lda.pred$class, crime01.test)
##    crime01.test
##       0   1
##   0  81  18
##   1   9 145
mean(lda.pred$class==crime01.test)
## [1] 0.8932806

k = 1

train.X = cbind(dis,rad,tax,age,indus,nox)[train,]
test.X = cbind(dis,rad,tax,age,indus,nox)[-train,]

set.seed(1)
knn.pred = knn(train.X,test.X,crime01.train,k=1) # knn's third argument is the training labels
table(knn.pred,crime01.test)
##         crime01.test
## knn.pred   0   1
##        0  31 155
##        1  59   8
(31+8)/253
## [1] 0.1541502

k = 10

train.X = cbind(dis,rad,tax,age,indus,nox)[train,]
test.X = cbind(dis,rad,tax,age,indus,nox)[-train,]

set.seed(1)
knn.pred = knn(train.X,test.X,crime01.train,k=10)
table(knn.pred,crime01.test)
##         crime01.test
## knn.pred   0   1
##        0  44   8
##        1  46 155
mean(knn.pred==crime01.test)
## [1] 0.7865613

k = 100

train.X = cbind(dis,rad,tax,age,indus,nox)[train,]
test.X = cbind(dis,rad,tax,age,indus,nox)[-train,]

set.seed(1)
knn.pred = knn(train.X,test.X,crime01.train,k=100)
table(knn.pred,crime01.test)
##         crime01.test
## knn.pred   0   1
##        0  20   8
##        1  70 155
mean(knn.pred==crime01.test)
## [1] 0.6916996
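
Finally, a scan over several values of K (a sketch, reusing train.X, test.X, and crime01.train from above) shows how accuracy varies with K:

# Try a range of K values for the crime01 classifier
for (k in c(1, 5, 10, 25, 50, 100)) {
  set.seed(1)
  pred = knn(train.X, test.X, crime01.train, k = k)
  cat("K =", k, "accuracy =", round(mean(pred == crime01.test), 4), "\n")
}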