Final Data 605 Spring 2018

Your final is due by the end of day on 5/20/2018. You should post your solutions to your GitHub account or RPubs. You are also expected to make a short presentation via YouTube and post that recording to the board. This project will show off your ability to understand the elements of the class.

You are to register for Kaggle.com (free) and compete in the House Prices: Advanced Regression Techniques competition: https://www.kaggle.com/c/house-prices-advanced-regression-techniques. I want you to do the following.

* Pick one of the quantitative independent variables from the training data set (train.csv), and define that variable as X. Make sure this variable is skewed to the right!
* Pick the dependent variable and define it as Y.

Data Exploration

The data is loaded and a summary is obtained for all variables (categorical and continuous). Only continuous variables will be considered in this analysis.

train = read.csv("train.csv")
head(summary(train))
##        Id           MSSubClass       MSZoning     LotFrontage    
##  Min.   :   1.0   Min.   : 20.0   C (all):  10   Min.   : 21.00  
##  1st Qu.: 365.8   1st Qu.: 20.0   FV     :  65   1st Qu.: 59.00  
##  Median : 730.5   Median : 50.0   RH     :  16   Median : 69.00  
##  Mean   : 730.5   Mean   : 56.9   RL     :1151   Mean   : 70.05  
##  3rd Qu.:1095.2   3rd Qu.: 70.0   RM     : 218   3rd Qu.: 80.00  
##  Max.   :1460.0   Max.   :190.0                  Max.   :313.00  
##     LotArea        Street      Alley      LotShape  LandContour
##  Min.   :  1300   Grvl:   6   Grvl:  50   IR1:484   Bnk:  63   
##  1st Qu.:  7554   Pave:1454   Pave:  41   IR2: 41   HLS:  50   
##  Median :  9478               NA's:1369   IR3: 10   Low:  36   
##  Mean   : 10517                           Reg:925   Lvl:1311   
##  3rd Qu.: 11602                                                
##  Max.   :215245                                                
##   Utilities      LotConfig    LandSlope   Neighborhood   Condition1  
##  AllPub:1459   Corner : 263   Gtl:1382   NAmes  :225   Norm   :1260  
##  NoSeWa:   1   CulDSac:  94   Mod:  65   CollgCr:150   Feedr  :  81  
##                FR2    :  47   Sev:  13   OldTown:113   Artery :  48  
##                FR3    :   4              Edwards:100   RRAn   :  26  
##                Inside :1052              Somerst: 86   PosN   :  19  
##                                          Gilbert: 79   RRAe   :  11  
##    Condition2     BldgType      HouseStyle   OverallQual    
##  Norm   :1445   1Fam  :1220   1Story :726   Min.   : 1.000  
##  Feedr  :   6   2fmCon:  31   2Story :445   1st Qu.: 5.000  
##  Artery :   2   Duplex:  52   1.5Fin :154   Median : 6.000  
##  PosN   :   2   Twnhs :  43   SLvl   : 65   Mean   : 6.099  
##  RRNn   :   2   TwnhsE: 114   SFoyer : 37   3rd Qu.: 7.000  
##  PosA   :   1                 1.5Unf : 14   Max.   :10.000  
##   OverallCond      YearBuilt     YearRemodAdd    RoofStyle   
##  Min.   :1.000   Min.   :1872   Min.   :1950   Flat   :  13  
##  1st Qu.:5.000   1st Qu.:1954   1st Qu.:1967   Gable  :1141  
##  Median :5.000   Median :1973   Median :1994   Gambrel:  11  
##  Mean   :5.575   Mean   :1971   Mean   :1985   Hip    : 286  
##  3rd Qu.:6.000   3rd Qu.:2000   3rd Qu.:2004   Mansard:   7  
##  Max.   :9.000   Max.   :2010   Max.   :2010   Shed   :   2  
##     RoofMatl     Exterior1st   Exterior2nd    MasVnrType    MasVnrArea    
##  CompShg:1434   VinylSd:515   VinylSd:504   BrkCmn : 15   Min.   :   0.0  
##  Tar&Grv:  11   HdBoard:222   MetalSd:214   BrkFace:445   1st Qu.:   0.0  
##  WdShngl:   6   MetalSd:220   HdBoard:207   None   :864   Median :   0.0  
##  WdShake:   5   Wd Sdng:206   Wd Sdng:197   Stone  :128   Mean   : 103.7  
##  ClyTile:   1   Plywood:108   Plywood:142   NA's   :  8   3rd Qu.: 166.0  
##  Membran:   1   CemntBd: 61   CmentBd: 60                 Max.   :1600.0  
##  ExterQual ExterCond  Foundation  BsmtQual   BsmtCond    BsmtExposure
##  Ex: 52    Ex:   3   BrkTil:146   Ex  :121   Fa  :  45   Av  :221    
##  Fa: 14    Fa:  28   CBlock:634   Fa  : 35   Gd  :  65   Gd  :134    
##  Gd:488    Gd: 146   PConc :647   Gd  :618   Po  :   2   Mn  :114    
##  TA:906    Po:   1   Slab  : 24   TA  :649   TA  :1311   No  :953    
##            TA:1282   Stone :  6   NA's: 37   NA's:  37   NA's: 38    
##                      Wood  :  3                                      
##  BsmtFinType1   BsmtFinSF1     BsmtFinType2   BsmtFinSF2     
##  ALQ :220     Min.   :   0.0   ALQ :  19    Min.   :   0.00  
##  BLQ :148     1st Qu.:   0.0   BLQ :  33    1st Qu.:   0.00  
##  GLQ :418     Median : 383.5   GLQ :  14    Median :   0.00  
##  LwQ : 74     Mean   : 443.6   LwQ :  46    Mean   :  46.55  
##  Rec :133     3rd Qu.: 712.2   Rec :  54    3rd Qu.:   0.00  
##  Unf :430     Max.   :5644.0   Unf :1256    Max.   :1474.00  
##    BsmtUnfSF       TotalBsmtSF      Heating     HeatingQC CentralAir
##  Min.   :   0.0   Min.   :   0.0   Floor:   1   Ex:741    N:  95    
##  1st Qu.: 223.0   1st Qu.: 795.8   GasA :1428   Fa: 49    Y:1365    
##  Median : 477.5   Median : 991.5   GasW :  18   Gd:241              
##  Mean   : 567.2   Mean   :1057.4   Grav :   7   Po:  1              
##  3rd Qu.: 808.0   3rd Qu.:1298.2   OthW :   2   TA:428              
##  Max.   :2336.0   Max.   :6110.0   Wall :   4                       
##  Electrical     X1stFlrSF      X2ndFlrSF     LowQualFinSF    
##  FuseA:  94   Min.   : 334   Min.   :   0   Min.   :  0.000  
##  FuseF:  27   1st Qu.: 882   1st Qu.:   0   1st Qu.:  0.000  
##  FuseP:   3   Median :1087   Median :   0   Median :  0.000  
##  Mix  :   1   Mean   :1163   Mean   : 347   Mean   :  5.845  
##  SBrkr:1334   3rd Qu.:1391   3rd Qu.: 728   3rd Qu.:  0.000  
##  NA's :   1   Max.   :4692   Max.   :2065   Max.   :572.000  
##    GrLivArea     BsmtFullBath     BsmtHalfBath        FullBath    
##  Min.   : 334   Min.   :0.0000   Min.   :0.00000   Min.   :0.000  
##  1st Qu.:1130   1st Qu.:0.0000   1st Qu.:0.00000   1st Qu.:1.000  
##  Median :1464   Median :0.0000   Median :0.00000   Median :2.000  
##  Mean   :1515   Mean   :0.4253   Mean   :0.05753   Mean   :1.565  
##  3rd Qu.:1777   3rd Qu.:1.0000   3rd Qu.:0.00000   3rd Qu.:2.000  
##  Max.   :5642   Max.   :3.0000   Max.   :2.00000   Max.   :3.000  
##     HalfBath       BedroomAbvGr    KitchenAbvGr   KitchenQual
##  Min.   :0.0000   Min.   :0.000   Min.   :0.000   Ex:100     
##  1st Qu.:0.0000   1st Qu.:2.000   1st Qu.:1.000   Fa: 39     
##  Median :0.0000   Median :3.000   Median :1.000   Gd:586     
##  Mean   :0.3829   Mean   :2.866   Mean   :1.047   TA:735     
##  3rd Qu.:1.0000   3rd Qu.:3.000   3rd Qu.:1.000              
##  Max.   :2.0000   Max.   :8.000   Max.   :3.000              
##   TotRmsAbvGrd    Functional    Fireplaces    FireplaceQu   GarageType 
##  Min.   : 2.000   Maj1:  14   Min.   :0.000   Ex  : 24    2Types :  6  
##  1st Qu.: 5.000   Maj2:   5   1st Qu.:0.000   Fa  : 33    Attchd :870  
##  Median : 6.000   Min1:  31   Median :1.000   Gd  :380    Basment: 19  
##  Mean   : 6.518   Min2:  34   Mean   :0.613   Po  : 20    BuiltIn: 88  
##  3rd Qu.: 7.000   Mod :  15   3rd Qu.:1.000   TA  :313    CarPort:  9  
##  Max.   :14.000   Sev :   1   Max.   :3.000   NA's:690    Detchd :387  
##   GarageYrBlt   GarageFinish   GarageCars      GarageArea     GarageQual 
##  Min.   :1900   Fin :352     Min.   :0.000   Min.   :   0.0   Ex  :   3  
##  1st Qu.:1961   RFn :422     1st Qu.:1.000   1st Qu.: 334.5   Fa  :  48  
##  Median :1980   Unf :605     Median :2.000   Median : 480.0   Gd  :  14  
##  Mean   :1979   NA's: 81     Mean   :1.767   Mean   : 473.0   Po  :   3  
##  3rd Qu.:2002                3rd Qu.:2.000   3rd Qu.: 576.0   TA  :1311  
##  Max.   :2010                Max.   :4.000   Max.   :1418.0   NA's:  81  
##  GarageCond  PavedDrive   WoodDeckSF      OpenPorchSF     EnclosedPorch   
##  Ex  :   2   N:  90     Min.   :  0.00   Min.   :  0.00   Min.   :  0.00  
##  Fa  :  35   P:  30     1st Qu.:  0.00   1st Qu.:  0.00   1st Qu.:  0.00  
##  Gd  :   9   Y:1340     Median :  0.00   Median : 25.00   Median :  0.00  
##  Po  :   7              Mean   : 94.24   Mean   : 46.66   Mean   : 21.95  
##  TA  :1326              3rd Qu.:168.00   3rd Qu.: 68.00   3rd Qu.:  0.00  
##  NA's:  81              Max.   :857.00   Max.   :547.00   Max.   :552.00  
##    X3SsnPorch      ScreenPorch        PoolArea        PoolQC    
##  Min.   :  0.00   Min.   :  0.00   Min.   :  0.000   Ex  :   2  
##  1st Qu.:  0.00   1st Qu.:  0.00   1st Qu.:  0.000   Fa  :   2  
##  Median :  0.00   Median :  0.00   Median :  0.000   Gd  :   3  
##  Mean   :  3.41   Mean   : 15.06   Mean   :  2.759   NA's:1453  
##  3rd Qu.:  0.00   3rd Qu.:  0.00   3rd Qu.:  0.000              
##  Max.   :508.00   Max.   :480.00   Max.   :738.000              
##    Fence      MiscFeature    MiscVal             MoSold      
##  GdPrv:  59   Gar2:   2   Min.   :    0.00   Min.   : 1.000  
##  GdWo :  54   Othr:   2   1st Qu.:    0.00   1st Qu.: 5.000  
##  MnPrv: 157   Shed:  49   Median :    0.00   Median : 6.000  
##  MnWw :  11   TenC:   1   Mean   :   43.49   Mean   : 6.322  
##  NA's :1179   NA's:1406   3rd Qu.:    0.00   3rd Qu.: 8.000  
##                           Max.   :15500.00   Max.   :12.000  
##      YrSold        SaleType    SaleCondition    SalePrice     
##  Min.   :2006   WD     :1267   Abnorml: 101   Min.   : 34900  
##  1st Qu.:2007   New    : 122   AdjLand:   4   1st Qu.:129975  
##  Median :2008   COD    :  43   Alloca :  12   Median :163000  
##  Mean   :2008   ConLD  :   9   Family :  20   Mean   :180921  
##  3rd Qu.:2009   ConLI  :   5   Normal :1198   3rd Qu.:214000  
##  Max.   :2010   ConLw  :   5   Partial: 125   Max.   :755000
#fill NAs with 0.
train[is.na(train)] <- 0
## Warning in `[<-.factor`(`*tmp*`, thisvar, value = 0): invalid factor level,
## NA generated
## (warning repeated once for each factor column)
summary(train$BsmtFinSF1)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##     0.0     0.0   383.5   443.6   712.2  5644.0
summary(train$BsmtFinSF2)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##    0.00    0.00    0.00   46.55    0.00 1474.00
train$BsmtFinSF12 = train$BsmtFinSF1+train$BsmtFinSF2


#import test data for submission
test = read.csv("test.csv")

#fill NAs with 0.
test[is.na(test)] <- 0
## Warning in `[<-.factor`(`*tmp*`, thisvar, value = 0): invalid factor level,
## NA generated
## (warning repeated once for each factor column)

The variable used is the sum of the finished basement area. All finished types are included because, even if the area currently serves mechanical/utility purposes, it could potentially be reclaimed as usable space in the future. The unfinished area is ignored because it is not livable at the point of sale; it may add value in the future, but for this analysis I assume it does not. In addition, I have generated new variables, based on looking at the data, that I felt might be helpful when building the models later in the analysis. These include AllSF, the sum of all square footage described for the house, and SFLotRatio, which is derived from AllSF.
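The exact construction of those derived variables is not shown above; a minimal sketch of one possible definition follows (the formulas below are assumptions, taking AllSF as above-grade living area plus total basement area and SFLotRatio as that total divided by the lot area):

#sketch only - assumed definitions for the derived variables mentioned above
AllSF      <- train$GrLivArea + train$TotalBsmtSF   #above-grade living area plus total basement area (assumption)
SFLotRatio <- AllSF / train$LotArea                 #building square footage relative to lot size (assumption)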

#function to get mode of a variable.
getmode <- function(v) {
  uniqv <- unique(v)
  uniqv[which.max(tabulate(match(v, uniqv)))]
}

#test for skewness
summary(train$BsmtFinSF12)[4]>summary(train$BsmtFinSF12)[3]
## Mean 
## TRUE
summary(train$BsmtFinSF12)[3]>getmode(train$BsmtFinSF12)
## Median 
##   TRUE
library(moments)  #provides skewness(); e1071::skewness would give a very similar value
skewness(train$BsmtFinSF12)
## [1] 1.403074
#visual of skewness
par(mfrow=c(1,2))
hist(train$BsmtFinSF12, main ='Histogram of Usable \n Basement Area')

qqnorm(train$BsmtFinSF12, main ='QQ Plot of Usable \n Basement Area')
qqline(train$BsmtFinSF12)

The variable was also selected because of its skewness: mean > median > mode, which in this case is 490.2 > 465 > 0. The data has a lower bound of 0, since a livable area can only be greater than or equal to 0. The histogram and the QQ plot visually confirm this.

The dependent variable for this analysis will be the sale price of the home, and the model will have the form \(SalePrice = m \cdot BsmtFinSF12 + b\), where BsmtFinSF12 is the sum of the columns BsmtFinSF1 and BsmtFinSF2 as noted above.
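As a quick illustration of this form, the single-variable fit can be obtained directly with lm(); a sketch (the two reported coefficients are the intercept b and the slope m):

#simple one-variable fit of SalePrice on the combined finished basement area (sketch)
simplefit <- lm(SalePrice ~ BsmtFinSF12, data = train)
coef(simplefit)  #intercept b and slope m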

Probability.

Calculate as a minimum the below probabilities a through c. Assume the small letter “x” is estimated as the 1st quartile of the X variable, and the small letter “y” is estimated as the 1st quartile of the Y variable. Interpret the meaning of all probabilities. In addition, make a table of counts as shown below.

To populate the table of probabilities using the 1st quartile for x and the 2nd quartile (the median) for y, I will use the quantile function in R.

  a. P(X>x | Y>y)
library(dplyr)  #filter(), count(), and the %>% pipe are used below
xq1 <- quantile(train$BsmtFinSF12, 0.25)
yq2 <- quantile(train$SalePrice, 0.5)

rowcount <- dim(train)[1]
upperxq1yq2 <- filter(train, train$SalePrice > yq2 & train$BsmtFinSF12 > xq1) %>% count()
upperyq2 <- filter(train, train$SalePrice > yq2) %>% count()

(upperxq1yq2/rowcount) / (upperyq2/rowcount)
##          n
## 1 0.706044
#insert into matrix table
d22<- filter(train, train$SalePrice > yq2 & train$BsmtFinSF12 > xq1) %>% count()

The value for P(X>x|Y>y) is 0.7060.

  b. P(X>x, Y>y)
upperxq1yq2 <- filter(train, train$SalePrice > yq2 & train$BsmtFinSF12 > xq1) %>% count()
upperyq2 <- filter(train, train$SalePrice > yq2) %>% count()

(upperxq1yq2/rowcount) * (upperyq2/rowcount)
##           n
## 1 0.1755451

The value for P(X>x,Y>y) is 0.1755.

  c. P(X<x | Y>y)
upperyq2xq1 <- filter(train, train$SalePrice > yq2 & train$BsmtFinSF12 < xq1) %>% count()
upperyq2 <- filter(train, train$SalePrice > yq2) %>% count()

(upperyq2xq1/rowcount)/(upperyq2/rowcount)
##   n
## 1 0
#Insert into matrix table
d21<- filter(train, train$SalePrice > yq2 & train$BsmtFinSF12 < xq1) %>% count()

The value for P(X<x | Y>y) is 0, because the 1st quartile of BsmtFinSF12 is 0 and no observation falls strictly below it.

Does splitting the training data in this fashion make them independent?

#Insert into matrix table.  Fill in other items including row and column sums.  
d23 <- filter(train, train$SalePrice > yq2) %>% count()
d13 <- dim(train)[1]-d23
d32 <- filter(train, train$BsmtFinSF12 > xq1) %>% count()
d31 <- dim(train)[1]-d32
d11 <- d31-d21
d12 <- d32-d22
d33 <- dim(train)[1]

#prepare table
tab <- matrix(c(d11,d21,d31,d12,d22,d32,d13,d23,d33), 3, 3, byrow = T)

print(tab)
##      [,1] [,2] [,3]
## [1,] 467  0    467 
## [2,] 479  514  993 
## [3,] 732  728  1460
PA <- d32/d33
PB <- d23/d33
PA*PB
##           n
## 1 0.3391368
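The same counts can also be produced as a labeled table with table() and addmargins(), which makes the row and column sums explicit; a sketch using the xq1 and yq2 cutoffs defined above:

#labeled version of the counts table above (sketch)
X_above <- factor(train$BsmtFinSF12 > xq1, levels = c(FALSE, TRUE), labels = c("X <= x", "X > x"))
Y_above <- factor(train$SalePrice > yq2, levels = c(FALSE, TRUE), labels = c("Y <= y", "Y > y"))
addmargins(table(X_above, Y_above))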

Let A be the new variable counting those observations above the 1st quartile for X, and let B be the new variable counting those observations above the 1st quartile for Y.

Does P(AB)=P(A)P(B)?

The value of P(A) * P(B) is 0.34. The two are not equal, since P(AB) was computed above as 0.71, which is not equal to 0.34.

Check mathematically, and then evaluate by running a Chi Square test for association.

chimat<-rbind(c(467,0),c(479,514))
chisq.test(chimat,correct=TRUE)
## 
##  Pearson's Chi-squared test with Yates' continuity correction
## 
## data:  chimat
## X-squared = 370.81, df = 1, p-value < 2.2e-16
#check variables also
chisq.test(train$TotalBsmtSF, train$SalePrice, correct=FALSE)
## Warning in chisq.test(train$TotalBsmtSF, train$SalePrice, correct = FALSE):
## Chi-squared approximation may be incorrect
## 
##  Pearson's Chi-squared test
## 
## data:  train$TotalBsmtSF and train$SalePrice
## X-squared = 509710, df = 476640, p-value < 2.2e-16

Since the p-value is below the 0.05 significance level, we reject the null hypothesis that BsmtFinSF12 is independent of SalePrice; the test indicates dependence between the two. This was verified both using the quartile-split counts and by running the test on the full data for X and Y.

Descriptive and Inferential Statistics.

library("dplyr")
library(purrr)
library(tidyr)
library(ggplot2)
library(corrplot)
## corrplot 0.84 loaded
ntrain<-select_if(train, is.numeric)
ntrain %>%
  keep(is.numeric) %>%                     # Keep only numeric columns
  gather() %>%                             # Convert to key-value pairs
  ggplot(aes(value)) +                     # Plot the values
    facet_wrap(~ key, scales = "free") +   # In separate panels
    geom_bar()                         # as bar charts

ntrain %>%
  keep(is.numeric) %>%                     # Keep only numeric columns
  gather() %>%                             # Convert to key-value pairs
  ggplot(aes(value)) +                     # Plot the values
    facet_wrap(~ key, scales = "free") +   # In separate panels
    geom_density()                         # as density

subset <- select(train, TotalBsmtSF, TotRmsAbvGrd, LotArea, SalePrice)

subcor <- cor(subset)

par(mfrow=c(1,2))
ggplot(train, aes(x=BsmtFinSF12,y=SalePrice)) + geom_point() + ggtitle("Finished Basement SF vs Sale Price") + xlab("Finished Basement (BsmtFinSF1 + BsmtFinSF2)")

corrplot(subcor, method="square")

According to the correlation plot, the strongest correlation with SalePrice among the selected variables is TotalBsmtSF.

cor.test(train$TotalBsmtSF, train$SalePrice, method = "pearson" , conf.level = 0.92)
## 
##  Pearson's product-moment correlation
## 
## data:  train$TotalBsmtSF and train$SalePrice
## t = 29.671, df = 1458, p-value < 2.2e-16
## alternative hypothesis: true correlation is not equal to 0
## 92 percent confidence interval:
##  0.5841762 0.6413763
## sample estimates:
##       cor 
## 0.6135806
cor.test(train$TotRmsAbvGrd, train$SalePrice, method = "pearson" , conf.level = 0.92)
## 
##  Pearson's product-moment correlation
## 
## data:  train$TotRmsAbvGrd and train$SalePrice
## t = 24.099, df = 1458, p-value < 2.2e-16
## alternative hypothesis: true correlation is not equal to 0
## 92 percent confidence interval:
##  0.5001246 0.5657172
## sample estimates:
##       cor 
## 0.5337232
cor.test(train$LotArea, train$SalePrice, method = "pearson" , conf.level = 0.92)
## 
##  Pearson's product-moment correlation
## 
## data:  train$LotArea and train$SalePrice
## t = 10.445, df = 1458, p-value < 2.2e-16
## alternative hypothesis: true correlation is not equal to 0
## 92 percent confidence interval:
##  0.2206794 0.3059759
## sample estimates:
##       cor 
## 0.2638434

This indicates that there is a significant correlation, as the p-value is < 0.05 for all three selected variables. In each case the sample correlation also falls within its 92% confidence interval.

Furthermore, because three separate tests were run, we should account for the familywise error rate (FWE), where \(FWE \le 1 - (1 - \alpha_{IT})^c\). With the values used here (0.92 and c = 3 tests) this comes out very high, 0.999488. To compensate, we rerun the Pearson correlation tests, but instead of a 92% confidence level we adjust for the 3 tests: 8%/3 ≈ 2.67%, giving a new confidence level of 1 − (8%/3) ≈ 97.33%.
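The adjusted confidence level can be computed directly; a short sketch of the Bonferroni-style correction described above:

#Bonferroni-style adjustment for 3 simultaneous tests (sketch)
alpha_family <- 1 - 0.92      #8% total error budget across the family of tests
n_tests <- 3
1 - alpha_family / n_tests    #~0.9733, the confidence level used below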

cor.test(train$TotalBsmtSF, train$SalePrice, method = "pearson" , conf.level = 0.9733)
## 
##  Pearson's product-moment correlation
## 
## data:  train$TotalBsmtSF and train$SalePrice
## t = 29.671, df = 1458, p-value < 2.2e-16
## alternative hypothesis: true correlation is not equal to 0
## 97.33 percent confidence interval:
##  0.5760909 0.6484941
## sample estimates:
##       cor 
## 0.6135806
cor.test(train$TotRmsAbvGrd, train$SalePrice, method = "pearson" , conf.level = 0.9733)
## 
##  Pearson's product-moment correlation
## 
## data:  train$TotRmsAbvGrd and train$SalePrice
## t = 24.099, df = 1458, p-value < 2.2e-16
## alternative hypothesis: true correlation is not equal to 0
## 97.33 percent confidence interval:
##  0.4909302 0.5739468
## sample estimates:
##       cor 
## 0.5337232
cor.test(train$LotArea, train$SalePrice, method = "pearson" , conf.level = 0.9733)
## 
##  Pearson's product-moment correlation
## 
## data:  train$LotArea and train$SalePrice
## t = 10.445, df = 1458, p-value < 2.2e-16
## alternative hypothesis: true correlation is not equal to 0
## 97.33 percent confidence interval:
##  0.2090551 0.3169804
## sample estimates:
##       cor 
## 0.2638434

In this case we again get p < 0.05 for all 3 tests, so we can still reject the null hypothesis of zero correlation and conclude that all 3 variables are correlated with SalePrice.

Linear Algebra and Correlation.

Invert your 3 x 3 correlation matrix from above. (This is known as the precision matrix and contains variance inflation factors on the diagonal.) Multiply the correlation matrix by the precision matrix, and then multiply the precision matrix by the correlation matrix. Conduct LU decomposition on the matrix.

The following items generate the inverted matrix from the correlation matrix.

print(subcor)
##              TotalBsmtSF TotRmsAbvGrd   LotArea SalePrice
## TotalBsmtSF    1.0000000    0.2855726 0.2608331 0.6135806
## TotRmsAbvGrd   0.2855726    1.0000000 0.1900148 0.5337232
## LotArea        0.2608331    0.1900148 1.0000000 0.2638434
## SalePrice      0.6135806    0.5337232 0.2638434 1.0000000
inv<-solve(subcor)
print(inv)
##              TotalBsmtSF TotRmsAbvGrd     LotArea  SalePrice
## TotalBsmtSF    1.6396732   0.10848043 -0.18011064 -1.0164491
## TotRmsAbvGrd   0.1084804   1.41061034 -0.08612454 -0.7967135
## LotArea       -0.1801106  -0.08612454  1.09853013 -0.1333608
## SalePrice     -1.0164491  -0.79671350 -0.13336082  2.0840842
round(subcor %*% inv)
##              TotalBsmtSF TotRmsAbvGrd LotArea SalePrice
## TotalBsmtSF            1            0       0         0
## TotRmsAbvGrd           0            1       0         0
## LotArea                0            0       1         0
## SalePrice              0            0       0         1
round(inv %*% subcor)
##              TotalBsmtSF TotRmsAbvGrd LotArea SalePrice
## TotalBsmtSF            1            0       0         0
## TotRmsAbvGrd           0            1       0         0
## LotArea                0            0       1         0
## SalePrice              0            0       0         1
library(Matrix)
## Warning: package 'Matrix' was built under R version 3.4.4
## 
## Attaching package: 'Matrix'
## The following object is masked from 'package:tidyr':
## 
##     expand
lum <- lu(inv)
elu <- expand(lum)

round(elu$L,3)
## 4 x 4 Matrix of class "dtrMatrix" (unitriangular)
##      [,1]   [,2]   [,3]   [,4]  
## [1,]  1.000      .      .      .
## [2,]  0.066  1.000      .      .
## [3,] -0.110 -0.053  1.000      .
## [4,] -0.620 -0.520 -0.264  1.000
round(elu$U,3)
## 4 x 4 Matrix of class "dtrMatrix"
##      [,1]   [,2]   [,3]   [,4]  
## [1,]  1.640  0.108 -0.180 -1.016
## [2,]      .  1.403 -0.074 -0.729
## [3,]      .      .  1.075 -0.284
## [4,]      .      .      .  1.000

Multiplying the correlation matrix by the precision matrix, or the precision matrix by the correlation matrix, produces an identity matrix (1s on the diagonal), which is expected.
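As noted in the prompt, the diagonal of the precision matrix contains the variance inflation factors, which can be read off directly; a quick sketch:

#variance inflation factors are the diagonal entries of the precision matrix (sketch)
diag(inv)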

The package ‘Matrix’ is used to generate the Lower and Upper matrices.

#Decompose L and U for the above Inverse Matrix
library(Matrix)
lum <- lu(inv)
invlu <- expand(lum)

invlu$L
## 4 x 4 Matrix of class "dtrMatrix" (unitriangular)
##      [,1]        [,2]        [,3]        [,4]       
## [1,]  1.00000000           .           .           .
## [2,]  0.06615979  1.00000000           .           .
## [3,] -0.10984545 -0.05287637  1.00000000           .
## [4,] -0.61990957 -0.51977208 -0.26384335  1.00000000
invlu$U
## 4 x 4 Matrix of class "dtrMatrix"
##      [,1]        [,2]        [,3]        [,4]       
## [1,]  1.63967319  0.10848043 -0.18011064 -1.01644910
## [2,]           .  1.40343330 -0.07420846 -0.72946544
## [3,]           .           .  1.07482192 -0.28358462
## [4,]           .           .           .  1.00000000
#decomposing the correlation matrix
sublum <- lu(subcor)
sublu <- expand(sublum)

round(sublu$L,3)
## 4 x 4 Matrix of class "dtrMatrix" (unitriangular)
##      [,1]  [,2]  [,3]  [,4] 
## [1,] 1.000     .     .     .
## [2,] 0.286 1.000     .     .
## [3,] 0.261 0.126 1.000     .
## [4,] 0.614 0.390 0.064 1.000
round(sublu$U,3)
## 4 x 4 Matrix of class "dtrMatrix"
##      [,1]  [,2]  [,3]  [,4] 
## [1,] 1.000 0.286 0.261 0.614
## [2,]     . 0.918 0.116 0.359
## [3,]     .     . 0.917 0.059
## [4,]     .     .     . 0.480
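As a sanity check, multiplying the expanded factors back together should recover the original correlation matrix; a sketch (the permutation P is expected to be the identity here):

#verify the LU decomposition: P %*% L %*% U should reproduce subcor (sketch)
round(as.matrix(with(sublu, P %*% L %*% U)), 4)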

Calculus-Based Probability & Statistics.

Many times, it makes sense to fit a closed form distribution to data. For the first variable that you selected which is skewed to the right, shift it so that the minimum value is above zero as necessary. Then load the MASS package and run fitdistr to fit an exponential probability density function. (See https://stat.ethz.ch/R-manual/R-devel/library/MASS/html/fitdistr.html ).

Find the optimal value of λ for this distribution, and then take 1000 samples from this exponential distribution using this value (e.g., rexp(1000, λ)). Plot a histogram and compare it with a histogram of your original variable. Using the exponential pdf, find the 5th and 95th percentiles using the cumulative distribution function (CDF). Also generate a 95% confidence interval from the empirical data, assuming normality. Finally, provide the empirical 5th percentile and 95th percentile of the data. Discuss.

#shift the skewed variable above zero and fit an exponential distribution
library(MASS)
## 
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
## 
##     select
min(train$BsmtFinSF12)
## [1] 0
train$BsmtFinSF122 <- train$BsmtFinSF12 + 1/10000
min(train$BsmtFinSF122)
## [1] 1e-04
#fit an exp dist
fitexpd <- fitdistr(train$BsmtFinSF122, "exponential")
lam <- fitexpd$estimate
print(lam)
##        rate 
## 0.002040029
s <- rexp(1000, lam)

#histogram of old and new
par(mfrow=c(1,2))
hist(train$BsmtFinSF12, main="Kaggle Data", xlab="TotBsmtFin (1 and 2)")
hist(s, main="Sampled Data")

The two distributions follow the same general shape, with a lower peak in the sampled data.

The exponential CDF is \(P = 1 - e^{-\lambda x}\), which means that \(x = -\log(1 - P)/\lambda\).
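Equivalently, these percentiles can be read from base R's qexp() as a cross-check; a sketch using the fitted rate lam from above:

#cross-check the closed-form percentiles against qexp() (sketch)
qexp(c(0.05, 0.95), rate = lam)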

# Obtain the 5th and 95th percentiles of the fitted exponential by inverting the CDF.
cdf_5 <- log(1 - .05)/-lam
cdf_95 <- log(1 - .95)/-lam

# Empirical 5th and 95th percentiles of the original data
# (obtained via quantiles, which assume IID observations)
quantile(train$BsmtFinSF12, 0.05)
## 5% 
##  0
quantile(train$BsmtFinSF12, 0.95)
##  95% 
## 1309
#Generate Confidence Interval
#Use RMISC to calculate Confidence Interval
library(Rmisc)
## Loading required package: lattice
## Loading required package: plyr
## -------------------------------------------------------------------------
## You have loaded plyr after dplyr - this is likely to cause problems.
## If you need functions from both plyr and dplyr, please load plyr first, then dplyr:
## library(plyr); library(dplyr)
## -------------------------------------------------------------------------
## 
## Attaching package: 'plyr'
## The following object is masked from 'package:purrr':
## 
##     compact
## The following objects are masked from 'package:dplyr':
## 
##     arrange, count, desc, failwith, id, mutate, rename, summarise,
##     summarize
CI(train$BsmtFinSF12, 0.95)
##    upper     mean    lower 
## 514.6308 490.1890 465.7472
print(cdf_5)
##     rate 
## 25.14342
print(cdf_95)
##     rate 
## 1468.475

For the Kaggle data, the 95% confidence interval for the mean runs from about 466 to 515. The fitted exponential, by contrast, places the 5th and 95th percentiles at roughly 25 and 1468, while the empirical percentiles are 0 and 1309. The exponential function is therefore NOT a good model for this data: its percentile spread encompasses nearly all of the observations, well beyond the bounds seen in the original data.

Modeling

Build some type of multiple regression model and submit your model to the competition board. Provide your complete model summary and results with analysis. Report your Kaggle.com user name and score. Reminder that new variables were generated at the start of this analysis for use here, including AllSF, SFLotRatio, and PriceperSF.

Step 1. Split the Data.

The first step is to split the data into a training set and a test set.

# Split data in reg and train
#using ntrain from the 2nd question

n <- nrow(ntrain) #1460 rows of data
shuffle_ntrain <- ntrain[sample(n), ]
train_indices <- 1:round(0.7 * n)
trainsub <- shuffle_ntrain[train_indices, ]
test_indices <- (round(0.7 * n) + 1):n
testsub <- shuffle_ntrain[test_indices, ]
testsub[is.na(testsub)] <- 0
ntrain[is.na(ntrain)] <- 0
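Note that this split is random; fixing a seed before the sample() call above makes the split reproducible. A sketch (the seed value is arbitrary):

#optional: call this before sample() above so the train/test split is reproducible (sketch)
set.seed(605)  #arbitrary seed value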

Step 2. Create Multiple Linear Regression Model.

The next step is to create a multiple regression model. This is just an expansion of the lm() function in R, supplying more than one independent variable for analysis. In this case, the first model was generated by passing all the numeric variables into lm(). Upon further inspection, the independent variables were whittled down to only those of significance, and a second-pass model was generated.

#fit a linear regression model
newdatacor = cor(trainsub)
corrplot(newdatacor)

model <- lm(SalePrice ~ ., data=trainsub)
summary(model)
## 
## Call:
## lm(formula = SalePrice ~ ., data = trainsub)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -402344  -17631   -2378   15330  212543 
## 
## Coefficients: (3 not defined because of singularities)
##                 Estimate Std. Error t value Pr(>|t|)    
## (Intercept)    8.119e+05  1.594e+06   0.509 0.610631    
## Id             7.515e-01  2.472e+00   0.304 0.761142    
## MSSubClass    -1.656e+02  3.121e+01  -5.306 1.39e-07 ***
## LotFrontage   -3.409e+01  3.247e+01  -1.050 0.294037    
## LotArea        5.046e-01  1.112e-01   4.536 6.45e-06 ***
## OverallQual    1.782e+04  1.367e+03  13.040  < 2e-16 ***
## OverallCond    5.727e+03  1.204e+03   4.758 2.25e-06 ***
## YearBuilt      3.598e+02  6.938e+01   5.185 2.62e-07 ***
## YearRemodAdd   9.675e+01  7.583e+01   1.276 0.202291    
## MasVnrArea     3.094e+01  6.688e+00   4.626 4.23e-06 ***
## BsmtFinSF1     1.575e+01  5.010e+00   3.145 0.001713 ** 
## BsmtFinSF2     5.593e+00  7.962e+00   0.702 0.482600    
## BsmtUnfSF      6.673e+00  4.454e+00   1.498 0.134434    
## TotalBsmtSF           NA         NA      NA       NA    
## X1stFlrSF      3.789e+01  6.566e+00   5.772 1.05e-08 ***
## X2ndFlrSF      3.672e+01  5.778e+00   6.355 3.17e-10 ***
## LowQualFinSF   1.376e+01  2.160e+01   0.637 0.524110    
## GrLivArea             NA         NA      NA       NA    
## BsmtFullBath   9.806e+03  2.872e+03   3.415 0.000664 ***
## BsmtHalfBath  -2.044e+03  4.624e+03  -0.442 0.658630    
## FullBath       7.097e+03  3.172e+03   2.237 0.025480 *  
## HalfBath      -1.645e+03  3.021e+03  -0.545 0.586193    
## BedroomAbvGr  -1.143e+04  1.923e+03  -5.942 3.89e-09 ***
## KitchenAbvGr  -1.727e+04  5.767e+03  -2.995 0.002813 ** 
## TotRmsAbvGrd   8.437e+03  1.415e+03   5.965 3.41e-09 ***
## Fireplaces     2.152e+03  1.994e+03   1.079 0.280658    
## GarageYrBlt   -1.246e+01  3.099e+00  -4.021 6.24e-05 ***
## GarageCars     1.525e+04  3.322e+03   4.590 4.99e-06 ***
## GarageArea     3.402e+00  1.092e+01   0.312 0.755455    
## WoodDeckSF     3.757e+01  9.561e+00   3.929 9.11e-05 ***
## OpenPorchSF    1.779e+01  1.740e+01   1.022 0.306882    
## EnclosedPorch  3.382e+01  1.940e+01   1.743 0.081670 .  
## X3SsnPorch     1.689e+01  3.139e+01   0.538 0.590753    
## ScreenPorch    8.361e+01  1.869e+01   4.474 8.59e-06 ***
## PoolArea      -1.178e+02  2.566e+01  -4.591 4.98e-06 ***
## MiscVal        3.027e-01  1.994e+00   0.152 0.879408    
## MoSold         4.201e+00  3.902e+02   0.011 0.991412    
## YrSold        -8.817e+02  7.938e+02  -1.111 0.266985    
## BsmtFinSF12           NA         NA      NA       NA    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 32900 on 986 degrees of freedom
## Multiple R-squared:  0.8309, Adjusted R-squared:  0.8249 
## F-statistic: 138.4 on 35 and 986 DF,  p-value: < 2.2e-16
plot(model$residuals ~ model$fitted.values)

#extract variables that are significant and rerun model
sigvars <- data.frame(summary(model)$coef[summary(model)$coef[,4] <= .05, 4])
sigvars <- add_rownames(sigvars, "vars")
## Warning: Deprecated, use tibble::rownames_to_column() instead.
colist<-dplyr::pull(sigvars, vars)

idx <- match(colist, names(trainsub))
trainsub2 <- cbind(trainsub[,idx], trainsub['SalePrice'])

model2<-lm(SalePrice ~ ., data=trainsub2)

summary(model2)
## 
## Call:
## lm(formula = SalePrice ~ ., data = trainsub2)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -397793  -17285   -2394   14752  210450 
## 
## Coefficients:
##                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  -7.465e+05  1.079e+05  -6.918 8.18e-12 ***
## MSSubClass   -1.628e+02  2.972e+01  -5.477 5.47e-08 ***
## LotArea       5.062e-01  1.085e-01   4.667 3.47e-06 ***
## OverallQual   1.881e+04  1.290e+03  14.580  < 2e-16 ***
## OverallCond   5.877e+03  1.087e+03   5.404 8.12e-08 ***
## YearBuilt     3.482e+02  5.491e+01   6.342 3.43e-10 ***
## MasVnrArea    2.940e+01  6.565e+00   4.478 8.41e-06 ***
## BsmtFinSF1    9.396e+00  3.311e+00   2.838 0.004634 ** 
## X1stFlrSF     4.567e+01  5.396e+00   8.464  < 2e-16 ***
## X2ndFlrSF     3.773e+01  4.847e+00   7.783 1.76e-14 ***
## BsmtFullBath  1.080e+04  2.613e+03   4.132 3.89e-05 ***
## FullBath      8.236e+03  2.897e+03   2.843 0.004560 ** 
## BedroomAbvGr -1.179e+04  1.870e+03  -6.306 4.30e-10 ***
## KitchenAbvGr -2.096e+04  5.492e+03  -3.816 0.000144 ***
## TotRmsAbvGrd  8.187e+03  1.365e+03   5.998 2.78e-09 ***
## GarageYrBlt  -1.319e+01  2.955e+00  -4.463 9.00e-06 ***
## GarageCars    1.636e+04  2.319e+03   7.056 3.19e-12 ***
## WoodDeckSF    3.710e+01  9.339e+00   3.973 7.61e-05 ***
## ScreenPorch   8.212e+01  1.809e+01   4.540 6.32e-06 ***
## PoolArea     -1.159e+02  2.529e+01  -4.583 5.17e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 32840 on 1002 degrees of freedom
## Multiple R-squared:  0.8288, Adjusted R-squared:  0.8256 
## F-statistic: 255.4 on 19 and 1002 DF,  p-value: < 2.2e-16
par(mfrow=c(1,2))
plot(model2$residuals ~ model2$fitted.values, main="New Reduced Var Model")
abline(h = 0)
plot(model$residuals ~ model$fitted.values, main="Original Model All Vars")
abline(h = 0)

The first model has an Adjusted R-squared of 0.8249, which indicates a good fit, but it is likely overfitting since only about half of its 35 numeric predictors are actually significant. The second model achieves a comparable Adjusted R-squared of 0.8256 while relying on far fewer variables, which makes it the better model and less prone to overfitting. We could do a third pass by selecting only the variables with, for example, p < 0.01 instead of the p < 0.05 used here; a sketch of that follows.
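A minimal sketch of that possible third pass, keeping only predictors from the second model that are significant at p < 0.01 (model3 is a hypothetical name):

#hypothetical third pass: refit using only predictors with p < 0.01 (sketch)
coefs2 <- summary(model2)$coef
keepvars <- setdiff(rownames(coefs2)[coefs2[, 4] < 0.01], "(Intercept)")
model3 <- lm(SalePrice ~ ., data = trainsub2[, c(keepvars, "SalePrice")])
summary(model3)$adj.r.squared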

Step 3. Predict Values on Test Data Provided by Kaggle Competition.

The final step was to apply the linear model to the test data provided by Kaggle for submission. In this case, the training set had 1460 records and the test data had 1459 records.

#predict


#select only numeric columns
test <- select_if(test, is.numeric)

test[is.na(test)] <- 0
pred2<-predict(model2,test)


#export data for Kaggle
kaggle <- as.data.frame(cbind(test$Id, pred2))
colnames(kaggle) <- c("Id", "SalePrice")

write.csv(kaggle, file = "Kaggle_Submission2.csv", quote=FALSE, row.names=FALSE)
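Before submitting, the held-out testsub split can be used to approximate Kaggle's leaderboard metric, which is the root mean squared error of the logged sale prices; a sketch assuming model2 and testsub from above:

#approximate the leaderboard metric (RMSE on log prices) using the held-out split (sketch)
holdout_pred <- predict(model2, newdata = testsub)
valid <- holdout_pred > 0 & testsub$SalePrice > 0  #guard against non-positive predictions
sqrt(mean((log(holdout_pred[valid]) - log(testsub$SalePrice[valid]))^2))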

After doing a second pass on my model, I was able to generate predictions on the test data for the Kaggle competition. My submission scored 0.24257 under username cspitmit03, which gave me a rank of 4786/5325, roughly the 10th percentile. Note that when I added my own variables, my score actually dropped in half, which means that they weren't effective in producing better predictions.

#display the Kaggle submission result (screenshot)
knitr::include_graphics('Submission.png')