Generate a random variable \(X\) that has \(10,000\) random uniform numbers from \(1\) to \(N\), where \(N\) can be any number of your choosing greater than or equal to \(6\). Then generate a random variable \(Y\) that has \(10,000\) random normal numbers with a mean and standard deviation of \(\mu = \sigma = \frac{N+1}{2}\).
set.seed(124)
n <- 10000
N <- 10
mu <- (N+1)/2   # mean of Y
sd <- (N+1)/2   # standard deviation of Y
X <- runif(n, min = 1, max = N)
Y <- rnorm(n, mean = mu, sd = sd)
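As a quick sanity check (a minimal sketch using the variables just defined), the simulated draws should roughly match the target parameters:
summary(X)         # should span (1, 10) with a mean near (N+1)/2 = 5.5
c(mean(Y), sd(Y))  # both should be near 5.5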
Calculate, at a minimum, the probabilities \(a\) through \(c\) below. Assume the small letter “x” is estimated as the median of the \(X\) variable, and the small letter “y” is estimated as the 1st quartile of the \(Y\) variable. Interpret the meaning of all probabilities.
x <- median(X)
y <- as.numeric(quantile(Y)[2])    # 1st quartile of Y
z <- (y - mu) / sd                 # z-score of y under N(mu, sd)
# closed-form pieces, treating X as roughly uniform with lower bound min(X)
# (an exact Uniform(1, N) CDF would divide by N - 1; 1/N is a close approximation)
pXlx <- ((x-min(X))*(1/N))         # P(X < x)
pXgx <- 1 - ((x-min(X))*(1/N))     # P(X > x)
pXgy <- 1 - ((y-min(X)) * (1/N))   # P(X > y)
pYgy <- 1 - pnorm(z)               # P(Y > y)
pYly <- pnorm(z)                   # P(Y < y)
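These closed-form pieces can be cross-checked against simple empirical proportions computed from the draws themselves (a sketch; the two approaches should agree closely):
mean(X < x)  # empirical P(X < x)
mean(X > x)  # empirical P(X > x)
mean(X > y)  # empirical P(X > y)
mean(Y > y)  # empirical P(Y > y)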
a. P(X>x | X>y)
Conditional probability - the probability that \(X\) is greater than \(x\), given that \(X\) is greater than \(y\).
\[P(X>x \mid X>y) = \frac{P(X>x \cap X>y)}{P(X>y)}\]
The calculation below treats the two events as independent, so the numerator factors as \(P(X>x)P(X>y)\) and the ratio reduces to \(P(X>x)\). Strictly speaking, both events involve the same variable \(X\); because \(y < x\) here, \(\{X>x\} \subseteq \{X>y\}\), and the exact value is \(P(X>x)/P(X>y)\), slightly larger than the figure below.
(pXgx*pXgy)/pXgy
## [1] 0.5463136
b. P(X>x, Y>y)
Joint probability - the probability that \(X\) is greater than \(x\) and \(Y\) is greater than \(y\). Because \(X\) and \(Y\) were generated independently, the joint probability factors: \[P(X>x, Y>y) = P(X>x \cap Y>y) = P(X>x) P(Y>y)\]
pXgx*pYgy
## [1] 0.4077384
c. P(X<x | X>y) Conditional probability - the probability that \(X\) is less than \(x\), given that \(X\) is greater than \(y\). \[P(X<x \mid X>y) = \frac{P(X<x \cap X>y)}{P(X>y)}\] As in part (a), the calculation below factors the numerator as \(P(X<x)P(X>y)\); exactly, \(P(X<x \cap X>y) = P(y < X < x)\), since both events involve \(X\).
(pXlx*pXgy)/pXgy
## [1] 0.4536864
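Since the raw draws are available, all three probabilities can also be estimated directly from the simulation, with no independence assumption required - a minimal sketch:
# direct empirical estimates of (a)-(c)
mean(X > x & X > y) / mean(X > y)  # a. P(X>x | X>y)
mean(X > x & Y > y)                # b. P(X>x, Y>y)
mean(X < x & X > y) / mean(X > y)  # c. P(X<x | X>y)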
Investigate whether \(P(X>x \text{ and } Y>y) = P(X>x)P(Y>y)\) by building a table and evaluating the marginal and joint probabilities.
We will create a matrix of joint probabilities and then calculate the marginal probabilities. We will then compare the joint probability in cell \([1,1]\) of the matrix to the product of the marginal probabilities (cells \([3,1]\) and \([1,3]\)).
# calculate joint probabilities
a <- pXgx * pYgy
b <- pXgx * pYly
c <- pXlx * pYgy
d <- pXlx * pYly
# create matrix
jointMatrix <- matrix(c(a,b,c,d),
nrow=2)
# add row totals
fullMatrix <- rbind(jointMatrix, c(a+b, c+d))
# add column totals
fullMatrix <- cbind(fullMatrix, c(a+c, b+d, NA))
colnames(fullMatrix) <- c('P(X>x)', 'P(X<x)', 'Marginal Prob Y')
rownames(fullMatrix) <- c('P(Y>y)', 'P(Y<y)', 'Marginal Prob X')
fullMatrix
## P(X>x) P(X<x) Marginal Prob Y
## P(Y>y) 0.4077384 0.3386066 0.746345
## P(Y<y) 0.1385752 0.1150798 0.253655
## Marginal Prob X 0.5463136 0.4536864 NA
margProb <- fullMatrix[3,1] * fullMatrix[1,3]
jointProb <- fullMatrix[1,1]
margProb
## [1] 0.4077384
jointProb
## [1] 0.4077384
The two probabilities are equal, which is consistent with \(X\) and \(Y\) being independent.
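The same joint/marginal table can also be built directly from the simulated draws with base R (a sketch; proportions of the \(10,000\) draws rather than closed-form pieces):
# counts -> proportions -> marginal sums appended as a Sum row/column
addmargins(prop.table(table(X > x, Y > y)))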
Check to see if independence holds by using Fisher’s Exact Test and the Chi Square Test. What is the difference between the two? Which is most appropriate?
Random variables are independent if neither variable affects the probability distribution of the other. Both Fisher’s exact test and the chi-square test assess whether two categorical variables are independent.
In both cases, we test the null hypothesis \(H_0\) that the variables are independent against the alternative \(H_A\) that they are not.
A \(p\)-value \(\leq 0.05\) allows us to reject \(H_0\) in favor of \(H_A\). Note that a \(p\)-value \(> 0.05\) does not prove independence; it only means we lack evidence to reject it.
fisher.result <- fisher.test(jointMatrix)
print(fisher.result$p.value)
## [1] 1
chi.result <- chisq.test(jointMatrix)
print(chi.result$p.value)
## [1] 1
In both instances, the p-value is very high \((1)\), so we fail to reject \(H_0\): the data are consistent with the variables being independent.
The chi-square test can be used for contingency tables of any size, whereas Fisher’s exact test is classically applied to \(2x2\) contingency tables. Fisher’s exact test is generally preferred for small sample sizes, where the chi-square approximation can be unreliable.
In this case, I would recommend the chi-square test: both random variables contain \(10,000\) draws, so the large-sample approximation behind the test is well justified.
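One caveat: both tests expect a table of frequencies, not probabilities, so a count-based contingency table built from the raw draws is arguably the more defensible input. A sketch:
# count-based 2x2 contingency table of the raw draws
counts <- table(X > x, Y > y)
fisher.test(counts)$p.value
chisq.test(counts)$p.value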
You are to register for Kaggle.com (free) and compete in the House Prices: Advanced Regression Techniques competition. https://www.kaggle.com/c/house-prices-advanced-regression-techniques
# packages assumed for the code in this document: tidyverse (%>%, as_tibble,
# dplyr verbs), MASS (fitdistr), matrixcalc (lu.decomposition), matlib (inv)
library(tidyverse)
library(MASS)
library(matrixcalc)
library(matlib)
trainSet <- read.csv('train.csv', sep=',') %>%
as_tibble()
testSet <- read.csv('test.csv', sep=',') %>%
as_tibble()
Provide univariate descriptive statistics and appropriate plots for the training data set. Provide a scatterplot matrix for at least two of the independent variables and the dependent variable. Derive a correlation matrix for any three quantitative variables in the dataset. Test the hypotheses that the correlations between each pairwise set of variables are 0 and provide an 80% confidence interval. Discuss the meaning of your analysis. Would you be worried about familywise error? Why or why not?
We’ll take a look at the sale prices:
hist(trainSet$SalePrice,
main="Distribution of Sale Prices",
xlab="Sale Price",
breaks = 20)
The majority of homes fall between \(\$100,000\) and \(\$200,000\).
boxplot(trainSet$SalePrice,
main="Boxplot of Sale Prices",
xlab="Sale Price")
The boxplot shows us that we have quite a few outliers.
summary(trainSet)
## Id MSSubClass MSZoning LotFrontage
## Min. : 1.0 Min. : 20.0 C (all): 10 Min. : 21.00
## 1st Qu.: 365.8 1st Qu.: 20.0 FV : 65 1st Qu.: 59.00
## Median : 730.5 Median : 50.0 RH : 16 Median : 69.00
## Mean : 730.5 Mean : 56.9 RL :1151 Mean : 70.05
## 3rd Qu.:1095.2 3rd Qu.: 70.0 RM : 218 3rd Qu.: 80.00
## Max. :1460.0 Max. :190.0 Max. :313.00
## NA's :259
## LotArea Street Alley LotShape LandContour
## Min. : 1300 Grvl: 6 Grvl: 50 IR1:484 Bnk: 63
## 1st Qu.: 7554 Pave:1454 Pave: 41 IR2: 41 HLS: 50
## Median : 9478 NA's:1369 IR3: 10 Low: 36
## Mean : 10517 Reg:925 Lvl:1311
## 3rd Qu.: 11602
## Max. :215245
##
## Utilities LotConfig LandSlope Neighborhood Condition1
## AllPub:1459 Corner : 263 Gtl:1382 NAmes :225 Norm :1260
## NoSeWa: 1 CulDSac: 94 Mod: 65 CollgCr:150 Feedr : 81
## FR2 : 47 Sev: 13 OldTown:113 Artery : 48
## FR3 : 4 Edwards:100 RRAn : 26
## Inside :1052 Somerst: 86 PosN : 19
## Gilbert: 79 RRAe : 11
## (Other):707 (Other): 15
## Condition2 BldgType HouseStyle OverallQual
## Norm :1445 1Fam :1220 1Story :726 Min. : 1.000
## Feedr : 6 2fmCon: 31 2Story :445 1st Qu.: 5.000
## Artery : 2 Duplex: 52 1.5Fin :154 Median : 6.000
## PosN : 2 Twnhs : 43 SLvl : 65 Mean : 6.099
## RRNn : 2 TwnhsE: 114 SFoyer : 37 3rd Qu.: 7.000
## PosA : 1 1.5Unf : 14 Max. :10.000
## (Other): 2 (Other): 19
## OverallCond YearBuilt YearRemodAdd RoofStyle
## Min. :1.000 Min. :1872 Min. :1950 Flat : 13
## 1st Qu.:5.000 1st Qu.:1954 1st Qu.:1967 Gable :1141
## Median :5.000 Median :1973 Median :1994 Gambrel: 11
## Mean :5.575 Mean :1971 Mean :1985 Hip : 286
## 3rd Qu.:6.000 3rd Qu.:2000 3rd Qu.:2004 Mansard: 7
## Max. :9.000 Max. :2010 Max. :2010 Shed : 2
##
## RoofMatl Exterior1st Exterior2nd MasVnrType MasVnrArea
## CompShg:1434 VinylSd:515 VinylSd:504 BrkCmn : 15 Min. : 0.0
## Tar&Grv: 11 HdBoard:222 MetalSd:214 BrkFace:445 1st Qu.: 0.0
## WdShngl: 6 MetalSd:220 HdBoard:207 None :864 Median : 0.0
## WdShake: 5 Wd Sdng:206 Wd Sdng:197 Stone :128 Mean : 103.7
## ClyTile: 1 Plywood:108 Plywood:142 NA's : 8 3rd Qu.: 166.0
## Membran: 1 CemntBd: 61 CmentBd: 60 Max. :1600.0
## (Other): 2 (Other):128 (Other):136 NA's :8
## ExterQual ExterCond Foundation BsmtQual BsmtCond BsmtExposure
## Ex: 52 Ex: 3 BrkTil:146 Ex :121 Fa : 45 Av :221
## Fa: 14 Fa: 28 CBlock:634 Fa : 35 Gd : 65 Gd :134
## Gd:488 Gd: 146 PConc :647 Gd :618 Po : 2 Mn :114
## TA:906 Po: 1 Slab : 24 TA :649 TA :1311 No :953
## TA:1282 Stone : 6 NA's: 37 NA's: 37 NA's: 38
## Wood : 3
##
## BsmtFinType1 BsmtFinSF1 BsmtFinType2 BsmtFinSF2
## ALQ :220 Min. : 0.0 ALQ : 19 Min. : 0.00
## BLQ :148 1st Qu.: 0.0 BLQ : 33 1st Qu.: 0.00
## GLQ :418 Median : 383.5 GLQ : 14 Median : 0.00
## LwQ : 74 Mean : 443.6 LwQ : 46 Mean : 46.55
## Rec :133 3rd Qu.: 712.2 Rec : 54 3rd Qu.: 0.00
## Unf :430 Max. :5644.0 Unf :1256 Max. :1474.00
## NA's: 37 NA's: 38
## BsmtUnfSF TotalBsmtSF Heating HeatingQC CentralAir
## Min. : 0.0 Min. : 0.0 Floor: 1 Ex:741 N: 95
## 1st Qu.: 223.0 1st Qu.: 795.8 GasA :1428 Fa: 49 Y:1365
## Median : 477.5 Median : 991.5 GasW : 18 Gd:241
## Mean : 567.2 Mean :1057.4 Grav : 7 Po: 1
## 3rd Qu.: 808.0 3rd Qu.:1298.2 OthW : 2 TA:428
## Max. :2336.0 Max. :6110.0 Wall : 4
##
## Electrical X1stFlrSF X2ndFlrSF LowQualFinSF
## FuseA: 94 Min. : 334 Min. : 0 Min. : 0.000
## FuseF: 27 1st Qu.: 882 1st Qu.: 0 1st Qu.: 0.000
## FuseP: 3 Median :1087 Median : 0 Median : 0.000
## Mix : 1 Mean :1163 Mean : 347 Mean : 5.845
## SBrkr:1334 3rd Qu.:1391 3rd Qu.: 728 3rd Qu.: 0.000
## NA's : 1 Max. :4692 Max. :2065 Max. :572.000
##
## GrLivArea BsmtFullBath BsmtHalfBath FullBath
## Min. : 334 Min. :0.0000 Min. :0.00000 Min. :0.000
## 1st Qu.:1130 1st Qu.:0.0000 1st Qu.:0.00000 1st Qu.:1.000
## Median :1464 Median :0.0000 Median :0.00000 Median :2.000
## Mean :1515 Mean :0.4253 Mean :0.05753 Mean :1.565
## 3rd Qu.:1777 3rd Qu.:1.0000 3rd Qu.:0.00000 3rd Qu.:2.000
## Max. :5642 Max. :3.0000 Max. :2.00000 Max. :3.000
##
## HalfBath BedroomAbvGr KitchenAbvGr KitchenQual
## Min. :0.0000 Min. :0.000 Min. :0.000 Ex:100
## 1st Qu.:0.0000 1st Qu.:2.000 1st Qu.:1.000 Fa: 39
## Median :0.0000 Median :3.000 Median :1.000 Gd:586
## Mean :0.3829 Mean :2.866 Mean :1.047 TA:735
## 3rd Qu.:1.0000 3rd Qu.:3.000 3rd Qu.:1.000
## Max. :2.0000 Max. :8.000 Max. :3.000
##
## TotRmsAbvGrd Functional Fireplaces FireplaceQu GarageType
## Min. : 2.000 Maj1: 14 Min. :0.000 Ex : 24 2Types : 6
## 1st Qu.: 5.000 Maj2: 5 1st Qu.:0.000 Fa : 33 Attchd :870
## Median : 6.000 Min1: 31 Median :1.000 Gd :380 Basment: 19
## Mean : 6.518 Min2: 34 Mean :0.613 Po : 20 BuiltIn: 88
## 3rd Qu.: 7.000 Mod : 15 3rd Qu.:1.000 TA :313 CarPort: 9
## Max. :14.000 Sev : 1 Max. :3.000 NA's:690 Detchd :387
## Typ :1360 NA's : 81
## GarageYrBlt GarageFinish GarageCars GarageArea GarageQual
## Min. :1900 Fin :352 Min. :0.000 Min. : 0.0 Ex : 3
## 1st Qu.:1961 RFn :422 1st Qu.:1.000 1st Qu.: 334.5 Fa : 48
## Median :1980 Unf :605 Median :2.000 Median : 480.0 Gd : 14
## Mean :1979 NA's: 81 Mean :1.767 Mean : 473.0 Po : 3
## 3rd Qu.:2002 3rd Qu.:2.000 3rd Qu.: 576.0 TA :1311
## Max. :2010 Max. :4.000 Max. :1418.0 NA's: 81
## NA's :81
## GarageCond PavedDrive WoodDeckSF OpenPorchSF EnclosedPorch
## Ex : 2 N: 90 Min. : 0.00 Min. : 0.00 Min. : 0.00
## Fa : 35 P: 30 1st Qu.: 0.00 1st Qu.: 0.00 1st Qu.: 0.00
## Gd : 9 Y:1340 Median : 0.00 Median : 25.00 Median : 0.00
## Po : 7 Mean : 94.24 Mean : 46.66 Mean : 21.95
## TA :1326 3rd Qu.:168.00 3rd Qu.: 68.00 3rd Qu.: 0.00
## NA's: 81 Max. :857.00 Max. :547.00 Max. :552.00
##
## X3SsnPorch ScreenPorch PoolArea PoolQC
## Min. : 0.00 Min. : 0.00 Min. : 0.000 Ex : 2
## 1st Qu.: 0.00 1st Qu.: 0.00 1st Qu.: 0.000 Fa : 2
## Median : 0.00 Median : 0.00 Median : 0.000 Gd : 3
## Mean : 3.41 Mean : 15.06 Mean : 2.759 NA's:1453
## 3rd Qu.: 0.00 3rd Qu.: 0.00 3rd Qu.: 0.000
## Max. :508.00 Max. :480.00 Max. :738.000
##
## Fence MiscFeature MiscVal MoSold
## GdPrv: 59 Gar2: 2 Min. : 0.00 Min. : 1.000
## GdWo : 54 Othr: 2 1st Qu.: 0.00 1st Qu.: 5.000
## MnPrv: 157 Shed: 49 Median : 0.00 Median : 6.000
## MnWw : 11 TenC: 1 Mean : 43.49 Mean : 6.322
## NA's :1179 NA's:1406 3rd Qu.: 0.00 3rd Qu.: 8.000
## Max. :15500.00 Max. :12.000
##
## YrSold SaleType SaleCondition SalePrice
## Min. :2006 WD :1267 Abnorml: 101 Min. : 34900
## 1st Qu.:2007 New : 122 AdjLand: 4 1st Qu.:129975
## Median :2008 COD : 43 Alloca : 12 Median :163000
## Mean :2008 ConLD : 9 Family : 20 Mean :180921
## 3rd Qu.:2009 ConLI : 5 Normal :1198 3rd Qu.:214000
## Max. :2010 ConLw : 5 Partial: 125 Max. :755000
## (Other): 9
spMatrix <- trainSet %>%
dplyr::select(SalePrice, OverallQual, OverallCond)
pairs(spMatrix, gap=0.2)
Quality - We can see from the pairs plot above that as the quality of the home increases, so does the sale price.
Condition - There’s a lot more variation in sale price relative to the overall condition of the home (especially around condition = 5), but in general, we can see that as the condition improves, the sale price increases.
matrixVals <- trainSet %>%
dplyr::select(LotArea, TotalBsmtSF, GrLivArea)
corrMatrix <- cor(matrixVals)
corrMatrix
## LotArea TotalBsmtSF GrLivArea
## LotArea 1.0000000 0.2608331 0.2631162
## TotalBsmtSF 0.2608331 1.0000000 0.4548682
## GrLivArea 0.2631162 0.4548682 1.0000000
None of the variables appear strongly correlated; the highest correlation (about \(0.45\)) is between GrLivArea and TotalBsmtSF.
For this analysis, we test, for each pair of variables, the null hypothesis \(H_0\) that the true correlation is \(0\) against the alternative \(H_A\) that it is not.
If the \(p\)-value \(\leq 0.05\), we reject \(H_0\) in favor of \(H_A\), concluding that there is a non-zero correlation between the variables.
Lot Area vs Total Basement Square footage
laBsf <- cor.test(matrixVals$LotArea, matrixVals$TotalBsmtSF, method = 'pearson',conf.level = 0.8)
laBsf$p.value
## [1] 3.911258e-24
Lot Area vs Living Area
laLa <- cor.test(matrixVals$LotArea, matrixVals$GrLivArea, method = 'pearson',conf.level = 0.8)
laLa$p.value
## [1] 1.520481e-24
Total Basement Square footage vs Living Area
BsfLa <- cor.test(matrixVals$TotalBsmtSF, matrixVals$GrLivArea, method = 'pearson',conf.level = 0.8)
BsfLa$p.value
## [1] 1.85787e-75
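The prompt also asks for an \(80\%\) confidence interval for each correlation. cor.test was already run with conf.level = 0.8, so the intervals can be read straight from the conf.int component of each result:
laBsf$conf.int  # 80% CI, LotArea vs TotalBsmtSF
laLa$conf.int   # 80% CI, LotArea vs GrLivArea
BsfLa$conf.int  # 80% CI, TotalBsmtSF vs GrLivArea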
In each pairwise comparison, the \(p\)-value is far below \(0.05\), which means that we can reject \(H_0\) in favor of the alternative. The variables are not independent of each other - and regardless of how slight, they are all at least somewhat correlated with each other.
Familywise error is the probability of making at least one false positive across a family of tests - in this case, reporting a correlation between variables when it in fact arose by chance. Because the \(p\)-values are many orders of magnitude below \(0.05\) (even a Bonferroni-adjusted threshold of \(0.05/3\) would not change the conclusions), I would not be worried about familywise error.
Invert your correlation matrix from above. (This is known as the precision matrix and contains variance inflation factors on the diagonal.) Multiply the correlation matrix by the precision matrix, and then multiply the precision matrix by the correlation matrix. Conduct LU decomposition on the matrix.
# inv() comes from the matlib package; pracma::inv or base R's solve() give the same result
precMatrix <- inv(corrMatrix)
precMatrix
##            [,1]       [,2]       [,3]
## [1,] 1.1041806 -0.1965150 -0.2011393
## [2,] -0.1965150 1.2958576 -0.5377382
## [3,] -0.2011393 -0.5377382 1.2975230
cp <- corrMatrix %*% precMatrix
pc <- precMatrix %*% corrMatrix
cp
##                     [,1]         [,2]         [,3]
## LotArea 1.000000e+00 1.372943e-09 1.342205e-09
## TotalBsmtSF 2.839125e-09 1.000000e+00 -2.366313e-09
## GrLivArea 3.033268e-09 -1.772131e-09 1.000000e+00
pc
## LotArea TotalBsmtSF GrLivArea
## [1,] 1.000000e+00 2.839125e-09 3.033268e-09
## [2,] 1.372943e-09 1.000000e+00 -1.772131e-09
## [3,] 1.342205e-09 -2.366313e-09 1.000000e+00
Both products are, up to floating-point rounding, the \(3 \times 3\) identity matrix. This makes sense: the precision matrix is the inverse of the correlation matrix, and a matrix commutes with its inverse, so both orders of multiplication give \(I\).
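A quick programmatic check (a sketch) confirms that both products match the identity up to floating-point error:
all.equal(cp, diag(3), check.attributes = FALSE)
all.equal(pc, diag(3), check.attributes = FALSE)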
LU Decomposition - I am shortcutting here by using the lu.decomposition function from the matrixcalc library. For a more detailed treatment of LU decomposition, see: https://rpubs.com/amberferger/DATA605_HW2_PS2
lu <- lu.decomposition(corrMatrix)
l <- lu$L
u <- lu$U
l
## [,1] [,2] [,3]
## [1,] 1.0000000 0.0000000 0
## [2,] 0.2608331 1.0000000 0
## [3,] 0.2631162 0.4144344 1
u
## [,1] [,2] [,3]
## [1,] 1 0.2608331 0.2631162
## [2,] 0 0.9319661 0.3862388
## [3,] 0 0.0000000 0.7706992
We can check that the LU decomposition worked by comparing the product \(LU\) to the original matrix:
l %*% u
## [,1] [,2] [,3]
## [1,] 1.0000000 0.2608331 0.2631162
## [2,] 0.2608331 1.0000000 0.4548682
## [3,] 0.2631162 0.4548682 1.0000000
corrMatrix
## LotArea TotalBsmtSF GrLivArea
## LotArea 1.0000000 0.2608331 0.2631162
## TotalBsmtSF 0.2608331 1.0000000 0.4548682
## GrLivArea 0.2631162 0.4548682 1.0000000
Many times, it makes sense to fit a closed form distribution to data. Select a variable in the Kaggle.com training dataset that is skewed to the right, shift it so that the minimum value is absolutely above zero if necessary. Then load the MASS package and run fitdistr to fit an exponential probability density function. (See https://stat.ethz.ch/R-manual/R-devel/library/MASS/html/fitdistr.html ). Find the optimal value of \(\lambda\) for this distribution, and then take \(1,000\) samples from this exponential distribution using this value (e.g., \(rexp(1000, \lambda)\)). Plot a histogram and compare it with a histogram of your original variable. Using the exponential pdf, find the \(5^{th}\) and \(95^{th}\) percentiles using the cumulative distribution function (CDF). Also generate a \(95\%\) confidence interval from the empirical data, assuming normality. Finally, provide the empirical \(5^{th}\) percentile and \(95^{th}\) percentile of the data. Discuss.
I will use the variable X1stFlrSF, which (as the histogram below shows) is skewed to the right.
hist(trainSet$X1stFlrSF)
Per the documentation, fitdistr returns an object of class “fitdistr”: a list with four components, estimate (the parameter estimates), sd (their estimated standard errors), vcov (the estimated variance-covariance matrix), and loglik (the log-likelihood).
exPDF <- fitdistr(trainSet$X1stFlrSF, "exponential")  # maximum-likelihood fit
lam <- as.numeric(exPDF$estimate)  # MLE of the exponential rate: 1/mean(x)
samps <- rexp(1000, lam)           # 1,000 draws from the fitted exponential
hist(samps,
main="Distribution of X1stFlrSF - Exponential")
hist(trainSet$X1stFlrSF,
main="Distribution of X1stFlrSF")
Percentiles for exponential distribution:
qexp(c(.05,0.95), rate=exPDF$estimate)
## [1] 59.63495 3482.91836
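For the exponential distribution these quantiles also have a closed form, \(q(p) = -\ln(1-p)/\lambda\), which can be verified against qexp:
# closed-form exponential quantiles
-log(1 - c(0.05, 0.95)) / lam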
Confidence Interval for Normal Distribution:
n <- nrow(trainSet)
avg <- mean(trainSet$X1stFlrSF)
sdev <- sd(trainSet$X1stFlrSF)
ser <- (pnorm(0.95)*sdev)/sqrt(n)  # NB: pnorm(0.95) ~ 0.83 is a probability, not the 95% multiplier; see the corrected sketch below
lower <- avg - ser
upper <- avg + ser
lower
## [1] 1154.24
upper
## [1] 1171.014
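As flagged in the comment above, pnorm(0.95) returns a probability (about \(0.83\)), not the normal quantile; a two-sided \(95\%\) interval should use qnorm(0.975), about \(1.96\). A corrected sketch, which widens the interval accordingly:
# corrected two-sided 95% CI for the mean, assuming normality
ser95 <- qnorm(0.975) * sdev / sqrt(n)
c(avg - ser95, avg + ser95)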
Percentiles for empirical data:
quantile(trainSet$X1stFlrSF, c(0.05,0.95))
## 5% 95%
## 672.95 1831.25
The exponential model is not a good model for the data. It gives a \(5^{th}\) percentile of about \(60\) square feet and a \(95^{th}\) percentile of about \(3483\), whereas the empirical data show a \(5^{th}\) percentile of about \(673\) and a \(95^{th}\) percentile of about \(1831\). The exponential distribution puts far too much mass near zero and has far too heavy a right tail for this variable.
Build some type of multiple regression model and submit your model to the competition board. Provide your complete model summary and results with analysis. Report your Kaggle.com user name and score.
Username: amberferger, Score: 0.20498
After a number of trials (and errors), I’ve come up with the following set of features to use:
From the dataset: GrLivArea, OverallQual, YearBuilt, LotArea, GarageCars, KitchenQual, and X2ndFlrSF.
Engineered features: RemodInd (1 if the home was ever remodeled), PropFront (lot frontage as a proportion of lot area), PropBsmtFin (proportion of the basement that is finished), funcTyp (1 if home functionality is typical), RR (1 if Condition1 is Norm), and nbrhd (1 if the home is in one of five premium neighborhoods).
multReg <- trainSet
multReg$RemodInd <- with(multReg, ifelse(YearRemodAdd==YearBuilt, 0,1))
multReg$PropFront <- with(multReg, LotFrontage/LotArea)
multReg$PropBsmtFin <- with(multReg, (TotalBsmtSF - BsmtUnfSF)/TotalBsmtSF)
multReg$funcTyp <- with(multReg, ifelse(Functional == 'Typ', 1,0))
multReg$RR <- with(multReg, ifelse(Condition1 == 'Norm', 1, 0))
multReg$nbrhd <- with(multReg, ifelse(Neighborhood == 'StoneBr' | Neighborhood == 'NridgHt' | Neighborhood == 'NoRidge' | Neighborhood == 'Somerst' | Neighborhood == 'Crawfor',1,0))
# note: ifelse() strips the factor class from KitchenQual, leaving its underlying codes
multReg$KitchenQual <- with(multReg, ifelse(is.null(KitchenQual) | is.na(KitchenQual),'',KitchenQual))
multReg <- multReg %>%
dplyr::select(SalePrice,GrLivArea, OverallQual, YearBuilt,
RemodInd, LotArea, PropFront, PropBsmtFin,
GarageCars, funcTyp, KitchenQual,
X2ndFlrSF, RR, nbrhd)
# we will replace all nulls with 0
multReg[is.na(multReg)] <- 0
multReg.lm <- lm(SalePrice ~ GrLivArea + OverallQual + YearBuilt + RemodInd + LotArea + PropFront + PropBsmtFin + GarageCars + funcTyp + KitchenQual + X2ndFlrSF + RR + nbrhd , data = multReg)
summary(multReg.lm)
##
## Call:
## lm(formula = SalePrice ~ GrLivArea + OverallQual + YearBuilt +
## RemodInd + LotArea + PropFront + PropBsmtFin + GarageCars +
## funcTyp + KitchenQual + X2ndFlrSF + RR + nbrhd, data = multReg)
##
## Residuals:
## Min 1Q Median 3Q Max
## -413865 -14881 -421 13460 266074
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -6.281e+05 8.349e+04 -7.524 9.31e-14 ***
## GrLivArea 6.688e+01 3.146e+00 21.256 < 2e-16 ***
## OverallQual 1.467e+04 1.092e+03 13.434 < 2e-16 ***
## YearBuilt 3.028e+02 4.310e+01 7.026 3.27e-12 ***
## RemodInd 8.951e+03 2.001e+03 4.472 8.34e-06 ***
## LotArea 6.496e-01 9.608e-02 6.761 1.99e-11 ***
## PropFront -6.406e+05 2.464e+05 -2.600 0.00941 **
## PropBsmtFin 2.513e+04 2.470e+03 10.175 < 2e-16 ***
## GarageCars 1.229e+04 1.593e+03 7.711 2.32e-14 ***
## funcTyp 1.759e+04 3.630e+03 4.845 1.40e-06 ***
## KitchenQual -1.305e+04 1.312e+03 -9.950 < 2e-16 ***
## X2ndFlrSF -2.184e+01 2.949e+00 -7.406 2.21e-13 ***
## RR 1.238e+04 2.612e+03 4.739 2.36e-06 ***
## nbrhd 3.022e+04 2.653e+03 11.392 < 2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 33200 on 1446 degrees of freedom
## Multiple R-squared: 0.8269, Adjusted R-squared: 0.8254
## F-statistic: 531.4 on 13 and 1446 DF, p-value: < 2.2e-16
Analysis: All thirteen predictors are statistically significant (most at \(p < 0.001\)), and the model explains roughly \(83\%\) of the variance in sale price (adjusted \(R^2 = 0.8254\)).
We will now plot the residuals:
plot(fitted(multReg.lm),resid(multReg.lm))
A good model will have a residual plot where (1) there is no clear pattern and (2) the points hover both above and below \(0\). The curved pattern in our residual plot shows that we don’t have a great model; in particular, we do not predict higher-priced homes as well.
Additionally, in a good model we expect the residuals to be normally distributed. A Q-Q plot shows how well the observations follow the theoretical line; if they fit the line well, the residuals are approximately normally distributed.
qqnorm(resid(multReg.lm))
qqline(resid(multReg.lm))
There is clear deviation from the line at both ends of the plot, which further shows that we do poorly at predicting the highest- and lowest-priced houses.
Finally, we will make predictions on our test data:
testing <- testSet
testing$RemodInd <- with(testing, ifelse(YearRemodAdd==YearBuilt, 0,1))
testing$PropFront <- with(testing, LotFrontage/LotArea)
testing$PropBsmtFin <- with(testing, (TotalBsmtSF - BsmtUnfSF)/TotalBsmtSF)
testing$funcTyp <- with(testing, ifelse(Functional == 'Typ', 1,0))
testing$RR <- with(testing, ifelse(Condition1 == 'Norm', 1, 0))
testing$nbrhd <- with(testing, ifelse(Neighborhood == 'StoneBr' | Neighborhood == 'NridgHt' | Neighborhood == 'NoRidge' | Neighborhood == 'Somerst' | Neighborhood == 'Crawfor',1,0))
# as in training, KitchenQual is reduced to its underlying codes; blank entries
# become NA under as.integer() and are replaced with 0 below
testing$KitchenQual <- with(testing, ifelse(is.null(KitchenQual) | is.na(KitchenQual),'',KitchenQual))
testing$KitchenQual <- as.integer(testing$KitchenQual)
testing <- testing %>%
dplyr::select(Id, GrLivArea, OverallQual, YearBuilt,
RemodInd, LotArea, PropFront, PropBsmtFin,
GarageCars, funcTyp, KitchenQual,
X2ndFlrSF, RR, nbrhd)
testing[is.na(testing)] <- 0
pred <- predict(multReg.lm, newdata=testing)
# assemble the submission as (Id, SalePrice); tibble() avoids the deprecated
# as_tibble() call on a bare vector
finalPred <- tibble(Id = testSet$Id, SalePrice = pred)
#write.csv(finalPred,file='finalSubmission.csv', row.names=FALSE)