Data 605 Final Exam
- Youtube Link
- Libraries Used
- Problem One
- Problem Two
- Import Training Data
- Descriptive Statistics
- Visualizations
- Correlation
- Hypothesis Testing
- Feature Engineering
- Define My Random Forest Function
- Linear Algebra and Correlation
- Calculus-Based Probability & Statistics
- Create the Model and Test it on the Train Data
- I build both a Linear Model and a Random Forest Model and then I aggregate the two of them together to build a better model overall
- Testing the Model
- Variable Importance
- Displays the Most Important Variables in the Random Forest
- Score the Test Data
- Kaggle Submission with Random Forest Function and Linear Regression Function Combined
Libraries Used
library(MASS)       # fitdistr
library(Matrix)
library(matlib)
library(dplyr)
library(ggplot2)
library(tidyr)
library(knitr)      # kable
library(kableExtra)
library(purrr)
library(Hmisc)
library(caret)      # createDataPartition, used in the modeling section
Problem One
Using R, generate a random variable X that has 10,000 random uniform numbers from 1 to N, where N can be any number of your choosing greater than or equal to 6. Then generate a random variable Y that has 10,000 random normal numbers with a mean and standard deviation of \(\mu=\sigma=\frac{N+1}{2}\).
Random Variable X
N <- round(runif(1, 6, 100)) # any N >= 6, chosen at random
n <- 10000
X <- runif(n, min = 1, max = N) # 10,000 uniform numbers from 1 to N, per the prompt
hist(X)
Random Variable Y
Y <- rnorm(n, (N+1)/2, (N+1)/2) # mean and standard deviation are both (N+1)/2
hist(Y)
abline(v=(N+1)/2, col="red") # red line marks the mean
Probability
Calculate as a minimum the below probabilities a through c. Assume the small letter “x” is estimated as the median of the X variable, and the small letter “y” is estimated as the 1st quartile of the Y variable. Interpret the meaning of all probabilities.
x<-median(X)
round(x,2)
## [1] 28.31
y<-quantile(Y,0.25)[[1]]
round(y,2)
## [1] 9.22
Probability that X is greater than its median given that X is greater than the first quartile of Y
- \(P(X>x | X>y)\)
\(P(X>x \ | \ X>y) = \frac{P(X>x \ , \ X>y)}{P(X>y)}\)
Pxxandxy<-sum(X>x & X>y)/n #all the X greater than x and greater than y divided by all possible X
Pxy<-sum(X>y)/n #all x greater than y divided by all possible X
Pxxgivenxy=Pxxandxy/Pxy
round(Pxxgivenxy,2)
## [1] 0.59
Probability that X is greater than its median and Y is greater than the first quartile of Y
- \(P(X>x, Y>y)\)
Pxxyy<-(sum(X>x & Y>y))/n
round(Pxxyy,2)
## [1] 0.38
Probability that X is less than its median given that X is greater than the first quartile of Y
- \(P(X<x | X>y)\)
\(P(X<x \ | \ X>y) = \frac{P(X<x \ , \ X>y)}{P(X>y)}\)
Pxltxandxy<-sum(X<x & X>y)/n #joint probability P(X<x , X>y)
round(Pxltxandxy,2)
## [1] 0.34
Dividing the joint probability by \(P(X>y)\) (computed above as Pxy) yields the conditional probability, roughly 0.34/0.85 ≈ 0.40 for this draw:
Pxltxgivenxy<-Pxltxandxy/Pxy
round(Pxltxgivenxy,2)
Independence
Investigate whether P(X>x and Y>y)=P(X>x)P(Y>y) by building a table and evaluating the marginal and joint probabilities.
matrix<-matrix( c(sum(X>x & Y<y),sum(X>x & Y>y), sum(X<x & Y<y),sum(X<x & Y>y)), nrow = 2,ncol = 2)
matrix<-cbind(matrix,c(matrix[1,1]+matrix[1,2],matrix[2,1]+matrix[2,2]))
matrix<-rbind(matrix,c(matrix[1,1]+matrix[2,1],matrix[1,2]+matrix[2,2],matrix[1,3]+matrix[2,3]))
contingency<-as.data.frame(matrix)
names(contingency) <- c("X>x","X<x", "Total")
row.names(contingency) <- c("Y<y","Y>y", "Total")
kable(contingency) %>%
kable_styling(bootstrap_options = "bordered")
|       | X>x  | X<x  | Total |
|-------|------|------|-------|
| Y<y   | 1250 | 1250 | 2500  |
| Y>y   | 3750 | 3750 | 7500  |
| Total | 5000 | 5000 | 10000 |
prob_matrix<-matrix/matrix[3,3]
contingency_p<-as.data.frame(prob_matrix)
names(contingency_p) <- c("X>x","X<x", "Total")
row.names(contingency_p) <- c("Y<y","Y>y", "Total")
kable(round(contingency_p,2)) %>%
kable_styling(bootstrap_options = "bordered")
|       | X>x  | X<x  | Total |
|-------|------|------|-------|
| Y<y   | 0.12 | 0.12 | 0.25  |
| Y>y   | 0.38 | 0.38 | 0.75  |
| Total | 0.50 | 0.50 | 1.00  |
Compute P(X>x)P(Y>y)
prob_matrix[3,1]*prob_matrix[2,3]
## [1] 0.375
Compute P(X>x and Y>y)
round(prob_matrix[2,1],digits = 3)
## [1] 0.375
P(X>x and Y>y)=P(X>x)P(Y>y)
prob_matrix[3,1]*prob_matrix[2,3]==round(prob_matrix[2,1],digits = 3)
## [1] TRUE
Since the joint probability equals the product of the marginal probabilities, we conclude that X and Y are indeed independent.
Check to see if independence holds by using Fisher’s Exact Test and the Chi Square Test. What is the difference between the two? Which is most appropriate?
fisher.test(matrix,simulate.p.value=TRUE)
## 
##  Fisher's Exact Test for Count Data with simulated p-value (based
##  on 2000 replicates)
## 
## data:  matrix
## p-value = 1
## alternative hypothesis: two.sided
chisq.test(matrix, correct=TRUE)
## 
##  Pearson's Chi-squared test
## 
## data:  matrix
## X-squared = 0, df = 4, p-value = 1
Fisher’s Exact Test is used when you have small cell counts (fewer than 5 in a cell). The Chi-Square Test is used when the cell counts are large, so it is the more appropriate test in this case.
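One caveat: both tests above were run on the full matrix, which still contains the row and column totals (which is why the chi-square test reports df = 4 rather than df = 1 for a 2x2 table). A sketch of the corrected calls, using only the inner 2x2 table of joint counts:
inner <- matrix[1:2, 1:2] # drop the marginal totals
fisher.test(inner)
chisq.test(inner, correct = FALSE)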
Problem Two
Import Training Data
# Import training data
train <- read.csv('https://raw.githubusercontent.com/crarnouts/Data_605_Final/master/train.csv')
test <- read.csv('https://raw.githubusercontent.com/crarnouts/Data_605_Final/master/test.csv')
test$SalePrice <- 0
Descriptive Statistics
summary(train)
## Id MSSubClass MSZoning LotFrontage
## Min. : 1.0 Min. : 20.0 C (all): 10 Min. : 21.00
## 1st Qu.: 365.8 1st Qu.: 20.0 FV : 65 1st Qu.: 59.00
## Median : 730.5 Median : 50.0 RH : 16 Median : 69.00
## Mean : 730.5 Mean : 56.9 RL :1151 Mean : 70.05
## 3rd Qu.:1095.2 3rd Qu.: 70.0 RM : 218 3rd Qu.: 80.00
## Max. :1460.0 Max. :190.0 Max. :313.00
## NA's :259
## LotArea Street Alley LotShape LandContour
## Min. : 1300 Grvl: 6 Grvl: 50 IR1:484 Bnk: 63
## 1st Qu.: 7554 Pave:1454 Pave: 41 IR2: 41 HLS: 50
## Median : 9478 NA's:1369 IR3: 10 Low: 36
## Mean : 10517 Reg:925 Lvl:1311
## 3rd Qu.: 11602
## Max. :215245
##
## Utilities LotConfig LandSlope Neighborhood Condition1
## AllPub:1459 Corner : 263 Gtl:1382 NAmes :225 Norm :1260
## NoSeWa: 1 CulDSac: 94 Mod: 65 CollgCr:150 Feedr : 81
## FR2 : 47 Sev: 13 OldTown:113 Artery : 48
## FR3 : 4 Edwards:100 RRAn : 26
## Inside :1052 Somerst: 86 PosN : 19
## Gilbert: 79 RRAe : 11
## (Other):707 (Other): 15
## Condition2 BldgType HouseStyle OverallQual
## Norm :1445 1Fam :1220 1Story :726 Min. : 1.000
## Feedr : 6 2fmCon: 31 2Story :445 1st Qu.: 5.000
## Artery : 2 Duplex: 52 1.5Fin :154 Median : 6.000
## PosN : 2 Twnhs : 43 SLvl : 65 Mean : 6.099
## RRNn : 2 TwnhsE: 114 SFoyer : 37 3rd Qu.: 7.000
## PosA : 1 1.5Unf : 14 Max. :10.000
## (Other): 2 (Other): 19
## OverallCond YearBuilt YearRemodAdd RoofStyle
## Min. :1.000 Min. :1872 Min. :1950 Flat : 13
## 1st Qu.:5.000 1st Qu.:1954 1st Qu.:1967 Gable :1141
## Median :5.000 Median :1973 Median :1994 Gambrel: 11
## Mean :5.575 Mean :1971 Mean :1985 Hip : 286
## 3rd Qu.:6.000 3rd Qu.:2000 3rd Qu.:2004 Mansard: 7
## Max. :9.000 Max. :2010 Max. :2010 Shed : 2
##
## RoofMatl Exterior1st Exterior2nd MasVnrType MasVnrArea
## CompShg:1434 VinylSd:515 VinylSd:504 BrkCmn : 15 Min. : 0.0
## Tar&Grv: 11 HdBoard:222 MetalSd:214 BrkFace:445 1st Qu.: 0.0
## WdShngl: 6 MetalSd:220 HdBoard:207 None :864 Median : 0.0
## WdShake: 5 Wd Sdng:206 Wd Sdng:197 Stone :128 Mean : 103.7
## ClyTile: 1 Plywood:108 Plywood:142 NA's : 8 3rd Qu.: 166.0
## Membran: 1 CemntBd: 61 CmentBd: 60 Max. :1600.0
## (Other): 2 (Other):128 (Other):136 NA's :8
## ExterQual ExterCond Foundation BsmtQual BsmtCond BsmtExposure
## Ex: 52 Ex: 3 BrkTil:146 Ex :121 Fa : 45 Av :221
## Fa: 14 Fa: 28 CBlock:634 Fa : 35 Gd : 65 Gd :134
## Gd:488 Gd: 146 PConc :647 Gd :618 Po : 2 Mn :114
## TA:906 Po: 1 Slab : 24 TA :649 TA :1311 No :953
## TA:1282 Stone : 6 NA's: 37 NA's: 37 NA's: 38
## Wood : 3
##
## BsmtFinType1 BsmtFinSF1 BsmtFinType2 BsmtFinSF2
## ALQ :220 Min. : 0.0 ALQ : 19 Min. : 0.00
## BLQ :148 1st Qu.: 0.0 BLQ : 33 1st Qu.: 0.00
## GLQ :418 Median : 383.5 GLQ : 14 Median : 0.00
## LwQ : 74 Mean : 443.6 LwQ : 46 Mean : 46.55
## Rec :133 3rd Qu.: 712.2 Rec : 54 3rd Qu.: 0.00
## Unf :430 Max. :5644.0 Unf :1256 Max. :1474.00
## NA's: 37 NA's: 38
## BsmtUnfSF TotalBsmtSF Heating HeatingQC CentralAir
## Min. : 0.0 Min. : 0.0 Floor: 1 Ex:741 N: 95
## 1st Qu.: 223.0 1st Qu.: 795.8 GasA :1428 Fa: 49 Y:1365
## Median : 477.5 Median : 991.5 GasW : 18 Gd:241
## Mean : 567.2 Mean :1057.4 Grav : 7 Po: 1
## 3rd Qu.: 808.0 3rd Qu.:1298.2 OthW : 2 TA:428
## Max. :2336.0 Max. :6110.0 Wall : 4
##
## Electrical X1stFlrSF X2ndFlrSF LowQualFinSF
## FuseA: 94 Min. : 334 Min. : 0 Min. : 0.000
## FuseF: 27 1st Qu.: 882 1st Qu.: 0 1st Qu.: 0.000
## FuseP: 3 Median :1087 Median : 0 Median : 0.000
## Mix : 1 Mean :1163 Mean : 347 Mean : 5.845
## SBrkr:1334 3rd Qu.:1391 3rd Qu.: 728 3rd Qu.: 0.000
## NA's : 1 Max. :4692 Max. :2065 Max. :572.000
##
## GrLivArea BsmtFullBath BsmtHalfBath FullBath
## Min. : 334 Min. :0.0000 Min. :0.00000 Min. :0.000
## 1st Qu.:1130 1st Qu.:0.0000 1st Qu.:0.00000 1st Qu.:1.000
## Median :1464 Median :0.0000 Median :0.00000 Median :2.000
## Mean :1515 Mean :0.4253 Mean :0.05753 Mean :1.565
## 3rd Qu.:1777 3rd Qu.:1.0000 3rd Qu.:0.00000 3rd Qu.:2.000
## Max. :5642 Max. :3.0000 Max. :2.00000 Max. :3.000
##
## HalfBath BedroomAbvGr KitchenAbvGr KitchenQual
## Min. :0.0000 Min. :0.000 Min. :0.000 Ex:100
## 1st Qu.:0.0000 1st Qu.:2.000 1st Qu.:1.000 Fa: 39
## Median :0.0000 Median :3.000 Median :1.000 Gd:586
## Mean :0.3829 Mean :2.866 Mean :1.047 TA:735
## 3rd Qu.:1.0000 3rd Qu.:3.000 3rd Qu.:1.000
## Max. :2.0000 Max. :8.000 Max. :3.000
##
## TotRmsAbvGrd Functional Fireplaces FireplaceQu GarageType
## Min. : 2.000 Maj1: 14 Min. :0.000 Ex : 24 2Types : 6
## 1st Qu.: 5.000 Maj2: 5 1st Qu.:0.000 Fa : 33 Attchd :870
## Median : 6.000 Min1: 31 Median :1.000 Gd :380 Basment: 19
## Mean : 6.518 Min2: 34 Mean :0.613 Po : 20 BuiltIn: 88
## 3rd Qu.: 7.000 Mod : 15 3rd Qu.:1.000 TA :313 CarPort: 9
## Max. :14.000 Sev : 1 Max. :3.000 NA's:690 Detchd :387
## Typ :1360 NA's : 81
## GarageYrBlt GarageFinish GarageCars GarageArea GarageQual
## Min. :1900 Fin :352 Min. :0.000 Min. : 0.0 Ex : 3
## 1st Qu.:1961 RFn :422 1st Qu.:1.000 1st Qu.: 334.5 Fa : 48
## Median :1980 Unf :605 Median :2.000 Median : 480.0 Gd : 14
## Mean :1979 NA's: 81 Mean :1.767 Mean : 473.0 Po : 3
## 3rd Qu.:2002 3rd Qu.:2.000 3rd Qu.: 576.0 TA :1311
## Max. :2010 Max. :4.000 Max. :1418.0 NA's: 81
## NA's :81
## GarageCond PavedDrive WoodDeckSF OpenPorchSF EnclosedPorch
## Ex : 2 N: 90 Min. : 0.00 Min. : 0.00 Min. : 0.00
## Fa : 35 P: 30 1st Qu.: 0.00 1st Qu.: 0.00 1st Qu.: 0.00
## Gd : 9 Y:1340 Median : 0.00 Median : 25.00 Median : 0.00
## Po : 7 Mean : 94.24 Mean : 46.66 Mean : 21.95
## TA :1326 3rd Qu.:168.00 3rd Qu.: 68.00 3rd Qu.: 0.00
## NA's: 81 Max. :857.00 Max. :547.00 Max. :552.00
##
## X3SsnPorch ScreenPorch PoolArea PoolQC
## Min. : 0.00 Min. : 0.00 Min. : 0.000 Ex : 2
## 1st Qu.: 0.00 1st Qu.: 0.00 1st Qu.: 0.000 Fa : 2
## Median : 0.00 Median : 0.00 Median : 0.000 Gd : 3
## Mean : 3.41 Mean : 15.06 Mean : 2.759 NA's:1453
## 3rd Qu.: 0.00 3rd Qu.: 0.00 3rd Qu.: 0.000
## Max. :508.00 Max. :480.00 Max. :738.000
##
## Fence MiscFeature MiscVal MoSold
## GdPrv: 59 Gar2: 2 Min. : 0.00 Min. : 1.000
## GdWo : 54 Othr: 2 1st Qu.: 0.00 1st Qu.: 5.000
## MnPrv: 157 Shed: 49 Median : 0.00 Median : 6.000
## MnWw : 11 TenC: 1 Mean : 43.49 Mean : 6.322
## NA's :1179 NA's:1406 3rd Qu.: 0.00 3rd Qu.: 8.000
## Max. :15500.00 Max. :12.000
##
## YrSold SaleType SaleCondition SalePrice
## Min. :2006 WD :1267 Abnorml: 101 Min. : 34900
## 1st Qu.:2007 New : 122 AdjLand: 4 1st Qu.:129975
## Median :2008 COD : 43 Alloca : 12 Median :163000
## Mean :2008 ConLD : 9 Family : 20 Mean :180921
## 3rd Qu.:2009 ConLI : 5 Normal :1198 3rd Qu.:214000
## Max. :2010 ConLw : 5 Partial: 125 Max. :755000
## (Other): 9
Visualizations
Overall Quality is a very influential variable
train$OverallQual_factor <- as.factor(as.character(train$OverallQual))
ggplot(train, aes(x=OverallQual, y=SalePrice, fill=OverallQual_factor)) + geom_boxplot()
train$OverallQual_factor<-NULL
The Neighborhood the house is in is also a very important feature
ggplot(train, aes(x=Neighborhood, y=SalePrice, fill=Neighborhood)) + geom_boxplot()+ coord_flip()
Correlation
Derive a correlation matrix for any three quantitative variables in the dataset.
library(corrplot)
correlationData<-dplyr::select(train,SalePrice,LotArea,BsmtUnfSF,GarageArea,YearBuilt,OverallQual,FullBath,TotalBsmtSF,GrLivArea,Fireplaces)
correlationMatrix<-round(cor(correlationData),4)
correlationMatrix
## SalePrice LotArea BsmtUnfSF GarageArea YearBuilt OverallQual
## SalePrice 1.0000 0.2638 0.2145 0.6234 0.5229 0.7910
## LotArea 0.2638 1.0000 -0.0026 0.1804 0.0142 0.1058
## BsmtUnfSF 0.2145 -0.0026 1.0000 0.1833 0.1490 0.3082
## GarageArea 0.6234 0.1804 0.1833 1.0000 0.4790 0.5620
## YearBuilt 0.5229 0.0142 0.1490 0.4790 1.0000 0.5723
## OverallQual 0.7910 0.1058 0.3082 0.5620 0.5723 1.0000
## FullBath 0.5607 0.1260 0.2889 0.4057 0.4683 0.5506
## TotalBsmtSF 0.6136 0.2608 0.4154 0.4867 0.3915 0.5378
## GrLivArea 0.7086 0.2631 0.2403 0.4690 0.1990 0.5930
## Fireplaces 0.4669 0.2714 0.0516 0.2691 0.1477 0.3968
## FullBath TotalBsmtSF GrLivArea Fireplaces
## SalePrice 0.5607 0.6136 0.7086 0.4669
## LotArea 0.1260 0.2608 0.2631 0.2714
## BsmtUnfSF 0.2889 0.4154 0.2403 0.0516
## GarageArea 0.4057 0.4867 0.4690 0.2691
## YearBuilt 0.4683 0.3915 0.1990 0.1477
## OverallQual 0.5506 0.5378 0.5930 0.3968
## FullBath 1.0000 0.3237 0.6300 0.2437
## TotalBsmtSF 0.3237 1.0000 0.4549 0.3395
## GrLivArea 0.6300 0.4549 1.0000 0.4617
## Fireplaces 0.2437 0.3395 0.4617 1.0000
corrplot(correlationMatrix,method ="color")
As you can see above, all of the displayed variables have some degree of correlation with SalePrice.
Hypothesis Testing
Test the hypotheses that the correlations between each pairwise set of variables is 0 and provide an 80% confidence interval. Discuss the meaning of your analysis.
SalePrice vs LotArea
With such a low p-value, we are confident the correlation between these two variables is not zero, and we are 80% confident it lies between 0.232 and 0.295.
cor.test(correlationData$SalePrice,correlationData$LotArea, conf.level = 0.8)
##
## Pearson's product-moment correlation
##
## data: correlationData$SalePrice and correlationData$LotArea
## t = 10.445, df = 1458, p-value < 2.2e-16
## alternative hypothesis: true correlation is not equal to 0
## 80 percent confidence interval:
## 0.2323391 0.2947946
## sample estimates:
## cor
## 0.2638434
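The interval above comes from Fisher's z-transformation, which is what cor.test uses internally; a minimal sketch that reproduces it:
# 80% CI via Fisher's z-transformation
r <- cor(correlationData$SalePrice, correlationData$LotArea)
z <- atanh(r)                          # Fisher z
se <- 1 / sqrt(nrow(correlationData) - 3)
tanh(z + c(-1, 1) * qnorm(0.90) * se)  # back-transform to the correlation scale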
SalePrice vs Overall Quality
With such a low p-value, we are confident the correlation between these two variables is not zero, and we are 80% confident it lies between 0.778 and 0.803.
cor.test(correlationData$SalePrice,correlationData$OverallQual, conf.level = 0.8)
##
## Pearson's product-moment correlation
##
## data: correlationData$SalePrice and correlationData$OverallQual
## t = 49.364, df = 1458, p-value < 2.2e-16
## alternative hypothesis: true correlation is not equal to 0
## 80 percent confidence interval:
## 0.7780752 0.8032204
## sample estimates:
## cor
## 0.7909816
SalePrice vs Total Basement Square Feet
With such a low p-value, we are confident the correlation between these two variables is not zero, and we are 80% confident it lies between 0.592 and 0.634.
cor.test(correlationData$SalePrice,correlationData$TotalBsmtSF, conf.level = 0.8)
##
## Pearson's product-moment correlation
##
## data: correlationData$SalePrice and correlationData$TotalBsmtSF
## t = 29.671, df = 1458, p-value < 2.2e-16
## alternative hypothesis: true correlation is not equal to 0
## 80 percent confidence interval:
## 0.5922142 0.6340846
## sample estimates:
## cor
## 0.6135806
Feature Engineering
## create a neighborhood numeric variable (average SalePrice by Neighborhood)
Neighborhood_metric <- aggregate(train[, 81], list(train$Neighborhood), mean) # column 81 is SalePrice
colnames(Neighborhood_metric)<- c("Neighborhood","Neighborhood_Average")
train <- merge(train,Neighborhood_metric)
test <- merge(test,Neighborhood_metric)
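Equivalently, the same encoding could be written with the already-loaded dplyr verbs (a sketch):
# same neighborhood average using dplyr
Neighborhood_metric <- train %>%
  group_by(Neighborhood) %>%
  summarise(Neighborhood_Average = mean(SalePrice))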
Define My Random Forest Function
source("https://raw.githubusercontent.com/crarnouts/Data_605_Final/master/RandomForestNulls_testing.R")
Linear Algebra and Correlation
Invert your correlation matrix from above. (This is known as the precision matrix and contains variance inflation factors on the diagonal.) Multiply the correlation matrix by the precision matrix, and then multiply the precision matrix by the correlation matrix. Conduct LU decomposition on the matrix.
Invert your correlation matrix from above
precisionMatrix<-solve(correlationMatrix)
round(precisionMatrix,4)
## SalePrice LotArea BsmtUnfSF GarageArea YearBuilt OverallQual
## SalePrice 4.5882 -0.2837 0.4053 -0.5176 -0.5881 -1.7458
## LotArea -0.2837 1.1983 0.1099 -0.0638 0.1327 0.3149
## BsmtUnfSF 0.4053 0.1099 1.3638 0.0324 0.1786 -0.3791
## GarageArea -0.5176 -0.0638 0.0324 1.8029 -0.3953 -0.1574
## YearBuilt -0.5881 0.1327 0.1786 -0.3953 2.1279 -0.6831
## OverallQual -1.7458 0.3149 -0.3791 -0.1574 -0.6831 3.3014
## FullBath -0.0127 -0.0474 -0.3253 0.0494 -0.8342 -0.1309
## TotalBsmtSF -0.7568 -0.2538 -0.6536 -0.2450 -0.2898 -0.0257
## GrLivArea -1.4079 -0.0931 -0.0332 -0.2554 1.1299 -0.2980
## Fireplaces -0.2572 -0.1851 0.1424 0.0820 0.0725 -0.2393
## FullBath TotalBsmtSF GrLivArea Fireplaces
## SalePrice -0.0127 -0.7568 -1.4079 -0.2572
## LotArea -0.0474 -0.2538 -0.0931 -0.1851
## BsmtUnfSF -0.3253 -0.6536 -0.0332 0.1424
## GarageArea 0.0494 -0.2450 -0.2554 0.0820
## YearBuilt -0.8342 -0.2898 1.1299 0.0725
## OverallQual -0.1309 -0.0257 -0.2980 -0.2393
## FullBath 2.2242 0.3559 -1.3067 0.1379
## TotalBsmtSF 0.3559 2.0457 -0.1419 -0.1409
## GrLivArea -1.3067 -0.1419 3.1710 -0.3930
## Fireplaces 0.1379 -0.1409 -0.3930 1.4209
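As the prompt notes, the variance inflation factors sit on the diagonal of this precision matrix, so they can be read off directly:
# variance inflation factors are the diagonal entries of the precision matrix
round(diag(precisionMatrix), 4)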
Multiply the correlation matrix by the precision matrix, and then multiply the precision matrix by the correlation matrix. Both products should return the identity matrix, since the two matrices are inverses of one another.
round(correlationMatrix %*% precisionMatrix,4)
## SalePrice LotArea BsmtUnfSF GarageArea YearBuilt OverallQual
## SalePrice 1 0 0 0 0 0
## LotArea 0 1 0 0 0 0
## BsmtUnfSF 0 0 1 0 0 0
## GarageArea 0 0 0 1 0 0
## YearBuilt 0 0 0 0 1 0
## OverallQual 0 0 0 0 0 1
## FullBath 0 0 0 0 0 0
## TotalBsmtSF 0 0 0 0 0 0
## GrLivArea 0 0 0 0 0 0
## Fireplaces 0 0 0 0 0 0
## FullBath TotalBsmtSF GrLivArea Fireplaces
## SalePrice 0 0 0 0
## LotArea 0 0 0 0
## BsmtUnfSF 0 0 0 0
## GarageArea 0 0 0 0
## YearBuilt 0 0 0 0
## OverallQual 0 0 0 0
## FullBath 1 0 0 0
## TotalBsmtSF 0 1 0 0
## GrLivArea 0 0 1 0
## Fireplaces 0 0 0 1
round(precisionMatrix %*% correlationMatrix,4)
## SalePrice LotArea BsmtUnfSF GarageArea YearBuilt OverallQual
## SalePrice 1 0 0 0 0 0
## LotArea 0 1 0 0 0 0
## BsmtUnfSF 0 0 1 0 0 0
## GarageArea 0 0 0 1 0 0
## YearBuilt 0 0 0 0 1 0
## OverallQual 0 0 0 0 0 1
## FullBath 0 0 0 0 0 0
## TotalBsmtSF 0 0 0 0 0 0
## GrLivArea 0 0 0 0 0 0
## Fireplaces 0 0 0 0 0 0
## FullBath TotalBsmtSF GrLivArea Fireplaces
## SalePrice 0 0 0 0
## LotArea 0 0 0 0
## BsmtUnfSF 0 0 0 0
## GarageArea 0 0 0 0
## YearBuilt 0 0 0 0
## OverallQual 0 0 0 0
## FullBath 1 0 0 0
## TotalBsmtSF 0 1 0 0
## GrLivArea 0 0 1 0
## Fireplaces 0 0 0 1
Conduct LU decomposition on the matrix.
solve.LDU <- function(A, D = FALSE) {
  # build an identity matrix b
  b <- matrix(nrow = nrow(A), ncol = ncol(A))
  for (j in 1:ncol(A)) {
    for (i in 1:nrow(A)) {
      if (i == j) b[i, j] <- 1 else b[i, j] <- 0
    }
  }
  # alternatively b could have been defined by b <- diag(ncol(A))
  Ab <- cbind(A, b)
  # forward elimination: reduce the left half to upper triangular form,
  # accumulating the elimination steps in the right half
  for (row in 1:(nrow(Ab) - 1)) {
    col <- row
    for (next.row in (row + 1):nrow(Ab)) {
      multiplier <- Ab[next.row, col] / Ab[row, col]
      Ab[next.row, ] <- Ab[next.row, ] - (multiplier * Ab[row, ])
    }
  }
  ru <- Ab[, 1:ncol(A)]                     # U: the upper triangular factor
  rl <- solve(Ab[, (ncol(A) + 1):ncol(Ab)]) # L: inverse of the accumulated elimination matrix
  if (D) {
    # split U into D (the diagonal) and a unit upper triangular factor
    Ab <- ru
    rd <- diag(diag(ru))
    for (row in 1:nrow(Ab)) {
      Ab[row, ] <- Ab[row, ] / Ab[row, row]
    }
    rup <- Ab
    return(list(rl, rd, rup))
  } else {
    return(list(rl, ru))
  }
}
r<-solve.LDU(correlationMatrix)
L<-r[[1]]
U<-r[[2]]
L
## SalePrice LotArea BsmtUnfSF GarageArea YearBuilt OverallQual
## 1.0000 0.00000000 0.00000000 0.00000000 0.0000000 0.00000000
## 0.2638 1.00000000 0.00000000 0.00000000 0.0000000 0.00000000
## 0.2145 -0.06361188 1.00000000 0.00000000 0.0000000 0.00000000
## 0.6234 0.01713985 0.05324542 1.00000000 0.0000000 0.00000000
## 0.5229 -0.13299629 0.03048389 0.25246779 1.0000000 0.00000000
## 0.7910 -0.11055970 0.13890082 0.10457834 0.1863185 1.00000000
## 0.5607 -0.02355163 0.17599618 0.07828772 0.2312941 0.15113681
## 0.6136 0.10633201 0.30527097 0.14306396 0.0790336 0.01307083
## 0.7086 0.08186859 0.09803016 0.03450554 -0.2528671 0.18231733
## 0.4669 0.15931885 -0.04116999 -0.03685559 -0.1042188 0.20550188
## FullBath TotalBsmtSF GrLivArea Fireplaces
## 0.0000000000 0.00000000 0.0000000 0
## 0.0000000000 0.00000000 0.0000000 0
## 0.0000000000 0.00000000 0.0000000 0
## 0.0000000000 0.00000000 0.0000000 0
## 0.0000000000 0.00000000 0.0000000 0
## 0.0000000000 0.00000000 0.0000000 0
## 1.0000000000 0.00000000 0.0000000 0
## -0.1457746757 1.00000000 0.0000000 0
## 0.4056382160 0.05905009 1.0000000 0
## 0.0007037973 0.11547985 0.2765902 1
U
## SalePrice LotArea BsmtUnfSF GarageArea YearBuilt
## SalePrice 1 0.2638000 0.2145000 0.62340000 5.229000e-01
## LotArea 0 0.9304096 -0.0591851 0.01594708 -1.237410e-01
## BsmtUnfSF 0 0.0000000 0.9502249 0.05059512 2.896655e-02
## GarageArea 0 0.0000000 0.0000000 0.60840515 1.536027e-01
## YearBuilt 0 0.0000000 0.0000000 0.00000000 6.704557e-01
## OverallQual 0 0.0000000 0.0000000 0.00000000 0.000000e+00
## FullBath 0 0.0000000 0.0000000 0.00000000 0.000000e+00
## TotalBsmtSF 0 0.0000000 0.0000000 0.00000000 0.000000e+00
## GrLivArea 0 0.0000000 0.0000000 0.00000000 -2.775558e-17
## Fireplaces 0 0.0000000 0.0000000 0.00000000 7.676921e-18
## OverallQual FullBath TotalBsmtSF GrLivArea
## SalePrice 7.910000e-01 5.607000e-01 0.61360000 0.70860000
## LotArea -1.028658e-01 -2.191266e-02 0.09893232 0.07617132
## BsmtUnfSF 1.319870e-01 1.672359e-01 0.29007607 0.09315070
## GarageArea 6.362600e-02 4.763065e-02 0.08704085 0.02099335
## YearBuilt 1.249183e-01 1.550725e-01 0.05298853 -0.16953618
## OverallQual 3.146846e-01 4.756042e-02 0.00411319 0.05737245
## FullBath 6.938894e-18 6.088822e-01 -0.08875960 0.24698588
## TotalBsmtSF 1.011515e-18 0.000000e+00 0.49479061 0.02921743
## GrLivArea -2.874411e-18 0.000000e+00 0.00000000 0.32655172
## Fireplaces 6.733407e-19 5.421011e-20 0.00000000 0.00000000
## Fireplaces
## SalePrice 0.4669000000
## LotArea 0.1482317800
## BsmtUnfSF -0.0391207480
## GarageArea -0.0224231297
## YearBuilt -0.0698740624
## OverallQual 0.0646682719
## FullBath 0.0004285296
## TotalBsmtSF 0.0571383466
## GrLivArea 0.0903210201
## Fireplaces 0.7037990740
We can confirm the decomposition by comparing L %*% U to our original correlationMatrix
round(L %*% U ,4)== round(correlationMatrix,4)
## SalePrice LotArea BsmtUnfSF GarageArea YearBuilt OverallQual FullBath
## TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## TotalBsmtSF GrLivArea Fireplaces
## TRUE TRUE TRUE
## TRUE TRUE TRUE
## TRUE TRUE TRUE
## TRUE TRUE TRUE
## TRUE TRUE TRUE
## TRUE TRUE TRUE
## TRUE TRUE TRUE
## TRUE TRUE TRUE
## TRUE TRUE TRUE
## TRUE TRUE TRUE
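As a cross-check, the Matrix package loaded above provides its own LU routine; a sketch (lu() may pivot rows, so its factors can differ from the hand-rolled ones by a permutation P):
# LU decomposition via the Matrix package; expand() recovers P, L, and U
luDec <- expand(lu(Matrix(correlationMatrix)))
luDec$L
luDec$U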
Calculus-Based Probability & Statistics
Many times, it makes sense to fit a closed form distribution to data. Select a variable in the Kaggle.com training dataset that is skewed to the right, shift it so that the minimum value is absolutely above zero if necessary. Then load the MASS package and run fitdistr to fit an exponential probability density function. (See https://stat.ethz.ch/R-manual/R-devel/library/MASS/html/fitdistr.html ). Find the optimal value of λ for this distribution, and then take 1000 samples from this exponential distribution using this value (e.g., rexp(1000, λ)). Plot a histogram and compare it with a histogram of your original variable. Using the exponential pdf, find the 5th and 95th percentiles using the cumulative distribution function (CDF). Also generate a 95% confidence interval from the empirical data, assuming normality. Finally, provide the empirical 5th percentile and 95th percentile of the data. Discuss.
toFit<-train$TotalBsmtSF
min(toFit) # the minimum is 0; the exponential fit (rate = 1/mean) is still well-defined, so no shift is applied
## [1] 0
Then run fitdistr to fit an exponential probability density function.
fit <- fitdistr(toFit, "exponential")
fit
## rate
## 9.456896e-04
## (2.474983e-05)
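For an exponential distribution the maximum-likelihood estimate of the rate is simply the reciprocal of the sample mean, so the fitted λ can be verified by hand:
# the exponential MLE is rate = 1/mean, matching the fitted value above
1/mean(toFit)
## [1] 0.0009456896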
Find the optimal value of λ for this distribution, and then take 1000 samples from this exponential distribution using this value
l<-fit$estimate
sim<- rexp(1000,l)
hist(sim,breaks = 100)
hist(toFit,breaks=100)
sim.df <- data.frame(length = sim)
toFit.df <- data.frame(length = toFit)
sim.df$from <- 'sim'
toFit.df$from <- 'toFit'
both.df <- rbind(sim.df,toFit.df)
ggplot(both.df, aes(length, fill = from)) + geom_density(alpha = 0.2)
Using the exponential pdf, find the 5th and 95th percentiles using the cumulative distribution function (CDF).
quantile(sim, probs=c(0.05, 0.95))
## 5% 95%
## 54.64128 3060.00210
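Strictly speaking, the percentiles "using the CDF" are the theoretical quantiles of the fitted exponential, obtained from the inverse CDF \(F^{-1}(p) = -\ln(1-p)/\lambda\); qexp gives roughly 54 and 3168, close to the simulated values above:
# theoretical 5th and 95th percentiles of the fitted exponential
qexp(c(0.05, 0.95), rate = l)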
Also generate a 95% confidence interval from the empirical data, assuming normality.
mean(toFit)
## [1] 1057.429
normal<-rnorm(length(toFit),mean(toFit),sd(toFit))
hist(normal)
quantile(normal, probs=c(0.05, 0.95))
## 5% 95%
## 320.432 1777.749
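Note that the 5th and 95th percentiles of the simulated normal describe a 90% range of the data, not a 95% confidence interval for the mean. A minimal sketch of the latter, assuming normality:
# 95% confidence interval for the mean of toFit, assuming normality:
# mean +/- z * sd / sqrt(n)
m <- mean(toFit)
s <- sd(toFit)
m + c(-1, 1) * qnorm(0.975) * s / sqrt(length(toFit))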
normal.df <- data.frame(length = normal)
normal.df$from <- 'normal'
all.df <- rbind(both.df,normal.df)
ggplot(all.df, aes(length, fill = from)) + geom_density(alpha = 0.2)
Finally, provide the empirical 5th percentile and 95th percentile of the data. Discuss.
quantile(toFit, probs=c(0.05, 0.95))
## 5% 95%
## 519.3 1753.0
From this analysis it appears the variable selected was not very right-skewed. The exponential simulation does not match our data well; instead, the empirical data matches the normal distribution much better. This can be seen in the final density plot, and also in the percentile comparison, where the empirical limits are far closer to the normal values than to the exponential approximation.
Create the Model and Test it on the Train Data
I created a random forest function (sourced above) that builds decision trees with rpart and aggregates them into a combined score.
The arguments for the random forest function are (data_to_train, data_to_test/score, target_variable, percent_of_rows_in_each_tree, number_of_columns_in_each_tree, number_of_trees, complexity_parameter, min_bucket_size, print_every_ith_tree); an annotated call is sketched below.
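A sketch of a call with each argument labeled (this mirrors the call used on the hold-out data later in this section):
# RF_with_Nulls(train_data, hold_out_data, "SalePrice",
#               .5,    # percent of rows sampled for each tree
#               7,     # number of columns available to each tree
#               500,   # number of trees
#               .005,  # rpart complexity parameter (cp)
#               5,     # minimum bucket size
#               100)   # print every 100th tree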
I build both a Linear Model and a Random Forest Model and then I aggregate the two of them together to build a better model overall
#nums <- unlist(lapply(train, is.numeric))
#data_numeric <- train[, nums]
train$Id <- NULL
data_numeric <- train
train.index1 <- createDataPartition(data_numeric$SalePrice, p = .7, list = FALSE) # createDataPartition comes from caret
train_data <- data_numeric[train.index1,]
hold_out_data <- data_numeric[-train.index1,] # add some categorical columns to data_numeric to see how it improves the model
# note: GrLivArea appears twice in the formula; R silently drops the duplicate term.
# Also note the model is fit on the full train set, so the hold-out rows are not strictly unseen by it.
lm2<-lm(SalePrice ~ GrLivArea+BsmtFinSF1+GarageCars+GrLivArea+LotArea+Fireplaces+YearBuilt+OverallQual+BedroomAbvGr+Neighborhood_Average,data=train)
summary(lm2)
##
## Call:
## lm(formula = SalePrice ~ GrLivArea + BsmtFinSF1 + GarageCars +
## GrLivArea + LotArea + Fireplaces + YearBuilt + OverallQual +
## BedroomAbvGr + Neighborhood_Average, data = train)
##
## Residuals:
## Min 1Q Median 3Q Max
## -424796 -16942 -683 14218 271914
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -2.552e+05 8.246e+04 -3.095 0.00200 **
## GrLivArea 4.932e+01 2.955e+00 16.692 < 2e-16 ***
## BsmtFinSF1 2.150e+01 2.188e+00 9.830 < 2e-16 ***
## GarageCars 1.089e+04 1.663e+03 6.548 8.08e-11 ***
## LotArea 4.691e-01 9.782e-02 4.796 1.79e-06 ***
## Fireplaces 4.813e+03 1.668e+03 2.886 0.00397 **
## YearBuilt 8.658e+01 4.342e+01 1.994 0.04636 *
## OverallQual 1.677e+04 1.108e+03 15.135 < 2e-16 ***
## BedroomAbvGr -4.155e+03 1.419e+03 -2.927 0.00348 **
## Neighborhood_Average 3.522e-01 2.390e-02 14.736 < 2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 34610 on 1450 degrees of freedom
## Multiple R-squared: 0.8113, Adjusted R-squared: 0.8101
## F-statistic: 692.8 on 9 and 1450 DF, p-value: < 2.2e-16
hold_out_data <- RF_with_Nulls(train_data,hold_out_data,"SalePrice",.5,7,500,.005,5,100)
hold_out_data$Linear_Prediction <- predict(lm2,hold_out_data)
hold_out_data$CombinedPredictor <- (hold_out_data$Linear_Prediction+hold_out_data$prediction_overall)/2
hold_out_data$Error <- abs(hold_out_data$SalePrice - hold_out_data$prediction_overall)
hold_out_data$lm_Error <- abs(hold_out_data$SalePrice - hold_out_data$Linear_Prediction)
hold_out_data$combined_Error <- abs(hold_out_data$SalePrice - hold_out_data$CombinedPredictor)
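The combined predictor above is an equal-weight average of the two models. A hypothetical extension would tune the blend weight on the hold-out data instead of fixing it at one half:
# hypothetical weighted blend (w is a tuning knob; w = 0.5 reproduces
# the CombinedPredictor above)
w <- 0.5
blended <- w * hold_out_data$Linear_Prediction +
  (1 - w) * hold_out_data$prediction_overall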
The five trees printed above are just 5 of the 500 trees used (the function prints every 100th tree).
Testing the Model
Below you can see how taking the average score of the two models helps us make a better prediction, because the two models look at the data in completely different ways.
cor(hold_out_data$prediction_overall,hold_out_data$SalePrice)
## [1] 0.9218452
cor(hold_out_data$Linear_Prediction,hold_out_data$SalePrice)
## [1] 0.9131427
cor(hold_out_data$CombinedPredictor,hold_out_data$SalePrice)
## [1] 0.9300433
mean(hold_out_data$Error)
## [1] 25885.99
mean(hold_out_data$lm_Error)
## [1] 21574.59
mean(hold_out_data$combined_Error)
## [1] 20566.9
sd(hold_out_data$Error)
## [1] 33801.18
ggplot(hold_out_data, aes(x=Linear_Prediction, y=SalePrice)) +
geom_point(shape=1) + # Use hollow circles
geom_smooth(method=lm, # Add linear regression line
se=FALSE)
ggplot(hold_out_data, aes(x=prediction_overall, y=SalePrice)) +
geom_point(shape=1) + # Use hollow circles
geom_smooth(method=lm, # Add linear regression line
se=FALSE) # Don't add shaded confidence region
ggplot(hold_out_data, aes(x=CombinedPredictor, y=SalePrice)) +
geom_point(shape=1) + # Use hollow circles
geom_smooth(method=lm, # Add linear regression line
se=FALSE) # Don't add shaded confidence region
ggplot(hold_out_data, aes(x=SalePrice, y=Error)) +
geom_point(shape=1)
ggplot(hold_out_data, aes(x=SalePrice, y=lm_Error)) +
geom_point(shape=1)
ggplot(hold_out_data, aes(x=SalePrice, y=combined_Error)) +
geom_point(shape=1)
Variable Importance
Displays the Most Important Variables in the Random Forest
#need to sort Variable Importance and then take the top
variable_importance <-variable_importance[order(variable_importance$Importance, decreasing = TRUE),]
variable_importance2 <- filter(variable_importance,variable_importance$Importance >variable_importance$Importance[30])
# Very basic bar graph
ggplot(data=variable_importance2, aes(x=reorder(Features, -Importance), y=Importance)) +
geom_bar(stat="identity")+
coord_flip() +
theme(text = element_text(size=7))+xlab("Variables Used")
Bring in Fresh Data
train <- read.csv('https://raw.githubusercontent.com/crarnouts/Data_605_Final/master/train.csv')
test <- read.csv('https://raw.githubusercontent.com/crarnouts/Data_605_Final/master/test.csv')
test$SalePrice <- 0
Feature Engineering
##create a neighborhood numeric variable
Neighborhood_metric <- aggregate(train[, 81], list(train$Neighborhood), mean)
colnames(Neighborhood_metric)<- c("Neighborhood","Neighborhood_Average")
train <- merge(train,Neighborhood_metric)
test <- merge(test,Neighborhood_metric)
Score the Test Data
# nums <- unlist(lapply(train, is.numeric))
# train <- train[, nums]
#
# nums <- unlist(lapply(test, is.numeric))
# test <- test[, nums]
train$Id <- NULL
ID <- test %>% select(Id)
test$Id <- NULL
lm2<-lm(SalePrice ~ GrLivArea+BsmtFinSF1+GarageCars+GrLivArea+LotArea+Fireplaces+YearBuilt+OverallQual+BedroomAbvGr+Neighborhood_Average,data=train)
test2 <- RF_with_Nulls(train,test,"SalePrice",.5,6,500,.005,5,501)
test2$Linear_Prediction<-predict(lm2,test2)
test2$Id <- ID$Id # reattach the Id column saved before scoring
test2$SalePrice <- (test2$prediction_overall+test2$Linear_Prediction)/2
#
# test2$SalePrice <- test2$Linear_Prediction
test2$SalePrice[is.na(test2$SalePrice)] <- test2$prediction_overall[is.na(test2$SalePrice)] # fall back to the random forest prediction where the linear model produced NA
submission <- test2 %>% select(Id,SalePrice)
write.csv(submission,file = "submission.csv",row.names = FALSE)
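A quick sanity check (a hypothetical addition, not part of the original workflow) before uploading:
# hypothetical check: one row per test Id and no missing predictions
stopifnot(nrow(submission) == nrow(ID), !any(is.na(submission$SalePrice)))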
Kaggle Submission with Random Forest Function and Linear Regression Function Combined
(Kaggle submission score screenshot)