YouTube Link
Calculate, at a minimum, probabilities (a) through (c) below. Assume the lowercase "x" is estimated as the median of the X variable and the lowercase "y" as the 1st quartile of the Y variable. Interpret the meaning of all probabilities.
Generate random variables X and Y
set.seed(1)
mu = sigma = (10 + 1)/2
df = data.frame(X = runif(10000, min = 1, max = 10),
                Y = rnorm(10000, mean = mu, sd = sigma))
summary(df$X)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 1.001 3.273 5.462 5.502 7.812 9.999
summary(df$Y)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## -18.165 1.732 5.342 5.412 9.063 26.004
Probability:
## Given that:
n <- 10000
N <- 10
x <- median(df$X)
y <- as.numeric(quantile(df$Y)["25%"])
x
## [1] 5.461671
y
## [1] 1.731765
a. \(P(X>x \mid X>y) = \frac{P(X>x,\, X>y)}{P(X>y)}\)
P_Xx_and_Xy <- df %>%
  filter(X > x,
         X > y) %>%
  nrow() / n
P_Xy <- df %>%
  filter(X > y) %>%
  nrow() / n
ans_1a <- P_Xx_and_Xy / P_Xy
ans_1a
## [1] 0.5465676
Interpretation: given that X is greater than the 1st quartile of Y, there is about a 54.7% chance that X is also greater than its own median.
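The same probability can be checked without dplyr; a one-line base-R sketch using the df, x, and y objects defined above:
# P(X > x | X > y): among rows where X exceeds y, the share where X also exceeds x
mean(df$X[df$X > y] > x)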
b. \(P(X>x,\, Y>y)\)
ans_1b <- df %>%
  filter(X > x,
         Y > y) %>%
  nrow() / n
ans_1b
## [1] 0.3754
Interpretation: there is about a 37.5% chance that X exceeds its median and, at the same time, Y exceeds its 1st quartile.
c. \(P(X<x \mid X>y) = \frac{P(X<x,\, X>y)}{P(X>y)}\)
P_Xsx_and_Xgy <- df %>%
  filter(X < x,
         X > y) %>%
  nrow() / n
P_Xgy <- df %>%
  filter(X > y) %>%
  nrow() / n
ans_1c <- P_Xsx_and_Xgy / P_Xgy
ans_1c
## [1] 0.4534324
Interpretation: given that X is greater than the 1st quartile of Y, there is about a 45.3% chance that X is below its own median (the complement of part a).
Investigate whether \(P(X>x \text{ and } Y>y) = P(X>x)P(Y>y)\) by building a table and evaluating the marginal and joint probabilities.
# Create Joint Probabilities
temp <- df %>%
  mutate(A = ifelse(X > x, " X > x", " X <= x"),
         B = ifelse(Y > y, " Y > y", " Y <= y")) %>%
  group_by(A, B) %>%
  summarise(count = n()) %>%
  mutate(probability = count / n)
# Create Marginal Probabilities
temp <- temp %>%
  ungroup() %>%
  group_by(A) %>%
  summarise(count = sum(count),
            probability = sum(probability)) %>%
  mutate(B = "Total") %>%
  bind_rows(temp)
temp <- temp %>%
  ungroup() %>%
  group_by(B) %>%
  summarise(count = sum(count),
            probability = sum(probability)) %>%
  mutate(A = "Total") %>%
  bind_rows(temp)
# Create Table
temp %>%
  select(-count) %>%
  spread(A, probability) %>%
  rename(" " = B) %>%
  kable() %>%
  kable_styling()
| | X <= x | X > x | Total |
|---|---|---|---|
| Y <= y | 0.1254 | 0.1246 | 0.25 |
| Y > y | 0.3746 | 0.3754 | 0.75 |
| Total | 0.5000 | 0.5000 | 1.00 |
From the table, \(P(X>x \text{ and } Y>y) = 0.3754\), while \(P(X>x)P(Y>y) = 0.5 \times 0.75 = 0.375\). The two values are nearly identical, which suggests the two events are independent.
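The same comparison can also be made directly, without building the table; a minimal sketch using the df, x, and y objects from above:
# Joint probability vs. product of the marginals
p_joint   <- mean(df$X > x & df$Y > y)
p_product <- mean(df$X > x) * mean(df$Y > y)
c(joint = p_joint, product = p_product)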
Check to see if independence holds by using Fisher’s Exact Test and the Chi Square Test. What is the difference between the two? Which is most appropriate?
count_data <- temp %>%
  filter(A != "Total",
         B != "Total") %>%
  select(-probability) %>%
  spread(A, count) %>%
  as.data.frame()
row.names(count_data) <- count_data$B
count_data <- count_data %>%
  select(-B) %>%
  as.matrix()
fisher.test(count_data)
## Fisher's Exact Test for Count Data
## data: count_data
## p-value = 0.8716
## alternative hypothesis: true odds ratio is not equal to 1
## 95 percent confidence interval:
## 0.9202847 1.1052820
## sample estimates:
## odds ratio
## 1.00857
chisq.test(count_data)
## Pearson's Chi-squared test with Yates' continuity correction
## data: count_data
## X-squared = 0.026133, df = 1, p-value = 0.8716
Fisher’s Exact Test is typically used when expected cell counts are small (below 5), while the Chi-Square Test is appropriate when cell counts are large. Given the large counts in this table, the Chi-Square Test is the more appropriate choice. Both tests return a p-value of 0.8716, so we fail to reject the null hypothesis; independence appears to hold.
5 points. Descriptive and Inferential Statistics. Provide univariate descriptive statistics and appropriate plots for the training data set. Provide a scatterplot matrix for at least two of the independent variables and the dependent variable. Derive a correlation matrix for any THREE quantitative variables in the dataset. Test the hypotheses that the correlations between each pairwise set of variables are 0 and provide an 80% confidence interval. Discuss the meaning of your analysis. Would you be worried about familywise error? Why or why not?
Load Data
# Load training data from GitHub
train <- read.csv('https://raw.githubusercontent.com/weizhou2273/DATA605/master/hw/data/train.csv')
test <- read.csv('https://raw.githubusercontent.com/weizhou2273/DATA605/master/hw/data/test.csv')
Provide univariate descriptive statistics and appropriate plots for the training data set.
summary(train)
## Id MSSubClass MSZoning LotFrontage
## Min. : 1.0 Min. : 20.0 C (all): 10 Min. : 21.00
## 1st Qu.: 365.8 1st Qu.: 20.0 FV : 65 1st Qu.: 59.00
## Median : 730.5 Median : 50.0 RH : 16 Median : 69.00
## Mean : 730.5 Mean : 56.9 RL :1151 Mean : 70.05
## 3rd Qu.:1095.2 3rd Qu.: 70.0 RM : 218 3rd Qu.: 80.00
## Max. :1460.0 Max. :190.0 Max. :313.00
## NA's :259
## LotArea Street Alley LotShape LandContour
## Min. : 1300 Grvl: 6 Grvl: 50 IR1:484 Bnk: 63
## 1st Qu.: 7554 Pave:1454 Pave: 41 IR2: 41 HLS: 50
## Median : 9478 NA's:1369 IR3: 10 Low: 36
## Mean : 10517 Reg:925 Lvl:1311
## 3rd Qu.: 11602
## Max. :215245
##
## Utilities LotConfig LandSlope Neighborhood Condition1
## AllPub:1459 Corner : 263 Gtl:1382 NAmes :225 Norm :1260
## NoSeWa: 1 CulDSac: 94 Mod: 65 CollgCr:150 Feedr : 81
## FR2 : 47 Sev: 13 OldTown:113 Artery : 48
## FR3 : 4 Edwards:100 RRAn : 26
## Inside :1052 Somerst: 86 PosN : 19
## Gilbert: 79 RRAe : 11
## (Other):707 (Other): 15
## Condition2 BldgType HouseStyle OverallQual
## Norm :1445 1Fam :1220 1Story :726 Min. : 1.000
## Feedr : 6 2fmCon: 31 2Story :445 1st Qu.: 5.000
## Artery : 2 Duplex: 52 1.5Fin :154 Median : 6.000
## PosN : 2 Twnhs : 43 SLvl : 65 Mean : 6.099
## RRNn : 2 TwnhsE: 114 SFoyer : 37 3rd Qu.: 7.000
## PosA : 1 1.5Unf : 14 Max. :10.000
## (Other): 2 (Other): 19
## OverallCond YearBuilt YearRemodAdd RoofStyle
## Min. :1.000 Min. :1872 Min. :1950 Flat : 13
## 1st Qu.:5.000 1st Qu.:1954 1st Qu.:1967 Gable :1141
## Median :5.000 Median :1973 Median :1994 Gambrel: 11
## Mean :5.575 Mean :1971 Mean :1985 Hip : 286
## 3rd Qu.:6.000 3rd Qu.:2000 3rd Qu.:2004 Mansard: 7
## Max. :9.000 Max. :2010 Max. :2010 Shed : 2
##
## RoofMatl Exterior1st Exterior2nd MasVnrType MasVnrArea
## CompShg:1434 VinylSd:515 VinylSd:504 BrkCmn : 15 Min. : 0.0
## Tar&Grv: 11 HdBoard:222 MetalSd:214 BrkFace:445 1st Qu.: 0.0
## WdShngl: 6 MetalSd:220 HdBoard:207 None :864 Median : 0.0
## WdShake: 5 Wd Sdng:206 Wd Sdng:197 Stone :128 Mean : 103.7
## ClyTile: 1 Plywood:108 Plywood:142 NA's : 8 3rd Qu.: 166.0
## Membran: 1 CemntBd: 61 CmentBd: 60 Max. :1600.0
## (Other): 2 (Other):128 (Other):136 NA's :8
## ExterQual ExterCond Foundation BsmtQual BsmtCond BsmtExposure
## Ex: 52 Ex: 3 BrkTil:146 Ex :121 Fa : 45 Av :221
## Fa: 14 Fa: 28 CBlock:634 Fa : 35 Gd : 65 Gd :134
## Gd:488 Gd: 146 PConc :647 Gd :618 Po : 2 Mn :114
## TA:906 Po: 1 Slab : 24 TA :649 TA :1311 No :953
## TA:1282 Stone : 6 NA's: 37 NA's: 37 NA's: 38
## Wood : 3
##
## BsmtFinType1 BsmtFinSF1 BsmtFinType2 BsmtFinSF2
## ALQ :220 Min. : 0.0 ALQ : 19 Min. : 0.00
## BLQ :148 1st Qu.: 0.0 BLQ : 33 1st Qu.: 0.00
## GLQ :418 Median : 383.5 GLQ : 14 Median : 0.00
## LwQ : 74 Mean : 443.6 LwQ : 46 Mean : 46.55
## Rec :133 3rd Qu.: 712.2 Rec : 54 3rd Qu.: 0.00
## Unf :430 Max. :5644.0 Unf :1256 Max. :1474.00
## NA's: 37 NA's: 38
## BsmtUnfSF TotalBsmtSF Heating HeatingQC CentralAir
## Min. : 0.0 Min. : 0.0 Floor: 1 Ex:741 N: 95
## 1st Qu.: 223.0 1st Qu.: 795.8 GasA :1428 Fa: 49 Y:1365
## Median : 477.5 Median : 991.5 GasW : 18 Gd:241
## Mean : 567.2 Mean :1057.4 Grav : 7 Po: 1
## 3rd Qu.: 808.0 3rd Qu.:1298.2 OthW : 2 TA:428
## Max. :2336.0 Max. :6110.0 Wall : 4
##
## Electrical X1stFlrSF X2ndFlrSF LowQualFinSF
## FuseA: 94 Min. : 334 Min. : 0 Min. : 0.000
## FuseF: 27 1st Qu.: 882 1st Qu.: 0 1st Qu.: 0.000
## FuseP: 3 Median :1087 Median : 0 Median : 0.000
## Mix : 1 Mean :1163 Mean : 347 Mean : 5.845
## SBrkr:1334 3rd Qu.:1391 3rd Qu.: 728 3rd Qu.: 0.000
## NA's : 1 Max. :4692 Max. :2065 Max. :572.000
##
## GrLivArea BsmtFullBath BsmtHalfBath FullBath
## Min. : 334 Min. :0.0000 Min. :0.00000 Min. :0.000
## 1st Qu.:1130 1st Qu.:0.0000 1st Qu.:0.00000 1st Qu.:1.000
## Median :1464 Median :0.0000 Median :0.00000 Median :2.000
## Mean :1515 Mean :0.4253 Mean :0.05753 Mean :1.565
## 3rd Qu.:1777 3rd Qu.:1.0000 3rd Qu.:0.00000 3rd Qu.:2.000
## Max. :5642 Max. :3.0000 Max. :2.00000 Max. :3.000
##
## HalfBath BedroomAbvGr KitchenAbvGr KitchenQual
## Min. :0.0000 Min. :0.000 Min. :0.000 Ex:100
## 1st Qu.:0.0000 1st Qu.:2.000 1st Qu.:1.000 Fa: 39
## Median :0.0000 Median :3.000 Median :1.000 Gd:586
## Mean :0.3829 Mean :2.866 Mean :1.047 TA:735
## 3rd Qu.:1.0000 3rd Qu.:3.000 3rd Qu.:1.000
## Max. :2.0000 Max. :8.000 Max. :3.000
##
## TotRmsAbvGrd Functional Fireplaces FireplaceQu GarageType
## Min. : 2.000 Maj1: 14 Min. :0.000 Ex : 24 2Types : 6
## 1st Qu.: 5.000 Maj2: 5 1st Qu.:0.000 Fa : 33 Attchd :870
## Median : 6.000 Min1: 31 Median :1.000 Gd :380 Basment: 19
## Mean : 6.518 Min2: 34 Mean :0.613 Po : 20 BuiltIn: 88
## 3rd Qu.: 7.000 Mod : 15 3rd Qu.:1.000 TA :313 CarPort: 9
## Max. :14.000 Sev : 1 Max. :3.000 NA's:690 Detchd :387
## Typ :1360 NA's : 81
## GarageYrBlt GarageFinish GarageCars GarageArea GarageQual
## Min. :1900 Fin :352 Min. :0.000 Min. : 0.0 Ex : 3
## 1st Qu.:1961 RFn :422 1st Qu.:1.000 1st Qu.: 334.5 Fa : 48
## Median :1980 Unf :605 Median :2.000 Median : 480.0 Gd : 14
## Mean :1979 NA's: 81 Mean :1.767 Mean : 473.0 Po : 3
## 3rd Qu.:2002 3rd Qu.:2.000 3rd Qu.: 576.0 TA :1311
## Max. :2010 Max. :4.000 Max. :1418.0 NA's: 81
## NA's :81
## GarageCond PavedDrive WoodDeckSF OpenPorchSF EnclosedPorch
## Ex : 2 N: 90 Min. : 0.00 Min. : 0.00 Min. : 0.00
## Fa : 35 P: 30 1st Qu.: 0.00 1st Qu.: 0.00 1st Qu.: 0.00
## Gd : 9 Y:1340 Median : 0.00 Median : 25.00 Median : 0.00
## Po : 7 Mean : 94.24 Mean : 46.66 Mean : 21.95
## TA :1326 3rd Qu.:168.00 3rd Qu.: 68.00 3rd Qu.: 0.00
## NA's: 81 Max. :857.00 Max. :547.00 Max. :552.00
##
## X3SsnPorch ScreenPorch PoolArea PoolQC
## Min. : 0.00 Min. : 0.00 Min. : 0.000 Ex : 2
## 1st Qu.: 0.00 1st Qu.: 0.00 1st Qu.: 0.000 Fa : 2
## Median : 0.00 Median : 0.00 Median : 0.000 Gd : 3
## Mean : 3.41 Mean : 15.06 Mean : 2.759 NA's:1453
## 3rd Qu.: 0.00 3rd Qu.: 0.00 3rd Qu.: 0.000
## Max. :508.00 Max. :480.00 Max. :738.000
##
## Fence MiscFeature MiscVal MoSold
## GdPrv: 59 Gar2: 2 Min. : 0.00 Min. : 1.000
## GdWo : 54 Othr: 2 1st Qu.: 0.00 1st Qu.: 5.000
## MnPrv: 157 Shed: 49 Median : 0.00 Median : 6.000
## MnWw : 11 TenC: 1 Mean : 43.49 Mean : 6.322
## NA's :1179 NA's:1406 3rd Qu.: 0.00 3rd Qu.: 8.000
## Max. :15500.00 Max. :12.000
##
## YrSold SaleType SaleCondition SalePrice
## Min. :2006 WD :1267 Abnorml: 101 Min. : 34900
## 1st Qu.:2007 New : 122 AdjLand: 4 1st Qu.:129975
## Median :2008 COD : 43 Alloca : 12 Median :163000
## Mean :2008 ConLD : 9 Family : 20 Mean :180921
## 3rd Qu.:2009 ConLI : 5 Normal :1198 3rd Qu.:214000
## Max. :2010 ConLw : 5 Partial: 125 Max. :755000
## (Other): 9
Provide a scatterplot matrix for at least two of the independent variables and the dependent variable.
train %>%
  select(LotArea, GrLivArea, BedroomAbvGr, SalePrice) %>%
  pairs()
Derive a correlation matrix for any three quantitative variables in the dataset.
correlation_matrix <- train %>%
  select(LotArea, GrLivArea, BedroomAbvGr) %>%
  cor() %>%
  as.matrix()
correlation_matrix %>%
  kable() %>%
  kable_styling()
| | LotArea | GrLivArea | BedroomAbvGr |
|---|---|---|---|
| LotArea | 1.0000000 | 0.2631162 | 0.1196899 |
| GrLivArea | 0.2631162 | 1.0000000 | 0.5212695 |
| BedroomAbvGr | 0.1196899 | 0.5212695 | 1.0000000 |
Test the hypothesis that the correlation between each pairwise set of variables is 0 and provide an 80% confidence interval.
zero_vars <- 0
n_tests <- 0  # counter for tests run; `test` is reserved for the test data set
variables <- train %>%
  select(-SalePrice) %>%
  names()
for(variable in variables){
  d <- train[, names(train) == variable]
  if(is.numeric(d)){
    n_tests <- n_tests + 1
    # 80% confidence interval for the correlation with SalePrice
    results <- cor.test(train$SalePrice, d, conf.level = 0.8)
    # "Yes" = the interval contains zero, so we cannot reject rho = 0
    if(results$conf.int[1] < 0 & results$conf.int[2] > 0){
      hypothesis_test_results <- "Yes"
      zero_vars <- zero_vars + 1
    } else {
      hypothesis_test_results <- "No"
    }
    print(paste(variable, ': test result', hypothesis_test_results))
  }
}
## [1] "Id : test result Yes"
## [1] "MSSubClass : test result No"
## [1] "LotFrontage : test result No"
## [1] "LotArea : test result No"
## [1] "OverallQual : test result No"
## [1] "OverallCond : test result No"
## [1] "YearBuilt : test result No"
## [1] "YearRemodAdd : test result No"
## [1] "MasVnrArea : test result No"
## [1] "BsmtFinSF1 : test result No"
## [1] "BsmtFinSF2 : test result Yes"
## [1] "BsmtUnfSF : test result No"
## [1] "TotalBsmtSF : test result No"
## [1] "X1stFlrSF : test result No"
## [1] "X2ndFlrSF : test result No"
## [1] "LowQualFinSF : test result Yes"
## [1] "GrLivArea : test result No"
## [1] "BsmtFullBath : test result No"
## [1] "BsmtHalfBath : test result Yes"
## [1] "FullBath : test result No"
## [1] "HalfBath : test result No"
## [1] "BedroomAbvGr : test result No"
## [1] "KitchenAbvGr : test result No"
## [1] "TotRmsAbvGrd : test result No"
## [1] "Fireplaces : test result No"
## [1] "GarageYrBlt : test result No"
## [1] "GarageCars : test result No"
## [1] "GarageArea : test result No"
## [1] "WoodDeckSF : test result No"
## [1] "OpenPorchSF : test result No"
## [1] "EnclosedPorch : test result No"
## [1] "X3SsnPorch : test result No"
## [1] "ScreenPorch : test result No"
## [1] "PoolArea : test result No"
## [1] "MiscVal : test result Yes"
## [1] "MoSold : test result No"
## [1] "YrSold : test result Yes"
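The loop above tests each numeric variable against SalePrice. The pairwise tests among the three variables from the correlation matrix can be run the same way; a sketch, with conf.level = 0.8 giving the requested 80% intervals:
# Pairwise correlation tests with 80% confidence intervals
cor.test(train$LotArea, train$GrLivArea, conf.level = 0.8)
cor.test(train$LotArea, train$BedroomAbvGr, conf.level = 0.8)
cor.test(train$GrLivArea, train$BedroomAbvGr, conf.level = 0.8)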
Discuss the meaning of your analysis. Would you be worried about familywise error? Why or why not?
Six variables have 80% confidence intervals that contain zero, meaning we fail to reject the null hypothesis that their correlation with sale price is zero: Id, BsmtFinSF2, LowQualFinSF, BsmtHalfBath, MiscVal, and YrSold.
I would be worried about family-wise error. Across \(m\) independent tests at significance level \(\alpha\), the family-wise error rate is \({FWER} = 1 - (1 - \alpha)^m\). Here 37 correlations were tested at \(\alpha = 0.2\) (the complement of the 80% confidence level), so \({FWER} = 1 - (1 - 0.2)^{37} \approx 0.9997\). In other words, the probability of committing at least one Type I error is extremely high.
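A one-line check of this number, with alpha and m as described above:
# FWER = 1 - (1 - alpha)^m for m independent tests at level alpha
alpha <- 0.2
m <- 37
1 - (1 - alpha)^m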
Invert your correlation matrix from above. (This is known as the precision matrix and contains variance inflation factors on the diagonal.)
# solve() returns the matrix inverse in base R
precision_matrix <- solve(correlation_matrix)
precision_matrix %>%
kable() %>%
kable_styling()
| | LotArea | GrLivArea | BedroomAbvGr |
|---|---|---|---|
| LotArea | 1.0748631 | -0.2962500 | 0.0257758 |
| GrLivArea | -0.2962500 | 1.4547532 | -0.7228604 |
| BedroomAbvGr | 0.0257758 | -0.7228604 | 1.3737200 |
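As the prompt notes, the diagonal of the precision matrix holds the variance inflation factors, so they can be read off directly:
# Variance inflation factors for LotArea, GrLivArea, and BedroomAbvGr
diag(precision_matrix)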
Multiply the correlation matrix by the precision matrix, and then multiply the precision matrix by the correlation matrix.
This will generate the identity matrix.
correlation_matrix %*% precision_matrix %>%
round() %>%
kable() %>%
kable_styling()
| | LotArea | GrLivArea | BedroomAbvGr |
|---|---|---|---|
| LotArea | 1 | 0 | 0 |
| GrLivArea | 0 | 1 | 0 |
| BedroomAbvGr | 0 | 0 | 1 |
This will too.
precision_matrix %*% correlation_matrix %>%
round() %>%
kable() %>%
kable_styling()
| | LotArea | GrLivArea | BedroomAbvGr |
|---|---|---|---|
| LotArea | 1 | 0 | 0 |
| GrLivArea | 0 | 1 | 0 |
| BedroomAbvGr | 0 | 0 | 1 |
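A programmatic confirmation of both products; all.equal tolerates floating-point noise, and check.attributes ignores the dimnames:
# Both products should equal the 3x3 identity up to rounding error
all.equal(correlation_matrix %*% precision_matrix, diag(3), check.attributes = FALSE)
all.equal(precision_matrix %*% correlation_matrix, diag(3), check.attributes = FALSE)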
Conduct LU decomposition on the matrix.
# lu() is from the Matrix package; expand() splits the result into P, L, and U
lu_decomposition <- Matrix::expand(Matrix::lu(correlation_matrix))
Multiplying the two triangular factors of the LU decomposition should reproduce the correlation matrix. In other words, this:
A <- lu_decomposition$L %*% lu_decomposition$U %>%
  as.matrix()
colnames(A) <- colnames(correlation_matrix)
rownames(A) <- rownames(correlation_matrix)
A %>%
  kable() %>%
  kable_styling()
| | LotArea | GrLivArea | BedroomAbvGr |
|---|---|---|---|
| LotArea | 1.0000000 | 0.2631162 | 0.1196899 |
| GrLivArea | 0.2631162 | 1.0000000 | 0.5212695 |
| BedroomAbvGr | 0.1196899 | 0.5212695 | 1.0000000 |
Should match this:
correlation_matrix %>%
  kable() %>%
  kable_styling()
| | LotArea | GrLivArea | BedroomAbvGr |
|---|---|---|---|
| LotArea | 1.0000000 | 0.2631162 | 0.1196899 |
| GrLivArea | 0.2631162 | 1.0000000 | 0.5212695 |
| BedroomAbvGr | 0.1196899 | 0.5212695 | 1.0000000 |
Which it does.
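One caveat: Matrix::expand() actually returns three factors, P, L, and U, where P is a permutation matrix. For this correlation matrix the permutation is trivial, which is why L %*% U alone reproduces the input; a sketch of the general reconstruction, which includes P:
# General reconstruction from an LU factorization: A = P %*% L %*% U
with(lu_decomposition, as.matrix(P %*% L %*% U))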
Many times, it makes sense to fit a closed form distribution to data. Select a variable in the Kaggle.com training dataset that is skewed to the right, shift it so that the minimum value is absolutely above zero if necessary. Then load the MASS package and run fitdistr to fit an exponential probability density function. (See https://stat.ethz.ch/R-manual/R-devel/library/MASS/html/fitdistr.html). Find the optimal value of \(\lambda\) for this distribution, and then take 1000 samples from this exponential distribution using this value (e.g., rexp(1000, \(\lambda\))).
I will be using the square footage of the unfinished basement (BsmtUnfSF) for this part of the exam. I know it is right-skewed because the mean (567.24) is larger than the median (477.5).
The minimum (0) is not below zero, so there is no need to shift the data (the exponential fit handles zeros). The maximum value is 2336, and the MASS package is already loaded.
lambda <- fitdistr(train$BsmtUnfSF, densfun = "exponential")$estimate
The optimal value for \(\lambda\) is 0.0017629.
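As a sanity check, the maximum-likelihood estimate of \(\lambda\) for an exponential distribution has the closed form \(\hat{\lambda} = 1/\bar{x}\), so fitdistr's result can be reproduced directly:
# Exponential MLE: lambda_hat = 1 / sample mean
1 / mean(train$BsmtUnfSF)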
samples <- rexp(1000, rate = lambda)
Plot a histogram and compare it with a histogram of your original variable.
The original data is not quite exponential, but it’s not a bad approximation. The exponential distribution data has a longer tail than the basement square footage data.
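The plotting code is not shown above; a minimal base-R sketch of the side-by-side comparison would be:
# Compare the observed variable with samples from the fitted exponential
par(mfrow = c(1, 2))
hist(train$BsmtUnfSF, breaks = 30, main = "BsmtUnfSF (observed)", xlab = "Square feet")
hist(samples, breaks = 30, main = "rexp(1000, lambda)", xlab = "Simulated value")
par(mfrow = c(1, 1))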
Using the exponential pdf, find the 5th and 95th percentiles using the cumulative distribution function (CDF).
The PDF is \(f(x;\lambda) = \lambda e^{-\lambda x}\) for \(x \geq 0\) and zero otherwise.
The CDF is \(F(x;\lambda) = 1 - e^{-\lambda x}\), with \(\lambda = 0.0017629\). To find the 5th percentile we solve for \(x\) in:
\[0.05 = 1 - e^{-0.0017629 x}\]
\[e^{-0.0017629 x} = 0.95\]
\[-\ln(0.95) = 0.0017629 x\]
\[x = \frac{-\ln(0.95)}{0.0017629} = 29.0956294\]
For the 95th percentile we solve for \(x\) in:
\[0.95 = 1 - e^{-0.0017629 x}\]
\[e^{-0.0017629 x} = 0.05\]
\[-\ln(0.05) = 0.0017629 x\]
\[x = \frac{-\ln(0.05)}{0.0017629} = 1699.300406\]
So the 5th and 95th percentiles are approximately 29 and 1699, respectively.
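The hand calculation can be checked against R's built-in exponential quantile function:
# 5th and 95th percentiles of the fitted exponential distribution
qexp(c(0.05, 0.95), rate = lambda)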
Also generate a 95% confidence interval from the empirical data, assuming normality.
mu <- mean(train$BsmtUnfSF)
s <- sd(train$BsmtUnfSF)
n <- nrow(train)
error <- qnorm(0.975) * s / sqrt(n)
ci <- c(mu - error, mu + error)
names(ci) <- c("lower", "upper")
ci
##    lower    upper
## 544.5750 589.9058
The 95% confidence interval for the mean (assuming normality) is 544.6 to 589.9. However, I suspect what was actually meant is: assume the data are normally distributed and calculate the 5th and 95th percentiles. Those are found with \(x = \mu + Z \sigma\), where \(Z = -1.645\) for the 5th percentile and \(Z = 1.645\) for the 95th, giving percentiles of -160 and 1294.
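A quick sketch of that percentile calculation, reusing mu and s from the confidence-interval chunk above:
# 5th and 95th percentiles under a normal model: x = mu + Z * sigma
qnorm(c(0.05, 0.95), mean = mu, sd = s)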
Finally, provide the empirical 5th percentile and 95th percentile of the data. Discuss.
quantile(train$BsmtUnfSF, c(0.05, 0.95))
##   5%  95%
##    0 1468
The actual 5th percentile is 0 and the 95th is 1468. So the findings are summarized in the following table:
| Method | 5% | 95% |
|---|---|---|
| Exponential CDF | 29 | 1699 |
| Normal 95% CI | 545 | 590 |
| Normal Percentiles | -160 | 1294 |
| Empirical Percentiles | 0 | 1468 |
If we model the data as exponentially distributed, the 5th percentile is 29. If we model it as normally distributed, the 5th percentile is -160, which makes no sense in the context of square footage. The actual 5th percentile is 0. The difference is explained by the assumed shape/distribution of the underlying data.
Looking at the 95th percentile, we have 1699 if the data are exponentially distributed, 1294 if normally distributed, and 1468 in reality. Again, the difference is due to the assumed shape.
I have left the 95% CI out of this comparison because a confidence interval estimates the population mean, not percentiles of the data: if we repeated the sampling many times, roughly 95% of the resulting intervals would contain the true mean. For comparing percentiles it is not meaningful.
Build some type of multiple regression model and submit your model to the competition board. Provide your complete model summary and results with analysis. Report your Kaggle.com user name and score.
Since there are so many variables, I will use a random forest to identify the most important ones and build the model from those (see the sketch below). Then I will fill in missing values in the numeric variables with the training-set means.
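The importance step itself is not shown in the code below; a minimal sketch of how it could be done with the randomForest package (an assumed choice; any importance measure would serve):
# Rank numeric predictors by random-forest importance (rows with NAs dropped)
library(randomForest)
rf_data <- na.omit(train[sapply(train, is.numeric)])
rf <- randomForest(SalePrice ~ ., data = rf_data, importance = TRUE)
imp <- importance(rf)
# Top ten variables by mean decrease in accuracy (%IncMSE)
head(imp[order(imp[, "%IncMSE"], decreasing = TRUE), ], 10)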
# Fill missing numeric values with means taken from the training set,
# so that train and test are imputed consistently.
fill_holes <- function(df){
  df %>%
    mutate(BedroomAbvGr = replace_na(BedroomAbvGr, mean(train$BedroomAbvGr, na.rm = TRUE)),
           BsmtFullBath = replace_na(BsmtFullBath, mean(train$BsmtFullBath, na.rm = TRUE)),
           BsmtHalfBath = replace_na(BsmtHalfBath, mean(train$BsmtHalfBath, na.rm = TRUE)),
           BsmtUnfSF = replace_na(BsmtUnfSF, mean(train$BsmtUnfSF, na.rm = TRUE)),
           EnclosedPorch = replace_na(EnclosedPorch, mean(train$EnclosedPorch, na.rm = TRUE)),
           Fireplaces = replace_na(Fireplaces, mean(train$Fireplaces, na.rm = TRUE)),
           GarageArea = replace_na(GarageArea, mean(train$GarageArea, na.rm = TRUE)),
           GarageCars = replace_na(GarageCars, mean(train$GarageCars, na.rm = TRUE)),
           HalfBath = replace_na(HalfBath, mean(train$HalfBath, na.rm = TRUE)),
           KitchenAbvGr = replace_na(KitchenAbvGr, mean(train$KitchenAbvGr, na.rm = TRUE)),
           LotFrontage = replace_na(LotFrontage, mean(train$LotFrontage, na.rm = TRUE)),
           OpenPorchSF = replace_na(OpenPorchSF, mean(train$OpenPorchSF, na.rm = TRUE)),
           PoolArea = replace_na(PoolArea, mean(train$PoolArea, na.rm = TRUE)),
           ScreenPorch = replace_na(ScreenPorch, mean(train$ScreenPorch, na.rm = TRUE)),
           TotRmsAbvGrd = replace_na(TotRmsAbvGrd, mean(train$TotRmsAbvGrd, na.rm = TRUE)),
           WoodDeckSF = replace_na(WoodDeckSF, mean(train$WoodDeckSF, na.rm = TRUE)))
}
train <- fill_holes(train)
# Re-read the test set and fill its gaps with the training-set means
test <- read.csv('https://raw.githubusercontent.com/weizhou2273/DATA605/master/hw/data/test.csv')
test <- fill_holes(test)
model <- lm(SalePrice ~ OverallQual + YearBuilt + YearRemodAdd + TotalBsmtSF + X1stFlrSF + GrLivArea + FullBath + TotRmsAbvGrd + GarageCars + GarageArea, data = train)
summary(model)
##
## Call:
## lm(formula = SalePrice ~ OverallQual + YearBuilt + YearRemodAdd +
## TotalBsmtSF + X1stFlrSF + GrLivArea + FullBath + TotRmsAbvGrd +
## GarageCars + GarageArea, data = train)
##
## Residuals:
## Min 1Q Median 3Q Max
## -489958 -19316 -1948 16020 290558
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -1.186e+06 1.291e+05 -9.187 < 2e-16 ***
## OverallQual 1.960e+04 1.190e+03 16.472 < 2e-16 ***
## YearBuilt 2.682e+02 5.035e+01 5.328 1.15e-07 ***
## YearRemodAdd 2.965e+02 6.363e+01 4.659 3.47e-06 ***
## TotalBsmtSF 1.986e+01 4.295e+00 4.625 4.09e-06 ***
## X1stFlrSF 1.417e+01 4.930e+00 2.875 0.004097 **
## GrLivArea 5.130e+01 4.233e+00 12.119 < 2e-16 ***
## FullBath -6.791e+03 2.682e+03 -2.532 0.011457 *
## TotRmsAbvGrd 3.310e+01 1.119e+03 0.030 0.976404
## GarageCars 1.042e+04 3.044e+03 3.422 0.000639 ***
## GarageArea 1.495e+01 1.031e+01 1.450 0.147384
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 37920 on 1449 degrees of freedom
## Multiple R-squared: 0.7737, Adjusted R-squared: 0.7721
## F-statistic: 495.4 on 10 and 1449 DF, p-value: < 2.2e-16
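All predictors except TotRmsAbvGrd and GarageArea are significant at the 5% level, and the model explains roughly 77% of the variance in sale price (adjusted R-squared = 0.7721). A quick residual check, a standard sketch using base R's plot method for lm objects, helps assess the regression assumptions:
# Residuals vs. fitted values and a normal Q-Q plot of the residuals
par(mfrow = c(1, 2))
plot(model, which = 1)
plot(model, which = 2)
par(mfrow = c(1, 1))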
mySalePrice <- predict(model, test)
# Create the submission data frame; clip negative predictions and fill NAs with 0
prediction <- data.frame(Id = test[, "Id"], SalePrice = mySalePrice)
prediction[prediction < 0] <- 0
prediction <- replace(prediction, is.na(prediction), 0)
head(prediction)
## Id SalePrice
## 1 1461 110135.9
## 2 1462 159060.0
## 3 1463 169683.7
## 4 1464 188059.7
## 5 1465 219782.0
## 6 1466 182152.0
write.csv(prediction, file="prediction.csv", row.names = FALSE)
Username: WeiZhou2273CUNY, Score: 0.73044