- Randomly split your data into a training set (80%) and a test set (20%)
- Build the regression model using the training set
- Make predictions using the test set and compute the model accuracy metrics
data("marketing", package = "datarium")
sample_n(marketing, 3)
library(ggplot2)
p <- ggplot(marketing) +
  geom_histogram(aes(x = sales, y = after_stat(density)),
                 binwidth = 1, fill = "grey", color = "black") +
  geom_density(aes(x = sales), color = "red")
p + theme_bw()
library(caret)
# Standardize: center each variable to mean 0 and scale to sd 1
preproc1 <- preProcess(marketing, method = c("center", "scale"))
norm1 <- predict(preproc1, marketing)
# Min-max normalization to the [0, 1] range
preproc2 <- preProcess(marketing, method = c("range"))
norm2 <- predict(preproc2, marketing)
library(corrplot)
M <- cor(norm1)             # correlation matrix
p.mat <- cor.mtest(norm1)   # p-values for each pairwise correlation
corrplot(M, type = "upper", order = "hclust",
         p.mat = p.mat$p, sig.level = 0.05)
set.seed(123)
training.samples <- createDataPartition(y = norm1$sales, p = 0.8, list = FALSE)
train.data <- norm1[training.samples, ]
test.data <- norm1[-training.samples, ]
model <- lm(sales ~ youtube + facebook + newspaper, data = train.data)
predictions <- predict(model, newdata = test.data)
library(Metrics)   # for mse(); RMSE(), R2() and MAE() come from caret
data.frame(RMSE = RMSE(predictions, test.data$sales),
           R2 = R2(predictions, test.data$sales),
           MAE = MAE(predictions, test.data$sales),
           MSE = mse(predictions, test.data$sales))
library(car)
vif(model)   # variance inflation factors
## youtube facebook newspaper
## 1.004440 1.118155 1.115449
All VIF values are close to 1, so multicollinearity is not a concern for this model.
Use the lm() function from base R with the swiss data from the datasets package.
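The call below reproduces the summary that follows; the object name model.swiss is arbitrary (the original code is not shown, but the formula matches the Call line in the output).

model.swiss <- lm(Fertility ~ ., data = swiss)   # regress Fertility on all other variables
summary(model.swiss)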
##
## Call:
## lm(formula = Fertility ~ ., data = swiss)
##
## Residuals:
## Min 1Q Median 3Q Max
## -15.2743 -5.2617 0.5032 4.1198 15.3213
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 66.91518 10.70604 6.250 1.91e-07 ***
## Agriculture -0.17211 0.07030 -2.448 0.01873 *
## Examination -0.25801 0.25388 -1.016 0.31546
## Education -0.87094 0.18303 -4.758 2.43e-05 ***
## Catholic 0.10412 0.03526 2.953 0.00519 **
## Infant.Mortality 1.07705 0.38172 2.822 0.00734 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 7.165 on 41 degrees of freedom
## Multiple R-squared: 0.7067, Adjusted R-squared: 0.671
## F-statistic: 19.76 on 5 and 41 DF, p-value: 5.594e-10
Note: about 71% of the variation in Fertility (multiple R-squared = 0.7067) is explained by this linear model.
Use the glm() function and set family = "binomial". Install the bestglm package, which provides the SAheart data.
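A minimal sketch of the fit that produces the summary below; the object name model.chd is arbitrary, but the formula matches the Call line in the output.

library(bestglm)
data(SAheart)
# Logistic regression of coronary heart disease (chd) on LDL cholesterol
model.chd <- glm(chd ~ ldl, family = binomial, data = SAheart)
summary(model.chd)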
##
## Call:
## glm(formula = chd ~ ldl, family = binomial, data = SAheart)
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -1.96867 0.27308 -7.209 5.63e-13 ***
## ldl 0.27466 0.05164 5.319 1.04e-07 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 596.11 on 461 degrees of freedom
## Residual deviance: 564.28 on 460 degrees of freedom
## AIC: 568.28
##
## Number of Fisher Scoring iterations: 4
Regularization is generally useful in the following situations:

- Large number of variables
- Low ratio of the number of observations to the number of variables
- High multicollinearity
Use the swiss data and the glmnet package (install this library if needed). Create two objects from swiss, one containing the dependent variable and the other the independent variables, as sketched below.
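A minimal sketch of the ridge fit (alpha = 0), assuming Fertility is the dependent variable. The lambda grid is an assumption, chosen so that it contains the value s = 1.584893 printed in the coefficient output below.

library(glmnet)
Y <- swiss$Fertility
X <- data.matrix(swiss[, -1])         # all columns except Fertility
lambdas <- 10^seq(2, -3, by = -0.1)   # assumed grid; 1.584893 = 10^0.2 lies on it
ridge <- glmnet(X, Y, alpha = 0, lambda = lambdas)
coef(ridge, s = 1.584893)             # coefficients at the chosen lambda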
## 6 x 1 sparse Matrix of class "dgCMatrix"
## s=1.584893
## (Intercept) 62.97585936
## Agriculture -0.09863022
## Examination -0.33967990
## Education -0.64733678
## Catholic 0.07703325
## Infant.Mortality 1.08821833
Lasso stands for Least Absolute Shrinkage and Selection Operator.

- Use the same swiss dataset and the same X and Y
- Use cv.glmnet for cross-validation
- Set standardize = TRUE (this is the default)

A sketch of the fit follows this list.
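A minimal sketch of the cross-validated lasso (alpha = 1), reusing X, Y, and the lambda grid from the ridge example. The seed is an assumption; the value s = 0.1258925 in the output below is consistent with lambda.min falling on that grid.

set.seed(123)   # assumed; the original seed is not shown
cv.lasso <- cv.glmnet(X, Y, alpha = 1, lambda = lambdas, standardize = TRUE)
coef(cv.lasso, s = cv.lasso$lambda.min)   # coefficients at the CV-selected lambda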
## 6 x 1 sparse Matrix of class "dgCMatrix"
## s=0.1258925
## (Intercept) 65.46374579
## Agriculture -0.14994107
## Examination -0.24310141
## Education -0.83632674
## Catholic 0.09913931
## Infant.Mortality 1.07238898
Note - Both ridge regression and lasso regression help deal with multicollinearity. Ridge shrinks all coefficients toward zero, while lasso can shrink some coefficients exactly to zero, performing variable selection.