The attached who.csv dataset contains real-world data from 2008. The variables used in this analysis are LifeExp (average life expectancy in years), TotExp (total health expenditure), and PropMD (proportion of MDs in the population).
library(tidyverse)  # for ggplot2 and the %>% pipe
who <- read.csv("who.csv")
who %>% ggplot(aes(x = TotExp, y = LifeExp)) + geom_point() + ggtitle("Scatter Plot of LifeExp by TotExp")
base_lm <- lm(LifeExp ~ TotExp, data = who)
summary(base_lm)
##
## Call:
## lm(formula = LifeExp ~ TotExp, data = who)
##
## Residuals:
## Min 1Q Median 3Q Max
## -24.764 -4.778 3.154 7.116 13.292
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 6.475e+01 7.535e-01 85.933 < 2e-16 ***
## TotExp 6.297e-05 7.795e-06 8.079 7.71e-14 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 9.371 on 188 degrees of freedom
## Multiple R-squared: 0.2577, Adjusted R-squared: 0.2537
## F-statistic: 65.26 on 1 and 188 DF, p-value: 7.714e-14
The scatter plot shows a strongly non-linear, roughly exponential relationship between the variables, which implies that a simple linear model may not be the best fit for this dataset. The summary output agrees: the adjusted R-squared of 0.2537 means the model explains only about 25% of the variance in life expectancy, and the residual distribution is noticeably skewed (median 3.154, minimum -24.764).
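As a quick check on these concerns, R's built-in diagnostic plots can be drawn for the fitted model. This is a minimal sketch using base graphics and the base_lm object defined above.

par(mfrow = c(2, 2))  # arrange the four diagnostic panels in one window
plot(base_lm)         # residuals vs fitted, normal Q-Q, scale-location, leverage
par(mfrow = c(1, 1))  # restore the default layout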
# Transform both variables to make the relationship approximately linear
who_mod <- who
who_mod$LifeExp <- who_mod$LifeExp^4.6
who_mod$TotExp <- who_mod$TotExp^0.06
who_mod %>% ggplot(aes(x = TotExp, y = LifeExp)) + geom_point() + ggtitle("Scatter Plot of LifeExp^4.6 by TotExp^0.06")
new_lm <- lm(LifeExp ~ TotExp, data = who_mod)
summary(new_lm)
##
## Call:
## lm(formula = LifeExp ~ TotExp, data = who_mod)
##
## Residuals:
## Min 1Q Median 3Q Max
## -308616089 -53978977 13697187 59139231 211951764
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -736527910 46817945 -15.73 <2e-16 ***
## TotExp 620060216 27518940 22.53 <2e-16 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 90490000 on 188 degrees of freedom
## Multiple R-squared: 0.7298, Adjusted R-squared: 0.7283
## F-statistic: 507.7 on 1 and 188 DF, p-value: < 2.2e-16
The transformed model's adjusted R-squared of 0.7283 is a large improvement over the original model's 0.2537. It is the "better" model: raising LifeExp to the 4.6 power and TotExp to the 0.06 power puts the data on a roughly linear scale, which is what a linear model assumes.
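To see the improvement visually, the fitted line can be overlaid on the transformed scatter plot. A minimal sketch, assuming who_mod and ggplot2 as loaded above:

who_mod %>% ggplot(aes(x = TotExp, y = LifeExp)) + geom_point() +
  geom_smooth(method = "lm", se = FALSE) + ggtitle("Transformed LifeExp vs TotExp with fitted line")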
# Forecast life expectancy when TotExp^0.06 = 1.5 and when TotExp^0.06 = 2.5
new_obs_1 <- data.frame(TotExp = 1.5)
new_obs_2 <- data.frame(TotExp = 2.5)
# The model predicts LifeExp^4.6, so raise the prediction to the 1/4.6 power
# to convert back to a "normal" life expectancy in years
predict(new_lm, newdata = new_obs_1)^(1 / 4.6)
## 1
## 63.31153
predict(new_lm, newdata = new_obs_2)^(1 / 4.6)
## 1
## 86.50645
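Because new_lm was fit on transformed variables, a prediction on the original scale needs both transformations handled: TotExp must be raised to the 0.06 power going in, and the predicted LifeExp^4.6 must be raised to the 1/4.6 power coming out. The helper below is a hypothetical convenience wrapper (not part of the original analysis) that does both steps for a raw TotExp value:

# Hypothetical helper: takes a raw TotExp value, applies the TotExp^0.06
# transformation, and returns the predicted life expectancy in years
predict_life_exp <- function(model, tot_exp_raw) {
  transformed <- data.frame(TotExp = tot_exp_raw^0.06)
  predict(model, newdata = transformed)^(1 / 4.6)
}
# Example usage: predict_life_exp(new_lm, 1e6)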
# Multiple regression: main effects for PropMD and TotExp plus their interaction
multi_lm <- lm(LifeExp ~ PropMD + TotExp + (PropMD*TotExp), data = who)
summary(multi_lm)
##
## Call:
## lm(formula = LifeExp ~ PropMD + TotExp + (PropMD * TotExp), data = who)
##
## Residuals:
## Min 1Q Median 3Q Max
## -27.320 -4.132 2.098 6.540 13.074
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 6.277e+01 7.956e-01 78.899 < 2e-16 ***
## PropMD 1.497e+03 2.788e+02 5.371 2.32e-07 ***
## TotExp 7.233e-05 8.982e-06 8.053 9.39e-14 ***
## PropMD:TotExp -6.026e-03 1.472e-03 -4.093 6.35e-05 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 8.765 on 186 degrees of freedom
## Multiple R-squared: 0.3574, Adjusted R-squared: 0.3471
## F-statistic: 34.49 on 3 and 186 DF, p-value: < 2.2e-16
This multiple regression model is statistically significant (every coefficient and the overall F-test have p-values < .05), but the adjusted R-squared of 0.3471 means it explains only about 35% of the variance in life expectancy. The residual standard error is also still fairly large, so any individual prediction comes with a wide band of uncertainty. We can forecast an outcome and see how the model behaves.
# Forecast life expectancy for a country with PropMD = .03 and TotExp = 14
new_obs <- data.frame(PropMD = .03, TotExp = 14)
predict(multi_lm, newdata = new_obs)
## 1
## 107.696
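Given the large residual standard error noted above, it is worth asking for a prediction interval rather than a single point estimate. A minimal sketch using the same new_obs:

# The interval shows how wide the plausible range around the forecast is
predict(multi_lm, newdata = new_obs, interval = "prediction")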
The forecast of roughly 107 years is not realistic: life expectancy in the original dataset caps out around 85 years, and this combination of inputs (PropMD = .03 with TotExp = 14) does not resemble anything in the data the model was fit on. The model is extrapolating far beyond the observed values, so this prediction should not be trusted.
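One way to confirm that these inputs fall outside what the model has seen is to compare them with the observed ranges of the variables. A minimal sketch, assuming the column names used above:

# Observed ranges of the response and the two predictors
summary(who[, c("LifeExp", "PropMD", "TotExp")])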