path = "https://raw.githubusercontent.com/karmaggyatso/CUNY_SPS/main/data_605/week_12/who.csv"
csv_data = read.csv(path)

head(csv_data)
##               Country LifeExp InfantSurvival Under5Survival  TBFree      PropMD
## 1         Afghanistan      42          0.835          0.743 0.99769 0.000228841
## 2             Albania      71          0.985          0.983 0.99974 0.001143127
## 3             Algeria      71          0.967          0.962 0.99944 0.001060478
## 4             Andorra      82          0.997          0.996 0.99983 0.003297297
## 5              Angola      41          0.846          0.740 0.99656 0.000070400
## 6 Antigua and Barbuda      73          0.990          0.989 0.99991 0.000142857
##        PropRN PersExp GovtExp TotExp
## 1 0.000572294      20      92    112
## 2 0.004614439     169    3128   3297
## 3 0.002091362     108    5184   5292
## 4 0.003500000    2589  169725 172314
## 5 0.001146162      36    1620   1656
## 6 0.002773810     503   12543  13046
1. Provide a scatterplot of LifeExp~TotExp, and run simple linear regression. Do not transform the
variables. Provide and interpret the F statistics, R^2, standard error, and p-values only. Discuss
whether the assumptions of simple linear regression are met.
model = lm(LifeExp ~ TotExp, data = csv_data)
plot(LifeExp ~ TotExp, data = csv_data,
     ylab = "Life Expectancy", xlab = "Total Expenditure")
abline(model)

summary(model)
## 
## Call:
## lm(formula = LifeExp ~ TotExp, data = csv_data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -24.764  -4.778   3.154   7.116  13.292 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 6.475e+01  7.535e-01  85.933  < 2e-16 ***
## TotExp      6.297e-05  7.795e-06   8.079 7.71e-14 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 9.371 on 188 degrees of freedom
## Multiple R-squared:  0.2577, Adjusted R-squared:  0.2537 
## F-statistic: 65.26 on 1 and 188 DF,  p-value: 7.714e-14

F-statistic: 65.26 on 1 and 188 DF; p-value: 7.714e-14; multiple R-squared: 0.2577; residual standard error: 9.371.

The F-statistic tests the fitted model against the intercept-only (null) model, and the p-value is the probability of seeing an F-statistic at least this large if there were truly no relationship between total expenditure and life expectancy. At 7.714e-14, that probability is essentially zero, so the relationship is statistically significant. The R-squared value indicates that about 25.77% of the variation in life expectancy is explained by total expenditure, and the residual standard error of 9.371 years is large relative to the spread of life expectancies in the data. The residual summary is also noticeably skewed (median 3.15, minimum -24.76), which suggests the linearity and normality assumptions of simple linear regression are not well met without transforming the variables.
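As a quick check of those assumptions, the built-in diagnostic plots for the fitted object can be inspected (a minimal sketch using base R's plot method for lm objects; the par() calls are just a layout convenience):

par(mfrow = c(2, 2))
plot(model)  # residuals vs. fitted, normal Q-Q, scale-location, residuals vs. leverage
par(mfrow = c(1, 1))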

2. Raise life expectancy to the 4.6 power (i.e., LifeExp^4.6). Raise total expenditures to the 0.06
power (nearly a log transform, TotExp^.06). Plot LifeExp^4.6 as a function of TotExp^.06, and
re-run the simple regression model using the transformed variables. Provide and interpret the F
statistics, R^2, standard error, and p-values. Which model is "better?"
mod_2 <- lm(I(LifeExp^4.6) ~ I(TotExp^0.06), data = csv_data)
summary(mod_2)
## 
## Call:
## lm(formula = I(LifeExp^4.6) ~ I(TotExp^0.06), data = csv_data)
## 
## Residuals:
##        Min         1Q     Median         3Q        Max 
## -308616089  -53978977   13697187   59139231  211951764 
## 
## Coefficients:
##                  Estimate Std. Error t value Pr(>|t|)    
## (Intercept)    -736527910   46817945  -15.73   <2e-16 ***
## I(TotExp^0.06)  620060216   27518940   22.53   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 90490000 on 188 degrees of freedom
## Multiple R-squared:  0.7298, Adjusted R-squared:  0.7283 
## F-statistic: 507.7 on 1 and 188 DF,  p-value: < 2.2e-16
plot(I(LifeExp^4.6) ~ I(TotExp^0.06), data = csv_data,
     ylab = "Life Expectancy^4.6", xlab = "Total Expenditure^0.06")
abline(mod_2)


This transformation increases the multiple R-squared to 72.98% (adjusted 72.83%) and the F-statistic to 507.7, with a p-value below 2.2e-16, so the transformed model explains far more of the variation in life expectancy and is clearly better than the untransformed model from problem 1.
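Note that the residual standard error of the transformed model (about 9.05e+07) is on the LifeExp^4.6 scale, so it cannot be compared directly with the 9.371 years from problem 1; comparing R-squared values and residual plots on each model's own scale is the fairer check. A minimal sketch, assuming both fits are still in the workspace:

summary(model)$r.squared   # ~0.26 for the untransformed fit
summary(mod_2)$r.squared   # ~0.73 for the transformed fit
plot(mod_2, which = 1:2)   # residuals vs. fitted and normal Q-Q for mod_2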

3. Using the results from question 2, forecast life expectancy when TotExp^.06 = 1.5. Then forecast life
expectancy when TotExp^.06 = 2.5.
ForLifeExpt <- function(x) {
  # Predict LifeExp^4.6 from the transformed model, then take the 4.6th root
  # to put the forecast back on the original life-expectancy scale (years).
  y <- mod_2$coefficients[1] + mod_2$coefficients[2] * x
  y^(1 / 4.6)
}
ForLifeExpt(1.5)
## (Intercept) 
##    63.31153
ForLifeExpt(2.5)
## (Intercept) 
##    86.50645
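The same forecasts can be obtained with predict() by supplying TotExp on its original scale (1.5^(1/0.06) and 2.5^(1/0.06), so that TotExp^0.06 equals 1.5 and 2.5) and then undoing the 4.6 power; a minimal sketch:

# Raw TotExp values corresponding to the transformed inputs, then back-transform.
new_data <- data.frame(TotExp = c(1.5, 2.5)^(1 / 0.06))
predict(mod_2, newdata = new_data)^(1 / 4.6)
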
4. Build the following multiple regression model and interpret the F statistics, R^2, standard error, and p-values. How good is the model?

  LifeExp = b0 + b1 × PropMD + b2 × TotExp + b3 × PropMD × TotExp
lm_fit <- lm(LifeExp ~ PropMD + TotExp + PropMD*TotExp, data = csv_data)
summary(lm_fit)
## 
## Call:
## lm(formula = LifeExp ~ PropMD + TotExp + PropMD * TotExp, data = csv_data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -27.320  -4.132   2.098   6.540  13.074 
## 
## Coefficients:
##                 Estimate Std. Error t value Pr(>|t|)    
## (Intercept)    6.277e+01  7.956e-01  78.899  < 2e-16 ***
## PropMD         1.497e+03  2.788e+02   5.371 2.32e-07 ***
## TotExp         7.233e-05  8.982e-06   8.053 9.39e-14 ***
## PropMD:TotExp -6.026e-03  1.472e-03  -4.093 6.35e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 8.765 on 186 degrees of freedom
## Multiple R-squared:  0.3574, Adjusted R-squared:  0.3471 
## F-statistic: 34.49 on 3 and 186 DF,  p-value: < 2.2e-16

Since the p-value is well below 0.05, the model is statistically significant overall, and all three coefficients are individually significant. The R-squared value indicates that the predictors (PropMD, TotExp, and their interaction) account for about 35.74% of the variability in the dependent variable, life expectancy; that is better than the untransformed simple model but still leaves most of the variation unexplained. The F-statistic of 34.49 on 3 and 186 degrees of freedom confirms the predictors jointly improve on the intercept-only model, but the residual standard error of 8.765 years means the fit is only moderately good.
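One way to gauge how much the added terms help is a partial F-test comparing this fit to the simple LifeExp ~ TotExp model from problem 1 (a minimal sketch, assuming both model objects are still in the workspace):

# Partial F-test: do PropMD and the interaction improve on TotExp alone?
anova(model, lm_fit)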

5. Forecast LifeExp when PropMD=.03 and TotExp = 14. Does this forecast seem realistic? Why or why not?
# The interaction coefficient already carries its negative sign, so it is added, not subtracted.
(le <- lm_fit$coefficients[1] + lm_fit$coefficients[2] * 0.03 +
   lm_fit$coefficients[3] * 14 + lm_fit$coefficients[4] * 0.03 * 14)
## (Intercept) 
##    107.696

This forecast is unrealistic. A predicted average life expectancy of about 108 years is well beyond anything observed in the data, and the inputs are an implausible combination: a total expenditure of 14 is far below the values seen in the dataset, while PropMD = 0.03 is roughly an order of magnitude higher than the observed proportions of MDs, so the model is being asked to extrapolate well outside the range of the data it was fit on.
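A quick way to see why this is extrapolation is to compare the inputs against the observed ranges of the predictors (a minimal sketch):

# PropMD = 0.03 and TotExp = 14 can be checked against what the model was fit on.
summary(csv_data[, c("PropMD", "TotExp")])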