You may be familiar with modeling the equation of a line as:

\(y = mx + b\)

where \(y\) is our dependent variable, \(x\) is our independent variable, \(m\) is the slope of the line, and \(b\) represents the y-intercept (the y-value when x = 0).

Linear regression takes the same form and can be modeled by the equation below:

\(y = \beta_{1}x + \beta_{0}\)

Here \(\beta_{0}\) plays the role of the intercept \(b\) and \(\beta_{1}\) the role of the slope \(m\). We can use linear regression when we are trying to predict the outcome of a numerical variable using one or more other variables, which we call predictor variables.
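
To see the connection between the two equations concretely, here is a minimal sketch with made-up data (not part of the mtcars example below): fitting a line with R's lm() function recovers a known slope (\(\beta_{1}\)) and intercept (\(\beta_{0}\)).

# Simulate points along the line y = 2x + 1 with a little noise
set.seed(1)
x <- 1:20
y <- 2 * x + 1 + rnorm(20, sd = 0.5)

# Fit a linear regression; coef() returns the intercept and slope
fit <- lm(y ~ x)
coef(fit)  # (Intercept) should be close to 1; the x coefficient close to 2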

Let’s take a look at an example from the ‘mtcars’ dataset.

# Load the necessary libraries
library(tidyverse)
library(caret)

# Load the mtcars dataset
data("mtcars")

# Summary statistics
summary(mtcars)
      mpg             cyl             disp             hp             drat             wt             qsec      
 Min.   :10.40   Min.   :4.000   Min.   : 71.1   Min.   : 52.0   Min.   :2.760   Min.   :1.513   Min.   :14.50  
 1st Qu.:15.43   1st Qu.:4.000   1st Qu.:120.8   1st Qu.: 96.5   1st Qu.:3.080   1st Qu.:2.581   1st Qu.:16.89  
 Median :19.20   Median :6.000   Median :196.3   Median :123.0   Median :3.695   Median :3.325   Median :17.71  
 Mean   :20.09   Mean   :6.188   Mean   :230.7   Mean   :146.7   Mean   :3.597   Mean   :3.217   Mean   :17.85  
 3rd Qu.:22.80   3rd Qu.:8.000   3rd Qu.:326.0   3rd Qu.:180.0   3rd Qu.:3.920   3rd Qu.:3.610   3rd Qu.:18.90  
 Max.   :33.90   Max.   :8.000   Max.   :472.0   Max.   :335.0   Max.   :4.930   Max.   :5.424   Max.   :22.90  
       vs               am              gear            carb      
 Min.   :0.0000   Min.   :0.0000   Min.   :3.000   Min.   :1.000  
 1st Qu.:0.0000   1st Qu.:0.0000   1st Qu.:3.000   1st Qu.:2.000  
 Median :0.0000   Median :0.0000   Median :4.000   Median :2.000  
 Mean   :0.4375   Mean   :0.4062   Mean   :3.688   Mean   :2.812  
 3rd Qu.:1.0000   3rd Qu.:1.0000   3rd Qu.:4.000   3rd Qu.:4.000  
 Max.   :1.0000   Max.   :1.0000   Max.   :5.000   Max.   :8.000  
# Convert categorical columns (coded as numbers) to factors so R treats them as categories
mtcars <- mtcars %>%
  mutate(cyl = as.factor(cyl),
         gear = as.factor(gear),
         vs = as.factor(vs),
         am = as.factor(am),
         carb = as.factor(carb))

# Rename columns for better readability
mtcars <- mtcars %>%
  rename(MilesPerGallon = mpg,
         Cylinders = cyl,
         Displacement = disp,
         HorsePower = hp,
         RearAxleRatio = drat,
         Weight = wt,
         QuarterMileTime = qsec,
         VEngine = vs,
         Transmission = am,
         Gears = gear,
         Carburetors = carb)


Let’s visualize some of the data.


# Boxplot of MilesPerGallon by Cylinders
ggplot(mtcars, aes(x = Cylinders, y = MilesPerGallon)) +
  geom_boxplot(fill = "lightblue", color = "black") +
  theme_minimal() +
  labs(title = "Miles Per Gallon by Number of Cylinders")


# Scatter plot of Weight vs. MilesPerGallon
ggplot(mtcars, aes(x = Weight, y = MilesPerGallon)) +
  geom_point(color = "blue") +
  geom_smooth(method = "lm", se = FALSE, color = "red") +
  theme_minimal() +
  labs(title = "Weight vs. Miles Per Gallon")

It appears that both the weight of a car and the number of cylinders it has can affect its miles per gallon. Let’s see if we can use one of these variables in our model. Before we create a linear regression model, however, we need to split our data into “train” and “test” sets. We build the model using 80% of the data and then “test” it on the remaining 20%, which the model has never seen. This tells us how good our model is at making predictions on new data.

# Split the data into training and testing sets
set.seed(123) #This ensures that anyone who runs this will grab the same random values
trainIndex <- createDataPartition(mtcars$MilesPerGallon, p = 0.8, 
                                  list = FALSE, 
                                  times = 1)
mtcarsTrain <- mtcars[ trainIndex,]
mtcarsTest  <- mtcars[-trainIndex,]
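
As a quick sanity check (a small addition to the original walkthrough), we can confirm the split sizes and see that createDataPartition() keeps the outcome's distribution similar in both sets:

# Roughly 80% / 20% of the 32 cars
nrow(mtcarsTrain)
nrow(mtcarsTest)

# Mean MPG should be similar in both sets, since the partition is
# stratified on the outcome
mean(mtcarsTrain$MilesPerGallon)
mean(mtcarsTest$MilesPerGallon)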

We can now build a simple linear regression model that tries to predict miles per gallon from weight alone. We use the lm() function in R to do this.

# Linear regression model to predict MilesPerGallon
model <- lm(MilesPerGallon ~ Weight, data = mtcarsTrain)
summary(model)

Call:
lm(formula = MilesPerGallon ~ Weight, data = mtcarsTrain)

Residuals:
   Min     1Q Median     3Q    Max 
-3.890 -2.163 -0.091  1.361  7.140 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  36.2505     1.7925  20.223  < 2e-16 ***
Weight       -4.9957     0.5249  -9.516 5.89e-10 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 2.75 on 26 degrees of freedom
Multiple R-squared:  0.7769,    Adjusted R-squared:  0.7684 
F-statistic: 90.56 on 1 and 26 DF,  p-value: 5.889e-10

Whoa! This is a lot of information. Fortunately, all we really need to look at are the R-squared (Multiple) value and the p-values.

Our Multiple R-squared value is 0.7769. This means that the weight of a car accounts for about 78% of the variation in its miles per gallon!

Furthermore, since the p-value for Weight is well below 0.05, we can conclude that weight is a statistically significant predictor of miles per gallon.
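
To make the fitted equation concrete, the coefficients above give us \(\beta_{0} \approx 36.25\) and \(\beta_{1} \approx -5.00\), i.e. roughly MilesPerGallon = 36.25 − 5.00 × Weight (weight is in thousands of pounds in mtcars). As a short sketch (the example weight of 3.0 is an arbitrary illustrative choice), here are two equivalent ways to get a prediction:

# Predict MPG for a 3,000-lb car by hand from the coefficients...
coef(model)["(Intercept)"] + coef(model)["Weight"] * 3.0

# ...or let predict() do the same arithmetic
predict(model, newdata = data.frame(Weight = 3.0))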

Let’s see if we can improve our model by accounting for an additional variable: can combined information about the weight AND the number of cylinders do a better job of predicting miles per gallon?

model <- lm(MilesPerGallon ~ Weight + Cylinders, data = mtcarsTrain)
summary(model)

Call:
lm(formula = MilesPerGallon ~ Weight + Cylinders, data = mtcarsTrain)

Residuals:
    Min      1Q  Median      3Q     Max 
-4.0100 -1.1761 -0.4717  1.2757  6.0409 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  33.4080     1.8921  17.657 2.97e-15 ***
Weight       -3.2040     0.7448  -4.302 0.000245 ***
Cylinders6   -3.7579     1.3745  -2.734 0.011564 *  
Cylinders8   -5.1500     1.6681  -3.087 0.005038 ** 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 2.38 on 24 degrees of freedom
Multiple R-squared:  0.8459,    Adjusted R-squared:  0.8266 
F-statistic: 43.91 on 3 and 24 DF,  p-value: 6.699e-10

It appears that this information has improved our model! Our R-squared value (we will now use the “Adjusted” value since we are using more than one predictor) is 0.8266 - this means that weight and cylinders together account for about 83% of the variation in miles per gallon. The significance codes next to the Pr(>|t|) values also show that each of these predictors is statistically significant.
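
Notice that the single Cylinders variable appears as two coefficients, Cylinders6 and Cylinders8. Because we converted cyl to a factor earlier, R dummy-codes it with 4 cylinders as the baseline, so each coefficient is the estimated MPG difference from a comparable 4-cylinder car. A quick look at the design matrix (an extra illustration, not part of the original output) shows the 0/1 columns R builds:

# model.matrix() reveals the dummy columns behind the factor;
# rows with Cylinders6 = Cylinders8 = 0 are the baseline 4-cylinder cars
head(model.matrix(model))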

We can now test our model on our “test” data and see how close its predictions come to the actual values.


mtcarsTest <- mtcarsTest %>%
  mutate(PredictedMPG = predict(model, newdata = mtcarsTest))

# Predicted vs Actual plot
ggplot(mtcarsTest, aes(x = MilesPerGallon, y = PredictedMPG)) +
  geom_point(color = "blue") +
  geom_abline(slope = 1, intercept = 0, color = "red") +
  theme_minimal() +
  labs(title = "Predicted vs Actual Miles Per Gallon")


# Create a summary table
summary_table <- mtcarsTest %>%
  select(MilesPerGallon, PredictedMPG) %>%
  summary()

print(summary_table)
 MilesPerGallon   PredictedMPG  
 Min.   :14.30   Min.   :16.82  
 1st Qu.:15.43   1st Qu.:17.78  
 Median :18.40   Median :19.27  
 Mean   :21.25   Mean   :20.72  
 3rd Qu.:24.23   3rd Qu.:22.21  
 Max.   :33.90   Max.   :27.53  
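
The summary table compares overall distributions, but it does not measure the error on each individual car. As one extra check (an addition beyond the original write-up), caret's postResample() helper reports RMSE, R-squared, and MAE on the held-out data:

# RMSE, R-squared, and MAE of the predictions on the test set;
# RMSE is in the same units as the outcome (miles per gallon)
postResample(pred = mtcarsTest$PredictedMPG, obs = mtcarsTest$MilesPerGallon)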