The movie Moneyball focuses on the “quest for the secret of success in baseball”. It follows a low-budget team, the Oakland Athletics, who believed that underused statistics, such as a player’s ability to get on base, better predict the ability to score runs than typical statistics like home runs, RBIs (runs batted in), and batting average. Obtaining players who excelled in these underused statistics turned out to be much more affordable for the team.
In this lab we’ll be looking at data from all 30 Major League Baseball teams and examining the linear relationship between runs scored in a season and a number of other player statistics. Our aim will be to summarize these relationships both graphically and numerically in order to find which variable, if any, helps us best predict a team’s runs scored in a season.
Let’s load up the data for the 2011 season.
download.file("http://www.openintro.org/stat/data/mlb11.RData", destfile = "mlb11.RData")
load("mlb11.RData")
In addition to runs scored, there are seven traditionally used variables in the data set: at-bats, hits, home runs, batting average, strikeouts, stolen bases, and wins. There are also three newer variables: on-base percentage, slugging percentage (which, unlike on-base percentage, excludes walks and weights each hit by the number of bases reached), and on-base plus slugging. For the first portion of the analysis we’ll consider the seven traditional variables. At the end of the lab, you’ll work with the newer variables on your own.
Recall that we can analyze the relationship between two numerical variables using a scatter plot. For instance, to gain insight into the relationship between hits and bat_avg we can use the plot command.
plot(x=mlb11$bat_avg, y=mlb11$hits)
Notice that the plot has messy looking labels for the horizontal and vertical axes. We could certainly add arguments to the plot command and relabel the axes. However, there is another way to construct the plot that avoids this problem. In addition, it introduces us to some of the notation we will be using later in the lab.
plot(hits ~ bat_avg, data=mlb11)
The argument hits ~ bat_avg can be read as “plot hits as a function of bat_avg”, and the second argument data=mlb11 tells R that these variables are found in the data frame mlb11. The variable bat_avg is plotted on the \(x\)-axis and is sometimes referred to as the predictor variable.
Now, what can we learn about the relationship between hits and bat_avg? First, the points in the scatterplot appear to follow a straight line, so the relationship is linear. Second, the relationship is positive: an increase in the predictor variable, bat_avg, is associated with an increase in hits. Lastly, the relationship seems fairly strong, in that the points do not deviate much from the straight-line pattern.
Now do Exercise 1.
If the relationship looks linear, we can quantify the strength of the relationship with the correlation coefficient. Try this with the hits and batting average data.
cor(mlb11$hits, mlb11$bat_avg)
Notice that the value is close to 1, indicating a strong, positive linear relationship. Now look at the correlation coefficient for runs and at-bats. Notice that the value is still positive, but not nearly as close to 1. This matches our visual observation that the linear relationship is positive, but not as strong.
cor(mlb11$runs, mlb11$at_bats)
Think back to the way that we described the distribution of a single variable. Recall that we discussed characteristics such as center, spread, and shape. It’s also useful to be able to describe the relationship between two numerical variables, such as runs and at_bats above.
Just as we used the mean and standard deviation to summarize a single variable, we can summarize the relationship between these two variables by finding the line that best follows their association.
Think about it this way: no line can pass perfectly through all the points. Imagine finding the line that minimizes the sum of all vertical distances between the data points and the line. We’re almost doing that here, except that we minimize the sum of the squares of those distances (similar to a variance calculation). For that reason, the “best fit” line is called a least squares line.
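To make the sum-of-squares idea concrete, here is a small optional sketch (not part of the lab’s required code). The intercept and slope below are arbitrary guesses; the least squares line is the particular choice of intercept and slope that makes this sum as small as possible.
# Optional illustration: sum of squared vertical distances for a guessed line.
# The intercept (b0) and slope (b1) are arbitrary values, not the best fit.
b0 <- -2000
b1 <- 0.5
predicted <- b0 + b1 * mlb11$at_bats
sum((mlb11$runs - predicted)^2)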
It is rather cumbersome to try to find the correct least squares line, i.e. the line that minimizes the sum of squared differences, through trial and error. Instead we can use the lm function in R to fit the linear model (a.k.a. regression line).
m_atbats <- lm(runs ~ at_bats, data = mlb11)
The first argument in the function lm is a formula that takes the form y ~ x. Here it can be read that we want to make a linear model of runs as a function of at_bats. (In other words, for a given number of at_bats, how many runs would we predict?) The second argument specifies that R should look in the mlb11 data frame to find the runs and at_bats variables.
The output of lm is an object that contains all of the information we need about the linear model that was just fit. We are storing that information in a variable called m_atbats. We can call it anything we want, but the m here is for model and atbats because we’re using the at_bats variable as the predictor of runs. As you work through the rest of this lab, pay attention to your variable names. Don’t overwrite existing variables, and choose names that help you remember what information is stored.
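If you are curious about what is stored in the model object, you can take an optional peek before moving on:
class(m_atbats)   # an object of class "lm"
names(m_atbats)   # components stored in the fitted model, e.g. coefficients and residuals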
We can access the information from the linear model using the summary function.
summary(m_atbats)
Enter the above command into the console and see what it produces.
Let’s consider the output piece by piece. First, the formula used to describe the model is shown at the top. After the formula you find the five-number summary of the residuals. The “Coefficients” table shown next is key; its first column displays the linear model’s y-intercept and the coefficient of at_bats. With this table, we can write down the least squares regression line for the linear model:
\[\hat{y} = -2789.2429 + 0.6305 \cdot \text{at_bats}\]
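If you prefer to pull these estimates out of the model object directly rather than reading them off the summary, the coef function returns the intercept and slope used in the equation above (the printed values may show more decimal places):
coef(m_atbats)   # intercept approximately -2789.24, slope approximately 0.63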
One last piece of information we will discuss from the summary output is the Multiple R-squared value, or more simply, the \(R^2\) value. The \(R^2\) value represents the proportion of variability in the response variable that is explained by the explanatory variable. A value close to 1 corresponds to a strong linear relationship. For this model, 37.3% of the variability in runs is explained by at-bats. We can interpret this to mean that while it has some predictive value, the at_bats variable is not a very strong predictor of runs.
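For a simple linear regression with a single predictor, the Multiple R-squared is just the square of the correlation coefficient we computed earlier, so you can check the 37.3% figure yourself:
cor(mlb11$runs, mlb11$at_bats)^2   # should match the Multiple R-squared, about 0.373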
Now do Exercise 2.
Let’s create a scatterplot with the least squares line laid on top.
plot(runs ~ at_bats, data=mlb11)
abline(m_atbats)
The function abline plots a line based on its slope and intercept. Here, we used a shortcut by providing the model m_atbats that we defined earlier. We saw in the summary that m_atbats contains estimates for both the slope and intercept. This line can be used to predict \(y\) at any value of \(x\). When predictions are made for values of \(x\) that are beyond the range of the observed data, it is referred to as extrapolation and is not usually recommended. However, predictions made within the range of the data are more reliable. They’re also used to compute the residuals.
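As an optional sketch of a prediction within the range of the data (the at-bats value below is an arbitrary example, chosen only because it falls inside the observed range of team at-bats), the predict function returns the value of \(\hat{y}\) on the line, and the residuals function returns the observed-minus-predicted differences for the teams in the data:
# Example only: 5500 is an arbitrary at-bats value inside the observed range.
predict(m_atbats, newdata = data.frame(at_bats = 5500))
head(residuals(m_atbats))   # residuals: observed runs minus predicted runs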
Now do Exercise 3.
Now do Exercise 4.
Now do Exercise 5.
Now do Exercise 6.
Now do Exercise 7.
This is a modified version of a product of OpenIntro that is released under a Creative Commons Attribution-ShareAlike 3.0 Unported license. This lab was adapted for OpenIntro by Andrew Bray and Mine Çetinkaya-Rundel from a lab written by the faculty and TAs of UCLA Statistics.