Objectives

The objectives of this problem set are to orient you to a number of activities in R and to conduct a thoughtful exercise in appreciating the importance of data visualization. For each question, enter your code or text response in the code chunk that completes/answers the activity or question requested. To submit this homework you will create the document in RStudio using the knitr package (button included in RStudio) and then publish the document to your RPubs account. Once uploaded, submit the link to that document on Canvas. Please make sure that this link is hyperlinked and that I can see the visualization and the code required to create it. Each question is worth 5 points.

Questions

  1. Anscombe’s quartet is a set of four \(x, y\) datasets published by Francis Anscombe in his 1973 paper “Graphs in Statistical Analysis.” For this first question, load the anscombe data that is part of library(datasets) in R, and assign that data to a new object called data.
library(datasets)
library(dplyr)
library(ggplot2)  # used for all of the plots below
library(cowplot)  # provides plot_grid() for arranging the panels
data <- anscombe
  2. Summarise the data by calculating the mean and variance of each column, and the correlation between each pair (e.g., x1 and y1, x2 and y2, etc.). (Hint: use the dplyr package!)
summary_data <- data %>%
  summarise(
    mean_x1 = mean(x1), var_x1 = var(x1),
    mean_y1 = mean(y1), var_y1 = var(y1),
    mean_x2 = mean(x2), var_x2 = var(x2),
    mean_y2 = mean(y2), var_y2 = var(y2),
    mean_x3 = mean(x3), var_x3 = var(x3),
    mean_y3 = mean(y3), var_y3 = var(y3),
    mean_x4 = mean(x4), var_x4 = var(x4),
    mean_y4 = mean(y4), var_y4 = var(y4),
    corr_x1y1 = cor(x1, y1),
    corr_x2y2 = cor(x2, y2),
    corr_x3y3 = cor(x3, y3),
    corr_x4y4 = cor(x4, y4)
  )
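The same summaries can also be produced more compactly. This is a sketch of one alternative, assuming dplyr (>= 1.0) for across(); summary_compact is a name introduced here for illustration.

# mean and variance of every column at once
summary_compact <- data %>%
  summarise(across(everything(), list(mean = mean, var = var)))

# correlation of each x/y pair
sapply(1:4, function(i) cor(data[[paste0("x", i)]], data[[paste0("y", i)]]))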
  3. Using ggplot, create scatter plots for each \(x, y\) pair of data (maybe use ‘facet_grid’ or ‘facet_wrap’).
plot1<-ggplot(data,aes(x=x1,y=y1)) +
  geom_point()
plot2<-ggplot(data,aes(x=x2,y=y2)) +
  geom_point()
plot3<-ggplot(data,aes(x=x3,y=y3)) +
  geom_point()
plot4<-ggplot(data,aes(x=x4,y=y4)) +
  geom_point()

plot_grid(plot1, plot2, plot3, plot4)
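As the question hints, the panels can also be built with facet_wrap() by first reshaping the data to long format. A minimal sketch, assuming tidyr (>= 1.0) for pivot_longer(); anscombe_long and obs are names introduced here.

library(tidyr)

anscombe_long <- data %>%
  mutate(obs = row_number()) %>%              # keep track of each observation
  pivot_longer(-obs,
               names_to = c(".value", "set"), # split "x1" into value column "x" and set "1"
               names_pattern = "(.)(.)")

ggplot(anscombe_long, aes(x = x, y = y)) +
  geom_point() +
  facet_wrap(~ set)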

  4. Now change the symbols on the scatter plots to solid blue circles.
plot1<-ggplot(data,aes(x=x1,y=y1)) +
  geom_point(shape=21, fill="blue")
plot2<-ggplot(data,aes(x=x2,y=y2)) +
  geom_point(shape=21, fill="blue")
plot3<-ggplot(data,aes(x=x3,y=y3)) +
  geom_point(shape=21, fill="blue")
plot4<-ggplot(data,aes(x=x4,y=y4)) +
  geom_point(shape=21, fill="blue")

plot_grid(plot1, plot2, plot3, plot4)
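Note that shape 21 draws a fillable circle, so the points above have a blue fill with a black border. If a fully solid blue marker is preferred, one option (shown for the first panel only as a sketch) is the default solid circle, shape 16, coloured via colour:

plot1 <- ggplot(data, aes(x = x1, y = y1)) +
  geom_point(shape = 16, colour = "blue")  # shape 16: solid circle, no separate border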

  5. Now fit a linear model to each dataset using the lm() function.
model1 <- lm(y1 ~ x1, data=data)
model2 <- lm(y2 ~ x2, data=data)
model3 <- lm(y3 ~ x3, data=data)
model4 <- lm(y4 ~ x4, data=data)
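As a quick check (not part of the question), the fitted coefficients can be compared side by side; models is a helper name introduced here.

models <- list(model1, model2, model3, model4)
sapply(models, coef)  # all four intercepts are ~3.0 and all four slopes are ~0.5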
  6. Now combine the last two tasks: create a four-panel scatter plot matrix that has both the data points and the regression lines. (Hint: the model objects will carry over between chunks!)
plot1<-ggplot(data,aes(x=x1,y=y1)) +
  geom_point(shape=21, fill="blue") +
  geom_smooth(method='lm')
plot2<-ggplot(data,aes(x=x2,y=y2)) +
  geom_point(shape=21, fill="blue") +
  geom_smooth(method='lm')
plot3<-ggplot(data,aes(x=x3,y=y3)) +
  geom_point(shape=21, fill="blue") +
  geom_smooth(method='lm')
plot4<-ggplot(data,aes(x=x4,y=y4)) +
  geom_point(shape=21, fill="blue") +
  geom_smooth(method='lm')

plot_grid(plot1, plot2, plot3, plot4)
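Since the model objects carry over between chunks, an alternative to refitting inside geom_smooth() is to draw each line directly from the stored fit with geom_abline(). A sketch for the first panel only:

plot1 <- ggplot(data, aes(x = x1, y = y1)) +
  geom_point(shape = 21, fill = "blue") +
  geom_abline(intercept = coef(model1)[1], slope = coef(model1)[2])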

  7. Now compare the model fits for each model object.
summary(model1)

Call:
lm(formula = y1 ~ x1, data = data)

Residuals:
     Min       1Q   Median       3Q      Max 
-1.92127 -0.45577 -0.04136  0.70941  1.83882 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)   
(Intercept)   3.0001     1.1247   2.667  0.02573 * 
x1            0.5001     0.1179   4.241  0.00217 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.237 on 9 degrees of freedom
Multiple R-squared:  0.6665,    Adjusted R-squared:  0.6295 
F-statistic: 17.99 on 1 and 9 DF,  p-value: 0.00217

summary(model2)

Call:
lm(formula = y2 ~ x2, data = data)

Residuals:
    Min      1Q  Median      3Q     Max 
-1.9009 -0.7609  0.1291  0.9491  1.2691 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)   
(Intercept)    3.001      1.125   2.667  0.02576 * 
x2             0.500      0.118   4.239  0.00218 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.237 on 9 degrees of freedom
Multiple R-squared:  0.6662,    Adjusted R-squared:  0.6292 
F-statistic: 17.97 on 1 and 9 DF,  p-value: 0.002179

summary(model3)

Call:
lm(formula = y3 ~ x3, data = data)

Residuals:
    Min      1Q  Median      3Q     Max 
-1.1586 -0.6146 -0.2303  0.1540  3.2411 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)   
(Intercept)   3.0025     1.1245   2.670  0.02562 * 
x3            0.4997     0.1179   4.239  0.00218 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.236 on 9 degrees of freedom
Multiple R-squared:  0.6663,    Adjusted R-squared:  0.6292 
F-statistic: 17.97 on 1 and 9 DF,  p-value: 0.002176

summary(model4)

Call:
lm(formula = y4 ~ x4, data = data)

Residuals:
   Min     1Q Median     3Q    Max 
-1.751 -0.831  0.000  0.809  1.839 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)   
(Intercept)   3.0017     1.1239   2.671  0.02559 * 
x4            0.4999     0.1178   4.243  0.00216 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.236 on 9 degrees of freedom
Multiple R-squared:  0.6667,    Adjusted R-squared:  0.6297 
F-statistic:    18 on 1 and 9 DF,  p-value: 0.002165
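To compare the fits at a glance, the key statistics can be pulled out of each summary object; a sketch, reusing the models list idea from above:

models <- list(model1, model2, model3, model4)
sapply(models, function(m) summary(m)$r.squared)  # all roughly 0.67
sapply(models, function(m) summary(m)$sigma)      # residual standard errors, all ~1.24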

  8. In text, summarize the lesson of Anscombe’s quartet and what it says about the value of data visualization.

Viewed through the lens of descriptive statistics, the four datasets appear nearly identical: the means, variances, and correlations calculated above are essentially the same, and the four fitted models are all but indistinguishable. When the data are visualized, however, it is immediately clear that the four datasets are completely different from one another. That the model fits are so similar is especially striking given how different the four sets look. The lesson is the importance of data visualization: a plot lets us absorb patterns, outliers, and structure across many visual cues at once, information that descriptive statistics and model summaries alone cannot convey.