Objectives

The objective of this problem set is to orient you to a number of activities in R and to conduct a thoughtful exercise in appreciating the importance of data visualization. For each question, enter the code or text response that completes/answers the activity or question in the corresponding code chunk. To submit this homework, create the document in RStudio using the knitr package (the Knit button in RStudio) and publish it to your RPubs account. Once uploaded, submit the link to that document on Canvas. Please make sure that the link is hyperlinked and that I can see both the visualization and the code required to create it. Each question is worth 5 points.

Questions

  1. Anscombe’s quartet is a set of four \(x, y\) data sets published by Francis Anscombe in his 1973 paper “Graphs in Statistical Analysis”. For this first question, load the anscombe data that is part of library(datasets) in R and assign it to a new object called data.
# Load the built-in Anscombe data and inspect the first rows
data <- anscombe
head(data)
##   x1 x2 x3 x4   y1   y2    y3   y4
## 1 10 10 10  8 8.04 9.14  7.46 6.58
## 2  8  8  8  8 6.95 8.14  6.77 5.76
## 3 13 13 13  8 7.58 8.74 12.74 7.71
## 4  9  9  9  8 8.81 8.77  7.11 8.84
## 5 11 11 11  8 8.33 9.26  7.81 8.47
## 6 14 14 14  8 9.96 8.10  8.84 7.04
  2. Summarise the data by calculating the mean and variance of each column and the correlation within each pair (e.g. x1 and y1, x2 and y2, etc.). (Hint: use the dplyr package!)
# Column means
mean1 <- colMeans(data)

# Correlation within each x/y pair
cor1 <- cor(data$x1, data$y1)
cor2 <- cor(data$x2, data$y2)
cor3 <- cor(data$x3, data$y3)
cor4 <- cor(data$x4, data$y4)

# Covariance matrix; the diagonal holds each column's variance
var1 <- var(data)
mean1
##       x1       x2       x3       x4       y1       y2       y3       y4 
## 9.000000 9.000000 9.000000 9.000000 7.500909 7.500909 7.500000 7.500909
cor1
## [1] 0.8164205
cor2
## [1] 0.8162365
cor3
## [1] 0.8162867
cor4
## [1] 0.8165214
var1
##        x1     x2     x3     x4        y1        y2       y3        y4
## x1 11.000 11.000 11.000 -5.500  5.501000  5.500000  5.49700 -2.115000
## x2 11.000 11.000 11.000 -5.500  5.501000  5.500000  5.49700 -2.115000
## x3 11.000 11.000 11.000 -5.500  5.501000  5.500000  5.49700 -2.115000
## x4 -5.500 -5.500 -5.500 11.000 -3.565000 -4.841000 -2.32100  5.499000
## y1  5.501  5.501  5.501 -3.565  4.127269  3.095609  1.93343 -2.017731
## y2  5.500  5.500  5.500 -4.841  3.095609  4.127629  2.42524 -1.972351
## y3  5.497  5.497  5.497 -2.321  1.933430  2.425240  4.12262 -0.641000
## y4 -2.115 -2.115 -2.115  5.499 -2.017731 -1.972351 -0.64100  4.123249
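The computations above use base R functions; since the hint points to dplyr, here is one possible dplyr version (a sketch, assuming dplyr >= 1.0 so that across() is available):

library(dplyr)

# Mean and variance of every column
data %>%
  summarise(across(everything(), list(mean = mean, var = var)))

# Correlation within each x/y pair
data %>%
  summarise(cor1 = cor(x1, y1), cor2 = cor(x2, y2),
            cor3 = cor(x3, y3), cor4 = cor(x4, y4))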
  3. Using ggplot, create scatter plots for each \(x, y\) pair of data (maybe use ‘facet_grid’ or ‘facet_wrap’).
library(ggplot2)

# Quick base-graphics scatter plots of each pair
# (a ggplot/facet_wrap version is sketched below)
plot(data$x1, data$y1)

plot(data$x2, data$y2)

plot(data$x3, data$y3)

plot(data$x4, data$y4)
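The base-graphics plots above work, but the question asks for ggplot. One way is to reshape the quartet into long form and then facet by set; this is a sketch assuming the tidyr package is available for pivot_longer() (the object name anscombe_long is just illustrative), with ggplot2 already loaded above:

library(tidyr)

# Reshape x1..y4 into columns set, x, y (one row per point per set)
anscombe_long <- pivot_longer(data, everything(),
                              names_to = c(".value", "set"),
                              names_pattern = "(.)(.)")

# One scatter plot per set in a single faceted figure
ggplot(anscombe_long, aes(x, y)) +
  geom_point() +
  facet_wrap(~ set)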

  4. Now change the symbols on the scatter plots to solid blue circles.
# pch = 20 draws a small solid circle; col sets its colour
plot(data$x1, data$y1, pch = 20, col = "blue")

plot(data$x2, data$y2, pch = 20, col = "blue")

plot(data$x3, data$y3, pch = 20, col = "blue")

plot(data$x4, data$y4, pch = 20, col = "blue")
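In the faceted ggplot sketch from the previous question, the same change is a single argument to geom_point() (shape 16 is a filled circle):

ggplot(anscombe_long, aes(x, y)) +
  geom_point(colour = "blue", shape = 16) +
  facet_wrap(~ set)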

  5. Now fit a linear model to each data set using the lm() function.
attach(data)

# Note: these models are written as x ~ y (x regressed on y); the
# conventional y ~ x fits are sketched after the output below.
lm1 <- lm(x1 ~ y1)
lm2 <- lm(x2 ~ y2)
lm3 <- lm(x3 ~ y3)
lm4 <- lm(x4 ~ y4)

lm1
## 
## Call:
## lm(formula = x1 ~ y1)
## 
## Coefficients:
## (Intercept)           y1  
##     -0.9975       1.3328
lm2
## 
## Call:
## lm(formula = x2 ~ y2)
## 
## Coefficients:
## (Intercept)           y2  
##     -0.9948       1.3325
lm3
## 
## Call:
## lm(formula = x3 ~ y3)
## 
## Coefficients:
## (Intercept)           y3  
##      -1.000        1.333
lm4
## 
## Call:
## lm(formula = x4 ~ y4)
## 
## Coefficients:
## (Intercept)           y4  
##      -1.004        1.334
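As noted in the code comment, the models above regress x on y. Anscombe’s original demonstration fits y on x, and in that direction all four data sets give nearly identical coefficients (intercept near 3.0, slope near 0.5). A base-R sketch of that version (fits is just an illustrative name):

# Conventional fits y1 ~ x1, y2 ~ x2, etc.; reformulate() builds each formula
fits <- lapply(1:4, function(i) {
  lm(reformulate(paste0("x", i), paste0("y", i)), data = data)
})
sapply(fits, coef)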
  6. Now combine the last two tasks. Create a four-panel scatter plot matrix that has both the data points and the regression lines. (Hint: the model objects will carry over between chunks!)
# 2 x 2 panel layout
par(mfrow = c(2, 2))

# Note: abline() expects a y ~ x fit; because lm1-lm4 were fit as x ~ y,
# the drawn lines will not track the points closely (see the y ~ x sketch
# after question 5's output).
plot(data$x1, data$y1, pch = 20, col = "blue")
abline(lm1)
plot(data$x2, data$y2, pch = 20, col = "blue")
abline(lm2)
plot(data$x3, data$y3, pch = 20, col = "blue")
abline(lm3)
plot(data$x4, data$y4, pch = 20, col = "blue")
abline(lm4)
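For comparison, a ggplot version of the same panel, reusing anscombe_long from the question 3 sketch; geom_smooth(method = "lm") draws the y ~ x regression line within each facet:

ggplot(anscombe_long, aes(x, y)) +
  geom_point(colour = "blue", shape = 16) +
  geom_smooth(method = "lm", se = FALSE) +
  facet_wrap(~ set)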

  7. Now compare the model fits for each model object.
summary(lm1)
## 
## Call:
## lm(formula = x1 ~ y1)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -2.6522 -1.5117 -0.2657  1.2341  3.8946 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)   
## (Intercept)  -0.9975     2.4344  -0.410  0.69156   
## y1            1.3328     0.3142   4.241  0.00217 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 2.019 on 9 degrees of freedom
## Multiple R-squared:  0.6665, Adjusted R-squared:  0.6295 
## F-statistic: 17.99 on 1 and 9 DF,  p-value: 0.00217
summary(lm2)
## 
## Call:
## lm(formula = x2 ~ y2)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.8516 -1.4315 -0.3440  0.8467  4.2017 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)   
## (Intercept)  -0.9948     2.4354  -0.408  0.69246   
## y2            1.3325     0.3144   4.239  0.00218 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 2.02 on 9 degrees of freedom
## Multiple R-squared:  0.6662, Adjusted R-squared:  0.6292 
## F-statistic: 17.97 on 1 and 9 DF,  p-value: 0.002179
summary(lm3)
## 
## Call:
## lm(formula = x3 ~ y3)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -2.9869 -1.3733 -0.0266  1.3200  3.2133 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)   
## (Intercept)  -1.0003     2.4362  -0.411  0.69097   
## y3            1.3334     0.3145   4.239  0.00218 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 2.019 on 9 degrees of freedom
## Multiple R-squared:  0.6663, Adjusted R-squared:  0.6292 
## F-statistic: 17.97 on 1 and 9 DF,  p-value: 0.002176
summary(lm4)
## 
## Call:
## lm(formula = x4 ~ y4)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -2.7859 -1.4122 -0.1853  1.4551  3.3329 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)   
## (Intercept)  -1.0036     2.4349  -0.412  0.68985   
## y4            1.3337     0.3143   4.243  0.00216 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 2.018 on 9 degrees of freedom
## Multiple R-squared:  0.6667, Adjusted R-squared:  0.6297 
## F-statistic: 18 on 1 and 9 DF,  p-value: 0.002165
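To put the four fits side by side, the key numbers can be pulled from the model objects in one step (a base-R sketch; models is just a convenience list):

models <- list(lm1 = lm1, lm2 = lm2, lm3 = lm3, lm4 = lm4)
sapply(models, function(m) {
  c(intercept = unname(coef(m)[1]),
    slope     = unname(coef(m)[2]),
    r.squared = summary(m)$r.squared)
})

All four models have essentially the same coefficients, standard errors, and R-squared values, even though the underlying data sets look very different when plotted.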

  8. In text, summarize the lesson of Anscombe’s Quartet and what it says about the value of data visualization.

Anscombe’s Quartet shows the importance of data visualisation when interpreting datasets. All four sets share nearly identical summary statistics (means, variances, correlations, and fitted lines), yet their scatter plots look completely different. Plotting the data lets us spot features that the summaries hide, such as outliers, non-linear relationships, and influential points.
