Objectives

The objective of this problem set is to orient you to a number of activities in R and to work through a thoughtful exercise in appreciating the importance of data visualization. For each question, enter the code or text response that completes the activity or answers the question in the corresponding code chunk. To submit this homework, create the document in RStudio, knit it with the knitr package (the Knit button in RStudio), and publish the result to your RPubs account. Once it is uploaded, submit the link to that document on Canvas. Please make sure that the link is hyperlinked and that both the visualizations and the code required to create them are visible. Each question is worth 5 points.

Questions

  1. Anscombe’s quartet is a set of four \(x, y\) data sets published by Francis Anscombe in his 1973 paper “Graphs in Statistical Analysis”. For this first question, load the anscombe data set that ships with the datasets package in R and assign it to a new object called data.
library("datasets")

View(anscombe)
data <- anscombe
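
Note that View() opens the data viewer interactively in RStudio but prints nothing in the knitted document. If you want the report itself to show what was loaded, a quick look such as the following works (the choice of functions here is just one option):

head(data)   # first six rows of the eight columns x1-x4, y1-y4
str(data)    # dimensions and column types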
  2. Summarise the data by calculating the mean and variance for each column and the correlation between each pair (e.g., x1 and y1, x2 and y2, etc.). (Hint: use the dplyr package!)
sapply(data, mean)
##       x1       x2       x3       x4       y1       y2       y3       y4 
## 9.000000 9.000000 9.000000 9.000000 7.500909 7.500909 7.500000 7.500909
sapply(data, var)
##        x1        x2        x3        x4        y1        y2        y3        y4 
## 11.000000 11.000000 11.000000 11.000000  4.127269  4.127629  4.122620  4.123249
c(cor(data$x1, data$y1),cor(data$x2, data$y2),cor(data$x3, data$y3),cor(data$x4, data$y4))
## [1] 0.8164205 0.8162365 0.8162867 0.8165214
summary(data)
##        x1             x2             x3             x4           y1        
##  Min.   : 4.0   Min.   : 4.0   Min.   : 4.0   Min.   : 8   Min.   : 4.260  
##  1st Qu.: 6.5   1st Qu.: 6.5   1st Qu.: 6.5   1st Qu.: 8   1st Qu.: 6.315  
##  Median : 9.0   Median : 9.0   Median : 9.0   Median : 8   Median : 7.580  
##  Mean   : 9.0   Mean   : 9.0   Mean   : 9.0   Mean   : 9   Mean   : 7.501  
##  3rd Qu.:11.5   3rd Qu.:11.5   3rd Qu.:11.5   3rd Qu.: 8   3rd Qu.: 8.570  
##  Max.   :14.0   Max.   :14.0   Max.   :14.0   Max.   :19   Max.   :10.840  
##        y2              y3              y4        
##  Min.   :3.100   Min.   : 5.39   Min.   : 5.250  
##  1st Qu.:6.695   1st Qu.: 6.25   1st Qu.: 6.170  
##  Median :8.140   Median : 7.11   Median : 7.040  
##  Mean   :7.501   Mean   : 7.50   Mean   : 7.501  
##  3rd Qu.:8.950   3rd Qu.: 7.98   3rd Qu.: 8.190  
##  Max.   :9.260   Max.   :12.74   Max.   :12.500
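
The sapply() calls above do the job; as the hint suggests, the same summaries can also be produced with dplyr. A minimal sketch, assuming dplyr 1.0 or later for across():

library(dplyr)

# Mean and variance of every column in a single call
data %>%
  summarise(across(everything(), list(mean = mean, var = var)))

# Correlation within each x-y pair
data %>%
  summarise(cor1 = cor(x1, y1),
            cor2 = cor(x2, y2),
            cor3 = cor(x3, y3),
            cor4 = cor(x4, y4))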
  3. Using ggplot, create scatter plots for each \(x, y\) pair of data (maybe use ‘facet_grid’ or ‘facet_wrap’).
library(ggplot2)

ggplot(data, aes(x=x1, y=y1)) +
  geom_point() +
  labs(title = "Scatter plot of x1 and y1")

ggplot(data, aes(x=x2, y=y2)) +
  geom_point() +
  labs(title = "Scatter plot of x2 and y2")

ggplot(data, aes(x=x3, y=y3)) +
  geom_point() +
  labs(title = "Scatter plot of x3 and y3")

ggplot(data, aes(x=x4, y=y4)) +
  geom_point() +
  labs(title = "Scatter plot of x4 and y4")
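
The four separate plots above work; as the question hints, the same figure can also be drawn as a single faceted plot by first reshaping the data so that the set number becomes its own column. A minimal sketch, assuming tidyr is available for pivot_longer() (the object name anscombe_long is simply a choice made here):

library(tidyr)

# One row per observation: columns set ("1"-"4"), x, and y (44 rows)
anscombe_long <- pivot_longer(data, everything(),
                              names_to = c(".value", "set"),
                              names_pattern = "(.)(.)")

ggplot(anscombe_long, aes(x = x, y = y)) +
  geom_point() +
  facet_wrap(~ set, nrow = 2) +
  labs(title = "Scatter plots of the four Anscombe data sets")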

  4. Now change the symbols on the scatter plots to solid blue circles.
library(ggplot2)

ggplot(data, aes(x=x1, y=y1)) +
  geom_point(shape = 19, color = "blue") +
  labs(title = "Scatter plot of x1 and y1")

ggplot(data, aes(x=x2, y=y2)) +
  geom_point(shape = 19, color = "blue") +
  labs(title = "Scatter plot of x2 and y2")

ggplot(data, aes(x=x3, y=y3)) +
  geom_point(shape = 19, color = "blue") +
  labs(title = "Scatter plot of x3 and y3")

ggplot(data, aes(x=x4, y=y4)) +
  geom_point(shape = 19, color = "blue") +
  labs(title = "Scatter plot of x4 and y4")

  5. Now fit a linear model to each data set using the lm() function.
fit1 <- lm(y1 ~ x1, data = data)
fit2 <- lm(y2 ~ x2, data = data)
fit3 <- lm(y3 ~ x3, data = data)
fit4 <- lm(y4 ~ x4, data = data)
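
A quick way to see what these models estimated is to tabulate their coefficients; all four intercepts and slopes should come out nearly identical (roughly 3.0 and 0.5). This is just one way to inspect them:

# Intercept and slope of each fit, side by side as a 2 x 4 matrix
sapply(list(fit1 = fit1, fit2 = fit2, fit3 = fit3, fit4 = fit4),
       function(f) setNames(coef(f), c("intercept", "slope")))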
  6. Now combine the last two tasks. Create a four-panel scatter plot matrix that has both the data points and the regression lines. (Hint: the model objects will carry over between chunks!)
library(ggplot2)

# Fit linear models
fit1 <- lm(y1 ~ x1, data = data)
fit2 <- lm(y2 ~ x2, data = data)
fit3 <- lm(y3 ~ x3, data = data)
fit4 <- lm(y4 ~ x4, data = data)

# Create the four-panel scatter plot matrix: reshape the data to long
# format (one row per set/x/y observation), facet into a 2 x 2 grid, and
# draw a fitted regression line in each panel.
library(tidyr)

anscombe_long <- pivot_longer(data, everything(),
                              names_to = c(".value", "set"),
                              names_pattern = "(.)(.)")

ggplot(anscombe_long, aes(x = x, y = y)) +
  geom_point(shape = 19, color = "blue") +
  geom_smooth(method = "lm", se = FALSE, color = "red") +
  facet_wrap(~ set, nrow = 2, ncol = 2) +
  labs(title = "Anscombe's quartet: data points with fitted regression lines")
## `geom_smooth()` using formula = 'y ~ x'

  7. Now compare the model fits for each model object.
anova(fit1)
## Analysis of Variance Table
## 
## Response: y1
##           Df Sum Sq Mean Sq F value  Pr(>F)   
## x1         1 27.510 27.5100   17.99 0.00217 **
## Residuals  9 13.763  1.5292                   
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

anova(fit2)
## Analysis of Variance Table
## 
## Response: y2
##           Df Sum Sq Mean Sq F value   Pr(>F)   
## x2         1 27.500 27.5000  17.966 0.002179 **
## Residuals  9 13.776  1.5307                    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(fit3)
## Analysis of Variance Table
## 
## Response: y3
##           Df Sum Sq Mean Sq F value   Pr(>F)   
## x3         1 27.470 27.4700  17.972 0.002176 **
## Residuals  9 13.756  1.5285                    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(fit4)
## Analysis of Variance Table
## 
## Response: y4
##           Df Sum Sq Mean Sq F value   Pr(>F)   
## x4         1 27.490 27.4900  18.003 0.002165 **
## Residuals  9 13.742  1.5269                    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
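
The four ANOVA tables are nearly indistinguishable. Another way to put the fits side by side is to tabulate a couple of overall fit statistics; a minimal base-R sketch (the object name fits is simply a choice made here):

fits <- list(fit1 = fit1, fit2 = fit2, fit3 = fit3, fit4 = fit4)

# Residual standard error and R-squared for each model
sapply(fits, function(f) c(sigma     = summary(f)$sigma,
                           r.squared = summary(f)$r.squared))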
  8. In text, summarize the lesson of Anscombe’s Quartet and what it says about the value of data visualization.
Anscombe's Quartet consists of four data sets, each made up of eleven (x, y) pairs, that share nearly identical summary statistics (mean, variance, and correlation) yet produce very different scatter plots. The quartet highlights the danger of relying solely on summary statistics and the importance of visualization in identifying patterns and relationships in data.

The lesson of Anscombe's Quartet is that data visualization is a critical tool for understanding and interpreting data. Summary statistics, while useful, cannot provide a complete picture and may lead to incorrect conclusions: only the plots reveal that one data set is roughly linear, another is curved, and the others are dominated by a single outlier or influential point. By visualizing data we can see the relationships between variables, spot outliers and patterns, and ultimately make better-informed decisions. Data visualization therefore plays an essential role in the analysis process and should be used alongside numerical summaries to gain a complete understanding of the data.