Objectives

The objectives of this problem set are to orient you to a number of activities in R and to conduct a thoughtful exercise in appreciating the importance of data visualization. For each question, create a code chunk or text response that completes the activity or answers the question. Finally, upon completion, name your final output .html file as YourName_ANLY512-Section-Year-Semester.html and upload it to the “Problem Set 2” assignment on Moodle.

Questions

  1. Anscombe’s quartet is a set of four \(x, y\) data sets published by Francis Anscombe in his 1973 paper Graphs in Statistical Analysis. For this first question, load the anscombe data that is part of library(datasets) in R, and assign that data to a new object called data.
library(datasets)
data <- anscombe
  2. Summarise the data by calculating the mean and variance for each column, and the correlation between each pair (e.g., x1 and y1, x2 and y2, etc.). (Hint: use the fBasics package!)
library(fBasics)
## Loading required package: timeDate
## Loading required package: timeSeries
summary(data)
##        x1             x2             x3             x4    
##  Min.   : 4.0   Min.   : 4.0   Min.   : 4.0   Min.   : 8  
##  1st Qu.: 6.5   1st Qu.: 6.5   1st Qu.: 6.5   1st Qu.: 8  
##  Median : 9.0   Median : 9.0   Median : 9.0   Median : 8  
##  Mean   : 9.0   Mean   : 9.0   Mean   : 9.0   Mean   : 9  
##  3rd Qu.:11.5   3rd Qu.:11.5   3rd Qu.:11.5   3rd Qu.: 8  
##  Max.   :14.0   Max.   :14.0   Max.   :14.0   Max.   :19  
##        y1               y2              y3              y4        
##  Min.   : 4.260   Min.   :3.100   Min.   : 5.39   Min.   : 5.250  
##  1st Qu.: 6.315   1st Qu.:6.695   1st Qu.: 6.25   1st Qu.: 6.170  
##  Median : 7.580   Median :8.140   Median : 7.11   Median : 7.040  
##  Mean   : 7.501   Mean   :7.501   Mean   : 7.50   Mean   : 7.501  
##  3rd Qu.: 8.570   3rd Qu.:8.950   3rd Qu.: 7.98   3rd Qu.: 8.190  
##  Max.   :10.840   Max.   :9.260   Max.   :12.74   Max.   :12.500
colMeans(data)
##       x1       x2       x3       x4       y1       y2       y3       y4 
## 9.000000 9.000000 9.000000 9.000000 7.500909 7.500909 7.500000 7.500909
colVars(data)
##        x1        x2        x3        x4        y1        y2        y3 
## 11.000000 11.000000 11.000000 11.000000  4.127269  4.127629  4.122620 
##        y4 
##  4.123249
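As the hint suggests, fBasics also bundles these summaries into one call, basicStats(); a minimal sketch that pulls out just the two rows of interest (output omitted):

# Means and variances of all eight columns in a single table
basicStats(data)[c("Mean", "Variance"), ]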
correlationTest(data$x1, data$y1)
## 
## Title:
##  Pearson's Correlation Test
## 
## Test Results:
##   PARAMETER:
##     Degrees of Freedom: 9
##   SAMPLE ESTIMATES:
##     Correlation: 0.8164
##   STATISTIC:
##     t: 4.2415
##   P VALUE:
##     Alternative Two-Sided: 0.00217 
##     Alternative      Less: 0.9989 
##     Alternative   Greater: 0.001085 
##   CONFIDENCE INTERVAL:
##     Two-Sided: 0.4244, 0.9507
##          Less: -1, 0.9388
##       Greater: 0.5113, 1
## 
## Description:
##  Sun Feb 03 21:06:25 2019
correlationTest(data$x2, data$y2)
## 
## Title:
##  Pearson's Correlation Test
## 
## Test Results:
##   PARAMETER:
##     Degrees of Freedom: 9
##   SAMPLE ESTIMATES:
##     Correlation: 0.8162
##   STATISTIC:
##     t: 4.2386
##   P VALUE:
##     Alternative Two-Sided: 0.002179 
##     Alternative      Less: 0.9989 
##     Alternative   Greater: 0.001089 
##   CONFIDENCE INTERVAL:
##     Two-Sided: 0.4239, 0.9506
##          Less: -1, 0.9387
##       Greater: 0.5109, 1
## 
## Description:
##  Sun Feb 03 21:06:25 2019
correlationTest(data$x3, data$y3)
## 
## Title:
##  Pearson's Correlation Test
## 
## Test Results:
##   PARAMETER:
##     Degrees of Freedom: 9
##   SAMPLE ESTIMATES:
##     Correlation: 0.8163
##   STATISTIC:
##     t: 4.2394
##   P VALUE:
##     Alternative Two-Sided: 0.002176 
##     Alternative      Less: 0.9989 
##     Alternative   Greater: 0.001088 
##   CONFIDENCE INTERVAL:
##     Two-Sided: 0.4241, 0.9507
##          Less: -1, 0.9387
##       Greater: 0.511, 1
## 
## Description:
##  Sun Feb 03 21:06:25 2019
correlationTest(data$x4, data$y4)
## 
## Title:
##  Pearson's Correlation Test
## 
## Test Results:
##   PARAMETER:
##     Degrees of Freedom: 9
##   SAMPLE ESTIMATES:
##     Correlation: 0.8165
##   STATISTIC:
##     t: 4.243
##   P VALUE:
##     Alternative Two-Sided: 0.002165 
##     Alternative      Less: 0.9989 
##     Alternative   Greater: 0.001082 
##   CONFIDENCE INTERVAL:
##     Two-Sided: 0.4246, 0.9507
##          Less: -1, 0.9388
##       Greater: 0.5115, 1
## 
## Description:
##  Sun Feb 03 21:06:25 2019
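All four tests report essentially the same correlation (about 0.816). A compact base-R cross-check of just the point estimates (a sketch; cor() returns the same values without the test machinery):

# Correlation of each x/y pair in one call
sapply(1:4, function(i) cor(data[[paste0("x", i)]], data[[paste0("y", i)]]))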
  3. Create scatter plots for each \(x, y\) pair of data.
par(mfrow=c(2,2))
plot(data$x1, data$y1, main = "Scatter plot - x1 and y1", xlab = "x1", ylab = "y1")
plot(data$x2, data$y2, main = "Scatter plot - x2 and y2", xlab = "x2", ylab = "y2")
plot(data$x3, data$y3, main = "Scatter plot - x3 and y3", xlab = "x3", ylab = "y3")
plot(data$x4, data$y4, main = "Scatter plot - x4 and y4", xlab = "x4", ylab = "y4")

  4. Now change the symbols on the scatter plots to solid circles and plot them together as a four-panel graphic.
par(mfrow=c(2,2))
plot(data$x1, data$y1, main = "Scatter plot - x1 and y1", xlab = "x1", ylab = "y1", pch = 20)
plot(data$x2, data$y2, main = "Scatter plot - x2 and y2", xlab = "x2", ylab = "y2", pch = 20)
plot(data$x3, data$y3, main = "Scatter plot - x3 and y3", xlab = "x3", ylab = "y3", pch = 20)
plot(data$x4, data$y4, main = "Scatter plot - x4 and y4", xlab = "x4", ylab = "y4", pch = 20)

  5. Now fit a linear model to each data set using the lm() function.
lm1 <- lm(data$y1 ~ data$x1)
lm2 <- lm(data$y2 ~ data$x2)
lm3 <- lm(data$y3 ~ data$x3)
lm4 <- lm(data$y4 ~ data$x4)
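Before plotting, a quick side-by-side look at the fitted coefficients already hints at the punchline (a sketch using base R's coef()):

# Intercept and slope of each model, as the columns of a 2 x 4 matrix
sapply(list(lm1, lm2, lm3, lm4), coef)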
  6. Now combine the last two tasks. Create a four-panel scatter plot matrix that has both the data points and the regression lines. (Hint: the model objects will carry over between chunks!)
par(mfrow=c(2,2))
plot(data$x1, data$y1, main = "Scatter plot - x1 and y1", xlab = "x1", ylab = "y1", pch = 20)
abline(lm1)
plot(data$x2, data$y2, main = "Scatter plot - x2 and y2", xlab = "x2", ylab = "y2", pch = 20)
abline(lm2)
plot(data$x3, data$y3, main = "Scatter plot - x3 and y3", xlab = "x3", ylab = "y3", pch = 20)
abline(lm3)
plot(data$x4, data$y4, main = "Scatter plot - x4 and y4", xlab = "x4", ylab = "y4", pch = 20)
abline(lm4)
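For comparison, the same four-panel figure can be produced with ggplot2 after reshaping the quartet into long form. This is an optional sketch, assuming the tidyr and ggplot2 packages are installed; the pivot_longer() pattern below splits the x1 through y4 column names into a variable part and a set number:

library(tidyr)
library(ggplot2)

# One row per (set, x, y) observation: ".value" captures x/y, "set" captures 1-4
anscombe_long <- pivot_longer(anscombe, everything(),
                              names_to = c(".value", "set"),
                              names_pattern = "(.)(.)")

# Points plus a per-panel least-squares line, one panel per data set
ggplot(anscombe_long, aes(x, y)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE) +
  facet_wrap(~ set)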

  7. Now compare the model fits for each model object.
library(fit.models)
comparefits <- fit.models(lm(data$y1 ~ data$x1), lm(data$y2 ~ data$x2), lm(data$y3 ~ data$x3), lm(data$y4 ~ data$x4))
comparefits

Calls:
lm(data$y1 ~ data$x1): lm(formula = data$y1 ~ data$x1)
lm(data$y2 ~ data$x2): lm(formula = data$y2 ~ data$x2)
lm(data$y3 ~ data$x3): lm(formula = data$y3 ~ data$x3)
lm(data$y4 ~ data$x4): lm(formula = data$y4 ~ data$x4)

Coefficients:
                       (Intercept)  data$x1  data$x2  data$x3  data$x4
lm(data$y1 ~ data$x1)       3.0001   0.5001                          
lm(data$y2 ~ data$x2)       3.0009            0.5000                 
lm(data$y3 ~ data$x3)       3.0025                     0.4997        
lm(data$y4 ~ data$x4)       3.0017                                0.5

summary(comparefits)

Calls:
lm(data$y1 ~ data$x1): lm(formula = data$y1 ~ data$x1)
lm(data$y2 ~ data$x2): lm(formula = data$y2 ~ data$x2)
lm(data$y3 ~ data$x3): lm(formula = data$y3 ~ data$x3)
lm(data$y4 ~ data$x4): lm(formula = data$y4 ~ data$x4)

Residual Statistics:
                           Min      1Q     Median     3Q   Max
lm(data$y1 ~ data$x1): -1.921 -0.4558 -4.136e-02 0.7094 1.839
lm(data$y2 ~ data$x2): -1.901 -0.7609  1.291e-01 0.9491 1.269
lm(data$y3 ~ data$x3): -1.159 -0.6146 -2.303e-01 0.1540 3.241
lm(data$y4 ~ data$x4): -1.751 -0.8310  1.110e-16 0.8090 1.839

Coefficients:
                                     Estimate Std. Error t value Pr(>|t|)
(Intercept): lm(data$y1 ~ data$x1):    3.0001     1.1247   2.667  0.02573
             lm(data$y2 ~ data$x2):    3.0009     1.1253   2.667  0.02576
             lm(data$y3 ~ data$x3):    3.0025     1.1245   2.670  0.02562
             lm(data$y4 ~ data$x4):    3.0017     1.1239   2.671  0.02559

data$x1: lm(data$y1 ~ data$x1):   0.5001     0.1179   4.241  0.00217
         lm(data$y2 ~ data$x2):                                     
         lm(data$y3 ~ data$x3):                                     
         lm(data$y4 ~ data$x4):                                     
                                                                    
data$x2: lm(data$y1 ~ data$x1):                                     
         lm(data$y2 ~ data$x2):   0.5000     0.1180   4.239  0.00218
         lm(data$y3 ~ data$x3):                                     
         lm(data$y4 ~ data$x4):                                     
                                                                    
data$x3: lm(data$y1 ~ data$x1):                                     
         lm(data$y2 ~ data$x2):                                     
         lm(data$y3 ~ data$x3):   0.4997     0.1179   4.239  0.00218
         lm(data$y4 ~ data$x4):                                     
                                                                    
data$x4: lm(data$y1 ~ data$x1):                                     
         lm(data$y2 ~ data$x2):                                     
         lm(data$y3 ~ data$x3):                                     
         lm(data$y4 ~ data$x4):   0.4999     0.1178   4.243  0.00216
                                  

(Intercept): lm(data$y1 ~ data$x1): *
             lm(data$y2 ~ data$x2): *
             lm(data$y3 ~ data$x3): *
             lm(data$y4 ~ data$x4): *

data$x1: lm(data$y1 ~ data$x1): **
         lm(data$y2 ~ data$x2):   
         lm(data$y3 ~ data$x3):   
         lm(data$y4 ~ data$x4):   
                                  
data$x2: lm(data$y1 ~ data$x1):   
         lm(data$y2 ~ data$x2): **
         lm(data$y3 ~ data$x3):   
         lm(data$y4 ~ data$x4):   
                                  
data$x3: lm(data$y1 ~ data$x1):   
         lm(data$y2 ~ data$x2):   
         lm(data$y3 ~ data$x3): **
         lm(data$y4 ~ data$x4):   
                                  
data$x4: lm(data$y1 ~ data$x1):   
         lm(data$y2 ~ data$x2):   
         lm(data$y3 ~ data$x3):   
         lm(data$y4 ~ data$x4): **

Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual Scale Estimates:
lm(data$y1 ~ data$x1): 1.237 on 9 degrees of freedom
lm(data$y2 ~ data$x2): 1.237 on 9 degrees of freedom
lm(data$y3 ~ data$x3): 1.236 on 9 degrees of freedom
lm(data$y4 ~ data$x4): 1.236 on 9 degrees of freedom

Multiple R-squared:
lm(data$y1 ~ data$x1): 0.6665
lm(data$y2 ~ data$x2): 0.6662
lm(data$y3 ~ data$x3): 0.6663
lm(data$y4 ~ data$x4): 0.6667
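The same numbers can be pulled straight from the individual model objects with base R accessors; a minimal cross-check (a sketch, reusing lm1 through lm4 from above):

# R-squared of each fit: all four agree to roughly three decimal places
sapply(list(lm1, lm2, lm3, lm4), function(m) summary(m)$r.squared)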

  8. In text, summarize the lesson of Anscombe’s Quartet and what it says about the value of data visualization.

This exercise tells us that descriptive analysis alone is not sufficient to give an overview of the data. The mean and variance of each variable are essentially identical across the four sets, which may lead us to believe the data sets are the same. However, the scatter plots, one of the simplest and most important methods of data visualization, tell a totally different story: the distribution of each pair of data is different. Therefore, data visualization is an essential part of data analysis and reveals details that summary statistics alone would hide.