The objective of this problem set is to orient you to a number of activities in R and to conduct a thoughtful exercise in appreciating the importance of data visualization. For each question, create a code chunk or text response that completes the activity or answers the question. Finally, upon completion, name your final output .html file YourName_ANLY512-Section-Year-Semester.html, upload it to your RPubs account, and submit the link to the “Problem Set 2” assignment on Moodle. Points will be deducted for uploading an improper format.
Question 1: Load the anscombe data that is part of library(datasets) in R, and assign that data to a new object called data.

library(datasets)
data<-anscombe
data
## x1 x2 x3 x4 y1 y2 y3 y4
## 1 10 10 10 8 8.04 9.14 7.46 6.58
## 2 8 8 8 8 6.95 8.14 6.77 5.76
## 3 13 13 13 8 7.58 8.74 12.74 7.71
## 4 9 9 9 8 8.81 8.77 7.11 8.84
## 5 11 11 11 8 8.33 9.26 7.81 8.47
## 6 14 14 14 8 9.96 8.10 8.84 7.04
## 7 6 6 6 8 7.24 6.13 6.08 5.25
## 8 4 4 4 19 4.26 3.10 5.39 12.50
## 9 12 12 12 8 10.84 9.13 8.15 5.56
## 10 7 7 7 8 4.82 7.26 6.42 7.91
## 11 5 5 5 8 5.68 4.74 5.73 6.89
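As a quick optional check (not part of the assignment), str() confirms the structure of the quartet: four x/y pairs with 11 observations each.

# Inspect the structure of the anscombe data frame
str(data)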
Question 2: Calculate the mean and variance of each x and y column (hint: check out the fBasics package!).

mean(data$x1)
## [1] 9
var(data$x1)
## [1] 11
mean(data$x2)
## [1] 9
var(data$x2)
## [1] 11
mean(data$x3)
## [1] 9
var(data$x3)
## [1] 11
mean(data$x4)
## [1] 9
var(data$x4)
## [1] 11
mean(data$y1)
## [1] 7.500909
var(data$y1)
## [1] 4.127269
mean(data$y2)
## [1] 7.500909
var(data$y2)
## [1] 4.127629
mean(data$y3)
## [1] 7.5
var(data$y3)
## [1] 4.12262
mean(data$y4)
## [1] 7.500909
var(data$y4)
## [1] 4.123249
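Computing each mean and variance one call at a time works, but the same summaries can be produced in a single pass. A compact base-R alternative (just a convenience, not required by the assignment):

# Mean and variance of every column at once
sapply(data, mean)
sapply(data, var)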
Question 3: Compute the correlation between each x and y pair. The correlationTest() function from the fBasics package reports the Pearson correlation along with a significance test.

library(fBasics)
## Warning: package 'fBasics' was built under R version 3.6.3
## Loading required package: timeDate
## Warning: package 'timeDate' was built under R version 3.6.3
## Loading required package: timeSeries
## Warning: package 'timeSeries' was built under R version 3.6.3
correlationTest(data$x1,data$y1)
##
## Title:
## Pearson's Correlation Test
##
## Test Results:
## PARAMETER:
## Degrees of Freedom: 9
## SAMPLE ESTIMATES:
## Correlation: 0.8164
## STATISTIC:
## t: 4.2415
## P VALUE:
## Alternative Two-Sided: 0.00217
## Alternative Less: 0.9989
## Alternative Greater: 0.001085
## CONFIDENCE INTERVAL:
## Two-Sided: 0.4244, 0.9507
## Less: -1, 0.9388
## Greater: 0.5113, 1
##
## Description:
## Wed May 20 21:35:09 2020
correlationTest(data$x2,data$y2)
##
## Title:
## Pearson's Correlation Test
##
## Test Results:
## PARAMETER:
## Degrees of Freedom: 9
## SAMPLE ESTIMATES:
## Correlation: 0.8162
## STATISTIC:
## t: 4.2386
## P VALUE:
## Alternative Two-Sided: 0.002179
## Alternative Less: 0.9989
## Alternative Greater: 0.001089
## CONFIDENCE INTERVAL:
## Two-Sided: 0.4239, 0.9506
## Less: -1, 0.9387
## Greater: 0.5109, 1
##
## Description:
## Wed May 20 21:35:09 2020
correlationTest(data$x3,data$y3)
##
## Title:
## Pearson's Correlation Test
##
## Test Results:
## PARAMETER:
## Degrees of Freedom: 9
## SAMPLE ESTIMATES:
## Correlation: 0.8163
## STATISTIC:
## t: 4.2394
## P VALUE:
## Alternative Two-Sided: 0.002176
## Alternative Less: 0.9989
## Alternative Greater: 0.001088
## CONFIDENCE INTERVAL:
## Two-Sided: 0.4241, 0.9507
## Less: -1, 0.9387
## Greater: 0.511, 1
##
## Description:
## Wed May 20 21:35:09 2020
correlationTest(data$x4,data$y4)
##
## Title:
## Pearson's Correlation Test
##
## Test Results:
## PARAMETER:
## Degrees of Freedom: 9
## SAMPLE ESTIMATES:
## Correlation: 0.8165
## STATISTIC:
## t: 4.243
## P VALUE:
## Alternative Two-Sided: 0.002165
## Alternative Less: 0.9989
## Alternative Greater: 0.001082
## CONFIDENCE INTERVAL:
## Two-Sided: 0.4246, 0.9507
## Less: -1, 0.9388
## Greater: 0.5115, 1
##
## Description:
## Wed May 20 21:35:09 2020
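If fBasics is unavailable, base R's cor.test() performs the same Pearson correlation test. A minimal equivalent for the first pair, plus a one-liner collecting all four coefficients:

# Base-R equivalent of correlationTest() for the first pair
cor.test(data$x1, data$y1)
# Pearson correlation for all four pairs at once (each is about 0.816)
sapply(1:4, function(i) cor(data[[paste0("x", i)]], data[[paste0("y", i)]]))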
Question 4: Create a scatter plot for each x and y pair to visualize the four datasets.

plot(data$x1,data$y1, main="Scatter plot 1")
plot(data$x2,data$y2, main="Scatter plot 2")
plot(data$x3,data$y3, main="Scatter plot 3")
plot(data$x4,data$y4, main="Scatter plot 4")
Plotting all four pairs in a single 2 x 2 window makes them easier to compare:

par(mfrow = c(2, 2))
plot(data$x1,data$y1,pch=20, main="Scatter plot 1")
plot(data$x2,data$y2,pch=20, main="Scatter plot 2")
plot(data$x3,data$y3,pch=20, main="Scatter plot 3")
plot(data$x4,data$y4,pch=20, main="Scatter plot 4")
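The four repeated plot() calls can also be written as a loop. This sketch is an equivalent alternative, not the assignment's required form; it produces the same 2 x 2 panel:

# Draw all four panels with a loop instead of repeating plot()
par(mfrow = c(2, 2))
for (i in 1:4) {
  # Look up each x/y pair by constructed column name
  plot(data[[paste0("x", i)]], data[[paste0("y", i)]],
       pch = 20, main = paste("Scatter plot", i),
       xlab = paste0("x", i), ylab = paste0("y", i))
}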
Question 5: Fit a linear model to each dataset using the lm() function.

linear1<-lm(data$y1~data$x1)
linear2<-lm(data$y2~data$x2)
linear3<-lm(data$y3~data$x3)
linear4<-lm(data$y4~data$x4)
Replotting each pair with its fitted regression line overlaid:

par(mfrow = c(2, 2))
plot(data$x1,data$y1,pch=20, main="Scatter plot 1")
abline(linear1,col="red")
plot(data$x2,data$y2,pch=20, main="Scatter plot 2")
abline(linear2,col="red")
plot(data$x3,data$y3,pch=20, main="Scatter plot 3")
abline(linear3,col="red")
plot(data$x4,data$y4,pch=20, main="Scatter plot 4")
abline(linear4,col="red")
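The fitted coefficients make the similarity concrete: every model has an intercept near 3.0 and a slope near 0.5. An optional quick comparison:

# Intercept and slope for each of the four fits (columns 1-4)
sapply(list(linear1, linear2, linear3, linear4), coef)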
Run anova() on each fitted model to compare the fits.

anova(linear1)
## Analysis of Variance Table
## 
## Response: data$y1
##           Df Sum Sq Mean Sq F value  Pr(>F)   
## data$x1    1 27.510 27.5100   17.99 0.00217 **
## Residuals  9 13.763  1.5292                   
## ---
## Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
anova(linear2)
## Analysis of Variance Table
## 
## Response: data$y2
##           Df Sum Sq Mean Sq F value   Pr(>F)   
## data$x2    1 27.500 27.5000  17.966 0.002179 **
## Residuals  9 13.776  1.5307                    
## ---
## Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
anova(linear3)
## Analysis of Variance Table
## 
## Response: data$y3
##           Df Sum Sq Mean Sq F value   Pr(>F)   
## data$x3    1 27.470 27.4700  17.972 0.002176 **
## Residuals  9 13.756  1.5285                    
## ---
## Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
anova(linear4)
## Analysis of Variance Table
## 
## Response: data$y4
##           Df Sum Sq Mean Sq F value   Pr(>F)   
## data$x4    1 27.490 27.4900  18.003 0.002165 **
## Residuals  9 13.742  1.5269                    
## ---
## Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

All four models produce essentially the same ANOVA results: nearly identical sums of squares, F values, and p-values.
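The near-identical fits can also be seen in R-squared, which is about 0.67 for every model. An optional check:

# R-squared for each of the four models
sapply(list(linear1, linear2, linear3, linear4),
       function(m) summary(m)$r.squared)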
The lesson of Anscombe's Quartet is that summary statistics can be misleading. In question 2, the mean and variance were identical across the x columns (mean 9, variance 11) and nearly identical across the y columns, and the correlation for each pair (e.g., x1 and y1) matched to three decimal places (about 0.816). Yet the plots from question 4 paint four very different pictures: in Scatter plot 1, the points roughly follow a linear trend, whereas in Scatter plot 4 the x values are constant except for a single outlier. Anscombe was demonstrating the real importance of data visualization as part of data analysis: had we looked only at the summary statistics, we would have concluded that the four datasets were nearly identical, when that is not the case.