1. Anscombe's quartet is a set of four \(x, y\) data sets published by Francis Anscombe in his 1973 paper "Graphs in Statistical Analysis". For this first question, load the anscombe data set that ships with library(datasets) in R and assign it to a new object called data.
library(datasets)
data<-anscombe
str(data)
## 'data.frame': 11 obs. of 8 variables:
## $ x1: num 10 8 13 9 11 14 6 4 12 7 ...
## $ x2: num 10 8 13 9 11 14 6 4 12 7 ...
## $ x3: num 10 8 13 9 11 14 6 4 12 7 ...
## $ x4: num 8 8 8 8 8 8 8 19 8 8 ...
## $ y1: num 8.04 6.95 7.58 8.81 8.33 ...
## $ y2: num 9.14 8.14 8.74 8.77 9.26 8.1 6.13 3.1 9.13 7.26 ...
## $ y3: num 7.46 6.77 12.74 7.11 7.81 ...
## $ y4: num 6.58 5.76 7.71 8.84 8.47 7.04 5.25 12.5 5.56 7.91 ...
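As an extra sanity check (not required by the question), the first few rows can be inspected with base R's head():

# peek at the first rows of the quartet
head(data)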
2. Summarise the data by calculating the mean and variance for each column, and the correlation between each pair (e.g. x1 and y1, x2 and y2, etc.). (Hint: use the fBasics package!)
library(fBasics)
fBasics::basicStats(data)
## x1 x2 x3 x4 y1 y2
## nobs 11.000000 11.000000 11.000000 11.000000 11.000000 11.000000
## NAs 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
## Minimum 4.000000 4.000000 4.000000 8.000000 4.260000 3.100000
## Maximum 14.000000 14.000000 14.000000 19.000000 10.840000 9.260000
## 1. Quartile 6.500000 6.500000 6.500000 8.000000 6.315000 6.695000
## 3. Quartile 11.500000 11.500000 11.500000 8.000000 8.570000 8.950000
## Mean 9.000000 9.000000 9.000000 9.000000 7.500909 7.500909
## Median 9.000000 9.000000 9.000000 8.000000 7.580000 8.140000
## Sum 99.000000 99.000000 99.000000 99.000000 82.510000 82.510000
## SE Mean 1.000000 1.000000 1.000000 1.000000 0.612541 0.612568
## LCL Mean 6.771861 6.771861 6.771861 6.771861 6.136083 6.136024
## UCL Mean 11.228139 11.228139 11.228139 11.228139 8.865735 8.865795
## Variance 11.000000 11.000000 11.000000 11.000000 4.127269 4.127629
## Stdev 3.316625 3.316625 3.316625 3.316625 2.031568 2.031657
## Skewness 0.000000 0.000000 0.000000 2.466911 -0.048374 -0.978693
## Kurtosis -1.528926 -1.528926 -1.528926 4.520661 -1.199123 -0.514319
## y3 y4
## nobs 11.000000 11.000000
## NAs 0.000000 0.000000
## Minimum 5.390000 5.250000
## Maximum 12.740000 12.500000
## 1. Quartile 6.250000 6.170000
## 3. Quartile 7.980000 8.190000
## Mean 7.500000 7.500909
## Median 7.110000 7.040000
## Sum 82.500000 82.510000
## SE Mean 0.612196 0.612242
## LCL Mean 6.135943 6.136748
## UCL Mean 8.864057 8.865070
## Variance 4.122620 4.123249
## Stdev 2.030424 2.030579
## Skewness 1.380120 1.120774
## Kurtosis 1.240044 0.628751
# to show only the Mean and Variance rows
basicStats(data)[c("Mean", "Variance"),]
## x1 x2 x3 x4 y1 y2 y3 y4
## Mean 9 9 9 9 7.500909 7.500909 7.50000 7.500909
## Variance 11 11 11 11 4.127269 4.127629 4.12262 4.123249
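As a cross-check on basicStats(), the same means and variances can be computed with base R alone; this is a minimal sketch applying sapply() over the columns:

# base-R cross-check of the column means and variances
sapply(data, mean)
sapply(data, var)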
# correlation between X1 and Y1
fBasics::correlationTest(data$x1, data$y1)
##
## Title:
## Pearson's Correlation Test
##
## Test Results:
## PARAMETER:
## Degrees of Freedom: 9
## SAMPLE ESTIMATES:
## Correlation: 0.8164
## STATISTIC:
## t: 4.2415
## P VALUE:
## Alternative Two-Sided: 0.00217
## Alternative Less: 0.9989
## Alternative Greater: 0.001085
## CONFIDENCE INTERVAL:
## Two-Sided: 0.4244, 0.9507
## Less: -1, 0.9388
## Greater: 0.5113, 1
##
## Description:
## Sat Sep 09 17:04:36 2017
# correlation between X2 and Y2
fBasics::correlationTest(data$x2, data$y2)
##
## Title:
## Pearson's Correlation Test
##
## Test Results:
## PARAMETER:
## Degrees of Freedom: 9
## SAMPLE ESTIMATES:
## Correlation: 0.8162
## STATISTIC:
## t: 4.2386
## P VALUE:
## Alternative Two-Sided: 0.002179
## Alternative Less: 0.9989
## Alternative Greater: 0.001089
## CONFIDENCE INTERVAL:
## Two-Sided: 0.4239, 0.9506
## Less: -1, 0.9387
## Greater: 0.5109, 1
##
## Description:
## Sat Sep 09 17:04:36 2017
# correlation between X3 and Y3
fBasics::correlationTest(data$x3, data$y3)
##
## Title:
## Pearson's Correlation Test
##
## Test Results:
## PARAMETER:
## Degrees of Freedom: 9
## SAMPLE ESTIMATES:
## Correlation: 0.8163
## STATISTIC:
## t: 4.2394
## P VALUE:
## Alternative Two-Sided: 0.002176
## Alternative Less: 0.9989
## Alternative Greater: 0.001088
## CONFIDENCE INTERVAL:
## Two-Sided: 0.4241, 0.9507
## Less: -1, 0.9387
## Greater: 0.511, 1
##
## Description:
## Sat Sep 09 17:04:36 2017
# correlation between X4 and Y4
fBasics::correlationTest(data$x4, data$y4)
##
## Title:
## Pearson's Correlation Test
##
## Test Results:
## PARAMETER:
## Degrees of Freedom: 9
## SAMPLE ESTIMATES:
## Correlation: 0.8165
## STATISTIC:
## t: 4.243
## P VALUE:
## Alternative Two-Sided: 0.002165
## Alternative Less: 0.9989
## Alternative Greater: 0.001082
## CONFIDENCE INTERVAL:
## Two-Sided: 0.4246, 0.9507
## Less: -1, 0.9388
## Greater: 0.5115, 1
##
## Description:
## Sat Sep 09 17:04:36 2017
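All four tests report essentially the same correlation (about 0.816). As a compact alternative to the four separate calls, the correlations can be collected in one step with base R's cor(); the loop index here just builds the matching column names:

# all four x-y correlations in one step
sapply(1:4, function(i) cor(data[[paste0("x", i)]], data[[paste0("y", i)]]))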
3. Create scatter plots for each \(x, y\) pair of data.
plot(data$x1, data$y1, main="Scatter Plot of X1 vs Y1", xlab="X1", ylab="Y1")
plot(data$x2, data$y2, main="Scatter Plot of X2 vs Y2", xlab="X2", ylab="Y2")
plot(data$x3, data$y3, main="Scatter Plot of X3 vs Y3", xlab="X3", ylab="Y3")
plot(data$x4, data$y4, main="Scatter Plot of X4 vs Y4", xlab="X4", ylab="Y4")
4. Now change the symbols on the scatter plots to solid circles and plot them together as a four-panel graphic.
par(mfrow=c(2,2))
plot(data$x1, data$y1,main="Scatter Plot of X1 vs Y1",xlab="X1",ylab="Y1",pch=20)
plot(data$x2, data$y2,main="Scatter Plot of X2 vs Y2",xlab="X2",ylab="Y2",pch=20)
plot(data$x3, data$y3,main="Scatter Plot of X3 vs Y3",xlab="X3",ylab="Y3",pch=20)
plot(data$x4, data$y4,main="Scatter Plot of X4 vs Y4",xlab="X4",ylab="Y4",pch=20)
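The same four panels can also be drawn with a loop; this sketch is equivalent to the four explicit calls above and restores the graphics settings afterwards (op is just an illustrative name):

# loop version of the 2x2 panel of solid-circle scatter plots
op <- par(mfrow = c(2, 2))
for (i in 1:4) {
  plot(data[[paste0("x", i)]], data[[paste0("y", i)]],
       main = paste0("Scatter Plot of X", i, " vs Y", i),
       xlab = paste0("X", i), ylab = paste0("Y", i), pch = 20)
}
par(op)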
5. Now fit a linear model to each data set using the lm() function.
l_x1y1 <- lm(data$y1~data$x1)
l_x1y1
##
## Call:
## lm(formula = data$y1 ~ data$x1)
##
## Coefficients:
## (Intercept) data$x1
## 3.0001 0.5001
l_x2y2 <- lm(data$y2~data$x2)
l_x2y2
##
## Call:
## lm(formula = data$y2 ~ data$x2)
##
## Coefficients:
## (Intercept) data$x2
## 3.001 0.500
l_x3y3 <- lm(data$y3~data$x3)
l_x3y3
##
## Call:
## lm(formula = data$y3 ~ data$x3)
##
## Coefficients:
## (Intercept) data$x3
## 3.0025 0.4997
l_x4y4 <- lm(data$y4~data$x4)
l_x4y4
##
## Call:
## lm(formula = data$y4 ~ data$x4)
##
## Coefficients:
## (Intercept) data$x4
## 3.0017 0.4999
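All four fits land on essentially the same line, \(\hat{y} \approx 3 + 0.5x\). Equivalently, the models can be fitted in a loop and kept in a named list; this is a sketch using reformulate() to build each formula (the name models is just for illustration):

# fit y1 ~ x1 through y4 ~ x4 and store the results together
models <- lapply(1:4, function(i)
  lm(reformulate(paste0("x", i), response = paste0("y", i)), data = data))
names(models) <- paste0("fit", 1:4)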
6. Now combine the last two tasks. Create a four-panel scatter plot matrix that has both the data points and the regression lines. (Hint: the model objects will carry over between chunks!)
par(mfrow=c(2,2))
plot(data$x1, data$y1,main="Scatter Plot of X1 vs Y1",xlab="X1",ylab="Y1",pch=20)
abline(l_x1y1, col="blue")
plot(data$x2, data$y2,main="Scatter Plot of X2 vs Y2",xlab="X2",ylab="Y2",pch=20)
abline(l_x2y2, col="blue")
plot(data$x3, data$y3,main="Scatter Plot of X3 vs Y3",xlab="X3",ylab="Y3",pch=20)
abline(l_x3y3, col="blue")
plot(data$x4, data$y4,main="Scatter Plot of X4 vs Y4",xlab="X4",ylab="Y4",pch=20)
abline(l_x4y4, col="blue")
7. Now compare the model fits for each model object.
summary(l_x1y1)
## 
## Call:
## lm(formula = data$y1 ~ data$x1)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1.92127 -0.45577 -0.04136  0.70941  1.83882 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)   
## (Intercept)   3.0001     1.1247   2.667  0.02573 * 
## data$x1       0.5001     0.1179   4.241  0.00217 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.237 on 9 degrees of freedom
## Multiple R-squared:  0.6665, Adjusted R-squared:  0.6295 
## F-statistic: 17.99 on 1 and 9 DF,  p-value: 0.00217
par(mfrow=c(2,2))
plot(l_x1y1)
summary(l_x2y2)
## 
## Call:
## lm(formula = data$y2 ~ data$x2)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.9009 -0.7609  0.1291  0.9491  1.2691 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)   
## (Intercept)    3.001      1.125   2.667  0.02576 * 
## data$x2        0.500      0.118   4.239  0.00218 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.237 on 9 degrees of freedom
## Multiple R-squared:  0.6662, Adjusted R-squared:  0.6292 
## F-statistic: 17.97 on 1 and 9 DF,  p-value: 0.002179
par(mfrow=c(2,2))
plot(l_x2y2)
summary(l_x3y3)
## 
## Call:
## lm(formula = data$y3 ~ data$x3)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.1586 -0.6146 -0.2303  0.1540  3.2411 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)   
## (Intercept)   3.0025     1.1245   2.670  0.02562 * 
## data$x3       0.4997     0.1179   4.239  0.00218 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.236 on 9 degrees of freedom
## Multiple R-squared:  0.6663, Adjusted R-squared:  0.6292 
## F-statistic: 17.97 on 1 and 9 DF,  p-value: 0.002176
par(mfrow=c(2,2))
plot(l_x3y3)
summary(l_x4y4)
## 
## Call:
## lm(formula = data$y4 ~ data$x4)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -1.751 -0.831  0.000  0.809  1.839 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)   
## (Intercept)   3.0017     1.1239   2.671  0.02559 * 
## data$x4       0.4999     0.1178   4.243  0.00216 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.236 on 9 degrees of freedom
## Multiple R-squared:  0.6667, Adjusted R-squared:  0.6297 
## F-statistic: 18 on 1 and 9 DF,  p-value: 0.002165
par(mfrow=c(2,2))
plot(l_x4y4)
## Warning: not plotting observations with leverage one:
## 8
## Warning: not plotting observations with leverage one:
## 8
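To compare the fits side by side, the key statistics can be pulled out of each summary; a minimal sketch using the model objects fitted above:

# intercept, slope, R-squared, and residual SE for each fit
sapply(list(l_x1y1, l_x2y2, l_x3y3, l_x4y4), function(m) {
  s <- summary(m)
  c(intercept = coef(m)[[1]], slope = coef(m)[[2]],
    r.squared = s$r.squared, sigma = s$sigma)
})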
8. In text, summarize the lesson of Anscombe’s Quartet and what it says about the value of data visualization.
Anscombe's Quartet consists of four pairs of x and y variables. The summary statistics computed in question 2 show that the four pairs have nearly identical means, variances, and correlation coefficients, so by the numbers alone they appear interchangeable. Plotting the data, however, reveals four very different relationships. In particular, the (x3, y3) and (x4, y4) pairs contain outliers that are visible in the scatter plots as well as in the Normal Q-Q plots, and these outliers strongly influence the linear model fits, as the graphics in questions 6 and 7 show. We can therefore conclude that visualization is essential for exploring the true relationship between variables, which summary statistics alone may fail to reveal.