The objective of this problem set is to orient you to a number of activities in R and to conduct a thoughtful exercise in appreciating the importance of data visualization. For each question, create a code chunk or text response that completes or answers the activity or question requested. Finally, upon completion, name your final output .html file YourName_ANLY512-Section-Year-Semester.html and upload it to the “Problem Set 2” assignment on Moodle.
Load the anscombe data that is part of library(datasets) in R, and assign that data to a new object called data.
data <- anscombe
View(data)
Calculate the summary statistics (mean and variance) of each column (hint: use the fBasics() package!).
library(fBasics)
## Loading required package: timeDate
## Loading required package: timeSeries
library(timeDate)
library(timeSeries)
colMeans(data)
## x1 x2 x3 x4 y1 y2 y3 y4
## 9.000000 9.000000 9.000000 9.000000 7.500909 7.500909 7.500000 7.500909
colVars(data)
## x1 x2 x3 x4 y1 y2 y3 y4
## 11.000000 11.000000 11.000000 11.000000 4.127269 4.127629 4.122620 4.123249
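As a quick cross-check that does not depend on fBasics, the same means and variances can be computed with base R’s sapply(); this is a minimal sketch using base functions only:
# Base-R cross-check of the fBasics results above
sapply(data, mean) # column means, matches colMeans(data)
sapply(data, var)  # column variances, matches colVars(data)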
x1<-colVec(data$x1)
y1<-colVec(data$y1)
correlationTest(x1,y1)
##
## Title:
## Pearson's Correlation Test
##
## Test Results:
## PARAMETER:
## Degrees of Freedom: 9
## SAMPLE ESTIMATES:
## Correlation: 0.8164
## STATISTIC:
## t: 4.2415
## P VALUE:
## Alternative Two-Sided: 0.00217
## Alternative Less: 0.9989
## Alternative Greater: 0.001085
## CONFIDENCE INTERVAL:
## Two-Sided: 0.4244, 0.9507
## Less: -1, 0.9388
## Greater: 0.5113, 1
##
## Description:
## Wed Feb 21 21:24:59 2018
x2<-colVec(data$x2)
y2<-colVec(data$y2)
correlationTest(x2,y2)
##
## Title:
## Pearson's Correlation Test
##
## Test Results:
## PARAMETER:
## Degrees of Freedom: 9
## SAMPLE ESTIMATES:
## Correlation: 0.8162
## STATISTIC:
## t: 4.2386
## P VALUE:
## Alternative Two-Sided: 0.002179
## Alternative Less: 0.9989
## Alternative Greater: 0.001089
## CONFIDENCE INTERVAL:
## Two-Sided: 0.4239, 0.9506
## Less: -1, 0.9387
## Greater: 0.5109, 1
##
## Description:
## Wed Feb 21 21:24:59 2018
x3<-colVec(data$x3)
y3<-colVec(data$y3)
correlationTest(x3,y3)
##
## Title:
## Pearson's Correlation Test
##
## Test Results:
## PARAMETER:
## Degrees of Freedom: 9
## SAMPLE ESTIMATES:
## Correlation: 0.8163
## STATISTIC:
## t: 4.2394
## P VALUE:
## Alternative Two-Sided: 0.002176
## Alternative Less: 0.9989
## Alternative Greater: 0.001088
## CONFIDENCE INTERVAL:
## Two-Sided: 0.4241, 0.9507
## Less: -1, 0.9387
## Greater: 0.511, 1
##
## Description:
## Wed Feb 21 21:24:59 2018
x4<-colVec(data$x4)
y4<-colVec(data$y4)
correlationTest(x4,y4)
##
## Title:
## Pearson's Correlation Test
##
## Test Results:
## PARAMETER:
## Degrees of Freedom: 9
## SAMPLE ESTIMATES:
## Correlation: 0.8165
## STATISTIC:
## t: 4.243
## P VALUE:
## Alternative Two-Sided: 0.002165
## Alternative Less: 0.9989
## Alternative Greater: 0.001082
## CONFIDENCE INTERVAL:
## Two-Sided: 0.4246, 0.9507
## Less: -1, 0.9388
## Greater: 0.5115, 1
##
## Description:
## Wed Feb 21 21:24:59 2018
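The four pairwise tests above are nearly identical, so as a sketch of a more compact alternative, the same Pearson tests can be run in a loop with base R’s cor.test() (the loop and printing format are my own choices, not part of the assignment):
# Loop over the four x/y pairs; cor.test() reproduces the Pearson test above
for (i in 1:4) {
  ct <- cor.test(data[[paste0("x", i)]], data[[paste0("y", i)]])
  cat(sprintf("Pair %d: r = %.4f, t = %.4f, p = %.6f\n",
              i, ct$estimate, ct$statistic, ct$p.value))
}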
library(ggplot2)
ggplot(data, aes(x=x1, y=y1)) + geom_point()
ggplot(data, aes(x=x2, y=y2)) + geom_point()
ggplot(data, aes(x=x3, y=y3)) + geom_point()
ggplot(data, aes(x=x4, y=y4)) + geom_point()
ggplot(data, aes(x=x1, y=y1)) + geom_point(size=5, shape=1, color="black")+labs(title= "data1")
ggplot(data, aes(x=x2, y=y2)) + geom_point(size=5, shape=1, color="black")+labs(title= "data2")
ggplot(data, aes(x=x3, y=y3)) + geom_point(size=5, shape=1)+labs(title= "data3")
ggplot(data, aes(x=x4, y=y4)) + geom_point(size=5, shape=1)+labs(title= "data4")
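Instead of four separate plot calls, the pairs can also be stacked into long form and drawn as facets of a single plot; a minimal sketch using base-R reshaping plus ggplot2 (the object name long and its column names are my own):
# Stack the four x/y pairs into long form, then facet one plot instead of four
long <- do.call(rbind, lapply(1:4, function(i) {
  data.frame(set = paste0("data", i),
             x = data[[paste0("x", i)]],
             y = data[[paste0("y", i)]])
}))
ggplot(long, aes(x = x, y = y)) +
  geom_point(size = 5, shape = 1) +
  facet_wrap(~set)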
Fit a linear model to each of the four variable pairs using the lm() function.
visualmodel1<-lm(y1~x1, data=data)
visualmodel2<-lm(y2~x2, data=data)
visualmodel3<-lm(y3~x3, data=data)
visualmodel4<-lm(y4~x4, data=data)
ggplot(data,aes(x=x1,y=y1))+geom_smooth(method=lm, se=FALSE)+geom_point(shape=1,size=3)
ggplot(data,aes(x=x2,y=y2))+geom_smooth(method=lm, se=FALSE)+geom_point(shape=1,size=3)
ggplot(data,aes(x=x3,y=y3))+geom_smooth(method=lm, se=FALSE)+geom_point(shape=1,size=3)
ggplot(data,aes(x=x4,y=y4))+geom_smooth(method=lm, se=FALSE)+geom_point(shape=1,size=3)
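The same faceting idea extends to the fitted lines; reusing the long data frame from the sketch above, one call draws all four regressions side by side:
# One faceted plot with a per-panel least-squares line (reuses 'long' from above)
ggplot(long, aes(x = x, y = y)) +
  geom_smooth(method = lm, se = FALSE) +
  geom_point(shape = 1, size = 3) +
  facet_wrap(~set)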
7. Now compare the model fits for each model object.
summary(visualmodel1)
## 
## Call:
## lm(formula = y1 ~ x1, data = data)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -1.92127 -0.45577 -0.04136  0.70941  1.83882 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)   
## (Intercept)   3.0001     1.1247   2.667  0.02573 * 
## x1            0.5001     0.1179   4.241  0.00217 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.237 on 9 degrees of freedom
## Multiple R-squared: 0.6665, Adjusted R-squared: 0.6295 
## F-statistic: 17.99 on 1 and 9 DF, p-value: 0.00217
summary(visualmodel2)
## 
## Call:
## lm(formula = y2 ~ x2, data = data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.9009 -0.7609  0.1291  0.9491  1.2691 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)   
## (Intercept)    3.001      1.125   2.667  0.02576 * 
## x2             0.500      0.118   4.239  0.00218 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.237 on 9 degrees of freedom
## Multiple R-squared: 0.6662, Adjusted R-squared: 0.6292 
## F-statistic: 17.97 on 1 and 9 DF, p-value: 0.002179
summary(visualmodel3)
## 
## Call:
## lm(formula = y3 ~ x3, data = data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.1586 -0.6146 -0.2303  0.1540  3.2411 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)   
## (Intercept)   3.0025     1.1245   2.670  0.02562 * 
## x3            0.4997     0.1179   4.239  0.00218 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.236 on 9 degrees of freedom
## Multiple R-squared: 0.6663, Adjusted R-squared: 0.6292 
## F-statistic: 17.97 on 1 and 9 DF, p-value: 0.002176
summary(visualmodel4)
## 
## Call:
## lm(formula = y4 ~ x4, data = data)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -1.751 -0.831  0.000  0.809  1.839 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)   
## (Intercept)   3.0017     1.1239   2.671  0.02559 * 
## x4            0.4999     0.1178   4.243  0.00216 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.236 on 9 degrees of freedom
## Multiple R-squared: 0.6667, Adjusted R-squared: 0.6297 
## F-statistic: 18 on 1 and 9 DF, p-value: 0.002165
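Rather than reading four separate printouts, the key fit statistics can be pulled into one small table; a minimal sketch using the list components returned by summary():
# Collect R-squared, adjusted R-squared and the F statistic from each model
models <- list(visualmodel1, visualmodel2, visualmodel3, visualmodel4)
t(sapply(models, function(m) {
  s <- summary(m)
  c(r.squared = s$r.squared,
    adj.r.squared = s$adj.r.squared,
    F = unname(s$fstatistic["value"]))
}))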
From the results above, all four models have virtually identical coefficients, multiple R-squared, adjusted R-squared, F-statistics and p-values. Judging by these summary statistics alone, the four models appear to perform equally well, and it is impossible to tell which fit is more appropriate.
Anscombe champions the value of visualization, in addition to summary statistics, for understanding data. Graphs are used to check assumptions in the data and to appreciate broad features of a dataset. He also recommends plotting residuals against the fitted values, which shows how the residuals behave in relation to the x values and how far the observations fall from the fitted line. He uses the quartet dataset to illustrate these points.
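To illustrate the residuals-versus-fitted idea, here is a minimal base-R sketch for the first model (the same pattern applies to the other three):
# Residuals against fitted values for model 1; curvature or lone extreme points
# here reveal problems that the summary statistics hide
plot(fitted(visualmodel1), resid(visualmodel1),
     xlab = "Fitted values", ylab = "Residuals", main = "Model 1 residuals")
abline(h = 0, lty = 2)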
Anscombe’s quartet is a set of four small datasets with nearly identical summary statistics that nevertheless tell very different stories when graphed. The scatterplots expose the actual relationships between the variables: one dataset shows a fairly linear relationship, another a smooth curve, another a tight linear relationship with a single outlier, and in the last one x remains constant except for one extreme outlier while y varies. Had we not graphed the data in simple scatterplots, we would have missed these insights, e.g., the presence of outliers.
In conclusion, Anscombe shows the value of visualization as a complement to summary statistics, which alone rarely tell the whole story.