The objective of this problem set is to orient you to a number of activities in R and to conduct a thoughtful exercise in appreciating the importance of data visualization. For each question, create a code chunk or text response that completes or answers the activity or question requested. Finally, upon completion, name your final output .html file YourName_ANLY512-Section-Year-Semester.html, publish it to your RPubs account, and submit the link to the "Problem Set 2" assignment on Moodle. Points will be deducted for uploading an improper format.
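One way to produce an output file with the required name is to pass it to rmarkdown::render() when knitting. This is a minimal sketch, assuming your source file is named problem-set-2.Rmd (the file name is hypothetical; substitute your own):

# Render the .Rmd to HTML with the required file name
# (problem-set-2.Rmd is a placeholder)
rmarkdown::render("problem-set-2.Rmd",
                  output_file = "YourName_ANLY512-Section-Year-Semester.html")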
Load the anscombe data that is part of the datasets package in R, and assign that data to a new object called data.

library(datasets)
data <- datasets::anscombe
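A quick structural check (not required by the prompt, but a useful sanity test after loading) confirms the quartet's layout: 11 observations across four x/y pairs.

str(data)      # 11 obs. of 8 variables: x1-x4, y1-y4
head(data, 3)  # peek at the first few rows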
Compute the mean and variance of each column (hint: use the colVars() function from the fBasics package!).

library(fBasics)
## Loading required package: timeDate
## Loading required package: timeSeries
colMeans(data)
## x1 x2 x3 x4 y1 y2 y3 y4
## 9.000000 9.000000 9.000000 9.000000 7.500909 7.500909 7.500000 7.500909
colVars(data)
## x1 x2 x3 x4 y1 y2 y3
## 11.000000 11.000000 11.000000 11.000000 4.127269 4.127629 4.122620
## y4
## 4.123249
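If fBasics is unavailable, the same column means and variances can be reproduced with base R alone; this is an equivalent sketch, not part of the original solution:

sapply(data, mean)  # same values as colMeans(data)
sapply(data, var)   # same values as fBasics::colVars(data)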
cor(data$x1,data$y1)
## [1] 0.8164205
cor(data$x2,data$y2)
## [1] 0.8162365
cor(data$x3,data$y3)
## [1] 0.8162867
cor(data$x4,data$y4)
## [1] 0.8165214
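The four cor() calls can also be collapsed into one vectorized line; the sketch below assumes the anscombe column naming (x1-x4, y1-y4) and reproduces the four values above:

# Correlation of each x/y pair in one pass
sapply(1:4, function(i) cor(data[[paste0("x", i)]], data[[paste0("y", i)]]))
## [1] 0.8164205 0.8162365 0.8162867 0.8165214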
plot(data$x1, data$y1, xlab="x1", ylab="y1")
plot(data$x2, data$y2, xlab="x2", ylab="y2")
plot(data$x3, data$y3, xlab="x3", ylab="y3")
plot(data$x4, data$y4, xlab="x4", ylab="y4")
par(mfrow = c(2, 2))
plot(data$x1, data$y1, xlab="x1", ylab="y1", pch=19)
plot(data$x2, data$y2, xlab="x2", ylab="y2", pch=19)
plot(data$x3, data$y3, xlab="x3", ylab="y3", pch=19)
plot(data$x4, data$y4, xlab="x4", ylab="y4", pch=19)
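Note that par(mfrow = c(2, 2)) persists for any later plots in the session. The same 2-by-2 grid can be drawn with a loop, which also makes it easy to restore the default layout afterward; a minimal sketch:

par(mfrow = c(2, 2))
for (i in 1:4) {
  # plot each x/y pair of the quartet in its own panel
  plot(data[[paste0("x", i)]], data[[paste0("y", i)]],
       xlab = paste0("x", i), ylab = paste0("y", i), pch = 19)
}
par(mfrow = c(1, 1))  # restore the default single-panel layout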
Fit a linear model to each x/y pair using the lm() function.

lm1 <- lm(data$y1 ~ data$x1)
lm2 <- lm(data$y2 ~ data$x2)
lm3 <- lm(data$y3 ~ data$x3)
lm4 <- lm(data$y4 ~ data$x4)
par(mfrow = c(2, 2))
plot(data$x1, data$y1, xlab="x1", ylab="y1", pch=19); abline(lm1)
plot(data$x2, data$y2, xlab="x2", ylab="y2", pch=19); abline(lm2)
plot(data$x3, data$y3, xlab="x3", ylab="y3", pch=19); abline(lm3)
plot(data$x4, data$y4, xlab="x4", ylab="y4", pch=19); abline(lm4)
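Before reading the full summaries, the fitted intercepts and slopes can be extracted and compared side by side; a sketch:

models <- list(lm1, lm2, lm3, lm4)
sapply(models, coef)  # each column: intercept ~3.00, slope ~0.50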
summary(lm1)
## Call:
## lm(formula = data$y1 ~ data$x1)
##
## Residuals:
##      Min       1Q   Median       3Q      Max
## -1.92127 -0.45577 -0.04136  0.70941  1.83882
##
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)   3.0001     1.1247   2.667  0.02573 *
## data$x1       0.5001     0.1179   4.241  0.00217 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.237 on 9 degrees of freedom
## Multiple R-squared:  0.6665, Adjusted R-squared:  0.6295
## F-statistic: 17.99 on 1 and 9 DF,  p-value: 0.00217
summary(lm2)
## Call:
## lm(formula = data$y2 ~ data$x2)
##
## Residuals:
##     Min      1Q  Median      3Q     Max
## -1.9009 -0.7609  0.1291  0.9491  1.2691
##
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)    3.001      1.125   2.667  0.02576 *
## data$x2        0.500      0.118   4.239  0.00218 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.237 on 9 degrees of freedom
## Multiple R-squared:  0.6662, Adjusted R-squared:  0.6292
## F-statistic: 17.97 on 1 and 9 DF,  p-value: 0.002179
summary(lm3)
## Call:
## lm(formula = data$y3 ~ data$x3)
##
## Residuals:
##     Min      1Q  Median      3Q     Max
## -1.1586 -0.6146 -0.2303  0.1540  3.2411
##
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)   3.0025     1.1245   2.670  0.02562 *
## data$x3       0.4997     0.1179   4.239  0.00218 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.236 on 9 degrees of freedom
## Multiple R-squared:  0.6663, Adjusted R-squared:  0.6292
## F-statistic: 17.97 on 1 and 9 DF,  p-value: 0.002176
summary(lm4)
## Call:
## lm(formula = data$y4 ~ data$x4)
##
## Residuals:
##    Min     1Q Median     3Q    Max
## -1.751 -0.831  0.000  0.809  1.839
##
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)   3.0017     1.1239   2.671  0.02559 *
## data$x4       0.4999     0.1178   4.243  0.00216 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.236 on 9 degrees of freedom
## Multiple R-squared:  0.6667, Adjusted R-squared:  0.6297
## F-statistic:    18 on 1 and 9 DF,  p-value: 0.002165
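The headline statistics from all four summaries can be tabulated to underline how close they are; a sketch using the values reported above:

models <- list(lm1, lm2, lm3, lm4)
data.frame(
  model     = paste0("lm", 1:4),
  r.squared = sapply(models, function(m) summary(m)$r.squared),
  sigma     = sapply(models, function(m) summary(m)$sigma)
)  # all four: R-squared ~0.67, residual standard error ~1.24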
The lesson I learned from Anscombe's Quartet is that we shouldn't reach any conclusion about data without visualizing it. The four datasets of the quartet have nearly identical summary statistics: the same means and variances, and correlations that all round to 0.816. When we fit each of them with a linear model, we get essentially the same coefficients, R-squared values, and F-statistics as well. All of the above suggests that these four datasets are very similar to one another. However, once we visualize them, we realize they are fundamentally different: some are well described by a straight line apart from a single outlier, while in other cases the relationship between the variables is nonlinear and requires a more complicated model than a linear one.