The objectives of this problem set are to orient you to a number of activities in R and to conduct a thoughtful exercise in appreciating the importance of data visualization. For each question, create a code chunk or text response that completes/answers the activity or question requested. Finally, upon completion, name your final output .html file as: YourName_ANLY512-Section-Year-Semester.html and upload it to the "Problem Set 2" assignment on Moodle.
Load the anscombe data that is part of the library(datasets) in R, and assign that data to a new object called data.

data<-anscombe
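As a quick sanity check (an optional step, not part of the assignment), you can inspect the new object to confirm it carries the four x-y pairs of Anscombe's quartet:

# Optional inspection of the copied data frame (base R only):
# anscombe contains 11 observations of 8 variables (x1-x4, y1-y4)
str(data)
head(data)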
Calculate the summary statistics (means, variances, and correlations) for the variables in the data (Hint: use the fBasics() package!).

library(fBasics)
## Loading required package: timeDate
## Loading required package: timeSeries
##
## Rmetrics Package fBasics
## Analysing Markets and calculating Basic Statistics
## Copyright (C) 2005-2014 Rmetrics Association Zurich
## Educational Software for Financial Engineering and Computational Science
## Rmetrics is free software and comes with ABSOLUTELY NO WARRANTY.
## https://www.rmetrics.org --- Mail to: info@rmetrics.org
means<-colMeans(data)
means
## x1 x2 x3 x4 y1 y2 y3 y4
## 9.000000 9.000000 9.000000 9.000000 7.500909 7.500909 7.500000 7.500909
variance<-colVars(data)
variance
## x1 x2 x3 x4 y1 y2 y3
## 11.000000 11.000000 11.000000 11.000000 4.127269 4.127629 4.122620
## y4
## 4.123249
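If fBasics is not available, the column variances can be cross-checked with base R; this is an optional alternative, not part of the original solution:

# Optional base-R cross-check of colVars() (no fBasics required)
sapply(data, var)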
correlation1<-correlationTest(data$x1,data$y1)
correlation1
##
## Title:
## Pearson's Correlation Test
##
## Test Results:
## PARAMETER:
## Degrees of Freedom: 9
## SAMPLE ESTIMATES:
## Correlation: 0.8164
## STATISTIC:
## t: 4.2415
## P VALUE:
## Alternative Two-Sided: 0.00217
## Alternative Less: 0.9989
## Alternative Greater: 0.001085
## CONFIDENCE INTERVAL:
## Two-Sided: 0.4244, 0.9507
## Less: -1, 0.9388
## Greater: 0.5113, 1
##
## Description:
## Wed Nov 29 18:01:28 2017
correlation2<-correlationTest(data$x2,data$y2)
correlation2
##
## Title:
## Pearson's Correlation Test
##
## Test Results:
## PARAMETER:
## Degrees of Freedom: 9
## SAMPLE ESTIMATES:
## Correlation: 0.8162
## STATISTIC:
## t: 4.2386
## P VALUE:
## Alternative Two-Sided: 0.002179
## Alternative Less: 0.9989
## Alternative Greater: 0.001089
## CONFIDENCE INTERVAL:
## Two-Sided: 0.4239, 0.9506
## Less: -1, 0.9387
## Greater: 0.5109, 1
##
## Description:
## Wed Nov 29 18:01:28 2017
correlation3<-correlationTest(data$x3,data$y3)
correlation3
##
## Title:
## Pearson's Correlation Test
##
## Test Results:
## PARAMETER:
## Degrees of Freedom: 9
## SAMPLE ESTIMATES:
## Correlation: 0.8163
## STATISTIC:
## t: 4.2394
## P VALUE:
## Alternative Two-Sided: 0.002176
## Alternative Less: 0.9989
## Alternative Greater: 0.001088
## CONFIDENCE INTERVAL:
## Two-Sided: 0.4241, 0.9507
## Less: -1, 0.9387
## Greater: 0.511, 1
##
## Description:
## Wed Nov 29 18:01:28 2017
correlation4<-correlationTest(data$x4,data$y4)
correlation4
##
## Title:
## Pearson's Correlation Test
##
## Test Results:
## PARAMETER:
## Degrees of Freedom: 9
## SAMPLE ESTIMATES:
## Correlation: 0.8165
## STATISTIC:
## t: 4.243
## P VALUE:
## Alternative Two-Sided: 0.002165
## Alternative Less: 0.9989
## Alternative Greater: 0.001082
## CONFIDENCE INTERVAL:
## Two-Sided: 0.4246, 0.9507
## Less: -1, 0.9388
## Greater: 0.5115, 1
##
## Description:
## Wed Nov 29 18:01:28 2017
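The four calls above repeat the same pattern, so as an optional refactor (not in the original solution) all four Pearson correlations can be computed in one pass with base R:

# Optional consolidation: correlation for each x_i/y_i pair of the quartet
sapply(1:4, function(i) cor(data[[paste0("x", i)]], data[[paste0("y", i)]]))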
plot(data)
plot(data$x1,data$y1)
plot(data$x2,data$y2)
plot(data$x3,data$y3)
plot(data$x4,data$y4)
par(mfrow=c(2,2))
plot(data$x1,data$y1, main="Scatterplot1",pch=16)
plot(data$x2,data$y2, main="Scatterplot2",pch=16)
plot(data$x3,data$y3, main="Scatterplot3",pch=16)
plot(data$x4,data$y4, main="Scatterplot4",pch=16)
Fit a linear model to each x-y pair using the lm() function.

lm1<-lm(data$y1 ~ data$x1)
lm1
##
## Call:
## lm(formula = data$y1 ~ data$x1)
##
## Coefficients:
## (Intercept) data$x1
## 3.0001 0.5001
lm2<-lm(data$y2 ~ data$x2)
lm2
##
## Call:
## lm(formula = data$y2 ~ data$x2)
##
## Coefficients:
## (Intercept) data$x2
## 3.001 0.500
lm3<-lm(data$y3 ~ data$x3)
lm3
##
## Call:
## lm(formula = data$y3 ~ data$x3)
##
## Coefficients:
## (Intercept) data$x3
## 3.0025 0.4997
lm4<-lm(data$y4 ~ data$x4)
lm4
##
## Call:
## lm(formula = data$y4 ~ data$x4)
##
## Coefficients:
## (Intercept) data$x4
## 3.0017 0.4999
par(mfrow=c(2,2))
lm1<-lm(data$y1 ~ data$x1)
with(data,plot(x1,y1,main="Scatterplot1 & Regression Line1"))
abline(lm1)
lm2<-lm(data$y2 ~ data$x2)
with(data,plot(x2,y2,main="Scatterplot2 & Regression Line2"))
abline(lm2)
lm3<-lm(data$y3 ~ data$x3)
with(data,plot(x3,y3,main="Scatterplot3 & Regression Line3"))
abline(lm3)
lm4<-lm(data$y4 ~ data$x4)
with(data,plot(x4,y4,main="Scatterplot4 & Regression Line4"))
abline(lm4)
anova(lm1)
## Analysis of Variance Table
##
## Response: data$y1
##           Df Sum Sq Mean Sq F value  Pr(>F)
## data$x1    1 27.510 27.5100   17.99 0.00217 **
## Residuals  9 13.763  1.5292
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm2)
## Analysis of Variance Table
##
## Response: data$y2
##           Df Sum Sq Mean Sq F value   Pr(>F)
## data$x2    1 27.500 27.5000  17.966 0.002179 **
## Residuals  9 13.776  1.5307
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm3)
## Analysis of Variance Table
##
## Response: data$y3
##           Df Sum Sq Mean Sq F value   Pr(>F)
## data$x3    1 27.470 27.4700  17.972 0.002176 **
## Residuals  9 13.756  1.5285
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm4)
## Analysis of Variance Table
##
## Response: data$y4
##           Df Sum Sq Mean Sq F value   Pr(>F)
## data$x4    1 27.490 27.4900  18.003 0.002165 **
## Residuals  9 13.742  1.5269
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
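The four ANOVA tables are nearly identical even though the scatterplots differ sharply. As one additional check (not required by the assignment), summary() reports the coefficient of determination for each fit; for Anscombe's quartet all four R-squared values come out to roughly 0.67:

# Optional check: R-squared for each fit; all four are about 0.67
# despite the very different scatterplots
sapply(list(lm1, lm2, lm3, lm4), function(m) summary(m)$r.squared)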
Data visualization makes it far more intuitive to understand the relationships between variables in a dataset. As this exercise shows, several datasets can share nearly identical summary statistics (means, variances, correlations, and regression coefficients) yet look completely different when plotted. Statistics alone may suggest that a model fits the data well, but plotting the data can reveal that the chosen model is not the best one, or not one that makes sense in reality. Moreover, every model carries assumptions, and examining a graph of the data is a quick way to check whether those assumptions actually hold.
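One concrete way to act on this (a sketch, not part of the original solution) is to use the regression diagnostics built into base R: calling plot() on an lm object draws residual, Q-Q, scale-location, and leverage panels, which expose problems such as the curvature in pair 2, the outlier in pair 3, and the single high-leverage point in pair 4, even though the fitted coefficients are nearly identical.

# Hedged sketch: diagnostic plots for one of the four fits;
# e.g., the outlier in the third pair stands out immediately
par(mfrow=c(2,2))
plot(lm3)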