A 90% confidence interval for a population mean is (65, 77). The population distribution is approximately normal and the population standard deviation is unknown. This confidence interval is based on a simple random sample of 25 observations. Calculate the sample mean, the margin of error, and the sample standard deviation.
n <- 25
ME <- ((77-65)/2)
ME
## [1] 6
xbar <- ((77+65)/2)
xbar
## [1] 71
df <- 25-1
t.value <- qt(.95, df)
t.value
## [1] 1.710882
sd <- (ME / t.value) * sqrt(n)   # back-solve ME = t * s / sqrt(n) for s; sqrt(25) = 5
sd
## [1] 17.53481
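The last step back-solves the margin-of-error formula for the sample standard deviation, using the values computed above:

\[ ME = t^{*}_{24}\,\frac{s}{\sqrt{n}} \quad\Rightarrow\quad s = \frac{ME\,\sqrt{n}}{t^{*}_{24}} = \frac{6 \times 5}{1.710882} \approx 17.53 \]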
SAT scores of students at an Ivy League college are distributed with a standard deviation of 250 points. Two statistics students, Raina and Luke, want to estimate the average SAT score of students at this college as part of a class project. They want their margin of error to be no more than 25 points.
z.star <- 1.65
ME <- 25
SD <- 250
sample.size <- (((z.star*SD)/(ME))^2)
sample.size
## [1] 272.25
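This comes from solving the margin-of-error formula for \(n\); since a sample size must be a whole number, the result is rounded up:

\[ ME \ge \frac{z^{*}\,\sigma}{\sqrt{n}} \quad\Rightarrow\quad n \ge \left(\frac{z^{*}\,\sigma}{ME}\right)^{2} = \left(\frac{1.65 \times 250}{25}\right)^{2} = 272.25 \;\Rightarrow\; n = 273 \]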
Since Luke's 99% confidence level uses a larger critical value (2.58 instead of 1.65), the required sample size for his interval will be larger, as checked below.
zstar.Luke <- 2.58
ME <- 25
SD <- 250
samplesize.Luke <- (((zstar.Luke*SD)/(ME))^2)
samplesize.Luke
## [1] 665.64
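Rounding up, Luke needs at least 666 students. A quick check of the claim above: only the critical value changes, so the required sample size scales with \((z^{*})^{2}\):

\[ \left(\frac{2.58}{1.65}\right)^{2} \approx 2.44, \qquad 272.25 \times 2.44 \approx 665.6 \]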
library(knitr)   # for include_graphics()
img <- "C:/Users/jpsim/Documents/DATA Acquisition and Management/hs.png"
include_graphics(img)
There seems to be no clear difference between the average reading and writing scores.
The reading and writing scores of each student come from the same student, so they are paired rather than independent of each other.
\(H_0:\) The average difference between the reading and writing scores is equal to zero
\(\mu_r - \mu_w = 0\)
\(H_1:\) The average difference between the reading and writing scores is NOT equal to zero
\(\mu_r - \mu_w \ne 0\)
Independence of observations: The reading and writing scores come from the same students, so they are paired rather than independent; the analysis therefore works with the paired differences.
Observations come from a nearly normal distribution: The box plot provided in the text suggests the data are reasonably normally distributed, with no extreme outliers.
\(H_0:\) The difference of average scores is equal to zero
\(\mu_{diff} = 0\)
\(H_1:\) The difference of average scores is NOT equal to zero
\(\mu_{diff} \ne 0\)
n <- 200
mean.diff <- -.545
df <- n-1
SD <- 8.887
SE <- SD/sqrt(n)
t <- (mean.diff-0)/SE
pvalue <- 2 * pt(t, df)   # two-sided p-value, matching the two-sided alternative
pvalue
## [1] 0.3868364
The p-value, 0.39, is greater than 0.05, so we fail to reject the null hypothesis. We do not have convincing evidence of a difference between the average reading and writing exam scores.
Type I error: Incorrectly rejecting the null hypothesis when it is actually true.
Type II error: Incorrectly failing to reject the null hypothesis when the alternative is actually true.
In this case, since we failed to reject \(H_0\), we may have made a Type II error; that is, we might have wrongly concluded that there is not a difference in the average student reading and writing exam scores.
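With the raw paired scores (only summary statistics are available here), the same test could be run directly in R; a minimal sketch, assuming hypothetical vectors read and write holding each student's two scores:

t.test(read, write, paired = TRUE)   # paired t-test of H0: mu_diff = 0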
The table provides summary statistics on highway fuel economy of cars manufactured in 2012. Use these statistics to calculate a 98% confidence interval for the difference between average highway mileage of manual and automatic cars, and interpret this interval in the context of the data.
img <- "C:/Users/jpsim/Documents/DATA Acquisition and Management/mpg.png"
include_graphics(img)
\(H_0:\) The difference in average highway mileage between automatic and manual cars is equal to zero
\(\mu_{diff} = 0\)
\(H_1:\) The difference in average highway mileage between automatic and manual cars is NOT equal to zero
\(\mu_{diff} \ne 0\)
n <- 26
mu_a <- 16.12
sd_a <- 3.58
mu_m <- 19.85
sd_m <- 4.51
mu_Diff <- mu_a - mu_m
SE_Diff <- ( (sd_a ^ 2 / n) + ( sd_m ^ 2 / n) ) ^ 0.5
t_val <- (mu_Diff - 0) / SE_Diff
df <- n - 1   # conservative degrees of freedom: smaller sample size minus 1 = 25
p <- 2 * pt(t_val, df = df)   # two-sided p-value, matching the two-sided alternative
p
## [1] 0.002883614
Since the p-value is less than 0.05, we reject the null hypothesis \(H_0\); there is strong evidence of a difference in average highway fuel efficiency between manual and automatic transmissions.
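The exercise actually asks for a 98% confidence interval for the difference in average highway mileage. A sketch of that calculation, reusing mu_Diff and SE_Diff from above with the conservative df = n - 1 = 25:

t_star <- qt(0.99, df = 25)                  # critical value for 98% confidence
ci <- mu_Diff + c(-1, 1) * t_star * SE_Diff  # lower and upper bounds
ci

This gives an interval of roughly (-6.5, -0.9): we are 98% confident that automatic cars average between about 0.9 and 6.5 fewer highway miles per gallon than manual cars.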
A medical research group is recruiting people to complete short surveys about their medical history. For example, one survey asks for information on a person’s family history in regards to cancer. Another survey asks about what topics were discussed during the person’s last visit to a hospital. So far, as people sign up, they complete an average of just 4 surveys, and the standard deviation of the number of surveys is about 2.2. The research group wants to try a new interface that they think will encourage new enrollees to complete more surveys, where they will randomize each enrollee to either get the new interface or the current interface. How many new enrollees do they need for each interface to detect an effect size of 0.5 surveys per enrollee, if the desired power level is 80%?
\(\sigma = 2.2\) (standard deviation of the number of surveys completed)
\(\delta = 0.5\) (effect size to detect, in surveys per enrollee)
\(1 - \beta = 0.80\) (power)
\(\beta = 0.20\)
\(\alpha = 0.05\)
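Plugging these into the standard per-group sample-size formula for a two-sample comparison (with \(z_{1-\alpha/2} = 1.96\) and \(z_{1-\beta} \approx 0.84\)):

\[ n \;\ge\; \frac{2\,(z_{1-\alpha/2} + z_{1-\beta})^{2}\,\sigma^{2}}{\delta^{2}} = \frac{2\,(1.96 + 0.84)^{2}\,(2.2)^{2}}{(0.5)^{2}} \approx 304 \]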
(2 * (1.96 + 0.84162)^2 * 2.2^2) / 0.5^2   # per-group n: 2 * (z_alpha/2 + z_beta)^2 * sigma^2 / delta^2
## [1] 303.9162
Therefore, they need at least 304 new enrollees for each interface.
The General Social Survey collects data on demographics, education, and work, among many other characteristics of US residents. Using ANOVA, we can consider educational attainment levels for all 1,172 respondents at once. Below are the distributions of hours worked by educational attainment and relevant summary statistics that will be helpful in carrying out this analysis.
img <- "C:/Users/jpsim/Documents/DATA Acquisition and Management/education.png"
include_graphics(img)
\(H_0:\) The average number of hours worked is the same across all five education levels: \(\mu_l = \mu_h = \mu_j = \mu_b = \mu_g\)
\(H_1:\) At least one group's average number of hours worked differs from the others.
The observations are independent within and across groups: I will assume independence within and across the groups based on the nature of the provided data.
The data within each group are nearly normal: The box plots do not fully support this condition; each group has outliers, although some groups do appear roughly normal.
The variability across the groups is about equal: The standard deviations are reasonably similar across the groups, so this condition appears to be met.
mu <- c(38.67, 39.6, 41.39, 42.55, 40.85)
sd <- c(15.81, 14.97, 18.1, 13.62, 15.51)
n <- c(121, 546, 97, 253, 155)
k <- 5
MSG <- 501.54                # mean square between groups (given)
SSE <- 267382                # sum of squared errors within groups (given)
N <- sum(n)                  # total number of observations
N
## [1] 1172
p <- 0.0682                  # p-value given in the exercise
dfG <- k - 1                 # degrees of freedom between groups
dfE <- N - k                 # degrees of freedom within groups (1172 - 5 = 1167)
dfT <- dfG + dfE             # total degrees of freedom
dfs <- c(dfG, dfE, dfT)
SSG <- dfG * MSG
SST <- SSG + SSE
SS <- c(SSG, SSE, SST)
MSE <- SSE / dfE
MS <- c(MSG, MSE, NA)
Fv <- MSG / MSE
anova.tbl <- data.frame(dfs, SS, MS, c(Fv, NA, NA), c(p, NA, NA))
colnames(anova.tbl) <- c("Df", "Sum Sq", "Mean Sq", "F Value", "Pr(>F)")
rownames(anova.tbl) <- c("degree", "Residuals", "Total")
anova.tbl
##              Df    Sum Sq  Mean Sq  F Value Pr(>F)
## degree        4   2006.16 501.5400 2.188992 0.0682
## Residuals  1167 267382.00 229.1191       NA     NA
## Total      1171 269388.16       NA       NA     NA
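As a sanity check, the p-value reported in the table can also be computed directly from the F statistic and the degrees of freedom defined above:

pf(Fv, dfG, dfE, lower.tail = FALSE)   # upper-tail probability, roughly 0.068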
Since the p-value (0.0682) is greater than 0.05, we fail to reject the null hypothesis: the data do not provide convincing evidence of a difference in the average number of hours worked across the education levels.