2.32

Reading data:

C1 <- c(0.265, 0.265, 0.266, 0.267, 0.267, 0.265, 0.267, 0.267, 0.265, 0.268, 0.268, 0.265)
C2 <- c(0.264, 0.265, 0.264, 0.266, 0.267, 0.268, 0.264, 0.265, 0.265, 0.267, 0.268, 0.269)

Since the same inspector uses caliper 1 and caliper 2 to measure each ball bearing, this is a paired t-test.

————————————————————————————————

Since the sample size is only 12 for this case, from the normal Q-Q plot we can assume the differences are approximately normally distributed. More data would make the normality evidence more robust. Note that we have to check that the differences are approximately normally distributed before using the paired t-test.

Normality and paired t-test:

C3 <- C1-C2
qqnorm(C3)

t.test(C1,C2,paired = TRUE)
## 
##  Paired t-test
## 
## data:  C1 and C2
## t = 0.43179, df = 11, p-value = 0.6742
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  -0.001024344  0.001524344
## sample estimates:
## mean of the differences 
##                 0.00025
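
As an optional numerical complement to the Q-Q plot (not part of the original output), a Shapiro-Wilk test could be applied to the paired differences; a large p-value would be consistent with approximate normality:

shapiro.test(C3)   # C3 = C1 - C2, defined above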

Null hypothesis \(H_0: \mu_1 = \mu_2\)

Alternative hypothesis \(H_1: \mu_1 \ne \mu_2\)

(a)(b): Since p-value = 0.6742 > 0.05, we fail to reject the null hypothesis. So, there is no significant difference between the population means of the measurements from the two calipers.

(c): 95% confidence interval: \(-0.001024344 \le \mu_1 - \mu_2 \le 0.001524344\)
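
For reference, the interval in (c) can be reconstructed by hand from the differences using \(\bar{d} \pm t_{0.025,\,n-1}\, s_d/\sqrt{n}\); this sketch should reproduce the interval reported by t.test() above:

n <- length(C3)                                   # 12 paired differences
mean(C3) + c(-1, 1) * qt(0.975, df = n - 1) * sd(C3) / sqrt(n)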

————————————————————————————————

2.34

Reading data:

KM <- c(1.186, 1.151, 1.322, 1.339, 1.200, 1.402, 1.365, 1.537, 1.559)
LM <- c(1.061, 0.992, 1.063, 1.062, 1.065, 1.178, 1.037, 1.086, 1.052)
D <- KM-LM

Since the two different methods are used to test the same girders, this is a paired t-test.

Since the sample size is only 9 for this case, from the normal Q-Q plots we can assume both samples are approximately normally distributed, although the Lehigh method looks somewhat heavy-tailed.

More data would make the normality evidence more robust. Note that we have to check approximate normality before using the paired t-test.

qqnorm(KM, main="Karlsruhe Method Normal Probability Plot")

qqnorm(LM, main="Lehigh Method Normal Probability Plot")

qqnorm(D, main="Two Methods difference Normal Probability Plot")

t.test(KM,LM,paired = TRUE)
## 
##  Paired t-test
## 
## data:  KM and LM
## t = 6.0819, df = 8, p-value = 0.0002953
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  0.1700423 0.3777355
## sample estimates:
## mean of the differences 
##               0.2738889

(a)(b): Testing \(H_0: \mu_1 = \mu_2\) against \(H_1: \mu_1 \ne \mu_2\): since p-value = 0.0002953 < 0.05, we reject the null hypothesis. So, there is a significant difference in mean performance between the two methods.

(c): 95% confidence interval: \(0.1700423 \le \mu_1 - \mu_2 \le 0.3777355\)

(d): Same as above. Since the sample size is only 9, from the normal Q-Q plots we can assume both samples are approximately normally distributed, although the Lehigh method looks somewhat heavy-tailed.

More data would make the normality evidence more robust. Note that we have to check approximate normality before using the paired t-test.

(e): From the normal Q-Q plot, we conclude that the difference in the ratios from the two methods is approximately normally distributed.

(f): The normality assumption is of only moderate importance in any form of t-test. Notably, in the paired t-test, the normality assumption applies to the differences; the individual samples are not required to be normal.
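
As an optional numerical illustration of this point (not part of the original analysis), the Shapiro-Wilk test could be applied to the individual samples and to the differences; only the last one matters for the paired t-test:

shapiro.test(KM)   # individual sample -- not strictly required to be normal
shapiro.test(LM)   # individual sample -- not strictly required to be normal
shapiro.test(D)    # D = KM - LM; this is the distribution the paired test relies on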

————————————————————————————————

2.29

Reading data

datTemp1 <- c(11.176, 7.089, 8.097, 11.739, 11.291, 10.759, 6.467, 8.315)
datTemp2 <- c(5.623, 6.748, 7.461, 7.015, 8.133, 7.418, 3.772, 8.963)

(e):

qqnorm(datTemp1)

qqnorm(datTemp2)

These two samples follow the normal distribution by and large and would pass the fat-pencil test. However, there is some skewness in the plots, so the fit to normality is not perfect. Since the t-test's normality assumption is only a moderate one, the data are still acceptable for the test.
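
One way to make the fat-pencil comparison more concrete (an optional addition, not part of the original plots) is to overlay the reference line that qqline() draws through the quartiles:

qqnorm(datTemp1); qqline(datTemp1)
qqnorm(datTemp2); qqline(datTemp2)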

————————————————————————————————

(f): Since different wafers are tested at the different temperatures, we should use a two-sample test rather than a paired test.

library(pwr)
power.t.test(n=8,delta =2.5,sd = sqrt(((8-1)*(sd(datTemp1))^2+(8-1)*(sd(datTemp2))^2)/(8+8-2)), sig.level=0.05,power=NULL,type="two.sample", alternative = "two.sided")
## 
##      Two-sample t test power calculation 
## 
##               n = 8
##           delta = 2.5
##              sd = 1.864468
##       sig.level = 0.05
##           power = 0.7035649
##     alternative = two.sided
## 
## NOTE: n is number in *each* group

We have power = 0.7035649.
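
For reference, power.t.test() comes from base R's stats package, so the pwr package loaded above is not actually needed for it. An equivalent cross-check with pwr (an optional sketch, assuming the same pooled standard deviation) uses the standardized effect size d = delta/sd:

sp <- sqrt(((8-1)*sd(datTemp1)^2 + (8-1)*sd(datTemp2)^2)/(8+8-2))   # pooled standard deviation
pwr.t.test(n = 8, d = 2.5/sp, sig.level = 0.05, type = "two.sample", alternative = "two.sided")
# should report essentially the same power as power.t.test() above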

————————————————————————————————

2.27

Reading data:

F125 <- c(2.7, 4.6, 2.6, 3.0, 3.2, 3.8)
F200 <- c(4.6, 3.4, 2.9, 3.5, 4.1, 5.1)

Since we only have 6 observations in each group, this is too few to tell whether the samples follow a normal distribution, so we choose a non-parametric test. The different flow rates are applied to the same etching process, so the data are treated as paired.

Null hypothesis \(H_0: \mu_1 = \mu_2\)

Alternative hypothesis \(H_1: \mu_1 \ne \mu_2\)

wilcox.test(F125,F200, paired= TRUE, alternative="two.sided")
## 
##  Wilcoxon signed rank exact test
## 
## data:  F125 and F200
## V = 4, p-value = 0.2188
## alternative hypothesis: true location shift is not equal to 0

Since p-value = 0.2188 > 0.05, we fail to reject the null hypothesis. So, there is no evidence that the C2F6 flow rate affects the average etch uniformity.
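
As an optional sensitivity check (not part of the original solution), the parametric paired t-test could be run on the same data to see whether the conclusion depends on the choice of test; its output is not reproduced here:

t.test(F125, F200, paired = TRUE)   # compare its p-value with the Wilcoxon p-value above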

————————————————————————————————

Raw code:

C1 <- c(0.265, 0.265, 0.266, 0.267, 0.267, 0.265, 0.267, 0.267, 0.265, 0.268, 0.268, 0.265)

C2 <- c(0.264, 0.265, 0.264, 0.266, 0.267, 0.268, 0.264, 0.265, 0.265, 0.267, 0.268, 0.269)

C3 <- C1-C2

qqnorm(C3)

t.test(C1,C2,paired = TRUE)

————————————————————————————————

KM <- c(1.186, 1.151, 1.322, 1.339, 1.200, 1.402, 1.365, 1.537, 1.559)

LM <- c(1.061, 0.992, 1.063, 1.062, 1.065, 1.178, 1.037, 1.086, 1.052)

D <- KM-LM

qqnorm(KM, main="Karlsruhe Method Normal Probability Plot")

qqnorm(LM, main="Lehigh Method Normal Probability Plot")

qqnorm(D, main="Two Methods difference Normal Probability Plot")

t.test(KM,LM,paired = TRUE)

————————————————————————————————

datTemp1 <- c(11.176, 7.089, 8.097, 11.739, 11.291, 10.759, 6.467, 8.315)

datTemp2 <- c(5.623, 6.748, 7.461, 7.015, 8.133, 7.418, 3.772, 8.963)

qqnorm(datTemp1)

qqnorm(datTemp2)

library(pwr)

power.t.test(n=8,delta =2.5,sd = sqrt(((8-1)*(sd(datTemp1))^2+(8-1)*(sd(datTemp2))^2)/(8+8-2)), sig.level=0.05,power=NULL,type="two.sample", alternative = "two.sided")

————————————————————————————————

F125 <- c(2.7, 4.6, 2.6, 3.0, 3.2, 3.8)

F200 <- c(4.6, 3.4, 2.9, 3.5, 4.1, 5.1)

wilcox.test(F125,F200, paired= TRUE, alternative="two.sided")

————————————————————————————————