Answer 2.32

library(dplyr)

Entering Data

C1 <-c(0.265,0.265,0.266,0.267,0.267,0.265,0.267,0.267,0.265,0.268,0.268,0.265)
C2 <-c(0.264,0.265,0.264,0.266,0.267,0.268,0.264,0.265,0.265,0.267,0.268,0.269)
t.test(C1, C2, paired = TRUE)
## 
##  Paired t-test
## 
## data:  C1 and C2
## t = 0.43179, df = 11, p-value = 0.6742
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  -0.001024344  0.001524344
## sample estimates:
## mean of the differences 
##                 0.00025

(a) Null Hypothesis Ho:μ1=μ2 and Alternative Hypothesis Ha:μ1≠μ2

μ1: mean of the measurements from caliper 1 and μ2: mean of the measurements from caliper 2

We can see that the p-value is 0.6742, which is very large compared to the significance level α = 0.05. This is weak evidence against the null hypothesis, so we fail to reject it. Hence, there is no significant difference between the mean measurements of the two calipers.

(b) p-value = 0.6742

(c) The 95 percent confidence interval is -0.001024344 ≤ μ1 − μ2 ≤ 0.001524344
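
As a quick cross-check (an addition to this write-up, not part of the textbook solution), the paired t statistic can be recomputed by hand, since the paired test is just a one-sample t-test on the differences C1 - C2:

# Paired t statistic rebuilt from the differences
d <- C1 - C2
t_stat <- mean(d) / (sd(d) / sqrt(length(d)))   # t = d-bar / (s_d / sqrt(n))
t_stat                                          # should match t = 0.43179 reported above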

Answer 2.34

Entering Data

KM <-c(1.186,1.151,1.322,1.339,1.200,1.402,1.365,1.537,1.559)
LM <- c(1.061,0.992,1.063,1.062,1.065,1.178,1.037,1.086,1.052)
t.test(KM, LM, paired = TRUE)
## 
##  Paired t-test
## 
## data:  KM and LM
## t = 6.0819, df = 8, p-value = 0.0002953
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##  0.1700423 0.3777355
## sample estimates:
## mean of the differences 
##               0.2738889

(a) Null Hypothesis Ho:μ1=μ2 and Alternative Hypothesis Ha:μ1≠μ2

μ1: mean for the Karlsruhe method and μ2: mean for the Lehigh method

We can see that the p-value is 0.0002953, which is very small compared to the significance level α = 0.05. Therefore, we reject the null hypothesis Ho and conclude that there is a difference in the means.

(b) p-value = 0.0002953

(c) The 95 percent confidence interval is 0.1700423 ≤ μ1 − μ2 ≤ 0.3777355
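
As a verification step (added here, not required by the problem), the 95 percent confidence interval can be rebuilt directly from the differences:

# 95% CI for the mean difference, computed by hand
d <- KM - LM
n <- length(d)
tcrit <- qt(0.975, df = n - 1)                  # two-sided 95% critical value, df = 8
mean(d) + c(-1, 1) * tcrit * sd(d) / sqrt(n)    # should reproduce 0.1700423 and 0.3777355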

(d) Checking the assumption of normality

mat1 <- cbind(KM, LM)
dat1 <- as.data.frame(mat1)
qqnorm(dat1$KM, main = "Normal Probability Plot for Karlsruhe Method")
qqline(dat1$KM)

qqnorm(dat1$LM, main = "Normal Probability Plot for Lehigh Method")
qqline(dat1$LM)

The normal probability plots for both methods appear fairly normal, as seen from the plots above.
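
As a supplementary numeric check (an addition, not required by the problem), the Shapiro-Wilk test can complement the visual assessment; a large p-value would be consistent with normality:

shapiro.test(dat1$KM)   # Karlsruhe method
shapiro.test(dat1$LM)   # Lehigh method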

(e) Normality assumption for the difference in ratios for the two methods:

R <- KM - LM
qqnorm(R, main = "Normal Probability Plot for the difference in ratios")
qqline(R)

The differences of the ratios for the two methods appear to be approximately normally distributed.

(f) For the paired t-test, the assumption of normality applies only to the distribution of the differences, which means the individual sample measurements do not have to be normally distributed.
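
To illustrate this point (a sketch added here, not part of the original answer), the paired t-test is numerically identical to a one-sample t-test on the differences, so only the differences enter the normality assumption:

# One-sample t-test on the differences; gives the same t, df, and p-value
# as t.test(KM, LM, paired = TRUE)
t.test(KM - LM, mu = 0)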

Answer 2.29 (e, f)

T95 <- c(11.176, 7.089, 8.097, 11.739, 11.291, 10.759, 6.467, 8.315)
T100 <- c(5.263, 6.748, 7.461, 7.015, 8.133, 7.418, 3.772, 8.963)
PT <- cbind(T95, T100)
PT <- as.data.frame(PT)
qqnorm(PT$T95, main = "Normal Probability Plot for Thickness at 95 C")
qqline(PT$T95)

qqnorm(PT$T100, main = "Normal Probability Plot for Thickness at 100 C")
qqline(PT$T100)

(e) Since the data points on both normal probability plots fall roughly along a straight line, we can conclude that the thickness measurements at both temperatures appear to be normally distributed.

(f) Power of the test:

Pooled standard deviation

s1 <- sd(T95)
n1 <- length(T95)
s2 <- sd(T100)
n2 <- length(T100)
pooled <- sqrt (((n1-1)*s1^2 + (n2-1)*s2^2) / (n1+n2-2))
pooled
## [1] 1.884034
library(pwr)   # loaded here, but power.t.test() below comes from base R (stats), not pwr
power.t.test(n = 8, delta = 2.5, sd = pooled, sig.level = 0.05, power = NULL, alternative = "one.sided", type = "two.sample")
## 
##      Two-sample t test power calculation 
## 
##               n = 8
##           delta = 2.5
##              sd = 1.884034
##       sig.level = 0.05
##           power = 0.8098869
##     alternative = one.sided
## 
## NOTE: n is number in *each* group

With a sample size of 8 in each group, the power is 0.8099, approximately 81%.
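
As a related sketch (not asked for in the problem), power.t.test() can also be solved for the sample size needed to reach a target power, for example 90 percent, by leaving n unspecified:

# Sample size per group required for 90% power at the same effect size
power.t.test(delta = 2.5, sd = pooled, sig.level = 0.05, power = 0.90, type = "two.sample", alternative = "one.sided")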

Answer 2.27 (Using non-parametric method)

SCCM125 <-c(2.7, 4.6, 2.6, 3.0, 3.2, 3.8)
SCCM200 <-c(4.6, 3.4, 2.9, 3.5, 4.1, 5.1)
wilcox.test(SCCM125,SCCM200)
## Warning in wilcox.test.default(SCCM125, SCCM200): cannot compute exact p-value
## with ties
## 
##  Wilcoxon rank sum test with continuity correction
## 
## data:  SCCM125 and SCCM200
## W = 9.5, p-value = 0.1994
## alternative hypothesis: true location shift is not equal to 0

The p-value is 0.1994 > 0.05 (α), hence we fail to reject the null hypothesis and conclude that the flow rate does not significantly affect the average etch uniformity.
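
As a cross-check (an addition, not part of the original answer), the pooled two-sample t-test on the same data can be run alongside the Wilcoxon test; it compares means rather than a location shift, so the two procedures need not agree exactly:

# Two-sample t-test with pooled variance, for comparison with the Wilcoxon result
t.test(SCCM125, SCCM200, var.equal = TRUE)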

Source Code

# All R code used in the file

library(dplyr)

C1 <-c(0.265,0.265,0.266,0.267,0.267,0.265,0.267,0.267,0.265,0.268,0.268,0.265)
C2 <-c(0.264,0.265,0.264,0.266,0.267,0.268,0.264,0.265,0.265,0.267,0.268,0.269)
t.test(C1, C2, paired = TRUE)

KM <-c(1.186,1.151,1.322,1.339,1.200,1.402,1.365,1.537,1.559)
LM <- c(1.061,0.992,1.063,1.062,1.065,1.178,1.037,1.086,1.052)
t.test(KM, LM, paired = TRUE)

qqnorm(KM, main = "Normal Probability Plot for Karlsruhe Method")
qqline(KM)

qqnorm(LM, main = "Normal Probability Plot for Lehigh Method")
qqline(LM)

R <- KM - LM
qqnorm(R, main = "Normal Probability Plot for the difference in ratios")
qqline(R)

T95 <- c(11.176, 7.089, 8.097, 11.739, 11.291, 10.759, 6.467, 8.315)
T100 <- c(5.263, 6.748, 7.461, 7.015, 8.133, 7.418, 3.772, 8.963)
PT <- cbind(T95, T100)
PT <- as.data.frame(PT)
qqnorm(PT$T95, main = "Normal Probability Plot for Thickness at 95 C")
qqline(PT$T95)
qqnorm(PT$T100, main = "Normal Probability Plot for Thickness at 100 C")
qqline(PT$T100)

s1 <- sd(T95)
n1 <- length(T95)
s2 <- sd(T100)
n2 <- length(T100)
pooled <- sqrt (((n1-1)*s1^2 + (n2-1)*s2^2) / (n1+n2-2))
pooled

library(pwr)
power.t.test(n = 8, delta = 2.5, sd = pooled, sig.level = 0.05, power = NULL, type = "two.sample", alternative = "one.sided")
             
SCCM125 <-c(2.7, 4.6, 2.6, 3.0, 3.2, 3.8)
SCCM200 <-c(4.6, 3.4, 2.9, 3.5, 4.1, 5.1)
wilcox.test(SCCM125,SCCM200)