Problem 6.8

library(DoE.base)
## Warning: package 'DoE.base' was built under R version 4.1.3
## Loading required package: grid
## Loading required package: conf.design
## Registered S3 method overwritten by 'DoE.base':
##   method           from       
##   factorize.factor conf.design
## 
## Attaching package: 'DoE.base'
## The following objects are masked from 'package:stats':
## 
##     aov, lm
## The following object is masked from 'package:graphics':
## 
##     plot.design
## The following object is masked from 'package:base':
## 
##     lengths
time<-c(rep(-1, 12),rep(1, 12) )
cul_med<-rep(c(rep(-1, 2), rep(1, 2)),6)
obs <- c(21,22,25,26,23,28,24,25,20,26,29,27,37,39,31,34,38,38,29,33,35,36,30,35)
time <- as.factor(time)
cul_med <- as.factor(cul_med)
dat <- data.frame(time,cul_med,obs)
dat
model <- aov(obs~time*cul_med,data=dat)
summary(model) 
##              Df Sum Sq Mean Sq F value   Pr(>F)    
## time          1  590.0   590.0 115.506 9.29e-10 ***
## cul_med       1    9.4     9.4   1.835 0.190617    
## time:cul_med  1   92.0    92.0  18.018 0.000397 ***
## Residuals    20  102.2     5.1                     
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
plot(model)

Hypothesis Test:

\(H_0: \alpha_i = 0\)

\(H_a: \alpha_i \neq 0\)

\(H_0: \beta_j\ = 0\)

\(H_a: \beta_j \neq 0\)

\(H_0: \alpha\beta_{ij}=0\)

\(H_a: \alpha\beta_{ij}\neq0\)

The main effect of Time and the Time:Culture Medium interaction are significant: their p-values are below the \(\alpha=0.001\) level, so we reject \(H_0\) for these terms. The main effect of Culture Medium is not significant, since its p-value (0.19) exceeds \(\alpha=0.05\), so we fail to reject \(H_0\) for that factor. The Residuals vs Fitted plot shows roughly constant variance and the Normal Q-Q plot suggests the errors are approximately normally distributed, so the model appears adequate.
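Since the Time:Culture Medium interaction is significant, an interaction plot helps show how the effect of time depends on the culture medium. This is a small sketch we add here, using the dat object created above; the axis labels are our own.

with(dat, interaction.plot(time, cul_med, obs,
                           xlab = "Time", trace.label = "Culture medium",
                           ylab = "Mean response"))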

Problem 6.12 (a) Estimate the factor effects

A <- c(rep(-1,4),rep(1,4))
B <- c(rep(-1,8),rep(1,8))
obs <- c(14.037,16.165,13.972,13.907,13.880,13.860,14.032,13.914,14.821,14.757,14.843,14.878,14.888,14.921,14.415,14.932)
A <- as.factor(A)
B <- as.factor(B)
dat <- data.frame(A,B,obs)
dat

Finding Corner Points:

one <- sum(dat$obs[1:4])
one
## [1] 58.081
a <- sum(dat$obs[5:8])
a
## [1] 55.686
b <- sum(dat$obs[9:12])
b
## [1] 59.299
ab <- sum(dat$obs[13:16])
ab
## [1] 59.156
n <- 4                                # replicates per treatment combination
fact_A <- (a + ab - b - one)/(2*n)    # main effect of A
fact_A
## [1] -0.31725
fact_B <- (b + ab - a - one)/(2*n)    # main effect of B
fact_B
## [1] 0.586
fact_AB <- (ab + one - a - b)/(2*n)   # interaction effect AB
fact_AB
## [1] 0.2815

Therefore, the estimated factor effects are A = -0.31725, B = 0.586, and the interaction effect AB = 0.2815.
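As a cross-check (a sketch, not part of the hand calculation above; A_num, B_num, and fit_coded are our own names), the lm() coefficients for ±1 coded numeric predictors are one-half of the factor effects, so doubling them should reproduce the estimates:

A_num <- rep(rep(c(-1, 1), each = 4), times = 2)   # +/-1 coding for A in run order
B_num <- rep(c(-1, 1), each = 8)                   # +/-1 coding for B in run order
fit_coded <- lm(dat$obs ~ A_num * B_num)
2 * coef(fit_coded)[-1]   # expected: approx. A = -0.31725, B = 0.586, AB = 0.2815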

Problem 6.12 (b) Analysis of variance
A <- c(rep(-1,4),rep(1,4))
B <- c(rep(-1,8),rep(1,8))
obs <- c(14.037,16.165,13.972,13.907,13.880,13.860,14.032,13.914,14.821,14.757,14.843,14.878,14.888,14.921,14.415,14.932)
A <- as.factor(A)
B <- as.factor(B)
dat <- data.frame(A,B,obs)
dat
model <- aov(obs~A*B,data=dat)
summary(model)
##             Df Sum Sq Mean Sq F value Pr(>F)  
## A            1  0.403  0.4026   1.262 0.2833  
## B            1  1.374  1.3736   4.305 0.0602 .
## A:B          1  0.317  0.3170   0.994 0.3386  
## Residuals   12  3.828  0.3190                 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Hypothesis Test: \(H_0: \alpha_i = 0\)

\(H_a: \alpha_i \neq 0\)

\(H_0: \beta_j\ = 0\)

\(H_a: \beta_j \neq 0\)

\(H_0: \alpha\beta_{ij}=0\)

\(H_a: \alpha\beta_{ij}\neq0\)

The p-values of the main effect A (0.2833) and the interaction A:B (0.3386) are greater than \(\alpha = 0.05\), so these terms are not significant and we fail to reject \(H_0\) for them. The p-value of the main effect B (0.0602) is below \(\alpha = 0.10\), so B is significant at the 10% level and we reject \(H_0\) for it.

Problem 6.12 (c) Regression equation
A <- c(rep(-1,4),rep(1,4))
B <- c(rep(-1,8),rep(1,8))
obs <- c(14.037,16.165,13.972,13.907,13.880,13.860,14.032,13.914,14.821,14.757,14.843,14.878,14.888,14.921,14.415,14.932)
A <- as.factor(A)
B <- as.factor(B)
dat <- data.frame(A,B,obs)
dat
model <- lm(obs~A+B+A*B,data=dat)
summary(model)
## 
## Call:
## lm.default(formula = obs ~ A + B + A * B, data = dat)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.61325 -0.14431 -0.00563  0.10188  1.64475 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  14.5203     0.2824  51.414 1.93e-15 ***
## A1           -0.5988     0.3994  -1.499    0.160    
## B1            0.3045     0.3994   0.762    0.461    
## A1:B1         0.5630     0.5648   0.997    0.339    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.5648 on 12 degrees of freedom
## Multiple R-squared:  0.3535, Adjusted R-squared:  0.1918 
## F-statistic: 2.187 on 3 and 12 DF,  p-value: 0.1425

Note that because A and B were entered as factors, lm() used treatment (dummy) coding, so the intercept above (14.5203) is the mean of the A = -1, B = -1 cell rather than the grand mean. In terms of the ±1 coded variable \(x_2\) for factor B (the only marginally significant term), the model is

y = \(\beta_0 + \beta_2 x_2 + \epsilon\)

Hence, y = \(14.5139 + 0.293 x_2 + \epsilon\)

where \(\beta_0\) = 14.5139 is the grand mean of all 16 observations and the regression coefficient \(\beta_2\) = 0.293 is one-half of the factor B effect estimate (0.586) from part (a).
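A quick check of these values (a sketch; x2 and fit_B are our own names, and x2 is our ±1 coding of B in the same run order as dat):

x2 <- rep(c(-1, 1), each = 8)   # +/-1 coding for factor B
fit_B <- lm(dat$obs ~ x2)
coef(fit_B)   # intercept approx. 14.514 (grand mean), slope approx. 0.293 (half the B effect)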

Problem 6.12 (d) Residual analysis
A <- c(rep(-1,4),rep(1,4))
B <- c(rep(-1,8),rep(1,8))
obs <- c(14.037,16.165,13.972,13.907,13.880,13.860,14.032,13.914,14.821,14.757,14.843,14.878,14.888,14.921,14.415,14.932)
A <- as.factor(A)
B <- as.factor(B)
dat <- data.frame(A,B,obs)
dat
model <- aov(obs~A*B,data=dat)
summary(model)
##             Df Sum Sq Mean Sq F value Pr(>F)  
## A            1  0.403  0.4026   1.262 0.2833  
## B            1  1.374  1.3736   4.305 0.0602 .
## A:B          1  0.317  0.3170   0.994 0.3386  
## Residuals   12  3.828  0.3190                 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
plot(model)

The Residuals vs Fitted plot is a significant concern: the spread of the residuals is not constant, so the constant-variance assumption is questionable and the model is not fully adequate. The Normal Q-Q plot, however, suggests the normality assumption is reasonably well satisfied.

Problem 6.12 (e) Potential outliers: Observation 2 (obs = 16.165) appears to be the only potential outlier, based on the \(\sqrt{|\text{standardized residuals}|}\) (scale-location) plot from part (d). With 15 other observations, a single outlier should not materially change the conclusions about growing an epitaxial layer on polished silicon wafers. Also, since the interaction effect is not significant, the main effects can be interpreted directly when choosing settings to maximize the response (epitaxial layer thickness).
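A quick numeric check of the suspected outlier (a sketch using the model object fitted above):

rstandard(model)                   # standardized residuals of the aov fit
which.max(abs(rstandard(model)))   # expected to point to observation 2 (obs = 16.165)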

Problem 6.21 (a)

A <- c(rep(-1,7),rep(1,7))
B <- c(rep(-1,14),rep(1,14))
C <- c(rep(-1,28),rep(1,28))
D <- c(rep(-1,56),rep(1,56))
obs <- c(10,18,14,12.5,19,16,18.5,0,16.5,4.5,17.5,20.5,17.5,33,4,6,1,14.5,12,14,5,0,10,34,11,25.5,21.5,0,0,0,18.5,19.5,16,15,11,
         5,20.5,18,20,29.5,19,10,6.5,18.5,7.5,6,0,10,0,16.5,4.5,0,23.5,8,8,8,4.5,18,14.5,10,0,17.5,6,19.5,18,16,5.5,10,7,
         36,15,16,8.5,0,0.5,9,3,41.5,39,6.5,3.5,7,8.5,36,8,4.5,6.5,10,13,41,14,21.5,10.5,6.5,0,15.5,24,16,0,0,0,4.5,1,4,6.5,18,
         5,7,10,32.5,18.5,8)
A <- as.factor(A)
B <- as.factor(B)
C <- as.factor(C)
D <- as.factor(D)
dat <- data.frame(A,B,C,D,obs)
dat
model <- aov(obs~A*B*C*D,data=dat)
summary(model)
##             Df Sum Sq Mean Sq F value  Pr(>F)   
## A            1    917   917.1  10.588 0.00157 **
## B            1    388   388.1   4.481 0.03686 * 
## C            1    145   145.1   1.676 0.19862   
## D            1      1     1.4   0.016 0.89928   
## A:B          1    219   218.7   2.525 0.11538   
## A:C          1     12    11.9   0.137 0.71178   
## B:C          1    115   115.0   1.328 0.25205   
## A:D          1     94    93.8   1.083 0.30066   
## B:D          1     56    56.4   0.651 0.42159   
## C:D          1      2     1.6   0.019 0.89127   
## A:B:C        1      7     7.3   0.084 0.77294   
## A:B:D        1    113   113.0   1.305 0.25623   
## A:C:D        1     39    39.5   0.456 0.50121   
## B:C:D        1     34    33.8   0.390 0.53386   
## A:B:C:D      1     96    95.6   1.104 0.29599   
## Residuals   96   8316    86.6                   
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Hypothesis Test:

\(H_0: \alpha_i = 0\)

\(H_a: \alpha_i \neq 0\)

\(H_0: \beta_j\ = 0\)

\(H_a: \beta_j \neq 0\)

\(H_0: \gamma_k\ = 0\)

\(H_a: \gamma_k \neq 0\)

\(H_0: \delta_l\ = 0\)

\(H_a: \delta_l \neq 0\)

\(H_0: \alpha\beta_{ij}=0\)

\(H_a: \alpha\beta_{ij}\neq0\)

\(H_0: \alpha\gamma_{ik}=0\)

\(H_a: \alpha\gamma_{ik}\neq0\)

\(H_0: \alpha\delta_{il}=0\)

\(H_a: \alpha\delta_{il}\neq0\)

(and similarly for the remaining two-, three-, and four-factor interaction effects)

\(H_0: \alpha\beta\gamma\delta_{ijkl}=0\)

\(H_a: \alpha\beta\gamma\delta_{ijkl}\neq0\)

From the ANOVA summary, none of the interaction terms is significant, but the main effects of length of putt and type of putter (factors A and B, p = 0.0016 and p = 0.037) are significant; these two factors significantly affect putting performance.
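To see the direction of the two significant main effects (A and B), we can look at the estimated factor-level means. This is a sketch using the aov object fitted above:

model.tables(model, type = "means")$tables[c("A", "B")]   # mean performance at each level of A and B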

Problem 6.21 (b)

plot(model)

The Residuals vs Fitted plot shows that the variance is not constant, so the model is not fully adequate, although the Normal Q-Q plot suggests the errors are approximately normally distributed.

Problem 6.36

Reading in the data:

library(DoE.base)
A<- c(-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1)
B<- c(-1,-1,1,1,-1,-1,1,1,-1,-1,1,1,-1,-1,1,1)
C<- c(-1,-1,-1,-1,1,1,1,1,-1,-1,-1,-1,1,1,1,1)
D<- c(-1,-1,-1,-1,-1,-1,-1,-1,1,1,1,1,1,1,1,1)
obs<- c(1.92,11.28,1.09,5.75,2.13,9.53,1.03,5.35,1.6,11.73,1.16,4.68,2.16,9.11,1.07,5.3)
dat<- data.frame(A,B,C,D,obs)

Problem 6.36 (a)

model<-lm(obs~A*B*C*D,data=dat)
coef(model)
## (Intercept)           A           B           C           D         A:B 
##    4.680625    3.160625   -1.501875   -0.220625   -0.079375   -1.069375 
##         A:C         B:C         A:D         B:D         C:D       A:B:C 
##   -0.298125    0.229375   -0.056875   -0.046875    0.029375    0.344375 
##       A:B:D       A:C:D       B:C:D     A:B:C:D 
##   -0.096875   -0.010625    0.094375    0.141875
halfnormal(model)
## 
## Significant effects (alpha=0.05, Lenth method):
## [1] A     B     A:B   A:B:C

From the half-normal plot, we can see that factors A, B, A:B, and A:B:C are significant at \(\alpha = 0.05\).

summary(model)
## 
## Call:
## lm.default(formula = obs ~ A * B * C * D, data = dat)
## 
## Residuals:
## ALL 16 residuals are 0: no residual degrees of freedom!
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)  4.68062        NaN     NaN      NaN
## A            3.16062        NaN     NaN      NaN
## B           -1.50187        NaN     NaN      NaN
## C           -0.22062        NaN     NaN      NaN
## D           -0.07937        NaN     NaN      NaN
## A:B         -1.06938        NaN     NaN      NaN
## A:C         -0.29812        NaN     NaN      NaN
## B:C          0.22937        NaN     NaN      NaN
## A:D         -0.05687        NaN     NaN      NaN
## B:D         -0.04688        NaN     NaN      NaN
## C:D          0.02937        NaN     NaN      NaN
## A:B:C        0.34437        NaN     NaN      NaN
## A:B:D       -0.09688        NaN     NaN      NaN
## A:C:D       -0.01063        NaN     NaN      NaN
## B:C:D        0.09438        NaN     NaN      NaN
## A:B:C:D      0.14188        NaN     NaN      NaN
## 
## Residual standard error: NaN on 0 degrees of freedom
## Multiple R-squared:      1,  Adjusted R-squared:    NaN 
## F-statistic:   NaN on 15 and 0 DF,  p-value: NA

To select a tentative model, we keep the terms identified as significant on the half-normal plot: A, B, A:B, and A:B:C.

So our tentative model, written in the ±1 coded variables, is

\(y = 4.68062 + 3.16062x_A - 1.50187x_B - 1.06938x_A x_B + 0.34437x_A x_B x_C + \epsilon\)
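As a sketch (our own check; tent is our own name), the tentative model can also be fitted directly. Because the design is a full factorial with ±1 coded columns, the coefficients of the retained terms are unchanged from the full model:

tent <- lm(obs ~ A + B + A:B + A:B:C, data = dat)   # A, B, C are +/-1 numeric, so A:B:C is their product
coef(tent)                                          # should match the coefficients in the equation above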

Problem 6.36 (b)

model2<- aov(obs~A+B+C+A*B+A*B*C,data =dat)
summary(model2)
##             Df Sum Sq Mean Sq  F value   Pr(>F)    
## A            1 159.83  159.83 1563.061 1.84e-10 ***
## B            1  36.09   36.09  352.937 6.66e-08 ***
## C            1   0.78    0.78    7.616  0.02468 *  
## A:B          1  18.30   18.30  178.933 9.33e-07 ***
## A:C          1   1.42    1.42   13.907  0.00579 ** 
## B:C          1   0.84    0.84    8.232  0.02085 *  
## A:B:C        1   1.90    1.90   18.556  0.00259 ** 
## Residuals    8   0.82    0.10                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
plot(model2)

## hat values (leverages) are all = 0.5
##  and there are no factor predictors; no plot no. 5

From the Normal Q-Q plot, we cannot assume normality of the residuals, since the points do not appear to fall along a straight line.

Also, from the Residuals vs Fitted plot, we cannot assume constant variance, since the residuals show varying spread, indicating that the variances are not equal.
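As an additional check not in the original analysis (a sketch), a Shapiro-Wilk test on the residuals can complement the visual Q-Q assessment:

shapiro.test(residuals(model2))   # formal test of normality of the residuals from model2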

Problem 6.36 (c)

Applying a log transformation to the response below:

logobs <- log(obs)
dat2 <- data.frame(A,B,C,D,logobs)
model3<- lm(logobs~A*B*C*D,data = dat2)
coef(model3)
##  (Intercept)            A            B            C            D          A:B 
##  1.185417116  0.812870345 -0.314277554 -0.006408558 -0.018077390 -0.024684570 
##          A:C          B:C          A:D          B:D          C:D        A:B:C 
## -0.039723700 -0.004225796 -0.009578245  0.003708723  0.017780432  0.063434408 
##        A:B:D        A:C:D        B:C:D      A:B:C:D 
## -0.029875960 -0.003740235  0.003765760  0.031322043
halfnormal(model3)
## 
## Significant effects (alpha=0.05, Lenth method):
## [1] A     B     A:B:C

summary(model3)
## 
## Call:
## lm.default(formula = logobs ~ A * B * C * D, data = dat2)
## 
## Residuals:
## ALL 16 residuals are 0: no residual degrees of freedom!
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)
## (Intercept)  1.185417        NaN     NaN      NaN
## A            0.812870        NaN     NaN      NaN
## B           -0.314278        NaN     NaN      NaN
## C           -0.006409        NaN     NaN      NaN
## D           -0.018077        NaN     NaN      NaN
## A:B         -0.024685        NaN     NaN      NaN
## A:C         -0.039724        NaN     NaN      NaN
## B:C         -0.004226        NaN     NaN      NaN
## A:D         -0.009578        NaN     NaN      NaN
## B:D          0.003709        NaN     NaN      NaN
## C:D          0.017780        NaN     NaN      NaN
## A:B:C        0.063434        NaN     NaN      NaN
## A:B:D       -0.029876        NaN     NaN      NaN
## A:C:D       -0.003740        NaN     NaN      NaN
## B:C:D        0.003766        NaN     NaN      NaN
## A:B:C:D      0.031322        NaN     NaN      NaN
## 
## Residual standard error: NaN on 0 degrees of freedom
## Multiple R-squared:      1,  Adjusted R-squared:    NaN 
## F-statistic:   NaN on 15 and 0 DF,  p-value: NA
model4<-aov(logobs~A+B+C+A*B*C,data=dat2)
summary(model4)
##             Df Sum Sq Mean Sq  F value   Pr(>F)    
## A            1 10.572  10.572 1994.556 6.98e-11 ***
## B            1  1.580   1.580  298.147 1.29e-07 ***
## C            1  0.001   0.001    0.124  0.73386    
## A:B          1  0.010   0.010    1.839  0.21207    
## A:C          1  0.025   0.025    4.763  0.06063 .  
## B:C          1  0.000   0.000    0.054  0.82223    
## A:B:C        1  0.064   0.064   12.147  0.00826 ** 
## Residuals    8  0.042   0.005                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
plot(model4)

After the log transformation, the half-normal plot shows that only factors A, B, and A:B:C are significant; the A:B interaction, which was significant before the transformation, no longer appears significant.

The half-normal plot results were double-checked with an ANOVA, which likewise shows A, B, and A:B:C to be significant, as shown above.

Based on the residual plots after the log transformation, we observed the following.

From the Normal Q-Q plot, we still cannot assume normality of the residuals, since the points do not appear to fall along a straight line.

Also, from the Residuals vs Fitted plot, we still cannot assume constant variance, since the residuals show varying spread, indicating that the variances are not equal.

Problem 6.36 (d)

Fitting the model in the coded variables (on the log scale) we have

\(\log y = 1.185417 + 0.812870x_A - 0.314278x_B + 0.063434x_A x_B x_C + \epsilon\)
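A sketch confirming the reduced log-scale model (fit_d is our own name; because the design is orthogonal, the coefficients should match the equation above):

fit_d <- lm(logobs ~ A + B + A:B:C, data = dat2)   # retained terms only, on the log scale
coef(fit_d)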

Problem 6.39

Reading in the data:

library(DoE.base)
A<-c(-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1)
B<-c(-1,-1,1,1,-1,-1,1,1,-1,-1,1,1,-1,-1,1,1,-1,-1,1,1,-1,-1,1,1,-1,-1,1,1,-1,-1,1,1)
C<-c(-1,-1,-1,-1,1,1,1,1,-1,-1,-1,-1,1,1,1,1,-1,-1,-1,-1,1,1,1,1,-1,-1,-1,-1,1,1,1,1)
D<-c(-1,-1,-1,-1,-1,-1,-1,-1,1,1,1,1,1,1,1,1,-1,-1,-1,-1,-1,-1,-1,-1,1,1,1,1,1,1,1,1)
E<- c(-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1)
obs<- c(8.11,5.56,5.77,5.82,9.17,7.8,3.23,5.69,8.82,14.23,9.2,8.94,8.68,11.49,6.25,9.12,7.93,5,7.47,12,9.86,3.65,6.4,11.61,12.43,17.55,8.87,25.38,13.06,18.85,11.78,26.05)
dat<- data.frame(A,B,C,D,E,obs)

Problem 6.39 (a)

model<-lm(obs~A*B*C*D*E,data=dat)
coef(model)
## (Intercept)           A           B           C           D           E 
##  10.1803125   1.6159375   0.0434375  -0.0121875   2.9884375   2.1878125 
##         A:B         A:C         B:C         A:D         B:D         C:D 
##   1.2365625  -0.0015625  -0.1953125   1.6665625  -0.0134375   0.0034375 
##         A:E         B:E         C:E         D:E       A:B:C       A:B:D 
##   1.0271875   1.2834375   0.3015625   1.3896875   0.2503125  -0.3453125 
##       A:C:D       B:C:D       A:B:E       A:C:E       B:C:E       A:D:E 
##  -0.0634375   0.3053125   1.1853125  -0.2590625   0.1709375   0.9015625 
##       B:D:E       C:D:E     A:B:C:D     A:B:C:E     A:B:D:E     A:C:D:E 
##  -0.0396875   0.3959375  -0.0740625  -0.1846875   0.4071875   0.1278125 
##     B:C:D:E   A:B:C:D:E 
##  -0.0746875  -0.3553125
halfnormal(model)
## 
## Significant effects (alpha=0.05, Lenth method):
##  [1] D     E     A:D   A     D:E   B:E   A:B   A:B:E A:E   A:D:E

summary(model)
## 
## Call:
## lm.default(formula = obs ~ A * B * C * D * E, data = dat)
## 
## Residuals:
## ALL 32 residuals are 0: no residual degrees of freedom!
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)
## (Intercept) 10.180312        NaN     NaN      NaN
## A            1.615938        NaN     NaN      NaN
## B            0.043438        NaN     NaN      NaN
## C           -0.012187        NaN     NaN      NaN
## D            2.988437        NaN     NaN      NaN
## E            2.187813        NaN     NaN      NaN
## A:B          1.236562        NaN     NaN      NaN
## A:C         -0.001563        NaN     NaN      NaN
## B:C         -0.195313        NaN     NaN      NaN
## A:D          1.666563        NaN     NaN      NaN
## B:D         -0.013438        NaN     NaN      NaN
## C:D          0.003437        NaN     NaN      NaN
## A:E          1.027188        NaN     NaN      NaN
## B:E          1.283437        NaN     NaN      NaN
## C:E          0.301563        NaN     NaN      NaN
## D:E          1.389687        NaN     NaN      NaN
## A:B:C        0.250313        NaN     NaN      NaN
## A:B:D       -0.345312        NaN     NaN      NaN
## A:C:D       -0.063437        NaN     NaN      NaN
## B:C:D        0.305312        NaN     NaN      NaN
## A:B:E        1.185313        NaN     NaN      NaN
## A:C:E       -0.259062        NaN     NaN      NaN
## B:C:E        0.170938        NaN     NaN      NaN
## A:D:E        0.901563        NaN     NaN      NaN
## B:D:E       -0.039687        NaN     NaN      NaN
## C:D:E        0.395938        NaN     NaN      NaN
## A:B:C:D     -0.074063        NaN     NaN      NaN
## A:B:C:E     -0.184688        NaN     NaN      NaN
## A:B:D:E      0.407187        NaN     NaN      NaN
## A:C:D:E      0.127812        NaN     NaN      NaN
## B:C:D:E     -0.074688        NaN     NaN      NaN
## A:B:C:D:E   -0.355312        NaN     NaN      NaN
## 
## Residual standard error: NaN on 0 degrees of freedom
## Multiple R-squared:      1,  Adjusted R-squared:    NaN 
## F-statistic:   NaN on 31 and 0 DF,  p-value: NA
model2<- aov(obs~A+B+D+E+A*B+A*D+A*E+B*E+D*E+A*B*E+A*D*E,data=dat)
summary(model2)
##             Df Sum Sq Mean Sq F value   Pr(>F)    
## A            1  83.56   83.56  51.362 6.10e-07 ***
## B            1   0.06    0.06   0.037 0.849178    
## D            1 285.78  285.78 175.664 2.30e-11 ***
## E            1 153.17  153.17  94.149 5.24e-09 ***
## A:B          1  48.93   48.93  30.076 2.28e-05 ***
## A:D          1  88.88   88.88  54.631 3.87e-07 ***
## A:E          1  33.76   33.76  20.754 0.000192 ***
## B:E          1  52.71   52.71  32.400 1.43e-05 ***
## D:E          1  61.80   61.80  37.986 5.07e-06 ***
## A:B:E        1  44.96   44.96  27.635 3.82e-05 ***
## A:D:E        1  26.01   26.01  15.988 0.000706 ***
## Residuals   20  32.54    1.63                     
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

From the half-normal plot, we can see that A, D, E, A:B, A:D, A:E, B:E, D:E, A:B:E, and A:D:E are significant.

The half-normal plot results were double-checked with an ANOVA, which confirms that the terms listed above are significant.

Problem 6.39 (b)

plot(model2)

## hat values (leverages) are all = 0.375
##  and there are no factor predictors; no plot no. 5

From the Normal Q-Q plot, we cannot assume normality of the residuals, since the points do not appear to fall along a straight line.

Also, from the Residuals vs Fitted plot, we cannot assume constant variance, since the residuals show varying spread, indicating that the variances are not equal.
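One more way to look at the spread (a sketch we add, using the model2 object fitted above):

plot(fitted(model2), abs(residuals(model2)),
     xlab = "Fitted values", ylab = "|Residuals|")   # an increasing trend supports the non-constant-variance concern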

Problem 6.39 (c)

A<- c(-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1,-1,1)
B<- c(-1,-1,1,1,-1,-1,1,1,-1,-1,1,1,-1,-1,1,1,-1,-1,1,1,-1,-1,1,1,-1,-1,1,1,-1,-1,1,1)
D<- c(-1,-1,-1,-1,-1,-1,-1,-1,1,1,1,1,1,1,1,1,-1,-1,-1,-1,-1,-1,-1,-1,1,1,1,1,1,1,1,1)
E<- c(-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,-1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1)
obs<- c(8.11,5.56,5.77,5.82,9.17,7.8,3.23,5.69,8.82,14.23,9.2,8.94,8.68,11.49,6.25,9.12,7.93,5,7.47,12,9.86,3.65,6.4,11.61,12.43,17.55,8.87,25.38,13.06,18.85,11.78,26.05)
dat<- data.frame(A,B,D,E,obs)
model4<- lm(obs~A*B*D*E,data =dat)
coef(model4)
## (Intercept)           A           B           D           E         A:B 
##  10.1803125   1.6159375   0.0434375   2.9884375   2.1878125   1.2365625 
##         A:D         B:D         A:E         B:E         D:E       A:B:D 
##   1.6665625  -0.0134375   1.0271875   1.2834375   1.3896875  -0.3453125 
##       A:B:E       A:D:E       B:D:E     A:B:D:E 
##   1.1853125   0.9015625  -0.0396875   0.4071875
halfnormal(model4)
## 
## Significant effects (alpha=0.05, Lenth method):
##  [1] D     E     A:D   A     D:E   B:E   A:B   A:B:E A:E   A:D:E e10

summary(model4)
## 
## Call:
## lm.default(formula = obs ~ A * B * D * E, data = dat)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.4750 -0.5637  0.0000  0.5637  1.4750 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 10.18031    0.21360  47.661  < 2e-16 ***
## A            1.61594    0.21360   7.565 1.14e-06 ***
## B            0.04344    0.21360   0.203 0.841418    
## D            2.98844    0.21360  13.991 2.16e-10 ***
## E            2.18781    0.21360  10.243 1.97e-08 ***
## A:B          1.23656    0.21360   5.789 2.77e-05 ***
## A:D          1.66656    0.21360   7.802 7.66e-07 ***
## B:D         -0.01344    0.21360  -0.063 0.950618    
## A:E          1.02719    0.21360   4.809 0.000193 ***
## B:E          1.28344    0.21360   6.009 1.82e-05 ***
## D:E          1.38969    0.21360   6.506 7.24e-06 ***
## A:B:D       -0.34531    0.21360  -1.617 0.125501    
## A:B:E        1.18531    0.21360   5.549 4.40e-05 ***
## A:D:E        0.90156    0.21360   4.221 0.000650 ***
## B:D:E       -0.03969    0.21360  -0.186 0.854935    
## A:B:D:E      0.40719    0.21360   1.906 0.074735 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.208 on 16 degrees of freedom
## Multiple R-squared:  0.9744, Adjusted R-squared:  0.9504 
## F-statistic: 40.58 on 15 and 16 DF,  p-value: 7.07e-10
model5<- aov(obs~A+B+D+E+A*B+A*D+A*E+B*E+D*E+A*B*E+A*D*E,data=dat)
summary(model5)
##             Df Sum Sq Mean Sq F value   Pr(>F)    
## A            1  83.56   83.56  51.362 6.10e-07 ***
## B            1   0.06    0.06   0.037 0.849178    
## D            1 285.78  285.78 175.664 2.30e-11 ***
## E            1 153.17  153.17  94.149 5.24e-09 ***
## A:B          1  48.93   48.93  30.076 2.28e-05 ***
## A:D          1  88.88   88.88  54.631 3.87e-07 ***
## A:E          1  33.76   33.76  20.754 0.000192 ***
## B:E          1  52.71   52.71  32.400 1.43e-05 ***
## D:E          1  61.80   61.80  37.986 5.07e-06 ***
## A:B:E        1  44.96   44.96  27.635 3.82e-05 ***
## A:D:E        1  26.01   26.01  15.988 0.000706 ***
## Residuals   20  32.54    1.63                     
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
plot(model5)

## hat values (leverages) are all = 0.375
##  and there are no factor predictors; no plot no. 5

Factor C was dropped from the model entirely because none of its terms appeared significant. After dropping factor C, the half-normal plot again identifies A, D, E, A:B, A:D, A:E, B:E, D:E, A:B:E, and A:D:E as significant.

The half-normal plot results were double-checked with an ANOVA, which confirms that the terms listed above are significant.

These results are essentially the same as those obtained when factor C was included.

Problem 6.39 (d)

Writing the fitted model in the coded variables, we have

\(\hat{y} = 10.18031 + 1.61594x_A + 0.04344x_B + 2.98844x_D + 2.18781x_E + 1.23656x_A x_B + 1.66656x_A x_D + 1.02719x_A x_E + 1.28344x_B x_E + 1.38969x_D x_E + 1.18531x_A x_B x_E + 0.90156x_A x_D x_E\)

To maximize the predicted response from the model above, note the following.

Because all of the coefficients in the fitted model are positive, each of the factors A, B, D, and E should be set at its +1 level to produce the maximum predicted response.
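As a sketch, the predicted response at this setting can be obtained from the reduced model fitted in part (c) (model5), with all four factors at +1:

predict(model5, newdata = data.frame(A = 1, B = 1, D = 1, E = 1))
# roughly 25.7 -- the intercept plus all of the (positive) coefficients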