1. An article in the Journal of Sound and Vibration [“Measurement of Noise-Evoked Blood Pressure by Means of Averaging Method: Relation between Blood Pressure Rise and PSL” (1991, Vol. 151(3), pp. 383-394)] described a study investigating the relationship between noise exposure and hypertension. The following data are representative of those reported in the article.

A. Draw a scatter diagram of y (blood pressure rise in millimeters of mercury) versus x (sound pressure level in decibels). Does a simple linear regression model seem reasonable in this situation?

A simple linear regression model seems reasonable in this situation because the points in the scatter diagram follow an approximately straight-line trend.

B. Fit the simple linear regression model using least squares. Find an estimate of \(\sigma^2\).
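A minimal R sketch of how this fit could be reproduced (the data frame `BP_data` and the columns `bpr` and `ne` come from the `lm()` call shown in the output below; the object name `fit1` is introduced here so later parts can reuse it):

```r
# Least-squares fit of blood pressure rise (bpr) on sound pressure level (ne)
fit1 <- lm(bpr ~ ne, data = BP_data)

# Coefficient estimates, residual standard error, R-squared, and overall F-test
summary(fit1)
```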

## 
## Call:
## lm(formula = BP_data$bpr ~ BP_data$ne, data = BP_data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.8120 -0.9040 -0.1333  0.5023  2.9310 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept) -10.13154    1.99490  -5.079 7.83e-05 ***
## BP_data$ne    0.17429    0.02383   7.314 8.57e-07 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.318 on 18 degrees of freedom
## Multiple R-squared:  0.7483, Adjusted R-squared:  0.7343 
## F-statistic:  53.5 on 1 and 18 DF,  p-value: 8.567e-07
Table 1.1: Simple Linear Regression Analysis (Coefficients)

| Predictor      | Coefficient | SE      | T     | P     |
|----------------|-------------|---------|-------|-------|
| Noise Exposure | 0.17429     | 0.02383 | 7.314 | 9e-07 |

Table 1.2: Simple Linear Regression Analysis (Model Summary)

| Statistic               | Value                          |
|-------------------------|--------------------------------|
| Residual standard error | 1.318 on 18 degrees of freedom |
| Multiple R-squared      | 0.7483                         |
| Adjusted R-squared      | 0.7343                         |
| F-statistic             | 53.5 on 1 and 18 DF            |
| p-value                 | 8.567e-07                      |

\[\LARGE \hat{\beta}_1 = 0.174\] \[\LARGE \hat{\beta}_0 = -10.131\]

Fitted Simple Linear Regression Model:

\[\LARGE \hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x = -10.131 + 0.174x\]

Estimate of \(\sigma^2\):

\[\hat{\sigma}^2=\frac{SS_E}{n-2}=\text{RSE}^2=(1.318)^2=1.737\]

where RSE = 1.318 is the residual standard error on 18 degrees of freedom reported in Table 1.2.
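As a quick check, the same estimate can be pulled directly from the fitted object (a sketch assuming the `fit1` object defined earlier):

```r
# sigma() returns the residual standard error, so its square estimates sigma^2
sigma(fit1)^2   # approximately 1.737
```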



C. Find the predicted mean blood pressure rise when the sound pressure level is 85 decibels.

\[\LARGE \hat{y} = -10.13154 + 0.17429(85) = 4.683 \text{ mm Hg}\]
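The same prediction can be obtained with `predict()`, again assuming the `fit1` object sketched earlier:

```r
# Predicted mean blood pressure rise at a sound pressure level of 85 dB
predict(fit1, newdata = data.frame(ne = 85))   # about 4.68 mm Hg
```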


2. An article in Optical Engineering [“Operating Curve Extraction of a Correlator’s Filter” (2004, Vol. 43, pp. 2775-2779)] reported on the use of an optical correlator to perform an experiment by varying brightness and contrast. The resulting modulation is characterized by the useful range of gray levels. The data follow:

A. Fit a multiple linear regression model to these data.
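A short R sketch of the fit (the data frame `optic_data` and the columns `range`, `bright`, and `contrast` come from the `lm()` call in the output below; `fit2` is a name introduced here for reuse in later parts):

```r
# Multiple linear regression of useful range on brightness and contrast
fit2 <- lm(range ~ bright + contrast, data = optic_data)

# Full summary: coefficients, residual standard error, R-squared, F-test
summary(fit2)
```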

## 
## Call:
## lm(formula = range ~ bright + contrast, data = optic_data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -32.334 -20.090  -8.451   8.413  69.047 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)   
## (Intercept) 238.5569    45.2285   5.274  0.00188 **
## bright        0.3339     0.6763   0.494  0.63904   
## contrast     -2.7167     0.6887  -3.945  0.00759 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 36.35 on 6 degrees of freedom
## Multiple R-squared:  0.7557, Adjusted R-squared:  0.6742 
## F-statistic: 9.278 on 2 and 6 DF,  p-value: 0.01459
Table 2.1: Regression Analysis (Coefficients)

| Predictor   | Coefficient | SE      | T      | P       |
|-------------|-------------|---------|--------|---------|
| (Intercept) | 238.5569    | 45.2285 | 5.274  | 0.00188 |
| Brightness  | 0.3339      | 0.6763  | 0.494  | 0.63904 |
| Contrast    | -2.7167     | 0.6887  | -3.945 | 0.00759 |

Fitted Multiple Linear Regression Model:

\[\large Y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \epsilon\] \[\large \hat{y} = 238.6 + 0.3339x_1 - 2.717x_2\]

where \(Y = \text{Useful Range},\ x_1 = \text{Brightness},\ x_2 = \text{Contrast}\)

B. Estimate \(\sigma^2\):

Table 2.2: Regression Analysis (Model Summary)

| Statistic               | Value                         |
|-------------------------|-------------------------------|
| Residual standard error | 36.35 on 6 degrees of freedom |
| Multiple R-squared      | 0.7557                        |
| Adjusted R-squared      | 0.6742                        |
| F-statistic             | 9.278 on 2 and 6 DF           |
| p-value                 | 0.01459                       |

\[\large \hat{\sigma}^2=\frac{SS_E}{n-p}=\text{RSE}^2=(36.35)^2\approx 1321.3\]
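The same number can be recovered directly from the residuals (a sketch assuming the `fit2` object from part A); this is literally \(SS_E\) divided by its degrees of freedom, \(n - p = 6\):

```r
# SS_E / (n - p): residual sum of squares over residual degrees of freedom
sum(resid(fit2)^2) / df.residual(fit2)   # approximately 1321
```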

C. Compute the standard errors of the regression coefficients.

Based on Table 2.1, the standard errors of the regression coefficients are

\[se(\hat{\beta}_0)=45.23 \qquad se(\hat{\beta}_1)=0.6763 \qquad se(\hat{\beta}_2)=0.6887\]
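These standard errors can also be read programmatically from the coefficient matrix returned by `summary()` (assuming the `fit2` object from part A):

```r
# Column "Std. Error" of the coefficient matrix holds the standard errors
summary(fit2)$coefficients[, "Std. Error"]
```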

D. Predict the useful range when brightness = 80 and contrast = 75

\[\text{Useful Range} = 238.6 + 0.3339(\text{brightness}) - 2.717(\text{contrast})\] \[= 238.6 + 0.3339(80) - 2.717(75)\]
\[\approx 61.5 \text{ gray levels}\]
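The same point prediction via `predict()`, assuming the `fit2` object from part A:

```r
# Predicted useful range at brightness = 80 and contrast = 75
predict(fit2, newdata = data.frame(bright = 80, contrast = 75))   # about 61.5 gray levels
```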

E. Test for significance of regression using α=0.05. What is the P-value for this test?

Based on Table 2.2, the P-value for the overall F-test is \(0.01459\). Since \(0.01459 < 0.05\), we reject \(H_0:\beta_1=\beta_2=0\) and conclude that the regression is significant.
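If the exact P-value is wanted programmatically, it can be recomputed from the stored F-statistic (a sketch assuming the `fit2` object from part A; `summary.lm()` keeps the F-statistic and its degrees of freedom but not the P-value itself):

```r
# Overall F-test for significance of regression
fs <- summary(fit2)$fstatistic                                  # value, numdf, dendf
pf(fs["value"], fs["numdf"], fs["dendf"], lower.tail = FALSE)   # about 0.0146
```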

F. Construct a t-test on each regression coefficient. What conclusions can you draw about the variables in this model? Use α=0.05.

Hypotheses:

\(\large H_0:\beta_0=0,\; H_1:\beta_0\neq0\)
\(\large H_0:\beta_1=0,\; H_1:\beta_1\neq0\)
\(\large H_0:\beta_2=0,\; H_1:\beta_2\neq0\)

Reject \(H_0:\beta_j=0\) if the corresponding P-value is less than \(\alpha = 0.05\).

Test statistics (from Table 2.1):

\[\large t_0=5.27 \qquad t_1=0.494 \qquad t_2=-3.95\]

Conclusion: From Table 2.1, the P-values for \(\beta_0,\ \beta_1,\ \beta_2\) are \(0.00188\), \(0.639\), and \(0.00759\), respectively. Therefore, \(H_0:\beta_0=0\) and \(H_0:\beta_2=0\) are rejected, while \(H_0:\beta_1=0\) is not.

Practical Interpretation: The intercept and contrast are significant in the regression model, while brightness is not.