##Chapter 3, problem number 7.

In the punting data, we find the average distance punted and hang times of 10 punts of an American football as related to various measures of leg strength for 13 volunteers.

###a.Fit a regression model with Distance as the response and the right and left leg strengths and flexibilities as predictors. Which predictors are significant at the 5% level?

```
data(punting, package="faraway")
lmod7a <- lm(Distance ~ RStr + LStr + RFlex + LFlex ,punting)
lmod7a

Call:
lm(formula = Distance ~ RStr + LStr + RFlex + LFlex, data = punting)

Coefficients:
(Intercept)         RStr         LStr        RFlex        LFlex  
   -79.6236       0.5116      -0.1862       2.3745      -0.5277  
summary(lmod7a) 

Call:
lm(formula = Distance ~ RStr + LStr + RFlex + LFlex, data = punting)

Residuals:
    Min      1Q  Median      3Q     Max 
-23.941  -8.958  -4.441  13.523  17.016 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept) -79.6236    65.5935  -1.214    0.259
RStr          0.5116     0.4856   1.054    0.323
LStr         -0.1862     0.5130  -0.363    0.726
RFlex         2.3745     1.4374   1.652    0.137
LFlex        -0.5277     0.8255  -0.639    0.541

Residual standard error: 16.33 on 8 degrees of freedom
Multiple R-squared:  0.7365,    Adjusted R-squared:  0.6047 
F-statistic:  5.59 on 4 and 8 DF,  p-value: 0.01902
```

From the summary, none of the predictors has a p-value below 0.05, so no predictor is individually significant at the 5% level.

###b.Use an F-test to determine whether collectively these four predictors have a relationship to the response.

```
nullmod7 <- lm(Distance ~ 1,punting)
anova(nullmod7,lmod7a)
Analysis of Variance Table

Model 1: Distance ~ 1
Model 2: Distance ~ RStr + LStr + RFlex + LFlex
  Res.Df    RSS Df Sum of Sq      F  Pr(>F)  
1     12 8093.3                              
2      8 2132.6  4    5960.7 5.5899 0.01902 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```

We can see that the p-value of the F-test is less than 0.05, so we reject the null hypothesis that all four coefficients are zero. Collectively, these four predictors do have a relationship to the response.
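As a quick arithmetic check, the F statistic can be reproduced by hand from the residual sums of squares printed in the ANOVA table (a sketch using the rounded values above, so the result matches only approximately):

```r
# F = ((RSS_null - RSS_full) / difference in df) / (RSS_full / residual df of full model)
rss_null <- 8093.3
rss_full <- 2132.6
f_stat <- ((rss_null - rss_full) / 4) / (rss_full / 8)
f_stat                # about 5.59, matching the table
1 - pf(f_stat, 4, 8)  # p-value, about 0.019
```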

###c.Relative to the model in (a), test whether the right and left leg strengths have the same effect.

```
lmod7c<- lm(Distance ~ I(RStr + LStr) + RFlex + LFlex ,punting)
summary(lmod7c)

Call:
lm(formula = Distance ~ I(RStr + LStr) + RFlex + LFlex, data = punting)

Residuals:
    Min      1Q  Median      3Q     Max 
-21.698  -9.494  -5.155   9.081  20.611 

Coefficients:
               Estimate Std. Error t value Pr(>|t|)
(Intercept)    -71.2694    63.1447  -1.129    0.288
I(RStr + LStr)   0.1741     0.1940   0.898    0.393
RFlex            2.3137     1.4013   1.651    0.133
LFlex           -0.5772     0.8035  -0.718    0.491

Residual standard error: 15.94 on 9 degrees of freedom
Multiple R-squared:  0.7174,    Adjusted R-squared:  0.6232 
F-statistic: 7.615 on 3 and 9 DF,  p-value: 0.00769
anova(lmod7c,lmod7a)
Analysis of Variance Table

Model 1: Distance ~ I(RStr + LStr) + RFlex + LFlex
Model 2: Distance ~ RStr + LStr + RFlex + LFlex
  Res.Df    RSS Df Sum of Sq      F Pr(>F)
1      9 2287.4                           
2      8 2132.6  1    154.72 0.5804  0.468
```

Because the p-value of the F-test is larger than 0.05, we cannot reject the null hypothesis that the right and left leg strengths have the same effect; the data give no evidence that their effects differ.
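An equivalent check, not part of the original output, is to reparameterize the model so that a single coefficient captures the difference between the two strength effects; its t-test is equivalent to the F-test above (t² = F). A sketch:

```r
# The coefficient on I(RStr - LStr) equals (beta_RStr - beta_LStr)/2,
# so testing it against zero tests whether the two strengths have the same effect.
lmod7c_alt <- lm(Distance ~ I(RStr + LStr) + I(RStr - LStr) + RFlex + LFlex, punting)
summary(lmod7c_alt)$coefficients["I(RStr - LStr)", ]
```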

###d.Construct a 95% confidence region for (βRStr, βLStr). Explain how the test in (c) relates to this region.

```
confint(lmod7a, c("RStr", "LStr"))
          2.5 %    97.5 %
RStr -0.6080871 1.6313618
LStr -1.3690973 0.9966981
```

Note that confint() returns only the two marginal 95% intervals, not the joint confidence region, which is an ellipse in the (βRStr, βLStr) plane. Because the test in (c) does not reject H0: βRStr = βLStr, the line βRStr = βLStr must pass through the 95% joint confidence region; the test and the region are two views of the same comparison.
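A sketch of how the joint region could be drawn, assuming the ellipse package is installed (coefficients 2 and 3 of lmod7a are RStr and LStr):

```r
library(ellipse)
# 95% joint confidence region (an ellipse) for (beta_RStr, beta_LStr)
plot(ellipse(lmod7a, c(2, 3)), type = "l",
     xlab = expression(beta[RStr]), ylab = expression(beta[LStr]))
points(coef(lmod7a)["RStr"], coef(lmod7a)["LStr"], pch = 19)  # point estimate
abline(0, 1, lty = 2)  # the line beta_RStr = beta_LStr, the hypothesis tested in (c)
```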

###e.Fit a model to test the hypothesis that it is total leg strength defined by adding the right and left leg strengths that is sufficient to predict the response in comparison to using individual left and right leg strengths.

```
lmod7e <- lm(Distance ~ RStr + LStr,  punting)
summary(lmod7e)

Call:
lm(formula = Distance ~ RStr + LStr, data = punting)

Residuals:
    Min      1Q  Median      3Q     Max 
-29.280  -9.583   3.147  10.266  26.450 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  12.8490    33.0334   0.389    0.705
RStr          0.7208     0.4913   1.467    0.173
LStr          0.2011     0.4883   0.412    0.689

Residual standard error: 17.24 on 10 degrees of freedom
Multiple R-squared:  0.6327,    Adjusted R-squared:  0.5592 
F-statistic: 8.611 on 2 and 10 DF,  p-value: 0.00669
lmod7total<- lm(Distance ~I(RStr + LStr),punting)
summary(lmod7total)

Call:
lm(formula = Distance ~ I(RStr + LStr), data = punting)

Residuals:
    Min      1Q  Median      3Q     Max 
-27.632 -11.531   2.171   8.443  30.672 

Coefficients:
               Estimate Std. Error t value Pr(>|t|)   
(Intercept)     14.0936    31.8838   0.442  0.66703   
I(RStr + LStr)   0.4601     0.1082   4.252  0.00136 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 16.68 on 11 degrees of freedom
Multiple R-squared:  0.6217,    Adjusted R-squared:  0.5874 
F-statistic: 18.08 on 1 and 11 DF,  p-value: 0.001361
anova(lmod7e,lmod7total)
Analysis of Variance Table

Model 1: Distance ~ RStr + LStr
Model 2: Distance ~ I(RStr + LStr)
  Res.Df    RSS Df Sum of Sq      F Pr(>F)
1     10 2973.1                           
2     11 3061.3 -1   -88.281 0.2969 0.5978
```

Because the p-value of the F-test is larger than 0.05, we cannot reject the null hypothesis; total leg strength (the sum of the right and left leg strengths) is sufficient to predict the response in comparison to using the individual left and right leg strengths.
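For readability, the same comparison can be run with the smaller (total-strength) model listed first, which avoids the negative Df and Sum of Sq entries in the table above; the F statistic and p-value are unchanged. A sketch:

```r
anova(lmod7total, lmod7e)  # same test: F = 0.2969, p = 0.5978
```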

###h.Fit a model with Hang as the response and the same four predictors. Can we make a test to compare this model to that used in (a)? Explain.

```
lmod7h<- lm(Hang ~ RStr + LStr + RFlex + LFlex ,punting)
lmod7h

Call:
lm(formula = Hang ~ RStr + LStr + RFlex + LFlex, data = punting)

Coefficients:
(Intercept)         RStr         LStr        RFlex        LFlex  
  -0.225239     0.005153     0.007697     0.019404     0.004614  
summary(lmod7h)

Call:
lm(formula = Hang ~ RStr + LStr + RFlex + LFlex, data = punting)

Residuals:
     Min       1Q   Median       3Q      Max 
-0.36297 -0.13528 -0.07849  0.09938  0.35893 

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.225239   1.032784  -0.218    0.833
RStr         0.005153   0.007645   0.674    0.519
LStr         0.007697   0.008077   0.953    0.369
RFlex        0.019404   0.022631   0.857    0.416
LFlex        0.004614   0.012998   0.355    0.732

Residual standard error: 0.2571 on 8 degrees of freedom
Multiple R-squared:  0.8156,    Adjusted R-squared:  0.7235 
F-statistic: 8.848 on 4 and 8 DF,  p-value: 0.004925
```

No, we cannot use an F-test to compare the model in (h) with the model in (a) because they have different responses and are therefore not nested models. We can, however, informally compare their R-squared values: model (h) has R-squared 0.8156, higher than the 0.7365 of model (a), which suggests the four predictors have a stronger linear relationship with Hang than with Distance.

##Chapter 4, problem number 1.

For the prostate data, fit a model with lpsa as the response and the other variables as predictors.

###a.Suppose a new patient arrives with the values entered in newp below. Predict the lpsa for this patient along with an appropriate 95% CI.

```
data(prostate, package="faraway")
lmod4a<- lm(lpsa ~ lcavol + lweight + age + lbph + svi + lcp + gleason + pgg45 ,prostate)
summary(lmod4a)

Call:
lm(formula = lpsa ~ lcavol + lweight + age + lbph + svi + lcp + 
    gleason + pgg45, data = prostate)

Residuals:
    Min      1Q  Median      3Q     Max 
-1.7331 -0.3713 -0.0170  0.4141  1.6381 

Coefficients:
             Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.669337   1.296387   0.516  0.60693    
lcavol       0.587022   0.087920   6.677 2.11e-09 ***
lweight      0.454467   0.170012   2.673  0.00896 ** 
age         -0.019637   0.011173  -1.758  0.08229 .  
lbph         0.107054   0.058449   1.832  0.07040 .  
svi          0.766157   0.244309   3.136  0.00233 ** 
lcp         -0.105474   0.091013  -1.159  0.24964    
gleason      0.045142   0.157465   0.287  0.77503    
pgg45        0.004525   0.004421   1.024  0.30886    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.7084 on 88 degrees of freedom
Multiple R-squared:  0.6548,    Adjusted R-squared:  0.6234 
F-statistic: 20.86 on 8 and 88 DF,  p-value: < 2.2e-16
newp <- data.frame(lcavol= 1.44692, lweight= 3.62301, age= 65.00000, lbph= 0.30010, svi= 0.00000, lcp= -0.79851,gleason= 7.00000,pgg45= 15.00000)
predict (lmod4a , newp,interval="confidence")
       fit      lwr      upr
1 2.389053 2.172437 2.605669
```
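The interval above is a confidence interval for the mean lpsa of patients with these covariate values. If an interval for this individual patient's own lpsa is wanted instead, predict() can also return a (wider) prediction interval; a sketch:

```r
predict(lmod4a, newp, interval = "prediction")
```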

###b.Repeat the last question for a patient with the same values except that he is age 20. Explain why the CI is wider.

```
newb <- data.frame(lcavol= 1.44692, gleason= 7.00000, lweight= 3.62301, pgg45= 15.00000, age= 20, lbph= 0.30010, svi= 0.00000, lcp= -0.79851)
predict(lmod4a, newb,interval="confidence")
       fit      lwr      upr
1 3.272726 2.260444 4.285007
```

Because age 20 lies well outside the range of ages in the original data, the prediction is an extrapolation, so the CI is wider.
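A quick check of this claim is to look at the ages actually observed in the data (a sketch):

```r
range(prostate$age)  # age 20 falls well below the youngest patient in the sample
```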

###c.For the model of the previous question, remove all the predictors that are not significant at the 5% level. Now recompute the predictions of the previous question. Are the CIs wider or narrower? Which predictions would you prefer? Explain.

```
lmod42c <- lm(lpsa ~ lcavol + lweight + svi, data = prostate)
summary(lmod42c)

Call:
lm(formula = lpsa ~ lcavol + lweight + svi, data = prostate)

Residuals:
     Min       1Q   Median       3Q      Max 
-1.72964 -0.45764  0.02812  0.46403  1.57013 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -0.26809    0.54350  -0.493  0.62298    
lcavol       0.55164    0.07467   7.388  6.3e-11 ***
lweight      0.50854    0.15017   3.386  0.00104 ** 
svi          0.66616    0.20978   3.176  0.00203 ** 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.7168 on 93 degrees of freedom
Multiple R-squared:  0.6264,    Adjusted R-squared:  0.6144 
F-statistic: 51.99 on 3 and 93 DF,  p-value: < 2.2e-16
newb <- data.frame(lcavol= 1.44692, gleason= 7.00000, lweight= 3.62301, pgg45= 15.00000, age= 20, lbph= 0.30010, svi= 0.00000, lcp= -0.79851)
predict(lmod42c, newb, interval="confidence")
       fit      lwr      upr
1 2.372534 2.197274 2.547794
anova(lmod4a,lmod42c)
Analysis of Variance Table

Model 1: lpsa ~ lcavol + lweight + age + lbph + svi + lcp + gleason + 
    pgg45
Model 2: lpsa ~ lcavol + lweight + svi
  Res.Df    RSS Df Sum of Sq      F Pr(>F)
1     88 44.163                           
2     93 47.785 -5   -3.6218 1.4434 0.2167
```

We can see that the CI from the reduced model is narrower. Since the p-value of the F-test comparing the two models is larger than 0.05, we cannot reject the null hypothesis that the removed coefficients are zero. I would still prefer the prediction from the original model (with lpsa as the response and all the other variables as predictors).
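The narrowing can be checked directly by comparing the widths of the two intervals for the same new patient (a sketch reusing the newb data frame):

```r
ci_full    <- predict(lmod4a,  newb, interval = "confidence")
ci_reduced <- predict(lmod42c, newb, interval = "confidence")
c(full    = ci_full[1, "upr"] - ci_full[1, "lwr"],
  reduced = ci_reduced[1, "upr"] - ci_reduced[1, "lwr"])
```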

##Chapter 4, problem number 2. Using the teengamb data, fit a model with gamble as the response and the other variables as predictors.

###a.Predict the amount that a male with average (given these data) status, income and verbal score would gamble along with an appropriate 95% CI.

```
data(teengamb, package="faraway")
lmod42a <- lm(gamble ~ sex + status + income + verbal , teengamb)
summary(lmod42a)

Call:
lm(formula = gamble ~ sex + status + income + verbal, data = teengamb)

Residuals:
    Min      1Q  Median      3Q     Max 
-51.082 -11.320  -1.451   9.452  94.252 

Coefficients:
             Estimate Std. Error t value Pr(>|t|)    
(Intercept)  22.55565   17.19680   1.312   0.1968    
sex         -22.11833    8.21111  -2.694   0.0101 *  
status        0.05223    0.28111   0.186   0.8535    
income        4.96198    1.02539   4.839 1.79e-05 ***
verbal       -2.95949    2.17215  -1.362   0.1803    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 22.69 on 42 degrees of freedom
Multiple R-squared:  0.5267,    Adjusted R-squared:  0.4816 
F-statistic: 11.69 on 4 and 42 DF,  p-value: 1.815e-06
x <- model.matrix(lmod42a)
mean <- apply(x, 2, mean)
mean["sex"] <- 0
predict(lmod42a, data.frame(t(mean)), interval="confidence")
       fit      lwr      upr
1 28.24252 18.78277 37.70227
```
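The same interval can be reproduced from the formula x0'b ± t(0.975, n−p) · se(x0'b), where b is the estimated coefficient vector; this makes explicit what predict() is doing. A sketch (in teengamb, sex = 0 codes a male):

```r
x0 <- apply(model.matrix(lmod42a), 2, mean)        # average of each column, including the intercept
x0["sex"] <- 0                                     # sex = 0 codes a male in teengamb
fit <- sum(x0 * coef(lmod42a))                     # point prediction x0' b
se  <- drop(sqrt(t(x0) %*% vcov(lmod42a) %*% x0))  # standard error of the estimated mean response
tcrit <- qt(0.975, df.residual(lmod42a))
c(fit = fit, lwr = fit - tcrit * se, upr = fit + tcrit * se)
```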

###b.Repeat the prediction for a male with maximal values (for this data) of status, income and verbal score. Which CI is wider and why is this result expected?

```
max <- apply(x, 2, max)
max["sex"] <- 0
predict(lmod42a, data.frame(t(max)), interval="confidence")
       fit      lwr      upr
1 71.30794 42.23237 100.3835
```

The CI in (b) is wider. The maximal values lie at the edge of the observed data, far from the average predictor values, so the estimated mean response there is less precise and the interval is wider.
