3.6.2 Simple Linear Regression

names(Boston)
 [1] "crim"    "zn"      "indus"   "chas"    "nox"     "rm"      "age"     "dis"    
 [9] "rad"     "tax"     "ptratio" "black"   "lstat"   "medv"   
head(Boston)
?Boston
lm.fit <- lm(medv ~ lstat, data = Boston)
summary(lm.fit)

Call:
lm(formula = medv ~ lstat, data = Boston)

Residuals:
    Min      1Q  Median      3Q     Max 
-15.168  -3.990  -1.318   2.034  24.500 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 34.55384    0.56263   61.41   <2e-16 ***
lstat       -0.95005    0.03873  -24.53   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 6.216 on 504 degrees of freedom
Multiple R-squared:  0.5441,    Adjusted R-squared:  0.5432 
F-statistic: 601.6 on 1 and 504 DF,  p-value: < 2.2e-16
names(lm.fit)
 [1] "coefficients"  "residuals"     "effects"       "rank"          "fitted.values"
 [6] "assign"        "qr"            "df.residual"   "xlevels"       "call"         
[11] "terms"         "model"        
coef(lm.fit)
(Intercept)       lstat 
 34.5538409  -0.9500494 
confint(lm.fit)
                2.5 %     97.5 %
(Intercept) 33.448457 35.6592247
lstat       -1.026148 -0.8739505

Compute the confidence intervals and prediction intervals (see the notes for section 3.2) of medv (median value of owner-occupied homes) at given values of lstat:

predict(lm.fit, data.frame(lstat=(c(5,10,15))), interval = 'confidence')
       fit      lwr      upr
1 29.80359 29.00741 30.59978
2 25.05335 24.47413 25.63256
3 20.30310 19.73159 20.87461
predict(lm.fit, data.frame(lstat=(c(5,10,15))), interval = 'prediction')
       fit       lwr      upr
1 29.80359 17.565675 42.04151
2 25.05335 12.827626 37.27907
3 20.30310  8.077742 32.52846

It’s clear that prediction intervals are much wider than confidence intervals.
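
A quick numeric check (a minimal sketch reusing the fitted model above): compare the widths of the two kinds of intervals at the same lstat values.

ci <- predict(lm.fit, data.frame(lstat = c(5, 10, 15)), interval = 'confidence')
pi <- predict(lm.fit, data.frame(lstat = c(5, 10, 15)), interval = 'prediction')
ci[, 'upr'] - ci[, 'lwr']  # confidence interval widths: roughly 1.1 to 1.6
pi[, 'upr'] - pi[, 'lwr']  # prediction interval widths: roughly 24.5 each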

Plot the data points and regression line:

plot(Boston$lstat, Boston$medv)
abline(lm.fit, lwd = 3, col = 'red')  # thicker red regression line

par(mfrow=c(2,2))
plot(lm.fit)

Residual plots of medv on lstat:

plot(predict(lm.fit), residuals(lm.fit))

There is a clear pattern in the plot above. See Figure 3.9 for reference, and 1. Non-linearity of the Data in section 3.3.3 for explanations. For example:

The linear regression model assumes that there is a straight-line relationship between the predictors and the response.

Residual plots are a useful graphical tool for identifying non-linearity.
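
To make the pattern easier to see, overlay a smooth curve on the residual plot (a minimal sketch; a roughly flat curve would indicate no systematic pattern):

plot(predict(lm.fit), residuals(lm.fit))
lines(lowess(predict(lm.fit), residuals(lm.fit)), col = 'red', lwd = 2)  # lowess smooth of the residuals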

PS: section 3.1 on p61

We will sometimes describe (3.1) by saying that we are regressing Y on X


Outliers are discussed in 4. Outliers in section 3.3.3 and Figure 3.12.

Observations whose studentized residuals are greater than 3 in absolute value are possible outliers.

plot(predict(lm.fit), rstudent(lm.fit))
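
To list the candidate outliers instead of reading them off the plot, filter on the rule of thumb above (a minimal sketch):

which(abs(rstudent(lm.fit)) > 3)  # indices of observations with |studentized residual| > 3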


Plot the leverage statistics (high leverage points are discussed in 5. High Leverage Points in section 3.3.3 and Figure 3.13):

plot(hatvalues(lm.fit))

which.max(hatvalues(lm.fit))
375 
375 

So the 375th observation has the largest leverage statistic.
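
For context, the average leverage over all observations is always (p + 1)/n (see section 3.3.3), so a minimal sketch of how extreme observation 375 is:

n <- nrow(Boston)  # 506 observations
p <- 1             # one predictor in the simple regression
max(hatvalues(lm.fit)) / ((p + 1) / n)  # multiples of the average leverage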

3.6.3 Multiple Linear Regression

Regress medv on lstat and age:

lm.fit <- lm(medv ~ lstat + age, data = Boston)
summary(lm.fit)

Call:
lm(formula = medv ~ lstat + age, data = Boston)

Residuals:
    Min      1Q  Median      3Q     Max 
-15.981  -3.978  -1.283   1.968  23.158 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 33.22276    0.73085  45.458  < 2e-16 ***
lstat       -1.03207    0.04819 -21.416  < 2e-16 ***
age          0.03454    0.01223   2.826  0.00491 ** 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 6.173 on 503 degrees of freedom
Multiple R-squared:  0.5513,    Adjusted R-squared:  0.5495 
F-statistic:   309 on 2 and 503 DF,  p-value: < 2.2e-16

The shorthand expression . includes all other variables as predictors:

all.fit <- lm(medv ~ ., data = Boston)
summary(all.fit)

Call:
lm(formula = medv ~ ., data = Boston)

Residuals:
    Min      1Q  Median      3Q     Max 
-15.595  -2.730  -0.518   1.777  26.199 

Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept)  3.646e+01  5.103e+00   7.144 3.28e-12 ***
crim        -1.080e-01  3.286e-02  -3.287 0.001087 ** 
zn           4.642e-02  1.373e-02   3.382 0.000778 ***
indus        2.056e-02  6.150e-02   0.334 0.738288    
chas         2.687e+00  8.616e-01   3.118 0.001925 ** 
nox         -1.777e+01  3.820e+00  -4.651 4.25e-06 ***
rm           3.810e+00  4.179e-01   9.116  < 2e-16 ***
age          6.922e-04  1.321e-02   0.052 0.958229    
dis         -1.476e+00  1.995e-01  -7.398 6.01e-13 ***
rad          3.060e-01  6.635e-02   4.613 5.07e-06 ***
tax         -1.233e-02  3.760e-03  -3.280 0.001112 ** 
ptratio     -9.527e-01  1.308e-01  -7.283 1.31e-12 ***
black        9.312e-03  2.686e-03   3.467 0.000573 ***
lstat       -5.248e-01  5.072e-02 -10.347  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 4.745 on 492 degrees of freedom
Multiple R-squared:  0.7406,    Adjusted R-squared:  0.7338 
F-statistic: 108.1 on 13 and 492 DF,  p-value: < 2.2e-16

Test collinearity of the predictors with VIF (explained in 6. Collinearity in section 3.3.3):

library(car)
vif(all.fit)
    crim       zn    indus     chas      nox       rm      age      dis      rad 
1.792192 2.298758 3.991596 1.073995 4.393720 1.933744 3.100826 3.955945 7.484496 
     tax  ptratio    black    lstat 
9.008554 1.799084 1.348521 2.941491 

So rad and tax, with the largest VIF values (7.48 and 9.01), are the most likely sources of collinearity. See bottom of p101:

As a rule of thumb, a VIF value that exceeds 5 or 10 indicates a problematic amount of collinearity.
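
What vif() computes can be reproduced by hand: \(VIF_j = 1/(1 - R^2_{X_j|X_{-j}})\), where \(R^2_{X_j|X_{-j}}\) comes from regressing the j-th predictor on all the other predictors. A minimal sketch for tax:

r2.tax <- summary(lm(tax ~ . - medv, data = Boston))$r.squared  # regress tax on the other predictors
1 / (1 - r2.tax)  # matches vif(all.fit)['tax'], about 9.01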


Perform a regression using all of the variables but age:

fit.no.age <- lm(medv ~ . -age, data = Boston)
summary(fit.no.age)

Call:
lm(formula = medv ~ . - age, data = Boston)

Residuals:
     Min       1Q   Median       3Q      Max 
-15.6054  -2.7313  -0.5188   1.7601  26.2243 

Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept)  36.436927   5.080119   7.172 2.72e-12 ***
crim         -0.108006   0.032832  -3.290 0.001075 ** 
zn            0.046334   0.013613   3.404 0.000719 ***
indus         0.020562   0.061433   0.335 0.737989    
chas          2.689026   0.859598   3.128 0.001863 ** 
nox         -17.713540   3.679308  -4.814 1.97e-06 ***
rm            3.814394   0.408480   9.338  < 2e-16 ***
dis          -1.478612   0.190611  -7.757 5.03e-14 ***
rad           0.305786   0.066089   4.627 4.75e-06 ***
tax          -0.012329   0.003755  -3.283 0.001099 ** 
ptratio      -0.952211   0.130294  -7.308 1.10e-12 ***
black         0.009321   0.002678   3.481 0.000544 ***
lstat        -0.523852   0.047625 -10.999  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 4.74 on 493 degrees of freedom
Multiple R-squared:  0.7406,    Adjusted R-squared:  0.7343 
F-statistic: 117.3 on 12 and 493 DF,  p-value: < 2.2e-16

Or use the update() function:

fit.wo.age <- update(all.fit, ~ .-age)
summary(fit.wo.age)

Call:
lm(formula = medv ~ crim + zn + indus + chas + nox + rm + dis + 
    rad + tax + ptratio + black + lstat, data = Boston)

Residuals:
     Min       1Q   Median       3Q      Max 
-15.6054  -2.7313  -0.5188   1.7601  26.2243 

Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept)  36.436927   5.080119   7.172 2.72e-12 ***
crim         -0.108006   0.032832  -3.290 0.001075 ** 
zn            0.046334   0.013613   3.404 0.000719 ***
indus         0.020562   0.061433   0.335 0.737989    
chas          2.689026   0.859598   3.128 0.001863 ** 
nox         -17.713540   3.679308  -4.814 1.97e-06 ***
rm            3.814394   0.408480   9.338  < 2e-16 ***
dis          -1.478612   0.190611  -7.757 5.03e-14 ***
rad           0.305786   0.066089   4.627 4.75e-06 ***
tax          -0.012329   0.003755  -3.283 0.001099 ** 
ptratio      -0.952211   0.130294  -7.308 1.10e-12 ***
black         0.009321   0.002678   3.481 0.000544 ***
lstat        -0.523852   0.047625 -10.999  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 4.74 on 493 degrees of freedom
Multiple R-squared:  0.7406,    Adjusted R-squared:  0.7343 
F-statistic: 117.3 on 12 and 493 DF,  p-value: < 2.2e-16

3.6.4 Interaction Terms

See Removing the Additive Assumption in section 3.3.2 for detailed discussions.

To add interaction terms, use lstat * age, which is shorthand for lstat + age + lstat:age:

fit.ls.age <- lm(medv ~ lstat * age, data = Boston)
summary(fit.ls.age)

Call:
lm(formula = medv ~ lstat * age, data = Boston)

Residuals:
    Min      1Q  Median      3Q     Max 
-15.806  -4.045  -1.333   2.085  27.552 

Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept) 36.0885359  1.4698355  24.553  < 2e-16 ***
lstat       -1.3921168  0.1674555  -8.313 8.78e-16 ***
age         -0.0007209  0.0198792  -0.036   0.9711    
lstat:age    0.0041560  0.0018518   2.244   0.0252 *  
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 6.149 on 502 degrees of freedom
Multiple R-squared:  0.5557,    Adjusted R-squared:  0.5531 
F-statistic: 209.3 on 3 and 502 DF,  p-value: < 2.2e-16

The p-values show that age by itself has no significant effect on medv (p = 0.97), while the interaction lstat:age is significant at the 5% level (p = 0.025).
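
As a quick check of the shorthand (a minimal sketch), writing the formula out in full gives an identical fit:

fit.explicit <- lm(medv ~ lstat + age + lstat:age, data = Boston)
all.equal(coef(fit.explicit), coef(fit.ls.age))  # TRUE: same coefficients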

3.6.5 Non-linear Transformations of the Predictors

Regress medv on lstat and \(lstat^2\):

fit.bi.ls <- lm(medv ~ lstat + I(lstat ^ 2), data = Boston)
summary(fit.bi.ls)

Call:
lm(formula = medv ~ lstat + I(lstat^2), data = Boston)

Residuals:
     Min       1Q   Median       3Q      Max 
-15.2834  -3.8313  -0.5295   2.3095  25.4148 

Coefficients:
             Estimate Std. Error t value Pr(>|t|)    
(Intercept) 42.862007   0.872084   49.15   <2e-16 ***
lstat       -2.332821   0.123803  -18.84   <2e-16 ***
I(lstat^2)   0.043547   0.003745   11.63   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 5.524 on 503 degrees of freedom
Multiple R-squared:  0.6407,    Adjusted R-squared:  0.6393 
F-statistic: 448.5 on 2 and 503 DF,  p-value: < 2.2e-16

The near-zero p-value for the quadratic term shows that \(lstat^2\) significantly improves the model.

Then use anova() to test whether the model containing \(lstat^2\) fits the data better than the model with lstat alone:

fit.ls <- lm(medv ~ lstat, data = Boston)
anova(fit.ls, fit.bi.ls)
Analysis of Variance Table

Model 1: medv ~ lstat
Model 2: medv ~ lstat + I(lstat^2)
  Res.Df   RSS Df Sum of Sq     F    Pr(>F)    
1    504 19472                                 
2    503 15347  1    4125.1 135.2 < 2.2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

So the model containing \(lstat^2\) is clearly superior to the linear one (F = 135.2, p < 2.2e-16).

PS: classical ANOVA is used when the predictors are qualitative and the response variable is quantitative; here anova() performs an F-test comparing two nested models. See chapter 8 of 统计学 (Statistics) by 贾俊平 (Jia Junping) for a detailed discussion.

Plot the diagnostics of the quadratic fit:

par(mfrow=c(2,2))
plot(fit.bi.ls)

Regression with higher-order polynomial terms:

fit.ls.5 <- lm(medv ~ poly(lstat, 5), data = Boston)
summary(fit.ls.5)

Call:
lm(formula = medv ~ poly(lstat, 5), data = Boston)

Residuals:
     Min       1Q   Median       3Q      Max 
-13.5433  -3.1039  -0.7052   2.0844  27.1153 

Coefficients:
                 Estimate Std. Error t value Pr(>|t|)    
(Intercept)       22.5328     0.2318  97.197  < 2e-16 ***
poly(lstat, 5)1 -152.4595     5.2148 -29.236  < 2e-16 ***
poly(lstat, 5)2   64.2272     5.2148  12.316  < 2e-16 ***
poly(lstat, 5)3  -27.0511     5.2148  -5.187 3.10e-07 ***
poly(lstat, 5)4   25.4517     5.2148   4.881 1.42e-06 ***
poly(lstat, 5)5  -19.2524     5.2148  -3.692 0.000247 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 5.215 on 500 degrees of freedom
Multiple R-squared:  0.6817,    Adjusted R-squared:  0.6785 
F-statistic: 214.2 on 5 and 500 DF,  p-value: < 2.2e-16
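
Note that poly() generates orthogonal polynomials by default, so the coefficients above are not the coefficients of the raw powers \(lstat, lstat^2, \ldots\); with raw = TRUE the raw powers are used instead, and the fitted values are identical either way (a minimal sketch):

fit.raw.5 <- lm(medv ~ poly(lstat, 5, raw = TRUE), data = Boston)
all.equal(fitted(fit.raw.5), fitted(fit.ls.5))  # TRUE: same fitted values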

3.6.6 Qualitative Predictors

data("Carseats")
names(Carseats)
 [1] "Sales"       "CompPrice"   "Income"      "Advertising" "Population" 
 [6] "Price"       "ShelveLoc"   "Age"         "Education"   "Urban"      
[11] "US"         
fit.carseats <- lm(Sales ~ . + Income:Advertising + Price:Age, data = Carseats)
summary(fit.carseats)

Call:
lm(formula = Sales ~ . + Income:Advertising + Price:Age, data = Carseats)

Residuals:
    Min      1Q  Median      3Q     Max 
-2.9208 -0.7503  0.0177  0.6754  3.3413 

Coefficients:
                     Estimate Std. Error t value Pr(>|t|)    
(Intercept)         6.5755654  1.0087470   6.519 2.22e-10 ***
CompPrice           0.0929371  0.0041183  22.567  < 2e-16 ***
Income              0.0108940  0.0026044   4.183 3.57e-05 ***
Advertising         0.0702462  0.0226091   3.107 0.002030 ** 
Population          0.0001592  0.0003679   0.433 0.665330    
Price              -0.1008064  0.0074399 -13.549  < 2e-16 ***
ShelveLocGood       4.8486762  0.1528378  31.724  < 2e-16 ***
ShelveLocMedium     1.9532620  0.1257682  15.531  < 2e-16 ***
Age                -0.0579466  0.0159506  -3.633 0.000318 ***
Education          -0.0208525  0.0196131  -1.063 0.288361    
UrbanYes            0.1401597  0.1124019   1.247 0.213171    
USYes              -0.1575571  0.1489234  -1.058 0.290729    
Income:Advertising  0.0007510  0.0002784   2.698 0.007290 ** 
Price:Age           0.0001068  0.0001333   0.801 0.423812    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.011 on 386 degrees of freedom
Multiple R-squared:  0.8761,    Adjusted R-squared:  0.8719 
F-statistic:   210 on 13 and 386 DF,  p-value: < 2.2e-16

Show the dummy variables created by R automatically:

contrasts(Carseats$ShelveLoc)
       Good Medium
Bad       0      0
Good      1      0
Medium    0      1

The row names give the names of the dummy variables, here ShelveLocGood and ShelveLocMedium, which appear in the regression output above. The last paragraph of p118 gives a good explanation of how to read the output table and the regression results when qualitative predictors are included.
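
If a different baseline is preferred, relevel() changes which level serves as the reference category that gets no dummy variable. A minimal sketch, assuming we want Good as the baseline:

ShelveLoc2 <- relevel(Carseats$ShelveLoc, ref = 'Good')
contrasts(ShelveLoc2)  # now Bad and Medium get the dummy variables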
