### Conceptual

  1. Describe the null hypotheses to which the p-values given in Table 3.4 correspond. Explain what conclusions you can draw based on these p-values. Your explanation should be phrased in terms of sales, TV, radio, and newspaper, rather than in terms of the coefficients of the linear model.
```r
# load and display the scanned image of Table 3.4 (requires the imager package)
library(imager)
im <- load.image("table34.jpg")
plot(im)
```

**Answer:** Each p-value in Table 3.4 tests the null hypothesis that the corresponding advertising medium has no effect on sales once the other two media are held fixed:

Null hypothesis (TV): "TV advertising has no effect on sales, holding radio and newspaper spending fixed." The p-value is essentially zero, so we reject it: TV advertising is associated with sales.

Null hypothesis (radio): "radio advertising has no effect on sales, holding TV and newspaper spending fixed." The p-value is essentially zero, so we reject it: radio advertising is associated with sales.

Null hypothesis (newspaper): "newspaper advertising has no effect on sales, holding TV and radio spending fixed." The p-value is large, so we cannot reject it: there is no evidence that newspaper advertising affects sales once TV and radio are accounted for.
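For reference, in case the image above does not render, Table 3.4 of ISLR reports approximately the following values (quoted from the book, not recomputed here):

|           | Coefficient | Std. error | t-statistic | p-value  |
|-----------|-------------|------------|-------------|----------|
| Intercept | 2.939       | 0.3119     | 9.42        | < 0.0001 |
| TV        | 0.046       | 0.0014     | 32.81       | < 0.0001 |
| radio     | 0.189       | 0.0086     | 21.89       | < 0.0001 |
| newspaper | -0.001      | 0.0059     | -0.18       | 0.8599   |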

  2. Carefully explain the differences between the KNN classifier and KNN regression methods.

**Answer:** The KNN classifier is used for problems with a qualitative response (classification): it predicts the class of a point by majority vote among its K nearest neighbors. KNN regression is used for problems with a quantitative response (regression): it predicts the value at a point as the average of the responses of its K nearest neighbors.
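A minimal sketch of the two methods on toy data, assuming the class and FNN packages are installed (class::knn() and FNN::knn.reg() are those packages' functions, not anything from the text above):

```r
set.seed(1)
train_x <- matrix(rnorm(100), ncol = 2)    # 50 training points, 2 features
test_x  <- matrix(rnorm(10), ncol = 2)     # 5 test points
cl <- factor(rep(c("A", "B"), each = 25))  # qualitative response for classification
y  <- rnorm(50)                            # quantitative response for regression

# classification: majority vote among the k nearest neighbours -> a class label
class::knn(train_x, test_x, cl, k = 5)

# regression: average response of the k nearest neighbours -> a number
FNN::knn.reg(train_x, test_x, y, k = 5)$pred
```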

  3. Suppose we have a data set with five predictors, X1 = GPA, X2 = IQ, X3 = Gender (1 for Female and 0 for Male), X4 = Interaction between GPA and IQ, and X5 = Interaction between GPA and Gender. The response is starting salary after graduation (in thousands of dollars). Suppose we use least squares to fit the model, and get $\hat{\beta}_0 = 50$, $\hat{\beta}_1 = 20$, $\hat{\beta}_2 = 0.07$, $\hat{\beta}_3 = 35$, $\hat{\beta}_4 = 0.01$, $\hat{\beta}_5 = -10$.

    y = 50 + 20(GPA) + 0.07(IQ) + 35(Gender) + 0.01(GPA)(IQ) -10(GPA)(Gender)

  a. Which answer is correct, and why?
  1. For a fixed value of IQ and GPA, males earn more on average than females.
  2. For a fixed value of IQ and GPA, females earn more on average than males.
  3. For a fixed value of IQ and GPA, males earn more on average than females provided that the GPA is high enough.
  4. For a fixed value of IQ and GPA, females earn more on average than males provided that the GPA is high enough.

**Answer:** With IQ and GPA fixed, the female-minus-male salary difference is $35 - 10\,\mathrm{GPA}$ (in thousands), which is negative once GPA exceeds 3.5. So males earn more on average only when the GPA is high enough, and answer 3 is correct.

  b. Predict the salary of a female with IQ of 110 and a GPA of 4.0.

**Answer:** 50 + 20(4) + 0.07(110) + 35(1) + 0.01(4)(110) - 10(4)(1) = 137.1, i.e. a predicted starting salary of about $137,100.
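A quick arithmetic check in R, plugging the fitted coefficients into the model:

```r
# predicted starting salary (in thousands) for GPA = 4.0, IQ = 110, Gender = 1
50 + 20*4.0 + 0.07*110 + 35*1 + 0.01*4.0*110 - 10*4.0*1
#> [1] 137.1
```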

  4. I collect a set of data (n = 100 observations) containing a single predictor and a quantitative response. I then fit a linear regression model to the data, as well as a separate cubic regression, i.e. $Y = \beta_0 + \beta_1 X + \beta_2 X^2 + \beta_3 X^3 + \epsilon$.

  a. Suppose that the true relationship between X and Y is linear, i.e. $Y = \beta_0 + \beta_1 X + \epsilon$. Consider the training residual sum of squares (RSS) for the linear regression, and also the training RSS for the cubic regression. Would we expect one to be lower than the other, would we expect them to be the same, or is there not enough information to tell? Justify your answer.

**Answer:** We would expect the cubic regression's training RSS to be lower (or at worst equal): the cubic model contains the linear model as a special case, so least squares can always fit the training data at least as closely using the extra terms, even though the true relationship is linear.
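A small simulation sketch of this point, with an illustrative seed and coefficients (none of these numbers come from the exercise): with a linear truth, the cubic fit's training RSS is never larger than the linear fit's.

```r
set.seed(42)                   # illustrative seed
x <- rnorm(100)
y <- 1 + 2 * x + rnorm(100)    # true relationship is linear

fit_lin <- lm(y ~ x)
fit_cub <- lm(y ~ poly(x, 3))  # cubic regression

sum(resid(fit_lin)^2)          # training RSS, linear
sum(resid(fit_cub)^2)          # training RSS, cubic (always <= linear)
```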

  b. Answer (a) using test rather than training RSS.

**Answer:** With test RSS the ordering should reverse: since the true relationship is linear, the cubic model's extra flexibility mostly fits noise in the training set (overfitting), so we would expect the linear regression to have the lower test RSS.

  c. Suppose that the true relationship between X and Y is not linear, but we don't know how far it is from linear. Consider the training RSS for the linear regression, and also the training RSS for the cubic regression. Would we expect one to be lower than the other, would we expect them to be the same, or is there not enough information to tell? Justify your answer.

**Answer:** We would expect the cubic regression's training RSS to be lower than the linear regression's: the cubic fit is more flexible, and since the relationship is not linear it can adapt more closely to the training data.

  6. Consider the fitted values that result from performing linear regression without an intercept. In this setting, the ith fitted value takes the form
$$\hat{y}_i = x_i \hat{\beta}, \qquad \text{where} \qquad \hat{\beta} = \frac{\sum_{i=1}^{n} x_i y_i}{\sum_{i'=1}^{n} x_{i'}^{2}}.$$
Show that we can write
$$\hat{y}_i = \sum_{i'=1}^{n} a_{i'} y_{i'}.$$

What is $a_{i'}$? Note: We interpret this result by saying that the fitted values from linear regression are linear combinations of the response values.

### Applied

  8. This question involves the use of simple linear regression on the Auto data set.
  a. Use the lm() function to perform a simple linear regression with mpg as the response and horsepower as the predictor. Use the summary() function to print the results. Comment on the output.

```r
library(ISLR)
# simple linear regression of mpg on horsepower
Auto_reg <- lm(mpg ~ horsepower, Auto)
summary(Auto_reg)
```

```
Call:
lm(formula = mpg ~ horsepower, data = Auto)

Residuals:
     Min       1Q   Median       3Q      Max 
-13.5710  -3.2592  -0.3435   2.7630  16.9240 

Coefficients:
             Estimate Std. Error t value Pr(>|t|)    
(Intercept) 39.935861   0.717499   55.66   <2e-16 ***
horsepower  -0.157845   0.006446  -24.49   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 4.906 on 390 degrees of freedom
Multiple R-squared:  0.6059,    Adjusted R-squared:  0.6049 
F-statistic: 599.7 on 1 and 390 DF,  p-value: < 2.2e-16
```

a.1. Is there a relationship between the predictor and the response? **Answer:** The p-value is tiny, so we reject the null hypothesis: horsepower is related to mpg.

a.2 How strong is the relationship between the predictor and the response? **Answer:** This is measured by $R^2$, the fraction of the variance in mpg that horsepower explains; here it is 0.6059, a moderately strong relationship.

a.3 Is the relationship between the predictor and the response positive or negative? **Answer:** The coefficient of horsepower is negative, so the relationship is negative: more horsepower is associated with lower mpg.

a.4 What is the predicted mpg associated with a horsepower of 98? What are the associated 95% confidence and prediction intervals?

```r
predict(Auto_reg, data.frame(horsepower = 98), interval = "confidence")
```

```
       fit      lwr      upr
1 24.46708 23.97308 24.96108
```
```r
predict(Auto_reg, data.frame(horsepower = 98), interval = "prediction")
```

```
       fit     lwr      upr
1 24.46708 14.8094 34.12476
```

**Answer:** The predicted mpg at horsepower = 98 is 24.47; the 95% confidence interval is (23.97, 24.96) and the 95% prediction interval is (14.81, 34.12).

  b. Plot the response and the predictor. Use the abline() function to display the least squares regression line.

```r
plot(Auto$horsepower, Auto$mpg, col = "blue", pch = 20)
abline(Auto_reg, col = "red", lwd = 3)
```

  c. Use the plot() function to produce diagnostic plots of the least squares regression fit. Comment on any problems you see with the fit.
```r
par(mfrow = c(2, 2))
plot(Auto_reg)
```

**Answer:** The residuals-vs-fitted plot shows a clear U shape, indicating that the relationship between mpg and horsepower is non-linear; the spread of the residuals also grows somewhat with the fitted values.

  9. This question involves the use of multiple linear regression on the Auto data set.
  a. Produce a scatterplot matrix which includes all of the variables in the data set.

```r
plot(Auto)
```

  b. Compute the matrix of correlations between the variables using the function cor(). You will need to exclude the name variable, which is qualitative.

```r
cor(Auto[, -9])   # column 9 is name
```

```
                    mpg  cylinders displacement horsepower     weight acceleration       year     origin
mpg           1.0000000 -0.7776175   -0.8051269 -0.7784268 -0.8322442    0.4233285  0.5805410  0.5652088
cylinders    -0.7776175  1.0000000    0.9508233  0.8429834  0.8975273   -0.5046834 -0.3456474 -0.5689316
displacement -0.8051269  0.9508233    1.0000000  0.8972570  0.9329944   -0.5438005 -0.3698552 -0.6145351
horsepower   -0.7784268  0.8429834    0.8972570  1.0000000  0.8645377   -0.6891955 -0.4163615 -0.4551715
weight       -0.8322442  0.8975273    0.9329944  0.8645377  1.0000000   -0.4168392 -0.3091199 -0.5850054
acceleration  0.4233285 -0.5046834   -0.5438005 -0.6891955 -0.4168392    1.0000000  0.2903161  0.2127458
year          0.5805410 -0.3456474   -0.3698552 -0.4163615 -0.3091199    0.2903161  1.0000000  0.1815277
origin        0.5652088 -0.5689316   -0.6145351 -0.4551715 -0.5850054    0.2127458  0.1815277  1.0000000
```
  c. Use the lm() function to perform a multiple linear regression with mpg as the response and all other variables except name as the predictors. Use the summary() function to print the results. Comment on the output. For instance:

```r
AutoN <- Auto[, -9]             # drop the qualitative name column
Auto_mreg <- lm(mpg ~ ., AutoN)
summary(Auto_mreg)
```

```
Call:
lm(formula = mpg ~ ., data = AutoN)

Residuals:
    Min      1Q  Median      3Q     Max 
-9.5903 -2.1565 -0.1169  1.8690 13.0604 

Coefficients:
               Estimate Std. Error t value Pr(>|t|)    
(Intercept)  -17.218435   4.644294  -3.707  0.00024 ***
cylinders     -0.493376   0.323282  -1.526  0.12780    
displacement   0.019896   0.007515   2.647  0.00844 ** 
horsepower    -0.016951   0.013787  -1.230  0.21963    
weight        -0.006474   0.000652  -9.929  < 2e-16 ***
acceleration   0.080576   0.098845   0.815  0.41548    
year           0.750773   0.050973  14.729  < 2e-16 ***
origin         1.426141   0.278136   5.127 4.67e-07 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 3.328 on 384 degrees of freedom
Multiple R-squared:  0.8215,    Adjusted R-squared:  0.8182 
F-statistic: 252.4 on 7 and 384 DF,  p-value: < 2.2e-16
```

c.1. Is there a relationship between the predictors and the response? **Answer:** Yes: the F-statistic is 252.4 with p < 2.2e-16, so at least one predictor is related to mpg, and the model explains a large share of the variance ($R^2$ = 0.8215).

c.2. Which predictors appear to have a statistically significant relationship to the response? **Answer:** displacement, weight, year, and origin.

c.3. What does the coefficient for the year variable suggest? **Answer:** Holding the other predictors fixed, each additional model year is associated with an increase of about 0.75 mpg; cars became more fuel-efficient over time.

  d. Use the plot() function to produce diagnostic plots of the linear regression fit. Comment on any problems you see with the fit. Do the residual plots suggest any unusually large outliers? Does the leverage plot identify any observations with unusually high leverage?

```r
par(mfrow = c(2, 2))
plot(Auto_mreg)
```

**Answer:** The residuals-vs-fitted plot shows some curvature (mild non-linearity) and a few observations with large positive residuals; the leverage plot flags a small number of points with leverage well above the rest.

  e. Use the * and : symbols to fit linear regression models with interaction effects. Do any interactions appear to be statistically significant?

e.1 Dropping the variables that were not significant in the previous model

```r
A1 <- lm(mpg ~ . - cylinders - horsepower - acceleration, AutoN)
summary(A1)
```

```
Call:
lm(formula = mpg ~ . - cylinders - horsepower - acceleration, 
    data = AutoN)

Residuals:
    Min      1Q  Median      3Q     Max 
-9.8102 -2.1129 -0.0388  1.7725 13.2085 

Coefficients:
               Estimate Std. Error t value Pr(>|t|)    
(Intercept)  -1.861e+01  4.028e+00  -4.620 5.25e-06 ***
displacement  5.588e-03  4.768e-03   1.172    0.242    
weight       -6.575e-03  5.571e-04 -11.802  < 2e-16 ***
year          7.714e-01  4.981e-02  15.486  < 2e-16 ***
origin        1.226e+00  2.670e-01   4.593 5.92e-06 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 3.346 on 387 degrees of freedom
Multiple R-squared:  0.8181,    Adjusted R-squared:  0.8162 
F-statistic: 435.1 on 4 and 387 DF,  p-value: < 2.2e-16
```

**Answer:** $R^2$ = 0.8181, slightly lower than the previous model.

e.2 Also dropping displacement, which was not significant in the model above

```r
A2 <- lm(mpg ~ . - cylinders - horsepower - acceleration - displacement, AutoN)
summary(A2)
```

```
Call:
lm(formula = mpg ~ . - cylinders - horsepower - acceleration - 
    displacement, data = AutoN)

Residuals:
    Min      1Q  Median      3Q     Max 
-9.9440 -2.0948 -0.0389  1.7255 13.2722 

Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept) -1.805e+01  4.001e+00  -4.510 8.60e-06 ***
weight      -5.994e-03  2.541e-04 -23.588  < 2e-16 ***
year         7.571e-01  4.832e-02  15.668  < 2e-16 ***
origin       1.150e+00  2.591e-01   4.439 1.18e-05 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 3.348 on 388 degrees of freedom
Multiple R-squared:  0.8175,    Adjusted R-squared:  0.816 
F-statistic: 579.2 on 3 and 388 DF,  p-value: < 2.2e-16
```

e.3 Adding the cylinders:horsepower and weight:acceleration interactions

```r
A3 <- lm(mpg ~ cylinders + horsepower + cylinders:horsepower +
             weight + acceleration + weight:acceleration +
             year + origin, AutoN)
summary(A3)
```

```
Call:
lm(formula = mpg ~ cylinders + horsepower + (cylinders * horsepower) + 
    weight + acceleration + (weight * acceleration) + year + 
    origin, data = AutoN)

Residuals:
    Min      1Q  Median      3Q     Max 
-8.9936 -1.6294 -0.0371  1.3014 11.7317 

Coefficients:
                       Estimate Std. Error t value Pr(>|t|)    
(Intercept)           3.807e+00  8.135e+00   0.468  0.64011    
cylinders            -3.962e+00  5.416e-01  -7.315 1.52e-12 ***
horsepower           -2.934e-01  3.508e-02  -8.363 1.15e-15 ***
weight               -2.147e-03  1.607e-03  -1.336  0.18240    
acceleration          1.647e-01  2.912e-01   0.566  0.57193    
year                  7.479e-01  4.516e-02  16.559  < 2e-16 ***
origin                9.066e-01  2.333e-01   3.885  0.00012 ***
cylinders:horsepower  3.635e-02  4.718e-03   7.705 1.14e-13 ***
weight:acceleration  -1.157e-04  9.664e-05  -1.197  0.23217    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 2.923 on 383 degrees of freedom
Multiple R-squared:  0.8626,    Adjusted R-squared:  0.8597 
F-statistic: 300.5 on 8 and 383 DF,  p-value: < 2.2e-16
```

e.4 Keeping the cylinders:horsepower interaction but dropping weight and acceleration

```r
A4 <- lm(mpg ~ cylinders + horsepower + cylinders:horsepower + year + origin, AutoN)
summary(A4)
```

```
Call:
lm(formula = mpg ~ cylinders + horsepower + (cylinders * horsepower) + 
    year + origin, data = AutoN)

Residuals:
    Min      1Q  Median      3Q     Max 
-9.2635 -1.9176 -0.4087  1.7760 12.3113 

Coefficients:
                      Estimate Std. Error t value Pr(>|t|)    
(Intercept)          12.313808   4.691320   2.625  0.00901 ** 
cylinders            -5.961005   0.429236 -13.887  < 2e-16 ***
horsepower           -0.380413   0.027737 -13.715  < 2e-16 ***
year                  0.691017   0.049132  14.065  < 2e-16 ***
origin                1.412447   0.252303   5.598 4.12e-08 ***
cylinders:horsepower  0.045883   0.003818  12.016  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 3.251 on 386 degrees of freedom
Multiple R-squared:  0.8288,    Adjusted R-squared:  0.8266 
F-statistic: 373.7 on 5 and 386 DF,  p-value: < 2.2e-16
```

e.5 As in e.3 but without origin

```r
A5 <- lm(mpg ~ cylinders + horsepower + cylinders:horsepower +
             weight + acceleration + weight:acceleration + year, AutoN)
summary(A5)
```

```
Call:
lm(formula = mpg ~ cylinders + horsepower + (cylinders * horsepower) + 
    weight + acceleration + (weight * acceleration) + year, data = AutoN)

Residuals:
    Min      1Q  Median      3Q     Max 
-8.2693 -1.7845 -0.0232  1.4741 12.4551 

Coefficients:
                       Estimate Std. Error t value Pr(>|t|)    
(Intercept)           5.847e+00  8.266e+00   0.707    0.480    
cylinders            -4.194e+00  5.481e-01  -7.651 1.62e-13 ***
horsepower           -2.943e-01  3.571e-02  -8.239 2.77e-15 ***
weight               -2.285e-03  1.636e-03  -1.397    0.163    
acceleration          2.268e-01  2.961e-01   0.766    0.444    
year                  7.548e-01  4.595e-02  16.426  < 2e-16 ***
cylinders:horsepower  3.723e-02  4.799e-03   7.758 7.88e-14 ***
weight:acceleration  -1.350e-04  9.827e-05  -1.374    0.170    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 2.977 on 384 degrees of freedom
Multiple R-squared:  0.8572,    Adjusted R-squared:  0.8546 
F-statistic: 329.2 on 7 and 384 DF,  p-value: < 2.2e-16
```

**Answer:** Of the models fitted, A3 (part e.3) fits best (highest $R^2$, 0.8626). The cylinders:horsepower interaction is clearly significant in every model that includes it (p ≈ 1e-13 in A3), while weight:acceleration is not (p = 0.232).
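Since A5 is nested in A3 (A3 adds origin to the same specification), the comparison can be made formal with an F-test; a minimal sketch:

```r
# F-test for whether origin adds explanatory power beyond the A5 terms
anova(A5, A3)
```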

  f. Try a few different transformations of the variables, such as log(X), sqrt(X), X^2. Comment on your findings.

```r
par(mfrow = c(2, 2))
plot(log(Auto$horsepower), Auto$mpg)
plot(sqrt(Auto$horsepower), Auto$mpg)
plot((Auto$horsepower)^2, Auto$mpg)
```

**Answer:** Of the three, log(horsepower) gives the most linear-looking relationship with mpg; squaring horsepower makes the curvature worse.
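To go beyond eyeballing the plots, one could fit each transformed model and compare $R^2$; a minimal sketch (not run in the original document):

```r
# compare fits under each transformation of horsepower
summary(lm(mpg ~ log(horsepower), Auto))$r.squared
summary(lm(mpg ~ sqrt(horsepower), Auto))$r.squared
summary(lm(mpg ~ I(horsepower^2), Auto))$r.squared
```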

  10. This question should be answered using the Carseats data set.

```r
Cars <- Carseats
names(Cars)
```

```
 [1] "Sales"       "CompPrice"   "Income"      "Advertising" "Population"  "Price"       "ShelveLoc"  
 [8] "Age"         "Education"   "Urban"       "US"         
  a. Fit a multiple regression model to predict Sales using Price, Urban, and US.

```r
cars_mreg <- lm(Sales ~ Price + Urban + US, Cars)
summary(cars_mreg)
```

```
Call:
lm(formula = Sales ~ Price + Urban + US, data = Cars)

Residuals:
    Min      1Q  Median      3Q     Max 
-6.9206 -1.6220 -0.0564  1.5786  7.0581 

Coefficients:
             Estimate Std. Error t value Pr(>|t|)    
(Intercept) 13.043469   0.651012  20.036  < 2e-16 ***
Price       -0.054459   0.005242 -10.389  < 2e-16 ***
UrbanYes    -0.021916   0.271650  -0.081    0.936    
USYes        1.200573   0.259042   4.635 4.86e-06 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 2.472 on 396 degrees of freedom
Multiple R-squared:  0.2393,    Adjusted R-squared:  0.2335 
F-statistic: 41.52 on 3 and 396 DF,  p-value: < 2.2e-16
```

  b. Provide an interpretation of each coefficient in the model. Be careful—some of the variables in the model are qualitative! **Answer:** Price: holding Urban and US fixed, each $1 increase in price is associated with a drop of about 0.054 thousand units (roughly 54 car seats) in sales. UrbanYes: urban stores sell about 0.022 thousand units fewer than non-urban stores, an effect that is not significant. USYes: US stores sell about 1.2 thousand units more than non-US stores, holding the other variables fixed.

  c. Write out the model in equation form, being careful to handle the qualitative variables properly. **Answer:** Sales = 13.04 − 0.054·Price − 0.022·1[Urban = Yes] + 1.20·1[US = Yes], where the indicator 1[·] equals 1 when the condition holds and 0 otherwise.

  d. For which of the predictors can you reject the null hypothesis $H_0: \beta_j = 0$? **Answer:** For Price ($\beta_1$) and US ($\beta_3$) the p-values are tiny, so we reject the null hypothesis: both are significant. For Urban ($\beta_2$) the p-value is 0.936, so we cannot reject the null hypothesis: Urban is not significant.

  e. On the basis of your response to the previous question, fit a smaller model that only uses the predictors for which there is evidence of association with the outcome.

```r
c1 <- lm(Sales ~ Price + US, Cars)
summary(c1)
```

```
Call:
lm(formula = Sales ~ Price + US, data = Cars)

Residuals:
    Min      1Q  Median      3Q     Max 
-6.9269 -1.6286 -0.0574  1.5766  7.0515 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 13.03079    0.63098  20.652  < 2e-16 ***
Price       -0.05448    0.00523 -10.416  < 2e-16 ***
USYes        1.19964    0.25846   4.641 4.71e-06 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 2.469 on 397 degrees of freedom
Multiple R-squared:  0.2393,    Adjusted R-squared:  0.2354 
F-statistic: 62.43 on 2 and 397 DF,  p-value: < 2.2e-16
```
  f. How well do the models in (a) and (e) fit the data? **Answer:** Models (a) and (e) have similar residuals and nearly identical $R^2$ (about 0.24), so they fit the data in much the same way, and rather poorly: roughly three quarters of the variance in Sales is left unexplained.

  g. Using the model from (e), obtain 95% confidence intervals for the coefficient(s).

```r
confint(c1, level = 0.95)
```

```
                  2.5 %      97.5 %
(Intercept) 11.79032020 14.27126531
Price       -0.06475984 -0.04419543
USYes        0.69151957  1.70776632
```
  11. In this problem we will investigate the t-statistic for the null hypothesis $H_0: \beta = 0$ in simple linear regression without an intercept. To begin, we generate a predictor x and a response y as follows.

```r
set.seed(1)
x <- rnorm(100)
y <- 2 * x + rnorm(100)
```
  a. Perform a simple linear regression of y onto x, without an intercept. Report the coefficient estimate $\hat{\beta}$, the standard error of this coefficient estimate, and the t-statistic and p-value associated with the null hypothesis $H_0: \beta = 0$. Comment on these results. (You can perform regression without an intercept using the command lm(y~x+0).)
```r
reg1 <- lm(y ~ x + 0)
summary(reg1)
```

```
Call:
lm(formula = y ~ x + 0)

Residuals:
    Min      1Q  Median      3Q     Max 
-1.9154 -0.6472 -0.1771  0.5056  2.3109 

Coefficients:
  Estimate Std. Error t value Pr(>|t|)    
x   1.9939     0.1065   18.73   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.9586 on 99 degrees of freedom
Multiple R-squared:  0.7798,    Adjusted R-squared:  0.7776 
F-statistic: 350.7 on 1 and 99 DF,  p-value: < 2.2e-16
```

**Answer:** $\hat{\beta}$ = 1.9939, close to the true value of 2, with standard error 0.1065; the t-statistic is 18.73 with p < 2e-16, so we reject $H_0: \beta = 0$.
  b. Now perform a simple linear regression of x onto y without an intercept, and report the coefficient estimate, its standard error, and the corresponding t-statistic and p-value associated with the null hypothesis $H_0: \beta = 0$. Comment on these results.
```r
reg2 <- lm(x ~ y + 0)
summary(reg2)
```

```
Call:
lm(formula = x ~ y + 0)

Residuals:
    Min      1Q  Median      3Q     Max 
-0.8699 -0.2368  0.1030  0.2858  0.8938 

Coefficients:
  Estimate Std. Error t value Pr(>|t|)    
y  0.39111    0.02089   18.73   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.4246 on 99 degrees of freedom
Multiple R-squared:  0.7798,    Adjusted R-squared:  0.7776 
F-statistic: 350.7 on 1 and 99 DF,  p-value: < 2.2e-16
```

**Answer:** $\hat{\beta}$ = 0.3911 with standard error 0.0209; the t-statistic (18.73), p-value, $R^2$, and F-statistic are exactly the same as in (a).
  c. What is the relationship between the results obtained in (a) and (b)? **Answer:** They describe the same association with the axes swapped, which is why the t-statistic, $R^2$, and F-statistic are identical. The coefficient estimates themselves differ, because $\hat{\beta} = \sum x_i y_i / \sum x_i^2$ has a different denominator in each direction.
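In fact, for regression through the origin the two slope estimates multiply to the shared $R^2$, which the outputs above confirm numerically:

$$\hat{\beta}_{y\sim x}\,\hat{\beta}_{x\sim y} = \frac{\sum x_i y_i}{\sum x_i^2}\cdot\frac{\sum x_i y_i}{\sum y_i^2} = \frac{\bigl(\sum x_i y_i\bigr)^2}{\sum x_i^2 \,\sum y_i^2} = R^2, \qquad 1.9939 \times 0.3911 \approx 0.7798.$$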

  d. In R, show that when regression is performed with an intercept, the t-statistic for $H_0: \beta_1 = 0$ is the same for the regression of y onto x as it is for the regression of x onto y.

```r
# note: part (d) asks for fits *with* an intercept, i.e. lm(y ~ x) and lm(x ~ y);
# the slope t-statistics are equal there too, just as in these no-intercept fits
reg1 <- lm(y ~ x + 0)
summary(reg1)$coefficients
```

```
  Estimate Std. Error  t value     Pr(>|t|)
x 1.993876  0.1064767 18.72593 2.642197e-34
```
```r
reg2 <- lm(x ~ y + 0)
summary(reg2)$coefficients
```

```
   Estimate Std. Error  t value     Pr(>|t|)
y 0.3911145 0.02088625 18.72593 2.642197e-34
```
  12. This problem involves simple linear regression without an intercept.
  a. Recall that the coefficient estimate $\hat{\beta}$ for the linear regression of Y onto X without an intercept is given by (3.38). Under what circumstance is the coefficient estimate for the regression of X onto Y the same as the coefficient estimate for the regression of Y onto X? **Answer:** When the sum of the squared x values equals the sum of the squared y values, i.e. $\sum x_i^2 = \sum y_i^2$.
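Written out from (3.38) in each direction, the two estimates share a numerator:

$$\hat{\beta}_{Y\sim X} = \frac{\sum_{i} x_i y_i}{\sum_{i} x_i^{2}}, \qquad \hat{\beta}_{X\sim Y} = \frac{\sum_{i} x_i y_i}{\sum_{i} y_i^{2}},$$

so they coincide exactly when $\sum_i x_i^2 = \sum_i y_i^2$.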

  b. Generate an example in R with n = 100 observations in which the coefficient estimate for the regression of X onto Y is different from the coefficient estimate for the regression of Y onto X.

```r
x <- 1:100
y <- 101:200
rx <- lm(y ~ x + 0)
summary(rx)
```

```
Call:
lm(formula = y ~ x + 0)

Residuals:
   Min     1Q Median     3Q    Max 
-49.25 -12.31  24.63  61.57  98.51 

Coefficients:
  Estimate Std. Error t value Pr(>|t|)    
x  2.49254    0.08574   29.07   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 49.88 on 99 degrees of freedom
Multiple R-squared:  0.8951,    Adjusted R-squared:  0.8941 
F-statistic:   845 on 1 and 99 DF,  p-value: < 2.2e-16
```
```r
ry <- lm(x ~ y + 0)
summary(ry)
```

```
Call:
lm(formula = x ~ y + 0)

Residuals:
    Min      1Q  Median      3Q     Max 
-35.272 -19.410  -3.548  12.313  28.175 

Coefficients:
  Estimate Std. Error t value Pr(>|t|)    
y  0.35912    0.01235   29.07   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 18.93 on 99 degrees of freedom
Multiple R-squared:  0.8951,    Adjusted R-squared:  0.8941 
F-statistic:   845 on 1 and 99 DF,  p-value: < 2.2e-16
```

Here $\sum x_i^2 \neq \sum y_i^2$, and the two estimates indeed differ (2.4925 vs 0.3591).
  c. Generate an example in R with n = 100 observations in which the coefficient estimate for the regression of X onto Y is the same as the coefficient estimate for the regression of Y onto X.

```r
x <- 1:100
y <- 100:1    # same values in reverse order, so sum(x^2) == sum(y^2)
rx <- lm(y ~ x + 0)
summary(rx)
```

```
Call:
lm(formula = y ~ x + 0)

Residuals:
   Min     1Q Median     3Q    Max 
-49.75 -12.44  24.87  62.18  99.49 

Coefficients:
  Estimate Std. Error t value Pr(>|t|)    
x   0.5075     0.0866    5.86 6.09e-08 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 50.37 on 99 degrees of freedom
Multiple R-squared:  0.2575,    Adjusted R-squared:   0.25 
F-statistic: 34.34 on 1 and 99 DF,  p-value: 6.094e-08
```
```r
ry <- lm(x ~ y + 0)
summary(ry)
```

```
Call:
lm(formula = x ~ y + 0)

Residuals:
   Min     1Q Median     3Q    Max 
-49.75 -12.44  24.87  62.18  99.49 

Coefficients:
  Estimate Std. Error t value Pr(>|t|)    
y   0.5075     0.0866    5.86 6.09e-08 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 50.37 on 99 degrees of freedom
Multiple R-squared:  0.2575,    Adjusted R-squared:   0.25 
F-statistic: 34.34 on 1 and 99 DF,  p-value: 6.094e-08
```

Both regressions now give the same estimate, 0.5075.
  13. In this exercise you will create some simulated data and will fit simple linear regression models to it. Make sure to use set.seed(1) prior to starting part (a) to ensure consistent results.
  a. Using the rnorm() function, create a vector, x, containing 100 observations drawn from a N(0, 1) distribution. This represents a feature, X.

```r
set.seed(1)
x <- rnorm(n = 100)
```
  b. Using the rnorm() function, create a vector, eps, containing 100 observations drawn from a N(0, 0.25) distribution i.e. a normal distribution with mean zero and variance 0.25.

```r
set.seed(2)
eps <- rnorm(n = 100, sd = sqrt(0.25))   # rnorm() takes the standard deviation
```
  c. Using x and eps, generate a vector y according to the model $Y = -1 + 0.5X + \epsilon$. What is the length of the vector y? What are the values of $\beta_0$ and $\beta_1$ in this linear model?

```r
y <- -1 + 0.5 * x + eps
length(y)
```

```
[1] 100
```

**Answer:** y has length 100, and the values of $\beta_0$ and $\beta_1$ are -1 and 0.5 respectively.

  d. Create a scatterplot displaying the relationship between x and y. Comment on what you observe.

```r
plot(x, y)
```

**Answer:** The points show a positive, roughly linear relationship between x and y, with scatter around the line due to eps.

  e. Fit a least squares linear model to predict y using x. Comment on the model obtained. How do $\hat{\beta}_0$ and $\hat{\beta}_1$ compare to $\beta_0$ and $\beta_1$?

```r
fit <- lm(y ~ x)
summary(fit)
```

```
Call:
lm(formula = y ~ x)

Residuals:
     Min       1Q   Median       3Q      Max 
-1.22689 -0.40393 -0.04575  0.41574  1.14118 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -1.00454    0.05804 -17.308  < 2e-16 ***
x            0.40072    0.06446   6.216 1.25e-08 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.5761 on 98 degrees of freedom
Multiple R-squared:  0.2828,    Adjusted R-squared:  0.2755 
F-statistic: 38.64 on 1 and 98 DF,  p-value: 1.247e-08
```

**Answer:** $\hat{\beta}_0 = -1.0045$ and $\hat{\beta}_1 = 0.4007$, reasonably close to the true $\beta_0 = -1$ and $\beta_1 = 0.5$, and both are highly significant.
  f. Display the least squares line on the scatterplot obtained in (d). Draw the population regression line on the plot, in a different color. Use the legend() command to create an appropriate legend.

```r
plot(x, y)
abline(fit, col = "red", lwd = 3)         # least squares line
abline(-1, 0.5, col = "blue", lwd = 3)    # population line: y = -1 + 0.5x
legend("topleft", c("least squares", "population"), col = c("red", "blue"), lwd = 3)
```

  g. Now fit a polynomial regression model that predicts y using x and x^2. Is there evidence that the quadratic term improves the model fit? Explain your answer.

```r
fit1 <- lm(y ~ x + I(x^2))
summary(fit1)
```

```
Call:
lm(formula = y ~ x + I(x^2))

Residuals:
     Min       1Q   Median       3Q      Max 
-1.30604 -0.38957 -0.06695  0.40921  1.13539 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -0.92420    0.06967 -13.265  < 2e-16 ***
x            0.41623    0.06394   6.509 3.33e-09 ***
I(x^2)      -0.10121    0.05020  -2.016   0.0465 *  
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.5673 on 97 degrees of freedom
Multiple R-squared:  0.3116,    Adjusted R-squared:  0.2974 
F-statistic: 21.96 on 2 and 97 DF,  p-value: 1.362e-08
```

**Answer:** The quadratic term is only borderline significant (p = 0.0465) and $R^2$ improves just slightly (0.283 to 0.312), so the evidence that it improves the fit is weak, which is consistent with the true model being linear.
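An equivalent way to judge the quadratic term is an F-test between the two nested fits; a minimal sketch:

```r
# compare the linear model (fit) against the quadratic model (fit1)
anova(fit, fit1)
```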
  14. This problem focuses on the collinearity problem.

  a. Perform the following commands in R:

```r
set.seed(1)
x1 <- runif(100)
x2 <- 0.5 * x1 + rnorm(100) / 10
y <- 2 + 2 * x1 + 0.3 * x2 + rnorm(100)
```

The last line corresponds to creating a linear model in which y is a function of x1 and x2. Write out the form of the linear model. What are the regression coefficients?

**Answer:** The model is $Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \epsilon$ with $\beta_0 = 2$, $\beta_1 = 2$, $\beta_2 = 0.3$.

  b. What is the correlation between x1 and x2? Create a scatterplot displaying the relationship between the variables.

```r
cor(x1, x2)
```

```
[1] 0.8351212
```

```r
plot(x1, x2)
```

**Answer:** The predictors are highly correlated (r ≈ 0.84).

  c. Using this data, fit a least squares regression to predict y using x1 and x2. Describe the results obtained. What are $\hat{\beta}_0$, $\hat{\beta}_1$, and $\hat{\beta}_2$? How do these relate to the true $\beta_0$, $\beta_1$, and $\beta_2$? Can you reject the null hypothesis $H_0: \beta_1 = 0$? How about the null hypothesis $H_0: \beta_2 = 0$?

```r
reg_y <- lm(y ~ x1 + x2)
summary(reg_y)
```

```
Call:
lm(formula = y ~ x1 + x2)

Residuals:
    Min      1Q  Median      3Q     Max 
-2.8311 -0.7273 -0.0537  0.6338  2.3359 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)   2.1305     0.2319   9.188 7.61e-15 ***
x1            1.4396     0.7212   1.996   0.0487 *  
x2            1.0097     1.1337   0.891   0.3754    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.056 on 97 degrees of freedom
Multiple R-squared:  0.2088,    Adjusted R-squared:  0.1925 
F-statistic:  12.8 on 2 and 97 DF,  p-value: 1.164e-05
```

**Answer:** $\hat{\beta}_0 = 2.13$, $\hat{\beta}_1 = 1.44$, $\hat{\beta}_2 = 1.01$. Only $\hat{\beta}_0$ is close to its true value; the collinearity between x1 and x2 inflates the standard errors of the other two. We can just barely reject $H_0: \beta_1 = 0$ (p = 0.0487), and we cannot reject $H_0: \beta_2 = 0$ (p = 0.375).
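The collinearity can be quantified with variance inflation factors; a minimal sketch, assuming the car package is installed (vif() comes from car, not from the text above):

```r
library(car)
vif(reg_y)   # values well above 1 flag the x1-x2 collinearity
```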
  d. Now fit a least squares regression to predict y using only x1. Comment on your results. Can you reject the null hypothesis $H_0: \beta_1 = 0$?

```r
reg_yx1 <- lm(y ~ x1)
summary(reg_yx1)
```

```
Call:
lm(formula = y ~ x1)

Residuals:
     Min       1Q   Median       3Q      Max 
-2.89495 -0.66874 -0.07785  0.59221  2.45560 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)   2.1124     0.2307   9.155 8.27e-15 ***
x1            1.9759     0.3963   4.986 2.66e-06 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.055 on 98 degrees of freedom
Multiple R-squared:  0.2024,    Adjusted R-squared:  0.1942 
F-statistic: 24.86 on 1 and 98 DF,  p-value: 2.661e-06
```

**Answer:** On its own, x1 is highly significant (p ≈ 2.7e-06), so we can clearly reject $H_0: \beta_1 = 0$.
  e. Now fit a least squares regression to predict y using only x2. Comment on your results. Can you reject the null hypothesis $H_0: \beta_1 = 0$?

```r
reg_yx2 <- lm(y ~ x2)
summary(reg_yx2)
```

```
Call:
lm(formula = y ~ x2)

Residuals:
     Min       1Q   Median       3Q      Max 
-2.62687 -0.75156 -0.03598  0.72383  2.44890 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)   2.3899     0.1949   12.26  < 2e-16 ***
x2            2.8996     0.6330    4.58 1.37e-05 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.072 on 98 degrees of freedom
Multiple R-squared:  0.1763,    Adjusted R-squared:  0.1679 
F-statistic: 20.98 on 1 and 98 DF,  p-value: 1.366e-05
```

**Answer:** On its own, x2 is also highly significant (p ≈ 1.4e-05), so we can reject $H_0: \beta_1 = 0$ here too. The results of (c)–(e) do not contradict each other: x1 and x2 are so highly correlated that either one alone explains y well, but together each adds little beyond the other, which masks their individual significance in (c).
  f. Now suppose we obtain one additional observation, which was unfortunately mismeasured.

```r
x1 <- c(x1, 0.1)
x2 <- c(x2, 0.8)
y <- c(y, 6)
```

Re-fit the linear models from (c) to (e) using this new data. What effect does this new observation have on each of the models? In each model, is this observation an outlier? A high-leverage point? Both? Explain your answers.

```r
reg_y <- lm(y ~ x1 + x2)
summary(reg_y)
```

```
Call:
lm(formula = y ~ x1 + x2)

Residuals:
     Min       1Q   Median       3Q      Max 
-2.73348 -0.69318 -0.05263  0.66385  2.30619 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)   2.2267     0.2314   9.624 7.91e-16 ***
x1            0.5394     0.5922   0.911  0.36458    
x2            2.5146     0.8977   2.801  0.00614 ** 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.075 on 98 degrees of freedom
Multiple R-squared:  0.2188,    Adjusted R-squared:  0.2029 
F-statistic: 13.72 on 2 and 98 DF,  p-value: 5.564e-06
```
```r
reg_yx1 <- lm(y ~ x1)
summary(reg_yx1)
```

```
Call:
lm(formula = y ~ x1)

Residuals:
    Min      1Q  Median      3Q     Max 
-2.8897 -0.6556 -0.0909  0.5682  3.5665 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)   2.2569     0.2390   9.445 1.78e-15 ***
x1            1.7657     0.4124   4.282 4.29e-05 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.111 on 99 degrees of freedom
Multiple R-squared:  0.1562,    Adjusted R-squared:  0.1477 
F-statistic: 18.33 on 1 and 99 DF,  p-value: 4.295e-05
```
```r
reg_yx2 <- lm(y ~ x2)
summary(reg_yx2)
```

```
Call:
lm(formula = y ~ x2)

Residuals:
     Min       1Q   Median       3Q      Max 
-2.64729 -0.71021 -0.06899  0.72699  2.38074 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)   2.3451     0.1912  12.264  < 2e-16 ***
x2            3.1190     0.6040   5.164 1.25e-06 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.074 on 99 degrees of freedom
Multiple R-squared:  0.2122,    Adjusted R-squared:  0.2042 
F-statistic: 26.66 on 1 and 99 DF,  p-value: 1.253e-06
```

**Answer:** In the joint model the new point reverses the conclusions of (c): x2 is now significant and x1 is not, so it is an influential, high-leverage observation (its x2 value of 0.8 is far from what its x1 value of 0.1 would suggest, given the strong x1–x2 correlation). In the y ~ x1 model it is an outlier but not high leverage: x1 = 0.1 is well within the range of x1, and the point produces the largest residual (about 3.57). In the y ~ x2 model it is a high-leverage point but not an outlier.