Section 1 - Dataset size

1.1

Load the training and testing sets using the read.csv() function, and save them as variables with the names pisaTrain and pisaTest. How many students are there in the training set?

pisaTrain = read.csv("pisa2009train.csv")
pisaTest = read.csv("pisa2009test.csv")
str(pisaTrain)
'data.frame':   3663 obs. of  24 variables:
 $ grade                : int  11 11 9 10 10 10 10 10 9 10 ...
 $ male                 : int  1 1 1 0 1 1 0 0 0 1 ...
 $ raceeth              : Factor w/ 7 levels "American Indian/Alaska Native",..: NA 7 7 3 4 3 2 7 7 5 ...
 $ preschool            : int  NA 0 1 1 1 1 0 1 1 1 ...
 $ expectBachelors      : int  0 0 1 1 0 1 1 1 0 1 ...
 $ motherHS             : int  NA 1 1 0 1 NA 1 1 1 1 ...
 $ motherBachelors      : int  NA 1 1 0 0 NA 0 0 NA 1 ...
 $ motherWork           : int  1 1 1 1 1 1 1 0 1 1 ...
 $ fatherHS             : int  NA 1 1 1 1 1 NA 1 0 0 ...
 $ fatherBachelors      : int  NA 0 NA 0 0 0 NA 0 NA 0 ...
 $ fatherWork           : int  1 1 1 1 0 1 NA 1 1 1 ...
 $ selfBornUS           : int  1 1 1 1 1 1 0 1 1 1 ...
 $ motherBornUS         : int  0 1 1 1 1 1 1 1 1 1 ...
 $ fatherBornUS         : int  0 1 1 1 0 1 NA 1 1 1 ...
 $ englishAtHome        : int  0 1 1 1 1 1 1 1 1 1 ...
 $ computerForSchoolwork: int  1 1 1 1 1 1 1 1 1 1 ...
 $ read30MinsADay       : int  0 1 0 1 1 0 0 1 0 0 ...
 $ minutesPerWeekEnglish: int  225 450 250 200 250 300 250 300 378 294 ...
 $ studentsInEnglish    : int  NA 25 28 23 35 20 28 30 20 24 ...
 $ schoolHasLibrary     : int  1 1 1 1 1 1 1 1 0 1 ...
 $ publicSchool         : int  1 1 1 1 1 1 1 1 1 1 ...
 $ urban                : int  1 0 0 1 1 0 1 0 1 0 ...
 $ schoolSize           : int  673 1173 1233 2640 1095 227 2080 1913 502 899 ...
 $ readingScore         : num  476 575 555 458 614 ...

1.2

Using tapply() on pisaTrain, what is the average reading test score of males?

tapply(pisaTrain$readingScore, pisaTrain$male, mean)
       0        1 
512.9406 483.5325 
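##Males are coded male = 1, so the average reading score of males is 483.5325.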

1.3

Which variables are missing data in at least one observation in the training set? Select all that apply.

summary(pisaTrain)
     grade            male                      raceeth       preschool      expectBachelors 
 Min.   : 8.00   Min.   :0.0000   White             :2015   Min.   :0.0000   Min.   :0.0000  
 1st Qu.:10.00   1st Qu.:0.0000   Hispanic          : 834   1st Qu.:0.0000   1st Qu.:1.0000  
 Median :10.00   Median :1.0000   Black             : 444   Median :1.0000   Median :1.0000  
 Mean   :10.09   Mean   :0.5111   Asian             : 143   Mean   :0.7228   Mean   :0.7859  
 3rd Qu.:10.00   3rd Qu.:1.0000   More than one race: 124   3rd Qu.:1.0000   3rd Qu.:1.0000  
 Max.   :12.00   Max.   :1.0000   (Other)           :  68   Max.   :1.0000   Max.   :1.0000  
                                  NA's              :  35   NA's   :56       NA's   :62      
    motherHS    motherBachelors    motherWork        fatherHS      fatherBachelors 
 Min.   :0.00   Min.   :0.0000   Min.   :0.0000   Min.   :0.0000   Min.   :0.0000  
 1st Qu.:1.00   1st Qu.:0.0000   1st Qu.:0.0000   1st Qu.:1.0000   1st Qu.:0.0000  
 Median :1.00   Median :0.0000   Median :1.0000   Median :1.0000   Median :0.0000  
 Mean   :0.88   Mean   :0.3481   Mean   :0.7345   Mean   :0.8593   Mean   :0.3319  
 3rd Qu.:1.00   3rd Qu.:1.0000   3rd Qu.:1.0000   3rd Qu.:1.0000   3rd Qu.:1.0000  
 Max.   :1.00   Max.   :1.0000   Max.   :1.0000   Max.   :1.0000   Max.   :1.0000  
 NA's   :97     NA's   :397      NA's   :93       NA's   :245      NA's   :569     
   fatherWork       selfBornUS      motherBornUS     fatherBornUS    englishAtHome   
 Min.   :0.0000   Min.   :0.0000   Min.   :0.0000   Min.   :0.0000   Min.   :0.0000  
 1st Qu.:1.0000   1st Qu.:1.0000   1st Qu.:1.0000   1st Qu.:1.0000   1st Qu.:1.0000  
 Median :1.0000   Median :1.0000   Median :1.0000   Median :1.0000   Median :1.0000  
 Mean   :0.8531   Mean   :0.9313   Mean   :0.7725   Mean   :0.7668   Mean   :0.8717  
 3rd Qu.:1.0000   3rd Qu.:1.0000   3rd Qu.:1.0000   3rd Qu.:1.0000   3rd Qu.:1.0000  
 Max.   :1.0000   Max.   :1.0000   Max.   :1.0000   Max.   :1.0000   Max.   :1.0000  
 NA's   :233      NA's   :69       NA's   :71       NA's   :113      NA's   :71      
 computerForSchoolwork read30MinsADay   minutesPerWeekEnglish studentsInEnglish
 Min.   :0.0000        Min.   :0.0000   Min.   :   0.0        Min.   : 1.0     
 1st Qu.:1.0000        1st Qu.:0.0000   1st Qu.: 225.0        1st Qu.:20.0     
 Median :1.0000        Median :0.0000   Median : 250.0        Median :25.0     
 Mean   :0.8994        Mean   :0.2899   Mean   : 266.2        Mean   :24.5     
 3rd Qu.:1.0000        3rd Qu.:1.0000   3rd Qu.: 300.0        3rd Qu.:30.0     
 Max.   :1.0000        Max.   :1.0000   Max.   :2400.0        Max.   :75.0     
 NA's   :65            NA's   :34       NA's   :186           NA's   :249      
 schoolHasLibrary  publicSchool        urban          schoolSize    readingScore  
 Min.   :0.0000   Min.   :0.0000   Min.   :0.0000   Min.   : 100   Min.   :168.6  
 1st Qu.:1.0000   1st Qu.:1.0000   1st Qu.:0.0000   1st Qu.: 712   1st Qu.:431.7  
 Median :1.0000   Median :1.0000   Median :0.0000   Median :1212   Median :499.7  
 Mean   :0.9676   Mean   :0.9339   Mean   :0.3849   Mean   :1369   Mean   :497.9  
 3rd Qu.:1.0000   3rd Qu.:1.0000   3rd Qu.:1.0000   3rd Qu.:1900   3rd Qu.:566.2  
 Max.   :1.0000   Max.   :1.0000   Max.   :1.0000   Max.   :6694   Max.   :746.0  
 NA's   :143                                        NA's   :162                   
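##Every variable except grade, male, publicSchool, urban, and readingScore has at least one NA.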

1.4

Linear regression discards observations with missing data, so we will remove all such observations from the training and testing sets. Later in the course, we will learn about imputation, which deals with missing data by filling in missing values with plausible information.

Type the following commands into your R console to remove observations with any missing value from pisaTrain and pisaTest:

pisaTrain = na.omit(pisaTrain)
pisaTest = na.omit(pisaTest)

How many observations are now in the training set?

pisaTrain = na.omit(pisaTrain)
pisaTest = na.omit(pisaTest)
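##2414 observations remain in the training set (confirmed by the str() output in Section 2.1):
nrow(pisaTrain)
[1] 2414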

Section 2 - Factor variables

2.1

Factor variables are variables that take on a discrete set of values, like the “Region” variable in the WHO dataset from the second lecture of Unit 1. This is an unordered factor because there isn’t any natural ordering between the levels. An ordered factor has a natural ordering between the levels (an example would be the classifications “large,” “medium,” and “small”).
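
As a small illustration (toy vectors for this example, not part of the PISA data), this is how R represents the two kinds of factor:

region = factor(c("Europe", "Africa", "Europe"))   ##unordered: no ranking among levels
levels(region)
[1] "Africa" "Europe"
size = factor(c("small", "large", "medium"), levels = c("small", "medium", "large"), ordered = TRUE)
size < "large"   ##comparisons are meaningful only for ordered factors
[1]  TRUE FALSE  TRUE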

Which of the following variables is an unordered factor with at least 3 levels? (Select all that apply.)

str(pisaTrain)
'data.frame':   2414 obs. of  24 variables:
 $ grade                : int  11 10 10 10 10 10 10 10 11 9 ...
 $ male                 : int  1 0 1 0 1 0 0 0 1 1 ...
 $ raceeth              : Factor w/ 7 levels "American Indian/Alaska Native",..: 7 3 4 7 5 4 7 4 7 7 ...
 $ preschool            : int  0 1 1 1 1 1 1 1 1 1 ...
 $ expectBachelors      : int  0 1 0 1 1 1 1 0 1 1 ...
 $ motherHS             : int  1 0 1 1 1 1 1 0 1 1 ...
 $ motherBachelors      : int  1 0 0 0 1 0 0 0 0 1 ...
 $ motherWork           : int  1 1 1 0 1 1 1 0 0 1 ...
 $ fatherHS             : int  1 1 1 1 0 1 1 0 1 1 ...
 $ fatherBachelors      : int  0 0 0 0 0 0 1 0 1 1 ...
 $ fatherWork           : int  1 1 0 1 1 0 1 1 1 1 ...
 $ selfBornUS           : int  1 1 1 1 1 0 1 0 1 1 ...
 $ motherBornUS         : int  1 1 1 1 1 0 1 0 1 1 ...
 $ fatherBornUS         : int  1 1 0 1 1 0 1 0 1 1 ...
 $ englishAtHome        : int  1 1 1 1 1 0 1 0 1 1 ...
 $ computerForSchoolwork: int  1 1 1 1 1 0 1 1 1 1 ...
 $ read30MinsADay       : int  1 1 1 1 0 1 1 1 0 0 ...
 $ minutesPerWeekEnglish: int  450 200 250 300 294 232 225 270 275 225 ...
 $ studentsInEnglish    : int  25 23 35 30 24 14 20 25 30 15 ...
 $ schoolHasLibrary     : int  1 1 1 1 1 1 1 1 1 1 ...
 $ publicSchool         : int  1 1 1 1 1 1 1 1 1 0 ...
 $ urban                : int  0 1 1 0 0 0 0 1 1 1 ...
 $ schoolSize           : int  1173 2640 1095 1913 899 1733 149 1400 1988 915 ...
 $ readingScore         : num  575 458 614 439 466 ...
 - attr(*, "na.action")=Class 'omit'  Named int [1:1249] 1 3 6 7 9 11 13 21 29 30 ...
  .. ..- attr(*, "names")= chr [1:1249] "1" "3" "6" "7" ...
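##raceeth is the only unordered factor with at least 3 levels (grade has a natural ordering, and the remaining variables are binary or numeric).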

2.2

To include unordered factors in a linear regression model, we define one level as the “reference level” and add a binary variable for each of the remaining levels. In this way, a factor with n levels is replaced by n-1 binary variables. The reference level is typically selected to be the most frequently occurring level in the dataset.

As an example, consider the unordered factor variable “color”, with levels “red”, “green”, and “blue”. If “green” were the reference level, then we would add binary variables “colorred” and “colorblue” to a linear regression problem. All red examples would have colorred=1 and colorblue=0. All blue examples would have colorred=0 and colorblue=1. All green examples would have colorred=0 and colorblue=0.
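
We can check this encoding directly with model.matrix(), which builds the binary variables that lm() would use (again with a toy color vector, not part of the assignment data):

color = factor(c("red", "green", "blue", "green"))
color = relevel(color, "green")   ##make "green" the reference level
model.matrix(~ color)             ##columns colorblue and colorred; green rows get 0 in both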

Now, consider the variable “raceeth” in our problem, which has levels “American Indian/Alaska Native”, “Asian”, “Black”, “Hispanic”, “More than one race”, “Native Hawaiian/Other Pacific Islander”, and “White”. Because “White” is the most common level in our population, we will select it as the reference level.

Which binary variables will be included in the regression model? (Select all that apply.)

##We create a binary variable for each level except the reference level, so we would create all these variables except for raceethWhite.

3.1

Because the race variable takes on text values, it was loaded as a factor variable when we read in the dataset with read.csv() – you can see this when you run str(pisaTrain) or str(pisaTest). However, by default R selects the first level alphabetically (“American Indian/Alaska Native”) as the reference level of our factor instead of the most common level (“White”). Set the reference level of the factor by typing the following two lines in your R console:

pisaTrain$raceeth = relevel(pisaTrain$raceeth, "White")
pisaTest$raceeth = relevel(pisaTest$raceeth, "White")

Now, build a linear regression model (call it lmScore) using the training set to predict readingScore using all the remaining variables.

It would be time-consuming to type all the variables, but R provides the shorthand notation “readingScore ~ .” to mean “predict readingScore using all the other variables in the data frame.” The period is used to replace listing out all of the independent variables. As an example, if your dependent variable is called “Y”, your independent variables are called “X1”, “X2”, and “X3”, and your training data set is called “Train”, instead of the regular notation:

LinReg = lm(Y ~ X1 + X2 + X3, data = Train)

You would use the following command to build your model:

LinReg = lm(Y ~ ., data = Train)

What is the Multiple R-squared value of lmScore on the training set?

pisaTrain$raceeth = relevel(pisaTrain$raceeth, "White")
pisaTest$raceeth = relevel(pisaTest$raceeth, "White")
lmScore = lm(readingScore~., data=pisaTrain)
summary(lmScore)

Call:
lm(formula = readingScore ~ ., data = pisaTrain)

Residuals:
    Min      1Q  Median      3Q     Max 
-247.44  -48.86    1.86   49.77  217.18 

Coefficients:
                                                Estimate Std. Error t value Pr(>|t|)    
(Intercept)                                    76.489006  37.302678   2.050 0.040425 *  
grade                                          29.542707   2.937399  10.057  < 2e-16 ***
male                                          -14.521653   3.155926  -4.601 4.42e-06 ***
raceethAsian                                   63.167002  18.972648   3.329 0.000884 ***
raceethBlack                                    0.264980  17.369507   0.015 0.987830    
raceethHispanic                                28.301842  17.258860   1.640 0.101169    
raceethMore than one race                      50.354805  18.570123   2.712 0.006744 ** 
raceethNative Hawaiian/Other Pacific Islander  62.175726  23.782766   2.614 0.008997 ** 
raceethWhite                                   67.277327  16.786935   4.008 6.32e-05 ***
preschool                                      -4.463670   3.486055  -1.280 0.200516    
expectBachelors                                55.267080   4.293893  12.871  < 2e-16 ***
motherHS                                        6.058774   6.091423   0.995 0.320012    
motherBachelors                                12.638068   3.861457   3.273 0.001080 ** 
motherWork                                     -2.809101   3.521827  -0.798 0.425167    
fatherHS                                        4.018214   5.579269   0.720 0.471470    
fatherBachelors                                16.929755   3.995253   4.237 2.35e-05 ***
fatherWork                                      5.842798   4.395978   1.329 0.183934    
selfBornUS                                     -3.806278   7.323718  -0.520 0.603307    
motherBornUS                                   -8.798153   6.587621  -1.336 0.181821    
fatherBornUS                                    4.306994   6.263875   0.688 0.491776    
englishAtHome                                   8.035685   6.859492   1.171 0.241527    
computerForSchoolwork                          22.500232   5.702562   3.946 8.19e-05 ***
read30MinsADay                                 34.871924   3.408447  10.231  < 2e-16 ***
minutesPerWeekEnglish                           0.012788   0.010712   1.194 0.232644    
studentsInEnglish                              -0.286631   0.227819  -1.258 0.208460    
schoolHasLibrary                               12.215085   9.264884   1.318 0.187487    
publicSchool                                  -16.857475   6.725614  -2.506 0.012261 *  
urban                                          -0.110132   3.962724  -0.028 0.977830    
schoolSize                                      0.006540   0.002197   2.977 0.002942 ** 
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 73.81 on 2385 degrees of freedom
Multiple R-squared:  0.3251,    Adjusted R-squared:  0.3172 
F-statistic: 41.04 on 28 and 2385 DF,  p-value: < 2.2e-16
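##Multiple R-squared on the training set: 0.3251 (Adjusted R-squared: 0.3172).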

3.2

What is the training-set root mean squared error (RMSE) of lmScore?

SSE = sum(lmScore$residuals^2)
RMSE = sqrt(SSE / nrow(pisaTrain))
RMSE
[1] 73.36555
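##Equivalently, sqrt(mean(lmScore$residuals^2)) gives the same value, since RMSE = sqrt(SSE/n).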

3.3

Consider two students A and B. They have all variable values the same, except that student A is in grade 11 and student B is in grade 9. What is the predicted reading score of student A minus the predicted reading score of student B?

##Only grade differs between the two students (11 vs. 9), so the gap is 2 times the grade coefficient:
29.542707*2
[1] 59.08541

3.4

What is the meaning of the coefficient associated with variable raceethAsian?

##Predicted difference in the reading score between an Asian student and a white student who is otherwise identical

3.5

Based on the significance codes, which variables are candidates for removal from the model? Select all that apply. (We’ll assume that the factor variable raceeth should only be removed if none of its levels are significant.)

summary(lmScore)

Call:
lm(formula = readingScore ~ ., data = pisaTrain)

Residuals:
    Min      1Q  Median      3Q     Max 
-247.44  -48.86    1.86   49.77  217.18 

Coefficients:
                                                Estimate Std. Error t value Pr(>|t|)    
(Intercept)                                    76.489006  37.302678   2.050 0.040425 *  
grade                                          29.542707   2.937399  10.057  < 2e-16 ***
male                                          -14.521653   3.155926  -4.601 4.42e-06 ***
raceethAsian                                   63.167002  18.972648   3.329 0.000884 ***
raceethBlack                                    0.264980  17.369507   0.015 0.987830    
raceethHispanic                                28.301842  17.258860   1.640 0.101169    
raceethMore than one race                      50.354805  18.570123   2.712 0.006744 ** 
raceethNative Hawaiian/Other Pacific Islander  62.175726  23.782766   2.614 0.008997 ** 
raceethWhite                                   67.277327  16.786935   4.008 6.32e-05 ***
preschool                                      -4.463670   3.486055  -1.280 0.200516    
expectBachelors                                55.267080   4.293893  12.871  < 2e-16 ***
motherHS                                        6.058774   6.091423   0.995 0.320012    
motherBachelors                                12.638068   3.861457   3.273 0.001080 ** 
motherWork                                     -2.809101   3.521827  -0.798 0.425167    
fatherHS                                        4.018214   5.579269   0.720 0.471470    
fatherBachelors                                16.929755   3.995253   4.237 2.35e-05 ***
fatherWork                                      5.842798   4.395978   1.329 0.183934    
selfBornUS                                     -3.806278   7.323718  -0.520 0.603307    
motherBornUS                                   -8.798153   6.587621  -1.336 0.181821    
fatherBornUS                                    4.306994   6.263875   0.688 0.491776    
englishAtHome                                   8.035685   6.859492   1.171 0.241527    
computerForSchoolwork                          22.500232   5.702562   3.946 8.19e-05 ***
read30MinsADay                                 34.871924   3.408447  10.231  < 2e-16 ***
minutesPerWeekEnglish                           0.012788   0.010712   1.194 0.232644    
studentsInEnglish                              -0.286631   0.227819  -1.258 0.208460    
schoolHasLibrary                               12.215085   9.264884   1.318 0.187487    
publicSchool                                  -16.857475   6.725614  -2.506 0.012261 *  
urban                                          -0.110132   3.962724  -0.028 0.977830    
schoolSize                                      0.006540   0.002197   2.977 0.002942 ** 
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 73.81 on 2385 degrees of freedom
Multiple R-squared:  0.3251,    Adjusted R-squared:  0.3172 
F-statistic: 41.04 on 28 and 2385 DF,  p-value: < 2.2e-16
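##Candidates for removal (no significance codes): preschool, motherHS, motherWork, fatherHS, fatherWork, selfBornUS, motherBornUS, fatherBornUS, englishAtHome, minutesPerWeekEnglish, studentsInEnglish, schoolHasLibrary, and urban. raceeth stays because several of its levels are significant.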

4.1

Using the “predict” function and supplying the “newdata” argument, use the lmScore model to predict the reading scores of students in pisaTest. Call this vector of predictions “predTest”. Do not change the variables in the model (for example, do not remove variables that we found were not significant in the previous part of this problem). Use the summary function to describe the test set predictions.

What is the range between the maximum and minimum predicted reading score on the test set?

predTest = predict(lmScore, newdata=pisaTest)
summary(predTest)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
  353.2   482.0   524.0   516.7   555.7   637.7 
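##Range between the maximum and minimum predicted score, using the summary values above:
637.7 - 353.2
[1] 284.5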

4.2

What is the sum of squared errors (SSE) of lmScore on the testing set?

sum((predTest-pisaTest$readingScore)^2)   ##test-set SSE
[1] 5762082
sqrt(mean((predTest-pisaTest$readingScore)^2))   ##test-set RMSE, shown for reference
[1] 76.29079

4.3

What is the predicted test score used in the baseline model? Remember to compute this value using the training set and not the test set.

baseline = mean(pisaTrain$readingScore)   ##baseline model predicts the training-set mean
sum((baseline-pisaTest$readingScore)^2)   ##baseline SSE on the test set (the SST used in 4.4)
[1] 7802354
baseline
[1] 517.9629

4.4

What is the test-set R-squared value of lmScore?

##Test-set R-squared = 1 - SSE/SST:
1-5762082/7802354
[1] 0.2614944