1. Create a new variable, grade_0, which goes from 0-5 instead of 3-8 like our current grade variable does. Why is using a time variable that starts at 0 helpful in interpreting our results?

It is helpful to use a time variable that starts at 0 because the intercept then represents the predicted outcome at the first measured time point (grade 3), which is easier to interpret than an extrapolation back to an unobserved grade 0. Our text notes a related option: “the time variable may be grand-mean centered, which means at 0, so that the intercept is the value of the outcome half-way through the time period under study” (p. 39). Here, instead of centering at the midpoint, we rescale grade so that it starts at 0, which anchors the intercept at the starting point of the study.
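
As a quick illustration, here is a minimal sketch comparing the two codings (the model names m_raw and m_centered are hypothetical; starlong.clean and grade_0 are created in the code below):

# Same slope either way; only the meaning of the intercept changes
m_raw      <- lmer(science ~ grade   + (1 | stdntid), data = starlong.clean)  # intercept = predicted score at grade 0 (an extrapolation)
m_centered <- lmer(science ~ grade_0 + (1 | stdntid), data = starlong.clean)  # intercept = predicted score at grade_0 = 0, i.e., grade 3
fixef(m_raw); fixef(m_centered)  # intercepts differ by 3 * slope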

2. Run and interpret a null model, AND a null growth model, with science scores (science) as the DV, with scores clustered within students (stdntid). How much variation in science scores is at the student level for each null model? Why does this value change when moving from a regular null model to a null growth model?

The average science score in the entire sample is about 730. In the null model, the random variance component at the student level is 729.4 and the residual (observation-level) variance is 3361.7, so most of the variability is within students rather than between students: 17.8% of the variability is at the student level. However, the null model does not account for time. When we run a null growth model, we add grade_0 as a predictor. The student-level variance in the growth model rises to 1072, the residual variance drops to 1300, and the intercept changes to 669, the predicted science score at grade_0 = 0 (grade 3). The proportion of variability at the student level therefore increases substantially (to roughly 45%), because the growth parameter explains much of the within-student variability over time.
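
The ICCs can also be pulled straight from the fitted models rather than hand-copied, as in this sketch (model.null and model.null.growth are fit in the code below):

# ICC = student-level variance / (student-level + residual variance)
vc_null   <- as.data.frame(VarCorr(model.null))
vc_growth <- as.data.frame(VarCorr(model.null.growth))
vc_null$vcov[vc_null$grp == "stdntid"] / sum(vc_null$vcov)        # ~0.18
vc_growth$vcov[vc_growth$grp == "stdntid"] / sum(vc_growth$vcov)  # ~0.45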

Part 2: Adding Predictors, Interactions, and Testing Random Slopes

3. Now, run a random intercept growth model with science scores (science) as the DV, and grade (grade_0), gender (gender), race/ethnicity (race), and academic motivation (ac_mot) as predictors. Interpret the results and evaluate model fit using AIC/BIC and your choice of effect size.

The AIC of the growth null model is 90959.4 and its BIC is 90987.7. The AIC and BIC of the new model are 90811.3 and 90882.1, respectively. This decrease in both AIC and BIC indicates that the model improves on the null growth model. Grade, gender, and the Black (vs. White) race contrast all appear to be significant predictors. As an effect size, the student-level variance drops from 1072 in the null growth model to 939.3, a proportional reduction of about 12%. These coefficients describe differences in starting points (intercepts), not differences in growth (slopes).
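
A sketch of that effect-size calculation, done from the fitted objects (model.null.growth and model.1 are fit in the code below) rather than by hand:

# Proportional reduction in student-level variance, plus AIC/BIC side by side
v0 <- as.data.frame(VarCorr(model.null.growth))
v1 <- as.data.frame(VarCorr(model.1))
(v0$vcov[1] - v1$vcov[1]) / v0$vcov[1]  # ≈ (1072 - 939.3) / 1072 ≈ 0.12
AIC(model.null.growth, model.1)
BIC(model.null.growth, model.1)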

4. Use an interaction effect between ac_mot and grade_0 to test whether students with different levels of academic motivation have different growth in science scores. Interpret the results.

The interaction between grade_0 and ac_mot is not significant (t = 0.11). Differences in academic motivation do not appear to be associated with differences in students' growth in science scores over time.
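
A likelihood-ratio test of the interaction term points the same way; a sketch, using model.1 and model.2 from the code below (anova() refits the REML model with ML before comparing):

# LRT: does adding grade_0:ac_mot improve on model.1?
anova(model.1, model.2)  # a non-significant chi-square supports dropping the interaction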

5. Try adding a random slope for time (grade) at the student level. What do these random slopes “mean”? Use lrtest (Stata) or the rand function (R) to evaluate whether the slope should be treated as random.

Random slopes allow the effect of time to vary across students; that is, each student gets their own growth trajectory (slope on grade_0) rather than sharing a single common slope. However, for this model, including the random slope does not significantly improve fit: the likelihood-ratio chi-square is 0.607 (df = 2, p = .7382). The singular-fit warning and the near-zero slope variance point the same way, so the slope does not need to be treated as random.
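
To see what the student-specific slopes look like, one can inspect the BLUPs; a sketch using model.3 from the code below:

# Per-student deviations from the average intercept and slope
head(ranef(model.3)$stdntid)
# Each student's overall slope = fixed grade_0 slope + their deviation
summary(coef(model.3)$stdntid$grade_0)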

Load in Our MVP Packages

suppressPackageStartupMessages(library(tidyverse))
suppressPackageStartupMessages(library(Hmisc))
suppressPackageStartupMessages(library(lme4))

Load in the Data

starlong <- haven::read_dta("STAR_long.dta")

glimpse(starlong)
Rows: 8,826
Columns: 7
$ stdntid    <dbl> 10023, 10023, 10023, 10023, 10023, 10023, 1004...
$ grade      <dbl> 3, 4, 5, 6, 7, 8, 3, 4, 5, 6, 7, 8, 3, 4, 5, 6...
$ race       <dbl+lbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
$ gender     <dbl+lbl> 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, ...
$ ac_mot     <dbl> 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50...
$ ever_lunch <dbl+lbl> 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 2, 2, ...
$ science    <dbl> 632, 756, 729, 762, 778, 783, 618, 710, 738, 7...
starlong.clean <- starlong %>%
  mutate(.,
         race.fac = as_factor(race),
         gender.fac = as_factor(gender),
         lunch.fac = as_factor(ever_lunch),
         grade_0 = grade - 3)

glimpse(starlong.clean)
Rows: 8,826
Columns: 11
$ stdntid    <dbl> 10023, 10023, 10023, 10023, 10023, 10023, 1004...
$ grade      <dbl> 3, 4, 5, 6, 7, 8, 3, 4, 5, 6, 7, 8, 3, 4, 5, 6...
$ race       <dbl+lbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
$ gender     <dbl+lbl> 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, ...
$ ac_mot     <dbl> 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50, 50...
$ ever_lunch <dbl+lbl> 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 2, 2, ...
$ science    <dbl> 632, 756, 729, 762, 778, 783, 618, 710, 738, 7...
$ race.fac   <fct> WHITE, WHITE, WHITE, WHITE, WHITE, WHITE, WHIT...
$ gender.fac <fct> MALE, MALE, MALE, MALE, MALE, MALE, FEMALE, FE...
$ lunch.fac  <fct> NON-FREE LUNCH, NON-FREE LUNCH, NON-FREE LUNCH...
$ grade_0    <dbl> 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3...

Null Model

model.null <- lmer(science ~ (1|stdntid), data = starlong.clean)
summary(model.null)
Linear mixed model fit by REML ['lmerMod']
Formula: science ~ (1 | stdntid)
   Data: starlong.clean

REML criterion at convergence: 97939.7

Scaled residuals: 
    Min      1Q  Median      3Q     Max 
-3.1898 -0.3885  0.2381  0.6370  2.7353 

Random effects:
 Groups   Name        Variance Std.Dev.
 stdntid  (Intercept)  729.4   27.01   
 Residual             3361.7   57.98   
Number of obs: 8826, groups:  stdntid, 1471

Fixed effects:
            Estimate Std. Error t value
(Intercept) 730.0741     0.9363   779.7

Calculate ICC

null.icc <- 729.4/(729.4 + 3361.7)
null.icc
[1] 0.1782895

GROWTH null model

model.null.growth <- lmer(science ~ grade_0 + (1|stdntid), REML = FALSE, data = starlong.clean)
summary(model.null.growth)
Linear mixed model fit by maximum likelihood  ['lmerMod']
Formula: science ~ grade_0 + (1 | stdntid)
   Data: starlong.clean

     AIC      BIC   logLik deviance df.resid 
 90959.4  90987.7 -45475.7  90951.4     8822 

Scaled residuals: 
    Min      1Q  Median      3Q     Max 
-4.0254 -0.6361  0.0105  0.6367  4.0226 

Random effects:
 Groups   Name        Variance Std.Dev.
 stdntid  (Intercept) 1072     32.74   
 Residual             1300     36.05   
Number of obs: 8826, groups:  stdntid, 1471

Fixed effects:
            Estimate Std. Error t value
(Intercept) 669.3920     1.0916   613.2
grade_0      24.2729     0.2247   108.0

Correlation of Fixed Effects:
        (Intr)
grade_0 -0.515

Calculate GROWTH ICC

null.growth.icc <- 1072/(1072 + 1300)  # variance components from the growth model output above
null.growth.icc
[1] 0.4519393

Add Student-Level Predictors

model.1 <- lmer(science ~ grade_0 + gender.fac + race.fac + ac_mot + (1|stdntid), REML=FALSE, data = starlong.clean)
summary(model.1)
Linear mixed model fit by maximum likelihood  ['lmerMod']
Formula: 
science ~ grade_0 + gender.fac + race.fac + ac_mot + (1 | stdntid)
   Data: starlong.clean

     AIC      BIC   logLik deviance df.resid 
 90811.3  90882.1 -45395.6  90791.3     8816 

Scaled residuals: 
    Min      1Q  Median      3Q     Max 
-4.0880 -0.6310  0.0112  0.6372  3.9908 

Random effects:
 Groups   Name        Variance Std.Dev.
 stdntid  (Intercept)  939.3   30.65   
 Residual             1299.6   36.05   
Number of obs: 8826, groups:  stdntid, 1471

Fixed effects:
                 Estimate Std. Error t value
(Intercept)      691.3021    10.9167  63.325
grade_0           24.2729     0.2247 108.029
gender.facFEMALE  -6.4660     1.8242  -3.545
race.facBLACK    -28.7016     2.3547 -12.189
race.facASIAN     22.4758    15.2670   1.472
race.facHISPANIC  -6.8712    19.6583  -0.350
race.facOTHER     27.0196    24.0648   1.123
ac_mot            -0.2766     0.2245  -1.232

Correlation of Fixed Effects:
            (Intr) grad_0 g.FEMA r.BLAC r.ASIA r.HISP r.OTHE
grade_0     -0.051                                          
gndr.FEMALE  0.128  0.000                                   
rac.fcBLACK  0.002  0.000  0.049                            
rac.fcASIAN -0.058  0.000  0.029  0.026                     
rc.HISPANIC  0.004  0.000  0.021  0.022  0.003              
rac.fcOTHER  0.014  0.000  0.007  0.018  0.002  0.002       
ac_mot      -0.991  0.000 -0.217 -0.043  0.049 -0.010 -0.018

Add an Interaction Effect

model.2 <- lmer(science ~ grade_0 + gender.fac + race.fac + ac_mot + ac_mot:grade_0 + (1|stdntid), data = starlong.clean)
summary(model.2)
Linear mixed model fit by REML ['lmerMod']
Formula: 
science ~ grade_0 + gender.fac + race.fac + ac_mot + ac_mot:grade_0 +  
    (1 | stdntid)
   Data: starlong.clean

REML criterion at convergence: 90766.1

Scaled residuals: 
    Min      1Q  Median      3Q     Max 
-4.0845 -0.6315  0.0109  0.6369  3.9896 

Random effects:
 Groups   Name        Variance Std.Dev.
 stdntid  (Intercept)  944.8   30.74   
 Residual             1300.0   36.06   
Number of obs: 8826, groups:  stdntid, 1471

Fixed effects:
                   Estimate Std. Error t value
(Intercept)      692.054985  12.891599  53.683
grade_0           23.971719   2.735455   8.763
gender.facFEMALE  -6.465992   1.828568  -3.536
race.facBLACK    -28.701558   2.360372 -12.160
race.facASIAN     22.475804  15.303458   1.469
race.facHISPANIC  -6.871236  19.705237  -0.349
race.facOTHER     27.019612  24.122216   1.120
ac_mot            -0.291857   0.264239  -1.105
grade_0:ac_mot     0.006121   0.055418   0.110

Correlation of Fixed Effects:
            (Intr) grad_0 g.FEMA r.BLAC r.ASIA r.HISP r.OTHE ac_mot
grade_0     -0.530                                                 
gndr.FEMALE  0.108  0.000                                          
rac.fcBLACK  0.001  0.000  0.049                                   
rac.fcASIAN -0.049  0.000  0.029  0.026                            
rc.HISPANIC  0.003  0.000  0.021  0.022  0.003                     
rac.fcOTHER  0.012  0.000  0.007  0.018  0.002  0.002              
ac_mot      -0.993  0.523 -0.185 -0.036  0.042 -0.009 -0.015       
grad_0:c_mt  0.529 -0.997  0.000  0.000  0.000  0.000  0.000 -0.524

Random Slope

model.3 <- lmer(science ~ grade_0 + gender.fac + race.fac + ac_mot + (grade_0|stdntid), data = starlong.clean)
boundary (singular) fit: see ?isSingular
summary(model.3)
Linear mixed model fit by REML ['lmerMod']
Formula: 
science ~ grade_0 + gender.fac + race.fac + ac_mot + (grade_0 |  
    stdntid)
   Data: starlong.clean

REML criterion at convergence: 90761.5

Scaled residuals: 
    Min      1Q  Median      3Q     Max 
-4.0652 -0.6306  0.0104  0.6359  4.0013 

Random effects:
 Groups   Name        Variance  Std.Dev. Corr
 stdntid  (Intercept) 9.166e+02 30.2751      
          grade_0     3.431e-02  0.1852  1.00
 Residual             1.300e+03 36.0508      
Number of obs: 8826, groups:  stdntid, 1471

Fixed effects:
                 Estimate Std. Error t value
(Intercept)      691.3784    10.9397  63.199
grade_0           24.2729     0.2247 108.002
gender.facFEMALE  -6.4023     1.8282  -3.502
race.facBLACK    -28.6254     2.3599 -12.130
race.facASIAN     22.2725    15.3006   1.456
race.facHISPANIC  -6.5058    19.7015  -0.330
race.facOTHER     26.3064    24.1177   1.091
ac_mot            -0.2790     0.2250  -1.240

Correlation of Fixed Effects:
            (Intr) grad_0 g.FEMA r.BLAC r.ASIA r.HISP r.OTHE
grade_0     -0.050                                          
gndr.FEMALE  0.128  0.000                                   
rac.fcBLACK  0.002  0.000  0.049                            
rac.fcASIAN -0.058  0.000  0.029  0.026                     
rc.HISPANIC  0.004  0.000  0.021  0.022  0.003              
rac.fcOTHER  0.014  0.000  0.007  0.018  0.002  0.002       
ac_mot      -0.991  0.000 -0.217 -0.043  0.049 -0.010 -0.018
convergence code: 0
boundary (singular) fit: see ?isSingular

Should we keep that random slope for time (grade)?

lmerTest::rand(model.3)
ANOVA-like table for random-effects: Single term deletions

Model:
science ~ grade_0 + gender.fac + race.fac + ac_mot + (grade_0 | 
    stdntid)
                               npar logLik   AIC   LRT Df Pr(>Chisq)
<none>                           12 -45381 90786                    
grade_0 in (grade_0 | stdntid)   10 -45381 90782 0.607  2     0.7382