Standard Error Computations for Uncertainty Quantification

We computationally investigate two approaches for uncertainty quantification in inverse problems for nonlinear, parameter-dependent dynamical systems.
A standard method to quantify uncertainty involves the computation of standard errors (SE) to be used in confidence intervals (CI) for the parameter estimates.
In the paper, the authors compare the bootstrapping approach with the asymptotic theory approach by examining parameter standard errors for data with various noise levels (NL).
They do this for both absolute-error (constant variance) measurement data and relative-error (and hence non-constant variance) measurement data sets when obtaining parameter estimates. Both types of error are found widely in measurement procedures used in science and engineering investigations.

Constant variance: we can use ordinary least squares (OLS).

Non-constant variance: we cannot use OLS; we use generalized least squares (GLS) instead.

Heteroskedasticity is said to occur when the variance of the unobservable error u, conditional on the independent variables, is not constant.
Since the homoskedasticity assumption is not needed to establish the unbiasedness of OLS, OLS remains unbiased under heteroskedasticity; it is even still consistent. However, homoskedasticity is needed to establish the efficiency of OLS, so under heteroskedasticity OLS is no longer efficient.
Moreover, the usual estimators of the variances of the OLS estimators are biased in this case, so the usual OLS t statistics and confidence intervals are no longer valid for inference.
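
When the error variance is not constant, GLS downweights the noisier observations. As a minimal sketch (our illustration, not the paper's code) for relative-error data, where Var(e_i) is proportional to f(t_i)^2, nls() accepts fixed weights, so w_i = 1/f(t_i)^2 taken from a preliminary OLS fit can be plugged in; in practice the weights are re-estimated from each new fit until convergence. This sketch assumes the t and simul_data objects constructed later in this notebook.

# One GLS step via weighted nls(): weights come from a preliminary OLS fit
ols_fit <- nls(simul_data ~ K / (1 + ((K / x) - 1) * exp(-r * t)),
               start = c(K = 15, r = 1, x = 1))
w <- 1 / fitted(ols_fit)^2    # w_i = 1 / f(t_i)^2 for relative error
gls_fit <- nls(simul_data ~ K / (1 + ((K / x) - 1) * exp(-r * t)),
               start = coef(ols_fit), weights = w)
summary(gls_fit)              # GLS parameter estimates and standard errors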

First of all, we give a detailed algorithm for computing the bootstrap estimates for constant-variance data.
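
In outline, the procedure carried out below (Processes 1-8) is:

1. Estimate the parameter vector theta0 from the full sample by OLS.
2. Compute the fitted values yhat and the standardized residuals rbar = resid * sqrt(n/(n-p)).
3. Sample n residuals with replacement from rbar.
4. Form the bootstrap sample points yhat + resampled residuals.
5. Re-estimate the parameters from the bootstrap sample by OLS.
6.-7. Repeat steps 3-5 M times, collecting the estimates.
8. Compute the mean, covariance matrix, and standard errors of the M estimates.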

# Model with the true parameter values K = 17.5, r = 0.7, x0 = 0.1
set.seed(7)
par <- c(17.5, 0.7, 0.1)  # true parameter vector (K, r, x0)
n <- 50                   # number of observations
p <- 3                    # number of parameters

The model we use to carry out this demonstration analysis is the following well-known logistic model:

\[ x(t) = \frac{K}{1 + \left( \frac{K}{x_0} - 1 \right) e^{-rt}} \]
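
This is the closed-form solution of the logistic growth equation with carrying capacity K, intrinsic growth rate r, and initial condition x(0) = x_0:

\[ \frac{dx}{dt} = r\,x\left(1 - \frac{x}{K}\right), \qquad x(0) = x_0 \]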

options(digits=14)
# Function from page 2 with the true parameter values above
myfunc <- function(x, par=c(17.5, 0.7, 0.1))
{par[1] / (1 + ((par[1] / par[3]) - 1 ) * exp((-par[2]*x)))}
curve(myfunc, from=-5, to=25)

# Sample n = 50 time points from t = 0 to 25 (4 decimal places) and evaluate the true model
t <- sample(0:250000, 50) * (10**-4)
y <- myfunc(t)
t
 [1] 24.7228  9.9436  2.8924  1.7437  6.0936 19.7999  8.5013 24.3009  4.1462 11.4772  4.2935
[12]  5.7866 19.3194  2.4074 11.3356  2.1173 14.0158  0.2176 24.6417  7.9140 15.9850  7.3799
[23] 24.9154 22.6485 24.7162  1.6409 15.6744 12.2606 24.2729  9.0545 16.9978  6.5922  4.6422
[34]  4.6279  9.4811 21.1727 12.4501 19.7617 20.9584 11.4208 19.9837  9.5470 18.9894 10.9175
[45] 22.6015  7.9869  2.0638 20.4034 22.4576 24.1577
# A noise vector of length 50 ~ N(0, NI^2), where NI holds the noise levels
NI <- c(0.01, 0.05, 0.1)
noise <- rnorm(50, mean = 0, sd = NI[3])  # rnorm takes the sd, not the variance
noise
 [1]  0.194192771235767  0.762279895740033  0.601745052462727 -0.973052595771021
 [5] -0.266063955112006 -0.860851022568591  0.728710553084245  0.120652877769336
 [9] -0.068466767971704 -0.410490459341998 -0.552125876285266  1.007513444755305
[13] -1.095130058813263 -0.132287830774585  0.324994904887913  1.228550534507348
[17] -0.689317078685514 -0.275432751528726 -1.301552672609386 -0.381012431449258
[21] -0.391526613094972  1.360517580922953  0.601190027089221  0.110525455628569
[25]  0.941071995520097 -0.252742348566532  0.002331895287336  0.377153006545634
[29]  1.717162545137607  0.733740262528380  0.491036048707917 -1.557868244225251
[33]  0.328250283480828  0.175991450677350 -0.889907629628171  0.086371473860574
[37]  0.169155278262728  0.553674184709699  0.714807352613047  0.328969142557110
[41]  1.119249788971056  0.779154194657112  1.163473674778936  1.270683502680938
[45]  0.710623506572324  0.442627160845955 -0.912601718256921 -0.605584206630919
[49] -0.856659688251375 -1.629517087181138
# Create the simulated data: the constant-variance data set obtained from the equation on page 9
simul_data <- y + noise
dataset <- cbind(t, simul_data)
head(dataset)
           t        simul_data
[1,] 24.7228 17.69409993830508
[2,]  9.9436 15.78300482204131
[3,]  2.8924  1.33169400213693
[4,]  1.7437 -0.63869868473135
[5,]  6.0936  4.81560013089058
[6,] 19.7999 16.63623675930722
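
For the relative-error case mentioned in the introduction, the noise would instead be scaled by the model value, giving non-constant variance; a one-line sketch for contrast (not used in the rest of this constant-variance demonstration):

# Relative (multiplicative) error: the variance grows with the signal
simul_data_rel <- y * (1 + noise)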

Process 1

First, estimate the parameter vector theta0 from the entire sample using OLS.

result01 <- nls(simul_data ~ (K / (1 + ((K / x) - 1 ) * exp((-r*t)))), start = c(K=15, r=1, x=1))
summary(result01)

Formula: simul_data ~ (K/(1 + ((K/x) - 1) * exp((-r * t))))

Parameters:
         Estimate      Std. Error   t value   Pr(>|t|)    
K 17.570781466490  0.161444040179 108.83512 < 2.22e-16 ***
r  0.738652501936  0.044423695388  16.62744 < 2.22e-16 ***
x  0.079479912861  0.026249949810   3.02781  0.0039913 ** 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.79950810723 on 47 degrees of freedom

Number of iterations to convergence: 8 
Achieved convergence tolerance: 4.3722002195e-06
par01 <- summary(result01)$coefficients[,1]
par01
                K                 r                 x 
17.57078146649041  0.73865250193588  0.07947991286073 

Process 2

List of Residuals

resid01 <- resid(result01)           # raw OLS residuals (avoid masking resid())
rbar01 <- resid01 * sqrt(n / (n-p))  # inflate by sqrt(n/(n-p)) to correct for the p estimated parameters
rbar01
 [1]  0.1272400542943683  0.4115160591333110  0.7019187103496453 -0.9524865906034092
 [5] -0.2979895585533322 -0.9621347262287100  0.3435821215536065  0.0513736482742717
 [9]  0.0350505116254614 -0.6668335243146165 -0.4637781788410772  1.0589277713279224
[13] -1.2042198243474149 -0.0684710070275257  0.0804682473542763  1.3274702648774945
[17] -0.8282403742900334 -0.2602832968411239 -1.4155062815378066 -0.7444504377168575
[21] -0.4904672378606594  1.1385772716762643  0.5470315466544816  0.0408002158440991
[25]  0.8975869397528292 -0.2119069187416608 -0.0870833072406919  0.1997055865532192
[29]  1.6980464272810871  0.3372732349822224  0.4261756843335837 -1.7176598886982464
[33]  0.4398100331122135  0.2830937562799092 -1.3231831942014756  0.0155745129646496
[37] -0.0039517593909775  0.4968060161203969  0.6636810715150278  0.0914126248398750
[41]  1.0803206368602045  0.4018911379756337  1.1249581951941678  1.0204091447701014
[45]  0.6597480611241529  0.0955838643387519 -0.8823260368280093 -0.6984504705361555
[49] -0.9568007905010125 -1.7537952769055594
attr(,"label")
[1] "Residuals"

Calculate yhat01

yhat01 <- myfunc(dataset[,1], par01)  # model predictions at the OLS estimates
yhat01
 [1] 17.570736128653209 15.384025199868530  0.651158563395804  0.284771327214106
 [5]  5.104511727034458 17.569060985799158 12.439610387173524 17.570719550498982
 [9]  1.556056806129751 16.801456823436308  1.717432782563306  4.323692343107130
[13] 17.568328042657228  0.460230719192410 16.720735654778931  0.373347545220466
[17] 17.448277289850314  0.093265165178795 17.570733329712386 10.736681351937065
[21] 17.542022517568814  9.036702657200220 17.570742140807585 17.570571631919016
[25] 17.570735907087354  0.264261414741659 17.534621505314618 17.131000503391050
[29] 17.570718256606135 13.790479045502071 17.557159219804863  6.532376919791502
[33]  2.159956657856137  2.140025812691301 14.641941226791277 17.570157311170991
[37] 17.187190499951065 17.569011753383801 17.570050266954471 16.770220897327000
[41] 17.569279384973065 14.758825138349275 17.567650939632198 16.433077469782120
[45] 17.570564219326243 10.960159156300234  0.359177064366525 17.569679757989704
[49] 17.570539856112717 17.570712642448214

Processes 3, 4, 5, 6, 7

Carry out the above iterative process M (= 500) times and collect the results into the matrix of bootstrap estimates (capital Theta).

result <- matrix(nrow=500, ncol=3)
for(i in 1:500)
{
  # Process 3
  # Create a bootstrap sample of size n by sampling with replacement from the standardized residuals
  rboot_for <- sample(rbar01, n, replace = TRUE) 
  # Process 4
  # Create bootstrap sample points
  boot_sample_for <- yhat01 + rboot_for 
  # Process 5
  # Obtain a new parameter estimate vector theta_hat from the bootstrap sample using OLS
  result_for <- nls(boot_sample_for ~ (K / (1 + ((K / x) - 1 ) * exp((-r*t)))), start = c(K=15, r=1, x=1))
  result[i,] <- summary(result_for)$coefficients[,1]
}
i
[1] 500
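
The same residual bootstrap can also be phrased with the boot package; a minimal sketch under the same setup (our illustration, not the original code; individual nls() refits may occasionally fail to converge):

library(boot)
# statistic(data, indices): refit the model to yhat01 plus resampled residuals
boot_stat <- function(res, idx) {
  y_star <- yhat01 + res[idx]
  coef(nls(y_star ~ K / (1 + ((K / x) - 1) * exp(-r * t)),
           start = c(K = 15, r = 1, x = 1)))
}
boot_out <- boot(rbar01, boot_stat, R = 500)
apply(boot_out$t, 2, sd)   # bootstrap standard errors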

Process 8

Calculate the mean and standard errors.

boot_mean <- colMeans(result)  # mean of the bootstrap estimates (avoid masking mean())
boot_mean
[1] 17.557577263932632  0.744037383216128  0.081209462794069
# Accumulate the sum of outer products of deviations from the bootstrap mean
result1 <- matrix(nrow=3, ncol=3, 0)
result2 <- matrix(nrow=3, ncol=3, 0)
for (i in 1:500) {
  result1 <- as.matrix(result[i,] - boot_mean) %*% t(as.matrix(result[i,] - boot_mean))
  result2 <- result1 + result2
}
result2
                  [,1]             [,2]              [,3]
[1,] 13.76190483426159 -1.3271449836691  0.66061037784698
[2,] -1.32714498366906  1.0866488872438 -0.62339433874310
[3,]  0.66061037784698 -0.6233943387431  0.39328903942420
covmat <- result2 / 499  # unbiased sample covariance: divide by M - 1 = 499
covmat
                    [,1]                [,2]                 [,3]
[1,]  0.0275789676037306 -0.0026596091857095  0.00132386849267931
[2,] -0.0026596091857095  0.0021776530806489 -0.00124928725199019
[3,]  0.0013238684926793 -0.0012492872519902  0.00078815438762365
SE_k <- sqrt(diag(covmat))  # bootstrap standard errors of (K, r, x0)
SE_k
[1] 0.166069165120231 0.046665330606874 0.028074087476241
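
These bootstrap SEs (0.166, 0.0467, 0.0281) can be set against the asymptotic-theory SEs in the Std. Error column of summary(result01) above (0.161, 0.0444, 0.0262), which is the comparison the paper is concerned with. Note also that the loop above is just the unbiased sample covariance written out by hand; R's built-in functions give the same matrix and standard errors directly, which makes a convenient cross-check:

all.equal(cov(result), covmat, check.attributes = FALSE)  # cov() also divides by M - 1 = 499
apply(result, 2, sd)                                      # should match SE_k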
library(ggplot2)
result <- as.data.frame(result)
# Histogram of the bootstrap distribution of K (column V1)
ggplot(data=result, aes(V1)) + geom_histogram(binwidth=0.1)
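
Since the motivation above is confidence intervals for the parameter estimates, the bootstrap distribution also yields 95% percentile intervals directly (a natural follow-up, not in the original notebook):

# 2.5% and 97.5% quantiles of the bootstrap estimates of (K, r, x0)
apply(result, 2, quantile, probs = c(0.025, 0.975))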
