Original tutorial: https://www.datacamp.com/community/tutorials/keras-r-deep-learning

Loading the data

For this example we will use data provided by the "UCI Machine Learning Repository", in this case the well-known Iris Data Set; for more details see: http://archive.ics.uci.edu/ml/datasets/Iris.
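The setup chunk is not visible in this rendering; the code throughout assumes the following packages are loaded.

# set up (assumed, from the hidden setup chunk)
library(corrplot)
library(keras)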

# Read in `iris` data
iris <- read.csv(url("http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"), header = FALSE)

Next we will check the imported data using a few commands.

# Return the first part of `iris`
head(iris)
# Inspect the structure
str(iris)
'data.frame':   150 obs. of  5 variables:
 $ V1: num  5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 4.9 ...
 $ V2: num  3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1 ...
 $ V3: num  1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5 ...
 $ V4: num  0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1 ...
 $ V5: Factor w/ 3 levels "Iris-setosa",..: 1 1 1 1 1 1 1 1 1 1 ...
# Obtain the dimensions
dim(iris)
[1] 150   5

Exploring the data

In the following image we can see the three different types of iris used in the data.

As we could see from "str()", our data frame does not have column names that make the data easy to understand; at the moment we have "V1, V2, V3, V4 and V5". First we will rename the columns to something more meaningful.

names(iris) <- c("Sepal.Length", "Sepal.Width", "Petal.Length", "Petal.Width", "Species")

Now we will build a plot of the data relating petal length to petal width.

plot(iris$Petal.Length, 
     iris$Petal.Width, 
     pch=21, bg=c("red","green3","blue")[unclass(iris$Species)], 
     xlab="Petal Length", 
     ylab="Petal Width")

Note: the "unclass()" function converts the species factor into its underlying integer codes (a label encoding, not one hot encoding), which here lets us index one color per species.

Looking at the plot, there seems to be a correlation between petal length and petal width for the different species. We can confirm this hypothesis, and also check the correlation between the other attributes, using the "corrplot()" function together with "cor()" on each attribute.

# Store the overall correlations in `correlacoes`
correlacoes <- cor(iris[,1:4])
# Plot the correlation matrix
corrplot(correlacoes, method="circle")

# Overall correlation between `Petal.Length` and `Petal.Width` 
cor(iris$Petal.Length, iris$Petal.Width)
[1] 0.9627571

As we can observe, there is a strong correlation between petal length and petal width; in numeric terms it is 0.9627571.

Processing the data

Some basic practices for working with data involve cleaning and normalizing it, and since we will use the data in a machine learning algorithm we also need to split it into training and validation sets.

First we will summarize the data frame to assess the data.

# Pull up a summary of `iris`
summary(iris)
  Sepal.Length    Sepal.Width     Petal.Length    Petal.Width               Species  
 Min.   :4.300   Min.   :2.000   Min.   :1.000   Min.   :0.100   Iris-setosa    :50  
 1st Qu.:5.100   1st Qu.:2.800   1st Qu.:1.600   1st Qu.:0.300   Iris-versicolor:50  
 Median :5.800   Median :3.000   Median :4.350   Median :1.300   Iris-virginica :50  
 Mean   :5.843   Mean   :3.054   Mean   :3.759   Mean   :1.199                       
 3rd Qu.:6.400   3rd Qu.:3.300   3rd Qu.:5.100   3rd Qu.:1.800                       
 Max.   :7.900   Max.   :4.400   Max.   :6.900   Max.   :2.500                       

One important point to note here is that the data is well balanced, which will make training our algorithm easier.

Now we will normalize the data using the "normalize" function from the keras package, but first we need to turn our data frame into a matrix.

# convert the species factor into integer codes (label encoding)
iris[,5] <- as.numeric(iris[,5]) -1
# Turn `iris` into a matrix
iris <- as.matrix(iris)
# Set `iris` `dimnames` to `NULL`
dimnames(iris) <- NULL

Then we can normalize the data.

# Normalize the `iris` data (keras::normalize rescales along the last
# axis, i.e. each row, to unit L2 norm by default)
iris[,1:4] <- normalize(iris[,1:4])

Next we can look at our normalized data matrix; note that every value now lies between 0 and 1.

# Return the summary of `iris`
summary(iris)
       V1               V2               V3               V4                V5   
 Min.   :0.6539   Min.   :0.2384   Min.   :0.1678   Min.   :0.01473   Min.   :0  
 1st Qu.:0.7153   1st Qu.:0.3267   1st Qu.:0.2509   1st Qu.:0.04873   1st Qu.:0  
 Median :0.7549   Median :0.3544   Median :0.5364   Median :0.16415   Median :1  
 Mean   :0.7516   Mean   :0.4048   Mean   :0.4550   Mean   :0.14096   Mean   :1  
 3rd Qu.:0.7884   3rd Qu.:0.5252   3rd Qu.:0.5800   3rd Qu.:0.19753   3rd Qu.:2  
 Max.   :0.8609   Max.   :0.6071   Max.   :0.6370   Max.   :0.28042   Max.   :2  

Now that we have a quality data set, we can split the data into training and validation sets so we can build our model. Before that, though, we will define a seed with the "set.seed()" function so that we get "deterministic randomness", which makes our code more reproducible.

We will use the "sample()" function to generate an array of values 1 or 2 with probabilities 67% and 33% respectively; the rows assigned 1 will become the training matrix and the rest will become the validation matrix.

# Set the seed for reproducibility, as mentioned above (the value is arbitrary)
set.seed(1234)
# Determine sample size
ind <- sample(2, nrow(iris), replace=TRUE, prob=c(0.67, 0.33))
# Split the `iris` data
iris.training <- iris[ind==1, 1:4]
iris.validation <- iris[ind==2, 1:4]
# Split the class attribute
iris.trainingtarget <- iris[ind==1, 5]
iris.validationtarget <- iris[ind==2, 5]

The last step in data manipulation is to apply One Hot Encoding (OHE) to our target attribute "Species". In a multi-class classification model like this one, it is recommended that the response vector "Species" be a matrix with one vector per class, where each vector holds only 1s and 0s indicating whether or not an example belongs to that class.

Keras ships with the built-in function "to_categorical()", which one hot encodes a variable, so we will pass the target vectors of our training and validation matrices to it.

# One hot encode training target values
iris.trainLabels <- to_categorical(iris.trainingtarget)
# One hot encode test target values
iris.validationLabels <- to_categorical(iris.validationtarget)
# Print out the iris.validationLabels head to double check the result
head(iris.validationLabels)
     [,1] [,2] [,3]
[1,]    1    0    0
[2,]    1    0    0
[3,]    1    0    0
[4,]    1    0    0
[5,]    1    0    0
[6,]    1    0    0

As we can see, instead of a single vector with values from 0 to 2 we now have 3 vectors with values 1 or 0, and as mentioned each vector corresponds to one of the possible classes in our data (setosa, versicolor and virginica).

Building the model

Before building the model it is worth revisiting the initial purpose of the exercise: to predict the species of an Iris from its measurements. The target is our "Species" vector, which was one hot encoded, so the final output will be one of those three vectors at 1 with the other two at 0.

To assemble the model we will use the "keras_model_sequential()" function, meaning we will build the model sequentially: each layer is added one after the other. This will become clearer in the code.

The type of network we will use is the "MLP", or "Multi-layer perceptron", which is nothing more than a stack of fully connected layers, also known as dense layers; this means the output of each neuron in one layer is used as input for every neuron in the next layer.

For the activation functions of the intermediate (hidden) layers we will use the most common one, "ReLU"; this choice is related to the problem of exploding and vanishing gradients, which affects how efficiently our model trains. Since this is a classification task, the final activation function, i.e. the model's output, will be a "softmax", which turns the raw outputs into one probability per class (all summing to 1), from which we take the highest as the prediction.

# Initialize a sequential model
model <- keras_model_sequential() 
# Add layers to the model
model %>% 
    layer_dense(units = 8, activation = 'relu', input_shape = c(4)) %>% 
    layer_dense(units = 3, activation = 'softmax')

A few details about the model: it has an input of size 4 (the 4 data attributes), an output of size 3 (1 per type of Iris) and 8 nodes in its hidden layer (this value is arbitrary).

Now we can see the model's representation:

summary(model)
_____________________________________________________________________________________________________________________________________
Layer (type)                                               Output Shape                                          Param #             
=====================================================================================================================================
dense_9 (Dense)                                            (None, 8)                                             40                  
_____________________________________________________________________________________________________________________________________
dense_10 (Dense)                                           (None, 3)                                             27                  
=====================================================================================================================================
Total params: 67
Trainable params: 67
Non-trainable params: 0
_____________________________________________________________________________________________________________________________________

Now that the architecture is defined, it is time to compile the model. For this example we will use "categorical_crossentropy" as our loss function, "adam" as the optimizer, and "accuracy" as our metric.

# Compile the model
model %>% compile(
     loss = 'categorical_crossentropy',
     optimizer = 'adam',
     metrics = 'accuracy'
 )

Regarding the "adam" optimizer, it is worth noting that besides the learning rate we can also tune other parameters, such as beta_1, beta_2 and epsilon. For the metric, accuracy suits our problem best, but there are other options such as Mean Squared Error (MSE). Our loss function, "categorical_crossentropy", is the standard choice for multi-class classification (more than 2 classes).
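As a hedged illustration (not part of the run above), those parameters could be set by passing a configured optimizer object instead of the string 'adam'; the values shown are the usual Keras defaults.

# Compile with an explicitly configured Adam optimizer (illustrative values;
# epsilon and decay can be passed the same way)
model %>% compile(
     loss = 'categorical_crossentropy',
     optimizer = optimizer_adam(lr = 0.001, beta_1 = 0.9, beta_2 = 0.999),
     metrics = 'accuracy'
 )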

With that defined, we can fit our data to the model.

Here we define the training parameters. First the number of epochs: each epoch is one iteration of our model over the training data followed by validation of the results (forward propagation, backward propagation and weight updates). "batch_size" is the amount of training data processed at a time (which can improve memory usage); it also means the model is updated more frequently (once per batch).
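The fit call itself is hidden in this rendered notebook; a minimal sketch of it, assuming 200 epochs, a batch size of 5 and a 20% validation split (values consistent with the curves plotted below), would be:

# Fit the model (epochs, batch_size and validation_split are assumptions)
history <- model %>% fit(
     iris.training,
     iris.trainLabels,
     epochs = 200,
     batch_size = 5,
     validation_split = 0.2
 )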

Something very useful we can do is visualize our model's loss and accuracy curves, based on both the training data and the validation data.

# Plot the model loss of the training data
plot(history$metrics$loss, main="Model Loss", xlab = "epoch", ylab="loss", col="blue", type="l", ylim=c(0,1.5))
# Plot the model loss of the test data
lines(history$metrics$val_loss, col="green")
# Add legend
legend("topright", c("train","test"), col=c("blue", "green"), lty=c(1,1))

# Plot the accuracy of the training data 
plot(history$metrics$acc, main="Model Accuracy", xlab = "epoch", ylab="accuracy", col="blue", type="l", ylim=c(0,1))
# Plot the accuracy of the validation data
lines(history$metrics$val_acc, col="green")
# Add Legend
legend("bottomright", c("train","test"), col=c("blue", "green"), lty=c(1,1))

Here we can observe the following:

  • The loss function behaves as expected: it tends to decrease as the number of epochs grows, until it reaches a point where it seems to stabilize.
  • The accuracy is also as expected: it tends to increase as the epochs go by, until it stabilizes at some point.
  • Note: if the accuracy still seems to be increasing in the last epochs, it is a sign the model has not finished learning.
  • Note 2: if the training accuracy is increasing but the test accuracy is decreasing, the model is probably suffering from overfitting.

Now that our model has been created, compiled and trained, we can use it to predict results for our test data.

# Predict the classes for the test data
classes <- model %>% predict_classes(iris.validation, batch_size = 128)

With our predictions in hand, an interesting way to visualize them is a confusion matrix.

# Confusion matrix
table(iris.validationtarget, classes)
                     classes
iris.validationtarget  0  1
                    0 13  0
                    1  0 20
                    2  0 14

Evaluating the results

Another interesting way to evaluate the model is the "evaluate()" function; just pass it the validation data and labels.

# Evaluate on test data and labels
score <- model %>% evaluate(iris.validation, iris.validationLabels, batch_size = 128)

47/47 [==============================] - 0s 43us/step
# Print the score
print(score)
$loss
[1] 0.4384495

$acc
[1] 0.7021276

Hyperparameter search is probably where you spend the most time when building a model, but it is also what separates a good model from a bad or inefficient one. How much of it is needed depends heavily on the problem at hand; in our case the data is quite simple, so not much is required.

Among the many possible adjustments, we will cover three: the number of hidden layers, the number of nodes, and the optimization algorithm.

Adding layers

Here we will use the same model structure but with one extra layer.
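The code for this second model is hidden in the rendered notebook; a sketch of it, assuming one extra ReLU layer (the 5 units are an illustrative guess) and the same compile and fit settings as before:

# model2: same structure plus one extra hidden layer (unit count assumed)
model2 <- keras_model_sequential()
model2 %>%
    layer_dense(units = 8, activation = 'relu', input_shape = c(4)) %>%
    layer_dense(units = 5, activation = 'relu') %>%
    layer_dense(units = 3, activation = 'softmax')
model2 %>% compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics = 'accuracy')
history2 <- model2 %>% fit(iris.training, iris.trainLabels, epochs = 200, batch_size = 5, validation_split = 0.2)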

# Plot the model loss
plot(history2$metrics$loss, main="Model Loss", xlab = "epoch", ylab="loss", col="blue", type="l", ylim=c(0,1.7))
lines(history2$metrics$val_loss, col="green")
legend("topright", c("train","test"), col=c("blue", "green"), lty=c(1,1))

# Plot the model accuracy
plot(history2$metrics$acc, main="Model Accuracy", xlab = "epoch", ylab="accuracy", col="blue", type="l", ylim=c(0,1))
lines(history2$metrics$val_acc, col="green")
legend("bottomright", c("train","test"), col=c("blue", "green"), lty=c(1,1))

# Evaluate the model
score2 <- model2 %>% evaluate(iris.validation, iris.validationLabels, batch_size = 128)

47/47 [==============================] - 0s 64us/step
# Print the score
print(score2)
$loss
[1] 0.2639041

$acc
[1] 0.9148936

Hidden nodes

Once again we will use the same initial structure, but this time we will add more nodes to the hidden layer.
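Again the code is hidden in the rendered output; a sketch, assuming a wider hidden layer (the unit count is an illustrative guess):

# model3: same structure with a wider hidden layer (unit count assumed)
model3 <- keras_model_sequential()
model3 %>%
    layer_dense(units = 28, activation = 'relu', input_shape = c(4)) %>%
    layer_dense(units = 3, activation = 'softmax')
model3 %>% compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics = 'accuracy')
history3 <- model3 %>% fit(iris.training, iris.trainLabels, epochs = 200, batch_size = 5, validation_split = 0.2)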

# Plot the model loss
plot(history3$metrics$loss, main="Model Loss", xlab = "epoch", ylab="loss", col="blue", type="l", ylim=c(0,1.5))
lines(history3$metrics$val_loss, col="green")
legend("topright", c("train","test"), col=c("blue", "green"), lty=c(1,1))

# Plot the model accuracy
plot(history3$metrics$acc, main="Model Accuracy", xlab = "epoch", ylab="accuracy", col="blue", type="l", ylim=c(0,1))
lines(history3$metrics$val_acc, col="green")
legend("bottomright", c("train","test"), col=c("blue", "green"), lty=c(1,1))

# Evaluate the model
score3 <- model3 %>% evaluate(iris.validation, iris.validationLabels, batch_size = 128)

47/47 [==============================] - 0s 53us/step
# Print the score
print(score3)
$loss
[1] 0.2327757

$acc
[1] 0.9361702

Regarding the network topology (number of layers and nodes), at first it may seem like a good idea to add more layers and nodes, increasing the complexity of our function so it can capture more of the data. But this makes the model fit the training data too closely and lose its ability to also capture the validation data (overfitting). In other words, besides being less prone to overfitting, smaller networks also train faster; for these reasons we will generally prefer simpler networks.

Optimizer

Another hyperparameter we can tune is the optimizer, and even the optimizer's own parameters. Below we will use Stochastic Gradient Descent (SGD) as our optimizer and also change the learning rate.
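The hidden code presumably swaps the optimizer string for a configured SGD object; a sketch, assuming the original architecture and a learning rate of 0.01:

# model4: same architecture, compiled with SGD (learning rate assumed)
model4 <- keras_model_sequential()
model4 %>%
    layer_dense(units = 8, activation = 'relu', input_shape = c(4)) %>%
    layer_dense(units = 3, activation = 'softmax')
# Define an SGD optimizer with an explicit learning rate
sgd <- optimizer_sgd(lr = 0.01)
model4 %>% compile(loss = 'categorical_crossentropy', optimizer = sgd, metrics = 'accuracy')
history4 <- model4 %>% fit(iris.training, iris.trainLabels, epochs = 200, batch_size = 5, validation_split = 0.2)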

# Plot the model loss
plot(history4$metrics$loss, main="Model Loss", xlab = "epoch", ylab="loss", col="blue", type="l", ylim=c(0,1.6))
lines(history4$metrics$val_loss, col="green")
legend("topright", c("train","test"), col=c("blue", "green"), lty=c(1,1))

# Plot the model accuracy
plot(history4$metrics$acc, main="Model Accuracy", xlab = "epoch", ylab="accuracy", col="blue", type="l", ylim=c(0,1))
lines(history4$metrics$val_acc, col="green")
legend("bottomright", c("train","test"), col=c("blue", "green"), lty=c(1,1))

# Evaluate the model
score4 <- model4 %>% evaluate(iris.validation, iris.validationLabels, batch_size = 128)

47/47 [==============================] - 0s 53us/step
# Print the loss and accuracy metrics
print(score4)
$loss
[1] 0.4923409

$acc
[1] 0.7021276

Saving, loading and exporting the model

Saving and loading a model is very important, especially for more complex and robust models, since it can become almost impractical to replicate a model's training in another environment. For example, you would not want to train on your own computer a model that took days to train on a supercomputer; you might also want to train your model over several days.

This can easily be done with the functions "save_model_hdf5()" and "load_model_hdf5()", which persist the model in the HDF5 format. It is also very important for "transfer learning", which in short means taking an already trained model and using its weights as the basis for another model; this is usually done by using a general-purpose model as the base for a special-purpose one.

save_model_hdf5(model, "my_model.h5")
model <- load_model_hdf5("my_model.h5")

Também é possível salvar os pesos (weights) do modelo.

save_model_weights_hdf5(model, "my_model_weights.h5")
model %>% load_model_weights_hdf5("my_model_weights.h5")

Também é possível exportar o modelo para JSON ou YAML.

json_string <- model_to_json(model)
model <- model_from_json(json_string)
yaml_string <- model_to_yaml(model)
model <- model_from_yaml(yaml_string)

Using data from the 2014 federal deputy elections

For a second experiment we will use real data that is more interesting and more complex: data on the 2014 election of federal deputies.
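This second part additionally assumes the following libraries are loaded (the setup chunk is hidden in the rendered notebook):

# set up for part 2 (assumed, from the hidden setup chunk)
library(dplyr)
library(ggplot2)
library(dummies)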

Loading the data, part 2

trainDp <- read.csv("data/train5.csv", encoding="UTF-8")
testDp <- read.csv("data/test5.csv", encoding="UTF-8")

Exploring the data, part 2

For any data set that will be used to train a predictive model, it is important to check the distribution of the classes.

total = nrow(trainDp)
dist_classes <- trainDp %>% count(situacao_final)
ggplot(dist_classes, aes(y = n/total * 100, x = situacao_final)) +
  geom_bar(stat="identity") +
  labs(title = "Class distribution", x = "Final outcome", y = "Proportion (%)") +
  theme(axis.text.x = element_text(angle = 0, hjust = 1), legend.position="none") +
  theme(axis.text=element_text(size=8), axis.title=element_text(size=12,face="bold"))

As we can see in the plot, there is a large class imbalance: more than 80% of the rows refer to candidates who were not elected. This makes sense, since only a fixed number of candidates is elected, usually far fewer than the total, but for training a model it is a problem. The not-elected class has a much larger representation than the other, which can bias the model towards those cases: the model may represent the class with many examples much better while representing the other class poorly, since it has few examples of it. Another problem with imbalanced data is that if the model predicts every example as "nao_eleito" it will still achieve close to 80% accuracy; such a prediction is obviously very poor, but looking at accuracy alone makes this hard to detect, as the quick check below illustrates.
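As a quick sanity check of that baseline, the accuracy of always predicting the majority class can be computed directly from the training data loaded above:

# Accuracy of a model that always predicts "nao_eleito"
mean(trainDp$situacao_final == "nao_eleito")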

Processing the data, part 2

Looking at the data we can see that null values are represented by "#NULO", so we will replace them with "NA" to make the data easier to assess.

# note: the processing must be done for both the training and test data
trainDp[trainDp == '#NULO'] <- NA
testDp[testDp == '#NULO'] <- NA
# count the number of null values per attribute
sapply(trainDp, function(y) sum(is.na(y)))
                                   ID                                  nome                       numero_cadidato 
                                    0                                     0                                     0 
                                   UF                               partido               setor_economico_receita 
                                    0                                     0                                  2140 
                   quantidade_doacoes                   quantidade_doadores                         total_receita 
                                    0                                     0                                     0 
                        media_receita recursos_de_outros_candidatos.comites                  recursos_de_partidos 
                                    0                                     0                                     0 
          recursos_de_pessoas_físicas         recursos_de_pessoas_juridicas                     recursos_proprios 
                                    0                                     0                                     0 
                  quantidade_despesas               quantidade_fornecedores                         total_despesa 
                                    0                                     0                                     0 
                        media_despesa               setor_economico_despesa                                 idade 
                                    0                                  2310                                     0 
                                 sexo                                  grau                          estado_civil 
                                    0                                     0                                     0 
                   descricao_ocupacao                    descricao_cor_raca                  despesa_max_campanha 
                                    0                                     0                                     0 
                       situacao_final 
                                    0 

From this information we can draw a few conclusions. Only the attributes "setor_economico_receita" and "setor_economico_despesa" contain null values, and given that we have 4135 rows in total, we can conclude that in these two columns the mode is actually the null value. This kind of situation makes it very hard to impute those values from the others, so I chose to remove these attributes.

To feed the data to a deep learning model we need to turn our categorical attributes into numeric ones; for that we will use one hot encoding.

# converting the categorical values to one hot format
trainDp <- dummy.data.frame(trainDp, names=c('estado_civil'), sep="_")
trainDp <- dummy.data.frame(trainDp, names=c('sexo'), sep="_")
trainDp <- dummy.data.frame(trainDp, names=c('grau'), sep="_")
trainDp <- dummy.data.frame(trainDp, names=c('descricao_cor_raca'), sep="_")
# quick check (alternative encoding):
# to_categorical(trainDp$descricao_cor_raca)
testDp <- dummy.data.frame(testDp, names=c('estado_civil'), sep="_")
testDp <- dummy.data.frame(testDp, names=c('sexo'), sep="_")
testDp <- dummy.data.frame(testDp, names=c('grau'), sep="_")
testDp <- dummy.data.frame(testDp, names=c('descricao_cor_raca'), sep="_")
# removing unused attributes (attributes with a high rate of nulls, and attributes of little importance: nome, ID, numero_candidato, estado_civil and descricao_ocupacao)
trainDPF <- trainDp %>% select(quantidade_doacoes, quantidade_doadores, total_receita, media_receita, recursos_de_outros_candidatos.comites, recursos_de_partidos, recursos_de_pessoas_físicas, recursos_de_pessoas_juridicas, recursos_proprios, quantidade_despesas, quantidade_fornecedores, total_despesa, media_despesa, idade, despesa_max_campanha, situacao_final)
testDPF <- testDp %>% select(quantidade_doacoes, quantidade_doadores, total_receita, media_receita, recursos_de_outros_candidatos.comites, recursos_de_partidos, recursos_de_pessoas_físicas, recursos_de_pessoas_juridicas, recursos_proprios, quantidade_despesas, quantidade_fornecedores, total_despesa, media_despesa, idade, despesa_max_campanha)

Building the model, part 2

inputSize = ncol(trainDPF)
outputSize = length(unique(trainDPF$situacao_final))
# convert the target column to integer codes (label encoding)
trainDPF[,inputSize] <- as.numeric(trainDPF[,inputSize]) -1
# Turn into a matrix
trainDPF <- as.matrix(trainDPF)
# Set `dimnames` to `NULL`
dimnames(trainDPF) <- NULL
# Normalize the data
trainDPF[,1:(inputSize-1)] <- normalize(trainDPF[,1:(inputSize-1)])
# Determine sample size
ind2 <- sample(2, nrow(trainDPF), replace=TRUE, prob=c(0.70, 0.30))
# Split the data
dp.training <- trainDPF[ind2==1, 1:(inputSize-1)]
dp.validation <- trainDPF[ind2==2, 1:(inputSize-1)]
# Split the class attribute
dp.trainingtarget <- trainDPF[ind2==1, inputSize]
dp.validationtarget <- trainDPF[ind2==2, inputSize]
# One hot encode training target dp.trainingtarget
dp.trainLabels <- to_categorical(dp.trainingtarget)
# One hot encode test target values
dp.validationLabels <- to_categorical(dp.validationtarget)

For this example we will use more layers, since this is a more complex problem; we will also use a different loss function, "binary_crossentropy", since this is a classification with only 2 classes.
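The model definition and fit are hidden in the rendered notebook; a sketch under stated assumptions (layer sizes, epochs and batch size are illustrative; the output uses 2 sigmoid units to match the two-column one hot labels and the binary_crossentropy loss):

# modelDP: a deeper network for the election data (sizes assumed)
modelDP <- keras_model_sequential()
modelDP %>%
    layer_dense(units = 32, activation = 'relu', input_shape = c(inputSize - 1)) %>%
    layer_dense(units = 16, activation = 'relu') %>%
    layer_dense(units = 2, activation = 'sigmoid')
modelDP %>% compile(loss = 'binary_crossentropy', optimizer = 'adam', metrics = 'accuracy')
historyDP <- modelDP %>% fit(dp.training, dp.trainLabels, epochs = 100, batch_size = 32, validation_split = 0.2)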

# Plot the model loss
plot(historyDP$metrics$loss, main="Model Loss", xlab = "epoch", ylab="loss", col="blue", type="l", ylim=c(0,1))
lines(historyDP$metrics$val_loss, col="green")
legend("topright", c("train"), col=c("blue"), lty=c(1,1))

# Plot the model accuracy
plot(historyDP$metrics$acc, main="Model Accuracy", xlab = "epoch", ylab="accuracy", col="blue", type="l", ylim=c(0,1))
lines(historyDP$metrics$val_acc, col="green")
legend("bottomright", c("train"), col=c("blue"), lty=c(1,1))

# Evaluate the model
scoreDP <- modelDP %>% evaluate(dp.validation, dp.validationLabels, batch_size = 128)

 128/1265 [==>...........................] - ETA: 0s
1265/1265 [==============================] - 0s 30us/step
# Print the loss and accuracy metrics
print("Validação")
[1] "Validação"
print(scoreDP)
$loss
[1] 0.1626936

$acc
[1] 0.9241107

Analyzing these plots, we notice that our model found a value very close to the optimum right from the start. This is a bit odd given that the data has some complexity, but it may also be due to well-suited parameters that let the loss function reach a good point very quickly.

inputSizeT = ncol(testDPF)
# note: testDPF contains only predictors (situacao_final was not selected
# above), so no target conversion is needed here
# Turn into a matrix
testDPF <- as.matrix(testDPF)
# Set `dimnames` to `NULL`
dimnames(testDPF) <- NULL
# Normalize the data (every column is a predictor)
testDPF[,1:inputSizeT] <- normalize(testDPF[,1:inputSizeT])
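The prediction step is also hidden in the rendered notebook; a sketch of it, assuming the alphabetical factor ordering used by as.numeric above (so 0 = "eleito" and 1 = "nao_eleito") and the data frame name submission_predict.df used below:

# Predict classes for the test set and map the integer codes back to labels
predsDP <- modelDP %>% predict_classes(testDPF, batch_size = 128)
submission_predict.df <- data.frame(preds = ifelse(predsDP == 1, "nao_eleito", "eleito"))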

As we can see, our model actually classified everything as "nao_eleito". This may be due to the imbalance in the training data, but it may also simply be that the model is not effective.

This reinforces the point about analyzing the model's predictions more carefully rather than looking only at the final accuracy. Here the validation accuracy was 92.41%, which would be very good, but analyzing the final predictions we can see that this is probably not a good model.

totalPred = nrow(submission_predict.df)
dist_classesPred <- data.frame("situacao_final" = character(), "n" = integer(), stringsAsFactors = FALSE)
dist_classesPred[nrow(dist_classesPred) + 1, ] <- c('eleito', sum(submission_predict.df$preds == 'eleito'))
dist_classesPred[nrow(dist_classesPred) + 1, ] <- c('nao_eleito', sum(submission_predict.df$preds == 'nao_eleito'))
dist_classesPred$n <- as.numeric(dist_classesPred$n)
ggplot(dist_classesPred, aes(y = n/totalPred * 100, x = situacao_final)) +
  geom_bar(stat="identity") +
  labs(title = "Class distribution", x = "Final outcome", y = "Proportion (%)") +
  theme(axis.text.x = element_text(angle = 0, hjust = 1), legend.position="none") +
  theme(axis.text=element_text(size=8), axis.title=element_text(size=12,face="bold"))

Now, with the training data and the predictions on the test data, we can join the two sets and use them for analyses over what would be the complete data set.

# merge both data frames to get the "complete data"
# (`submission` is the test data with the predicted situacao_final attached,
# built in a step not shown in this rendering)
dados_totais <- rbind(trainDp, submission)
write.csv(dados_totais, file = "C:/Users/dimit/Desktop/Projetos/AD2/data/dadosTotais.csv", row.names = FALSE)
LS0tDQp0aXRsZTogIkxBQjA2LURlZXAgTGVhcm5pbmcgZW0gUiBjb20gS2VyYXMiDQpvdXRwdXQ6DQogIGh0bWxfbm90ZWJvb2s6IA0KICAgIGRmX3ByaW50OiBwYWdlZA0KICAgIGZpZzpoZWlnaHQ6IDQNCiAgICBmaWdfd2lkdGg6IDUNCiAgICB0aGVtZTogcmVhZGFibGUNCiAgICB0b2M6IHllcw0KICAgIHRvY19mbG9hdDogeWVzDQogIGh0bWxfZG9jdW1lbnQ6DQogICAgZGZfcHJpbnQ6IHBhZ2VkDQogICAgZmlnOmhlaWdodDogNA0KICAgIGZpZ193aWR0aDogNQ0KICAgIHRoZW1lOiByZWFkYWJsZQ0KICAgIHRvYzogeWVzDQogICAgdG9jX2Zsb2F0OiB5ZXMNCiAgZWRpdG9yX29wdGlvbnM6DQogICAgY2h1bmtfb3V0cHV0X3R5cGU6IGlubGluZQ0KLS0tDQoNCg0KYGBge3IgbWVzc2FnZT1GQUxTRSwgaW5jbHVkZT1GQUxTRX0NCiNzZXQgdXANCmxpYnJhcnkoY29ycnBsb3QpDQpsaWJyYXJ5KGtlcmFzKQ0KYGBgDQoNCnR1dG9yaWFsIG9yaWdpbmFsOiBodHRwczovL3d3dy5kYXRhY2FtcC5jb20vY29tbXVuaXR5L3R1dG9yaWFscy9rZXJhcy1yLWRlZXAtbGVhcm5pbmcNCg0KDQojIyMgQ2FycmVnYW5kbyBvcyBkYWRvcw0KDQpQYXJhIGVzc2UgZXhlbXBsbyBpcmVtb3MgdXNhciBvcyBkYWRvcyBmb3JuZWNpZG9zIHBlbG8gIlVDSSBNYWNoaW5lIExlYXJuaW5nIFJlcG9zaXRvcnkiLCBxdWUgbmVzc2UgY2FzbyBzZXLhIG8gY29uaGVjaWRvIElyaXMgRGF0YSBTZXQsIHBhcmEgbWFpcyBkZXRhbGhlcyBzZWd1ZSBvIGxpbms6IGh0dHA6Ly9hcmNoaXZlLmljcy51Y2kuZWR1L21sL2RhdGFzZXRzL0lyaXMuDQpgYGB7cn0NCiMgUmVhZCBpbiBgaXJpc2AgZGF0YQ0KaXJpcyA8LSByZWFkLmNzdih1cmwoImh0dHA6Ly9hcmNoaXZlLmljcy51Y2kuZWR1L21sL21hY2hpbmUtbGVhcm5pbmctZGF0YWJhc2VzL2lyaXMvaXJpcy5kYXRhIiksIGhlYWRlciA9IEZBTFNFKQ0KYGBgDQoNCkVtIHNlZ3VpZGEgaXJlbW9zIGNoZWNhciBvcyBkYWRvcyBpbXBvcnRhZG9zIHVzYW5kbyBhbGd1bnMgY29tYW5kb3MuDQpgYGB7cn0NCiMgUmV0dXJuIHRoZSBmaXJzdCBwYXJ0IG9mIGBpcmlzYA0KaGVhZChpcmlzKQ0KDQojIEluc3BlY3QgdGhlIHN0cnVjdHVyZQ0Kc3RyKGlyaXMpDQoNCiMgT2J0YWluIHRoZSBkaW1lbnNpb25zDQpkaW0oaXJpcykNCmBgYA0KDQojIyMgRXhwbG9yYW5kbyBvcyBkYWRvcw0KDQpOYSBpbWFnZW0gYSBzZWd1aXIgcG9kZW1vcyB2ZXIgb3MgdHLqcyBkaWZlcmVudGVzIHRpcG8gZGUgaXJpcyB1c2FkYXMgbm9zIGRhZG9zLg0KDQohW10oaW1hZ2VzL2lyaXMtbWFjaGluZWxlYXJuaW5nLnBuZykNCg0KQ29tbyBwdWRlbW9zIG5vdGFyIG5hIGZ1bufjbyAic3RyKCkiIG5vc3NvIGRhdGEgZnJhbWUgbuNvIHBvc3N1aSBub21lcyBkZSBjb2x1bmFzIHF1ZSBmYWNpbGl0YW0gbyBlbnRlbmRpbWVudG8gZG9zIGRhZG9zLCBubyBtb21lbnRvIHRlbW9zICJWMSwgVjIsIFYzLCBWNCBlIFY1IiwgcHJpbWVpcmFtZW50ZSBpcmVtb3MgcmVub21lYXIgYXMgY29sdW5hcyBwYXJhIGFsZ28gcXVlIGZh52EgbWFpcyBzZW50aWRvLg0KYGBge3J9DQpuYW1lcyhpcmlzKSA8LSBjKCJTZXBhbC5MZW5ndGgiLCAiU2VwYWwuV2lkdGgiLCAiUGV0YWwuTGVuZ3RoIiwgIlBldGFsLldpZHRoIiwgIlNwZWNpZXMiKQ0KYGBgDQoNCkFnb3JhIGlyZW1vcyBtb250YXIgdW0gZ3LhZmljbyBjb20gb3MgZGFkb3MgcmVsYWNpb25hbmRvIG8gdGFtYW5obyBjb20gYSBsYXJndXJhIGRhcyBw6XRhbGFzLg0KYGBge3J9DQpwbG90KGlyaXMkUGV0YWwuTGVuZ3RoLCANCiAgICAgaXJpcyRQZXRhbC5XaWR0aCwgDQogICAgIHBjaD0yMSwgYmc9YygicmVkIiwiZ3JlZW4zIiwiYmx1ZSIpW3VuY2xhc3MoaXJpcyRTcGVjaWVzKV0sIA0KICAgICB4bGFiPSJQZXRhbCBMZW5ndGgiLCANCiAgICAgeWxhYj0iUGV0YWwgV2lkdGgiKQ0KYGBgDQpvYnM6IGEgZnVu5+NvICJ1bmNsYXNzKCkiIGNvbnZlcnRlIG8gbm9tZSBkYXMgZXNw6WNpZXMgZW0gbvptZXJvcyAoc2VyaWEgc2VtZWxoYW50ZSBhIG9uZSBob3QgZW5jb2RpbmcpLg0KDQpPYnNlcnZhbmRvIGEgaW1hZ2VtIHBhcmVjZSBleGlzdGlyIHVtYSBjb3JyZWxh5+NvIGVudHJlIHRhbWFuaG8gZSBsYXJndXJhIGRhcyBw6XRhbGFzIHBhcmEgYXMgZGlmZXJlbnRlcyBlc3DpY2llcywgcG9kZW1vcyBjb25maXJtYXIgZXNzYSBoaXDzdGVzZSBlIHRhbWLpbSB2ZXJpZmljYXIgYSBjb3JyZWxh5+NvIGVudHJlIG9zIG91dHJvcyBhdHJpYnV0b3MgdXNhbmRvIGEgZnVu5+NvICJjb3JycGxvdCgpIiBqdW50byBkZSAiY29yKCkiIHBhcmEgY2FkYSBhdHJpYnV0by4NCmBgYHtyfQ0KIyBTdG9yZSB0aGUgb3ZlcmFsbCBjb3JyZWxhdGlvbiBpbiBgTWANCmNvcnJlbGFjb2VzIDwtIGNvcihpcmlzWywxOjRdKQ0KDQojIFBsb3QgdGhlIGNvcnJlbGF0aW9uIHBsb3Qgd2l0aCBgTWANCmNvcnJwbG90KGNvcnJlbGFjb2VzLCBtZXRob2Q9ImNpcmNsZSIpDQoNCiMgT3ZlcmFsbCBjb3JyZWxhdGlvbiBiZXR3ZWVuIGBQZXRhbC5MZW5ndGhgIGFuZCBgUGV0YWwuV2lkdGhgIA0KY29yKGlyaXMkUGV0YWwuTGVuZ3RoLCBpcmlzJFBldGFsLldpZHRoKQ0KYGBgDQpjb21vIHBvZGVtb3Mgb2JzZXJ2YXIgZXhpc3RlIHVtYSBncmFuZGUgY29ycmVsYefjbyBlbnRyZSB0YW1hbmhvIGUgbGFyZ3VyYSB
kYXMgcOl0YWxhcywgcXVlIGVtIHZhbG9yZXMgbnVt6XJpY29zIHNlcmlhIGRlIDAuOTYyNzU3MS4NCg0KIyMjIFByb2Nlc3NhbmRvIG9zIGRhZG9zDQoNCkFsZ3VtYXMgcHLhdGljYXMgYuFzaWNhcyBwYXJhIHVzYXIgZGFkb3Mgc+NvIHJlbGFjaW9uYWRhcyBhIGxpbXBlemEgZSBub3JtYWxpemHn428gZG9zIGRhZG9zIGUgY29tbyBpcmVtb3MgdXNhciBvcyBkYWRvcyBlbSB1bSBhbGdvcu10aW1vIGRlIGFwcmVuZGl6YWdlbSBkZSBt4XF1aW5hIHRhbWLpbSBwcmVjaXNhbW9zIGRpdmlkaXIgb3MgZGFkb3MgZW50cmUgdHJlaW5vIGUgdmFsaWRh5+NvLg0KDQpQcmltZWlybyBpcmVtb3Mgc3VtYXJpemFyIG8gZGF0YSBmcmFtZSBwYXJhIGF2YWxpYXIgb3MgZGFkb3MuDQpgYGB7cn0NCiMgUHVsbCB1cCBhIHN1bW1hcnkgb2YgYGlyaXNgDQpzdW1tYXJ5KGlyaXMpDQpgYGANCg0KQXF1aSB1bSBwb250byBiZW0gaW1wb3J0YW50ZSBkZSBvYnNlcnZhciDpIHF1ZSBvcyBkYWRvcyBlc3TjbyBiZW0gYmFsYW5jZWFkb3MsIGlzc28gaXLhIGZhY2lsaXRhciBvIHRyZWluYW1lbnRvIGRvIG5vc3NvIGFsZ29yaXRtby4NCg0KQWdvcmEgaXJlbW9zIG5vcm1hbGl6YXIgb3MgZGFkb3MsIHBhcmEgaXNzbyBpcmVtb3MgdXNhciBhIGZ1bufjbyAibm9ybWFsaXplIiBkbyBwYWNvdGUga2VyYXMsIG1hcyBwcmltZWlybyBwcmVjaXNhbW9zIHRyYW5zZm9ybWFyIG5vc3NvIGRhdGEgZnJhbWUgZW0gdW1hIG1hdHJpei4NCmBgYHtyfQ0KIyB0cmFuc2Zvcm1hIG9zIGRhZG9zIGVtIHZhbG9yZXMgbnVt6XJpY29zIChvbmUgaG90IGVuY29kaW5nKQ0KaXJpc1ssNV0gPC0gYXMubnVtZXJpYyhpcmlzWyw1XSkgLTENCg0KIyBUdXJuIGBpcmlzYCBpbnRvIGEgbWF0cml4DQppcmlzIDwtIGFzLm1hdHJpeChpcmlzKQ0KDQojIFNldCBgaXJpc2AgYGRpbW5hbWVzYCB0byBgTlVMTGANCmRpbW5hbWVzKGlyaXMpIDwtIE5VTEwNCmBgYA0KDQpFbSBzZWd1aWRhIHBvZGVtb3Mgbm9ybWFsaXphciBvcyBkYWRvcy4NCmBgYHtyfQ0KIyBOb3JtYWxpemUgdGhlIGBpcmlzYCBkYXRhDQppcmlzWywxOjRdIDwtIG5vcm1hbGl6ZShpcmlzWywxOjRdKQ0KYGBgDQoNCkVtIHNlZ3VpZGEgcG9kZW1vcyBvbGhhciBjb21vIGZpY291IG5vc3NhIG1hdHJpeiBkZSBkYWRvcyBub3JtYWxpemFkb3MsIG9ic2VydmUgcXVlIGFnb3JhIGNhZGEgdmFsb3IgdmFyaWEgZW50cmUgMCBlIDEuDQpgYGB7cn0NCiMgUmV0dXJuIHRoZSBzdW1tYXJ5IG9mIGBpcmlzYA0Kc3VtbWFyeShpcmlzKQ0KYGBgDQoNCkFnb3JhIHF1ZSBwb3NzdWlybW9zIHVtIGNvbmp1bnRvIGRlIGRhZG9zIGRlIHF1YWxpZGFkZSwgbvNzIHBvZGVtb3MgZGl2aWRpciBvcyBkYWRvcyBlbSB0cmVpbm8gZSB2YWxpZGHn428sIHBhcmEgcXVlIHBvc3NhbW9zIGNvbnN0cnVpciBub3NzbyBtb2RlbG8sIG1hcyBhbnRlcyBkaXNzbyBpcmVtb3MgZGVmaW5pciB1bWEgInNlZWQiIHVzYW5kbyBhIGZ1bufjbyAic2V0LnNlZWQoKSIsIHBhcmEgcG9zc2Ftb3MgdGVyIHVtYSAiYWxlYXRvcmllZGFkZSBkZXRlcm1pbu1zdGljYSIsIGFzc2ltIG8gbm9zc28gY/NkaWdvIGZpY2EgbWFpcyByZXByb2R1eu12ZWwuDQoNCklyZW1vcyB1c2FyIGEgZnVu5+NvICJzYW1wbGUoKSIgcGFyYSBnZXJhciB1bSBhcnJheSBjb20gdmFsb3JlcyAxIG91IDIgY29tIHByb2JhYmlsaWRhZGUgZGUgNjclIGUgMzMlIHJlc3BlY3RpdmFtZW50ZSwgZW0gc2VndWlkYSBvcyB2YWxvcmVzIHF1ZSBwb3NzdWVtIDEgc2Vy428gYSBtYXRyaXogZGUgdGVzdCBlIG9zIGRlbWFpcyBzZXLjbyBhIG1hdHJpeiBkZSB2YWxpZGHn428NCmBgYHtyfQ0KIyBEZXRlcm1pbmUgc2FtcGxlIHNpemUNCmluZCA8LSBzYW1wbGUoMiwgbnJvdyhpcmlzKSwgcmVwbGFjZT1UUlVFLCBwcm9iPWMoMC42NywgMC4zMykpDQoNCiMgU3BsaXQgdGhlIGBpcmlzYCBkYXRhDQppcmlzLnRyYWluaW5nIDwtIGlyaXNbaW5kPT0xLCAxOjRdDQppcmlzLnZhbGlkYXRpb24gPC0gaXJpc1tpbmQ9PTIsIDE6NF0NCg0KIyBTcGxpdCB0aGUgY2xhc3MgYXR0cmlidXRlDQppcmlzLnRyYWluaW5ndGFyZ2V0IDwtIGlyaXNbaW5kPT0xLCA1XQ0KaXJpcy52YWxpZGF0aW9udGFyZ2V0IDwtIGlyaXNbaW5kPT0yLCA1XQ0KYGBgDQoNCk8g+mx0aW1vIHBhc3NvIG5hIG1hbmlwdWxh5+NvIGRvcyBkYWRvcyDpIGFwbGljYXIgT25lIEhvdCBFbmNvZGluZyAoT0hFKSBlbSBub3NzbyBhdHJpYnV0byBhbHZvICJTcGVjaWVzIiwgZW0gdW0gbW9kZWxvIGRlIGNsYXNzaWZpY2Hn428gbXVsdGktY2xhc3NlIGNvbW8gZXNzZSwg6SByZWNvbWVuZGFkbyBxdWUgbyB2ZXRvciByZXNwb3N0YSAiU3BlY2llcyIgc2VqYSB1bWEgbWF0cml6IGNvbSB1bSB2ZXRvciBwYXJhIGNhZGEgY2xhc3NlIGUgbmVzc2VzIHZldG9yZXMgYXBlbmFzIDEgb3UgMCBzaW1ib2xpemFuZG8gc2UgbyBleGVtcGxvIOkgZGUgZGV0ZXJtaW5hZGEgY2xhc3NlIG91IG7jby4NCg0KS2VyYXMgdHJheiBhIGZ1bufjbyBidWlsdC1pbiAidG9fY2F0ZWdvcmljYWwoKSIgcXVlIGFwbGljYSBvbmUgaG90IGVuY29kaW5nIGVtIHVtYSB2YXJp4XZlbCwgZW50428gaXJlbW9zIHBhc3NhciBvIHZldG9yIGFsdm8gZGEgbm9zc2EgbWF0cml6IGRlIHRyZWlubyBlIHZhbGlkYefjbyBwYXJhIGVzc2EgZnVu5+NvLg0KYG
Bge3J9DQojIE9uZSBob3QgZW5jb2RlIHRyYWluaW5nIHRhcmdldCB2YWx1ZXMNCmlyaXMudHJhaW5MYWJlbHMgPC0gdG9fY2F0ZWdvcmljYWwoaXJpcy50cmFpbmluZ3RhcmdldCkNCg0KIyBPbmUgaG90IGVuY29kZSB0ZXN0IHRhcmdldCB2YWx1ZXMNCmlyaXMudmFsaWRhdGlvbkxhYmVscyA8LSB0b19jYXRlZ29yaWNhbChpcmlzLnZhbGlkYXRpb250YXJnZXQpDQoNCiMgUHJpbnQgb3V0IHRoZSBpcmlzLnRlc3RMYWJlbHMgaGVhZCB0byBkb3VibGUgY2hlY2sgdGhlIHJlc3VsdA0KaGVhZChpcmlzLnZhbGlkYXRpb25MYWJlbHMpDQpgYGANCg0KQ29tbyBwb2RlbW9zIHZlciBubyBsdWdhciBkZSBhcGVuYXMgdW0gdmV0b3IgY29tIHZhbG9yZXMgZGUgMSBhIDMgYWdvcmEgdGVtb3MgMyB2ZXRvcmVzIGNvbSB2YWxvcmVzIDEgb3UgMCwgZSBjb21vIGZvaSBkaXRvIGNhZGEgdW0gZG9zIHZldG9yZXMgY29ycmVzcG9uZGUgYSB1bWEgZGFzIGNsYXNzZXMgcG9zc+12ZWlzIGRvIG5vc3NvcyBkYWRvcyAoc2V0b3NhLCB2ZXJzaWNvbG9yIGUgdmlyZ2luaWNhKS4NCg0KIyMjIENvbnN0cnVpbmRvIG8gbW9kZWxvDQoNCkFudGVzIGRlIGNvbnN0cnVpciBvIG1vZGVsbyDpIGJvbSByZXZpc2l0YXIgbyBwcm9w83NpdG8gaW5pY2lhbCBkbyBleGVyY+1jaW8sIHF1ZSDpIHByZXZlciBxdWFsIGEgZXNw6WNpZSBkZSBJcmlzIGRhZG9zIGRldGVybWluYWRvcyBkYWRvcywgZSBuZXNzZSBjYXNvIHNlcmlhIG8gbm9zc28gdmV0b3IgIlNwZWNpZXMiIHF1ZSBmb2kgdHJhbnNmb3JtYWRvIGVtIHVtICJPbmUgaG90IEVuY29kaW5nIiwgYXNzaW0gbm9zc28gcmVzdWx0YWRvIGZpbmFsIHNlcuEgdW0gZGVzc2VzIHRy6nMgdmV0b3JlcyBjb20gdmFsb3IgMSBlIG9zIG91dHJvcyAyIGNvbSB2YWxvciAwLg0KDQpQYXJhIG1vbnRhciBvIG1vZGVsbyBpcmVtb3MgdXNhciBhIGZ1bufjbyAia2VyYXNfbW9kZWxfc2VxdWVudGlhbCgpIiwgaXNzbyBzaWduaWZpY2EgcXVlIGlyZW1vcyBjb25zdHJ1aXIgdW0gbW9kZWxvIGRlIGZvcm1hIHNlcXVlbmNpYWwsIG91IHNlamEgY2FkYSBjYW1hZGEg6SBhZGljaW9uYWRhIHVtYSBhcPNzIGEgb3V0cmEgZGUgZm9ybWEgc2VxdWVuY2lhbCwgaXNzbyBmaWNhcuEgbWFpcyBjbGFybyBjb20gbyBj82RpZ28uDQoNCk8gdGlwbyBkZSByZWRlIHF1ZSBpcmVtb3MgdXNhciDpIGEgIk1MUCIgb3UgIk11bHRpLWxheWVyIHBlcmNlcHRyb24iLCBxdWUgbmFkYSBtYWlzIOkgcXVlIHVtIGNvbmp1bnRvIGRlIGNhbWFkYXMgdG90YWxtZW50ZSBjb25lY3RhZGFzIHRhbWLpbSBjb25oZWNpZGFzIGNvbW8gZGVuc2FzLCBpc3NvIHNpZ25pZmljYSBxdWUgYSBzYe1kYSBkZSBjYWRhIG5ldXL0bmlvIGRlIHVtYSBjYW1hZGEg6SB1c2FkYSBjb21vIGVudHJhZGEgcGFyYSB0b2RvcyBvcyBuZXVy9G5pb3MgZGEgcHLzeGltYSBjYW1hZGEuDQoNClBhcmEgYXMgZnVu5/VlcyBkZSBhdGl2YefjbyBkYXMgY2FtYWRhcyBpbnRlcm1lZGnhcmlhcyBvdSBlc2NvbmRpZGFzLCBpcmVtb3MgdXNhciBhIG1haXMgY29tdW0gcXVlIHNlcmlhICJSZWx1IiwgZXNzYSBlc2NvbGhhIGVzdOEgcmVsYWNpb25hZGEgYW8gcHJvYmxlbWEgZGUgImV4cGxvZGluZyBlIHZhbmlzaGluZyBncmFkaWVudHMiIHF1ZSBpcuEgcmVmbGV0aXIgbmEgZWZpY2nqbmNpYSBkZSB0cmVpbmFtZW50byBkbyBub3NzbyBtb2RlbG8sIGUgY29tbyBzZSB0cmF0YSBkZSB1bWEgY2xhc3NpZmljYefjbyBub3NzYSD6bHRpbWEgZnVu5+NvIGRlIGF0aXZh5+NvIG91IG91dHB1dCBkbyBtb2RlbG8gc2Vy4SB1bWEgInNvZnRtYXgiLCBxdWUgc2VydmUgcGFyYSBjb252ZXJ0ZXIgdW1hIHByb2JhYmlsaWRhZGUgZGUgY2FkYSBjbGFzc2UgZW0gMCBvdSAxLg0KDQohW10oaW1hZ2VzL2FjdGl2YXRpb24tZnVuY3Rpb25zLnBuZykNCg0KYGBge3J9DQojIEluaXRpYWxpemUgYSBzZXF1ZW50aWFsIG1vZGVsDQptb2RlbCA8LSBrZXJhc19tb2RlbF9zZXF1ZW50aWFsKCkgDQoNCiMgQWRkIGxheWVycyB0byB0aGUgbW9kZWwNCm1vZGVsICU+JSANCiAgICBsYXllcl9kZW5zZSh1bml0cyA9IDgsIGFjdGl2YXRpb24gPSAncmVsdScsIGlucHV0X3NoYXBlID0gYyg0KSkgJT4lIA0KICAgIGxheWVyX2RlbnNlKHVuaXRzID0gMywgYWN0aXZhdGlvbiA9ICdzb2Z0bWF4JykNCmBgYA0KDQpBbGd1bnMgZGV0YWxoZSBzb2JyZSBvIG1vZGVsbywgZWxlIHBvc3N1aSBpbnB1dCA0IChvcyA0IGF0cmlidXRvcyBkb3MgZGFkb3MpLCBvdXRwdXQgMyAoMSBwYXJhIGNhZGEgdGlwbyBkZSBJcmlzKSBlIDggbvNzIGVtIHN1YSBjYW1hZGEgZXNjb25kaWRhIChlc3NlIHZhbG9yIOkgYXJiaXRy4XJpbykuDQoNCkFnb3JhIHBvZGVtb3MgdmVyIGEgcmVwcmVzZW50YefjbyBkbyBtb2RlbG86DQpgYGB7cn0NCnN1bW1hcnkobW9kZWwpDQpgYGANCg0KQWdvcmEgcXVlIGEgYXJxdWl0ZXR1cmEgZm9pIGRlZmluaWRhIOkgaG9yYSBkZSBjb21waWxhciBvIG1vZGVsbywgcGFyYSBlc3NlIGV4ZW1wbG8gaXJlbW9zIHVzYXIgImNhdGVnb3JpY2FsX2Nyb3NzZW50cm9weSIgY29tbyBub3NzYSBmdW7n428gZGUgbG9zcywgImFkYW0iIGNvbW8gbyBvdGltaXphZG9yIGUgImFjY3VyYWN5IiBzZXLhIG5vc3NhIG3pdHJpY2EuDQpgYGB7cn0NCiMgQ29tcGlsZSB0aGUgbW9kZWwNCm1vZGVsI
CU+JSBjb21waWxlKA0KICAgICBsb3NzID0gJ2NhdGVnb3JpY2FsX2Nyb3NzZW50cm9weScsDQogICAgIG9wdGltaXplciA9ICdhZGFtJywNCiAgICAgbWV0cmljcyA9ICdhY2N1cmFjeScNCiApDQpgYGANCg0KRW0gcmVsYefjbyBhbyBvdGltaXphZG9yICJhZGFtIiDpIGltcG9ydGFudGUgbm90YXIgcXVlIHRhbWLpbSBwb2RlbW9zIHR1bmFyIG91dHJvcyBwYXLibWV0cm9zIGFs6W0gZGEgdGF4YSBkZSBhcHJlbmRpemFkbywgY29tbyBwb3IgZXhlbXBsbyBiZXRhMSwgYmV0YTIgZSBlcHNpbG9uLiBQYXJhIGEgbel0cmljYSBhY3Vy4WNpYSBzZSBhZOlxdWEgbWVsaG9yIGFvIG5vc3NvIHByb2JsZW1hIG1hcyB0ZW1vcyBvdXRyYXMgb3Dn9WVzIGNvbW8gTWVhbiBTcXVhcmVkIEVycm9yIChNU0UpLg0KTm9zc2EgZnVu5+NvIGxvc3MgImNhdGVnb3JpY2FsX2Nyb3NzZW50cm9weSIsIHNlcmlhIGEgZnVu5+NvIHBhZHLjbyBwYXJhIGNsYXNzaWZpY2Hn428gbXVsdGktY2xhc3NlKG1haXMgZGUgMiBjbGFzc2VzKS4NCg0KVGVuZG8gaXNzbyBkZWZpbmlkbyBwb2RlbW9zICJlbmNhaXhhciIgbm9zc29zIGRhZG9zIG5vIG1vZGVsby4NCmBgYHtyIGluY2x1ZGU9RkFMU0V9DQojIEZpdCB0aGUgbW9kZWwgDQpoaXN0b3J5IDwtIG1vZGVsICU+JSBmaXQoDQogICAgIGlyaXMudHJhaW5pbmcsIA0KICAgICBpcmlzLnRyYWluTGFiZWxzLCANCiAgICAgZXBvY2hzID0gMjAwLCANCiAgICAgYmF0Y2hfc2l6ZSA9IDUsIA0KICAgICB2YWxpZGF0aW9uX3NwbGl0ID0gMC4yDQogKQ0KYGBgDQoNCkFxdWkgZGVmaW5pbW9zIG9zIHBhcuJtZXRyb3MgcGFyYSBvIHRyZWluYW1lbnRvLCBwcmltZWlybyBkZWZpbmltb3MgbyBu+m1lcm8gZGUgZXBvY2hzLCBjYWRhIGVwb2NoIOkgdW1hIGl0ZXJh5+NvIGRvIG5vc3NvIG1vZGVsbyBzb2JyZSBvcyBkYWRvcyBkZSB0cmVpbmFtZW50byBzZWd1aWRvcyBwZWxhIHZhbGlkYefjbyBkb3MgcmVzdWx0YWRvcyhmb3dhcmQsIGJhY2t3YXJkIHByb3BhZ2F0aW9uIGUgdXBkYXRlIGRvcyBwZXNvcyksICJiYXRjaF9zaXplIiDpIHJlZmVyZW50ZSBhIHF1YW50aWRhZGUgZG9zIGRhZG9zIGRlIHRyZWluYW1lbnRvIHF1ZSB2428gc2VyIHByb2Nlc3NhZG9zIHBvciB2ZXogKGlzc28gcG9kZSBtZWxob3JhciBvIHVzbyBkYSBtZW3zcmlhKSwgYWzpbSBkZSBxdWUgbyBtb2RlbG8gdmFpIHNlciBhdHVhbGl6YWRvIG1haXMgZnJlcXVlbnRlbWVudGUoMSB2ZXogYSBjYWRhIGJhdGNoKS4NCg0KQWxnbyBtdWl0byBpbnRlcmVzc2FudGUgcXVlIHBvZGVtb3MgZmF6ZXIg6SB2aXN1YWxpemFyIG9zIGdy4WZpY29zIGRvIG5vc3NvIG1vZGVsbyByZWZlcmVudGVzIGEgZnVu5+NvIGxvc3MgZSBhIGFjdXLhY2lhLCBpc3NvIGNvbSBiYXNlIHRhbnRvIG5vcyBkYWRvcyBkZSB0cmVpbm8gcXVhbnRvIG5vcyBkYWRvcyBkZSB2YWxpZGHn428uDQpgYGB7cn0NCiMgUGxvdCB0aGUgbW9kZWwgbG9zcyBvZiB0aGUgdHJhaW5pbmcgZGF0YQ0KcGxvdChoaXN0b3J5JG1ldHJpY3MkbG9zcywgbWFpbj0iTW9kZWwgTG9zcyIsIHhsYWIgPSAiZXBvY2giLCB5bGFiPSJsb3NzIiwgY29sPSJibHVlIiwgdHlwZT0ibCIsIHlsaW09YygwLDEuNSkpDQoNCiMgUGxvdCB0aGUgbW9kZWwgbG9zcyBvZiB0aGUgdGVzdCBkYXRhDQpsaW5lcyhoaXN0b3J5JG1ldHJpY3MkdmFsX2xvc3MsIGNvbD0iZ3JlZW4iKQ0KDQojIEFkZCBsZWdlbmQNCmxlZ2VuZCgidG9wcmlnaHQiLCBjKCJ0cmFpbiIsInRlc3QiKSwgY29sPWMoImJsdWUiLCAiZ3JlZW4iKSwgbHR5PWMoMSwxKSkNCmBgYA0KDQpgYGB7cn0NCiMgUGxvdCB0aGUgYWNjdXJhY3kgb2YgdGhlIHRyYWluaW5nIGRhdGEgDQpwbG90KGhpc3RvcnkkbWV0cmljcyRhY2MsIG1haW49Ik1vZGVsIEFjY3VyYWN5IiwgeGxhYiA9ICJlcG9jaCIsIHlsYWI9ImFjY3VyYWN5IiwgY29sPSJibHVlIiwgdHlwZT0ibCIsIHlsaW09YygwLDEpKQ0KDQojIFBsb3QgdGhlIGFjY3VyYWN5IG9mIHRoZSB2YWxpZGF0aW9uIGRhdGENCmxpbmVzKGhpc3RvcnkkbWV0cmljcyR2YWxfYWNjLCBjb2w9ImdyZWVuIikNCg0KIyBBZGQgTGVnZW5kDQpsZWdlbmQoImJvdHRvbXJpZ2h0IiwgYygidHJhaW4iLCJ0ZXN0IiksIGNvbD1jKCJibHVlIiwgImdyZWVuIiksIGx0eT1jKDEsMSkpDQpgYGANCg0KQXF1aSBwb2RlbW9zIG9ic2VydmFyIG8gc2VndWludGU6DQoNCi0gQSBmdW7n428gbG9zcyB0ZW0gdW0gY29tcG9ydGFtZW50byBkZW50cm8gZG8gZXNwZXJhZG8sIGVsYSB0ZW5kZSBhIGRpbWludWlyIGNvbmZvcm1lIG8gbvptZXJvIGRlIGVwb2NocyBhdW1lbnRhIGF06SBjaGVnYXIgdW0gcG9udG8gb25kZSBlbGEgcGFyZWNlIGVzdGFiaWxpemFyLg0KLSBQYXJhIGEgYWN1cuFjaWEgbyBvYnNlcnZhZG8gdGFtYultIGVzdOEgZGVudHJvIGRvIGVzcGVyYWRvLCBhIGFjdXLhY2lhIHRlbmRlIGEgYXVtZW50YXIgY29uZm9ybWUgYXVtZW50YW0gb3MgZXBvY2hzIGF06SBlc3RhYmlsaXphciBlbSB1bSBwb250by4NCi0gT2JzOiBTZSBhIGFjdXLhY2lhIHBhcmVjZSBlc3RhciBhdW1lbnRhZG8gbm9zIPpsdGltb3MgZXBvY2gsIOkgdW0gc2luYWwgcXVlICBtb2RlbG8gYWluZGEgbuNvIGFjYWJvdSBkZSBhcHJlbmRlci4NCi0gT2JzMjogU2UgYSBhY3Vy4WNpYSBwYXJhIG8gdHJlaW5vIGVzdOEgYXVt
ZW50YW5kbyBtYXMgYSBhY3Vy4WNpYSBwYXJhIG8gdGVzdGUgZXN04SBkaW1pbnVpbmRvIG8gbW9kZWxvIHByb3ZhdmVsbWVudGUgZXN04SBzb2ZyZW5kbyBkZSBvdmVyZml0dGluZy4NCg0KQWdvcmEgcXVlIG5vc3NvIG1vZGVsbyBmb2kgY3JpYWRvLCBjb21waWxhZG8gZSB0cmVpbmFkbywgbvNzIHBvZGVtb3MgdXPhLWxvIHBhcmEgcHJldmVyIHJlc3VsdGFkb3MgcGFyYSBub3Nzb3MgZGFkb3MgZGUgdGVzdGUuDQpgYGB7cn0NCiMgUHJlZGljdCB0aGUgY2xhc3NlcyBmb3IgdGhlIHRlc3QgZGF0YQ0KY2xhc3NlcyA8LSBtb2RlbCAlPiUgcHJlZGljdF9jbGFzc2VzKGlyaXMudmFsaWRhdGlvbiwgYmF0Y2hfc2l6ZSA9IDEyOCkNCmBgYA0KDQpDb20gbm9zc2FzIHByZWRp5/VlcyB1bWEgZm9ybWEgaW50ZXJlc3NhbnRlIGRlIHZpc3VhbGl6YXIgb3MgZGFkb3Mg6SB1c2FuZG8gdW1hIG1hdHJpeiBkZSBjb25mdXPjbw0KYGBge3J9DQojIENvbmZ1c2lvbiBtYXRyaXgNCnRhYmxlKGlyaXMudmFsaWRhdGlvbnRhcmdldCwgY2xhc3NlcykNCmBgYA0KDQoNCkFWQUxJQVIgT1MgUkVTVUxUQURPUw0KDQoNCk91dHJhIGZvcm1hIGludGVyZXNzYW50ZSBkZSBhdmFsaWFyIG8gbW9kZWxvIOkgdXNhbmRvIGEgZnVu5+NvICJldmFsdWF0ZSgpIiwgcGFyYSBpc3NvIGJhc3RhIHBhc3NhciBvcyBkYWRvcyBlIGxhYmVscyBkZSB2YWxpZGHn428uDQpgYGB7cn0NCiMgRXZhbHVhdGUgb24gdGVzdCBkYXRhIGFuZCBsYWJlbHMNCnNjb3JlIDwtIG1vZGVsICU+JSBldmFsdWF0ZShpcmlzLnZhbGlkYXRpb24sIGlyaXMudmFsaWRhdGlvbkxhYmVscywgYmF0Y2hfc2l6ZSA9IDEyOCkNCg0KIyBQcmludCB0aGUgc2NvcmUNCnByaW50KHNjb3JlKQ0KYGBgDQoNCkJ1c2NhIHBvciBoaXBlcnBhcuJtZXRyb3Mg6SBwcm92YXZlbG1lbnRlIG9uZGUgc2UgZ2FzdGEgbWFpcyB0ZW1wbyBxdWFuZG8gc2UgbW9udGEgdW0gbW9kZWxvLCBtYXMgdGFtYultIOkgbyBxdWUgZGlmZXJlbmNpYSB1bSBib20gbW9kZWxvIGRlIG91dHJvIHJ1aW0sIG91IHBvdWNvIGVmaWNpZW50ZS4gTWFzIGlzc28g6SBhbGdvIHF1ZSBkZXBlbmRlIG11aXRvIGRvIHByb2JsZW1hIGVtIHF1ZXN0428sIG5vIG5vc3NvIGNhc28gbm9zc29zIGRhZG9zIHPjbyBiZW0gc2ltcGxlcywgZW50428gbuNvIOkgcHJlY2lzbyBmYXplciBtdWl0by4NCg0KRGVudHJlIGFzIHbhcmlhcyBwb3NzaWJpbGlkYWRlcyBkZSBhanVzdGVzLCBpcmVtb3MgY29icmlyIHRy6nM6IG8gbvptZXJvIGRlIGNhbWFkYXMgZXNjb25kaWRhcywgbvptZXJvIGRlIG7zcyBlIG8gYWxnb3JpdG1vIGRlIG90aW1pemHn428uDQoNCkFkaWNpb25hbmRvIGNhbWFkYXMNCg0KQXF1aSBpcmVtb3MgdXNhciBhIG1lc21hIGVzdHJ1dHVyYSBkZSBtb2RlbG8gbWFzIGNvbSB1bWEgY2FtYWRhIGEgbWFpcy4NCmBgYHtyIGluY2x1ZGU9RkFMU0V9DQojIEluaXRpYWxpemUgYSBzZXF1ZW50aWFsIG1vZGVsDQptb2RlbDIgPC0ga2VyYXNfbW9kZWxfc2VxdWVudGlhbCgpIA0KDQojIEFkZCBsYXllcnMgdG8gdGhlIG1vZGVsDQptb2RlbDIgJT4lIA0KICAgIGxheWVyX2RlbnNlKHVuaXRzID0gOCwgYWN0aXZhdGlvbiA9ICdyZWx1JywgaW5wdXRfc2hhcGUgPSBjKDQpKSAlPiUgDQogICAgbGF5ZXJfZGVuc2UodW5pdHMgPSA1LCBhY3RpdmF0aW9uID0gJ3JlbHUnKSAlPiUgDQogICAgbGF5ZXJfZGVuc2UodW5pdHMgPSAzLCBhY3RpdmF0aW9uID0gJ3NvZnRtYXgnKQ0KDQojIENvbXBpbGUgdGhlIG1vZGVsDQptb2RlbDIgJT4lIGNvbXBpbGUoDQogICAgIGxvc3MgPSAnY2F0ZWdvcmljYWxfY3Jvc3NlbnRyb3B5JywNCiAgICAgb3B0aW1pemVyID0gJ2FkYW0nLA0KICAgICBtZXRyaWNzID0gJ2FjY3VyYWN5Jw0KICkNCg0KIyBTYXZlIHRoZSB0cmFpbmluZyBoaXN0b3J5IGluIGhpc3RvcnkNCmhpc3RvcnkyIDwtIG1vZGVsMiAlPiUgZml0KA0KICBpcmlzLnRyYWluaW5nLCBpcmlzLnRyYWluTGFiZWxzLCANCiAgZXBvY2hzID0gMjAwLCBiYXRjaF9zaXplID0gNSwNCiAgdmFsaWRhdGlvbl9zcGxpdCA9IDAuMg0KICkNCmBgYA0KDQpgYGB7cn0NCiMgUGxvdCB0aGUgbW9kZWwgbG9zcw0KcGxvdChoaXN0b3J5MiRtZXRyaWNzJGxvc3MsIG1haW49Ik1vZGVsIExvc3MiLCB4bGFiID0gImVwb2NoIiwgeWxhYj0ibG9zcyIsIGNvbD0iYmx1ZSIsIHR5cGU9ImwiLCB5bGltPWMoMCwxLjcpKQ0KbGluZXMoaGlzdG9yeTIkbWV0cmljcyR2YWxfbG9zcywgY29sPSJncmVlbiIpDQpsZWdlbmQoInRvcHJpZ2h0IiwgYygidHJhaW4iLCJ0ZXN0IiksIGNvbD1jKCJibHVlIiwgImdyZWVuIiksIGx0eT1jKDEsMSkpDQoNCiMgUGxvdCB0aGUgbW9kZWwgYWNjdXJhY3kNCnBsb3QoaGlzdG9yeTIkbWV0cmljcyRhY2MsIG1haW49Ik1vZGVsIEFjY3VyYWN5IiwgeGxhYiA9ICJlcG9jaCIsIHlsYWI9ImFjY3VyYWN5IiwgY29sPSJibHVlIiwgdHlwZT0ibCIsIHlsaW09YygwLDEpKQ0KbGluZXMoaGlzdG9yeTIkbWV0cmljcyR2YWxfYWNjLCBjb2w9ImdyZWVuIikNCmxlZ2VuZCgiYm90dG9tcmlnaHQiLCBjKCJ0cmFpbiIsInRlc3QiKSwgY29sPWMoImJsdWUiLCAiZ3JlZW4iKSwgbHR5PWMoMSwxKSkNCg0KIyBFdmFsdWF0ZSB0aGUgbW9kZWwNCnNjb3JlMiA8LSBtb2RlbDIgJT4lIGV2YWx1YXRlKGlyaXMudmFsaWRhdGlvbiw
gaXJpcy52YWxpZGF0aW9uTGFiZWxzLCBiYXRjaF9zaXplID0gMTI4KQ0KDQojIFByaW50IHRoZSBzY29yZQ0KcHJpbnQoc2NvcmUyKQ0KYGBgDQoNCk7zcyBlc2NvbmRpZG9zDQoNCkFnb3JhIGlyZW1vcyBtYWlzIHVtYSB2ZXogdXNhciBhIG1lc21hIGVzdHJ1dHVyYSBpbmljaWFsIG1hcyBkZXNhIHZleiBpcmVtb3MgYWRpY2lvbmFyIG1haXMgbvNzIGEgY2FtYWRhIGVzY29uZGlkYS4NCmBgYHtyIGluY2x1ZGU9RkFMU0V9DQojIEluaXRpYWxpemUgdGhlIHNlcXVlbnRpYWwgbW9kZWwNCm1vZGVsMyA8LSBrZXJhc19tb2RlbF9zZXF1ZW50aWFsKCkgDQoNCiMgQWRkIGxheWVycyB0byB0aGUgbW9kZWwNCm1vZGVsMyAlPiUgDQogICAgbGF5ZXJfZGVuc2UodW5pdHMgPSAyOCwgYWN0aXZhdGlvbiA9ICdyZWx1JywgaW5wdXRfc2hhcGUgPSBjKDQpKSAlPiUgDQogICAgbGF5ZXJfZGVuc2UodW5pdHMgPSAzLCBhY3RpdmF0aW9uID0gJ3NvZnRtYXgnKQ0KDQojIENvbXBpbGUgdGhlIG1vZGVsDQptb2RlbDMgJT4lIGNvbXBpbGUoDQogICAgIGxvc3MgPSAnY2F0ZWdvcmljYWxfY3Jvc3NlbnRyb3B5JywNCiAgICAgb3B0aW1pemVyID0gJ2FkYW0nLA0KICAgICBtZXRyaWNzID0gJ2FjY3VyYWN5Jw0KICkNCg0KIyBTYXZlIHRoZSB0cmFpbmluZyBoaXN0b3J5IGluIHRoZSBoaXN0b3J5IHZhcmlhYmxlDQpoaXN0b3J5MyA8LSBtb2RlbDMgJT4lIGZpdCgNCiAgaXJpcy50cmFpbmluZywgaXJpcy50cmFpbkxhYmVscywgDQogIGVwb2NocyA9IDIwMCwgYmF0Y2hfc2l6ZSA9IDUsIA0KICB2YWxpZGF0aW9uX3NwbGl0ID0gMC4yDQogKQ0KYGBgDQoNCmBgYHtyfQ0KIyBQbG90IHRoZSBtb2RlbCBsb3NzDQpwbG90KGhpc3RvcnkzJG1ldHJpY3MkbG9zcywgbWFpbj0iTW9kZWwgTG9zcyIsIHhsYWIgPSAiZXBvY2giLCB5bGFiPSJsb3NzIiwgY29sPSJibHVlIiwgdHlwZT0ibCIsIHlsaW09YygwLDEuNSkpDQpsaW5lcyhoaXN0b3J5MyRtZXRyaWNzJHZhbF9sb3NzLCBjb2w9ImdyZWVuIikNCmxlZ2VuZCgidG9wcmlnaHQiLCBjKCJ0cmFpbiIsInRlc3QiKSwgY29sPWMoImJsdWUiLCAiZ3JlZW4iKSwgbHR5PWMoMSwxKSkNCg0KIyBQbG90IHRoZSBtb2RlbCBhY2N1cmFjeQ0KcGxvdChoaXN0b3J5MyRtZXRyaWNzJGFjYywgbWFpbj0iTW9kZWwgQWNjdXJhY3kiLCB4bGFiID0gImVwb2NoIiwgeWxhYj0iYWNjdXJhY3kiLCBjb2w9ImJsdWUiLCB0eXBlPSJsIiwgeWxpbT1jKDAsMSkpDQpsaW5lcyhoaXN0b3J5MyRtZXRyaWNzJHZhbF9hY2MsIGNvbD0iZ3JlZW4iKQ0KbGVnZW5kKCJib3R0b21yaWdodCIsIGMoInRyYWluIiwidGVzdCIpLCBjb2w9YygiYmx1ZSIsICJncmVlbiIpLCBsdHk9YygxLDEpKQ0KDQojIEV2YWx1YXRlIHRoZSBtb2RlbA0Kc2NvcmUzIDwtIG1vZGVsMyAlPiUgZXZhbHVhdGUoaXJpcy52YWxpZGF0aW9uLCBpcmlzLnZhbGlkYXRpb25MYWJlbHMsIGJhdGNoX3NpemUgPSAxMjgpDQoNCiMgUHJpbnQgdGhlIHNjb3JlDQpwcmludChzY29yZTMpDQpgYGANCg0KRW0gcmVsYefjbyBhIHRvcG9sb2dpYSBkYSByZWRlIChxdWFudGlkYWRlIGRlIGNhbWFkYXMgZSBu83MpLCBhIHByaW5j7XBpbyBwb2RlIHBhcmVjZXIgdW1hIGJvYSBpZGVpYSBhZGljaW9uYXIgbWFpcyBjYW1hZGFzIGUgbvNzLCBwYXJhIGF1bWVudGFyIGEgY29tcGxleGlkYWRlIGRhIG5vc3NhIGZ1bufjbyBlIHBvZGVyIGNhcHR1cmFyIG1haXMgZGFkb3MsIG1hcyBpc3NvIHZhaSBmYXplciBjb20gcXVlIG8gbW9kZWxvIHNlIGFqdXN0ZSBkZW1haXMgYW9zIGRhZG9zIGRlIHRyZWluYW1lbnRvIGUgcGVyY2EgYSBjYXBhY2lkYWRlIGRlIGNhcHR1cmFyIHRhbWLpbSBvcyBkYWRvcyBkZSB2YWxpZGHn428gKG92ZXJmaXR0aW5nKS4gT3Ugc2VqYSBhbOltIGRlIGRpZmljdWx0YXIgbyBvdmVyZml0dGluZyByZWRlcyBtZW5vcmVzIHRhbWLpbSB2428gc2VyIHRyZWluYWRhcyBtYWlzIHLhcGlkbywgcG9yIGVzc2VzIG1vdGl2b3MgZGUgZm9ybWEgZ2VyYWwgbvNzIHNlbXByZSBpcmVtb3MgcHJlZmVyaXIgcmVkZXMgbWFpcyBzaW1wbGVzLg0KDQpPdGltaXphZG9yDQoNClVtIGhpcGVycGVy4m1ldHJvIHF1ZSB0YW1i6W0gcG9kZW1vcyBhanVzdGFyIOkgbyBvdGltaXphZG9yIGUgYXTpIG1lc21vIG9zIHBy83ByaW9zIHBhcuJtZXRyb3MgZG8gb3RpbWl6YWRvciBhIHNlZ3VpciBpcmVtb3MgdXNhciBvIFN0b2NoYXN0aWMgR3JhZGllbnQgRGVzY2VudCAoU0dEKSBjb21vIG5vc3NvIG90aW1pemFkb3IgZSB0YW1i6W0gbXVkYXIgYSB0YXhhIGRlIGFwcmVuZGl6YWRvLg0KYGBge3IgaW5jbHVkZT1GQUxTRX0NCm1vZGVsNCA8LSBrZXJhc19tb2RlbF9zZXF1ZW50aWFsKCkgDQoNCiMgQWRkIGxheWVycyB0byB0aGUgbW9kZWwNCm1vZGVsNCAlPiUgDQogICAgbGF5ZXJfZGVuc2UodW5pdHMgPSA4LCBhY3RpdmF0aW9uID0gJ3JlbHUnLCBpbnB1dF9zaGFwZSA9IGMoNCkpICU+JSANCiAgICBsYXllcl9kZW5zZSh1bml0cyA9IDMsIGFjdGl2YXRpb24gPSAnc29mdG1heCcpDQoNCiMgRGVmaW5lIGFuIG9wdGltaXplcg0Kc2dkIDwtIG9wdGltaXplcl9zZ2QobHIgPSAwLjAxKQ0KDQojIENvbXBpbGUgdGhlIG1vZGVsDQptb2RlbDQgJT4lIGNvbXBpbGUob3B0aW1pemVyPXNnZCwgDQogICAgICAgICAgICAgICAgICBsb3NzPSdjYXRlZ29yaW
NhbF9jcm9zc2VudHJvcHknLCANCiAgICAgICAgICAgICAgICAgIG1ldHJpY3M9J2FjY3VyYWN5JykNCg0KIyBGaXQgdGhlIG1vZGVsIHRvIHRoZSB0cmFpbmluZyBkYXRhDQpoaXN0b3J5NCA8LSBtb2RlbDQgJT4lIGZpdCgNCiAgaXJpcy50cmFpbmluZywgaXJpcy50cmFpbkxhYmVscywgDQogIGVwb2NocyA9IDIwMCwgYmF0Y2hfc2l6ZSA9IDUsIA0KICB2YWxpZGF0aW9uX3NwbGl0ID0gMC4yDQogKQ0KYGBgDQoNCmBgYHtyfQ0KIyBQbG90IHRoZSBtb2RlbCBsb3NzDQpwbG90KGhpc3Rvcnk0JG1ldHJpY3MkbG9zcywgbWFpbj0iTW9kZWwgTG9zcyIsIHhsYWIgPSAiZXBvY2giLCB5bGFiPSJsb3NzIiwgY29sPSJibHVlIiwgdHlwZT0ibCIsIHlsaW09YygwLDEuNikpDQpsaW5lcyhoaXN0b3J5NCRtZXRyaWNzJHZhbF9sb3NzLCBjb2w9ImdyZWVuIikNCmxlZ2VuZCgidG9wcmlnaHQiLCBjKCJ0cmFpbiIsInRlc3QiKSwgY29sPWMoImJsdWUiLCAiZ3JlZW4iKSwgbHR5PWMoMSwxKSkNCg0KIyBQbG90IHRoZSBtb2RlbCBhY2N1cmFjeQ0KcGxvdChoaXN0b3J5NCRtZXRyaWNzJGFjYywgbWFpbj0iTW9kZWwgQWNjdXJhY3kiLCB4bGFiID0gImVwb2NoIiwgeWxhYj0iYWNjdXJhY3kiLCBjb2w9ImJsdWUiLCB0eXBlPSJsIiwgeWxpbT1jKDAsMSkpDQpsaW5lcyhoaXN0b3J5NCRtZXRyaWNzJHZhbF9hY2MsIGNvbD0iZ3JlZW4iKQ0KbGVnZW5kKCJib3R0b21yaWdodCIsIGMoInRyYWluIiwidGVzdCIpLCBjb2w9YygiYmx1ZSIsICJncmVlbiIpLCBsdHk9YygxLDEpKQ0KDQojIEV2YWx1YXRlIHRoZSBtb2RlbA0Kc2NvcmU0IDwtIG1vZGVsNCAlPiUgZXZhbHVhdGUoaXJpcy52YWxpZGF0aW9uLCBpcmlzLnZhbGlkYXRpb25MYWJlbHMsIGJhdGNoX3NpemUgPSAxMjgpDQoNCiMgUHJpbnQgdGhlIGxvc3MgYW5kIGFjY3VyYWN5IG1ldHJpY3MNCnByaW50KHNjb3JlNCkNCmBgYA0KDQoNCiMjIyBTYWx2YXIsIGNhcnJlZ2FyIG91IGV4cG9ydGFyIG8gbW9kZWxvDQoNClNhbHZhciBlIGNhcnJlZ2FyIHVtIG1vZGVsbyDpIG11aXRvIGltcG9ydGFudGUsIHByaW5jaXBhbG1lbnRlIHF1YW5kbyBzZSB0cmF0YSBkZSBtb2RlbG9zIG1haXMgY29tcGxleG9zIGUgcm9idXN0b3MsIHBvZGUgc2UgdG9ybmFyIHF1YXNlIGltcHJhdGlj4XZlbCByZXBsaWNhciBvIHRyZWluYW1lbnRvIGRlIHVtIG1vZGVsbyBlbSBvdXRybyBhbWJpZW50ZSwgcG9yIGV4ZW1wbG8sIHZvY+ogbuNvIHZhaSBxdWVyZXIgdGVudGFyIHRyZWluYXIgdW0gbW9kZWxvIGVtIHNldSBjb21wdXRhZG9yIHF1ZSBsZXZvdSBkaWFzIHBhcmEgc2VyIHRyZWluYWRvIGVtIHVtIHN1cGVyIGNvbXB1dGFkb3IsIG91IGF06SBtZXNtbyB2b2PqIHBvZGUgdHJlaW5hciBzZXUgbW9kZWxvIGVtIGRpYXMgZGlmZXJlbnRlcy4NCg0KSXNzbyBwb2RlIHNlciBmYWNpbG1lbnRlIGZlaXRvIHVzYW5kbyBhcyBmdW7n9WVzIGRhIGJpYmxpb3RlY2EgImhkZjUiLCAic2F2ZV9tb2RlbF9oZGY1KCkiIGUgImxvYWRfbW9kZWxfaGRmNSgpIiwgaXNzbyDpIG11aXRvIGltcG9ydGFudGUgcXVhbmRvIHNlIHVzYSAidHJhbnNmZXIgbGVhcm5pbmciLCBxdWUgZW0gcmVzdW1vIHNlcmlhIHVzYXIgdW0gbW9kZWxvIGrhIHRyZWluYWRvIGUgdXNhciBzZXVzIHBlc29zIGNvbW8gYmFzZSBlbSBvdXRybyBtb2RlbG8sIGlzc28gbm9ybWFsbWVudGUg6SBmZWl0byB1c2FuZG8gdW0gbW9kZWxvIGRlIHByb3Dzc2l0byBnZXJhbCBjb21vIGJhc2UgcGFyYSBvdXRybyBkZSBwcm9w83NpdG8gZXNwZWPtZmljby4NCg0KIVtdKGltYWdlcy90cmFuc2ZlcmxlYXJuaW5nLnBuZykNCmBgYHtyfQ0Kc2F2ZV9tb2RlbF9oZGY1KG1vZGVsLCAibXlfbW9kZWwuaDUiKQ0KbW9kZWwgPC0gbG9hZF9tb2RlbF9oZGY1KCJteV9tb2RlbC5oNSIpDQpgYGANCg0KVGFtYultIOkgcG9zc+12ZWwgc2FsdmFyIG9zIHBlc29zICh3ZWlnaHRzKSBkbyBtb2RlbG8uDQpgYGB7cn0NCnNhdmVfbW9kZWxfd2VpZ2h0c19oZGY1KG1vZGVsLCAibXlfbW9kZWxfd2VpZ2h0cy5oNSIpDQptb2RlbCAlPiUgbG9hZF9tb2RlbF93ZWlnaHRzX2hkZjUoIm15X21vZGVsX3dlaWdodHMuaDUiKQ0KYGBgDQoNClRhbWLpbSDpIHBvc3PtdmVsIGV4cG9ydGFyIG8gbW9kZWxvIHBhcmEgSlNPTiBvdSBZQU1MLg0KYGBge3J9DQpqc29uX3N0cmluZyA8LSBtb2RlbF90b19qc29uKG1vZGVsKQ0KbW9kZWwgPC0gbW9kZWxfZnJvbV9qc29uKGpzb25fc3RyaW5nKQ0KDQp5YW1sX3N0cmluZyA8LSBtb2RlbF90b195YW1sKG1vZGVsKQ0KbW9kZWwgPC0gbW9kZWxfZnJvbV95YW1sKHlhbWxfc3RyaW5nKQ0KYGBgDQoNCg0KIyMjIFVzYW5kbyBkYWRvcyBkYXMgZWxlaef1ZXMgZG9zIGRlcHV0YWRvcyBkZSAyMDE0DQoNCmBgYHtyIGluY2x1ZGU9RkFMU0V9DQojc2V0IHVwDQpsaWJyYXJ5KGRwbHlyKQ0KbGlicmFyeShnZ3Bsb3QyKQ0KbGlicmFyeShkdW1taWVzKQ0KYGBgDQoNClBhcmEgdW0gc2VndW5kbyBleHBlcmltZW50byBpcmVtb3MgdXNhciBkYWRvcyByZWFpcywgbWFpcyBpbnRlcmVzc2FudGVzIGUgZGUgbWFpb3IgY29tcGxleGlkYWRlLCBxdWUgc2Vy428gb3MgZGFkb3MgcmVmZXJlbnRlcyBhIGVsZWnn428gZG9zIGRlcHV0YWRvcyBkZSAyMDE0Lg0KDQojIyMgQ2FycmVnYW5kbyBvcyBkYWRvcyBwdDINCmBgYHtyfQ0KdHJha
W5EcCA8LSByZWFkLmNzdigiZGF0YS90cmFpbjUuY3N2IiwgZW5jb2Rpbmc9IlVURi04IikNCnRlc3REcCA8LSByZWFkLmNzdigiZGF0YS90ZXN0NS5jc3YiLCBlbmNvZGluZz0iVVRGLTgiKQ0KYGBgDQoNCiMjIyBFeHBsb3JhbmRvIG9zIGRhZG9zIHB0Mg0KDQpQYXJhIHF1YWxxdWVyIGNvbmp1bnRvIGRlIGRhZG9zIHF1ZSB2YWkgc2VyIHN1Ym1ldGlkbyBwYXJhIHRyZWluYXIgdW0gbW9kZWxvIGRlIHByZWRp5+NvIOkgaW1wb3J0YW50ZSB2ZXJpZmljYXIgYSBkaXN0cmlidWnn428gZW50cmUgYXMgY2xhc3Nlcy4NCmBgYHtyfQ0KdG90YWwgPSBucm93KHRyYWluRHApDQpkaXN0X2NsYXNzZXMgPC0gdHJhaW5EcCAlPiUgY291bnQoc2l0dWFjYW9fZmluYWwpDQpnZ3Bsb3QoZGlzdF9jbGFzc2VzLCBhZXMoeSA9IGRpc3RfY2xhc3NlcyRuL3RvdGFsICogMTAwLCB4ID0gZGlzdF9jbGFzc2VzJHNpdHVhY2FvX2ZpbmFsKSkrDQogIGdlb21fYmFyKHN0YXQ9ImlkZW50aXR5IikgKw0KICBsYWJzKHRpdGxlID0gIkRpc3RyaWJ1aefjbyBkZSBjbGFzc2VzIiwgeCA9ICJTaXR1YefjbyBmaW5hbCIsIHkgPSAiUHJvcG9y5+NvICglKSIpICsNCiAgdGhlbWUoYXhpcy50ZXh0LnggPSBlbGVtZW50X3RleHQoYW5nbGUgPSAwLCBoanVzdCA9IDEpLCBsZWdlbmQucG9zaXRpb249Im5vbmUiKSArDQogIHRoZW1lKGF4aXMudGV4dD1lbGVtZW50X3RleHQoc2l6ZT04KSwgYXhpcy50aXRsZT1lbGVtZW50X3RleHQoc2l6ZT0xMixmYWNlPSJib2xkIikpDQpgYGANCg0KQ29tbyBwb2RlbW9zIG9ic2VydmFyIG5hIGltYWdlbSwgZXhpc3RlIHVtIGdyYW5kZSBkZXNiYWxhbmNlYW1lbnRvIG5hcyBjbGFzc2VzLCBtYWlzIGRlIDgwJSBkb3MgZGFkb3Mgc+NvIHJlZmVyZW50ZXMgYSBjYW5kaWRhdG9zIHF1ZSBu428gZm9yYW0gZWxlaXRvcywgaXNzbyBmYXogYmFzdGFudGUgc2VudGlkbyBq4SBxdWUgYXBlbmFzIHVtYSBxdWFudGlkYWRlIGVzcGVj7WZpY2EgZm9pIGVsZWl0YSwgZSBub3JtYWxtZW50ZSDpIGJlbSBtZW5vciBxdWUgbyB0b3RhbCBkZSBjYW5kaWRhdG9zLCBtYXMgcGFyYSB0cmVpbmFyIHVtIG1vZGVsbyBpc3NvIGFjYWJhIHNlbmRvIHJ1aW0gauEgcXVlIGEgY2xhc3NlIGRvcyBu428gZWxlaXRvcyB0ZW0gdW1hIHJlcHJlc2VudGHn428gbXVpdG8gbWFpb3IgcXVlIGEgb3V0cmEsIGlzc28gcG9kZSBlbnZpZXNhciBvIG1vZGVsbyBwYXJhIGVzc2VzIGNhc29zLCBvdSBzZWphLCBvIG1vZGVsbyBwb2RlIHJlcHJlc2VudGFyIGJlbSBtZWxob3IgZXNzZXMgZGFkb3MgcXVlIHBvc3N1ZW0gbWFpcyBleGVtcGxvcyAob3ZlcmZpdHRpbmcpLCBlIG7jbyByZXByZXNlbnRhciB0428gYmVtIGEgb3V0cmEgY2xhc3NlLCBq4SBxdWUgZXhpc3RlbSBwb3Vjb3MgZXhlbXBsb3MgZGEgbWVzbWEuIE91dHJvIHByb2JsZW1hIGRlIGRhZG9zIGRlc2JhbGFuY2VhZG9zIOkgcXVlIG5lc3NlIGNhc28gc2UgbyBtb2RlbG8gcHJldmVyIHRvZG9zIG9zIGV4ZW1wbG9zIGNvbW8gIm5hb19lbGVpdG9zIiBhaW5kYSBzaW0gZWxlIGNvbnNlZ3VpcuEgYWxnbyBwcvN4aW1vIGRlIDgwJSBkZSBhY3Vy4WNpYSwgZSBvYnZpYW1lbnRlIGVzc2EgcHJlZGnn428gZm9pIG11aXRvIHJ1aW0sIG1hcyBhbmFsaXNhbmRvIGFwZW5hcyBhY3Vy4WNpYSBmaWNhIGRpZu1jaWwgZGUgaWRlbnRpZmljYXIgaXNzby4NCg0KIyMjIFByb2Nlc3NhbmRvIG9zIGRhZG9zIHB0Mg0KDQpBbyBvYnNlcnZhciBvcyBkYWRvcyBwb2RlbW9zIHZlciBxdWUgb3MgdmFsb3JlcyBudWxvcyBz428gcmVwcmVzZW50YWRvcyBwb3IgIiNOVUxPIiBubyBjb25qdW50byBkZSBkYWRvcywgZW50428gdmFtb3Mgc3Vic3RpdHVpciBlc3NlcyB2YWxvcmVzIHBvciAiTkEiLCBhc3NpbSBwb2RlcmVtb3MgYXZhbGlhciBtZWxob3Igb3MgZGFkb3MuDQpgYGB7cn0NCiMgb2JzIG8gcHJvY2Vzc2FtZW50byBkZXZlIHNlciBmZWl0byBwYXJhIG9zIGRhZG9zIGRlIHRyZWlubyBlIHRlc3RlDQp0cmFpbkRwW3RyYWluRHAgPT0gJyNOVUxPJ10gPC0gTkENCnRlc3REcFt0ZXN0RHAgPT0gJyNOVUxPJ10gPC0gTkENCiMgb2JzZXJ2YW5kbyBhIHF1YW50aWRhZGUgZGUgdmFsb3JlcyBudWxvcyBwYXJhIGNhZGEgYXRyaWJ1dG8NCnNhcHBseSh0cmFpbkRwLCBmdW5jdGlvbih5KSBzdW0obGVuZ3RoKHdoaWNoKGlzLm5hKHkpKSkpKQ0KYGBgDQoNCkNvbSBlc3NhIGluZm9ybWHn428gcG9kZW1vcyBjaGVnYXIgYSBhbGd1bWFzIGNvbmNsdXP1ZXMsIGFwZW5hcyBvcyBhdHJpYnV0b3MgInNldG9yX2Vjb25vbWljb19yZWNlaXRhIiBlICJzZXRvcl9lY29ub21pY29fZGVzcGVzYSIgcG9zc3VlbSBkYWRvcyBudWxvcywgZSBsZXZhbmRvIGVtIGNvbnRhIHF1ZSBvIG5vc3NvIHRvdGFsIGRlIGRhZG9zIOkgNDEzNSBwb2RlbW9zIGNvbmNsdWlyIHF1ZSBuZXNzZXMgZHVhcyBjYW1hZGFzIGEgbW9kYSBzZXJpYSBuYSB2ZXJkYWRlIG9zIGRhZG8gbnVsb3MsIGVzc2UgdGlwbyBkZSBzaXR1YefjbyBkaWZpY3VsdGEgbXVpdG8gc3Vic3RpdHVpciBlc3NlcyB2YWxvcmVzIHBvciBvdXRyb3MgZGVyaXZhZG9zIGRlIGFsZ3VtYSBmb3JtYSwgZGV2aWRvIGEgaXNzbyBvcHRlaSBwb3IgcmVtb3ZlciBlc3NlcyBhdHJpYnV0b3MuDQoNClBhcmEgc3VibWV0ZXIgb3MgZGFkb3MgcGFyYSB1bSBtb2RlbG8g
To feed the data into a deep learning model we need to turn our categorical attributes into numeric ones; for that we will use one hot encoding.
```{r}
# convert the categorical attributes to one hot format
trainDp <- dummy.data.frame(trainDp, names = c('estado_civil', 'sexo', 'grau', 'descricao_cor_raca'), sep = "_")
testDp <- dummy.data.frame(testDp, names = c('estado_civil', 'sexo', 'grau', 'descricao_cor_raca'), sep = "_")
```

```{r}
# removing unused attributes (attributes with a high share of missing values,
# and attributes of little importance: nome, ID, numero_candidato, estado_civil
# and descricao_ocupacao)
trainDPF <- trainDp %>% select(quantidade_doacoes, quantidade_doadores, total_receita, media_receita, recursos_de_outros_candidatos.comites, recursos_de_partidos, recursos_de_pessoas_físicas, recursos_de_pessoas_juridicas, recursos_proprios, quantidade_despesas, quantidade_fornecedores, total_despesa, media_despesa, idade, despesa_max_campanha, situacao_final)

testDPF <- testDp %>% select(quantidade_doacoes, quantidade_doadores, total_receita, media_receita, recursos_de_outros_candidatos.comites, recursos_de_partidos, recursos_de_pessoas_físicas, recursos_de_pessoas_juridicas, recursos_proprios, quantidade_despesas, quantidade_fornecedores, total_despesa, media_despesa, idade, despesa_max_campanha)
```

### Building the model pt2

```{r}
inputSize <- ncol(trainDPF)
outputSize <- length(unique(trainDPF$situacao_final))

# encode the class attribute as 0/1 (label encoding; the one hot step comes later)
trainDPF[, inputSize] <- as.numeric(trainDPF[, inputSize]) - 1

# Turn into a matrix
trainDPF <- as.matrix(trainDPF)

# Set `dimnames` to `NULL`
dimnames(trainDPF) <- NULL

# Normalize the data
trainDPF[, 1:(inputSize - 1)] <- normalize(trainDPF[, 1:(inputSize - 1)])

# Determine sample size
ind2 <- sample(2, nrow(trainDPF), replace = TRUE, prob = c(0.70, 0.30))

# Split the data
dp.training <- trainDPF[ind2 == 1, 1:(inputSize - 1)]
dp.validation <- trainDPF[ind2 == 2, 1:(inputSize - 1)]

# Split the class attribute
dp.trainingtarget <- trainDPF[ind2 == 1, inputSize]
dp.validationtarget <- trainDPF[ind2 == 2, inputSize]

# One hot encode the training target
dp.trainLabels <- to_categorical(dp.trainingtarget)

# One hot encode the validation target
dp.validationLabels <- to_categorical(dp.validationtarget)
```
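The class imbalance identified earlier could also be tackled at this point by weighting the minority class more heavily during training. The original analysis does not do this; a sketch of inverse-frequency weights, which could later be passed to `fit()` through its `class_weight` argument:
```{r}
# illustrative inverse-frequency weights (0 = eleito, 1 = nao_eleito);
# the rarer class receives the larger weight
freq <- table(dp.trainingtarget)
class_weights <- list("0" = as.numeric(sum(freq) / freq["0"]),
                      "1" = as.numeric(sum(freq) / freq["1"]))
```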
For this example we will use more layers, since this is a more complex problem. We will also use a different loss function, "binary_crossentropy", since we are classifying only 2 classes.

```{r include=FALSE}
modelDP <- keras_model_sequential()

modelDP %>%
    layer_dense(units = 8, activation = 'relu', input_shape = (inputSize - 1)) %>%
    layer_dense(units = 16, activation = 'relu') %>%
    layer_dense(units = 20, activation = 'relu') %>%
    layer_dense(units = outputSize, activation = 'softmax')

modelDP %>% compile(optimizer = "adam",
                    loss = 'binary_crossentropy',
                    metrics = 'accuracy')

historyDP <- modelDP %>% fit(
  dp.training, dp.trainLabels,
  epochs = 20, batch_size = 4
)
```

```{r}
# Plot the model loss (the fit() call above has no validation data,
# so only the training curve is available)
plot(historyDP$metrics$loss, main = "Model Loss", xlab = "epoch", ylab = "loss", col = "blue", type = "l", ylim = c(0, 1))
legend("topright", c("train"), col = c("blue"), lty = 1)

# Plot the model accuracy
plot(historyDP$metrics$acc, main = "Model Accuracy", xlab = "epoch", ylab = "accuracy", col = "blue", type = "l", ylim = c(0, 1))
legend("bottomright", c("train"), col = c("blue"), lty = 1)

# Evaluate the model on the validation split
scoreDP <- modelDP %>% evaluate(dp.validation, dp.validationLabels, batch_size = 128)

# Print the loss and accuracy metrics
print("Validation")
print(scoreDP)
```

Analyzing these plots, we see that the model reached a value close to the optimum right from the start. That is a bit odd, since the data has some complexity, but it may also be due to parameter choices that let the optimizer find a good point very quickly.
```{r}
# Turn into a matrix
testDPF <- as.matrix(testDPF)

# Set `dimnames` to `NULL`
dimnames(testDPF) <- NULL

# Normalize the data; unlike trainDPF, testDPF has no class column,
# so every column is a feature and all of them are normalized
testDPF <- normalize(testDPF)
```

```{r eval=FALSE, include=FALSE}
# Predict the classes for the test data
submission <- testDp
preds <- modelDP %>% predict_classes(testDPF, batch_size = 128)
submission_predict.df <- as.data.frame(preds)

# map the numeric predictions back to the class labels
submission_predict.df[submission_predict.df == 1] <- 'nao_eleito'
submission_predict.df[submission_predict.df == 0] <- 'eleito'

# attach the predicted labels to the submission data frame
submission$situacao_final <- submission_predict.df$preds
```
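Before trusting the submission, it is worth looking past the single accuracy number. A confusion matrix on the validation split shows how the predictions break down per class; a minimal sketch using the objects defined above:
```{r}
# confusion matrix: predicted classes against the true validation labels
val_preds <- modelDP %>% predict_classes(dp.validation, batch_size = 128)
table(predicted = val_preds, actual = dp.validationtarget)
```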
As we can see, our model in fact classified everything as "nao_eleito". That may have happened because of the imbalance in the training data, but it may also be that the model simply is not effective.

This reinforces the point about analyzing a model's predictions more carefully rather than looking only at the final accuracy. Here the validation accuracy was 92.41%, which would be very good, but analyzing the final predictions we can see that this is probably not a good model.
```{r}
totalPred <- nrow(submission_predict.df)
# count the predictions per class, keeping both levels even when one of them
# received zero predictions
dist_classesPred <- as.data.frame(table(factor(submission_predict.df$preds, levels = c('eleito', 'nao_eleito'))))
names(dist_classesPred) <- c("situacao_final", "n")

ggplot(dist_classesPred, aes(x = situacao_final, y = n / totalPred * 100)) +
  geom_bar(stat = "identity") +
  labs(title = "Class distribution", x = "Final status", y = "Proportion (%)") +
  theme(axis.text.x = element_text(angle = 0, hjust = 1), legend.position = "none") +
  theme(axis.text = element_text(size = 8), axis.title = element_text(size = 12, face = "bold"))
```

Now, with the training data and the predictions on the test data, we can join the two sets and use the result for analyses over what would be the complete dataset.
```{r}
# merge both data frames to get the "complete data"
dados_totais <- rbind(trainDp, submission)

write.csv(dados_totais, file = "C:/Users/dimit/Desktop/Projetos/AD2/data/dadosTotais.csv", row.names = FALSE)
```
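One caveat with this step, not addressed in the original: `rbind()` only succeeds when both data frames have exactly the same columns, which here depends on the one hot encoding having produced identical dummy columns in `trainDp` and `testDp`. A quick sanity check before writing the file:
```{r}
# columns present in one data frame but not the other; both results should be empty
setdiff(names(trainDp), names(submission))
setdiff(names(submission), names(trainDp))
```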