Classification to predict which candidates to the Chamber of Deputies will be elected in the 2014 elections.

# Set-up
library(ggplot2)
library(caret)
library(tidyr)
library(dplyr)
library(dummies)

setwd("C:/Users/dimit/Desktop/Projetos/AD2")
train <- read.csv("data/train5.csv", encoding="UTF-8")
test <- read.csv("data/test5.csv", encoding="UTF-8")

For the initial model I chose not to use all the variables (to keep training time down and to simplify the models). Some were trivial to remove, such as ID, nome and numero_cadidato, which clearly should not influence the outcome. Others, such as setor_economico_receita and setor_economico_despesa, were removed because they have many missing values, and artificially imputing them could bias the model. Finally, UF, estado_civil and descricao_ocupacao deserve a bit more care: in theory UF should not matter, since candidates from the same UF "compete" with each other; estado_civil supposedly should not matter either; and descricao_ocupacao has a very large number of categorical levels, which can leave some levels poorly represented and others over-represented.
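These claims can be sanity-checked quickly before dropping the columns; the sketch below (using the column names above and the "#NULO" missing-value marker that appears later in this notebook) counts the missing values in the two economic-sector columns and the number of distinct levels of descricao_ocupacao:

# quick sanity check for the removals above (a sketch): missing values and cardinality
train %>%
  summarise(na_setor_receita = sum(is.na(setor_economico_receita) | setor_economico_receita == "#NULO"),
            na_setor_despesa = sum(is.na(setor_economico_despesa) | setor_economico_despesa == "#NULO"),
            niveis_ocupacao  = n_distinct(descricao_ocupacao))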

# variables used in the model
# target: "situacao_final" (elected or not elected)
train_dadosFiltrados <- train %>% select(partido, quantidade_doacoes, quantidade_doadores, total_receita, media_receita, recursos_de_outros_candidatos.comites, recursos_de_partidos, recursos_de_pessoas_físicas, recursos_de_pessoas_juridicas, recursos_proprios, quantidade_despesas, quantidade_fornecedores, total_despesa, media_despesa, idade, sexo, grau, descricao_cor_raca, despesa_max_campanha, situacao_final)
test_dadosFiltrados <- test %>% select(partido, quantidade_doacoes, quantidade_doadores, total_receita, media_receita, recursos_de_outros_candidatos.comites, recursos_de_partidos, recursos_de_pessoas_físicas, recursos_de_pessoas_juridicas, recursos_proprios, quantidade_despesas, quantidade_fornecedores, total_despesa, media_despesa, idade, sexo, grau, descricao_cor_raca, despesa_max_campanha)
formula = as.formula(situacao_final ~ partido + quantidade_doacoes + quantidade_doadores + total_receita + media_receita + recursos_de_outros_candidatos.comites + recursos_de_partidos + recursos_de_pessoas_físicas + recursos_de_pessoas_juridicas + recursos_proprios + quantidade_despesas + quantidade_fornecedores + total_despesa + media_despesa + idade + sexo + grau + descricao_cor_raca + despesa_max_campanha)

For this experiment we will split the training data into training and validation sets, in a 75%/25% proportion.

# Split the data into training and validation sets
dataPartition <- createDataPartition(y = train_dadosFiltrados$situacao_final, p=0.75, list=FALSE)
treino <- train_dadosFiltrados[ dataPartition, ]
validacao <- train_dadosFiltrados[ -dataPartition, ]

1. Is there class imbalance (i.e., one class has many more instances than the other)? In what proportion? What side effects can class imbalance cause in the classifier?

total = nrow(train)
dist_classes <- train %>% count(situacao_final)
ggplot(dist_classes, aes(x = situacao_final, y = n / total * 100)) +
  geom_bar(stat="identity") +
  labs(title = "Distribuição de classes", x = "Situação final", y = "Proporção (%)") +
  theme(axis.text.x = element_text(angle = 0, hjust = 1), legend.position="none") +
  theme(axis.text=element_text(size=8), axis.title=element_text(size=12,face="bold"))

As the plot shows, the classes are heavily imbalanced: more than 80% of the records refer to candidates who were not elected. That makes sense, since only a fixed number of seats is filled and it is usually much smaller than the number of candidates, but for training a model it is a problem. The non-elected class is represented far more heavily than the other, which can bias the model towards those cases: it may fit the over-represented class very well (overfitting it) and represent the other class poorly, since there are few examples of it. Another problem with imbalanced data is that, in this case, a model that predicts every example as "nao_eleito" would still reach roughly 80% accuracy; that prediction is obviously useless, but looking at accuracy alone makes it hard to notice.
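The ~80% baseline mentioned above can be computed directly from the counts already stored in dist_classes (a quick sketch):

# accuracy of a trivial classifier that predicts "nao_eleito" for every candidate (a sketch)
baseline_acc <- max(dist_classes$n) / total * 100
sprintf("Baseline (majority class) accuracy: %.2f%%", baseline_acc)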

2. Train a logistic regression model, a decision tree and an adaboost model. Tune these models using cross-validation and control overfitting if necessary, taking into account the particularities of each model.

For these initial models we will use 5-fold cross-validation and a random hyperparameter search.

# Definition of the training parameters.
# K-fold cross-validation
fitControl <- trainControl(method = "cv",
                    number = 5,
                    search= "random")

Logistic regression

model_glm <- train(formula,
                 data = treino,
                 method="glm",
                 trControl = fitControl,
                 family="binomial",
                 na.action = na.omit)

Decision tree

model_rpart <- train(formula,
                 data=treino,
                 method = "rpart",
                 trControl = fitControl,
                 cp=0.001,
                 maxdepth=25)

Adaboost

model_adaboost <- train(formula,
                data=treino,
                trControl = fitControl,
                method = "adaboost")

3. Report accuracy, precision, recall and f-measure on training and validation. How do you evaluate the results? Justify your answer.

Some well-known metrics for evaluating how effective a model is are:

- Accuracy
- Precision
- Recall
- F-measure

These metrics are defined in terms of True Positives (TP), True Negatives (TN), False Positives (FP) and False Negatives (FN).

Accuracy = (TP + TN) / (TP + TN + FP + FN): the proportion of observations that were correctly classified.

Precision = TP / (TP + FP): of the observations predicted as positive, how many are actually positive.

Recall = TP / (TP + FN): of the truly positive observations, how many were correctly classified.

F-measure = 2 * (Precision * Recall) / (Precision + Recall): a single value that combines precision and recall; if you want just one number to evaluate the model, the F-measure is a better choice than using precision or recall alone.

Applying these measures to our context gives the following reading:
- Accuracy: proportion of candidates correctly classified, whether as elected or not elected.
- Precision: proportion of the candidates classified as elected that were actually elected.
- Recall: proportion of the elected candidates that were classified as elected.

validacao$predicaoGlm <- predict(model_glm, validacao)
validacao$predicaoRpart <- predict(model_rpart, validacao)
validacao$predicaoAdaboost <- predict(model_adaboost, validacao)
treino$predicaoGlm <- predict(model_glm, treino)
treino$predicaoRpart <- predict(model_rpart, treino)
treino$predicaoAdaboost <- predict(model_adaboost, treino)
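# confusion-matrix counts (TP, TN, FP, FN) for each model, on the training and validation sets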
TPGlmTrain <- treino %>% filter(situacao_final == "eleito", predicaoGlm == "eleito") %>% nrow()
TNGlmTrain <- treino %>% filter(situacao_final == "nao_eleito" , predicaoGlm == "nao_eleito" ) %>% nrow()
FPGlmTrain <- treino %>% filter(situacao_final == "nao_eleito" , predicaoGlm == "eleito") %>% nrow()
FNGlmTrain <- treino %>% filter(situacao_final == "eleito", predicaoGlm == "nao_eleito" ) %>% nrow()
TPRpartTrain <- treino %>% filter(situacao_final == "eleito", predicaoRpart == "eleito") %>% nrow()
TNRpartTrain <- treino %>% filter(situacao_final == "nao_eleito" , predicaoRpart == "nao_eleito" ) %>% nrow()
FPRpartTrain <- treino %>% filter(situacao_final == "nao_eleito" , predicaoRpart == "eleito") %>% nrow() 
FNRpartTrain <- treino %>% filter(situacao_final == "eleito", predicaoRpart == "nao_eleito" ) %>% nrow()
TPAdaboostTrain <- treino %>% filter(situacao_final == "eleito", predicaoAdaboost == "eleito") %>% nrow()
TNAdaboostTrain <- treino %>% filter(situacao_final == "nao_eleito" , predicaoAdaboost == "nao_eleito" ) %>% nrow()
FPAdaboostTrain <- treino %>% filter(situacao_final == "nao_eleito" , predicaoAdaboost == "eleito") %>% nrow() 
FNAdaboostTrain <- treino %>% filter(situacao_final == "eleito", predicaoAdaboost == "nao_eleito" ) %>% nrow()
TPGlmValidation <- validacao %>% filter(situacao_final == "eleito", predicaoGlm == "eleito") %>% nrow()
TNGlmValidation <- validacao %>% filter(situacao_final == "nao_eleito" , predicaoGlm == "nao_eleito" ) %>% nrow()
FPGlmValidation <- validacao %>% filter(situacao_final == "nao_eleito" , predicaoGlm == "eleito") %>% nrow()
FNGlmValidation <- validacao %>% filter(situacao_final == "eleito", predicaoGlm == "nao_eleito" ) %>% nrow()
TPRpartValidation <- validacao %>% filter(situacao_final == "eleito", predicaoRpart == "eleito") %>% nrow()
TNRpartValidation <- validacao %>% filter(situacao_final == "nao_eleito" , predicaoRpart == "nao_eleito" ) %>% nrow()
FPRpartValidation <- validacao %>% filter(situacao_final == "nao_eleito" , predicaoRpart == "eleito") %>% nrow() 
FNRpartValidation <- validacao %>% filter(situacao_final == "eleito", predicaoRpart == "nao_eleito" ) %>% nrow()
TPAdaboostValidation <- validacao %>% filter(situacao_final == "eleito", predicaoAdaboost == "eleito") %>% nrow()
TNAdaboostValidation <- validacao %>% filter(situacao_final == "nao_eleito" , predicaoAdaboost == "nao_eleito" ) %>% nrow()
FPAdaboostValidation <- validacao %>% filter(situacao_final == "nao_eleito" , predicaoAdaboost == "eleito") %>% nrow() 
FNAdaboostValidation <- validacao %>% filter(situacao_final == "eleito", predicaoAdaboost == "nao_eleito" ) %>% nrow()
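# accuracy, precision, recall and F-measure (in %) for each model, on the training and validation sets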
accuracyGlmTrain <- (TPGlmTrain + TNGlmTrain)/(TPGlmTrain + TNGlmTrain + FPGlmTrain + FNGlmTrain) * 100
precisionGlmTrain <- TPGlmTrain / (TPGlmTrain + FPGlmTrain) * 100
recallGlmTrain <- TPGlmTrain / (TPGlmTrain + FNGlmTrain) * 100
fMeasureGlmTrain <- 2 * (precisionGlmTrain * recallGlmTrain) / (precisionGlmTrain + recallGlmTrain)
accuracyRpartTrain <- (TPRpartTrain + TNRpartTrain)/(TPRpartTrain + TNRpartTrain + FPRpartTrain + FNRpartTrain)  * 100
precisionRpartTrain <- TPRpartTrain / (TPRpartTrain + FPRpartTrain) * 100
recallRpartTrain <- TPRpartTrain / (TPRpartTrain + FNRpartTrain) * 100
fMeasureRpartTrain <- 2 * (precisionRpartTrain * recallRpartTrain) / (precisionRpartTrain + recallRpartTrain)
accuracyAdaboostTrain <- (TPAdaboostTrain + TNAdaboostTrain)/(TPAdaboostTrain + TNAdaboostTrain + FPAdaboostTrain + FNAdaboostTrain)  * 100
precisionAdaboostTrain <- TPAdaboostTrain / (TPAdaboostTrain + FPAdaboostTrain) * 100
recallAdaboostTrain <- TPAdaboostTrain / (TPAdaboostTrain + FNAdaboostTrain) * 100
fMeasureAdaboostTrain <- 2 * (precisionAdaboostTrain * recallAdaboostTrain) / (precisionAdaboostTrain + recallAdaboostTrain)
accuracyGlmValidation <- (TPGlmValidation + TNGlmValidation)/(TPGlmValidation + TNGlmValidation + FPGlmValidation + FNGlmValidation) * 100
precisionGlmValidation <- TPGlmValidation / (TPGlmValidation + FPGlmValidation) * 100
recallGlmValidation <- TPGlmValidation / (TPGlmValidation + FNGlmValidation) * 100
fMeasureGlmValidation <- 2 * (precisionGlmValidation * recallGlmValidation) / (precisionGlmValidation + recallGlmValidation)
accuracyRpartValidation <- (TPRpartValidation + TNRpartValidation)/(TPRpartValidation + TNRpartValidation + FPRpartValidation + FNRpartValidation)  * 100
precisionRpartValidation <- TPRpartValidation / (TPRpartValidation + FPRpartValidation) * 100
recallRpartValidation <- TPRpartValidation / (TPRpartValidation + FNRpartValidation) * 100
fMeasureRpartValidation <- 2 * (precisionRpartValidation * recallRpartValidation) / (precisionRpartValidation + recallRpartValidation)
accuracyAdaboostValidation <- (TPAdaboostValidation + TNAdaboostValidation)/(TPAdaboostValidation + TNAdaboostValidation + FPAdaboostValidation + FNAdaboostValidation)  * 100
precisionAdaboostValidation <- TPAdaboostValidation / (TPAdaboostValidation + FPAdaboostValidation) * 100
recallAdaboostValidation <- TPAdaboostValidation / (TPAdaboostValidation + FNAdaboostValidation) * 100
fMeasureAdaboostValidation <- 2 * (precisionAdaboostValidation * recallAdaboostValidation) / (precisionAdaboostValidation + recallAdaboostValidation)
print('Treino')
[1] "Treino"
print('Regressão logística')
[1] "Regressão logística"
sprintf(" -acurácia: %.2f%%", accuracyGlmTrain)
[1] " -acurácia: 93.52%"
sprintf(" -precisão: %.2f%%", precisionGlmTrain)
[1] " -precisão: 73.62%"
sprintf(" -recall: %.2f%%", recallGlmTrain)
[1] " -recall: 55.45%"
sprintf(" -f-measure: %.2f%%", fMeasureGlmTrain)
[1] " -f-measure: 63.25%"
print('Árvore de decisão')
[1] "Árvore de decisão"
sprintf(" -acurácia: %.2f%%", accuracyRpartTrain)
[1] " -acurácia: 95.52%"
sprintf(" -precisão: %.2f%%", precisionRpartTrain)
[1] " -precisão: 80.57%"
sprintf(" -recall: %.2f%%", recallRpartTrain)
[1] " -recall: 73.08%"
sprintf(" -f-measure: %.2f%%", fMeasureRpartTrain)
[1] " -f-measure: 76.64%"
print('Adaboost')
[1] "Adaboost"
sprintf(" -acurácia: %.2f%%", accuracyAdaboostTrain)
[1] " -acurácia: 98.55%"
sprintf(" -precisão: %.2f%%", precisionAdaboostTrain)
[1] " -precisão: 93.20%"
sprintf(" -recall: %.2f%%", recallAdaboostTrain)
[1] " -recall: 92.31%"
sprintf(" -f-measure: %.2f%%", fMeasureAdaboostTrain)
[1] " -f-measure: 92.75%"
print('Validação')
[1] "Validação"
print('Regressão logística')
[1] "Regressão logística"
sprintf(" -acurácia: %.2f%%", accuracyGlmValidation)
[1] " -acurácia: 94.29%"
sprintf(" -precisão: %.2f%%", precisionGlmValidation)
[1] " -precisão: 80.82%"
sprintf(" -recall: %.2f%%", recallGlmValidation)
[1] " -recall: 56.73%"
sprintf(" -f-measure: %.2f%%", fMeasureGlmValidation)
[1] " -f-measure: 66.67%"
print('Árvore de decisão')
[1] "Árvore de decisão"
sprintf(" -acurácia: %.2f%%", accuracyRpartValidation)
[1] " -acurácia: 95.26%"
sprintf(" -precisão: %.2f%%", precisionRpartValidation)
[1] " -precisão: 80.90%"
sprintf(" -recall: %.2f%%", recallRpartValidation)
[1] " -recall: 69.23%"
sprintf(" -f-measure: %.2f%%", fMeasureRpartValidation)
[1] " -f-measure: 74.61%"
print('Adaboost')
[1] "Adaboost"
sprintf(" -acurácia: %.2f%%", accuracyAdaboostValidation)
[1] " -acurácia: 98.55%"
sprintf(" -precisão: %.2f%%", precisionAdaboostValidation)
[1] " -precisão: 95.88%"
sprintf(" -recall: %.2f%%", recallAdaboostValidation)
[1] " -recall: 89.42%"
sprintf(" -f-measure: %.2f%%", fMeasureAdaboostValidation)
[1] " -f-measure: 92.54%"

Evaluating the training results from question 2 and the validation results shown above, we can first check whether there was overfitting: the glm model reached 93.52% and 94.29% accuracy, rpart reached 95.52% and 95.26%, and adaboost 98.55% and 98.55%, on training and validation respectively.

For logistic regression and the rpart model, the training and validation values are very close, which indicates there was no overfitting, assuming of course that the training and validation sets have a reasonable distribution and variety of examples.

For the adaboost model there was no gap between training and validation at all, and the results were very good, so adaboost proved to be the best model in this initial experiment.

4. Interpret the outputs of the models. Which attributes seem to be the most important according to each model? Create at least one new attribute that is not in the original data and study its impact.

# Logistic regression
ggplot(varImp(model_glm))

# Rpart
ggplot(varImp(model_rpart))

# Adaboost
ggplot(varImp(model_adaboost))
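Besides the plots, the underlying numbers can be inspected directly: varImp() stores a data frame of scaled importances, which is where the rankings discussed below are read from. A sketch for the logistic regression model:

# numeric view of the variable importances (a sketch)
imp_glm <- varImp(model_glm)$importance                # data frame with an "Overall" column
imp_glm <- imp_glm[order(imp_glm$Overall, decreasing = TRUE), , drop = FALSE]
head(imp_glm, 3)   # 3 most important variables
tail(imp_glm, 3)   # 3 least important variables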

It is quite interesting to note that each model assigns different importances to the variables; the most important variable is not even the same across models. This emphasizes that there is probably no generic data preparation that works well for every model: it depends mainly on which model will be used.

For the logistic regression model, the 3 most important variables were despesa_max_campanha, media_despesa and sexoMasculino, and the 3 least important were descricao_cor_racaIndigena, partidoPCO and "grauLÊ E Escreve".

For the rpart model the 3 most important were total_receita, total_despesa and quantidade_doacoes, and the 3 least important were partidoPSDB, partidoPCO and partidoPMDB.

For the adaboost model, the 3 most important were total_receita, total_despesa and quantidade_despesas, while the 3 least important were partido, idade and sexo.

This analysis shows a few more interesting things: for instance, 8 of the 9 most important variables are related to money invested in the campaign, which makes a lot of sense, and in every model the party variables seem largely irrelevant.

To create a new attribute and measure its usefulness we will use the rpart and logistic regression models, since they performed well and train very quickly.

The new attribute will be "idadeBin", which is simply the idade attribute mapped onto an interval, in this case using the breaks [18, 38, 58, 78, 98, 200]. I chose these breaks to cover the candidates' ages in 20-year intervals from the youngest to the oldest, and thus to check whether the age range, seen as a grouped value, is more informative than using the raw ages.

# new attribute (idadeBin), also applied to the test data
train$idadeBin <- cut(train$idade, breaks = c(18, 38, 58, 78, 98, 200), labels=FALSE)
test$idadeBin <- cut(test$idade, breaks = c(18, 38, 58, 78, 98, 200), labels=FALSE)
train_dadosFiltradosEngineered <- train %>% select(partido, quantidade_doacoes, quantidade_doadores, total_receita, media_receita, recursos_de_outros_candidatos.comites, recursos_de_partidos, recursos_de_pessoas_físicas, recursos_de_pessoas_juridicas, recursos_proprios, quantidade_despesas, quantidade_fornecedores, total_despesa, media_despesa, idade, sexo, grau, descricao_cor_raca, despesa_max_campanha, situacao_final, idadeBin)
# variables used in the model
formulaEngineered = as.formula(situacao_final ~ partido + quantidade_doacoes + quantidade_doadores + total_receita + media_receita + recursos_de_outros_candidatos.comites + recursos_de_partidos + recursos_de_pessoas_físicas + recursos_de_pessoas_juridicas + recursos_proprios + quantidade_despesas + quantidade_fornecedores + total_despesa + media_despesa + idade + sexo + grau + descricao_cor_raca + despesa_max_campanha + idadeBin)
# data partition
dataPartitionEngineered <- createDataPartition(y = train_dadosFiltradosEngineered$situacao_final, p=0.75, list=FALSE)
treinoEngineered <- train_dadosFiltradosEngineered[ dataPartitionEngineered, ]
validacaoEngineered <- train_dadosFiltradosEngineered[ -dataPartitionEngineered, ]
# Rpart
model_rpart2 <- train(formulaEngineered,
                 data=treinoEngineered,
                 method = "rpart",
                 trControl = fitControl,
                 cp=0.001,
                 maxdepth=25)

# Logistic regression
model_glm2 <- train(formulaEngineered,
                 data = treinoEngineered,
                 method="glm",
                 trControl = fitControl,
                 family="binomial",
                 na.action = na.omit)
# variable importance
ggplot(varImp(model_rpart2))

ggplot(varImp(model_glm2))

In these two cases we can see exactly what was said before: the new attribute did not help the rpart model at all, while for the logistic regression model it had a positive impact; judging from the plot it appears to be almost 3 times more important than the original attribute. In other words, the value of a newly engineered attribute depends on the model in which it will be used.

5. Submit your best models to the Kaggle competition. Suggestions for improving the model:

1. Try other models (e.g. SVM, RandomForests and GradientBoosting)

2. Create at least one new attribute (optional).

One of the simplest ways to adjust the data attributes is to replace missing values with some value that makes sense in the context of the attribute; to do that, we first check which attributes have missing values.

# replacing values marked as "#NULO" with NA, and numeric values marked as 0 with NA (this is not applied to every numeric column, since for some of them a value of 0 makes sense)
# note: this processing must be done for both the training and test data
train[train == '#NULO'] <- NA
test[test == '#NULO'] <- NA
# counting the number of missing values for each attribute
sapply(train, function(y) sum(length(which(is.na(y)))))
                                   ID                                  nome                       numero_cadidato 
                                    0                                     0                                     0 
                                   UF                               partido               setor_economico_receita 
                                    0                                     0                                  2140 
                   quantidade_doacoes                   quantidade_doadores                         total_receita 
                                    0                                     0                                     0 
                        media_receita recursos_de_outros_candidatos.comites                  recursos_de_partidos 
                                    0                                     0                                     0 
          recursos_de_pessoas_físicas         recursos_de_pessoas_juridicas                     recursos_proprios 
                                    0                                     0                                     0 
                  quantidade_despesas               quantidade_fornecedores                         total_despesa 
                                    0                                     0                                     0 
                        media_despesa               setor_economico_despesa                                 idade 
                                    0                                  2310                                     0 
                                 sexo                                  grau                          estado_civil 
                                    0                                     0                                     0 
                   descricao_ocupacao                    descricao_cor_raca                  despesa_max_campanha 
                                    0                                     0                                     0 
                       situacao_final                              idadeBin 
                                    0                                     0 

Coincidentally, the attributes with missing values are exactly the ones that had already been removed. For both of them it is worth noting that the number of missing values is much larger than for the other attributes, so it would be hard to impute those values reliably without ending up hurting the model.

As observed in question 1, there is a large class imbalance in the dataset, and this is probably hurting our model. A simple way to improve it would be to balance the classes (elected and not elected); the two simplest approaches are oversampling (adding new examples of the minority class) and undersampling (removing examples of the majority class).
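caret also exposes these two strategies as standalone helpers, downSample() and upSample(), which makes it easy to see how the class counts change before wiring the sampling into trainControl below (a sketch on the filtered training data, assuming situacao_final was read as a factor):

# effect of undersampling and oversampling on the class counts (a sketch)
under <- downSample(x = select(train_dadosFiltrados, -situacao_final),
                    y = train_dadosFiltrados$situacao_final, yname = "situacao_final")
over  <- upSample(x = select(train_dadosFiltrados, -situacao_final),
                  y = train_dadosFiltrados$situacao_final, yname = "situacao_final")
table(under$situacao_final)
table(over$situacao_final)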

# balancing the classes
# K-fold cross-validation with undersampling
fitControlUnder <- trainControl(method = "cv",
                    number = 10,
                    search= "random",
                    sampling = "down")
# K-fold cross-validation with oversampling
fitControlOver <- trainControl(method = "cv",
                    number = 10,
                    search= "random",
                    sampling = "up")
fitControlRose <- trainControl(method = "cv",
                    number = 10,
                    search= "random",
                    sampling = "rose")
fitControlSmote <- trainControl(method = "cv",
                    number = 10,
                    search= "random",
                    sampling = "smote")
# removing unused attributes
trainUpdated <- train %>% select(partido, quantidade_doacoes, quantidade_doadores, total_receita, media_receita, recursos_de_outros_candidatos.comites, recursos_de_partidos, recursos_de_pessoas_físicas, recursos_de_pessoas_juridicas, recursos_proprios, quantidade_despesas, quantidade_fornecedores, total_despesa, media_despesa, idade, sexo, grau, descricao_cor_raca, despesa_max_campanha, situacao_final, idadeBin)
testUpdated <- test %>% select(partido, quantidade_doacoes, quantidade_doadores, total_receita, media_receita, recursos_de_outros_candidatos.comites, recursos_de_partidos, recursos_de_pessoas_físicas, recursos_de_pessoas_juridicas, recursos_proprios, quantidade_despesas, quantidade_fornecedores, total_despesa, media_despesa, idade, sexo, grau, descricao_cor_raca, despesa_max_campanha, idadeBin)
# converting the categorical values to one-hot encoding
trainUpdated <- dummy.data.frame(trainUpdated, names=c('estado_civil'), sep="_")
trainUpdated <- dummy.data.frame(trainUpdated, names=c('sexo'), sep="_")
trainUpdated <- dummy.data.frame(trainUpdated, names=c('grau'), sep="_")
trainUpdated <- dummy.data.frame(trainUpdated, names=c('descricao_cor_raca'), sep="_")
testUpdated <- dummy.data.frame(testUpdated, names=c('estado_civil'), sep="_")
testUpdated <- dummy.data.frame(testUpdated, names=c('sexo'), sep="_")
testUpdated <- dummy.data.frame(testUpdated, names=c('grau'), sep="_")
testUpdated <- dummy.data.frame(testUpdated, names=c('descricao_cor_raca'), sep="_")
# variables used in the model
formulaEngineered = as.formula(situacao_final ~ partido + quantidade_doacoes + quantidade_doadores + total_receita + media_receita + recursos_de_outros_candidatos.comites + recursos_de_partidos + recursos_de_pessoas_físicas + recursos_de_pessoas_juridicas + recursos_proprios + quantidade_despesas + quantidade_fornecedores + total_despesa + media_despesa + idade + sexo + grau + descricao_cor_raca + despesa_max_campanha + idadeBin)
# data partition
dataPartitionEngineered <- createDataPartition(y = trainUpdated$situacao_final, p=0.80, list=FALSE)
treinoEngineered <- trainUpdated[ dataPartitionEngineered, ]
validacaoEngineered <- trainUpdated[ -dataPartitionEngineered, ]
# class-balanced models with the new data
model <- train(situacao_final ~ .,
                 data=treinoEngineered,
                 method = "adaboost",
                 preProcess = c("scale", "center"),
                 trControl = fitControlUnder)
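Before submitting to Kaggle, the class-balanced model can be checked on the held-out split, reusing confusionMatrix() from question 3 (a sketch):

# evaluate the class-balanced adaboost model on the validation split (a sketch)
predicaoBalanceada <- predict(model, validacaoEngineered)
confusionMatrix(data = predicaoBalanceada,
                reference = validacaoEngineered$situacao_final,
                positive = "eleito",
                mode = "prec_recall")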

Link to the data: https://www.kaggle.com/c/ufcg-ad2-20172-lab3
