Training Models from GA-Selected Features for Gravity Prediction

In previous steps, features were selected using a genetic algorithm (GA) to assess their suitability.

Different GA strategies for fitness evaluation were tested, namely the nearcenter and randomforest methods.

Feature selection was prepared according to document paso2.

suppressPackageStartupMessages(library(googleVis))
suppressPackageStartupMessages(library(xtable))
suppressPackageStartupMessages(library(Peaks))
suppressPackageStartupMessages(library(magic))
suppressPackageStartupMessages(library(segmented))
suppressPackageStartupMessages(library(fftw))
suppressPackageStartupMessages(library(FITSio))
suppressPackageStartupMessages(library(stringr))
suppressPackageStartupMessages(library(utils))
suppressPackageStartupMessages(library(e1071))
suppressPackageStartupMessages(library(quantmod))
suppressPackageStartupMessages(library(JADE))
suppressPackageStartupMessages(library(zoo))
suppressPackageStartupMessages(library(plyr))
suppressPackageStartupMessages(library(doMC))
suppressPackageStartupMessages(library(multicore))
suppressPackageStartupMessages(library(parallel))
suppressPackageStartupMessages(library(foreach))
suppressPackageStartupMessages(library(compiler))
suppressPackageStartupMessages(library(galgo))

## Feature extraction as defined by the GA

For feature selection the BT-Settl 2012 library from France Allard was chosen, and a wavelength reduction was performed to make it compatible with the IPAC data.
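
For illustration only, this kind of wavelength reduction amounts to interpolating the high-resolution model spectra onto the coarser target grid. A minimal sketch, assuming a two-column wavelength/flux matrix spec and a target grid target_wl (both hypothetical names, not the original pipeline):

# Sketch: resample a model spectrum onto a coarser wavelength grid by
# linear interpolation ('spec' and 'target_wl' are hypothetical objects).
resample_spec <- function(spec, target_wl) {
    flux <- approx(x = spec[, 1], y = spec[, 2], xout = target_wl)$y
    cbind(wavelength = target_wl, flux = flux)
}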

# Processing of the temperature (Tª) GA run
setwd("~/git/M_sel")
# Exploratory galgo diagnostics (commented out; run once interactively):
# load('nearcenter_ALL_g_5_900.RData')
# plot(bb.nc.2, type='fitness')
# plot(bb.nc.2, type='fitness', filter='nosolutions')
# plot(bb.nc.2, type='confusion')
# cpm <- classPredictionMatrix(bb.nc.2)
# cm <- confusionMatrix(bb.nc.2, cpm)
# sec <- sensitivityClass(bb.nc.2, cm)
# spc <- specificityClass(bb.nc.2, cm)
# plot(bb.nc.2, type='confusion', set=c(1,0), splits=1)
# plot(bb.nc.2, type='confusion', set=c(1,0), splits=1,
#      chromosomes=list(bb.nc.2$bestChromosomes[[1]]))
# plot(bb.nc.2, type='generankstability')
# rchr <- lapply(bb.nc.2$bestChromosomes[1:300],
#                robustGeneBackwardElimination, bb.nc.2, result='shortest')
# fsm <- forwardSelectionModels(bb.nc.2, plot=FALSE)
# fsm$models
# rownames(ALL)[fsm$models[[1]]]
load("m_g-settl_5_rf.RData")
datos <- `m_g-settl_5_rf`
features <- list()
features$G <- c("5:12_13:24", "46:60_61:72", "52:59_64:71", "59:66_70:77", "67:74_55:62", 
    "88:95_97:104", "97:103_88:95", "97:104_106:113", "106:113_97:104", "114:123_13:29", 
    "138:147_127:134")
output <- 12
# 

Now we load the original data, bp_clean and bq_clean, and process the features according to the feature list. Each feature string, e.g. "5:12_13:24", encodes two index ranges into the wavelength grid: the signal band before the underscore and the continuum band after it.

In order to generate the features we will use the formula \( \int_{\lambda_{1}}^{\lambda_{2}} \left( 1 - \frac{F_{line}}{F_{cont}} \right) d\lambda \), as depicted in formula (1) of the above reference.
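
Expanding the integral makes explicit the two quantities the code below computes for each band, the band width and the integrated flux:

\[ \int_{\lambda_{1}}^{\lambda_{2}} \left( 1 - \frac{F_{line}}{F_{cont}} \right) d\lambda = \left( \lambda_{2} - \lambda_{1} \right) - \frac{1}{F_{cont}} \int_{\lambda_{1}}^{\lambda_{2}} F_{line} \, d\lambda \]

where \( F_{cont} \) is taken as the mean flux over the continuum band.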

The band wavelength ranges are defined by the feature list above, but \( F_{cont} \) is not fixed in advance. Therefore a genetic algorithm was run, testing different candidate continuum regions and looking for the best match to explain the spectral parameters through the previous feature functions.

Once the signal and continuum regions were identified, models were set up and assessed by a cross-validation procedure. Ideas for the implementation were taken from http://moderntoolmaking.blogspot.com.es/2013/03/caretensemble-classification-example.html

# Load bp_clean (BT-Settl) & bq_clean (IPAC)
load("~/git/M_prep/M_prep_cleanip_BT-Settl.RData")
rm(xtmp)
# Find the features to extract from the training set for T
signal <- unlist(lapply(features$G, function(x) {
    a <- strsplit(x, "_")
    return(a[[1]][1])
}))
noise <- unlist(lapply(features$G, function(x) {
    a <- strsplit(x, "_")
    return(a[[1]][2])
}))
sn <- cbind(signal, noise)
int_spec <- function(x, idx, norm = 0) {
    # Select the (wavelength, flux) rows of the spectrum given by the
    # index range encoded in 'idx' (e.g. '5:12').
    y <- x$data[[1]][eval(parse(text = idx)), ]
    xz <- diff(as.numeric(y[, 1]), 1)
    yz <- as.numeric(y[, 2])
    if (norm > 0) {
        # norm > 0: return the band width, i.e. the integral of dlambda.
        z <- sum(xz)
    } else {
        # norm == 0: trapezoidal integral of the flux over the band.
        z <- sum(xz * rollmean(yz, 2))
    }
    return(z)
}
# 
feature_extr <- function(sn, bp) {
    sig <- sn[1]
    noi <- sn[2]
    # Mean continuum flux: integrated flux over the continuum band
    # divided by its width.
    Fcont <- unlist(lapply(bp, int_spec, noi, 0))/unlist(lapply(bp, int_spec, 
        noi, 1))
    # Pseudo-equivalent width: band width minus integrated flux / Fcont,
    # the discretized form of the formula above.
    fea <- unlist(lapply(bp, int_spec, sig, 1)) - unlist(lapply(bp, int_spec, 
        sig, 0))/Fcont
    return(fea)
}
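# Hypothetical spot check (not part of the original run): the pseudo-
# equivalent width of the first band for the first model spectrum.
# feature_extr(sn[1, ], bp_clean[1])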
xx <- apply(sn, 1, feature_extr, bp_clean)
colnames(xx) <- as.character(sn[, 1])
colnames(xx) <- str_replace(paste("X", colnames(xx), sep = ""), ":", ".")
# Append the target variable G: the sign-flipped second stellar parameter.
xx <- cbind(xx, unlist(lapply(bp_clean, function(x) {
    return(-1 * x$stellarp[2])
})))
colnames(xx)[ncol(xx)] <- "G"
# Report the wavelength ranges covered by each feature
newfea <- cbind(t(apply(sn, 1, function(x) {
    return(range(bp_clean[[1]]$data[[1]][eval(parse(text = x[1])), 1]))
})), t(apply(sn, 1, function(x) {
    return(range(bp_clean[[1]]$data[[1]][eval(parse(text = x[2])), 1]))
})))
colnames(newfea) <- c("Signal_from", "Signal_To", "Cont_From", "Cont_To")
print(xtable(newfea), type = "html")
The wavelength ranges (in Å) of the signal and continuum bands:

| Band | Signal_from | Signal_To | Cont_From | Cont_To |
|-----:|------------:|----------:|----------:|--------:|
|    1 |     8475.40 |   8500.60 |   8504.20 | 8543.80 |
|    2 |     8623.00 |   8673.40 |   8677.00 | 8716.60 |
|    3 |     8644.60 |   8669.80 |   8687.80 | 8713.00 |
|    4 |     8669.80 |   8695.00 |   8709.40 | 8734.60 |
|    5 |     8698.60 |   8723.80 |   8655.40 | 8680.60 |
|    6 |     8774.20 |   8799.40 |   8806.60 | 8831.80 |
|    7 |     8806.60 |   8828.20 |   8774.20 | 8799.40 |
|    8 |     8806.60 |   8831.80 |   8839.00 | 8864.20 |
|    9 |     8839.00 |   8864.20 |   8806.60 | 8831.80 |
|   10 |     8867.80 |   8900.20 |   8504.20 | 8561.80 |
|   11 |     8954.20 |   8986.60 |   8914.60 | 8939.80 |
# 

## Regression and modeling

Let's build several models and compare their performance. Broadly, we randomly split the learning set, train each model by ten-fold cross-validation, and then test the models against the unseen data; we will also test ensembling.

# Setup
gc(reset = TRUE)
##             used   (Mb) gc trigger   (Mb)  max used   (Mb)
## Ncells    803031   42.9    1265230   67.6    803031   42.9
## Vcells 520515201 3971.3  752488759 5741.1 520515201 3971.3
set.seed(42)  #From random.org

# Libraries
library(caret)
## Loading required package: lattice
## 
## Attaching package: 'lattice'
## 
## The following object is masked from 'package:multicore':
## 
##     parallel
## 
## Loading required package: ggplot2
## 
## Attaching package: 'caret'
## 
## The following objects are masked from 'package:galgo':
## 
##     best, confusionMatrix
library(devtools)
## 
## Attaching package: 'devtools'
## 
## The following objects are masked from 'package:R.oo':
## 
##     check, unload
# Only once: install_github('caretEnsemble', 'zachmayer')  # Install Zach
# Mayer's caretEnsemble package. Code adapted from the author's post.
library(caretEnsemble)

# Data
library(mlbench)
xx <- as.data.frame(xx)
X <- xx[, -ncol(xx)]
rownames(X) <- 1:nrow(X)
X <- data.frame(X)
Y <- xx[, ncol(xx)]

# Split train/test
train <- runif(nrow(X)) <= 0.66

# Set up CV folds (returnData=FALSE saves some space). The shared 'index'
# guarantees every model is trained on identical folds, which the
# ensembling below requires.
folds = 10
repeats = 1
myControl <- trainControl(method = "cv", number = folds, repeats = repeats, 
    returnResamp = "none", returnData = FALSE, savePredictions = TRUE, verboseIter = FALSE, 
    allowParallel = TRUE, index = createMultiFolds(Y[train], k = folds, times = repeats))
# Train some models
model1 <- train(X[train, ], Y[train], method = "gbm", trControl = myControl, 
    tuneGrid = expand.grid(.n.trees = 500, .interaction.depth = 15, .shrinkage = 0.01), 
    verbose = FALSE)
## Loading required package: gbm
## Loading required package: survival
## Loading required package: splines
## 
## Attaching package: 'survival'
## 
## The following object is masked from 'package:caret':
## 
##     cluster
## 
## Loaded gbm 2.1
model2 <- train(X[train, ], Y[train], method = "blackboost", trControl = myControl)
## Loading required package: party
## Loading required package: grid
## Loading required package: sandwich
## Loading required package: strucchange
## Loading required package: modeltools
## Loading required package: stats4
## 
## Attaching package: 'modeltools'
## 
## The following objects are masked from 'package:R.oo':
## 
##     clone, dimension
## 
## The following object is masked from 'package:plyr':
## 
##     empty
## 
## Loading required package: mboost
## This is mboost 2.2-3. See 'package?mboost' and the NEWS file
## for a complete list of changes.
## Note: The default for the computation of the degrees of freedom has changed.
##       For details see section 'Global Options' of '?bols'.
## 
## Attaching package: 'mboost'
## 
## The following object is masked from 'package:ggplot2':
## 
##     %+%
model3 <- train(X[train, ], Y[train], method = "rf", trControl = myControl)
## Loading required package: randomForest
## randomForest 4.6-7
## Type rfNews() to see new features/changes/bug fixes.
model4 <- train(X[train, ], Y[train], method = "mlpWeightDecay", trControl = myControl, 
    trace = FALSE)
## Loading required package: RSNNS
## Loading required package: Rcpp
## 
## Attaching package: 'RSNNS'
## 
## The following objects are masked from 'package:caret':
## 
##     confusionMatrix, train
## 
## The following object is masked from 'package:galgo':
## 
##     confusionMatrix
model5 <- train(X[train, ], Y[train], method = "ppr", trControl = myControl)
model6 <- train(X[train, ], Y[train], method = "earth", trControl = myControl)
## Loading required package: earth
## Loading required package: plotmo
## Loading required package: plotrix
model7 <- train(X[train, ], Y[train], method = "glm", trControl = myControl)
model8 <- train(X[train, ], Y[train], method = "svmRadial", trControl = myControl)
## Loading required package: kernlab
## 
## Attaching package: 'kernlab'
## 
## The following object is masked from 'package:modeltools':
## 
##     prior
## 
## The following object is masked from 'package:galgo':
## 
##     scaling
model9 <- train(X[train, ], Y[train], method = "gam", trControl = myControl)
## Loading required package: mgcv
## Loading required package: nlme
## This is mgcv 1.7-28. For overview type 'help("mgcv-package")'.
## 
## Attaching package: 'mgcv'
## 
## The following object is masked from 'package:magic':
## 
##     magic
model10 <- train(X[train, ], Y[train], method = "glmnet", trControl = myControl)
## Loading required package: glmnet
## Loading required package: Matrix
## Loaded glmnet 1.9-5
model11 <- train(X[train, ], Y[train], method = "nnet", trControl = myControl, 
    trace = FALSE, maxit = 10000, reltol = 1e-11, abstol = 1e-06)
## Loading required package: nnet

# Make a list of all the models
all.models <- list(model1, model2, model3, model4, model5, model6, model7, model8, 
    model9, model10, model11)
names(all.models) <- sapply(all.models, function(x) x$method)
sort(sapply(all.models, function(x) min(x$results$RMSE)))
##             rf            gbm            ppr          earth mlpWeightDecay 
##         0.3308         0.3569         0.3621         0.3832         0.3887 
##      svmRadial            glm     blackboost            gam         glmnet 
##         0.4427         0.4765         0.5166         0.5496         0.6011 
##           nnet 
##         3.5711

# Make a greedy ensemble - currently can only use RMSE
greedy <- caretEnsemble(all.models, iter = 1000L)
## Loading required package: pbapply
sort(greedy$weights, decreasing = TRUE)
##             rf            ppr mlpWeightDecay          earth            gam 
##          0.376          0.270          0.264          0.066          0.024
greedy$error
## RMSE 
##  0.3
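
For intuition, the greedy optimizer is essentially Caruana-style forward selection: repeatedly add (with replacement) whichever model most reduces the ensemble RMSE on the cross-validated predictions, and use the selection frequencies as weights. A minimal sketch, where cvPreds (a matrix of cross-validated predictions, one column per model) and yCV (the matching observed values) are hypothetical stand-ins:

# Greedy forward-selection weighting (sketch; 'cvPreds' and 'yCV' are
# hypothetical objects, not created in this document).
greedy_weights <- function(cvPreds, yCV, iter = 1000L) {
    rmse <- function(p) sqrt(mean((p - yCV)^2))
    counts <- integer(ncol(cvPreds))
    ens <- numeric(nrow(cvPreds))
    for (i in seq_len(iter)) {
        # RMSE of the running ensemble if model j were added once more
        cand <- apply(cvPreds, 2, function(p) rmse((ens * (i - 1) + p)/i))
        best <- which.min(cand)
        counts[best] <- counts[best] + 1L
        ens <- (ens * (i - 1) + cvPreds[, best])/i
    }
    counts/iter  # selection frequencies = ensemble weights
}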

# Make a linear regression ensemble
linear <- caretStack(all.models, method = "glm", trControl = trainControl(method = "cv"))
summary(linear$ens_model$finalModel)
## 
## Call:
## NULL
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.1893  -0.1133   0.0164   0.1537   0.8054  
## 
## Coefficients: (1 not defined because of singularities)
##                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)    -0.11092    0.15490   -0.72  0.47471    
## gbm            -0.11848    0.15409   -0.77  0.44279    
## blackboost     -0.21035    0.09544   -2.20  0.02855 *  
## rf              0.67136    0.16439    4.08  6.2e-05 ***
## mlpWeightDecay  0.24040    0.06408    3.75  0.00022 ***
## ppr             0.28412    0.06550    4.34  2.2e-05 ***
## earth           0.12043    0.07718    1.56  0.12010    
## glm            -0.08783    0.07833   -1.12  0.26336    
## svmRadial       0.12569    0.08439    1.49  0.13779    
## gam             0.00444    0.04100    0.11  0.91381    
## glmnet         -0.00491    0.08156   -0.06  0.95203    
## nnet                 NA         NA      NA       NA    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for gaussian family taken to be 0.08699)
## 
##     Null deviance: 216.840  on 230  degrees of freedom
## Residual deviance:  19.138  on 220  degrees of freedom
## AIC: 104.2
## 
## Number of Fisher Scoring iterations: 2
linear$error
##   parameter   RMSE Rsquared RMSESD RsquaredSD
## 1      none 0.3061   0.9073 0.1138    0.08031
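
What caretStack does here is plain stacking: a meta-model (here a glm) is fitted on the base models' cross-validated predictions. With the same hypothetical cvPreds and yCV as in the sketch above, the idea reduces to:

# Linear stacking (sketch): regress the observed values on the base
# models' cross-validated predictions ('cvPreds', 'yCV' hypothetical).
meta <- lm(yCV ~ ., data = as.data.frame(cvPreds))
# New predictions then combine the base models' predictions on new data,
# e.g. predict(meta, newdata = as.data.frame(newPreds))  # hypothetical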

# Predict for test set:
preds <- data.frame(sapply(all.models, predict, newdata = X[!train, ]))
preds$ENS_greedy <- predict(greedy, newdata = X[!train, ])
preds$ENS_linear <- predict(linear, newdata = X[!train, ])
sort(sqrt(colMeans((preds - Y[!train])^2)))
##     ENS_linear     ENS_greedy             rf mlpWeightDecay            gbm 
##         0.2366         0.2514         0.2913         0.3027         0.3092 
##            gam          earth            ppr     blackboost      svmRadial 
##         0.3562         0.3816         0.4512         0.4640         0.4875 
##            glm         glmnet           nnet 
##         0.5028         0.5335         3.5655

plot(Y[!train], preds$ENS_greedy)
lines(c(-5, 10), c(-5, 10), col = 2)

[Figure (chunk lee02.1): observed values vs. greedy-ensemble predictions on the test set, with the identity line in red]

## IPAC validation

With the models and their ensembles built, it is time to predict on the IPAC dataset. We start by preparing the data:

# Extract the features for bq_clean (IPAC)
yy <- apply(sn, 1, feature_extr, bq_clean)
yy <- as.data.frame(yy)
colnames(yy) <- as.character(sn[, 1])
colnames(yy) <- str_replace(paste("X", colnames(yy), sep = ""), ":", ".")
# Predict for new dataset:
predf <- data.frame(sapply(all.models, predict, newdata = yy))
predf$ENS_greedy <- predict(greedy, newdata = yy)
predf$ENS_linear <- predict(linear, newdata = yy)

## Neighbourhood Analysis

To see how close or isolated the two sets (BT-Settl and IPAC) are, a PCA analysis is performed:

# 
zz <- rbind(X, yy)
pcaz <- prcomp(zz)
plot(pcaz$x[, 1], pcaz$x[, 2], pch = ".")
points(pcaz$x[(nrow(X) + 1):nrow(zz), 1], pcaz$x[(nrow(X) + 1):nrow(zz), 2], 
    pch = "x", col = 3)
points(pcaz$x[1:nrow(X), 1], pcaz$x[1:nrow(X), 2], pch = "+", col = 2)

[Figure (chunk lee04): first two principal components; BT-Settl points as red "+", IPAC points as green "x"]

The obtained predictions will be compared against those produced by a full-spectrum projection using ICA/JADE. That work was carried out by Ms. Prendes Gero at http://innova.uned.es, so we download her prediction datasets and compare them.

# Load the estimates (IPAC)
load("~/git/M_sel/IPAC_Belen_G.RData")
load("~/git/M_sel/belen_resul_T.RData")
rownames(dG) = rownames(dd)
# Align both prediction sets: match the spectrum names in bq_clean
# against the rownames of dG.
lnm <- unlist(lapply(bq_clean, function(x) {
    return(x$name)
}))
idx = apply(data.frame(rownames(dG)), 1, function(x, y) {
    return(which(x == y))
}, lnm)
XY = cbind(dG[, ncol(dG)], predf$ENS_greedy[idx])
plot(XY)

[Figure (chunk lee05): ENS_greedy predictions vs. the ICA/JADE estimates]

plot(XY, xlim = c(-5, 5), ylim = c(-5, 5))
lines(c(-5, 5), c(-5, 5), col = 2)

[Figure (chunk lee05): the same scatter restricted to [-5, 5], with the identity line in red]

hist((XY[, 1] - XY[, 2])/sd(XY[, 1] - XY[, 2]), breaks = 20)

[Figure (chunk lee05): histogram of the standardized residuals]

mer1 = mean(XY[, 1] - XY[, 2])
sder1 = sd(XY[, 1] - XY[, 2])

The residuals thus have a mean of -6.679 and a standard deviation of 1.5185.

What if we test the data windows proposed by the paper?

We have no estimation of the bands relevant to log(g).