I used the lung dataset from the survival package, with sex as the only predictor.
library(xgboost)
library("survival")
data("lung")
lung[,"y"] = ifelse(lung[,"status"]==1,-lung[,'time'],lung[,'time'])
param <- list(objective = "survival:cox",
              eta = 0.01,
              max_depth = 2,
              subsample = 0.5,
              nthread = 2)
df_train <- xgb.DMatrix(as.matrix(lung[, "sex"]), label = lung$y)
bstSparse <- xgb.cv(data = df_train, params = param, nrounds = 10, nfold = 5,
                    showsd = FALSE, prediction = TRUE)
## [1] train-cox-nloglik:4.325408 test-cox-nloglik:2.980906
## [2] train-cox-nloglik:4.324707 test-cox-nloglik:2.980228
## [3] train-cox-nloglik:4.324192 test-cox-nloglik:2.979716
## [4] train-cox-nloglik:4.323563 test-cox-nloglik:2.979122
## [5] train-cox-nloglik:4.322896 test-cox-nloglik:2.978465
## [6] train-cox-nloglik:4.322362 test-cox-nloglik:2.977881
## [7] train-cox-nloglik:4.321909 test-cox-nloglik:2.977393
## [8] train-cox-nloglik:4.321303 test-cox-nloglik:2.976775
## [9] train-cox-nloglik:4.320720 test-cox-nloglik:2.976190
## [10] train-cox-nloglik:4.320322 test-cox-nloglik:2.975753
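Since prediction = TRUE was set, the out-of-fold predictions can be pulled straight from the cross-validation object and lined up with the labels. A minimal sketch, assuming your xgboost version exposes them as the pred element of the xgb.cv result:

# Out-of-fold predictions, one per row of lung, in the original row order
oof <- bstSparse$pred
head(data.frame(prediction = oof, true = lung$y))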
Sample output in the console, printing each prediction next to its true label:
Iter Number : 94 Prediction: -0.682183 True: 269
Iter Number : 95 Prediction: -0.682183 True: 270
Iter Number : 96 Prediction: -0.682183 True: -279
Iter Number : 97 Prediction: -0.682183 True: 284
Iter Number : 98 Prediction: -0.682183 True: 285
Iter Number : 99 Prediction: -0.703774 True: 285
Iter Number : 100 Prediction: -0.682183 True: 286
Iter Number : 101 Prediction: -0.682183 True: 288
Iter Number : 102 Prediction: -0.682183 True: 291
Iter Number : 103 Prediction: -0.703774 True: -292
Iter Number : 104 Prediction: -0.682183 True: -292
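The code that produced this output is not shown above; the following is only a sketch of one way to print prediction/label pairs in that format, again assuming the out-of-fold predictions are in bstSparse$pred:

# Sketch only: loop over the predictions and print each next to its label
for (i in seq_along(bstSparse$pred)) {
  cat("Iter Number :", i, "Prediction:", bstSparse$pred[i], "True:", lung$y[i], "\n")
}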