We are assessing and comparing the impact of 1m and 30m DEM-based factors on landslide susceptibility assessment. Previously, we used logistic regression and random forest models to assess two scenarios: (1) only DEM-based causal factors were used (a total of 8), and (2) the DEM-based causal factors were complemented with 8 non-DEM-based causal factors (a total of 16). The random forest model outperformed the logistic regression model. In that earlier work we used the random forest model with its default settings and did not perform hyperparameter tuning. Here, we conduct hyperparameter tuning to improve the accuracy of the random forest model and identify the best combination of parameters.
In a random forest model, the hyperparameters include the number of decision trees in the forest and the number of features considered at each node split. In the randomForest() function in R, these two parameters are labeled ntree (the number of trees to grow) and mtry (the number of variables randomly sampled as candidates at each split). We will test various combinations of these two parameters.
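For orientation, the sketch below shows how these two hyperparameters appear in a direct randomForest() call. It uses the built-in iris dataset purely for illustration and is not part of the susceptibility workflow.
library(randomForest)
# Minimal illustration of the two hyperparameters (demo only):
# mtry  = number of predictors sampled as split candidates at each node
# ntree = number of trees grown in the forest
data(iris)
rf_demo <- randomForest(Species ~ ., data = iris, mtry = 2, ntree = 500)
print(rf_demo)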
Since hyperparameter tuning is time-consuming, we provide the code below, but the tuning itself was run separately; the saved models and their results are presented here. The code is adapted from this link.
library(randomForest)
library(caret)
library(mlbench)
Data_m <- read.csv("Data_All_1m.csv")
# Convert the response (V) and the categorical predictors to factors
Data_m$V <- as.factor(Data_m$V)
Data_m$Soil <- as.factor(Data_m$Soil)
Data_m$Geology <- as.factor(Data_m$Geology)
Data_m$Landuse <- as.factor(Data_m$Landuse)
Data_m$Change <- as.factor(Data_m$Change)
## Extend caret with a custom random forest method that tunes both mtry and ntree
customRF <- list(type = "Classification", library = "randomForest", loop = NULL)
# Declare the two tunable parameters
customRF$parameters <- data.frame(parameter = c("mtry", "ntree"), class = rep("numeric", 2), label = c("mtry", "ntree"))
# The grid is supplied manually via tuneGrid, so no default grid is generated
customRF$grid <- function(x, y, len = NULL, search = "grid") {}
# Fit a randomForest model with the current parameter combination
customRF$fit <- function(x, y, wts, param, lev, last, weights, classProbs, ...) {
  randomForest(x, y, mtry = param$mtry, ntree = param$ntree, ...)
}
# Class predictions and class probabilities
customRF$predict <- function(modelFit, newdata, preProc = NULL, submodels = NULL)
  predict(modelFit, newdata)
customRF$prob <- function(modelFit, newdata, preProc = NULL, submodels = NULL)
  predict(modelFit, newdata, type = "prob")
customRF$sort <- function(x) x[order(x[, 1]), ]
customRF$levels <- function(x) x$classes
# train model
seed <- 123              # any fixed seed for reproducibility (placeholder; not necessarily the value used for the saved models)
metric <- "Accuracy"     # metric used to select the best parameter combination
control <- trainControl(method = "repeatedcv", number = 10, repeats = 3)
tunegrid <- expand.grid(.mtry = c(1:15), .ntree = c(100, 500, 1000, 1500, 2000, 2500))
set.seed(seed)
custom <- train(V ~ ., data = Data_m, method = customRF, metric = metric, tuneGrid = tunegrid, trControl = control)
summary(custom)
png(filename="C:/R/Shape/Shape Files/Data/rf_random_plot_1m_All.png")
plot(custom)
dev.off()
saveRDS(custom, file= "C:/R/Shape/Shape Files/Data/custom_model_1m_All.rda")
plot(custom)
#C:\R\Shape\Shape Files\Data
mdel_1m_DEM <- readRDS("custom_model_1m_DEM.rda")
mdel_1m_DEM
## 4486 samples
## 8 predictor
## 2 classes: '0', '1'
##
## No pre-processing
## Resampling: Cross-Validated (10 fold, repeated 3 times)
## Summary of sample sizes: 4037, 4038, 4038, 4037, 4037, 4038, ...
## Resampling results across tuning parameters:
##
## mtry ntree Accuracy Kappa
## 1 100 0.8419532 0.6838963
## 1 500 0.8421724 0.6843327
## 1 1000 0.8441050 0.6881970
## 1 1500 0.8436580 0.6873025
## 1 2000 0.8438816 0.6877511
## 1 2500 0.8443273 0.6886428
## 2 100 0.8420997 0.6841879
## 2 500 0.8412801 0.6825497
## 2 1000 0.8422482 0.6844848
## 2 1500 0.8436597 0.6873081
## 2 2000 0.8429915 0.6859721
## 2 2500 0.8423214 0.6846314
## 3 100 0.8397968 0.6795833
## 3 500 0.8406151 0.6812189
## 3 1000 0.8403913 0.6807714
## 3 1500 0.8403173 0.6806230
## 3 2000 0.8402437 0.6804760
## 3 2500 0.8401698 0.6803286
## 4 100 0.8366017 0.6731934
## 4 500 0.8383103 0.6766096
## 4 1000 0.8383100 0.6766094
## 4 1500 0.8389054 0.6777995
## 4 2000 0.8383112 0.6766101
## 4 2500 0.8374198 0.6748285
## 5 100 0.8356366 0.6712643
## 5 500 0.8360089 0.6720084
## 5 1000 0.8375683 0.6751272
## 5 1500 0.8369734 0.6739359
## 5 2000 0.8362300 0.6724498
## 5 2500 0.8371220 0.6742329
## 6 100 0.8335565 0.6671042
## 6 500 0.8360818 0.6721540
## 6 1000 0.8361557 0.6723001
## 6 1500 0.8361562 0.6723027
## 6 2000 0.8359332 0.6718553
## 6 2500 0.8362298 0.6724490
## 7 100 0.8342976 0.6685864
## 7 500 0.8354874 0.6709641
## 7 1000 0.8352645 0.6705192
## 7 1500 0.8351149 0.6702192
## 7 2000 0.8363786 0.6727470
## 7 2500 0.8350398 0.6700692
##
## Accuracy was used to select the optimal model using the largest value.
## The final values used for the model were mtry = 1 and ntree = 2500.
d <- as.data.frame(mdel_1m_DEM$results)
# Find the index of the row with the highest accuracy
max_accuracy_index <- which.max(d$Accuracy)
# Print the row with the highest accuracy
highest_accuracy_row1 <- d[max_accuracy_index, ]
print(highest_accuracy_row1)
## mtry ntree Accuracy Kappa AccuracySD KappaSD
## 6 1 2500 0.8443273 0.6886428 0.0149221 0.02984453
plot(mdel_1m_DEM)
Here, the highest accuracy (0.844) is achieved with mtry = 1 and ntree = 2500. We used repeated cross-validation with K = 10 folds and 3 repeats, so each parameter combination was evaluated on 30 training/validation splits; the reported accuracy is the mean across those splits, and the corresponding standard deviation is also available.
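As a quick check, caret stores the per-resample performance of the selected parameter combination in the model's resample slot; the sketch below verifies that the reported mean and standard deviation come from those 30 resamples.
# Per-resample accuracy for the selected mtry/ntree combination
head(mdel_1m_DEM$resample)            # Accuracy, Kappa and the fold/repeat label
nrow(mdel_1m_DEM$resample)            # 10 folds x 3 repeats = 30 resamples
mean(mdel_1m_DEM$resample$Accuracy)   # should match the reported mean accuracy
sd(mdel_1m_DEM$resample$Accuracy)     # should match AccuracySD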
#C:\R\Shape\Shape Files\Data
mdel_1m_All<- readRDS("custom_model_1m_All.rda")
mdel_1m_All
## 4479 samples
## 16 predictor
## 2 classes: '0', '1'
##
## No pre-processing
## Resampling: Cross-Validated (10 fold, repeated 3 times)
## Summary of sample sizes: 4030, 4031, 4031, 4031, 4030, 4032, ...
## Resampling results across tuning parameters:
##
## mtry ntree Accuracy Kappa
## 1 100 0.8184824 0.6368346
## 1 500 0.8259236 0.6517055
## 1 1000 0.8261460 0.6521421
## 1 1500 0.8264441 0.6527432
## 1 2000 0.8259983 0.6518547
## 1 2500 0.8262219 0.6523012
## 2 100 0.8589674 0.7178911
## 2 500 0.8620225 0.7239958
## 2 1000 0.8618715 0.7236949
## 2 1500 0.8626891 0.7253318
## 2 2000 0.8622424 0.7244384
## 2 2500 0.8620949 0.7241435
## 3 100 0.8697606 0.7394950
## 3 500 0.8725149 0.7450086
## 3 1000 0.8708019 0.7415797
## 3 1500 0.8703557 0.7406877
## 3 2000 0.8702062 0.7403890
## 3 2500 0.8713219 0.7426201
## 4 100 0.8710235 0.7420331
## 4 500 0.8740040 0.7479933
## 4 1000 0.8743008 0.7485851
## 4 1500 0.8738567 0.7476975
## 4 2000 0.8736308 0.7472459
## 4 2500 0.8741518 0.7482887
## 5 100 0.8728897 0.7457693
## 5 500 0.8752704 0.7505280
## 5 1000 0.8756419 0.7512712
## 5 1500 0.8760888 0.7521642
## 5 2000 0.8743023 0.7485923
## 5 2500 0.8758641 0.7517141
## 6 100 0.8741525 0.7482959
## 6 500 0.8747502 0.7494887
## 6 1000 0.8746011 0.7491910
## 6 1500 0.8755663 0.7511209
## 6 2000 0.8745999 0.7491897
## 6 2500 0.8760875 0.7521648
## 7 100 0.8731857 0.7463621
## 7 500 0.8746006 0.7491925
## 7 1000 0.8748231 0.7496380
## 7 1500 0.8748234 0.7496387
## 7 2000 0.8751212 0.7502338
## 7 2500 0.8751211 0.7502333
## 8 100 0.8737079 0.7474090
## 8 500 0.8747475 0.7494892
## 8 1000 0.8738575 0.7477076
## 8 1500 0.8745999 0.7491927
## 8 2000 0.8744501 0.7488935
## 8 2500 0.8746736 0.7493390
## 9 100 0.8717715 0.7435362
## 9 500 0.8744518 0.7488960
## 9 1000 0.8742300 0.7484532
## 9 1500 0.8745996 0.7491925
## 9 2000 0.8737826 0.7475587
## 9 2500 0.8739309 0.7478566
## 10 100 0.8739331 0.7478611
## 10 500 0.8737094 0.7474146
## 10 1000 0.8737811 0.7475555
## 10 1500 0.8737089 0.7474132
## 10 2000 0.8735581 0.7471107
## 10 2500 0.8750485 0.7500915
## 11 100 0.8722953 0.7445873
## 11 500 0.8741551 0.7483045
## 11 1000 0.8741545 0.7483038
## 11 1500 0.8740804 0.7481540
## 11 2000 0.8744524 0.7488993
## 11 2500 0.8737816 0.7475581
## 12 100 0.8733355 0.7466651
## 12 500 0.8728904 0.7457754
## 12 1000 0.8733363 0.7466675
## 12 1500 0.8734112 0.7468169
## 12 2000 0.8737828 0.7475609
## 12 2500 0.8737829 0.7475612
## 13 100 0.8711769 0.7423498
## 13 500 0.8735606 0.7471177
## 13 1000 0.8735602 0.7471184
## 13 1500 0.8735597 0.7471149
## 13 2000 0.8741553 0.7483075
## 13 2500 0.8733375 0.7466712
## 14 100 0.8714005 0.7427979
## 14 500 0.8743048 0.7486058
## 14 1000 0.8735594 0.7471146
## 14 1500 0.8736326 0.7472607
## 14 2000 0.8734852 0.7469668
## 14 2500 0.8731870 0.7463712
## 15 100 0.8719967 0.7439905
## 15 500 0.8743777 0.7487503
## 15 1000 0.8745262 0.7490487
## 15 1500 0.8733363 0.7466697
## 15 2000 0.8728153 0.7456269
## 15 2500 0.8732619 0.7465201
##
## Accuracy was used to select the optimal model using the largest value.
## The final values used for the model were mtry = 5 and ntree = 1500.
d <- as.data.frame(mdel_1m_All$results)
# Find the index of the row with the highest accuracy
max_accuracy_index <- which.max(d$Accuracy)
# Print the row with the highest accuracy
highest_accuracy_row2 <- d[max_accuracy_index, ]
print(highest_accuracy_row2)
## mtry ntree Accuracy Kappa AccuracySD KappaSD
## 28 5 1500 0.8760888 0.7521642 0.01924262 0.03850091
plot(mdel_1m_All)
When the 1m DEM-based factors were complemented with eight non-DEM-based factors, the best accuracy increased from 0.844 to 0.876, an improvement of about 3.8%.
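For reference, the quoted improvement can be reproduced from the two best rows extracted above:
# Relative improvement of the 1m_All model over the 1m_DEM model
acc_1m_dem <- highest_accuracy_row1$Accuracy   # 0.844
acc_1m_all <- highest_accuracy_row2$Accuracy   # 0.876
round(100 * (acc_1m_all - acc_1m_dem) / acc_1m_dem, 2)   # ~3.8%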
#C:\R\Shape\Shape Files\Data
mdel_30m_DEM<- readRDS("custom_model_30m_DEM.rda")
mdel_30m_DEM
## 4509 samples
## 8 predictor
## 2 classes: '0', '1'
##
## No pre-processing
## Resampling: Cross-Validated (10 fold, repeated 3 times)
## Summary of sample sizes: 4057, 4057, 4058, 4059, 4058, 4059, ...
## Resampling results across tuning parameters:
##
## mtry ntree Accuracy Kappa
## 1 100 0.8033562 0.6067107
## 1 500 0.8089737 0.6179503
## 1 1000 0.8077161 0.6154332
## 1 1500 0.8074203 0.6148399
## 1 2000 0.8073472 0.6146942
## 1 2500 0.8065351 0.6130710
## 2 100 0.8037957 0.6075891
## 2 500 0.8054235 0.6108488
## 2 1000 0.8054984 0.6109972
## 2 1500 0.8052010 0.6104044
## 2 2000 0.8057194 0.6114388
## 2 2500 0.8054973 0.6109976
## 3 100 0.8026171 0.6052383
## 3 500 0.8042400 0.6084806
## 3 1000 0.8057189 0.6114390
## 3 1500 0.8063088 0.6126195
## 3 2000 0.8062362 0.6124734
## 3 2500 0.8062366 0.6124753
## 4 100 0.8038701 0.6077421
## 4 500 0.8053483 0.6106981
## 4 1000 0.8051264 0.6102547
## 4 1500 0.8053483 0.6106987
## 4 2000 0.8054960 0.6109943
## 4 2500 0.8046818 0.6093656
## 5 100 0.8042390 0.6084775
## 5 500 0.8066784 0.6133608
## 5 1000 0.8047566 0.6095151
## 5 1500 0.8060137 0.6120285
## 5 2000 0.8063101 0.6126216
## 5 2500 0.8045363 0.6090736
## 6 100 0.8055737 0.6111510
## 6 500 0.8060161 0.6120319
## 6 1000 0.8054237 0.6108477
## 6 1500 0.8056443 0.6112895
## 6 2000 0.8052754 0.6105523
## 6 2500 0.8054227 0.6108482
## 7 100 0.8044635 0.6089283
## 7 500 0.8056464 0.6112954
## 7 1000 0.8051276 0.6102560
## 7 1500 0.8054975 0.6109950
## 7 2000 0.8054227 0.6108459
## 7 2500 0.8060147 0.6120292
##
## Accuracy was used to select the optimal model using the largest value.
## The final values used for the model were mtry = 1 and ntree = 500.
d <- as.data.frame(mdel_30m_DEM$results)
# Find the index of the row with the highest accuracy
max_accuracy_index <- which.max(d$Accuracy)
# Print the row with the highest accuracy
highest_accuracy_row3 <- d[max_accuracy_index, ]
print(highest_accuracy_row3)
## mtry ntree Accuracy Kappa AccuracySD KappaSD
## 2 1 500 0.8089737 0.6179503 0.01552408 0.03105439
plot(mdel_30m_DEM)
Previously, we saw that the landslide susceptibility model based on 1m DEM-derived factors achieved better accuracy than the one based on 30m (SRTM) DEM-derived factors. Hyperparameter tuning does not change this result: the best accuracy of the 30m DEM-only model (0.809) is about 4% lower than that of the 1m DEM-only model (0.844).
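The gap between the two DEM-only models can be computed from the extracted rows (the exact percentage depends on whether it is expressed relative to the 1m or the 30m model):
# Accuracy gap between the 1m and 30m DEM-only models
gap <- highest_accuracy_row1$Accuracy - highest_accuracy_row3$Accuracy   # ~0.035 (3.5 percentage points)
round(100 * gap / highest_accuracy_row1$Accuracy, 2)                     # ~4.2% relative to the 1m model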
#C:\R\Shape\Shape Files\Data
mdel_30m_All<- readRDS("custom_model_30m_All.rda")
mdel_30m_All
## 4506 samples
## 16 predictor
## 2 classes: '0', '1'
##
## No pre-processing
## Resampling: Cross-Validated (10 fold, repeated 3 times)
## Summary of sample sizes: 4056, 4055, 4055, 4056, 4054, 4056, ...
## Resampling results across tuning parameters:
##
## mtry ntree Accuracy Kappa
## 1 100 0.7751087 0.5501445
## 1 500 0.7742947 0.5485106
## 1 1000 0.7748144 0.5495416
## 1 1500 0.7744434 0.5488009
## 1 2000 0.7747394 0.5493903
## 1 2500 0.7737771 0.5474663
## 2 100 0.8223789 0.6447355
## 2 500 0.8240055 0.6479886
## 2 1000 0.8248938 0.6497676
## 2 1500 0.8253367 0.6506518
## 2 2000 0.8244477 0.6488740
## 2 2500 0.8256307 0.6512411
## 3 100 0.8408731 0.6817398
## 3 500 0.8445733 0.6891381
## 3 1000 0.8447941 0.6895794
## 3 1500 0.8439058 0.6878030
## 3 2000 0.8441286 0.6882478
## 3 2500 0.8444234 0.6888374
## 4 100 0.8527841 0.7055639
## 4 500 0.8539694 0.7079339
## 4 1000 0.8550040 0.7100038
## 4 1500 0.8547083 0.7094127
## 4 2000 0.8540418 0.7080800
## 4 2500 0.8541895 0.7083746
## 5 100 0.8565576 0.7131134
## 5 500 0.8574473 0.7148943
## 5 1000 0.8577419 0.7154830
## 5 1500 0.8575944 0.7151883
## 5 2000 0.8585562 0.7171126
## 5 2500 0.8580377 0.7160756
## 6 100 0.8596660 0.7193344
## 6 500 0.8590741 0.7181482
## 6 1000 0.8595174 0.7190371
## 6 1500 0.8604787 0.7209588
## 6 2000 0.8602563 0.7205133
## 6 2500 0.8588517 0.7177053
## 7 100 0.8613660 0.7227330
## 7 500 0.8600358 0.7200734
## 7 1000 0.8614402 0.7228822
## 7 1500 0.8615146 0.7230305
## 7 2000 0.8610725 0.7221468
## 7 2500 0.8609235 0.7218498
## 8 100 0.8607009 0.7214043
## 8 500 0.8609956 0.7219915
## 8 1000 0.8607757 0.7215535
## 8 1500 0.8607754 0.7215524
## 8 2000 0.8615151 0.7230330
## 8 2500 0.8621067 0.7242165
## 9 100 0.8590733 0.7181493
## 9 500 0.8604787 0.7209606
## 9 1000 0.8626988 0.7254007
## 9 1500 0.8614396 0.7228811
## 9 2000 0.8620328 0.7240688
## 9 2500 0.8633633 0.7267294
## 10 100 0.8596637 0.7193307
## 10 500 0.8617367 0.7234763
## 10 1000 0.8623288 0.7246607
## 10 1500 0.8626986 0.7254000
## 10 2000 0.8617357 0.7234743
## 10 2500 0.8626988 0.7254008
## 11 100 0.8618114 0.7236270
## 11 500 0.8623289 0.7246607
## 11 1000 0.8620323 0.7240676
## 11 1500 0.8632901 0.7265831
## 11 2000 0.8627726 0.7255481
## 11 2500 0.8621803 0.7243635
## 12 100 0.8584835 0.7169704
## 12 500 0.8614410 0.7228850
## 12 1000 0.8625512 0.7251058
## 12 1500 0.8624037 0.7248106
## 12 2000 0.8624030 0.7248090
## 12 2500 0.8629951 0.7259942
## 13 100 0.8597396 0.7194836
## 13 500 0.8615159 0.7230353
## 13 1000 0.8615887 0.7231803
## 13 1500 0.8616633 0.7233299
## 13 2000 0.8620320 0.7240669
## 13 2500 0.8626991 0.7254018
## 14 100 0.8601826 0.7203688
## 14 500 0.8619587 0.7239213
## 14 1000 0.8618848 0.7237720
## 14 1500 0.8609981 0.7220005
## 14 2000 0.8624027 0.7248089
## 14 2500 0.8618861 0.7237755
## 15 100 0.8583358 0.7166766
## 15 500 0.8598878 0.7197792
## 15 1000 0.8603316 0.7206663
## 15 1500 0.8610716 0.7221472
## 15 2000 0.8609984 0.7219996
## 15 2500 0.8618119 0.7236275
##
## Accuracy was used to select the optimal model using the largest value.
## The final values used for the model were mtry = 9 and ntree = 2500.
d <- as.data.frame(mdel_30m_All$results)
# Find the index of the row with the highest accuracy
max_accuracy_index <- which.max(d$Accuracy)
# Print the row with the highest accuracy
highest_accuracy_row4 <- d[max_accuracy_index, ]
print(highest_accuracy_row4)
## mtry ntree Accuracy Kappa AccuracySD KappaSD
## 54 9 2500 0.8633633 0.7267294 0.01453502 0.02907021
plot(mdel_30m_All)
When the 30m DEM-derived factors were complemented with eight non-DEM-based factors, the best accuracy increased from 0.809 to 0.863, an improvement of about 6.7%.
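As before, this figure can be reproduced from the best rows of the two 30m models:
# Relative improvement of the 30m_All model over the 30m_DEM model
acc_30m_dem <- highest_accuracy_row3$Accuracy   # 0.809
acc_30m_all <- highest_accuracy_row4$Accuracy   # 0.863
round(100 * (acc_30m_all - acc_30m_dem) / acc_30m_dem, 2)   # ~6.7%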
library(ggplot2)
library(plotly)
##
## Attaching package: 'plotly'
## The following object is masked from 'package:ggplot2':
##
## last_plot
## The following object is masked from 'package:stats':
##
## filter
## The following object is masked from 'package:graphics':
##
## layout
library(gapminder)
# Load necessary libraries
library(ggplot2)
library(plotly)
library(dplyr)   # for %>% and mutate() used below
# Create the dataframe
df1 <- data.frame(
Model = c("1m_DEM", "1m_All", "30m_DEM", "30m_All"),
Accuracy = c(highest_accuracy_row1$Accuracy, highest_accuracy_row2$Accuracy,
highest_accuracy_row3$Accuracy, highest_accuracy_row4$Accuracy),
SD = c(highest_accuracy_row1$AccuracySD, highest_accuracy_row2$AccuracySD,
highest_accuracy_row3$AccuracySD, highest_accuracy_row4$AccuracySD)
)
# Calculate xmin and xmax
df1 <- df1 %>%
mutate(xmin = Accuracy - SD, xmax = Accuracy + SD)
# Generate the plot with tooltips on error bar ends and the middle point
f <- ggplot(df1, aes(x = Accuracy, y = Model)) +
geom_point(aes(text = paste("Model:", Model,
"<br>Accuracy:", round(Accuracy, 2),
"<br>SD:", round(SD, 2))),
size = 6, color = "blue") + # Plot the points with tooltips
geom_errorbarh(aes(xmin = xmin, xmax = xmax), height = 0.2, color = "red") + # Add horizontal error bars
geom_point(aes(x = xmin, y = Model,
text = paste("xmin:", round(xmin, 2))), color = "transparent") + # Transparent points for xmin tooltips
geom_point(aes(x = xmax, y = Model,
text = paste("xmax:", round(xmax, 2))), color = "transparent") + # Transparent points for xmax tooltips
labs(title = "Model Accuracy with Standard Deviation", x = "Accuracy", y = "Model") +
theme_minimal() # Use a minimal theme
## Warning in geom_point(aes(text = paste("Model:", Model, "<br>Accuracy:", :
## Ignoring unknown aesthetics: text
## Warning in geom_point(aes(x = xmin, y = Model, text = paste("xmin:",
## round(xmin, : Ignoring unknown aesthetics: text
## Warning in geom_point(aes(x = xmax, y = Model, text = paste("xmax:",
## round(xmax, : Ignoring unknown aesthetics: text
# Convert ggplot to an interactive plotly plot
ggplotly(f, tooltip = "text")
The comparison plot shows that the model based only on 30m DEM-derived factors has the lowest accuracy of the four. However, when those factors are complemented with non-DEM-based factors, the 30m model surpasses the 1m DEM-only model, and its mean accuracy plus one standard deviation even exceeds the mean accuracy of the 1m_All model. This indicates that when a high-resolution DEM is not available, DEM-derived factors should be complemented with non-DEM-based factors for landslide susceptibility mapping.
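As a numeric check of the last point, the upper end of the 30m_All error bar (mean accuracy plus one standard deviation) can be compared against the 1m_All mean accuracy:
# Upper end of the 30m_All error bar vs. the 1m_All mean accuracy
upper_30m_all <- highest_accuracy_row4$Accuracy + highest_accuracy_row4$AccuracySD   # ~0.878
upper_30m_all > highest_accuracy_row2$Accuracy                                       # 0.878 > 0.876, i.e. TRUE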