Classifying drone RF signals with statistical learning and a small data set

Ben Horvath

July 2022

This brief note applies a couple of (non-deep-learning) algorithms to classify drone RF signals using a small open-source data set. This work agrees with Medaiyese, et al. (2021) that large labeled data sets and complicated deep learning may not be essential for classifying drone RF signals.

GitHub repo: http://github.com/benhorvath/dronerf_classifier

1 Data and summary

The data is available for download at: DroneRF dataset: A dataset of drones for RF-based detection, classification, and identification. Uncompressed, it is 43 gigabytes in size. It is summarized in Allahham, et al. (2019):

… the DroneRF dataset: a radio frequency (RF) based dataset of drones functioning in different modes, including off, on and connected, hovering, flying, and video recording. The dataset contains recordings of RF activities, composed of 227 recorded segments collected from 3 different drones, as well as recordings of background RF activities with no drones. The data has been collected by RF receivers that intercepts the drone’s communications with the flight control module. The receivers are connected to two laptops, via PCIe cables, that runs a program responsible for fetching, processing and storing the sensed RF data in a database.

The data set is very clean, collected under laboratory conditions using expensive scientific equipment. The entire 2.4 GHz WiFi band was monitored with two RF receivers, producing a lower and an upper half of the frequency band. Medaiyese, et al. (2021) determined the lower half alone contains sufficient information for good classification performance. Each of the 227 records is at most 10 seconds of recording, making this a fairly small data set.

After downloading and decompressing the data, the first step is to process the records and transform them into spectrograms of size \(122 \times 122\) pixels. Each spectrogram's pixel intensities are then flattened into a \(1 \times (122 \cdot 122) = 1 \times 14{,}884\) row vector, producing a feature matrix appropriate for input to the algorithms. The data is split into a training partition and a final test partition. R's caret library is used to train several algorithms amenable to \(n \ll p\) classification tasks with \(5\)-fold cross-validation. These are a GLM with ElasticNet regularization (Zou and Hastie 2005; Friedman, Hastie, and Tibshirani 2010) and Random Forest. Performance is quite satisfactory on both the binary and multi-class tasks, especially considering the limited hardware thrown at the problem and the size of the data set.

Three models are developed and tested on a reserved hold-out set:

  • \(M_0\): Binary task, binomial GLM with ElasticNet regularization (R library: glmnet); hold-out performance:
    • precision: 0.67
    • recall: 1.0
    • F-score: 0.80
    • balanced accuracy: 0.95
  • \(M_1\): Binary task, Random Forest (R library: ranger); hold-out performance:
    • precision: 0.91
    • recall: 1.0
    • F-score: 0.95
    • balanced accuracy: 0.99
  • \(M_2\): Multiclass task, Random Forest; hold-out performance:
    • mean balanced accuracy: 0.88
    • mean sensitivity: 0.83
    • mean specificity: 0.92

Model performance could likely be improved with slightly more powerful hardware, which would allow the spectrograms to encode more information, i.e., to be larger than \(122 \times 122\) pixels.

2 Load and process data

The process here is to 'summarize' each ~100 megabyte RF recording as a ~4 kilobyte spectrogram PNG image, and then convert each spectrogram into a ~100 kilobyte \(1 \times (122 \cdot 122)\) matrix.
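
The rendered notebook omits its setup chunk; the code in this section assumes roughly the following imports and constants (DATA_PATH is machine-specific, and IMG_SIZE is assumed to be 122, matching the \(122 \times 122\) spectrograms described above):

library(tidyverse)  # dplyr, stringr, ggplot2, and the %>% pipe
library(caret)      # partitioning, resampling, training, evaluation
library(tuneR)      # Wave objects for the raw RF samples
library(seewave)    # ggspectro for spectrograms
library(png)        # readPNG

DATA_PATH <- '~/data'  # machine-specific root directory (assumption)
IMG_SIZE <- 122        # spectrogram side length in pixels (assumption)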

Create the necessary directories (here 'input' refers to input to the classifier):

dronerf_path <- file.path(DATA_PATH, 'dronerf')
dir.create(dronerf_path, showWarnings=FALSE)

input_path <- file.path(DATA_PATH, 'input')
dir.create(input_path, showWarnings=FALSE)

input_files <- file.path(dronerf_path, list.files(dronerf_path, recursive=TRUE))

# keep only the lower half of the frequency band (file names containing '_L')
lower <- input_files[str_detect(input_files, '_L[\\d]?/')]

sample_rate <- 40 * 10^6  # receiver sample rate: 40 MHz

2.1 Create spectrograms

Process lower half and save as PNG to disk:

name_map <- list('AR drone' = 'ar',
                 'Bepop drone' = 'bepop',
                 'Background RF activites' = 'bg',
                 'Phantom drone' = 'phantom')
                 
for (f in lower) {
  
  f_prefix <- str_extract(f, 'AR drone|Bepop drone|Background RF activites|Phantom drone')  # NOTE the typo in 'activites'
  drone <- name_map[[ f_prefix ]]
  
  f_suffix <- str_match_all(f, '(\\d{5}L_\\d+)\\.csv$')[[1]][2]
  
  sig <- scan(f, what=character(1), sep=',') %>%
    as.numeric %>%
    tuneR::Wave(samp.rate=sample_rate) %>%
    tuneR::normalize(unit = c('1'))
  
  y = sig@left
  
  N <- 32e-3*sample_rate  # FFT window length: 32 ms of samples (1.28e6 points)
  
  # compute spectrogram (Hamming window, 50% overlap)
  p <- seewave::ggspectro(y, sample_rate, wl=N, wn='hamming', ovlp=50, fftw=TRUE) +
    geom_tile(aes(fill=amplitude)) +
    scale_fill_gradient(low='white', high='black') +
    theme(axis.line=element_blank(),
          axis.text.x=element_blank(),
          axis.text.y=element_blank(),
          axis.ticks=element_blank(),
          axis.title.x=element_blank(),
          axis.title.y=element_blank(),
          legend.position='none',
          panel.background=element_blank(),
          panel.border=element_blank(),
          panel.grid.major=element_blank(),
          panel.grid.minor=element_blank(),
          plot.background=element_blank())
  
  export_path <- file.path(input_path, drone)
  dir.create(export_path, showWarnings=FALSE)
  
  export_file <- paste(file.path(export_path, f_suffix), '.png', sep='')
  
  ggsave(export_file, p, width=IMG_SIZE, height=IMG_SIZE, units='px')
  
}
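
A quick sanity check that each recording produced a spectrogram on disk (a small verification sketch, not part of the original pipeline):

# count the PNGs written per class
sapply(c('ar', 'bepop', 'bg', 'phantom'),
       function(d) length(list.files(file.path(input_path, d), pattern='\\.png$')))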

2.2 Spectrograms to feature matrix

Convert each PNG to a matrix, then combine to form a feature matrix \(X\) for machine learning:

standardize <- function(x) (x - min(x)) / (max(x) - min(x))  # min-max scale to [0, 1]

X_list <- list()
y <- list()

for (drone in c('ar', 'bepop', 'bg', 'phantom')) {
  
  pngs_path <- file.path(input_path, drone)
  drone_pngs_path <- file.path(pngs_path, list.files(pngs_path))
  
  for (png_path in drone_pngs_path) {
    
    img <- readPNG(png_path)
    x <- img[,,1]
    x <- t(apply(x, 2, rev))  # rotate: not necessary except for plotting correct orientation
    x <- standardize(x)
    dim(x) <- c(1, IMG_SIZE*IMG_SIZE)
    
    X_list[[1+length(X_list)]] <- x
    y[[1+length(y)]] <- drone
    
  }
  
}

rm(x, sig, p); gc()  # free up some memory
##           used  (Mb) gc trigger  (Mb) limit (Mb) max used  (Mb)
## Ncells 2313072 123.6    4421975 236.2         NA  4209474 224.9
## Vcells 7316947  55.9   12335285  94.2      16384 10201088  77.9
# Combine into single matrix for input to algorithm
X <- do.call(rbind, X_list) %>% data.frame
colnames(X) <- paste('p', 1:(IMG_SIZE*IMG_SIZE), sep='')

rm(X_list); gc()
##           used  (Mb) gc trigger  (Mb) limit (Mb) max used  (Mb)
## Ncells 2343301 125.2    4421975 236.2         NA  4209474 224.9
## Vcells 7363009  56.2   16818882 128.4      16384 15306281 116.8
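
Before modeling, it is worth confirming the feature matrix has the expected shape: one row per spectrogram and \(122 \times 122 = 14{,}884\) columns (a quick check, not in the original pipeline):

dim(X)  # rows: number of spectrograms; columns: 14,884 pixel features
stopifnot(ncol(X) == IMG_SIZE * IMG_SIZE, nrow(X) == length(y))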

3 Classifier: Drone or no drone

3.1 Build test and training sets

We will use \(5\)-fold cross-validation to train and tune the learning algorithms, but retain 25% of the data set for final testing.

For all models, under-sampling will be used to balance the classes.

df <- X %>%
  mutate(y=unlist(y)) %>%
  select(y, everything())

df <- df %>%
  mutate(y = as.factor(if_else(y == 'bg', 'bg', 'drone')))

set.seed(1804)
train_ix <- createDataPartition(df$y, p=0.75, list=FALSE) %>% as.numeric

train <- df[train_ix,]
test <- df[-train_ix,]

# check distributions are similar
prop.table(table(df$y))
## 
##     bg  drone 
## 0.1806 0.8194
prop.table(table(df$y[train_ix]))
## 
##     bg  drone 
## 0.1813 0.8187
prop.table(table(df$y[-train_ix]))
## 
##     bg  drone 
## 0.1786 0.8214
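
Class balance is handled inside each resampling iteration via sampling='down' in trainControl below; the standalone equivalent, shown only for illustration (a sketch with caret's downSample), is:

# down-sample the majority class so both classes appear in equal numbers
balanced <- downSample(x=train[, -1], y=train$y, yname='y')
table(balanced$y)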

3.2 \(M_0\): Binomial GLM with ElasticNet regularization

Preprocessing occurs within each fold and includes eliminating near-zero-variance predictors, then centering and scaling:

Sys.time()
## [1] "2022-07-05 14:47:18 EDT"
cv5 <- trainControl(method='repeatedcv',
                     number=5,
                     classProbs=TRUE,
                     sampling='down',
                     summaryFunction = twoClassSummary,
                     savePredictions=TRUE)

m0_grid <- expand.grid(alpha = c(1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1),
                       lambda = c((1:5) / 10))

m0 <- train(y ~ .,
            preProc = c('nzv', 'center', 'scale'),
            data=train, 
            tuneGrid=m0_grid,
            method='glmnet',
            trControl=cv5,
            metric='ROC')

Sys.time()
## [1] "2022-07-05 14:56:31 EDT"
m0
## glmnet 
## 
##   171 samples
## 14884 predictors
##     2 classes: 'bg', 'drone' 
## 
## Pre-processing: centered (3481), scaled (3481), remove (11403) 
## Resampling: Cross-Validated (5 fold, repeated 1 times) 
## Summary of sample sizes: 137, 137, 137, 137, 136 
## Addtional sampling using down-sampling prior to pre-processing
## 
## Resampling results across tuning parameters:
## 
##   alpha    lambda  ROC     Sens    Spec  
##   0.00001  0.1     0.9869  0.9667  0.8929
##   0.00001  0.2     0.9869  0.9667  0.8929
##   0.00001  0.3     0.9869  0.9667  0.8929
##   0.00001  0.4     0.9869  0.9667  0.8929
##   0.00001  0.5     0.9869  0.9667  0.8929
##   0.00010  0.1     0.9929  1.0000  0.9143
##   0.00010  0.2     0.9929  1.0000  0.9143
##   0.00010  0.3     0.9929  1.0000  0.9143
##   0.00010  0.4     0.9929  1.0000  0.9143
##   0.00010  0.5     0.9929  1.0000  0.9143
##   0.00100  0.1     0.9857  1.0000  0.9286
##   0.00100  0.2     0.9857  1.0000  0.9286
##   0.00100  0.3     0.9857  1.0000  0.9286
##   0.00100  0.4     0.9857  1.0000  0.9286
##   0.00100  0.5     0.9857  1.0000  0.9286
##   0.01000  0.1     0.9810  1.0000  0.9143
##   0.01000  0.2     0.9810  1.0000  0.9143
##   0.01000  0.3     0.9810  1.0000  0.9143
##   0.01000  0.4     0.9810  1.0000  0.9143
##   0.01000  0.5     0.9798  1.0000  0.9143
##   0.10000  0.1     0.9643  0.9333  0.8643
##   0.10000  0.2     0.9619  0.9333  0.8643
##   0.10000  0.3     0.9560  0.9333  0.8571
##   0.10000  0.4     0.9548  0.9333  0.8571
##   0.10000  0.5     0.9500  0.9333  0.8571
##   1.00000  0.1     0.9444  0.9667  0.8214
##   1.00000  0.2     0.9077  0.8667  0.8214
##   1.00000  0.3     0.8881  0.8333  0.7929
##   1.00000  0.4     0.7459  0.6000  0.8071
##   1.00000  0.5     0.5000  0.2000  0.8000
## 
## ROC was used to select the optimal model using the largest value.
## The final values used for the model were alpha = 0.0001 and lambda = 0.5.

The best cross-validated configuration (alpha = 1e-04, lambda = 0.5) achieves ROC 0.993, sensitivity 1.000, and specificity 0.914.
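
With alpha this close to zero, the selected model is essentially a ridge fit, so most pixel features retain small nonzero coefficients. The fitted coefficients can be inspected from the glmnet object caret stores (a quick sketch; the columns are the post-preprocessing pixel features):

nz <- coef(m0$finalModel, s=m0$bestTune$lambda)  # sparse coefficient vector
sum(as.vector(nz) != 0)                          # number of nonzero coefficients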

3.3 \(M_1\): Random forest

Sys.time()
## [1] "2022-07-05 14:56:31 EDT"
m1_cv <- trainControl(method='repeatedcv',
                     number=5,
                     classProbs=TRUE,
                     sampling='down',
                     summaryFunction = twoClassSummary,
                     savePredictions=TRUE)

# common mtry values: sqrt(p) = 122 and log2(p) ~ 14, for p = 14,884
m1_tune <- expand.grid(mtry=c(14, 122), splitrule=c('gini'), min.node.size=c(5, 10))

m1 <- train(y ~ .,
            data=train,
            tuneGrid=m1_tune,
            method='ranger',
            num.trees=1000,
            trControl=m1_cv,
            metric='ROC')

Sys.time()
## [1] "2022-07-05 15:00:20 EDT"
m1
## Random Forest 
## 
##   171 samples
## 14884 predictors
##     2 classes: 'bg', 'drone' 
## 
## No pre-processing
## Resampling: Cross-Validated (5 fold, repeated 1 times) 
## Summary of sample sizes: 137, 137, 137, 136, 137 
## Addtional sampling using down-sampling
## 
## Resampling results across tuning parameters:
## 
##   mtry  min.node.size  ROC     Sens    Spec  
##    14    5             0.9988  0.9333  0.9786
##    14   10             0.9976  0.9333  0.9929
##   122    5             1.0000  1.0000  0.9714
##   122   10             0.9976  0.9667  0.9786
## 
## Tuning parameter 'splitrule' was held constant at a value of gini
## ROC was used to select the optimal model using the largest value.
## The final values used for the model were mtry = 122, splitrule = gini
##  and min.node.size = 5.

3.4 Evaluating \(M_0\) and \(M_1\) on hold out set

Performance metrics for both algorithms on the same hold-out set are presented below. Both perform well, with an edge to the Random Forest model, which also trains moderately faster.

m0_pred <- predict(m0, test)
m1_pred <- predict(m1, test)

confusionMatrix(m0_pred, test$y)
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction bg drone
##      bg    10     5
##      drone  0    41
##                                        
##                Accuracy : 0.911        
##                  95% CI : (0.804, 0.97)
##     No Information Rate : 0.821        
##     P-Value [Acc > NIR] : 0.0501       
##                                        
##                   Kappa : 0.745        
##                                        
##  Mcnemar's Test P-Value : 0.0736       
##                                        
##             Sensitivity : 1.000        
##             Specificity : 0.891        
##          Pos Pred Value : 0.667        
##          Neg Pred Value : 1.000        
##              Prevalence : 0.179        
##          Detection Rate : 0.179        
##    Detection Prevalence : 0.268        
##       Balanced Accuracy : 0.946        
##                                        
##        'Positive' Class : bg           
## 
confusionMatrix(m1_pred, test$y)
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction bg drone
##      bg    10     1
##      drone  0    45
##                                     
##                Accuracy : 0.982     
##                  95% CI : (0.904, 1)
##     No Information Rate : 0.821     
##     P-Value [Acc > NIR] : 0.000217  
##                                     
##                   Kappa : 0.941     
##                                     
##  Mcnemar's Test P-Value : 1.000000  
##                                     
##             Sensitivity : 1.000     
##             Specificity : 0.978     
##          Pos Pred Value : 0.909     
##          Neg Pred Value : 1.000     
##              Prevalence : 0.179     
##          Detection Rate : 0.179     
##    Detection Prevalence : 0.196     
##       Balanced Accuracy : 0.989     
##                                     
##        'Positive' Class : bg        
## 
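
The precision, recall, and F-scores quoted in Section 1 are these same confusion matrices viewed in precision/recall mode, with bg as the positive class (a sketch using confusionMatrix's mode argument):

confusionMatrix(m0_pred, test$y, mode='prec_recall')  # M0: precision 0.67, recall 1.00, F1 0.80
confusionMatrix(m1_pred, test$y, mode='prec_recall')  # M1: precision 0.91, recall 1.00, F1 0.95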

4 Classifier: Drone model + background

4.1 Build test and training sets

df <- X %>%
  mutate(y=unlist(y)) %>%
  select(y, everything())

df <- df %>%
  mutate(y = as.factor(y))

set.seed(1804)
train_ix <- createDataPartition(df$y, p=0.75, list=FALSE) %>% as.numeric

train <- df[train_ix,]
test <- df[-train_ix,]

4.2 \(M_2\): Random forest: All drones v. background

Sys.time()
## [1] "2022-07-05 15:00:38 EDT"
m2_cv <- trainControl(method='repeatedcv',
                     number=5,
                     classProbs=TRUE,
                     sampling='down',
                     summaryFunction = multiClassSummary,
                     savePredictions=TRUE)

# common mtry values: sqrt(p) = 122 and log2(p) ~ 14, for p = 14,884
m2_tune <- expand.grid(mtry=c(14, 122), splitrule=c('gini'), min.node.size=c(5, 10))

m2 <- train(y ~ .,
            data=train,
            tuneGrid=m2_tune,
            method='ranger',
            num.trees=1000,
            trControl=m2_cv,
            metric='AUC')

Sys.time()
## [1] "2022-07-05 15:06:11 EDT"
m2
## Random Forest 
## 
##   171 samples
## 14884 predictors
##     4 classes: 'ar', 'bepop', 'bg', 'phantom' 
## 
## No pre-processing
## Resampling: Cross-Validated (5 fold, repeated 1 times) 
## Summary of sample sizes: 138, 137, 136, 136, 137 
## Addtional sampling using down-sampling
## 
## Resampling results across tuning parameters:
## 
##   mtry  min.node.size  logLoss  AUC     prAUC   Accuracy  Kappa   Mean_F1
##    14    5             0.7486   0.9409  0.7398  0.7895    0.7012  0.8315 
##    14   10             0.7688   0.9431  0.7441  0.7721    0.6768  0.8121 
##   122    5             0.6985   0.9326  0.7215  0.7552    0.6517  0.8009 
##   122   10             0.7201   0.9244  0.7118  0.7485    0.6438  0.7991 
##   Mean_Sensitivity  Mean_Specificity  Mean_Pos_Pred_Value  Mean_Neg_Pred_Value
##   0.8492            0.9208            0.8445               0.9248             
##   0.8322            0.9150            0.8159               0.9165             
##   0.8216            0.9078            0.8084               0.9105             
##   0.8207            0.9057            0.8011               0.9079             
##   Mean_Precision  Mean_Recall  Mean_Detection_Rate  Mean_Balanced_Accuracy
##   0.8445          0.8492       0.1974               0.8850                
##   0.8159          0.8322       0.1930               0.8736                
##   0.8084          0.8216       0.1888               0.8647                
##   0.8011          0.8207       0.1871               0.8632                
## 
## Tuning parameter 'splitrule' was held constant at a value of gini
## AUC was used to select the optimal model using the largest value.
## The final values used for the model were mtry = 14, splitrule = gini
##  and min.node.size = 10.

4.3 Evaluating \(M_2\) on hold out set

\(M_2\) performs less well on the multiclass task than the previous models did on the binary task, but still does reasonably well.

m2_pred <- predict(m2, test)

confusionMatrix(m2_pred, test$y)
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction ar bepop bg phantom
##    ar      16     6  0       0
##    bepop    4    15  0       0
##    bg       0     0 10       1
##    phantom  0     0  0       4
## 
## Overall Statistics
##                                          
##                Accuracy : 0.804          
##                  95% CI : (0.676, 0.898) 
##     No Information Rate : 0.375          
##     P-Value [Acc > NIR] : 0.0000000000668
##                                          
##                   Kappa : 0.716          
##                                          
##  Mcnemar's Test P-Value : NA             
## 
## Statistics by Class:
## 
##                      Class: ar Class: bepop Class: bg Class: phantom
## Sensitivity              0.800        0.714     1.000         0.8000
## Specificity              0.833        0.886     0.978         1.0000
## Pos Pred Value           0.727        0.789     0.909         1.0000
## Neg Pred Value           0.882        0.838     1.000         0.9808
## Prevalence               0.357        0.375     0.179         0.0893
## Detection Rate           0.286        0.268     0.179         0.0714
## Detection Prevalence     0.393        0.339     0.196         0.0714
## Balanced Accuracy        0.817        0.800     0.989         0.9000
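
The mean sensitivity, specificity, and balanced accuracy reported in Section 1 are simple averages of these per-class statistics, recoverable from caret's byClass matrix (a short sketch):

cm2 <- confusionMatrix(m2_pred, test$y)
round(colMeans(cm2$byClass[, c('Sensitivity', 'Specificity', 'Balanced Accuracy')]), 2)
# averages the per-class rows above: roughly 0.83, 0.92, 0.88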

5 References

  • Allahham, MHD Saria, Mohammad F. Al-Sa’d, Abdulla Al-Ali, Amr Mohamed, Tamer Khattab, and Aiman Erbad. 2019. ‘DroneRF dataset: A dataset of drones for RF-based detection, classification and identification.’ Data in Brief 26: 104313.

  • Friedman, Jerome, Trevor Hastie, and Rob Tibshirani. 2010. ‘Regularization paths for generalized linear models via coordinate descent.’ Journal of Statistical Software 33, no. 1: 1.

  • Medaiyese, Olusiji O., Abbas Syed, and Adrian P. Lauf. 2021. ‘Machine learning framework for RF-based drone detection and identification system.’ 2021 2nd International Conference on Smart Cities, Automation & Intelligent Computing Systems. IEEE.

  • Zou, Hui, and Trevor Hastie. 2005. ‘Regularization and variable selection via the elastic net.’ Journal of the Royal Statistical Society: Series B (Statistical Methodology) 67, no. 2: 301–20.