Introduction

In this worked example you will replicate a PCA on a published dataset.

The example is split into two parts: data preparation (this part) and the PCA itself (Part 2).

In this data preparation phase, you will do the following things:

  1. Load the SNP genotypes in .vcf format (vcfR::read.vcfR())
  2. Extract the genotypes into an R-compatible format (vcfR::extract.gt())
  3. Rotate the data into the standard R analysis format (t())
  4. Remove individuals (rows) from the data set that have >50% NAs (using a function I wrote)
  5. Remove SNPs (columns) that are fixed
  6. Impute remaining NAs (using a for() loop)
  7. Save the prepared data as a .csv file for the next step (write.csv())

Biological background

This worked example is based on a 2017 paper in the journal Molecular Ecology by Jennifer Walsh and colleagues, titled Subspecies delineation amid phenotypic, geographic and genetic discordance in a songbird.

The study investigated genetic variation between two bird species in the genus Ammodramus: A. nelsoni and A. caudacutus.

The species A. nelsoni has been divided into three subspecies: A. n. nelsoni, A. n. alterus, and A. n. subvirgatus. The other species, A. caudacutus, has been divided into two subspecies: A. c. caudacutus and A. c. diversus.

The purpose of this study was to investigate to what extent these five subspecies recognized by taxonomists are supported by genetic data. The authors collected DNA from 75 birds (15 per subspecies) and genotyped 1929 SNPs. They then analyzed the data with Principal Components Analysis (PCA), among other genetic analyses.

This tutorial will work through all of the steps necessary to re-analyze Walsh et al.'s data.

Tasks

All of the code below is provided for you. Your tasks are to do two things:

  1. Give a meaningful title to all sections marked “TODO: TITLE”
  2. Write 1 to 2 sentences describing what is being done and why in all sections marked “TODO: EXPLAIN”

Preliminaries

Load the vcfR and other packages with library().

library(vcfR)    
## 
##    *****       ***   vcfR   ***       *****
##    This is vcfR 1.13.0 
##      browseVignettes('vcfR') # Documentation
##      citation('vcfR') # Citation
##    *****       *****      *****       *****
library(vegan)
## Loading required package: permute
## Loading required package: lattice
## This is vegan 2.6-4
library(ggplot2)
library(ggpubr)

Make sure that your working directory is set to the location of the file all_loci.vcf.

getwd()
## [1] "/Users/james/Documents/Comp Bio Fall 2022/FinalProject"
list.files()
##  [1] "07-mean_imputation.docx"                                        
##  [2] "07-mean_imputation.html"                                        
##  [3] "07-mean_imputation.Rmd"                                         
##  [4] "08-PCA_worked.html"                                             
##  [5] "08-PCA_worked.Rmd"                                              
##  [6] "09-PCA_worked_example-SNPs-part1.Rmd"                           
##  [7] "1.159051856-159301856.ALL.chr1_GRCh38.genotypes.20170504.vcf.gz"
##  [8] "8.21122256-21362256.ALL.chr8_GRCh38.genotypes.20170504.vcf"     
##  [9] "all_loci.vcf"                                                   
## [10] "bird_snps_remove_NAs.html"                                      
## [11] "bird_snps_remove_NAs.Rmd"                                       
## [12] "fst_exploration_in_class-STUDENT.html"                          
## [13] "fst_exploration_in_class-STUDENT.Rmd"                           
## [14] "removing_fixed_alleles.html"                                    
## [15] "removing_fixed_alleles.Rmd"                                     
## [16] "rsconnect"                                                      
## [17] "transpose_VCF_data.html"                                        
## [18] "transpose_VCF_data.Rmd"                                         
## [19] "walsh2017morphology.csv"
list.files(pattern = "vcf")
## [1] "1.159051856-159301856.ALL.chr1_GRCh38.genotypes.20170504.vcf.gz"
## [2] "8.21122256-21362256.ALL.chr8_GRCh38.genotypes.20170504.vcf"     
## [3] "all_loci.vcf"

Data preparation

TODO: Load the VCF file using read.vcfR

TODO: Using read.vcfR() from the vcfR package, we read the VCF file from the working directory and store it in an object called snps.

snps <- vcfR::read.vcfR("all_loci.vcf", convertNA  = TRUE)
## Scanning file to determine attributes.
## File attributes:
##   meta lines: 8
##   header_line: 9
##   variant count: 1929
##   column count: 81
## Meta line 8 read in.
## All meta lines processed.
## gt matrix initialized.
## Character matrix gt created.
##   Character matrix gt rows: 1929
##   Character matrix gt cols: 81
##   skip: 0
##   nrows: 1929
##   row_num: 0
## Processed variant 1000
## Processed variant: 1929
## All variants processed
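
It can help to look briefly at what read.vcfR() returned before moving on. The calls below are optional exploration: snps is an object of class vcfR, printing it gives a short summary of the variants and samples, and getFIX() is a vcfR accessor for the fixed columns (CHROM, POS, REF, ALT, etc.).

# optional: inspect the vcfR object
class(snps)          # "vcfR"
snps                 # brief summary of variants and samples
head(getFIX(snps))   # fixed columns: CHROM, POS, ID, REF, ALT, ...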

TODO: Get genotype scores (allele counts)

TODO: Using extract.gt() from the vcfR package, we pull the genotypes out of the vcfR object as a numeric matrix and store them in an object called snps_num.

snps_num <- vcfR::extract.gt(snps, 
                             element = "GT",
                             IDtoRowNames = F,
                             as.numeric = T,
                             convertNA = T,
                             return.alleles = F)
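
A few optional checks (not required for the analysis) show what extract.gt() produced: a numeric matrix with one row per SNP, one column per individual, and NAs where genotypes are missing.

# optional: look at the extracted genotype matrix
dim(snps_num)         # SNPs in rows, individuals in columns
snps_num[1:5, 1:4]    # peek at the upper-left corner
sum(is.na(snps_num))  # total number of missing genotype calls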

TODO: Transpose the genotype matrix

TODO: The t() function transposes the genotype data from the last block so that individuals are in rows and SNPs are in columns (the standard orientation for analysis in R); the result is saved to snps_num_t.

snps_num_t <- t(snps_num) 

TODO: Convert the transposed matrix into a data frame

snps_num_df <- data.frame(snps_num_t) 
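
As an optional check, comparing dimensions before and after confirms that t() swapped rows and columns, and class() confirms the conversion to a data frame.

# optional: confirm the transpose and the conversion
dim(snps_num)        # original: SNPs x individuals
dim(snps_num_df)     # transposed: individuals x SNPs
class(snps_num_df)   # "data.frame"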

TODO: Write a function that locates NAs

TODO: find_NAs() checks whatever vector it is given for NAs, reports how many there are (using length()), and returns their positions.

find_NAs <- function(x){
  # TRUE/FALSE for each element: is it NA?
  NAs_TF <- is.na(x)
  
  # positions (indices) of the NAs
  i_NA <- which(NAs_TF == TRUE)
  
  # number of NAs
  N_NA <- length(i_NA)
  
  cat("Results:", N_NA, "NAs present.\n")
  return(i_NA)
}
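
Here is a small self-contained example of find_NAs() in action; test_vector is a made-up vector, not part of the bird data.

# toy example: NAs at positions 2 and 4
test_vector <- c(10, NA, 30, NA, 50)
find_NAs(test_vector)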

TODO: A for() loop that applies find_NAs() to every individual (row) automatically, storing the number of NAs for each individual.

# N_rows
# number of rows (individuals)
N_rows <- nrow(snps_num_t)

# N_NA
# vector to hold output (number of NAs)
N_NA   <- rep(x = 0, times = N_rows)

# N_SNPs
# total number of columns (SNPs)
N_SNPs <- ncol(snps_num_t)

# the for() loop
for(i in 1:N_rows){
  
  # for each row, find the locations of
  ## the NAs with find_NAs()
  i_NA <- find_NAs(snps_num_t[i,]) 
  
  # then determine how many NAs
  ## with length()
  N_NA_i <- length(i_NA)
  
  # then save the output to 
  ## our storage vector
  N_NA[i] <- N_NA_i
}
## Results: 28 NAs present.
## Results: 20 NAs present.
## Results: 28 NAs present.
## Results: 24 NAs present.
## Results: 23 NAs present.
## Results: 63 NAs present.
## Results: 51 NAs present.
## Results: 38 NAs present.
## Results: 34 NAs present.
## Results: 24 NAs present.
## Results: 48 NAs present.
## Results: 21 NAs present.
## Results: 42 NAs present.
## Results: 78 NAs present.
## Results: 45 NAs present.
## Results: 21 NAs present.
## Results: 42 NAs present.
## Results: 34 NAs present.
## Results: 66 NAs present.
## Results: 54 NAs present.
## Results: 59 NAs present.
## Results: 52 NAs present.
## Results: 47 NAs present.
## Results: 31 NAs present.
## Results: 63 NAs present.
## Results: 40 NAs present.
## Results: 40 NAs present.
## Results: 22 NAs present.
## Results: 60 NAs present.
## Results: 48 NAs present.
## Results: 961 NAs present.
## Results: 478 NAs present.
## Results: 59 NAs present.
## Results: 26 NAs present.
## Results: 285 NAs present.
## Results: 409 NAs present.
## Results: 1140 NAs present.
## Results: 600 NAs present.
## Results: 1905 NAs present.
## Results: 25 NAs present.
## Results: 1247 NAs present.
## Results: 23 NAs present.
## Results: 750 NAs present.
## Results: 179 NAs present.
## Results: 433 NAs present.
## Results: 123 NAs present.
## Results: 65 NAs present.
## Results: 49 NAs present.
## Results: 192 NAs present.
## Results: 433 NAs present.
## Results: 66 NAs present.
## Results: 597 NAs present.
## Results: 1891 NAs present.
## Results: 207 NAs present.
## Results: 41 NAs present.
## Results: 268 NAs present.
## Results: 43 NAs present.
## Results: 110 NAs present.
## Results: 130 NAs present.
## Results: 90 NAs present.
## Results: 271 NAs present.
## Results: 92 NAs present.
## Results: 103 NAs present.
## Results: 175 NAs present.
## Results: 31 NAs present.
## Results: 66 NAs present.
## Results: 64 NAs present.
## Results: 400 NAs present.
## Results: 192 NAs present.
## Results: 251 NAs present.
## Results: 69 NAs present.
## Results: 58 NAs present.
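
For reference, the same per-individual counts can be obtained with a single vectorized call. This is just an alternative to the loop above (N_NA_check is a new name introduced here), not part of the original workflow.

# vectorized equivalent of the loop: count the NAs in each row
N_NA_check <- rowSums(is.na(snps_num_t))
all(N_NA_check == N_NA)   # should be TRUE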

TODO: Exercising a researcher degree of freedom, we decide to use 50% missing data as the cutoff for removing an individual. We plot a histogram of the per-individual NA counts and mark the cutoff with a dashed vertical line using abline().

# 50% of N_SNPs
cutoff50 <- N_SNPs*0.5

hist(N_NA)            
abline(v = cutoff50, 
       col = 2, 
       lwd = 2, 
       lty = 2)
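
An optional numeric companion to the histogram: counting how many individuals fall above the 50% cutoff before any rows are actually removed.

# optional: how many individuals exceed the 50% missing-data cutoff?
sum(N_NA > cutoff50)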

TODO: Convert each individual's NA count to a percentage, use which() to find the index of every row with more than 50% NAs, and then remove those rows with negative indexing.

percent_NA <- N_NA/N_SNPs*100

# Call which() on percent_NA
i_NA_50percent <- which(percent_NA > 50) 

snps_num_t02 <- snps_num_t[-i_NA_50percent, ]
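
Checking the dimensions afterwards is an easy optional way to confirm how many individuals were kept; the row count of snps_num_t02 should match the "Dataframe of dim ... processed" message printed by invar_omit() further below.

# optional: individuals remaining and individuals removed
dim(snps_num_t02)
length(i_NA_50percent)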

TODO: Get the row names and strip the "sample_" prefix and the leading block of A/T/C/G letters, leaving just the sample IDs.

TODO: Use gsub() again to remove the trailing digits from the sample IDs, leaving just the population codes, then count the individuals in each population with table().

row_names <- row.names(snps_num_t02) # Key

row_names02 <- gsub("sample_","",row_names)

sample_id <- gsub("^([ATCG]*)(_)(.*)",
                  "\\3",
                  row_names02)
pop_id <- gsub("[01-9]*",    
               "",
               sample_id)

table(pop_id)  
## pop_id
## Alt Cau Div Nel Sub 
##  15  12  15  15  11
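
To make the two gsub() steps concrete, here is how they act on a single row name. The name below is hypothetical, invented only to illustrate the sample_<alleles>_<population + number> pattern that the regular expressions assume.

# hypothetical example of the name clean-up
toy_name <- "sample_ACGTAC_Cau12"                        # made-up row name
toy_name <- gsub("sample_", "", toy_name)                # "ACGTAC_Cau12"
toy_id   <- gsub("^([ATCG]*)(_)(.*)", "\\3", toy_name)   # "Cau12"
gsub("[01-9]*", "", toy_id)                              # "Cau"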

TODO: Create invar_omit function

TODO: Write a custom function that finds columns with zero variance (fixed/invariant SNPs) and removes them from the data frame.

invar_omit <- function(x){
  cat("Dataframe of dim", dim(x), "processed...\n")
  
  # standard deviation of each column; sd == 0 means the SNP is invariant
  sds <- apply(x, 2, sd, na.rm = TRUE)
  i_var0 <- which(sds == 0)
  
  cat(length(i_var0), "columns removed\n")
  
  # drop the invariant columns, if there are any
  if(length(i_var0) > 0){
     x <- x[, -i_var0]
  }
  
  # return the cleaned data frame
  return(x)                      
}


snps_no_invar <- invar_omit(snps_num_t02) 
## Dataframe of dim 68 1929 processed...
## 591 columns removed
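
To see invar_omit() in isolation, here is a tiny made-up data frame (toy_df, not part of the bird data) with one invariant column; the function should report one column removed and return only the two variable columns.

# toy example: column b has zero variance and should be dropped
toy_df <- data.frame(a = c(0, 1, 2, 1),
                     b = c(1, 1, 1, 1),
                     c = c(2, 0, 1, 3))
invar_omit(toy_df)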

TODO: A for() loop that imputes the remaining NAs

TODO: On each pass, the loop takes the current column, computes its mean (ignoring NAs), finds the NAs in that column, and replaces them with the column mean. This approach is known as mean imputation.

snps_noNAs <- snps_no_invar

N_col <- ncol(snps_no_invar)
for(i in 1:N_col){
  
  # get the current column
  column_i <- snps_noNAs[, i]
  
  # get the mean of the current column
  mean_i <- mean(column_i, na.rm = TRUE)
  
  # get the NAs in the current column
  NAs_i <- which(is.na(column_i))
  
  # record the number of NAs
  N_NAs <- length(NAs_i)

  # replace the NAs in the current column
  column_i[NAs_i] <- mean_i
  
  # replace the original column with the
  ## updated columns
  snps_noNAs[, i] <- column_i
  
}
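
A quick optional check confirms that mean imputation left no missing values and did not change the dimensions of the data.

# optional: verify the imputation
sum(is.na(snps_noNAs))   # should be 0
dim(snps_noNAs)          # same dimensions as snps_no_invar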

Save the data

Save the data as a .csv file which can be loaded again later.

write.csv(snps_noNAs, file = "SNPs_cleaned.csv",
          row.names = F)

Check for the presence of the file with list.files()

list.files(pattern = ".csv")
## [1] "SNPs_cleaned.csv"        "walsh2017morphology.csv"

Next steps:

In Part 2, we will re-load the SNPs_cleaned.csv file and carry out an analysis with PCA.