In this worked example you will replicate a PCA on a published dataset.
The example is split into two parts.

In this Data Preparation phase, you will do the following things:

- Load the data from a .vcf file (vcfR::read.vcfR())
- Extract the genotypes (vcfR::extract.gt())
- Transpose the data (t())
- Locate and clean up missing data (a for() loop)
- Save the cleaned data to a .csv file for the next step (write.csv())

This worked example is based on a paper in the journal Molecular Ecology from 2017 by Jennifer Walsh titled "Subspecies delineation amid phenotypic, geographic and genetic discordance in a songbird."
The study investigated variation between two bird species in the genus Ammodramus: A. nelsoni and A. caudacutus.
The species A. nelsoni has been divided into three subspecies: A. n. nelsoni, A. n. alterus, and A. n. subvirgatus. The other species, A. caudacutus, has been divided into two subspecies: A. c. caudacutus and A. c. diversus.
The purpose of this study was to investigate to what extent these five subspecies recognized by taxonomists are supported by genetic data. The authors collected DNA from 75 birds (15 per subspecies) and genotyped 1929 SNPs. They then analyzed the data with Principal Components Analysis (PCA), among other genetic analyses.
This tutorial will work through all of the steps necessary to re-analyze Walsh et al.'s data.
All of the code you need for this part is provided below.
Load vcfR and the other required packages with library().
library(vcfR)
##
## ***** *** vcfR *** *****
## This is vcfR 1.13.0
## browseVignettes('vcfR') # Documentation
## citation('vcfR') # Citation
## ***** ***** ***** *****
library(vegan)
## Loading required package: permute
## Loading required package: lattice
## This is vegan 2.6-4
library(ggplot2)
library(ggpubr)
Make sure that your working directory is set to the location of the
file all_loci.vcf.
getwd()
## [1] "C:/Users/Casth/OneDrive/old stuff/Desktop/R"
list.files()
## [1] "07-mean_imputation.docx"
## [2] "07-mean_imputation.html"
## [3] "07-mean_imputation.Rmd"
## [4] "08-PCA_worked.html"
## [5] "08-PCA_worked.Rmd"
## [6] "09-PCA_worked_example-SNPs-part1.html"
## [7] "09-PCA_worked_example-SNPs-part1.Rmd"
## [8] "all_loci.vcf"
## [9] "bird_snps_remove_NAs.html"
## [10] "bird_snps_remove_NAs.Rmd"
## [11] "center_function.R"
## [12] "dinog.csv"
## [13] "feature_engineering.Rmd"
## [14] "FINAL PROJECT"
## [15] "Older_stuff"
## [16] "pair.html"
## [17] "pair.Rmd"
## [18] "PCA-missing_data.Rmd"
## [19] "removing_fixed_alleles.html"
## [20] "removing_fixed_alleles.Rmd"
## [21] "rsconnect"
## [22] "SNPs_cleaned.csv"
## [23] "test.docx"
## [24] "test.html"
## [25] "test.Rmd"
## [26] "test3.Rmd"
## [27] "transpose_VCF_data.html"
## [28] "transpose_VCF_data.Rmd"
## [29] "twst.Rmd"
## [30] "vegan_PCA_amino_acids-STUDENT.html"
## [31] "vegan_PCA_amino_acids-STUDENT.Rmd"
## [32] "walsh2017morphology.csv"
## [33] "working_directory_practice.Rmd"
list.files(pattern = "vcf")
## [1] "all_loci.vcf"
## Loading data from .vcf files
Load the .vcf file into a variable called snps using vcfR::read.vcfR(), with the argument convertNA = TRUE so that missing genotypes are stored as NA.
snps <- vcfR::read.vcfR("all_loci.vcf", convertNA = TRUE)
## Scanning file to determine attributes.
## File attributes:
## meta lines: 8
## header_line: 9
## variant count: 1929
## column count: 81
## Meta line 8 read in.
## All meta lines processed.
## gt matrix initialized.
## Character matrix gt created.
## Character matrix gt rows: 1929
## Character matrix gt cols: 81
## skip: 0
## nrows: 1929
## row_num: 0
## Processed variant 1000
## Processed variant: 1929
## All variants processed
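Before extracting genotypes, you can optionally get a quick overview of what was loaded by printing the vcfR object; this check is not part of the original workflow and its output is omitted here.
# Print a brief summary of the vcfR object (samples, variants, missing data)
snps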
Extract genotype scores using vcfR::extract.gt()
snps_num <- vcfR::extract.gt(snps,
element = "GT",
IDtoRowNames = F,
as.numeric = T,
convertNA = T,
return.alleles = F)
Use the function t() to transpose the data in snps_num.
snps_num_t <- t(snps_num)
Use data.frame() to convert snps_num_t into a data frame named snps_num_df.
snps_num_df <- data.frame(snps_num_t)
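As an optional sanity check (not part of the original code), you can confirm that t() swapped the dimensions so that individuals are now in rows and SNPs are in columns.
# Dimensions before and after transposing
dim(snps_num)    # SNPs in rows, individuals in columns
dim(snps_num_t)  # individuals in rows, SNPs in columns
dim(snps_num_df) # same shape as snps_num_t, now a data frame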
Write a function to find NAs: use is.na() to flag missing values, which() to locate their positions, and length() to count how many there are. The function prints the number of NAs and returns their positions.
find_NAs <- function(x){
NAs_TF <- is.na(x)
i_NA <- which(NAs_TF == TRUE)
N_NA <- length(i_NA)
cat("Results:",N_NA, "NAs present\n.")
return(i_NA)
}
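To see how find_NAs() behaves, you can first try it on a small toy vector; the vector below is made up purely for illustration.
# Toy example: the NAs sit at positions 2 and 4
find_NAs(c(1, NA, 0, NA, 2))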
Find the number of rows (individuals) with nrow(), create a storage vector of that length with rep(), and find the total number of columns (SNPs) with ncol(). Then use a for() loop to run find_NAs() on each row, count the NAs with length(), and save the result to the storage vector.
# N_rows,
# number of rows (individuals)
N_rows <- nrow(snps_num_t)
# N_NA
# vector to hold output (number of NAs)
N_NA <- rep(x = 0, times = N_rows)
# N_SNPs
# total number of columns (SNPs)
N_SNPs <- ncol(snps_num_t)
# the for() loop
for(i in 1:N_rows){
# for each row, find the location of
## NAs with snps_num_t()
i_NA <- find_NAs(snps_num_t[i,])
# then determine how many NAs
## with length()
N_NA_i <- length(i_NA)
# then save the output to
## our storage vector
N_NA[i] <- N_NA_i
}
## Results: 28 NAs present
## .Results: 20 NAs present
## .Results: 28 NAs present
## .Results: 24 NAs present
## .Results: 23 NAs present
## .Results: 63 NAs present
## .Results: 51 NAs present
## .Results: 38 NAs present
## .Results: 34 NAs present
## .Results: 24 NAs present
## .Results: 48 NAs present
## .Results: 21 NAs present
## .Results: 42 NAs present
## .Results: 78 NAs present
## .Results: 45 NAs present
## .Results: 21 NAs present
## .Results: 42 NAs present
## .Results: 34 NAs present
## .Results: 66 NAs present
## .Results: 54 NAs present
## .Results: 59 NAs present
## .Results: 52 NAs present
## .Results: 47 NAs present
## .Results: 31 NAs present
## .Results: 63 NAs present
## .Results: 40 NAs present
## .Results: 40 NAs present
## .Results: 22 NAs present
## .Results: 60 NAs present
## .Results: 48 NAs present
## .Results: 961 NAs present
## .Results: 478 NAs present
## .Results: 59 NAs present
## .Results: 26 NAs present
## .Results: 285 NAs present
## .Results: 409 NAs present
## .Results: 1140 NAs present
## .Results: 600 NAs present
## .Results: 1905 NAs present
## .Results: 25 NAs present
## .Results: 1247 NAs present
## .Results: 23 NAs present
## .Results: 750 NAs present
## .Results: 179 NAs present
## .Results: 433 NAs present
## .Results: 123 NAs present
## .Results: 65 NAs present
## .Results: 49 NAs present
## .Results: 192 NAs present
## .Results: 433 NAs present
## .Results: 66 NAs present
## .Results: 597 NAs present
## .Results: 1891 NAs present
## .Results: 207 NAs present
## .Results: 41 NAs present
## .Results: 268 NAs present
## .Results: 43 NAs present
## .Results: 110 NAs present
## .Results: 130 NAs present
## .Results: 90 NAs present
## .Results: 271 NAs present
## .Results: 92 NAs present
## .Results: 103 NAs present
## .Results: 175 NAs present
## .Results: 31 NAs present
## .Results: 66 NAs present
## .Results: 64 NAs present
## .Results: 400 NAs present
## .Results: 192 NAs present
## .Results: 251 NAs present
## .Results: 69 NAs present
## .Results: 58 NAs present
## .
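For reference, the same per-individual counts can be computed in a single vectorized line with rowSums(); this is an equivalent sketch, not the approach used in the tutorial.
# Vectorized alternative: count the NAs in each row
N_NA_check <- rowSums(is.na(snps_num_t))
# TRUE if the loop and the vectorized version agree
all(N_NA == N_NA_check)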
Create a cutoff variable equal to 50% of N_SNPs, then make a hist() plot of N_NA and add the cutoff as a vertical line using abline().
# 50% of N_SNPs
cutoff50 <- N_SNPs*0.5
hist(N_NA)
abline(v = cutoff50,
col = 2,
lwd = 2,
lty = 2)
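Before removing anything, it can help to know how many individuals fall above the cutoff; this optional check is not part of the original code.
# Number of individuals with more than 50% missing genotypes
sum(N_NA > cutoff50)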
Calculate the percentage of missing data for each individual (N_NA/N_SNPs*100), then call which() on the result to determine which individuals have more than 50% NAs and store their positions in i_NA_50percent. Keep only the individuals with less than 50% missing data by using [-i_NA_50percent, ] to remove the rest.
percent_NA <- N_NA/N_SNPs*100
# Call which() on percent_NA
i_NA_50percent <- which(percent_NA > 50)
snps_num_t02 <- snps_num_t[-i_NA_50percent, ]
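One caveat about negative indexing (a defensive sketch, not part of the original code): if no individuals exceeded the cutoff, i_NA_50percent would be empty and snps_num_t[-i_NA_50percent, ] would drop every row. A length check avoids that edge case.
# Only drop rows when there is something to drop
if(length(i_NA_50percent) > 0){
  snps_num_t02 <- snps_num_t[-i_NA_50percent, ]
} else {
  snps_num_t02 <- snps_num_t
}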
Store the row names of snps_num_t02 in a vector called row_names. Then use gsub() to clean them up: first remove the "sample_" prefix, next extract the sample ID with a regular expression, and finally strip out the digits to leave just the population ID.
row_names <- row.names(snps_num_t02) # Key
row_names02 <- gsub("sample_","",row_names)
sample_id <- gsub("^([ATCG]*)(_)(.*)",
"\\3",
row_names02)
pop_id <- gsub("[01-9]*",
"",
sample_id)
table(pop_id)
## pop_id
## Alt Cau Div Nel Sub
## 15 12 15 15 11
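If the regular expressions are unfamiliar, it can help to trace them on a single made-up row name; the name below is hypothetical, not one of the actual samples.
# Hypothetical row name: "sample_" prefix, an A/T/C/G barcode, then the bird ID
example_name <- "sample_ACGT_Nel01"
gsub("sample_", "", example_name)              # "ACGT_Nel01"
gsub("^([ATCG]*)(_)(.*)", "\\3", "ACGT_Nel01") # "Nel01" (keeps the 3rd group)
gsub("[01-9]*", "", "Nel01")                   # "Nel" ([01-9] is the same as [0-9])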
Write a function, invar_omit(), to remove invariant SNPs. Use cat() to report the dimensions of the data frame being processed, apply() to calculate the standard deviation of each column (with na.rm = TRUE so NAs are ignored), and which() to find the columns whose standard deviation is 0. Report how many columns are removed with cat(), drop those columns, and return x.
invar_omit <- function(x){
cat("Dataframe of dim",dim(x), "processed...\n")
sds <- apply(x, 2, sd, na.rm = TRUE)
i_var0 <- which(sds == 0)
cat(length(i_var0),"columns removed\n")
if(length(i_var0) > 0){
x <- x[, -i_var0]
}
# return the filtered data frame
return(x)
}
snps_no_invar <- invar_omit(snps_num_t02)
## Dataframe of dim 68 1929 processed...
## 591 columns removed
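To see what invar_omit() does, you can run it on a small toy data frame in which the second column is constant; this example is purely for illustration.
# Toy data frame: column b has zero variance
toy <- data.frame(a = c(0, 1, 2), b = c(1, 1, 1), c = c(2, 0, 1))
# Reports a 3 x 3 data frame processed and 1 column removed,
# then returns the toy data without column b
invar_omit(toy)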
Finally, impute the remaining NAs with column means. In a for() loop, get the current column, calculate its mean with na.rm = TRUE so the NAs are ignored, use which(is.na()) to find the positions of the NAs in the column, count them, replace them with the column mean, and write the updated column back into the data frame.
snps_noNAs <- snps_no_invar
N_col <- ncol(snps_no_invar)
for(i in 1:N_col){
# get the current column
column_i <- snps_noNAs[, i]
# get the mean of the current column
mean_i <- mean(column_i, na.rm = TRUE)
# get the NAs in the current column
NAs_i <- which(is.na(column_i))
# record the number of NAs
N_NAs <- length(NAs_i)
# replace the NAs in the current column
column_i[NAs_i] <- mean_i
# replace the original column with the
## updated columns
snps_noNAs[, i] <- column_i
}
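After the loop, it is worth verifying that the imputation worked; this optional check is not part of the original code.
# Should be 0 if every NA was replaced with its column mean
sum(is.na(snps_noNAs))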
Save the data as a .csv file which can be loaded again later.
write.csv(snps_noNAs, file = "SNPs_cleaned.csv",
row.names = F)
Check for the presence of the file with list.files()
list.files(pattern = ".csv")
## [1] "dinog.csv" "SNPs_cleaned.csv"
## [3] "walsh2017morphology.csv"
In Part 2, we will re-load the SNPs_cleaned.csv file and carry out an analysis with PCA.
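As a preview of Part 2, the cleaned file can be read back in with read.csv(); the object name snps_cleaned is just a placeholder.
# Re-load the cleaned SNP data (placeholder object name)
snps_cleaned <- read.csv("SNPs_cleaned.csv")
dim(snps_cleaned)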