This is a tutorial on how to use R Markdown for reproducible research.

Here we can type long passages or descriptions of our data without needing to “hash out” our comments with the # symbol. In our first example, we will be using the ToothGrowth dataset. In this experiment, guinea pigs (literally) were given different amounts of Vitamin C to see the effects on the animals’ tooth growth.

To run R code in a Markdown file, we need to denote the sections that are considered R code. We call these “code chunks.”

Below is a code chunk:

Toothdata <- ToothGrowth 

head(Toothdata)
##    len supp dose
## 1  4.2   VC  0.5
## 2 11.5   VC  0.5
## 3  7.3   VC  0.5
## 4  5.8   VC  0.5
## 5  6.4   VC  0.5
## 6 10.0   VC  0.5

As you can see, after pressing the “play” button on the code chunk, the results are printed inline in the R Markdown file.

fit <- lm(len ~ dose, data = Toothdata)

b <- fit$coefficients

plot(len ~ dose, data = Toothdata)

abline(lm(len ~ dose, data = Toothdata))
Figure 1: The tooth growth of guinea pigs when given variable amounts of Vitamin C

The slope of the regression line is `r b[2]`. (Inline R code like this is written between backticks, starting with r.)

Section Headers

We can also put sections and subsections in our R Markdown file, similar to headings or bullet points in a Word document. This is done with the “#” symbol that we previously used to denote comments in an R script.

First level header

Second level header

Third level header

Make sure that you put a space after the hashtag, otherwise it will not work!
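In the .Rmd source, the three headers above are written like this:

# First level header
## Second level header
### Third level header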

We can also add bullet point type marks in our r-markdown file.

  • one item
  • one item
  • one item
    • one more item
    • one more item
    • one more item
      • one last item

It’s important to note here that in R Markdown, indentation matters!
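For reference, a sketch of the source for the nested bullets above, written with indented dashes:

- one item
  - one more item
    - one last item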

  1. First Item
  2. Second Item
  3. Third Item
     1. subitem 1
     2. subitem 2
     3. subitem 3

Block Quotes

We can put really nice quotes into the markdown document. We do this by using the “>” symbol.

“Genes are like the story, and DNA is the language that the story is written in.”

— Sam Kean
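In the source, the block quote above is written as:

> “Genes are like the story, and DNA is the language that the story is written in.”
>
> — Sam Kean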

Formulas

We can also put nicely formatted formulas into Markdown using two dollar signs.

Hardy-Weinberg Formula

$$p^2 + 2pq + q^2 = 1$$

And you can get really complex as well!

$$\Theta = \begin{pmatrix}\alpha & \beta\\ \gamma & \delta \end{pmatrix}$$

Code Chunks

Code chunk options

There are also options for your R Markdown file that control how knitr interprets each code chunk. The following options are available:

  • eval (TRUE or FALSE): whether or not to evaluate the code chunk.

  • echo (TRUE or FALSE): whether or not to show the code for the chunk; results will still print.

  • cache: if enabled, the code chunk will not be re-evaluated the next time knitr runs. Great for code that has LONG run times.

  • fig.width and fig.height: the (graphical device) size of the R plots in inches. The figures are first written to the knitr document, then to files that are saved separately.

  • out.width and out.height: the output size of the R plots IN THE R DOCUMENT.

  • fig.cap: the text for the figure caption. An example chunk header using several of these options is shown below.
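A sketch of such a chunk header (the caption text and sizes here are made up):

```{r, echo=FALSE, cache=TRUE, fig.width=6, fig.height=4, fig.cap="Tooth growth by dose"}
plot(len ~ dose, data = Toothdata)
```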

Table of Contents

We can also add a table of contents to our HTML document. We do this by altering the YAML header (the weird block at the VERY top of the document). We can add this:

title: "HTML_Tutorial"
author: "Natalie Garnett"
date: '2024-07-10'
output:
  html_document:
    toc: true
    toc_float: true

This will give us a very nice floating table of contents on the right hand side of the document.

Tabs

You can also add TABS to your report. To do this, you need to mark each section that you want to become a tabbed container by placing “{.tabset}” after the header line. Every subsequent lower-level header will be a new tab.
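For example (a sketch with made-up section names):

## Results {.tabset}

### Plot

### Table

Here “Plot” and “Table” render as tabs inside the “Results” section.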

Themes

You can also add themes to your HTML document that change the highlighting color and hyperlink color of your HTML output. This can be nice aesthetically. To do this, you can change your theme in the YAML to one of the following:

  • cerulean
  • journal
  • flatly
  • readable
  • spacelab
  • united
  • cosmo
  • lumen
  • paper
  • sandstone
  • simplex
  • yeti
  • null

You can also change the syntax-highlighting colors by specifying highlight (see the YAML sketch after this list):

  • default
  • tango
  • pygments
  • kate
  • monochrome
  • espresso
  • zenburn
  • haddock
  • textmate
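Putting a theme and a highlight together in the YAML header might look like this (a sketch using one value from each list):

output:
  html_document:
    theme: flatly
    highlight: tango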

Code Folding

You can also use the code_folding option to allow the reader to toggle between displaying and hiding the code. This is done with:

code_folding: hide

Summary

There are a TON of options and ways for you to customize your R Markdown output in the HTML format. This is also a great way to display a “portfolio” of your work if you are trying to market yourself to interested parties.

Data wrangling with R

The first thing to do is load the library and look at the top of the data.

library(tidyverse)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr     1.1.4     ✔ readr     2.1.4
## ✔ forcats   1.0.0     ✔ stringr   1.5.1
## ✔ ggplot2   3.4.4     ✔ tibble    3.2.1
## ✔ lubridate 1.9.3     ✔ tidyr     1.3.0
## ✔ purrr     1.0.2     
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag()    masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
library(stringr)

??flights

my_data <- nycflights13::flights

head(my_data)
## # A tibble: 6 × 19
##    year month   day dep_time sched_dep_time dep_delay arr_time sched_arr_time
##   <int> <int> <int>    <int>          <int>     <dbl>    <int>          <int>
## 1  2013     1     1      517            515         2      830            819
## 2  2013     1     1      533            529         4      850            830
## 3  2013     1     1      542            540         2      923            850
## 4  2013     1     1      544            545        -1     1004           1022
## 5  2013     1     1      554            600        -6      812            837
## 6  2013     1     1      554            558        -4      740            728
## # ℹ 11 more variables: arr_delay <dbl>, carrier <chr>, flight <int>,
## #   tailnum <chr>, origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>,
## #   hour <dbl>, minute <dbl>, time_hour <dttm>

First we will just look at the data on October 14th.

filter(my_data, month == 10, day ==14)
## # A tibble: 987 × 19
##     year month   day dep_time sched_dep_time dep_delay arr_time sched_arr_time
##    <int> <int> <int>    <int>          <int>     <dbl>    <int>          <int>
##  1  2013    10    14      451            500        -9      624            648
##  2  2013    10    14      511            517        -6      733            757
##  3  2013    10    14      536            545        -9      814            855
##  4  2013    10    14      540            545        -5      932            933
##  5  2013    10    14      548            545         3      824            827
##  6  2013    10    14      549            600       -11      719            730
##  7  2013    10    14      552            600        -8      650            659
##  8  2013    10    14      553            600        -7      646            700
##  9  2013    10    14      554            600        -6      836            829
## 10  2013    10    14      555            600        -5      832            855
## # ℹ 977 more rows
## # ℹ 11 more variables: arr_delay <dbl>, carrier <chr>, flight <int>,
## #   tailnum <chr>, origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>,
## #   hour <dbl>, minute <dbl>, time_hour <dttm>

If we want to subset this into a new variable, we do the following:

oct_14_flight <- filter(my_data, month == 10, day ==14)

What if you want to both print and save the variable? Wrap the whole assignment in parentheses:

(oct_14_flight_2 <- filter(my_data, month == 10, day ==14))
## # A tibble: 987 × 19
##     year month   day dep_time sched_dep_time dep_delay arr_time sched_arr_time
##    <int> <int> <int>    <int>          <int>     <dbl>    <int>          <int>
##  1  2013    10    14      451            500        -9      624            648
##  2  2013    10    14      511            517        -6      733            757
##  3  2013    10    14      536            545        -9      814            855
##  4  2013    10    14      540            545        -5      932            933
##  5  2013    10    14      548            545         3      824            827
##  6  2013    10    14      549            600       -11      719            730
##  7  2013    10    14      552            600        -8      650            659
##  8  2013    10    14      553            600        -7      646            700
##  9  2013    10    14      554            600        -6      836            829
## 10  2013    10    14      555            600        -5      832            855
## # ℹ 977 more rows
## # ℹ 11 more variables: arr_delay <dbl>, carrier <chr>, flight <int>,
## #   tailnum <chr>, origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>,
## #   hour <dbl>, minute <dbl>, time_hour <dttm>

If you want to filter based on different operators, you can use the following:

(flight_through_september <- filter(my_data, month < 10))
## # A tibble: 252,484 × 19
##     year month   day dep_time sched_dep_time dep_delay arr_time sched_arr_time
##    <int> <int> <int>    <int>          <int>     <dbl>    <int>          <int>
##  1  2013     1     1      517            515         2      830            819
##  2  2013     1     1      533            529         4      850            830
##  3  2013     1     1      542            540         2      923            850
##  4  2013     1     1      544            545        -1     1004           1022
##  5  2013     1     1      554            600        -6      812            837
##  6  2013     1     1      554            558        -4      740            728
##  7  2013     1     1      555            600        -5      913            854
##  8  2013     1     1      557            600        -3      709            723
##  9  2013     1     1      557            600        -3      838            846
## 10  2013     1     1      558            600        -2      753            745
## # ℹ 252,474 more rows
## # ℹ 11 more variables: arr_delay <dbl>, carrier <chr>, flight <int>,
## #   tailnum <chr>, origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>,
## #   hour <dbl>, minute <dbl>, time_hour <dttm>

If we use a single = instead of == to mean equals, we get an error, which is why this line is commented out:

#(oct_14_flight_2 <- filter(my_data, month = 10, day = 14))

You can also use logical operators to be more selective.

Let’s use the “or” operator to pick flights in March and April:

March_April_Flights <- filter(my_data, month == 3 | month == 4)

March_4th_Flights <- filter(my_data, month == 3 & day == 4)

Non_jan_flights <- filter(my_data, month != 1)

Arrange

Arrange allows us to arrange the dataset based on the variables we desire.

arrange(my_data, year, day, month)
## # A tibble: 336,776 × 19
##     year month   day dep_time sched_dep_time dep_delay arr_time sched_arr_time
##    <int> <int> <int>    <int>          <int>     <dbl>    <int>          <int>
##  1  2013     1     1      517            515         2      830            819
##  2  2013     1     1      533            529         4      850            830
##  3  2013     1     1      542            540         2      923            850
##  4  2013     1     1      544            545        -1     1004           1022
##  5  2013     1     1      554            600        -6      812            837
##  6  2013     1     1      554            558        -4      740            728
##  7  2013     1     1      555            600        -5      913            854
##  8  2013     1     1      557            600        -3      709            723
##  9  2013     1     1      557            600        -3      838            846
## 10  2013     1     1      558            600        -2      753            745
## # ℹ 336,766 more rows
## # ℹ 11 more variables: arr_delay <dbl>, carrier <chr>, flight <int>,
## #   tailnum <chr>, origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>,
## #   hour <dbl>, minute <dbl>, time_hour <dttm>

We can also do this in descending fashion

descending <- arrange(my_data, desc(year), desc(day), desc(month))

Missing values are always placed at the end of the dataframe regardless of ascending or descending.
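A quick sketch to verify this with a made-up three-row tibble:

df <- tibble(x = c(5, NA, 2))
arrange(df, x)        # rows ordered 2, 5, NA
arrange(df, desc(x))  # rows ordered 5, 2, NA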

Select

We can also select specific columns that we want to look at

calendar <- select(my_data, year, month, day)
print(calendar)
## # A tibble: 336,776 × 3
##     year month   day
##    <int> <int> <int>
##  1  2013     1     1
##  2  2013     1     1
##  3  2013     1     1
##  4  2013     1     1
##  5  2013     1     1
##  6  2013     1     1
##  7  2013     1     1
##  8  2013     1     1
##  9  2013     1     1
## 10  2013     1     1
## # ℹ 336,766 more rows

We can also look at a range of columns

calendar2 <- select(my_data, year:day)

Let’s look at all columns from year through carrier

calendar3 <- select(my_data, year:carrier)

We can also choose which columns NOT to include

everything_else <- select(my_data, -(year:day))

In this instance we can also use the “not” operator !

everything_else2 <- select(my_data, !(year:day))

There are also some other helper functions that can help you select the columns or data you’re looking for, as shown in the example after this list:

  • starts_with(“xyz”): selects the columns whose names start with xyz
  • ends_with(“xyz”): selects the columns whose names end with xyz
  • contains(“xyz”): selects the columns whose names contain xyz
  • matches(“xyz”): selects the columns whose names match the regular expression xyz
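For instance, to grab the departure-related columns (a sketch; my_data still has its original column names at this point):

select(my_data, starts_with("dep"))   # dep_time, dep_delay
select(my_data, contains("time"))     # every column with "time" in its name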

Renaming

head(my_data)
## # A tibble: 6 × 19
##    year month   day dep_time sched_dep_time dep_delay arr_time sched_arr_time
##   <int> <int> <int>    <int>          <int>     <dbl>    <int>          <int>
## 1  2013     1     1      517            515         2      830            819
## 2  2013     1     1      533            529         4      850            830
## 3  2013     1     1      542            540         2      923            850
## 4  2013     1     1      544            545        -1     1004           1022
## 5  2013     1     1      554            600        -6      812            837
## 6  2013     1     1      554            558        -4      740            728
## # ℹ 11 more variables: arr_delay <dbl>, carrier <chr>, flight <int>,
## #   tailnum <chr>, origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>,
## #   hour <dbl>, minute <dbl>, time_hour <dttm>
rename(my_data, departure_time = dep_time)
## # A tibble: 336,776 × 19
##     year month   day departure_time sched_dep_time dep_delay arr_time
##    <int> <int> <int>          <int>          <int>     <dbl>    <int>
##  1  2013     1     1            517            515         2      830
##  2  2013     1     1            533            529         4      850
##  3  2013     1     1            542            540         2      923
##  4  2013     1     1            544            545        -1     1004
##  5  2013     1     1            554            600        -6      812
##  6  2013     1     1            554            558        -4      740
##  7  2013     1     1            555            600        -5      913
##  8  2013     1     1            557            600        -3      709
##  9  2013     1     1            557            600        -3      838
## 10  2013     1     1            558            600        -2      753
## # ℹ 336,766 more rows
## # ℹ 12 more variables: sched_arr_time <int>, arr_delay <dbl>, carrier <chr>,
## #   flight <int>, tailnum <chr>, origin <chr>, dest <chr>, air_time <dbl>,
## #   distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
my_data <- rename(my_data, departure_time = dep_time)

Mutate

What if you want to add new columns to your data frame? We have the mutate() function for that.

First, let’s make a smaller data frame so we can see what we’re doing.

my_data_small <- select(my_data, year:day, distance, air_time)

Let’s calculate the speed of the flights.

mutate(my_data_small, speed = distance / air_time * 60)
## # A tibble: 336,776 × 6
##     year month   day distance air_time speed
##    <int> <int> <int>    <dbl>    <dbl> <dbl>
##  1  2013     1     1     1400      227  370.
##  2  2013     1     1     1416      227  374.
##  3  2013     1     1     1089      160  408.
##  4  2013     1     1     1576      183  517.
##  5  2013     1     1      762      116  394.
##  6  2013     1     1      719      150  288.
##  7  2013     1     1     1065      158  404.
##  8  2013     1     1      229       53  259.
##  9  2013     1     1      944      140  405.
## 10  2013     1     1      733      138  319.
## # ℹ 336,766 more rows
my_data_small <- mutate(my_data_small, speed = distance / air_time * 60)

What if we want to create a new data frame with ONLY our calculations? We can use transmute():

airspeed <- transmute(my_data_small, speed = distance / air_time * 60 , speed2 = distance / air_time)

Summarize and group_by()

We can use summarize to run a function on a data column and get a single value in return

summarize(my_data, delay = mean(dep_delay, na.rm = TRUE))
## # A tibble: 1 × 1
##   delay
##   <dbl>
## 1  12.6

So we can see here that the average departure delay is about 12.6 minutes

We gain additional value from summarize by pairing it with group_by()

by_day <- group_by(my_data, year, month, day)
summarize(by_day, delay = mean(dep_delay, na.rm = TRUE))
## `summarise()` has grouped output by 'year', 'month'. You can override using the
## `.groups` argument.
## # A tibble: 365 × 4
## # Groups:   year, month [12]
##     year month   day delay
##    <int> <int> <int> <dbl>
##  1  2013     1     1 11.5 
##  2  2013     1     2 13.9 
##  3  2013     1     3 11.0 
##  4  2013     1     4  8.95
##  5  2013     1     5  5.73
##  6  2013     1     6  7.15
##  7  2013     1     7  5.42
##  8  2013     1     8  2.55
##  9  2013     1     9  2.28
## 10  2013     1    10  2.84
## # ℹ 355 more rows

As you can see, we now have the delay for each day of the year

Missing Data

What happens if we don’t tell R what to do with the missing data?

summarize(by_day, delay = mean(dep_delay))
## `summarise()` has grouped output by 'year', 'month'. You can override using the
## `.groups` argument.
## # A tibble: 365 × 4
## # Groups:   year, month [12]
##     year month   day delay
##    <int> <int> <int> <dbl>
##  1  2013     1     1    NA
##  2  2013     1     2    NA
##  3  2013     1     3    NA
##  4  2013     1     4    NA
##  5  2013     1     5    NA
##  6  2013     1     6    NA
##  7  2013     1     7    NA
##  8  2013     1     8    NA
##  9  2013     1     9    NA
## 10  2013     1    10    NA
## # ℹ 355 more rows

We can also filter our data based on NA (which in this dataset represents cancelled flights)

not_cancelled <- filter(my_data, !is.na(dep_delay), !is.na(arr_delay))

Let’s run summarize again on this data

summarize(not_cancelled, delay = mean(dep_delay))
## # A tibble: 1 × 1
##   delay
##   <dbl>
## 1  12.6

Counts

We can also count the number of values that are NA

sum(is.na(my_data$dep_delay))
## [1] 8255

We can also count the number of values that are NOT NA

sum(!is.na(my_data$dep_delay))
## [1] 328521

Piping

With tibble datasets (more on them soon), we can pipe results from one function into the next, removing the need for intermediate variables and dollar-sign subsetting.

We can then compute the mean departure time for each day.

my_data %>%
  group_by(year, month, day) %>%
  summarize(mean = mean(departure_time, na.rm = TRUE))
## `summarise()` has grouped output by 'year', 'month'. You can override using the
## `.groups` argument.
## # A tibble: 365 × 4
## # Groups:   year, month [12]
##     year month   day  mean
##    <int> <int> <int> <dbl>
##  1  2013     1     1 1385.
##  2  2013     1     2 1354.
##  3  2013     1     3 1357.
##  4  2013     1     4 1348.
##  5  2013     1     5 1326.
##  6  2013     1     6 1399.
##  7  2013     1     7 1341.
##  8  2013     1     8 1335.
##  9  2013     1     9 1326.
## 10  2013     1    10 1333.
## # ℹ 355 more rows

Tibbles

library(tibble)

Now we will take the time to explore tibbles. Tibbles are modified data frames that tweak some of the older behaviors of data frames. R is an older language, and things that were useful 20 years ago are not as useful anymore.

as_tibble(iris)
## # A tibble: 150 × 5
##    Sepal.Length Sepal.Width Petal.Length Petal.Width Species
##           <dbl>       <dbl>        <dbl>       <dbl> <fct>  
##  1          5.1         3.5          1.4         0.2 setosa 
##  2          4.9         3            1.4         0.2 setosa 
##  3          4.7         3.2          1.3         0.2 setosa 
##  4          4.6         3.1          1.5         0.2 setosa 
##  5          5           3.6          1.4         0.2 setosa 
##  6          5.4         3.9          1.7         0.4 setosa 
##  7          4.6         3.4          1.4         0.3 setosa 
##  8          5           3.4          1.5         0.2 setosa 
##  9          4.4         2.9          1.4         0.2 setosa 
## 10          4.9         3.1          1.5         0.1 setosa 
## # ℹ 140 more rows

As we can see, we have the same data, but with different printing features

tibble(
  x = 1:5, 
  y = 1, 
  z = x ^ 2 + y 
)
## # A tibble: 5 × 3
##       x     y     z
##   <int> <dbl> <dbl>
## 1     1     1     2
## 2     2     1     5
## 3     3     1    10
## 4     4     1    17
## 5     5     1    26

You can also use tribble() for basic table creation

tribble(
  ~genea, ~ geneb, ~ genec, 
  #########################
  110,      112,     114, 
  6,        5,       4, 
)
## # A tibble: 2 × 3
##   genea geneb genec
##   <dbl> <dbl> <dbl>
## 1   110   112   114
## 2     6     5     4

Tibbles are built to not overwhelm your console when printing data, only showing the first few lines.

This is how a data frame prints; compare:

  • print(by_day)
  • as.data.frame(by_day)
  • head(by_day)

nycflights13::flights %>%
  print(n=10, width = Inf)
## # A tibble: 336,776 × 19
##     year month   day dep_time sched_dep_time dep_delay arr_time sched_arr_time
##    <int> <int> <int>    <int>          <int>     <dbl>    <int>          <int>
##  1  2013     1     1      517            515         2      830            819
##  2  2013     1     1      533            529         4      850            830
##  3  2013     1     1      542            540         2      923            850
##  4  2013     1     1      544            545        -1     1004           1022
##  5  2013     1     1      554            600        -6      812            837
##  6  2013     1     1      554            558        -4      740            728
##  7  2013     1     1      555            600        -5      913            854
##  8  2013     1     1      557            600        -3      709            723
##  9  2013     1     1      557            600        -3      838            846
## 10  2013     1     1      558            600        -2      753            745
##    arr_delay carrier flight tailnum origin dest  air_time distance  hour minute
##        <dbl> <chr>    <int> <chr>   <chr>  <chr>    <dbl>    <dbl> <dbl>  <dbl>
##  1        11 UA        1545 N14228  EWR    IAH        227     1400     5     15
##  2        20 UA        1714 N24211  LGA    IAH        227     1416     5     29
##  3        33 AA        1141 N619AA  JFK    MIA        160     1089     5     40
##  4       -18 B6         725 N804JB  JFK    BQN        183     1576     5     45
##  5       -25 DL         461 N668DN  LGA    ATL        116      762     6      0
##  6        12 UA        1696 N39463  EWR    ORD        150      719     5     58
##  7        19 B6         507 N516JB  EWR    FLL        158     1065     6      0
##  8       -14 EV        5708 N829AS  LGA    IAD         53      229     6      0
##  9        -8 B6          79 N593JB  JFK    MCO        140      944     6      0
## 10         8 AA         301 N3ALAA  LGA    ORD        138      733     6      0
##    time_hour          
##    <dttm>             
##  1 2013-01-01 05:00:00
##  2 2013-01-01 05:00:00
##  3 2013-01-01 05:00:00
##  4 2013-01-01 05:00:00
##  5 2013-01-01 06:00:00
##  6 2013-01-01 05:00:00
##  7 2013-01-01 06:00:00
##  8 2013-01-01 06:00:00
##  9 2013-01-01 06:00:00
## 10 2013-01-01 06:00:00
## # ℹ 336,766 more rows

Subsetting

Subsetting tibbles is easy, similar to data.frames

df_tibble <- tibble(nycflights13::flights)

df_tibble
## # A tibble: 336,776 × 19
##     year month   day dep_time sched_dep_time dep_delay arr_time sched_arr_time
##    <int> <int> <int>    <int>          <int>     <dbl>    <int>          <int>
##  1  2013     1     1      517            515         2      830            819
##  2  2013     1     1      533            529         4      850            830
##  3  2013     1     1      542            540         2      923            850
##  4  2013     1     1      544            545        -1     1004           1022
##  5  2013     1     1      554            600        -6      812            837
##  6  2013     1     1      554            558        -4      740            728
##  7  2013     1     1      555            600        -5      913            854
##  8  2013     1     1      557            600        -3      709            723
##  9  2013     1     1      557            600        -3      838            846
## 10  2013     1     1      558            600        -2      753            745
## # ℹ 336,766 more rows
## # ℹ 11 more variables: arr_delay <dbl>, carrier <chr>, flight <int>,
## #   tailnum <chr>, origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>,
## #   hour <dbl>, minute <dbl>, time_hour <dttm>

We can subset by column name using the $

We can subset by position using [[]]

If you want to use this in a pipe, you need to use the “.” placeholder
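A minimal sketch of all three approaches, using the year column:

df_tibble$year          # by name
df_tibble[["year"]]     # by name with [[ ]]
df_tibble[[1]]          # by position
df_tibble %>% .$year    # inside a pipe, use the . placeholder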

Some older functions do not like tibbles, so you might have to convert them back to data frames

class(df_tibble)
## [1] "tbl_df"     "tbl"        "data.frame"
df_tibble_2 <- as.data.frame(df_tibble)

class(df_tibble_2)
## [1] "data.frame"
df_tibble
## # A tibble: 336,776 × 19
##     year month   day dep_time sched_dep_time dep_delay arr_time sched_arr_time
##    <int> <int> <int>    <int>          <int>     <dbl>    <int>          <int>
##  1  2013     1     1      517            515         2      830            819
##  2  2013     1     1      533            529         4      850            830
##  3  2013     1     1      542            540         2      923            850
##  4  2013     1     1      544            545        -1     1004           1022
##  5  2013     1     1      554            600        -6      812            837
##  6  2013     1     1      554            558        -4      740            728
##  7  2013     1     1      555            600        -5      913            854
##  8  2013     1     1      557            600        -3      709            723
##  9  2013     1     1      557            600        -3      838            846
## 10  2013     1     1      558            600        -2      753            745
## # ℹ 336,766 more rows
## # ℹ 11 more variables: arr_delay <dbl>, carrier <chr>, flight <int>,
## #   tailnum <chr>, origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>,
## #   hour <dbl>, minute <dbl>, time_hour <dttm>
head(df_tibble_2)
##   year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
## 1 2013     1   1      517            515         2      830            819
## 2 2013     1   1      533            529         4      850            830
## 3 2013     1   1      542            540         2      923            850
## 4 2013     1   1      544            545        -1     1004           1022
## 5 2013     1   1      554            600        -6      812            837
## 6 2013     1   1      554            558        -4      740            728
##   arr_delay carrier flight tailnum origin dest air_time distance hour minute
## 1        11      UA   1545  N14228    EWR  IAH      227     1400    5     15
## 2        20      UA   1714  N24211    LGA  IAH      227     1416    5     29
## 3        33      AA   1141  N619AA    JFK  MIA      160     1089    5     40
## 4       -18      B6    725  N804JB    JFK  BQN      183     1576    5     45
## 5       -25      DL    461  N668DN    LGA  ATL      116      762    6      0
## 6        12      UA   1696  N39463    EWR  ORD      150      719    5     58
##             time_hour
## 1 2013-01-01 05:00:00
## 2 2013-01-01 05:00:00
## 3 2013-01-01 05:00:00
## 4 2013-01-01 05:00:00
## 5 2013-01-01 06:00:00
## 6 2013-01-01 05:00:00

Tidyr

library(tidyverse)

How do we make a tidy dataset? Well, the tidyverse follows three rules.

  • #1 - Each variable must have its own column
  • #2 - Each observation has its own row
  • #3 - Each value has its own cell.

It is impossible to satisfy only two of the three rules; any dataset that meets two of them will meet all three.

This leads to the following instructions for tidy data

  • #1 put each dataset into a tibble
  • #2 put each variable into a column
  • #3 profit

Picking one consistent method of data storage makes for easier understanding of your code and what is happening “under the hood” or behind the scenes.

Let’s now look at working with tibbles

bmi <- tibble(women)

bmi %>% 
  mutate(bmi = (703 * weight)/(height)^2)
## # A tibble: 15 × 3
##    height weight   bmi
##     <dbl>  <dbl> <dbl>
##  1     58    115  24.0
##  2     59    117  23.6
##  3     60    120  23.4
##  4     61    123  23.2
##  5     62    126  23.0
##  6     63    129  22.8
##  7     64    132  22.7
##  8     65    135  22.5
##  9     66    139  22.4
## 10     67    142  22.2
## 11     68    146  22.2
## 12     69    150  22.1
## 13     70    154  22.1
## 14     71    159  22.2
## 15     72    164  22.2

Spreading and Gathering

Sometimes you’ll find datasets that don’t fit well into a tibble

We’ll use the built-in data from tidyverse for this part

table4a
## # A tibble: 3 × 3
##   country     `1999` `2000`
##   <chr>        <dbl>  <dbl>
## 1 Afghanistan    745   2666
## 2 Brazil       37737  80488
## 3 China       212258 213766

As you can see from this data, we have one variable in column 1 (country), but columns 2 and 3 are values of the same variable (the year). Thus, there are two observations in each row

To fix this, we can use the gather function

table4a %>%
  gather('1999', '2000', key = 'year', value = 'cases')
## # A tibble: 6 × 3
##   country     year   cases
##   <chr>       <chr>  <dbl>
## 1 Afghanistan 1999     745
## 2 Brazil      1999   37737
## 3 China       1999  212258
## 4 Afghanistan 2000    2666
## 5 Brazil      2000   80488
## 6 China       2000  213766

Let’s look at another example

table4b
## # A tibble: 3 × 3
##   country         `1999`     `2000`
##   <chr>            <dbl>      <dbl>
## 1 Afghanistan   19987071   20595360
## 2 Brazil       172006362  174504898
## 3 China       1272915272 1280428583

As you can see, we have the same problem in table4b

table4b %>%
  gather("1999", "2000", key = "year", value = "population")
## # A tibble: 6 × 3
##   country     year  population
##   <chr>       <chr>      <dbl>
## 1 Afghanistan 1999    19987071
## 2 Brazil      1999   172006362
## 3 China       1999  1272915272
## 4 Afghanistan 2000    20595360
## 5 Brazil      2000   174504898
## 6 China       2000  1280428583

Now what if we want to join these two tables? We can use dplyr

table4a <- table4a %>%
  gather('1999', '2000', key = 'year', value = 'cases')
table4b <- table4b %>%
  gather("1999", "2000", key = "year", value = "population")

left_join(table4a, table4b)
## Joining with `by = join_by(country, year)`
## # A tibble: 6 × 4
##   country     year   cases population
##   <chr>       <chr>  <dbl>      <dbl>
## 1 Afghanistan 1999     745   19987071
## 2 Brazil      1999   37737  172006362
## 3 China       1999  212258 1272915272
## 4 Afghanistan 2000    2666   20595360
## 5 Brazil      2000   80488  174504898
## 6 China       2000  213766 1280428583

Spreading

Spreading is the opposite of gathering

table2
## # A tibble: 12 × 4
##    country      year type            count
##    <chr>       <dbl> <chr>           <dbl>
##  1 Afghanistan  1999 cases             745
##  2 Afghanistan  1999 population   19987071
##  3 Afghanistan  2000 cases            2666
##  4 Afghanistan  2000 population   20595360
##  5 Brazil       1999 cases           37737
##  6 Brazil       1999 population  172006362
##  7 Brazil       2000 cases           80488
##  8 Brazil       2000 population  174504898
##  9 China        1999 cases          212258
## 10 China        1999 population 1272915272
## 11 China        2000 cases          213766
## 12 China        2000 population 1280428583

You can see that we have redundant info in columns 1 and 2. We can fix that by combining rows 1 & 2, 3 & 4, etc.

spread(table2, key = type, value = count)
## # A tibble: 6 × 4
##   country      year  cases population
##   <chr>       <dbl>  <dbl>      <dbl>
## 1 Afghanistan  1999    745   19987071
## 2 Afghanistan  2000   2666   20595360
## 3 Brazil       1999  37737  172006362
## 4 Brazil       2000  80488  174504898
## 5 China        1999 212258 1272915272
## 6 China        2000 213766 1280428583

Type is the key, i.e. the column whose values become the new column names; count is the value, i.e. what fills in the new cells.

In summary, spread makes long tables shorter and wider; gather makes wide tables narrower and longer.

Separating and pull

Now what happens when we have two observations stuck in one column?

table3
## # A tibble: 6 × 3
##   country      year rate             
##   <chr>       <dbl> <chr>            
## 1 Afghanistan  1999 745/19987071     
## 2 Afghanistan  2000 2666/20595360    
## 3 Brazil       1999 37737/172006362  
## 4 Brazil       2000 80488/174504898  
## 5 China        1999 212258/1272915272
## 6 China        2000 213766/1280428583

As you can see, the rate column is just the cases and population values combined. We can use separate to fix this

table3 %>%
  separate(rate, into = c("cases", "population"))
## # A tibble: 6 × 4
##   country      year cases  population
##   <chr>       <dbl> <chr>  <chr>     
## 1 Afghanistan  1999 745    19987071  
## 2 Afghanistan  2000 2666   20595360  
## 3 Brazil       1999 37737  172006362 
## 4 Brazil       2000 80488  174504898 
## 5 China        1999 212258 1272915272
## 6 China        2000 213766 1280428583

However, if you notice, the column type is not correct

table3 %>%
  separate(rate, into = c("cases", "populate"), convert = TRUE)
## # A tibble: 6 × 4
##   country      year  cases   populate
##   <chr>       <dbl>  <int>      <int>
## 1 Afghanistan  1999    745   19987071
## 2 Afghanistan  2000   2666   20595360
## 3 Brazil       1999  37737  172006362
## 4 Brazil       2000  80488  174504898
## 5 China        1999 212258 1272915272
## 6 China        2000 213766 1280428583

You can specify what you want to separate based on.

table3 %>%
  separate(rate, into = c("cases", "populate"), sep = "/", convert = TRUE)
## # A tibble: 6 × 4
##   country      year  cases   populate
##   <chr>       <dbl>  <int>      <int>
## 1 Afghanistan  1999    745   19987071
## 2 Afghanistan  2000   2666   20595360
## 3 Brazil       1999  37737  172006362
## 4 Brazil       2000  80488  174504898
## 5 China        1999 212258 1272915272
## 6 China        2000 213766 1280428583

You can also separate by position; here we split the year column after the second character

table3 %>%
  separate(
    year, 
    into = c("cases", "population"),
    convert= TRUE, 
    sep = 2 
    )
## # A tibble: 6 × 4
##   country     cases population rate             
##   <chr>       <int>      <int> <chr>            
## 1 Afghanistan    19         99 745/19987071     
## 2 Afghanistan    20          0 2666/20595360    
## 3 Brazil         19         99 37737/172006362  
## 4 Brazil         20          0 80488/174504898  
## 5 China          19         99 212258/1272915272
## 6 China          20          0 213766/1280428583

Unite

What happens if we want to do the inverse of separate?

table5
## # A tibble: 6 × 4
##   country     century year  rate             
##   <chr>       <chr>   <chr> <chr>            
## 1 Afghanistan 19      99    745/19987071     
## 2 Afghanistan 20      00    2666/20595360    
## 3 Brazil      19      99    37737/172006362  
## 4 Brazil      20      00    80488/174504898  
## 5 China       19      99    212258/1272915272
## 6 China       20      00    213766/1280428583
table5 %>%
  unite(date, century, year)
## # A tibble: 6 × 3
##   country     date  rate             
##   <chr>       <chr> <chr>            
## 1 Afghanistan 19_99 745/19987071     
## 2 Afghanistan 20_00 2666/20595360    
## 3 Brazil      19_99 37737/172006362  
## 4 Brazil      20_00 80488/174504898  
## 5 China       19_99 212258/1272915272
## 6 China       20_00 213766/1280428583
table5 %>%
  unite(date, century, year, sep = "")
## # A tibble: 6 × 3
##   country     date  rate             
##   <chr>       <chr> <chr>            
## 1 Afghanistan 1999  745/19987071     
## 2 Afghanistan 2000  2666/20595360    
## 3 Brazil      1999  37737/172006362  
## 4 Brazil      2000  80488/174504898  
## 5 China       1999  212258/1272915272
## 6 China       2000  213766/1280428583

Missing values

There can be two types of missing values: explicit (an NA) or implicit (simply no entry)

gene_data <- tibble(
  gene = c('a', 'a', 'a', 'a', 'b', 'b', 'b'), 
  nuc = c(20, 22, 24, 25, NA, 42, 67), 
  run = c(1,2,3,4,2,3,4)
)

gene_data

  • The nucleotide count for gene b, run 2 is explicitly missing.
  • The nucleotide count for gene b, run 1 is implicitly missing.

One way we can make implicitly missing values explicit is by putting runs in columns

gene_data %>%
  spread(gene, nuc)
## # A tibble: 4 × 3
##     run     a     b
##   <dbl> <dbl> <dbl>
## 1     1    20    NA
## 2     2    22    NA
## 3     3    24    42
## 4     4    25    67

If we want to remove the missing values, we can use spread and gather with na.rm = TRUE

gene_data %>%
  spread(gene, nuc) %>%
  gather(gene, nuc, 'a':'b', na.rm = TRUE)
## # A tibble: 6 × 3
##     run gene    nuc
##   <dbl> <chr> <dbl>
## 1     1 a        20
## 2     2 a        22
## 3     3 a        24
## 4     4 a        25
## 5     3 b        42
## 6     4 b        67

Another way we can make missing values explicit is complete()

gene_data %>%
  complete(gene, run)
## # A tibble: 8 × 3
##   gene    run   nuc
##   <chr> <dbl> <dbl>
## 1 a         1    20
## 2 a         2    22
## 3 a         3    24
## 4 a         4    25
## 5 b         1    NA
## 6 b         2    NA
## 7 b         3    42
## 8 b         4    67

Sometimes an NA is present to represent a value being carried forward

treatment <- tribble(
  ~ person,           ~treament,          ~response, 
  #################################################
  "Isaac",                1,                   7, 
  NA,                     2,                   10,
  NA,                     3,                   9,
  "VDB",                  1,                   8,
  NA,                     2,                   11,
  NA,                     3,                   10, 
)

treatment
## # A tibble: 6 × 3
##   person treament response
##   <chr>     <dbl>    <dbl>
## 1 Isaac         1        7
## 2 <NA>          2       10
## 3 <NA>          3        9
## 4 VDB           1        8
## 5 <NA>          2       11
## 6 <NA>          3       10

What we can do here is use the fill() function

treatment %>%
  fill(person)
## # A tibble: 6 × 3
##   person treament response
##   <chr>     <dbl>    <dbl>
## 1 Isaac         1        7
## 2 Isaac         2       10
## 3 Isaac         3        9
## 4 VDB           1        8
## 5 VDB           2       11
## 6 VDB           3       10

dplyr

It is rare that you will be working with a single data table. The dplyr package allows you to join two data tables based on common values.

  • Mutating joins: add new variables to one data frame from matching observations in another.
  • Filtering joins: filter observations from one data frame based on whether or not they are present in another.
  • Set operations: treat observations as if they were set elements.

library(tidyverse)
library(nycflights13)

Let’s pull full carrier names based on their letter codes

airlines
## # A tibble: 16 × 2
##    carrier name                       
##    <chr>   <chr>                      
##  1 9E      Endeavor Air Inc.          
##  2 AA      American Airlines Inc.     
##  3 AS      Alaska Airlines Inc.       
##  4 B6      JetBlue Airways            
##  5 DL      Delta Air Lines Inc.       
##  6 EV      ExpressJet Airlines Inc.   
##  7 F9      Frontier Airlines Inc.     
##  8 FL      AirTran Airways Corporation
##  9 HA      Hawaiian Airlines Inc.     
## 10 MQ      Envoy Air                  
## 11 OO      SkyWest Airlines Inc.      
## 12 UA      United Air Lines Inc.      
## 13 US      US Airways Inc.            
## 14 VX      Virgin America             
## 15 WN      Southwest Airlines Co.     
## 16 YV      Mesa Airlines Inc.

Let’s get info about airports

airports
## # A tibble: 1,458 × 8
##    faa   name                             lat    lon   alt    tz dst   tzone    
##    <chr> <chr>                          <dbl>  <dbl> <dbl> <dbl> <chr> <chr>    
##  1 04G   Lansdowne Airport               41.1  -80.6  1044    -5 A     America/…
##  2 06A   Moton Field Municipal Airport   32.5  -85.7   264    -6 A     America/…
##  3 06C   Schaumburg Regional             42.0  -88.1   801    -6 A     America/…
##  4 06N   Randall Airport                 41.4  -74.4   523    -5 A     America/…
##  5 09J   Jekyll Island Airport           31.1  -81.4    11    -5 A     America/…
##  6 0A9   Elizabethton Municipal Airport  36.4  -82.2  1593    -5 A     America/…
##  7 0G6   Williams County Airport         41.5  -84.5   730    -5 A     America/…
##  8 0G7   Finger Lakes Regional Airport   42.9  -76.8   492    -5 A     America/…
##  9 0P2   Shoestring Aviation Airfield    39.8  -76.6  1000    -5 U     America/…
## 10 0S9   Jefferson County Intl           48.1 -123.    108    -8 A     America/…
## # ℹ 1,448 more rows

Let’s get info about each plane

planes 
## # A tibble: 3,322 × 9
##    tailnum  year type              manufacturer model engines seats speed engine
##    <chr>   <int> <chr>             <chr>        <chr>   <int> <int> <int> <chr> 
##  1 N10156   2004 Fixed wing multi… EMBRAER      EMB-…       2    55    NA Turbo…
##  2 N102UW   1998 Fixed wing multi… AIRBUS INDU… A320…       2   182    NA Turbo…
##  3 N103US   1999 Fixed wing multi… AIRBUS INDU… A320…       2   182    NA Turbo…
##  4 N104UW   1999 Fixed wing multi… AIRBUS INDU… A320…       2   182    NA Turbo…
##  5 N10575   2002 Fixed wing multi… EMBRAER      EMB-…       2    55    NA Turbo…
##  6 N105UW   1999 Fixed wing multi… AIRBUS INDU… A320…       2   182    NA Turbo…
##  7 N107US   1999 Fixed wing multi… AIRBUS INDU… A320…       2   182    NA Turbo…
##  8 N108UW   1999 Fixed wing multi… AIRBUS INDU… A320…       2   182    NA Turbo…
##  9 N109UW   1999 Fixed wing multi… AIRBUS INDU… A320…       2   182    NA Turbo…
## 10 N110UW   1999 Fixed wing multi… AIRBUS INDU… A320…       2   182    NA Turbo…
## # ℹ 3,312 more rows

Let’s get some info on the weather at the airports

weather 
## # A tibble: 26,115 × 15
##    origin  year month   day  hour  temp  dewp humid wind_dir wind_speed
##    <chr>  <int> <int> <int> <int> <dbl> <dbl> <dbl>    <dbl>      <dbl>
##  1 EWR     2013     1     1     1  39.0  26.1  59.4      270      10.4 
##  2 EWR     2013     1     1     2  39.0  27.0  61.6      250       8.06
##  3 EWR     2013     1     1     3  39.0  28.0  64.4      240      11.5 
##  4 EWR     2013     1     1     4  39.9  28.0  62.2      250      12.7 
##  5 EWR     2013     1     1     5  39.0  28.0  64.4      260      12.7 
##  6 EWR     2013     1     1     6  37.9  28.0  67.2      240      11.5 
##  7 EWR     2013     1     1     7  39.0  28.0  64.4      240      15.0 
##  8 EWR     2013     1     1     8  39.9  28.0  62.2      250      10.4 
##  9 EWR     2013     1     1     9  39.9  28.0  62.2      260      15.0 
## 10 EWR     2013     1     1    10  41    28.0  59.6      260      13.8 
## # ℹ 26,105 more rows
## # ℹ 5 more variables: wind_gust <dbl>, precip <dbl>, pressure <dbl>,
## #   visib <dbl>, time_hour <dttm>

Let’s get info on individual flights

flights
## # A tibble: 336,776 × 19
##     year month   day dep_time sched_dep_time dep_delay arr_time sched_arr_time
##    <int> <int> <int>    <int>          <int>     <dbl>    <int>          <int>
##  1  2013     1     1      517            515         2      830            819
##  2  2013     1     1      533            529         4      850            830
##  3  2013     1     1      542            540         2      923            850
##  4  2013     1     1      544            545        -1     1004           1022
##  5  2013     1     1      554            600        -6      812            837
##  6  2013     1     1      554            558        -4      740            728
##  7  2013     1     1      555            600        -5      913            854
##  8  2013     1     1      557            600        -3      709            723
##  9  2013     1     1      557            600        -3      838            846
## 10  2013     1     1      558            600        -2      753            745
## # ℹ 336,766 more rows
## # ℹ 11 more variables: arr_delay <dbl>, carrier <chr>, flight <int>,
## #   tailnum <chr>, origin <chr>, dest <chr>, air_time <dbl>, distance <dbl>,
## #   hour <dbl>, minute <dbl>, time_hour <dttm>

Let’s look at how these tables connect:

  • flights -> planes via tailnum
  • flights -> airlines via carrier
  • flights -> airports via origin AND dest
  • flights -> weather via origin plus year/month/day/hour

Keys

Keys are unique identifiers per observation

A primary key uniquely identifies an observation in its own table.

One way to identify a primary key is as follows:

planes %>%
  count(tailnum) %>%
  filter(n>1)
## # A tibble: 0 × 2
## # ℹ 2 variables: tailnum <chr>, n <int>

This indicates that the tail number is unique for each plane

planes %>%
  count(model) %>%
  filter(n>1)
## # A tibble: 79 × 2
##    model       n
##    <chr>   <int>
##  1 717-200    88
##  2 737-301     2
##  3 737-3G7     2
##  4 737-3H4   105
##  5 737-3K2     2
##  6 737-3L9     2
##  7 737-3Q8     5
##  8 737-3TO     2
##  9 737-401     4
## 10 737-4B7    18
## # ℹ 69 more rows

Mutate join

flights2 <- flights %>%
  select(year:day, hour, origin, dest, tailnum, carrier)
flights2
## # A tibble: 336,776 × 8
##     year month   day  hour origin dest  tailnum carrier
##    <int> <int> <int> <dbl> <chr>  <chr> <chr>   <chr>  
##  1  2013     1     1     5 EWR    IAH   N14228  UA     
##  2  2013     1     1     5 LGA    IAH   N24211  UA     
##  3  2013     1     1     5 JFK    MIA   N619AA  AA     
##  4  2013     1     1     5 JFK    BQN   N804JB  B6     
##  5  2013     1     1     6 LGA    ATL   N668DN  DL     
##  6  2013     1     1     5 EWR    ORD   N39463  UA     
##  7  2013     1     1     6 EWR    FLL   N516JB  B6     
##  8  2013     1     1     6 LGA    IAD   N829AS  EV     
##  9  2013     1     1     6 JFK    MCO   N593JB  B6     
## 10  2013     1     1     6 LGA    ORD   N3ALAA  AA     
## # ℹ 336,766 more rows
flights2 %>%
  select(-origin, -dest) %>%
  left_join(airlines, by = 'carrier') 
## # A tibble: 336,776 × 7
##     year month   day  hour tailnum carrier name                    
##    <int> <int> <int> <dbl> <chr>   <chr>   <chr>                   
##  1  2013     1     1     5 N14228  UA      United Air Lines Inc.   
##  2  2013     1     1     5 N24211  UA      United Air Lines Inc.   
##  3  2013     1     1     5 N619AA  AA      American Airlines Inc.  
##  4  2013     1     1     5 N804JB  B6      JetBlue Airways         
##  5  2013     1     1     6 N668DN  DL      Delta Air Lines Inc.    
##  6  2013     1     1     5 N39463  UA      United Air Lines Inc.   
##  7  2013     1     1     6 N516JB  B6      JetBlue Airways         
##  8  2013     1     1     6 N829AS  EV      ExpressJet Airlines Inc.
##  9  2013     1     1     6 N593JB  B6      JetBlue Airways         
## 10  2013     1     1     6 N3ALAA  AA      American Airlines Inc.  
## # ℹ 336,766 more rows

We’ve now added the airline name to our data frame from the airlines data frame

Other types of joins

  • Inner joins (inner_join) match a pair of observations whenever their keys are equal.
  • Outer joins (left_join, right_join, full_join) keep observations that appear in at least one of the tables; see the sketch below.
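A toy sketch of the difference, using two made-up tibbles:

a <- tibble(key = c(1, 2), x = c("a1", "a2"))
b <- tibble(key = c(2, 3), y = c("b2", "b3"))

inner_join(a, b, by = "key")   # key 2 only: present in both tables
full_join(a, b, by = "key")    # keys 1, 2, 3, with NA where a value is missing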

Stringr

library(tidyverse)
library(stringr)

You can create strings using single or double quotes

string1 <- "this is a string"
string2 <- 'to put a "quote" in your string, use the opposite'

string1
## [1] "this is a string"
string2
## [1] "to put a \"quote\" in your string, use the opposite"

If you forget to close your string, the console will hang with a + continuation prompt:

string3 <- "where is this string going?"

string3
## [1] "where is this string going?"

Just hit escape and try again.

Multiple strings are stored in character vectors

string4 <- c("one", "two", "three")

string4
## [1] "one"   "two"   "three"

Measuring string length

str_length(string3)
## [1] 27
str_length(string4)
## [1] 3 3 5

Let’s combine two strings

str_c("X", "Y")
## [1] "XY"
str_c(string1, string2)
## [1] "this is a stringto put a \"quote\" in your string, use the opposite"

You can use sep to control how they are separated

str_c(string1, string2, sep = " ")
## [1] "this is a string to put a \"quote\" in your string, use the opposite"
str_c("x", "y", "z", sep = "_")
## [1] "x_y_z"

Subsetting strings

You can subset a string using str_sub()

HSP <- c("HSP123", "HSP234", "HSP456")

str_sub(HSP, 4,6)
## [1] "123" "234" "456"

This extracts characters 4 through 6, dropping the first three letters of each string

Or you can use negative numbers to count back from the end

str_sub(HSP, -3, -1)
## [1] "123" "234" "456"

You can convert the case of strings as follows:

HSP
## [1] "HSP123" "HSP234" "HSP456"
str_to_lower(HSP)
## [1] "hsp123" "hsp234" "hsp456"
str_to_upper(HSP)
## [1] "HSP123" "HSP234" "HSP456"

Regular Expression

First, you will need to install the following package:

install.packages("htmlwidgets")
## Installing package into '/home/student/R/x86_64-pc-linux-gnu-library/4.3'
## (as 'lib' is unspecified)
x <- c('ATTAGA', 'CGCCCCCGGAT', 'TATTA')

str_view(x, "G")
## [1] │ ATTA<G>A
## [2] │ C<G>CCCCC<G><G>AT
str_view(x, "TA")
## [1] │ AT<TA>GA
## [3] │ <TA>T<TA>

The next pattern is “.”, which matches any single character

str_view(x, ".G.")
## [1] │ ATT<AGA>
## [2] │ <CGC>CCC<CGG>AT

Anchors allow you to match at the beginning (^) or the end ($) of a string

str_view(x, "^TA")
## [3] │ <TA>TTA
str_view(x, "TA$")
## [3] │ TAT<TA>

Character classes / alternatives

  • \d matches any digit
  • \s matches any whitespace
  • [abc] matches a, b, or c

str_view(x, "TA[GT]")
## [1] │ AT<TAG>A
## [3] │ <TAT>TA

[^abc] matches anything BUT a, b, or c

str_view(x, "TA[^T]")
## [1] │ AT<TAG>A

You can also use | to pick between the two alternatives

str_view(x, "TA[G|T]")
## [1] │ AT<TAG>A
## [3] │ <TAT>TA

Detect Matches

str_detect() returns a logical vector the same length as the input

y <- c("apple", "banana", "pear")
y
## [1] "apple"  "banana" "pear"
str_detect(y, "e")
## [1]  TRUE FALSE  TRUE

How many common words start with the letter e? Let’s first look at the built-in words vector from stringr:

words
##   [1] "a"           "able"        "about"       "absolute"    "accept"     
##   [6] "account"     "achieve"     "across"      "act"         "active"     
##  [11] "actual"      "add"         "address"     "admit"       "advertise"  
##  [16] "affect"      "afford"      "after"       "afternoon"   "again"      
##  [21] "against"     "age"         "agent"       "ago"         "agree"      
##  [26] "air"         "all"         "allow"       "almost"      "along"      
##  [31] "already"     "alright"     "also"        "although"    "always"     
##  [36] "america"     "amount"      "and"         "another"     "answer"     
##  [41] "any"         "apart"       "apparent"    "appear"      "apply"      
##  [46] "appoint"     "approach"    "appropriate" "area"        "argue"      
##  [51] "arm"         "around"      "arrange"     "art"         "as"         
##  [56] "ask"         "associate"   "assume"      "at"          "attend"     
##  [61] "authority"   "available"   "aware"       "away"        "awful"      
##  [66] "baby"        "back"        "bad"         "bag"         "balance"    
##  [71] "ball"        "bank"        "bar"         "base"        "basis"      
##  [76] "be"          "bear"        "beat"        "beauty"      "because"    
##  [81] "become"      "bed"         "before"      "begin"       "behind"     
##  [86] "believe"     "benefit"     "best"        "bet"         "between"    
##  [91] "big"         "bill"        "birth"       "bit"         "black"      
##  [96] "bloke"       "blood"       "blow"        "blue"        "board"      
## [101] "boat"        "body"        "book"        "both"        "bother"     
## [106] "bottle"      "bottom"      "box"         "boy"         "break"      
## [111] "brief"       "brilliant"   "bring"       "britain"     "brother"    
## [116] "budget"      "build"       "bus"         "business"    "busy"       
## [121] "but"         "buy"         "by"          "cake"        "call"       
## [126] "can"         "car"         "card"        "care"        "carry"      
## [131] "case"        "cat"         "catch"       "cause"       "cent"       
## [136] "centre"      "certain"     "chair"       "chairman"    "chance"     
## [141] "change"      "chap"        "character"   "charge"      "cheap"      
## [146] "check"       "child"       "choice"      "choose"      "Christ"     
## [151] "Christmas"   "church"      "city"        "claim"       "class"      
## [156] "clean"       "clear"       "client"      "clock"       "close"      
## [161] "closes"      "clothe"      "club"        "coffee"      "cold"       
## [166] "colleague"   "collect"     "college"     "colour"      "come"       
## [171] "comment"     "commit"      "committee"   "common"      "community"  
## [176] "company"     "compare"     "complete"    "compute"     "concern"    
## [181] "condition"   "confer"      "consider"    "consult"     "contact"    
## [186] "continue"    "contract"    "control"     "converse"    "cook"       
## [191] "copy"        "corner"      "correct"     "cost"        "could"      
## [196] "council"     "count"       "country"     "county"      "couple"     
## [201] "course"      "court"       "cover"       "create"      "cross"      
## [206] "cup"         "current"     "cut"         "dad"         "danger"     
## [211] "date"        "day"         "dead"        "deal"        "dear"       
## [216] "debate"      "decide"      "decision"    "deep"        "definite"   
## [221] "degree"      "department"  "depend"      "describe"    "design"     
## [226] "detail"      "develop"     "die"         "difference"  "difficult"  
## [231] "dinner"      "direct"      "discuss"     "district"    "divide"     
## [236] "do"          "doctor"      "document"    "dog"         "door"       
## [241] "double"      "doubt"       "down"        "draw"        "dress"      
## [246] "drink"       "drive"       "drop"        "dry"         "due"        
## [251] "during"      "each"        "early"       "east"        "easy"       
## [256] "eat"         "economy"     "educate"     "effect"      "egg"        
## [261] "eight"       "either"      "elect"       "electric"    "eleven"     
## [266] "else"        "employ"      "encourage"   "end"         "engine"     
## [271] "english"     "enjoy"       "enough"      "enter"       "environment"
## [276] "equal"       "especial"    "europe"      "even"        "evening"    
## [281] "ever"        "every"       "evidence"    "exact"       "example"    
## [286] "except"      "excuse"      "exercise"    "exist"       "expect"     
## [291] "expense"     "experience"  "explain"     "express"     "extra"      
## [296] "eye"         "face"        "fact"        "fair"        "fall"       
## [301] "family"      "far"         "farm"        "fast"        "father"     
## [306] "favour"      "feed"        "feel"        "few"         "field"      
## [311] "fight"       "figure"      "file"        "fill"        "film"       
## [316] "final"       "finance"     "find"        "fine"        "finish"     
## [321] "fire"        "first"       "fish"        "fit"         "five"       
## [326] "flat"        "floor"       "fly"         "follow"      "food"       
## [331] "foot"        "for"         "force"       "forget"      "form"       
## [336] "fortune"     "forward"     "four"        "france"      "free"       
## [341] "friday"      "friend"      "from"        "front"       "full"       
## [346] "fun"         "function"    "fund"        "further"     "future"     
## [351] "game"        "garden"      "gas"         "general"     "germany"    
## [356] "get"         "girl"        "give"        "glass"       "go"         
## [361] "god"         "good"        "goodbye"     "govern"      "grand"      
## [366] "grant"       "great"       "green"       "ground"      "group"      
## [371] "grow"        "guess"       "guy"         "hair"        "half"       
## [376] "hall"        "hand"        "hang"        "happen"      "happy"      
## [381] "hard"        "hate"        "have"        "he"          "head"       
## [386] "health"      "hear"        "heart"       "heat"        "heavy"      
## [391] "hell"        "help"        "here"        "high"        "history"    
## [396] "hit"         "hold"        "holiday"     "home"        "honest"     
## [401] "hope"        "horse"       "hospital"    "hot"         "hour"       
## [406] "house"       "how"         "however"     "hullo"       "hundred"    
## [411] "husband"     "idea"        "identify"    "if"          "imagine"    
## [416] "important"   "improve"     "in"          "include"     "income"     
## [421] "increase"    "indeed"      "individual"  "industry"    "inform"     
## [426] "inside"      "instead"     "insure"      "interest"    "into"       
## [431] "introduce"   "invest"      "involve"     "issue"       "it"         
## [436] "item"        "jesus"       "job"         "join"        "judge"      
## [441] "jump"        "just"        "keep"        "key"         "kid"        
## [446] "kill"        "kind"        "king"        "kitchen"     "knock"      
## [451] "know"        "labour"      "lad"         "lady"        "land"       
## [456] "language"    "large"       "last"        "late"        "laugh"      
## [461] "law"         "lay"         "lead"        "learn"       "leave"      
## [466] "left"        "leg"         "less"        "let"         "letter"     
## [471] "level"       "lie"         "life"        "light"       "like"       
## [476] "likely"      "limit"       "line"        "link"        "list"       
## [481] "listen"      "little"      "live"        "load"        "local"      
## [486] "lock"        "london"      "long"        "look"        "lord"       
## [491] "lose"        "lot"         "love"        "low"         "luck"       
## [496] "lunch"       "machine"     "main"        "major"       "make"       
## [501] "man"         "manage"      "many"        "mark"        "market"     
## [506] "marry"       "match"       "matter"      "may"         "maybe"      
## [511] "mean"        "meaning"     "measure"     "meet"        "member"     
## [516] "mention"     "middle"      "might"       "mile"        "milk"       
## [521] "million"     "mind"        "minister"    "minus"       "minute"     
## [526] "miss"        "mister"      "moment"      "monday"      "money"      
## [531] "month"       "more"        "morning"     "most"        "mother"     
## [536] "motion"      "move"        "mrs"         "much"        "music"      
## [541] "must"        "name"        "nation"      "nature"      "near"       
## [546] "necessary"   "need"        "never"       "new"         "news"       
## [551] "next"        "nice"        "night"       "nine"        "no"         
## [556] "non"         "none"        "normal"      "north"       "not"        
## [561] "note"        "notice"      "now"         "number"      "obvious"    
## [566] "occasion"    "odd"         "of"          "off"         "offer"      
## [571] "office"      "often"       "okay"        "old"         "on"         
## [576] "once"        "one"         "only"        "open"        "operate"    
## [581] "opportunity" "oppose"      "or"          "order"       "organize"   
## [586] "original"    "other"       "otherwise"   "ought"       "out"        
## [591] "over"        "own"         "pack"        "page"        "paint"      
## [596] "pair"        "paper"       "paragraph"   "pardon"      "parent"     
## [601] "park"        "part"        "particular"  "party"       "pass"       
## [606] "past"        "pay"         "pence"       "pension"     "people"     
## [611] "per"         "percent"     "perfect"     "perhaps"     "period"     
## [616] "person"      "photograph"  "pick"        "picture"     "piece"      
## [621] "place"       "plan"        "play"        "please"      "plus"       
## [626] "point"       "police"      "policy"      "politic"     "poor"       
## [631] "position"    "positive"    "possible"    "post"        "pound"      
## [636] "power"       "practise"    "prepare"     "present"     "press"      
## [641] "pressure"    "presume"     "pretty"      "previous"    "price"      
## [646] "print"       "private"     "probable"    "problem"     "proceed"    
## [651] "process"     "produce"     "product"     "programme"   "project"    
## [656] "proper"      "propose"     "protect"     "provide"     "public"     
## [661] "pull"        "purpose"     "push"        "put"         "quality"    
## [666] "quarter"     "question"    "quick"       "quid"        "quiet"      
## [671] "quite"       "radio"       "rail"        "raise"       "range"      
## [676] "rate"        "rather"      "read"        "ready"       "real"       
## [681] "realise"     "really"      "reason"      "receive"     "recent"     
## [686] "reckon"      "recognize"   "recommend"   "record"      "red"        
## [691] "reduce"      "refer"       "regard"      "region"      "relation"   
## [696] "remember"    "report"      "represent"   "require"     "research"   
## [701] "resource"    "respect"     "responsible" "rest"        "result"     
## [706] "return"      "rid"         "right"       "ring"        "rise"       
## [711] "road"        "role"        "roll"        "room"        "round"      
## [716] "rule"        "run"         "safe"        "sale"        "same"       
## [721] "saturday"    "save"        "say"         "scheme"      "school"     
## [726] "science"     "score"       "scotland"    "seat"        "second"     
## [731] "secretary"   "section"     "secure"      "see"         "seem"       
## [736] "self"        "sell"        "send"        "sense"       "separate"   
## [741] "serious"     "serve"       "service"     "set"         "settle"     
## [746] "seven"       "sex"         "shall"       "share"       "she"        
## [751] "sheet"       "shoe"        "shoot"       "shop"        "short"      
## [756] "should"      "show"        "shut"        "sick"        "side"       
## [761] "sign"        "similar"     "simple"      "since"       "sing"       
## [766] "single"      "sir"         "sister"      "sit"         "site"       
## [771] "situate"     "six"         "size"        "sleep"       "slight"     
## [776] "slow"        "small"       "smoke"       "so"          "social"     
## [781] "society"     "some"        "son"         "soon"        "sorry"      
## [786] "sort"        "sound"       "south"       "space"       "speak"      
## [791] "special"     "specific"    "speed"       "spell"       "spend"      
## [796] "square"      "staff"       "stage"       "stairs"      "stand"      
## [801] "standard"    "start"       "state"       "station"     "stay"       
## [806] "step"        "stick"       "still"       "stop"        "story"      
## [811] "straight"    "strategy"    "street"      "strike"      "strong"     
## [816] "structure"   "student"     "study"       "stuff"       "stupid"     
## [821] "subject"     "succeed"     "such"        "sudden"      "suggest"    
## [826] "suit"        "summer"      "sun"         "sunday"      "supply"     
## [831] "support"     "suppose"     "sure"        "surprise"    "switch"     
## [836] "system"      "table"       "take"        "talk"        "tape"       
## [841] "tax"         "tea"         "teach"       "team"        "telephone"  
## [846] "television"  "tell"        "ten"         "tend"        "term"       
## [851] "terrible"    "test"        "than"        "thank"       "the"        
## [856] "then"        "there"       "therefore"   "they"        "thing"      
## [861] "think"       "thirteen"    "thirty"      "this"        "thou"       
## [866] "though"      "thousand"    "three"       "through"     "throw"      
## [871] "thursday"    "tie"         "time"        "to"          "today"      
## [876] "together"    "tomorrow"    "tonight"     "too"         "top"        
## [881] "total"       "touch"       "toward"      "town"        "trade"      
## [886] "traffic"     "train"       "transport"   "travel"      "treat"      
## [891] "tree"        "trouble"     "true"        "trust"       "try"        
## [896] "tuesday"     "turn"        "twelve"      "twenty"      "two"        
## [901] "type"        "under"       "understand"  "union"       "unit"       
## [906] "unite"       "university"  "unless"      "until"       "up"         
## [911] "upon"        "use"         "usual"       "value"       "various"    
## [916] "very"        "video"       "view"        "village"     "visit"      
## [921] "vote"        "wage"        "wait"        "walk"        "wall"       
## [926] "want"        "war"         "warm"        "wash"        "waste"      
## [931] "watch"       "water"       "way"         "we"          "wear"       
## [936] "wednesday"   "wee"         "week"        "weigh"       "welcome"    
## [941] "well"        "west"        "what"        "when"        "where"      
## [946] "whether"     "which"       "while"       "white"       "who"        
## [951] "whole"       "why"         "wide"        "wife"        "will"       
## [956] "win"         "wind"        "window"      "wish"        "with"       
## [961] "within"      "without"     "woman"       "wonder"      "wood"       
## [966] "word"        "work"        "world"       "worry"       "worse"      
## [971] "worth"       "would"       "write"       "wrong"       "year"       
## [976] "yes"         "yesterday"   "yet"         "you"         "young"
sum(str_detect(words, "e"))
## [1] 536

Let's get more complex: what proportion of words end in a vowel? And what proportion begin with one?

mean(str_detect(words, "[aeiou]$"))
## [1] 0.2765306
mean(str_detect(words, "^[aeiou]"))
## [1] 0.1785714
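
Because str_detect() returns a logical vector, conditions combine element-wise with &. As a quick sketch (not part of the original analysis), here is the proportion of words that both begin and end with a vowel:

library(stringr)  # provides str_detect() and the words vector

mean(str_detect(words, "^[aeiou]") & str_detect(words, "[aeiou]$"))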

Let's find all the words that don't contain "o" or "u". First, build a logical vector flagging them:

no_o <- !str_detect(words, "[ou]")  # TRUE where the word contains neither "o" nor "u"

no_o
##   [1]  TRUE  TRUE FALSE FALSE  TRUE FALSE  TRUE FALSE  TRUE  TRUE FALSE  TRUE
##  [13]  TRUE  TRUE  TRUE  TRUE FALSE  TRUE FALSE  TRUE  TRUE  TRUE  TRUE FALSE
##  [25]  TRUE  TRUE  TRUE FALSE FALSE FALSE  TRUE  TRUE FALSE FALSE  TRUE  TRUE
##  [37] FALSE  TRUE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE FALSE FALSE FALSE
##  [49]  TRUE FALSE  TRUE FALSE  TRUE  TRUE  TRUE  TRUE FALSE FALSE  TRUE  TRUE
##  [61] FALSE  TRUE  TRUE  TRUE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE
##  [73]  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE FALSE FALSE FALSE  TRUE FALSE  TRUE
##  [85]  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE FALSE
##  [97] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [109] FALSE  TRUE  TRUE  TRUE  TRUE  TRUE FALSE FALSE FALSE FALSE FALSE FALSE
## [121] FALSE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE
## [133]  TRUE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE
## [145]  TRUE  TRUE  TRUE FALSE FALSE  TRUE  TRUE FALSE  TRUE  TRUE  TRUE  TRUE
## [157]  TRUE  TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [169] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [181] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [193] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE  TRUE
## [205] FALSE FALSE FALSE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE
## [217]  TRUE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE FALSE  TRUE
## [229]  TRUE FALSE  TRUE  TRUE FALSE  TRUE  TRUE FALSE FALSE FALSE FALSE FALSE
## [241] FALSE FALSE FALSE  TRUE  TRUE  TRUE  TRUE FALSE  TRUE FALSE FALSE  TRUE
## [253]  TRUE  TRUE  TRUE  TRUE FALSE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE
## [265]  TRUE  TRUE FALSE FALSE  TRUE  TRUE  TRUE FALSE FALSE  TRUE FALSE FALSE
## [277]  TRUE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE FALSE  TRUE
## [289]  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE
## [301]  TRUE  TRUE  TRUE  TRUE  TRUE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE FALSE
## [313]  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE
## [325]  TRUE  TRUE FALSE  TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [337] FALSE FALSE  TRUE  TRUE  TRUE  TRUE FALSE FALSE FALSE FALSE FALSE FALSE
## [349] FALSE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE FALSE
## [361] FALSE FALSE FALSE FALSE  TRUE  TRUE  TRUE  TRUE FALSE FALSE FALSE FALSE
## [373] FALSE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE
## [385]  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE FALSE  TRUE
## [397] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [409] FALSE FALSE FALSE  TRUE  TRUE  TRUE  TRUE FALSE FALSE  TRUE FALSE FALSE
## [421]  TRUE  TRUE FALSE FALSE FALSE  TRUE  TRUE FALSE  TRUE FALSE FALSE  TRUE
## [433] FALSE FALSE  TRUE  TRUE FALSE FALSE FALSE FALSE FALSE FALSE  TRUE  TRUE
## [445]  TRUE  TRUE  TRUE  TRUE  TRUE FALSE FALSE FALSE  TRUE  TRUE  TRUE FALSE
## [457]  TRUE  TRUE  TRUE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE
## [469]  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE
## [481]  TRUE  TRUE  TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [493] FALSE FALSE FALSE FALSE  TRUE  TRUE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE
## [505]  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE FALSE  TRUE  TRUE FALSE
## [517]  TRUE  TRUE  TRUE  TRUE FALSE  TRUE  TRUE FALSE FALSE  TRUE  TRUE FALSE
## [529] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE  TRUE FALSE FALSE
## [541] FALSE  TRUE FALSE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE
## [553]  TRUE  TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [565] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [577] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [589] FALSE FALSE FALSE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE FALSE  TRUE
## [601]  TRUE  TRUE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE FALSE FALSE  TRUE  TRUE
## [613]  TRUE  TRUE FALSE FALSE FALSE  TRUE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE
## [625] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [637]  TRUE  TRUE  TRUE  TRUE FALSE FALSE  TRUE FALSE  TRUE  TRUE  TRUE FALSE
## [649] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [661] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [673]  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE FALSE  TRUE
## [685]  TRUE FALSE FALSE FALSE FALSE  TRUE FALSE  TRUE  TRUE FALSE FALSE  TRUE
## [697] FALSE  TRUE FALSE  TRUE FALSE  TRUE FALSE  TRUE FALSE FALSE  TRUE  TRUE
## [709]  TRUE  TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE  TRUE  TRUE  TRUE
## [721] FALSE  TRUE  TRUE  TRUE FALSE  TRUE FALSE FALSE  TRUE FALSE  TRUE FALSE
## [733] FALSE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE FALSE  TRUE  TRUE  TRUE
## [745]  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE FALSE FALSE FALSE FALSE FALSE
## [757] FALSE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE
## [769]  TRUE  TRUE FALSE  TRUE  TRUE  TRUE  TRUE FALSE  TRUE FALSE FALSE FALSE
## [781] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE  TRUE  TRUE  TRUE  TRUE
## [793]  TRUE  TRUE  TRUE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE FALSE
## [805]  TRUE  TRUE  TRUE  TRUE FALSE FALSE  TRUE  TRUE  TRUE  TRUE FALSE FALSE
## [817] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [829] FALSE FALSE FALSE FALSE FALSE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE
## [841]  TRUE  TRUE  TRUE  TRUE FALSE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE
## [853]  TRUE  TRUE  TRUE  TRUE  TRUE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE
## [865] FALSE FALSE FALSE  TRUE FALSE FALSE FALSE  TRUE  TRUE FALSE FALSE FALSE
## [877] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE  TRUE  TRUE  TRUE FALSE
## [889]  TRUE  TRUE  TRUE FALSE FALSE FALSE  TRUE FALSE FALSE  TRUE  TRUE FALSE
## [901]  TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [913] FALSE FALSE FALSE  TRUE FALSE  TRUE  TRUE  TRUE FALSE  TRUE  TRUE  TRUE
## [925]  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE
## [937]  TRUE  TRUE  TRUE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE
## [949]  TRUE FALSE FALSE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE FALSE  TRUE  TRUE
## [961]  TRUE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
## [973]  TRUE FALSE  TRUE  TRUE  TRUE  TRUE FALSE FALSE

Now let's extract those words with logical subsetting:

words[!str_detect(words, "[ou]")]
##   [1] "a"          "able"       "accept"     "achieve"    "act"       
##   [6] "active"     "add"        "address"    "admit"      "advertise" 
##  [11] "affect"     "after"      "again"      "against"    "age"       
##  [16] "agent"      "agree"      "air"        "all"        "already"   
##  [21] "alright"    "always"     "america"    "and"        "answer"    
##  [26] "any"        "apart"      "apparent"   "appear"     "apply"     
##  [31] "area"       "arm"        "arrange"    "art"        "as"        
##  [36] "ask"        "at"         "attend"     "available"  "aware"     
##  [41] "away"       "baby"       "back"       "bad"        "bag"       
##  [46] "balance"    "ball"       "bank"       "bar"        "base"      
##  [51] "basis"      "be"         "bear"       "beat"       "bed"       
##  [56] "begin"      "behind"     "believe"    "benefit"    "best"      
##  [61] "bet"        "between"    "big"        "bill"       "birth"     
##  [66] "bit"        "black"      "break"      "brief"      "brilliant" 
##  [71] "bring"      "britain"    "by"         "cake"       "call"      
##  [76] "can"        "car"        "card"       "care"       "carry"     
##  [81] "case"       "cat"        "catch"      "cent"       "centre"    
##  [86] "certain"    "chair"      "chairman"   "chance"     "change"    
##  [91] "chap"       "character"  "charge"     "cheap"      "check"     
##  [96] "child"      "Christ"     "Christmas"  "city"       "claim"     
## [101] "class"      "clean"      "clear"      "client"     "create"    
## [106] "dad"        "danger"     "date"       "day"        "dead"      
## [111] "deal"       "dear"       "debate"     "decide"     "deep"      
## [116] "definite"   "degree"     "department" "depend"     "describe"  
## [121] "design"     "detail"     "die"        "difference" "dinner"    
## [126] "direct"     "district"   "divide"     "draw"       "dress"     
## [131] "drink"      "drive"      "dry"        "each"       "early"     
## [136] "east"       "easy"       "eat"        "effect"     "egg"       
## [141] "eight"      "either"     "elect"      "electric"   "eleven"    
## [146] "else"       "end"        "engine"     "english"    "enter"     
## [151] "especial"   "even"       "evening"    "ever"       "every"     
## [156] "evidence"   "exact"      "example"    "except"     "exercise"  
## [161] "exist"      "expect"     "expense"    "experience" "explain"   
## [166] "express"    "extra"      "eye"        "face"       "fact"      
## [171] "fair"       "fall"       "family"     "far"        "farm"      
## [176] "fast"       "father"     "feed"       "feel"       "few"       
## [181] "field"      "fight"      "file"       "fill"       "film"      
## [186] "final"      "finance"    "find"       "fine"       "finish"    
## [191] "fire"       "first"      "fish"       "fit"        "five"      
## [196] "flat"       "fly"        "france"     "free"       "friday"    
## [201] "friend"     "game"       "garden"     "gas"        "general"   
## [206] "germany"    "get"        "girl"       "give"       "glass"     
## [211] "grand"      "grant"      "great"      "green"      "hair"      
## [216] "half"       "hall"       "hand"       "hang"       "happen"    
## [221] "happy"      "hard"       "hate"       "have"       "he"        
## [226] "head"       "health"     "hear"       "heart"      "heat"      
## [231] "heavy"      "hell"       "help"       "here"       "high"      
## [236] "hit"        "idea"       "identify"   "if"         "imagine"   
## [241] "in"         "increase"   "indeed"     "inside"     "instead"   
## [246] "interest"   "invest"     "it"         "item"       "keep"      
## [251] "key"        "kid"        "kill"       "kind"       "king"      
## [256] "kitchen"    "lad"        "lady"       "land"       "large"     
## [261] "last"       "late"       "law"        "lay"        "lead"      
## [266] "learn"      "leave"      "left"       "leg"        "less"      
## [271] "let"        "letter"     "level"      "lie"        "life"      
## [276] "light"      "like"       "likely"     "limit"      "line"      
## [281] "link"       "list"       "listen"     "little"     "live"      
## [286] "machine"    "main"       "make"       "man"        "manage"    
## [291] "many"       "mark"       "market"     "marry"      "match"     
## [296] "matter"     "may"        "maybe"      "mean"       "meaning"   
## [301] "meet"       "member"     "middle"     "might"      "mile"      
## [306] "milk"       "mind"       "minister"   "miss"       "mister"    
## [311] "mrs"        "name"       "near"       "necessary"  "need"      
## [316] "never"      "new"        "news"       "next"       "nice"      
## [321] "night"      "nine"       "pack"       "page"       "paint"     
## [326] "pair"       "paper"      "paragraph"  "parent"     "park"      
## [331] "part"       "party"      "pass"       "past"       "pay"       
## [336] "pence"      "per"        "percent"    "perfect"    "perhaps"   
## [341] "pick"       "piece"      "place"      "plan"       "play"      
## [346] "please"     "practise"   "prepare"    "present"    "press"     
## [351] "pretty"     "price"      "print"      "private"    "rail"      
## [356] "raise"      "range"      "rate"       "rather"     "read"      
## [361] "ready"      "real"       "realise"    "really"     "receive"   
## [366] "recent"     "red"        "refer"      "regard"     "remember"  
## [371] "represent"  "research"   "respect"    "rest"       "rid"       
## [376] "right"      "ring"       "rise"       "safe"       "sale"      
## [381] "same"       "save"       "say"        "scheme"     "science"   
## [386] "seat"       "secretary"  "see"        "seem"       "self"      
## [391] "sell"       "send"       "sense"      "separate"   "serve"     
## [396] "service"    "set"        "settle"     "seven"      "sex"       
## [401] "shall"      "share"      "she"        "sheet"      "sick"      
## [406] "side"       "sign"       "similar"    "simple"     "since"     
## [411] "sing"       "single"     "sir"        "sister"     "sit"       
## [416] "site"       "six"        "size"       "sleep"      "slight"    
## [421] "small"      "space"      "speak"      "special"    "specific"  
## [426] "speed"      "spell"      "spend"      "staff"      "stage"     
## [431] "stairs"     "stand"      "standard"   "start"      "state"     
## [436] "stay"       "step"       "stick"      "still"      "straight"  
## [441] "strategy"   "street"     "strike"     "switch"     "system"    
## [446] "table"      "take"       "talk"       "tape"       "tax"       
## [451] "tea"        "teach"      "team"       "tell"       "ten"       
## [456] "tend"       "term"       "terrible"   "test"       "than"      
## [461] "thank"      "the"        "then"       "there"      "they"      
## [466] "thing"      "think"      "thirteen"   "thirty"     "this"      
## [471] "three"      "tie"        "time"       "trade"      "traffic"   
## [476] "train"      "travel"     "treat"      "tree"       "try"       
## [481] "twelve"     "twenty"     "type"       "very"       "view"      
## [486] "village"    "visit"      "wage"       "wait"       "walk"      
## [491] "wall"       "want"       "war"        "warm"       "wash"      
## [496] "waste"      "watch"      "water"      "way"        "we"        
## [501] "wear"       "wednesday"  "wee"        "week"       "weigh"     
## [506] "well"       "west"       "what"       "when"       "where"     
## [511] "whether"    "which"      "while"      "white"      "why"       
## [516] "wide"       "wife"       "will"       "win"        "wind"      
## [521] "wish"       "with"       "within"     "write"      "year"      
## [526] "yes"        "yesterday"  "yet"

You can also use str_count() to count how many times a pattern matches in each string.

x
## [1] "ATTAGA"      "CGCCCCCGGAT" "TATTA"
str_count(x, "[GC]")
## [1] 1 9 0
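
Since str_count() returns one count per string, you can, for instance, turn those counts into GC proportions (a hypothetical extension using the same x as above):

library(stringr)

x <- c("ATTAGA", "CGCCCCCGGAT", "TATTA")
str_count(x, "[GC]") / str_length(x)  # GC content per sequence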

Let's couple this with mutate() to add per-word counts:

df <- tibble(
  word = words,
  count = seq_along(word)  # a simple row index
)

df
## # A tibble: 980 × 2
##    word     count
##    <chr>    <int>
##  1 a            1
##  2 able         2
##  3 about        3
##  4 absolute     4
##  5 accept       5
##  6 account      6
##  7 achieve      7
##  8 across       8
##  9 act          9
## 10 active      10
## # ℹ 970 more rows
df %>%
  mutate(
    vowels = str_count(word, "[aeiou]"),
    consonants = str_count(word, "[^aeiou]")
  )
## # A tibble: 980 × 4
##    word     count vowels consonants
##    <chr>    <int>  <int>      <int>
##  1 a            1      1          0
##  2 able         2      2          2
##  3 about        3      3          2
##  4 absolute     4      4          4
##  5 accept       5      2          4
##  6 account      6      3          4
##  7 achieve      7      4          3
##  8 across       8      2          4
##  9 act          9      1          2
## 10 active      10      3          3
## # ℹ 970 more rows
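
A natural follow-up (a sketch, not in the original): combine str_count() with str_length() to get the fraction of each word that is vowels.

library(dplyr)
library(stringr)

df %>%
  mutate(
    vowels = str_count(word, "[aeiou]"),
    prop_vowels = vowels / str_length(word)  # vowel fraction per word
  )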

Limma Microarray Analysis

First, you will need to load the following packages.

library(ath1121501cdf)
## Warning: replacing previous import 'AnnotationDbi::tail' by 'utils::tail' when
## loading 'ath1121501cdf'
## Warning: replacing previous import 'AnnotationDbi::head' by 'utils::head' when
## loading 'ath1121501cdf'
## 
library(ath1121501.db)
## Loading required package: AnnotationDbi
## Loading required package: stats4
## Loading required package: BiocGenerics
## 
## Attaching package: 'BiocGenerics'
## The following objects are masked from 'package:lubridate':
## 
##     intersect, setdiff, union
## The following objects are masked from 'package:dplyr':
## 
##     combine, intersect, setdiff, union
## The following objects are masked from 'package:stats':
## 
##     IQR, mad, sd, var, xtabs
## The following objects are masked from 'package:base':
## 
##     anyDuplicated, aperm, append, as.data.frame, basename, cbind,
##     colnames, dirname, do.call, duplicated, eval, evalq, Filter, Find,
##     get, grep, grepl, intersect, is.unsorted, lapply, Map, mapply,
##     match, mget, order, paste, pmax, pmax.int, pmin, pmin.int,
##     Position, rank, rbind, Reduce, rownames, sapply, setdiff, sort,
##     table, tapply, union, unique, unsplit, which.max, which.min
## Loading required package: Biobase
## Welcome to Bioconductor
## 
##     Vignettes contain introductory material; view with
##     'browseVignettes()'. To cite Bioconductor, see
##     'citation("Biobase")', and for packages 'citation("pkgname")'.
## Loading required package: IRanges
## Loading required package: S4Vectors
## 
## Attaching package: 'S4Vectors'
## The following objects are masked from 'package:lubridate':
## 
##     second, second<-
## The following objects are masked from 'package:dplyr':
## 
##     first, rename
## The following object is masked from 'package:tidyr':
## 
##     expand
## The following object is masked from 'package:utils':
## 
##     findMatches
## The following objects are masked from 'package:base':
## 
##     expand.grid, I, unname
## 
## Attaching package: 'IRanges'
## The following object is masked from 'package:lubridate':
## 
##     %within%
## The following objects are masked from 'package:dplyr':
## 
##     collapse, desc, slice
## The following object is masked from 'package:purrr':
## 
##     reduce
## 
## Attaching package: 'AnnotationDbi'
## The following object is masked from 'package:dplyr':
## 
##     select
## Loading required package: org.At.tair.db
## 
## 
library(annotate)
## Loading required package: XML
library(affy)
## 
## Attaching package: 'affy'
## The following object is masked from 'package:lubridate':
## 
##     pm
library(limma)
## 
## Attaching package: 'limma'
## The following object is masked from 'package:BiocGenerics':
## 
##     plotMA
library(oligo)
## Loading required package: oligoClasses
## Welcome to oligoClasses version 1.64.0
## 
## Attaching package: 'oligoClasses'
## The following object is masked from 'package:affy':
## 
##     list.celfiles
## Loading required package: Biostrings
## Loading required package: XVector
## 
## Attaching package: 'XVector'
## The following object is masked from 'package:purrr':
## 
##     compact
## Loading required package: GenomeInfoDb
## 
## Attaching package: 'Biostrings'
## The following object is masked from 'package:base':
## 
##     strsplit
## ================================================================================
## Welcome to oligo version 1.66.0
## ================================================================================
## 
## Attaching package: 'oligo'
## The following object is masked from 'package:limma':
## 
##     backgroundCorrect
## The following objects are masked from 'package:affy':
## 
##     intensity, MAplot, mm, mm<-, mmindex, pm, pm<-, pmindex,
##     probeNames, rma
## The following object is masked from 'package:lubridate':
## 
##     pm
## The following object is masked from 'package:dplyr':
## 
##     summarize
library(dplyr)
library(stats)
library(reshape)
## 
## Attaching package: 'reshape'
## The following objects are masked from 'package:S4Vectors':
## 
##     expand, rename
## The following object is masked from 'package:lubridate':
## 
##     stamp
## The following object is masked from 'package:dplyr':
## 
##     rename
## The following objects are masked from 'package:tidyr':
## 
##     expand, smiths

Read all the CEL files from the working directory.

# Read the sample metadata (targets) file
targets <- readTargets("Bric16_Targets.csv", sep = ",", row.names = "filename")

# Read every CEL file in the working directory into an AffyBatch
ab <- ReadAffy()
hist(ab)
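
hist(ab) gives a density plot of the raw intensities. As an optional extra QC view (not in the original, and assuming the boxplot method that affy provides for AffyBatch objects), a boxplot shows the per-array intensity distributions side by side:

# Optional QC sketch: per-array distributions of raw intensities
boxplot(ab)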

Normalizing the microarray data with RMA (robust multi-array average)

eset <- affy::rma(ab)
## Background correcting
## Normalizing
## Calculating Expression
pData(eset)
##                                          sample
## Atha_Ler-0_sShoots_FLT_Rep1_F-F2_raw.CEL      1
## Atha_Ler-0_sShoots_FLT_Rep2_F-F3_raw.CEL      2
## Atha_Ler-0_sShoots_FLT_Rep3_F-F4_raw.CEL      3
## Atha_Ler-0_sShoots_GC_Rep1_H-F2_raw.CEL       4
## Atha_Ler-0_sShoots_GC_Rep2_H-F3_raw.CEL       5
## Atha_Ler-0_sShoots_GC_Rep3_H-F4_raw.CEL       6

Let's visualize the normalized data.

hist(eset)

Let's characterize the data a bit.

ID <- featureNames(eset)                  # probe set IDs
Symbol <- getSYMBOL(ID, "ath1121501.db")  # map probe set IDs to gene symbols

affydata <- read.delim("AFFY_ATH1_array_elements.txt")
## Warning in scan(file = file, what = what, sep = sep, quote = quote, dec = dec,
## : EOF within quoted string
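
The "EOF within quoted string" warning usually means the file contains an unbalanced quote character, so read.delim() stops reading early and the table may be truncated. A minimal fix, assuming no fields in this file genuinely need quoting, is to disable quote handling:

# Disable quote handling so stray quote characters don't truncate the read
affydata <- read.delim("AFFY_ATH1_array_elements.txt", quote = "")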

Differential Gene Expression Analysis

Flight vs Ground

condition <- targets$gravity

# Design matrix with one column per condition (flight vs ground)
design <- model.matrix(~factor(condition))
colnames(design) <- c("flight", "ground")

# Fit the linear model, then moderate the statistics with empirical Bayes
fit <- lmFit(eset, design)
fit <- eBayes(fit)
options(digits = 2)
top <- topTable(fit, coef = 2, n = Inf, adjust = "fdr")
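
As a quick sanity check (a sketch, not part of the original analysis), you can inspect the top-ranked probe sets and count how many pass a 5% FDR; topTable() with adjust = "fdr" stores the adjusted p-values in the adj.P.Val column:

head(top)                   # top-ranked probe sets by moderated t-statistic
sum(top$adj.P.Val < 0.05)   # how many probe sets pass a 5% FDR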

Let's combine the annotations into a single data frame.

# Collapse each annotation mapping to one comma-separated string per probe set
Annot <- data.frame(GENE = sapply(as.list(ath1121501GENENAME), paste, collapse = ", "),
                    KEGG = sapply(as.list(ath1121501PATH), paste, collapse = ", "),
                    GO = sapply(as.list(ath1121501GO), paste, collapse = ", "), 
                    SYMBOL = sapply(as.list(ath1121501SYMBOL), paste, collapse = ", "), 
                    ACCNUM = sapply(as.list(ath1121501ACCNUM), paste, collapse = ", "))

Let's merge all the data into one data frame.

# by.x = 0 / by.y = 0 merges on row names (the probe set IDs)
all <- merge(Annot, top, by.x = 0, by.y = 0, all = TRUE)

all2 <- merge(all, affydata, by.x = "Row.names", by.y = "array_element_name")

Let's convert ACCNUM to a character vector.

all2$ACCNUM <- as.character(all2$ACCNUM)

# Note: sep = "\t" writes a tab-delimited file, despite the .csv extension
write.table(all2, file="BRIC_16_Final.csv", row.names = TRUE, col.names = TRUE, sep = "\t")


columns(org.At.tair.db)
##  [1] "ARACYC"       "ARACYCENZYME" "ENTREZID"     "ENZYME"       "EVIDENCE"    
##  [6] "EVIDENCEALL"  "GENENAME"     "GO"           "GOALL"        "ONTOLOGY"    
## [11] "ONTOLOGYALL"  "PATH"         "PMID"         "REFSEQ"       "SYMBOL"      
## [16] "TAIR"
foldchanges <- as.data.frame(all2$logFC)

# Map TAIR locus IDs (stored in ACCNUM) to Entrez gene IDs
all2$entrez = mapIds(org.At.tair.db,
                     keys = all2$ACCNUM, 
                     column = "ENTREZID", 
                     keytype = "TAIR", 
                     multiVals = "first")
## 'select()' returned 1:1 mapping between keys and columns
head(all2, 10)
##    Row.names                 GENE KEGG
## 1  244903_at hypothetical protein   NA
## 2  244904_at hypothetical protein   NA
## 3  244906_at hypothetical protein   NA
## 4  244907_at hypothetical protein   NA
## 5  244908_at hypothetical protein   NA
## 6  244911_at hypothetical protein   NA
## 7  244913_at hypothetical protein   NA
## 8  244914_at hypothetical protein   NA
## 9  244916_at hypothetical protein   NA
## 10 244917_at hypothetical protein   NA
##                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     GO
## 1                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               list(GOID = "GO:0008150", Evidence = "ND", Ontology = "BP"), list(GOID = "GO:0005739", Evidence = "ISM", Ontology = "CC"), list(GOID = "GO:0003674", Evidence = "ND", Ontology = "MF")
## 2                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            list(GOID = "GO:0005739", Evidence = "ISM", Ontology = "CC"), list(GOID = "GO:0003674", Evidence = "ND", Ontology = "MF")
## 3                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               list(GOID = "GO:0008150", Evidence = "ND", Ontology = "BP"), list(GOID = "GO:0005739", Evidence = "ISM", Ontology = "CC"), list(GOID = "GO:0003674", Evidence = "ND", Ontology = "MF")
## 4                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               list(GOID = "GO:0008150", Evidence = "ND", Ontology = "BP"), list(GOID = "GO:0005739", Evidence = "ISM", Ontology = "CC"), list(GOID = "GO:0003674", Evidence = "ND", Ontology = "MF")
## 5                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               list(GOID = "GO:0008150", Evidence = "ND", Ontology = "BP"), list(GOID = "GO:0005739", Evidence = "ISM", Ontology = "CC"), list(GOID = "GO:0003674", Evidence = "ND", Ontology = "MF")
## 6                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                list(GOID = "GO:0008150", Evidence = "ND", Ontology = "BP"), list(GOID = "GO:0005575", Evidence = "ND", Ontology = "CC"), list(GOID = "GO:0003674", Evidence = "ND", Ontology = "MF")
## 7                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               list(GOID = "GO:0008150", Evidence = "ND", Ontology = "BP"), list(GOID = "GO:0005739", Evidence = "ISM", Ontology = "CC"), list(GOID = "GO:0003674", Evidence = "ND", Ontology = "MF")
## 8  list(GOID = "GO:0018130", Evidence = "IEA", Ontology = "BP"), list(GOID = "GO:0018130", Evidence = "IEA", Ontology = "BP"), list(GOID = "GO:0018130", Evidence = "IEA", Ontology = "BP"), list(GOID = "GO:0019438", Evidence = "IEA", Ontology = "BP"), list(GOID = "GO:0019438", Evidence = "IEA", Ontology = "BP"), list(GOID = "GO:0019438", Evidence = "IEA", Ontology = "BP"), list(GOID = "GO:0044271", Evidence = "IEA", Ontology = "BP"), list(GOID = "GO:0044271", Evidence = "IEA", Ontology = "BP"), list(GOID = "GO:0044271", Evidence = "IEA", Ontology = "BP"), list(GOID = "GO:1901362", Evidence = "IEA", Ontology = "BP"), list(GOID = "GO:1901362", Evidence = "IEA", Ontology = "BP"), list(GOID = "GO:0005739", Evidence = "ISM", Ontology = "CC"), list(GOID = "GO:0003674", Evidence = "ND", Ontology = "MF")
## 9                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               list(GOID = "GO:0008150", Evidence = "ND", Ontology = "BP"), list(GOID = "GO:0005634", Evidence = "ISM", Ontology = "CC"), list(GOID = "GO:0003674", Evidence = "ND", Ontology = "MF")
## 10                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              list(GOID = "GO:0008150", Evidence = "ND", Ontology = "BP"), list(GOID = "GO:0005739", Evidence = "ISM", Ontology = "CC"), list(GOID = "GO:0003674", Evidence = "ND", Ontology = "MF")
##     SYMBOL    ACCNUM  logFC AveExpr     t P.Value adj.P.Val    B
## 1   ORF149 ATMG00660 -1.214     7.8 -6.20  0.0004     0.018  0.5
## 2   ORF275 ATMG00670 -0.568     3.2 -2.07  0.0758     0.300 -4.9
## 3  ORF240A ATMG00690  0.055     9.1  0.18  0.8586     0.947 -6.8
## 4   ORF120 ATMG00710 -0.883     4.3 -2.91  0.0220     0.154 -3.7
## 5  ORF107D ATMG00720 -0.463     2.2 -1.73  0.1262     0.396 -5.4
## 6   ORF170 ATMG00820 -0.193     2.7 -0.72  0.4948     0.749 -6.5
## 7  ORF121B ATMG00840 -0.339     1.8 -1.18  0.2764     0.574 -6.1
## 8  ORF107E ATMG00850 -0.305     5.5 -1.30  0.2338     0.534 -6.0
## 9   ORF187 ATMG00880 -0.888     2.8 -2.32  0.0527     0.245 -4.6
## 10  ORF184 ATMG00870 -0.379     3.6 -1.31  0.2307     0.531 -5.9
##    array_element_type             organism is_control     locus
## 1     oligonucleotide Arabidopsis thaliana         no ATMG00660
## 2     oligonucleotide Arabidopsis thaliana         no ATMG00670
## 3     oligonucleotide Arabidopsis thaliana         no ATMG00690
## 4     oligonucleotide Arabidopsis thaliana         no ATMG00710
## 5     oligonucleotide Arabidopsis thaliana         no ATMG00720
## 6     oligonucleotide Arabidopsis thaliana         no ATMG00820
## 7     oligonucleotide Arabidopsis thaliana         no AT2G07626
## 8     oligonucleotide Arabidopsis thaliana         no ATMG00850
## 9     oligonucleotide Arabidopsis thaliana         no ATMG00880
## 10    oligonucleotide Arabidopsis thaliana         no ATMG00870
##                                                                                description
## 1                                                  hypothetical protein;(source:Araport11)
## 2                                                 transmembrane protein;(source:Araport11)
## 3                                                     FO-ATPase subunit;(source:Araport11)
## 4  Polynucleotidyl transferase, ribonuclease H-like superfamily protein;(source:Araport11)
## 5                                                  hypothetical protein;(source:Araport11)
## 6                  Reverse transcriptase (RNA-dependent DNA polymerase);(source:Araport11)
## 7                                                  hypothetical protein;(source:Araport11)
## 8                               DNA/RNA polymerases superfamily protein;(source:Araport11)
## 9                                                  hypothetical protein;(source:Araport11)
## 10                                                 hypothetical protein;(source:Araport11)
##    chromosome entrez
## 1           M   <NA>
## 2           M   <NA>
## 3           M   <NA>
## 4           M   <NA>
## 5           M   <NA>
## 6           M   <NA>
## 7           2   <NA>
## 8           M   <NA>
## 9           M   <NA>
## 10          M   <NA>

Pathview

First, you will need to load the following packages.

library(pathview)
## 
## ##############################################################################
## Pathview is an open source software package distributed under GNU General
## Public License version 3 (GPLv3). Details of GPLv3 is available at
## http://www.gnu.org/licenses/gpl-3.0.html. Particullary, users are required to
## formally cite the original Pathview paper (not just mention it) in publications
## or products. For details, do citation("pathview") within R.
## 
## The pathview downloads and uses KEGG data. Non-academic uses may require a KEGG
## license agreement (details at http://www.kegg.jp/kegg/legal.html).
## ##############################################################################
library(gage)
## 
library(gageData)
data(kegg.sets.hs)
# Named vector of log2 fold changes, keyed by Entrez gene ID
foldchanges = all2$logFC
names(foldchanges) = all2$entrez

head(foldchanges)
##   <NA>   <NA>   <NA>   <NA>   <NA>   <NA> 
## -1.214 -0.568  0.055 -0.883 -0.463 -0.193
kegg.ath = kegg.gsets("ath", id.type = "entrez")          # Arabidopsis KEGG gene sets
kegg.ath.sigmet = kegg.ath$kg.sets[kegg.ath$sigmet.idx]   # signaling + metabolism pathways only
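
Notice that head(foldchanges) above shows NA names: those probes had no Entrez mapping. The original analysis keeps them; as a hypothetical cleanup you could subset to mapped entries first (mapped is a new name introduced here, not part of the original pipeline):

# Hypothetical cleanup: drop fold changes whose Entrez name is NA
mapped <- foldchanges[!is.na(names(foldchanges))]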

Let's get the results from gage().

keggres = gage(foldchanges, gsets=kegg.ath.sigmet, same.dir = TRUE)

lapply(keggres, head)
## $greater
##                                                   p.geomean stat.mean   p.val
## ath03010 Ribosome                                   1.5e-14       8.2 1.5e-14
## ath01230 Biosynthesis of amino acids                2.9e-04       3.5 2.9e-04
## ath00040 Pentose and glucuronate interconversions   2.0e-03       3.0 2.0e-03
## ath00195 Photosynthesis                             2.6e-03       3.0 2.6e-03
## ath00966 Glucosinolate biosynthesis                 7.0e-03       2.7 7.0e-03
## ath01232 Nucleotide metabolism                      9.2e-03       2.4 9.2e-03
##                                                     q.val set.size    exp1
## ath03010 Ribosome                                 1.6e-12      129 1.5e-14
## ath01230 Biosynthesis of amino acids              1.5e-02       87 2.9e-04
## ath00040 Pentose and glucuronate interconversions 6.6e-02       49 2.0e-03
## ath00195 Photosynthesis                           6.6e-02       19 2.6e-03
## ath00966 Glucosinolate biosynthesis               1.3e-01       12 7.0e-03
## ath01232 Nucleotide metabolism                    1.3e-01       42 9.2e-03
## 
## $less
##                                          p.geomean stat.mean p.val q.val
## ath04120 Ubiquitin mediated proteolysis      0.043      -1.7 0.043     1
## ath04016 MAPK signaling pathway - plant      0.044      -1.7 0.044     1
## ath00592 alpha-Linolenic acid metabolism     0.045      -1.7 0.045     1
## ath03040 Spliceosome                         0.075      -1.4 0.075     1
## ath00350 Tyrosine metabolism                 0.093      -1.4 0.093     1
## ath00906 Carotenoid biosynthesis             0.116      -1.2 0.116     1
##                                          set.size  exp1
## ath04120 Ubiquitin mediated proteolysis        63 0.043
## ath04016 MAPK signaling pathway - plant        73 0.044
## ath00592 alpha-Linolenic acid metabolism       19 0.045
## ath03040 Spliceosome                           77 0.075
## ath00350 Tyrosine metabolism                   19 0.093
## ath00906 Carotenoid biosynthesis               18 0.116
## 
## $stats
##                                                   stat.mean exp1
## ath03010 Ribosome                                       8.2  8.2
## ath01230 Biosynthesis of amino acids                    3.5  3.5
## ath00040 Pentose and glucuronate interconversions       3.0  3.0
## ath00195 Photosynthesis                                 3.0  3.0
## ath00966 Glucosinolate biosynthesis                     2.7  2.7
## ath01232 Nucleotide metabolism                          2.4  2.4
greater <- keggres$greater
lessers <- keggres$less

write.csv(greater, file = "BRIC_16_pathview_Greater_Pathways.csv")
write.csv(lessers, file = "BRIC_16_pathview_Lesser_Pathways.csv")

Let's map to pathways (greater).

Greater Pathway

# Pull the IDs of the top five enriched ("greater") pathways
keggrespathways = data.frame(id = rownames(keggres$greater), keggres$greater) %>%
  tibble::as_tibble() %>%
  filter(row_number() <= 5) %>%
  .$id %>%
  as.character()
keggrespathways
## [1] "ath03010 Ribosome"                                
## [2] "ath01230 Biosynthesis of amino acids"             
## [3] "ath00040 Pentose and glucuronate interconversions"
## [4] "ath00195 Photosynthesis"                          
## [5] "ath00966 Glucosinolate biosynthesis"
keggresids_greater = substr(keggrespathways, start = 1, stop = 8)  # the KEGG ID is the first 8 characters
keggresids_greater
## [1] "ath03010" "ath01230" "ath00040" "ath00195" "ath00966"
genedata <- as.character(all2$logFC)  # defined but not used below; pathview takes the named foldchanges vector

Define a plotting function we could apply next (the sapply() call below uses an inline variant with kegg.native = FALSE to write PDF output):

plot_pathway = function(pid) pathview(gene.data=foldchanges, pathway.id = pid, species = "ath", new.signature = FALSE)

Plot multiple pathways (the plots are saved to disk as PDFs because kegg.native = FALSE; sapply() returns a throwaway list object):

tmp = sapply(keggresids_greater, function(pid) pathview(gene.data=foldchanges, pathway.id = pid, species = "ath", kegg.native = FALSE))
## Info: Getting gene ID data from KEGG...
## Info: Done with data retrieval!
## Note: ath03010 not rendered, 0 or 1 connected nodes!
## Try "kegg.native=T" instead!
## Info: Getting gene ID data from KEGG...
## Info: Done with data retrieval!
## Info: Working in directory /home/student/Desktop/classroom/myfiles/garnettnatalie@gmail.com
## Info: Writing image file ath01230.pathview.pdf
## Info: Getting gene ID data from KEGG...
## Info: Done with data retrieval!
## Warning in .local(from, to, graph): edges replaced: '217|104', '217|216'
## Info: Working in directory /home/student/Desktop/classroom/myfiles/garnettnatalie@gmail.com
## Info: Writing image file ath00040.pathview.pdf
## Info: Getting gene ID data from KEGG...
## Info: Done with data retrieval!
## Note: ath00195 not rendered, 0 or 1 connected nodes!
## Try "kegg.native=T" instead!
## Info: Getting gene ID data from KEGG...
## Info: Done with data retrieval!
## Info: Working in directory /home/student/Desktop/classroom/myfiles/garnettnatalie@gmail.com
## Info: Writing image file ath00966.pathview.pdf

Let's map to pathways (lesser).

Lesser Pathway

# Pull the IDs of the top five depleted ("less") pathways
keggrespathways = data.frame(id = rownames(keggres$less), keggres$less) %>%
  tibble::as_tibble() %>%
  filter(row_number() <= 5) %>%
  .$id %>%
  as.character()
keggrespathways
## [1] "ath04120 Ubiquitin mediated proteolysis" 
## [2] "ath04016 MAPK signaling pathway - plant" 
## [3] "ath00592 alpha-Linolenic acid metabolism"
## [4] "ath03040 Spliceosome"                    
## [5] "ath00350 Tyrosine metabolism"
keggresids_less = substr(keggrespathways, start = 1, stop = 8)  # the KEGG ID is the first 8 characters
keggresids_less
## [1] "ath04120" "ath04016" "ath00592" "ath03040" "ath00350"
genedata <- as.character(all2$logFC)

Define a plotting function we could apply next (again, the sapply() call below uses an inline variant):

plot_pathway = function(pid) pathview(gene.data=foldchanges, pathway.id = pid, species = "ath", new.signature = FALSE)

Plot multiple pathways (the plots are saved to disk as native KEGG PNGs; sapply() returns a throwaway list object):

tmp = sapply(keggresids_less, function(pid) pathview(gene.data=foldchanges, pathway.id = pid, species = "ath"))
## Info: Getting gene ID data from KEGG...
## Info: Done with data retrieval!
## Info: Working in directory /home/student/Desktop/classroom/myfiles/garnettnatalie@gmail.com
## Info: Writing image file ath04120.pathview.png
## Info: Getting gene ID data from KEGG...
## Info: Done with data retrieval!
## Info: Working in directory /home/student/Desktop/classroom/myfiles/garnettnatalie@gmail.com
## Info: Writing image file ath04016.pathview.png
## Info: Getting gene ID data from KEGG...
## Info: Done with data retrieval!
## Info: Working in directory /home/student/Desktop/classroom/myfiles/garnettnatalie@gmail.com
## Info: Writing image file ath00592.pathview.png
## Info: Getting gene ID data from KEGG...
## Info: Done with data retrieval!
## Info: Working in directory /home/student/Desktop/classroom/myfiles/garnettnatalie@gmail.com
## Info: Writing image file ath03040.pathview.png
## Info: Getting gene ID data from KEGG...
## Info: Done with data retrieval!
## Info: Working in directory /home/student/Desktop/classroom/myfiles/garnettnatalie@gmail.com
## Info: Writing image file ath00350.pathview.png

Mouse edgeR Analysis

First, you will need to load the following packages.

library("edgeR")
library("dplyr")
library("AnnotationDbi")
library("org.Mm.eg.db")
## 
rawdata = read.csv("GLDS-102_rna_seq_Normalized_Counts.csv", row.names = "gene_id")

# Group factor: samples 1-6 are flight (FLT), samples 7-12 are ground control (GC)
group <- factor(c('1', '1', '1', '1', '1', '1', '2', '2', '2', '2', '2', '2'))

dgeGlm <- DGEList(counts = rawdata, group = group)
str(dgeGlm)
## Formal class 'DGEList' [package "edgeR"] with 1 slot
##   ..@ .Data:List of 2
##   .. ..$ : num [1:24035, 1:12] 2976.8 59.8 21.2 40.1 0 ...
##   .. .. ..- attr(*, "dimnames")=List of 2
##   .. .. .. ..$ : chr [1:24035] "ENSMUSG00000000001" "ENSMUSG00000000028" "ENSMUSG00000000031" "ENSMUSG00000000037" ...
##   .. .. .. ..$ : chr [1:12] "Mmus_C57.6J_KDN_FLT_Rep1_M23" "Mmus_C57.6J_KDN_FLT_Rep2_M24" "Mmus_C57.6J_KDN_FLT_Rep3_M25" "Mmus_C57.6J_KDN_FLT_Rep4_M26" ...
##   .. ..$ :'data.frame':  12 obs. of  3 variables:
##   .. .. ..$ group       : Factor w/ 2 levels "1","2": 1 1 1 1 1 1 2 2 2 2 ...
##   .. .. ..$ lib.size    : num [1:12] 40266365 40740336 37739541 39196969 36820645 ...
##   .. .. ..$ norm.factors: num [1:12] 1 1 1 1 1 1 1 1 1 1 ...
##   ..$ names: chr [1:2] "counts" "samples"
# Keep genes with more than 2 counts per million (CPM) in at least 4 samples
keep <- rowSums(cpm(dgeGlm) > 2) >= 4

dgeGlm <- dgeGlm[keep, ]
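
As a quick check (not shown in the original output), you can count how many genes survive this filter; the decideTestsDGE() summary further down (64 + 13390 + 159) implies 13613 genes were kept:

sum(keep)  # number of genes retained by the expression filter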

names(dgeGlm)
## [1] "counts"  "samples"
dgeGlm[["samples"]]
##                              group lib.size norm.factors
## Mmus_C57.6J_KDN_FLT_Rep1_M23     1  4.0e+07            1
## Mmus_C57.6J_KDN_FLT_Rep2_M24     1  4.1e+07            1
## Mmus_C57.6J_KDN_FLT_Rep3_M25     1  3.8e+07            1
## Mmus_C57.6J_KDN_FLT_Rep4_M26     1  3.9e+07            1
## Mmus_C57.6J_KDN_FLT_Rep5_M27     1  3.7e+07            1
## Mmus_C57.6J_KDN_FLT_Rep6_M28     1  3.6e+07            1
## Mmus_C57.6J_KDN_GC_Rep1_M33      2  3.7e+07            1
## Mmus_C57.6J_KDN_GC_Rep2_M34      2  3.7e+07            1
## Mmus_C57.6J_KDN_GC_Rep3_M35      2  4.0e+07            1
## Mmus_C57.6J_KDN_GC_Rep4_M36      2  3.6e+07            1
## Mmus_C57.6J_KDN_GC_Rep5_M37      2  3.8e+07            1
## Mmus_C57.6J_KDN_GC_Rep6_M38      2  3.5e+07            1
design <- model.matrix(~group)
design
##    (Intercept) group2
## 1            1      0
## 2            1      0
## 3            1      0
## 4            1      0
## 5            1      0
## 6            1      0
## 7            1      1
## 8            1      1
## 9            1      1
## 10           1      1
## 11           1      1
## 12           1      1
## attr(,"assign")
## [1] 0 1
## attr(,"contrasts")
## attr(,"contrasts")$group
## [1] "contr.treatment"
# Estimate common, then trended, then tagwise (per-gene) dispersions
dgeGlmComDisp <- estimateGLMCommonDisp(dgeGlm, design, verbose = TRUE)
## Disp = 0.026 , BCV = 0.16
dgeGlmTrendDisp <- estimateGLMTrendedDisp(dgeGlmComDisp, design)

dgeGlmTagDisp <- estimateGLMTagwiseDisp(dgeGlmTrendDisp, design)

# Plot the per-gene biological coefficient of variation (BCV)
plotBCV(dgeGlmTagDisp)

fit <- glmFit(dgeGlmTagDisp, design)

colnames(coef(fit))
## [1] "(Intercept)" "group2"
lrt <- glmLRT(fit, coef = 2)  # likelihood-ratio test on the group2 coefficient

ttGlm <- topTags(lrt, n = Inf)

class(ttGlm)
## [1] "TopTags"
## attr(,"package")
## [1] "edgeR"
summary(deGlm <- decideTestsDGE(lrt, p = 0.1, adjust = "fdr"))
##        group2
## Down       64
## NotSig  13390
## Up        159
tagsGlm <- rownames(dgeGlmTagDisp)[as.logical(deGlm)]

# Smear plot (log fold change vs abundance) highlighting the DE genes
plotSmear(lrt, de.tags = tagsGlm)

# Keep genes passing a 10% FDR cutoff
hits2 <- ttGlm$table[ttGlm$table$FDR < 0.1, ]

write.csv(hits2, "Mouse_EdgeR_Results_Zero_vs.1.csv")
columns(org.Mm.eg.db)
##  [1] "ACCNUM"       "ALIAS"        "ENSEMBL"      "ENSEMBLPROT"  "ENSEMBLTRANS"
##  [6] "ENTREZID"     "ENZYME"       "EVIDENCE"     "EVIDENCEALL"  "GENENAME"    
## [11] "GENETYPE"     "GO"           "GOALL"        "IPI"          "MGI"         
## [16] "ONTOLOGY"     "ONTOLOGYALL"  "PATH"         "PFAM"         "PMID"        
## [21] "PROSITE"      "REFSEQ"       "SYMBOL"       "UNIPROT"
ttGlm2 <- as.data.frame(ttGlm$table)

ttGlm2$symbol = mapIds(org.Mm.eg.db,
                       keys=row.names(ttGlm2),
                       column = "SYMBOL",
                       keytype = "ENSEMBL",
                       multiVals = "first")
## 'select()' returned 1:many mapping between keys and columns
ttGlm2$entrez = mapIds(org.Mm.eg.db,
                       keys=row.names(ttGlm2),
                       column = "ENTREZID",
                       keytype = "ENSEMBL",
                       multiVals = "first")
## 'select()' returned 1:many mapping between keys and columns
ttGlm2$name = mapIds(org.Mm.eg.db,
                       keys=row.names(ttGlm2),
                       column = "GENENAME",
                       keytype = "ENSEMBL",
                       multiVals = "first")
## 'select()' returned 1:many mapping between keys and columns
head(ttGlm2)
##                    logFC logCPM LR  PValue     FDR symbol entrez
## ENSMUSG00000026077 -1.36    3.6 80 4.3e-19 5.9e-15  Npas2  18143
## ENSMUSG00000007872  0.89    5.5 77 1.9e-18 1.3e-14    Id3  15903
## ENSMUSG00000021775  0.95    6.2 63 2.0e-15 9.1e-12  Nr1d2 353187
## ENSMUSG00000002250 -0.83    5.3 62 2.7e-15 9.2e-12  Ppard  19015
## ENSMUSG00000059824  2.26    4.6 58 2.6e-14 7.2e-11    Dbp  13170
## ENSMUSG00000074715 -1.99    3.8 47 7.0e-12 1.6e-08  Ccl28  56838
##                                                                name
## ENSMUSG00000026077                    neuronal PAS domain protein 2
## ENSMUSG00000007872                       inhibitor of DNA binding 3
## ENSMUSG00000021775  nuclear receptor subfamily 1, group D, member 2
## ENSMUSG00000002250 peroxisome proliferator activator receptor delta
## ENSMUSG00000059824          D site albumin promoter binding protein
## ENSMUSG00000074715                    C-C motif chemokine ligand 28

Load the following packages and KEGG/GO data sets.

library(pathview)
library(gage)
library(gageData)
data(kegg.sets.mm)
data(go.subs.mm)
foldchanges <- ttGlm2$logFC
names(foldchanges) <- ttGlm2$entrez
kegg.mm = kegg.gsets("mouse", id.type = "entrez")
kegg.mm.sigmet = kegg.mm$kg.sets[kegg.mm$sigmet.idx]

Let's get the enrichment results.

keggres = gage(foldchanges, gsets=kegg.mm.sigmet, same.dir = TRUE)

lapply(keggres, head)
## $greater
##                                                                   p.geomean
## mmu03010 Ribosome                                                   9.5e-05
## mmu04550 Signaling pathways regulating pluripotency of stem cells   2.0e-03
## mmu04330 Notch signaling pathway                                    6.1e-03
## mmu04350 TGF-beta signaling pathway                                 1.3e-02
## mmu04390 Hippo signaling pathway                                    2.0e-02
## mmu00830 Retinol metabolism                                         2.1e-02
##                                                                   stat.mean
## mmu03010 Ribosome                                                       3.8
## mmu04550 Signaling pathways regulating pluripotency of stem cells       2.9
## mmu04330 Notch signaling pathway                                        2.6
## mmu04350 TGF-beta signaling pathway                                     2.2
## mmu04390 Hippo signaling pathway                                        2.1
## mmu00830 Retinol metabolism                                             2.1
##                                                                     p.val q.val
## mmu03010 Ribosome                                                 9.5e-05 0.023
## mmu04550 Signaling pathways regulating pluripotency of stem cells 2.0e-03 0.235
## mmu04330 Notch signaling pathway                                  6.1e-03 0.488
## mmu04350 TGF-beta signaling pathway                               1.3e-02 0.783
## mmu04390 Hippo signaling pathway                                  2.0e-02 0.826
## mmu00830 Retinol metabolism                                       2.1e-02 0.826
##                                                                   set.size
## mmu03010 Ribosome                                                      127
## mmu04550 Signaling pathways regulating pluripotency of stem cells      100
## mmu04330 Notch signaling pathway                                        54
## mmu04350 TGF-beta signaling pathway                                     84
## mmu04390 Hippo signaling pathway                                       125
## mmu00830 Retinol metabolism                                             37
##                                                                      exp1
## mmu03010 Ribosome                                                 9.5e-05
## mmu04550 Signaling pathways regulating pluripotency of stem cells 2.0e-03
## mmu04330 Notch signaling pathway                                  6.1e-03
## mmu04350 TGF-beta signaling pathway                               1.3e-02
## mmu04390 Hippo signaling pathway                                  2.0e-02
## mmu00830 Retinol metabolism                                       2.1e-02
## 
## $less
##                                                  p.geomean stat.mean   p.val
## mmu04613 Neutrophil extracellular trap formation   0.00012      -3.7 0.00012
## mmu04145 Phagosome                                 0.00192      -2.9 0.00192
## mmu04110 Cell cycle                                0.00276      -2.8 0.00276
## mmu04714 Thermogenesis                             0.00472      -2.6 0.00472
## mmu04217 Necroptosis                               0.00614      -2.5 0.00614
## mmu00910 Nitrogen metabolism                       0.00867      -2.6 0.00867
##                                                  q.val set.size    exp1
## mmu04613 Neutrophil extracellular trap formation 0.029      137 0.00012
## mmu04145 Phagosome                               0.221      121 0.00192
## mmu04110 Cell cycle                              0.221      134 0.00276
## mmu04714 Thermogenesis                           0.283      208 0.00472
## mmu04217 Necroptosis                             0.295      113 0.00614
## mmu00910 Nitrogen metabolism                     0.347       13 0.00867
## 
## $stats
##                                                                   stat.mean
## mmu03010 Ribosome                                                       3.8
## mmu04550 Signaling pathways regulating pluripotency of stem cells       2.9
## mmu04330 Notch signaling pathway                                        2.6
## mmu04350 TGF-beta signaling pathway                                     2.2
## mmu04390 Hippo signaling pathway                                        2.1
## mmu00830 Retinol metabolism                                             2.1
##                                                                   exp1
## mmu03010 Ribosome                                                  3.8
## mmu04550 Signaling pathways regulating pluripotency of stem cells  2.9
## mmu04330 Notch signaling pathway                                   2.6
## mmu04350 TGF-beta signaling pathway                                2.2
## mmu04390 Hippo signaling pathway                                   2.1
## mmu00830 Retinol metabolism                                        2.1
greaters <- keggres$greater
lessers <- keggres$less
keggrespathways = data.frame(id = rownames(keggres$greater), keggres$greater) %>%
  tibble::as_tibble() %>%
  filter(row_number()<=5) %>%
  .$id %>%
  as.character()
keggrespathways
## [1] "mmu03010 Ribosome"                                                
## [2] "mmu04550 Signaling pathways regulating pluripotency of stem cells"
## [3] "mmu04330 Notch signaling pathway"                                 
## [4] "mmu04350 TGF-beta signaling pathway"                              
## [5] "mmu04390 Hippo signaling pathway"
keggresids = substr(keggrespathways, start=1, stop=8)
keggresids
## [1] "mmu03010" "mmu04550" "mmu04330" "mmu04350" "mmu04390"
plot_pathway = function(pid) pathview(gene.data = foldchanges, pathway.id = pid, species = "mouse", new.signature = FALSE)  # helper to render a single pathway

Plot multiple pathways

tmp = sapply(keggresids, function(pid) pathview(gene.data=foldchanges, pathway.id = pid, species = "mouse"))
## 'select()' returned 1:1 mapping between keys and columns
## Info: Working in directory /home/student/Desktop/classroom/myfiles/garnettnatalie@gmail.com
## Info: Writing image file mmu03010.pathview.png
## 'select()' returned 1:1 mapping between keys and columns
## Info: Working in directory /home/student/Desktop/classroom/myfiles/garnettnatalie@gmail.com
## Info: Writing image file mmu04550.pathview.png
## 'select()' returned 1:1 mapping between keys and columns
## Info: Working in directory /home/student/Desktop/classroom/myfiles/garnettnatalie@gmail.com
## Info: Writing image file mmu04330.pathview.png
## 'select()' returned 1:1 mapping between keys and columns
## Info: Working in directory /home/student/Desktop/classroom/myfiles/garnettnatalie@gmail.com
## Info: Writing image file mmu04350.pathview.png
## 'select()' returned 1:1 mapping between keys and columns
## Info: Working in directory /home/student/Desktop/classroom/myfiles/garnettnatalie@gmail.com
## Info: Writing image file mmu04390.pathview.png
keggrespathways = data.frame(id = rownames(keggres$less), keggres$less) %>%
  tibble::as_tibble() %>%
  filter(row_number()<=5) %>%
  .$id %>%
  as.character()
keggrespathways
## [1] "mmu04613 Neutrophil extracellular trap formation"
## [2] "mmu04145 Phagosome"                              
## [3] "mmu04110 Cell cycle"                             
## [4] "mmu04714 Thermogenesis"                          
## [5] "mmu04217 Necroptosis"
keggresids = substr(keggrespathways, start=1, stop=8)
keggresids
## [1] "mmu04613" "mmu04145" "mmu04110" "mmu04714" "mmu04217"
plot_pathway = function(pid) pathview(gene.data = foldchanges, pathway.id = pid, species = "mouse", new.signature = FALSE)  # helper to render a single pathway

# plot multiple pathways 
tmp = sapply(keggresids, function(pid) pathview(gene.data=foldchanges, pathway.id = pid, species = "mouse"))
## 'select()' returned 1:1 mapping between keys and columns
## Info: Working in directory /home/student/Desktop/classroom/myfiles/garnettnatalie@gmail.com
## Info: Writing image file mmu04613.pathview.png
## 'select()' returned 1:1 mapping between keys and columns
## Info: Working in directory /home/student/Desktop/classroom/myfiles/garnettnatalie@gmail.com
## Info: Writing image file mmu04145.pathview.png
## 'select()' returned 1:1 mapping between keys and columns
## Info: Working in directory /home/student/Desktop/classroom/myfiles/garnettnatalie@gmail.com
## Info: Writing image file mmu04110.pathview.png
## 'select()' returned 1:1 mapping between keys and columns
## Info: Working in directory /home/student/Desktop/classroom/myfiles/garnettnatalie@gmail.com
## Info: Writing image file mmu04714.pathview.png
## 'select()' returned 1:1 mapping between keys and columns
## Info: Working in directory /home/student/Desktop/classroom/myfiles/garnettnatalie@gmail.com
## Info: Writing image file mmu04217.pathview.png

Load the following package.

library(imager)
## Loading required package: magrittr
## 
## Attaching package: 'magrittr'
## The following object is masked from 'package:purrr':
## 
##     set_names
## The following object is masked from 'package:tidyr':
## 
##     extract
## 
## Attaching package: 'imager'
## The following object is masked from 'package:magrittr':
## 
##     add
## The following object is masked from 'package:Biostrings':
## 
##     width
## The following object is masked from 'package:XVector':
## 
##     width
## The following objects are masked from 'package:oligoClasses':
## 
##     B, B<-
## The following objects are masked from 'package:IRanges':
## 
##     resize, width
## The following object is masked from 'package:S4Vectors':
## 
##     width
## The following object is masked from 'package:Biobase':
## 
##     channel
## The following object is masked from 'package:BiocGenerics':
## 
##     width
## The following object is masked from 'package:stringr':
## 
##     boundary
## The following object is masked from 'package:dplyr':
## 
##     where
## The following object is masked from 'package:tidyr':
## 
##     fill
## The following objects are masked from 'package:stats':
## 
##     convolve, spectrum
## The following object is masked from 'package:graphics':
## 
##     frame
## The following object is masked from 'package:base':
## 
##     save.image
filenames <- list.files(path = "E:/Bioinformatics/Bisc_450_Scripts/mouse_edgeR", pattern = ".*pathview.png")

all_images <- lapply(filenames, load.image)
knitr::include_graphics(filenames)

Mouse DESeq2

Let's load the required libraries for this analysis.

library("DESeq2")
## Loading required package: GenomicRanges
## 
## Attaching package: 'GenomicRanges'
## The following object is masked from 'package:magrittr':
## 
##     subtract
## Loading required package: SummarizedExperiment
## Loading required package: MatrixGenerics
## Loading required package: matrixStats
## 
## Attaching package: 'matrixStats'
## The following objects are masked from 'package:Biobase':
## 
##     anyMissing, rowMedians
## The following object is masked from 'package:dplyr':
## 
##     count
## 
## Attaching package: 'MatrixGenerics'
## The following objects are masked from 'package:matrixStats':
## 
##     colAlls, colAnyNAs, colAnys, colAvgsPerRowSet, colCollapse,
##     colCounts, colCummaxs, colCummins, colCumprods, colCumsums,
##     colDiffs, colIQRDiffs, colIQRs, colLogSumExps, colMadDiffs,
##     colMads, colMaxs, colMeans2, colMedians, colMins, colOrderStats,
##     colProds, colQuantiles, colRanges, colRanks, colSdDiffs, colSds,
##     colSums2, colTabulates, colVarDiffs, colVars, colWeightedMads,
##     colWeightedMeans, colWeightedMedians, colWeightedSds,
##     colWeightedVars, rowAlls, rowAnyNAs, rowAnys, rowAvgsPerColSet,
##     rowCollapse, rowCounts, rowCummaxs, rowCummins, rowCumprods,
##     rowCumsums, rowDiffs, rowIQRDiffs, rowIQRs, rowLogSumExps,
##     rowMadDiffs, rowMads, rowMaxs, rowMeans2, rowMedians, rowMins,
##     rowOrderStats, rowProds, rowQuantiles, rowRanges, rowRanks,
##     rowSdDiffs, rowSds, rowSums2, rowTabulates, rowVarDiffs, rowVars,
##     rowWeightedMads, rowWeightedMeans, rowWeightedMedians,
##     rowWeightedSds, rowWeightedVars
## The following object is masked from 'package:Biobase':
## 
##     rowMedians
library("dplyr")
library("apeglm")

Let's load in our data.

countData <- read.csv("GLDS-102_rna_seq_Unnormalized_Counts.csv")

colData <- read.csv("PHENO_DATA_mouse.csv", sep= ",", row.names = 1)

Now we need to set the factor levels on the grouping column in colData; DESeq2 treats the first level as the reference condition.

colData$group <- factor(colData$group, levels = c("0", "1"))
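If you ever need to switch which group is the reference (the denominator of the fold change), relevel() is a one-line sketch; here "0" is already the first level, so this is shown only for illustration:

colData$group <- relevel(colData$group, ref = "0")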

Now let's make sure every sample name in colData appears among the columns of the count matrix.

all(rownames(colData) %in% colnames(countData))  # note: the %in% test must sit inside all()
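If the check fails only because the columns are out of order, a common fix is to reorder the count columns by the sample names; countData_ordered is a hypothetical name, and this assumes the samples are the only columns:

countData_ordered <- countData[, rownames(colData)]  # hypothetical; assumes sample columns only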

DESeq2 only accepts integer counts, not fractional values, so depending on the alignment and quantification method used upstream you may need to round the counts first.
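If your upstream quantifier produced fractional values (RSEM-style expected counts, for example), a minimal rounding sketch, assuming the first column of countData holds gene IDs and the rest are counts, is:

countData[, -1] <- round(countData[, -1])  # assumption: column 1 = gene IDs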

dds <- DESeqDataSetFromMatrix(countData = countData, colData = colData, design = ~group)

dds <- dds[rowSums(counts(dds) > 2) >= 4, ]  # keep genes with counts > 2 in at least 4 samples

dds <- DESeq(dds)
## estimating size factors
## estimating dispersions
## gene-wise dispersion estimates
## mean-dispersion relationship
## final dispersion estimates
## fitting model and testing
res <- results(dds)
resLFC <- lfcShrink(dds, coef= 2)
## using 'apeglm' for LFC shrinkage. If used in published research, please cite:
##     Zhu, A., Ibrahim, J.G., Love, M.I. (2018) Heavy-tailed prior distributions for
##     sequence count data: removing the noise and preserving large differences.
##     Bioinformatics. https://doi.org/10.1093/bioinformatics/bty895
(resOrdered <- res[order(res$padj), ])
## log2 fold change (MLE): group 1 vs 0 
## Wald test p-value: group 1 vs 0 
## DataFrame with 22008 rows and 6 columns
##                     baseMean log2FoldChange     lfcSE        stat      pvalue
##                    <numeric>      <numeric> <numeric>   <numeric>   <numeric>
## ENSMUSG00000002250  1459.223      -0.926292 0.1112628    -8.32526 8.41430e-17
## ENSMUSG00000007872  1719.375       0.818829 0.1122820     7.29261 3.04014e-13
## ENSMUSG00000026077   437.035      -1.191812 0.1655873    -7.19748 6.13338e-13
## ENSMUSG00000040998 14579.593      -0.506307 0.0703771    -7.19421 6.28252e-13
## ENSMUSG00000021775  2804.923       0.842511 0.1233312     6.83129 8.41546e-12
## ...                      ...            ...       ...         ...         ...
## ENSMUSG00000118345   4.22314    -0.12097478  0.599072 -0.20193699    0.839966
## ENSMUSG00000118353   6.60578     0.56456713  0.481195  1.17326031    0.240691
## ENSMUSG00000118358   3.30902    -0.00273584  0.763559 -0.00358301    0.997141
## ENSMUSG00000118369   2.91657    -1.11623145  0.790702 -1.41169702    0.158039
## ENSMUSG00000118384   7.43136     0.23830798  0.489273  0.48706567    0.626212
##                           padj
##                      <numeric>
## ENSMUSG00000002250 1.24077e-12
## ENSMUSG00000007872 2.24149e-09
## ENSMUSG00000026077 2.31605e-09
## ENSMUSG00000040998 2.31605e-09
## ENSMUSG00000021775 2.48189e-08
## ...                        ...
## ENSMUSG00000118345          NA
## ENSMUSG00000118353          NA
## ENSMUSG00000118358          NA
## ENSMUSG00000118369          NA
## ENSMUSG00000118384          NA
summary(res)
## 
## out of 22008 with nonzero total read count
## adjusted p-value < 0.1
## LFC > 0 (up)       : 325, 1.5%
## LFC < 0 (down)     : 327, 1.5%
## outliers [1]       : 15, 0.068%
## low counts [2]     : 7247, 33%
## (mean count < 38)
## [1] see 'cooksCutoff' argument of ?results
## [2] see 'independentFiltering' argument of ?results
FLT_Vs_GC <- as.data.frame(res$log2FoldChange)

head(FLT_Vs_GC)
##   res$log2FoldChange
## 1             0.0421
## 2            -0.1334
## 3            -0.0185
## 4            -0.0882
## 5            -0.0079
## 6             0.1136
plotMA(resLFC, ylim = c(-2,2))

pdf(file = "MA_Plot_FLT_vs_GC.pdf")

dev.off()
## png 
##   2

Let's save our differential expression results to a file.

write.csv(as.data.frame(resOrdered), file = "Mouse_DESeq.csv")

Let's perform a custom transformation: log2 of the normalized counts plus one.

dds <- estimateSizeFactors(dds)

se <- SummarizedExperiment(log2(counts(dds, normalized = TRUE) + 1), colData = colData(dds))

pdf(file = "PCA_PLOT_FLT_vs_GC.pdf")

plotPCA(DESeqTransform(se), intgroup = "group")
## using ntop=500 top features by variance
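As an alternative to the manual log2(n + 1) transform, DESeq2 also ships a variance-stabilizing transform; a minimal sketch of the same PCA using vst() (vsd is just a local name) is:

vsd <- vst(dds, blind = TRUE)
plotPCA(vsd, intgroup = "group")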

Let's load our required packages.

library(AnnotationDbi)
library(org.Mm.eg.db)
columns(org.Mm.eg.db)
##  [1] "ACCNUM"       "ALIAS"        "ENSEMBL"      "ENSEMBLPROT"  "ENSEMBLTRANS"
##  [6] "ENTREZID"     "ENZYME"       "EVIDENCE"     "EVIDENCEALL"  "GENENAME"    
## [11] "GENETYPE"     "GO"           "GOALL"        "IPI"          "MGI"         
## [16] "ONTOLOGY"     "ONTOLOGYALL"  "PATH"         "PFAM"         "PMID"        
## [21] "PROSITE"      "REFSEQ"       "SYMBOL"       "UNIPROT"
foldchanges <- as.data.frame(res$log2FoldChange, row.names = row.names(res))
head(foldchanges)
##                    res$log2FoldChange
## ENSMUSG00000000001             0.0421
## ENSMUSG00000000028            -0.1334
## ENSMUSG00000000031            -0.0185
## ENSMUSG00000000037            -0.0882
## ENSMUSG00000000049            -0.0079
## ENSMUSG00000000056             0.1136
res$symbol = mapIds(org.Mm.eg.db,
                    keys = row.names(res), 
                    column = "SYMBOL",
                    keytype = "ENSEMBL", 
                    multiVals = "first")
## 'select()' returned 1:many mapping between keys and columns
res$entrez = mapIds(org.Mm.eg.db,
                    keys = row.names(res), 
                    column = "ENTREZID",
                    keytype = "ENSEMBL", 
                    multiVals = "first")
## 'select()' returned 1:many mapping between keys and columns
res$name = mapIds(org.Mm.eg.db,
                    keys = row.names(res), 
                    column = "GENENAME",
                    keytype = "ENSEMBL", 
                    multiVals = "first")
## 'select()' returned 1:many mapping between keys and columns
head(res, 10) 
## log2 fold change (MLE): group 1 vs 0 
## Wald test p-value: group 1 vs 0 
## DataFrame with 10 rows and 9 columns
##                      baseMean log2FoldChange     lfcSE       stat    pvalue
##                     <numeric>      <numeric> <numeric>  <numeric> <numeric>
## ENSMUSG00000000001 3132.35128     0.04214340 0.0436714  0.9650117  0.334539
## ENSMUSG00000000028   68.75801    -0.13342706 0.1565936 -0.8520597  0.394181
## ENSMUSG00000000031   21.05397    -0.01853142 0.2486477 -0.0745288  0.940590
## ENSMUSG00000000037   24.42314    -0.08817270 0.2982220 -0.2956613  0.767489
## ENSMUSG00000000049    3.24919    -0.00790342 0.9613572 -0.0082211  0.993441
## ENSMUSG00000000056 1424.88216     0.11355979 0.0777635  1.4603234  0.144201
## ENSMUSG00000000058 1420.78992    -0.03893850 0.0850602 -0.4577759  0.647113
## ENSMUSG00000000078 2254.53129    -0.07540275 0.0874314 -0.8624214  0.388456
## ENSMUSG00000000085  822.68179     0.04586772 0.0584699  0.7844667  0.432766
## ENSMUSG00000000088 5946.81754    -0.05461549 0.0691178 -0.7901795  0.429423
##                         padj      symbol      entrez                   name
##                    <numeric> <character> <character>            <character>
## ENSMUSG00000000001  0.739637       Gnai3       14679 G protein subunit al..
## ENSMUSG00000000028  0.777085       Cdc45       12544 cell division cycle 45
## ENSMUSG00000000031        NA         H19       14955 H19, imprinted mater..
## ENSMUSG00000000037        NA       Scml2      107815 Scm polycomb group p..
## ENSMUSG00000000049        NA        Apoh       11818       apolipoprotein H
## ENSMUSG00000000056  0.547527        Narf       67608 nuclear prelamin A r..
## ENSMUSG00000000058  0.901904        Cav2       12390             caveolin 2
## ENSMUSG00000000078  0.774294        Klf6       23849 Kruppel-like transcr..
## ENSMUSG00000000085  0.800018       Scmh1       29871 sex comb on midleg h..
## ENSMUSG00000000088  0.798420       Cox5a       12858 cytochrome c oxidase..

Let's load our pathview packages.

library(pathview)
library(gage)
library(gageData)
data(kegg.sets.mm)
data(go.subs.mm)

foldchanges <- res$log2FoldChange
names(foldchanges) = res$entrez

head(foldchanges)
##   14679   12544   14955  107815   11818   67608 
##  0.0421 -0.1334 -0.0185 -0.0882 -0.0079  0.1136
kegg.mm = kegg.gsets("mouse", id.type = "entrez")
kegg.mm.sigmet <- kegg.mm$kg.sets[kegg.mm$sigmet.idx]

Let's map the fold changes onto the KEGG pathway sets.

keggres <- gage(foldchanges, gsets = kegg.mm.sigmet, same.dir = TRUE)

lapply(keggres, head)
## $greater
##                                           p.geomean stat.mean p.val q.val
## mmu03010 Ribosome                             0.017       2.1 0.017   0.9
## mmu04022 cGMP-PKG signaling pathway           0.030       1.9 0.030   0.9
## mmu04360 Axon guidance                        0.038       1.8 0.038   0.9
## mmu04330 Notch signaling pathway              0.042       1.8 0.042   0.9
## mmu04658 Th1 and Th2 cell differentiation     0.054       1.6 0.054   0.9
## mmu02010 ABC transporters                     0.075       1.5 0.075   0.9
##                                           set.size  exp1
## mmu03010 Ribosome                              139 0.017
## mmu04022 cGMP-PKG signaling pathway            152 0.030
## mmu04360 Axon guidance                         176 0.038
## mmu04330 Notch signaling pathway                58 0.042
## mmu04658 Th1 and Th2 cell differentiation       78 0.054
## mmu02010 ABC transporters                       46 0.075
## 
## $less
##                                                   p.geomean stat.mean  p.val
## mmu04613 Neutrophil extracellular trap formation     0.0043      -2.6 0.0043
## mmu04110 Cell cycle                                  0.0062      -2.5 0.0062
## mmu04657 IL-17 signaling pathway                     0.0113      -2.3 0.0113
## mmu04145 Phagosome                                   0.0332      -1.8 0.0332
## mmu04621 NOD-like receptor signaling pathway         0.0388      -1.8 0.0388
## mmu04625 C-type lectin receptor signaling pathway    0.0441      -1.7 0.0441
##                                                   q.val set.size   exp1
## mmu04613 Neutrophil extracellular trap formation   0.75      160 0.0043
## mmu04110 Cell cycle                                0.75      151 0.0062
## mmu04657 IL-17 signaling pathway                   0.90       75 0.0113
## mmu04145 Phagosome                                 0.91      144 0.0332
## mmu04621 NOD-like receptor signaling pathway       0.91      154 0.0388
## mmu04625 C-type lectin receptor signaling pathway  0.91       99 0.0441
## 
## $stats
##                                           stat.mean exp1
## mmu03010 Ribosome                               2.1  2.1
## mmu04022 cGMP-PKG signaling pathway             1.9  1.9
## mmu04360 Axon guidance                          1.8  1.8
## mmu04330 Notch signaling pathway                1.8  1.8
## mmu04658 Th1 and Th2 cell differentiation       1.6  1.6
## mmu02010 ABC transporters                       1.5  1.5

Let's save the up-regulated ("greater") and down-regulated ("less") pathway tables.

greaters <- keggres$greater
lessers <- keggres$less
keggrespathways <- data.frame(id = rownames(keggres$greater), keggres$greater) %>%
  tibble::as_tibble() %>%
  filter(row_number() <= 3) %>%
  .$id %>%
  as.character()
keggrespathways
## [1] "mmu03010 Ribosome"                   "mmu04022 cGMP-PKG signaling pathway"
## [3] "mmu04360 Axon guidance"
keggresids <- substr(keggrespathways, start = 1, stop = 8)
keggresids
## [1] "mmu03010" "mmu04022" "mmu04360"

PLOT!

plot_pathway = function(pid) pathview(gene.data = foldchanges, pathway.id = pid, species = "mouse", new.signature = FALSE)  # note: foldchanges, not foldchange

tmp = sapply(keggresids, function(pid) pathview(gene.data = foldchanges, pathway.id = pid, species = "mouse"))
## 'select()' returned 1:1 mapping between keys and columns
## Info: Working in directory /home/student/Desktop/classroom/myfiles/garnettnatalie@gmail.com
## Info: Writing image file mmu03010.pathview.png
## 'select()' returned 1:1 mapping between keys and columns
## Info: Working in directory /home/student/Desktop/classroom/myfiles/garnettnatalie@gmail.com
## Info: Writing image file mmu04022.pathview.png
## 'select()' returned 1:1 mapping between keys and columns
## Info: Working in directory /home/student/Desktop/classroom/myfiles/garnettnatalie@gmail.com
## Info: Writing image file mmu04360.pathview.png
keggrespathways <- data.frame(id = rownames(keggres$less), keggres$less) %>%
  tibble::as_tibble() %>%
  filter(row_number() <= 3) %>%
  .$id %>%
  as.character()
keggrespathways
## [1] "mmu04613 Neutrophil extracellular trap formation"
## [2] "mmu04110 Cell cycle"                             
## [3] "mmu04657 IL-17 signaling pathway"
keggresids <- substr(keggrespathways, start = 1, stop = 8)
keggresids
## [1] "mmu04613" "mmu04110" "mmu04657"

PLOT!

plot_pathway = function(pid) pathview(gene.data = foldchanges, pathway.id = pid, species = "mouse", new.signature = FALSE)  # note: foldchanges, not foldchange

tmp = sapply(keggresids, function(pid) pathview(gene.data = foldchanges, pathway.id = pid, species = "mouse"))
## 'select()' returned 1:1 mapping between keys and columns
## Info: Working in directory /home/student/Desktop/classroom/myfiles/garnettnatalie@gmail.com
## Info: Writing image file mmu04613.pathview.png
## 'select()' returned 1:1 mapping between keys and columns
## Info: Working in directory /home/student/Desktop/classroom/myfiles/garnettnatalie@gmail.com
## Info: Writing image file mmu04110.pathview.png
## 'select()' returned 1:1 mapping between keys and columns
## Info: Working in directory /home/student/Desktop/classroom/myfiles/garnettnatalie@gmail.com
## Info: Writing image file mmu04657.pathview.png
library(imager)

filenames <- list.files(path = "/home/student/Desktop/classroom/myfiles/garnettnatalie@gmail.com", pattern = ".*pathview.png")

all_images <- lapply(filenames, load.image)
knitr::include_graphics(filenames)

edgeR vs. DESeq2

Load the following package.

library(tidyverse)

Run the following chunk

EdgeR <- read.csv("Mouse_EdgeR_Results_Zero_vs.1.csv")

DESeq <- read.csv("Mouse_DESeq.csv")

DESeq2 <- DESeq %>%
  filter(padj < 0.05)

DESeq2 <- DESeq2[,c(1,3)]

EdgeR <- EdgeR[, 1:2]

colnames(DESeq2) <- c("ID", "logFC")
colnames(EdgeR) <- c("ID", "logFC")
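Before plotting, a quick base-R sanity check of how many gene IDs the two tools share (not part of the original chunk):

length(intersect(DESeq2$ID, EdgeR$ID))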

Load the “GOplot” package.

library(GOplot)
## Loading required package: ggdendro
## 
## Attaching package: 'ggdendro'
## The following object is masked from 'package:imager':
## 
##     label
## Loading required package: gridExtra
## 
## Attaching package: 'gridExtra'
## The following object is masked from 'package:Biobase':
## 
##     combine
## The following object is masked from 'package:BiocGenerics':
## 
##     combine
## The following object is masked from 'package:dplyr':
## 
##     combine
## Loading required package: RColorBrewer
comp <- GOVenn(DESeq2, EdgeR, label = c("DESeq2", "EdgeR"), title = "Comparison of DESeq and EdgeR DE Genes", plot = FALSE)

comp$plot

comp$table
## $A_only
##                    logFC Trend
## ENSMUSG00000022580 -0.34  DOWN
## ENSMUSG00000006517 -0.36  DOWN
## ENSMUSG00000038375 -0.31  DOWN
## ENSMUSG00000032374 -0.33  DOWN
## ENSMUSG00000014158 -0.25  DOWN
## ENSMUSG00000026179 -0.28  DOWN
## ENSMUSG00000037455 -0.32  DOWN
## ENSMUSG00000020232 -0.21  DOWN
## ENSMUSG00000023367 -0.22  DOWN
## ENSMUSG00000079465 -0.35  DOWN
## ENSMUSG00000032911 -0.25  DOWN
## ENSMUSG00000028671 -0.53  DOWN
## ENSMUSG00000022763 -0.46  DOWN
## ENSMUSG00000069565 -0.23  DOWN
## ENSMUSG00000029763 -0.32  DOWN
## ENSMUSG00000040263 -0.27  DOWN
## ENSMUSG00000039047 -0.20  DOWN
## ENSMUSG00000023938 -0.30  DOWN
## ENSMUSG00000048707 -0.26  DOWN
## ENSMUSG00000024958 -0.22  DOWN
## ENSMUSG00000022351 -0.37  DOWN
## ENSMUSG00000092500 -0.68  DOWN
## ENSMUSG00000031594 -0.71  DOWN
## ENSMUSG00000053414 -0.42  DOWN
## ENSMUSG00000035910 -0.47  DOWN
## ENSMUSG00000046079 -0.23  DOWN
## ENSMUSG00000039834 -0.21  DOWN
## ENSMUSG00000024064 -0.32  DOWN
## ENSMUSG00000017428 -0.19  DOWN
## ENSMUSG00000040188 -0.18  DOWN
## ENSMUSG00000026399 -0.46  DOWN
## ENSMUSG00000000594 -0.35  DOWN
## ENSMUSG00000001473 -0.52  DOWN
## ENSMUSG00000029032 -0.17  DOWN
## ENSMUSG00000034457 -0.66  DOWN
## ENSMUSG00000038704 -0.44  DOWN
## ENSMUSG00000004565 -0.19  DOWN
## ENSMUSG00000020381 -0.36  DOWN
## ENSMUSG00000041889 -0.41  DOWN
## ENSMUSG00000023259 -0.55  DOWN
## ENSMUSG00000100131 -0.74  DOWN
## ENSMUSG00000062203 -0.17  DOWN
## ENSMUSG00000025735 -0.55  DOWN
## ENSMUSG00000028545 -0.35  DOWN
## ENSMUSG00000058454 -0.32  DOWN
## ENSMUSG00000066441 -0.21  DOWN
## ENSMUSG00000022540 -0.22  DOWN
## ENSMUSG00000041939 -0.37  DOWN
## ENSMUSG00000035561 -0.63  DOWN
## ENSMUSG00000038023 -0.16  DOWN
## ENSMUSG00000101249 -0.42  DOWN
## ENSMUSG00000032370 -0.28  DOWN
## ENSMUSG00000004266 -0.24  DOWN
## ENSMUSG00000001627 -0.37  DOWN
## ENSMUSG00000029810 -0.21  DOWN
## ENSMUSG00000024579 -0.34  DOWN
## ENSMUSG00000030739 -0.26  DOWN
## ENSMUSG00000011382 -0.29  DOWN
## ENSMUSG00000032306 -0.20  DOWN
## ENSMUSG00000031960 -0.23  DOWN
## ENSMUSG00000022512 -0.31  DOWN
## ENSMUSG00000037243 -0.38  DOWN
## ENSMUSG00000028041 -0.23  DOWN
## ENSMUSG00000026307 -0.18  DOWN
## ENSMUSG00000029580 -0.23  DOWN
## ENSMUSG00000033107 -0.46  DOWN
## ENSMUSG00000112324 -1.18  DOWN
## ENSMUSG00000020744 -0.24  DOWN
## ENSMUSG00000055745 -0.33  DOWN
## ENSMUSG00000041959 -0.30  DOWN
## ENSMUSG00000036957 -0.24  DOWN
## ENSMUSG00000021697 -0.24  DOWN
## ENSMUSG00000037894 -0.15  DOWN
## ENSMUSG00000021792 -0.27  DOWN
## ENSMUSG00000037347 -0.58  DOWN
## ENSMUSG00000020877 -0.30  DOWN
## ENSMUSG00000087574 -0.53  DOWN
## ENSMUSG00000030512 -0.30  DOWN
## ENSMUSG00000006800 -0.38  DOWN
## ENSMUSG00000027412 -0.24  DOWN
## ENSMUSG00000031161 -0.19  DOWN
## ENSMUSG00000027429 -0.20  DOWN
## ENSMUSG00000045594 -0.32  DOWN
## ENSMUSG00000018171 -0.21  DOWN
## ENSMUSG00000032349 -0.20  DOWN
## ENSMUSG00000015094 -0.27  DOWN
## ENSMUSG00000028538 -0.21  DOWN
## ENSMUSG00000029616 -0.17  DOWN
## ENSMUSG00000019797 -0.26  DOWN
## ENSMUSG00000095193 -0.47  DOWN
## ENSMUSG00000068876 -0.37  DOWN
## ENSMUSG00000026784 -0.34  DOWN
## ENSMUSG00000057363 -0.23  DOWN
## ENSMUSG00000064367 -0.40  DOWN
## ENSMUSG00000024319 -0.16  DOWN
## ENSMUSG00000030137 -0.73  DOWN
## ENSMUSG00000030538 -0.21  DOWN
## ENSMUSG00000032978 -0.41  DOWN
## ENSMUSG00000069633 -0.24  DOWN
## ENSMUSG00000067158 -0.17  DOWN
## ENSMUSG00000024292 -0.70  DOWN
## ENSMUSG00000031958 -0.33  DOWN
## ENSMUSG00000064341 -0.43  DOWN
## ENSMUSG00000024993 -0.24  DOWN
## ENSMUSG00000074170 -0.32  DOWN
## ENSMUSG00000030641 -0.71  DOWN
## ENSMUSG00000034858 -0.24  DOWN
## ENSMUSG00000036040 -0.45  DOWN
## ENSMUSG00000063229 -0.27  DOWN
## ENSMUSG00000027490 -0.50  DOWN
## ENSMUSG00000047735 -0.30  DOWN
## ENSMUSG00000011752 -0.22  DOWN
## ENSMUSG00000034714 -0.25  DOWN
## ENSMUSG00000034613 -0.20  DOWN
## ENSMUSG00000032452 -0.37  DOWN
## ENSMUSG00000038775 -0.43  DOWN
## ENSMUSG00000045294 -0.41  DOWN
## ENSMUSG00000084128 -0.20  DOWN
## ENSMUSG00000037686 -0.49  DOWN
## ENSMUSG00000014245 -0.34  DOWN
## ENSMUSG00000029093 -0.70  DOWN
## ENSMUSG00000024736 -0.29  DOWN
## ENSMUSG00000028937 -0.46  DOWN
## ENSMUSG00000096795 -0.43  DOWN
## ENSMUSG00000051518 -0.24  DOWN
## ENSMUSG00000000934 -0.21  DOWN
## ENSMUSG00000030880 -0.37  DOWN
## ENSMUSG00000025726 -0.47  DOWN
## ENSMUSG00000052117 -0.38  DOWN
## ENSMUSG00000006342 -0.32  DOWN
## ENSMUSG00000062825 -0.21  DOWN
## ENSMUSG00000041733 -0.16  DOWN
## ENSMUSG00000028780 -0.39  DOWN
## ENSMUSG00000024665 -0.28  DOWN
## ENSMUSG00000025317 -0.45  DOWN
## ENSMUSG00000020142 -0.51  DOWN
## ENSMUSG00000082016 -0.50  DOWN
## ENSMUSG00000034371 -0.36  DOWN
## ENSMUSG00000032492 -0.24  DOWN
## ENSMUSG00000110755 -0.85  DOWN
## ENSMUSG00000064254 -0.35  DOWN
## ENSMUSG00000035845 -0.23  DOWN
## ENSMUSG00000017210 -0.18  DOWN
## ENSMUSG00000023832 -0.24  DOWN
## ENSMUSG00000037999 -0.14  DOWN
## ENSMUSG00000068220 -0.33  DOWN
## ENSMUSG00000064345 -0.47  DOWN
## ENSMUSG00000109532 -0.47  DOWN
## ENSMUSG00000024503 -0.30  DOWN
## ENSMUSG00000004843 -0.19  DOWN
## ENSMUSG00000030298 -0.19  DOWN
## ENSMUSG00000048578 -0.18  DOWN
## ENSMUSG00000085042 -0.54  DOWN
## ENSMUSG00000016942 -0.57  DOWN
## ENSMUSG00000020116 -0.22  DOWN
## ENSMUSG00000028989 -0.83  DOWN
## ENSMUSG00000017776  0.14    UP
## ENSMUSG00000050310  0.16    UP
## ENSMUSG00000039068  0.17    UP
## ENSMUSG00000021540  0.17    UP
## ENSMUSG00000031393  0.17    UP
## ENSMUSG00000036550  0.12    UP
## ENSMUSG00000020257  0.17    UP
## ENSMUSG00000022604  0.25    UP
## ENSMUSG00000028053  0.13    UP
## ENSMUSG00000030213  0.23    UP
## ENSMUSG00000060657  0.17    UP
## ENSMUSG00000002266  0.80    UP
## ENSMUSG00000027351  0.20    UP
## ENSMUSG00000041530  0.14    UP
## ENSMUSG00000031216  0.19    UP
## ENSMUSG00000025223  0.20    UP
## ENSMUSG00000032086  0.16    UP
## ENSMUSG00000050812  0.12    UP
## ENSMUSG00000038290  0.18    UP
## ENSMUSG00000038530  0.46    UP
## ENSMUSG00000038766  0.19    UP
## ENSMUSG00000050947  0.19    UP
## ENSMUSG00000045098  0.17    UP
## ENSMUSG00000026918  0.15    UP
## ENSMUSG00000037003  0.25    UP
## ENSMUSG00000074748  0.13    UP
## ENSMUSG00000097412  0.36    UP
## ENSMUSG00000031729  0.14    UP
## ENSMUSG00000060419  0.57    UP
## ENSMUSG00000027519  0.13    UP
## ENSMUSG00000021669  0.17    UP
## ENSMUSG00000051817  0.21    UP
## ENSMUSG00000043090  0.25    UP
## ENSMUSG00000037029  0.16    UP
## ENSMUSG00000027395  0.23    UP
## ENSMUSG00000021488  0.14    UP
## ENSMUSG00000073678  0.24    UP
## ENSMUSG00000041378  0.25    UP
## ENSMUSG00000046947  0.23    UP
## ENSMUSG00000058793  0.16    UP
## ENSMUSG00000037369  0.17    UP
## ENSMUSG00000035247  0.14    UP
## ENSMUSG00000026436  0.15    UP
## ENSMUSG00000046897  0.14    UP
## ENSMUSG00000057133  0.15    UP
## ENSMUSG00000027680  0.15    UP
## ENSMUSG00000043991  0.14    UP
## ENSMUSG00000034189  0.22    UP
## ENSMUSG00000018076  0.14    UP
## ENSMUSG00000052446  0.21    UP
## ENSMUSG00000000901  0.46    UP
## ENSMUSG00000040209  0.15    UP
## ENSMUSG00000021140  0.14    UP
## ENSMUSG00000015942  0.28    UP
## ENSMUSG00000043241  0.15    UP
## ENSMUSG00000017897  0.48    UP
## ENSMUSG00000022353  0.17    UP
## ENSMUSG00000005893  0.13    UP
## ENSMUSG00000044791  0.11    UP
## ENSMUSG00000034156  0.30    UP
## ENSMUSG00000037736  0.16    UP
## ENSMUSG00000059486  0.19    UP
## ENSMUSG00000040865  0.15    UP
## ENSMUSG00000035666  0.18    UP
## ENSMUSG00000029647  0.15    UP
## ENSMUSG00000038486  0.22    UP
## ENSMUSG00000025927  0.26    UP
## ENSMUSG00000022415  0.26    UP
## ENSMUSG00000092558  0.23    UP
## ENSMUSG00000014195  0.13    UP
## ENSMUSG00000019866  0.17    UP
## ENSMUSG00000078202  0.24    UP
## ENSMUSG00000035495  0.16    UP
## ENSMUSG00000044674  0.15    UP
## ENSMUSG00000102869  0.13    UP
## ENSMUSG00000026923  0.24    UP
## ENSMUSG00000020642  0.30    UP
## ENSMUSG00000037503  0.13    UP
## ENSMUSG00000043716  0.16    UP
## ENSMUSG00000037822  0.12    UP
## ENSMUSG00000035413  0.28    UP
## ENSMUSG00000021395  0.15    UP
## ENSMUSG00000036097  0.17    UP
## ENSMUSG00000034297  0.14    UP
## ENSMUSG00000037742  0.18    UP
## ENSMUSG00000031841  0.37    UP
## ENSMUSG00000021661  0.24    UP
## ENSMUSG00000021959  0.20    UP
## ENSMUSG00000020594  0.13    UP
## ENSMUSG00000027079  0.24    UP
## ENSMUSG00000087150  0.37    UP
## ENSMUSG00000091811  0.29    UP
## ENSMUSG00000066415  0.15    UP
## ENSMUSG00000027524  0.33    UP
## ENSMUSG00000008683  0.18    UP
## ENSMUSG00000066235  0.24    UP
## ENSMUSG00000099689  0.33    UP
## ENSMUSG00000074994  0.15    UP
## 
## $B_only
##                    logFC Trend
## ENSMUSG00000019944 -0.50  DOWN
## ENSMUSG00000095616 -2.18  DOWN
## ENSMUSG00000055254 -1.26  DOWN
## ENSMUSG00000050097 -0.66  DOWN
## ENSMUSG00000052562 -1.12  DOWN
## ENSMUSG00000036083 -0.42  DOWN
## ENSMUSG00000046070 -0.42  DOWN
## ENSMUSG00000006216  0.34    UP
## ENSMUSG00000039783  0.45    UP
## ENSMUSG00000018900  0.52    UP
## ENSMUSG00000062901  0.37    UP
## ENSMUSG00000066113  0.54    UP
## ENSMUSG00000042745  0.53    UP
## ENSMUSG00000047861  0.48    UP
## ENSMUSG00000005483  0.42    UP
## ENSMUSG00000068270  0.37    UP
## ENSMUSG00000047878  0.35    UP
## ENSMUSG00000026315  0.48    UP
## ENSMUSG00000010651  0.37    UP
## ENSMUSG00000025197  0.36    UP
## ENSMUSG00000041351  0.46    UP
## ENSMUSG00000057914  0.65    UP
## ENSMUSG00000038567  0.79    UP
## ENSMUSG00000021379  0.43    UP
## ENSMUSG00000033411  0.36    UP
## ENSMUSG00000038528  0.28    UP
## ENSMUSG00000021876  1.02    UP
## ENSMUSG00000020566  0.38    UP
## ENSMUSG00000002289  0.96    UP
## ENSMUSG00000025880  0.33    UP
## ENSMUSG00000032898  0.30    UP
## ENSMUSG00000039108  0.26    UP
## ENSMUSG00000051344  0.40    UP
## ENSMUSG00000026313  0.32    UP
## ENSMUSG00000028234  0.28    UP
## ENSMUSG00000059173  0.31    UP
## ENSMUSG00000003849  0.39    UP
## ENSMUSG00000061410  0.26    UP
## ENSMUSG00000104445  0.29    UP
## ENSMUSG00000029004  0.26    UP
## ENSMUSG00000047215  0.28    UP
## ENSMUSG00000006599  0.30    UP
## ENSMUSG00000031530  0.46    UP
## ENSMUSG00000075520  0.32    UP
## ENSMUSG00000032604  0.27    UP
## ENSMUSG00000013089  0.48    UP
## ENSMUSG00000037058  0.25    UP
## ENSMUSG00000090165  0.72    UP
## ENSMUSG00000078429  0.25    UP
## ENSMUSG00000040363  0.29    UP
## ENSMUSG00000032554  0.52    UP
## ENSMUSG00000006333  0.26    UP
## ENSMUSG00000028266  0.28    UP
## ENSMUSG00000035504  0.44    UP
## ENSMUSG00000042379  0.66    UP
## ENSMUSG00000039789  0.39    UP
## ENSMUSG00000063415  0.58    UP
## ENSMUSG00000074179  0.55    UP
## ENSMUSG00000061477  0.23    UP
## ENSMUSG00000006494  0.29    UP
## ENSMUSG00000024472  0.31    UP
## 
## $AB
##                    logFC_A logFC_B Trend
## ENSMUSG00000000253   -0.36   -0.29  DOWN
## ENSMUSG00000002250   -0.93   -0.83  DOWN
## ENSMUSG00000002797   -0.53   -0.45  DOWN
## ENSMUSG00000010663   -0.38   -0.31  DOWN
## ENSMUSG00000020326   -0.35   -0.28  DOWN
## ENSMUSG00000020538   -0.34   -0.41  DOWN
## ENSMUSG00000021135   -1.24   -1.17  DOWN
## ENSMUSG00000021185   -0.37   -0.31  DOWN
## ENSMUSG00000021214   -1.29   -1.23  DOWN
## ENSMUSG00000021364   -0.35   -0.29  DOWN
## ENSMUSG00000021670   -0.52   -0.51  DOWN
## ENSMUSG00000022797   -0.76   -0.61  DOWN
## ENSMUSG00000023067   -0.92   -0.86  DOWN
## ENSMUSG00000023120   -0.66   -0.74  DOWN
## ENSMUSG00000024772   -0.21   -0.26  DOWN
## ENSMUSG00000024866   -0.46   -0.40  DOWN
## ENSMUSG00000025185   -0.78   -0.72  DOWN
## ENSMUSG00000026077   -1.19   -1.36  DOWN
## ENSMUSG00000026188   -0.97   -0.86  DOWN
## ENSMUSG00000026189   -0.49   -0.42  DOWN
## ENSMUSG00000026202   -0.48   -0.41  DOWN
## ENSMUSG00000026827   -0.58   -0.53  DOWN
## ENSMUSG00000027111   -0.47   -0.39  DOWN
## ENSMUSG00000028357   -0.37   -0.32  DOWN
## ENSMUSG00000028919   -0.42   -0.39  DOWN
## ENSMUSG00000029361   -0.40   -0.61  DOWN
## ENSMUSG00000029752   -0.83   -0.74  DOWN
## ENSMUSG00000031283   -0.82   -0.81  DOWN
## ENSMUSG00000031349   -0.39   -0.31  DOWN
## ENSMUSG00000031725   -0.63   -0.59  DOWN
## ENSMUSG00000031994   -1.33   -1.26  DOWN
## ENSMUSG00000032420   -0.39   -0.32  DOWN
## ENSMUSG00000032758   -1.47   -1.41  DOWN
## ENSMUSG00000036752   -0.49   -0.42  DOWN
## ENSMUSG00000038224   -0.77   -0.69  DOWN
## ENSMUSG00000039062   -0.37   -0.32  DOWN
## ENSMUSG00000040998   -0.51   -0.43  DOWN
## ENSMUSG00000041605   -0.51   -0.45  DOWN
## ENSMUSG00000041920   -0.70   -0.66  DOWN
## ENSMUSG00000042487   -0.49   -0.42  DOWN
## ENSMUSG00000043091   -0.43   -0.35  DOWN
## ENSMUSG00000043681   -0.81   -0.73  DOWN
## ENSMUSG00000045136   -0.53   -0.44  DOWN
## ENSMUSG00000053303   -1.45   -1.36  DOWN
## ENSMUSG00000054520   -0.53   -0.55  DOWN
## ENSMUSG00000054986   -1.13   -1.04  DOWN
## ENSMUSG00000055116   -1.04   -0.97  DOWN
## ENSMUSG00000056749   -1.03   -0.95  DOWN
## ENSMUSG00000058258   -0.50   -0.47  DOWN
## ENSMUSG00000058672   -0.49   -0.41  DOWN
## ENSMUSG00000059743   -0.42   -0.35  DOWN
## ENSMUSG00000063694   -0.35   -0.30  DOWN
## ENSMUSG00000070985   -0.49   -0.38  DOWN
## ENSMUSG00000074261   -0.37   -0.30  DOWN
## ENSMUSG00000074715   -2.05   -1.99  DOWN
## ENSMUSG00000090236   -0.36   -0.33  DOWN
## ENSMUSG00000090264   -0.85   -0.79  DOWN
## ENSMUSG00000001280    0.17    0.26    UP
## ENSMUSG00000002265    0.34    0.39    UP
## ENSMUSG00000002346    0.41    0.49    UP
## ENSMUSG00000003477    0.62    0.70    UP
## ENSMUSG00000004105    0.31    0.37    UP
## ENSMUSG00000005034    0.15    0.23    UP
## ENSMUSG00000006127    0.24    0.28    UP
## ENSMUSG00000006269    0.19    0.26    UP
## ENSMUSG00000007872    0.82    0.89    UP
## ENSMUSG00000008682    0.24    0.30    UP
## ENSMUSG00000009927    0.16    0.23    UP
## ENSMUSG00000012848    0.22    0.28    UP
## ENSMUSG00000015656    0.36    0.39    UP
## ENSMUSG00000015957    1.15    1.41    UP
## ENSMUSG00000020372    0.20    0.28    UP
## ENSMUSG00000020427    0.64    0.71    UP
## ENSMUSG00000020473    0.26    0.33    UP
## ENSMUSG00000020482    0.34    0.41    UP
## ENSMUSG00000020607    0.38    0.45    UP
## ENSMUSG00000020653    0.81    0.89    UP
## ENSMUSG00000020889    0.74    0.79    UP
## ENSMUSG00000021482    0.22    0.27    UP
## ENSMUSG00000021508    0.75    0.81    UP
## ENSMUSG00000021775    0.84    0.95    UP
## ENSMUSG00000022122    0.42    0.47    UP
## ENSMUSG00000022389    0.65    0.71    UP
## ENSMUSG00000022949    0.72    0.80    UP
## ENSMUSG00000023022    0.23    0.27    UP
## ENSMUSG00000024298    0.22    0.32    UP
## ENSMUSG00000024900    0.38    0.45    UP
## ENSMUSG00000025019    0.26    0.30    UP
## ENSMUSG00000025511    0.44    0.51    UP
## ENSMUSG00000025764    0.23    0.30    UP
## ENSMUSG00000025815    0.59    0.44    UP
## ENSMUSG00000027314    0.59    0.64    UP
## ENSMUSG00000027796    0.72    0.79    UP
## ENSMUSG00000027875    1.68    1.72    UP
## ENSMUSG00000028081    0.23    0.30    UP
## ENSMUSG00000028957    1.23    1.26    UP
## ENSMUSG00000029587    0.24    0.39    UP
## ENSMUSG00000029714    0.20    0.34    UP
## ENSMUSG00000030201    0.26    0.30    UP
## ENSMUSG00000030256    0.98    1.19    UP
## ENSMUSG00000031167    0.53    0.61    UP
## ENSMUSG00000031320    0.26    0.32    UP
## ENSMUSG00000032097    0.20    0.26    UP
## ENSMUSG00000032594    0.18    0.29    UP
## ENSMUSG00000032624    0.22    0.28    UP
## ENSMUSG00000033327    0.43    0.50    UP
## ENSMUSG00000033350    0.37    0.43    UP
## ENSMUSG00000034111    0.21    0.28    UP
## ENSMUSG00000034450    1.88    1.93    UP
## ENSMUSG00000034460    0.32    0.38    UP
## ENSMUSG00000035469    0.19    0.29    UP
## ENSMUSG00000035530    0.20    0.27    UP
## ENSMUSG00000035614    0.18    0.32    UP
## ENSMUSG00000037172    0.22    0.28    UP
## ENSMUSG00000037465    0.58    0.61    UP
## ENSMUSG00000037523    0.18    0.24    UP
## ENSMUSG00000037621    0.52    0.59    UP
## ENSMUSG00000038393    0.68    0.77    UP
## ENSMUSG00000039831    0.19    0.32    UP
## ENSMUSG00000040423    0.21    0.30    UP
## ENSMUSG00000040584    0.52    0.59    UP
## ENSMUSG00000040740    0.58    0.65    UP
## ENSMUSG00000041075    0.28    0.35    UP
## ENSMUSG00000041841    0.25    0.30    UP
## ENSMUSG00000042046    0.20    0.26    UP
## ENSMUSG00000042659    0.36    0.45    UP
## ENSMUSG00000043144    0.60    0.67    UP
## ENSMUSG00000044026    0.23    0.30    UP
## ENSMUSG00000045382    0.58    0.69    UP
## ENSMUSG00000045441    0.34    0.41    UP
## ENSMUSG00000045519    0.37    0.53    UP
## ENSMUSG00000048826    0.26    0.34    UP
## ENSMUSG00000049241    0.55    0.61    UP
## ENSMUSG00000050100    0.73    0.76    UP
## ENSMUSG00000053411    0.35    0.44    UP
## ENSMUSG00000053964    0.64    0.82    UP
## ENSMUSG00000054499    0.26    0.32    UP
## ENSMUSG00000054793    0.22    0.29    UP
## ENSMUSG00000055866    0.81    0.77    UP
## ENSMUSG00000055980    0.27    0.34    UP
## ENSMUSG00000056851    0.18    0.27    UP
## ENSMUSG00000058056    0.34    0.41    UP
## ENSMUSG00000058600    0.19    0.30    UP
## ENSMUSG00000058655    0.26    0.33    UP
## ENSMUSG00000059824    2.26    2.26    UP
## ENSMUSG00000061143    0.28    0.34    UP
## ENSMUSG00000061353    0.55    0.60    UP
## ENSMUSG00000062563    0.29    0.35    UP
## ENSMUSG00000063681    0.66    0.76    UP
## ENSMUSG00000064065    0.26    0.31    UP
## ENSMUSG00000067586    0.36    0.42    UP
## ENSMUSG00000068742    0.45    0.48    UP
## ENSMUSG00000069495    0.18    0.28    UP
## ENSMUSG00000070348    0.32    0.39    UP
## ENSMUSG00000071415    0.19    0.26    UP
## ENSMUSG00000074063    0.48    0.55    UP
## ENSMUSG00000074578    0.32    0.45    UP
## ENSMUSG00000086583    0.42    0.48    UP
## ENSMUSG00000098557    0.28    0.34    UP
## ENSMUSG00000106847    0.29    0.36    UP
## ENSMUSG00000110185    0.26    0.33    UP
## ENSMUSG00000110195    0.32    0.33    UP

Protein Alignment

Load the following package.

library(msa)

Run the following chunk

# seq <- readAAStringSet(file.path(getwd(), "datasets", "ch3", "hglobin.fa"))
seq <- readAAStringSet("hglobin.fa")

seq
## AAStringSet object of length 3:
##     width seq                                               names               
## [1]   142 MVLSPADKTNVKAAWGKVGAHAG...PAVHASLDKFLASVSTVLTSKYR HBA_HUMAN
## [2]   142 MVLSGEDKSNIKAAWGKIGGHGA...PAVHASLDKFLASVSTVLTSKYR HBA_MOUSE
## [3]   142 MSLTRTERTIILSLWSKISTQAD...ADAHAAWDKFLSIVSGVLTEKYR HBAZ_CAPHI

Let's align the three amino acid sequences.

alignments <- msa(seq, method = "ClustalW")
## use default substitution matrix
alignments 
## CLUSTAL 2.1  
## 
## Call:
##    msa(seq, method = "ClustalW")
## 
## MsaAAMultipleAlignment with 3 rows and 142 columns
##     aln                                                    names
## [1] MVLSPADKTNVKAAWGKVGAHAGEYG...FTPAVHASLDKFLASVSTVLTSKYR HBA_HUMAN
## [2] MVLSGEDKSNIKAAWGKIGGHGAEYG...FTPAVHASLDKFLASVSTVLTSKYR HBA_MOUSE
## [3] MSLTRTERTIILSLWSKISTQADVIG...FTADAHAAWDKFLSIVSGVLTEKYR HBAZ_CAPHI
## Con MVLS??DKTNIKAAWGKIG?HA?EYG...FTPAVHASLDKFLASVSTVLTSKYR Consensus
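If you want that consensus row as a plain character string, the msa package also provides msaConsensusSequence():

msaConsensusSequence(alignments)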
msaPrettyPrint(alignments, output = "pdf", showNames = "left", 
               showLogo = "none", askForOverwrite = FALSE, 
               verbose = TRUE, file = "whole_align.pdf")
## Multiple alignment written to temporary file /tmp/Rtmp5WeDw2/seqaca50b72b55.fasta
## File whole_align.tex created
## Warning in texi2dvi(texfile, quiet = !verbose, pdf = identical(output, "pdf"),
## : texi2dvi script/program not available, using emulation
## Output file whole_align.pdf created
msaPrettyPrint(alignments, c(10,30),  output = "pdf", showNames = "left",
               file = "Zoomed_align.pdf", showLogo = "top", askForOverwrite = FALSE, 
               verbose = TRUE)
## Multiple alignment written to temporary file /tmp/Rtmp5WeDw2/seqaca163068e4.fasta
## File Zoomed_align.tex created
## Warning in texi2dvi(texfile, quiet = !verbose, pdf = identical(output, "pdf"),
## : texi2dvi script/program not available, using emulation
## Output file Zoomed_align.pdf created

Let's make a tree from our alignment.

Load the following packages to do so.

library(ape)
## 
## Attaching package: 'ape'
## The following object is masked from 'package:imager':
## 
##     where
## The following object is masked from 'package:Biostrings':
## 
##     complement
## The following object is masked from 'package:dplyr':
## 
##     where
library(seqinr)
## 
## Attaching package: 'seqinr'
## The following objects are masked from 'package:ape':
## 
##     as.alignment, consensus
## The following object is masked from 'package:matrixStats':
## 
##     count
## The following object is masked from 'package:Biostrings':
## 
##     translate
## The following object is masked from 'package:limma':
## 
##     zscore
## The following object is masked from 'package:dplyr':
## 
##     count

Convert to a seqinr alignment, then compute pairwise distances and build a neighbor-joining tree.

alignment_seqinr <- msaConvert(alignments, type = "seqinr::alignment")

distances1 <- seqinr::dist.alignment(alignment_seqinr, "identity")

tree <- ape::nj(distances1)
plot(tree, main = "Phylogenetic Tree of HBA Sequences")
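To keep the tree for later sessions, ape can write it back out in Newick format (the file name here is hypothetical):

ape::write.tree(tree, file = "hba_tree.nwk")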

Synteny

Load the following package.

library(DECIPHER)
## Loading required package: RSQLite
## Loading required package: parallel

In the first step, we load the sequences into long_seq. This is a DNAStringSet object.

long_seq <- readDNAStringSet("plastid_genomes.fa")
long_seq
## DNAStringSet object of length 5:
##      width seq                                              names               
## [1] 130584 GGCATAAGCTATCTTCCCAAAGG...ATGATTCAAACATAAAAGTCCT NC_018523.1 Sacch...
## [2] 161592 ATGGGCGAACGACGGGAATTGAA...AAAGAAAAAAAAATAGGAGTAA NC_022431.1 Ascle...
## [3] 117672 ATGAGTACAACTCGAAAGTCCAC...TTGATTTCATCCACAAACGAAC NC_022259.1 Nanno...
## [4] 154731 TTATCCATTTGTAGATGGAACTT...CATATACACTAAGACAAAAGTC NC_022417.1 Cocos...
## [5] 156618 GGGCGAACGACGGGAATTGAACC...CCTTTTGTAGCGAATCCGTTAT NC_022459.1 Camel...

Now let's build a temporary SQLite database.

Seqs2DB(long_seq, "XStringSet", "long_db", names(long_seq))
## Adding 5 sequences to the database.
## 
## Added 5 new sequences to table Seqs.
## 30 total sequences in table Seqs.
## Time difference of 0.16 secs

Now that we’ve built the database, we can find the syntenic blocks.

synteny <- FindSynteny("long_db")
## ================================================================================
## 
## Time difference of 5.1 secs

View blocks with plotting

pairs(synteny)

plot(synteny)

Make an actual alignment file

alignment <- AlignSynteny(synteny, "long_db")
## ================================================================================
## 
## Time difference of 43 secs

Let's create a structure holding all aligned syntenic blocks for a pair of sequences.

block <- unlist(alignment[[1]])

We can write to file one alignment at a time.

writeXStringSet(block, "genome_blocks_out.fa")
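To write every pairwise block set rather than just the first, a small loop sketch (output file names are hypothetical) is:

for (i in seq_along(alignment)) {
  writeXStringSet(unlist(alignment[[i]]), paste0("genome_blocks_pair_", i, ".fa"))
}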

Unannotated Gene Regions

Load the following packages.

library(locfit)
## locfit 1.5-9.8    2023-06-11
## 
## Attaching package: 'locfit'
## The following object is masked from 'package:purrr':
## 
##     none
library(Rsamtools)

Let's create a function that loads the gene-region information in a GFF file and converts it to a Bioconductor GRanges object.

get_annotated_regions_from_GFF <- function(file_name) {
  gff <- rtracklayer::import.gff(file_name)
  as(gff, "GRanges")
}
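An analogous helper for BED annotations might look like this, assuming rtracklayer is available (import.bed() is its standard BED reader):

get_annotated_regions_from_BED <- function(file_name) {
  bed <- rtracklayer::import.bed(file_name)
  as(bed, "GRanges")
}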

Get counts in 500 bp windows across the genome.

whole_genome <- csaw::windowCounts(
  # file.path(~Desktop/classroom/myfiles/garnettnatalie@gmail.com),
  file.path(getwd(), "windows.bam"),
  bin = TRUE, 
  filter = 0, 
  width = 500, 
  param = csaw::readParam(
    minq = 20, # determines what counts as a passing read
    dedup = TRUE, # removes PCR duplicates
    pe = "both" # requires both reads of a pair to be aligned
  )
)

Since this is a single column of data, let’s rename it

colnames(whole_genome) <- c("small_data")

annotated_regions <- get_annotated_regions_from_GFF(file.path(getwd(), "genes.gff"))

Now that we have the windows of high expression, we want to see if any of them overlap with annotated regions.

Load the next set of packages.

library(IRanges)
library(SummarizedExperiment)

Find the overlaps between the windows we made and the annotated regions.

windows_in_genes <- IRanges::overlapsAny(
  SummarizedExperiment::rowRanges(whole_genome), # creates a Granges object
  annotated_regions
)

windows_in_genes
##  [1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE  TRUE  TRUE  TRUE FALSE

Here we subset the whole_genome object into annotated and non-annotated regions.

annotated_windows_counts <- whole_genome[windows_in_genes, ]
non_annotated_windows_counts <- whole_genome[!windows_in_genes, ]

Use assay() to extract the actual counts from the RangedSummarizedExperiment object.

assay(non_annotated_windows_counts)
##       small_data
##  [1,]          0
##  [2,]         31
##  [3,]         25
##  [4,]          0
##  [5,]          0
##  [6,]         24
##  [7,]         25
##  [8,]          0
##  [9,]          0

In this step, we use the Rsamtools pileup() function to get a per-base coverage data frame. Each row represents a single nucleotide in the reference, and the count column gives the depth of coverage at that point.

Load the final package.

library(bumphunter)
## Loading required package: foreach
## Parallel computing support for 'oligo/crlmm': Disabled
##      - Load 'ff'
##      - Load and register a 'foreach' adaptor
##         Example - Using 'multicore' for 2 cores:
##              library(doMC)
##              registerDoMC(2)
## ================================================================================
## 
## Attaching package: 'foreach'
## 
## The following objects are masked from 'package:purrr':
## 
##     accumulate, when
## 
## Loading required package: iterators
pile_df <- Rsamtools::pileup(file.path(getwd(), "windows.bam"))

This step groups positions within a certain distance of each other into clusters. We give it the sequence names, the positions, and the maximum allowed gap.

clusters <- bumphunter::clusterMaker(pile_df$seqnames, pile_df$pos, maxGap = 100)

table(clusters)
## clusters
##    1    2    3 
## 1486 1552 1520

In this step, we search the per-base counts for candidate expressed regions of the genome.

bumphunter::regionFinder(pile_df$count, pile_df$seqnames, pile_df$pos, cutoff = 1)
## getSegments: segmenting
## getSegments: splitting
## [1] L        clusterL
## <0 rows> (or 0-length row.names)

Treeio

Let's load the required packages.

library(ggplot2)
library(ggtree)
## ggtree v3.10.0 For help: https://yulab-smu.top/treedata-book/
## 
## If you use the ggtree package suite in published research, please cite
## the appropriate paper(s):
## 
## Guangchuang Yu, David Smith, Huachen Zhu, Yi Guan, Tommy Tsan-Yuk Lam.
## ggtree: an R package for visualization and annotation of phylogenetic
## trees with their covariates and other associated data. Methods in
## Ecology and Evolution. 2017, 8(1):28-36. doi:10.1111/2041-210X.12628
## 
## Guangchuang Yu.  Data Integration, Manipulation and Visualization of
## Phylogenetic Trees (1st edition). Chapman and Hall/CRC. 2022,
## doi:10.1201/9781003279242
## 
## LG Wang, TTY Lam, S Xu, Z Dai, L Zhou, T Feng, P Guo, CW Dunn, BR
## Jones, T Bradley, H Zhu, Y Guan, Y Jiang, G Yu. treeio: an R package
## for phylogenetic tree input and output with richly annotated and
## associated data. Molecular Biology and Evolution. 2020, 37(2):599-603.
## doi: 10.1093/molbev/msz240
## 
## Attaching package: 'ggtree'
## The following object is masked from 'package:ape':
## 
##     rotate
## The following object is masked from 'package:magrittr':
## 
##     inset
## The following object is masked from 'package:reshape':
## 
##     expand
## The following object is masked from 'package:Biostrings':
## 
##     collapse
## The following object is masked from 'package:IRanges':
## 
##     collapse
## The following object is masked from 'package:S4Vectors':
## 
##     expand
## The following object is masked from 'package:tidyr':
## 
##     expand
library(treeio)
## treeio v1.26.0 For help: https://yulab-smu.top/treedata-book/
## 
## If you use the ggtree package suite in published research, please cite
## the appropriate paper(s):
## 
## LG Wang, TTY Lam, S Xu, Z Dai, L Zhou, T Feng, P Guo, CW Dunn, BR
## Jones, T Bradley, H Zhu, Y Guan, Y Jiang, G Yu. treeio: an R package
## for phylogenetic tree input and output with richly annotated and
## associated data. Molecular Biology and Evolution. 2020, 37(2):599-603.
## doi: 10.1093/molbev/msz240
## 
## Guangchuang Yu. Using ggtree to visualize data on tree-like structures.
## Current Protocols in Bioinformatics. 2020, 69:e96. doi:10.1002/cpbi.96
## 
## S Xu, Z Dai, P Guo, X Fu, S Liu, L Zhou, W Tang, T Feng, M Chen, L
## Zhan, T Wu, E Hu, Y Jiang, X Bo, G Yu. ggtreeExtra: Compact
## visualization of richly annotated phylogenetic data. Molecular Biology
## and Evolution. 2021, 38(9):4039-4042. doi: 10.1093/molbev/msab166
## 
## Attaching package: 'treeio'
## The following object is masked from 'package:seqinr':
## 
##     read.fasta
## The following object is masked from 'package:Biostrings':
## 
##     mask

First we need to load our raw tree data. It's in Newick format, so we use:

itol <- ape::read.tree("itol.nwk")

Now we will print out a very basic phylogenetic tree

ggtree(itol)

We can also change the format to make it a circular tree

ggtree(itol, layout = "circular")

We can also change the left-right/ up-down direction

ggtree(itol) + coord_flip() + scale_x_reverse()

By using geom_tiplab() we can add names to the ends of the tips

ggtree(itol) + geom_tiplab(color = "indianred", size = 1.5) 

By adding a geom_strip() layer we can annotate clades in the tree with a block of color

ggtree(itol) + geom_strip(13, 14, color = "red", barsize = 1)

We can highlight clades in unrooted trees with blobs of color using geom_hilight()

ggtree(itol, layout = "unrooted") + geom_hilight(node = 11, type = "encircle", fill = "steelblue")
## "daylight" method was used as default layout for unrooted tree.
## Average angle change [1] 0.174910612627282
## Average angle change [2] 0.161645191380673
## Average angle change [3] 0.129304375923319
## Average angle change [4] 0.0825706767962636
## Average angle change [5] 0.100056259084497

Install the following package.

install.packages("BAMMtools")
## Installing package into '/home/student/R/x86_64-pc-linux-gnu-library/4.3'
## (as 'lib' is unspecified)
library(BAMMtools)

We can use the MRCA (most recent common ancestor) function to find the node we want

getmrca(itol, "Photorhabdus_luminescens", "Blochmannia_floridanus")
## [1] 206

Now, if we want to highlight the clade descending from the most recent common ancestor of the two species above:

ggtree(itol, layout = "unrooted") + geom_hilight(node = 206, type = "encircle", fill = "steelblue")
## "daylight" method was used as default layout for unrooted tree.
## Average angle change [1] 0.174910612627282
## Average angle change [2] 0.161645191380673
## Average angle change [3] 0.129304375923319
## Average angle change [4] 0.0825706767962636
## Average angle change [5] 0.100056259084497

Treespace

Quantifying differences between trees with treespace

First let's load the required packages

library(ape)
library(adegraphics)
## Registered S3 methods overwritten by 'adegraphics':
##   method         from
##   biplot.dudi    ade4
##   kplot.foucart  ade4
##   kplot.mcoa     ade4
##   kplot.mfa      ade4
##   kplot.pta      ade4
##   kplot.sepan    ade4
##   kplot.statis   ade4
##   scatter.coa    ade4
##   scatter.dudi   ade4
##   scatter.nipals ade4
##   scatter.pco    ade4
##   score.acm      ade4
##   score.mix      ade4
##   score.pca      ade4
##   screeplot.dudi ade4
## 
## Attaching package: 'adegraphics'
## The following object is masked from 'package:ape':
## 
##     zoom
## The following object is masked from 'package:GenomicRanges':
## 
##     score
## The following object is masked from 'package:Biostrings':
## 
##     score
## The following object is masked from 'package:BiocGenerics':
## 
##     score
library(treespace)
## Loading required package: ade4
## 
## Attaching package: 'ade4'
## The following objects are masked from 'package:adegraphics':
## 
##     kplotsepan.coa, s.arrow, s.class, s.corcircle, s.distri, s.image,
##     s.label, s.logo, s.match, s.traject, s.value, table.value,
##     triangle.class
## The following object is masked from 'package:GenomicRanges':
## 
##     score
## The following object is masked from 'package:Biostrings':
## 
##     score
## The following object is masked from 'package:BiocGenerics':
## 
##     score

Now we need to load all the tree files into a multiPhylo object

treefiles <- list.files(file.path(getwd(), "genetrees"), full.names = TRUE)
tree_list <- lapply(treefiles, read.tree)
class(tree_list)
## [1] "list"
class(tree_list) <- "multiPhylo"

class(tree_list)
## [1] "multiPhylo"

Now we can compute the Kendall-Colijn distance between trees. This function does a LOT of analyses.

First, it runs a pairwise comparison of all trees in the input. Second, it carries out clustering using principal coordinates analysis (PCoA, a form of MDS). These results are returned in a list of objects, where $D contains the pairwise distances between trees and $pco contains the ordination. The metric we use (Kendall-Colijn) is particularly suitable for rooted trees such as we have here. The nf option tells treespace how many principal axes to retain.

comparisons <- treespace(tree_list, nf = 3)
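
Before plotting, it's worth poking at the returned list; a quick sketch, assuming the $D/$pco structure described above:

names(comparisons) # should list "D" and "pco"
dim(as.matrix(comparisons$D)) # one row/column per tree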

We can plot the pairwise distances between trees with table.image

table.image(comparisons$D, nclass = 25)

Now let's plot the ordination and clusters; this shows us how the trees group together

plotGroves(comparisons$pco, lab.show = TRUE, lab.cex = 1.5)

groves <- findGroves(comparisons, nclust = 2)
plotGroves(groves)

Subtrees

Extracting and working with subtrees using APE

Load our required packages

library(ape)

Now let's load the tree data we will be working with

newick <- read.tree("mammal_tree.nwk")

l <- subtrees(newick)
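
subtrees() returns a list with one subtree per internal node; we can check how many we have:

length(l) # number of internal nodes / subtrees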

Let's plot the tree to see what it looks like

plot(newick)

We can plot a single subtree by indexing into this list by node number

plot(l[[4]], sub = "Node 4")

We can also extract a clade manually with extract.clade()

small_tree <- extract.clade(newick, 9)

plot(small_tree)

Now, what if we want to bind two trees together? We can use bind.tree():

new_tree <- bind.tree(newick, small_tree, 3)
plot(new_tree)
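
The complement of extracting a clade is pruning tips away. As a quick sketch, ape's drop.tip() removes tips by index or by label (here the first two tips, chosen arbitrarily):

pruned_tree <- drop.tip(newick, 1:2)
plot(pruned_tree)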

Trees from Alignment

Reconstructing trees from alignments

Let's load the packages

library(Biostrings)
library(msa)
library(phangorn)

First we’ll load the sequences into a seqs variable

seqs <- readAAStringSet("abc.fa")

Now, let's construct an alignment with the msa package and ClustalOmega

aln <- msa::msa(seqs, method = c("ClustalOmega"))
## using Gonnet

To create a tree, we need to convert the alignment to a phyDat object

aln <- as.phyDat(aln, type = "AA")

class(aln)
## [1] "phyDat"

In this step, we'll actually make the trees. Trees are made from a distance matrix, which can be computed with dist.ml() (ML stands for maximum likelihood)

dist_mat <- dist.ml(aln)

Now we pass the distance matrix to upgma to make a UPGMA tree

upgma_tree <- upgma(dist_mat)
plot(upgma_tree, main = "UPGMA")

Alternatively, we can pass the distance matrix to a neighbor-joining function

nj_tree <- NJ(dist_mat)
plot(nj_tree, main = "NJ")
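
To check whether the two methods recovered the same topology, ape's all.equal() method for trees can compare them while ignoring branch lengths:

all.equal(upgma_tree, nj_tree, use.edge.length = FALSE)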

Lastly, we are going to use the bootstrap.phyDat() function to compute bootstrap support for the branches of the tree. The first argument is the alignment object (aln), while the second argument, FUN, is a function that rebuilds an NJ tree from a resampled alignment

Bootstrap values are essentially confidence measures for whether each clade is placed in the correct position

fit <- pml(nj_tree, aln)

bootstraps <- bootstrap.phyDat(aln, FUN = function(x) {NJ(dist.ml(x))}, bs = 100)

plotBS(nj_tree, bootstraps, p = 10)

Variant Tools

First let's load the required libraries

library(GenomicRanges)
library(gmapR)
library(rtracklayer)
library(VariantAnnotation)
## 
## Attaching package: 'VariantAnnotation'
## The following object is masked from 'package:stringr':
## 
##     fixed
## The following object is masked from 'package:base':
## 
##     tabulate
library(VariantTools)

Now we want to load our datasets. We need .bam and .fa files for this pipeline to work

bam_file <- file.path(getwd(), "hg17_snps.bam")

fasta_file <- file.path(getwd(), "chr17_83k.fa")

Now we need to set up the genome object and the parameters object:

fa <- rtracklayer::FastaFile(fasta_file)

Now we create a GmapGenome object, which describes the genome to the later variant-calling functions

genome <- gmapR::GmapGenome(fa, create = TRUE)
## Creating directory /home/student/.local/share/gmap

This next step sets our parameters for what is considered a variant. These functions allow a lot of fine-tuning; we are just going to use fairly standard settings

qual_params <- TallyVariantsParam(
  genome = genome, 
  minimum_mapq = 20)

var_params <- VariantCallingFilters(read.count = 19, p.lower = 0.01)

Now we use the callVariants() function with the parameters we defined above

called_variants <- callVariants(bam_file, 
                                qual_params,
                                calling.filter = var_params)

head(called_variants)
## VRanges object with 6 ranges and 17 metadata columns:
##           seqnames    ranges strand         ref              alt     totalDepth
##              <Rle> <IRanges>  <Rle> <character> <characterOrRle> <integerOrRle>
##   [1] NC_000017.10        64      *           G                T            759
##   [2] NC_000017.10        69      *           G                T            812
##   [3] NC_000017.10        70      *           G                T            818
##   [4] NC_000017.10        73      *           T                A            814
##   [5] NC_000017.10        77      *           T                A            802
##   [6] NC_000017.10        78      *           G                T            798
##             refDepth       altDepth   sampleNames softFilterMatrix | n.read.pos
##       <integerOrRle> <integerOrRle> <factorOrRle>         <matrix> |  <integer>
##   [1]            739             20          <NA>                  |         17
##   [2]            790             22          <NA>                  |         19
##   [3]            796             22          <NA>                  |         20
##   [4]            795             19          <NA>                  |         13
##   [5]            780             22          <NA>                  |         19
##   [6]            777             21          <NA>                  |         17
##       n.read.pos.ref raw.count.total count.plus count.plus.ref count.minus
##            <integer>       <integer>  <integer>      <integer>   <integer>
##   [1]             64             759         20            739           0
##   [2]             69             812         22            790           0
##   [3]             70             818         22            796           0
##   [4]             70             814         19            795           0
##   [5]             70             802         22            780           0
##   [6]             70             798         21            777           0
##       count.minus.ref count.del.plus count.del.minus read.pos.mean
##             <integer>      <integer>       <integer>     <numeric>
##   [1]               0              0               0       30.9000
##   [2]               0              0               0       40.7273
##   [3]               0              0               0       34.7727
##   [4]               0              0               0       36.1579
##   [5]               0              0               0       38.3636
##   [6]               0              0               0       39.7143
##       read.pos.mean.ref read.pos.var read.pos.var.ref     mdfne mdfne.ref
##               <numeric>    <numeric>        <numeric> <numeric> <numeric>
##   [1]           32.8755      318.558          347.804        NA        NA
##   [2]           35.4190      377.004          398.876        NA        NA
##   [3]           36.3442      497.762          402.360        NA        NA
##   [4]           36.2176      519.551          402.843        NA        NA
##   [5]           36.0064      472.327          397.070        NA        NA
##   [6]           35.9241      609.076          390.463        NA        NA
##       count.high.nm count.high.nm.ref
##           <integer>         <integer>
##   [1]            20               738
##   [2]            22               789
##   [3]            22               796
##   [4]            19               769
##   [5]            22               780
##   [6]            21               777
##   -------
##   seqinfo: 1 sequence from chr17_83k genome
##   hardFilters(4): nonRef nonNRef readCount likelihoodRatio

Above, head() shows the first six of the variants we identified
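
To count every variant that passed the filters, not just the six shown by head():

length(called_variants)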

Now we move on to annotation and load the feature position information from the genome's GFF file

get_annotated_regions_from_gff <- function(file_name) {
  gff <- rtracklayer::import.gff(file_name)
  as(gff, "GRanges")
}

Note that you can also load this data from a BED file, as sketched below.
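
A minimal sketch of such a loader, using rtracklayer's import.bed() (the BED analogue of import.gff()):

get_annotated_regions_from_bed <- function(file_name) {
  bed <- rtracklayer::import.bed(file_name)
  as(bed, "GRanges")
}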

genes <- get_annotated_regions_from_gff("chr17.83k.gff3")

Now we can calculate which variants overlap which genes

overlaps <- GenomicRanges::findOverlaps(called_variants, genes)

overlaps
## Hits object with 12684 hits and 0 metadata columns:
##           queryHits subjectHits
##           <integer>   <integer>
##       [1]     35176           1
##       [2]     35176           2
##       [3]     35176           3
##       [4]     35177           1
##       [5]     35177           2
##       ...       ...         ...
##   [12680]     40944           2
##   [12681]     40944           7
##   [12682]     40945           1
##   [12683]     40945           2
##   [12684]     40945           7
##   -------
##   queryLength: 44949 / subjectLength: 8

And lastly, we subset the genes using the overlap hits

identified <- genes[subjectHits(overlaps)]
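
Because one gene can overlap many variants, identified contains one entry per hit; collapsing to the distinct genes is a one-liner:

unique(identified)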

Predicting ORFs

First thing, let's load the required packages

library(Biostrings)
library(systemPipeR)
## Loading required package: ShortRead
## Loading required package: BiocParallel
## Loading required package: GenomicAlignments
## 
## Attaching package: 'GenomicAlignments'
## The following object is masked from 'package:dplyr':
## 
##     last
## 
## Attaching package: 'ShortRead'
## The following object is masked from 'package:adegraphics':
## 
##     zoom
## The following objects are masked from 'package:locfit':
## 
##     left, right
## The following object is masked from 'package:ape':
## 
##     zoom
## The following object is masked from 'package:imager':
## 
##     clean
## The following object is masked from 'package:magrittr':
## 
##     functions
## The following object is masked from 'package:oligo':
## 
##     intensity
## The following objects are masked from 'package:oligoClasses':
## 
##     chromosome, position
## The following object is masked from 'package:affy':
## 
##     intensity
## The following object is masked from 'package:dplyr':
## 
##     id
## The following object is masked from 'package:purrr':
## 
##     compose
## The following object is masked from 'package:tibble':
## 
##     view
## 
## Attaching package: 'systemPipeR'
## The following object is masked from 'package:VariantAnnotation':
## 
##     reference
## The following object is masked from 'package:DESeq2':
## 
##     results

Let's load the data into a DNAStringSet object; in this case, an Arabidopsis chloroplast genome

dna_object <- readDNAStringSet("arabidopsis_chloroplast.fa")

Now let's predict the open reading frames with predORF(); we'll predict all ORFs on both strands

predict_orfs <- predORF(dna_object, n = 'all', type = 'gr', mode = 'ORF', strand = 'both', 
                        longest_disjoint = TRUE)
predict_orfs
## GRanges object with 2501 ranges and 2 metadata columns:
##           seqnames        ranges strand | subject_id inframe2end
##              <Rle>     <IRanges>  <Rle> |  <integer>   <numeric>
##      1 chloroplast   86762-93358      + |          1           2
##   1162 chloroplast     2056-2532      - |          1           3
##      2 chloroplast   72371-73897      + |          2           2
##   1163 chloroplast   77901-78362      - |          2           1
##      3 chloroplast   54937-56397      + |          3           3
##    ...         ...           ...    ... .        ...         ...
##   2497 chloroplast 129757-129762      - |       1336           3
##   2498 chloroplast 139258-139263      - |       1337           3
##   2499 chloroplast 140026-140031      - |       1338           3
##   2500 chloroplast 143947-143952      - |       1339           3
##   2501 chloroplast 153619-153624      - |       1340           3
##   -------
##   seqinfo: 1 sequence from an unspecified genome; no seqlengths

This returned a GRanges object with 2,501 open reading frames. This is FAR too many open reading frames.

To filter out erroneous ORFs, we do a simulation. We first estimate the length an ORF can reach by chance: we will create a string of random nucleotides that is the length of our chloroplast genome and determine the longest ORF that can arise by chance.

bases <- c("A", "T", "G", "C")
raw_seq_string <- strsplit(as.character(dna_object), "")

Now we need to ensure that our random nucleotides match the proportion of nucleotides in our chloroplast genome so we have no bias.

seq_length <- width(dna_object[1])
counts <- lapply(bases, function(x) {sum(raw_seq_string[[1]] == x)}) # count each base in the genome
probs <- unlist(lapply(counts, function(base_count){signif(base_count/seq_length, 2)}))
probs
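
As a cross-check, Biostrings can compute the same base proportions directly with letterFrequency():

Biostrings::letterFrequency(dna_object[1], letters = bases, as.prob = TRUE)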

Now we can build our function to simulate a genome.

get_longest_orf_in_random_genome <- function(x, 
                                            length = 1000,
                                            probs = c(0.25, 0.25, 0.25, 0.25),
                                            bases = c("A", "T", "G", "C")){
  # Here we create our random genome, sampling bases with replacement 
  random_genome <- paste0(sample(bases, size = length, replace = TRUE, prob = probs), collapse = "")
  random_dna_object <- DNAStringSet(random_genome)
  names(random_dna_object) <- c("random_dna_string")
  
  orfs <- predORF(random_dna_object, n = 1, type = 'gr', mode = 'ORF', strand = 'both', longest_disjoint = TRUE)
  return(max(width(orfs)))
}

Now we use the function we just created to predict the ORFs in 10 random genomes

random_lengths <- unlist(lapply(1:10, get_longest_orf_in_random_genome, length = seq_length, probs = probs, bases = bases))

Let's pull out the longest length from our 10 simulations

longest_random_orf <- max(random_lengths)

Let's keep only the ORFs from our chloroplast genome that are longer than the longest random ORF

keep <- width(predict_orfs) > longest_random_orf

orfs_to_keep <- predict_orfs[keep]
orfs_to_keep
## GRanges object with 8 ranges and 2 metadata columns:
##        seqnames        ranges strand | subject_id inframe2end
##           <Rle>     <IRanges>  <Rle> |  <integer>   <numeric>
##   1 chloroplast   86762-93358      + |          1           2
##   2 chloroplast   72371-73897      + |          2           2
##   3 chloroplast   54937-56397      + |          3           3
##   4 chloroplast   57147-58541      + |          4           1
##   5 chloroplast   33918-35141      + |          5           1
##   6 chloroplast   32693-33772      + |          6           2
##   7 chloroplast 109408-110436      + |          7           3
##   8 chloroplast 114461-115447      + |          8           2
##   -------
##   seqinfo: 1 sequence from an unspecified genome; no seqlengths

Finally, we extract the ORF sequences and write them to a file

extracted_orfs <- BSgenome::getSeq(dna_object, orfs_to_keep)
names(extracted_orfs) <- paste0("orf_", 1:length(orfs_to_keep))
writeXStringSet(extracted_orfs, "saved_orfs.fa")
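
If you also want the protein products, the extracted ORFs can be translated with Biostrings (this sketch assumes the standard genetic code):

proteins <- Biostrings::translate(extracted_orfs)
writeXStringSet(proteins, "saved_orf_proteins.fa")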

KaryoploteR

First let's load the required packages

library(karyoploteR)
## Loading required package: regioneR
library(GenomicRanges)

Now we need to set up the genome object for our karyotype

genome_df <- data.frame(
  # first we dictate the number of chromosomes
  chr = paste0("chr", 1:5),
  start = rep(1, 5), 
  # and then we will dictate the length of each chromosome 
  end = c(34964571, 22037565, 25499034, 20862711, 31270811)
) 

Now we convert the data frame to a GRanges object

genome_gr <- makeGRangesFromDataFrame(genome_df)

Now let's create some SNP positions to map out. We do this by using the sample() function

snp_pos <- sample(1:1e7, 25)
snps <- data.frame(
  chr = paste0("chr", sample(1:5,25, replace = TRUE)),
  start = snp_pos, 
  end = snp_pos
)

Again, we convert the data frame to GRanges

snps_gr <- makeGRangesFromDataFrame(snps)

Now let's create some SNP labels

snp_labels <- paste0("snp_", 1:25)

Here we grab the default parameters for our plot

plot.params <- getDefaultPlotParams(plot.type = 1)

Here we will set the margins of our plot

plot.params$data1outmargin <- 600

Now lets plot our snps

kp <- plotKaryotype(genome = genome_gr, plot.type = 1, plot.params = plot.params, chromosomes = "all")
kpPlotMarkers(kp, snps_gr, labels = snp_labels)

We can also add some numeric data to our plots. We will generate 100 random numbers that map to 100 windows on chromosome 4

numeric_data <- data.frame(
  y = rnorm(100,mean = 1, sd = 0.5), 
  chr = rep("chr4", 100), 
  start = seq(1,20862711, 20862711/100),
  end = seq(1,20862711, 20862711/100)
)

Now lets make the data a granges object

numeric_data_gr <- makeGRangesFromDataFrame(numeric_data)

Again, let's set our plot parameters

plot.params <- getDefaultPlotParams(plot.type = 2)
plot.params$data1outmargin <- 800
plot.params$data2outmargin <- 800
plot.params$topmargin <- 800

Let's plot the data

kp <- plotKaryotype(genome = genome_gr, plot.type = 2, plot.params = plot.params, chromosomes = "all")
kpPlotMarkers(kp, snps_gr, labels = snp_labels)
kpLines(kp, numeric_data_gr, y = numeric_data$y, data.panel = 2)