To be announced soon..
Due to the DataCamp case, the contribution of grade points is planned to be as follows (subject to change).
| Item | Total contribution to 100 points |
|---|---|
| Midterm | 30 |
| Final | 40 |
| Quiz | 15 |
| DataCamp assignments | 5 |
| Question Pool | 5 |
| Attendance | 5 |
| Project (Bonus) | 7 |
A bonus Project item has been added. The details of the project are as follows:
Before we start, please make sure the following libraries are installed:
library(tidytext)
library(janeaustenr)
library(stringr)
library(tidyverse)
library(ggplot2)
library(ggraph)
library(igraph)
library(tidygraph)
library(widyr) # OPTIONAL library(devtools) then install_github("dgrtwo/widyr")
Last time, we removed stop words and added sentiment from 3 different sources. We were able to track the positive or negative sentiments throughout the chapters. Finally, we calculated the most common negative and positive words in Jane Austen’s 6 books.
Now, we’ll do more fun stuff. It will be a rough ride, with many diverse topics, so please buckle up. We’re still following the book Text Mining with R which can be accessed online here. The R code of the book is available at this Github repo. Some sections will be from different resources, and necessary links to those resources will be provided.
In order to quantify what a document is about, we can look at the words that make up the document. Term frequency (tf) reports how frequently a word occurs in a document. Some of these words might be more important in some documents than in others, and a list of stop words is not a very sophisticated approach to adjusting term frequency for commonly used words.
A term’s inverse document frequency (idf) decreases the weight for commonly used words and increases the weight for words that are not used very much in a collection of documents. It can be combined with term frequency to calculate a term’s tf-idf (the two quantities multiplied together), the frequency of a term adjusted for how rarely it is used.
The statistic tf-idf is intended to measure how important a word is to a document in a collection (or corpus) of documents, for example, to one novel in a collection of novels or to one website in a collection of websites.
The inverse document frequency for any given term is defined as
\[idf(\text{term}) = \ln{\left(\frac{n_{\text{documents}}}{n_{\text{documents containing term}}}\right)}\]
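As a tiny worked example of the formula (an illustration, not from the book), consider a collection of 6 documents: a term that appears in every document gets an idf of zero, while a term confined to a single document gets the largest weight.
log(6 / 6)   # term present in all 6 documents -> idf = 0
log(6 / 1)   # term present in only 1 document -> idf of about 1.79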
Let’s calculate tf in Jane Austen’s books with tidy principles
book_words <- austen_books() %>%
unnest_tokens(word, text) %>%
count(book, word, sort = TRUE)
total_words <- book_words %>%
group_by(book) %>%
summarize(total = sum(n))
book_words <- left_join(book_words, total_words)
Joining, by = "book"
book_words
In book_words, n is the number of times that word is used in that book and total is the total words in that book. Term frequency is the number of times a word appears in a novel divided by the total number of terms (words) in that novel.
Below is the distribution of n/total (i.e., term frequency) in each of the novels.
ggplot(book_words, aes(n/total, fill = book)) +
geom_histogram(show.legend = FALSE) +
xlim(NA, 0.0009) +
facet_wrap(~book, ncol = 2, scales = "free_y")
Zipf’s law states that the frequency that a word appears is inversely proportional to its rank.
Zipf’s law can be observed in natural languages. Can we observe it in a DNA sequence?
freq_by_rank <- book_words %>%
group_by(book) %>%
mutate(rank = row_number(),
tf = n/total)
freq_by_rank
The rank column here tells us the rank of each word within the frequency table; the table was already ordered by n so we could use row_number() to find the rank.
freq_by_rank %>%
ggplot(aes(rank, tf, color = book)) +
geom_line(size = 1.1, alpha = 0.8, show.legend = FALSE) +
scale_x_log10() +
scale_y_log10()
Notice that figure above is in log-log coordinates. We see that all six of Jane Austen’s novels are similar to each other, and that the relationship between rank and frequency does have negative slope. It is not quite constant, though; perhaps we could view this as a broken power law with, say, three sections.
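To make the roughly constant negative slope observation concrete, here is a rough sketch (the rank cutoffs are an arbitrary choice of mine) that fits the exponent of Zipf’s law on the middle section of the rank range; a slope close to -1 would be the classic Zipf result.
freq_by_rank %>%
  filter(rank < 500, rank > 10) %>%
  lm(log10(tf) ~ log10(rank), data = .) %>%
  coef()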
bind_tf_idf function
The idea of tf-idf is to find the important words for the content of each document by decreasing the weight for commonly used words and increasing the weight for words that are not used very much in a collection or corpus of documents, in this case, the group of Jane Austen’s novels as a whole. Calculating tf-idf attempts to find the words that are important (i.e., common) in a text, but not too common. Let’s do that now.
The bind_tf_idf function in the tidytext package takes a tidy text dataset as input with one row per token (term), per document. One column (word here) contains the terms/tokens, one column contains the documents (book in this case), and the last necessary column contains the counts, how many times each document contains each term (n in this example). We calculated a total for each book for our explorations in previous sections, but it is not necessary for the bind_tf_idf function; the table only needs to contain all the words in each document.
book_words <- book_words %>%
bind_tf_idf(word, book, n)
book_words
Calculate tf and idf from scratch
austen_books() %>%
unnest_tokens(word, text) %>%
count(book, word, sort = TRUE) %>%
bind_tf_idf(word,book,n)
Notice that idf and thus tf-idf are zero for these extremely common words. These are all words that appear in all six of Jane Austen’s novels, so the idf term (which will then be the natural log of 1) is zero. The inverse document frequency (and thus tf-idf) is very low (near zero) for words that occur in many of the documents in a collection; this is how this approach decreases the weight for common words. The inverse document frequency will be a higher number for words that occur in fewer of the documents in the collection.
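Coming back to the “from scratch” prompt above, here is a minimal sketch that recomputes the same quantities with plain dplyr verbs (the *_manual column names are just illustrative), so the results can be compared against bind_tf_idf().
austen_books() %>%
  unnest_tokens(word, text) %>%
  count(book, word, sort = TRUE) %>%
  group_by(book) %>%
  mutate(tf_manual = n / sum(n)) %>%                   # term frequency within each book
  group_by(word) %>%
  mutate(idf_manual = log(6 / n_distinct(book))) %>%   # 6 books in the collection
  ungroup() %>%
  mutate(tf_idf_manual = tf_manual * idf_manual) %>%
  arrange(desc(tf_idf_manual))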
Let’s look at terms with high tf-idf in Jane Austen’s works.
book_words %>%
select(-total) %>%
arrange(desc(tf_idf))
Here we see all proper nouns, names that are in fact important in these novels. None of them occur in all of novels, and they are important, characteristic words for each text within the corpus of Jane Austen’s novels.
Let’s look at a visualization of these high tf-idf words (please fix the code below so that the bars appear in sorted order, and please refer to the Stack Overflow answer about ordering the words).
book_words %>%
arrange(desc(tf_idf)) %>%
# mutate(word = factor(word, levels = rev(unique(word)))) %>%
mutate(word = reorder(word,n)) %>%
group_by(book) %>%
top_n(15) %>%
ungroup %>%
ggplot(aes(word, tf_idf, fill = book)) +
geom_col(show.legend = FALSE) +
labs(x = NULL, y = "tf-idf") +
facet_wrap(~book, ncol = 2, scales = "free") +
coord_flip()
Selecting by tf_idf
Let’s extract a small portion of the table in order to understand the ordering.
# TODO a small data frame as example
book_words %>%
arrange(desc(tf_idf)) %>%
filter(word %in% c("elizabeth","lizzy","fanny","thomas","bertram","emma","weston")) %>%
# mutate(word = factor(word, levels = rev(unique(word)))) %>%
mutate(word= reorder(word,tf_idf)) %>% select(word,tf_idf) -> test
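As a quick, optional check (my own inspection code, not part of the exercise), the factor levels of word in test now follow tf_idf, which is exactly what controls the bar order in ggplot.
levels(test$word)   # factor levels ordered by tf_idf
test                # rows sorted by tf_idf; the levels control plotting order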
Still all proper nouns in Figure @ref(fig:plotseparate)! These words are, as measured by tf-idf, the most important to each novel and most readers would likely agree. What measuring tf-idf has done here is show us that Jane Austen used similar language across her six novels, and what distinguishes one novel from the rest within the collection of her works are the proper nouns, the names of people and places. This is the point of tf-idf; it identifies words that are important to one document within a collection of documents.
In summary, using term frequency and inverse document frequency allows us to find words that are characteristic for one document within a collection of documents, whether that document is a novel or physics text or webpage.
So far we’ve considered words as individual units, and considered their relationships to sentiments or to documents. However, many interesting text analyses are based on the relationships between words, whether examining which words tend to follow others immediately, or that tend to co-occur within the same documents.
In this chapter, we’ll explore some of the methods tidytext offers for calculating and visualizing relationships between words in your text dataset. This includes the token = "ngrams" argument, which tokenizes by pairs of adjacent words rather than by individual ones. We’ll also introduce two new packages: ggraph, which extends ggplot2 to construct network plots, and widyr, which calculates pairwise correlations and distances within a tidy data frame. Together these expand our toolbox for exploring text within the tidy data framework.
We’ve been using the unnest_tokens function to tokenize by word, or sometimes by sentence, which is useful for the kinds of sentiment and frequency analyses we’ve been doing so far. But we can also use the function to tokenize into consecutive sequences of words, called n-grams. By seeing how often word X is followed by word Y, we can then build a model of the relationships between them.
We do this by adding the token = "ngrams" option to unnest_tokens(), and setting n to the number of words we wish to capture in each n-gram. When we set n to 2, we are examining pairs of two consecutive words, often called “bigrams”:
austen_bigrams <- austen_books() %>%
unnest_tokens(bigram, text, token = "ngrams", n = 2)
austen_bigrams
This data structure is still a variation of the tidy text format. It is structured as one-token-per-row (with extra metadata, such as book, still preserved), but each token now represents a bigram.
Notice that these bigrams overlap: “sense and” is one token, while “and sensibility” is another.
Our usual tidy tools apply equally well to n-gram analysis. We can examine the most common bigrams using dplyr’s count():
austen_bigrams %>%
count(bigram, sort = TRUE)
As one might expect, a lot of the most common bigrams are pairs of common (uninteresting) words, such as “of the” and “to be”: what we call “stop-words”. This is a useful time to use tidyr’s separate(), which splits a column into multiple columns based on a delimiter. This lets us separate the bigram column into two columns, “word1” and “word2”, at which point we can remove cases where either is a stop-word.
bigrams_separated <- austen_bigrams %>%
separate(bigram, c("word1", "word2"), sep = " ")
# bigrams_separated %>%
# count(word1,word2,sort=TRUE)
bigrams_filtered <- bigrams_separated %>%
filter(!word1 %in% stop_words$word) %>%
filter(!word2 %in% stop_words$word)
# OR
# bigrams_separated %>%
# anti_join(stop_words, by=c("word1"="word")) %>%
# anti_join(stop_words, by=c("word2"="word"))
# new bigram counts:
bigram_counts <- bigrams_filtered %>%
count(word1, word2, sort = TRUE)
bigram_counts
We can see that names (whether first and last or with a salutation) are the most common pairs in Jane Austen’s books.
In other analyses, we may want to work with the recombined words. tidyr’s unite() function is the inverse of separate(), and lets us recombine the columns into one. Thus, “separate/filter/count/unite” let us find the most common bigrams not containing stop-words.
bigrams_united <- bigrams_filtered %>%
unite(bigram, word1, word2, sep = " ")
bigrams_united
In other analyses you may be interested in the most common trigrams, which are consecutive sequences of 3 words. We can find this by setting n = 3:
austen_books() %>%
unnest_tokens(trigram, text, token = "ngrams", n = 3) %>%
separate(trigram, c("word1", "word2", "word3"), sep = " ") %>%
filter(!word1 %in% stop_words$word,
!word2 %in% stop_words$word,
!word3 %in% stop_words$word) %>%
count(word1, word2, word3, sort = TRUE)
This one-bigram-per-row format is helpful for exploratory analyses of the text. As a simple example, we might be interested in the most common “streets” mentioned in each book:
bigrams_filtered %>%
filter(word2 == "street") %>%
count(book, word1, sort = TRUE)
A bigram can also be treated as a term in a document in the same way that we treated individual words. For example, we can look at the tf-idf of bigrams across Austen novels. These tf-idf values can be visualized within each book, just as we did for words.
bigrams_united <- bigrams_filtered %>%
unite(bigram, word1, word2, sep = " ")
bigram_tf_idf <- bigrams_united %>%
count(book, bigram) %>%
bind_tf_idf(bigram, book, n) %>%
arrange(desc(tf_idf))
bigram_tf_idf
Much as we discovered in Chapter @ref(tfidf), the units that distinguish each Austen book are almost exclusively names. We also notice some pairings of a common verb and a name, such as “replied elizabeth” in Pride & Prejudice, or “cried emma” in Emma.
There are advantages and disadvantages to examining the tf-idf of bigrams rather than individual words. Pairs of consecutive words might capture structure that isn’t present when one is just counting single words, and may provide context that makes tokens more understandable (for example, “pulteney street”, in Northanger Abbey, is more informative than “pulteney”). However, the per-bigram counts are also sparser: a typical two-word pair is rarer than either of its component words. Thus, bigrams can be especially useful when you have a very large text dataset.
Our sentiment analysis approach in Chapter @ref(sentiment) simply counted the appearance of positive or negative words, according to a reference lexicon. One of the problems with this approach is that a word’s context can matter nearly as much as its presence. For example, the words “happy” and “like” will be counted as positive, even in a sentence like “I’m not happy and I don’t like it!”
Now that we have the data organized into bigrams, it’s easy to tell how often words are preceded by a word like “not”:
bigrams_separated %>%
filter(word1 == "not") %>%
count(word1, word2, sort = TRUE)
By performing sentiment analysis on the bigram data, we can examine how often sentiment-associated words are preceded by “not” or other negating words. We could use this to ignore or even reverse their contribution to the sentiment score.
Let’s use the AFINN lexicon for sentiment analysis, which you may recall gives a numeric sentiment score for each word, with positive or negative numbers indicating the direction of the sentiment.
AFINN <- get_sentiments("afinn")
AFINN
We can then examine the most frequent words that were preceded by “not” and were associated with a sentiment.
not_words <- bigrams_separated %>%
filter(word1 == "not") %>%
inner_join(AFINN, by = c(word2 = "word")) %>%
count(word2, score, sort = TRUE) %>%
ungroup()
not_words
For example, the most common sentiment-associated word to follow “not” was “like”, which would normally have a (positive) score of 2.
It’s worth asking which words contributed the most in the “wrong” direction. To compute that, we can multiply their score by the number of times they appear (so that a word with a score of +3 occurring 10 times has as much impact as a word with a sentiment score of +1 occurring 30 times). We visualize the result with a bar plot.
not_words %>%
mutate(contribution = n * score) %>%
arrange(desc(abs(contribution))) %>%
head(20) %>%
mutate(word2 = reorder(word2, contribution)) %>%
ggplot(aes(word2, n * score, fill = n * score > 0)) +
geom_col(show.legend = FALSE) +
xlab("Words preceded by \"not\"") +
ylab("Sentiment score * number of occurrences") +
coord_flip()
The bigrams “not like” and “not help” were overwhelmingly the largest causes of misidentification, making the text seem much more positive than it is. But we can see phrases like “not afraid” and “not fail” sometimes suggest text is more negative than it is.
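As a follow-up to the idea of reversing a negated word’s contribution, here is a hedged sketch (my own illustration, not the book’s code): flip the AFINN score whenever a sentiment word is preceded by one of a few negation words, then total the adjusted scores per book.
negation_words <- c("not", "no", "never", "without")
bigrams_separated %>%
  inner_join(AFINN, by = c(word2 = "word")) %>%
  mutate(score = ifelse(word1 %in% negation_words, -score, score)) %>%  # reverse negated words
  group_by(book) %>%
  summarize(sentiment = sum(score))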
Please refer to the related chapter for more about negation words, such as “not”, “no”, “never”, and “without”.
We may be interested in visualizing all of the relationships among words simultaneously, rather than just the top few at a time. As one common visualization, we can arrange the words into a network, or “graph.” Here we’ll be referring to a “graph” not in the sense of a visualization, but as a combination of connected nodes. A graph can be constructed from a tidy object since it has three variables: from (the node an edge is coming from), to (the node an edge is going towards), and weight (a numeric value associated with each edge, in this case n).
The igraph package has many powerful functions for manipulating and analyzing networks. One way to create an igraph object from tidy data is the graph_from_data_frame() function, which takes a data frame of edges with columns for “from”, “to”, and edge attributes (in this case n):
library(igraph)
# original counts
bigram_counts
# filter for only relatively common combinations
bigram_graph <- bigram_counts %>%
filter(n > 20) %>%
graph_from_data_frame()
bigram_graph
IGRAPH dce38b3 DN-- 91 77 --
+ attr: name (v/c), n (e/n)
+ edges from dce38b3 (vertex names):
[1] sir ->thomas miss ->crawford captain ->wentworth miss ->woodhouse frank ->churchill lady ->russell
[7] lady ->bertram sir ->walter miss ->fairfax colonel ->brandon miss ->bates lady ->catherine
[13] sir ->john jane ->fairfax miss ->tilney lady ->middleton miss ->bingley thousand ->pounds
[19] miss ->dashwood miss ->bennet john ->knightley miss ->morland captain ->benwick dear ->miss
[25] miss ->smith miss ->crawford's henry ->crawford miss ->elliot dr ->grant miss ->bertram
[31] sir ->thomas's ten ->minutes miss ->price miss ->taylor sir ->william john ->dashwood
[37] de ->bourgh dear ->sir dear ->fanny miss ->darcy mansfield->park captain ->harville
[43] charles ->hayter dear ->emma maple ->grove lady ->russell's miss ->steeles cried ->emma
+ ... omitted several edges
The internals of the igraph package might be overwhelming. Fortunately, there is a tidy solution for graph analysis: the tidygraph package can represent the graph in tibble format. More than that, the graph nodes or edges can be manipulated with dplyr verbs. There are also some very interesting additional verbs which are worth checking; a small example is sketched after the import examples below.
The same import into a graph can be achieved with tidygraph as well. You can either import an existing igraph object or generate a graph from a data frame.
Importing an existing igraph object
as_tbl_graph(bigram_graph)
# A tbl_graph: 91 nodes and 77 edges
#
# A directed acyclic simple graph with 17 components
#
# Node Data: 91 x 1 (active)
name
<chr>
1 sir
2 miss
3 captain
4 frank
5 lady
6 colonel
# ... with 85 more rows
#
# Edge Data: 77 x 3
from to n
<int> <int> <int>
1 1 28 287
2 2 29 215
3 3 30 170
# ... with 74 more rows
tbl_graph from a data frame
bigram_counts %>%
filter(n > 20) %>%
as_tbl_graph()
# A tbl_graph: 91 nodes and 77 edges
#
# A directed acyclic simple graph with 17 components
#
# Node Data: 91 x 1 (active)
name
<chr>
1 sir
2 miss
3 captain
4 frank
5 lady
6 colonel
# ... with 85 more rows
#
# Edge Data: 77 x 3
from to n
<int> <int> <int>
1 1 28 287
2 2 29 215
3 3 30 170
# ... with 74 more rows
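To illustrate the point above about manipulating nodes or edges with dplyr verbs, here is a small sketch (my own example, not from the book): compute a degree centrality for each word node and list the most connected ones.
as_tbl_graph(bigram_graph) %>%
  activate(nodes) %>%
  mutate(degree = centrality_degree(mode = "all")) %>%  # number of edges touching each node
  arrange(desc(degree)) %>%
  as_tibble()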
igraph has plotting functions built in, but they’re not what the package is designed to do, so many other packages have developed visualization methods for graph objects. We recommend the ggraph package, because it implements these visualizations in terms of the grammar of graphics, which we are already familiar with from ggplot2.
We can convert an igraph object into a ggraph with the ggraph function, after which we add layers to it, much like layers are added in ggplot2. For example, for a basic graph we need to add three layers: nodes, edges, and text.
library(ggraph)
set.seed(2017)
#plot(bigram_graph)
ggraph(bigram_graph, layout = "fr") +
geom_edge_link() +
geom_node_point() +
geom_node_text(aes(label = name), vjust = 1, hjust = 1)
In the figure above, we can visualize some details of the text structure. For example, we see that salutations such as “miss”, “lady”, “sir”, and “colonel” form common centers of nodes, which are often followed by names. We also see pairs or triplets along the outside that form common short phrases (“half hour”, “thousand pounds”, or “short time/pause”).
We conclude with a few polishing operations to make a better looking graph:
- We add the edge_alpha aesthetic to the link layer to make links transparent based on how common or rare the bigram is
- We add directionality with an arrow, constructed using grid::arrow(), including an end_cap option that tells the arrow to end before touching the node
- We add a theme that’s useful for plotting networks, theme_void()

set.seed(2016)
a <- grid::arrow(type = "closed", length = unit(.15, "inches"))
ggraph(bigram_graph, layout = "fr") +
geom_edge_link(aes(edge_alpha = n), show.legend = FALSE,
arrow = a, end_cap = circle(.07, 'inches')) +
geom_node_point(color = "lightblue", size = 5) +
geom_node_text(aes(label = name), vjust = 1, hjust = 1) +
theme_void()
It may take some experimentation with ggraph to get your networks into a presentable format like this, but the network structure is a useful and flexible way to visualize relational tidy data.
Note that this is a visualization of a Markov chain, a common model in text processing. In a Markov chain, each choice of word depends only on the previous word. In this case, a random generator following this model might spit out “dear”, then “sir”, then “william/walter/thomas/thomas’s”, by following each word to the most common words that follow it. To make the visualization interpretable, we chose to show only the most common word to word connections, but one could imagine an enormous graph representing all connections that occur in the text.
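To make the Markov chain idea concrete, here is a toy sketch (purely illustrative, not from the book): starting from “dear”, repeatedly jump to the most common word that follows the current word in bigram_counts.
next_word <- function(w) {
  bigram_counts %>%
    filter(word1 == w) %>%
    slice_max(n, n = 1, with_ties = FALSE) %>%
    pull(word2)
}
phrase <- "dear"
for (i in 1:3) phrase <- c(phrase, next_word(tail(phrase, 1)))
paste(phrase, collapse = " ")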
Please refer to the related section in the Text Mining book, which analyzes bigrams in the Bible and generates the following diagram.
King James version Bible word pair network
The figure above thus lays out a common “blueprint” of language within the Bible, particularly focused around “thy” and “thou” (which could probably be considered stopwords!). You can use the gutenbergr package and the count_bigrams/visualize_bigrams functions from that section to visualize bigrams in other classic books you’re interested in.
Tokenizing by n-gram is a useful way to explore pairs of adjacent words. However, we may also be interested in words that tend to co-occur within particular documents or particular chapters, even if they don’t occur next to each other.
Tidy data is a useful structure for comparing between variables or grouping by rows, but it can be challenging to compare between rows: for example, to count the number of times that two words appear within the same document, or to see how correlated they are. Most operations for finding pairwise counts or correlations need to turn the data into a wide matrix first.
widyr package
We’ll examine some of the ways tidy text can be turned into a wide matrix, but in this case it isn’t necessary. The widyr package makes operations such as computing counts and correlations easy, by simplifying the pattern of “widen data, perform an operation, then re-tidy data” (Figure above). We’ll focus on a set of functions that make pairwise comparisons between groups of observations (for example, between documents, or sections of text).
Consider the book “Pride and Prejudice” divided into 10-line sections, as we did (with larger sections) in the sentiment analysis chapter. We may be interested in what words tend to appear within the same section.
austen_section_words <- austen_books() %>%
filter(book == "Pride & Prejudice") %>%
mutate(section = row_number() %/% 10) %>%
filter(section > 0) %>%
unnest_tokens(word, text) %>%
filter(!word %in% stop_words$word)
austen_section_words
One useful function from widyr is the pairwise_count() function. The prefix pairwise_ means it will result in one row for each pair of words in the word variable. This lets us count common pairs of words co-appearing within the same section:
library(widyr)
# count words co-occurring within sections
word_pairs <- austen_section_words %>%
pairwise_count(word, section, sort = TRUE)
word_pairs
Notice that while the input had one row for each pair of a document (a 10-line section) and a word, the output has one row for each pair of words. This is also a tidy format, but of a very different structure that we can use to answer new questions.
For example, we can see that the most common pair of words in a section is “Elizabeth” and “Darcy” (the two main characters). We can easily find the words that most often occur with Darcy:
word_pairs %>%
filter(item1 == "darcy")
Pairs like “Elizabeth” and “Darcy” are the most common co-occurring words, but that’s not particularly meaningful since they’re also the most common individual words. We may instead want to examine correlation among words, which indicates how often they appear together relative to how often they appear separately.
In particular, here we’ll focus on the phi coefficient, a common measure for binary correlation. The focus of the phi coefficient is how much more likely it is that either both word X and Y appear, or neither do, than that one appears without the other.
Consider the following table:
|  | Has word Y | No word Y | Total |
|---|---|---|---|
| Has word X | \(n_{11}\) | \(n_{10}\) | \(n_{1\cdot}\) |
| No word X | \(n_{01}\) | \(n_{00}\) | \(n_{0\cdot}\) |
| Total | \(n_{\cdot 1}\) | \(n_{\cdot 0}\) | \(n\) |
For example, \(n_{11}\) represents the number of documents where both word X and word Y appear, \(n_{00}\) the number where neither appears, and \(n_{10}\) and \(n_{01}\) the cases where one appears without the other. In terms of this table, the phi coefficient is:
\[\phi=\frac{n_{11}n_{00}-n_{10}n_{01}}{\sqrt{n_{1\cdot}n_{0\cdot}n_{\cdot0}n_{\cdot1}}}\]
The phi coefficient is equivalent to the Pearson correlation, which you may have heard of elsewhere, when it is applied to binary data.
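As a rough sanity check of that equivalence (my own illustration, not the book’s code), we can compute the value for one pair by hand: build a presence/absence indicator per section (among sections that appear in austen_section_words) for “elizabeth” and for “darcy”, and take their Pearson correlation; the result should be close to what pairwise_cor() reports below.
elizabeth_sections <- austen_section_words %>% filter(word == "elizabeth") %>% pull(section)
darcy_sections     <- austen_section_words %>% filter(word == "darcy") %>% pull(section)
austen_section_words %>%
  distinct(section) %>%
  mutate(has_elizabeth = as.numeric(section %in% elizabeth_sections),
         has_darcy     = as.numeric(section %in% darcy_sections)) %>%
  summarize(phi = cor(has_elizabeth, has_darcy))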
The pairwise_cor() function in widyr lets us find the phi coefficient between words based on how often they appear in the same section. Its syntax is similar to pairwise_count().
# we need to filter for at least relatively common words first
word_cors <- austen_section_words %>%
group_by(word) %>%
filter(n() >= 20) %>% # try lower numbers and see what happens
pairwise_cor(word, section, sort = TRUE)
word_cors
This output format is helpful for exploration. For example, we could find the words most correlated with a word like “pounds” using a filter operation.
word_cors %>%
filter(item1 == "pounds")
This lets us pick particular interesting words and find the other words most associated with them (Figure @ref(fig:wordcors)).
word_cors %>%
filter(item1 %in% c("elizabeth", "pounds", "married", "pride")) %>%
group_by(item1) %>%
top_n(6) %>%
ungroup() %>%
mutate(item2 = reorder(item2, correlation)) %>%
ggplot(aes(item2, correlation)) +
geom_bar(stat = "identity") +
facet_wrap(~ item1, scales = "free") +
coord_flip()
Selecting by correlation
Just as we used ggraph to visualize bigrams, we can use it to visualize the correlations and clusters of words that were found by the widyr package (Figure @ref(fig:wordcorsnetwork)).
set.seed(2016)
word_cors %>%
filter(correlation > .15) %>%
graph_from_data_frame() %>%
ggraph(layout = "fr") +
geom_edge_link(aes(edge_alpha = correlation), show.legend = FALSE) +
geom_node_point(color = "lightblue", size = 5) +
geom_node_text(aes(label = name), repel = TRUE) +
theme_void()
Note that unlike the bigram analysis, the relationships here are symmetrical, rather than directional (there are no arrows). We can also see that while pairings of names and titles that dominated bigram pairings are common, such as “colonel/fitzwilliam”, we can also see pairings of words that appear close to each other, such as “walk” and “park”, or “dance” and “ball”.
This chapter showed how the tidy text approach is useful not only for analyzing individual words, but also for exploring the relationships and connections between words. Such relationships can involve n-grams, which enable us to see what words tend to appear after others, or co-occurrences and correlations, for words that appear in proximity to each other. This chapter also demonstrated the ggraph package for visualizing both of these types of relationships as networks. These network visualizations are a flexible tool for exploring relationships, and will play an important role in the case studies in later chapters.
Here’s the overview of the packages:
Text Analysis Flowchart
The structure of Document Term Matrix
library(tm)
data("AssociatedPress", package = "topicmodels")
AssociatedPress
## <<DocumentTermMatrix (documents: 2246, terms: 10473)>>
## Non-/sparse entries: 302031/23220327
## Sparsity : 99%
## Maximal term length: 18
## Weighting : term frequency (tf)
A 99% sparse matrix is converted to a tidy table (only non-zero values are used)
library(dplyr)
library(tidytext)
ap_td <- tidy(AssociatedPress)
ap_td
## # A tibble: 302,031 × 3
## document term count
## <int> <chr> <dbl>
## 1 1 adding 1
## 2 1 adult 2
## 3 1 ago 1
## 4 1 alcohol 1
## 5 1 allegedly 1
## 6 1 allen 1
## 7 1 apparently 2
## 8 1 appeared 1
## 9 1 arrested 1
## 10 1 assault 1
## # ... with 302,021 more rows
ap_td %>%
cast_dtm(document, term, count)
## <<DocumentTermMatrix (documents: 2246, terms: 10473)>>
## Non-/sparse entries: 302031/23220327
## Sparsity : 99%
## Maximal term length: 18
## Weighting : term frequency (tf)
Some tools simply require a sparse matrix:
library(Matrix)
# cast into a Matrix object
m <- ap_td %>%
cast_sparse(document, term, count)
class(m)
## [1] "dgCMatrix"
## attr(,"package")
## [1] "Matrix"
dim(m)
## [1] 2246 10473
An example from Jane Austen’s books
library(janeaustenr)
austen_dtm <- austen_books() %>%
unnest_tokens(word, text) %>%
count(book, word) %>%
cast_dtm(book, word, n)
austen_dtm
<<DocumentTermMatrix (documents: 6, terms: 14520)>>
Non-/sparse entries: 40379/46741
Sparsity : 54%
Maximal term length: 19
Weighting : term frequency (tf)
Please refer to Chapter 5 contents for more examples.
Text Analysis Flowchart with Topic Modeling
Latent Dirichlet allocation is one of the most common algorithms for topic modeling. Without diving into the math behind the model, we can understand it as being guided by two principles:
- Every document is a mixture of topics.
- Every topic is a mixture of words.
LDA is a mathematical method for estimating both of these at the same time: finding the mixture of words that is associated with each topic, while also determining the mixture of topics that describes each document.
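As a small taste before Chapter 6, here is a minimal sketch (my own example, reusing the AssociatedPress document-term matrix loaded above; the choice of k = 2 and the seed are arbitrary) that fits a two-topic LDA model and tidies the per-topic word probabilities.
library(topicmodels)
# fit a 2-topic LDA model on the AssociatedPress DTM
ap_lda <- LDA(AssociatedPress, k = 2, control = list(seed = 1234))
# per-topic-per-word probabilities (beta), top 10 words for each topic
tidy(ap_lda, matrix = "beta") %>%
  group_by(topic) %>%
  slice_max(beta, n = 10) %>%
  ungroup()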
Please refer to Chapter 6 contents for examples.
Please refer to Julia Silge’s blog posts, where she deep-dives into very nice concepts:
- King - Man + Woman = Queen
- the analogy() function in the second post!

To be added soon..
# install
# source("https://bioconductor.org/biocLite.R")
# biocLite("BSgenome.Ecoli.NCBI.20080805")
library(BSgenome.Ecoli.NCBI.20080805)
library(biobroom)
eco = Ecoli$NC_008563
length(eco)
ecoli <- data_frame(organism="ecoli", seq=as.character(eco))
# all overlapping 8-mers ("words") as a plain character vector
words <- substring(as.character(eco), 1:(length(eco) - 8 + 1), 8:length(eco))

# the same idea with tidy tools: one row per 8-mer, then count them
ecoli %>%
  rowwise() %>%
  mutate(word = list(substring(seq, 1:(nchar(seq) - 8 + 1), 8:nchar(seq)))) %>%
  ungroup() %>%
  select(-seq) %>%
  unnest(word) %>%
  count(word, sort = TRUE)
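To come back to the earlier question about Zipf’s law in DNA sequences, here is a rough follow-up sketch of mine: rank the 8-mer counts and plot rank against frequency on log-log axes, as we did for the novels.
tibble(word = words) %>%          # `words` is the 8-mer vector built above
  count(word, sort = TRUE) %>%
  mutate(rank = row_number(),
         freq = n / sum(n)) %>%
  ggplot(aes(rank, freq)) +
  geom_line() +
  scale_x_log10() +
  scale_y_log10()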
It will be about the calculation of correlations in text analysis.