In *Text Mining with R*, Chapter 2 looks at sentiment analysis. In this assignment, you should start by getting the primary example code from Chapter 2 working in an R Markdown document. You should provide a citation to this base code.
You're then asked to extend the code in two ways: work with a different corpus of your choosing, and incorporate at least one additional sentiment lexicon (possibly from another R package that you've found through research).
Citation: Silge, J., and Robinson, D., *Text Mining with R: A Tidy Approach*, Chapter 2, "Sentiment Analysis with Tidy Data": https://www.tidytextmining.com/sentiment.html
library(tidytext)

get_sentiments("afinn")
## # A tibble: 2,477 × 2
## word value
## <chr> <dbl>
## 1 abandon -2
## 2 abandoned -2
## 3 abandons -2
## 4 abducted -2
## 5 abduction -2
## 6 abductions -2
## 7 abhor -3
## 8 abhorred -3
## 9 abhorrent -3
## 10 abhors -3
## # ℹ 2,467 more rows
get_sentiments("bing")
## # A tibble: 6,786 × 2
## word sentiment
## <chr> <chr>
## 1 2-faces negative
## 2 abnormal negative
## 3 abolish negative
## 4 abominable negative
## 5 abominably negative
## 6 abominate negative
## 7 abomination negative
## 8 abort negative
## 9 aborted negative
## 10 aborts negative
## # ℹ 6,776 more rows
get_sentiments("nrc")
## # A tibble: 13,872 × 2
## word sentiment
## <chr> <chr>
## 1 abacus trust
## 2 abandon fear
## 3 abandon negative
## 4 abandon sadness
## 5 abandoned anger
## 6 abandoned fear
## 7 abandoned negative
## 8 abandoned sadness
## 9 abandonment anger
## 10 abandonment fear
## # ℹ 13,862 more rows
library(janeaustenr)
library(dplyr)
library(stringr)
tidy_books <- austen_books() %>%
  group_by(book) %>%
  mutate(linenumber = row_number(),
         chapter = cumsum(str_detect(text,
                                     regex("^chapter [\\divxlc]",
                                           ignore_case = TRUE)))) %>%
  ungroup() %>%
  unnest_tokens(word, text)

nrc_joy <- get_sentiments("nrc") %>%
  filter(sentiment == "joy")
tidy_books %>%
  filter(book == "Emma") %>%
  inner_join(nrc_joy) %>%
  count(word, sort = TRUE)
## Joining with `by = join_by(word)`
## # A tibble: 301 × 2
## word n
## <chr> <int>
## 1 good 359
## 2 friend 166
## 3 hope 143
## 4 happy 125
## 5 love 117
## 6 deal 92
## 7 found 92
## 8 present 89
## 9 kind 82
## 10 happiness 76
## # ℹ 291 more rows
library(tidyr)
jane_austen_sentiment <- tidy_books %>%
  inner_join(get_sentiments("bing")) %>%
  count(book, index = linenumber %/% 80, sentiment) %>%
  pivot_wider(names_from = sentiment, values_from = n, values_fill = 0) %>%
  mutate(sentiment = positive - negative)
## Joining with `by = join_by(word)`
## Warning in inner_join(., get_sentiments("bing")): Detected an unexpected many-to-many relationship between `x` and `y`.
## ℹ Row 435434 of `x` matches multiple rows in `y`.
## ℹ Row 5051 of `y` matches multiple rows in `x`.
## ℹ If a many-to-many relationship is expected, set `relationship =
## "many-to-many"` to silence this warning.
library(ggplot2)
ggplot(jane_austen_sentiment, aes(index, sentiment, fill = book)) +
  geom_col(show.legend = FALSE) +
  facet_wrap(~book, ncol = 2, scales = "free_x")
### 2.3 Comparing the three sentiment dictionaries
pride_prejudice <- tidy_books %>%
  filter(book == "Pride & Prejudice")

pride_prejudice
## # A tibble: 122,204 × 4
## book linenumber chapter word
## <fct> <int> <int> <chr>
## 1 Pride & Prejudice 1 0 pride
## 2 Pride & Prejudice 1 0 and
## 3 Pride & Prejudice 1 0 prejudice
## 4 Pride & Prejudice 3 0 by
## 5 Pride & Prejudice 3 0 jane
## 6 Pride & Prejudice 3 0 austen
## 7 Pride & Prejudice 7 1 chapter
## 8 Pride & Prejudice 7 1 1
## 9 Pride & Prejudice 10 1 it
## 10 Pride & Prejudice 10 1 is
## # ℹ 122,194 more rows
afinn <- pride_prejudice %>%
  inner_join(get_sentiments("afinn")) %>%
  group_by(index = linenumber %/% 80) %>%
  summarise(sentiment = sum(value)) %>%
  mutate(method = "AFINN")
## Joining with `by = join_by(word)`
bing_and_nrc <- bind_rows(
  pride_prejudice %>%
    inner_join(get_sentiments("bing")) %>%
    mutate(method = "Bing et al."),
  pride_prejudice %>%
    inner_join(get_sentiments("nrc") %>%
                 filter(sentiment %in% c("positive",
                                         "negative"))
    ) %>%
    mutate(method = "NRC")) %>%
  count(method, index = linenumber %/% 80, sentiment) %>%
  pivot_wider(names_from = sentiment,
              values_from = n,
              values_fill = 0) %>%
  mutate(sentiment = positive - negative)
## Joining with `by = join_by(word)`
## Joining with `by = join_by(word)`
## Warning in inner_join(., get_sentiments("nrc") %>% filter(sentiment %in% : Detected an unexpected many-to-many relationship between `x` and `y`.
## ℹ Row 215 of `x` matches multiple rows in `y`.
## ℹ Row 5178 of `y` matches multiple rows in `x`.
## ℹ If a many-to-many relationship is expected, set `relationship =
## "many-to-many"` to silence this warning.
bind_rows(afinn,
          bing_and_nrc) %>%
  ggplot(aes(index, sentiment, fill = method)) +
  geom_col(show.legend = FALSE) +
  facet_wrap(~method, ncol = 1, scales = "free_y")

get_sentiments("nrc") %>%
  filter(sentiment %in% c("positive", "negative")) %>%
  count(sentiment)
## # A tibble: 2 × 2
## sentiment n
## <chr> <int>
## 1 negative 3316
## 2 positive 2308
get_sentiments("bing") %>%
  count(sentiment)
## # A tibble: 2 × 2
## sentiment n
## <chr> <int>
## 1 negative 4781
## 2 positive 2005
bing_word_counts <- tidy_books %>%
  inner_join(get_sentiments("bing")) %>%
  count(word, sentiment, sort = TRUE) %>%
  ungroup()
## Joining with `by = join_by(word)`
## Warning in inner_join(., get_sentiments("bing")): Detected an unexpected many-to-many relationship between `x` and `y`.
## ℹ Row 435434 of `x` matches multiple rows in `y`.
## ℹ Row 5051 of `y` matches multiple rows in `x`.
## ℹ If a many-to-many relationship is expected, set `relationship =
## "many-to-many"` to silence this warning.
bing_word_counts
## # A tibble: 2,585 × 3
## word sentiment n
## <chr> <chr> <int>
## 1 miss negative 1855
## 2 well positive 1523
## 3 good positive 1380
## 4 great positive 981
## 5 like positive 725
## 6 better positive 639
## 7 enough positive 613
## 8 happy positive 534
## 9 love positive 495
## 10 pleasure positive 462
## # ℹ 2,575 more rows
bing_word_counts %>%
  group_by(sentiment) %>%
  slice_max(n, n = 10) %>%
  ungroup() %>%
  mutate(word = reorder(word, n)) %>%
  ggplot(aes(n, word, fill = sentiment)) +
  geom_col(show.legend = FALSE) +
  facet_wrap(~sentiment, scales = "free_y") +
  labs(x = "Contribution to sentiment",
       y = NULL)

custom_stop_words <- bind_rows(tibble(word = c("miss"),
                                      lexicon = c("custom")),
                               stop_words)
custom_stop_words
## # A tibble: 1,150 × 2
## word lexicon
## <chr> <chr>
## 1 miss custom
## 2 a SMART
## 3 a's SMART
## 4 able SMART
## 5 about SMART
## 6 above SMART
## 7 according SMART
## 8 accordingly SMART
## 9 across SMART
## 10 actually SMART
## # ℹ 1,140 more rows
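The output above only builds the custom list; applying it is left implicit. A sketch of my own (not in the original code) that drops "miss" before recounting the Bing sentiment words:

# Illustrative: remove custom stop words before counting sentiment words
tidy_books %>%
  anti_join(custom_stop_words, by = "word") %>%
  inner_join(get_sentiments("bing"), relationship = "many-to-many") %>%
  count(word, sentiment, sort = TRUE)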
library(wordcloud)

tidy_books %>%
  anti_join(stop_words) %>%
  count(word) %>%
  with(wordcloud(word, n, max.words = 100))
## Joining with `by = join_by(word)`

library(reshape2)
tidy_books %>%
  inner_join(get_sentiments("bing")) %>%
  count(word, sentiment, sort = TRUE) %>%
  acast(word ~ sentiment, value.var = "n", fill = 0) %>%
  comparison.cloud(colors = c("gray20", "gray80"),
                   max.words = 100)
## Joining with `by = join_by(word)`
## Warning in inner_join(., get_sentiments("bing")): Detected an unexpected many-to-many relationship between `x` and `y`.
## ℹ Row 435434 of `x` matches multiple rows in `y`.
## ℹ Row 5051 of `y` matches multiple rows in `x`.
## ℹ If a many-to-many relationship is expected, set `relationship =
## "many-to-many"` to silence this warning.
### 2.6 Looking at units beyond just words
p_and_p_sentences <- tibble(text = prideprejudice) %>%
  unnest_tokens(sentence, text, token = "sentences")

p_and_p_sentences$sentence[2]
## [1] "by jane austen"
austen_chapters <- austen_books() %>%
  group_by(book) %>%
  unnest_tokens(chapter, text, token = "regex",
                pattern = "Chapter|CHAPTER [\\dIVXLC]") %>%
  ungroup()

austen_chapters %>%
  group_by(book) %>%
  summarise(chapters = n())
## # A tibble: 6 × 2
## book chapters
## <fct> <int>
## 1 Sense & Sensibility 51
## 2 Pride & Prejudice 62
## 3 Mansfield Park 49
## 4 Emma 56
## 5 Northanger Abbey 32
## 6 Persuasion 25
bingnegative <- get_sentiments("bing") %>%
filter(sentiment == "negative")
wordcounts <- tidy_books %>%
group_by(book, chapter) %>%
summarize(words = n())## `summarise()` has grouped output by 'book'. You can override using the
## `.groups` argument.
tidy_books %>%
  semi_join(bingnegative) %>%
  group_by(book, chapter) %>%
  summarize(negativewords = n()) %>%
  left_join(wordcounts, by = c("book", "chapter")) %>%
  mutate(ratio = negativewords/words) %>%
  filter(chapter != 0) %>%
  slice_max(ratio, n = 1) %>%
  ungroup()
## Joining with `by = join_by(word)`
## `summarise()` has grouped output by 'book'. You can override using the
## `.groups` argument.
## # A tibble: 6 × 5
## book chapter negativewords words ratio
## <fct> <int> <int> <int> <dbl>
## 1 Sense & Sensibility 43 161 3405 0.0473
## 2 Pride & Prejudice 34 111 2104 0.0528
## 3 Mansfield Park 46 173 3685 0.0469
## 4 Emma 15 151 3340 0.0452
## 5 Northanger Abbey 21 149 2982 0.0500
## 6 Persuasion 4 62 1807 0.0343
For my corpus, I selected *The Beautiful and Damned* by F. Scott Fitzgerald, downloaded from Project Gutenberg via the gutenbergr package.
library(gutenbergr)
# Get metadata for all available works in the Gutenberg corpus
gutenberg_metadata <- gutenberg_works()

# Download the text of "The Beautiful and Damned" using its Gutenberg ID
beautiful_and_damned_text <- gutenberg_download(9830)
## Determining mirror for Project Gutenberg from https://www.gutenberg.org/robot/harvest
## Using mirror http://aleph.gutenberg.org
head(beautiful_and_damned_text)
## # A tibble: 6 × 2
## gutenberg_id text
## <int> <chr>
## 1 9830 "THE BEAUTIFUL AND DAMNED"
## 2 9830 ""
## 3 9830 "BY F. SCOTT FITZGERALD"
## 4 9830 ""
## 5 9830 "1922"
## 6 9830 ""
Lexicon Filtering: I filtered the Loughran sentiment lexicon to include only positive and negative words. This lexicon is used for sentiment analysis on the selected corpus.
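The filtering code itself does not appear in the rendered output; a minimal reconstruction, assuming the `loughran_posneg` name that the join below expects:

# Keep only the positive/negative entries of the Loughran lexicon
loughran_posneg <- get_sentiments("loughran") %>%
  filter(sentiment %in% c("positive", "negative"))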
Tokenization: The text is tokenized into individual words, making it easier to analyze sentiment on a word-by-word basis.
# Tokenize the text into words
beautiful_and_damned_words <- beautiful_and_damned_text %>%
  unnest_tokens(word, text)

beautiful_and_damned_words
## # A tibble: 125,941 × 2
## gutenberg_id word
## <int> <chr>
## 1 9830 the
## 2 9830 beautiful
## 3 9830 and
## 4 9830 damned
## 5 9830 by
## 6 9830 f
## 7 9830 scott
## 8 9830 fitzgerald
## 9 9830 1922
## 10 9830 _novels_
## # ℹ 125,931 more rows
Joining with Lexicon:
The tokenized words are joined with the filtered Loughran sentiment lexicon to attach a sentiment label (positive or negative) to each matching word in the corpus.
# Join with the Loughran sentiment lexicon
beautiful_and_damned_sentiments <- beautiful_and_damned_words %>%
  inner_join(loughran_posneg)
## Joining with `by = join_by(word)`

beautiful_and_damned_sentiments
## # A tibble: 3,089 × 3
## gutenberg_id word sentiment
## <int> <chr> <chr>
## 1 9830 beautiful positive
## 2 9830 great positive
## 3 9830 beautiful positive
## 4 9830 encouragement positive
## 5 9830 broken negative
## 6 9830 honor positive
## 7 9830 obscene negative
## 8 9830 exceptional positive
## 9 9830 pleasant positive
## 10 9830 attractive positive
## # ℹ 3,079 more rows
Sentiment Counting:
Count the occurrences of positive and negative words in the corpus to understand the overall sentiment distribution.
# Count the occurrences of positive and negative words
sentiment_counts <- beautiful_and_damned_sentiments %>%
  count(sentiment)

sentiment_counts
## # A tibble: 2 × 2
## sentiment n
## <chr> <int>
## 1 negative 2008
## 2 positive 1081
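As a quick way to read this distribution (not part of the original output), the counts can be converted to shares; a minimal sketch:

# Illustrative: proportion of matched words carrying each sentiment
sentiment_counts %>%
  mutate(share = n / sum(n))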
Sentiment Visualization:
The sentiment distribution is visualized as a bar plot, with each sentiment (positive/negative) shown alongside its frequency count.
# Visualize the sentiment distribution with value counts
ggplot(sentiment_counts, aes(x = sentiment, y = n, fill = sentiment)) +
  geom_bar(stat = "identity") +
  geom_text(aes(label = n), vjust = -0.5, color = "black", size = 3) + # Add value counts
  labs(title = "Sentiment Distribution in 'The Beautiful and Damned'",
       x = "Sentiment",
       y = "Frequency") +
  theme_minimal() +
  theme(legend.position = "none")

Positive Sentiment Word Cloud:
I created a word cloud visualization for words associated with positive sentiment in the corpus, showing the most frequently occurring positive words.
# Create wordclouds for positive and negative sentiments
positive_words <- beautiful_and_damned_sentiments %>%
  filter(sentiment == "positive") %>%
  count(word, sort = TRUE)

print(positive_words)
## # A tibble: 147 × 2
## word n
## <chr> <int>
## 1 good 120
## 2 great 99
## 3 beautiful 52
## 4 better 47
## 5 best 42
## 6 pleasant 26
## 7 strong 22
## 8 tremendous 22
## 9 happy 21
## 10 success 20
## # ℹ 137 more rows
# Plot wordcloud for positive sentiment; wordcloud() has no main argument,
# so the heading is added with title() instead
wordcloud(positive_words$word, positive_words$n,
          max.words = 100, scale = c(3, 0.5),
          colors = brewer.pal(8, "Dark2"),
          random.order = FALSE,
          rot.per = 0.35)
title("Wordcloud for Positive Sentiment")

Negative Sentiment Word Cloud:
Similarly, I created a word cloud visualization for words associated with negative sentiment in the corpus, showing the most frequently occurring negative words.
negative_words <- beautiful_and_damned_sentiments %>%
  filter(sentiment == "negative") %>%
  count(word, sort = TRUE)

print(negative_words)
## # A tibble: 530 × 2
## word n
## <chr> <int>
## 1 late 63
## 2 against 59
## 3 broken 40
## 4 lost 31
## 5 poor 29
## 6 question 28
## 7 bad 26
## 8 closed 24
## 9 interrupted 24
## 10 miss 22
## # ℹ 520 more rows
# Plot wordcloud for negative sentiment (again adding the heading with title())
wordcloud(negative_words$word, negative_words$n,
          max.words = 100, scale = c(3, 0.5),
          colors = brewer.pal(8, "Dark2"),
          random.order = FALSE,
          rot.per = 0.35)
title("Wordcloud for Negative Sentiment")

In this extended R Markdown document, I retained the base sentiment-analysis code for Jane Austen's texts and added an analysis of "The Beautiful and Damned" by F. Scott Fitzgerald, using the Loughran lexicon on the new corpus. The base code in this document comes from Chapter 2 of the *Text Mining with R* book.