Overview

In Text Mining with R, Chapter 2 looks at Sentiment Analysis. In this assignment, you should start by getting the primary example code from chapter 2 working in an R Markdown document. You should provide a citation to this base code. You’re then asked to extend the code in two ways:

• Work with a different corpus of your choosing, and

• Incorporate at least one additional sentiment lexicon (possibly from another R package that you’ve found through research).

You can find the RPubs file here.

Set the environment

library(janeaustenr)
library(gutenbergr)
library(stringr)
library(tidytext)
library(textdata)
library(jsonlite)
library(tidyverse)
library(wordcloud)
library(reshape2)
get_sentiments("afinn")
## # A tibble: 2,477 x 2
##    word       value
##    <chr>      <dbl>
##  1 abandon       -2
##  2 abandoned     -2
##  3 abandons      -2
##  4 abducted      -2
##  5 abduction     -2
##  6 abductions    -2
##  7 abhor         -3
##  8 abhorred      -3
##  9 abhorrent     -3
## 10 abhors        -3
## # … with 2,467 more rows
get_sentiments("bing")
## # A tibble: 6,786 x 2
##    word        sentiment
##    <chr>       <chr>    
##  1 2-faces     negative 
##  2 abnormal    negative 
##  3 abolish     negative 
##  4 abominable  negative 
##  5 abominably  negative 
##  6 abominate   negative 
##  7 abomination negative 
##  8 abort       negative 
##  9 aborted     negative 
## 10 aborts      negative 
## # … with 6,776 more rows
get_sentiments("nrc")
## # A tibble: 13,901 x 2
##    word        sentiment
##    <chr>       <chr>    
##  1 abacus      trust    
##  2 abandon     fear     
##  3 abandon     negative 
##  4 abandon     sadness  
##  5 abandoned   anger    
##  6 abandoned   fear     
##  7 abandoned   negative 
##  8 abandoned   sadness  
##  9 abandonment anger    
## 10 abandonment fear     
## # … with 13,891 more rows

Jane Austen dataset

tidy_books <- austen_books() %>%
  group_by(book) %>%
  mutate(linenumber = row_number(),          # line number within each book
         chapter = cumsum(str_detect(text,
                                     regex("^chapter [\\divxlc]",
                                           ignore_case = TRUE)))) %>%  # running chapter count
  ungroup() %>%
  unnest_tokens(word, text)                  # one word per row

Filter the NRC lexicon for joy words, then count their occurrences in Emma

nrc_joy <- get_sentiments("nrc") %>%
  filter(sentiment == "joy")

tidy_books %>%
  filter(book == "Emma") %>%
  inner_join(nrc_joy) %>%
  count(word, sort = TRUE)
## Joining, by = "word"
## # A tibble: 303 x 2
##    word        n
##    <chr>   <int>
##  1 good      359
##  2 young     192
##  3 friend    166
##  4 hope      143
##  5 happy     125
##  6 love      117
##  7 deal       92
##  8 found      92
##  9 present    89
## 10 kind       82
## # … with 293 more rows

Count the positive and negative words

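The code chunk for this step is not shown in the rendered output; the following sketch reproduces the corresponding Chapter 2 code from Text Mining with R, counting bing positive and negative words in blocks of 80 lines (the object name jane_austen_sentiment follows the book):

jane_austen_sentiment <- tidy_books %>%
  inner_join(get_sentiments("bing")) %>%                 # attach positive/negative labels
  count(book, index = linenumber %/% 80, sentiment) %>%  # counts per 80-line block
  spread(sentiment, n, fill = 0) %>%                     # one column per sentiment
  mutate(sentiment = positive - negative)                # net sentiment per block

ggplot(jane_austen_sentiment, aes(index, sentiment, fill = book)) +
  geom_col(show.legend = FALSE) +
  facet_wrap(~book, ncol = 2, scales = "free_x")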
## Joining, by = "word"

Comparing three sentiment dictionaries

pride_prejudice <- tidy_books %>%
  filter(book == "Pride & Prejudice")

pride_prejudice
## # A tibble: 122,204 x 4
##    book              linenumber chapter word     
##    <fct>                  <int>   <int> <chr>    
##  1 Pride & Prejudice          1       0 pride    
##  2 Pride & Prejudice          1       0 and      
##  3 Pride & Prejudice          1       0 prejudice
##  4 Pride & Prejudice          3       0 by       
##  5 Pride & Prejudice          3       0 jane     
##  6 Pride & Prejudice          3       0 austen   
##  7 Pride & Prejudice          7       1 chapter  
##  8 Pride & Prejudice          7       1 1        
##  9 Pride & Prejudice         10       1 it       
## 10 Pride & Prejudice         10       1 is       
## # … with 122,194 more rows
afinn <- pride_prejudice %>%
  inner_join(get_sentiments("afinn")) %>%
  group_by(index = linenumber %/% 80) %>%
  summarise(sentiment = sum(value)) %>%
  mutate(method = "AFINN")
## Joining, by = "word"
## `summarise()` ungrouping output (override with `.groups` argument)
bing_and_nrc <- bind_rows(pride_prejudice %>% 
                            inner_join(get_sentiments("bing")) %>%
                            mutate(method = "Bing et al."),
                          pride_prejudice %>% 
                            inner_join(get_sentiments("nrc") %>% 
                                         filter(sentiment %in% c("positive", 
                                                                 "negative"))) %>%
                            mutate(method = "NRC")) %>%
  count(method, index = linenumber %/% 80, sentiment) %>%
  spread(sentiment, n, fill = 0) %>%
  mutate(sentiment = positive - negative)
## Joining, by = "word"
## Joining, by = "word"

To compare the three lexicons, we plot the net sentiment across the narrative:

bind_rows(afinn, 
          bing_and_nrc) %>%
  ggplot(aes(index, sentiment, fill = method)) +
  geom_col(show.legend = FALSE) +
  facet_wrap(~method, ncol = 1, scales = "free_y")

The NRC result has the fewest negative blocks and estimates a more positive sentiment for the book overall than AFINN or Bing.

bing_word_counts <- tidy_books %>%
  inner_join(get_sentiments("bing")) %>%
  count(word, sentiment, sort = TRUE) %>%
  ungroup()
## Joining, by = "word"
bing_word_counts
## # A tibble: 2,585 x 3
##    word     sentiment     n
##    <chr>    <chr>     <int>
##  1 miss     negative   1855
##  2 well     positive   1523
##  3 good     positive   1380
##  4 great    positive    981
##  5 like     positive    725
##  6 better   positive    639
##  7 enough   positive    613
##  8 happy    positive    534
##  9 love     positive    495
## 10 pleasure positive    462
## # … with 2,575 more rows

Now we can compare the negative and positive sentiments by plotting the top 10 contributing words for each.

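The plotting chunk is not shown; a sketch following the book's approach, where the bare top_n(10) produces the "Selecting by n" message below:

bing_word_counts %>%
  group_by(sentiment) %>%
  top_n(10) %>%                              # top 10 words per sentiment
  ungroup() %>%
  mutate(word = reorder(word, n)) %>%
  ggplot(aes(word, n, fill = sentiment)) +
  geom_col(show.legend = FALSE) +
  facet_wrap(~sentiment, scales = "free_y") +
  labs(y = "Contribution to sentiment", x = NULL) +
  coord_flip()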
## Selecting by n

The word "miss" dominates the negative counts, but in Austen it is mostly a title for young women rather than a negative term, so we add it to a custom stop-word list:

custom_stop_words <-
  bind_rows(tibble(word = c("miss"),
                   lexicon = c("custom")),
            stop_words)

custom_stop_words
## # A tibble: 1,150 x 2
##    word        lexicon
##    <chr>       <chr>  
##  1 miss        custom 
##  2 a           SMART  
##  3 a's         SMART  
##  4 able        SMART  
##  5 about       SMART  
##  6 above       SMART  
##  7 according   SMART  
##  8 accordingly SMART  
##  9 across      SMART  
## 10 actually    SMART  
## # … with 1,140 more rows

We generate a word cloud of the most common words, excluding stop words.

tidy_books %>%
  anti_join(stop_words) %>%
  count(word) %>%
  with(wordcloud(word, n, max.words = 100))
## Joining, by = "word"

Now we reshape the negative and positive counts into a matrix and draw a comparison cloud.

tidy_books %>%
  inner_join(get_sentiments("bing")) %>%
  count(word, sentiment, sort = TRUE) %>%
  acast(word ~ sentiment, value.var = "n", fill = 0) %>%
  comparison.cloud(
    colors = c("gray20", "gray80"),
    max.words = 100
  )
## Joining, by = "word"

New Corpus

I will use My Bondage and My Freedom, an autobiographical slave narrative written by Frederick Douglass and published in 1855. I download the text using the gutenbergr package.

Reference: https://docsouth.unc.edu/neh/douglass55/douglass55.html

count_Bondage <- gutenberg_download(202) 
## Determining mirror for Project Gutenberg from http://www.gutenberg.org/robot/harvest
## Using mirror http://aleph.gutenberg.org
count_Bondage
## # A tibble: 12,208 x 2
##    gutenberg_id text                                                            
##           <int> <chr>                                                           
##  1          202 "MY BONDAGE and MY FREEDOM"                                     
##  2          202 ""                                                              
##  3          202 "By Frederick Douglass"                                         
##  4          202 ""                                                              
##  5          202 ""                                                              
##  6          202 "By a principle essential to Christianity, a PERSON is eternall…
##  7          202 "differenced from a THING; so that the idea of a HUMAN BEING, n…
##  8          202 "excludes the idea of PROPERTY IN THAT BEING."                  
##  9          202 "--COLERIDGE"                                                   
## 10          202 ""                                                              
## # … with 12,198 more rows

Convert the data to tidy format

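The tidying chunk is not shown; a minimal sketch consistent with the output below, assuming blank lines and the front matter before Chapter I were dropped (the object name tidy_Bondage and both filters are my assumptions):

tidy_Bondage <- count_Bondage %>%
  filter(text != "") %>%                     # assumed: drop blank lines
  mutate(chapter = cumsum(str_detect(text,
                                     regex("^chapter [\\divxlc]",
                                           ignore_case = TRUE)))) %>%
  filter(chapter > 0) %>%                    # assumed: drop title page and epigraph
  mutate(linenumber = row_number()) %>%
  select(gutenberg_id, text, linenumber, chapter)

tidy_Bondage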
## # A tibble: 10,624 x 4
##    gutenberg_id text                                          linenumber chapter
##           <int> <chr>                                              <int>   <int>
##  1          202 "CHAPTER I. _Childhood_"                               1       1
##  2          202 "PLACE OF BIRTH--CHARACTER OF THE DISTRICT--…          2       1
##  3          202 "NAME--CHOPTANK RIVER--TIME OF BIRTH--GENEAL…          3       1
##  4          202 "COUNTING TIME--NAMES OF GRANDPARENTS--THEIR…          4       1
##  5          202 "ESPECIALLY ESTEEMED--\"BORN TO GOOD LUCK\"-…          5       1
##  6          202 "POTATOES--SUPERSTITION--THE LOG CABIN--ITS …          6       1
##  7          202 "CHILDREN--MY AUNTS--THEIR NAMES--FIRST KNOW…          7       1
##  8          202 "MASTER--GRIEFS AND JOYS OF CHILDHOOD--COMPA…          8       1
##  9          202 "SLAVE-BOY AND THE SON OF A SLAVEHOLDER."              9       1
## 10          202 "In Talbot county, Eastern Shore, Maryland, …         10       1
## # … with 10,614 more rows

Analysis

The most frequently used positive and negative words.

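This chunk is also not shown; a sketch that tokenizes the tidied text and plots the top bing contributors (the object name Bondage_words is my assumption):

Bondage_words <- tidy_Bondage %>%
  unnest_tokens(word, text)                  # one word per row

Bondage_words %>%
  inner_join(get_sentiments("bing")) %>%     # "Joining, by = 'word'"
  count(word, sentiment, sort = TRUE) %>%
  group_by(sentiment) %>%
  top_n(10) %>%                              # "Selecting by n"
  ungroup() %>%
  mutate(word = reorder(word, n)) %>%
  ggplot(aes(word, n, fill = sentiment)) +
  geom_col(show.legend = FALSE) +
  facet_wrap(~sentiment, scales = "free_y") +
  labs(y = "Contribution to sentiment", x = NULL) +
  coord_flip()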
## Joining, by = "word"
## Selecting by n

Chapter-wise positive and negative words

Group by chapter to get the positive and negative sentiment words. We now need the total positive and negative word counts using the bing lexicon.

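A sketch of the per-chapter tally (the join below triggers the message that follows):

Bondage_words %>%
  inner_join(get_sentiments("bing")) %>%     # "Joining, by = 'word'"
  count(chapter, sentiment) %>%              # positive/negative totals per chapter
  spread(sentiment, n, fill = 0)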
## Joining, by = "word"

Now we bin the text into blocks of ~80 lines to see where the negative sentiment concentrates; chapter 25 turns out to have the most negative sentiment.

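The exact chunk is not shown, so this is an approximation: a grouping of this shape would produce the regrouping message below.

Bondage_sentiment <- Bondage_words %>%
  inner_join(get_sentiments("bing")) %>%
  group_by(index = linenumber %/% 80, sentiment) %>%
  summarise(n = n()) %>%                     # emits the regrouping message
  spread(sentiment, n, fill = 0) %>%
  mutate(sentiment = positive - negative)    # net sentiment per block

ggplot(Bondage_sentiment, aes(index, sentiment)) +
  geom_col(show.legend = FALSE)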
## Joining, by = "word"
## `summarise()` regrouping output by 'index' (override with `.groups` argument)

Wordcloud

We check the most common words in “My Bondage and My Freedom”.

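A sketch of the word cloud, mirroring the Austen example:

Bondage_words %>%
  anti_join(stop_words) %>%                  # "Joining, by = 'word'"
  count(word) %>%
  with(wordcloud(word, n, max.words = 100))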
## Joining, by = "word"

Important words per chapter

We check which words are most important (highest tf-idf) in each of the book's 25 chapters.

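The tf-idf chunk is not shown; a sketch following the book's bind_tf_idf() workflow, consistent with the messages and the six-column tibble below (object names and the dropped total column are my assumptions):

chapter_words <- Bondage_words %>%
  count(chapter, word, sort = TRUE)          # word counts per chapter

total_words <- chapter_words %>%
  group_by(chapter) %>%
  summarise(total = sum(n))                  # emits the ungrouping message

chapter_tf_idf <- chapter_words %>%
  left_join(total_words) %>%                 # "Joining, by = 'chapter'"
  bind_tf_idf(word, chapter, n) %>%
  select(-total) %>%                         # assumed: total dropped before printing
  arrange(desc(tf_idf))

chapter_tf_idf

chapter_tf_idf %>%
  group_by(chapter) %>%
  top_n(5) %>%                               # "Selecting by tf_idf"
  ungroup() %>%
  ggplot(aes(reorder(word, tf_idf), tf_idf, fill = factor(chapter))) +
  geom_col(show.legend = FALSE) +
  facet_wrap(~chapter, scales = "free") +
  coord_flip()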
## `summarise()` ungrouping output (override with `.groups` argument)
## Joining, by = "chapter"
## # A tibble: 34,361 x 6
##    chapter word            n      tf   idf  tf_idf
##      <int> <chr>       <int>   <dbl> <dbl>   <dbl>
##  1       8 gore           19 0.00722  2.12 0.0153 
##  2       8 denby          10 0.00380  3.22 0.0122 
##  3      22 bedford        33 0.00546  1.83 0.0100 
##  4      17 covey          46 0.00956  1.02 0.00976
##  5       7 barney         10 0.00300  3.22 0.00967
##  6      16 covey          28 0.00919  1.02 0.00939
##  7      18 holidays       19 0.00336  2.53 0.00850
##  8       1 grandmother    18 0.00664  1.27 0.00845
##  9      23 collins         5 0.00235  3.22 0.00755
## 10       6 nelly          12 0.00234  3.22 0.00755
## # … with 34,351 more rows
## Selecting by tf_idf
## Warning: Ignoring unknown parameters: stat