1. Instructions

The goal of this project is just to display that you've gotten used to working with the data and that you are on track to create your prediction algorithm. Please submit a report on R Pubs (http://rpubs.com/) that explains your exploratory analysis and your goals for the eventual app and algorithm. This document should be concise and explain only the major features of the data you have identified and briefly summarize your plans for creating the prediction algorithm and Shiny app in a way that would be understandable to a non-data scientist manager. You should make use of tables and plots to illustrate important summaries of the data set. The motivation for this project is to:

1. Demonstrate that you've downloaded the data and have successfully loaded it in.
2. Create a basic report of summary statistics about the data sets.
3. Report any interesting findings that you amassed so far.
4. Get feedback on your plans for creating a prediction algorithm and Shiny app.

2. Loading libraries

library(quanteda)
## Package version: 2.1.2
## Parallel computing: 2 of 4 threads used.
## See https://quanteda.io for tutorials and examples.
## 
## Attaching package: 'quanteda'
## The following object is masked from 'package:utils':
## 
##     View
library(ggplot2)
library(ngram)

3. Data processing

3.1 Reading the data

setwd("/Users/otyfrank/Documents/Data Science Statistics and Machine Learning Specialization/10 Data Science Capstone/final/en_US")

blogs <- readLines("en_US.blogs.txt", skipNul = TRUE)
news <- readLines("en_US.news.txt", skipNul = TRUE)
twitter <- readLines("en_US.twitter.txt", skipNul = TRUE)

3.2 Summary of the data

# Calculating the number of lines per file
blogs_lines <- length(blogs)
news_lines <- length(news)
twitter_lines <- length(twitter)

# Calculating words per file
blogs_words <- wordcount(blogs, sep = " ")
news_words  <- wordcount(news,  sep = " ")
twitter_words <- wordcount(twitter, sep = " ")

# Creating summary
data_summary <- data.frame(names = c("blogs", "news", "twitter"),
                           lines = c(blogs_lines, news_lines, twitter_lines),
                           words = c(blogs_words, news_words, twitter_words))
data_summary
##     names   lines    words
## 1   blogs  899288 37334131
## 2    news 1010242 34372530
## 3 twitter 2360148 34372530

3.3 Creating a sample

# Creating samples
set.seed(12335001)
blogs_sample <- blogs[sample(length(blogs), length(blogs) * 0.02)]
news_sample <- news[sample(length(news), length(news) * 0.02)]
twitter_sample <- twitter[sample(length(twitter), length(twitter) * 0.02)]
data_sample <- c(blogs_sample, news_sample, twitter_sample)

3.4 Processing the data

# Creating corpus
data_corpus <- corpus(data_sample)

# Tokenizing
data_tokens <- tokens(data_corpus,
                      remove_punct = TRUE,
                      remove_symbols = TRUE,
                      remove_numbers = TRUE,
                      remove_url = TRUE)

# Converting to lower case
data_tokens <- tokens_tolower(data_tokens)

# Stemming
data_tokens <- tokens_wordstem(data_tokens, 
                               language = "english")

# Creating unigram, bigram and trigram tokens
tokens_uni_gram <- data_tokens
tokens_bi_gram <- tokens_ngrams(data_tokens, n = 2)
tokens_tri_gram <- tokens_ngrams(data_tokens, n = 3)

# Creating document feature matrix
uni_dfm <- dfm(tokens_uni_gram)
bi_dfm <- dfm(tokens_bi_gram)
tri_dfm <- dfm(tokens_tri_gram)

# Trimming dfms based on the features' document frequency
uni_dfm <- dfm_trim(uni_dfm, min_docfreq = 3)
bi_dfm <- dfm_trim(bi_dfm, min_docfreq = 3)
tri_dfm <- dfm_trim(tri_dfm, min_docfreq = 3)

4. Exploratory analysis

4.1 Frequencies of words

# Plotting unigrams frequencies
top_uni_dfm <- topfeatures(uni_dfm, 10)
top_uni_dfm <- sort(top_uni_dfm, decreasing = FALSE)
top_uni_dfm <- data.frame(words = names(top_uni_dfm), freq = top_uni_dfm)
ggplot(data = top_uni_dfm, aes(x = factor(words, levels = words), y = freq)) + 
        geom_bar(stat = "identity") +
        labs(x = "Unigrams", y = "Count", title = expression("Unigrams Frequencies")) +
        coord_flip()

4.2 Frequencies of 2-grams

# Plotting bigrams frequencies
top_bi_dfm <- topfeatures(bi_dfm, 10)
top_bi_dfm <- sort(top_bi_dfm, decreasing = FALSE)
top_bi_dfm <- data.frame(words = names(top_bi_dfm), freq = top_bi_dfm)
ggplot(data = top_bi_dfm, aes(x = factor(words, levels = words), y = freq)) + 
        geom_bar(stat = "identity") +
        labs(x = "Bigrams", y = "Count", title = expression("Bigrams Frequencies")) +
        coord_flip()

4.3 Frequencies of 3-grams

# Plotting trigram frequencies
top_tri_dfm <- topfeatures(tri_dfm, 10)
top_tri_dfm <- sort(top_tri_dfm, decreasing = FALSE)
top_tri_dfm <- data.frame(words = names(top_tri_dfm), freq = top_tri_dfm)
ggplot(data = top_tri_dfm, aes(x = factor(words, levels = words), y = freq)) + 
        geom_bar(stat = "identity") +
        labs(x = "Trigrams", y = "Count", title = expression("Trigrams Frequencies")) +
        coord_flip()

5. Plans for creating a prediction algorithm and Shiny app

I am planning to build an n-gram model that predicts the next word based on the previous one, two or three words. If the previous words do not appear in the corpora, I plan to fall back on the word with the highest overall frequency.
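As a rough illustration of the back-off idea, the sketch below assumes frequency tables derived from the trimmed document-feature matrices above (for example via quanteda's textstat_frequency()), with the parts of each n-gram joined by "_" as tokens_ngrams() does. The helper predict_next_word() is hypothetical, and in practice the input phrase would need the same lower-casing and stemming applied to the tokens.

# Assumed frequency tables (columns: feature, frequency), e.g.:
# tri_freq <- textstat_frequency(tri_dfm)
# bi_freq  <- textstat_frequency(bi_dfm)
# uni_freq <- textstat_frequency(uni_dfm)

predict_next_word <- function(phrase, tri_freq, bi_freq, uni_freq) {
        words <- tolower(unlist(strsplit(trimws(phrase), "\\s+")))
        n <- length(words)

        # Try the trigram table first: match the last two words as the prefix
        if (n >= 2) {
                prefix <- paste(words[n - 1], words[n], sep = "_")
                hits <- tri_freq[startsWith(tri_freq$feature, paste0(prefix, "_")), ]
                if (nrow(hits) > 0) {
                        return(sub(".*_", "", hits$feature[which.max(hits$frequency)]))
                }
        }

        # Back off to the bigram table: match only the last word
        if (n >= 1) {
                hits <- bi_freq[startsWith(bi_freq$feature, paste0(words[n], "_")), ]
                if (nrow(hits) > 0) {
                        return(sub(".*_", "", hits$feature[which.max(hits$frequency)]))
                }
        }

        # Final fallback: the single most frequent unigram overall
        uni_freq$feature[which.max(uni_freq$frequency)]
}

# Example call (hypothetical):
# predict_next_word("one of the", tri_freq, bi_freq, uni_freq)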

My Shiny app will be very simple: a text box where the user can type the first words, a submit button, and the predicted word displayed next to the input text box.
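A minimal sketch of that layout, assuming the hypothetical predict_next_word() helper and the frequency tables from the sketch above are available in the app's environment, could look like this:

library(shiny)

ui <- fluidPage(
        titlePanel("Next Word Prediction"),
        textInput("phrase", "Type the first words:"),
        actionButton("submit", "Predict"),
        textOutput("prediction")
)

server <- function(input, output) {
        output$prediction <- renderText({
                input$submit   # recompute only when the button is pressed
                isolate(predict_next_word(input$phrase,
                                          tri_freq, bi_freq, uni_freq))
        })
}

shinyApp(ui = ui, server = server)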