The goal of this project is simply to show that you have become comfortable working with the data and that you are on track to create your prediction algorithm. Please submit a report on R Pubs (http://rpubs.com/) that explains your exploratory analysis and your goals for the eventual app and algorithm. This document should be concise, explaining only the major features of the data you have identified and briefly summarizing your plans for the prediction algorithm and Shiny app in a way that a non-data-scientist manager could understand. You should make use of tables and plots to illustrate important summaries of the data set.
The motivation for this project is to:
- Demonstrate that you've downloaded the data and have successfully loaded it in.
- Create a basic report of summary statistics about the data sets.
- Report any interesting findings that you amassed so far.
- Get feedback on your plans for creating a prediction algorithm and Shiny app.
blog_con <- file("./en_US/en_US.blogs.txt");blog <- readLines(blog_con)
news_con <- file("./en_US/en_US.news.txt");news <- readLines(news_con)
twitter_con <- file("./en_US/en_US.twitter.txt");twitter <- readLines(twitter_con)
# Toword: split text into individual words, treating any
# non-letter character as a delimiter
Toword <- function(t){
  a <- unlist(strsplit(t, split = "[^a-zA-Z]"))
  b <- a[a != ""]                 # drop empty tokens left by the split
  if (length(b) == 0){ b <- NULL }
  return(b)
}
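Note that because every non-letter character is treated as a word boundary, contractions break into separate tokens; this is why entries like "don t" and "it s" show up in the 2-gram table later on. For example (the sentence is just an illustration):

Toword("Don't stop believing!")
# [1] "Don"       "t"         "stop"      "believing"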
library(knitr)
library(kableExtra)
wordblog <- Toword(blog); wordnews <- Toword(news); wordtwitter <- Toword(twitter)
dataCounts <- data.frame(lineCounts = c(length(blog), length(news), length(twitter)),
                         wordCounts = c(length(wordblog), length(wordnews), length(wordtwitter)))
rownames(dataCounts) <- c("en_blog", "en_news", "en_twitter")
kable(dataCounts) %>%
  kable_styling(bootstrap_options = "striped", full_width = F)
|            | lineCounts | wordCounts |
|------------|-----------:|-----------:|
| en_blog    |     899288 |   37880273 |
| en_news    |    1010242 |   34616527 |
| en_twitter |    2360148 |   30557099 |
Build a sample of the data for further analysis
set.seed(321)
# keep each blog line with probability 10000/length(blog),
# i.e. about 10,000 lines in expectation
rbi <- rbinom(length(blog), 1, 10000/length(blog))
dat <- blog[rbi == 1]  # the data used in the following analysis
close(blog_con); close(news_con); close(twitter_con)  # close() takes one connection at a time
rm(blog_con, news_con, twitter_con, rbi,
   blog, news, twitter, wordblog, wordnews, wordtwitter)
By sampling, I keep roughly 10,000 random lines of the blog data for the studies that follow.
To explore the data, I use three functions: ToNword converts a sentence into n-grams; FreqNtb returns a table of n-gram counts per sentence, which is an efficient way to store the data; and FreqDf generates a data frame of each n-gram's frequency, both by sentence and in total.
# ToNword: convert a sentence into its n-grams
ToNword <- function(t, n){
  a <- Toword(t)
  b <- c()
  if (length(a) >= n){
    for (i in 1:(length(a) - n + 1)){
      d <- c()
      for (j in 1:n){
        d <- paste(d, a[i + j - 1])  # build the i-th n-gram word by word
      }
      b <- c(b, d)
    }
    b <- gsub("^ ", "", b)           # strip the leading space added by paste()
  }
  if (length(b) == 0){ b <- NULL }
  return(b)
}
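For example, ToNword returns every run of n consecutive words (the sentence here is again just an illustration):

ToNword("thanks for the follow", 2)
# [1] "thanks for" "for the"    "the follow"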
# FreqNtb: build an n-gram-by-sentence contingency table
FreqNtb <- function(dat, n){
  datgram <- c()
  sen <- c()
  for (i in 1:length(dat)){
    a <- ToNword(dat[i], n)
    sen <- c(sen, rep(i, length(a)))  # record which sentence each n-gram came from
    datgram <- c(datgram, a)
  }
  gramf <- data.frame(nGram = datgram, sentence = sen)
  return(table(gramf))
}
# FreqDf: per-n-gram frequencies, by sentence and in total
FreqDf <- function(dat, n){
  gramf <- FreqNtb(dat, n)
  freqdf <- data.frame(nGram = rownames(gramf),
                       sentFreq = rowSums(gramf != 0),  # sentences containing the n-gram
                       freq = rowSums(gramf))           # total occurrences
  freqdf <- freqdf[order(freqdf$sentFreq, freqdf$freq, decreasing = TRUE), ]
  freqdf$freqOrder <- 1:nrow(freqdf)
  rownames(freqdf) <- NULL
  return(freqdf)
}
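A quick check on a hypothetical two-sentence toy corpus shows the output structure: "the" and "dog" occur in both sentences, so their sentFreq is 2, and rows are sorted by sentFreq and then freq.

toy <- c("the cat and the dog", "the dog barks")
FreqDf(toy, 1)
#   nGram sentFreq freq freqOrder
# 1   the        2    3         1
# 2   dog        2    2         2
# 3   and        1    1         3
# 4 barks        1    1         4
# 5   cat        1    1         5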
library(ggplot2)
library(gridExtra)
freq1w <- FreqDf(dat, 1)
p1w <- ggplot(freq1w[freq1w$sentFreq > 100, ], aes(x = freqOrder, y = freq)) + geom_col() +
  xlab("Top words order") + ylab("Frequency") + ggtitle("1-Gram Frequency")
freq2w <- FreqDf(dat, 2)
p2w <- ggplot(freq2w[freq2w$sentFreq > 20, ], aes(x = freqOrder, y = freq)) + geom_col() +
  xlab("Top words order") + ylab("Frequency") + ggtitle("2-Gram Frequency")
grid.arrange(p1w, p2w, nrow = 1)
In the figure above, the left panel shows the frequencies of single words that occurred in more than 100 sentences of the data: the x-axis is the frequency rank of the words, and the y-axis is each word's total frequency in the sampled data. The right panel shows the same for 2-grams. The following table lists the top 12 1-grams and 2-grams, with the number of sentences each appeared in (sentFreq) and its total frequency (freq) across the whole sample.
# Side-by-side table of the top 12 1-grams and 2-grams
top12 <- cbind(freq1w[1:12, 1:3], freq2w[1:12, 1:3])
colnames(top12) <- c(paste(names(freq1w)[1:3], "1gram", sep = "."),
                     paste(names(freq2w)[1:3], "2gram", sep = "."))
kable(top12) %>%
  kable_styling(bootstrap_options = "striped", full_width = F)
| nGram.1gram | sentFreq.1gram | freq.1gram | nGram.2gram | sentFreq.2gram | freq.2gram |
|---|---|---|---|---|---|
| the | 5878 | 18703 | of the | 1538 | 2047 |
| and | 5123 | 11488 | in the | 1287 | 1605 |
| to | 5082 | 11895 | to the | 833 | 955 |
| a | 4536 | 9765 | to be | 687 | 787 |
| of | 4493 | 9709 | on the | 665 | 756 |
| in | 3589 | 6222 | I m | 584 | 754 |
| I | 3557 | 10105 | for the | 566 | 619 |
| that | 3014 | 5239 | and the | 511 | 574 |
| is | 2876 | 4755 | don t | 488 | 576 |
| for | 2648 | 3823 | and I | 487 | 551 |
| it | 2576 | 4359 | it s | 475 | 585 |
| s | 2342 | 3629 | at the | 473 | 519 |
In this report, I loaded the capstone data set, produced basic summary statistics, sampled a subset of the data for further study, and carried out exploratory analysis in preparation for the prediction step.
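As a rough sketch of where this is heading (a minimal illustration only, not the final algorithm; predictNext is a hypothetical helper that reuses the freq2w table built above), the 2-gram frequencies already support a naive next-word lookup:

# Hypothetical sketch: predict the next word from the most frequent
# 2-gram that starts with the given word
predictNext <- function(word, freq2 = freq2w){
  # freq2w is already sorted by sentFreq and freq, so the first
  # 2-gram matching "word ..." is the most frequent one
  hits <- freq2[grepl(paste0("^", word, " "), freq2$nGram), ]
  if (nrow(hits) == 0) return(NA_character_)
  strsplit(as.character(hits$nGram[1]), " ")[[1]][2]  # second word of the 2-gram
}
predictNext("of")  # "the", since "of the" tops the 2-gram table above

The eventual algorithm will need smoothing and back-off to shorter n-grams, but this is the basic lookup the Shiny app would wrap.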