In this analysis, we look at Twitter activity around Donald Trump's account (realDonaldTrump) during the corona pandemic. We request 10,000 English tweets mentioning both COVID-19 and realDonaldTrump, posted from March 15 to April 30, 2020, for analysis.
We use the data science software R with the tidyr, tidytext, and dplyr packages for the text analytics, and the twitteR package to connect to Twitter and download the data.
To collect data from Twitter, we created a Twitter application and followed the steps below to connect the app and download the data. After getting the data, we store it in a CSV file in the Git repository (see the sketch after the raw data below) and perform the sentiment analysis.
Create an access token in the Twitter developer portal, then copy the four keys below.
Load the necessary packages and collect the data from Twitter.
library(twitteR)
library(tidyr)
# Replace these placeholders with your own keys from the Twitter developer
# portal; real API credentials should never be published
consumer_key <- "YOUR_CONSUMER_KEY"
consumer_secret <- "YOUR_CONSUMER_SECRET"
access_token <- "YOUR_ACCESS_TOKEN"
access_secret <- "YOUR_ACCESS_SECRET"
# Now let's connect
setup_twitter_oauth(consumer_key, consumer_secret, access_token, access_secret)
## [1] "Using direct authentication"
# Collect 10,000 tweets mentioning both COVID-19 and realDonaldTrump
trump_tweets <- twitteR::searchTwitter("COVID-19 + realDonaldTrump", n = 10000, lang = "en", since = "2020-03-15", until = "2020-04-30", retryOnRateLimit = 1e2)
# Convert the list of status objects to a data frame
trump_tweets_df <- twListToDF(trump_tweets)
# Raw data
head(trump_tweets_df)
## text
## 1 @brndn_mcleod @the_resistor @realDonaldTrump The whole19-20 flu deaths at its height was 53000. That season is over… https://t.co/jJNFAAKfla
## 2 @realDonaldTrump Please, please, please, Mr. President look into the reality of COVID-19. Mortality rate is proving… https://t.co/J7FiK9F2A2
## 3 RT @FersharX: @GOP @realDonaldTrump 'I didn’t say it': Trump falsely claims he never said US could test 5m a day for Covid-19 'very soon'\nP…
## 4 @realDonaldTrump Due to COVID-19, NY has canceled its presidential primary election on 6/23. For details, visit… https://t.co/2X91KGsnL1
## 5 RT @MarshaBlackburn: \u2705@realDonaldTrump banned \ninternational flights from hotspots in January\n\n\u274cChina permitted international flights, spre…
## 6 RT @MarshaBlackburn: \u2705@realDonaldTrump banned \ninternational flights from hotspots in January\n\n\u274cChina permitted international flights, spre…
## favorited favoriteCount replyToSN created truncated
## 1 FALSE 2 brndn_mcleod 2020-04-29 23:59:57 TRUE
## 2 FALSE 0 realDonaldTrump 2020-04-29 23:59:50 TRUE
## 3 FALSE 0 <NA> 2020-04-29 23:59:48 FALSE
## 4 FALSE 1 realDonaldTrump 2020-04-29 23:59:48 TRUE
## 5 FALSE 0 <NA> 2020-04-29 23:59:43 FALSE
## 6 FALSE 0 <NA> 2020-04-29 23:59:41 FALSE
## replyToSID id replyToUID
## 1 1255446831099772929 1255648045146025992 45961528
## 2 <NA> 1255648014267342854 25073877
## 3 <NA> 1255648007837474816 <NA>
## 4 <NA> 1255648007451824130 25073877
## 5 <NA> 1255647986580893699 <NA>
## 6 <NA> 1255647979429670915 <NA>
## statusSource
## 1 <a href="http://twitter.com/download/android" rel="nofollow">Twitter for Android</a>
## 2 <a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>
## 3 <a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>
## 4 <a href="http://twitter.com/download/android" rel="nofollow">Twitter for Android</a>
## 5 <a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>
## 6 <a href="http://twitter.com/#!/download/ipad" rel="nofollow">Twitter for iPad</a>
## screenName retweetCount isRetweet retweeted longitude latitude
## 1 manniteo44 1 FALSE FALSE NA NA
## 2 miketackerman 0 FALSE FALSE NA NA
## 3 magsster1 3 TRUE FALSE NA NA
## 4 scotthiller 1 FALSE FALSE NA NA
## 5 ar0jay8 223 TRUE FALSE NA NA
## 6 akmac1111 223 TRUE FALSE NA NA
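As mentioned above, we store the extracted tweets in a CSV file in the Git repository so the analysis can be re-run without querying the API again. A minimal sketch; the file name is illustrative:
# Persist the raw extract (file name is illustrative)
write.csv(trump_tweets_df, "trump_covid_tweets.csv", row.names = FALSE)
# Later sessions can reload it instead of hitting the API:
# trump_tweets_df <- read.csv("trump_covid_tweets.csv", stringsAsFactors = FALSE)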
# Strip the HTML anchor tags from statusSource, keeping only the source name
trump_tweets_df$statusSource <- gsub("<.*?>", "",trump_tweets_df$statusSource)
# Most favorited tweets
trump_fav <- trump_tweets_df %>%
dplyr::arrange(desc(favoriteCount))
# Top 6 favorited tweets among the extracted 10000 tweets
head(trump_fav)
## text
## 1 President @realDonaldTrump is leading an unprecedented coronavirus response, and the numbers don’t lie – he has del… https://t.co/t7gpeKmyKQ
## 2 President @realDonaldTrump and I agree:\n\n\u2705Helping our fellow Americans during coronavirus \n\u2705Fund state-level respon… https://t.co/vF3dWJqiJh
## 3 Uh, unless Parscale was the genius who suggested the Lysol thing or several other looney lines @realDonaldTrump has… https://t.co/Y4o9MRLmh0
## 4 Collect all 11 coins and receive a free Covid-19 test! (Offer not valid in ‘Do-Nothing Democrat States’) all procee… https://t.co/qVXNv4d8Db
## 5 \u2705@realDonaldTrump banned \ninternational flights from hotspots in January\n\n\u274cChina permitted international flights, s… https://t.co/3wEpwmFCXS
## 6 @maddow Hey @realDonaldTrump \n\nAmerican WORKERS\nWest Point Cadets\nSchool Children\n\nShould NOT HAVE to PUT their LIV… https://t.co/rYBRV2FevQ
## favorited favoriteCount replyToSN created truncated
## 1 FALSE 7223 <NA> 2020-04-29 15:13:31 TRUE
## 2 FALSE 2511 <NA> 2020-04-29 18:08:01 TRUE
## 3 FALSE 2207 <NA> 2020-04-29 22:18:24 TRUE
## 4 FALSE 1562 <NA> 2020-04-29 19:26:40 TRUE
## 5 FALSE 518 <NA> 2020-04-29 23:52:41 TRUE
## 6 FALSE 417 maddow 2020-04-29 19:24:57 TRUE
## replyToSID id replyToUID statusSource
## 1 <NA> 1255515562874146816 <NA> Twitter Web App
## 2 <NA> 1255559480412119040 <NA> Twitter Media Studio
## 3 <NA> 1255622488085549056 <NA> Twitter for iPhone
## 4 <NA> 1255579272233828353 <NA> Twitter for iPhone
## 5 <NA> 1255646215284428800 <NA> Twitter for iPhone
## 6 1255578320948920320 1255578841004687360 16129920 Twitter Web App
## screenName retweetCount isRetweet retweeted longitude latitude
## 1 PressSec 2402 FALSE FALSE NA NA
## 2 RepMattGaetz 912 FALSE FALSE NA NA
## 3 davidaxelrod 448 FALSE FALSE NA NA
## 4 CaslerNoel 391 FALSE FALSE NA NA
## 5 MarshaBlackburn 223 FALSE FALSE NA NA
## 6 the_resistor 89 FALSE FALSE NA NA
# Most retweeted
trump_retweet <- trump_tweets_df %>%
dplyr::arrange(desc(retweetCount)) %>%
dplyr::distinct(text, .keep_all = TRUE)
# Top 6 retweeted texts among the extracted 10000 tweets
head(trump_retweet)
## text
## 1 RT @RealCandaceO: In light of the discovery that @GovNedLamont counted an infant suffocation toward his state’s #coronavirus death total, I…
## 2 RT @GOPChairwoman: State Rep. Whitsett nearly died from coronavirus, but because she dared to say something positive about @realDonaldTrump…
## 3 RT @MichaelCoudrey: UPDATE: @Twitter just suspended the account of the publicly traded biotech company AYTU BioScience that created a novel…
## 4 RT @RealCandaceO: It is absurd to ask taxpayers to bail out these bankrupt states— who are now pretending that their various pending bankru…
## 5 RT @RepLeeZeldin: While @realDonaldTrump believes the enemy is coronavirus, @SpeakerPelosi believes the enemy is Pres Trump. In order to de…
## 6 RT @IvankaTrump: Breaking: The House finally passes $480 billion package to deliver aid to millions of small businesses, workers and hospit…
## favorited favoriteCount replyToSN created truncated replyToSID
## 1 FALSE 0 <NA> 2020-04-29 18:08:29 FALSE <NA>
## 2 FALSE 0 <NA> 2020-04-29 22:35:09 FALSE <NA>
## 3 FALSE 0 <NA> 2020-04-29 23:30:47 FALSE <NA>
## 4 FALSE 0 <NA> 2020-04-29 23:51:24 FALSE <NA>
## 5 FALSE 0 <NA> 2020-04-29 20:42:31 FALSE <NA>
## 6 FALSE 0 <NA> 2020-04-29 15:41:32 FALSE <NA>
## id replyToUID statusSource screenName retweetCount
## 1 1255559596518899714 <NA> Twitter for iPhone rex1956 33995
## 2 1255626703725944832 <NA> Twitter for iPhone CailinAngel 19412
## 3 1255640706523283456 <NA> Twitter for iPhone CenotSi 13994
## 4 1255645893837099010 <NA> Twitter Web App patwest47 8326
## 5 1255598361568317441 <NA> Twitter for iPhone maddmav 8167
## 6 1255522615948652545 <NA> Twitter for Android ZickyX 6960
## isRetweet retweeted longitude latitude
## 1 TRUE FALSE NA NA
## 2 TRUE FALSE NA NA
## 3 TRUE FALSE NA NA
## 4 TRUE FALSE NA NA
## 5 TRUE FALSE NA NA
## 6 TRUE FALSE NA NA
# Keep only the text and retweet-related columns (previously positions 1, 12, 13, 14)
trump_retweet_extracted <- trump_retweet[, c("text", "retweetCount", "isRetweet", "retweeted")]
head(trump_retweet_extracted)
## text
## 1 RT @RealCandaceO: In light of the discovery that @GovNedLamont counted an infant suffocation toward his state’s #coronavirus death total, I…
## 2 RT @GOPChairwoman: State Rep. Whitsett nearly died from coronavirus, but because she dared to say something positive about @realDonaldTrump…
## 3 RT @MichaelCoudrey: UPDATE: @Twitter just suspended the account of the publicly traded biotech company AYTU BioScience that created a novel…
## 4 RT @RealCandaceO: It is absurd to ask taxpayers to bail out these bankrupt states— who are now pretending that their various pending bankru…
## 5 RT @RepLeeZeldin: While @realDonaldTrump believes the enemy is coronavirus, @SpeakerPelosi believes the enemy is Pres Trump. In order to de…
## 6 RT @IvankaTrump: Breaking: The House finally passes $480 billion package to deliver aid to millions of small businesses, workers and hospit…
## retweetCount isRetweet retweeted
## 1 33995 TRUE FALSE
## 2 19412 TRUE FALSE
## 3 13994 TRUE FALSE
## 4 8326 TRUE FALSE
## 5 8167 TRUE FALSE
## 6 6960 TRUE FALSE
Data cleaning and tokenization
We convert the data set into a corpus and then clean it: converting all characters to lower case and removing punctuation marks, numbers, extra white space, and stop words.
library(tm)
library(textmineR)
library(RWeka)
library(wordcloud)
library(RColorBrewer)
# Keep only the text and favorited columns
trump_tweets_df_2 <- trump_tweets_df[c(1,2)]
# Remove emoji and other non-ASCII characters from the text
trump_tweets_df_2$text <- gsub("[^\x01-\x7F]", "", trump_tweets_df_2$text)
# Change the text column into a corpus
trump_tweets_corp <- tm::VCorpus(tm::VectorSource(trump_tweets_df_2$text))
# Data cleaning: lower-case, then strip punctuation and numbers
trump_tweets_corp <- tm::tm_map(trump_tweets_corp, tm::content_transformer(tolower))
trump_tweets_corp <- tm::tm_map(trump_tweets_corp, removePunctuation)
trump_tweets_corp <- tm::tm_map(trump_tweets_corp, removeNumbers)
# Remove standard English stop words plus domain-specific terms
# (lower-cased, since the corpus is already lower-cased)
new_stops <- c("covid","iphone","coronavirus","android","web","rt","chuonlinenews","fashion", "fashionblogger", "covid_19", "juventus", "wuhanvirus","covid19","dranthonyfauci","scotgov youre", "rvawonk two","false","president","realdonaldtrump")
trump_tweets_corp <- tm::tm_map(trump_tweets_corp, removeWords, words = c(stopwords("english"), new_stops))
trump_tweets_corp <- tm::tm_map(trump_tweets_corp, stripWhitespace)
# Tokenize tweets texts into words
tokenizer <- function(x) {
RWeka::NGramTokenizer(x, RWeka::Weka_control(min = 2, max = 2))
}
tdm <- TermDocumentMatrix(
trump_tweets_corp,
control = list(tokenize = tokenizer)
)
tdm <- as.matrix(tdm)
trump_tweets_cleaned_freq <- rowSums(tdm)
# Create a bi-gram (2-word) word cloud
pal <- RColorBrewer::brewer.pal(8,"Set1")
wordcloud::wordcloud(names(trump_tweets_cleaned_freq), trump_tweets_cleaned_freq, min.freq=50,max.words = 50, random.order=TRUE,random.color = TRUE, rot.per=.15, colors = pal,scale = c(3,1))
This word cloud shows the frequency of bi-grams (two-word phrases). From the most frequent bi-grams we can see what people are mostly talking about in response to Trump's posts.
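To read the same information numerically rather than visually, we can sort the bi-gram frequencies directly; a quick check on the cloud above:
# Ten most frequent bi-grams in the cleaned corpus
head(sort(trump_tweets_cleaned_freq, decreasing = TRUE), 10)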
Sentiment analysis helps us understand people's feelings towards a specific subject. We break the tweets' sentences into individual words for further analysis.
library(tibble)
library(tidytext)
# Transform sentences into words
trump_data <- trump_tweets_df %>%
tidytext::unnest_tokens(output = "words", input = text, token = "words")
# Remove stop words from tibble
trump_clean_data <- trump_data %>%
dplyr::anti_join(stop_words, by=c("words"="word")) %>% dplyr::filter(words != "trump" )
Polarity scores help us make quantitative judgments about the feelings in a text. In short, we categorize words from the tweets as positive or negative and score them for analysis. We assign sentiments using the bing lexicon, compute each word's polarity score as its positive count minus its negative count, and then filter the data set to words with an absolute polarity score of at least 80.
library(tidyr)
library(ggplot2)
sentiment_data <- trump_clean_data %>%
# Inner join to bing lexicon by term = word
dplyr::inner_join(get_sentiments("bing"), by = c("words" = "word")) %>%
# Count by term and sentiment, weighted by count
dplyr::count(words, sentiment) %>%
# Spread sentiment, using n as values
tidyr::spread(sentiment, n, fill = 0) %>%
# Mutate to add a polarity column
dplyr::mutate(polarity = positive - negative)
# show summary of sentiment data
summary(sentiment_data)
## words negative positive polarity
## Length:775 Min. : 0.00 Min. : 0.000 Min. :-1306.000
## Class :character 1st Qu.: 0.00 1st Qu.: 0.000 1st Qu.: -2.000
## Mode :character Median : 1.00 Median : 0.000 Median : -1.000
## Mean : 10.61 Mean : 6.147 Mean : -4.467
## 3rd Qu.: 2.00 3rd Qu.: 1.000 3rd Qu.: 1.000
## Max. :1306.00 Max. :1259.000 Max. : 1259.000
polarity_data <- sentiment_data %>%
# Filter for absolute polarity at least 80
dplyr::filter(abs(polarity) >= 80) %>%
# add new column named as sentiments, shows positive/negative
dplyr::mutate(
Sentiments = ifelse(polarity > 0, "positive", "negative")
)
ggplot2::ggplot(polarity_data, aes(reorder(words, polarity), polarity, fill = Sentiments)) +
geom_col() +
ggtitle("Sentiment Word Frequency") +
theme(axis.text.x = element_text(angle = 45, vjust = 0.5, size = 10))+
xlab("Word")
From the sentiment word frequency plot, we can see that the strongly negative and strongly positive words appear with roughly comparable frequency.
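This can be verified with a quick tally of the filtered polarity table; the counts depend on the extracted sample:
# Number of strongly positive vs. strongly negative words
table(polarity_data$Sentiments)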
To get a clearer picture of how positive and negative words are used, we assign sentiments with the bing lexicon and do a simple count to generate the top 15 most common positive and negative words in the extracted tweets.
word_counts <- trump_clean_data %>%
# sentiment analysis using the "bing" lexicon
dplyr::inner_join(get_sentiments("bing"), by = c("words" = "word")) %>%
# Count by word and sentiment
dplyr::count(words, sentiment)
top_words <- word_counts %>%
# Group by sentiment
dplyr::group_by(sentiment) %>%
# Take the top 15 for each sentiment
dplyr::top_n(15) %>%
dplyr::ungroup() %>%
# Make word a factor in order of n
dplyr::mutate(words = reorder(words, n))
ggplot2::ggplot(top_words, aes(words, n, fill = sentiment)) +
geom_col(show.legend = FALSE) +
geom_text(aes(label = n, hjust=1), size = 3.5, color = "black") +
facet_wrap(~sentiment, scales = "free") +
coord_flip() +
ggtitle("Most common positive and negative words")
# Uni-gram word cloud of the cleaned tweets
tokenizer <- function(x) {
RWeka::NGramTokenizer(x, RWeka::Weka_control(min = 1, max = 1))
}
tdm <- TermDocumentMatrix(
trump_tweets_corp,
control = list(tokenize = tokenizer)
)
tdm <- as.matrix(tdm)
trump_tweets_cleaned_freq <- rowSums(tdm)
# Create a uni-gram (1-word) word cloud
pal <- RColorBrewer::brewer.pal(8,"Set2") # Set2 supports at most 8 colors
wordcloud::wordcloud(names(trump_tweets_cleaned_freq), trump_tweets_cleaned_freq, min.freq=50,max.words = 50, random.order=TRUE,random.color = TRUE, rot.per=.15, colors = pal,scale = c(3,1))
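The uni-gram cloud above mixes positive and negative terms. If a cloud split by sentiment is wanted, one option is wordcloud::comparison.cloud() fed by the word_counts table; a sketch, assuming the reshape2 package is installed:
library(reshape2)
# Cast the word/sentiment counts into a matrix with one column per sentiment
word_counts %>%
  reshape2::acast(words ~ sentiment, value.var = "n", fill = 0) %>%
  # Column order is alphabetical: negative (red), then positive (green)
  wordcloud::comparison.cloud(colors = c("firebrick", "forestgreen"), max.words = 50)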
Overall, the tweets convey an optimistic sentiment, with high-frequency positive words such as "leading", "hug", and "helping" around defeating the coronavirus. The most frequent negative words include "lie", "died", and "virus".
Looking at the bar graph (the Sentiment Word Frequency plot), the word "lie" has the highest frequency among the negative words, which suggests that many news items and stories posted on Twitter about deaths in the COVID-19 pandemic dispute the president's statements.
The most frequent bi-grams in the word cloud, such as "delivered hug", "died today", "virus worse", "american funds", and "vaccine injection", suggest that the government is concerned about people and is trying different ways to stop the virus from spreading. The government shares positive messages with citizens so they can stay positive and fight the virus.
This sentiment analysis is quite surprising: I expected more negative sentiment about the coronavirus, but thanks to the government's financial and moral support, citizens are staying positive in this difficult situation.
Stock performance analysis
We collect data from the Yahoo Finance site using the tidyquant package. By default, quantmod downloads each symbol and stores it under an object of its own name.
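For instance, quantmod's default auto-assign behavior looks like this (a small illustration; below we use tq_get() instead, which returns a tidy tibble):
library(quantmod)
# By default getSymbols() creates an object named after the ticker
quantmod::getSymbols("AAPL", from = "2020-03-15", to = "2020-04-27")
head(AAPL)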
Here we show the stock performance of the S&P 500, Dow Jones 30, NASDAQ, and Russell 2000 from March 15 to April 27, 2020 (the date range used in the code below).
We also choose three levels of stocks: high, medium, and low. We select five companies from each category and display their stock prices over the same window.
High-level stock companies: AAPL, NFLX, AMZN, MSFT, and GOOG.
Medium-level stock companies: TEAM, FOUR.L, ETSY, CRUS, and HUBS.
Low-level stock companies: SWI, RGR, SHOO, IPGP, and WINA.
Load necessary libraries.
library(tidyquant)
library(DT)
library(dplyr)
library(ggplot2)
tickers = c( "^GSPC", "^DJI","^IXIC", "^RUT")
prices <- tq_get(tickers,
from = "2020-03-15",
to = "2020-04-27",
get = "stock.prices")
prices %>%
dplyr::group_by(symbol) %>%
slice(1)
## # A tibble: 4 x 8
## # Groups: symbol [4]
## symbol date open high low close volume adjusted
## <chr> <date> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 ^DJI 2020-03-16 20918. 21768. 20116. 20189. 770130000 20189.
## 2 ^GSPC 2020-03-16 2509. 2563. 2381. 2386. 7781540000 2386.
## 3 ^IXIC 2020-03-16 7393. 7422. 6883. 6905. 4594360000 6905.
## 4 ^RUT 2020-03-16 1175. 1175. 1035. 1037. 77815400 1037.
prices %>%
ggplot(aes(x = date, y = adjusted, color = symbol)) +
geom_line()
DT::datatable(prices)
prices %>%
ggplot(aes(x = date, y = adjusted, color = symbol)) +
geom_line() +
facet_wrap(~symbol,scales = 'free_y') +
theme_classic() +
labs(x = 'Date',
y = "Adjusted Price",
title = "Price Chart") +
scale_x_date(date_breaks = "week",
date_labels = "%b %d")
tickers = c("AAPL", "NFLX", "AMZN", "MSFT", "GOOG")
prices_hi <- tq_get(tickers,
from = "2020-03-15",
to = "2020-04-27",
get = "stock.prices")
prices_hi %>%
dplyr::group_by(symbol) %>%
slice(1)
## # A tibble: 5 x 8
## # Groups: symbol [5]
## symbol date open high low close volume adjusted
## <chr> <date> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 AAPL 2020-03-16 242. 259. 240 242. 80605900 242.
## 2 AMZN 2020-03-16 1642. 1759. 1626. 1689. 8917300 1689.
## 3 GOOG 2020-03-16 1096 1152. 1074. 1084. 4252400 1084.
## 4 MSFT 2020-03-16 140 149. 135 135. 87905900 135.
## 5 NFLX 2020-03-16 307. 334. 295. 299. 10559900 299.
prices_hi %>%
ggplot(aes(x = date, y = adjusted, color = symbol)) +
geom_line()
DT::datatable(prices_hi)
prices_hi %>%
ggplot(aes(x = date, y = adjusted, color = symbol)) +
geom_line() +
facet_wrap(~symbol,scales = 'free_y') +
theme_classic() +
labs(x = 'Date',
y = "Adjusted Price",
title = "Price Chart") +
scale_x_date(date_breaks = "week",
date_labels = "%b %d")
tickers = c("TEAM", "FOUR.L", "ETSY", "CRUS", "HUBS")
prices_med <- tq_get(tickers,
from = "2020-03-15",
to = "2020-04-27",
get = "stock.prices")
prices_med %>%
dplyr::group_by(symbol) %>%
slice(1)
## # A tibble: 5 x 8
## # Groups: symbol [5]
## symbol date open high low close volume adjusted
## <chr> <date> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 CRUS 2020-03-16 56.1 58.9 51.3 51.3 975500 51.3
## 2 ETSY 2020-03-16 44.2 44.3 40.7 41.7 4022500 41.7
## 3 FOUR.L 2020-03-16 2310 2312. 1820 1840 70938 1799.
## 4 HUBS 2020-03-16 121 124. 109. 110. 1289600 110.
## 5 TEAM 2020-03-16 118. 128. 110. 121. 1826300 121.
prices_med %>%
ggplot(aes(x = date, y = adjusted, color = symbol)) +
geom_line()
DT::datatable(prices_med)
prices_med %>%
ggplot(aes(x = date, y = adjusted, color = symbol)) +
geom_line() +
facet_wrap(~symbol,scales = 'free_y') +
theme_classic() +
labs(x = 'Date',
y = "Adjusted Price",
title = "Price Chart") +
scale_x_date(date_breaks = "week",
date_labels = "%b %d")
tickers = c("SWI", "RGR", "SHOO", "IPGP", "WINA")
prices_low <- tq_get(tickers,
from = "2020-03-15",
to = "2020-04-27",
get = "stock.prices")
prices_low %>%
dplyr::group_by(symbol) %>%
slice(1)
## # A tibble: 5 x 8
## # Groups: symbol [5]
## symbol date open high low close volume adjusted
## <chr> <date> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 IPGP 2020-03-16 101. 108. 98.0 105. 720000 105.
## 2 RGR 2020-03-16 43.1 47.7 42.1 46.6 527600 46.6
## 3 SHOO 2020-03-16 25.5 25.9 21.6 21.8 1065800 21.8
## 4 SWI 2020-03-16 13.9 14.0 12.9 13.4 1385300 13.4
## 5 WINA 2020-03-16 154. 155. 133. 137. 26600 137.
prices_low %>%
ggplot(aes(x = date, y = adjusted, color = symbol)) +
geom_line()
DT::datatable(prices_low)
prices_low %>%
ggplot(aes(x = date, y = adjusted, color = symbol)) +
geom_line() +
facet_wrap(~symbol,scales = 'free_y') +
theme_classic() +
labs(x = 'Date',
y = "Adjusted Price",
title = "Price Chart") +
scale_x_date(date_breaks = "week",
date_labels = "%b %d")
From March 15 to April 27, all of the stocks above fluctuated considerably, broadly following the sharp mid-March sell-off and the partial recovery through April.
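To put a number on the swing, we can compute each index's percentage change in adjusted price over the window; a sketch using the prices tibble from above:
prices %>%
  dplyr::group_by(symbol) %>%
  dplyr::summarise(
    start = dplyr::first(adjusted),
    end = dplyr::last(adjusted),
    # Total percentage change from the first to the last trading day
    pct_change = round((end / start - 1) * 100, 1)
  )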
Reference: https://www.codingfinance.com/post/2018-03-27-download-price/