Email : je070601@gmail.com
Instagram : https://www.instagram.com/marvis.zerex/
RPubs : https://rpubs.com/invokerarts/
Linkedin : https://www.linkedin.com/in/jeffry-wijaya-087a191b5/
Major : Business Statistics
Address : ARA Center, Matana University Tower
Jl. CBD Barat Kav, RT.1, Curug Sangereng, Kelapa Dua, Tangerang, Banten 15810.
library(DT)
library(fpp2)
library(fpp3)
library(tidyverse)
library(ggplot2)
library(tsibbledata)
library(tsibble)
library(forecast)
library(fma)
library(expsmooth)
library(feasts)
library(USgas)
Use ? (or help()) to explore what the gafa_stock, PBS, vic_elec and pelt series represent. This is the code I used to explore the data; for reader information, I give the documentation links below instead of the full help output.
The time interval of each series is:
gafa_stock = 1 Day (trading days only, so the index is irregular)
PBS = 1 Month
vic_elec = 30 Minutes
pelt = 1 Year
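These intervals can also be checked programmatically with tsibble's interval() helper; a quick sketch, assuming the packages loaded above:
interval(gafa_stock)  # prints ! because only trading days are present (irregular)
interval(PBS)         # 1M
interval(vic_elec)    # 30m
interval(pelt)        # 1Y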
Use filter() to find what days corresponded to the peak closing price for each of the four stocks in gafa_stock.
VarsO <- c("Open")
CondO <- c(max(gafa_stock$Open))
gafa_stock %>% filter(.data[[VarsO[[1]]]]==CondO[[1]])
## # A tsibble: 1 x 8 [!]
## # Key: Symbol [1]
## Symbol Date Open High Low Close Adj_Close Volume
## <chr> <date> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 AMZN 2018-09-05 2038. 2040. 1990. 1995. 1995. 8220600
VarsH <- c("High")
CondH <- c(max(gafa_stock$High))
gafa_stock %>% filter(.data[[VarsH[[1]]]]==CondH[[1]])
## # A tsibble: 1 x 8 [!]
## # Key: Symbol [1]
## Symbol Date Open High Low Close Adj_Close Volume
## <chr> <date> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 AMZN 2018-09-04 2026. 2050. 2013 2040. 2040. 5721100
VarsL <- c("Low")
CondL <- c(max(gafa_stock$Low))
gafa_stock %>% filter(.data[[VarsL[[1]]]]==CondL[[1]])
## # A tsibble: 1 x 8 [!]
## # Key: Symbol [1]
## Symbol Date Open High Low Close Adj_Close Volume
## <chr> <date> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 AMZN 2018-09-04 2026. 2050. 2013 2040. 2040. 5721100
VarsC <- c("Close")
CondC <- c(max(gafa_stock$Close))
gafa_stock %>% filter(.data[[VarsC[[1]]]]==CondC[[1]])
## # A tsibble: 1 x 8 [!]
## # Key: Symbol [1]
## Symbol Date Open High Low Close Adj_Close Volume
## <chr> <date> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 AMZN 2018-09-04 2026. 2050. 2013 2040. 2040. 5721100
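Note that the filters above return the single highest value across all four stocks combined (AMZN in every case). To find the peak closing day for each stock separately, as the exercise asks, a grouped filter along these lines could be used (a sketch):
gafa_stock %>%
  group_by(Symbol) %>%
  filter(Close == max(Close)) %>%  # keep the row(s) with the highest Close per stock
  ungroup() %>%
  select(Symbol, Date, Close)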
timeseries3 %>%
pivot_longer(-Quarter) %>%
ggplot(aes(x = Quarter, y = value, colour = name)) +
geom_line() +
facet_grid(name ~ ., scales = "free_y")
For comparison, here is the same plot without facet_grid():
timeseries3 %>%
pivot_longer(-Quarter) %>%
ggplot(aes(x = Quarter, y = value, colour = name)) +
geom_line()
From the plots above, we can see that without facet_grid() all the series are drawn in one large graph, because they share the same time points. When the variables take similar values, the lines overlap each other. Using facet_grid() is therefore not mandatory, but it gives a much clearer view of each variable, especially with scales = "free_y" so that each panel gets its own y-axis scale.
The USgas package contains data on the demand for natural gas in the US. Create a tsibble from us_total with year as the index and state as the key.
## # A tsibble: 1,265 x 3 [1Y]
## # Key: state [53]
## year state y
## <int> <chr> <int>
## 1 1997 Alabama 324158
## 2 1998 Alabama 329134
## 3 1999 Alabama 337270
## 4 2000 Alabama 353614
## 5 2001 Alabama 332693
## 6 2002 Alabama 379343
## 7 2003 Alabama 350345
## 8 2004 Alabama 382367
## 9 2005 Alabama 353156
## 10 2006 Alabama 391093
## # ... with 1,255 more rows
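The call that produced this tsibble is not shown above; it was presumably something along these lines (a sketch, assuming the us_total data frame from USgas with columns year, state and y):
library(USgas)
us_tsibble <- us_total %>%
  as_tsibble(index = year, key = state)  # annual index, one series per state
us_tsibble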
Download tourism.xlsx and read it into R using readxl::read_excel(), then create a tsibble which is identical to the tourism tsibble from the tsibble package.
library(zoo)
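# The reading step itself is not shown above; it presumably looked roughly like
# this (a sketch, with a placeholder file path):
library(readxl)
exceltourism <- read_excel("tourism.xlsx")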
exceltourism$Quarter <- as.yearqtr(exceltourism$Quarter, format = "%Y-%m-%d")
exceltourism
## # A tibble: 24,320 x 5
## Quarter Region State Purpose Trips
## <yearqtr> <chr> <chr> <chr> <dbl>
## 1 1998 Q1 Adelaide South Australia Business 135.
## 2 1998 Q2 Adelaide South Australia Business 110.
## 3 1998 Q3 Adelaide South Australia Business 166.
## 4 1998 Q4 Adelaide South Australia Business 127.
## 5 1999 Q1 Adelaide South Australia Business 137.
## 6 1999 Q2 Adelaide South Australia Business 200.
## 7 1999 Q3 Adelaide South Australia Business 169.
## 8 1999 Q4 Adelaide South Australia Business 134.
## 9 2000 Q1 Adelaide South Australia Business 154.
## 10 2000 Q2 Adelaide South Australia Business 169.
## # ... with 24,310 more rows
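To make the result identical to tsibble::tourism, the yearqtr column still has to become a proper quarterly index; one way to finish the conversion (a sketch):
exceltourism_ts <- exceltourism %>%
  mutate(Quarter = yearquarter(Quarter)) %>%              # convert yearqtr to tsibble's quarterly index
  as_tsibble(index = Quarter, key = c(Region, State, Purpose))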
Find what combination of Region and Purpose had the maximum number of overnight trips on average.
exceltourism %>%
group_by(Region, Purpose) %>%
summarise_at(vars(-Quarter,-State), funs(mean(Trips, na.rm=TRUE)))
## # A tibble: 304 x 3
## # Groups: Region [76]
## Region Purpose Trips
## <chr> <chr> <dbl>
## 1 Adelaide Business 156.
## 2 Adelaide Holiday 157.
## 3 Adelaide Other 56.6
## 4 Adelaide Visiting 205.
## 5 Adelaide Hills Business 2.66
## 6 Adelaide Hills Holiday 10.5
## 7 Adelaide Hills Other 1.40
## 8 Adelaide Hills Visiting 14.2
## 9 Alice Springs Business 14.6
## 10 Alice Springs Holiday 31.9
## # ... with 294 more rows
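To answer the question directly, the averaged table can be sliced for its maximum; for example (a sketch using dplyr's slice_max()):
exceltourism %>%
  group_by(Region, Purpose) %>%
  summarise(Trips = mean(Trips, na.rm = TRUE), .groups = "drop") %>%
  slice_max(Trips, n = 1)  # the Region/Purpose combination with the highest average trips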
exceltourism %>%
group_by(Quarter, State) %>%
summarise_at(vars(-Region,-Purpose), funs(sum(Trips, na.rm=TRUE)))
## # A tibble: 640 x 3
## # Groups: Quarter [80]
## Quarter State Trips
## <yearqtr> <chr> <dbl>
## 1 1998 Q1 ACT 551.
## 2 1998 Q1 New South Wales 8040.
## 3 1998 Q1 Northern Territory 181.
## 4 1998 Q1 Queensland 4041.
## 5 1998 Q1 South Australia 1735.
## 6 1998 Q1 Tasmania 982.
## 7 1998 Q1 Victoria 6010.
## 8 1998 Q1 Western Australia 1641.
## 9 1998 Q2 ACT 416.
## 10 1998 Q2 New South Wales 7166.
## # ... with 630 more rows
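The exercise asks for a new tsibble with total trips by State and Quarter; the summary above can be turned into one roughly as follows (a sketch):
state_trips <- exceltourism %>%
  group_by(Quarter, State) %>%
  summarise(Trips = sum(Trips, na.rm = TRUE), .groups = "drop") %>%
  mutate(Quarter = yearquarter(Quarter)) %>%
  as_tsibble(index = Quarter, key = State)  # quarterly index, one series per state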
Create time plots of the following time series: Bricks from aus_production, Lynx from pelt, Close from gafa_stock, and Demand from vic_elec. Use ? (or help()) to find out about the data in each series. This is the code I used to explore the data; for reader information, I give the documentation links below instead of the full help output.
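The time plots themselves are not reproduced here, but they can be generated with autoplot() roughly as follows (a sketch; the gafa_stock plot draws one line per stock):
aus_production %>% autoplot(Bricks)
pelt %>% autoplot(Lynx)
gafa_stock %>% autoplot(Close)
vic_elec %>% autoplot(Demand)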
The aus_arrivals data set comprises quarterly international arrivals to Australia from Japan, New Zealand, the UK and the US. Use autoplot(), gg_season() and gg_subseries() to compare the differences between the arrivals from these four countries.
From the plots, we can see that Japanese arrivals to Australia decreased greatly. After searching Google and Wikipedia, I found that in March 2007 Australia and Japan signed a joint security pact. The scope of security cooperation includes:
This could have happened because of post-war diplomatic relations. The points most likely related to the decline in Japanese arrivals are point 2 (border security) and point 7 (maritime and aviation security).
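For reference, a sketch of the plotting calls such a comparison would be based on (autoplot(), gg_season() and gg_subseries() from feasts, with one line or panel per Origin):
aus_arrivals %>% autoplot(Arrivals)
aus_arrivals %>% gg_season(Arrivals)
aus_arrivals %>% gg_subseries(Arrivals)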
set.seed(7777777)
seedseries <- aus_retail %>%
filter(`Series ID` == sample(aus_retail$`Series ID`,1))
We can spot the cyclicity from the autoplot() graph clearly: the turnover keeps a similar pattern each year (for example one peak, then falling back down again). We can also spot the seasonality clearly: the seasonal pattern keeps repeating from year to year. Finally, the trend is clear as well: the turnover moves upward each year.
Use autoplot(), gg_season(), gg_subseries(), gg_lag() and ACF() to explore features of the following time series: "Total Private" Employed from us_employment, Bricks from aus_production, Hare from pelt, "H02" Cost from PBS, and us_gasoline.
The aus_livestock data contains the monthly total number of pigs slaughtered in Victoria, Australia, from Jul 1972 to Dec 2018. Use filter() to extract pig slaughters in Victoria between 1990 and 1995. Use autoplot() and ACF() for this data. How do they differ from white noise? If a longer period of data is used, what difference does it make to the ACF?
DataVicPig <- aus_livestock %>%
filter(State == "Victoria",
Animal == "Pigs" ,
between(year(Month) ,1990,1995))
DataVicPig %>% ACF(Count) %>% autoplot()
Almost all of the spikes are outside the bounds, which confirms that the series is not white noise.
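The exercise also asks what difference a longer period of data makes. The same ACF can be drawn for the full Jul 1972 – Dec 2018 series for comparison (a sketch); with a much longer series the significance bounds (roughly ±1.96/√T) become much narrower, so departures from white noise such as trend and seasonality are even easier to see.
aus_livestock %>%
  filter(State == "Victoria", Animal == "Pigs") %>%  # no year filter: use the whole period
  ACF(Count, lag_max = 48) %>%
  autoplot()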
When we filter gafa_stock for "GOOG", the Date index is irregular: trading only happens on business days, so the gap between consecutive rows is not always the same and the implied interval is misleading. We therefore re-index the data by the trading-day row number so that the interval between consecutive observations is constant.
datastock <- gafa_stock %>%
filter(Symbol == "GOOG") %>%
mutate(trading_day = row_number()) %>%
update_tsibble(index = trading_day, regular = TRUE)
datastock %>% ACF(difference(Close)) %>% autoplot()
There are 30 lags in total, and 5% of 30 is
## [1] 1.5
So if more than about 1.5 of the 30 spikes fall outside the bounds, it is unlikely that the series is white noise.
From the plot above we can see that 4 spikes lie outside the bounds, so the series is not white noise.