Email           :
Instagram   : https://www.instagram.com/marvis.zerex/
RPubs         : https://rpubs.com/invokerarts/
Linkedin     : https://www.linkedin.com/in/jeffry-wijaya-087a191b5/
Majors        : Business Statistics
Address      : ARA Center, Matana University Tower
                      Jl. CBD Barat Kav, RT.1, Curug Sangereng, Kelapa Dua, Tangerang, Banten 15810.



library(DT)
library(fpp2)
library(fpp3)
library(tidyverse)
library(ggplot2)
library(tsibbledata)
library(tsibble)
library(forecast)
library(fma)
library(expsmooth)
library(feasts)
library(USgas)

1 Use the help function to explore what the series gafa_stock, PBS, vic_elec and pelt represent.

This is the code I used to explore the data:

# help(gafa_stock)
# help(PBS)
# help(vic_elec)
# help(pelt)

For the reader's convenience, links to the documentation pages are provided below instead.

1.1 Use autoplot() to plot some of the series in these data sets.

1.1.1 gafa_stock

autoplot(gafa_stock, Volume)

1.1.2 PBS

After exploring it with the help function, we know that PBS contains monthly Medicare Australia prescription data. It holds several series and measured variables, so autoplot() needs to be told which value to plot; here I convert it to a ts object and plot the Cost column:

PBSdata <- ts(PBS, start=1992, end=2008, frequency = 12)
autoplot(PBSdata[,"Cost"])+ggtitle("Monthly Medicare Australia prescription data")+ylab("Cost")
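An alternative that stays within the tsibble framework is to aggregate Cost over all the keys and plot the monthly total directly (a minimal sketch; TotalCost is just an illustrative name):

PBS %>%
  summarise(TotalCost = sum(Cost)) %>%   # sum Cost across all keys for each month
  autoplot(TotalCost) +
  ggtitle("Monthly Medicare Australia prescription data") +
  ylab("Total cost")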

1.1.3 vic_elec

autoplot(vic_elec)

1.1.4 pelt

autoplot(pelt)

1.2 What is the time interval of each series?

  • gafa_stock: daily (trading days only, so the index is irregular and shown as [!])
  • PBS: 1 month
  • vic_elec: 30 minutes
  • pelt: 1 year
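As a quick sanity check, the intervals can also be read off programmatically with tsibble's interval() helper:

interval(gafa_stock)   # trading days only, so the interval is irregular ([!])
interval(PBS)          # 1M
interval(vic_elec)     # 30m
interval(pelt)         # 1Y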

2 Use filter() to find what days corresponded to the peak closing price for each of the four stocks in gafa_stock.

VarsO <- c("Open")
CondO <- c(max(gafa_stock$Open))

gafa_stock %>% filter(.data[[VarsO[[1]]]]==CondO[[1]])
## # A tsibble: 1 x 8 [!]
## # Key:       Symbol [1]
##   Symbol Date        Open  High   Low Close Adj_Close  Volume
##   <chr>  <date>     <dbl> <dbl> <dbl> <dbl>     <dbl>   <dbl>
## 1 AMZN   2018-09-05 2038. 2040. 1990. 1995.     1995. 8220600
VarsH <- c("High")
CondH <- c(max(gafa_stock$High))

gafa_stock %>% filter(.data[[VarsH[[1]]]]==CondH[[1]])
## # A tsibble: 1 x 8 [!]
## # Key:       Symbol [1]
##   Symbol Date        Open  High   Low Close Adj_Close  Volume
##   <chr>  <date>     <dbl> <dbl> <dbl> <dbl>     <dbl>   <dbl>
## 1 AMZN   2018-09-04 2026. 2050.  2013 2040.     2040. 5721100
VarsL <- c("Low")
CondL <- c(max(gafa_stock$Low))

gafa_stock %>% filter(.data[[VarsL[[1]]]]==CondL[[1]])
## # A tsibble: 1 x 8 [!]
## # Key:       Symbol [1]
##   Symbol Date        Open  High   Low Close Adj_Close  Volume
##   <chr>  <date>     <dbl> <dbl> <dbl> <dbl>     <dbl>   <dbl>
## 1 AMZN   2018-09-04 2026. 2050.  2013 2040.     2040. 5721100
VarsC <- c("Close")
CondC <- c(max(gafa_stock$Close))

gafa_stock %>% filter(.data[[VarsC[[1]]]]==CondC[[1]])
## # A tsibble: 1 x 8 [!]
## # Key:       Symbol [1]
##   Symbol Date        Open  High   Low Close Adj_Close  Volume
##   <chr>  <date>     <dbl> <dbl> <dbl> <dbl>     <dbl>   <dbl>
## 1 AMZN   2018-09-04 2026. 2050.  2013 2040.     2040. 5721100
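The filters above return the overall maxima, which all come from Amazon. The question, however, asks for the peak closing price of each of the four stocks, so a per-stock version is sketched below:

gafa_stock %>%
  group_by(Symbol) %>%              # treat each stock separately
  filter(Close == max(Close)) %>%   # keep the day of its highest close
  ungroup() %>%
  select(Symbol, Date, Close)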

3 Download the file tute1.csv here, open it in Excel (or some other spreadsheet application), and review its contents. You should find four columns of information. Columns B through D each contain a quarterly series, labelled Sales, AdBudget and GDP. Sales contains the quarterly sales for a small company over the period 1981-2005. AdBudget is the advertising budget and GDP is the gross domestic product. All series have been adjusted for inflation.

3.1 You can read the data into R with the following script:

tute1 <- readr::read_csv("tute1.csv")
datatable(tute1)

3.2 Convert the data to time series

timeseries3 <- tute1 %>%
  mutate(Quarter = yearquarter(Quarter)) %>% # the data are quarterly, so yearquarter() gives the right index
  as_tsibble(index = Quarter)

3.3 Construct time series plots of each of the three series

timeseries3 %>%
  pivot_longer(-Quarter) %>%
  ggplot(aes(x = Quarter, y = value, colour = name)) +
  geom_line() +
  facet_grid(name ~ ., scales = "free_y")

3.3.1 Check what happens when you don’t include facet_grid()

timeseries3 %>%
  pivot_longer(-Quarter) %>%
  ggplot(aes(x = Quarter, y = value, colour = name)) +
  geom_line()

From the plot above, we can see that without facet_grid() all three series are drawn in a single panel. They share the same time points and a common y-axis, so series on very different scales are hard to compare and their lines can overlap. Using facet_grid() is therefore not mandatory, but it gives each variable its own panel (with its own y-axis scale), so each series can be read much more clearly.

4 The USgas package contains data on the demand for natural gas in the US.

4.1 Install the USgas package.

#install.packages("USgas")
library(USgas)

4.2 Create a tsibble from us_total with year as the index and state as the key.

gasdata <- us_total %>% as_tsibble(key = state, index = year)
gasdata
## # A tsibble: 1,265 x 3 [1Y]
## # Key:       state [53]
##     year state        y
##    <int> <chr>    <int>
##  1  1997 Alabama 324158
##  2  1998 Alabama 329134
##  3  1999 Alabama 337270
##  4  2000 Alabama 353614
##  5  2001 Alabama 332693
##  6  2002 Alabama 379343
##  7  2003 Alabama 350345
##  8  2004 Alabama 382367
##  9  2005 Alabama 353156
## 10  2006 Alabama 391093
## # ... with 1,255 more rows

4.3 Plot the annual natural gas consumption by state for the New England area (comprising the states of Maine, Vermont, New Hampshire, Massachusetts, Connecticut and Rhode Island).

autoplot(gasdata,y)+ylab("y")+ggtitle("Natural Gas in U.S.")
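The chunk above plots every state in the data set. To restrict it to the New England area, one option (assuming the state names appear in us_total exactly as spelled below) is:

gasdata %>%
  filter(state %in% c("Maine", "Vermont", "New Hampshire",
                      "Massachusetts", "Connecticut", "Rhode Island")) %>%
  autoplot(y) +
  ggtitle("Annual natural gas consumption: New England") +
  ylab("Consumption")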

5 Follow the instructions below:

5.1 Download tourism.xlsx here and read it into R using readxl::read_excel().

exceltourism <- readxl::read_excel("tourism.xlsx")

5.2 Create a tsibble which is identical to the tourism tsibble from the tsibble package.

library(zoo)
exceltourism$Quarter <- as.yearqtr(exceltourism$Quarter, format = "%Y-%m-%d")
exceltourism
## # A tibble: 24,320 x 5
##    Quarter   Region   State           Purpose  Trips
##    <yearqtr> <chr>    <chr>           <chr>    <dbl>
##  1 1998 Q1   Adelaide South Australia Business  135.
##  2 1998 Q2   Adelaide South Australia Business  110.
##  3 1998 Q3   Adelaide South Australia Business  166.
##  4 1998 Q4   Adelaide South Australia Business  127.
##  5 1999 Q1   Adelaide South Australia Business  137.
##  6 1999 Q2   Adelaide South Australia Business  200.
##  7 1999 Q3   Adelaide South Australia Business  169.
##  8 1999 Q4   Adelaide South Australia Business  134.
##  9 2000 Q1   Adelaide South Australia Business  154.
## 10 2000 Q2   Adelaide South Australia Business  169.
## # ... with 24,310 more rows
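Note that the result above is still a tibble. To obtain a tsibble that matches tourism from the tsibble package, the Quarter column must become a yearquarter index with Region, State and Purpose as the keys; a minimal sketch that re-reads the file and builds the index and keys directly (tourism_ts is just an illustrative name):

tourism_ts <- readxl::read_excel("tourism.xlsx") %>%
  mutate(Quarter = yearquarter(Quarter)) %>%                # quarterly index
  as_tsibble(index = Quarter, key = c(Region, State, Purpose))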

5.3 Find what combination of Region and Purpose had the maximum number of overnight trips on average.

exceltourism %>%
  group_by(Region, Purpose) %>%
  summarise(Trips = mean(Trips, na.rm = TRUE))
## # A tibble: 304 x 3
## # Groups:   Region [76]
##    Region         Purpose   Trips
##    <chr>          <chr>     <dbl>
##  1 Adelaide       Business 156.  
##  2 Adelaide       Holiday  157.  
##  3 Adelaide       Other     56.6 
##  4 Adelaide       Visiting 205.  
##  5 Adelaide Hills Business   2.66
##  6 Adelaide Hills Holiday   10.5 
##  7 Adelaide Hills Other      1.40
##  8 Adelaide Hills Visiting  14.2 
##  9 Alice Springs  Business  14.6 
## 10 Alice Springs  Holiday   31.9 
## # ... with 294 more rows
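The table above lists every average, but the question asks for the single Region/Purpose combination with the largest average. One way to extract it (a sketch using dplyr's slice_max()):

exceltourism %>%
  group_by(Region, Purpose) %>%
  summarise(Trips = mean(Trips, na.rm = TRUE), .groups = "drop") %>%
  slice_max(Trips, n = 1)   # keep only the largest average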

5.4 Create a new tsibble which combines the Purposes and Regions, and just has total trips by State.

exceltourism %>%
  group_by(Quarter, State) %>%
  summarise(Trips = sum(Trips, na.rm = TRUE))
## # A tibble: 640 x 3
## # Groups:   Quarter [80]
##    Quarter   State              Trips
##    <yearqtr> <chr>              <dbl>
##  1 1998 Q1   ACT                 551.
##  2 1998 Q1   New South Wales    8040.
##  3 1998 Q1   Northern Territory  181.
##  4 1998 Q1   Queensland         4041.
##  5 1998 Q1   South Australia    1735.
##  6 1998 Q1   Tasmania            982.
##  7 1998 Q1   Victoria           6010.
##  8 1998 Q1   Western Australia  1641.
##  9 1998 Q2   ACT                 416.
## 10 1998 Q2   New South Wales    7166.
## # ... with 630 more rows
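As in 5.2, the result above is a grouped tibble rather than a tsibble. Starting from a proper tsibble (the built-in tourism, or the one rebuilt in 5.2), grouping by State and summarising keeps the tsibble structure, with Quarter as the index and State as the key; a sketch:

tourism %>%
  group_by(State) %>%
  summarise(Trips = sum(Trips))   # total trips per State per Quarter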

6 Create time plots of the following four time series: Bricks from aus_production, Lynx from pelt, Close from gafa_stock, Demand from vic_elec.

6.1 Use ?? (or help()) to find out about the data in each series.

This is the code I used to explore the data:

# help(aus_production)
# help(pelt)
# help(gafa_stock)
# help(vic_elec)

For the reader's convenience, links to the documentation pages are provided below instead.

6.2 For the last plot, modify the axis labels and title.

6.2.1 First

autoplot(aus_production, Bricks)

6.2.2 Second

autoplot(pelt, Lynx)

6.2.3 Third

autoplot(gafa_stock, Close)

6.2.4 Last

Lastdata <- ts(vic_elec, start=2012, end=2015, frequency = 365.25*24*2)
autoplot(Lastdata[,"Demand"])+ggtitle("Half-hourly electricity demand for Victoria, Australia")+xlab("Year")+ylab("Demand over time")
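Converting to a ts object is not strictly necessary here; the same plot, with modified axis labels and title, can be produced directly from the tsibble, e.g.:

vic_elec %>%
  autoplot(Demand) +
  ggtitle("Half-hourly electricity demand for Victoria, Australia") +
  xlab("Time") + ylab("Demand (MWh)")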

7 The aus_arrivals data set comprises quarterly international arrivals to Australia from Japan, New Zealand, UK and the US.

7.1 Use autoplot(), gg_season() and gg_subseries() to compare the differences between the arrivals from these four countries.

7.1.1 autoplot()

autoplot(aus_arrivals, Arrivals)

7.1.2 gg_season()

gg_season(aus_arrivals, Arrivals)

7.1.3 gg_subseries()

gg_subseries(aus_arrivals, Arrivals)

7.2 Can you identify any unusual observations?

From the information shown above, we can see that Japanese arrivals to Australia decreased markedly. After searching for background information (Google/Wikipedia), I found that in March 2007 Australia and Japan signed a joint security pact. The scope of security cooperation includes:

  • Law enforcement on combating transnational crime, including trafficking in illegal narcotics and precursors, people smuggling and trafficking, counterfeiting currency and arms smuggling
  • Border security
  • Counter-terrorism
  • Disarmament and counter-proliferation of weapons of mass destruction and their means of delivery
  • Peace operations
  • Exchange of strategic assessments and related information
  • Maritime and aviation security
  • Humanitarian relief operations, including disaster relief
  • Contingency planning, including for pandemics

This could be related to the two countries' post-war diplomatic relations. Of the items above, the ones most plausibly connected to the decline in Japanese arrivals are point 2 (border security) and point 7 (maritime and aviation security).

8 Monthly Australian retail data is provided in aus_retail. Select one of the time series as follows (but choose your own seed value):

set.seed(7777777)
seedseries <- aus_retail %>%
  filter(`Series ID` == sample(aus_retail$`Series ID`,1))

8.1 Explore your chosen retail time series using the following functions:

8.1.1 autoplot(),

autoplot(seedseries, Turnover)

8.1.2 gg_season(),

gg_season(seedseries, Turnover)

8.1.3 gg_subseries(),

gg_subseries(seedseries, Turnover)

8.1.4 gg_lag(),

gg_lag(seedseries, Turnover)

8.1.5 ACF() %>% autoplot()

seedseries %>% ACF(Turnover) %>% autoplot()

8.2 Can you spot any seasonality, cyclicity and trend? What do you learn about the series?

  • Seasonality is clearly visible in the autoplot() graph: turnover follows a similar pattern each year (roughly one peak, then a fall back down).
  • What looks like a repeating cycle is really this seasonal pattern recurring every year; there is no obvious cyclic behaviour with a longer, non-fixed period.
  • A trend is also clearly visible: turnover moves upward from year to year.
  • So autoplot() alone already gives good evidence that the series has strong yearly seasonality combined with an increasing trend.

9 Use the following graphics functions: autoplot(), gg_season(), gg_subseries(), gg_lag(), ACF() and explore features from the following time series: “Total Private” Employed from us_employment, Bricks from aus_production, Hare from pelt, “H02” Cost from PBS, and us_gasoline.

9.1 “Total Private” Employed from us_employment

TP <- us_employment %>% 
        filter(Title == "Total Private")

9.1.1 autoplot()

autoplot(TP,Employed)

9.1.2 gg_season()

gg_season(TP,Employed)

9.1.3 gg_subseries()

gg_subseries(TP,Employed)

9.1.4 gg_lag()

gg_lag(TP,Employed)

9.1.5 ACF()

TP %>% ACF(Employed) %>% autoplot()

9.2 Bricks from aus_production

9.2.1 autoplot()

autoplot(aus_production,Bricks)

9.2.2 gg_season()

gg_season(aus_production,Bricks)

9.2.3 gg_subseries()

gg_subseries(aus_production,Bricks)

9.2.4 gg_lag()

gg_lag(aus_production,Bricks)

9.2.5 ACF()

aus_production %>% ACF(Bricks) %>% autoplot()

9.3 Hare from pelt

9.3.1 autoplot()

autoplot(pelt,Hare)

9.3.2 gg_season()

gg_season() is skipped here: pelt is annual data, so there is no within-year seasonal period for gg_season() to display.

9.3.3 gg_subseries()

gg_subseries(pelt,Hare)

9.3.4 gg_lag()

gg_lag(pelt,Hare)

9.3.5 ACF()

pelt %>% ACF(Hare) %>% autoplot()

9.4 “H02” Cost from PBS

H02 <- PBS %>% filter(ATC2 == "H02")

9.4.1 autoplot()

autoplot(H02,Cost)

9.4.2 gg_season()

gg_season(H02,Cost)

9.4.3 gg_subseries()

gg_subseries(H02,Cost)

9.4.4 gg_lag()

gg_lag() is skipped here: the H02 subset of PBS still contains several series (one per Concession/Type combination), and gg_lag() works on a single series at a time.

9.4.5 ACF()

H02 %>% ACF(Cost) %>% autoplot()

9.5 us_gasoline

9.5.1 autoplot()

autoplot(us_gasoline)

9.5.2 gg_season()

gg_season(us_gasoline)

9.5.3 gg_subseries()

gg_subseries(us_gasoline)

9.5.4 gg_lag()

gg_lag(us_gasoline)

9.5.5 ACF()

us_gasoline %>% ACF() %>% autoplot()

10 The following time plots and ACF plots correspond to four different time series. Your task is to match each time plot in the first row with one of the ACF plots in the second row.

  • 1 with B
  • 2 with A
  • 3 with D
  • 4 with C

11 The aus_livestock data contains the monthly total number of pigs slaughtered in Victoria, Australia, from Jul 1972 to Dec 2018. Use filter() to extract pig slaughters in Victoria between 1990 and 1995. Use autoplot() and ACF() for this data. How do they differ from white noise? If a longer period of data is used, what difference does it make to the ACF?

DataVicPig <- aus_livestock %>% 
              filter(State  == "Victoria", 
                     Animal == "Pigs"    , 
                     between(year(Month) ,1990,1995))
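The exercise also asks for a time plot of the filtered series; a minimal sketch:

DataVicPig %>% autoplot(Count)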

DataVicPig %>% ACF(Count) %>% autoplot()

Almost all of the spikes lie outside the bounds, which confirms that the series is not white noise. If a longer period of data were used, the white-noise bounds (roughly +/- 1.96/sqrt(T)) would become narrower, while the trend and seasonality would still dominate, so even more of the autocorrelations would appear significant.

12

12.1 Use the following code to compute the daily changes in Google closing stock prices.

dgoog <- gafa_stock %>%
  filter(Symbol == "GOOG", year(Date) >= 2018) %>%
  mutate(trading_day = row_number()) %>%
  update_tsibble(index = trading_day, regular = TRUE) %>%
  mutate(diff = difference(Close))

12.2 Why was it necessary to re-index the tsibble?

Because the stock is only traded on weekdays (and not on public holidays), filtering for "GOOG" leaves a Date index with irregular gaps, so the interval between consecutive rows is not constant. Re-indexing by the trading-day row number gives a regular interval of one observation per row, which is what the later functions expect.
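A quick way to see those irregular gaps is to count the number of days between consecutive rows before re-indexing (an illustration only; days_since_previous is just an illustrative name):

gafa_stock %>%
  filter(Symbol == "GOOG", year(Date) >= 2018) %>%
  as_tibble() %>%
  mutate(days_since_previous = as.integer(Date - lag(Date))) %>%  # gap to the previous trading day
  count(days_since_previous)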

12.3 Plot these differences and their ACF.

datastock <- gafa_stock %>%
             filter(Symbol == "GOOG") %>%
             mutate(trading_day = row_number()) %>%
             update_tsibble(index = trading_day, regular = TRUE)

datastock %>% ACF(difference(Close)) %>% autoplot()
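The question also asks for a plot of the differences themselves, not only their ACF. Using the dgoog object created in 12.1, a sketch:

dgoog %>% autoplot(diff)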

12.4 Do the changes in the stock prices look like white noise?

The ACF plot shows about 30 lags. Under white noise we would expect roughly 5% of the spikes to fall outside the bounds just by chance, and 5% of 30 is

5/100*30
## [1] 1.5

so only one or two spikes outside the bounds would still be consistent with white noise.

In the plot above, about 4 lags fall outside the bounds, so the series does not look like pure white noise.
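For reference, the dashed bounds drawn by autoplot() sit at roughly +/- 1.96/sqrt(T), where T is the number of observations; a quick check for the series used above (T_obs is just an illustrative name):

T_obs <- nrow(datastock)   # number of GOOG trading days plotted above
c(lower = -1.96 / sqrt(T_obs), upper = 1.96 / sqrt(T_obs))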