library(tsibble)
## Warning: package 'tsibble' was built under R version 4.3.3
## Registered S3 method overwritten by 'tsibble':
##   method               from 
##   as_tibble.grouped_df dplyr
## 
## Attaching package: 'tsibble'
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, union
library(fpp3)
## Warning: package 'fpp3' was built under R version 4.3.3
## ── Attaching packages ──────────────────────────────────────────── fpp3 1.0.0 ──
## ✔ tibble      3.2.1     ✔ tsibbledata 0.4.1
## ✔ dplyr       1.1.4     ✔ feasts      0.3.2
## ✔ tidyr       1.3.1     ✔ fable       0.3.4
## ✔ lubridate   1.9.3     ✔ fabletools  0.4.2
## ✔ ggplot2     3.5.1
## ── Conflicts ───────────────────────────────────────────────── fpp3_conflicts ──
## ✖ lubridate::date()     masks base::date()
## ✖ dplyr::filter()       masks stats::filter()
## ✖ tsibble::intersect()  masks base::intersect()
## ✖ lubridate::interval() masks tsibble::interval()
## ✖ dplyr::lag()          masks stats::lag()
## ✖ tsibble::setdiff()    masks base::setdiff()
## ✖ tsibble::union()      masks base::union()

Chapter 9 Exercise 1

Figure 9.32 shows the ACFs for 36 random numbers, 360 random numbers and 1,000 random numbers.


fig_9.32

1a

Explain the differences among these figures. Do they all indicate that the data are white noise?


The figures differ in their critical values and in the spread of the sample autocorrelations. However, as long as the spikes in the ACF lie within the critical values (represented by the blue lines in each graph), the figure indicates that the data are white noise. All three of these ACF plots indicate that the underlying data set is white noise, which is what we would expect for randomly generated numbers.

1b

Why are the critical values at different distances from the mean of zero? Why are the autocorrelations different in each figure when they each refer to white noise?


The critical values are calculated as \(\pm\frac{1.96}{\sqrt{T}}\), where \(T\) is the length of the time series (textbook Section 2.9). As a result, as \(T\) increases, the critical values move closer and closer to the mean of zero. The sample autocorrelations shrink along with the critical values because, for white noise, the estimated autocorrelations converge to their true value of zero as the length of the series increases.
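As a quick illustration (a sketch; arithmetic only), the critical values for the three sample sizes in Figure 9.32 can be computed directly:

# Critical values +/- 1.96/sqrt(T) for T = 36, 360, 1000
T_sizes <- c(36, 360, 1000)
round(1.96 / sqrt(T_sizes), 3)
# shrinks from roughly 0.327 to 0.103 to 0.062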

Chapter 9 Exercise 2

A classic example of a non-stationary series is stock prices. Plot the daily closing prices for Amazon stock (contained in gafa_stock), along with the ACF and PACF. Explain how each plot shows that the series is non-stationary and should be differenced.


# Select data
amazon_close <- gafa_stock |>
  filter(Symbol == 'AMZN') |>
  select(Date, Close)

# Plot ACF and PACF
amazon_close |> gg_tsdisplay(Close, plot_type = 'partial')
## Warning: Provided data has an irregular interval, results should be treated with caution. Computing ACF by observation.
## Provided data has an irregular interval, results should be treated with caution. Computing ACF by observation.

The plot of the closing prices indicates that the series is non-stationary because the data show a clear trend: upward until most of the way through 2018, and downward after that point. A stationary series would not display a trend. The ACF plot indicates that the series is non-stationary because the autocorrelations are large, positive, and decrease very slowly; in a stationary series they would drop to zero quickly. Finally, the PACF plot shows a spike at lag 1 that is very close to 1, the signature of a unit root, which confirms that the series should be differenced.
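The irregular-interval warning arises because stock prices are only observed on trading days. One common workaround (a sketch, following the re-indexing approach used in the textbook) is to index the series by trading day so the interval is regular:

# Re-index by trading day to obtain a regular interval before plotting
amazon_close |>
  mutate(trading_day = row_number()) |>
  update_tsibble(index = trading_day, regular = TRUE) |>
  gg_tsdisplay(Close, plot_type = 'partial')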

Chapter 9 Exercise 3

For the following series, find an appropriate Box-Cox transformation and order of differencing in order to obtain stationary data.


3a

Turkish GDP from global_economy.


First, select and plot the original data.

# Select data
turkish_gdp <- global_economy |>
  filter(Code == 'TUR') |>
  select(Year, GDP)

# Plot the original series
turkish_gdp |> autoplot(GDP)

Next, apply a Box-Cox transformation.

# Find lambda
lambda_3a <- turkish_gdp |>
  features(GDP, features = guerrero) |>
  pull(lambda_guerrero)

# Apply Box-Cox transformation
turkish_gdp_bc <- turkish_gdp |>
  mutate(b_c = box_cox(GDP, lambda_3a)) |>
  select(Year, b_c)

print(lambda_3a)
## [1] 0.1572187

Next, test for seasonal differencing.

# Test for seasonal differencing
turkish_gdp_bc |> features(b_c, unitroot_nsdiffs)
## # A tibble: 1 × 1
##   nsdiffs
##     <int>
## 1       0

No seasonal differences are suggested (as expected, since the data are annual and have no seasonal period). Now test for an appropriate order of differencing.

# Test for order of differencing
turkish_gdp_bc |> features(b_c, unitroot_ndiffs)
## # A tibble: 1 × 1
##   ndiffs
##    <int>
## 1      1

The appropriate transformations are a Box-Cox transformation with \(\lambda \approx 0.1572\) and one order of differencing with no seasonal differencing. The result is shown below.

# Plot the transformed data
turkish_gdp_bc |>
  mutate(diff_bc = difference(b_c)) |>
  autoplot(diff_bc)
## Warning: Removed 1 row containing missing values or values outside the scale range
## (`geom_line()`).

The data now appears stationary.
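As an optional confirmation (a sketch), a KPSS test on the differenced series should produce a small statistic and a p-value above 0.05 if the series really is stationary:

# KPSS test on the Box-Cox transformed, differenced series
turkish_gdp_bc |>
  mutate(diff_bc = difference(b_c)) |>
  features(diff_bc, unitroot_kpss)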

3b

Accommodation takings in the state of Tasmania from aus_accommodation.


First, select and plot the original data.

# Select data
tasmania_takings <- aus_accommodation |>
  filter(State == 'Tasmania') |>
  select(Date, Takings)

# Plot the original series
tasmania_takings |> autoplot(Takings)

Next, apply a Box-Cox transformation.

# Find lambda
lambda_3b <- tasmania_takings |>
  features(Takings, features = guerrero) |>
  pull(lambda_guerrero)

# Apply Box-Cox transformation
tasmania_takings_bc <- tasmania_takings |>
  mutate(b_c = box_cox(Takings, lambda_3b)) |>
  select(Date, b_c)

print(lambda_3b)
## [1] 0.001819643
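Since \(\lambda\) is so close to zero, the Box-Cox transformation here is essentially a log transformation; a quick sketch comparing the two:

# With lambda ~ 0, box_cox() is nearly identical to log()
tasmania_takings |>
  mutate(bc = box_cox(Takings, lambda_3b), lg = log(Takings)) |>
  head()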

Next, test for seasonal differencing.

# Test for seasonal differencing
tasmania_takings_bc |> features(b_c, unitroot_nsdiffs)
## # A tibble: 1 × 1
##   nsdiffs
##     <int>
## 1       1

One seasonal difference is suggested. Now test for an appropriate order of differencing.

# Apply seasonal differencing and test for order of differencing
tasmania_takings_bc |>
  mutate(seas_diff = difference(b_c, 4)) |>
  features(seas_diff, unitroot_ndiffs)
## # A tibble: 1 × 1
##   ndiffs
##    <int>
## 1      0

The appropriate transformations are a Box-Cox transformation with \(\lambda \approx 0.0018\) and one seasonal difference. The result is shown below.

# Plot the transformed data
tasmania_takings_bc |>
  mutate(seas_diff = difference(b_c, 4)) |>
  autoplot(seas_diff)
## Warning: Removed 4 rows containing missing values or values outside the scale range
## (`geom_line()`).

The data now appears stationary.

3c

Monthly sales from souvenirs.


# Select data
monthly_souvenirs <- souvenirs

# Plot the original series
monthly_souvenirs |> autoplot(Sales)

Next, apply a Box-Cox transformation.

# Find lambda
lambda_3c <- monthly_souvenirs |>
  features(Sales, features = guerrero) |>
  pull(lambda_guerrero)

# Apply Box-Cox transformation
monthly_souvenirs_bc <- monthly_souvenirs |>
  mutate(b_c = box_cox(Sales, lambda_3c)) |>
  select(Month, b_c)

print(lambda_3c)
## [1] 0.002118221

Next, test for seasonal differencing.

# Test for seasonal differencing
monthly_souvenirs_bc |> features(b_c, unitroot_nsdiffs)
## # A tibble: 1 × 1
##   nsdiffs
##     <int>
## 1       1

One seasonal difference is suggested. Now test for an appropriate order of differencing.

# Apply seasonal differencing and test for order of differencing
monthly_souvenirs_bc |>
  mutate(seas_diff = difference(b_c, 12)) |>
  features(seas_diff, unitroot_ndiffs)
## # A tibble: 1 × 1
##   ndiffs
##    <int>
## 1      0

The appropriate transformations are a Box-Cox transformation with \(\lambda \approx 0.0021\) and one seasonal difference. The result is shown below.

# Plot the transformed data
monthly_souvenirs_bc |>
  mutate(seas_diff = difference(b_c, 12)) |>
  autoplot(seas_diff)
## Warning: Removed 12 rows containing missing values or values outside the scale range
## (`geom_line()`).

The data now appears stationary.
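As an optional visual check (a sketch), the ACF and PACF of the seasonally differenced series can confirm that the strong seasonal autocorrelation has been removed:

# ACF/PACF of the seasonally differenced series
monthly_souvenirs_bc |>
  mutate(seas_diff = difference(b_c, 12)) |>
  gg_tsdisplay(seas_diff, plot_type = 'partial')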

Chapter 9 Exercise 5

For your retail data (from Exercise 7 in Section 2.10), find the appropriate order of differencing (after transformation if necessary) to obtain stationary data.


# Recreate the retail time series data (code provided by the textbook)
set.seed(31415)
myseries <- aus_retail |>
  filter(`Series ID` == sample(aus_retail$`Series ID`,1))

# Select relevant columns
myseries <- myseries |> select(Month, Turnover)

# Plot the original series
myseries |> autoplot(Turnover)

Since the variation is not constant across the series (the fluctuations grow with the level), a Box-Cox transformation is appropriate.

# Find lambda
lambda_5 <- myseries |>
  features(Turnover, features = guerrero) |>
  pull(lambda_guerrero)

# Apply Box-Cox transformation
myseries_bc <- myseries |>
  mutate(b_c = box_cox(Turnover, lambda_5)) |>
  select(Month, b_c)

# Plot the transformed series
myseries_bc |> autoplot(b_c)

The variance now looks roughly constant across the series, and we can move on to testing for seasonal differencing.

# Test for seasonal differencing
myseries_bc |> features(b_c, unitroot_nsdiffs)
## # A tibble: 1 × 1
##   nsdiffs
##     <int>
## 1       1

One seasonal difference is suggested. We now test for an appropriate order of differencing.

# Apply seasonal differencing and test for order of differencing
myseries_bc |>
  mutate(seas_diff = difference(b_c, 12)) |>
  features(seas_diff, unitroot_ndiffs)
## # A tibble: 1 × 1
##   ndiffs
##    <int>
## 1      0

No additional differencing is suggested. We can plot the final transformed data.

# Plot the transformed data
myseries_bc |>
  mutate(seas_diff = difference(b_c, 12)) |>
  autoplot(seas_diff)
## Warning: Removed 12 rows containing missing values or values outside the scale range
## (`geom_line()`).

The data now appears stationary.
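For context, unitroot_nsdiffs bases its recommendation on the STL seasonal-strength statistic \(F_S\), suggesting a seasonal difference when \(F_S > 0.64\) (textbook Section 9.1). That statistic can be inspected directly; a sketch:

# Seasonal strength of the transformed series (from an STL decomposition)
myseries_bc |>
  features(b_c, feat_stl) |>
  select(seasonal_strength_year)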

Chapter 9 Exercise 6

Simulate and plot some data from simple ARIMA models.


6a

Use the following R code to generate data from an AR(1) model with \(\phi_1=0.6\) and \(\sigma^2=1\). The process starts with \(y_1=0\).


# Code provided by the textbook
y <- numeric(100)
e <- rnorm(100)
for(i in 2:100)
  y[i] <- 0.6*y[i-1] + e[i]
sim <- tsibble(idx = seq_len(100), y = y, index = idx)

6b

Produce a time plot for the series. How does the plot change as you change \(\phi_1\)?


# Produce an initial time plot
sim |> autoplot(y)

# Produce a time plot with a variety of values of phi_1
y_1 <- numeric(100)
y_2 <- numeric(100)
y_3 <- numeric(100)
e <- rnorm(100)
for(i in 2:100) {
  y_1[i] <- 0.6*y_1[i-1] + e[i]
  y_2[i] <- 0.99*y_2[i-1] + e[i]
  y_3[i] <- 0.2*y_3[i-1] + e[i]
}

sim_multi <- tsibble(idx = seq_len(100), 'phi = 0.6' = y_1, 'phi = 0.99' = y_2, 'phi = 0.2' = y_3, index = idx)

sim_long <- sim_multi |>
  pivot_longer(!idx, names_to = 'phi', values_to = 'y')

autoplot(sim_long, y)

Lower values of \(\phi_1\) produce broadly similar behavior, with the series reverting quickly to its mean of zero. The extreme value of \(\phi_1=0.99\) results in a very different graph: the series behaves almost like a random walk and wanders far from zero for long stretches.
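This matches the theory: a stationary AR(1) process with \(\sigma^2=1\) has variance \(\frac{\sigma^2}{1-\phi_1^2}\), which explodes as \(\phi_1\) approaches 1. A quick sketch of the theoretical standard deviations:

# Theoretical standard deviation of a stationary AR(1): sqrt(1 / (1 - phi^2))
phis <- c(0.2, 0.6, 0.99)
round(sqrt(1 / (1 - phis^2)), 2)
# roughly 1.02, 1.25, 7.09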

6c

Write your own code to generate data from an MA(1) model with \(\theta_1=0.6\) and \(\sigma^2=1\).


y_ma <- numeric(100)
e_ma <- rnorm(100)
y_ma[1] <- e_ma[1]  # initialise: no lagged error term exists for the first observation
for(i in 2:100)
  y_ma[i] <- e_ma[i] + 0.6*e_ma[i-1]
sim_ma <- tsibble(idx = seq_len(100), y_ma = y_ma, index = idx)

6d

Produce a time plot for the series. How does the plot change as you change \(\theta_1\)?


# Produce an initial time plot
sim_ma |> autoplot(y_ma)

# Produce a time plot with a variety of values of theta_1
y_1 <- numeric(100)
y_2 <- numeric(100)
y_3 <- numeric(100)
e <- rnorm(100)
for(i in 2:100) {
  y_1[i] <- e[i] + 0.6*e[i-1]
  y_2[i] <- e[i] + 0.99*e[i-1]
  y_3[i] <- e[i] + 0.2*e[i-1]
}

sim_ma_multi <- tsibble(idx = seq_len(100), 'theta = 0.6' = y_1, 'theta = 0.99' = y_2, 'theta = 0.2' = y_3, index = idx)

sim_ma_long <- sim_ma_multi |>
  pivot_longer(!idx, names_to = 'theta', values_to = 'y')

autoplot(sim_ma_long, y)

It appears from this plot that more extreme values of \(\theta_1\) (i.e., closer to 1) tend to produce higher peaks and deeper troughs than values closer to 0. This matches the theory: an MA(1) process has variance \((1+\theta_1^2)\sigma^2\), so the spread grows with \(|\theta_1|\).
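A quick sketch of the corresponding theoretical standard deviations shows that the effect is much milder than in the AR(1) case:

# Theoretical standard deviation of an MA(1): sqrt(1 + theta^2)
thetas <- c(0.2, 0.6, 0.99)
round(sqrt(1 + thetas^2), 2)
# roughly 1.02, 1.17, 1.41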

6e

Generate data from an ARMA(1,1) model with \(\phi_1=0.6\), \(\theta_1=0.6\) and \(\sigma^2=1\).


y_arma <- numeric(100)
e_arma <- rnorm(100)
for(i in 2:100)
  y_arma[i] <- 0.6*y_arma[i-1] + 0.6*e_arma[i-1] + e_arma[i]
sim_arma <- tsibble(idx = seq_len(100), y_arma = y_arma, index = idx)

6f

Generate data from an AR(2) model with \(\phi_1=-0.8\), \(\phi_2=0.3\), and \(\sigma^2=1\). (Note that these parameters will give a non-stationary series.)


y_ar2 <- numeric(100)
e_ar2 <- rnorm(100)
for(i in 3:100)
  y_ar2[i] <- -0.8*y_ar2[i-1] + 0.3*y_ar2[i-2] + e_ar2[i]
sim_ar2 <- tsibble(idx = seq_len(100), y_ar2 = y_ar2, index = idx)
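The textbook's note that these parameters give a non-stationary series can be verified by checking the roots of the AR characteristic polynomial \(1-\phi_1 z-\phi_2 z^2\); for stationarity, all roots must lie outside the unit circle. A minimal sketch using base R's polyroot():

# Roots of 1 - phi1*z - phi2*z^2 with phi1 = -0.8, phi2 = 0.3;
# polyroot() takes coefficients in increasing order of powers
ar2_roots <- polyroot(c(1, 0.8, -0.3))
Mod(ar2_roots)
# one modulus is ~0.93 < 1, confirming non-stationarity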

6g

Graph the latter two series and compare them.


combined_data <- tsibble(idx = seq_len(100), y_ar2 = y_ar2, y_arma = y_arma, index = idx)
combined_data <- combined_data |>
  pivot_longer(!idx, names_to = 'model', values_to = 'y')

combined_data |> autoplot(y)

The AR(2) model shows oscillations whose amplitude increases over time (these parameters violate the stationarity conditions), while the ARMA(1,1) model is stable and shows roughly constant variance. On the graph the ARMA(1,1) series appears to be constant at 0, but this is a scale distortion created by the extreme values of the AR(2) series. The ARMA(1,1) model is shown by itself below.

sim_arma |> autoplot(y_arma)

Chapter 9 Exercise 7

Consider aus_airpassengers, the total number of passengers (in millions) from Australian air carriers for the period 1970-2011.


print(head(aus_airpassengers))
## # A tsibble: 6 x 2 [1Y]
##    Year Passengers
##   <dbl>      <dbl>
## 1  1970       7.32
## 2  1971       7.33
## 3  1972       7.80
## 4  1973       9.38
## 5  1974      10.7 
## 6  1975      11.1

7a

Use ARIMA() to find an appropriate ARIMA model. What model was selected? Check that the residuals look like white noise. Plot forecasts for the next 10 periods.


autoplot(aus_airpassengers, Passengers)

This data does not display seasonal variation, so a non-seasonal ARIMA model is appropriate.

aus_pass_7a <- aus_airpassengers |> model(ARIMA(Passengers))
report(aus_pass_7a)
## Series: Passengers 
## Model: ARIMA(0,2,1) 
## 
## Coefficients:
##           ma1
##       -0.8963
## s.e.   0.0594
## 
## sigma^2 estimated as 4.308:  log likelihood=-97.02
## AIC=198.04   AICc=198.32   BIC=201.65

The model selected was ARIMA(0,2,1).

aus_airpassengers |>
  model(ARIMA(Passengers)) |>
  gg_tsresiduals()

The residuals histogram approximates a normal curve (as desired), although there is a notable outlier on the right-hand side. The ACF plot shows all spikes lying within the critical values. The time plot of the residuals shows occasional spikes, but the variance is mostly constant throughout, so the residuals are consistent with white noise.
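To back up the visual check, a Ljung-Box test can be applied to the innovation residuals (a sketch; lag = 10, with dof = 1 for the single estimated MA coefficient). A large p-value is consistent with white noise:

# Ljung-Box portmanteau test on the ARIMA(0,2,1) residuals
augment(aus_pass_7a) |>
  features(.innov, ljung_box, lag = 10, dof = 1)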

aus_pass_7a |> forecast(h = 10) |> autoplot(aus_airpassengers)

7b

Write the model in terms of the backshift operator.


The model can be written as \((1-B)^2y_t=(1-0.8963B)\epsilon_t\).
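Expanding \((1-B)^2 = 1-2B+B^2\) and rearranging gives the equivalent recursive form \(y_t = 2y_{t-1} - y_{t-2} + \epsilon_t - 0.8963\,\epsilon_{t-1}\).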

7c

Plot forecasts from an ARIMA(0,1,0) model with drift and compare these to part a.


aus_pass_7c <- aus_airpassengers |>
  model('7a' = ARIMA(Passengers ~ pdq(0,2,1)),
        '7c' = ARIMA(Passengers ~ 1 + pdq(0,1,0)))
aus_pass_7c |> forecast(h = 10) |> autoplot(aus_airpassengers, level = NULL)

The (0,1,0) with drift model increases more slowly than the (0,2,1) model. Visually, the (0,1,0) with drift model appears to be more consistent with drawing a line from the first data point through the last data point, while the (0,2,1) model appears to be more consistent with continuing the recent trend (which increases more sharply than the early data did).

7d

Plot forecasts from an ARIMA(2,1,2) model with drift and compare these to parts a and c. Remove the constant and see what happens.


aus_pass_7d <- aus_airpassengers |>
  model('7a' = ARIMA(Passengers ~ pdq(0,2,1)),
        '7c' = ARIMA(Passengers ~ 1 + pdq(0,1,0)),
        '7d' = ARIMA(Passengers ~ 1 + pdq(2,1,2)))
aus_pass_7d |> forecast(h = 10) |> autoplot(aus_airpassengers, level = NULL)

The (2,1,2) with drift model is very similar to the (0,1,0) with drift model, but where the (0,1,0) with drift model produced a forecast that looked very “straight”, the (2,1,2) with drift model is more “wiggly” or “bumpy”, displaying small deviations both above and below the forecasts produced by the (0,1,0) with drift model.

aus_pass_7d_2 <- aus_airpassengers |>
  model('no_constant' = ARIMA(Passengers ~ 0 + pdq(2,1,2)))
## Warning: 1 error encountered for no_constant
## [1] non-stationary AR part from CSS
aus_pass_7d_2 |> forecast(h = 10) |> autoplot(aus_airpassengers, level = NULL)
## Warning: Removed 10 rows containing missing values or values outside the scale range
## (`geom_line()`).

Removing the constant produces an error: the CSS estimation step finds a non-stationary AR part, so the function is unable to fit a model with these parameters, and the resulting forecasts are empty (hence the warning about removed rows).

7e

Plot forecasts from an ARIMA(0,2,1) model with a constant. What happens?


aus_pass_7e <- aus_airpassengers |>
  model('7a' = ARIMA(Passengers ~ pdq(0,2,1)),
        '7e' = ARIMA(Passengers ~ 1 + pdq(0,2,1)))
## Warning: Model specification induces a quadratic or higher order polynomial trend. 
## This is generally discouraged, consider removing the constant or reducing the number of differences.
aus_pass_7e |> forecast(h = 10) |> autoplot(aus_airpassengers, level = NULL)

The inclusion of the constant triggers a warning that the model “induces a quadratic or higher order polynomial trend.” The result is evident in the graph: the ARIMA model with a constant increases much more rapidly than the model without one and appears to accelerate. We can see the full effect by forecasting out more than 10 years:

aus_pass_7e |> forecast(h = 50) |> autoplot(aus_airpassengers, level = NULL)

On a longer timeline, we can see the model with a constant displaying runaway growth, while the model without a constant is more restrained and consistent with the existing data.

Chapter 9 Exercise 8

For the United States GDP series (from global_economy)


us_gdp <- global_economy |>
  filter(Code == 'USA') |>
  select(Year, GDP)
print(head(us_gdp))
## # A tsibble: 6 x 2 [1Y]
##    Year          GDP
##   <dbl>        <dbl>
## 1  1960 543300000000
## 2  1961 563300000000
## 3  1962 605100000000
## 4  1963 638600000000
## 5  1964 685800000000
## 6  1965 743700000000

8a

if necessary, find a suitable Box-Cox transformation for the data


autoplot(us_gdp, GDP)

# Find lambda
lambda_8a <- us_gdp |>
  features(GDP, features = guerrero) |>
  pull(lambda_guerrero)

print(lambda_8a)
## [1] 0.2819443
us_gdp |>
  autoplot(box_cox(GDP, lambda_8a))

# Apply Box-Cox transformation
us_gdp_bc <- us_gdp |>
  mutate(b_c = box_cox(GDP, lambda_8a))

A Box-Cox transformation with \(\lambda \approx0.2819\) is appropriate for this data.

8b

fit a suitable ARIMA model to the transformed data using ARIMA()


The data is not seasonal, so a non-seasonal ARIMA model is appropriate.

# Fit ARIMA model and report results
us_gdp_arima <- us_gdp_bc |> model(ARIMA(b_c))
report(us_gdp_arima)
## Series: b_c 
## Model: ARIMA(1,1,0) w/ drift 
## 
## Coefficients:
##          ar1  constant
##       0.4586  118.1822
## s.e.  0.1198    9.5047
## 
## sigma^2 estimated as 5479:  log likelihood=-325.32
## AIC=656.65   AICc=657.1   BIC=662.78

8c

try some other plausible models by experimenting with the orders chosen


I’ll try to replicate the basics of the Hyndman-Khandakar algorithm for automatic ARIMA modeling described in section 9.7 of the textbook.

# Test for order of differencing
us_gdp_bc |> features(b_c, unitroot_ndiffs)
## # A tibble: 1 × 1
##   ndiffs
##    <int>
## 1      1
us_gdp_models <- us_gdp_bc |>
  model('ARIMA(0,1,0)' = ARIMA(b_c ~ 1 + pdq(0,1,0)),
        'ARIMA(2,1,2)' = ARIMA(b_c ~ 1 + pdq(2,1,2)),
        'ARIMA(1,1,0)' = ARIMA(b_c ~ 1 + pdq(1,1,0)),
        'ARIMA(0,1,1)' = ARIMA(b_c ~ 1 + pdq(0,1,1)))

glance(us_gdp_models)
## # A tibble: 4 × 8
##   .model       sigma2 log_lik   AIC  AICc   BIC ar_roots  ma_roots 
##   <chr>         <dbl>   <dbl> <dbl> <dbl> <dbl> <list>    <list>   
## 1 ARIMA(0,1,0)  6774.   -332.  668.  668.  672. <cpl [0]> <cpl [0]>
## 2 ARIMA(2,1,2)  5734.   -325.  662.  664.  674. <cpl [2]> <cpl [2]>
## 3 ARIMA(1,1,0)  5479.   -325.  657.  657.  663. <cpl [1]> <cpl [0]>
## 4 ARIMA(0,1,1)  5689.   -326.  659.  659.  665. <cpl [0]> <cpl [1]>

8d

choose what you think is the best model and check the residual diagnostics


glance(us_gdp_models) |> arrange(AICc)
## # A tibble: 4 × 8
##   .model       sigma2 log_lik   AIC  AICc   BIC ar_roots  ma_roots 
##   <chr>         <dbl>   <dbl> <dbl> <dbl> <dbl> <list>    <list>   
## 1 ARIMA(1,1,0)  5479.   -325.  657.  657.  663. <cpl [1]> <cpl [0]>
## 2 ARIMA(0,1,1)  5689.   -326.  659.  659.  665. <cpl [0]> <cpl [1]>
## 3 ARIMA(2,1,2)  5734.   -325.  662.  664.  674. <cpl [2]> <cpl [2]>
## 4 ARIMA(0,1,0)  6774.   -332.  668.  668.  672. <cpl [0]> <cpl [0]>

The “best” model is the one with the smallest AICc value, which is the ARIMA(1,1,0) model.

us_gdp_bc |>
  model(ARIMA(b_c ~ 1 + pdq(1,1,0))) |>
  gg_tsresiduals()

The residuals display mostly constant variance and an approximately normal distribution, with all spikes in the ACF plot lying within the critical values.

8e

produce forecasts of your fitted model. Do the forecasts look reasonable?


us_gdp_bc |>
  model(ARIMA(b_c ~ 1 + pdq(1,1,0))) |>
  forecast(h = 10) |>
  autoplot(us_gdp_bc)

The forecasts do look reasonable: they continue the roughly linear trend present in the transformed data.
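These forecasts are on the transformed scale. A common alternative (a sketch) is to place the transformation inside the model formula, in which case fable back-transforms the forecasts to the original GDP scale automatically:

# Fit on the original scale with box_cox() inside the formula;
# forecast() then returns back-transformed forecasts
us_gdp |>
  model(ARIMA(box_cox(GDP, lambda_8a) ~ 1 + pdq(1,1,0))) |>
  forecast(h = 10) |>
  autoplot(us_gdp)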

8f

compare the results with what you would obtain using ETS() (with no transformation).


us_gdp_bc |>
  model('ARIMA' = ARIMA(b_c ~ 1 + pdq(1,1,0)),
        'ETS' = ETS(b_c ~ error('A') + trend('A') + season('N'))) |>
  forecast(h = 25) |>
  autoplot(us_gdp_bc, level = NULL)

The forecasts appear broadly similar, with the ARIMA model predicting slightly larger results than the ETS model.

# Code modified from textbook section 9.10
us_gdp_bc |>
  slice(-n()) |>
  stretch_tsibble(.init = 10) |>
  model('ARIMA' = ARIMA(b_c ~ 1 + pdq(1,1,0)),
        'ETS' = ETS(b_c ~ error('A') + trend('A') + season('N'))) |>
  forecast(h = 1) |>
  accuracy(us_gdp_bc) |>
  select(.model, RMSE:MAPE)
## # A tibble: 2 × 5
##   .model  RMSE   MAE    MPE  MAPE
##   <chr>  <dbl> <dbl>  <dbl> <dbl>
## 1 ARIMA   82.5  62.4 0.0904 0.454
## 2 ETS     90.6  66.7 0.0610 0.473

The ARIMA model has the lower RMSE, MAE, and MAPE, while the ETS model has the lower MPE. Since RMSE and MAE both favor the ARIMA model, it should produce better one-step forecasts in this situation.