2022-09-08

Time Series Decomposition

  • Time series data can exhibit a variety of patterns:
  1. trend
  2. seasonality
  3. cycles
  4. remainder component
  • It is often helpful to split a time series into several components, each representing an underlying pattern category.

Time Series Decomposition

  • When decomposing a time series, it is helpful to first transform or adjust the series.

  • Transforming the series helps keep the decomposition (and later analysis) as simple as possible.

  • So we will begin by discussing transformations and adjustments.

Calendar adjustments

  • Some of the variation seen in seasonal data may be due to simple calendar effects.

  • In such cases, it is usually much easier to remove the variation before doing any further analysis.

  • This kind of adjustment is also known as a trading day adjustment.

Calendar adjustments

Example

  • Monthly sales in a retail store will show variation between months simply because of the different numbers of trading days in each month, on top of the seasonal variation across the year.

  • So to perform the calendar adjustment, we can compute the sales per trading day in each month.

  • Then we effectively remove the calendar variation.

Calendar adjustments

Sales
## # A tsibble: 12 x 3 [1M]
##       Month trading_days sales
##       <mth>         <dbl> <dbl>
##  1 2007 Jan            23     4
##  2 2007 Feb            20     4
##  3 2007 Mar            22     5
##  4 2007 Apr            21     6
##  5 2007 May            23     5
##  6 2007 Jun            21     6
##  7 2007 Jul            22     6
##  8 2007 Aug            23     4
##  9 2007 Sep            20     3
## 10 2007 Oct            23     4
## 11 2007 Nov            22     5
## 12 2007 Dec            21     7

Calendar adjustments

Sales <- Sales %>%
  mutate(adj_sales = sales/trading_days)
Sales
## # A tsibble: 12 x 4 [1M]
##       Month trading_days sales adj_sales
##       <mth>        <dbl> <dbl>     <dbl>
##  1 2007 Jan           23     4     0.174
##  2 2007 Feb           20     4     0.2  
##  3 2007 Mar           22     5     0.227
##  4 2007 Apr           21     6     0.286
##  5 2007 May           23     5     0.217
##  6 2007 Jun           21     6     0.286
##  7 2007 Jul           22     6     0.273
##  8 2007 Aug           23     4     0.174
##  9 2007 Sep           20     3     0.15 
## 10 2007 Oct           23     4     0.174
## 11 2007 Nov           22     5     0.227
## 12 2007 Dec           21     7     0.333

Per capita adjustments

global_economy %>%
  filter(Country == "Brazil") %>%
  autoplot(GDP)

Per capita adjustments

global_economy %>%
  filter(Country == "Brazil") %>%
  autoplot(GDP / Population)

Inflation adjustments

  • Data which are affected by the value of money are best adjusted to a base year before modelling.
  • Assume a price index \(z_{t}\) and an original value \(y_{t}\).
  • Then an adjusted value based on year 2010 would be:

\[x_{t} = \frac{y_{t}}{z_{t}} \times z_{2010}\]

  • Examples of price indexes are the CPI, the GDP deflator, etc.

Inflation adjustments

print_retail <- aus_retail %>%
  filter(Industry == "Newspaper and book retailing") %>%
  group_by(Industry) %>%
  index_by(Year = year(Month)) %>%
  summarise(Turnover = sum(Turnover))

aus_economy <- global_economy %>%
  filter(Code == "AUS")

xt<-print_retail %>%
  left_join(aus_economy, by = "Year") %>%
  mutate(Adjusted_turnover = Turnover / CPI * 100)%>%
  pivot_longer(c(Turnover, Adjusted_turnover), values_to = "Turnover")%>%
  ggplot(aes(x = Year, y = Turnover)) +
  geom_line() +
  facet_grid(name ~ ., scales = "free_y") +
  labs(title = "Turnover: Australian print media industry", y = "$AU")
# left_join() joins the two datasets, keeping every row of the left one
# pivot_longer() reshapes the data into tidy (long) form

Inflation adjustments

xt

Mathematical transformations

  • If the data show variation that increases or decreases with the level of the series, then a transformation can be useful!

  • It should be kept in mind that data transformation simply changes the relative magnitude of the data and does not change the essential characteristics of the data patterns.

  • Growth rate: a transformation used when someone is interested in forecasting the rate of growth rather than the absolute level of variables such as production, interest rates, unemployment, exports, capital formation, etc.

  • This transformation is quite useful when an analyst is developing forecasts with multiple economic variables measured in different units.

Mathematical transformations

  • There are other transformations - Logs and Power Transformations

  • Denote original observations as \(y_1,\dots,y_T\) and transformed observations as \(w_1, \dots, w_T\).

  • Logarithms are useful because they are interpretable: changes in a log value are relative (or percentage) changes on the original scale (\(w_t = \log(y_t)\))

  • Additionally, square roots and cube roots can be used. These are called power transformations because they can be written in the form \(w_t = \sqrt[p]{y_t}\) (although they are not so interpretable).

Mathematical transformations

food <- aus_retail %>%
  filter(Industry == "Food retailing") %>%
  summarise(Turnover = sum(Turnover))

Mathematical transformations

food %>% autoplot(sqrt(Turnover)) +
  labs(y = "Square root turnover")

Mathematical transformations

food %>% autoplot(Turnover^(1/3)) +
  labs(y = "Cube root turnover")

Mathematical transformations

food %>% autoplot(log(Turnover)) +
  labs(y = "Log turnover")

Mathematical transformations

food %>% autoplot(-1/Turnover) +
  labs(y = "Inverse turnover")

Box-Cox transformations

Each of these transformations is close to a member of the family of Box-Cox transformations: \[w_t = \left\{\begin{array}{ll} \log(y_t), & \quad \lambda = 0; \\ (\operatorname{sign}(y_t)|y_t|^\lambda-1)/\lambda, & \quad \lambda \ne 0. \end{array}\right.\]

  • Actually the Bickel-Doksum transformation (allowing for \(y_t<0\))
  • \(\lambda=1\): \(w_t = y_t-1\) (no change in the shape of the time series)
  • \(\lambda=\frac12\): (Square root plus linear transformation)
  • \(\lambda=0\): (Natural logarithm)
  • \(\lambda=-1\): (Inverse plus 1)
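A minimal sketch of this piecewise definition (the box_cox() function from fabletools implements the same transformation):

my_box_cox <- function(y, lambda) {
  # Bickel-Doksum form of the Box-Cox transformation given above
  if (lambda == 0) {
    log(y)
  } else {
    (sign(y) * abs(y)^lambda - 1) / lambda
  }
}

my_box_cox(c(1, 10, 100), lambda = 0)    # natural logarithm
my_box_cox(c(1, 10, 100), lambda = 0.5)  # square root plus linear rescaling
my_box_cox(c(1, 10, 100), lambda = 1)    # y - 1: shape unchanged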

Box-Cox transformations

  • A good value of \(\lambda\) is one which makes the size of the seasonal variation about the same across the whole series, as that makes the forecasting model simpler.

  • The guerrero feature (Guerrero, 1993) can be used to choose a value of \(\lambda\) for you.

  • Check the book (Section 3.1) for an interactive demonstration of transformations.

Box-Cox transformations

food %>%
  features(Turnover, features = guerrero)
## # A tibble: 1 × 1
##   lambda_guerrero
##             <dbl>
## 1          0.0524
  • This attempts to balance the seasonal fluctuations and random variation across the series.
  • Always check the results.
  • A low value of \(\lambda\) can give extremely large prediction intervals.

Box-Cox transformations

food %>% autoplot(box_cox(Turnover, 0.0524)) +
  labs(y = "Box-Cox transformed turnover")

Transformations

  • Often no transformation needed.
  • Simple transformations are easier to explain and work well enough.
  • Transformations can have very large effect on prediction intervals.
  • If some data are zero or negative, then use \(\lambda>0\).
  • log1p() can also be useful for data with zeros, since it computes \(\log(1+x)\) (see the sketch after this list)
  • Choosing logs is a simple way to force forecasts to be positive
  • Transformations must be reversed to obtain forecasts on the original scale. (Handled automatically by fable.)
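A small illustration with hypothetical data containing zeros, where log() breaks down but log1p() does not:

y <- c(0, 3, 12, 0, 7)
log(y)           # -Inf at the zeros
log1p(y)         # log(1 + y): finite everywhere
expm1(log1p(y))  # expm1() reverses it, recovering y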

Time series components

Time series patterns

Recall

Trend - pattern exists when there is a long-term increase or decrease in the data.

Cyclic - pattern exists when data exhibit rises and falls that are not of fixed period (duration usually of at least 2 years).

Seasonal - pattern exists when a series is influenced by seasonal factors (e.g., the quarter of the year, the month, or day of the week).

Time series decomposition

  • In other words, a time series can be written as a function of these components:

\[y_t = f(S_t, T_t, R_t)\]

  • \(y_t=\) data at period \(t\)
  • \(T_t=\) trend-cycle component at period \(t\)
  • \(S_t=\) seasonal component at period \(t\)
  • \(R_t=\) remainder component at period \(t\) - Random or irregular variation.

Additive decomposition: \(y_t = S_t + T_t + R_t.\)

Multiplicative decomposition: \(y_t = S_t \times T_t \times R_t.\)

Time series decomposition

  • An additive model is appropriate if the magnitude of the seasonal fluctuations does not vary with the level of the series.
  • If the seasonal fluctuations are proportional to the level of the series, then a multiplicative model is appropriate.
  • Multiplicative decomposition is more prevalent with economic series.
  • Alternative: use a Box-Cox transformation, and then use additive decomposition.
  • Logs turn multiplicative relationship into an additive relationship:

\[ y_t = S_t \times T_t \times R_t \quad\Rightarrow\quad \log y_t = \log S_t + \log T_t + \log R_t. \]
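A quick numeric check of this identity, using made-up component values:

s_t <- 1.2; t_t <- 100; r_t <- 0.95
y_t <- s_t * t_t * r_t
all.equal(log(y_t), log(s_t) + log(t_t) + log(r_t))  # TRUE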

US Retail Employment

us_retail_employment <- us_employment %>%
  filter(year(Month) >= 1990, Title == "Retail Trade") %>%
  select(-Series_ID)
us_retail_employment
## # A tsibble: 357 x 3 [1M]
##       Month Title        Employed
##       <mth> <chr>           <dbl>
##  1 1990 Jan Retail Trade   13256.
##  2 1990 Feb Retail Trade   12966.
##  3 1990 Mar Retail Trade   12938.
##  4 1990 Apr Retail Trade   13012.
##  5 1990 May Retail Trade   13108.
##  6 1990 Jun Retail Trade   13183.
##  7 1990 Jul Retail Trade   13170.
##  8 1990 Aug Retail Trade   13160.
##  9 1990 Sep Retail Trade   13113.
## 10 1990 Oct Retail Trade   13185.
## # … with 347 more rows

US Retail Employment

us_retail_employment %>%
  autoplot(Employed) +
  labs(y="Persons (thousands)", title="Total employment in US retail")

US Retail Employment

  • To illustrate the ideas, we will use the STL decomposition method, which is discussed later.
us_retail_employment %>%
  model(stl = STL(Employed))
## # A mable: 1 x 1
##       stl
##   <model>
## 1   <STL>

US Retail Employment

dcmp <- us_retail_employment %>%
  model(stl = STL(Employed))
components(dcmp)
## # A dable: 357 x 7 [1M]
## # Key:     .model [1]
## # :        Employed = trend + season_year + remainder
##    .model    Month Employed  trend season_year remainder season_adjust
##    <chr>     <mth>    <dbl>  <dbl>       <dbl>     <dbl>         <dbl>
##  1 stl    1990 Jan   13256. 13288.      -33.0      0.836        13289.
##  2 stl    1990 Feb   12966. 13269.     -258.     -44.6          13224.
##  3 stl    1990 Mar   12938. 13250.     -290.     -22.1          13228.
##  4 stl    1990 Apr   13012. 13231.     -220.       1.05         13232.
##  5 stl    1990 May   13108. 13211.     -114.      11.3          13223.
##  6 stl    1990 Jun   13183. 13192.      -24.3     15.5          13207.
##  7 stl    1990 Jul   13170. 13172.      -23.2     21.6          13193.
##  8 stl    1990 Aug   13160. 13151.       -9.52    17.8          13169.
##  9 stl    1990 Sep   13113. 13131.      -39.5     22.0          13153.
## 10 stl    1990 Oct   13185. 13110.       61.6     13.2          13124.
## # … with 347 more rows

US Retail Employment

us_retail_employment %>%  autoplot(Employed, color='gray') +
  autolayer(components(dcmp), trend, color='#D55E00') +
  labs(y="Persons (thousands)", title="Total employment in US retail")

US Retail Employment

components(dcmp) %>% autoplot()

Seasonal adjustment

  • Useful by-product of decomposition: an easy way to calculate seasonally adjusted data.
  • Additive decomposition: seasonally adjusted data given by \[y_t - S_t = T_t + R_t\]
  • Multiplicative decomposition: seasonally adjusted data given by \[y_t / S_t = T_t \times R_t\]
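Using the STL components computed earlier (dcmp), the additive adjustment \(y_t - S_t\) can be reproduced by hand; manual_adjust is a name introduced here for illustration and should match the season_adjust column:

components(dcmp) %>%
  mutate(manual_adjust = Employed - season_year) %>%
  select(Month, Employed, season_adjust, manual_adjust)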

Seasonal adjustment

  • If the variation due to seasonality is not of primary interest, the seasonally adjusted series can be useful.

  • For example, monthly unemployment data are usually seasonally adjusted in order to highlight variation due to the underlying state of the economy rather than the seasonal variation.

  • An increase in unemployment due to school leavers seeking work is seasonal variation, while an increase in unemployment due to an economic recession is non-seasonal.

  • Most economic analysts who study unemployment data are more interested in the non-seasonal variation. Consequently, employment data (and many other economic series) are usually seasonally adjusted.

US Retail Employment

us_retail_employment %>%
  autoplot(Employed, color='gray') +
  autolayer(components(dcmp), season_adjust, color='#0072B2') +
  labs(y="Persons (thousands)", title="Total employment in US retail")

Seasonal adjustment

  • We use estimates of \(S\) based on past values to seasonally adjust a current value.
  • Seasonally adjusted series reflect remainders as well as trend. Therefore they are not “smooth”, and “downturns” or “upturns” can be misleading.
  • It is better to use the trend-cycle component to look for turning points.

Moving Averages

  • The classical method of time series decomposition originated in the 1920s and was widely used until the 1950s.

  • It still forms the basis of many time series decomposition methods, so it is important to understand how it works.

  • The first step in a classical decomposition is to use a moving average method to estimate the trend-cycle.

Moving Averages

A moving average of order \(m\) can be written as:

\[ \hat{T}_t = \frac{1}{m}\sum_{j=-k}^k y_{t+j} \]

where \(m=2k+1\). That is, the estimate of the trend-cycle at time \(t\) is obtained by averaging values of the time series within \(k\) periods of \(t\).

The average eliminates some of the randomness in the data, leaving a smooth trend-cycle component. We call this an \(m\)-MA, meaning a moving average of order \(m\).
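As a check on the formula, a centered 5-MA can be computed directly with base R's stats::filter(); the inputs below are the (rounded) first few Australian export figures used later, so the output matches the 5-MA column there up to rounding:

y <- c(13.0, 12.4, 13.9, 13.0, 14.9, 13.2, 12.9)
stats::filter(y, rep(1/5, 5), sides = 2)  # equal weights 1/5; NA where the window is incomplete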

Moving Averages

global_economy %>%
  filter(Country == "Australia") %>%
  autoplot(Exports) +
  labs(y = "% of GDP", title = "Total Australian exports")

Moving Averages

  • How to calculate the moving average of this time series?
  • This is easily computed using slide_dbl() from the slider package, which applies a function to sliding/moving time windows.
  • Since we want the moving average, we will use the mean() function.
aus_exports <- global_economy %>%
  filter(Country == "Australia") %>%
  select(Exports)%>%
  mutate(`5-MA` = slider::slide_dbl(Exports, mean,
                .before = 2, .after = 2, .complete = TRUE))
aus_exports
## # A tsibble: 58 x 3 [1Y]
##    Exports  Year `5-MA`
##      <dbl> <dbl>  <dbl>
##  1    13.0  1960   NA  
##  2    12.4  1961   NA  
##  3    13.9  1962   13.5
##  4    13.0  1963   13.5
##  5    14.9  1964   13.6
##  6    13.2  1965   13.4
##  7    12.9  1966   13.3
##  8    12.9  1967   12.7
##  9    12.3  1968   12.6
## 10    12.0  1969   12.6
## # … with 48 more rows

Moving Averages

  • The 5-MA column provides an estimate of the trend-cycle using a moving average of order 5.
  • The first non-missing value in this column (for 1962) is the average of the first five observations (1960–1964).
  • The next value is the average of the values for 1961–1965; and so on.
  • Each value in the 5-MA column is the average of the observations in the five-year window centered on the corresponding year.
  • In the notation above, the 5-MA column contains the values of \(\hat{T}_t\) with \(k=2\) and \(m=2k+1=5\).
  • There are no values for either the first two years or the last two years, because we do not have two observations on either side.

Moving Averages

aus_exports %>%
  autoplot(Exports) +
  geom_line(aes(y = `5-MA`), colour = "#D55E00") +
  labs(y = "% of GDP",
       title = "Total Australian exports") +
  guides(colour = guide_legend(title = "series"))

Moving Averages

  • Notice that the trend-cycle (in orange) is smoother than the original data and captures the main movement of the time series without all of the minor fluctuations.

  • The order of the moving average determines the smoothness of the trend-cycle estimate.

  • In general, a larger order means a smoother curve.

Moving Averages - 7

aus_exports %>%
  mutate(`7-MA` = slider::slide_dbl(Exports, mean,
                .before = 3, .after = 3, .complete = TRUE)) %>%
  autoplot(Exports) +
  geom_line(aes(y = `7-MA`), colour = "#0072B2") +
  labs(y = "% of GDP",
       title = "Total Australian exports - Moving Average 7") +
  guides(colour = guide_legend(title = "series"))

Moving Averages - 9

aus_exports %>%
  mutate(`9-MA` = slider::slide_dbl(Exports, mean,
                .before = 4, .after = 4, .complete = TRUE)) %>%
  autoplot(Exports) +
  geom_line(aes(y = `9-MA`), colour = "#009E73") +
  labs(y = "% of GDP",
       title = "Total Australian exports - Moving Average 9") +
  guides(colour = guide_legend(title = "series"))

Moving Averages

  • Simple moving averages such as these are usually of an odd order (e.g., 3, 5, 7, etc.).

  • This is so that they are symmetric: in a moving average of order \(m=2k+1\), the middle observation and \(k\) observations on either side are averaged.

  • But if \(m\) were even, the average would no longer be symmetric.

Moving averages of moving averages

  • It is possible to apply a moving average to a moving average.

  • One reason for doing this is to make an even-order moving average symmetric.

  • For example, we might take a moving average of order 4, and then apply another moving average of order 2.

beer <- aus_production %>%
  filter(year(Quarter) >= 1992) %>%
  select(Quarter, Beer)
beer_ma <- beer %>%
  mutate(`4-MA` = slider::slide_dbl(Beer, mean,.before = 1, .after = 2, .complete = TRUE),
         `2x4-MA` = slider::slide_dbl(`4-MA`, mean,.before = 1, .after = 0, .complete = TRUE))

Moving averages of moving averages

beer_ma
## # A tsibble: 74 x 4 [1Q]
##    Quarter  Beer `4-MA` `2x4-MA`
##      <qtr> <dbl>  <dbl>    <dbl>
##  1 1992 Q1   443    NA       NA 
##  2 1992 Q2   410   451.      NA 
##  3 1992 Q3   420   449.     450 
##  4 1992 Q4   532   452.     450.
##  5 1993 Q1   433   449      450.
##  6 1993 Q2   421   444      446.
##  7 1993 Q3   410   448      446 
##  8 1993 Q4   512   438      443 
##  9 1994 Q1   449   441.     440.
## 10 1994 Q2   381   446      444.
## # … with 64 more rows

Moving averages of moving averages

  • The notation 2×4-MA in the last column means a 4-MA followed by a 2-MA.
  • For example, the first two values in the 4-MA column are 451.25=(443+410+420+532)/4 and 448.75=(410+420+532+433)/4.
  • The first value in the 2x4-MA column is the average of these two: 450.00=(451.25+448.75)/2.
  • When a 2-MA follows a moving average of an even order (such as 4), it is called a “centered moving average of order 4”, because the result is symmetric.

Moving averages of moving averages

  • In our case, the 2×4-MA becomes a weighted average of observations that is symmetric:

\[ \hat{T}_t = \frac{1}{2}\left[\frac{1}{4}(y_{t-2}+y_{t-1}+y_{t}+y_{t+1})+\frac{1}{4}(y_{t-1}+y_{t}+y_{t+1}+y_{t+2})\right] \] \[ \hat{T}_t = \frac{1}{8}y_{t-2}+\frac{1}{4}y_{t-1}+\frac{1}{4}y_{t}+\frac{1}{4}y_{t+1}+\frac{1}{8}y_{t+2} \]

  • Other combinations of moving averages are also possible.

  • In general, an even-order MA should be followed by another even-order MA to make it symmetric (and an odd-order MA should be followed by an odd-order MA).
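A quick check with the first beer values above that a 4-MA followed by a 2-MA equals a single weighted 5-term MA with weights (1/8, 1/4, 1/4, 1/4, 1/8):

y <- c(443, 410, 420, 532, 433)
ma4_first  <- mean(y[1:4])              # 451.25
ma4_second <- mean(y[2:5])              # 448.75
mean(c(ma4_first, ma4_second))          # 450
sum(c(1/8, 1/4, 1/4, 1/4, 1/8) * y)     # 450: the same result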

Estimating the trend-cycle with seasonal data

  • In general, a 2×m-MA is equivalent to a weighted moving average of order \(m+1\) where all observations take the weight \(1/m\), except for the first and last terms, which take weights \(1/(2m)\).

  • So, if the seasonal period is even and of order \(m\), we use a 2×m-MA to estimate the trend-cycle.

  • If the seasonal period is odd and of order \(m\), we use an m-MA to estimate the trend-cycle.

  • For example, a 2×12-MA can be used to estimate the trend-cycle of monthly data with annual seasonality.

  • A 7-MA can be used to estimate the trend-cycle of daily data with a weekly seasonality.
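Spelled out for m = 12, the 2×12-MA is a single weighted 13-term MA with weights 1/24 at the end points and 1/12 elsewhere; monthly_y below stands for a hypothetical monthly series:

w <- c(1/24, rep(1/12, 11), 1/24)
sum(w)  # the weights sum to 1
# trend <- stats::filter(monthly_y, w, sides = 2)  # monthly_y: placeholder series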

Estimating the trend-cycle with seasonal data

  • Let’s revisit the US retail employment data without using the STL() methodology.

  • It is monthly data with annual seasonality, so what should the order of the moving average be to account for the seasonality?

  • A 2x12-MA:
us_retail_employment_ma <- us_retail_employment %>%
  mutate(
    `12-MA` = slider::slide_dbl(Employed, mean,
                .before = 5, .after = 6, .complete = TRUE),
    `2x12-MA` = slider::slide_dbl(`12-MA`, mean,
                .before = 1, .after = 0, .complete = TRUE)
  )

Estimating the trend-cycle with seasonal data

us_retail_employment_ma %>%
  autoplot(Employed, colour = "gray") +
  geom_line(aes(y = `2x12-MA`), colour = "#D55E00") +
  labs(y = "Persons (thousands)",
       title = "Total employment in US retail")

Estimating the trend-cycle with seasonal data

  • The smooth line shows no seasonality; it is almost the same as the trend-cycle obtained from the STL() decomposition earlier.

  • Any other choice for the order of the moving average (except for 24, 36, etc.) would have resulted in a smooth line that showed some seasonal fluctuations.

  • A major advantage of weighted moving averages is that they yield a smoother estimate of the trend-cycle.

  • Instead of observations entering and leaving the calculation at full weight, their weights slowly increase and then slowly decrease, resulting in a smoother curve.

Classical Decomposition

  • The classical method originated in the 1920s, and it forms the starting point for most other methods of time series decomposition.

  • There are two forms of classical decomposition: additive and multiplicative.

  • Let’s assume a time series with seasonal period \(m\) (e.g., m=4 for quarterly data, m=12 for monthly data, etc.).

  • We assume that the seasonal component is constant from year to year.

Additive Decomposition - Steps

  • Step 1
    • If m is an even number, compute the trend-cycle component \(\hat{T}_t\) using a \(2\times m-MA\).
    • If m is an odd number, compute the trend-cycle component \(\hat{T}_t\) using a \(m-MA\).
  • Step 2
    • Calculate the detrended series: \(y_t - \hat{T}_t\).

Additive Decomposition - Steps

  • Step 3
    • To estimate the seasonal component for each season, simply average the detrended values for that season. For example, with monthly data, the seasonal component for March is the average of all the detrended March values in the data (\(\tilde{S}_3\))

Additive Decomposition - Steps

  • Step 4
    • The seasonal effects should be normalized: the average of the \(m\) seasonal components is 0 (this is equivalent to their sum being equal to 0). The seasonal component is obtained by replicating the sequence for each year of data. This gives \(\hat{S}_t\)
    • normalization:
    1. Calculate the total sum of the seasonal components - \(\sum_{j=1}^m \tilde{S}_j\)
    2. Calculate the average of the seasonal components - \(w = \frac{1}{m}\sum_{j=1}^m \tilde{S}_j\)
    3. Adjust each seasonal component - \(\hat{S}_j = \tilde{S}_j - w\)
  • Step 5: The remainder component is calculated by subtracting the estimated seasonal and trend-cycle components: \(\hat{R}_t = y_t - \hat{T}_t - \hat{S}_t\)
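A compact sketch of Steps 1-5 for monthly data (m = 12), using the built-in USAccDeaths series purely for illustration:

y <- as.numeric(USAccDeaths)                         # monthly series starting in January
m <- 12
w <- c(1/(2*m), rep(1/m, m - 1), 1/(2*m))            # 2x12-MA weights
trend <- as.numeric(stats::filter(y, w, sides = 2))  # Step 1
detrended <- y - trend                               # Step 2
s_tilde <- tapply(detrended, (seq_along(y) - 1) %% m,
                  mean, na.rm = TRUE)                # Step 3: average by season
s_hat <- s_tilde - mean(s_tilde)                     # Step 4: normalize to mean 0
seasonal <- rep(s_hat, length.out = length(y))
remainder <- y - trend - seasonal                    # Step 5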

Additive Decomposition

us_retail_employment %>%
  model(classical_decomposition(Employed, type = "additive")) %>%
  components() %>% autoplot() +
  labs(title = "Classical additive decomposition of total
                  US retail employment")

Multiplicative Decomposition - Steps

  • Step 1
    • If m is an even number, compute the trend-cycle component \(\hat{T}_t\) using a \(2\times m-MA\).
    • If m is an odd number, compute the trend-cycle component \(\hat{T}_t\) using a \(m-MA\).
  • Step 2
    • Calculate the detrended series: \(y_t/\hat{T}_t\).

Multiplicative Decomposition - Steps

  • Step 3
    • To estimate the seasonal component for each season, simply average the detrended values for that season. For example, with monthly data, the seasonal component for March is the average of all the detrended March values in the data (\(\tilde{S}_3\))
  • Step 4
    • The seasonal effects should be normalized: the average of the \(m\) seasonal components is 1 (this is equivalent to their sum being equal to m). The seasonal component is obtained by replicating the sequence for each year of data. This gives \(\hat{S}_t\).
    • normalization:
    1. Calculate the total sum of the seasonal components - \(\sum_{j=1}^m \tilde{S}_j\)
    2. Calculate the average of the seasonal components - \(w = \frac{1}{m}\sum_{j=1}^m \tilde{S}_j\)
    3. Adjust each seasonal component - \(\hat{S}_j = \tilde{S}_j/w\)
  • Step 5: The remainder component is calculated by dividing out the estimated seasonal and trend-cycle components: \(\hat{R}_t = \frac{y_t}{\hat{T}_t \times \hat{S}_t}\)
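For comparison with the additive call shown earlier, a multiplicative decomposition can be requested via type = "multiplicative"; the food retailing turnover series from earlier, whose seasonal swings grow with the level, is a natural candidate:

food %>%
  model(classical_decomposition(Turnover, type = "multiplicative")) %>%
  components() %>% autoplot() +
  labs(title = "Classical multiplicative decomposition of
                  Australian food retailing turnover")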

Shortcomings

  • The estimate of the trend-cycle is unavailable for the first few and last few observations.
  • The trend-cycle estimate tends to over-smooth rapid rises and falls in the data.
  • Classical decomposition methods assume that the seasonal component repeats from year to year.
  • There are shocks that the decomposition is not able to account for.

National Statistics Offices

  • ABS uses X-12-ARIMA
  • US Census Bureau uses X-13ARIMA-SEATS
  • Statistics Canada uses X-12-ARIMA
  • ONS (UK) uses X-12-ARIMA
  • Eurostat uses X-13ARIMA-SEATS

X-11 decomposition

Advantages

  • Relatively robust to outliers
  • Completely automated choices for trend and seasonal changes
  • Very widely tested on economic data over a long period of time.

Disadvantages

  • No prediction/confidence intervals
  • Ad hoc method with no underlying model
  • Only developed for quarterly and monthly data

Extensions: X-12-ARIMA and X-13-ARIMA

  • The X-11, X-12-ARIMA and X-13-ARIMA methods are based on Census II decomposition.
  • These allow adjustments for trading days and other explanatory variables.
  • Known outliers can be omitted.
  • Level shifts and ramp effects can be modelled.
  • Missing values estimated and replaced.
  • Holiday factors (e.g., Easter, Labour Day) can be estimated.

X-13ARIMA-SEATS

Advantages

  • Model-based
  • Smooth trend estimate
  • Allows estimates at end points
  • Allows changing seasonality
  • Developed for economic data

Disadvantages

  • Only developed for quarterly and monthly data

STL decomposition

  • STL: “Seasonal and Trend decomposition using Loess”
  • Very versatile and robust.
  • Unlike X-12-ARIMA, STL will handle any type of seasonality.
  • Seasonal component allowed to change over time, and rate of change controlled by user.
  • Smoothness of trend-cycle also controlled by user.
  • Robust to outliers
  • No trading day or calendar adjustments.
  • Only additive.
  • Take logs to get multiplicative decomposition.
  • Use Box-Cox transformations to get other decompositions.
  • A value of \(\lambda=0\) gives a multiplicative decomposition while \(\lambda=1\) gives an additive decomposition.

STL decomposition

  • The best way to begin learning how to use STL is to see some examples and experiment with the settings

STL decomposition

us_retail_employment %>%
  model(STL(Employed ~ season(window=9), robust=TRUE)) %>%
  components() %>% autoplot() +
    labs(title = "STL decomposition: US retail employment")

STL decomposition

us_retail_employment %>%
  model(STL(Employed ~ season(window=7), robust=TRUE)) %>%
  components() %>% autoplot() +
    labs(title = "STL decomposition: US retail employment")

STL decomposition

us_retail_employment %>%
  model(STL(Employed ~ season(window=5), robust=TRUE)) %>%
  components() %>% autoplot() +
    labs(title = "STL decomposition: US retail employment")

STL decomposition

us_retail_employment %>%
  model(STL(Employed ~ trend(window=5) + season(window="periodic"), robust = TRUE)) %>%
  components() %>% autoplot() +
    labs(title = "STL decomposition: US retail employment")

STL decomposition

  • The two main parameters to be chosen when using STL are the trend-cycle window trend(window = ?) and the seasonal window season(window = ?).
  • These control how rapidly the trend-cycle and seasonal components can change - both should be odd numbers
  • trend window is the number of consecutive observations to be used when estimating the trend-cycle
  • season window is the number of consecutive years to be used in estimating each value in the seasonal component
  • season(window='periodic') is equivalent to an infinite window - the seasonal component is identical across years

STL decomposition

  • By default, the STL() function provides a convenient automated STL decomposition using a seasonal window of season(window=13), with the trend window chosen automatically from the seasonal period.

  • The default setting for monthly data is trend(window=21).

  • But, as with any automated procedure, the default settings will need adjusting for some time series.
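Making the stated monthly defaults explicit; this sketch should be equivalent to the automated model(STL(Employed)) call used earlier:

us_retail_employment %>%
  model(STL(Employed ~ trend(window=21) + season(window=13))) %>%
  components() %>% autoplot() +
    labs(title = "STL decomposition with explicit default windows")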