library(fpp2)
library(ggplot2)
#summary(pigs)
summary(ses(pigs,h=4))
##
## Forecast method: Simple exponential smoothing
##
## Model Information:
## Simple exponential smoothing
##
## Call:
## ses(y = pigs, h = 4)
##
## Smoothing parameters:
## alpha = 0.2971
##
## Initial states:
## l = 77260.0561
##
## sigma: 10308.58
##
## AIC AICc BIC
## 4462.955 4463.086 4472.665
##
## Error measures:
## ME RMSE MAE MPE MAPE MASE
## Training set 385.8721 10253.6 7961.383 -0.922652 9.274016 0.7966249
## ACF1
## Training set 0.01282239
##
## Forecasts:
## Point Forecast Lo 80 Hi 80 Lo 95 Hi 95
## Sep 1995 98816.41 85605.43 112027.4 78611.97 119020.8
## Oct 1995 98816.41 85034.52 112598.3 77738.83 119894.0
## Nov 1995 98816.41 84486.34 113146.5 76900.46 120732.4
## Dec 1995 98816.41 83958.37 113674.4 76092.99 121539.8
From the output, the estimated smoothing parameter is alpha = 0.2971 and the initial level is l = 77260.0561 (the value 10308.58 is sigma, the residual standard deviation, not the level). We now compute a 95% prediction interval for the first forecast as ŷ ± 1.96s, where s is the standard deviation of the residuals.
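Both values can also be read programmatically from the fitted model object; ses() stores the estimated parameters in $model$par (a named vector whose names, for this model, should be "alpha" and "l"):
fit <- ses(pigs, h = 4)
fit$model$par["alpha"]  # smoothing parameter (about 0.2971)
fit$model$par["l"]      # initial level (about 77260.06)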
s <- sd(ses(pigs, h = 4)$residuals)
print(paste0("Lower prediction interval: ", ses(pigs, h = 4)$mean[1] - 1.96*s))
## [1] "Lower prediction interval: 78679.9672534162"
print(paste0("Upper prediction interval: ", ses(pigs, h = 4)$mean[1] + 1.96*s))
## [1] "Upper prediction interval: 118952.844969765"
Our manually computed prediction interval differs slightly from the one produced by R: it is a little narrower. This is most likely because R bases its interval on the model's estimate of sigma (10308.58) rather than on the sample standard deviation of the residuals. Next we plot the forecasts together with the fitted values.
s2 <- ses(pigs, h = 4)
autoplot(s2) + autolayer(s2$fitted)
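A slightly more polished variant labels the fitted series in the legend via autolayer()'s series argument (the axis labels are our own additions, reflecting that pigs records monthly pig slaughter in Victoria):
autoplot(s2) +
  autolayer(fitted(s2), series = "Fitted") +
  xlab("Year") + ylab("Number of pigs slaughtered")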
Modify your function from the previous exercise to return the sum of squared errors rather than the forecast of the next observation. Then use the optim() function to find the optimal values of alpha and l0. Do you get the same values as the ses() function?
We can't do this problem without first doing 7.2, which asks: write your own function to implement simple exponential smoothing. The function should take arguments y (the time series), alpha (the smoothing parameter) and l0 (the initial level). It should return the forecast of the next observation in the series. Does it give the same forecast as ses()? The underlying recursion is l_t = alpha*y_t + (1 - alpha)*l_{t-1}, and the one-step forecast is simply the final level.
my_ses <- function(y, alpha, l0)
{
  y_hat <- l0                # start from the initial level
  for (index in seq_along(y))
  {
    # SES update: weighted average of the new observation and the previous level
    y_hat <- alpha*y[index] + (1 - alpha)*y_hat
  }
  print(paste0("Forecast result by My function: ", as.character(y_hat)))
  invisible(y_hat)           # also return the forecast, as the exercise requires
}
Test the Function
alpha <- s2$model$par[1]  # alpha estimated by ses()
l0 <- s2$model$par[2]     # initial level estimated by ses()
my_ses(pigs, alpha = alpha, l0 = l0)
## [1] "Forecast result by My function: 98816.4061115907"
print(paste0("Using R's Function: ", s2$mean[1]))
## [1] "Using R's Function: 98816.4061115907"
We have verified that our custom-built simple exponential smoothing function produces the same forecast as R's built-in function. Now we modify the function from the previous exercise to return the sum of squared errors rather than the forecast of the next observation.
my_ses2 <- function(pars = c(alpha, l0), y)  # modified to return the sum of squared errors
{
  error <- 0
  SSE <- 0
  alpha <- pars[1]
  l0 <- pars[2]
  y_hat <- l0
  for (index in seq_along(y))  # same recursion as in 7.2
  {
    error <- y[index] - y_hat        # one-step-ahead forecast error
    SSE <- SSE + error^2             # accumulate squared errors
    y_hat <- alpha*y[index] + (1 - alpha)*y_hat
  }
  return(SSE)
}
Test the Function
test <- optim(par = c(0.5, pigs[1]), y = pigs, fn = my_ses2)
Display Results of the Test
print(paste0("My Function: ", test$par[1]))
## [1] "My Function: 0.299008094014243"
print(paste0(test$par[2]))
## [1] "76379.2653476235"
print(paste0("R's Function: ", s2$model$par[1]))
## [1] "R's Function: 0.297148833372095"
print(paste0(s2$model$par[2]))
## [1] "77260.0561458528"
The estimates from our custom-built function (alpha ≈ 0.299, l0 ≈ 76379) are close to those produced by R's built-in ses() function (alpha ≈ 0.297, l0 ≈ 77260).
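As a final sanity check (not part of the exercise), we can evaluate the SSE at both parameter vectors with our own function; if the two optima are genuinely close, the sums of squared errors should nearly agree:
my_ses2(test$par, y = pigs)           # SSE at optim()'s estimates
my_ses2(s2$model$par[1:2], y = pigs)  # SSE at ses()'s estimates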