This is weekly US/Euro exchange rate data (the number of USD per one Euro), downloaded from the FRED database.
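The series can also be pulled programmatically. A minimal sketch, assuming the quantmod and xts packages and the FRED daily series DEXUSEU (USD per one Euro) aggregated to weekly means; the exact series and aggregation behind the original file are assumptions:
library(quantmod)
rate_daily = getSymbols("DEXUSEU", src = "FRED", auto.assign = FALSE)  # daily USD per one Euro
weekly = apply.weekly(na.omit(rate_daily), mean)                       # aggregate to weekly means
exchangerate = data.frame(date = index(weekly), rate = as.numeric(weekly))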
Setup
library(forecast)
library(nnet)
Data
exch = ts(exchangerate[,2], frequency = 52, start = c(2010, 1))  # named exch so it does not mask stats::ts()
autoplot(exch, main = "US/Euro Exchange Rate", ylab = 'USD per one Euro')
pacf(exch)
acf(exch)
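The slow decay in the ACF is typical of a near-random-walk series such as an exchange rate. As a quick aside (not used in the models below), the forecast package can estimate how many differences would be needed for stationarity:
ndiffs(exch)  # estimated number of differences for stationarity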
Split data
train = head(exch, n = length(exch) - 102)  # all but the final 102 weeks
test = tail(exch, n = 102)                  # 102-week holdout
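A quick sanity check that the two pieces partition the full series:
length(train) + length(test) == length(exch)  # should be TRUE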
Neural Net
mod = nnetar(train, lambda=0)
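The automatically selected NNAR(p, P, k)[m] specification can be inspected by printing the fitted model (output not shown here):
print(mod)  # e.g. an NNAR(p, P, k)[52] specification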
fc = forecast(mod, h = length(test))  # forecast over the 102-week holdout
autoplot(fc)
checkresiduals(fc)
## Warning in modeldf.default(object): Could not find appropriate degrees of
## freedom for this model.
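This warning just means checkresiduals() cannot pick a default degrees-of-freedom correction for an NNAR model. A Ljung-Box test can still be run by hand with an explicit lag; lag = 24 below is an illustrative choice, not a recommendation:
Box.test(na.omit(residuals(mod)), lag = 24, type = "Ljung-Box")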
Logged Data
logtrain = log(train)
logtest = log(test)
logmod = nnetar(logtrain, lambda = 0)  # note: lambda = 0 applies a second log, so this fits log(log(rate))
logfc = forecast(logmod, h = length(logtest))  # forecast over the 102-week holdout
autoplot(logfc)
checkresiduals(logfc)
## Warning in modeldf.default(object): Could not find appropriate degrees of
## freedom for this model.
Accuracy
accuracy(fc, test)
## ME RMSE MAE MPE MAPE
## Training set 5.660867e-05 0.01166770 0.009119122 -0.004302022 0.7392899
## Test set 1.427187e-02 0.02233122 0.018695844 1.249033950 1.6558268
## MASE ACF1 Theil's U
## Training set 0.1027375 0.01205408 NA
## Test set 0.2106304 0.82504744 2.65278
accuracy(logfc, logtest)
## ME RMSE MAE MPE MAPE
## Training set 0.0003529884 0.009105124 0.007222734 -0.1921338 4.526529
## Test set 0.0002676407 0.018458234 0.015192477 -2.4668969 14.153117
## MASE ACF1 Theil's U
## Training set 0.1015120 0.008038891 NA
## Test set 0.2135229 0.888436677 2.666307
The accuracy information shows that both models fit the training data well (MASE around 0.10), and using the logged data makes only a slight difference in the model the neural net chooses; the forecast plots look pretty similar. Neither model shows much bias, but we do see strong autocorrelation in the test-set errors of both models (ACF1 above 0.8). Two caveats apply to the comparison. First, the logged model's error measures are on the log scale, so they are not directly comparable to the raw-data numbers; to compare them, back-transform the log forecasts to the raw scale first, as sketched below. Second, Theil's U is well above 1 on the test set, meaning a naive no-change forecast would have beaten both models over the holdout, which is typical for exchange rate data.
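A minimal sketch of that common-scale comparison: exp() undoes the manual log on the point forecasts, after which accuracy() can score them against the raw test set.
logfc_raw = exp(logfc$mean)  # log-model point forecasts back on the raw scale
accuracy(logfc_raw, test)    # now directly comparable with accuracy(fc, test)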