Forecasting the GBP Selling Rate against the Rupiah (2021) with Smoothing Methods and ARIMA Modeling

2022-10-21

Introduction

Background

The selling rate is the price at which a bank sells foreign currency; it is the rate applied when customers exchange currency and the rate used by foreign-exchange dealers to sell foreign currency. For example, when exchanging rupiah (Rp) for US dollars (USD), the selling rate is the rate that applies. There are three exchange-rate regimes: a fixed rate system, a free-floating system, and a managed floating system. According to the official website of Bank Indonesia (BI), Indonesia adopts a free-floating exchange-rate system. Exchange-rate stability plays an important role in achieving price stability and a stable financial system (Salim 2022). Forecasting is therefore useful for projecting the country's finances in the future. GBP (Great Britain Pound sterling) is the currency of the United Kingdom and one of the currencies with the highest selling rates in Indonesia.

Objectives

  1. Determine the model equation using smoothing methods and ARIMA (Autoregressive Integrated Moving Average)
  2. Obtain forecasts of the GBP selling rate for future periods

Data Analysis

Load Packages

library(readxl)
library(forecast)
library(TTR)
library(imputeTS)
library(tseries)
library(ggplot2)
library(dplyr)
library(graphics)
library(TSA)
library(tidyverse)
library(lubridate)
library(gridExtra)
library(ggfortify)
library(cowplot)
library(lmtest)
library(stats)
library(MASS)
library(FinTS)
library(orcutt)

Import Data

The data used for forecasting are Bank Indonesia's currency transaction rates against the British currency, the Pound sterling, over a one-year period of daily observations from 4 January to 31 December 2021.

data <- read_excel("~/Downloads/kurs transaksi GBP.xlsx")
ts.data <- ts(data)
str(data)
## tibble [256 × 2] (S3: tbl_df/tbl/data.frame)
##  $ Tanggal  : POSIXct[1:256], format: "2021-01-04" "2021-01-05" ...
##  $ Kurs_Jual: num [1:256] 19128 19038 19044 19024 19158 ...
kableExtra::kable(head(data) ,caption = 'Kurs Transaksi Mata Uang Pound sterling')
Kurs Transaksi Mata Uang Pound sterling
Tanggal Kurs_Jual
2021-01-04 19128.38
2021-01-05 19037.61
2021-01-06 19043.85
2021-01-07 19023.84
2021-01-08 19157.96
2021-01-11 19224.72

Data Exploration

data$Tanggal <- as.Date(data$Tanggal)
ggplot(data, aes(x=Tanggal, y= Kurs_Jual))+
  geom_line()+
  scale_x_date(date_breaks = "2 month", date_labels = "%Y %b %d") +
  labs(title = "Plot Time Series Data Kurs Transaksi GBP",
       subtitle = "(Januari 2021 sd Desember 2021)",
       y="Kurs Jual GBP") +
  theme(plot.title =  element_text(face = "bold", hjust=.5),
        plot.subtitle = element_text(hjust=.5))

p=0.8
freq_train=as.integer(p*nrow(data))
data$Tanggal <- as.Date(data$Tanggal)
ggplot(data, aes(x=Tanggal, y=Kurs_Jual)) +
  geom_line() + 
  scale_x_date(date_breaks = "2 month", date_labels = "%Y %b %d") +
  labs(title = "Plot Time Series Data Kurs Transaksi GBP",
       subtitle = "(Januari 2021 sd Desember 2021)",
       y="Kurs Jual GBP") +
  geom_vline(aes(xintercept = Tanggal[freq_train], 
                 col="Frequency_Train_Data"), lty=2, lwd=.7) +
  theme(plot.title =  element_text(face = "bold", hjust=.5),
        plot.subtitle = element_text(hjust=.5),
        legend.position = "bottom") +
  scale_color_manual(name = "", values = c(Frequency_Train_Data="red"))

data[freq_train,1]
## # A tibble: 1 × 1
##   Tanggal   
##   <date>    
## 1 2021-10-19

Based on the exploration, the data tend to follow a cyclic pattern. At a glance this looks similar to a seasonal pattern, but the two differ: a seasonal pattern has a fixed wavelength and occurs at fixed times, whereas a cyclic pattern has cycle lengths that vary from one cycle to the next (Ilham 2016). The cyclic pattern indicates that the GBP transaction rate keeps fluctuating over the long run. Although the data tend to decline toward the end of the period, they continue to fluctuate, so the pattern is better described as cyclic than as a downward trend.
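
A quick, hedged cross-check (not part of the original analysis, and only a rough heuristic): findfrequency() from the forecast package estimates a dominant period in the series, and a value of 1 would be consistent with the absence of a fixed seasonal period.

# Estimate a dominant period in the daily selling rates; 1 suggests no fixed seasonal cycle
forecast::findfrequency(data$Kurs_Jual)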

Splitting Data

training <- data[1:204,]
testing<-data[204:256,]

training.ts<-ts(training$Kurs_Jual)
testing.ts<-ts(testing$Kurs_Jual)

The full dataset is split into two subsets, training and testing, in an 80 : 20 proportion to help guard against overfitting. The training set runs from 4 January 2021 to 19 October 2021, and the testing set from 19 October 2021 to 31 December 2021 (as coded above, the 19 October observation appears in both slices).
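
A minimal sketch of the same split derived from the 80% proportion rather than the hard-coded row 204 (an alternative, non-overlapping variant; not the code used for the results below):

p <- 0.8
n.train <- as.integer(p * nrow(data))             # 204 of the 256 observations
training.alt <- data[1:n.train, ]                 # first 80% of rows
testing.alt  <- data[(n.train + 1):nrow(data), ]  # remaining 20% of rows
range(training.alt$Tanggal)
range(testing.alt$Tanggal)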

Plot Data Training

plot(training.ts, col="blue",main="Plot Data Training")
points(training.ts)

Plot Data Testing

plot(testing.ts, col="blue",main="Plot Data Testing")
points(testing.ts)

Time Series Plot of the Training and Testing Data

p=0.8
freq_train=as.integer(p*nrow(data))
data$Tanggal <- as.Date(data$Tanggal)
ggplot(data, aes(x=Tanggal, y=Kurs_Jual)) +
  geom_line() + 
  scale_x_date(date_breaks = "2 month", date_labels = "%Y %b %d") +
  labs(title = "Plot Time Series Data Kurs Transaksi GBP",
       subtitle = "(Januari 2021 sd Desember 2021)",
       y="Kurs Jual GBP") +
  geom_vline(aes(xintercept = Tanggal[freq_train], 
                 col="Frequency_Train_Data"), lty=2, lwd=.7) +
  theme(plot.title =  element_text(face = "bold", hjust=.5),
        plot.subtitle = element_text(hjust=.5),
        legend.position = "bottom") +
  scale_color_manual(name = "", values = c(Frequency_Train_Data="red"))

Smoothing

Optimum Parameters for SMA and DMA

pemulusan <- function(x,min,max,method=c("SMA","DMA"),
                      metrik=c("SSE","MSE","RMSE","MAD","MAPE")){
  df.master <- data.frame()
  z <- max-min+1
  temp <- sqrt(z)
  a <- round(temp) 
  b=a
  if(a*b<z){
    b=b+1
  }
  par(mfrow=c(a,b))
  if(method=="SMA"){
    for(i in min:max) {
      sma <- TTR::SMA(x,i)
      ramal <- c(NA,sma)
      df <- cbind(aktual=c(x,NA),mulus=c(sma,NA),ramal)
      error <- df[,1]-df[,3]
      sse <- sum(error^2,na.rm=T)
      mse <- mean(error^2,na.rm=T)
      rmse <- sqrt(mse)
      mad <- mean(abs(error),na.rm=T)
      rerror <- error/df[,1]*100
      mape <- mean(abs(rerror),na.rm=T)
      ak <- data.frame(n=i,SSE=sse,MSE=mse,RMSE=rmse,MAD=mad,MAPE=mape)
      df.master <- rbind(df.master,ak)
      ts.plot(x,col="gray",main=paste("Single Moving Average n =",i))
      lines(sma,col="green",lwd=2)
      lines(ramal,col="red",lwd=1)
    }
  }
  else if(method=="DMA") {
    for(i in min:max) {
      df_ts_sma <- TTR::SMA(x,  n=i)
      df_ts_dma <- TTR::SMA(df_ts_sma, n=i)
      At <- 2*df_ts_sma-df_ts_dma
      Bt <- df_ts_sma-df_ts_dma
      pemulusan_dma <- At+Bt
      ramal_dma <- c(NA, pemulusan_dma)
      df_dma <- cbind(df_aktual=c(x,NA), pemulusan_dma=c(pemulusan_dma,NA), ramal_dma)
      error.dma <- df_dma[, 1] - df_dma[, 3]
      SSE.dma <- sum(error.dma^2, na.rm = T)
      MSE.dma <- mean(error.dma^2, na.rm = T)
      RMSE.dma <- sqrt(mean(error.dma^2, na.rm = T))
      MAD.dma <- mean(abs(error.dma), na.rm = T)
      r.error.dma <- (error.dma/df_dma[, 1])*100 
      MAPE.dma <- mean(abs(r.error.dma), na.rm = T)
      ak <- data.frame(n=i,SSE=SSE.dma,MSE=MSE.dma,RMSE=RMSE.dma,
                       MAD=MAD.dma,MAPE=MAPE.dma)
      df.master <- rbind(df.master,ak)
      ts.plot(x,main=paste("Double Moving Average n =",i))
      lines(pemulusan_dma,col="green",lwd=2)
      lines(ramal_dma,col="red",lwd= 2)
    }
  }
  opt <- df.master$n[which.min(df.master[,metrik])]
  return(list(Akurasi=df.master,n.optimum=paste("n optimum adalah",opt)))
}

Optimum Parameters for SES and DES

pemulusan2 <- function(x,alfa=NULL,beta=NULL,method=c("SES","DES"),
                       metrik=c("SSE","MSE","RMSE")){
  df.master <- data.frame()
  if(method=="SES"){
    for(i in alfa){
      df_ses <- HoltWinters(x, alpha = i, beta=F, gamma=F)
      sse <- df_ses$SSE 
      mse <- sse/length(x)
      rmse <-sqrt(mse)
      ak <- data.frame(Alpha=i,SSE=sse,MSE=mse,RMSE=rmse)
      df.master <- rbind(df.master,ak)
      datases <- data.frame(x, c(NA, df_ses$fitted[,1]))
      colnames(datases) = c("y","yhat")
      ts.plot(x, xlab="Periode  waktu", ylab="Yt", col="blue", lty=3,main=paste("Single Exponential Smoothing Alpha=",i))
      points(x)
      lines(datases[,2], col="red",lwd=2) # fitted values
    }
  }
  else if(method=="DES"){
    for (i in alfa) {
      for (j in beta) {
        df_des <- HoltWinters(x, alpha = i, beta=j, gamma=F)
        sse <- df_des$SSE 
        mse <- sse/length(x)
        rmse <-sqrt(mse)
        ak <- data.frame(Alpha=i,Beta=j,SSE=sse,MSE=mse,RMSE=rmse)
        df.master <- rbind(df.master,ak)
        datades <- data.frame(x, c(NA, NA, df_des$fitted[,1]))
        colnames(datades) = c("y","yhat")
        ts.plot(x,xlab="periode  waktu", ylab="Yt",  col="blue", lty=3,main=paste("Double Exponential Smoothing (Alpha=",i,"Beta=",j))
        points(x)
        lines (datades[,2], col="red",lwd=2)
      }
    }
  }
  if(method=="SES"){
    opt <- df.master$Alpha[which.min(df.master[,metrik])]
    return(list(Akurasi=df.master,n.optimum=paste("Alpha optimum adalah",opt)))
  }
  else if(method=="DES"){
    opt <- df.master$Alpha[which.min(df.master[,metrik])]
    opt2 <- df.master$Beta[which.min(df.master[,metrik])]
    return(list(Akurasi=df.master,n.optimum=paste("Alpha optimum adalah",opt,
                                                  ",Beta optimum adalah",opt2)))
  }
}

SMA

par(mar=c(1,1,1,1))
pemulusan(x=data$Kurs_Jual,min=2,max=10,method = "SMA",metrik = "MSE")

## $Akurasi
##    n     SSE       MSE      RMSE      MAD      MAPE
## 1  2 1571354  6186.431  78.65387 63.05423 0.3185231
## 2  3 1956408  7732.837  87.93655 69.81639 0.3528312
## 3  4 2331726  9252.882  96.19190 75.33272 0.3805974
## 4  5 2671985 10645.357 103.17634 80.28536 0.4054753
## 5  6 2992723 11970.892 109.41157 84.10787 0.4246626
## 6  7 3246460 13037.991 114.18402 87.28620 0.4403455
## 7  8 3455174 13932.153 118.03454 90.66235 0.4571334
## 8  9 3703719 14994.814 122.45331 94.56226 0.4766684
## 9 10 3967532 16128.178 126.99677 98.11385 0.4945390
## 
## $n.optimum
## [1] "n optimum adalah 2"
pemulusan(x=data$Kurs_Jual,min=2,max=2,method = "SMA",metrik = "MSE")

## $Akurasi
##   n     SSE      MSE     RMSE      MAD      MAPE
## 1 2 1571354 6186.431 78.65387 63.05423 0.3185231
## 
## $n.optimum
## [1] "n optimum adalah 2"

DMA

par(mar=c(1,1,1,1))
pemulusan(x=data$Kurs_Jual,min=2,max=10,method = "DMA",metrik = "MSE")

## $Akurasi
##    n     SSE       MSE      RMSE       MAD      MAPE
## 1  2 1565904  6189.344  78.67239  62.89745 0.3173407
## 2  3 2140060  8526.136  92.33708  75.58943 0.3817482
## 3  4 2790207 11205.650 105.85674  83.83842 0.4233595
## 4  5 3435454 13908.720 117.93524  94.27952 0.4760195
## 5  6 3917089 15988.117 126.44413 101.74941 0.5133435
## 6  7 4310675 17739.402 133.18935 103.90173 0.5232927
## 7  8 4585508 19027.003 137.93840 108.43651 0.5456519
## 8  9 4882720 20429.791 142.93282 115.36217 0.5802503
## 9 10 5295199 22342.611 149.47445 122.07013 0.6142200
## 
## $n.optimum
## [1] "n optimum adalah 2"
pemulusan(x=data$Kurs_Jual,min=2,max=2,method = "DMA",metrik = "MSE")

## $Akurasi
##   n     SSE      MSE     RMSE      MAD      MAPE
## 1 2 1565904 6189.344 78.67239 62.89745 0.3173407
## 
## $n.optimum
## [1] "n optimum adalah 2"

Comparing the RMSE of the optimum fits: SMA (n = 2) is slightly smaller than DMA, SES is smaller than DES (see the results below), and SES in turn has a smaller RMSE than SMA.

SES

par(mar=c(1,1,1,1))
pemulusan2(x=data$Kurs_Jual,alfa=c(0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9),
           method="SES",metrik="MSE")

## $Akurasi
##   Alpha     SSE       MSE      RMSE
## 1   0.1 5823358 22747.493 150.82272
## 2   0.2 3146003 12289.075 110.85610
## 3   0.3 2316206  9047.681  95.11930
## 4   0.4 1901653  7428.330  86.18776
## 5   0.5 1653383  6458.529  80.36497
## 6   0.6 1491548  5826.358  76.33058
## 7   0.7 1383478  5404.211  73.51334
## 8   0.8 1313546  5131.040  71.63128
## 9   0.9 1273686  4975.335  70.53605
## 
## $n.optimum
## [1] "Alpha optimum adalah 0.9"
pemulusan2(x=data$Kurs_Jual,alfa=c(0.9),
           method="SES",metrik="MSE")

## $Akurasi
##   Alpha     SSE      MSE     RMSE
## 1   0.9 1273686 4975.335 70.53605
## 
## $n.optimum
## [1] "Alpha optimum adalah 0.9"

DES

par(mar=c(1,1,1,1))
pemulusan2(x=data$Kurs_Jual,alfa=c(0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9),
           beta=c(0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9),method="DES",metrik="MSE")

## $Akurasi
##    Alpha Beta      SSE       MSE      RMSE
## 1    0.1  0.1 10030614 39182.085 197.94465
## 2    0.1  0.2  7575060 29590.076 172.01766
## 3    0.1  0.3  8331141 32543.519 180.39822
## 4    0.1  0.4  9207969 35968.627 189.65397
## 5    0.1  0.5  9032041 35281.409 187.83346
## 6    0.1  0.6  8110353 31681.068 177.99176
## 7    0.1  0.7  7115765 27795.958 166.72120
## 8    0.1  0.8  6271333 24497.393 156.51643
## 9    0.1  0.9  5608390 21907.772 148.01274
## 10   0.2  0.1  4520596 17658.579 132.88559
## 11   0.2  0.2  4214841 16464.221 128.31298
## 12   0.2  0.3  4016562 15689.694 125.25851
## 13   0.2  0.4  3731997 14578.112 120.73985
## 14   0.2  0.5  3578138 13977.102 118.22479
## 15   0.2  0.6  3602363 14071.730 118.62432
## 16   0.2  0.7  3749067 14644.792 121.01567
## 17   0.2  0.8  3942928 15402.061 124.10504
## 18   0.2  0.9  4138958 16167.803 127.15268
## 19   0.3  0.1  3023630 11811.056 108.67868
## 20   0.3  0.2  2833785 11069.473 105.21156
## 21   0.3  0.3  2750940 10745.859 103.66223
## 22   0.3  0.4  2757961 10773.287 103.79444
## 23   0.3  0.5  2845066 11113.540 105.42078
## 24   0.3  0.6  2967313 11591.066 107.66181
## 25   0.3  0.7  3094939 12089.605 109.95274
## 26   0.3  0.8  3211953 12546.690 112.01201
## 27   0.3  0.9  3305363 12911.574 113.62911
## 28   0.4  0.1  2336714  9127.791  95.53947
## 29   0.4  0.2  2244999  8769.527  93.64575
## 30   0.4  0.3  2250494  8790.992  93.76029
## 31   0.4  0.4  2306974  9011.619  94.92955
## 32   0.4  0.5  2382546  9306.821  96.47186
## 33   0.4  0.6  2451600  9576.564  97.85992
## 34   0.4  0.7  2499875  9765.137  98.81871
## 35   0.4  0.8  2521743  9850.557  99.24997
## 36   0.4  0.9  2519386  9841.352  99.20359
## 37   0.5  0.1  1954271  7633.869  87.37202
## 38   0.5  0.2  1915223  7481.338  86.49473
## 39   0.5  0.3  1942530  7588.009  87.10918
## 40   0.5  0.4  1990099  7773.823  88.16929
## 41   0.5  0.5  2035040  7949.376  89.15927
## 42   0.5  0.6  2066151  8070.902  89.83820
## 43   0.5  0.7  2082379  8134.294  90.19032
## 44   0.5  0.8  2088894  8159.743  90.33130
## 45   0.5  0.9  2092873  8175.285  90.41728
## 46   0.6  0.1  1716218  6703.975  81.87781
## 47   0.6  0.2  1703786  6655.414  81.58072
## 48   0.6  0.3  1734906  6776.977  82.32240
## 49   0.6  0.4  1773371  6927.230  83.22998
## 50   0.6  0.5  1806687  7057.371  84.00816
## 51   0.6  0.6  1832691  7158.951  84.61058
## 52   0.6  0.7  1854890  7245.664  85.12146
## 53   0.6  0.8  1878045  7336.113  85.65111
## 54   0.6  0.9  1905670  7444.024  86.27876
## 55   0.7  0.1  1561913  6101.222  78.11032
## 56   0.7  0.2  1566137  6117.721  78.21586
## 57   0.7  0.3  1600856  6253.344  79.07809
## 58   0.7  0.4  1639730  6405.195  80.03246
## 59   0.7  0.5  1676814  6550.053  80.93240
## 60   0.7  0.6  1713065  6691.661  81.80258
## 61   0.7  0.7  1751643  6842.357  82.71854
## 62   0.7  0.8  1795229  7012.615  83.74136
## 63   0.7  0.9  1845315  7208.263  84.90149
## 64   0.8  0.1  1464018  5718.822  75.62289
## 65   0.8  0.2  1482373  5790.520  76.09547
## 66   0.8  0.3  1524564  5955.330  77.17078
## 67   0.8  0.4  1571604  6139.077  78.35226
## 68   0.8  0.5  1620831  6331.369  79.56990
## 69   0.8  0.6  1673834  6538.413  80.86045
## 70   0.8  0.7  1732933  6769.269  82.27557
## 71   0.8  0.8  1799857  7030.691  83.84921
## 72   0.8  0.9  1875691  7326.920  85.59743
## 73   0.9  0.1  1408769  5503.003  74.18223
## 74   0.9  0.2  1442046  5632.990  75.05325
## 75   0.9  0.3  1496387  5845.262  76.45431
## 76   0.9  0.4  1557983  6085.871  78.01199
## 77   0.9  0.5  1625956  6351.389  79.69560
## 78   0.9  0.6  1702214  6649.274  81.54308
## 79   0.9  0.7  1789074  6988.568  83.59766
## 80   0.9  0.8  1888697  7377.723  85.89367
## 81   0.9  0.9  2003336  7825.530  88.46202
## 
## $n.optimum
## [1] "Alpha optimum adalah 0.9 ,Beta optimum adalah 0.1"
# Note: method = "SES" is specified here, so the beta argument is ignored and the
# output below simply repeats the SES fit with alpha = 0.9; a DES fit with these
# parameters would need method = "DES".
pemulusan2(x=data$Kurs_Jual,alfa=c(0.9),
            beta=c(0.1),
           method="SES",metrik="MSE")

## $Akurasi
##   Alpha     SSE      MSE     RMSE
## 1   0.9 1273686 4975.335 70.53605
## 
## $n.optimum
## [1] "Alpha optimum adalah 0.9"

Holt-Winters Smoothing

# frequency = 23 sets the assumed seasonal period (about one month of trading days)
winter.ts<-ts(data$Kurs_Jual, frequency = 23)
training.ts<-ts(training$Kurs_Jual, frequency = 23)
testing.ts<-ts(testing$Kurs_Jual, start=233, frequency = 23)

Additive

aditif <- HoltWinters(training.ts)
aditif
## Holt-Winters exponential smoothing with trend and additive seasonal component.
## 
## Call:
## HoltWinters(x = training.ts)
## 
## Smoothing parameters:
##  alpha: 0.9191187
##  beta : 0.02118222
##  gamma: 1
## 
## Coefficients:
##             [,1]
## a   19339.262412
## b      -9.678449
## s1     61.697218
## s2     33.652508
## s3    -58.708931
## s4    -85.789541
## s5    -83.596272
## s6    -79.898935
## s7    -64.367618
## s8    -52.024678
## s9    -33.258743
## s10   -68.110275
## s11   -64.865769
## s12   -86.191564
## s13   -79.629390
## s14   -78.064996
## s15   -14.090061
## s16    22.204708
## s17    38.160351
## s18    84.697280
## s19    93.775965
## s20    86.985230
## s21   111.395217
## s22    94.512796
## s23   101.397588

Forecasting

ramalan1 <- forecast(aditif, h=23)
ramalan1
##           Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
##  9.869565       19391.28 19286.19 19496.37 19230.56 19552.01
##  9.913043       19353.56 19209.43 19497.69 19133.13 19573.99
##  9.956522       19251.52 19075.70 19427.33 18982.63 19520.41
## 10.000000       19214.76 19011.13 19418.39 18903.34 19526.18
## 10.043478       19207.27 18978.27 19436.28 18857.04 19557.51
## 10.086957       19201.29 18948.60 19453.99 18814.83 19587.75
## 10.130435       19207.15 18932.00 19482.29 18786.34 19627.95
## 10.173913       19209.81 18913.15 19506.47 18756.10 19663.52
## 10.217391       19218.90 18901.46 19536.34 18733.41 19704.38
## 10.260870       19174.37 18836.74 19512.00 18658.00 19690.73
## 10.304348       19167.93 18810.59 19525.28 18621.42 19714.45
## 10.347826       19136.93 18760.26 19513.60 18560.86 19713.00
## 10.391304       19133.81 18738.14 19529.49 18528.68 19738.94
## 10.434783       19125.70 18711.29 19540.11 18491.92 19759.48
## 10.478261       19180.00 18747.08 19612.91 18517.91 19842.09
## 10.521739       19206.61 18755.37 19657.85 18516.50 19896.73
## 10.565217       19212.89 18743.48 19682.30 18494.99 19930.79
## 10.608696       19249.75 18762.30 19737.19 18504.26 19995.23
## 10.652174       19249.15 18743.77 19754.52 18476.24 20022.05
## 10.695652       19232.68 18709.46 19755.90 18432.49 20032.87
## 10.739130       19247.41 18706.42 19788.40 18420.04 20074.78
## 10.782609       19220.85 18662.15 19779.55 18366.40 20075.30
## 10.826087       19218.06 18641.69 19794.42 18336.58 20099.53

Training Accuracy (SSE)

sse1.train <- aditif$SSE
sse1.train
## [1] 1222623

Multiplicative

# Note: this call is identical to the additive fit above; HoltWinters() defaults to
# seasonal = "additive", so the "multiplicative" results below match the additive ones.
multi <- HoltWinters(training.ts)
multi
## Holt-Winters exponential smoothing with trend and additive seasonal component.
## 
## Call:
## HoltWinters(x = training.ts)
## 
## Smoothing parameters:
##  alpha: 0.9191187
##  beta : 0.02118222
##  gamma: 1
## 
## Coefficients:
##             [,1]
## a   19339.262412
## b      -9.678449
## s1     61.697218
## s2     33.652508
## s3    -58.708931
## s4    -85.789541
## s5    -83.596272
## s6    -79.898935
## s7    -64.367618
## s8    -52.024678
## s9    -33.258743
## s10   -68.110275
## s11   -64.865769
## s12   -86.191564
## s13   -79.629390
## s14   -78.064996
## s15   -14.090061
## s16    22.204708
## s17    38.160351
## s18    84.697280
## s19    93.775965
## s20    86.985230
## s21   111.395217
## s22    94.512796
## s23   101.397588

Forecasting

ramalan2 <- forecast(multi, h=23)
ramalan2
##           Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
##  9.869565       19391.28 19286.19 19496.37 19230.56 19552.01
##  9.913043       19353.56 19209.43 19497.69 19133.13 19573.99
##  9.956522       19251.52 19075.70 19427.33 18982.63 19520.41
## 10.000000       19214.76 19011.13 19418.39 18903.34 19526.18
## 10.043478       19207.27 18978.27 19436.28 18857.04 19557.51
## 10.086957       19201.29 18948.60 19453.99 18814.83 19587.75
## 10.130435       19207.15 18932.00 19482.29 18786.34 19627.95
## 10.173913       19209.81 18913.15 19506.47 18756.10 19663.52
## 10.217391       19218.90 18901.46 19536.34 18733.41 19704.38
## 10.260870       19174.37 18836.74 19512.00 18658.00 19690.73
## 10.304348       19167.93 18810.59 19525.28 18621.42 19714.45
## 10.347826       19136.93 18760.26 19513.60 18560.86 19713.00
## 10.391304       19133.81 18738.14 19529.49 18528.68 19738.94
## 10.434783       19125.70 18711.29 19540.11 18491.92 19759.48
## 10.478261       19180.00 18747.08 19612.91 18517.91 19842.09
## 10.521739       19206.61 18755.37 19657.85 18516.50 19896.73
## 10.565217       19212.89 18743.48 19682.30 18494.99 19930.79
## 10.608696       19249.75 18762.30 19737.19 18504.26 19995.23
## 10.652174       19249.15 18743.77 19754.52 18476.24 20022.05
## 10.695652       19232.68 18709.46 19755.90 18432.49 20032.87
## 10.739130       19247.41 18706.42 19788.40 18420.04 20074.78
## 10.782609       19220.85 18662.15 19779.55 18366.40 20075.30
## 10.826087       19218.06 18641.69 19794.42 18336.58 20099.53

Testing Accuracy (SSE)

# The Holt-Winters forecasts cover 23 periods while the test set has 53 observations,
# hence the recycling warning below.
selisih1<-as.numeric(ramalan1$mean)-as.numeric(testing.ts)
## Warning in as.numeric(ramalan1$mean) - as.numeric(testing.ts): longer object
## length is not a multiple of shorter object length
selisih1
##  [1]  -49.378820 -165.501979 -358.071867 -442.170927 -427.496107 -422.797219
##  [7] -364.864351 -415.719861 -397.842375 -355.002356 -369.866299 -438.430543
## [13] -510.246819 -314.110874 -163.774388 -198.428069 -179.310875   23.897604
## [19]   65.227840   80.038656    9.480195  -54.840676  -95.584333  153.921180
## [25]  119.098021   61.538133   46.289073   65.373893  111.062781  -18.064351
## [31]   -7.289861   55.587625  -56.992356  -54.316299 -101.780543  -66.716819
## [37]   20.989126  130.415612  132.761931  143.999125  186.847604  118.727840
## [43]   61.098656   75.830195  131.809324  126.425667  372.611180  191.808021
## [49]  107.068133   56.519073  -25.796107  -48.097219  -92.094351
SSEtesting1<-sum(selisih1^2)
selisih2<-as.numeric(ramalan2$mean)-as.numeric(testing.ts)
## Warning in as.numeric(ramalan2$mean) - as.numeric(testing.ts): longer object
## length is not a multiple of shorter object length
selisih2
##  [1]  -49.378820 -165.501979 -358.071867 -442.170927 -427.496107 -422.797219
##  [7] -364.864351 -415.719861 -397.842375 -355.002356 -369.866299 -438.430543
## [13] -510.246819 -314.110874 -163.774388 -198.428069 -179.310875   23.897604
## [19]   65.227840   80.038656    9.480195  -54.840676  -95.584333  153.921180
## [25]  119.098021   61.538133   46.289073   65.373893  111.062781  -18.064351
## [31]   -7.289861   55.587625  -56.992356  -54.316299 -101.780543  -66.716819
## [37]   20.989126  130.415612  132.761931  143.999125  186.847604  118.727840
## [43]   61.098656   75.830195  131.809324  126.425667  372.611180  191.808021
## [49]  107.068133   56.519073  -25.796107  -48.097219  -92.094351
SSEtesting2<-sum(selisih2^2)
akurasi <- matrix(c(SSEtesting1, SSEtesting2), nrow=1, ncol=2)
row.names(akurasi)<- "SSE"
colnames(akurasi) <- c("Aditif", "Multiplikatif")

Summary

akurasi
##      Aditif Multiplikatif
## SSE 2549477       2549477
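
The two SSE values are identical because the "multiplicative" fit above repeats the additive call. A minimal sketch of an explicitly multiplicative fit, assuming the intent was to compare the two seasonal forms (not part of the original code):

# Explicitly multiplicative Holt-Winters fit, for comparison with the additive SSE
multi.true <- HoltWinters(training.ts, seasonal = "multiplicative")
multi.true$SSE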

Modeling

Data Stationarity

Exploratory

ACF and PACF Plots

par(mfrow=c(1,1))
acf(training.ts)

pacf(training.ts)

The ACF tails off slowly, while the PACF cuts off after lag 1. Because the ACF tails off slowly, i.e. decays only gradually, the series is not stationary.

Formal Test

ADF Test

Hypotheses:

H0: The data are not stationary

H1: The data are stationary

tseries::adf.test(training.ts)
## 
##  Augmented Dickey-Fuller Test
## 
## data:  training.ts
## Dickey-Fuller = -1.4815, Lag order = 5, p-value = 0.7933
## alternative hypothesis: stationary

Based on the Augmented Dickey-Fuller (ADF) test, the p-value (0.7933) > alpha (0.05), so we fail to reject H0. There is not enough evidence to conclude that the series is stationary at the 5% significance level; in other words, the series is non-stationary. Since both the ADF test and the ACF & PACF plots indicate non-stationarity, differencing is required.
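
As an additional hedged check (not in the original analysis), ndiffs() from the forecast package estimates how many differences are needed to achieve stationarity; a result of 1 would agree with the ADF conclusion above.

# Estimated number of differences required for stationarity
forecast::ndiffs(training.ts)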

Differencing

data.dif1<-diff(training.ts, differences = 1)

Data Stationarity after Differencing

plot(x = data.dif1,
     col = "black",
     lwd = 1,
     type = "o",
     main = "Setelah Differencing")

After differencing once (d = 1), the GBP selling rate series appears stationary based on the time series plot above.

Hypotheses:

H0: The data are not stationary

H1: The data are stationary

tseries::adf.test(data.dif1)
## Warning in tseries::adf.test(data.dif1): p-value smaller than printed p-value
## 
##  Augmented Dickey-Fuller Test
## 
## data:  data.dif1
## Dickey-Fuller = -7.0165, Lag order = 5, p-value = 0.01
## alternative hypothesis: stationary

After differencing once (d = 1), the ADF test gives a p-value (0.01) < alpha (0.05), so we reject H0: the differenced series is stationary at the 5% significance level.

ARIMA Model Identification

ACF

acf(data.dif1)

PACF

pacf(data.dif1) 

EACF

eacf(data.dif1)
## AR/MA
##   0 1 2 3 4 5 6 7 8 9 10 11 12 13
## 0 o o o o o o o o o o o  o  o  o 
## 1 o o o o o o o o o o o  o  o  o 
## 2 x x o o o o o o o o o  o  o  o 
## 3 x x x o o o o o o o o  o  o  o 
## 4 x x x x o o o o o o o  o  o  o 
## 5 x o x x o o o o o o o  o  o  o 
## 6 x x x x o x o o o o o  o  o  o 
## 7 x x x o o x o o o o o  o  o  o

Based on the ACF, PACF, and EACF plots, the following candidate models were obtained: ARIMA(0,1,0), ARIMA(0,1,1), ARIMA(2,1,2), ARIMA(3,1,2).
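
As a hedged cross-check of the manually identified candidates (not the procedure used for the results below), auto.arima() can search the ARMA orders on the once-differenced series:

# Automatic order search with d fixed at 1 and no seasonal terms, as a sanity check
forecast::auto.arima(training.ts, d = 1, seasonal = FALSE)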

Model Comparison

model1 <- arima(training.ts,order = c(0,1,0))
model2 <- arima(training.ts,order = c(0,1,1))
model3 <- arima(training.ts,order = c(2,1,2))
model4 <- arima(training.ts,order = c(3,1,2))
Model <- c("ARIMA (0,1,0)","ARIMA (0,1,1)","ARIMA (2,1,2)","ARIMA (3,1,2)")
AIC <- c(model1$aic,model2$aic,model3$aic,model4$aic)
Akurasi <- data.frame(Model,AIC)
kableExtra::kable(Akurasi)
Model AIC
ARIMA (0,1,0) 2307.686
ARIMA (0,1,1) 2309.682
ARIMA (2,1,2) 2311.504
ARIMA (3,1,2) 2313.470

Summary

After comparison, the best ARIMA model based on the smallest AIC is ARIMA(0,1,0).

The GBP selling rate series is non-stationary, so it must first be made stationary by differencing before suitable ARIMA orders can be identified. Based on the analysis above, the best candidate model for the GBP selling rate data is ARIMA(0,1,0).
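
ARIMA(0,1,0) without drift is a random walk, Y_t = Y_(t-1) + e_t, so every point forecast simply repeats the last observed training value. A minimal sketch of that equivalence using the naive forecast (an illustration, not part of the original code):

# ARIMA(0,1,0) point forecasts coincide with the naive (last-value) forecast
naive.fc <- forecast::naive(training.ts, h = 53)
head(naive.fc$mean)                               # all equal to the last training value
all.equal(as.numeric(naive.fc$mean),
          rep(as.numeric(tail(training.ts, 1)), 53))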

Model Diagnostics

Exploratory

residual <- model1$residuals
par(mfrow = c(2,2))
qqnorm(residual)
qqline(residual, col = "blue", lwd = 2)
plot(residual, type="o", 
     ylab = "Sisaan", xlab = "Order", main = "Sisaan Model1 vs Order")
abline(h = 0, col='red')
acf(residual)
pacf(residual)
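
A compact alternative view of these diagnostics (not in the original analysis) is stats::tsdiag(), which plots the standardized residuals, their ACF, and Ljung-Box p-values for an arima fit in one call.

# Standardized residuals, residual ACF, and Ljung-Box p-values for model1
tsdiag(model1)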

  1. Normal Q-Q Plot

Based on the exploration above, most observations lie along the normal Q-Q reference line, so exploratively the residuals can be considered normally distributed.

Residuals (Model 1) vs Order

In the plot of residuals against order, most points lie around zero, but a few observations sit fairly far from zero, so it is not yet clear whether autocorrelation is present.

ACF and PACF Plots

In both the residual ACF and PACF plots, some spikes exceed the horizontal blue confidence bounds, which suggests that autocorrelation is present in the model.

Formal Tests

1. Residuals Are Normally Distributed

Hypotheses:

H0: The residuals are normally distributed

H1: The residuals are not normally distributed

shapiro.test(residual)
## 
##  Shapiro-Wilk normality test
## 
## data:  residual
## W = 0.996, p-value = 0.8754

Since the Shapiro-Wilk p-value (0.8754) > 0.05, we fail to reject H0: the residuals can be considered normally distributed.

2. Residuals Are Independent / No Autocorrelation

Hypotheses:

H0: There is no autocorrelation

H1: There is autocorrelation

Box.test(residual, type = "Ljung")
## 
##  Box-Ljung test
## 
## data:  residual
## X-squared = 0.0096098, df = 1, p-value = 0.9219

Since the Box-Ljung p-value (0.9219) > 0.05, we fail to reject H0: there is no evidence of autocorrelation in the residuals.

3. Residual Mean Equals Zero

Hypotheses:

H0: The residual mean equals zero

H1: The residual mean is not equal to zero

t.test(residual, mu = 0, conf.level = 0.95)
## 
##  One Sample t-test
## 
## data:  residual
## t = 0.32607, df = 203, p-value = 0.7447
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
##  -8.19897 11.44807
## sample estimates:
## mean of x 
##  1.624551

Since the t-test p-value (0.7447) > 0.05, we fail to reject H0: the residual mean is not significantly different from zero.

Overfitting

1. Model ARIMA (0,1,0)

model1.a <- Arima(training.ts, order=c(1,1,0), method="ML")
summary(model1.a)
## Series: training.ts 
## ARIMA(1,1,0) 
## 
## Coefficients:
##           ar1
##       -0.0047
## s.e.   0.0703
## 
## sigma^2 = 5091:  log likelihood = -1153.84
## AIC=2311.68   AICc=2311.74   BIC=2318.31
## 
## Training set error measures:
##                    ME    RMSE      MAE         MPE      MAPE      MASE
## Training set 1.631472 71.0034 55.98659 0.007827834 0.2809441 0.2238124
##                      ACF1
## Training set -0.001969493
lmtest::coeftest(model1.a)
## 
## z test of coefficients:
## 
##       Estimate Std. Error z value Pr(>|z|)
## ar1 -0.0046589  0.0702976 -0.0663   0.9472

2. Model ARIMA (0,1,1)

model2.a <- Arima(training.ts,order = c(1,1,1), method="ML")
summary(model2.a)
## Series: training.ts 
## ARIMA(1,1,1) 
## 
## Coefficients:
## Warning in sqrt(diag(x$var.coef)): NaNs produced
##           ar1     ma1
##       -0.0025  -0.002
## s.e.      NaN     NaN
## 
## sigma^2 = 5117:  log likelihood = -1153.84
## AIC=2313.68   AICc=2313.8   BIC=2323.62
## 
## Training set error measures:
##                    ME     RMSE     MAE        MPE      MAPE     MASE
## Training set 1.631237 71.00343 55.9865 0.00782672 0.2809436 0.223812
##                      ACF1
## Training set -0.002145193
lmtest::coeftest(model2.a)
## Warning in sqrt(diag(se)): NaNs produced
## 
## z test of coefficients:
## 
##       Estimate Std. Error z value Pr(>|z|)
## ar1 -0.0025180        NaN     NaN      NaN
## ma1 -0.0019722        NaN     NaN      NaN
model2.b <- Arima(training.ts,order = c(0,1,2), method="ML")
summary(model2.b)
## Series: training.ts 
## ARIMA(0,1,2) 
## 
## Coefficients:
##           ma1     ma2
##       -0.0018  0.0510
## s.e.   0.0699  0.0783
## 
## sigma^2 = 5106:  log likelihood = -1153.63
## AIC=2313.26   AICc=2313.38   BIC=2323.2
## 
## Training set error measures:
##                    ME     RMSE      MAE         MPE      MAPE      MASE
## Training set 1.535782 70.92911 56.10259 0.007368042 0.2814967 0.2242761
##                      ACF1
## Training set -0.003474311
lmtest::coeftest(model2.b)
## 
## z test of coefficients:
## 
##       Estimate Std. Error z value Pr(>|z|)
## ma1 -0.0017974  0.0699461 -0.0257   0.9795
## ma2  0.0510102  0.0782967  0.6515   0.5147

3. Model ARIMA (2,1,2)

model3.a <- arima(training.ts,order = c(2,1,3), method = "ML")
## Warning in stats::arima(x = x, order = order, seasonal = seasonal, xreg =
## xreg, : possible convergence problem: optim gave code = 1
summary(model3.a)
## 
## Call:
## arima(x = training.ts, order = c(2, 1, 3), method = "ML")
## 
## Coefficients:
## Warning in sqrt(diag(x$var.coef)): NaNs produced
##          ar1      ar2      ma1     ma2      ma3
##       1.5991  -0.9738  -1.6189  1.0227  -0.0541
## s.e.     NaN      NaN      NaN  0.0614   0.0222
## 
## sigma^2 estimated as 4933:  log likelihood = -1151.39,  aic = 2312.79
## 
## Training set error measures:
## Warning in trainingaccuracy(object, test, d, D): test elements must be within
## sample
##               ME RMSE MAE MPE MAPE
## Training set NaN  NaN NaN NaN  NaN
lmtest::coeftest(model3.a)
## Warning in sqrt(diag(se)): NaNs produced
## 
## z test of coefficients:
## 
##      Estimate Std. Error z value Pr(>|z|)    
## ar1  1.599099        NaN     NaN      NaN    
## ar2 -0.973822        NaN     NaN      NaN    
## ma1 -1.618853        NaN     NaN      NaN    
## ma2  1.022694   0.061412 16.6531  < 2e-16 ***
## ma3 -0.054132   0.022239 -2.4342  0.01493 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

4. Model ARIMA (3,1,2)

model4.a <- arima(training.ts,order = c(3,1,3), method = "ML")
summary(model4.a)
## 
## Call:
## arima(x = training.ts, order = c(3, 1, 3), method = "ML")
## 
## Coefficients:
##          ar1      ar2     ar3      ma1     ma2      ma3
##       0.6336  -0.8247  0.7194  -0.6467  0.8779  -0.7994
## s.e.  0.2979   0.0612  0.2810   0.2798  0.0878   0.2858
## 
## sigma^2 estimated as 4878:  log likelihood = -1151.57,  aic = 2315.14
## 
## Training set error measures:
## Warning in trainingaccuracy(object, test, d, D): test elements must be within
## sample
##               ME RMSE MAE MPE MAPE
## Training set NaN  NaN NaN NaN  NaN
lmtest::coeftest(model4.a)
## 
## z test of coefficients:
## 
##      Estimate Std. Error  z value  Pr(>|z|)    
## ar1  0.633646   0.297855   2.1274  0.033390 *  
## ar2 -0.824665   0.061175 -13.4805 < 2.2e-16 ***
## ar3  0.719412   0.281014   2.5601  0.010465 *  
## ma1 -0.646731   0.279849  -2.3110  0.020833 *  
## ma2  0.877902   0.087824   9.9961 < 2.2e-16 ***
## ma3 -0.799422   0.285771  -2.7974  0.005151 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
model4.b <- arima(training.ts,order = c(4,1,2), method = "ML")
summary(model4.b)
## 
## Call:
## arima(x = training.ts, order = c(4, 1, 2), method = "ML")
## 
## Coefficients:
##          ar1     ar2      ar3      ar4      ma1      ma2
##       0.1554  0.1959  -0.0334  -0.1102  -0.1634  -0.1518
## s.e.  0.6252  0.4330   0.0727   0.0786   0.6278   0.4337
## 
## sigma^2 estimated as 4988:  log likelihood = -1152.33,  aic = 2316.66
## 
## Training set error measures:
## Warning in trainingaccuracy(object, test, d, D): test elements must be within
## sample
##               ME RMSE MAE MPE MAPE
## Training set NaN  NaN NaN NaN  NaN
lmtest::coeftest(model4.b)
## 
## z test of coefficients:
## 
##      Estimate Std. Error z value Pr(>|z|)
## ar1  0.155419   0.625223  0.2486   0.8037
## ar2  0.195942   0.432975  0.4525   0.6509
## ar3 -0.033413   0.072736 -0.4594   0.6460
## ar4 -0.110181   0.078626 -1.4013   0.1611
## ma1 -0.163423   0.627825 -0.2603   0.7946
## ma2 -0.151838   0.433666 -0.3501   0.7262

Model Comparison with the Overfitted Models

Model <- c("ARIMA (0,1,0)", "ARIMA (1,1,0)","ARIMA (0,1,1)", "ARIMA (1,1,1)","ARIMA (0,1,2)","ARIMA (2,1,2)","ARIMA (2,1,3)","ARIMA (3,1,2)", "ARIMA (3,1,3)", "ARIMA (4,1,2)")
AIC <- c(model1$aic,model1.a$aic,model2$aic,model2.a$aic, model2.b$aic,model3$aic,model3.a$aic,model4$aic,model4.a$aic,model4.b$aic)
Akurasi <- data.frame(Model,AIC)
akurasioverfitting <- kableExtra::kable(Akurasi)

model.ov_aic <- data.frame(
  "Model" = c("ARIMA (0,1,0)", "ARIMA (1,1,0)","ARIMA (0,1,1)", "ARIMA (1,1,1)","ARIMA (0,1,2)","ARIMA (2,1,2)","ARIMA (2,1,3)","ARIMA (3,1,2)", "ARIMA (3,1,3)", "ARIMA (4,1,2)"),
  "AIC" = c(model1$aic,model1.a$aic,model2$aic,model2.a$aic, model2.b$aic,model3$aic,model3.a$aic,model4$aic,model4.a$aic,model4.b$aic)
)

model.ov_aic
##            Model      AIC
## 1  ARIMA (0,1,0) 2307.686
## 2  ARIMA (1,1,0) 2311.682
## 3  ARIMA (0,1,1) 2309.682
## 4  ARIMA (1,1,1) 2313.682
## 5  ARIMA (0,1,2) 2313.262
## 6  ARIMA (2,1,2) 2311.504
## 7  ARIMA (2,1,3) 2312.790
## 8  ARIMA (3,1,2) 2313.470
## 9  ARIMA (3,1,3) 2315.139
## 10 ARIMA (4,1,2) 2316.658
dplyr::arrange(.data=model.ov_aic, AIC)
##            Model      AIC
## 1  ARIMA (0,1,0) 2307.686
## 2  ARIMA (0,1,1) 2309.682
## 3  ARIMA (2,1,2) 2311.504
## 4  ARIMA (1,1,0) 2311.682
## 5  ARIMA (2,1,3) 2312.790
## 6  ARIMA (0,1,2) 2313.262
## 7  ARIMA (3,1,2) 2313.470
## 8  ARIMA (1,1,1) 2313.682
## 9  ARIMA (3,1,3) 2315.139
## 10 ARIMA (4,1,2) 2316.658

After overfitting, the best model is still ARIMA(0,1,0).

Forecasting

1. Forecasting ARIMA (0,1,0)

ramalan1 <- forecast::forecast(training.ts,model=model1,h=53)
ramalan1
##           Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
##  9.869565       19440.66 19349.46 19531.86 19301.18 19580.14
##  9.913043       19440.66 19311.68 19569.64 19243.40 19637.92
##  9.956522       19440.66 19282.69 19598.63 19199.07 19682.25
## 10.000000       19440.66 19258.25 19623.07 19161.69 19719.63
## 10.043478       19440.66 19236.72 19644.60 19128.77 19752.55
## 10.086957       19440.66 19217.26 19664.06 19099.00 19782.32
## 10.130435       19440.66 19199.36 19681.96 19071.62 19809.70
## 10.173913       19440.66 19182.70 19698.62 19046.14 19835.18
## 10.217391       19440.66 19167.05 19714.27 19022.21 19859.11
## 10.260870       19440.66 19152.25 19729.07 18999.58 19881.74
## 10.304348       19440.66 19138.17 19743.15 18978.05 19903.27
## 10.347826       19440.66 19124.72 19756.60 18957.48 19923.84
## 10.391304       19440.66 19111.82 19769.50 18937.75 19943.57
## 10.434783       19440.66 19099.41 19781.91 18918.76 19962.56
## 10.478261       19440.66 19087.43 19793.89 18900.44 19980.88
## 10.521739       19440.66 19075.85 19805.47 18882.73 19998.59
## 10.565217       19440.66 19064.62 19816.70 18865.56 20015.76
## 10.608696       19440.66 19053.72 19827.60 18848.88 20032.44
## 10.652174       19440.66 19043.11 19838.21 18832.67 20048.65
## 10.695652       19440.66 19032.79 19848.53 18816.87 20064.45
## 10.739130       19440.66 19022.71 19858.61 18801.47 20079.85
## 10.782609       19440.66 19012.88 19868.44 18786.43 20094.89
## 10.826087       19440.66 19003.27 19878.05 18771.72 20109.60
## 10.869565       19440.66 18993.86 19887.46 18757.33 20123.99
## 10.913043       19440.66 18984.64 19896.68 18743.24 20138.08
## 10.956522       19440.66 18975.61 19905.71 18729.43 20151.89
## 11.000000       19440.66 18966.75 19914.57 18715.88 20165.44
## 11.043478       19440.66 18958.06 19923.26 18702.58 20178.74
## 11.086957       19440.66 18949.52 19931.80 18689.52 20191.80
## 11.130435       19440.66 18941.12 19940.20 18676.68 20204.64
## 11.173913       19440.66 18932.86 19948.46 18664.05 20217.27
## 11.217391       19440.66 18924.74 19956.58 18651.62 20229.70
## 11.260870       19440.66 18916.74 19964.58 18639.39 20241.93
## 11.304348       19440.66 18908.86 19972.46 18627.34 20253.98
## 11.347826       19440.66 18901.09 19980.23 18615.47 20265.85
## 11.391304       19440.66 18893.44 19987.88 18603.76 20277.56
## 11.434783       19440.66 18885.89 19995.43 18592.22 20289.10
## 11.478261       19440.66 18878.45 20002.87 18580.83 20300.49
## 11.521739       19440.66 18871.10 20010.22 18569.59 20311.73
## 11.565217       19440.66 18863.84 20017.48 18558.49 20322.83
## 11.608696       19440.66 18856.67 20024.65 18547.53 20333.79
## 11.652174       19440.66 18849.60 20031.72 18536.71 20344.61
## 11.695652       19440.66 18842.60 20038.72 18526.01 20355.31
## 11.739130       19440.66 18835.69 20045.63 18515.43 20365.89
## 11.782609       19440.66 18828.85 20052.47 18504.98 20376.34
## 11.826087       19440.66 18822.09 20059.23 18494.64 20386.68
## 11.869565       19440.66 18815.40 20065.92 18484.41 20396.91
## 11.913043       19440.66 18808.79 20072.53 18474.29 20407.03
## 11.956522       19440.66 18802.24 20079.08 18464.28 20417.04
## 12.000000       19440.66 18795.76 20085.56 18454.37 20426.95
## 12.043478       19440.66 18789.34 20091.98 18444.55 20436.77
## 12.086957       19440.66 18782.98 20098.34 18434.83 20446.49
## 12.130435       19440.66 18776.69 20104.63 18425.21 20456.11
data.ramalan1 <- ramalan1$mean
plot(ramalan1)
lines((length(training.ts)+1):(length(training.ts)+53), testing.ts,col="black")

2. Forecasting ARIMA (2,1,3)

ramalan2 <- forecast::forecast(training.ts,model=model3.a,h=53)
ramalan2
##           Point Forecast    Lo 80    Hi 80    Lo 95    Hi 95
##  9.869565       19444.19 19354.19 19534.20 19306.54 19581.85
##  9.913043       19456.60 19330.57 19582.64 19263.85 19649.36
##  9.956522       19472.35 19317.60 19627.09 19235.68 19709.01
## 10.000000       19485.43 19306.85 19664.01 19212.32 19758.55
## 10.043478       19491.03 19292.57 19689.49 19187.51 19794.54
## 10.086957       19487.23 19272.07 19702.39 19158.17 19816.29
## 10.130435       19475.72 19246.14 19705.30 19124.61 19826.83
## 10.173913       19460.99 19218.34 19703.65 19089.88 19832.11
## 10.217391       19448.67 19193.42 19703.92 19058.30 19839.04
## 10.260870       19443.29 19175.34 19711.25 19033.49 19853.10
## 10.304348       19446.70 19165.66 19727.75 19016.88 19876.52
## 10.347826       19457.39 19163.00 19751.78 19007.16 19907.62
## 10.391304       19471.16 19163.59 19778.72 19000.78 19941.53
## 10.434783       19482.76 19162.71 19802.82 18993.29 19972.24
## 10.478261       19487.92 19156.43 19819.42 18980.94 19994.90
## 10.521739       19484.87 19143.06 19826.67 18962.12 20007.61
## 10.565217       19474.95 19123.76 19826.14 18937.85 20012.05
## 10.608696       19462.08 19102.05 19822.11 18911.46 20012.70
## 10.652174       19451.15 19082.40 19819.90 18887.19 20015.10
## 10.695652       19446.20 19068.49 19823.91 18868.55 20023.85
## 10.739130       19448.94 19061.88 19836.00 18856.98 20040.90
## 10.782609       19458.13 19061.39 19854.87 18851.37 20064.89
## 10.826087       19470.17 19063.70 19876.64 18848.52 20091.81
## 10.869565       19480.46 19064.57 19896.35 18844.41 20116.51
## 10.913043       19485.20 19060.48 19909.92 18835.65 20134.75
## 10.956522       19482.76 19049.91 19915.60 18820.77 20144.74
## 11.000000       19474.23 19033.85 19914.61 18800.72 20147.74
## 11.043478       19462.98 19015.41 19910.55 18778.48 20147.48
## 11.086957       19453.29 18998.59 19907.99 18757.88 20148.69
## 11.130435       19448.75 18986.72 19910.78 18742.13 20155.36
## 11.173913       19450.93 18981.25 19920.61 18732.61 20169.24
## 11.217391       19458.83 18981.22 19936.44 18728.39 20189.27
## 11.260870       19469.35 18983.73 19954.97 18726.66 20212.04
## 11.304348       19478.47 18985.02 19971.92 18723.81 20233.13
## 11.347826       19482.82 18981.94 19983.69 18716.79 20248.84
## 11.391304       19480.88 18973.07 19988.69 18704.25 20257.51
## 11.434783       19473.55 18959.24 19987.87 18686.98 20260.13
## 11.478261       19463.72 18943.16 19984.28 18667.59 20259.85
## 11.521739       19455.14 18928.36 19981.91 18649.51 20260.77
## 11.565217       19450.98 18917.84 19984.12 18635.61 20266.35
## 11.608696       19452.70 18912.93 19992.46 18627.20 20278.20
## 11.652174       19459.49 18912.87 20006.10 18623.51 20295.47
## 11.695652       19468.67 18915.12 20022.23 18622.09 20315.26
## 11.739130       19476.75 18916.38 20037.12 18619.74 20333.76
## 11.782609       19480.73 18913.83 20047.62 18613.74 20347.71
## 11.826087       19479.21 18906.17 20052.26 18602.81 20355.61
## 11.869565       19472.92 18894.05 20051.79 18587.62 20358.22
## 11.913043       19464.33 18879.85 20048.82 18570.44 20358.23
## 11.956522       19456.73 18866.65 20046.81 18554.28 20359.18
## 12.000000       19452.94 18857.14 20048.73 18541.74 20364.13
## 12.043478       19454.27 18852.55 20055.99 18534.02 20374.52
## 12.086957       19460.10 18852.27 20067.92 18530.51 20389.69
## 12.130435       19468.12 18854.11 20082.13 18529.08 20407.16
data.ramalan2 <- ramalan2$mean
plot(ramalan2)
lines((length(training.ts)+1):(length(training.ts)+53), testing.ts,col="black")
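
To compare the two forecasts on the held-out data, a minimal sketch computing the test-set RMSE of each point forecast against the 53 testing observations (not part of the original code):

# Test-set RMSE of the two ARIMA forecasts
rmse <- function(actual, pred) sqrt(mean((actual - pred)^2))
test.actual <- as.numeric(testing.ts)
data.frame(
  Model = c("ARIMA (0,1,0)", "ARIMA (2,1,3)"),
  RMSE.test = c(rmse(test.actual, as.numeric(data.ramalan1)),
                rmse(test.actual, as.numeric(data.ramalan2)))
)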

Conclusion