A capital requirement (also known as regulatory capital or capital adequacy) is the amount of capital a bank or other financial institution has to hold as required by its financial regulator. This is usually expressed as a capital adequacy ratio of equity as a percentage of risk-weighted assets. These requirements are put in place to ensure that financial institutions do not take on excess leverage and risk becoming insolvent. The main international effort to establish rules around capital requirements has been the Basel Accords, published by the Basel Committee on Banking Supervision, housed at the Bank for International Settlements.
Under Basel III, new requirements were introduced for the capital adequacy ratio:

- Minimum Common Equity Tier 1 capital (common stock and disclosed reserves/retained earnings) increased to 4.5% of risk-weighted assets, up from 2% in Basel II.
- Minimum Tier 1 capital increased to 6%, from 4% in Basel II.
- The minimum total regulatory capital ratio was left unchanged at 8%, of which 6% must be Tier 1 capital.
- In addition, capital buffers were introduced as additional CET1 capital that banks are obliged to maintain above the regulatory minimum, so as to limit systemic risks in the financial system.
Basel III addresses a number of shortcomings in the pre-crisis regulatory framework and provides a foundation for a resilient banking system that is able to support the real economy through the economic cycle. Capital buffers play an important role in this respect, as they are inter alia meant to mitigate procyclicality by acting as shock absorbers in times of stress. In the European framework, these buffers include:
- the capital conservation buffer (CCoB)
- the countercyclical capital buffer (CCyB)
- the buffers for global and other systemically important institutions (G-SIIs and O-SIIs)
- the systemic risk buffer (SyRB).

The combination of all these buffers constitutes the combined buffer requirement (CBR). Buffers are placed on top of minimum capital requirements to enhance banks’ resilience against shocks. Over the medium term, the buffer framework aims to ensure a sound and stable banking system that is able to continuously provide key services to the economy. Maintaining lending at the onset of a crisis may help to reduce the amount of capital that will be needed to absorb losses further down the road.
If banks operate below the CBR, the financial regulator cannot revoke their banking license, but they face automatic restrictions on distributions, including dividends, bonus payments, and coupon payments on Additional Tier 1 instruments.
Avoiding excessive deleveraging by the banking sector in a downturn is important owing to the relevance of aggregate bank balance sheet size for the future path of the economy. As evidenced by the crisis of 2007-09, a shortfall in credit supply (a “credit crunch”) can have material negative effects on GDP growth. Similarly, the economy may be negatively affected if banks withdraw from other activities that are economically relevant (e.g. market making, ownership of central counterparties, lending to other banks).
The first main policy implication is that banks should build up capital buffers during normal times. The idea is to make banks more resilient to loan losses as a way of reducing ex ante the severity of financial crises and of lowering their frequency. Intuitively, a small reduction in loan supply during normal times - because of costly capital buffers - is traded off against a large reduction during times of financial crisis.
The second main policy implication is that banks should be given ample time to rebuild capital buffers following a financial crisis and that regulation should increase bank profitability in that process. The idea is to raise the prospect of future profitability during the financial crisis with a view to increasing a bank’s access to outside funding and reducing ex post the severity of a financial crisis. Intuitively, a small reduction in loan supply during the recovery - because of temporarily elevated bank profit margins - is traded off against a large reduction during the financial crisis.
In this project we use linear regression and ARIMA models to forecast variables related to the banking sector, as well as the HP (Hodrick-Prescott) filter to estimate the countercyclical buffer guide for the Serbian banking sector.
The dataset contains quarterly information from 2004 Q1 to 2021 Q3 about credit activity and GDP in Serbia:
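The loading code is not shown in the report; a minimal sketch, assuming the data come from an Excel workbook read with readxl (the file name credit_gdp_serbia.xlsx is hypothetical):

```r
library(readxl)

# Hypothetical file name; the actual source file is not named in the report.
credit_data <- read_excel("credit_gdp_serbia.xlsx")
summary(credit_data)   # column-by-column summary, as shown below
```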
## Year Quarter Credit to households (long term)
## Min. :2004 Min. : 3.000 Min. : 19667
## 1st Qu.:2008 1st Qu.: 3.750 1st Qu.: 297262
## Median :2013 Median : 6.000 Median : 566767
## Mean :2013 Mean : 7.419 Mean : 573765
## 3rd Qu.:2017 3rd Qu.: 9.000 3rd Qu.: 811390
## Max. :2022 Max. :12.000 Max. :1302789
## NA's :3
## Credit to companies (long term) Credit to households (short term)
## Min. : 86877 Min. : 13348
## 1st Qu.: 255934 1st Qu.: 35796
## Median : 646975 Median : 52238
## Mean : 601905 Mean : 53714
## 3rd Qu.: 850912 3rd Qu.: 67182
## Max. :1230914 Max. :102987
## NA's :3 NA's :3
## Credit to companies (short term) credit to households credit to companies
## Min. :106133 Min. : 33015 Min. : 193010
## 1st Qu.:219886 1st Qu.: 356442 1st Qu.: 568258
## Median :261210 Median : 649499 Median :1073398
## Mean :300112 Mean : 627479 Mean : 902017
## 3rd Qu.:407971 3rd Qu.: 867351 3rd Qu.:1132198
## Max. :526463 Max. :1338141 Max. :1470902
## NA's :3 NA's :3 NA's :3
## credit to private sector GDP credit to gdp ...12
## Min. : 226025 Min. :1332021 Min. :0.1697 Length:74
## 1st Qu.: 924700 1st Qu.:2851154 1st Qu.:0.3302 Class :character
## Median :1760188 Median :3945691 Median :0.4263 Mode :character
## Mean :1529496 Mean :3818339 Mean :0.3907
## 3rd Qu.:1955298 3rd Qu.:4745073 3rd Qu.:0.4496
## Max. :2809044 Max. :7585350 Max. :0.4952
## NA's :3 NA's :4
## ...13 ...14
## Min. :3150956 Length:74
## 1st Qu.:4099936 Class :character
## Median :4212812 Mode :character
## Mean :4220510
## 3rd Qu.:4472609
## Max. :5087985
## NA's :4
For modeling the relationship between credit activity and various macroeconomic variables, we decided to use the following indicators:
## time interest rate real_gdp no unemployed
## Length:45 Min. : 1.000 Min. :-6.2623 Min. :352.0
## Class :character 1st Qu.: 3.000 1st Qu.: 0.6854 1st Qu.:566.3
## Mode :character Median : 4.500 Median : 2.1277 Median :723.1
## Mean : 6.237 Mean : 1.9910 Mean :669.0
## 3rd Qu.: 9.667 3rd Qu.: 3.4962 3rd Qu.:760.1
## Max. :12.333 Max. :13.7000 Max. :790.9
## wages fxrate inflation rate
## Min. :34192 Min. : 99.82 Min. : 0.300
## 1st Qu.:41718 1st Qu.:113.67 1st Qu.: 1.600
## Median :46041 Median :117.58 Median : 2.200
## Mean :47065 Mean :115.67 Mean : 4.004
## 3rd Qu.:49599 3rd Qu.:119.76 3rd Qu.: 4.900
## Max. :65218 Max. :123.88 Max. :14.100
Stationarity test for the original series:
A test we can conduct is the Augmented Dickey-Fuller (ADF) test. It determines whether the series has a unit root (a series with a trend will have a unit root and result in a large p-value). The p-value of the ADF test is very high (0.99), so we cannot reject the null hypothesis of a unit root.
##
## Augmented Dickey-Fuller Test
##
## data: credit_modeling
## Dickey-Fuller = -0.1633, Lag order = 3, p-value = 0.99
## alternative hypothesis: stationary
Now, we can test whether the time series is level or trend stationary using the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test. Here we test the null hypothesis of trend stationarity (a low p-value indicates a series that is not trend stationary, i.e. has a unit root). The p-value of the KPSS test is very low (about 1.7%), so we reject the null hypothesis, which suggests that this time series has a unit root.
##
## KPSS Test for Trend Stationarity
##
## data: credit_modeling
## KPSS Trend = 0.19769, Truncation lag parameter = 3, p-value = 0.01687
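For reference, a minimal sketch of how these checks (and the differencing used in the next step) can be run with the tseries package, assuming the credit series is stored in the object credit_modeling shown in the output above:

```r
library(tseries)

# ADF test: null hypothesis is a unit root (non-stationary)
adf.test(credit_modeling)

# KPSS test: null hypothesis is (trend) stationarity
kpss.test(credit_modeling, null = "Trend")

# First difference and re-test
credit_modeling_diff <- diff(credit_modeling)
adf.test(credit_modeling_diff)
kpss.test(credit_modeling_diff, null = "Level")
```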
Stationarity test for the differenced series:
Let’s now check the differenced variable:
Now it looks a bit more like a stationary process. We can perform the ADF test once again, testing against the alternative of stationarity. The p-value is 0.247, so we still cannot reject the null hypothesis (a unit root is still present).
##
## Augmented Dickey-Fuller Test
##
## data: credit_modeling_diff
## Dickey-Fuller = -2.8256, Lag order = 3, p-value = 0.247
## alternative hypothesis: stationary
If we now check the differenced series with the KPSS test, the p-value is 0.1, so we cannot reject the null hypothesis of level stationarity, meaning there is no evidence of a unit root.
##
## KPSS Test for Level Stationarity
##
## data: credit_modeling_diff
## KPSS Level = 0.30443, Truncation lag parameter = 3, p-value = 0.1
Stationarity test for the second difference of the series:
If we now take the second difference of the variable:
Let’s again perform the ADF test, this time for the second difference. The p-value is 0.01, which is below the 5% significance level, so we reject the null hypothesis and conclude that the series is stationary.
##
## Augmented Dickey-Fuller Test
##
## data: credit_modeling_diff
## Dickey-Fuller = -5.5227, Lag order = 3, p-value = 0.01
## alternative hypothesis: stationary
The second difference of credit activity is a stationary series!
For real GDP, the ADF test gives a p-value of 0.01, which is below the 5% significance level, so we reject the null hypothesis and conclude that the series is stationary.
##
## Augmented Dickey-Fuller Test
##
## data: macro_real_gdp
## Dickey-Fuller = -5.8549, Lag order = 3, p-value = 0.01
## alternative hypothesis: stationary
For the interest rate, the ADF test gives a p-value of 0.89, so we cannot reject the null hypothesis:
##
## Augmented Dickey-Fuller Test
##
## data: macro_interes_rate
## Dickey-Fuller = -1.2123, Lag order = 3, p-value = 0.8864
## alternative hypothesis: stationary
Let’s now try differencing the interest rate:
The ADF test gives a p-value of 0.03, so we can reject the null hypothesis and conclude that the series is stationary.
##
## Augmented Dickey-Fuller Test
##
## data: macro_interes_rate_diff
## Dickey-Fuller = -3.7566, Lag order = 3, p-value = 0.0318
## alternative hypothesis: stationary
The first difference of the interest rate is a stationary series!
For the inflation rate, the ADF test gives a p-value of 0.57, so we cannot reject the null hypothesis:
##
## Augmented Dickey-Fuller Test
##
## data: macro_inflation_rate
## Dickey-Fuller = -2.0028, Lag order = 3, p-value = 0.5728
## alternative hypothesis: stationary
Let’s now try differencing the inflation rate:
The ADF test gives a p-value of 0.01, so we can reject the null hypothesis and conclude that the series is stationary.
##
## Augmented Dickey-Fuller Test
##
## data: macro_inflation_rate_diff
## Dickey-Fuller = -7.9356, Lag order = 3, p-value = 0.01
## alternative hypothesis: stationary
The first difference of the inflation rate is a stationary series!
The number of unemployed is quite obviously not stationary, but let’s perform the ADF test anyway. The p-value is 0.98, so we cannot reject the null hypothesis:
##
## Augmented Dickey-Fuller Test
##
## data: macro_unemployment
## Dickey-Fuller = -0.45251, Lag order = 3, p-value = 0.9797
## alternative hypothesis: stationary
Let’s try the change in unemployment instead of the number of unemployed people (by differencing the variable):
The ADF test for the unemployment change shows that the series is still not stationary (p-value 0.47), so we will have to take a second difference.
##
## Augmented Dickey-Fuller Test
##
## data: macro_unemployment_diff
## Dickey-Fuller = -2.2583, Lag order = 3, p-value = 0.4715
## alternative hypothesis: stationary
And finally, after the second difference, we get a stationary series (p-value 0.04).
##
## Augmented Dickey-Fuller Test
##
## data: macro_unemployment_diff
## Dickey-Fuller = -3.6398, Lag order = 3, p-value = 0.04147
## alternative hypothesis: stationary
The second difference of the unemployment number is a stationary series!
The average wage series is also quite obviously not stationary, but let’s perform the ADF test anyway. The p-value is 0.99, so we cannot reject the null hypothesis:
##
## Augmented Dickey-Fuller Test
##
## data: macro_wage
## Dickey-Fuller = 1.0342, Lag order = 3, p-value = 0.99
## alternative hypothesis: stationary
Let’s try the change in the average wage instead of the wage level (by differencing the variable):
The p-value is still high (0.63), so we cannot reject the null hypothesis and we will have to difference once more.
##
## Augmented Dickey-Fuller Test
##
## data: macro_wage_diff
## Dickey-Fuller = -1.8545, Lag order = 3, p-value = 0.6314
## alternative hypothesis: stationary
Second differencing of the average net wage:
For the second difference of the wage we get a p-value of 0.01, so we can reject the null hypothesis.
##
## Augmented Dickey-Fuller Test
##
## data: macro_wage_diff
## Dickey-Fuller = -6.7878, Lag order = 3, p-value = 0.01
## alternative hypothesis: stationary
The second difference of the average wage is a stationary series!
Here is a summary of the stationarity results and the transformation needed for each variable before modeling:
| Series | Transformation |
|---|---|
| Target: Credit | Second Diff |
| Explanatory: Real GDP | Original |
| Explanatory: Interest rate | First Diff |
| Explanatory: Inflation rate | First Diff |
| Explanatory: Unemployment number | Second Diff |
| Explanatory: Average Net Wage | Second Diff |
Here we regress the transformed target variable on the transformed explanatory variables.
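A minimal sketch of the model fit, reconstructed from the Call shown in the output below; the model name model_transformed and the use of num_of_observations to align the series lengths are assumptions:

```r
# Assumed alignment: truncate the longer explanatory series to the target's length
num_of_observations <- length(credit_modeling_diff)

model_transformed <- lm(credit_modeling_diff ~
                          macro_real_gdp[1:num_of_observations] +
                          macro_interes_rate_diff[1:num_of_observations] +
                          macro_inflation_rate_diff[1:num_of_observations] +
                          macro_unemployment_diff + macro_wage_diff)

summary(model_transformed)                      # coefficients, R-squared, F-statistic
AIC(model_transformed); BIC(model_transformed)  # information criteria reported below
```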
From the results below, we can see that none of the variables is statistically significant (all p-values are greater than 5%). We cannot reject the null hypothesis for any coefficient, i.e. we cannot state that any coefficient is different from 0. The F-statistic is low, with a p-value above 5%, which is further evidence that no coefficient differs significantly from 0. As a consequence, the adjusted R-squared is also low. We also report the AIC and BIC; these values will be used to compare this model with the others.
##
## Call:
## lm(formula = credit_modeling_diff ~ macro_real_gdp[1:num_of_observations] +
## macro_interes_rate_diff[1:num_of_observations] + macro_inflation_rate_diff[1:num_of_observations] +
## macro_unemployment_diff + macro_wage_diff)
##
## Residuals:
## Min 1Q Median 3Q Max
## -82331 -19651 2615 25667 91608
##
## Coefficients:
## Estimate Std. Error t value
## (Intercept) 3647.627 7483.059 0.487
## macro_real_gdp[1:num_of_observations] -1722.624 2436.176 -0.707
## macro_interes_rate_diff[1:num_of_observations] 11763.696 12050.421 0.976
## macro_inflation_rate_diff[1:num_of_observations] -5438.868 4188.704 -1.298
## macro_unemployment_diff -236.237 235.602 -1.003
## macro_wage_diff -2.337 1.862 -1.255
## Pr(>|t|)
## (Intercept) 0.629
## macro_real_gdp[1:num_of_observations] 0.484
## macro_interes_rate_diff[1:num_of_observations] 0.335
## macro_inflation_rate_diff[1:num_of_observations] 0.202
## macro_unemployment_diff 0.323
## macro_wage_diff 0.217
##
## Residual standard error: 39430 on 37 degrees of freedom
## Multiple R-squared: 0.1082, Adjusted R-squared: -0.01229
## F-statistic: 0.898 on 5 and 37 DF, p-value: 0.4926
## AIC: 1039.647
## BIC: 1051.975
Let’s now perform the same regression, not with the transformed variables but with the original ones. In the model summary below you can see that we now have two coefficients that are statistically significant (p-value < 5%).
The F-statistic is high (p-value < 5%) and the adjusted R-squared is quite high, at 91.39%.
##
## Call:
## lm(formula = credit_modeling ~ macro_real_gdp + macro_interes_rate +
## macro_inflation_rate + macro_unemployment + macro_wage)
##
## Residuals:
## Min 1Q Median 3Q Max
## -373648 -112657 16782 91423 327806
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -6.330e+05 8.413e+05 -0.752 0.456318
## macro_real_gdp 1.845e+04 9.257e+03 1.993 0.053334 .
## macro_interes_rate -1.292e+05 2.078e+04 -6.216 2.6e-07 ***
## macro_inflation_rate 7.992e+03 1.332e+04 0.600 0.551952
## macro_unemployment 1.507e+03 7.530e+02 2.002 0.052327 .
## macro_wage 3.202e+01 8.803e+00 3.638 0.000795 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 166200 on 39 degrees of freedom
## Multiple R-squared: 0.9237, Adjusted R-squared: 0.9139
## F-statistic: 94.42 on 5 and 39 DF, p-value: < 2.2e-16
## AIC: 1217.161
## BIC: 1229.808
Now we regress the target variable, credit activity, on the interest rate and the average net wage as explanatory variables.
In the model summary below, the coefficients of both regressors and the intercept are statistically significant (p-value < 5%). We can reject the null hypothesis and state that these coefficients are not 0. The F-statistic is high with a near-zero p-value, which is further evidence that some coefficients differ from 0. The adjusted R-squared is quite high (90.69%), which indicates a good fit.
This model has a slightly lower BIC score compared to the previous one.
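A minimal sketch of this comparison, assuming the two-variable model is stored as model_two (the name used in the residual checks later) and the full model under the hypothetical name model_full:

```r
model_full <- lm(credit_modeling ~ macro_real_gdp + macro_interes_rate +
                   macro_inflation_rate + macro_unemployment + macro_wage)
model_two  <- lm(credit_modeling ~ macro_interes_rate + macro_wage)

# Lower values indicate the preferred model; BIC penalises the
# extra regressors of the full model more heavily.
AIC(model_full); AIC(model_two)
BIC(model_full); BIC(model_two)
```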
These coefficients make sense:

- Credit activity is negatively correlated with the interest rate: when interest rates are lower, it is cheaper to borrow, so credit activity rises.
- Credit activity is positively correlated with wages: the higher the wages, the better the credit standing of households, so they are in a position to get loans from financial institutions.

##
## Call:
## lm(formula = credit_modeling ~ +macro_interes_rate + macro_wage)
##
## Residuals:
## Min 1Q Median 3Q Max
## -468382 -147277 43611 126002 273082
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 9.885e+05 3.536e+05 2.795 0.00778 **
## macro_interes_rate -1.123e+05 1.269e+04 -8.848 3.8e-11 ***
## macro_wage 1.822e+01 6.035e+00 3.019 0.00430 **
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 172900 on 42 degrees of freedom
## Multiple R-squared: 0.9111, Adjusted R-squared: 0.9069
## F-statistic: 215.2 on 2 and 42 DF, p-value: < 2.2e-16
## AIC: 1218.033
## BIC: 1225.26
Now let’s find the univariate time series model that fits the data best.
Just by looking at the ACF and PACF plots of credit activity, we can see that the ACF decays slowly over many lags while the PACF has only one significant lag, at lag 1. Based on the theoretical properties of ARIMA models, we would therefore expect the best-fitting model to be AR(1).
Let’s now use the auto.arima function, which searches over various p, d, q parameters and returns the best-performing ARIMA model according to its information criteria (AICc by default).
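A minimal sketch of this step with the forecast package, assuming the credit series is stored in credit_arima (the series name shown in the output below):

```r
library(forecast)

acf(credit_arima)    # slowly decaying autocorrelations
pacf(credit_arima)   # single significant spike at lag 1

# Search over (p, d, q); auto.arima selects the model with the lowest AICc by default
arima_model <- auto.arima(credit_arima)
summary(arima_model)
```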
As we can see below, the best-performing model is ARIMA(1,1,0), i.e. an AR(1) model of the differenced credit activity variable.
## Series: credit_arima
## ARIMA(1,1,0) with drift
##
## Coefficients:
## ar1 drift
## 0.4243 36450.243
## s.e. 0.1083 6577.264
##
## sigma^2 estimated as 1.04e+09: log likelihood=-813.31
## AIC=1632.61 AICc=1632.98 BIC=1639.32
##
## Training set error measures:
## ME RMSE MAE MPE MAPE MASE
## Training set 35.75589 31554.97 25565.28 0.02907102 1.938426 0.1649816
## ACF1
## Training set -0.008225807
In general, we must check the residuals: if the model is adequate, the residuals should behave like white noise.
We can see straight from the ACF plot of the regression residuals that there is serial correlation, with significant lags at 1, 2, 3 and 11.
Performing Ljung-Box tests for residual independence leads to the same conclusion: the p-values of all three tests, using different numbers of lags, are far below 5%. That means we have to reject the null hypothesis, which states that there is no serial correlation among the residuals (i.e. that the residuals are white noise).
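A minimal sketch of these tests, assuming the regression model object is named model_two as in the output below:

```r
# Ljung-Box tests on the regression residuals at three different lag lengths
Box.test(model_two$residuals, lag = 4,  type = "Ljung-Box")
Box.test(model_two$residuals, lag = 14, type = "Ljung-Box")
Box.test(model_two$residuals, lag = 24, type = "Ljung-Box")
```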
This means that the model didn’t capture all the structure in the data; to get a better fit we would need a different model. This is somewhat expected, as credit activity is quite complex given everything that is happening across the whole economy and many markets. It would be surprising if it could be modeled with only two explanatory variables.
##
## Box-Ljung test
##
## data: model_two$residuals
## X-squared = 48.207, df = 4, p-value = 8.547e-10
##
## Box-Ljung test
##
## data: model_two$residuals
## X-squared = 62.696, df = 14, p-value = 3.929e-08
##
## Box-Ljung test
##
## data: model_two$residuals
## X-squared = 105.09, df = 24, p-value = 4.017e-12
Similarly to the checks for the regression model, let’s first plot the ACF of the ARIMA model’s residuals.
It mostly looks all right, as if there isn’t any significant serial correlation; there might be some correlation at lag 7, so it is better to perform a statistical test to be sure.
For both Ljung-Box tests, using different numbers of lags, the p-value is higher than 5%, our significance level for rejection. Because of that we cannot reject the null hypothesis, meaning that there is no serial correlation among the residuals:
##
## Box-Ljung test
##
## data: arima_model$residuals[1:(length(credit) - 4)]
## X-squared = 7.5907, df = 4, p-value = 0.1078
##
## Box-Ljung test
##
## data: arima_model$residuals[1:(length(credit) - 4)]
## X-squared = 18.311, df = 14, p-value = 0.193
We can also run more detailed residual checking:
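The output below matches the format produced by checkresiduals() from the forecast package, so presumably a call of this form was used:

```r
library(forecast)

# Residual time plot, ACF and a Ljung-Box test for the fitted ARIMA model
checkresiduals(arima_model)
```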
##
## Ljung-Box test
##
## data: Residuals from ARIMA(1,1,0) with drift
## Q* = 7.1765, df = 6, p-value = 0.3048
##
## Model df: 2. Total lags used: 8
We had two candidate models: the linear regression of credit activity on the interest rate and the average net wage, and the ARIMA(1,1,0) model.
We have to reject the linear regression model because of the presence of serial correlation among its residuals.
So the model we are going forward with for forecasting credit activity in Serbia for the next four quarters is ARIMA(1,1,0).
Since we won’t be using linear regression, or any other model with independent variables, to forecast credit activity, we won’t need forecasts of the macroeconomic indicators.
However, since we did not know at the time of data collection that we would end up using an ARIMA(1,1,0) model, we expected to use some of the macroeconomic indicators, so we list the source where we found forecasts for the indicators chosen at the beginning:
We’re using ARIMA(1,1,0) to forecast credit activity for the next four quarters, 2021 Q3 - 2022 Q2 (the forecast table below extends eight quarters ahead, but we will use the first four).
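A minimal sketch of the forecasting step (h = 8 is assumed, since the table below shows eight quarters ahead):

```r
library(forecast)

credit_forecast <- forecast(arima_model, h = 8)  # point forecasts with 80% and 95% intervals
credit_forecast
plot(credit_forecast)                            # forecast plot with prediction intervals
```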
Here are the forecasted values per quarter:
## Point Forecast Lo 80 Hi 80 Lo 95 Hi 95
## 2021 Q3 2771229 2729895 2812564 2708013 2834445
## 2021 Q4 2813189 2741254 2885123 2703174 2923203
## 2022 Q1 2851977 2754139 2949814 2702347 3001606
## 2022 Q2 2889419 2769425 3009412 2705904 3072933
## 2022 Q3 2926290 2786960 3065619 2713204 3139375
## 2022 Q4 2962918 2806369 3119468 2723497 3202340
## 2023 Q1 2999444 2827290 3171599 2736157 3262732
## 2023 Q2 3035927 2849429 3222425 2750703 3321150
Let’s plot the forecast. We can see that forecasted values follow an upward trend.
Given the GDP forecast for the next four quarters from the initial dataset:
## [1] 6162946 6645617 7112590 7585350
and our forecast for the credit activity for the next four quarters:
## [1] 2771229 2813189 2851977 2889419
we’re now calculating credit-to-GDP ratio for the next four quarters:
## [1] 0.4496598 0.4233149 0.4009758 0.3809209
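A minimal sketch of this calculation, using the values printed above (the object names are illustrative):

```r
gdp_forecast       <- c(6162946, 6645617, 7112590, 7585350)  # GDP forecast, next four quarters
credit_forecast_4q <- c(2771229, 2813189, 2851977, 2889419)  # forecasted credit activity

ratio_forecast <- credit_forecast_4q / gdp_forecast
ratio_forecast  # 0.4497 0.4233 0.4010 0.3809
```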
In this section we’ll calculate the credit-to-GDP gap, denoted GAPt. It is calculated as GAPt = RATIOt − TRENDt, where RATIOt is the credit-to-GDP ratio in quarter t and TRENDt is its long-term trend obtained from the HP filter.
The Hodrick-Prescott filter is often used in macroeconomics to extract the trend component of a time series by removing the short-term fluctuations associated with the business cycle; removing these fluctuations reveals the long-term trend.
The lambda parameter of the HP filter controls how smooth the estimated trend is. As per the given methodology, we set lambda to 400,000.
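A minimal sketch of the filtering step. The call shown in the output below matches the hpfilter() function from the mFilter package, so presumably something of this form was used, with RATIO holding the quarterly credit-to-GDP ratio:

```r
library(mFilter)

hp_result <- hpfilter(RATIO, freq = 400000, type = "lambda", drift = FALSE)

TREND <- hp_result$trend    # long-term trend of the credit-to-GDP ratio
GAP   <- RATIO - TREND      # credit-to-GDP gap (the "Cycle" column below)
```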
Credit-to-GDP ratio:
## Qtr1 Qtr2 Qtr3 Qtr4
## 2004 0.1696858 0.1866997 0.1961148 0.2096172
## 2005 0.2218625 0.2292687 0.2427790 0.2632830
## 2006 0.2666742 0.2762941 0.2716479 0.2589317
## 2007 0.2736487 0.2905687 0.3024915 0.3113063
## 2008 0.3257943 0.3280387 0.3365490 0.3645887
## 2009 0.3892744 0.3856364 0.3890532 0.4027878
## 2010 0.4238379 0.4583767 0.4668028 0.4777668
## 2011 0.4623611 0.4586753 0.4508921 0.4592095
## 2012 0.4844655 0.4870854 0.4952049 0.4781728
## 2013 0.4593480 0.4506600 0.4384503 0.4223573
## 2014 0.4075247 0.4134340 0.4265335 0.4387587
## 2015 0.4358310 0.4310522 0.4284706 0.4364406
## 2016 0.4262426 0.4325071 0.4348587 0.4271466
## 2017 0.4263651 0.4221910 0.4200627 0.4184495
## 2018 0.4156640 0.4188880 0.4214038 0.4322039
## 2019 0.4301770 0.4319562 0.4370708 0.4413796
## 2020 0.4464955 0.4693191 0.4813614 0.4809968
## 2021 0.4808137 0.4715211 0.4496598 0.4233149
## 2022 0.4009758 0.3809209
Credit-to-GDP gap plot:
Credit-to-GDP gap for next four quarters:
##
## Title:
## Hodrick-Prescott Filter
##
## Call:
## hpfilter(x = RATIO, freq = 4e+05, type = "lambda", drift = FALSE)
##
## Method:
## hpfilter
##
## Filter Type:
## lambda
##
## Series:
## RATIO
##
## RATIO Trend Cycle
## 2004 Q1 0.1697 0.2729 -1.032e-01
## 2004 Q2 0.1867 0.2766 -8.992e-02
## 2004 Q3 0.1961 0.2804 -8.426e-02
## 2004 Q4 0.2096 0.2841 -7.452e-02
## 2005 Q1 0.2219 0.2879 -6.603e-02
## 2005 Q2 0.2293 0.2916 -6.238e-02
## 2005 Q3 0.2428 0.2954 -5.262e-02
## 2005 Q4 0.2633 0.2991 -3.586e-02
## 2006 Q1 0.2667 0.3029 -3.621e-02
## 2006 Q2 0.2763 0.3066 -3.032e-02
## 2006 Q3 0.2716 0.3103 -3.869e-02
## 2006 Q4 0.2589 0.3141 -5.512e-02
## 2007 Q1 0.2736 0.3178 -4.411e-02
## 2007 Q2 0.2906 0.3214 -3.088e-02
## 2007 Q3 0.3025 0.3251 -2.263e-02
## 2007 Q4 0.3113 0.3288 -1.747e-02
## 2008 Q1 0.3258 0.3324 -6.613e-03
## 2008 Q2 0.3280 0.3360 -7.981e-03
## 2008 Q3 0.3365 0.3396 -3.059e-03
## 2008 Q4 0.3646 0.3432 2.142e-02
## 2009 Q1 0.3893 0.3467 4.257e-02
## 2009 Q2 0.3856 0.3502 3.543e-02
## 2009 Q3 0.3891 0.3537 3.538e-02
## 2009 Q4 0.4028 0.3571 4.568e-02
## 2010 Q1 0.4238 0.3605 6.333e-02
## 2010 Q2 0.4584 0.3639 9.450e-02
## 2010 Q3 0.4668 0.3672 9.961e-02
## 2010 Q4 0.4778 0.3705 1.073e-01
## 2011 Q1 0.4624 0.3737 8.865e-02
## 2011 Q2 0.4587 0.3769 8.176e-02
## 2011 Q3 0.4509 0.3801 7.083e-02
## 2011 Q4 0.4592 0.3832 7.604e-02
## 2012 Q1 0.4845 0.3862 9.823e-02
## 2012 Q2 0.4871 0.3893 9.782e-02
## 2012 Q3 0.4952 0.3922 1.030e-01
## 2012 Q4 0.4782 0.3952 8.300e-02
## 2013 Q1 0.4593 0.3981 6.128e-02
## 2013 Q2 0.4507 0.4009 4.974e-02
## 2013 Q3 0.4385 0.4037 3.472e-02
## 2013 Q4 0.4224 0.4065 1.585e-02
## 2014 Q1 0.4075 0.4092 -1.714e-03
## 2014 Q2 0.4134 0.4119 1.497e-03
## 2014 Q3 0.4265 0.4146 1.193e-02
## 2014 Q4 0.4388 0.4172 2.153e-02
## 2015 Q1 0.4358 0.4198 1.600e-02
## 2015 Q2 0.4311 0.4224 8.659e-03
## 2015 Q3 0.4285 0.4249 3.540e-03
## 2015 Q4 0.4364 0.4274 9.001e-03
## 2016 Q1 0.4262 0.4299 -3.680e-03
## 2016 Q2 0.4325 0.4324 1.260e-04
## 2016 Q3 0.4349 0.4348 4.254e-05
## 2016 Q4 0.4271 0.4372 -1.008e-02
## 2017 Q1 0.4264 0.4396 -1.326e-02
## 2017 Q2 0.4222 0.4420 -1.981e-02
## 2017 Q3 0.4201 0.4444 -2.429e-02
## 2017 Q4 0.4184 0.4467 -2.825e-02
## 2018 Q1 0.4157 0.4490 -3.336e-02
## 2018 Q2 0.4189 0.4513 -3.245e-02
## 2018 Q3 0.4214 0.4536 -3.224e-02
## 2018 Q4 0.4322 0.4559 -2.373e-02
## 2019 Q1 0.4302 0.4582 -2.803e-02
## 2019 Q2 0.4320 0.4605 -2.853e-02
## 2019 Q3 0.4371 0.4627 -2.568e-02
## 2019 Q4 0.4414 0.4650 -2.363e-02
## 2020 Q1 0.4465 0.4673 -2.076e-02
## 2020 Q2 0.4693 0.4695 -1.856e-04
## 2020 Q3 0.4814 0.4717 9.616e-03
## 2020 Q4 0.4810 0.4740 7.014e-03
## 2021 Q1 0.4808 0.4762 4.597e-03
## 2021 Q2 0.4715 0.4784 -6.926e-03
## 2021 Q3 0.4497 0.4807 -3.102e-02
## 2021 Q4 0.4233 0.4829 -5.959e-02
## 2022 Q1 0.4010 0.4851 -8.415e-02
## 2022 Q2 0.3809 0.4874 -1.064e-01
As we can see, the credit-to-GDP gap for the following four quarters (2021 Q3 - 2022 Q2) is approximately -3.1, -6.0, -8.4 and -10.6 percentage points.
Using the countercyclical buffer guide, we now have to decide the level of the CCyB rate to be applied in Serbia for the following four quarters.
Let’s recap the methodology for calculating the CCyB rate: under the Basel guidance, the buffer guide is 0% when the credit-to-GDP gap is below the lower benchmark of 2 percentage points, increases linearly between the lower benchmark (2 pp) and the upper benchmark (10 pp), and reaches its maximum of 2.5% when the gap exceeds 10 percentage points. A sketch of this mapping is given below.
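An illustrative sketch of this rule, with the gap expressed in percentage points (the function name is hypothetical; the 2 pp / 10 pp benchmarks and the 2.5% cap follow the standard Basel calibration rather than anything stated elsewhere in this report):

```r
# Map the credit-to-GDP gap (in percentage points) to the CCyB buffer guide
ccyb_guide <- function(gap, lower = 2, upper = 10, max_rate = 2.5) {
  if (gap <= lower) return(0)                  # below the lower benchmark: 0%
  if (gap >= upper) return(max_rate)           # above the upper benchmark: 2.5%
  max_rate * (gap - lower) / (upper - lower)   # linear interpolation in between
}

# Forecasted gaps for 2021 Q3 - 2022 Q2 are all negative, so the guide is 0%
sapply(c(-3.1, -6.0, -8.4, -10.6), ccyb_guide)
```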
Our forecasted credit-to-GDP gap is below the long-term trend and well below the 2 percentage point lower benchmark, so we can determine that the CCyB rate should be set at 0% for the next four quarters in Serbia.