Available from https://rpubs.com/staszkiewicz/EX8_GC

Introduction to discrimination methods

https://www.researchgate.net/publication/356458881_V5_20200207_Gorska_Staszkiewicz_artykul35995_en_1pdf

Going concern introduction

Going concern (GC) is the assumption in financial reporting that the entity will operate under normal conditions until the next reporting period. It is a fundamental reporting assumption: if it does not hold, assets and liabilities should be revalued on a current disposal basis, and long-term assets and liabilities should be treated as current ones. This usually translates into significant equity losses. The auditor is therefore explicitly required by the standards to assess the appropriateness of this assumption.

GC is monitored by the entity on an annual basis; if the current-year losses consume the reserve capital and half of the shareholders’ funds, the company must ask the shareholders for a resolution on its further existence. In groups, a comfort letter is usually issued by the parent company to a subsidiary when the latter experiences negative equity. In supervised entities such as banks and financial institutions, the Supervisory Authority monitors the level of capital requirements quarterly (banks) or daily (broker-dealers). If the requirement falls below the threshold, the supervisor initiates corrective measures (e.g. a request to change the risk structure of assets to enhance the capital base), or requires bank resolution, which occurs when the authorities determine that a failing bank cannot go through normal insolvency proceedings without harming the public interest and causing financial instability. A critical audit skill is therefore to assess the likelihood that the entity will collapse before the next reporting date. Basically, the audit procedures consist of understanding the business model, reviewing business plans, forecasts, backlogs, capital commitments and market position, and discussing with management. Besides the standard procedures, there are more formal techniques generally known under the umbrella of “failure/insolvency prediction”. A basic review of the techniques is here.

(https://www.researchgate.net/publication/344437838_Ograniczenie_modeli_postaltmanowskich_Nurt_badan_inspirowany_dorobkiem_prof_dr_hab_Marka_Gruszczynskiego)

As usual, we will use primary data from the article “Audit fee and banks’ communication sentiment”, Economic Research-Ekonomska Istraživanja, https://doi.org/10.1080/1331677X.2021.1985567, Staszkiewicz and Karkowska (2021). The data are, as usual, available from the Essential (“Public Materials”) in the file named Bank.csv. Please download it to your computer and load it into R. Please note that after the class the data may be used only for non-commercial purposes and with an indication of the source.

# Using base R functions we read a classic csv file, but in such a way
#   that we pick the file location on our own computer from a dialog window;
# this is why we nest the command "file.choose()"

bank <-  read.csv(file.choose())
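
If you prefer a non-interactive, reproducible load, you can read the file directly. This is a minimal sketch assuming the downloaded Bank.csv sits in your current working directory:

bank <- read.csv("Bank.csv")   # assumption: Bank.csv was saved in the working directory
head(names(bank))              # quick sanity check of the column names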

Review of the going concern opinions in our dataset

Let us first check if we have the going concern opinion in our dataset:

GCT<-table(bank$Going_Concern)
GCT
## 
##   No  Yes 
## 5320   36

We have very few going concern opinions, just about 0.67% of the sample.
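
As a quick check, the shares can be computed directly from the table:

round(100 * prop.table(GCT), 2)   # percentage of No / Yes opinions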

Thus any calculation will be subject to a class-imbalance and matching problem (a potential solution is propensity score matching, sketched below). Incidentally, it is reasonable to expect that a supervised market will contain few going concern entities.
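
For completeness, a minimal sketch of propensity score matching with the MatchIt package follows; the covariate choice, the 0/1 flag GC01 and the simple NA handling are illustrative assumptions, not part of the original analysis:

library(MatchIt)
bank$GC01 <- as.integer(bank$Going_Concern == "Yes")                 # illustrative 0/1 treatment flag
m_dat <- na.omit(bank[, c("GC01", "Total_Assets", "Total_Equity")])  # drop rows with missing covariates
m_out <- matchit(GC01 ~ Total_Assets + Total_Equity,
                 data = m_dat, method = "nearest")                   # 1:1 nearest-neighbour matching
summary(m_out)
matched <- match.data(m_out)                                         # matched sample for further analysis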

Let us change the perspective to capital requirements. Banks need to meet capital requirements to operate. The basic rule is that equity to risk-weighted assets should not fall below a threshold, say 4%. Unfortunately, both the supervised equity (Tier I, II, III) and the risk-weighted assets are reported in COREP and FINREP and are not easily extractable from the financial statements, thus we use a few proxies.

First, let us assume that risk-weighted assets are “Total_Assets”, supervised capital is “Total_Equity”, and stressed companies are those whose total equity to total assets ratio is below 10%. Let us create an indicator and look at its boxplot.

bank$Cap_Req <- 0 # create a new variable
bank$Cap_Req[bank$Total_Equity/bank$Total_Assets<.1] <- 1
boxplot.default(bank$Total_Equity/bank$Total_Assets)

table(bank$Cap_Req)
## 
##    0    1 
## 2431 2925

Now roughly half of the population is stressed.

Thus we change the perspective from insolvency to stressed assets. It is reasonable to assume that the value and revenue of the stressed companies are lower than those of well-capitalized ones. Let us see:

plot(bank$Revenue_USDx_N, bank$Book_Value_USDx_N, col=ifelse(bank$Cap_Req ==1,"red", "green"))

The red dots are our stressed banks, while the green ones are well capitalized. We might scale our variables by the companies’ assets to obtain comparable values. Let us check again:

plot(bank$Revenue_USDx_N/bank$Total_Assets, bank$Book_Value_USDx_N/bank$Total_Assets, col=ifelse(bank$Cap_Req ==1,"red", "green"))

Now a linear discrimination is almost feasible.

We need to find a line that discriminates the red from the green. Without any calculations we might guess that the line y = 0.1 (our threshold value) will do the job. Let us see:

plot(bank$Revenue_USDx_N/bank$Total_Assets, bank$Book_Value_USDx_N/bank$Total_Assets, col=ifelse(bank$Cap_Req ==1,"red", "green"))
abline(h = 0.1, col ="blue")  # here we add a horizontal line to the existing plot at the height of 0.1

We have reconciled our approach, as our cut-off value for identifying distressed companies was 10% of capital requirements. Unfortunately, revenue and market values are not the only drivers of asset risk and equity, thus we have to add further dimensions (e.g. liquidity, type of audit opinion, audit fees, non-audit fees, etc.).

Discrimination function

Before we continue, let us split our population into two sets: a) a training set and b) a testing set.

How to do it: https://www.r-bloggers.com/2021/12/how-to-split-data-into-train-and-test-in-r/

To begin, we create a dummy indicator showing whether a row belongs to the training or the testing data set.

A common rule of thumb is to use 70% of the data for training and 30% for testing.

split1<- sample(c(rep(0, 0.7 * nrow(bank)), rep(1, 0.3 * nrow(bank))))
head(split1)
## [1] 0 0 0 0 0 0
spt<-table(split1)
spt
## split1
##    0    1 
## 3749 1606

Let us check whether the test sample is actually 30%: 1606 / (3749 + 1606) ≈ 30%.
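
Note that 0.7 * nrow(bank) is not a whole number here, so rep() truncates and split1 ends up one element shorter than the data set (3749 + 1606 = 5355 versus 5356 rows). A sketch of an index-based split that avoids this, using the hypothetical names train_alt and test_alt:

set.seed(2021)                                          # any seed, for reproducibility
n <- nrow(bank)
train_idx <- sample(seq_len(n), size = round(0.7 * n))  # roughly 70% of the row indices
train_alt <- bank[train_idx, ]                          # 70% training subset
test_alt  <- bank[-train_idx, ]                         # remaining 30% testing subset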

Now we define two subsets of the bank dataset (train and test)

train <- bank[split1==1,] # note: split1 == 1 marks the smaller (30%) subset, so this "train" set holds about 30% of the rows
test  <- bank[split1==0,] # and the "test" set holds the remaining ~70%

Let us fit a basic logistic regression model (for tips see: https://www.r-bloggers.com/2015/09/how-to-perform-a-logistic-regression-in-r/)

model<- glm(Cap_Req~Revenue_USDx_N/Total_Assets + Book_Value_USDx_N/Total_Assets + Audit_Fees_USDy/Total_Assets, family =binomial(link='logit'), data=train )
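
One caveat: inside an R formula the / operator denotes nesting (a main effect plus its interaction with Total_Assets), not arithmetic division, which is why the deviance table below lists terms such as Revenue_USDx_N:Total_Assets. If the intention is to regress on the ratios themselves, a sketch of that specification (wrapping the divisions in I(); its results would differ from the output shown here) is:

model_ratio <- glm(Cap_Req ~ I(Revenue_USDx_N / Total_Assets) +
                     I(Book_Value_USDx_N / Total_Assets) +
                     I(Audit_Fees_USDy / Total_Assets),
                   family = binomial(link = "logit"), data = train)
summary(model_ratio)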

Let us check the relevance of the model:

anova(model, test="Chisq")
## Analysis of Deviance Table
## 
## Model: binomial, link: logit
## 
## Response: Cap_Req
## 
## Terms added sequentially (first to last)
## 
## 
##                                Df Deviance Resid. Df Resid. Dev  Pr(>Chi)    
## NULL                                            1387     1901.3              
## Revenue_USDx_N                  1   0.0111      1386     1901.3 0.9160505    
## Book_Value_USDx_N               1  11.0999      1385     1890.2 0.0008633 ***
## Audit_Fees_USDy                 1  15.6963      1384     1874.5 7.437e-05 ***
## Revenue_USDx_N:Total_Assets     1   9.5663      1383     1864.9 0.0019818 ** 
## Total_Assets:Book_Value_USDx_N  1  24.2604      1382     1840.7 8.415e-07 ***
## Total_Assets:Audit_Fees_USDy    1  15.0812      1381     1825.6 0.0001030 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

The difference between the null deviance and the residual deviance shows how our model is doing against the null model (a model with only the intercept). The wider this gap, the better.
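
We can read this gap directly off the fitted object:

model$null.deviance - model$deviance   # drop in deviance relative to the intercept-only model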

While no exact equivalent to the R2 of linear regression exists, the McFadden’s R2 index can be used to assess the model fit.

library(pscl)
## Classes and Methods for R developed in the
## Political Science Computational Laboratory
## Department of Political Science
## Stanford University
## Simon Jackman
## hurdle and zeroinfl functions by Achim Zeileis
model<- glm(Cap_Req~Revenue_USDx_N/Total_Assets + Book_Value_USDx_N/Total_Assets + Audit_Fees_USDy/Total_Assets, family =binomial(link='logit'), data=train )
MFa<-pR2(model)
## fitting null model for pseudo-r2
MFa
##           llh       llhNull            G2      McFadden          r2ML 
## -912.78562900 -950.64324973   75.71524146    0.03982316    0.05308873 
##          r2CU 
##    0.07117950

Our McFadden’s R2 is very weak, about 4%, thus the model itself is rather for educational purposes.

Prediction power of the model

In the steps above, we briefly evaluated the fitting of the model, now we would like to see how the model is doing when predicting on a new set of data. By setting the parameter type=‘response’, R will output probabilities in the form of P(y=1|X). Our decision boundary will be 0.5. If P(y=1|X) > 0.5 then y = 1 otherwise y=0. Note that for some applications different thresholds could be a better option.

fitted.results <- predict(model,newdata=test,type='response')
fitted.results <- ifelse(fitted.results > 0.5,1,0)

misClasificError <- mean(fitted.results != test$Cap_Req, na.rm = TRUE) # here we need to set aside those records which result in NA
print(paste('Accuracy',1-misClasificError))
## [1] "Accuracy 0.608037094281298"
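
Accuracy alone hides how the errors split between the two classes. A quick confusion matrix built from the objects above (rows with missing predictions are dropped by table()) makes this visible:

table(Predicted = fitted.results, Actual = test$Cap_Req)   # cross-tabulate predicted vs actual flags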

The accuracy of 61% is rather low, but slightly above random 50%.

ROC Curve

We are going to plot the ROC curve and calculate the AUC (area under the curve), which are typical performance measurements for a binary classifier. The ROC is a curve generated by plotting the true positive rate (TPR) against the false positive rate (FPR) at various threshold settings, while the AUC is the area under the ROC curve. As a rule of thumb, a model with good predictive ability should have an AUC closer to 1 (1 is ideal) than 0.5.

library(ROCR)
p <- predict(model,newdata=test,type='response')

p[is.na(p)] <- 0 # here we replace the NA values with zero, just for calculation purposes
pr <- prediction(p, test$Cap_Req)
prf <- performance(pr, measure = "tpr", x.measure = "fpr")
plot(prf)

auc <- performance(pr, measure = "auc")
auc <- auc@y.values[[1]]
auc
## [1] 0.6235113

Our model’s AUC of about 62% is quite weak.
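
Replacing the missing predictions with zero slightly distorts the curve. An alternative sketch that simply drops the rows with missing predictions, using the same ROCR workflow as above:

p_cc <- predict(model, newdata = test, type = "response")
keep <- !is.na(p_cc) & !is.na(test$Cap_Req)        # keep complete cases only
pr_cc <- prediction(p_cc[keep], test$Cap_Req[keep])
performance(pr_cc, measure = "auc")@y.values[[1]]  # AUC computed on complete cases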

Analysis of a single bank

Let us predict the value for ASSOCIATED BANC-CORP for the year ended 12/31/2013; this is the test[12,] element, thus:

predict(model, newdata= test[12,], type="response")
##        14 
## 0.4164882

42% is the estimated chance that ASSOCIATED BANC-CORP will not survive until the next financial reporting date (or will be subject to financial authority action or scrutiny).

Thus, if our model had strong predictive power, we would (not) expect any substantial going concern problem with ASSOCIATED BANC-CORP. In practice, econometric modeling is data-driven, subject to numerous assumptions, and risky if you do not control the logic of the model.

In auditing practice or central banking activities, we use different types of models as decision-supporting tools. However, we always start with the basic financial statements, budget, and forecast analysis to judge upon the going concern assumption.

Independent work assignments

  • Certech p. 188

Try to set up a distressed-assets prediction model with predictive power higher than 60%.

To read:

Do going concern opinions provide incremental information to predict corporate defaults? Gutierrez et al. (2020)

The Relation between Managerial Ability and Audit Fees and Going Concern Opinions Krishnan and Wang (2014)

Do (Fe)Male Auditors Impair Audit Quality? Evidence from Going-Concern Opinions Hardies, Breesch, and Branson (2014)

Implications of the Joint Provision of CSR Assurance and Financial Audit for Auditors’ Assessment of Going-Concern Risk Maso et al. (2020)

Bankructwo przedsiębiorstwa a wynagrodzenie firmy audytorskiej. Implikacje dla regulacji rynku rewizji finansowej Staszkiewicz (2019)

Literature

Gutierrez, Elizabeth, Jake Krupa, Miguel Minutti-Meza, and Maria Vulcheva. 2020. “Do Going Concern Opinions Provide Incremental Information to Predict Corporate Defaults?” Review of Accounting Studies 25 (4): 1344–81. https://doi.org/10.1007/s11142-020-09544-x.
Hardies, Kris, Diane Breesch, and Joël Branson. 2014. “Do (Fe)Male Auditors Impair Audit Quality? Evidence from Going-Concern Opinions.” European Accounting Review 25 (1): 7–34. https://doi.org/10.1080/09638180.2014.921445.
Krishnan, Gopal V., and Changjiang Wang. 2014. “The Relation Between Managerial Ability and Audit Fees and Going Concern Opinions.” AUDITING: A Journal of Practice & Theory 34 (3): 139–60. https://doi.org/10.2308/ajpt-50985.
Maso, Lorenzo Dal, Gerald J. Lobo, Francesco Mazzi, and Luc Paugam. 2020. “Implications of the Joint Provision of CSR Assurance and Financial Audit for Auditors’ Assessment of Going-Concern Risk.” Contemporary Accounting Research 37 (2): 1248–89. https://doi.org/10.1111/1911-3846.12560.
Staszkiewicz, Piotr. 2019. Bankructwo przedsiębiorstwa a wynagrodzenie firmy audytorskiej. Implikacje dla regulacji rynku rewizji finansowej. Warszawa: Oficyna Wyd. SGH.
Staszkiewicz, Piotr, and Renata Karkowska. 2021. “Audit Fee and Banks Communication Sentiment.” Economic Research-Ekonomska Istraživanja, October, 1–21. https://doi.org/10.1080/1331677x.2021.1985567.