Linear Discriminant Analysis (LDA)

A scoring model is a family of statistical tools, developed from qualitative and quantitative empirical data, that determines the appropriate parameters and variables for predicting default. Linear discriminant analysis (LDA) is one of the most popular statistical methods used for developing scoring models. An LDA-based model is a reduced form model because it depends on exogenous variable selection, the default composition, and the default definition. A scoring function produced by an LDA is a linear function of variables chosen, from an extensive pool of qualitative features and accounting ratios, based on their estimated contribution to the likelihood of default. The weighted contributions of the accounting ratios sum to an overall score, the best-known example of which is Altman's Z-score. Although there are many discriminant analysis methods, the one referenced in this topic is the ordinary least squares method.

LDA categorizes firms into two groups: the first represents performing (solvent) firms and the second represents defaulting (insolvent) firms. A key challenge of this categorization is determining, prior to default, which firms will be solvent and which will be insolvent. A Z-score is assigned to each firm at some point prior to default on the basis of both financial and nonfinancial information. A Z cut-off point is used to separate the two groups, although it is imperfect: both solvent and insolvent firms may have similar scores, which can lead to incorrect classifications.

Altman proposed the following LDA model:
[latex]Z = 1.21x_1 +1.40x_2 + 3.30x_3 + 0.6x_4 + 0.999x_5[/latex]

where:

[latex]x_1[/latex] = working capital / total assets

[latex]x_2[/latex] = accrued capital reserves / total assets

[latex]x_3[/latex] = EBIT / total assets

[latex]x_4[/latex] = equity market value / face value of term debt

[latex]x_5[/latex] = sales / total assets

In this model, the higher the Z-score, the more likely it is that a firm will be classified in the group of solvent firms. The Z-score cut-off (also known as the discriminant threshold) was set at Z = 2.673. The model was used not only to plug in current values to determine a Z-score, but also to perform stress tests to show what would happen to each component (and its associated weighting) if a financial factor changed.
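
To make the mechanics concrete, here is a minimal Python sketch of applying the scoring function; the weights and the 2.673 cut-off are the ones quoted above, while the ratio values are hypothetical.

```python
# Minimal sketch of Altman's Z-score using the weights quoted above.
# The ratio inputs below are hypothetical, for illustration only.
def altman_z(x1, x2, x3, x4, x5):
    """x1..x5 are the five accounting ratios defined above."""
    return 1.21 * x1 + 1.40 * x2 + 3.30 * x3 + 0.60 * x4 + 0.999 * x5

z = altman_z(x1=0.25, x2=0.15, x3=0.10, x4=0.90, x5=1.10)
group = "solvent" if z > 2.673 else "insolvent"
print(f"Z = {z:.3f} -> classified with the {group} group")
```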

Another example of LDA is the RiskCalc model, developed by Moody's. It incorporates variables that span several areas, such as financial leverage, growth, liquidity, debt coverage, profitability, size, and assets. The model is tailored to individual countries; in a country like Italy, for example, the model is driven by the positive impact on credit quality of factors such as higher profitability, higher liquidity, lower financial leverage, strong activity ratios, high growth, and larger company size.

With LDA, one of the main goals is to optimize variable coefficients such that Z-scores minimize the inevitable overlapping zone between solvent and insolvent firms. For two groups of borrowers with similar Z-scores, the overlapping zone is a risk area where firms may end up incorrectly classified. Historical versions of LDA sometimes allowed for a gray area with three Z-score ranges to determine who would be granted funding: very safe borrowers, very risky borrowers, and a middle group of borrowers that merited further investigation. Modern LDA incorporates two additional objectives: measuring default probability and assigning ratings.

The process of fitting empirical data into a statistical model is called calibration. LDA calibration involves quantifying the probability of default by using the statistical outputs of ratings systems and accounting for differences between the default rates of samples and the overall population. This implies that more work is still needed, even after the scoring function is estimated and Z-scores are obtained, before the model can be used. If the model is used simply to accept or reject credit applications, calibration involves only adjusting the Z-score cut-off to account for differences between sample and population default rates. If the model is used to categorize borrowers into different ratings classes (thereby assigning default probabilities to borrowers), calibration includes both the cut-off adjustment and a potential rescaling of Z-score default quantifications.

Because actual defaults are relatively infrequent, a more accurate model can be derived by creating more balanced samples with roughly equal-sized groups of performing and defaulting firms. However, the risk of equalizing the sample group sizes is that the model, when applied to a real population, will tend to overpredict defaults. To protect against this risk, the results obtained from the sample must be calibrated. If the model is only used to classify potential borrowers into performing versus defaulting firms, calibration will only involve adjusting the Z cut-off using Bayes' theorem to equate the frequency of defaulting borrowers per the model to the frequency in the actual population.

Prior probabilities represent the probability of default before any evidence is collected on the borrower; [latex]q_{insolv}[/latex] and [latex]q_{solv}[/latex] denote the prior probabilities of insolvency and solvency, respectively. One proposed solution is to adjust the cut-off point by the following relation:

[latex]\ln\left(\frac{q_{solv}}{q_{insolv}}\right)[/latex]

If the prior probabilities are equal (which would occur in a balanced sample), no adjustment to the cut-off point is needed (i.e., the relation equals 0). If the population is unbalanced, an adjustment is made by adding the amount given by the relation just shown to the original cut-off quantity.

For example, assume a sample exists where the cut-off point is 1.00. Over the last 20 years, the average default rate is 3.75% (i.e., [latex]q_{insolv}[/latex] = 3.75%), which implies that [latex]q_{solv}[/latex] equals 96.25%. The relation dictates that we must add [latex]\ln\left(\frac{96.25\%}{3.75\%}\right) \approx 3.25[/latex] to the cut-off point (1.00 + 3.25 = 4.25).

The risk is that misclassifying borrowers leads to unfavorable decisions: rejecting a borrower who is in fact solvent, or accepting a borrower who ends up defaulting. In the first case, the cost of the error is an opportunity cost ([latex]COST_{solv/insolv}[/latex]); in the second, the cost is the loss given default ([latex]COST_{insolv/solv}[/latex]). These costs are not equal, so the correct approach may be to adjust the cut-off point for these different costs by modifying the relation as follows:

[latex]\ln\left(\frac{q_{solv} \times COST_{solv/insolv}}{q_{insolv} \times COST_{insolv/solv}}\right)[/latex]

Extending the earlier example, imagine the current assessment of loss given default is 50% and the opportunity cost is 20%. The cut-off score will require an adjustment of [latex]\ln\left(\frac{96.25\% \times 20\%}{3.75\% \times 50\%}\right) \approx 2.33[/latex].
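
As a check on the arithmetic, here is a short Python sketch of both cut-off adjustments, using the figures from the two examples above (a 3.75% default rate, a 50% loss given default, and a 20% opportunity cost):

```python
import math

q_insolv = 0.0375                # prior probability of insolvency
q_solv = 1 - q_insolv            # prior probability of solvency (96.25%)
cutoff = 1.00                    # original sample-based cut-off

# Adjustment for unbalanced priors only: ln(q_solv / q_insolv)
prior_adj = math.log(q_solv / q_insolv)
print(f"prior adjustment: {prior_adj:.2f} -> new cut-off {cutoff + prior_adj:.2f}")

# Adjustment including misclassification costs
cost_solv_insolv = 0.20          # opportunity cost of rejecting a solvent borrower
cost_insolv_solv = 0.50          # loss given default on an accepted defaulter
cost_adj = math.log((q_solv * cost_solv_insolv) / (q_insolv * cost_insolv_solv))
print(f"cost-weighted adjustment: {cost_adj:.2f} -> new cut-off {cutoff + cost_adj:.2f}")
```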

The cut-off point selection is very sensitive to factors such as overall credit portfolio profile, the market risk environment, market trends, funding costs, past performance/budgets, and customer segment competitive positions.

Note that LDA models typically offer only two decisions: accept or reject. Modern internal rating systems, which are based on the concept of default probability, require more options for decisions.

(Risk Model Discussion) Distinguish between the Structural and the Reduced-form Approaches

Distinguish between the structural approaches and the reduced-form approaches to predicting default.

The foundation of a structural approach (e.g., the Merton model) is the financial and economic theoretical assumptions that describe the overall path to default. Under this approach, building a model involves estimating the formal relationships that link the relevant variables of the model. In contrast, reduced form models (e.g., statistical and numerical approaches) arrive at a final solution using the set of variables that is most statistically suitable without factoring in the theoretical or conceptual causal relationships among variables.

A reduced form model makes no ex ante assumptions about the causal drivers of default (unlike structural models); specific firm characteristics are linked to default using statistics that tie them to default data, and the default event itself is treated simply as an observed real-life event. The independent variables in these models are combined based on their estimated contribution to the final result and can change in relevance depending on firm size, firm sector, and stage of the economic cycle.

A significant model risk in reduced form approaches results from a model's dependency on the sample used to estimate it. To derive valid results, there must be a strong level of homogeneity between the sample and the population to which the model is applied.

Reduced form models used for credit risk can be classified into statistical-based and numerical-based categories. Statistical-based models use variables and relations that are selected and calibrated by statistical procedures. Numerical-based approaches use algorithms that connect actual defaults with observed variables. Both approaches can aggregate profiles, such as industry, sector, size, location, capitalization, and form of incorporation, into homogeneous top-down segment classifications. A bottom-up approach may also be used, which would classify variables based on case-by-case impacts. While numerical and statistical methods are primarily considered bottom-up approaches, expert-based approaches tend to be the most bottom-up.

(Credit Risk/ Corporate Risk) Apply the Merton Model to Calculate Default Probability and the Distance to Default

Apply the Merton model to calculate default probability and the distance to default and describe the limitations of using the Merton model.

The Merton model, an example of a structural approach, is based on the premise that the technical event of default occurs only when the proprietary structure of the defaulting company is no longer considered worthwhile, i.e., when the asset value falls below the debt face value (V < D). Assuming that a default event is dependent on financial variables, default probability can be calculated using the Black-Scholes-Merton formula. The five relevant variables are the risk-free interest rate, the maturity (when the debt expires), the debt face value (similar to an option strike price), the value of the borrower's assets, and the volatility of the asset value. The output provides the probability that the borrower will be insolvent.

In Merton's approach, the equity of a firm represents a call option on the market value of the assets. As such, the value of equity is a by-product of the market value and volatility of the assets, as well as the book value of liabilities; this implies that a firm's asset volatility serves as the link between its business and financial risk. A firm's risk structure is used to set its optimal financial structure, which in turn affects equity value through the probability that shareholders lose their investment in a default.
The default probability using the Merton approach and applying the Black-Scholes-Merton formula is as follows:

[latex]PD = N\left(\frac{\ln(D) - \ln(V_A) - rT + 0.5\sigma_A^2 T}{\sigma_A \sqrt{T}}\right) = N\left(\frac{\ln\left(\frac{D}{V_A}\right) - (r - 0.5\sigma_A^2)T}{\sigma_A \sqrt{T}}\right)[/latex]

where:

ln = the natural logarithm

D = debt face value

[latex]V_A[/latex] = firm asset value (market value of equity and net debt)

[latex]r[/latex]  = expected return in the risky world

T = time to maturity remaining

[latex]\sigma_A[/latex] = volatility (standard deviation of asset values)

N = cumulative normal distribution function

In the preceding equation, the components that lie within the brackets are seen as a standardized measure of the distance to the debt barrier. This distance represents a threshold beyond which a firm will enter into financial distress and subsequently default.

The distance to default (DtD) using the Merton approach (assuming T = 1) is as follows:

[latex]DtD = \frac{\ln\left(\frac{V_A}{D}\right) + r - 0.5\sigma_A^2 - \text{"other payouts"}}{\sigma_A}[/latex]
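
A minimal Python sketch of these two formulas, using scipy's normal CDF; the inputs are hypothetical, and the "other payouts" term is ignored:

```python
import math
from scipy.stats import norm

V_A = 100.0       # market value of assets (hypothetical)
D = 70.0          # face value of debt
r = 0.05          # expected return on assets
sigma_A = 0.25    # asset volatility
T = 1.0           # time to maturity

# Distance to default: how many standard deviations the firm sits
# above the default barrier (ignoring "other payouts" here).
dtd = (math.log(V_A / D) + (r - 0.5 * sigma_A**2) * T) / (sigma_A * math.sqrt(T))

# PD is the normal probability of ending up below the barrier: N(-DtD).
pd = norm.cdf(-dtd)
print(f"distance to default: {dtd:.3f}, default probability: {pd:.4f}")
```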

There are many challenges associated with using the Merton model. Neither the asset value itself nor its associated volatility is observable. The structure of the underlying debt is typically very complex, as it involves differing maturities, covenants, guarantees, and other specifications. Because variables change so frequently, the model must be recalibrated continuously. Also, its main limitation is that it only applies to liquid, publicly traded firms; using this approach for unlisted companies is problematic due to unobservable prices and the challenge of finding comparable prices. Finally, due to high sensitivity to market movements and underlying variables, the model tends to fall short of fully reflecting the dependence of credit risk on business and credit cycles.

The 7 Basel II Level 1 Operational Risk Categories

What is Basel II?

According to Investopedia, Basel II is a set of international banking regulations put forth by the Basel Committee on Banking Supervision, which leveled the international regulatory field with uniform rules and guidelines. Basel II expanded the rules for minimum capital requirements established under Basel I, the first international regulatory accord, provided a framework for regulatory review, and set disclosure requirements for assessing the capital adequacy of banks. The main difference from Basel I is that Basel II incorporates the credit risk of assets held by financial institutions to determine regulatory capital ratios.

OpRisk Loss Event Categories

Basel II provides 7 categories of level 1 loss events, which most firms have adopted to meet their own operational risk (OpRisk) framework requirements. OpRisk models are designed to identify and mitigate operational risks of the firm, which arise from people, processes, systems, and external events.

The 7 Basel II event risk categories are intended to capture all potential operational risks. Every loss event should be mapped to the risk event categories outlined in the firm's operational risk management policies and procedures. Some losses can fall into more than one category.

The 7 categories are:

  • Internal Fraud – misappropriation of assets, tax evasion, intentional mismarking of positions, bribery
  • External Fraud – theft of information, hacking damage, third-party theft and forgery
  • Employment Practices and Workplace Safety – discrimination, workers compensation, employee health and safety
  • Clients, Products, and Business Practice – market manipulation, antitrust, improper trade, product defects, fiduciary breaches, account churning
  • Damage to Physical Assets – natural disasters, terrorism, vandalism
  • Business Disruption and Systems Failures – utility disruptions, software failures, hardware failures
  • Execution, Delivery, and Process Management – data entry errors, accounting errors, failed mandatory reporting, negligent loss of client assets

Evaluating Operational Risk

When evaluating OpRisk events, it's critical to understand that both severity and frequency contribute to the magnitude of a loss. For example, loss events in the Execution, Delivery, and Process Management category are small but occur very frequently, whereas losses in the Clients, Products, and Business Practices category are much less frequent but typically large in dollar terms, as these loss events commonly arise from substantial litigation.

The modeling of loss event data differs for each category, so it is important to make sure every event is placed in the appropriate group. When assigning loss events in OpRisk, consistency is more important than accuracy: effective operational risk management requires that similar events are consistently categorized the same way. Mistakes made classifying risks in past years will impact the risk management control process and reporting to regulators.

In order to properly classify risks, it is important for the firm to perform a comprehensive risk mapping exercise that details every major process of the firm. The process of identifying and classifying risks is commonly referred to as OpRisk taxonomy.

Two-Way and One-Way CSA Agreement

This post will discuss the difference between a two-way and a one-way CSA agreement and describe how collateral parameters can be linked to credit quality.

Let's briefly go through what a CSA agreement is.

CSA agreements are often used in derivatives trading. A credit support annex (CSA) is a document that defines the terms for the provision of collateral by the parties in derivatives transactions.

It's one of the four standard contracts developed by ISDA (the International Swaps and Derivatives Association).

There may be instances when CSAs are not used. Institutions may be unable or unwilling to post collateral, either because their credit quality is far superior to their counterparty's or because they cannot commit to the operational and liquidity requirements that arise from committing to a CSA.

Two-way CSA

A two-way CSA is often established when two counterparties are relatively similar, as it will be beneficial to both parties involved. It is important to note that the two sides may not be treated equally on certain parameters, like threshold and initial margin, depending on the respective risk levels of each party.

One-way CSA

A one-way CSA differs from a two-way CSA in that the former only requires that one counterparty post collateral (either immediately or after a specific event, such as a ratings downgrade). As a result, the CSA will be beneficial to the receiver of the collateral and at the same time will present additional risk for the counterparty posting the collateral. These types of CSAs are established when two counterparties are significantly different in size, risk levels, etc.

Benefits and Risks of CSAs

The terms of a collateral agreement are usually linked to the credit quality of the counterparties in a transaction. This is beneficial when a counterparty's credit quality is strong because it minimizes operational workload. However, it is also beneficial when a counterparty's credit quality is weak, as it allows the other party to enforce collateralization terms triggered by a quality downgrade.

Although credit ratings are the most commonly linked measure of credit quality, others include the market value of equity, net asset value, and traded credit spread. The benefits of linking to credit ratings must be weighed against the costs associated with the requirement of collateral when a ratings downgrade occurs.

Estimate the expected shortfall given P/L or return data.

A major limitation of the VaR measure is that it does not tell the investor the magnitude of the actual loss. VaR only provides the maximum value we can lose for a given confidence level. The expected shortfall (ES) provides an estimate of the tail loss by averaging the VaRs for increasing confidence levels in the tail. Specifically, the tail mass is divided into n equal slices and the corresponding n - 1 VaRs at the slice boundaries are computed. For example, if n = 5, we can construct the following table based on the standard normal distribution:

Confidence level | VaR | Difference
96% | 1.7507 | -
97% | 1.8808 | 0.1301
98% | 2.0537 | 0.1729
99% | 2.3263 | 0.2726

Observe from the Difference column that the VaR increments grow as the confidence level rises: each slice carries the same probability mass (1%), but the tail becomes thinner and thinner, so that mass spans ever larger losses. The average of the four computed VaRs is 2.003 and represents the probability-weighted expected tail loss, i.e., the expected shortfall.

Note that as n increases, the expected shortfall estimate will increase and approach the theoretical true value, which is 2.063 in this case (the average of a very large number of VaRs, e.g., more than 10,000).
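
The construction is easy to reproduce; here is a short Python sketch that slices the 5% tail of a standard normal into n pieces and averages the boundary VaRs:

```python
import numpy as np
from scipy.stats import norm

def es_normal(alpha=0.05, n=5):
    """Average the n - 1 slice-boundary VaRs in the alpha tail."""
    # For alpha = 5% and n = 5, the boundaries sit at 96%, 97%, 98%, 99%.
    levels = 1 - alpha + alpha * np.arange(1, n) / n
    return norm.ppf(levels).mean()

print(f"n = 5:     ES = {es_normal(n=5):.3f}")      # 2.003, as in the table
print(f"n = 10000: ES = {es_normal(n=10000):.3f}")  # 2.063, the theoretical value
```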

Estimate VaR using a historical simulation approach.

Estimating VaR with a historical simulation approach is by far the simplest and most straightforward VaR method. To make this calculation, you simply order return observations from largest to smallest. The observation that follows the threshold loss level denotes the VaR limit. We are essentially searching for the observation that separates the tail from the body of the distribution.

More generally, the observation that determines VaR for n observations at the [latex](1 - \alpha)[/latex] confidence level is observation [latex](\alpha \times n)[/latex]. Recall that the confidence level [latex](1 - \alpha)[/latex] is typically a large value (e.g., 95%), whereas the significance level, usually denoted [latex]\alpha[/latex], is much smaller (e.g., 5%).


To illustrate this VaR method, assume you have gathered 100 monthly returns for a security and produced the distribution shown in Figure 1. You decide that you want to compute the monthly VaR for this security at a confidence level of 99%. At a 99% confidence level, the lower tail displays the lowest 1% of the underlying distribution's returns. For this distribution, the value associated with a 99% confidence level is a return of -4%.


Figure 1

Rank | Monthly Return
1 | 7%
2 | 2.56%
3 | 2%
4 | 1.88%
... | ...
96 | 1.02%
97 | -0.9%
98 | -2.7%
99 | -3.5%
100 | -4%
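
A minimal Python sketch of the whole procedure; the return series here is randomly generated as a stand-in for real observations:

```python
import numpy as np

rng = np.random.default_rng(42)
returns = rng.normal(loc=0.01, scale=0.02, size=100)  # hypothetical monthly returns

alpha = 0.01                              # significance level -> 99% confidence
sorted_returns = np.sort(returns)         # ascending: worst losses first
var_obs = int(alpha * len(returns))       # alpha * n = 1st worst of 100
hist_var = -sorted_returns[var_obs - 1]   # report VaR as a positive loss
print(f"99% historical-simulation VaR: {hist_var:.2%}")
```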

Estimate VaR using a parametric approach for both normal and lognormal distributions.

In contrast to the historical simulation method, the parametric approach (e.g., the delta-normal approach) explicitly assumes a distribution for the underlying observations.

We will analyze two cases:

(1) VaR for returns that follow a normal distribution, and

(2) VaR for returns that follow a lognormal distribution.

Intuitively, the VaR for a given confidence level denotes the point that separates the tail losses from the remaining distribution. The VaR cutoff will be in the left tail of the returns distribution. Usually, the calculated value at risk is negative, but is typically reported as a positive value since the negative amount is implied (i.e., it is the value that is at risk).

Normal VaR

The normal distribution is usually used to model stock returns. In this case, the VaR at significance level [latex]\alpha[/latex] is:

[latex]VaR(\alpha\%) = (-\mu_R + \sigma_R z_\alpha) P_{t-1}[/latex]

where [latex]z_\alpha[/latex] is the standard normal quantile associated with the confidence level (e.g., 1.645 at 95%) and [latex]P_{t-1}[/latex] is the current portfolio value.

In practice, the population parameters are not likely known, in which case the researcher will use the sample mean and standard deviation.
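
A quick Python sketch of the normal VaR formula, using hypothetical sample estimates:

```python
from scipy.stats import norm

mu_R = 0.01          # sample mean return over the horizon (hypothetical)
sigma_R = 0.02       # sample standard deviation of returns
P_prev = 1_000_000   # portfolio value P_{t-1}
alpha = 0.05         # significance level -> 95% confidence

z_alpha = norm.ppf(1 - alpha)                 # 1.645 for alpha = 5%
var = (-mu_R + sigma_R * z_alpha) * P_prev    # VaR = (-mu + sigma * z) * P
print(f"95% normal VaR: {var:,.0f}")          # about 22,898
```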

Lognormal VaR 

The lognormal distribution is used to model stock prices because a price is always greater than zero. To calculate VaR in lognormal form, we need to transform the non-logarithmized price mean and variance, denoted m and v, into a logarithmized mean and variance:

[latex]\mu = \ln\left(\frac{m}{\sqrt{1+\frac{v}{m^2}}}\right)[/latex]

[latex]\sigma^2 = \ln\left(1+\frac{v}{m^2}\right)[/latex]

[latex]VaR(\alpha\%) = P_{t-1} - e^{\mu - \sigma z_\alpha}[/latex]

where [latex]e^{\mu - \sigma z_\alpha}[/latex] is the lower tail quantile of the end-of-period price at the chosen confidence level and [latex]P_{t-1}[/latex] is the current price.
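
Putting the three steps together, a small Python sketch with hypothetical price moments m and v:

```python
import math
from scipy.stats import norm

m, v = 105.0, 16.0    # mean and variance of the end-of-period price (hypothetical)
P_prev = 100.0        # current price P_{t-1}
alpha = 0.05          # significance level -> 95% confidence

# Transform price moments into log-space parameters.
mu = math.log(m / math.sqrt(1 + v / m**2))
sigma = math.sqrt(math.log(1 + v / m**2))

# Lower-tail price quantile, then VaR relative to the current price.
z_alpha = norm.ppf(1 - alpha)               # 1.645
quantile = math.exp(mu - sigma * z_alpha)
var = P_prev - quantile
print(f"95% lognormal VaR: {var:.2f}")
```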