LO 49.1: Within the economic capital implementation framework, describe the challenges that appear in:
- Defining and calculating risk measures
- Risk aggregation
- Validation of models
- Dependency modeling in credit risk
- Evaluating counterparty credit risk
- Assessing interest rate risk in the banking book
For this LO, it would be helpful to recall the properties of a coherent risk measure from the Part I curriculum. The properties are as follows:
1. Monotonicity: A portfolio with greater future returns will likely have less risk.
2. Subadditivity: The risk of a portfolio is at most equal to the combined risk of the assets within the portfolio.
3. Positive homogeneity: Increasing the size of a portfolio by a given factor increases its risk by the same factor.
4. Translation invariance: Adding cash (a riskless amount) to a portfolio reduces its risk by the amount of the cash added.
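Stated compactly for a risk measure ρ applied to portfolio values X and Y (the symbolic form below is added here for clarity and does not appear in the assigned reading):

```latex
\begin{aligned}
&\text{Monotonicity:} && X \ge Y \;\Rightarrow\; \rho(X) \le \rho(Y) \\
&\text{Subadditivity:} && \rho(X + Y) \le \rho(X) + \rho(Y) \\
&\text{Positive homogeneity:} && \rho(\lambda X) = \lambda \rho(X) \quad \text{for } \lambda \ge 0 \\
&\text{Translation invariance:} && \rho(X + c) = \rho(X) - c \quad \text{for cash } c
\end{aligned}
```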
Defining and Calculating Risk Measures
It is not always apparent how risk should be quantified for a given bank, especially when there are many different possible risk measures to consider. Prior to defining specific measures, one should be aware of the general characteristics of ideal risk measures. They should be: intuitive, stable, easy to compute, easy to understand, coherent, and interpretable in economic terms. In addition, the risk decomposition process must be simple and meaningful for a given risk measure.
Standard deviation, value at risk (VaR), expected shortfall (ES), and spectral (i.e., coherent) and distorted risk measures could all be considered, each with its respective pros and cons. Obviously, no one measure would perfectly consider all of the necessary elements in measuring risk. In practice, VaR and ES are the most commonly used measures. The following section is a summary of challenges encountered when considering the appropriateness of each risk measure.
Standard deviation
- Not stable because it depends on assumptions about the loss distribution.
- Not coherent because it violates the monotonicity condition.
- Simple, but not very meaningful in the risk decomposition process.
VaR (the most commonly used measure)
- Not stable because it depends on assumptions about the loss distribution.
- Not coherent because it violates the subadditivity condition (which could cause problems in internal capital allocation and limit setting for sub-portfolios).
Expected shortfall
- May or may not be stable, depending on the loss distribution.
- Not easy to interpret, and the link to the bank's desired target rating is not clear.
Spectral and distorted risk measures
- Neither intuitive nor easily understood (and rarely used in practice).
- May or may not be stable, depending on the loss distribution.
In defining or using such risk measures, banks often consider several of them, for different purposes. For example, absolute risk and capital allocation within the bank are most commonly measured using VaR but, increasingly, the latter is being measured using ES. The VaR measure of absolute risk tends to be easier to communicate to senior management than ES, but ES is a more stable measure than VaR for allocating total portfolio capital. The challenge for the bank is to determine if and when one or the other, or both, should be used.
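As a minimal illustration of the two workhorse measures, the sketch below computes VaR and ES from simulated losses; the Student-t loss distribution, portfolio scale, and 99.9% confidence level are assumptions chosen for the example, not prescribed values.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical loss distribution: 100,000 simulated portfolio losses
# drawn from a heavy-tailed Student-t (an assumption for illustration).
losses = rng.standard_t(df=4, size=100_000) * 1_000_000

alpha = 0.999  # confidence level typical of economic capital models

# VaR: the alpha-quantile of the loss distribution.
var = np.quantile(losses, alpha)

# ES: the average loss conditional on exceeding VaR.
es = losses[losses > var].mean()

print(f"{alpha:.1%} VaR: {var:>12,.0f}")
print(f"{alpha:.1%} ES:  {es:>12,.0f}")
```

Note how ES, as a tail average, sits above VaR; its estimate also rests on the few observations beyond the quantile, which is why its stability depends on the loss distribution.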
Amongst the commonly used measures to calculate economic capital, regulators do not have a clear preference for one over another. If different risk measures are implemented by a bank for external versus internal purposes, then there must be a logical connection between the two risk measures. For regulators, merely comparing a bank's internal and regulatory capital amounts is insufficient when determining the underlying risks in its portfolio. Therefore, such a task presents an analytical challenge to regulators.
Risk Aggregation
Risk aggregation involves identifying the individual risk types and making certain choices in aggregating those risk types. Classification by risk types (market, credit, operational, and business) may be approximate and prone to error. For example, the definitions of risk types may differ across banks or within a given bank, which complicates the aggregation process.
Even though one or more of the previously mentioned four risk types may be found at the same time within a given bank portfolio, the portfolio will often be represented by one risk type for the bank's classification purposes. Such a simplistic distinction may result in inaccurate measurements of the risk types, and this may bias the aggregation process.
Most banks begin by aggregating risk into silos by risk-type across the entire bank. Other banks prefer using business unit silos, while others combine both approaches. There is no one unanimously accepted method, as each approach has its specific advantages.
Before risk types can be aggregated into a single measure, they must be expressed in comparable units. There are three items to consider: risk metric, confidence level, and time horizon.
1. Risk metric: Relies on the metrics used in the quantification of the different risk types. One must consider whether the metric satisfies the subadditivity condition.
2. Confidence level: Loss distributions for different types of risk are assumed to have different shapes, which implies differences in confidence intervals. The lack of consistency in choosing confidence levels creates additional complexity in the aggregation process.
3. Time horizon: Choosing the risk measurement time horizon is one of the most challenging tasks in risk measurement. For example, combining risk measures that have been determined using different time horizons creates problems irrespective of the actual measurement methods used. Specifically, there will be inaccurate comparisons between risk types.
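To make the unit-conversion issue concrete, the sketch below rescales a 1-day 99% market risk VaR into the 1-year 99.9% units typical of economic capital; the i.i.d. normal scaling it uses is a strong simplification, shown only to illustrate the comparability problem.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical starting point: a 1-day 99% market risk VaR of 10 (millions).
var_1d_99 = 10.0

# Under an i.i.d. normal assumption (a strong simplification), VaR scales
# with the square root of the horizon and with the ratio of normal quantiles.
h_days = 250  # target horizon: one year of trading days
z_99, z_999 = norm.ppf(0.99), norm.ppf(0.999)

var_1y_999 = var_1d_99 * np.sqrt(h_days) * (z_999 / z_99)
print(f"Approximate 1-year 99.9% VaR: {var_1y_999:.1f}")
```

Real loss distributions are neither normal nor i.i.d., so such mechanical rescaling can materially misstate risk, which is exactly the challenge described above.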
A common belief is that combining two portfolios will result in lower risk per investment unit in the combined portfolio versus the weighted average of the two separate portfolios. However, when we consider risk aggregations across different portfolios or business units, such a belief does not hold up with VaR because it does not necessarily satisfy the subadditivity condition. Also, there may be a false assumption that covariance always fully takes into account the dependencies between risks. Specifically, there could be times where the risk interactions are such that the resulting combinations represent higher, not lower, risk. These points highlight an additional challenge in the computation of risk.
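A minimal simulation of the subadditivity point (the loan sizes and default probabilities are invented for illustration): each loan's 97.5% VaR is zero, yet the combined portfolio's VaR is not.

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 1_000_000, 0.975

# Two independent loans: each defaults with probability 2% and loses 100.
loss_a = np.where(rng.random(n) < 0.02, 100.0, 0.0)
loss_b = np.where(rng.random(n) < 0.02, 100.0, 0.0)

var_a = np.quantile(loss_a, alpha)            # 0, since P(loss) = 2% < 2.5%
var_b = np.quantile(loss_b, alpha)            # 0
var_ab = np.quantile(loss_a + loss_b, alpha)  # 100: P(any default) ~ 3.96%

print(var_a, var_b, var_ab)  # VaR(A+B) > VaR(A) + VaR(B): subadditivity fails
```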
There are five commonly used aggregation methodologies. The following is a brief description of each, as well as the challenges associated with using them.
1. Simple summation
- Adds together the individual capital components.
- Does not differentiate between risk types and therefore assumes equal weighting. Also does not take into account the underlying interactions between risk types or differences in the way the risk types may create diversification benefits. In addition, complications arising from using different confidence levels are ignored.
2. Constant diversification
- Same process as simple summation, except that it subtracts a fixed diversification percentage from the overall amount.
- Faces challenges similar to those of simple summation.
3. Variance-covariance matrix
- Summarizes the interdependencies across risk types and provides a flexible framework for recognizing diversification benefits.
- Estimates of inter-risk correlations (a bank-specific characteristic) are difficult and costly to obtain, and the matrix does not adequately capture non-linearities and skewed distributions.
4. Copulas
- Combine marginal probability distributions into a joint probability distribution through copula functions (a stylized sketch follows this list).
- Input requirements are more demanding, and the parameterization is very difficult to validate. In addition, building a joint distribution is very difficult.
5. Full modeling/simulation
- Simulates the impact of common risk drivers on all risk types and constructs the joint distribution of losses.
- The most demanding method in terms of required inputs. There are also high information technology demands, the process is time consuming, and it may provide a false sense of security.
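Referenced from the copula item above, a minimal Gaussian-copula aggregation sketch; the inter-risk correlation matrix and the marginal loss distributions are assumptions made purely for illustration.

```python
import numpy as np
from scipy.stats import norm, lognorm, t as student_t

rng = np.random.default_rng(7)
n = 200_000

# Hypothetical inter-risk correlations (market, credit, operational).
corr = np.array([[1.0, 0.5, 0.2],
                 [0.5, 1.0, 0.3],
                 [0.2, 0.3, 1.0]])
chol = np.linalg.cholesky(corr)

# Gaussian copula: correlated normals -> uniforms -> marginal inverses.
z = rng.standard_normal((n, 3)) @ chol.T
u = norm.cdf(z)

# Hypothetical marginals: heavy-tailed market risk, skewed credit and
# operational loss distributions.
market = student_t.ppf(u[:, 0], df=4) * 10
credit = lognorm.ppf(u[:, 1], s=1.0, scale=20)
oper   = lognorm.ppf(u[:, 2], s=1.5, scale=5)

total = market + credit + oper
print(f"99.9% aggregate loss: {np.quantile(total, 0.999):,.0f}")
```

The demanding part in practice is not the mechanics shown here but defending the chosen copula and marginals, which is the validation difficulty noted in the list.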
The variance-covariance approach is commonly used by banks. Frequently, however, bank-specific data is not available or is of poor quality. As a result, the items in the variance-covariance matrix are completed on the basis of expert judgment. On a related note, banks often use a conservative variance-covariance matrix where the correlations are reported to be approximate and biased upward. In order to reduce the need for expert judgment, banks may end up limiting the dimensionality of the matrix and aggregating risk categories so that there are only a few of them, not recognizing that such aggregations embed correlation assumptions. Clearly, a disadvantage of such a practice is that each category becomes less homogeneous and, therefore, more challenging to quantify.
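A minimal sketch comparing the first three aggregation methods on hypothetical stand-alone capital figures; the correlation entries and the 20% fixed diversification percentage are invented for illustration.

```python
import numpy as np

# Hypothetical stand-alone capital per risk type (market, credit, operational).
k = np.array([40.0, 100.0, 30.0])

# Conservative, expert-judgment inter-risk correlations (illustrative only).
R = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.4],
              [0.3, 0.4, 1.0]])

simple_sum = k.sum()              # 1. simple summation
const_div  = 0.80 * simple_sum    # 2. constant (assumed 20%) diversification
var_covar  = np.sqrt(k @ R @ k)   # 3. variance-covariance aggregation

print(f"Simple summation:         {simple_sum:7.1f}")
print(f"Constant diversification: {const_div:7.1f}")
print(f"Variance-covariance:      {var_covar:7.1f}")
```

Setting every off-diagonal correlation to 1 recovers the simple sum, which is why upward-biased correlations are the conservative choice described above.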
One potential disadvantage of the more sophisticated methodologies is that they often lead to unwarranted confidence in the accuracy of the output. It is important to consider robustness checks and estimates of specification and measurement error so as to prevent misleading results.
Validation of Models
Validation is the proof that a model works as intended. For example, while validation is a useful tool for testing a model's risk sensitivity, it is less useful for testing the accuracy of high quantiles in a loss distribution.
The validation of economic capital models differs from the validation of an internal ratings-based (IRB) model because the output of economic capital models is a distribution rather than a single predicted forecast against which actual outcomes may be compared. Otherwise, economic capital models are quite similar to VaR models, despite their longer time horizons, higher confidence levels, and greater scarcity of data.
There are six qualitative validation processes to consider. The following is a brief description of them, as well as the challenges associated with using them (where applicable).
1. Use test
- If a bank uses its measurement systems for internal purposes, then regulators could place more reliance on the outputs for regulatory capital.
- The challenge is for regulators to obtain a detailed understanding of which of the model's properties are being used and which are not.
2. Qualitative review
- Must examine documentation and development work, have discussions with the model's developers, test and derive algorithms, and compare with other practices and known information.
- The challenge is to ensure that the model works in theory and takes into account the correct risk drivers. Also, confirmation of the accuracy of the mathematics behind the model is necessary.
3. Systems implementation
- For example, user acceptance testing and checking of code should be done prior to implementation to ensure that the model is implemented properly.
4. Management oversight
- It is necessary to involve senior management in examining the output data from the model and knowing how to use the data to make business decisions.
- The challenge is ensuring that senior management is aware of how the model is used and how the model outputs are interpreted.
5. Data quality checks
- Processes to ensure the completeness, accuracy, and relevance of the data used in the model. Examples include: qualitative review, identifying errors, and verification of transaction data.
6. Examination of assumptions (sensitivity testing)
- Assumptions include: correlations, recovery rates, and the shape of tail distributions. The process involves reviewing the assumptions and examining their impact on model outputs.
There are also six quantitative validation processes to consider. The following is a brief description of them, as well as the challenges associated with using them (where applicable).
1. Validation of inputs and parameters
- Validating input parameters for economic capital models requires validation of those parameters not included in the IRB approach, such as correlations.
- The challenge is that checking model inputs is not likely to be fully effective because every model is based on underlying assumptions. Therefore, the more complex the model, the more likely there will be model error. Simply examining input parameters will not prevent the problem.
2. Model replication
- Attempts to replicate the model results obtained by the bank.
- The challenge is that the process is rarely enough to validate models, and in practice there is little evidence of it being used by banks. Specifically, replication simply by re-running a set of algorithms to produce the same set of results is not considered sufficient model validation.
3. Benchmarking and hypothetical portfolio testing
- The process is commonly used and involves determining whether the model produces results comparable to a standard model, or comparing models on a set of reference portfolios.
- The challenge is that the process can only compare one model against another and may provide little comfort that the model reflects reality. All that the process is able to do is provide broad comparisons confirming that input parameters or model outputs are broadly comparable.
4. Backtesting
- Considers how well the model forecasts the distribution of outcomes, i.e., a comparison of outcomes to forecasts (a simple exception-count sketch follows this list).
- The challenge is that the process can only be used for models whose outputs can be characterized by a quantifiable metric with which to compare an outcome. Obviously, there will be risk measurement systems whose outputs cannot be interpreted this way. Also, backtesting is not yet a major part of banks' validation practices for economic capital purposes.
5. Profit and loss attribution
- Involves regular analysis of profit and loss, i.e., a comparison between the causes of actual profit and loss and the model's risk drivers.
- The challenge is that the process is not widely used except for market risk pricing models.
6. Stress testing
- Involves stressing the model and comparing model outputs to stress losses.
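Referenced from the backtesting item above, a minimal exception-count sketch; the 250-day window, 99% VaR level, and six observed exceptions are assumed numbers for illustration.

```python
from scipy.stats import binom

# Hypothetical backtest: 250 daily P&L observations against a 99% VaR.
n_days, p = 250, 0.01
exceptions = 6  # assumed number of days the loss exceeded VaR

# Under a correctly calibrated model, exceptions ~ Binomial(250, 0.01).
p_value = 1 - binom.cdf(exceptions - 1, n_days, p)
print(f"Expected exceptions: {n_days * p:.1f}")
print(f"P(>= {exceptions} exceptions | model correct) = {p_value:.4f}")
```

This kind of count-based test only works where the model output reduces to a quantifiable metric, which is precisely the limitation noted above; for a 99.9% one-year economic capital number, exceptions are far too rare for such a test to have any power.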
Overall, although these validation processes may be highly effective in areas such as risk sensitivity, they may not be effective in areas such as overall absolute accuracy.
Additionally, there is difficulty in validating the conceptual soundness of a capital model. The development of a model almost always requires assumptions to be made. However, some of the assumptions may not be testable, so it could be impossible to be absolutely certain of the conceptual soundness of a model. Even though the underlying points may appear reasonable and logical, that may not be the case in practice.
From a regulator's perspective, some industry validation practices are weak, especially for total capital adequacy of the bank and the overall calibration of models. Such a validation project is challenging because it usually requires evaluation of high quantiles of loss distributions over long periods of time. In addition, there are data scarcity problems plus technical difficulties, such as tail estimation. Therefore, it is important for senior management and model users to understand the limitations of models and the risks of using models that have not been fully validated.
Dependency Modeling in Credit Risk
Modeling the dependency structure between borrowers is crucial, yet challenging. Both linear and nonlinear dependency relationships between obligors need to be considered.
In general, dependencies can be modeled using: credit risk portfolio models, models using copulas, and models based on the asymptotic single-risk factor (ASRF) model. With the ASRF approach, banks may use their own estimates of correlations or may use multiple systematic risk factors to address concentrations. Such an approach would result in questioning the method used to calibrate the correlations and the ways in which the bank addressed the infinite granularity and single-factor structure of the ASRF model. ASRF can be used to compute the capital requirement for credit risk under the IRB framework.
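A minimal sketch of the ASRF capital formula that underlies the IRB risk-weight function (maturity adjustment omitted for brevity; the PD, LGD, and asset correlation inputs are illustrative, not prescribed values).

```python
from math import sqrt
from scipy.stats import norm

def asrf_capital(pd: float, lgd: float, rho: float, q: float = 0.999) -> float:
    """Unexpected-loss capital per unit of exposure under the ASRF model."""
    # Conditional PD given the systematic factor at its q-th percentile stress.
    pd_stress = norm.cdf((norm.ppf(pd) + sqrt(rho) * norm.ppf(q)) / sqrt(1 - rho))
    # Capital covers the stressed loss in excess of expected loss.
    return lgd * (pd_stress - pd)

# Illustrative inputs (assumed for the example).
print(f"K = {asrf_capital(pd=0.01, lgd=0.45, rho=0.20):.4f} per unit of EAD")
```

The single systematic factor and the assumption of an infinitely granular portfolio are exactly the features a regulator would probe when a bank substitutes its own correlations.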
There are many issues to consider regarding the challenges in coming up with reliable dependency assumptions used in credit risk portfolio models. Regulators may need to test the accuracy and strength of correlation estimates used by banks given their heavy reliance on model assumptions and the significant impact on economic capital calculations.
In the past, the validity of the following assumptions has been questioned: (1) the ASRF Gaussian copula approach, (2) the normal distribution for the variables driving default, (3) the stability of correlations over time, and (4) the joint assumptions of correctly specified default probabilities and doubly-stochastic processes, which suggest that default correlation is sufficiently captured by common risk factors.
Doubts have been raised about the ability of some models using such assumptions in terms of their ability to explain the time-clustering of defaults that is seen in certain markets. Insufficiently integrating the correlation between probability of default (PD) and loss given default (LGD) in the models, coupled with insufficiently modeling LGD variability, may lead to underestimating the necessary economic capital. Furthermore, it will create challenges in identifying the different sources of correlations and the clustering of defaults and losses.
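To illustrate the PD-LGD point, a stylized one-factor simulation (the linear link between LGD and the systematic factor is an assumption made purely for illustration): when the same downturn that drives defaults also raises LGD, the tail loss is noticeably higher than under a constant-LGD assumption.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n_sims, n_loans, pd_, rho = 10_000, 500, 0.02, 0.15
thresh = norm.ppf(pd_)

z = rng.standard_normal(n_sims)               # systematic factor per scenario
eps = rng.standard_normal((n_sims, n_loans))  # idiosyncratic shocks
defaults = np.sqrt(rho) * z[:, None] + np.sqrt(1 - rho) * eps < thresh

lgd_flat = 0.45  # independence case: constant LGD
# Assumed behavioral link: a worse systematic state raises LGD.
lgd_sys = np.clip(0.45 - 0.15 * z, 0.1, 0.9)[:, None]

loss_indep = (defaults * lgd_flat).mean(axis=1)  # portfolio loss rates
loss_corr = (defaults * lgd_sys).mean(axis=1)

for name, loss in (("constant LGD", loss_indep), ("systematic LGD", loss_corr)):
    print(f"{name:>15}: 99.9% loss rate = {np.quantile(loss, 0.999):.4f}")
```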
Rating changes are greatly impacted by the business cycle and are explained by different models during expansionary and recessionary periods. As a result, the sample period and approach used to calibrate the dependency structure could be important in assessing whether correlation estimates are overestimated or underestimated. Furthermore, some models assume that unobservable asset returns may be approximated by changes in equity prices but fail to consider that the relationship between asset returns and equity prices is unobservable and may be non-linear. Also, the use of equity prices to estimate credit default probability is problematic because such prices may include information that is irrelevant for credit risk purposes. As a result, using equity prices may result in some inaccuracy in the correlation estimates.
In contrast, when banks use a regulatory-type approach, the assumptions of such an approach create other challenges for both banks and regulators:
- Correlations need to be estimated, but there may be limited historical data on which to base the correlation estimates. Also, the assumptions used to generate the correlations may not be consistent with the underlying assumptions of the Basel II credit risk model.
- A bank's use of the Basel II risk weight model requires concentration risk to be accounted for by other measures and/or management methods. It will also require regulators to evaluate such measures/methods.
A key challenge to overcome is the use of misspecified or incorrectly calibrated correlations and the use of a normal distribution (which does not replicate the details of the distribution of asset returns). This may lead to large errors in measuring portfolio credit risk and economic capital.
Evaluating Counterparty Credit Risk
Such a task is a significant challenge because it requires: obtaining data from multiple systems, measuring exposures from an enormous number of transactions (including many that exhibit optionality) spanning a wide range of time periods, monitoring collateral and netting arrangements, and categorizing exposures across many counterparties. As a result, banks need to have well-developed processes and trained staff to deal with these challenges.
Market-risk-related challenges to counterparty exposure at default (EAD) estimation. Counterparty credit exposure requires simulation of market risk factors and the revaluation of counterparty positions under simulated risk factor shocks, similar to VaR models. Consider the following two challenges that occur when attempting to use VaR model technology to measure counterparty credit exposure:
- Market risk VaR models combine all positions in a portfolio into a single simulation; therefore, gains from one position may fully offset the losses in another position in the same simulation run. However, counterparty credit risk exposure measurement does not allow netting across counterparties. As a result, it is necessary to compute amounts at the netting set level (on each set of transactions that forms the basis of a legally enforceable netting agreement), which increases computational complexity.
- Market risk VaR calculations are usually performed for a single short-term holding period. However, counterparty credit exposure measurement must be performed for multiple holding periods into the future. Therefore, market risk factors need to be simulated over much longer time periods than in VaR calculations, and the potential exposure of the entire portfolio must be revalued at certain points in the future.
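A stylized sketch of the multiple-horizon point: expected exposure (EE) for a single long forward position, revalued on simulated risk-factor paths at several future dates. The lognormal dynamics, zero drift, and all parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths = 50_000
horizons = [0.25, 0.5, 1.0, 2.0, 5.0]    # years into the future

s0, sigma, strike = 100.0, 0.25, 100.0   # spot, volatility, forward strike

for t in horizons:
    # Simulate the risk factor at time t (lognormal, zero drift assumed).
    shocks = rng.standard_normal(n_paths)
    s_t = s0 * np.exp(-0.5 * sigma**2 * t + sigma * np.sqrt(t) * shocks)
    value = s_t - strike                  # value of the long forward at t
    ee = np.maximum(value, 0.0).mean()    # exposure is the positive part
    print(f"t = {t:4.2f}y   EE = {ee:6.2f}")
```

Even this one-trade example requires full revaluation at every future date; a real netting set with thousands of trades and collateral terms multiplies the computational burden accordingly.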
Credit-risk-related challenges to PD and LGD estimation.
- Some material transactions are performed with counterparties with which the bank does not have any other exposures. Therefore, the bank must calculate a probability of default (PD) and loss given default (LGD) for the counterparty and transaction.
- For hedge funds, the measurement challenge occurs when there is little information provided on underlying fund volatility, leverage, or the types of investment strategies employed.
- Even for counterparties with which the bank has other credit exposures, the bank still needs to calculate a specific LGD for the transaction.
Interaction between market risk and credit risk: wrong-way risk.
- Identifying and accounting for wrong-way risk (exposures that are negatively correlated with the counterparty's credit quality) is a significant challenge because it requires an understanding of the market risk factors to which the counterparty is exposed. That would be difficult to do in the case of a hedge fund, for example, which would be less transparent. It also requires a comparison of those factor sensitivities to the factor sensitivities of the bank's own exposures to the counterparty.
- The magnitude of wrong-way risk is difficult to quantify in an economic capital model since it requires a long time horizon at a high confidence level.
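A stylized illustration of wrong-way risk (the common-factor structure and all numbers are assumptions): when the same factor pushes the exposure up and the counterparty's credit quality down, expected loss rises relative to the independence case.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
n, pd_ = 500_000, 0.02

# A common market factor drives the bank's exposure to the counterparty.
z = rng.standard_normal(n)
exposure = np.maximum(100.0 * z, 0.0)   # stylized: exposure rises with z

def expected_loss(rho: float) -> float:
    # Counterparty credit driver, correlated with the factor by rho.
    credit = rho * z + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    default = credit > norm.ppf(1 - pd_)  # default in the adverse credit tail
    return (exposure * default).mean()

print(f"EL, independent exposure: {expected_loss(0.0):.3f}")
print(f"EL, wrong-way (rho=0.5):  {expected_loss(0.5):.3f}")
```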
Operational-risk-related challenges in managing counterparty credit risk.
- The challenge is that managing such risk requires specialized computer systems and people. Complicated processes, such as daily limit monitoring, marking-to-market, collateral management, and intraday liquidity and credit extensions, increase the risk of measurement errors.
- The quantification of operational risks is a significant challenge, especially when it pertains to new or rapidly growing businesses, new products or processes, intraday extensions of credit, and infrequently occurring but severe events.
Differences in risk profiles between margined and non-margined counterparties.
- The modeling difference between the two types of counterparties is primarily concerned with the future forecasting period: for margined counterparties the forecasting period is short, while for non-margined counterparties it is usually much longer.
- As a result of the difference in time periods, the aggregation of risk between these two types of counterparties is a challenge because the usual procedure is to use a single time period for all positions.
Aggregation challenges.
- In general, the challenges increase significantly when moving from measuring the credit risk of one counterparty to measuring the credit risk of the firm in general for economic capital purposes.
- When counterparties have both derivatives and securities financing activities, the problem is especially challenging because the systems in place may not be able to handle such aggregation.
- Further aggregation challenges exist when high-level credit risk measures must be aggregated with high-level market risk and operational risk measures in order to calculate economic capital.
- Breaking down counterparty credit risk into detailed component parts (as is often done with market risk) is another challenge. The sheer computational complexity and enormous amounts of data required would generally be cost prohibitive to perform on a frequent basis. The challenge remains for many banks due to outdated or ineffective computer systems.
Assessing Interest Rate Risk in the Banking Book
The computation challenge arises from the long holding period assumed for a banks balance sheet and the need to model indeterminate cash flows on both the asset and liability side due to the embedded optionality of many banking book items.
Optionality in the banking book.
- A major measurement challenge is found with non-linear risk from long-term fixed-income obligations with embedded options for the borrower to prepay and from embedded options in non-maturity deposits. In considering the asset side of the balance sheet, prepayment options (e.g., on mortgages, mortgage-backed securities, and consumer loans) are the main form of embedded options. The prepayment option results in uncertain cash flows and makes interest rate risk measurement a difficult task. In considering the liability side, there are two embedded options in non-maturity deposits: (1) the bank has an option to determine the interest rate paid to depositors and when to amend the rate, and (2) the depositor has the option to withdraw up to the entire balance with no penalty. The interaction between these two embedded options creates significant valuation and interest rate sensitivity measurement problems. Sufficiently modeling optionality exposures requires very complex stochastic-path evaluation techniques.
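A stylized sketch of why behavioral assumptions dominate banking book valuation: the economic value of a small book of fixed asset cash flows funded by non-maturity deposits, where the assumed deposit run-off profile stands in for the depositors' withdrawal option and the whole construction is illustrative only.

```python
import numpy as np

t = np.arange(1, 11)                 # years
asset_cf = np.full(10, 15.0)         # level fixed-rate asset cash flows
# Assumed behavioral run-off profile for 100 of non-maturity deposits.
deposit_cf = 100.0 * np.array([0.30, 0.20, 0.15, 0.10, 0.08,
                               0.06, 0.05, 0.03, 0.02, 0.01])

def eve(r: float) -> float:
    # Economic value of equity: PV of assets minus PV of deposit outflows.
    disc = (1 + r) ** -t
    return asset_cf @ disc - deposit_cf @ disc

base = 0.03
for shock_bp in (-200, 0, 200):
    r = base + shock_bp / 10_000
    print(f"shock {shock_bp:+4d}bp   EVE = {eve(r):7.2f}")
```

A different, equally defensible run-off assumption would shift every number above, which is the valuation problem the embedded deposit options create.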
Banks' pricing behavior.
- This factor contributes to the challenges in measuring the interest rate risk of banking book items. For example, it would require a model to analyze the persistence of the many different non-maturity banking products, as well as a model to determine bank interest rates that considers general market conditions, customer relationships, bank commercial power, and optimal commercial policies.
- Determining bank interest rates would require the pricing of credit risk. The price of credit risk applied to different banking products creates a challenge because it would require a pricing rule that links the credit spread to changes in macroeconomic conditions and interest rate changes. It also means that interest rate stress scenarios should consider the dependence between interest rate and credit risk factors.
The choice of stress scenarios.
The drawbacks of using simple interest rate shocks pose interest rate measurement challenges because the shocks:
- Are not based on probabilities and, therefore, are difficult to integrate into economic capital models based on VaR.
- Are not necessarily sensitive to the current rate or economic environment.
- Do not take into account changes in the slope or curvature of the yield curve.
- Do not allow for an integrated analysis of interest rate and credit risks on banking book items.
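As a stylized illustration of the slope/curvature point (the cash flows and curves below are assumed), a parallel +200bp shock and a steepener with the same long-end move produce different value changes on the same cash-flow ladder, so parallel-only shocks miss part of the risk.

```python
import numpy as np

t = np.arange(1, 11)              # years
cf = np.full(10, 10.0)            # stylized net banking book cash flows
base_curve = 0.02 + 0.001 * t     # assumed upward-sloping zero curve

def pv(curve: np.ndarray) -> float:
    return float(np.sum(cf / (1 + curve) ** t))

parallel = base_curve + 0.02                # +200bp at every maturity
steepener = base_curve + 0.02 * (t / 10)    # short end ~flat, long end +200bp

print(f"base PV:   {pv(base_curve):7.2f}")
print(f"parallel:  {pv(parallel):7.2f}")
print(f"steepener: {pv(steepener):7.2f}")
```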
BIS Recommendations for Supervisors