LO 46.2: Compare qualitative and quantitative processes to validate internal ratings, and describe elements of each process.
The goal of qualitative validation is to ensure that quantitative procedures are applied correctly and that ratings are used correctly. Qualitative and quantitative validation are complements, although greater emphasis is placed on qualitative validation given its holistic nature. In other words, neither a positive nor a negative conclusion from quantitative validation alone is sufficient to support an overall conclusion.
Elements of Qualitative Validation
Rating system design involves the selection of the correct model structure in the context of the market segments where the model will be used. There are five key areas regarding rating systems that are analyzed during the qualitative validation process: (1) obtaining probabilities of default, (2) completeness, (3) objectivity, (4) acceptance, and (5) consistency.
Obtaining probabilities of default (PD). Using statistical models created from actual historical data allows for the determination of the PD for separate rating classes through the calibration of results with the historical data. A direct PD calculation is possible with logistic regression, whereas other methods (e.g., linear discriminant analysis) require an adjustment. An ex post validation of the model's calibration can be done with data obtained during the use of the model. Such data allow continuous monitoring and validation of the default parameter to ensure PDs are consistent with true economic conditions.
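As a minimal sketch (not from the reading) of why a direct PD calculation is possible with logistic regression: the logistic link maps a linear score straight to a probability. The intercept, coefficients, and borrower features below are hypothetical.

```python
import math

def pd_from_logit(intercept, coefs, features):
    """Logistic link: a linear score z is mapped directly to a probability.
    The coefficients and features used below are hypothetical illustrations."""
    z = intercept + sum(b * x for b, x in zip(coefs, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical two-factor model: PD rises with leverage, falls with coverage.
pd = pd_from_logit(-3.0, [2.5, -0.8], [0.6, 1.2])
```

By contrast, a discriminant score has no probability interpretation on its own and must be mapped to PDs in a separate calibration step; calibration against historical data amounts to choosing the intercept and coefficients so that predicted PDs match observed default frequencies by rating class.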
2018 Kaplan, Inc.
Page 107
Topic 46 Cross Reference to GARP Assigned Reading – De Laurentis, Maino, and Molteni, Chapter 5
Completeness of rating system. All relevant information should be considered when determining creditworthiness and the resulting rating. Given that most default risk models include only a few borrower characteristics to determine creditworthiness, the validation process needs to provide assurance over the completeness of factors used for credit granting purposes. Statistical-based models allow for many borrower characteristics to be used, so the process of adding variables to the model to broaden coverage of appropriate risk factors must itself be validated.
Objectivity of rating system. Objectivity is achieved when the rating system can clearly define creditworthiness factors with the least amount of interpretation required. A judgment-based rating model would likely be fraught with biases (with low discriminatory power of ratings); therefore, it requires features such as strict (but reasonable) guidelines, proper staff training, and continual benchmarking. A statistical-based ratings model analyzes borrower characteristics based on actual data, so it is a much more objective model.
Acceptance of rating system. Acceptance by users (e.g., lenders and analysts) is crucial, so the validation process must provide assurance that the models are easily understood and shared by the users. In that regard, the output from the models should be fairly close to what is expected by the users. In addition, users should be educated as to the key aspects of models, especially statistical-based ones, so that they understand them and can make informed judgments regarding acceptance. Heuristic models (i.e., expert systems) are more easily accepted since they mirror past experience and the credit assessments tend to be consistent with cultural norms. In contrast, fuzzy logic models and artificial neural networks are less easily accepted given the high technical knowledge demands to understand them and the high complexity that creates challenges when interpreting the output.
Consistency of rating system. The validation process must ensure that the models make sense and are appropriate for their intended use. For example, statistical models may produce relationships between variables that are nonsensical, so the process of eliminating such variables increases consistency. The validation process would test such consistency. In contrast, heuristic models do not suffer from the same shortcoming since they are based on real-life experience. Statistical models used in isolation may still result in rating errors due to the mechanical nature of information processing. As a result, even though such models can remain the primary source of assigning ratings, they must be supplemented with a human element to promote the inclusion of all relevant and important information (usually qualitative and beyond the confines of the model) when making credit decisions.
Additionally, the validation process must address its own continuity, which includes periodic analysis of model performance and stability, analysis of model relationships, and comparisons of model outputs versus actual outcomes. In addition, the validation of statistical models must evaluate the completeness of documentation, with a focus on documenting the statistical foundations. Finally, validation must consider external benchmarks, such as how rating systems are used by competitors.
Elements of Quantitative Validation
Quantitative validation comprises the following areas: (1) sample representativeness, (2) discriminatory power, (3) dynamic properties, and (4) calibration.
Sample representativeness. Sample representativeness is demonstrated when a sample from a population is taken and its characteristics match those of the total population. A key problem is that some loan portfolios (in certain niche areas or industries) have very low default rates, which frequently results in too few defaulted entities in the sample. The validation process would use bootstrap procedures that randomly create samples through an iterative process that combines items from a default group and items from a non-default group. The rating model is reassessed using the new samples; after analyzing a group of statistically created models, should the end result be stable and common among the models, then the reliability of the result is satisfied. If not, instability risk would still persist and further in-depth analysis would be required. Using more homogeneous subsets in the form of cluster analysis, for example, could provide a more stable result. Alternatively, the model could focus on key factors within the subsets or consider alternative calibrations.
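A minimal sketch of the bootstrap idea, assuming simple numeric credit scores (the scores, group sizes, and summary statistic below are hypothetical): each iteration resamples the default and non-default groups with replacement and recomputes a summary statistic; a small spread across iterations suggests a stable result.

```python
import random
import statistics

def bootstrap_stability(defaults, non_defaults, n_samples=200, seed=7):
    """Resample each group with replacement and recompute a summary
    statistic (mean non-default score minus mean default score).
    A small spread across resamples suggests a stable result."""
    rng = random.Random(seed)
    gaps = []
    for _ in range(n_samples):
        d = [rng.choice(defaults) for _ in defaults]
        nd = [rng.choice(non_defaults) for _ in non_defaults]
        gaps.append(statistics.mean(nd) - statistics.mean(d))
    return statistics.mean(gaps), statistics.stdev(gaps)

# Hypothetical credit scores; defaulters score lower on average.
mean_gap, gap_sd = bootstrap_stability([410, 450, 430], [620, 640, 700, 660])
```

In practice the full rating model would be re-estimated on each bootstrap sample and the stability of its coefficients or rankings examined, rather than a single summary statistic.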
Discriminatory power. Discriminatory power is the relative ability of a rating model to accurately differentiate between defaulting and non-defaulting entities for a given forecast period. The forecast period is usually 12 months for PD estimation purposes but is longer for rating validation purposes. It also involves classifying borrowers by risk level on an overall basis or by specific attributes such as industry sector, size, or geographical location.
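Discriminatory power is commonly summarized by the area under the ROC curve (AUC), which equals the probability that a randomly chosen defaulter receives a riskier score than a randomly chosen non-defaulter. A minimal sketch, assuming hypothetical scores where higher means riskier:

```python
def auc(default_scores, nondefault_scores):
    """AUC: probability that a randomly chosen defaulter scores riskier
    (higher) than a randomly chosen non-defaulter; ties count one half.
    0.5 = no discriminatory power, 1.0 = perfect separation."""
    wins = ties = 0
    for d in default_scores:
        for nd in nondefault_scores:
            if d > nd:
                wins += 1
            elif d == nd:
                ties += 1
    return (wins + 0.5 * ties) / (len(default_scores) * len(nondefault_scores))
```

The same statistic can be computed within subportfolios (by sector, size, or region) to check discriminatory power by specific attributes, as the text describes.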
Dynamic properties. Dynamic properties include rating system stability and attributes of migration matrices. In fact, the use of migration matrices assists in determining ratings stability. Migration matrices are introduced after a minimum two-year operational period for the rating model. Ideal attributes of annual migration matrices include (1) ascending order of transition rates to default as rating classes deteriorate, (2) stable ratings over time (e.g., high values being on the diagonal and low values being off-diagonal), and (3) gradual rating movements as opposed to abrupt and large movements (e.g., migration rates of ± one class are higher than those of ± two classes). Should the validation process determine the migration matrices to be stable, then the conclusion is that ratings move slowly given their relative insensitivity to credit cycles and other temporary events.
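The first two ideal attributes above can be checked mechanically. A minimal sketch, assuming each row of a one-year migration matrix holds the transition rates from one rating class and the last column is that class's default rate (the toy matrices in use are hypothetical):

```python
def migration_checks(matrix):
    """matrix[i][j]: one-year transition rate from class i to class j,
    with the last column holding the default rate for class i.
    Returns (default rates rise as classes deteriorate,
             diagonal values dominate each row)."""
    default_col = [row[-1] for row in matrix]
    monotone_pd = all(a <= b for a, b in zip(default_col, default_col[1:]))
    diagonal_dominant = all(row[i] == max(row) for i, row in enumerate(matrix))
    return monotone_pd, diagonal_dominant
```

A check of gradual movements (attribute 3) would additionally compare off-diagonal rates by distance from the diagonal.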
Calibration. Calibration looks at the relative ability to estimate PD. Validating calibration occurs at a very early stage, and because of the limited usefulness of statistical tools in validating calibration, benchmarking could be used as a supplement to validate estimates of probability of default (PD), loss given default (LGD), and exposure at default (EAD). The benchmarking process compares a financial institution's ratings and estimates to those of other comparable sources; there is flexibility permitted in choosing the most suitable benchmark.
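One simple statistical tool for calibration, which also illustrates why such tools are of limited usefulness: a binomial check of observed defaults in a rating class against its estimated PD (normal approximation; the figures below are hypothetical). With few borrowers or low PDs the confidence band is wide, so only large miscalibrations get flagged, motivating benchmarking as a supplement.

```python
import math

def calibration_flag(pd_est, n, observed_defaults, z=1.96):
    """Flag a rating class if observed defaults fall outside the ~95%
    band implied by the estimated PD (normal approximation to the
    binomial). All inputs here are hypothetical, for illustration only."""
    mean = n * pd_est
    sd = math.sqrt(n * pd_est * (1 - pd_est))
    return abs(observed_defaults - mean) > z * sd

# Hypothetical class: PD estimate 2%, 1,000 borrowers. Observing 21
# defaults stays within the band; observing 45 would be flagged.
```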
Data Quality