LO 4.2: Explain how the mapping process captures general and specific risks.
So how many general risk factors (or primitive risk factors) are appropriate for a given portfolio? In some cases, one or two risk factors may be sufficient. Of course, the more risk factors chosen, the more time-consuming the modeling of a portfolio becomes. However, more risk factors could lead to a better approximation of the portfolio's risk exposure.
In our choice of general risk factors for use in VaR models, we should be aware that the types and number of risk factors we choose will have an effect on the size of residual or specific risks. Specific risks arise from unsystematic risk or asset-specific risks of various positions in the portfolio. The more precisely we define general market risk factors, the smaller the remaining specific risk.
For example, a portfolio of bonds may include bonds of different ratings, terms, and currencies. If we use duration as our only risk factor, there will be a significant amount of variance among the bonds that we refer to as specific risk. If we add a risk factor for credit risk, we could expect that the amount of specific risk would be smaller. If we add another risk factor for currencies, we would expect that the specific risk would be even smaller. Thus, the definition of specific risk is a function of general market risk.
As an example, suppose an equity portfolio consists of 5,000 stocks. Each stock has a market risk component and a firm-specific component. If each stock has a corresponding risk factor, we would need roughly 12.5 million covariance terms (i.e., [5,000 × (5,000 − 1)] / 2) to evaluate the correlation between each pair of risk factors. To simplify the number of parameters required, we need to understand that diversification will reduce firm-specific components and leave only market risk (i.e., systematic risk or beta risk). We can then map the market risk component of each stock onto a stock index (i.e., changes in equity prices) to greatly reduce the number of parameters needed.
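To see the scale of this simplification, the short sketch below (a Python illustration, not part of the assigned reading) counts the covariance terms needed for the full 5,000-stock example versus the parameters needed once each stock is mapped to a single index.

```python
# Parameter counts: full covariance approach vs. mapping to a single index.
# Figures correspond to the 5,000-stock example in the text.

def full_covariance_terms(n: int) -> int:
    """Number of distinct covariance pairs among n risk factors: n(n - 1) / 2."""
    return n * (n - 1) // 2

def single_index_parameters(n: int) -> int:
    """Mapping each stock to one index needs a beta and a residual variance
    per stock, plus the variance of the index itself."""
    return 2 * n + 1

n_stocks = 5000
print(full_covariance_terms(n_stocks))    # 12,497,500 -- roughly 12.5 million
print(single_index_parameters(n_stocks))  # 10,001
```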
Suppose you have a portfolio of N stocks and map each stock to the market index, which is defined as your primitive risk factor. The risk exposure, β_i, is computed by regressing the return of stock i on the market index return using the following equation:

R_i = α_i + β_i × R_M + ε_i
We can ignore the first term (i.e., the intercept) as it does not relate to risk, and we will also assume that the last term, which is related to specific risk, is not correlated with other stocks or the market portfolio. If the weight of each position in the portfolio is defined as w_i, then the portfolio return is defined as follows:
R_P = Σ_{i=1}^{N} w_i × R_i = Σ_{i=1}^{N} w_i × β_i × R_M + Σ_{i=1}^{N} w_i × ε_i
Aggregating all risk exposures, β_i, based on the market weights of each position determines the portfolio risk exposure as follows:

β_P = Σ_{i=1}^{N} w_i × β_i
We can then decompose the variance, V, of the portfolio return into two components, which consist of general market risk exposures and specific risk exposures, as follows:
V(R_P) = β_P² × V(R_M) + Σ_{i=1}^{N} w_i² × σ_{ε,i}²

General market risk: β_P² × V(R_M)

Specific risk: Σ_{i=1}^{N} w_i² × σ_{ε,i}²
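As a numerical illustration of this decomposition, the sketch below uses a hypothetical three-stock portfolio (the weights, betas, and residual volatilities are made up) to compute the portfolio beta, the general market risk component, and the specific risk component.

```python
import numpy as np

# Hypothetical three-stock portfolio mapped to a single market index.
w       = np.array([0.5, 0.3, 0.2])     # portfolio weights (sum to 1)
beta    = np.array([1.2, 0.8, 1.0])     # betas from regressing each stock on the index
sigma_e = np.array([0.20, 0.15, 0.25])  # residual (specific) volatilities
var_m   = 0.10 ** 2                     # variance of the market index return (10% vol)

beta_p        = np.dot(w, beta)               # portfolio beta: sum of w_i * beta_i
general_risk  = beta_p ** 2 * var_m           # beta_P^2 * V(R_M)
specific_risk = np.sum(w ** 2 * sigma_e ** 2) # sum of w_i^2 * sigma_e,i^2
total_var     = general_risk + specific_risk  # V(R_P)

print(beta_p, general_risk, specific_risk, total_var)
```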
Mapping Approaches for Fixed-Income Portfolios
LO 4.1: Explain the principles underlying VaR mapping, and describe the mapping process.
Value at risk (VaR) mapping involves replacing the current values of a portfolio with risk factor exposures. The first step in the process is to measure all current positions within a portfolio. These positions are then mapped to risk factors by means of factor exposures. Mapping involves finding common risk factors among positions in a given portfolio. If we have a portfolio consisting of a large number of positions, it may be difficult and time consuming to manage the risk of each individual position. Instead, we can evaluate the value of these positions by mapping them onto common risk factors (e.g., changes in interest rates or equity prices). By reducing the number of variables under consideration, we greatly simplify the risk management process.
Mapping can assist a risk manager in evaluating positions whose characteristics may change over time, such as fixed-income securities. Mapping can also provide an effective way to manage risk when there is not sufficient historical data for an investment, such as an initial public offering (IPO). In both cases, evaluating historical prices may not be relevant, so the manager must evaluate those risk factors that are likely to impact the portfolio's risk profile.
The principles for VaR risk mapping are summarized as follows:
•	VaR mapping aggregates risk exposure when it is impractical to consider each position separately. For example, there may be too many computations needed to measure the risk for each individual position.
•	VaR mapping simplifies risk exposures into primitive risk factors. For example, a portfolio may have thousands of positions linked to a specific exchange rate that could be summarized with one aggregate risk factor.
•	VaR risk measurements can differ from pricing methods where prices cannot be aggregated. The aggregation of a number of positions to one risk factor is acceptable for risk measurement purposes.
•	VaR mapping is useful for measuring changes over time, as with bonds or options. For example, as bonds mature, risk exposure can be mapped to spot yields that reflect the current position.
•	VaR mapping is useful when historical data is not available.
The first step in the VaR mapping process is to identify common risk factors for different investment positions. Figure 1 illustrates how the market values (MVs) of each position or investment are matched to the common risk factors identified by a risk manager.
Figure 1: Mapping Positions to Risk Factors
Figure 2 illustrates the next step, where the risk manager constructs risk factor distributions and inputs all data into the risk model. In this case, the market value of the first position, MV1, is allocated to the risk exposures in the first row: x11, x12, and x13. The other market value positions are linked to the risk exposures in a similar way. Summing the risk factors in each column then creates a vector consisting of three risk exposures.
Figure 2: Mapping Risk Exposures
Investment   Market Value   Risk Factor 1   Risk Factor 2   Risk Factor 3
1            MV1            x11             x12             x13
2            MV2            x21             x22             x23
3            MV3            x31             x32             x33
4            MV4            x41             x42             x43
5            MV5            x51             x52             x53
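The aggregation in Figure 2 amounts to summing each risk-factor column across positions. The sketch below uses hypothetical market values and allocation weights to show the mechanics.

```python
import numpy as np

# Market values of the five positions (hypothetical).
mv = np.array([100.0, 250.0, 80.0, 150.0, 120.0])

# x[i, j]: fraction of position i's market value allocated to risk factor j
# (hypothetical allocations; each row sums to 1).
x = np.array([
    [0.5, 0.3, 0.2],
    [0.0, 0.6, 0.4],
    [1.0, 0.0, 0.0],
    [0.2, 0.2, 0.6],
    [0.3, 0.5, 0.2],
])

# Dollar exposure of each position to each factor; the column sums give the
# vector of aggregate exposures to the three primitive risk factors.
dollar_exposures = mv[:, None] * x
aggregate_exposure = dollar_exposures.sum(axis=0)
print(aggregate_exposure)   # one aggregate exposure per risk factor
```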
LO 3.6: Describe the Basel rules for backtesting.
In the backtesting process, we attempt to strike a balance between the probability of a Type I error (rejecting a model that is correct) and a Type II error (failing to reject a model that is incorrect). Thus, the Basel Committee is primarily concerned with identifying whether exceptions are the result of bad luck (Type I error) or a faulty model (Type II error). The Basel Committee requires that market VaR be calculated at the 99% confidence level and backtested over the past year. At the 99% confidence level, we would expect to have 2.5 exceptions (250 × 0.01) each year, given approximately 250 trading days.
Regulators do not have access to every parameter input of the model and must construct rules that are applicable across institutions. To mitigate the risk that banks willingly commit a Type II error and use a faulty model, the Basel Committee designed the Basel penalty zones presented in Figure 5. The committee established a scale of the number of exceptions and corresponding increases in the capital multiplier, k. Thus, banks are penalized for exceeding four exceptions per year. The multiplier is normally three but can be increased to as much as four, based on the accuracy of the bank's VaR model. Increasing k significantly increases the amount of capital a bank must hold and lowers the bank's performance measures, like return on equity.
Notice in Figure 5 that there are three zones. The green zone is an acceptable number of exceptions. The yellow zone indicates a penalty zone where the capital multiplier is increased by 0.40 to 0.85. The red zone, where 10 or more exceptions are observed, indicates the strictest penalty with an increase of 1.00 to the capital multiplier.
Figure 5: Basel Penalty Zones

Zone     Number of Exceptions   Multiplier (k)
Green    0 to 4                 3.00
Yellow   5                      3.40
         6                      3.50
         7                      3.65
         8                      3.75
         9                      3.85
Red      10 or more             4.00
As shown in Figure 5, the yellow zone is quite broad (five to nine exceptions). The penalty (raising the multiplier from three to four) is automatically required for banks with 10 or more exceptions. However, the penalty for banks with five to nine exceptions is subject to the supervisor's discretion, based on what type of model error caused the exceptions. The Committee established four categories of causes for exceptions and guidance for supervisors for each category:
•	The basic integrity of the model is lacking. Exceptions occurred because of incorrect data or errors in the model programming. The penalty should apply.
•	Model accuracy needs improvement. The exceptions occurred because the model does not accurately describe risks. The penalty should apply.
•	Intraday trading activity. The exceptions occurred due to trading activity (VaR is based on static portfolios). The penalty should be considered.
•	Bad luck. The exceptions occurred because market conditions (volatility and correlations among financial instruments) significantly varied from an accepted norm. These exceptions should be expected to occur at least some of the time. No penalty guidance is provided.
Although the yellow zone is broad, an accurate model could produce five or more exceptions 10.8% of the time at the 99% confidence level. So even if a bank has an accurate model, it is subject to punishment 10.8% of the time (using the required 99% confidence level). However, regulators are more concerned about Type II errors. An inaccurate model whose VaR is effectively calculated at the 97% confidence level, rather than the required 99% confidence level, would nevertheless fail to be rejected 12.8% of the time. While 97% versus 99% seems to be only a slight difference, using a 99% confidence level would result in a 1.24 times greater level of required capital, providing a powerful economic incentive for banks to use a lower confidence level. Exceptions may be excluded if they are the result of bad luck that follows from an unexpected change in interest rates, exchange rates, a political event, or a natural disaster. Bank regulators keep the description of exceptions intentionally vague to allow adjustments during major market disruptions.
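The 10.8% and 12.8% figures quoted above can be reproduced with a simple binomial calculation. The sketch below assumes 250 trading days and the five-exception yellow-zone cutoff.

```python
from scipy.stats import binom

T = 250       # trading days in the backtest
cutoff = 5    # the yellow zone begins at five exceptions

# Type I error: an accurate 99% VaR model (true exception probability 1%)
# still lands in the yellow zone or worse.
type_1 = 1 - binom.cdf(cutoff - 1, T, 0.01)

# Type II error: an inaccurate model with true coverage of only 97%
# (3% exception probability) nevertheless stays below the cutoff.
type_2 = binom.cdf(cutoff - 1, T, 0.03)

print(f"Type I:  {type_1:.1%}")   # about 10.8%
print(f"Type II: {type_2:.1%}")   # about 12.8%
```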
Industry analysts have suggested lowering the required VaR confidence level to 95% and compensating by using a greater multiplier. This would result in a greater number of expected exceptions, and variances would be more statistically significant. The one-year exception rate at the 95% level would be 13, and with more than 17 exceptions, the probability of a Type I error would be 12.5% (close to the 10.8% previously noted), but the probability of a Type II error at this level would fall to 7.4% (compared to 12.8% at a 97.5% confidence level). Thus, inaccurate models would be rejected more frequently.
Another way to make variations in the number of exceptions more significant would be to use a longer backtesting period. This approach may not be as practical because the nature of markets, portfolios, and risk changes over time.
Key Concepts
LO 3.1
Backtesting is an important part of VaR model validation. It involves comparing the number of instances where the actual loss exceeds the VaR level (called exceptions) with the number predicted by the model at the chosen level of confidence. The Basel Committee requires banks to backtest internal VaR models and penalizes banks with excessive exceptions in the form of higher capital requirements.
LO 3.2
VaR models are based on static portfolios, while actual portfolio compositions are dynamic and incorporate fees, commissions, and other profit and loss factors. This effect is minimized by backtesting with a relatively short time horizon such as daily holding periods. The backtesting period constitutes a limited sample, and a challenge for risk managers is to find an acceptable level of exceptions.
LO 3.3
The failure rate of a model backtest is the number of exceptions divided by the number of observations: N / T. The Basel Committee requires backtesting at the 99% confidence level over the past year (approximately 250 business days). At this level, we would expect 250 × 0.01, or 2.5 exceptions.
LO 3.4
In using backtesting to accept or reject a VaR model, we must balance the probabilities of two types of errors: a Type I error is rejecting an accurate model, and a Type II error is failing to reject an inaccurate model. A log-likelihood ratio is used as a test for the validity of VaR models.
LO 3.5
Unconditional coverage testing does not evaluate the timing of exceptions, while conditional coverage tests review the number and timing of exceptions for independence. Current market or trading portfolio conditions may require changes to the VaR model.
LO 3.6
The Basel Committee penalizes financial institutions when the number of exceptions exceeds four. The corresponding penalties incrementally increase the capital requirement multiplier for the financial institution from three to four as the number of exceptions increases.
Concept Checkers
1.	In backtesting a value at risk (VaR) model that was constructed using a 97.5% confidence level over a 252-day period, how many exceptions are forecasted?
	A. 2.5.
	B. 3.7.
	C. 6.3.
	D. 12.6.

2.	Unconditional testing does not reflect the:
	A. size of the portfolio.
	B. number of exceptions.
	C. confidence level chosen.
	D. timing of the exceptions.

3.	Which of the following statements regarding verification of a VaR model by examining its failure rates is false?
	A. The frequency of exceptions should correspond to the confidence level used for the model.
	B. According to Kupiec (1995), we should reject the hypothesis that the model is correct if the log-likelihood ratio (LR) > 3.84.
	C. Backtesting VaR models with a higher probability of exceptions is difficult because the number of exceptions is not high enough to provide meaningful information.
	D. The range for the number of exceptions must strike a balance between the chances of rejecting an accurate model (a Type I error) and the chances of failing to reject an inaccurate model (a Type II error).

4.	The Basel Committee has established four categories of causes for exceptions. Which of the following does not apply to one of those categories?
	A. The sample is small.
	B. Intraday trading activity.
	C. Model accuracy needs improvement.
	D. The basic integrity of the model is lacking.

5.	A risk manager is backtesting a sample at the 95% confidence level to see if a VaR model needs to be recalibrated. He is using 252 daily returns for the sample and discovered 17 exceptions. What is the z-score for this sample when conducting VaR model verification?
	A. 0.62.
	B. 1.27.
	C. 1.64.
	D. 2.86.
Concept Checker Answers
1. C	(1 − 0.975) × 252 = 6.3 exceptions.
2. D Unconditional testing does not capture the timing of exceptions.
3. C Backtesting VaR models with a lower probability of exceptions is difficult because the number
of exceptions is not high enough to provide meaningful information.
4. A Causes include the following: bad luck, intraday trading activity, model accuracy needs
improvement, and the basic integrity of the model is lacking.
5. B The z-score is calculated using x = 17, p = 0.05, c = 0.95, and N = 252, as follows:
z = [17 − 0.05(252)] / √[0.05(0.95)(252)] = (17 − 12.6) / √11.97 = 4.4 / 3.4598 ≈ 1.27
The following is a review of the Market Risk Measurement and Management principles designed to address the learning objectives set forth by GARP. This topic is also covered in:
VaR Mapping

Topic 4

Exam Focus
This topic introduces the concept of mapping a portfolio and shows how the risk of a complex, multi-asset portfolio can be separated into risk factors. For the exam, be able to explain the mapping process for several types of portfolios, including fixed-income portfolios and portfolios consisting of linear and nonlinear derivatives. Also, be able to describe how the mapping process simplifies risk management for large portfolios. Finally, be able to distinguish between general and specific risk factors, and understand the various inputs required for calculating undiversified and diversified value at risk (VaR).
The Mapping Process
LO 3.3: Explain the need to consider conditional coverage in the backtesting framework.
So far in the examples and discussion, we have been backtesting models based on unconditional coverage, in which the timing of our exceptions was not considered. Conditioning considers the time variation of the data. In addition to having a predictable number of exceptions, we also anticipate the exceptions to be fairly equally distributed across time. A bunching of exceptions may indicate that market correlations have changed or that our trading positions have been altered. In the event that exceptions are not independent, the risk manager should incorporate models that consider time variation in risk.
We need some guide to determine if the bunching is random or caused by one of these changes. By including a measure of the independence of exceptions, we can measure the conditional coverage of the model. Christoffersen² proposed extending the unconditional coverage test statistic (LRuc) to allow for potential time variation of the data. He developed a statistic to determine the serial independence of deviations using a log-likelihood ratio test (LRind). The overall log-likelihood test statistic for conditional coverage (LRcc) is then computed as:
LRcc = LRuc + LRind
Each individual component is independently distributed as chi-squared, and the sum is also distributed as chi-squared. At the 95% confidence level, we would reject the model if LRcc > 5.99, and we would reject the independence term alone if LRind > 3.84.
2. P.F. Christoffersen, "Evaluating Interval Forecasts," International Economic Review, 39 (1998), 841–862.
If exceptions are determined to be serially dependent, then the VaR model needs to be revised to incorporate the correlations that are evident in the current conditions.
Professor's Note: For the exam, you do not need to know how to calculate the log-likelihood test statistic for conditional coverage. Therefore, the focus here is to understand that the test for conditional coverage should be performed when exceptions are clustered together.
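For readers who want the intuition behind the statistic anyway, the sketch below computes LRuc, LRind, and LRcc from a 0/1 series of daily exceptions. It assumes the series contains at least one exception and at least one non-exception; it is an illustration only and goes beyond what the exam requires.

```python
import numpy as np
from scipy.special import xlogy  # xlogy(a, b) = a * log(b), with 0 * log(0) = 0

def lr_cc(exceptions, p):
    """Conditional coverage statistic LRcc = LRuc + LRind for a 0/1 exception
    series and a VaR exception probability p (e.g., 0.01)."""
    exceptions = np.asarray(exceptions)
    T, N = len(exceptions), int(exceptions.sum())
    pi_hat = N / T

    # Kupiec unconditional coverage: observed failure rate N/T versus p.
    lr_uc = -2 * (xlogy(T - N, 1 - p) + xlogy(N, p)) \
            + 2 * (xlogy(T - N, 1 - pi_hat) + xlogy(N, pi_hat))

    # Christoffersen independence: do exceptions cluster from one day to the next?
    prev, curr = exceptions[:-1], exceptions[1:]
    n00 = np.sum((prev == 0) & (curr == 0))
    n01 = np.sum((prev == 0) & (curr == 1))
    n10 = np.sum((prev == 1) & (curr == 0))
    n11 = np.sum((prev == 1) & (curr == 1))
    pi01 = n01 / (n00 + n01)
    pi11 = n11 / (n10 + n11)
    pi = (n01 + n11) / (n00 + n01 + n10 + n11)

    lr_ind = -2 * (xlogy(n00 + n10, 1 - pi) + xlogy(n01 + n11, pi)) \
             + 2 * (xlogy(n00, 1 - pi01) + xlogy(n01, pi01)
                    + xlogy(n10, 1 - pi11) + xlogy(n11, pi11))

    return lr_uc, lr_ind, lr_uc + lr_ind

# Reject the model at the 95% level if LRcc > 5.99; reject independence if LRind > 3.84.
```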
Basel Committee Rules for Backtesting
LO 3.2: Explain the significant difficulties in backtesting a VaR model.
VaR models are based on static portfolios, while actual portfolio compositions are constantly changing as relative prices change and positions are bought and sold. Multiple risk factors affect actual profit and loss, but they are not included in the VaR model. For example, the actual returns are complicated by intraday changes as well as profit and loss factors that result from commissions, fees, interest income, and bid-ask spreads. Such effects can be minimized by backtesting with a relatively short time horizon such as a daily holding period.
Another difficulty with backtesting is that the sample backtested may not be representative of the true underlying risk. The backtesting period constitutes a limited sample, so we do not expect to find the predicted number of exceptions in every sample. At some level, we must reject the model, which suggests the need to find an acceptable level of exceptions.
Risk managers should track both actual and hypothetical returns that reflect VaR expectations. The VaR modeled returns are comparable to the hypothetical return that would be experienced had the portfolio remained constant for the holding period. Generally, we compare the VaR model returns to cleaned returns (i.e., actual returns adjusted for all items that are not marked to market, such as funding costs and fee income). Both actual and hypothetical returns should be backtested to verify the validity of the VaR model, and the VaR modeling methodology should be adjusted if hypothetical returns fail when backtesting.
Using Failure Rates in Model Verification
LO 3.1: Define backtesting and exceptions and explain the importance of backtesting VaR models.
Backtesting is the process of comparing losses predicted by a value at risk (VaR) model to those actually experienced over the testing period. It is an important tool for providing model validation, which is a process for determining whether a VaR model is adequate. The main goal of backtesting is to ensure that actual losses do not exceed expected losses at a given confidence level. Actual observations that fall outside a given confidence level are called exceptions. The proportion of exceptions should not exceed one minus the confidence level; for example, exceptions should occur less than 5% of the time if the confidence level is 95%.
Backtesting is extremely important for risk managers and regulators to validate whether VaR models are properly calibrated or accurate. If the level of exceptions is too high, models should be recalibrated and risk managers should re-evaluate assumptions, parameters, and/ or modeling processes. The Basel Committee allows banks to use internal VaR models to measure their risk levels, and backtesting provides a critical evaluation technique to test the adequacy of those internal VaR models. Bank regulators rely on backtesting to verify risk models and identify banks that are designing models that underestimate their risk. Banks with excessive exceptions (more than four exceptions in a sample size of 250) are penalized with higher capital requirements.
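A minimal sketch of the mechanics, assuming a daily P&L series and a matching series of (positive) VaR estimates; the z-score formula is the one used in concept checker 5 later in this topic.

```python
import numpy as np

def backtest_exceptions(pnl, var_estimates, p=0.01):
    """Count exceptions (days when the loss exceeds VaR) and compute the
    failure rate and a simple z-score for model verification.
    pnl: daily profit/loss; var_estimates: daily VaR reported as a positive loss."""
    pnl = np.asarray(pnl, dtype=float)
    var_estimates = np.asarray(var_estimates, dtype=float)
    T = len(pnl)
    exceptions = int(np.sum(pnl < -var_estimates))  # loss worse than the VaR level
    failure_rate = exceptions / T
    # z = (x - pT) / sqrt(p(1 - p)T); compare with the critical value (1.96 at 95%).
    z = (exceptions - p * T) / np.sqrt(p * (1 - p) * T)
    return exceptions, failure_rate, z

# With the numbers from concept checker 5 (17 exceptions, 252 days, p = 0.05),
# the z-score works out to (17 - 12.6) / sqrt(0.05 * 0.95 * 252), or about 1.27.
```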
LO 2.4: Identify advantages and disadvantages of non-parametric estimation methods.
Any risk manager should be prepared to use non-parametric estimation techniques. There are some clear advantages to non-parametric methods, but there is some danger as well. Therefore, it is incumbent on the risk manager to understand the advantages, the disadvantages, and the appropriateness of the methodology for analysis.
Advantages of non-parametric methods include the following:
•	Intuitive and often computationally simple (even on a spreadsheet).
•	Not hindered by parametric violations of skewness, fat tails, et cetera.
•	Avoids complex variance-covariance matrices and dimension problems.
•	Data is often readily available and does not require adjustments (e.g., financial statement adjustments).
•	Can accommodate more complex analysis (e.g., by incorporating age-weighting with volatility-weighting).
Disadvantages of non-parametric methods include the following:
•	Analysis depends critically on historical data.
•	Volatile data periods lead to VaR and ES estimates that are too high.
•	Quiet data periods lead to VaR and ES estimates that are too low.
•	Difficult to detect structural shifts/regime changes in the data.
•	Cannot accommodate plausible large impact events if they did not occur within the sample period.
•	Difficult to estimate losses significantly larger than the maximum loss within the data set (historical simulation cannot; volatility-weighting can, to some degree).
•	Need sufficient data, which may not be possible for new instruments or markets.
Key Concepts
LO 2.1
Bootstrapping involves resampling a subset of the original data set with replacement. Each draw (subsample) yields a coherent risk measure (VaR or ES). The average of the risk measures across all samples is then the best estimate.
LO 2.2
The discreteness of historical data reduces the number of possible VaR estimates since historical simulation cannot adjust for significance levels between ordered observations. However, non-parametric density estimation allows the original histogram to be modified to fill in these gaps. The process connects the midpoints between successive columns in the histogram. The area is then removed from the upper bar and placed in the lower bar, which creates a smooth function between the original data points.
LO 2.3
One important limitation to the historical simulation method is the equal-weight assumed for all data in the estimation period, and zero weight otherwise. This arbitrary methodology can be improved by using age-weighted simulation, volatility-weighted simulation, correlation-weighted simulation, and filtered historical simulation.
The age-weighted simulation method adjusts the most recent (distant) observations to be more (less) heavily weighted.
The volatility-weighting procedure incorporates the possibility that volatility may change over the estimation period, which may understate or overstate current risk by including stale data. The procedure replaces historic returns with volatility-adjusted returns; however, the actual procedure of estimating VaR is unchanged (i.e., only the data inputs change).
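The two adjustments just described can be sketched in a few lines. The version below assumes the age-weight formula w(i) = λ^(i−1)(1 − λ)/(1 − λ^n) with a hypothetical decay factor, and user-supplied volatility estimates for the volatility-weighting step.

```python
import numpy as np

def age_weights(n, lam=0.98):
    """BRW age weights w(i) = lam**(i - 1) * (1 - lam) / (1 - lam**n),
    where i = 1 is the most recent observation."""
    i = np.arange(1, n + 1)
    return lam ** (i - 1) * (1 - lam) / (1 - lam ** n)

def age_weighted_var(returns, confidence=0.95, lam=0.98):
    """Age-weighted historical simulation VaR. `returns` is assumed to be
    ordered most recent first; the VaR is the loss at which the cumulative
    age weight of the worst returns first reaches 1 - confidence."""
    returns = np.asarray(returns, dtype=float)
    w = age_weights(len(returns), lam)
    order = np.argsort(returns)            # most negative (worst) returns first
    cum_w = np.cumsum(w[order])
    idx = np.searchsorted(cum_w, 1 - confidence)
    return -returns[order][idx]

def volatility_adjusted_returns(returns, sigma_t, sigma_current):
    """Volatility-weighting: replace each historical return r_t with
    r_t * (sigma_current / sigma_t); the VaR calculation itself is unchanged."""
    return np.asarray(returns, dtype=float) * (sigma_current / np.asarray(sigma_t, dtype=float))
```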
Correlation-weighted simulation updates the variance-covariance matrix between the assets in the portfolio. The off-diagonal elements represent the covariance pairs while the diagonal elements update the individual variance estimates. Therefore, the correlation-weighted methodology is more general than the volatility-weighting procedure by incorporating both variance and covariance adjustments.
Filtered historical simulation is the most complex estimation method. The procedure relies on bootstrapping of standardized returns based on volatility forecasts. The volatility forecasts arise from GARCH or similar models and are able to capture conditional volatility, volatility clustering, and/or asymmetry.
LO 2.4
Advantages of non-parametric models include: data can be skewed or have fat tails; they are conceptually straightforward; there is readily available data; and they can accommodate more complex analysis. Disadvantages focus mainly on the use of historical data, which limits the VaR forecast to (approximately) the maximum loss in the data set; they are slow to respond to changing market conditions; they are affected by volatile (quiet) data periods; and they cannot accommodate plausible large losses if not in the data set.
Concept Checkers
1.	Johanna Roberto has collected a data set of 1,000 daily observations on equity returns. She is concerned about the appropriateness of using parametric techniques as the data appears skewed. Ultimately, she decides to use historical simulation and bootstrapping to estimate the 5% VaR. Which of the following steps is most likely to be part of the estimation procedure?
	A. Filter the data to remove the obvious outliers.
	B. Repeated sampling with replacement.
	C. Identify the tail region from reordering the original data.
	D. Apply a weighting procedure to reduce the impact of older data.

2.	All of the following approaches improve the traditional historical simulation approach for estimating VaR except the:
	A. volatility-weighted historical simulation.
	B. age-weighted historical simulation.
	C. market-weighted historical simulation.
	D. correlation-weighted historical simulation.

3.	Which of the following statements about age-weighting is most accurate?
	A. The age-weighting procedure incorporates estimates from GARCH models.
	B. If the decay factor in the model is close to 1, there is persistence within the data set.
	C. When using this approach, the weight assigned on day i is equal to: w(i) = λ^(i−1) × (1 − λ)/(1 − λ^i).
	D. The number of observations should at least exceed 250.

4.	Which of the following statements about volatility-weighting is true?
	A. Historic returns are adjusted, and the VaR calculation is more complicated.
	B. Historic returns are adjusted, and the VaR calculation procedure is the same.
	C. Current period returns are adjusted, and the VaR calculation is more complicated.
	D. Current period returns are adjusted, and the VaR calculation is the same.

5.	All of the following items are generally considered advantages of non-parametric estimation methods except:
	A. ability to accommodate skewed data.
	B. availability of data.
	C. use of historical data.
	D. little or no reliance on covariance matrices.
Concept Checker Answers
1. B Bootstrapping from historical simulation involves repeated sampling with replacement. The
5% VaR is recorded from each sample draw. The average of the VaRs from all the draws is the VaR estimate. The bootstrapping procedure does not involve filtering the data or weighting observations. Note that the VaR from the original data set is not used in the analysis.
2. C Market-weighted historical simulation is not discussed in this topic. Age-weighted historical simulation weights observations higher when they appear closer to the event date. Volatility- weighted historical simulation adjusts for changing volatility levels in the data. Correlation- weighted historical simulation incorporates anticipated changes in correlation between assets in the portfolio.
3. B
If the intensity parameter (i.e., decay factor) is close to 1, there will be persistence (i.e., slow decay) in the estimate. The expression for the weight on day i has i in the exponent when it should be n. While a large sample size is generally preferred, some of the data may no longer be representative in a large sample.
4. B The volatility-weighting method adjusts historic returns for current volatility. Specifically, return at time t is multiplied by (current volatility estimate / volatility estimate at time t). However, the actual procedure for calculating VaR using a historical simulation method is unchanged; it is only the inputted data that changes.
5. C The use of historical data in non-parametric analysis is a disadvantage, not an advantage. If the estimation period was quiet (volatile) then the estimated risk measures may understate (overstate) the current risk level. Generally, the largest VaR cannot exceed the largest loss in the historical period. On the other hand, the remaining choices are all considered advantages of non-parametric methods. For instance, the non-parametric nature of the analysis can accommodate skewed data, data points are readily available, and there is no requirement for estimates of covariance matrices.
The following is a review of the Market Risk Measurement and Management principles designed to address the learning objectives set forth by GARP. This topic is also covered in:
Backtesting VaR
Topic 3
Exam Focus
We use value at risk (VaR) methodologies to model risk. With VaR models, we seek to approximate the changes in value that our portfolio would experience in response to changes in the underlying risk factors. Model validation incorporates several methods that we use in order to determine how close our approximations are to actual changes in value. Through model validation, we are able to determine what confidence to place in our models, and we have the opportunity to improve their accuracy. For the exam, be prepared to validate approaches that measure how close VaR model approximations are to actual changes in value. Also, understand how the log-likelihood ratio (LR) is used to test the validity of VaR models for Type I and Type II errors for both unconditional and conditional tests. Finally, be familiar with Basel Committee outcomes that require banks to backtest their internal VaR models and penalize banks by enforcing higher capital requirements for excessive exceptions.
Backtesting VaR Models
LO 2.2: Describe historical simulation using non-parametric density estimation.
The clear advantage of the traditional historical simulation approach is its simplicity. One obvious drawback, however, is that the discreteness of the data does not allow for estimation of VaRs between data points. If there were 100 historical observations, then it is straightforward to estimate VaR at the 95% or the 96% confidence levels, and so on. However, this method is unable to incorporate a confidence level of 95.5%, for example. More generally, with n observations, the historical simulation method only allows for n different confidence levels.
One of the advantages of non-parametric density estimation is that the underlying distribution is free from restrictive assumptions. Therefore, the existing data points can be used to smooth the distribution and allow for VaR calculation at all confidence levels. The simplest adjustment is to connect the midpoints between successive histogram bars in the original data set's distribution. See Figure 1 for an illustration of this surrogate density function. Notice that by connecting the midpoints, the lower bar receives area from the upper bar, which loses an equal amount of area. In total, no area is lost, only displaced, so we still have a probability distribution function, just with a modified shape. The shaded area in Figure 1 represents a possible confidence interval, which can be utilized regardless of the size of the data set. The major improvement of this non-parametric approach over the traditional historical simulation approach is that VaR can now be calculated for a continuum of points in the data set.
Figure 1: Surrogate Density Function (histogram with midpoints connected; the shaded area marks the tail region)
Following this logic, one can see that the linear adjustment is a simple solution to the interval problem. A more complicated adjustment would involve connecting curves, rather than lines, between successive bars to better capture the characteristics of the data.
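In practical terms, the surrogate density lets VaR be read off at any confidence level by interpolating between the ordered observations. The sketch below uses simple linear interpolation between order statistics as a stand-in for the midpoint-connecting adjustment; the data are simulated for illustration.

```python
import numpy as np

def interpolated_var(pnl, confidence):
    """VaR at an arbitrary confidence level (e.g., 0.955) from historical P/L,
    using linear interpolation between the ordered observations (numpy's
    default quantile method), a simple analogue of the midpoint adjustment."""
    losses = -np.asarray(pnl, dtype=float)   # convert P/L to losses
    return np.quantile(losses, confidence)

# With 100 observations, plain historical simulation only offers discrete levels
# (95%, 96%, ...); interpolation makes a level such as 95.5% available as well.
pnl = np.random.default_rng(42).normal(0, 1, 100)
print(interpolated_var(pnl, 0.955))
```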
Weighted Historical Simulation Approaches
LO 2.1: Apply the bootstrap historical simulation approach to estimate coherent risk measures.
The bootstrap historical simulation is a simple and intuitive estimation procedure. In essence, the bootstrap technique draws a sample from the original data set, records the VaR from that particular sample and returns the data. This procedure is repeated over and over and records multiple sample VaRs. Since the data is always returned to the data set, this procedure is akin to sampling with replacement. The best VaR estimate from the full data set is the average of all sample VaRs.
This same procedure can be performed to estimate the expected shortfall (ES). Each drawn sample will calculate its own ES by slicing the tail region into n slices and averaging the VaRs at each of the n − 1 quantiles. This is exactly the same procedure described in the previous topic. Similarly, the best estimate of the expected shortfall for the original data set is the average of all of the sample expected shortfalls.
Empirical analysis demonstrates that the bootstrapping technique consistently provides more precise estimates of coherent risk measures than historical simulation on raw data alone.
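A minimal sketch of the bootstrap procedure, assuming a daily P/L series; here ES is computed as the average loss beyond each sample's VaR, a close analogue of averaging the tail-quantile VaRs described above.

```python
import numpy as np

def bootstrap_var_es(pnl, confidence=0.95, n_draws=1000, seed=0):
    """Bootstrap estimates of VaR and expected shortfall (ES): resample the P/L
    series with replacement, record each sample's VaR and ES, and average the
    results across all draws."""
    rng = np.random.default_rng(seed)
    losses = -np.asarray(pnl, dtype=float)   # work with losses (positive = loss)
    var_draws, es_draws = [], []
    for _ in range(n_draws):
        sample = rng.choice(losses, size=len(losses), replace=True)  # with replacement
        var_i = np.quantile(sample, confidence)
        es_i = sample[sample >= var_i].mean()   # average loss in the tail beyond VaR
        var_draws.append(var_i)
        es_draws.append(es_i)
    return np.mean(var_draws), np.mean(es_draws)
```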
Using Non-Parametric Estimation
LO 1.7: Interpret QQ plots to identify the characteristics of a distribution.
A natural question to ask in the course of our analysis is: from what distribution is the data drawn? The truth is that you will never really know since you only observe the realizations from random draws of an unknown distribution. However, visual inspection can be a very simple but powerful technique.
In particular, the quantile-quantile (QQ) plot is a straightforward way to visually examine whether empirical data fits the reference or hypothesized theoretical distribution (assume a standard normal distribution for this discussion). The process graphs the quantiles at regular confidence intervals for the empirical distribution against those of the theoretical distribution. As an example, if both the empirical and theoretical data are drawn from the same distribution, then the median (confidence level = 50%) of the empirical distribution would plot very close to zero, while the median of the theoretical distribution would plot exactly at zero.
Continuing in this fashion for other quantiles (40%, 60%, and so on) will map out a function. If the two distributions are very similar, the resulting QQ plot will be linear.
Let us compare a theoretical standard normal distribution relative to an empirical t-distribution (assume that the degrees of freedom for the t-distribution are sufficiently small and that there are noticeable differences from the normal distribution). We know that both distributions are symmetric, but the t-distribution will have fatter tails. Hence, the quantiles near zero (confidence level = 50%) will match up quite closely. As we move further into the tails, the quantiles of the t-distribution and the normal will diverge (see Figure 3). For example, at a confidence level of 95%, the critical z-value is 1.65, but for the t-distribution, it is closer to 1.68 (degrees of freedom of approximately 40). At 97.5% confidence, the difference is even larger, as the z-value is equal to 1.96 and the t-stat is equal to 2.02. More generally, if the middles of the QQ plot match up, but the tails do not, then the empirical distribution can be interpreted as symmetric with tails that differ from a normal distribution (either fatter or thinner).
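The quantile comparison in the preceding paragraph can be reproduced directly, as in the sketch below (standard normal versus a t-distribution with 40 degrees of freedom).

```python
from scipy.stats import norm, t

df = 40  # degrees of freedom for the t-distribution
for conf in [0.50, 0.75, 0.90, 0.95, 0.975]:
    z_q = norm.ppf(conf)    # standard normal quantile
    t_q = t.ppf(conf, df)   # t-distribution quantile (fatter tails)
    print(f"{conf:.3f}: normal = {z_q:.2f}, t = {t_q:.2f}")

# Near the median the quantiles match; further into the tail the t-quantiles
# exceed the normal quantiles (about 1.68 vs. 1.65 at 95%, 2.02 vs. 1.96 at 97.5%),
# which is exactly the curvature a QQ plot reveals.
```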
Figure 3: QQ Plot (empirical quantiles plotted against normal quantiles)
Key Concepts
LO 1.1
Historical simulation is the easiest method to estimate value at risk. All that is required is to reorder the profit/loss observations in increasing magnitude of losses and identify the breakpoint between the tail region and the remainder of the distribution.
LO 1.2
Parametric estimation of VaR requires a specific distribution of prices or equivalently, returns. This method can be used to calculate VaR with either a normal distribution or a lognormal distribution.
Under the assumption of a normal distribution, VaR (i.e., delta-normal VaR) is calculated as follows:
VaR = −μ_P/L + σ_P/L × z_α
Under the assumption of a lognormal distribution, lognormal VaR is calculated as follows:
VaR = P_(t−1) × (1 − e^(μ_R − σ_R × z_α))
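A quick sketch of both formulas; the inputs in the example are the same hypothetical figures used in concept checker 2 later in this topic.

```python
import math
from scipy.stats import norm

def delta_normal_var(mu_pl, sigma_pl, confidence=0.95):
    """Delta-normal VaR: VaR = -mu_P/L + sigma_P/L * z_alpha."""
    z = norm.ppf(confidence)
    return -mu_pl + sigma_pl * z

def lognormal_var(p_prev, mu_r, sigma_r, confidence=0.95):
    """Lognormal VaR: VaR = P_(t-1) * (1 - exp(mu_R - sigma_R * z_alpha))."""
    z = norm.ppf(confidence)
    return p_prev * (1 - math.exp(mu_r - sigma_r * z))

# Same inputs as concept checker 2 below (annual mean $20 million, sigma $10 million):
print(delta_normal_var(20, 10))   # about -3.55; roughly -3.5 when z is rounded to 1.65
```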
LO 1.3
VaR identifies the lower bound of the profit/loss distribution, but it does not estimate the expected tail loss. Expected shortfall overcomes this deficiency by dividing the tail region into equal probability mass slices and averaging their corresponding VaRs.
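Under the delta-normal assumptions of LO 1.2, this slicing approach can be illustrated numerically; the sketch below averages the VaRs at the cut points of 1,000 equal-probability tail slices.

```python
import numpy as np
from scipy.stats import norm

def normal_es_by_slicing(mu, sigma, confidence=0.95, n_slices=1000):
    """Approximate ES by slicing the tail beyond the VaR confidence level into
    n equal probability masses and averaging the VaRs at the n - 1 cut points."""
    k = np.arange(1, n_slices)
    tail_levels = confidence + (1 - confidence) * k / n_slices
    tail_vars = -mu + sigma * norm.ppf(tail_levels)   # delta-normal VaR at each level
    return tail_vars.mean()

print(normal_es_by_slicing(0, 1))   # about 2.06 for a standard normal at 95%
```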
LO 1.4
A more general risk measure than either VaR or ES is known as a coherent risk measure.
LO 1.5
A coherent risk measure is a weighted average of the quantiles of the loss distribution where the weights are user-specific based on individual risk aversion. A coherent risk measure will assign each quantile (not just tail quantiles) a weight. The average of the weighted VaRs is the estimated loss.
LO 1.6
Sound risk management requires the computation of the standard error of a coherent risk measure to estimate the precision of the risk measure itself. The simplest method creates a confidence interval around the quantile in question. To compute standard error, it is necessary to find the variance of the quantile, which will require estimates from the underlying distribution.
LO 1.7
The quantile-quantile (QQ) plot is a visual inspection of an empirical quantile relative to a hypothesized theoretical distribution. If the empirical distribution closely matches the theoretical distribution, the QQ plot would be linear.
Concept Checkers
1.	The VaR at a 95% confidence level is estimated to be 1.56 from a historical simulation of 1,000 observations. Which of the following statements is most likely true?
	A. The parametric assumption of normal returns is correct.
	B. The parametric assumption of lognormal returns is correct.
	C. The historical distribution has fatter tails than a normal distribution.
	D. The historical distribution has thinner tails than a normal distribution.

2.	Assume the profit/loss distribution for XYZ is normally distributed with an annual mean of $20 million and a standard deviation of $10 million. The 5% VaR is calculated and interpreted as which of the following statements?
	A. 5% probability of losses of at least $3.50 million.
	B. 5% probability of earnings of at least $3.50 million.
	C. 95% probability of losses of at least $3.50 million.
	D. 95% probability of earnings of at least $3.50 million.

3.	Which of the following statements about expected shortfall estimates and coherent risk measures are true?
	A. Expected shortfall and coherent risk measures estimate quantiles for the entire loss distribution.
	B. Expected shortfall and coherent risk measures estimate quantiles for the tail region.
	C. Expected shortfall estimates quantiles for the tail region and coherent risk measures estimate quantiles for the non-tail region only.
	D. Expected shortfall estimates quantiles for the entire distribution and coherent risk measures estimate quantiles for the tail region only.

4.	Which of the following statements most likely increases standard errors from coherent risk measures?
	A. Increasing sample size and increasing the left tail probability.
	B. Increasing sample size and decreasing the left tail probability.
	C. Decreasing sample size and increasing the left tail probability.
	D. Decreasing sample size and decreasing the left tail probability.

5.	The quantile-quantile plot is best used for what purpose?
	A. Testing an empirical distribution from a theoretical distribution.
	B. Testing a theoretical distribution from an empirical distribution.
	C. Identifying an empirical distribution from a theoretical distribution.
	D. Identifying a theoretical distribution from an empirical distribution.
Concept Checker Answers
1. D The historical simulation indicates that the 5% tail loss begins at 1.56, which is less than the
1.65 predicted by a standard normal distribution. Therefore, the historical simulation has thinner tails than a standard normal distribution.
2. D The value at risk calculation at 95% confidence is: -20 million + 1.65 x 10 million = -$3.50
million. Since the expected loss is negative and VaR is an implied negative amount, the interpretation is that XYZ will earn less than +$3.50 million with 5% probability, which is equivalent to XYZ earning at least $3.50 million with 95% probability.
3. B ES estimates quantiles for n - 1 equal probability masses in the tail region only. The coherent
risk measure estimates quantiles for the entire distribution including the tail region.
4. C Decreasing sample size clearly increases the standard error of the coherent risk measure given
that standard error is defined as:
se(q) = √[p(1 − p)/n] / f(q)

As the left tail probability, p, increases, the probability of tail events increases, which also increases the standard error. Mathematically, p(1 − p) increases as p increases until p = 0.5. Small values of p imply smaller standard errors.
5. C Once a sample is obtained, it can be compared to a reference distribution for possible
identification. The QQ plot maps the quantiles one to one. If the relationship is close to linear, then a match for the empirical distribution is found. The QQ plot is used for visual inspection only without any formal statistical test.
The following is a review of the Market Risk Measurement and Management principles designed to address the learning objectives set forth by GARP. This topic is also covered in:
Non-Parametric Approaches
Topic 2
Exam Focus
This topic introduces non-parametric estimation and bootstrapping (i.e., resampling). The key difference between these approaches and the parametric approaches discussed in the previous topic is that non-parametric approaches do not specify the underlying distribution; the analysis is data driven, not assumption driven. For example, historical simulation is limited by the discreteness of the data, but non-parametric analysis smoothes the data points to allow for any VaR confidence level between observations. For the exam, pay close attention to the description of the bootstrap historical simulation approach as well as the various weighted historical simulation approaches.
Non-parametric estimation does not make restrictive assumptions about the underlying distribution like parametric methods, which assume very specific forms such as normal or lognormal distributions. Non-parametric estimation lets the data drive the estimation. The flexibility of these methods makes them excellent candidates for VaR estimation, especially if tail events are sparse.
Bootstrap Historical Simulation Approach