LO 3.3: Explain the need to consider conditional coverage in the backtesting framework.
So far in the examples and discussion, we have been backtesting models based on unconditional coverage, in which the timing of our exceptions was not considered. Conditioning considers the time variation of the data. In addition to having a predictable number of exceptions, we also anticipate the exceptions to be fairly equally distributed across time. A bunching of exceptions may indicate that market correlations have changed or that our trading positions have been altered. In the event that exceptions are not independent, the risk manager should incorporate models that consider time variation in risk.
We need some guide to determine whether the bunching is random or caused by one of these changes. By including a measure of the independence of exceptions, we can measure the conditional coverage of the model. Christoffersen² proposed extending the unconditional coverage test statistic (LRuc) to allow for potential time variation of the data. He developed a statistic to test the serial independence of deviations using a log-likelihood ratio test (LRind). The overall log-likelihood test statistic for conditional coverage (LRcc) is then computed as:
LRcc = LRuc + LRind
Each individual component is independently distributed as chi-squared, and the sum is also distributed as chi-squared. At the 95% confidence level, we would reject the model if LRcc > 5.99, and we would reject the independence term alone if LRind > 3.84. If exceptions
2. P.F. Christoffersen, "Evaluating Interval Forecasts," International Economic Review, 39 (1998), 841–862.
are determined to be serially dependent, then the VaR model needs to be revised to incorporate the correlations that are evident in the current conditions.
Professor's Note: For the exam, you do not need to know how to calculate the log-likelihood test statistic for conditional coverage. Therefore, the focus here is to understand that the test for conditional coverage should be performed when exceptions are clustered together.
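Although the calculation is not required for the exam, the short Python sketch below illustrates the mechanics. It is an illustration only: the function name and the exception series are hypothetical, and it uses the standard Kupiec unconditional coverage statistic together with a first-order Christoffersen-type independence statistic, applying the critical values quoted above.

    import math

    def _xlogy(x, y):
        # x * ln(y), treating 0 * ln(0) as 0
        return 0.0 if x == 0 else x * math.log(y)

    def conditional_coverage_test(exceptions, p):
        # exceptions: list of 0/1 flags (1 = loss exceeded VaR that day)
        # p: expected exception rate, i.e., 1 - VaR confidence level
        T, x = len(exceptions), sum(exceptions)

        # Unconditional coverage (Kupiec-type) log-likelihood ratio
        pi = x / T
        lr_uc = -2 * (_xlogy(T - x, 1 - p) + _xlogy(x, p)
                      - _xlogy(T - x, 1 - pi) - _xlogy(x, pi))

        # Independence statistic from first-order transition counts
        n00 = n01 = n10 = n11 = 0
        for prev, curr in zip(exceptions[:-1], exceptions[1:]):
            n00 += (prev == 0 and curr == 0)
            n01 += (prev == 0 and curr == 1)
            n10 += (prev == 1 and curr == 0)
            n11 += (prev == 1 and curr == 1)
        pi01 = n01 / (n00 + n01)
        pi11 = n11 / (n10 + n11)
        pi_all = (n01 + n11) / (n00 + n01 + n10 + n11)
        lr_ind = -2 * (_xlogy(n00 + n10, 1 - pi_all) + _xlogy(n01 + n11, pi_all)
                       - _xlogy(n00, 1 - pi01) - _xlogy(n01, pi01)
                       - _xlogy(n10, 1 - pi11) - _xlogy(n11, pi11))

        return lr_uc, lr_ind, lr_uc + lr_ind

    # Hypothetical 250-day backtest of a 99% VaR: five exceptions, three of them consecutive
    flags = [0] * 250
    for day in (40, 41, 42, 120, 200):
        flags[day] = 1

    lr_uc, lr_ind, lr_cc = conditional_coverage_test(flags, p=0.01)
    print(f"LRuc = {lr_uc:.2f}, LRind = {lr_ind:.2f}, LRcc = {lr_cc:.2f}")
    print("Reject conditional coverage (LRcc > 5.99)?", lr_cc > 5.99)
    print("Reject independence alone (LRind > 3.84)?", lr_ind > 3.84)

In this hypothetical series the total number of exceptions is unremarkable, but the bunching of three consecutive exceptions causes the independence (and therefore the conditional coverage) test to reject.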
Basel Committee Rules for Backtesting
LO 3.2: Explain the significant difficulties in backtesting a VaR model.
VaR models are based on static portfolios, while actual portfolio compositions are constantly changing as relative prices change and positions are bought and sold. Multiple risk factors affect actual profit and loss, but they are not included in the VaR model. For example, the actual returns are complicated by intraday changes as well as profit and loss factors that result from commissions, fees, interest income, and bid-ask spreads. Such effects can be minimized by backtesting with a relatively short time horizon such as a daily holding period.
Another difficulty with backtesting is that the sample backtested may not be representative of the true underlying risk. The backtesting period constitutes a limited sample, so we do not expect to find the predicted number of exceptions in every sample. At some level, we must reject the model, which suggests the need to find an acceptable level of exceptions.
Risk managers should track both actual and hypothetical returns that reflect VaR expectations. The VaR modeled returns are comparable to the hypothetical return that would be experienced had the portfolio remained constant for the holding period. Generally, we compare the VaR model returns to cleaned returns (i.e., actual returns adjusted for items that are not marked to market, such as funding costs and fee income). Both actual and hypothetical returns should be backtested to verify the validity of the VaR model, and the VaR modeling methodology should be adjusted if hypothetical returns fail when backtesting.
Using Failure Rates in Model Verification
LO 3.1: Define backtesting and exceptions and explain the importance of backtesting VaR models.
Backtesting is the process of comparing losses predicted by a value at risk (VaR) model to those actually experienced over the testing period. It is an important tool for model validation, which is the process of determining whether a VaR model is adequate. The main goal of backtesting is to ensure that actual losses do not exceed expected losses at a given confidence level. Actual observations that fall outside a given confidence level are called exceptions. The proportion of exceptions should not exceed one minus the confidence level; for example, exceptions should occur less than 5% of the time if the confidence level is 95%.
Backtesting is extremely important for risk managers and regulators to validate whether VaR models are properly calibrated or accurate. If the level of exceptions is too high, models should be recalibrated and risk managers should re-evaluate assumptions, parameters, and/or modeling processes. The Basel Committee allows banks to use internal VaR models to measure their risk levels, and backtesting provides a critical evaluation technique to test the adequacy of those internal VaR models. Bank regulators rely on backtesting to verify risk models and identify banks that are designing models that underestimate their risk. Banks with excessive exceptions (more than four exceptions in a sample size of 250) are penalized with higher capital requirements.
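As a simple illustration of how exceptions are counted (the P&L series and the VaR figure below are hypothetical), a backtest compares each day's loss with the reported VaR and tallies the breaches:

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical 250-day backtest: simulated daily P&L and a constant 99% daily VaR
    daily_pnl = rng.normal(loc=0.0, scale=1.0, size=250)
    var_99 = 2.33                       # VaR reported as a positive loss amount

    exceptions = daily_pnl < -var_99    # a day is an exception when the loss exceeds VaR
    num_exceptions = int(exceptions.sum())

    print(f"Exceptions in 250 days: {num_exceptions} (about {0.01 * 250:.1f} expected)")
    print("More than four exceptions (Basel concern)?", num_exceptions > 4)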
LO 2.4: Identify advantages and disadvantages of non-parametric estimation methods.
Any risk manager should be prepared to use non-parametric estimation techniques. There are some clear advantages to non-parametric methods, but there is some danger as well. Therefore, it is incumbent on the risk manager to understand the advantages, the disadvantages, and the appropriateness of the methodology for the analysis.
Advantages of non-parametric methods include the following:
• Intuitive and often computationally simple (even on a spreadsheet).
• Not hindered by parametric violations of skewness, fat tails, et cetera.
• Avoids complex variance-covariance matrices and dimension problems.
• Data is often readily available and does not require adjustments (e.g., financial statement adjustments).
• Can accommodate more complex analysis (e.g., by incorporating age-weighting with volatility-weighting).
Disadvantages of non-parametric methods include the following:
• Analysis depends critically on historical data.
• Volatile data periods lead to VaR and ES estimates that are too high.
• Quiet data periods lead to VaR and ES estimates that are too low.
• Difficult to detect structural shifts/regime changes in the data.
• Cannot accommodate plausible large impact events if they did not occur within the sample period.
• Difficult to estimate losses significantly larger than the maximum loss within the data set (historical simulation cannot; volatility-weighting can, to some degree).
• Need sufficient data, which may not be possible for new instruments or markets.
Key Concepts
LO 2.1
Bootstrapping involves resampling a subset of the original data set with replacement. Each draw (subsample) yields a coherent risk measure (VaR or ES). The average of the risk measures across all samples is then the best estimate.
LO 2.2
The discreteness of historical data reduces the number of possible VaR estimates since historical simulation cannot adjust for significance levels between ordered observations. However, non-parametric density estimation allows the original histogram to be modified to fill in these gaps. The process connects the midpoints between successive columns in the histogram. The area is then removed from the upper bar and placed in the lower bar, which creates a smooth function between the original data points.
LO 2.3
One important limitation to the historical simulation method is the equal-weight assumed for all data in the estimation period, and zero weight otherwise. This arbitrary methodology can be improved by using age-weighted simulation, volatility-weighted simulation, correlation-weighted simulation, and filtered historical simulation.
The age-weighted simulation method adjusts the most recent (distant) observations to be more (less) heavily weighted.
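The weights themselves are easy to reproduce. In the sketch below (the decay factor and sample size are hypothetical), the standard age-weighting formula w(i) = λ^(i−1) × (1 − λ) / (1 − λ^n) gives the most recent observation the largest weight, and the weights sum to one:

    # Age-weighted weights: w(i) = lambda^(i-1) * (1 - lambda) / (1 - lambda^n)
    lam, n = 0.98, 250   # hypothetical decay factor and number of observations
    weights = [lam ** (i - 1) * (1 - lam) / (1 - lam ** n) for i in range(1, n + 1)]

    print(round(weights[0], 5))    # weight on the most recent observation (largest)
    print(round(weights[-1], 7))   # weight on the oldest observation (smallest)
    print(round(sum(weights), 6))  # weights sum to 1.0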
The volatility-weighting procedure incorporates the possibility that volatility may change over the estimation period, which may understate or overstate current risk by including stale data. The procedure replaces historic returns with volatility-adjusted returns; however, the actual procedure of estimating VaR is unchanged (i.e., only the data inputs change).
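A sketch with hypothetical inputs: each historic return is multiplied by the ratio of the current volatility forecast to the volatility forecast for the day the return was observed, and the VaR calculation itself proceeds exactly as in ordinary historical simulation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical historic returns and per-day volatility forecasts (e.g., from EWMA/GARCH)
    returns = rng.normal(0.0, 0.01, size=500)
    sigma_t = np.linspace(0.015, 0.008, num=500)   # volatility forecast for each historic day
    sigma_now = 0.012                              # current volatility forecast

    # Volatility-adjusted returns: r*_t = r_t * (sigma_now / sigma_t)
    adjusted = returns * (sigma_now / sigma_t)

    # The VaR procedure is unchanged: take the empirical 5% quantile of the adjusted series
    var_95 = -np.quantile(adjusted, 0.05)
    print(f"95% VaR from volatility-adjusted returns: {var_95:.4f}")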
Correlation-weighted simulation updates the variance-covariance matrix between the assets in the portfolio. The off-diagonal elements represent the covariance pairs while the diagonal elements update the individual variance estimates. Therefore, the correlation-weighted methodology is more general than the volatility-weighting procedure by incorporating both variance and covariance adjustments.
Filtered historical simulation is the most complex estimation method. The procedure relies on bootstrapping of standardized returns based on volatility forecasts. The volatility forecasts arise from GARCH or similar models and are able to capture conditional volatility, volatility clustering, and/or asymmetry.
LO 2.4
Advantages of non-parametric models include: data can be skewed or have fat tails; they are conceptually straightforward; there is readily available data; and they can accommodate more complex analysis. Disadvantages focus mainly on the use of historical data, which limits the VaR forecast to (approximately) the maximum loss in the data set; they are slow to respond to changing market conditions; they are affected by volatile (quiet) data periods; and they cannot accommodate plausible large losses if not in the data set.
Concept Checkers
1. Johanna Roberto has collected a data set of 1,000 daily observations on equity returns. She is concerned about the appropriateness of using parametric techniques as the data appears skewed. Ultimately, she decides to use historical simulation and bootstrapping to estimate the 5% VaR. Which of the following steps is most likely to be part of the estimation procedure?
A. Filter the data to remove the obvious outliers.
B. Repeated sampling with replacement.
C. Identify the tail region from reordering the original data.
D. Apply a weighting procedure to reduce the impact of older data.
2. All of the following approaches improve the traditional historical simulation approach for estimating VaR except the:
A. volatility-weighted historical simulation.
B. age-weighted historical simulation.
C. market-weighted historical simulation.
D. correlation-weighted historical simulation.
3. Which of the following statements about age-weighting is most accurate?
A. The age-weighting procedure incorporates estimates from GARCH models.
B. If the decay factor in the model is close to 1, there is persistence within the data set.
C. When using this approach, the weight assigned on day i is equal to: w(i) = λ^(i−1) × (1 − λ) / (1 − λ^i).
D. The number of observations should at least exceed 250.
4. Which of the following statements about volatility-weighting is true?
A. Historic returns are adjusted, and the VaR calculation is more complicated.
B. Historic returns are adjusted, and the VaR calculation procedure is the same.
C. Current period returns are adjusted, and the VaR calculation is more complicated.
D. Current period returns are adjusted, and the VaR calculation is the same.
5. All of the following items are generally considered advantages of non-parametric estimation methods except:
A. ability to accommodate skewed data.
B. availability of data.
C. use of historical data.
D. little or no reliance on covariance matrices.
Concept Checker Answers
1. B Bootstrapping from historical simulation involves repeated sampling with replacement. The
5% VaR is recorded from each sample draw. The average of the VaRs from all the draws is the VaR estimate. The bootstrapping procedure does not involve filtering the data or weighting observations. Note that the VaR from the original data set is not used in the analysis.
2. C Market-weighted historical simulation is not discussed in this topic. Age-weighted historical simulation weights observations higher when they appear closer to the event date. Volatility-weighted historical simulation adjusts for changing volatility levels in the data. Correlation-weighted historical simulation incorporates anticipated changes in correlation between assets in the portfolio.
3. B
If the intensity parameter (i.e., decay factor) is close to 1, there will be persistence (i.e., slow decay) in the estimate. The expression for the weight on day i has i in the exponent when it should be n. While a large sample size is generally preferred, some of the data may no longer be representative in a large sample.
4. B The volatility-weighting method adjusts historic returns for current volatility. Specifically, return at time t is multiplied by (current volatility estimate / volatility estimate at time t). However, the actual procedure for calculating VaR using a historical simulation method is unchanged; it is only the inputted data that changes.
5. C The use of historical data in non-parametric analysis is a disadvantage, not an advantage. If the estimation period was quiet (volatile) then the estimated risk measures may understate (overstate) the current risk level. Generally, the largest VaR cannot exceed the largest loss in the historical period. On the other hand, the remaining choices are all considered advantages of non-parametric methods. For instance, the non-parametric nature of the analysis can accommodate skewed data, data points are readily available, and there is no requirement for estimates of covariance matrices.
The following is a review of the Market Risk Measurement and Management principles designed to address the learning objectives set forth by GARP. This topic is also covered in:
Backtesting VaR
Topic 3
Exam Focus
We use value at risk (VaR) methodologies to model risk. With VaR models, we seek to approximate the changes in value that our portfolio would experience in response to changes in the underlying risk factors. Model validation incorporates several methods that we use in order to determine how close our approximations are to actual changes in value. Through model validation, we are able to determine what confidence to place in our models, and we have the opportunity to improve their accuracy. For the exam, be prepared to validate approaches that measure how close VaR model approximations are to actual changes in value. Also, understand how the log-likelihood ratio (LR) is used to test the validity of VaR models for Type I and Type II errors for both unconditional and conditional tests. Finally, be familiar with Basel Committee outcomes that require banks to backtest their internal VaR models and penalize banks by enforcing higher capital requirements for excessive exceptions.
Backtesting VaR Models
LO 2.2: Describe historical simulation using non-parametric density estimation.
The clear advantage of the traditional historical simulation approach is its simplicity. One obvious drawback, however, is that the discreteness of the data does not allow for estimation of VaRs between data points. If there were 100 historical observations, then it is straightforward to estimate VaR at the 95% or the 96% confidence levels, and so on. However, this method is unable to incorporate a confidence level of 95.5%, for example. More generally, with n observations, the historical simulation method only allows for n different confidence levels.
One of the advantages of non-parametric density estimation is that the underlying distribution is free from restrictive assumptions. Therefore, the existing data points can be used to smooth the data points to allow for VaR calculation at all confidence levels. The simplest adjustment is to connect the midpoints between successive histogram bars in the original data set's distribution. See Figure 1 for an illustration of this surrogate density function. Notice that by connecting the midpoints, the lower bar receives area from the upper bar, which loses an equal amount of area. In total, no area is lost, only displaced, so we still have a probability distribution function, just with a modified shape. The shaded area in Figure 1 represents a possible confidence interval, which can be utilized regardless of the size of the data set. The major improvement of this non-parametric approach over the traditional historical simulation approach is that VaR can now be calculated for a continuum of points in the data set.
Figure 1: Surrogate Density Function (figure labels: Distribution, Tail)
Following this logic, one can see that the linear adjustment is a simple solution to the interval problem. A more complicated adjustment would involve connecting curves, rather than lines, between successive bars to better capture the characteristics of the data.
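In practice the same idea can be implemented by interpolating between the ordered observations. The sketch below (hypothetical data) contrasts a plain historical-simulation quantile with a linearly interpolated quantile at a confidence level that falls between two order statistics; linear interpolation between order statistics is in the same spirit as the midpoint-connection described above, not an exact replication of it.

    import numpy as np

    rng = np.random.default_rng(1)
    losses = np.sort(rng.normal(0.0, 1.0, size=100))   # 100 hypothetical loss observations, ordered

    # Plain historical simulation: only the n observed order statistics are available,
    # so a 95.5% confidence level has no exact counterpart in the data.
    hs_var_95 = losses[94]          # the 95th ordered loss (one common convention)

    # Smoothed estimate: np.quantile interpolates linearly between order statistics
    # by default, so any confidence level between observations can be used.
    var_955 = np.quantile(losses, 0.955)

    print(f"Discrete 95% VaR:        {hs_var_95:.4f}")
    print(f"Interpolated 95.5% VaR:  {var_955:.4f}")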
Weighted Historical Simulation Approaches
LO 2.1: Apply the bootstrap historical simulation approach to estimate coherent risk measures.
The bootstrap historical simulation is a simple and intuitive estimation procedure. In essence, the bootstrap technique draws a sample from the original data set, records the VaR from that particular sample and returns the data. This procedure is repeated over and over and records multiple sample VaRs. Since the data is always returned to the data set, this procedure is akin to sampling with replacement. The best VaR estimate from the full data set is the average of all sample VaRs.
This same procedure can be performed to estimate the expected shortfall (ES). Each drawn sample will calculate its own ES by slicing the tail region into n slices and averaging the VaRs at each of the n − 1 quantiles. This is exactly the same procedure described in the previous topic. Similarly, the best estimate of the expected shortfall for the original data set is the average of all of the sample expected shortfalls.
Empirical analysis demonstrates that the bootstrapping technique consistently provides more precise estimates of coherent risk measures than historical simulation on raw data alone.
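A minimal sketch of the procedure with hypothetical data: draw many resamples with replacement, record each resample's VaR and ES, and average the results. (The ES of each resample is computed here as the average loss beyond VaR, which approximates the slice-and-average method described in the previous topic.)

    import numpy as np

    rng = np.random.default_rng(7)
    losses = rng.standard_t(df=5, size=1000)     # hypothetical daily loss data (fat-tailed)

    n_draws, alpha = 1000, 0.95
    var_samples, es_samples = [], []

    for _ in range(n_draws):
        # Resample with replacement ("return the data" after each draw)
        resample = rng.choice(losses, size=losses.size, replace=True)
        var_i = np.quantile(resample, alpha)        # 95% VaR of this resample
        es_i = resample[resample >= var_i].mean()   # average loss beyond the VaR
        var_samples.append(var_i)
        es_samples.append(es_i)

    # Best estimates are the averages across all bootstrap samples
    print(f"Bootstrap VaR estimate: {np.mean(var_samples):.4f}")
    print(f"Bootstrap ES estimate:  {np.mean(es_samples):.4f}")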
Using Non-Parametric Estimation
LO 1.7: Interpret QQ plots to identify the characteristics of a distribution.
A natural question to ask in the course of our analysis is: from what distribution is the data drawn? The truth is that you will never really know, since you only observe the realizations from random draws of an unknown distribution. However, visual inspection can be a very simple but powerful technique.
In particular, the quantile-quantile (QQ) plot is a straightforward way to visually examine if empirical data fits the reference or hypothesized theoretical distribution (assume a standard normal distribution for this discussion). The process graphs the quantiles at regular confidence intervals for the empirical distribution against the theoretical distribution. As an example, if both the empirical and theoretical data are drawn from the same distribution, then the median (confidence level = 50%) of the empirical distribution would plot very close to zero, while the median of the theoretical distribution would plot exactly at zero.
Continuing in this fashion for other quantiles (40%, 60%, and so on) will map out a function. If the two distributions are very similar, the resulting QQ plot will be linear.
Let us compare a theoretical standard normal distribution relative to an empirical t-distribution (assume that the degrees of freedom for the t-distribution are sufficiently small and that there are noticeable differences from the normal distribution). We know that both distributions are symmetric, but the t-distribution will have fatter tails. Hence, the quantiles near zero (confidence level = 50%) will match up quite closely. As we move further into the tails, the quantiles of the t-distribution and the normal will diverge (see Figure 3). For example, at a confidence level of 95%, the critical z-value is 1.65, but for the t-distribution, it is closer to 1.68 (degrees of freedom of approximately 40). At 97.5% confidence, the difference is even larger, as the z-value is equal to 1.96 and the t-stat is equal to 2.02. More generally, if the middles of the QQ plot match up, but the tails do not, then the empirical distribution can be interpreted as symmetric with tails that differ from a normal distribution (either fatter or thinner).
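One quick way to produce such a plot (hypothetical data; scipy's probplot is just one convenient option) is:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy import stats

    rng = np.random.default_rng(3)
    empirical = rng.standard_t(df=5, size=1000)   # hypothetical fat-tailed sample

    # Plot empirical quantiles against standard normal quantiles; fat tails show up
    # as points bending away from the straight line at both ends.
    stats.probplot(empirical, dist="norm", plot=plt)
    plt.xlabel("Normal Quantiles")
    plt.show()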
Figure 3: QQ Plot (empirical quantiles plotted against Normal Quantiles, roughly −3 to +3)
Key Concepts
LO 1.1
Historical simulation is the easiest method to estimate value at risk. All that is required is to reorder the profit/loss observations in increasing magnitude of losses and identify the breakpoint between the tail region and the remainder of the distribution.
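For example, with 1,000 hypothetical P/L observations, the 95% VaR is read directly from the ordered losses (the data and the exact indexing convention below are hypothetical; conventions differ slightly across texts):

    import numpy as np

    rng = np.random.default_rng(11)
    pnl = rng.normal(0.0, 1.0, size=1000)   # hypothetical daily profit/loss observations

    losses = np.sort(-pnl)                  # reorder in increasing magnitude of losses
    var_95 = losses[949]                    # breakpoint into the worst 5% of outcomes (one convention)
    print(f"Historical simulation 95% VaR: {var_95:.4f}")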
LO 1.2
Parametric estimation of VaR requires a specific distribution of prices or equivalently, returns. This method can be used to calculate VaR with either a normal distribution or a lognormal distribution.
Under the assumption of a normal distribution, VaR (i.e., delta-normal VaR) is calculated as follows:
VaR = −µP/L + σP/L × zα
Under the assumption of a lognormal distribution, lognormal VaR is calculated as follows:
VaR = Pt−1 × (1 − e^(µR − σR × zα))
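A numeric sketch of both formulas (the inputs are hypothetical; the first set reproduces the numbers used in Concept Checker 2 later in these notes):

    import math

    # Hypothetical inputs
    mu_pl, sigma_pl = 20.0, 10.0   # mean and standard deviation of annual P/L ($ millions)
    mu_r, sigma_r = 0.05, 0.20     # mean and standard deviation of returns
    p_prev = 100.0                 # portfolio value at time t-1 ($ millions)
    z_alpha = 1.65                 # z-value for a 95% confidence level

    # Delta-normal VaR on the P/L distribution: VaR = -mu + sigma * z
    var_normal = -mu_pl + sigma_pl * z_alpha
    print(f"Delta-normal VaR: {var_normal:.2f}")   # -3.50, i.e., a 5% chance of earning less than $3.5 million

    # Lognormal VaR on returns: VaR = P(t-1) * (1 - exp(mu - sigma * z))
    var_lognormal = p_prev * (1 - math.exp(mu_r - sigma_r * z_alpha))
    print(f"Lognormal VaR:    {var_lognormal:.2f}")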
LO 1.3
VaR identifies the lower bound of the profit/loss distribution, but it does not estimate the expected tail loss. Expected shortfall overcomes this deficiency by dividing the tail region into equal probability mass slices and averaging their corresponding VaRs.
LO 1.4
A more general risk measure than either VaR or ES is known as a coherent risk measure.
LO 1.5
A coherent risk measure is a weighted average of the quantiles of the loss distribution where the weights are user-specific based on individual risk aversion. A coherent risk measure will assign each quantile (not just tail quantiles) a weight. The average of the weighted VaRs is the estimated loss.
LO 1.6
Sound risk management requires the computation of the standard error of a coherent risk measure to estimate the precision of the risk measure itself. The simplest method creates a confidence interval around the quantile in question. To compute standard error, it is necessary to find the variance of the quantile, which will require estimates from the underlying distribution.
LO 1.7
The quantile-quantile (QQ) plot is a visual inspection of an empirical quantile relative to a hypothesized theoretical distribution. If the empirical distribution closely matches the theoretical distribution, the QQ plot would be linear.
Concept Checkers
1. The VaR at a 95% confidence level is estimated to be 1.56 from a historical simulation of 1,000 observations. Which of the following statements is most likely true?
A. The parametric assumption of normal returns is correct.
B. The parametric assumption of lognormal returns is correct.
C. The historical distribution has fatter tails than a normal distribution.
D. The historical distribution has thinner tails than a normal distribution.
2. Assume the profit/loss distribution for XYZ is normally distributed with an annual mean of $20 million and a standard deviation of $10 million. The 5% VaR is calculated and interpreted as which of the following statements?
A. 5% probability of losses of at least $3.50 million.
B. 5% probability of earnings of at least $3.50 million.
C. 95% probability of losses of at least $3.50 million.
D. 95% probability of earnings of at least $3.50 million.
3. Which of the following statements about expected shortfall estimates and coherent risk measures are true?
A. Expected shortfall and coherent risk measures estimate quantiles for the entire loss distribution.
B. Expected shortfall and coherent risk measures estimate quantiles for the tail region.
C. Expected shortfall estimates quantiles for the tail region and coherent risk measures estimate quantiles for the non-tail region only.
D. Expected shortfall estimates quantiles for the entire distribution and coherent risk measures estimate quantiles for the tail region only.
4. Which of the following statements most likely increases standard errors from coherent risk measures?
A. Increasing sample size and increasing the left tail probability.
B. Increasing sample size and decreasing the left tail probability.
C. Decreasing sample size and increasing the left tail probability.
D. Decreasing sample size and decreasing the left tail probability.
5. The quantile-quantile plot is best used for what purpose?
A. Testing an empirical distribution from a theoretical distribution.
B. Testing a theoretical distribution from an empirical distribution.
C. Identifying an empirical distribution from a theoretical distribution.
D. Identifying a theoretical distribution from an empirical distribution.
Concept Checker Answers
1. D The historical simulation indicates that the 5% tail loss begins at 1.56, which is less than the
1.65 predicted by a standard normal distribution. Therefore, the historical simulation has thinner tails than a standard normal distribution.
2. D The value at risk calculation at 95% confidence is: -20 million + 1.65 x 10 million = -$3.50
million. Since the expected loss is negative and VaR is an implied negative amount, the interpretation is that XYZ will earn less than +$3.50 million with 5% probability, which is equivalent to XYZ earning at least $3.50 million with 95% probability.
3. B ES estimates quantiles for n - 1 equal probability masses in the tail region only. The coherent
risk measure estimates quantiles for the entire distribution including the tail region.
4. C Decreasing sample size clearly increases the standard error of the coherent risk measure given
that standard error is defined as:
se(q) = √[p(1 − p) / n] / f(q)
As the left tail probability, p, increases, the probability of tail events increases, which also increases the standard error. Mathematically, p(1 − p) increases as p increases until p = 0.5. Small values of p imply smaller standard errors.
5. C Once a sample is obtained, it can be compared to a reference distribution for possible
identification. The QQ plot maps the quantiles one to one. If the relationship is close to linear, then a match for the empirical distribution is found. The QQ plot is used for visual inspection only without any formal statistical test.
The following is a review of the Market Risk Measurement and Management principles designed to address the learning objectives set forth by GARP. This topic is also covered in:
Non-Parametric Approaches
Topic 2
Exam Focus
This topic introduces non-parametric estimation and bootstrapping (i.e., resampling). The key difference between these approaches and the parametric approaches discussed in the previous topic is that with non-parametric approaches the underlying distribution is not specified, and it is a data-driven, not assumption-driven, analysis. For example, historical simulation is limited by the discreteness of the data, but non-parametric analysis smooths the data points to allow for any VaR confidence level between observations. For the exam, pay close attention to the description of the bootstrap historical simulation approach as well as the various weighted historical simulation approaches.
Non-parametric estimation does not make restrictive assumptions about the underlying distribution like parametric methods, which assume very specific forms such as normal or lognormal distributions. Non-parametric estimation lets the data drive the estimation. The flexibility of these methods makes them excellent candidates for VaR estimation, especially if tail events are sparse.
Bootstrap Historical Simulation Approach
LO 1.6: Evaluate estimators of risk measures by estimating their standard errors.
Sound risk management practice reminds us that estimators are only as useful as their precision. That is, estimators that are less precise (i.e., have large standard errors and wide confidence intervals) will have limited practical value. Therefore, it is best practice to also compute the standard error for all coherent risk measures.
Professor's Note: The process of estimating standard errors for estimators of coherent risk measures is quite complex, so your focus should be on interpretation of this concept.
First, let's start with a sample size of n and an arbitrary bin width of h around the quantile, q. Bin width is just the width of the intervals, sometimes called bins, in a histogram. Computing standard error is done by realizing that the square root of the variance of the quantile is equal to the standard error of the quantile. After finding the standard error, a confidence interval for a risk measure such as VaR can be constructed as follows:
[q + se(q) × zα] > VaR > [q − se(q) × zα]
Example: Estimating standard errors
Construct a 90% confidence interval for 5% VaR (the 95% quantile) drawn from a standard normal distribution. Assume bin width = 0.1 and that the sample size is equal to 500.
Answer:
The quantile value, q, corresponds to the 5% VaR which occurs at 1.65 for the standard normal distribution. The confidence interval takes the following form:
[1.65 + 1.65 × se(q)] > VaR > [1.65 − 1.65 × se(q)]
Professor's Note: Recall that a confidence interval is a two-tailed test (unlike VaR), so a 90% confidence level will have 5% in each tail. Given that this is equivalent to the 5% significance level of VaR, the critical values of 1.65 will be the same in both cases.
Since bin width is 0.1, q is in the range 1.65 ± 0.1/2 = [1.6, 1.7]. Note that the left tail probability, p, is the area to the left of −1.7 for a standard normal distribution.
Next, calculate the probability mass between 1.6 and 1.7, represented by f(q). From the standard normal table, the probability of a loss greater than 1.7 is 0.045 (left tail). Similarly, the probability of a loss less than 1.6 (right tail) is 0.945. Collectively, f(q) = 1 − 0.045 − 0.945 = 0.01.
The standard error of the quantile is derived from the variance approximation of q and is equal to:
se(q) = √[p(1 − p) / n] / f(q)
Now we are ready to substitute in the variance approximation to calculate the confidence interval for VaR:
[1.65 + 1.65 × √(0.045 × (1 − 0.045) / 500) / 0.01] > VaR > [1.65 − 1.65 × √(0.045 × (1 − 0.045) / 500) / 0.01]
3.18 > VaR > 0.12
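The arithmetic can be checked in a few lines (all values are taken directly from the example above):

    import math

    p, n, f_q, q, z = 0.045, 500, 0.01, 1.65, 1.65   # values from the example above

    se_q = math.sqrt(p * (1 - p) / n) / f_q          # standard error of the quantile
    upper = q + z * se_q
    lower = q - z * se_q
    print(f"se(q) = {se_q:.3f}")                     # about 0.93
    print(f"{upper:.2f} > VaR > {lower:.2f}")        # 3.18 > VaR > 0.12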
Let's return to the variance approximation and perform some basic comparative statics. What happens if we increase the sample size, holding all other factors constant? Intuitively, the larger the sample size, the smaller the standard error and the narrower the confidence interval.
Now suppose we increase the bin size, h, holding all else constant. This will increase the probability mass f(q) and reduce p, the probability in the left tail. The standard error will decrease and the confidence interval will again narrow.
Lastly, suppose that p increases, indicating that tail probabilities are more likely. Intuitively, the estimator becomes less precise and standard errors increase, which widens the confidence interval. Note that the expression p(1 − p) will be maximized at p = 0.5.
The above analysis was based on one quantile of the loss distribution. Just as the previous section generalized the expected shortfall to the coherent risk measure, we can do the same for the standard error computation. Thankfully, this complex process is not the focus of the LO.
Quantile-Quantile Plots
LO 1.5: Estimate risk measures by estimating quantiles.
A more general risk measure than either VaR or ES is known as a coherent risk measure. A coherent risk measure is a weighted average of the quantiles of the loss distribution where the weights are user-specific based on individual risk aversion. ES (as well as VaR) is a special case of a coherent risk measure. When modeling the ES case, the weighting function is set to [1 / (1 − confidence level)] for all tail losses. All other quantiles will have a weight of zero.
Under expected shortfall estimation, the tail region is divided into equal probability slices and then multiplied by the corresponding quantiles. Under the more general coherent risk measure, the entire distribution is divided into equal probability slices weighted by the more general risk aversion (weighting) function.
This procedure is illustrated for n = 10. First, the entire return distribution is divided into nine (i.e., n − 1) equal probability mass slices at 10%, 20%, …, 90% (i.e., loss quantiles). Each breakpoint corresponds to a different quantile. For example, the 10% quantile (confidence level = 10%) relates to −1.2816, the 20% quantile (confidence level = 20%) relates to −0.8416, and the 90% quantile (confidence level = 90%) relates to 1.2816. Next, each quantile is weighted by the specific risk aversion function and then averaged to arrive at the value of the coherent risk measure.
This coherent risk measure is more sensitive to the choice of n than expected shortfall, but it will converge to the risk measure's true value for a sufficiently large number of observations. The intuition is that as n increases, the quantiles reach further into the tails where the more extreme values of the distribution are located.
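A sketch of the weighting idea with hypothetical numbers: the breakpoints below are standard normal quantiles, the risk-aversion weights are an arbitrary increasing function of the confidence level, and ES is shown as the special case with equal weights on tail quantiles only.

    import numpy as np
    from scipy.stats import norm

    n = 10
    levels = np.arange(1, n) / n       # 10%, 20%, ..., 90% breakpoints (n - 1 quantiles)
    q = norm.ppf(levels)               # -1.2816, -0.8416, ..., 0, ..., +1.2816

    # Hypothetical risk-aversion weighting function: weight increases with the loss quantile
    raw_w = np.exp(3 * levels)
    w = raw_w / raw_w.sum()            # normalize so the weights sum to 1
    print(f"Coherent risk measure (n = 10): {np.sum(w * q):.4f}")

    # ES as a special case: equal weights on the tail quantiles only, zero elsewhere.
    # A finer grid is used so that some breakpoints actually fall beyond the 95% level.
    fine = np.arange(1, 100) / 100
    tail_w = np.where(fine > 0.95, 1.0, 0.0)
    tail_w /= tail_w.sum()
    print(f"95% ES from tail quantiles:     {np.sum(tail_w * norm.ppf(fine)):.4f}")  # moves toward the true value as n grows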
LO 1.4: Define coherent risk measures.