LO 10.2: Describe a regression hedge and explain how it can improve a standard DV01-neutral hedge.
A regression hedge takes a DV01-style hedge and adjusts it for the projected relationship between changes in nominal yields and changes in real yields. Least squares regression analysis, which is used for regression-based hedges, examines the historical relationship between real and nominal yields.
The advantage of a regression framework is that it provides an estimate of the hedged portfolio's volatility. An investor can gauge the expected gain in advance and compare it to the historical volatility of the hedged position to determine whether the hedged portfolio is an attractive investment.
For example, assume a relative value trade is established whereby a trader sells a U.S. Treasury bond and buys a U.S. TIPS (which makes inflation-adjusted payments) to hedge the T-bond. The initial spread between these two securities reflects current views on inflation. Over time, changes in yields on nominal bonds and TIPS do not track one-for-one. To illustrate this hedge, assume the following data for the yields and DV01s of a TIPS and a T-bond. Also assume that the trader is selling $100 million of the T-bond.
Bond        TIPS     T-Bond
Yield (%)   1.325    3.475
DV01        0.084    0.068
If the trade is made DV01-neutral, which assumes that the yields on the TIPS and the nominal bond will increase or decrease by the same number of basis points, the trade will neither earn a profit nor sustain a loss. The calculation for the face amount of TIPS to purchase to hedge the short nominal bond is as follows:
\[ F_R \times \frac{0.084}{100} = \$100\text{ million} \times \frac{0.068}{100} \]

\[ F_R = \$100\text{ million} \times \frac{0.068}{0.084} = \$80.95 \text{ million} \]

where:
F_R = face amount of the real yield bond (TIPS)
To improve this hedge, the trader gathers yield data over time and plots a regression line, whereby the real yield is the independent variable and the nominal yield is the dependent variable. To compensate for the dispersion in the change in the nominal yield for a given change in the real yield, the trader would adjust the DV01-neutral hedge.
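To make the arithmetic concrete, the short Python sketch below computes the DV01-neutral face amount from the table above and then scales it by a regression slope (beta) of the kind just described. The beta value of 1.02 is purely illustrative and is not taken from the reading.

# Minimal sketch: DV01-neutral hedge and a regression-adjusted hedge (illustrative values)
face_nominal = 100_000_000      # short $100 million of the T-bond
dv01_tips = 0.084               # DV01 of the TIPS (per $100 face)
dv01_tbond = 0.068              # DV01 of the T-bond (per $100 face)

# DV01-neutral: F_R x DV01_R / 100 = F_N x DV01_N / 100
face_tips_dv01_neutral = face_nominal * dv01_tbond / dv01_tips
print(f"DV01-neutral TIPS face: ${face_tips_dv01_neutral:,.0f}")    # about $80.95 million

# Regression hedge: scale by the slope (beta) from regressing nominal yield changes on real yield changes
beta = 1.02                     # hypothetical regression slope, for illustration only
face_tips_regression = beta * face_tips_dv01_neutral
print(f"Regression-adjusted TIPS face: ${face_tips_regression:,.0f}")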
Hedge Adjustment Factor

LO 10.1: Explain the drawbacks to using a DV01-neutral hedge for a bond position.
A standard DV01-neutral hedge assumes that the yield on a bond and the yield on a hedging instrument rise and fall by the same number of basis points. However, a one-to-one relationship does not always exist in practice. For example, if a trader hedges a T-bond, which uses a nominal yield, with a Treasury security indexed to inflation [i.e., a Treasury Inflation Protected Security (TIPS)], which uses a real yield, the hedge will likely be imprecise when changes in yield occur. In general, more dispersion surrounds the change in the nominal yield for a given change in the real yield. Empirically, the nominal yield adjusts by more than one basis point for every basis point adjustment in the real yield.
DV01-style metrics and hedges focus on how rates change relative to one another. As mentioned, the presumption that yields on nominal bonds and TIPS change by the same amount is not very realistic. To improve this DV01-neutral hedge approach, we can apply regression analysis techniques. A regression hedge examines the historical relationship between yield changes, along with the volatility of that relationship, and adjusts the DV01 hedge accordingly.
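As a rough illustration of the regression step (not from the reading), the Python sketch below fits an ordinary least squares line of daily nominal-yield changes on real-yield changes. The simulated data and the "true" slope of 1.02 are assumptions for demonstration only.

import numpy as np

# Illustrative sketch: regress changes in nominal yields on changes in real yields
rng = np.random.default_rng(42)
d_real = rng.normal(0.0, 0.05, 500)                      # simulated daily real-yield changes (in %)
d_nominal = 1.02 * d_real + rng.normal(0.0, 0.02, 500)   # assumed true slope of 1.02 plus noise

# OLS fit: d_nominal = alpha + beta * d_real + error
beta, alpha = np.polyfit(d_real, d_nominal, 1)
print(f"estimated beta = {beta:.3f}, alpha = {alpha:.4f}")
# beta is the adjustment factor applied to the DV01-neutral face amount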
Regression Hedge

LO 9.3: Summarize the process of finding the default time of an asset correlated to all other assets in a portfolio using the Gaussian copula.
When a Gaussian copula is used to derive the default time relationship for more than two assets, a Cholesky decomposition is used to derive a sample M_n(·) from a multivariate copula, M_n(·) ∈ [0,1]. The default correlations of the sample are determined by the default correlation matrix, ρ_M, of the n-variate standard normal distribution, M_n.
The first step is to equate the sample M_n(·) to the cumulative individual default probability, Q_i, for asset i at time τ_i using the following equation. This is accomplished using Microsoft Excel or a Newton-Raphson search procedure:

\[ M_n(\cdot) = Q_i(\tau_i) \]
Next, the random samples are repeatedly drawn from the n-variate standard normal distribution, M_n(·), to determine the expected default time using the Gaussian copula.
Random samples are drawn to estimate the default times, because there is no closed form solution for this equation.
Example: Estimating default time
Illustrate how a risk manager estimates the expected default time of asset i using an n-variate Gaussian copula.

Answer:

Suppose a risk manager draws a 25% cumulative default probability for asset i from a random n-variate standard normal distribution, M_n(·). The n-variate standard normal distribution includes a default correlation matrix, ρ_M, that has the default correlations of asset i with all n assets. Figure 4 illustrates how to equate this 25% with the market-determined cumulative individual default probability Q_i(τ_i). Suppose the first random sample equates to a default time τ of 3.5 years. This process is then repeated 100,000 times to estimate the default time of asset i.
Figure 4: Mapping Default Time for a Random Sample
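The following Python sketch illustrates the mechanics described above for a small portfolio: draw correlated standard normal samples with a Cholesky decomposition, convert the sample for asset i to a cumulative probability, and equate it to a market-implied default-time distribution. The correlation matrix and the constant hazard rate used for Q_i are assumptions for illustration only, not values from the reading.

import numpy as np
from scipy.stats import norm

# Sketch: estimating the default time of asset i with a Gaussian copula (illustrative inputs)
rho_M = np.array([[1.0, 0.3, 0.2],
                  [0.3, 1.0, 0.4],
                  [0.2, 0.4, 1.0]])          # assumed default correlation matrix for n = 3 assets
hazard_i = 0.08                              # assumed constant hazard rate, so Q_i(t) = 1 - exp(-hazard_i * t)

L = np.linalg.cholesky(rho_M)                # Cholesky decomposition of rho_M
rng = np.random.default_rng(0)
n_sims, i = 100_000, 0                       # repeat 100,000 times; look at asset i = 0

z = rng.standard_normal((n_sims, 3)) @ L.T   # correlated n-variate standard normal samples
u = norm.cdf(z[:, i])                        # sample for asset i mapped to [0, 1], i.e., M_n(.)
tau_i = -np.log(1.0 - u) / hazard_i          # solve M_n(.) = Q_i(tau_i) for the default time tau_i

print(f"expected default time of asset i: {tau_i.mean():.2f} years")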
Key Concepts
LO 9.1
The general equation for a correlation copula, C, is defined as:

\[ C[G_1(u_1),\ldots,G_n(u_n)] = F_n\big(F_1^{-1}(G_1(u_1)),\ldots,F_n^{-1}(G_n(u_n));\ \rho_F\big) \]

The notation for this copula equation is translated as follows: G_i(u_i) are marginal distributions, F_n is the joint cumulative distribution function, F_i^{-1} is the inverse function of F_i, and ρ_F is the correlation matrix structure of the joint cumulative function F_n.
LO 9.2
The Gaussian default time copula is defined as:

\[ C_{GD}[Q_1(t),\ldots,Q_n(t)] = M_n\big(N_1^{-1}(Q_1(t)),\ldots,N_n^{-1}(Q_n(t));\ \rho_M\big) \]

Marginal distributions of cumulative default probabilities, Q_i(t), for assets i = 1 to n for fixed time periods t are mapped to the single n-variate standard normal distribution, M_n, with a correlation structure of ρ_M.
The Gaussian copula for the bivariate standard normal distribution, M_2, for two assets with a default correlation coefficient of ρ is defined as:

\[ C_{GD}[Q_B(t), Q_C(t)] = M_2\big(N^{-1}(Q_B(t)),\, N^{-1}(Q_C(t));\ \rho\big) \]
LO 9.3
Random samples are drawn from an n-variate standard normal distribution sample, M_n(·), to estimate expected default times using the Gaussian copula:

\[ M_n(\cdot) = Q_i(\tau_i) \]
Concept Checkers
1.
Suppose a risk manager creates a copula function, C, defined by the equation:
\[ C[G_1(u_1),\ldots,G_n(u_n)] = F_n\big(F_1^{-1}(G_1(u_1)),\ldots,F_n^{-1}(G_n(u_n));\ \rho_F\big) \]

Which of the following statements does not accurately describe this copula function?
A. G_i(u_i) are standard normal univariate distributions.
B. F_n is the joint cumulative distribution function.
C. F_i^{-1} is the inverse function of F_i that is used in the mapping process.
D. ρ_F is the correlation matrix structure of the joint cumulative function F_n.
2. Which of the following statements best describes a Gaussian copula?
A. A major disadvantage of a Gaussian copula model is the transformation of the original marginal distributions in order to define the correlation matrix.
B. The mapping of each variable to the new distribution is done by defining a mathematical relationship between marginal and unknown distributions.
C. A Gaussian copula maps the marginal distribution of each variable to the standard normal distribution.
D. A Gaussian copula is seldom used in financial models because ordinal numbers are required.
3. A Gaussian copula is constructed to estimate the joint default probability of two assets within a one-year time period. Which of the following statements regarding this type of copula is incorrect?
A. This copula requires that the respective cumulative default probabilities are mapped to a bivariate standard normal distribution.
B. This copula defines the relationship between the variables using a default correlation matrix, ρ_M.
C. The term N^{-1}(Q_i(t)) maps each individual cumulative default probability for asset i for time period t on a percentile-to-percentile basis.
D. This copula is a common approach used in finance to estimate joint default probabilities.
4. A risk manager is trying to estimate the default time for asset i based on the default correlation copula of asset i to n assets. Which of the following equations best defines the process that the risk manager should use to generate and map random samples to estimate the default time?
A. C_GD[Q_B(t), Q_C(t)] = M_2(N^{-1}(Q_B(t)), N^{-1}(Q_C(t)); ρ)
B. C[G_1(u_1), ..., G_n(u_n)] = F_n(F_1^{-1}(G_1(u_1)), ..., F_n^{-1}(G_n(u_n)); ρ_F)
C. C_GD[Q_1(t), ..., Q_n(t)] = M_n(N_1^{-1}(Q_1(t)), ..., N_n^{-1}(Q_n(t)); ρ_M)
D. M_n(·) = Q_i(τ_i)
5. Suppose a risk manager owns two non-investment grade assets and has determined their individual default probabilities for the next five years. Which of the following equations best defines how a Gaussian copula is constructed by the risk manager to estimate the joint probability of these two companies defaulting within the next year, assuming a Gaussian default correlation of 0.35?
A. C_GD[Q_B(t), Q_C(t)] = M_2(N^{-1}(Q_B(t)), N^{-1}(Q_C(t)); ρ)
B. C[G_1(u_1), ..., G_n(u_n)] = F_n(F_1^{-1}(G_1(u_1)), ..., F_n^{-1}(G_n(u_n)); ρ_F)
C. C_GD[Q_1(t), ..., Q_n(t)] = M_n(N_1^{-1}(Q_1(t)), ..., N_n^{-1}(Q_n(t)); ρ_M)
D. M_n(·) = Q_i(τ_i)
2018 Kaplan, Inc.
Page 119
Topic 9 Cross Reference to GARP Assigned Reading – Meissner, Chapter 4
Concept Checker Answers
1. A  G_i(u_i) are marginal distributions that do not have well-known distribution properties; they are not standard normal univariate distributions.
2. C Observations of the unknown marginal distributions are mapped to the standard normal
distribution on a percentile-to-percentile basis to create a Gaussian copula.
3. B Because there are only two companies, only a single correlation coefficient is required and
not a correlation matrix, ρ_M.
4. D  The equation M_n(·) = Q_i(τ_i) is used to repeatedly generate random drawings from the n-variate standard normal distribution to determine the expected default time using the Gaussian copula.
5. A  Because there are only two assets, the risk manager should use this equation to define the bivariate standard normal distribution, M_2, with a single default correlation coefficient of ρ.
The following is a review of the Market Risk Measurement and Management principles designed to address the learning objectives set forth by GARP. This topic is also covered in:
Empirical Approaches to Risk Metrics and Hedging
Topic 10
Exam Focus
This topic discusses how dollar value of a basis point (DV01)-style hedges can be improved. Regression-based hedges enhance DV01-style hedges by examining yield changes over time. Principal components analysis (PCA) greatly simplifies bond hedging techniques. For the exam, understand the drawbacks of a standard DV01-neutral hedge, and know how to compute the face value of an offsetting position using DV01 and how to adjust this position using regression-based hedging techniques.
DV01-Neutral Hedge

LO 9.2: Describe the Gaussian copula and explain how to use it to derive the joint probability of default of two assets.
A Gaussian copula maps the marginal distribution of each variable to the standard normal distribution which, by definition, has a mean of zero and a standard deviation of one. The key property of a copula correlation model is preserving the original marginal distributions while defining a correlation between them. The mapping of each variable to the new distribution is done on a percentile-to-percentile basis.
Figure 1 illustrates that the variables of two unknown distributions X and Y have unique marginal distributions. The observations of the unknown distributions are mapped to the standard normal distribution on a percentile-to-percentile basis to create a Gaussian copula.
Figure 1: Mapping a Gaussian Copula to the Standard Normal Distribution (the panels show the distribution of X and the distribution of Y being mapped to the standard normal distribution)
For example, the 5th percentile observation for marginal distribution X is mapped to the 5th percentile point on the univariate standard normal distribution. When the 5th percentile is mapped, it will have a value of -1.645. This is repeated for each observation on
a percentile-to-percentile basis. Likewise, every observation on the marginal distribution of Y is mapped to the corresponding percentile on the univariate standard normal distribution. The new joint distribution is now a multivariate standard normal distribution.
Now a correlation structure can be defined between the two variables X and Y. The unique marginal distributions of X and Y are not well-behaved structures, and therefore, it is difficult to define a relationship between the two variables. However, the standard normal distribution is a well-behaved distribution. Therefore, a copula is a way to indirectly define a correlation relationship between two variables when it is not possible to directly define a correlation.
A Gaussian copula, C_G, is defined in the following expression for an n-variate example. The joint standard multivariate normal distribution is denoted as M_n. The inverse of the univariate standard normal distribution is denoted as N^{-1}. The notation ρ_M denotes the n x n correlation matrix for the joint standard multivariate normal distribution M_n.
\[ C_G[G_1(u_1),\ldots,G_n(u_n)] = M_n\big(N_1^{-1}(G_1(u_1)),\ldots,N_n^{-1}(G_n(u_n));\ \rho_M\big) \]
In finance, the Gaussian copula is a common approach for measuring default risk. The approach can be transformed to define the Gaussian default time copula, C_GD, in the following expression:

\[ C_{GD}[Q_1(t),\ldots,Q_n(t)] = M_n\big(N_1^{-1}(Q_1(t)),\ldots,N_n^{-1}(Q_n(t));\ \rho_M\big) \]
Marginal distributions of cumulative default probabilities, Q_i(t), for assets i = 1 to n for fixed time periods t are mapped to the single n-variate standard normal distribution, M_n, with a correlation structure of ρ_M. The term N_i^{-1}(Q_i(t)) maps each individual cumulative default probability for asset i for time period t on a percentile-to-percentile basis to the standard normal distribution.
Example: Applying a Gaussian copula
Suppose a risk manager owns two non-investment grade assets. Figure 2 lists the default probabilities for the next five years for companies B and C that have B and C credit ratings, respectively. How can a Gaussian copula be constructed to estimate the joint default probability, Q, of these two companies in the next year, assuming a one-year Gaussian default correlation of 0.4?
Figure 2: Default Probabilities of Companies B and C

Time, t   B Default Probability   C Default Probability
1         0.065                   0.238
2         0.081                   0.152
3         0.072                   0.113
4         0.064                   0.092
5         0.059                   0.072
Professor's Note: Non-investment grade companies have a higher probability of default in the near term during the company crisis state. If the company survives past the near-term crisis, the probability of default will go down over time.
Answer:
In this example, there are only two companies, B and C. Thus, a bivariate standard normal distribution, M_2, with a default correlation coefficient of ρ can be applied. With two companies, only a single correlation coefficient is required, and not a correlation matrix, ρ_M:

\[ C_{GD}[Q_B(t), Q_C(t)] = M_2\big(N^{-1}(Q_B(t)),\, N^{-1}(Q_C(t));\ \rho\big) \]
Figure 3 illustrates the percentile-to-percentile mapping of cumulative default probabilities for each company to the standard normal distribution.
Figure 3: Mapping Cumulative Default Probabilities to the Standard Normal Distribution

Time, t   B Default Prob.   Q_B(t)   N^{-1}(Q_B(t))   C Default Prob.   Q_C(t)   N^{-1}(Q_C(t))
1         0.065             0.065    -1.513           0.238             0.238    -0.712
2         0.081             0.146    -1.053           0.152             0.390    -0.279
3         0.072             0.218    -0.779           0.113             0.503    0.008
4         0.064             0.282    -0.577           0.092             0.595    0.241
5         0.059             0.341    -0.409           0.072             0.667    0.432
Columns 3 and 6 represent the cumulative default probabilities Q_B(t) and Q_C(t) for companies B and C, respectively. The values in columns 4 and 7 map the respective cumulative default probabilities, Q_B(t) and Q_C(t), to the standard normal distribution via N^{-1}(Q(t)). The values for the standard normal distribution are determined using the Microsoft Excel function =NORMSINV(Q(t)) or the MATLAB function norminv(Q(t)). This process was illustrated graphically in Figure 1.
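A minimal Python sketch of the same mapping (assuming scipy is available): norm.ppf plays the role of NORMSINV, and multivariate_normal.cdf evaluates the bivariate normal term M_2 used for the joint probability computed next in the text.

import numpy as np
from scipy.stats import norm, multivariate_normal

q_b = np.cumsum([0.065, 0.081, 0.072, 0.064, 0.059])   # cumulative default probabilities Q_B(t)
q_c = np.cumsum([0.238, 0.152, 0.113, 0.092, 0.072])   # cumulative default probabilities Q_C(t)

x_b = norm.ppf(q_b)   # N^-1(Q_B(t)): -1.513, -1.053, -0.779, -0.577, -0.409
x_c = norm.ppf(q_c)   # N^-1(Q_C(t)): -0.712, -0.279,  0.008,  0.241,  0.432

# One-year joint default probability with default correlation rho = 0.4
rho = 0.4
joint_1y = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]]).cdf([x_b[0], x_c[0]])
print(f"Q(t_B <= 1 and t_C <= 1) = {joint_1y:.3f}")    # approximately 0.034, as in the text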
The joint probability of both Company B and Company C defaulting within one year is calculated as:
\[ Q(t_B \le 1 \cap t_C \le 1) = M_2\big(X_B \le -1.513 \cap X_C \le -0.712;\ \rho = 0.4\big) = 3.4\% \]

Professor's Note: You will not be asked to calculate the percentiles for mapping to the standard normal distribution because it requires the use of Microsoft Excel or MATLAB. In addition, you will not be asked to calculate the joint probability of default for a bivariate normal distribution due to its complexity.

Correlated Default Time

LO 9.1: Explain the purpose of copula functions and the translation of the copula equation.
A correlation copula is created by converting two or more unknown distributions that may have unique shapes and mapping them to a known distribution with well-defined properties, such as the normal distribution. A copula creates a joint probability distribution between two or more variables while maintaining their individual marginal distributions. This is accomplished by mapping multiple distributions to a single multivariate distribution. For example, the following expression defines a copula function, C, that transforms an n-dimensional function on the interval [0,1] to a one-dimensional function.
\[ C: [0,1]^n \rightarrow [0,1] \]

Suppose G_i(u_i) ∈ [0,1] is a univariate, uniform distribution with u_i = u_1, ..., u_n, and i ∈ N (i.e., i is an element of set N). A copula function, C, can then be defined as follows:
\[ C[G_1(u_1),\ldots,G_n(u_n)] = F_n\big(F_1^{-1}(G_1(u_1)),\ldots,F_n^{-1}(G_n(u_n));\ \rho_F\big) \]
In this equation, G_i(u_i) are the marginal distributions, F_n is the joint cumulative distribution function, F_i^{-1} is the inverse function of F_i, and ρ_F is the correlation matrix structure of the joint cumulative function F_n.
This copula function is translated as follows. Suppose there are n marginal distributions, G_1(u_1) to G_n(u_n). A copula function exists that maps the marginal distributions of G_1(u_1) to G_n(u_n) via F_i^{-1}(G_i(u_i)) and allows for the joining of the separate values F_i^{-1}(G_i(u_i)) into a single n-variate function F_n(F_1^{-1}(G_1(u_1)), ..., F_n^{-1}(G_n(u_n))) that has a correlation matrix of ρ_F. Thus, this equation defines the process whereby unknown marginal distributions are mapped to a well-known distribution, such as the standard multivariate normal distribution.
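To make the mapping idea concrete, here is a small illustrative Python sketch (assumed inputs, not from the reading) that builds a two-variable Gaussian copula: correlated standard normal draws are converted to uniforms and then pushed through arbitrary marginal inverse CDFs, so each marginal keeps its own shape while the copula supplies the dependence.

import numpy as np
from scipy.stats import norm, expon, lognorm, spearmanr

# Sketch: joining two arbitrary marginals with a Gaussian copula (illustrative parameters)
rho = 0.6
cov = [[1.0, rho], [rho, 1.0]]
rng = np.random.default_rng(1)

z = rng.multivariate_normal([0.0, 0.0], cov, size=10_000)  # correlated standard normals
u = norm.cdf(z)                                            # map to uniforms on [0, 1]

x = expon(scale=2.0).ppf(u[:, 0])      # marginal 1: exponential (assumed)
y = lognorm(s=0.5).ppf(u[:, 1])        # marginal 2: lognormal (assumed)

# The joint dependence comes from the copula; each marginal keeps its own distribution
rho_s, _ = spearmanr(x, y)
print(f"rank correlation of (x, y): {rho_s:.2f}")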
Gaussian Copula

LO 8.2: Assess the Pearson correlation approach, Spearman's rank correlation, and Kendall's τ, and evaluate their limitations and usefulness in finance.
Pearson Correlation
The Pearson correlation coefficient is commonly used to measure the linear relationship between two variables. The Pearson correlation is defined by dividing the covariance (cov_XY) by the product of the two assets' standard deviations (σ_X σ_Y).
\[ \rho_{XY} = \frac{\mathrm{cov}_{XY}}{\sigma_X \sigma_Y} \]
Covariance is a measure of how two assets move with each other over time. The Pearson correlation coefficient standardizes covariance by dividing it by the standard deviations of each asset. This is very convenient because the correlation coefficient is always between -1 and +1.
Covariance is calculated by finding the product of each asset's deviation from its respective mean return for each period. The products of the deviations for each period are then added together and divided by the number of observations less one for degrees of freedom.
\[ \mathrm{cov}_{XY} = \frac{\sum_{t=1}^{n}(X_t - \mu_X)(Y_t - \mu_Y)}{n - 1} \]
There is a second methodology that is used for calculating the Pearson correlation coefficient if the data is drawn from random processes with unknown outcomes (e.g., rolling a die). The following equation defines covariance with expectation values. If E(X) and E(Y) are the expected values of variables X and Y, respectively, then the expected product of deviations from these expected values is computed as follows:
\[ E\{[X - E(X)][Y - E(Y)]\} \quad \text{or} \quad E(XY) - E(X)E(Y) \]
When using random sets of data, the correlation coefficient can be rewritten as:
\[ \rho_{XY} = \frac{E(XY) - E(X)E(Y)}{\sqrt{E(X^2) - (E(X))^2} \times \sqrt{E(Y^2) - (E(Y))^2}} \]
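A short Python sketch of both formulas (the return data are made up for illustration): the first version divides the sample covariance by the product of standard deviations, and the second uses the expectation form; both agree with numpy's built-in calculation.

import numpy as np

# Illustrative returns for two assets (hypothetical data)
x = np.array([0.02, -0.01, 0.03, 0.005, -0.02])
y = np.array([0.015, -0.005, 0.02, 0.00, -0.01])

# Sample form: cov_XY / (sigma_X * sigma_Y), using n - 1 in each estimate
cov_xy = ((x - x.mean()) * (y - y.mean())).sum() / (len(x) - 1)
rho_sample = cov_xy / (x.std(ddof=1) * y.std(ddof=1))

# Expectation form: [E(XY) - E(X)E(Y)] / (sqrt(E(X^2) - E(X)^2) * sqrt(E(Y^2) - E(Y)^2))
rho_expect = ((x * y).mean() - x.mean() * y.mean()) / (
    np.sqrt((x**2).mean() - x.mean()**2) * np.sqrt((y**2).mean() - y.mean()**2))

print(rho_sample, rho_expect, np.corrcoef(x, y)[0, 1])   # all three agree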
Because many financial variables have nonlinear relationships, the Pearson correlation coefficient is only an approximation of the nonlinear relationship between financial
variables. Thus, when applying the Pearson correlation coefficient in financial models, risk managers and investors need to be aware of the following five limitations:
1. The Pearson correlation coefficient measures the linear relationship between two variables, but financial relationships are often nonlinear.

2. A Pearson correlation of zero does not imply independence between the two variables. It simply means there is not a linear relationship between the variables. For example, the parabola relationship defined as Y = X² has a correlation coefficient of zero. There is, however, an obvious nonlinear relationship between variables Y and X.

3. When the joint distribution between variables is not elliptical, linear correlation measures do not have meaningful interpretations. Examples of common elliptical joint distributions are the multivariate normal distribution and the multivariate Student's t-distribution.

4. The Pearson correlation coefficient requires that the variance calculations of the variables X and Y are finite. In cases where kurtosis is very high, such as the Student's t-distribution, the variance could be infinite, so the Pearson correlation coefficient would be undefined.

5. The Pearson correlation coefficient is not meaningful if the data is transformed. For example, the correlation coefficient between two variables X and Y will be different than the correlation coefficient between ln(X) and ln(Y).
Spearman's Rank Correlation
Ordinal measures are based on the order of elements in data sets. Two examples of ordinal correlation measures are the Spearman rank correlation and the Kendall τ. The Spearman rank correlation is a nonparametric approach because no knowledge of the joint distribution of the variables is necessary. The calculation is based on the relationship of the ranked variables. The following equation defines the Spearman rank correlation coefficient, where n is the number of observations for each variable, and d_i is the difference between the rankings for period i:

\[ \rho_S = 1 - \frac{6\sum_{i=1}^{n} d_i^2}{n(n^2 - 1)} \]
The Spearman rank correlation coefficient is determined in three steps:

Step 1: Order the set pairs of variables X and Y with respect to the set X.
Step 2: Determine the ranks of X_i and Y_i for each time period i.
Step 3: Calculate the difference of the variable rankings and square the difference.
Example: Spearman's rank correlation
Calculate the Spearman rank correlation for the returns of stocks X and Y provided in Figure 1.
Figure 1: Returns for Stocks X and Y

Year      X        Y
2010      25.0%    -20.0%
2011      60.0%    40.0%
2012      -20.0%   10.0%
2013      40.0%    20.0%
2014      -10.0%   30.0%
Average   19.0%    16.0%
Answer:
The calculations for determining the Spearman rank correlation coefficient are shown in Figure 2. The first step involves ranking the returns for stock X from lowest to highest in the second column. The first column denotes the respective year for each return. The returns for stock Y are then listed for each respective year. The fourth and fifth columns rank the returns for variables X and Y. The differences between the rankings for each year are listed in column six. Lastly, the sum of squared differences in rankings is determined in column seven.
Figure 2: Ranking Returns for Stocks X and Y

Year    X        Y        X Rank   Y Rank   d_i    d_i^2
2012    -20.0%   10.0%    1        2        -1     1
2014    -10.0%   30.0%    2        4        -2     4
2010    25.0%    -20.0%   3        1        2      4
2013    40.0%    20.0%    4        3        1      1
2011    60.0%    40.0%    5        5        0      0
                                        Sum =      10
The Spearman rank correlation coefficient can then be determined as 0.5:

\[ \rho_S = 1 - \frac{6\sum_{i=1}^{n} d_i^2}{n(n^2 - 1)} = 1 - \frac{6 \times 10}{5(25 - 1)} = 0.5 \]
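The same result can be reproduced with a few lines of Python (a sketch, assuming scipy is installed); scipy.stats.spearmanr ranks the data internally and returns the 0.5 computed above.

from scipy.stats import spearmanr

x = [0.25, 0.60, -0.20, 0.40, -0.10]   # stock X returns, 2010-2014
y = [-0.20, 0.40, 0.10, 0.20, 0.30]    # stock Y returns, 2010-2014

rho_s, _ = spearmanr(x, y)
print(rho_s)   # 0.5, matching the hand calculation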
Kendall's τ
Kendall's τ is another ordinal correlation measure that is becoming more widely applied in financial models for ordinal variables such as credit ratings. Kendall's τ is also a nonparametric measure that does not require any assumptions regarding the joint probability distributions of variables. Both Spearman's rank correlation coefficient and Kendall's τ are similar to the Pearson correlation coefficient for ranked variables because perfectly correlated variables will have a coefficient of 1. The Kendall τ will be 1 if variable Y always increases with an increase in variable X. The numerical amount of the increase does not matter for two variables to be perfectly correlated. Therefore, for most cases, the Kendall τ and the Spearman rank correlation coefficients will be different.
The mathematical definition of Kendall's τ is provided as follows:

\[ \tau = \frac{n_c - n_d}{n(n-1)/2} \]
In this equation, the number of concordant pairs is represented as n_c, and the number of discordant pairs is represented as n_d. A concordant pair of observations is when the rankings of two pairs are in agreement:

\[ X_t < Y_t \text{ and } X_{t^*} < Y_{t^*}, \quad \text{or} \quad X_t > Y_t \text{ and } X_{t^*} > Y_{t^*}, \quad t \neq t^* \]
A discordant pair of observations is when the rankings of two pairs are not in agreement:

\[ X_t < Y_t \text{ and } X_{t^*} > Y_{t^*}, \quad \text{or} \quad X_t > Y_t \text{ and } X_{t^*} < Y_{t^*}, \quad t \neq t^* \]

A pair of rankings is neither concordant nor discordant if the rankings are equal:

\[ X_t = Y_t \quad \text{or} \quad X_{t^*} = Y_{t^*} \]

The denominator in the Kendall τ equation computes the total number of pair combinations. For example, if there are six pairs of observations, there will be 15 combinations of pairs:

\[ \frac{n \times (n-1)}{2} = \frac{6 \times 5}{2} = 15 \]

Example: Kendall's τ

Calculate the Kendall τ correlation coefficient for the stock returns of X and Y listed in Figure 3.

Figure 3: Ranked Returns for Stocks X and Y

Year    X        Y        X Rank   Y Rank
2012    -20.0%   10.0%    1        2
2014    -10.0%   30.0%    2        4
2010    25.0%    -20.0%   3        1
2013    40.0%    20.0%    4        3
2011    60.0%    40.0%    5        5

Answer:

Begin by comparing the rankings of X and Y stock returns in columns four and five of Figure 3. There are five pairs of observations, so there will be ten combinations. Figure 4 summarizes the pairs of rankings based on the stock returns for X and Y. There are two concordant pairs, four discordant pairs, and four pairs that are neither concordant nor discordant.

Figure 4: Categorizing Pairs of Stock X and Y Returns

Concordant Pairs: {(1,2),(2,4)}, {(3,1),(4,3)}
Discordant Pairs: {(1,2),(3,1)}, {(1,2),(4,3)}, {(2,4),(3,1)}, {(2,4),(4,3)}
Neither: {(1,2),(5,5)}, {(2,4),(5,5)}, {(3,1),(5,5)}, {(4,3),(5,5)}

Kendall's τ can then be determined as -0.2:

\[ \tau = \frac{n_c - n_d}{n(n-1)/2} = \frac{2 - 4}{5(5-1)/2} = -0.2 \]

Thus, the relationship between the stock returns of X and Y is slightly negative based on the Kendall τ correlation coefficient.
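The counting rule described above (comparing the X and Y ranks within each period and discarding any pair in which the two ranks are equal) can be scripted directly. The short Python sketch below implements that rule as presented in these notes and reproduces the -0.2; it is written from the procedure above rather than taken from the reading.

from itertools import combinations

x_rank = [1, 2, 3, 4, 5]   # X ranks from Figure 3
y_rank = [2, 4, 1, 3, 5]   # Y ranks from Figure 3

# Sign of (X rank - Y rank) within each period; 0 means the two ranks are equal
signs = [(xr > yr) - (xr < yr) for xr, yr in zip(x_rank, y_rank)]   # here: [-1, -1, 1, 1, 0]

n_c = sum(1 for s1, s2 in combinations(signs, 2) if s1 == s2 and s1 != 0)   # concordant pairs
n_d = sum(1 for s1, s2 in combinations(signs, 2) if s1 == -s2 and s1 != 0)  # discordant pairs

n = len(x_rank)
tau = (n_c - n_d) / (n * (n - 1) / 2)
print(n_c, n_d, tau)   # 2, 4, -0.2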
Limitations of Ordinal Risk Measures

Ordinal correlation measures based on ranking (i.e., Spearman's rank correlation and Kendall's τ) are implemented in copula correlation models to analyze the dependence of market prices and counterparty risk. Because ordinal numbers simply show the rank of observations, problems arise when ordinal measures are used for cardinal observations, which show the quantity, number, or value of observations.

Example: Impact of outliers on ordinal measures

Suppose we triple the returns of X in the previous example to show the impact of outliers. If outliers are important sources of information and financial variables are cardinal, what are the implications for ordinal correlation measures?

Answer:

Notice from Figure 5 that Spearman's rank correlation and Kendall's τ do not change with an increased probability of outliers. Thus, ordinal correlation measures are less sensitive to outliers, which are extremely important in VaR and stress test models during extreme economic conditions. Numerical values are not important for ordinal correlation measures where only the rankings matter. Thus, since outliers do not change the rankings, ordinal measures underestimate risk by ignoring the impact of outliers.

Figure 5: Ranking Returns with Outliers

Year    3X        Y        3X Rank   Y Rank   d_i    d_i^2
2012    -60.0%    10.0%    1         2        -1     1
2014    -30.0%    30.0%    2         4        -2     4
2010    75.0%     -20.0%   3         1        2      4
2013    120.0%    20.0%    4         3        1      1
2011    180.0%    40.0%    5         5        0      0
                                          Sum =      10

Another limitation of Kendall's τ occurs when there are a large number of pairs that are neither concordant nor discordant. In other words, the Kendall τ calculation can be distorted when there are only a few concordant and discordant pairs. For example, there were 4 out of 10 pairs that were neither concordant nor discordant in Figure 4. Thus, the Kendall τ calculation was based on only 6 out of 10, or 60%, of the observations.

Key Concepts

LO 8.1

Limitations of financial models arise due to inaccurate input values, erroneous underlying distribution assumptions, and mathematical inconsistencies. Copula correlation models failed during the 2007-2009 financial crisis due to assumptions of a negative correlation between the equity and senior tranches in a collateralized debt obligation (CDO) structure and the calibration of correlation estimates with pre-crisis data.

LO 8.2

A major limitation of the Pearson correlation coefficient is that it measures linear relationships when most financial variables are nonlinear.

The Spearman rank correlation coefficient, where n is the number of observations for each variable and d_i is the difference between the rankings for period i, is computed as follows:

\[ \rho_S = 1 - \frac{6\sum_{i=1}^{n} d_i^2}{n(n^2 - 1)} \]

The Kendall τ correlation coefficient, where the number of concordant pairs is represented as n_c and the number of discordant pairs is represented as n_d, is computed as follows:

\[ \tau = \frac{n_c - n_d}{n(n-1)/2} \]

Spearman's rank and Kendall's τ correlation coefficients should not be used with cardinal financial variables because ordinal measures underestimate risk by ignoring the impact of outliers.

Concept Checkers

1. Kirk Rozenboom, FRM, uses the Black-Scholes-Merton (BSM) model to value options. Following the financial crisis of 2007-2009, he is more aware of the limitations of the BSM option pricing model. Which of the following statements best characterizes a major limitation of the BSM option pricing model?
A. The BSM model assumes strike prices have nonconstant volatility.
B. Option traders often use a volatility smile with lower volatilities for out-of-the-money call and put options when applying the BSM model.
C. For up-and-out calls and puts, the BSM model is insensitive to changes in implied volatility when the knock-out strike price is equal to the strike price and the interest rate equals the underlying asset return.
D. For down-and-out calls and puts, the BSM model is insensitive to changes in option maturity when the knock-out strike price is greater than the strike price and the interest rate is greater than the underlying asset return.

2. New copula correlation models were used by traders and risk managers during the 2007-2009 global financial crisis. This led to miscalculations in the underlying risk for structured products such as collateralized debt obligation (CDO) models. Which of the following statements least likely explains the failure of these new copula correlation models during the financial crisis?
A. The copula correlation models assumed a negative correlation between the equity and senior tranches of CDOs.
B. Correlations for equity tranches of CDOs increased during the financial crisis.
C. The correlation copula models were calibrated with data from time periods that had low risk.
D. Correlations for senior tranches of CDOs decreased during the financial crisis.
3. A risk manager gathers five years of historical returns to calculate the Spearman rank correlation coefficient for stocks X and Y. The stock returns for X and Y from 2010 to 2014 are as follows:

Year    X        Y
2010    5.0%     -10.0%
2011    50.0%    -5.0%
2012    -10.0%   20.0%
2013    -20.0%   40.0%
2014    30.0%    15.0%

What is the Spearman rank correlation coefficient for the stock returns of X and Y?
A. -0.7.
B. -0.3.
C. 0.3.
D. 0.7.

4. A risk manager gathers five years of historical returns to calculate the Kendall τ correlation coefficient for stocks X and Y. The stock returns for X and Y from 2010 to 2014 are as follows:

Year    X        Y
2010    5.0%     -10.0%
2011    50.0%    -5.0%
2012    -10.0%   20.0%
2013    -20.0%   40.0%
2014    30.0%    15.0%

What is the Kendall τ correlation coefficient for the stock returns of X and Y?
A. -0.3.
B. -0.2.
C. 0.4.
D. 0.7.

5. A risk manager is using a copula correlation model to perform stress tests of financial risk during systemic economic crises. If the risk manager is concerned about extreme outliers, which of the following correlation coefficient measures should be used?
A. Kendall's τ correlation.
B. Ordinal correlation.
C. Pearson correlation.
D. Spearman's rank correlation.

Concept Checker Answers

1. C  For up-and-out calls and puts and for down-and-out calls and puts, the BSM option pricing model is insensitive to changes in implied volatility when the knock-out strike price is equal to the strike price and the interest rate equals the underlying asset return. The BSM model assumes strike prices have a constant volatility, and option traders often use a volatility smile with higher volatilities for out-of-the-money call and put options.

2. D  During the crisis, the correlations for both the equity and senior tranches of CDOs significantly increased, causing losses in value for both.

3. A  The following table illustrates the calculations used to determine the sum of squared ranking deviations:

Year    X        Y        X Rank   Y Rank   d_i    d_i^2
2013    -20.0%   40.0%    1        5        -4     16
2012    -10.0%   20.0%    2        4        -2     4
2010    5.0%     -10.0%   3        1        2      4
2014    30.0%    15.0%    4        3        1      1
2011    50.0%    -5.0%    5        2        3      9
                                        Sum =      34

Thus, the Spearman rank correlation coefficient is -0.7:

\[ \rho_S = 1 - \frac{6\sum_{i=1}^{n} d_i^2}{n(n^2 - 1)} = 1 - \frac{6 \times 34}{5(25 - 1)} = -0.7 \]

4. B  The following table provides the ranking of pairs with respect to X:

Year    X        Y        X Rank   Y Rank
2013    -20.0%   40.0%    1        5
2012    -10.0%   20.0%    2        4
2010    5.0%     -10.0%   3        1
2014    30.0%    15.0%    4        3
2011    50.0%    -5.0%    5        2
This topic is also covered in: F in a n c ia l C o r r e l a t io n M o d e l i n g B o t t o m -U p A p p r o a c h e s Topic 9 E x a m F o c u s A copula is a joint multivariate distribution that describes how variables from marginal distributions come together. Copulas provide an alternative measure of dependence between random variables that is not subject to the same limitations as correlation in applications such as risk measurement. For the exam, understand how a correlation copula is created by mapping two or more unknown distributions to a known distribution that has well-defined properties. Also, know how the Gaussian copula is used to estimate joint probabilities of default for specific time periods and the default time for multiple assets. The material in this topic is relatively complex, so your focus here should be on gaining a general understanding of how a copula function is applied. C o p u l a F u n c t i o n s

LO 8.1: Evaluate the limitations of financial modeling with respect to the model itself, calibration of the model, and the model's output.
Financial models are important tools to help individuals and institutions better understand the complexity of the financial world. Financial models always deal with uncertainty and are, therefore, only approximations of a very complex pricing system that is influenced by numerous dynamic factors. There are many different types of markets trading a variety of assets and financial products such as equities, bonds, structured products, derivatives, real estate, and exchange-traded funds. Data from multiple sources is then gathered to calibrate financial models.
Due to the complexity of the global financial system, it is important to recognize the limitations of financial models. Limitations arise in financial models as a result of inaccurate inputs, erroneous assumptions regarding asset variable distributions, and mathematical inconsistencies. Almost all financial models require market valuations as inputs. Unfortunately, these values are often determined by investors who do not always behave rationally. Therefore, asset values are sometimes random and may exhibit unexpected changes.
Financial models also require assumptions regarding the underlying distribution of the asset returns. Value at risk (VaR) models are used to estimate market risk, and these models often assume that asset returns follow a normal distribution. However, empirical studies actually find higher kurtosis in return distributions, which suggest a distribution with fatter tails than the normal distribution.
Another example of a shortcoming of financial models is illustrated with the Black-Scholes-Merton (BSM) option pricing model. The BSM option pricing model assumes strike prices have constant volatility. However, numerous empirical studies find higher volatility for out-of-the-money options and a volatility skew in equity markets. Thus, option traders and risk managers often use a volatility smile (discussed in Topic 13) with higher volatilities for out-of-the-money call and put options.
Financial models at times may fail to accurately measure risk due to mathematical inconsistencies. For example, regarding barrier options, when applying the BSM option pricing model to up-and-out calls and puts and down-and-out calls and puts, there are rare cases where the inputs make the model insensitive to changes in implied volatility and option maturity. This can occur when the knock-out strike price is equal to the strike price, and the interest rate equals the underlying asset return. Risk managers and traders need to be aware of the possibility of mathematical inconsistencies causing model risk that leads to incorrect pricing and the inability to properly hedge risk.
Limitations in the Calibration of Financial Models
Financial models calibrate parameter inputs to reflect current market values. These parameters are then used in financial models to estimate market values with limited or no pricing information. The choice of time period used to calibrate the parameter inputs for the model can have a big impact on the results. For example, during the 2007 to 2009 financial crisis, risk managers used volatility and correlation estimates from pre-crisis periods. This resulted in significantly underestimating the risk for value at risk (VaR), credit value at risk (CVaR), and collateralized debt obligation (CDO) models.
All financial models should be tested using scenarios of extreme economic conditions. This process is referred to as stress testing. For example, VaR estimates are calculated in the event of a systemic financial crisis or severe recession. In 2012, the Federal Reserve, under the guidelines of Basel III, required all financial institutions to use stress tests.
Limitations of Financial Model Outputs
Limitations of financial models became evident during the recent global financial crisis. Traders and risk managers used new copula correlation models to estimate values in collateralized debt obligation (CDO) models. The values of these structured products were linked to mortgages in a collapsing real estate market.
The copula correlation models failed for two reasons. First, the copula correlation models assumed a negative correlation between the equity and senior tranches of CDOs. However, during the crisis, the correlations for both tranches significantly increased causing losses for both. Second, the copula correlation models were calibrated using volatility and correlation estimates with data from time periods that had low risk, and correlations changed significantly during the crisis.
A major lesson learned from the global financial crisis is that copula models cannot be blindly trusted. There should always be an element of human judgment in assessing the risk associated with any financial model. This is especially true for extreme market conditions.
Statistical Correlation Measures

LO 7.3: Identify the best-fit distribution for equity, bond, and default correlations.
Seventy-seven percent of the correlations between stocks listed on the Dow from 1972 to 2012 were positive. Three distribution fitting tests were used to determine the best fit for equity correlations. Based on the results of the Kolmogorov-Smirnov, Anderson-Darling, and chi-squared distribution fitting tests, the Johnson SB distribution (which has two shape parameters, one location parameter, and one scale parameter) provided the best fit for equity correlations. The Johnson SB distribution best fit was also robust with respect to testing different economic states for the time period in question. The normal, lognormal, and beta distributions provided a poor fit for equity correlations.
There were three mild recessions and three severe recessions from 1972 to 2012. The time periods for the mild recessions occurred in 1980, 1990 to 1991, and 2001. More severe recessions occurred from 1973 to 1974 and from 1981 to 1982. Both of these severe recessions were caused by huge increases in oil prices. The most severe recession for this time period occurred from 2007 to 2009 following the global financial crisis. The percentage change in correlation volatility prior to a recession was negative in every case except for the 1990 to 1991 recession. This is consistent with the findings discussed earlier where correlation volatility is low during expansionary periods that often occur prior to a recession.
An empirical investigation of 7,643 bond correlations found average correlations for bonds of 42%. Correlation volatility for bond correlations was 64%. Bond correlations were also found to exhibit properties of mean reversion, but the mean reversion rate was only 26%. The best fit distribution for bond correlations was found to be the generalized extreme value (GEV) distribution. However, the normal distribution is also a good fit for bond correlations.
A study of 4,633 default probability correlations revealed an average default correlation of 30%. Correlation volatility for default probability correlations was 88%. The mean
reversion rate for default probability correlations was 30%, which is closer to the 26% for bond correlations. However, the default probability correlation distribution was similar to equity distributions in that the Johnson SB distribution is the best fit for both distributions. Figure 1 summarizes the findings of the empirical correlation analysis.
Figure 1: Empirical Findings for Equity, Bond, and Default Correlations

Correlation Type       Average Correlation   Correlation Volatility   Reversion Rate   Best Fit Distribution
Equity                 35%                   80%                      78%              Johnson SB
Bond                   42%                   64%                      26%              Generalized Extreme Value
Default Probability    30%                   88%                      30%              Johnson SB
Key Concepts
LO 7.1
Risk managers should be cognizant that historical correlation levels for common stocks in the Dow are highest during recessions. Correlation volatility for Dow stocks is high during recessions but highest during normal economic periods.
LO 7.2
When a regression is run where S_t - S_{t-1} (the Y variable) is regressed with respect to S_{t-1} (the X variable), the β coefficient of the regression is equal to the negative of the mean reversion rate, a.
Equity correlations show high mean reversion rates (78%) and low autocorrelations (22%). These two rates must sum to 100%. Bond correlations and default probability correlations show much lower mean reversion rates and higher autocorrelation rates.
LO 7.3
Equity correlation distributions and default probability correlation distributions are best fit with the Johnson SB distribution. Bond correlation distributions are best fit with the generalized extreme value distribution, but the normal distribution is also a good fit.
Concept Checkers
1. Suppose a risk manager examines the correlations and correlation volatility of stocks in the Dow Jones Industrial Average (Dow) for the period beginning in 1972 and ending in 2012. Expansionary periods are defined as periods where the U.S. gross domestic product (GDP) growth rate is greater than 3.5%, periods are normal when the GDP growth rates are between 0 and 3.5%, and recessions are periods with two consecutive negative GDP growth rates. Which of the following statements characterizes correlation and correlation volatilities for this sample? The risk manager will most likely find that:
A. correlations and correlation volatility are highest for recessions.
B. correlations and correlation volatility are highest for expansionary periods.
C. correlations are highest for normal periods, and correlation volatility is highest for recessions.
D. correlations are highest for recessions, and correlation volatility is highest for normal periods.
2. Suppose mean reversion exists for a variable with a value of 30 at time period t - 1. Assume that the long-run mean value for this variable is 40 and ignore the stochastic term included in most regressions of financial data. What is the expected change in value of the variable for the next period if the mean reversion rate is 0.4?
A. -10.
B. -4.
C. 4.
D. 10.
3. A risk manager uses the past 480 months of correlation data from the Dow Jones Industrial Average (Dow) to estimate the long-run mean correlation of common stocks and the mean reversion rate. Based on historical data, the long-run mean correlation of Dow stocks was 32%, and the regression output estimates the following regression relationship: Y = 0.24 - 0.75X. Suppose that in April 2014, the average monthly correlation for all Dow stocks was 36%. What is the expected correlation for May 2014 assuming the mean reversion rate estimated in the regression analysis?
A. 32%.
B. 33%.
C. 35%.
D. 37%.
4. A risk manager uses the past 480 months of correlation data from the Dow Jones Industrial Average (Dow) to estimate the long-run mean correlation of common stocks and the mean reversion rate. Based on this historical data, the long-run mean correlation of Dow stocks was 34%, and the regression output estimates the following regression relationship: Y = 0.262 - 0.77X. Suppose that in April 2014, the average monthly correlation for all Dow stocks was 33%. What is the estimated one-period autocorrelation for this time period based on the mean reversion rate estimated in the regression analysis?
A. 23%.
B. 26%.
C. 30%.
D. 33%.
5. In estimating correlation matrices, risk managers often assume an underlying distribution for the correlations. Which of the following statements most accurately describes the best fit distributions for equity correlation distributions, bond correlation distributions, and default probability correlation distributions? The best fit distributions for the equity, bond, and default probability correlation distributions, respectively, are:
A. lognormal, generalized extreme value, and normal.
B. Johnson SB, generalized extreme value, and Johnson SB.
C. beta, normal, and beta.
D. Johnson SB, normal, and beta.
Concept Checker Answers
1. D Findings of an empirical study of monthly correlations of Dow stocks from 1972 to 2012
revealed the highest correlation levels for recessions and the highest correlation volatilities for normal periods. The correlation volatilities during a recession and normal period were 80.5% and 83.4%, respectively.
2. C The mean reversion rate, a, indicates the speed of the change or reversion back to the mean.
If the mean reversion rate is 0.4 and the difference between the last variable and long-run mean is 10 (= 40 – 30), the expected change for the next period is 4 (i.e., 0.4 x 10 = 4).
3. B  There is a -4% difference between the long-run mean correlation and the April 2014 correlation (32% - 36% = -4%). The negative of the β coefficient in the regression relationship implies a mean reversion rate of 75%. Thus, the expected correlation for May 2014 is 33.0%:

\[ S_t = a(\mu - S_{t-1}) + S_{t-1} \]

\[ S_t = 0.75(32\% - 36\%) + 0.36 = 0.33 \]
4. A  The autocorrelation for a one-period lag is 23% for the same sample. The sum of the mean reversion rate (77%, given the beta coefficient of -0.77) and the one-period autocorrelation rate will always equal 100%.
5. B Equity correlation distributions and default probability correlation distributions are best fit with the Johnson SB distribution. Bond correlation distributions are best fit with the generalized extreme value distribution.
The following is a review of the Market Risk Measurement and Management principles designed to address the learning objectives set forth by GARP. This topic is also covered in:
Statistical Correlation Models - Can We Apply Them to Finance?
Topic 8
Exam Focus
This topic addresses the limitations of financial models and popular statistical correlation measures such as the Pearson correlation measure, the Spearman rank correlation, and the Kendall τ. For the exam, understand that the major limitation of the Pearson correlation coefficient is that most financial variables have nonlinear relationships. Also, be able to discuss the limitations of ordinal correlation measures, such as Spearman's rank correlation and Kendall's τ. These nonparametric measures do not require assumptions about the underlying joint distributions of variables; however, applications of ordinal risk measures are limited to ordinal variables where only the rankings are important instead of actual numerical values.
Limitations of Financial Models

LO 7.2: Calculate a mean reversion rate using standard regression and calculate the corresponding autocorrelation.
Mean reversion implies that over time, variables or returns regress back to the mean or average return. Empirical studies reveal evidence that bond values, interest rates, credit spreads, stock returns, volatility, and other variables are mean reverting. For example, during a recession, demand for capital is low. Therefore, interest rates are lowered to encourage investment in the economy. Then, as the economy picks up, demand for capital increases and, at some point, interest rates will rise. If interest rates are too high, demand for capital decreases and interest rates decrease and approach the long-run average. The level of interest rates is also a function of monetary and fiscal policy and not just supply and demand levels of capital.
Mean reversion is statistically defined as a negative relationship between the change in a variable over time, S_t - S_{t-1}, and the variable in the previous period, S_{t-1}:
\[ \frac{\partial(S_t - S_{t-1})}{\partial S_{t-1}} < 0 \]
In this equation, S_t is the value of the variable at time period t, S_{t-1} is the value of the variable in the previous period, and ∂ is the partial derivative coefficient. Mean reversion exists when S_{t-1} increases (decreases) by a small amount, causing S_t - S_{t-1} to decrease (increase) by a small amount. For example, if S_{t-1} increases and is high at time period t - 1, then mean reversion causes the next value, S_t, to reverse and decrease toward the long-run average or mean value. The mean reversion rate is the degree of the attraction back to the mean and is also referred to as the speed or gravity of mean reversion. The mean reversion rate, a, is expressed as follows:
\[ S_t - S_{t-1} = a(\mu - S_{t-1})\Delta t + \sigma_S \varepsilon \sqrt{\Delta t} \]
If we are only concerned with measuring mean reversion, we can ignore the last term, σ_S ε √Δt, which is the stochastic part of the equation requiring random samples from a distribution over time. By ignoring the last term and assuming Δt = 1, the mean reversion rate equation simplifies to:

\[ S_t - S_{t-1} = a(\mu - S_{t-1}) \]
Example: Calculating mean reversion
Suppose mean reversion exists for a variable with a value of 50 at time period t - 1. The long-run mean value, μ, is 80. What are the expected changes in value of the variable over the next period, S_t - S_{t-1}, if the mean reversion rate, a, is 0, 0.5, or 1.0?
Answer:
If the mean reversion rate is 0, there is no mean reversion and there is no expected change. If the mean reversion rate is 0.5, there is a 50% mean reversion and the expected change is 15 [i.e., 0.5 x (80 - 50)]. If the mean reversion rate is 1.0, there is 100% mean reversion and the expected change is 30 [i.e., 1.0 x (80 - 50)]. Thus, a stronger or faster mean reversion is expected with a higher mean reversion rate.
Standard regression analysis is one method used to estimate the mean reversion rate, a. We can think of the mean reversion rate equation in terms of a standard regression equation (i.e., Y = α + βX) by applying the distributive property to reformulate the right side of the equation:

\[ S_t - S_{t-1} = a\mu - aS_{t-1} \]

Thinking of this equation in terms of a standard regression implies the following terms in the regression equation:

\[ S_t - S_{t-1} = Y; \quad a\mu = \alpha; \quad -aS_{t-1} = \beta X \]
A regression is run where S_t - S_{t-1} (i.e., the Y variable) is regressed with respect to S_{t-1} (i.e., the X variable). Thus, the β coefficient of the regression is equal to the negative of the mean reversion rate, a.
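As an illustration (simulated data and assumed parameters, not the study's data), the Python sketch below generates a mean-reverting series with a = 0.78 and mu = 0.35, regresses S_t - S_{t-1} on S_{t-1}, and recovers the mean reversion rate as the negative of the slope.

import numpy as np

# Simulate a mean-reverting correlation series: S_t - S_{t-1} = a*(mu - S_{t-1}) + sigma*eps
a_true, mu, sigma, n = 0.78, 0.35, 0.05, 480
rng = np.random.default_rng(7)
s = np.empty(n)
s[0] = mu
for t in range(1, n):
    s[t] = s[t - 1] + a_true * (mu - s[t - 1]) + sigma * rng.standard_normal()

# Regress Y = S_t - S_{t-1} on X = S_{t-1}; the slope is -a and the intercept is a*mu
y = np.diff(s)
x = s[:-1]
beta, alpha = np.polyfit(x, y, 1)
print(f"mean reversion rate a = {-beta:.2f}, long-run mean = {alpha / -beta:.2f}")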
From the 1972 to 2012 study, the data resulted in the following regression equation:

Y = 0.27 - 0.78X

The beta coefficient of -0.78 implies a mean reversion rate of 78%. This is a relatively high mean reversion rate. Thus, if there is a large decrease (increase) from the mean correlation
for one month, the following month is expected to have a large increase (decrease) in correlation.
Example: Calculating expected correlation
Suppose that in October 2012, the average monthly correlation for all Dow stocks was 30% and the long-run correlation mean of Dow stocks was 35%. A risk manager runs a regression, and the regression output estimates the following regression relationship: Y = 0.273 - 0.78X. What is the expected correlation for November 2012 given the mean reversion rate estimated in the regression analysis? (Solve for S_t in the mean reversion rate equation.)
Answer:
There is a 5% difference between the long-run mean correlation and the October 2012 correlation (35% - 30% = 5%). The β coefficient in the regression relationship implies a mean reversion rate of 78%. The November 2012 correlation is expected to revert 78% of the difference back toward the mean. Thus, the expected correlation for November 2012 is 33.9%:

\[ S_t = a(\mu - S_{t-1}) + S_{t-1} \]

\[ S_t = 0.78(35\% - 30\%) + 0.30 = 0.339 \]
Autocorrelation measures the degree that a current variable value is correlated to past values. Autocorrelation is often calculated using an autoregressive conditional heteroskedasticity (ARCH) model or a generalized autoregressive conditional heteroskedasticity (GARCH) model. An alternative approach to measuring autocorrelation is running a regression equation. In fact, autocorrelation has the exact opposite properties of mean reversion.
Mean reversion measures the tendency to pull away from the current value back to the long-run mean. Autocorrelation instead measures the persistence to pull toward more recent historical values. The mean reversion rate in the previous example was 78% for Dow stocks. Thus, the autocorrelation for a one-period lag is 22% for the same sample. The sum of the mean reversion rate and the one-period autocorrelation rate will always equal one (i.e., 78% + 22% = 100%).
Autocorrelation for a one-period lag is statistically defined as:

\[ AC(\rho_t, \rho_{t-1}) = \frac{\mathrm{cov}(\rho_t, \rho_{t-1})}{\sigma(\rho_t) \times \sigma(\rho_{t-1})} \]

The term AC(ρ_t, ρ_{t-1}) represents the autocorrelation of the correlation from time period t and the correlation from time period t - 1. For this example, the ρ_t term can represent the correlation matrix for Dow stocks on day t, and the ρ_{t-1} term can represent the correlation
matrix for Dow stocks on day t - 1. The covariance between the correlation measures, cov(ρ_t, ρ_{t-1}), is calculated the same way covariance is calculated for equity returns.
This autocorrelation equation was used to calculate the one-period lag autocorrelation of Dow stocks for the 1972 to 2012 time period, and the result was 22%, which is identical to subtracting the mean reversion rate from one. The study also used this equation to test autocorrelations for 1- to 10-day lag periods for Dow stocks. The highest autocorrelation of 26% was found using a two-day lag, which compares the time period t correlation with the t - 2 correlation (two months prior). The autocorrelation for longer lags decreased gradually to approximately 10% using a 10-day lag. It is common for autocorrelations to decay with longer time period lags.
Professor's Note: The autocorrelation equation is exactly the same as the correlation coefficient. Correlation values for time period t and t - 1 are used to determine the autocorrelation between the two correlations.
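A brief Python sketch of the one-period autocorrelation calculation on a series of average monthly correlation values (the numbers are hypothetical); it is simply the Pearson correlation between the series and its one-period lag.

import numpy as np

# Hypothetical series of average monthly Dow correlations
rho = np.array([0.30, 0.35, 0.28, 0.40, 0.33, 0.31, 0.37, 0.29, 0.36, 0.32])

# AC(rho_t, rho_{t-1}) = cov(rho_t, rho_{t-1}) / (sigma(rho_t) * sigma(rho_{t-1}))
ac_1 = np.corrcoef(rho[1:], rho[:-1])[0, 1]
print(f"one-period autocorrelation: {ac_1:.2f}")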
Best-Fit Distributions for Correlations

LO 7.1: Describe how equity correlations and correlation volatilities behave throughout various economic states.
The recent financial crisis of 2007 provided new information on how correlation changes during different economic states. From 1972 to 2012, an empirical investigation of the correlations of the 30 common stocks of the Dow Jones Industrial Average (Dow) was conducted. The correlation statistic was used to create a 30 x 30 correlation matrix of the Dow stocks every month. This required 900 correlation calculations (30 x 30 = 900). There were 490 months in the study, so 441,000 monthly correlations were computed (900 x 490 = 441,000). However, the correlations of each stock with itself were eliminated from the study, resulting in a total of 426,300 monthly correlations (441,000 - 30 x 490 = 426,300).
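The counting logic above can be mirrored in a few lines of Python (random returns are used purely for illustration): build the 30 x 30 correlation matrix for one month of returns and average the 870 off-diagonal entries; repeating this for 490 months would give the 426,300 correlations described in the study.

import numpy as np

rng = np.random.default_rng(3)
returns = rng.normal(0.0, 0.02, size=(21, 30))   # one month of daily returns for 30 hypothetical stocks

corr = np.corrcoef(returns, rowvar=False)        # 30 x 30 correlation matrix (900 entries)
off_diag = corr[~np.eye(30, dtype=bool)]         # drop the 30 self-correlations, leaving 870
print(f"average pairwise correlation for the month: {off_diag.mean():.3f}")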
The average correlation values were compared for three states of the U.S. economy based on gross domestic product (GDP) growth rates. The state of the economy was defined as an expansionary period when GDP growth was greater than 3.5%, a normal economic period when GDP growth was between 0% and 3.5%, and a recession when there were two consecutive quarters of negative growth rates. Based on these definitions, from 1972 to 2012 there were six recessions, five expansionary periods, and five normal periods.
The average monthly correlation and correlation volatilities were then compared for each state of the economy. Correlation levels during a recession, normal period, and expansionary period were 37.0%, 32.7%, and 27.5%, respectively. Thus, as expected, correlations were highest during recessions when common stocks in equity markets tend to go down together. The low correlation levels during an expansionary period suggest common stock
valuations are determined more on industry and company-specific information rather than macroeconomic factors.
The correlation volatilities during a recession, normal period, and expansionary period were 80.5%, 83.4%, and 71.2%, respectively. These results may seem a little surprising at first as one may expect volatilities are highest during a recession. However, there is perhaps slightly more uncertainty in a normal economy regarding the overall direction of the stock market. In other words, investors expect stocks to go down during a recession and up during an expansionary period, but they are less certain of direction during normal times, which results in higher correlation volatility.
Professor's Note: The main lesson from this portion of the study is that risk managers should be cognizant of high correlation and correlation volatility levels during recessions and times of extreme economic distress when calibrating risk management models.
Mean Reversion and Autocorrelation