LO 43.1: Compare the basic indicator approach, the standardized approach, and the alternative standardized approach for calculating the operational risk capital charge, and calculate the Basel operational risk charge using each approach.
Basel II proposed three approaches for determining the operational risk capital requirement (i.e., the amount of capital needed to protect against the possibility of operational risk losses). The basic indicator approach (BIA) and the standardized approach (TSA) determine capital requirements as a multiple of gross income at either the business line or institutional level. The advanced measurement approach (AMA) offers institutions the possibility to lower capital requirements in exchange for investing in risk assessment and management technologies. If a firm chooses to use the AMA, calculations will draw on the underlying elements illustrated in Figure 1.
Figure 1: Role of Capital Modeling in the Operational Risk Framework
[The figure places risk measurement and modeling, and risk reporting, at the center of the framework, fed by internal loss data, risk and control self-assessment, external loss data, scenario analysis, and key risk indicators (KRIs), all operating within the firm's risk governance, risk appetite, risk culture, and risk policies and procedures.]
Basel II encourages banks to develop more sophisticated operational risk management tools and expects international banks to use either the standardized approach or advanced measurement approach. In fact, many nations require large financial institutions to calculate operational risk with the AMA in order to be approved for Basel II.
Basic Indicator Approach
With the BIA, operational risk capital is based on 15% of the bank's annual gross income (GI) over a three-year period. Gross income in this case includes both net interest income and noninterest income. The capital requirement, K_BIA, under this approach is computed as follows:
K_BIA = [Σ(GI) × α] / n
where:
GI = annual (positive) gross income over the previous three years
n = number of years in which gross income was positive
α = 15% (set by the Basel Committee)
Firms using this approach are still encouraged to adopt the risk management elements outlined in the Basel Committee on Banking Supervision, Risk Management Group, Sound Practices for the Management and Supervision of Operational Risk. When a firm uses the BIA, it does not need loss data, risk and control self-assessment, scenario analysis, and business environment internal control factors (BEICF) for capital calculations. However, these data elements are needed as part of an operational risk framework to ensure risks are adequately identified, assessed, monitored, and mitigated.
Example 1: Calculating BIA capital charge
Assume Omega Bank has the following revenue results from the past three years:
Annual Gross Revenue (in $100 millions)
Year 1: 25
Year 2: 30
Year 3: 35
Calculate the operational risk capital requirement under the BIA.
Answer:
K_BIA = [(25 + 30 + 35) × 0.15] / 3 = 4.5
Thus, Omega Bank must hold $450 million in operational risk capital under Basel II using the basic indicator approach.
Example 2: Calculating BIA capital charge
Assume Theta Bank has the following revenue results from the past three years:
Annual Gross Revenue (in $100 millions)
Year 1: 10
Year 2: -5
Year 3: 15
Calculate the operational risk capital requirement under the BIA.
Answer:
Because Year 2 is negative, it will not count toward the sum of gross income over the past three years. This will also reduce the value of n to two.
K_BIA = [(10 + 15) × 0.15] / 2 = 1.875
Thus, Theta Bank must hold $187.5 million in operational risk capital under Basel II using the basic indicator approach.
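The BIA arithmetic can also be checked with a short script. The following Python sketch (an illustration only, not part of the assigned reading) reproduces both examples; gross income figures are expressed in $100 millions.

```python
def bia_capital_charge(gross_income, alpha=0.15):
    """Basic indicator approach: average positive gross income times alpha.

    Negative or zero years are excluded from both the numerator and the
    year count n, per the treatment in Examples 1 and 2.
    """
    positive_years = [gi for gi in gross_income if gi > 0]
    if not positive_years:
        return 0.0
    return sum(positive_years) * alpha / len(positive_years)

# Example 1 (Omega Bank): (25 + 30 + 35) x 0.15 / 3 = 4.5 -> $450 million
print(bia_capital_charge([25, 30, 35]))   # 4.5

# Example 2 (Theta Bank): Year 2 is negative, so n = 2 -> 1.875 -> $187.5 million
print(bia_capital_charge([10, -5, 15]))   # 1.875
```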
The BIA for risk capital is simple to adopt, but it is an unreliable indication of the true capital needs of a firm because it uses only revenue as a driver. For example, if two firms had the same annual revenue over the last three years but widely different risk controls, their capital requirements would be the same. Note also that operational risk capital requirements can be greatly affected by a single year's extraordinary revenue when risk at the firm has not materially changed.
The Standardized Approach
For the standardized approach (TSA), the bank uses eight business lines with different beta factors to calculate the capital charge. With this approach, the beta factor of each business line is multiplied by the annual gross income amount over a three-year period. The results are then summed to arrive at the total operational risk capital charge under the standardized approach. The standardized approach attempts to capture operational risk factors not covered by the BIA by assuming that different business activities carry different levels of operational risk. The beta factors used in this approach are as follows:
Investment banking (corporate finance): 18%.
Investment banking (trading and sales): 18%.
Retail banking: 12%.
Commercial banking: 15%.
Settlement and payment services: 18%.
Agency and custody services: 15%.
Asset management: 12%.
Retail brokerage: 12%.
Any negative capital charges from business lines can be offset up to a maximum of zero capital. The capital requirement, K_TSA, under this approach is computed as follows:
K_TSA = {Σ over Years 1-3 of max[Σ(GI_1-8 × β_1-8), 0]} / 3
where:
GI_1-8 = annual gross income in a given year for each of the eight business lines
β_1-8 = beta factors (fixed percentages for each business line)
In the following examples, Gamma Bank has only three lines of business and uses the standardized approach for its operational risk capital calculation.
Example 1: Calculating TSA capital charge
Assume Gamma Bank has the following revenue (in $100 millions) for the past three years for its three lines of business: trading and sales, commercial banking, and asset management.
Business Line | Year 1 | Year 2 | Year 3
Trading and Sales | 10 | 15 | 20
Commercial Banking | 5 | 10 | 15
Asset Management | 10 | 10 | 10
Calculate the operational risk capital requirement under TSA.
Answer:
To calculate TSA capital charge, we first incorporate the relevant beta factors as follows:
Business Line | Year 1 | Year 2 | Year 3
Trading and Sales | 10 × 18% = 1.80 | 15 × 18% = 2.70 | 20 × 18% = 3.60
Commercial Banking | 5 × 15% = 0.75 | 10 × 15% = 1.50 | 15 × 15% = 2.25
Asset Management | 10 × 12% = 1.20 | 10 × 12% = 1.20 | 10 × 12% = 1.20
Total | 3.75 | 5.40 | 7.05
Next, enter these totals into the capital charge calculation as follows:
K_TSA = (3.75 + 5.40 + 7.05) / 3 = 5.4
Thus, Gamma Bank must hold $540 million in operational risk capital under Basel II using the standardized approach.
Example 2: Calculating TSA capital charge
If Delta Bank has negative revenue in any business line, it can offset capital charges that year up to a maximum benefit of zero capital. Delta Bank has had the following revenue (in $100 millions) for the past three years for its two lines of business: corporate finance and retail banking.
Business Line | Year 1 | Year 2 | Year 3
Corporate Finance | 5 | 10 | 15
Retail Banking | 5 | -25 | 5
Calculate the operational risk capital requirement under TSA.
Answer:
Business Line | Year 1 | Year 2 | Year 3
Corporate Finance | 5 × 18% = 0.90 | 10 × 18% = 1.80 | 15 × 18% = 2.70
Retail Banking | 5 × 12% = 0.60 | -25 × 12% = -3.00 | 5 × 12% = 0.60
Total | 1.50 | -1.20 | 3.30
Because a negative number cannot be used in the numerator, we replace the -1.2 in Year 2 with zero. However, unlike the BIA, the number of years remains at three. Entering these totals into the capital charge calculation yields:
K_TSA = (1.5 + 0 + 3.3) / 3 = 1.6
Thus, Delta Bank would hold $160 million in operational risk capital under Basel II using the standardized approach.
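For illustration only (not from the assigned reading), the TSA mechanics, including the zero floor applied to each year's total, can be sketched in Python; the beta values follow the list given earlier, and gross income is again in $100 millions.

```python
# Basel II beta factors by business line (as listed earlier in this topic)
BETAS = {
    "corporate finance": 0.18,
    "trading and sales": 0.18,
    "retail banking": 0.12,
    "commercial banking": 0.15,
    "settlement and payment services": 0.18,
    "agency and custody services": 0.15,
    "asset management": 0.12,
    "retail brokerage": 0.12,
}

def tsa_capital_charge(gross_income_by_line):
    """Standardized approach: beta-weighted gross income summed within each year,
    floored at zero for that year, then averaged over the three years.

    gross_income_by_line maps a business line name to a list of three
    annual gross income figures.
    """
    n_years = 3
    yearly_totals = []
    for year in range(n_years):
        total = sum(BETAS[line] * gi[year] for line, gi in gross_income_by_line.items())
        yearly_totals.append(max(total, 0.0))   # a negative year is replaced with zero
    return sum(yearly_totals) / n_years

# Example 2 (Delta Bank): year totals 1.5, -1.2 -> 0, 3.3 -> (1.5 + 0 + 3.3) / 3
print(tsa_capital_charge({
    "corporate finance": [5, 10, 15],
    "retail banking": [5, -25, 5],
}))   # approximately 1.6 -> $160 million
```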
Alternative Standardized Approach
Under Basel II, a bank can be permitted to use the alternative standardized approach (ASA) provided it can demonstrate an ability to minimize double counting of certain risks. The ASA is identical to the standardized approach except for the calculation methodologies in the retail and commercial banking business lines. For these business lines, gross income is replaced with loans and advances times a multiplier, which is set equal to 0.035. Under the ASA, the beta factor for both retail and commercial banking is set to 15%. The capital requirement for the retail banking business line, K_RB (which is calculated the same way for commercial banking), is computed as follows:
K_RB = β_RB × LA_RB × m
where:
β_RB = beta factor for the retail banking business line (15%)
LA_RB = average total outstanding retail loans and advances over the past three years
m = multiplier (0.035)
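As a rough illustration (the loans-and-advances figure below is hypothetical, not from the reading), the ASA retail banking charge is a simple product of the three terms:

```python
def asa_retail_or_commercial_charge(avg_loans_advances, beta=0.15, m=0.035):
    """ASA charge for the retail (or commercial) banking line:
    beta times average outstanding loans and advances times the 0.035 multiplier."""
    return beta * avg_loans_advances * m

# Hypothetical figure: $200 billion of average outstanding retail loans and advances
print(asa_retail_or_commercial_charge(200e9))   # 1.05e9 -> $1.05 billion
```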
Negative gross income can produce very different results under the two approaches. Consider a bank with two business lines, corporate finance (beta = 18%) and retail banking (beta = 12%), that earns gross income of 5 in each line in Year 1 and has negative gross income in Years 2 and 3. Under this scenario, the standardized approach will compute a capital charge of $50 million as follows:
K_TSA = (1.5 + 0 + 0) / 3 = 0.5
However, recall that the BIA applies a fixed 15% of gross income and reduces the value of n when negative gross income is present. Thus, under the same scenario, the BIA will compute a capital charge of $150 million as follows:
K_BIA = [(5 + 5) × 0.15] / 1 = 1.5
Therefore, this bank would hold only $50 million in operational risk capital using TSA but $150 million under the BIA. The Basel Committee has recognized that capital under Pillar 1 (minimum capital requirements) may be distorted and, therefore, recommends that additional capital should be added under Pillar 2 (supervisory review) if negative gross income leads to unanticipated results.
Advanced Measurement Approach

LO 42.4: Describe the Societe Generale operational loss event and explain the lessons learned from the event.
In January 2008, it was discovered that one of Societe Generale's junior traders, Jerome Kerviel, was involved in rogue trading activities, which ultimately resulted in losses of €4.9 billion. The multinational bank was fined €4 million, and Mr. Kerviel was sentenced to three years in prison. The incident damaged the reputation of Societe Generale and required the bank to raise additional funds to meet capital needs.
Between July 2005 and January 2008, Kerviel established large, unauthorized positions in futures contracts and equity securities. To hide the size and riskiness of these unauthorized positions, he created fake transactions that offset the price movements of the actual positions. Kerviel created fake transactions with forward start dates and then used his knowledge of control personnel confirmation timing to cancel these trades right before any confirmations took place. Given the need to continuously replace fake trades with new ones, Kerviel created close to 1,000 fictitious trades before the fraud was finally discovered.
The operational risk world was galvanized by this event as it demonstrated the dangers of unmitigated operational risk. In 2008, many firms were developing operational risk frameworks and often focused on the delivery of new reporting, loss data tools, and adaptations to their scenario analysis programs. However, even though firms were developing internal risk systems, the amount of new regulatory requirements rapidly overcame their ability to keep up in practice. With the news of Mr. Kerviel's activities, many heads of operational risk found themselves asking the question, "Could that happen here?"
IBM Algo FIRST provided an analysis based on press reviews. The highlights of alleged contributing factors to this operational loss event are summarized as follows: 1. Mr. Kerviel was involved in extensive unauthorized trading activities.
2. Mr. Kerviel was not sufficiently supervised.
3. Mr. Kerviel used his knowledge of middle and back office controls to ensure his fraud went undetected.
4. Mr. Kerviel obtained password access to systems, allowing him to manipulate trade data.
A number of reasons were cited that explained how Kerviel's unauthorized trading activity went undetected, including the incorrect handling of trade cancellations, the lack of proper supervision, and the inability of the bank's trading system to consider gross positions.
Regarding trade cancellations, the bank's system was not equipped to review trading information that was entered and later canceled. In addition, the system was not set up to flag any unusual levels of trade cancellations. Regarding the lack of supervision, oversight of Kerviel's trading activity was weak, especially after his manager resigned in early 2007. Under the new manager, Kerviel's unauthorized trading activity increased significantly. Regarding the size of Kerviel's positions, the bank's system was only set up to evaluate net positions instead of both net and gross positions. Thus, the abnormally large size of his trading positions went undetected. Had the system properly monitored gross positions, it is likely that the large positions would have issued a warning sign given the level of riskiness associated with those notional amounts. Also, the large amount of trading commissions should have raised a red flag to management.
Additional reasons that contributed to the unauthorized positions going undetected included the inaction of Kerviel's trading assistant in reporting fraudulent activity, the violation of the bank's vacation policy, the weak reporting system for collateral and cash accounts, and the lack of investigation into unexpected reported trading gains.
Kerviel's trading assistant had immediate access to Kerviel's trading activities. Because the fictitious trades and the manipulation of the bank's trading system went unreported, it was believed that the trading assistant was acting in collusion with Kerviel. Regarding the bank's vacation policy, the rule that forced traders to take two weeks of vacation in a row was ignored. Had this policy been enforced, another trader would have been responsible for Kerviel's positions and likely would have uncovered the fraudulent activity of rolling fake transactions forward. Regarding collateral and cash reports, the fake transactions did not warrant any collateral or cash movements, so nothing balanced the collateral and cash needs of the actual trades that were being offset. If Societe Generale's collateral and cash reports had been more robust, the bank would have detected unauthorized movements in the levels of these accounts for each individual trader. Regarding reported trading gains, Kerviel inflated trading gains above levels that could be reasonably accounted for given his actual authorized trades. This action should have prompted management to investigate the source of the reported trading gains.
Ultimately, the unauthorized trading positions were discovered by chance after one of Kerviel's fake trades was detected by control personnel during a routine monitoring of positions. Kerviel's inability to explain the fictitious transaction led to a rigorous investigation, revealing the depth of his fraudulent activities.
Lessons to be learned specific to this operational loss event include the following:
- Traders who perform a large number of trade cancellations should be flagged and, as a result, have a sample of their cancellations reviewed by validating details with trading counterparties to ensure cancellations are associated with real trades.
- Tighter controls should be applied to situations that involve a new or temporary manager.
- Banks must check for abnormally high gross-to-net position ratios. High ratios suggest a greater probability of unauthorized trading activities and/or basis risk measurement issues.
- Control personnel should not assume the independence of a trading assistant's actions. Trading assistants often work under extreme pressure and, thus, are susceptible to bullying tactics given that job performance depends on following direction from traders.
- Mandatory vacation rules should be enforced.
- Requirements for collateral and cash reports must be monitored for individual traders.
- Profit and loss activity that is outside reasonable expectations must be investigated by control personnel and management. Reported losses or gains can be compared to previous periods, forecasted values, or peer performance.
Key Concepts
LO 42.1 Operational risk departments look at events outside the firm to gain valuable insights and inputs into operational risk capital calculations. External events can also be useful in many areas of a firm's operational risk framework, as they provide information useful for risk self-assessment activities. These events are key inputs in scenario analysis and can help in developing key risk indicators for monitoring the business environment. Additionally, external data is a required element in the advanced measurement approach (AMA) capital calculation.
Subscription databases include descriptions and analyses of operational risk events, which are derived from legal and regulatory sources and news articles. In addition to database systems, there are also consortium-based risk event services that provide a central data repository to member firms and can offer benchmarking services as well. ORX is a provider of this type of data.
LO 42.2 When comparing data in the FIRST and ORX databases, we see significant differences between them. The FIRST database has a significantly higher percentage of losses for Internal Fraud than does ORX. In contrast, ORX has a significantly higher percent of Execution, Delivery, and Process Management losses. This could be because not all Execution, Delivery, and Process Management events are reported by the press, implying the FIRST database is missing many events and has an unavoidable collection bias.
Another difference between the two databases with respect to Execution, Delivery, and Process Management events is that ORX data is supplied directly from member banks. However, not all banks are ORX members, implying that ORX likely also suffers from collection bias. This is in contrast to the FIRST database, which collects data on all firms, including a significant number of firms outside of Basel II compliance.
LO 42.3 ORX and FIRST databases must be viewed with caution, as there are several challenges with using external data. For example, external data derived from the media is subject to reporting bias because the press is far more likely to cover illegal and dramatic events. The use of benchmark data is also a concern, as there is a chance that comparisons are not accurate because of different interpretations of the underlying database definitions.
One of the best ways to use external data is not to spot exact events to be avoided but rather to determine the types of errors and control failings that could cause similar losses. External data can still have a valuable role in operational risk management if staff acknowledge its limitations. Databases can provide valuable lessons about risk management and highlight trends in the industry.
LO 42.4 Jerome Kerviel, a junior trader at Societe Generale, participated in unauthorized trading activity and concealed this activity with fictitious offsetting transactions. The fraud resulted in €4.9 billion in losses and severely damaged the reputation of Societe Generale.
Concept Checkers
1. Which of the following reasons is least likely to be a motivation for firms to use external data?
A. To provide inputs into operational risk calculations.
B. To engage in risk self-assessment activities.
C. To ignore any operational loss events outside of external loss databases.
D. To use in the advanced measurement approach (AMA) capital calculation.

2. In the IBM Algo FIRST database, which event type accounts for the most risk events?
A. Business Disruptions and Systems Failures.
B. Execution, Delivery, and Process Management.
C. Clients, Products, and Business Practices.
D. Internal Fraud.

3. Which database is likely to suffer from selection bias for Execution, Delivery, and Process Management losses because not all events are reported in the press?
I. IBM Algo FIRST
II. Operational Riskdata eXchange Association (ORX)
A. I only.
B. II only.
C. Both I and II.
D. Neither I nor II.

4. Which of the following statements is least likely to be a limitation of using external databases? External databases:
A. must be viewed with caution.
B. suffer from collection biases.
C. do not report all events.
D. cannot be used in internal calculations.

5. Which of the following statements was not a contributing factor to Jerome Kerviel's activities at Societe Generale? Mr. Kerviel:
A. engaged in extensive unauthorized activities.
B. engaged in rogue trading despite being sufficiently supervised.
C. had knowledge of controls to ensure his activities were not detected.
D. gained password access to back office systems to manipulate data.
Concept Checker Answers
1. C Operational risk departments look at events outside the firm to gain valuable insights and inputs into operational risk capital calculations. Firms should understand that external loss databases only include a sample of potential operational loss events.
2. C Forty-six percent of all records in the FIRST database fall into the category of Clients, Products, and Business Practices, more than any other category.
3. A Because not all Execution, Delivery, and Process Management events are reported by the press, it is likely that the FIRST database is missing many events and, thus, has an unavoidable collection bias.
4. D The use of external databases is critical to firms' operational risk management calculations, an example of which is observing fat tail events at other firms.
5. B Mr. Kerviel was insufficiently supervised according to IBM Algo FIRST.
The following is a review of the Operational and Integrated Risk Management principles designed to address the learning objectives set forth by GARP. This topic is also covered in:
Capital Modeling
Topic 43
Exam Focus
This topic discusses approaches for modeling operational risk capital requirements. Collecting data for loss frequency and loss severity distributions is an important component of allocating operational risk capital among various bank business lines. The loss distribution approach (LDA) models losses with respect to both frequency and severity with the goal of determining the appropriate level of capital. For the exam, be able to compare the approaches for calculating operational risk capital charges and be able to describe the LDA for modeling capital. Approaches for calculating operational risk capital requirements will be covered again in Topics 44 and 38.
Operational Risk Capital Requirements

LO 42.3: Describe the challenges that can arise through the use of external data.
Many firms' operational risk systems not only include ORX and FIRST data but are also supplemented with information from the firm's own research and relevant industry news and journals. However, as we noted previously about the various differences between ORX and FIRST, the databases must be viewed with caution, as there are several challenges with using external data.
For example, external data derived from the media is subject to reporting bias. This is because it is up to the press to decide which events to cover, and the preference is for illegal and dramatic acts. For instance, consider that a large internal trading fraud might get press coverage, while a systems outage might get none. We should also consider that a major gain is less likely to be reported by the media than a major loss. Another barrier to determining whether an event is relevant is that some external events may be ignored because they are
perceived as types of events that "could not happen here." Finally, the use of benchmark data may be a concern because there is a chance that comparisons may not be accurate due to different interpretations of the underlying database definitions.
One of the best ways to use external data is not to spot exact events to be avoided but rather to determine the types of errors and control failings that could cause similar losses. External data may have direct relevance despite differences in the details. For example, the Societe Generale event led many firms to overhaul their fraud controls.
External data can serve a valuable role in operational risk management if its limitations are acknowledged. Databases can provide valuable lessons about risk management and highlight trends in the industry. While internal and external databases only tell us about what has already gone wrong, the data can be used to implement controls to mitigate the chances of similar events repeating, and they provide valuable inputs into the operational risk framework. Loss data is also useful for self-assessment, scenario analysis, and key risk indicators (KRIs) that indicate loss trends and weaknesses in controls.
Societe Generale Operational Loss Event

LO 42.2: Explain ways in which data from different external sources may differ.
Differences in the collection methods between the ORX and the FIRST databases have an interesting impact on the relative distribution of the loss data.
Size and Frequency of Losses by Risk Category
When comparing the size of losses by risk category in the ORX and FIRST databases, we see that the FIRST database has a significantly higher percentage of losses for Internal Fraud than ORX does. In contrast, ORX has a significantly higher percent of Execution, Delivery,
and Process Management losses than does FIRST. This could be because not all Execution, Delivery, and Process Management losses are reported by the press, implying that the FIRST database is missing many events and has an unavoidable collection bias.
The primary difference between these two databases with respect to Execution, Delivery, and Process Management events is that ORX data is supplied directly from member banks, which does not include all banks, implying that ORX also suffers from collection bias. This is in contrast to the FIRST database that collects data on all firms, including a significant number of firms outside of Basel II compliance.
Regarding the frequency of losses by risk category, many Execution, Delivery, and Process Management events are missing from FIRST data, presumably because they rarely get press coverage. ORX has a larger frequency of External Fraud than FIRST, which suggests that such events are often kept from the press. ORX data also shows a large amount of External Fraud due to the participation of retail banks in the consortium. This is because Retail Banking includes credit card services, which causes this category to be driven by numerous small instances of fraud in retail banking and credit cards. The threshold for reporting loss data to ORX is €20,000.
Size and Frequency of Losses by Business Line
When comparing the size of losses by business lines in the ORX and FIRST databases, ORX losses are heavily weighted toward Retail Banking. Also, Commercial Banking accounts for a smaller percentage of losses for ORX than for FIRST, which may be due to recent commercial banking events making it into the press and, therefore, into the FIRST database (but not the ORX database).
Regarding the frequency of losses by business line, ORX data is driven by Retail Banking events, whereas FIRST events are more evenly distributed among the various business lines. Also, the majority of events for ORX and FIRST occur in Retail Banking but by a slimmer margin for the FIRST database.
Challenges with Using External Data

LO 42.1: Explain the motivations for using external operational loss data and common sources of external data.
One reason operational risk departments look at events outside the firm is to gain valuable insights and inputs into operational risk capital calculations. Furthermore, external data is a required element in the advanced measurement approach (AMA) capital calculation under Basel II.
External events can be useful in many areas of the firms operational risk framework, as these events provide information for risk self-assessment activities. They are key inputs in scenario analysis and can help in developing key risk indicators for monitoring the business environment.
Figure 1: External Loss Data in the Operational Risk Framework
[The figure highlights external loss data as one of the inputs, alongside internal loss data, risk and control self-assessment, scenario analysis, and key risk indicators (KRIs), feeding risk measurement and modeling and risk reporting, all within the firm's risk culture and risk policies and procedures.]
Senior management should take an interest in external events because news headlines can provide useful information on operational risk. Examining events among industry peers and competitors helps management understand the importance of effective operational risk management and mitigation procedures. This is why external data is the key to developing a strong culture of operational risk awareness.
An example of a huge risk event that impacted industry discipline is the €4.9 billion trading scandal at Societe Generale, uncovered in January 2008. This internal loss for Societe Generale demonstrated to the financial services industry how operational risk can lead to large losses. In spite of the lessons learned from this experience, the financial industry saw another huge trading loss event at UBS in 2011, which led firms to reassess how they respond to external events and to ensure any lessons learned do not go unheeded.
Sources of External Loss Data
There are many sources of operational risk event data in the form of news articles, journals, and email services. Operational risk system vendors offer access to their database of events, and there are consortiums of operational risk losses as well. External events are a valuable source of information on individual events and also serve as a benchmarking tool for comparing internal loss patterns to external loss patterns. This process provides insight into whether firm losses are reflective of the industry.
Subscription Databases
Subscription databases include descriptions and analyses of operational risk events derived from legal and regulatory sources and news articles. This information allows firms to map events to the appropriate business lines, risk categories, and causes. The primary goal of external databases is to collect information on tail losses and examples of large risk events. An excerpt showing the total operational risk loss percentages to date by risk category in the IBM Algo FIRST database is shown in Figure 2.
Figure 2: Operational Risk Losses Recorded in IBM Algo FIRST (Q4 2012)
Event Type | % of Losses | % of Events
Business Disruption and System Failures | 0.41% | 1.54%
Clients, Products, and Business Practices | 48.25% | 46.11%
Damage to Physical Assets | 19.22% | 3.18%
Employment Practices and Workplace Safety | 0.88% | 5.97%
Execution, Delivery, and Process Management | 6.68% | 7.28%
External Fraud | 3.94% | 9.71%
Internal Fraud | 20.63% | 26.20%
Total | 100% | 100%
Through these statistics, we can see some patterns in operational risk events. For example, 46% of all records fall into the category of Clients, Products, and Business Practices, accounting for 48% of dollar value losses. Internal Fraud is another large area of risk events, with 26% of records and 21% of losses. Damage to Physical Assets is the next most expensive category with 19% of losses but only 3% of events.
Figure 2 shows us that within an external database such as IBM Algo FIRST (FIRST), operational risk losses from Internal Fraud, Damage to Physical Assets, and Clients, Products, and Business Practices are more significant than those from other categories. However, keep in mind that the FIRST database includes business lines that are not part of the Basel-specified business lines. This results in relatively high Damage to Physical Assets losses, as insurance company losses are included in that category.
In Figure 3, we see subsets of losses from the FIRST database. (Note that any losses not attributed to one of the Basel business lines have been removed.)
Figure 3: FIRST Losses by Business Line (Q4 2012)
Business Line | % of Losses | % of Events
Agency Services | 0.35% | 2.22%
Asset Management | 14.40% | 16.37%
Commercial Banking | 23.42% | 17.70%
Corporate Finance | 17.56% | 9.00%
Payment and Settlement | 2.72% | 5.90%
Retail Banking | 23.67% | 20.79%
Retail Brokerage | 1.30% | 10.33%
Trading and Sales | 16.58% | 17.70%
Total | 100% | 100%
Figure 3 shows about 10% of events occur in the Retail Brokerage business line, but these retail brokerage events account for only 1% of losses because average losses in this business line are relatively small. Conversely, we see that Corporate Finance generated 9% of events but accounted for 18% of losses. Clearly, average losses in Corporate Finance tend to be more expensive.
We should keep in mind that this analysis is based on publicly available data for operational risk events, which is subject to reporting bias. The FIRST database is useful for financial services firms to compare their risk profiles to the industry by category and business line. FIRST provides insights into events that may not have occurred at a particular firm in the risk modeling process.
Consortium Data
Besides the FIRST approach to collecting data, there are also consortium-based risk event services that provide a central data repository. Operational Riskdata eXchange Association (ORX) is a provider of this data, which is gathered from members to provide benchmarking. ORX applies quality assurance standards to keep all receipt and delivery of data anonymous and to provide consistency in definitions of events.
Unlike subscription services, ORX data does not suffer from the availability bias that skews the FIRST data (which relies on public sources of information). With ORX, all events are entered anonymously into the database; however, the data relates only to a small subset of firms that are members of the consortium. ORX also uses different business lines than FIRST. For example, it splits retail banking into two groups: Retail Banking and Private
Banking. It also renames Payment and Settlement to Clearing. The ORX database has gathered nearly 30,000 events costing its members over €100 billion, which helps highlight the potential costs of operational risk.
ORX publishes reports that summarize the number and amount of losses for each business line and risk category. Regarding the reported contributions, the Retail Banking business area generates 38% of events; most of them in the External Fraud category. Trading and Sales and Commercial Banking follow with about 10% of total events each. Retail Banking has the biggest share of total costs at 46% of total losses. Execution, Delivery, and Process Management produce the largest number of events (36%), with 23% of total costs. Also, Clients, Products, and Business Practices accounts for about 17% of events but more than 50% of losses, which demonstrates that for members of ORX, these events tend to be large. Many firms use information from this category to conduct scenario analysis for potential fat tail events. Data representing dollar value losses of operational risk for each business line is shown in Figure 4.
Figure 4: Dollar Value Losses by Risk Category and Business Line
[Bar chart of total loss amounts (scale from $0 to $60 billion) for each business line: Retail Banking, Trading and Sales, Commercial Banking, Corporate Finance, Retail Brokerage, Agency Services, Asset Management, Private Banking, Corporate Items, and Clearing. Each bar is broken down by event type: Internal Fraud; External Fraud; Employment Practices and Workplace Safety; Clients, Products, and Business Practices; Damage to Physical Assets; Business Disruption and System Failure; and Execution, Delivery, and Process Management.]
Subscription vs. Consortium Databases

LO 41.6: Explain the role of operational risk governance and explain how a firm's organizational structure can impact risk governance.
A key factor in creating a successful OpRisk framework is the organizational design of the risk management framework. Developing an understanding of reporting lines is just as important as developing good measurement tools and key risk indicators. All stakeholders for the organization should be informed of the OpRisk framework to help ensure that data is collected accurately and reflects the systems in place. The way in which risk is managed in an organization and the internal governance is an important aspect of OpRisk management.
There are four main organizational designs for integrating the OpRisk framework within the organization. Most large firms start at design 1 and progress to design 4 over time. The four organizational designs are illustrated in Figure 11 and summarized below.
Design 1: Central Risk Function Coordinator
In the first risk organizational design, the risk manager is viewed more as a coordinator or facilitator of risk management. This risk management design typically involves only a small Central Risk group who is responsible for OpRisk management. The risk manager gathers all risk data and then reports directly to the Chief Executive Officer (CEO) or Board of Directors. Regulators believe there exists a conflict of interest for reporting risk data directly to management or stakeholders that are primarily concerned with maximizing profits. Thus, this design can only be successful if business units are responsive to the Central Risk function without being influenced by upper management who controls their compensation and evaluates their performance.
Design 2: Dotted Line or Matrix Reporting
Creating a link or dotted line from the business risk managers to the Central Risk function of the organization is the next natural progression in risk organizational design. The dotted line implies that business unit managers are still directly under the influence of the CEO who controls their compensation and evaluates their performance. Thus, this type of framework is only successful if there is a strong risk culture for each business unit that
encourages collaboration with the Central Risk function. Furthermore, this dotted line structure is preferred when there is a culture of distrust of the Central Risk function based on some historical events.
Design 3: Solid Line Reporting
For larger firms that have centralized management, the solid line reporting is more popular. The solid line indicates that each business unit has a risk manager that reports directly to the Central Risk function. This design enables the Central Risk function to more effectively prioritize risk management objectives and goals for the entire firm. The solid line reporting also creates a more homogeneous risk culture for the entire organization.
Design 4: Strong Central Risk Management
Many large firms have evolved into a strong central risk management design either voluntarily or from regulatory pressure. Under this design, there is a Corporate Chief Risk Officer who is responsible for OpRisk management throughout the entire firm. The Central Risk Manager monitors OpRisk in all business units and reports directly to the CEO or Board of Directors. Regulators prefer this structure as it centralizes risk data which makes regulatory supervision easier for one direct line of risk management as opposed to numerous risk managers dispersed throughout various business units of the firm.
Figure 11: Risk Department Organizational Designs
1. Central Risk Function Coordinator
2. Matrix Reporting (Dotted Line)
3. Central Risk Management (Solid Line)
4. Strong Central Risk Management
Key Concepts
LO 41.1 Basel II classifies loss events into seven categories. Loss events in the Execution, Delivery, and Process Management category have a small dollar amount but a very large frequency of occurrence. Losses are more infrequent but very large in the Clients, Products, and Business Practices category.
LO 41.2 Thresholds for collecting loss data should not be set too low if there are business units that have a very large number of smaller losses. Another important issue to consider in the process of collecting loss data is the timeframe for recoveries. Time horizons for complex loss events can stretch out for as much as five years or longer.
The International Accounting Standards Board (IASB) prepared IAS37, which states that loss provisions: (1) are not recognized for future operating losses, (2) are recognized for onerous contracts where the costs of fulfilling obligations exceed expected economic benefits, and (3) are only recognized for restructuring costs when a firm has a detailed restructuring plan in place.
LO 41.3 Risk control self-assessment (RCSA) requires the assessment of risks that provides a rating system and control identification process for the OpRisk framework. Key risk indicators (KRIs) are used to quantify the quality of the control environment with respect to specific business unit processes.
LO 41.4 Expert opinions are drawn from structured workshops and used as inputs in scenario analysis models. A challenge for scenario analysis is that these expert opinions may contain the following biases: presentation, context, availability, anchoring, huddle, gaming, confidence, and inexpert opinion.
LO 41.5 In general, the Clients, Products, and Business Practices unit and the Execution, Delivery, and Process Management unit have the largest losses based on OpRisk profiles across financial sectors in terms of severity and frequency of losses.
LO 41.6 There are four main organizational designs for integrating an OpRisk framework. Most large firms evolve from design 1 to design 4 over time. The primary difference in the designs is how risk is reported and the link between separate business unit risk managers and the Central Risk function.
Concept Checkers
1. Suppose a broker-dealer has a loss that occurs from a failure in properly processing and settling a transaction. According to Basel II operational risk categories, this type of event loss would be categorized as:
A. Business Disruption and System Failures.
B. Clients, Products, and Business Practices.
C. Execution, Delivery, and Process Management.
D. Employment Practices and Workplace Safety.

2. There are typically four steps used in designing the risk control self-assessment (RCSA) program for a large firm. Which of the following statements is least likely to be a step in the design of that program?
A. Identify and assess risks associated with each business unit's activities.
B. Controls are added to the RCSA program to mitigate risks identified for the firm.
C. Risk metrics and all other OpRisk initiatives are linked to the RCSA program.
D. Reports to regulators are prepared that summarize the degree of OpRisk.

3. Scenario analysis is often used by financial institutions in determining the amount and frequency of losses. Because historical data is often limited for all possible losses, the opinions of experts are often obtained from workshops. These expert opinions are often subject to biases. Which of the following biases refers to the problem that can arise in this group setting where an expert may not be willing to share a conflicting opinion?
A. Huddle bias.
B. Context bias.
C. Availability bias.
D. Anchoring bias.

4. Based on OpRisk profiles across financial sectors, which of the following loss event type categories has the highest frequency and severity of losses?
A. Business Disruption and System Failures.
B. Clients, Products, and Business Practices.
C. External Fraud.
D. Internal Fraud.

5. Which of the following risk organizational design frameworks is preferred by regulators?
A. Central risk function coordinator.
B. Matrix reporting using dotted lines.
C. Solid line reporting to central risk management.
D. Strong central risk management.
Concept Checker Answers
1. C Basel II classifies losses from failed transaction processing or process management from relations with trade counterparties and vendors under the Execution, Delivery, and Process Management category.
2. D The last step in the design of a risk control self-assessment (RCSA) program involves control tests to assess how well the controls in place mitigate potential risks.
3. A Huddle bias suggests that groups of individuals tend to avoid conflicts that can result from different viewpoints or opinions. Availability bias is related to the expert's experience in dealing with a specific event or loss risk. Anchoring bias occurs when an expert limits the range of a loss estimate based on personal knowledge. Context bias occurs when questions are framed in a way that influences the responses of those being questioned.
4. B From the choices listed, the Clients, Products, and Business Practices unit has the highest frequency percentages and severity of loss percentages across business units. The Execution, Delivery, and Process Management unit also has large losses across business units in terms of frequency and severity of losses; however, this category was not listed as a possible choice.
5. D Regulators prefer the strong central risk management design because they can streamline their supervision over one direct line of risk management as opposed to numerous risk managers throughout the firm.
The following is a review of the Operational and Integrated Risk Management principles designed to address the learning objectives set forth by GARP. This topic is also covered in:
External Loss Data
Topic 42
Exam Focus
This topic examines the motivations for using external operational loss data and compares characteristics of loss data from different sources. For the exam, understand why firms are motivated to use external data in their internal operational risk framework development and the types of data that are available. Also, understand the differences in construction methodologies between the ORX and FIRST databases and be able to cite examples of how these differences manifest themselves in the data. Finally, be able to describe the Societe Generale operational loss event.
Collecting External Loss Data

LO 41.5: Compare the typical operational risk profiles of firms in different financial sectors.
Various business units within a financial institution are identified separately in an OpRisk profile. This allows the OpRisk manager to gather data for specific risks of each business unit. For example, an asset management unit typically has greater legal liability problems whereas an investment bank unit has more losses associated with transaction processing operational errors.
Basel II defines level 1 business units into the following categories: Trading and Sales, Corporate Finance, Retail Banking, Commercial Banking, Payment and Settlement, Agency Services, Asset Management, and Retail Brokerage. Large financial institutions typically define business units within their firm based on these Basel II definitions.
Figures 9 and 10 contrast the OpRisk profiles for five of these financial business units with respect to frequency and severity, respectively. The first columns of Figure 9 and Figure 10 summarize the type of event risk for each business unit. The frequency percentages based on the number of loss events are presented for each business unit in Figure 9. The severity percentages based on total dollar amount losses are presented for each business unit in Figure 10.
Figure 9: OpRisk Profiles Showing Frequency (%)
[Table of frequency percentages by event type (Internal Fraud; External Fraud; Employment Practices; Clients, Products, & Business Practices; Physical Asset Damage; System Failures & Business Disruptions; Execution, Delivery, & Process Mgt.) for five business lines: Trading and Sales, Corporate Finance, Retail Banking, Asset Management, and Retail Brokerage.]
Source: 2008 Loss Data Collection Exercise for Operational Risk, BCBS (2009)
Figure 10: OpRisk Profile Showing Severity (%)
[Table of severity percentages (share of total loss amounts) by event type for the same five business lines.]
Source: 2008 Loss Data Collection Exercise for Operational Risk, BCBS (2009)
Across business units, the two categories with the largest percentage of losses are the Clients, Products, and Business Practices (CPBP) unit and the Execution, Delivery, and Process Management (EDPM) unit, which have the largest losses in terms of both frequency and severity of losses.
The number of losses related to the EDPM unit represented the highest frequency percentage and severity percentage for the Trading and Sales business unit in a 2008 survey of financial institutions. This is expected based on the number of trades executed daily by this business unit. Within this business unit, traders are required to execute trades for their firm or clients and then later settle the transactions. The complexity and wide range of products processed increases the possibility that errors may occur in the process. There is also a high frequency percentage and severity percentage related to the CPBP unit. Losses within this category arise from client or counterparty disputes, regulatory fines, and improper advisory activities.
The Corporate Finance business unit primarily provides consulting regarding initial public offerings, mergers and acquisitions, and other strategic planning. Figure 10 suggests that
over 93% of losses fall under the CPBP category. The majority of losses are from litigation by clients arguing that IPOs were mispriced or that other improper advice was given.
The Retail Banking unit has the highest frequency of losses associated with external frauds at 40%. However, external fraud accounts for only about 20% of the total severity percentage. The largest severity percentage for the retail banking sector is the Clients, Products, and Business Practices category with Execution, Delivery, and Process Management as the next highest category.
Prior to the financial crisis of 2007–2009, Asset Management firms had steady increases in assets under management (AUM) as profits were realized across most financial markets in the bull market. Thus, most asset managers did not focus on operational costs. Conversely, after the crisis, all costs became extremely important as AUM were reduced by as much as 40%. The lack of proper controls increased the losses beyond market-related losses.
In addition to the financial crisis, one litigation case reached an unprecedented level and brought an added demand for increased controls. Bernie Madoff's Ponzi scheme caused many individuals to lose all of their investments and pension savings. These events have led to dramatic increases in OpRisk controls for the asset management industry. The asset management industry reduced operational costs by consolidating administration and distribution departments for large geographical regions. In addition, more focus is now concentrated toward reducing operational costs and risk management. Productivity has also seen changes, as illustrated by select financial firms significantly reducing the number of products offered to focus on fewer products on a global scale.
OpRisk, market risk, and credit risk are all concerns for asset management firms. However, economic losses are largely due to OpRisk losses, because credit and market risks do not have an immediate impact on manager fee income. The OpRisk profile for asset management firms reveals the largest frequency and severity percentage in the Execution, Delivery, and Process Management area.
The OpRisk profile for firms in the Retail Brokerage industry can vary to some extent due to the wide range of business strategies, ranging from online to brick-and-mortar broker-dealers. Changes in technologies have significantly increased the speed of trading, and clients of broker-dealers now have direct market access through trading tools. Clients such as hedge funds, mutual funds, insurance companies, or wealthy individuals are able to directly access markets using the broker-dealer's market participant identifier (MPID). This greatly increases the operational risk for broker-dealers, who are responsible for all trades made with their MPID. If trades are not filtered by the broker-dealer, then the risks are even greater.
For example, due to the high speed of trades driven by algorithms and large blocks of trades, a two-minute delay in detecting a mistake could lead to losses approaching three-quarters of a billion dollars. Thus, it is important to integrate pre-trade controls into the system to mitigate the risk of mistakes or entry errors. The OpRisk profile of the retail brokerage industry has the largest frequency and severity percentage in the Clients, Products, and Business Practices area.
There was no loss frequency or severity data provided for the Insurance sector. Perhaps this is due to the fact that firms in the insurance industry are still in the early stages of developing accurate OpRisk frameworks and there is no data available. The insurance sector
is divided into three major insurance types: life, health, and property and casualty. The insurance industry collects premiums for insuring individual losses and the insurer pays for losses incurred by policyholders, thus reducing the possibility of a large loss for any one individual.
In order to properly price the premiums, the insurer must have accurate actuarial calculations. In fact, OpRisk capital requirement models determined by regulators are designed after actuarial calculation models in the property and casualty insurance industry. Some major OpRisks for insurers include misselling products to clients, fraudulent sales techniques, customer frauds, discrimination litigation, and incomplete policy litigation following the 9/11 attacks.
Organizational Structures for Risk Governance

LO 41.4: Describe and assess the use of scenario analysis in managing operational risk, and identify biases and challenges that can arise when using scenario analysis.
Scenario analysis is defined as the process of evaluating a portfolio, project, or asset by changing a number of economic, market, industry, or company-specific factors. Scenario analysis models are especially useful tools for estimating losses when loss experience related to emerging risks is not available to the financial institution. Inputs to scenario analysis models are collected from external data, expert opinions, internal loss trends, or key risk indicators (KRIs). Expert opinions are typically drawn from structured workshops at large financial institutions. However, surveys and individual meetings can also be used to gather expert advice. Studies suggest that most financial firms analyze between 50 and 100 scenarios on an annual basis.
One of the challenges in scenario analysis is taking expert advice and quantifying this advice to reflect possible internal losses for the firm. The following example illustrates how a firm may create a frequency distribution of loss events that can be used in scenario analysis.
Figure 8 illustrates data constructed for a financial institution based on expert inputs. Information is gathered on loss frequencies for pre-determined loss brackets. Thus, a frequency distribution is created to model the probability of losses based on the amount of loss on an annual basis. This frequency distribution is then used in the OpRisk framework for the firm.
Figure 8: Scenario Analysis Model for Loss Frequencies
Loss Bracket                  Number of Losses    Frequency
Over $5,000,000                              3         1.8%
$1,000,000 to $5,000,000                     9         5.4%
$500,000 to $1,000,000                      18        10.7%
$250,000 to $500,000                        25        14.9%
$100,000 to $250,000                        41        24.4%
$50,000 to $100,000                         72        42.9%
Total                                      168       100.0%
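The step from expert bracket counts to a relative frequency distribution can be scripted directly. The following is a minimal sketch only; the dictionary name and print formatting are illustrative choices, and the bracket labels and counts simply mirror Figure 8.

    # Minimal sketch: turn expert-supplied loss-bracket counts (as in Figure 8)
    # into a relative frequency distribution for use in scenario analysis.
    # Bracket labels and counts mirror Figure 8 but are otherwise illustrative.
    bracket_counts = {
        "Over $5,000,000": 3,
        "$1,000,000 to $5,000,000": 9,
        "$500,000 to $1,000,000": 18,
        "$250,000 to $500,000": 25,
        "$100,000 to $250,000": 41,
        "$50,000 to $100,000": 72,
    }

    total = sum(bracket_counts.values())  # 168 loss events in total
    frequencies = {bracket: count / total for bracket, count in bracket_counts.items()}

    for bracket, freq in frequencies.items():
        print(f"{bracket:<26} {freq:6.1%}")
    print(f"{'Total':<26} {total} events")

A distribution built this way can then feed the firm's OpRisk framework, as described above, alongside whatever severity assumptions the firm uses.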
Biases and Challenges of Scenario Analysis
One of the biggest challenges of scenario analysis is that expert opinions are always subject to numerous possible biases, and there is often a disparity of opinions and knowledge regarding the amount and frequency of losses. Such biases are difficult to avoid when conducting scenario analysis. Examples of possible biases relate to presentation, context, availability, anchoring, confidence, huddle, gaming, and inexpert opinion.
Presentation bias occurs when the order in which information is presented impacts the expert's opinion or advice. Another similar type of bias is context bias. Context bias occurs when questions are framed in a way that influences the responses of those being questioned. In the case of scenario analysis, the context or framing of questions may influence the responses of the experts.
Another set of biases is related to the lack of available information regarding loss data for a particular expert or for all experts. Availability bias is related to the expert's experience in dealing with a specific event or loss risk. For example, some experts may have a long career in a particular field and never actually experience a loss over $1 billion. Availability bias can result in over- or underestimating the frequency and amount of loss events. A similar bias is referred to as anchoring bias. Anchoring bias can occur if an expert limits the range of a loss estimate based on personal experiences or knowledge of prior loss events. The information available to an expert can also result in confidence bias. The expert may overestimate or underestimate the amount of risk for a particular loss event if there is limited information or knowledge available about the risk or its probability of occurrence.
Expert opinions are often obtained in structured workshops that have a group setting. This group environment can lead to a number of biases. Huddle bias (also known as anxiety bias) refers to a situation described by behavioral scientists in which individuals in a group setting tend to avoid conflict and withhold information that is unique because it stems from a different viewpoint or opinion. An example of huddle bias is a situation where junior experts do not voice their opinions in a structured workshop because they do not want to disagree in public with senior experts. Another concern in group environments is the possibility of gaming, where some experts may have ulterior motives for not participating or not providing useful information in workshops. A further problem with workshop settings is that top experts in the field may not be willing to join the workshop and prefer to work independently. The absence of top experts then attracts less experienced or junior experts who may provide inexpert opinions, which can lead to inaccurate estimates and poor scenario analysis models.
One technique that can help in scenario analysis is the Delphi technique. This technique originated with the U.S. Air Force in the 1950s and was designed to obtain the most reliable consensus of opinions from a group of experts. It is useful for many applications where limited historical data is available. More specifically, the Delphi technique is often applied in situations that exhibit some of the following issues:
• Precise mathematical models are not available, but subjective opinions can be gathered from experts.
• Experts have diverse backgrounds of experience and expertise, but little experience communicating within expert groups.
• Group meetings are too costly due to time and travel expenses.
• A large number of opinions is required and a single face-to-face meeting is not feasible.

Under the Delphi technique, information is gathered from a large number of participants across various business units, areas of expertise, or geographical regions. The information is then presented in a workshop with representatives from each area. Recommendations are determined by this workshop group and quantified based on a pre-determined confidence level. A basic Delphi technique commonly goes through the following four steps:
1. Discussion and feedback are gathered from a large number of participants who may have diverse exposure and experience with particular risks.
2. Information gathered in step 1 is summarized and presented to a workshop group with representatives from the various locations or business units surveyed.
3. Differences in the feedback from step 2 are evaluated.
4. Final evaluation and recommendations are made based on analysis of the data and feedback from participants and/or respondents.
Operational Risk Profiles

LO 41.3: Explain the use of a Risk Control Self-Assessment (RCSA) and key risk indicators (KRIs) in identifying, controlling, and assessing operational risk exposures.
The control environment plays an important role in mitigating operational losses. The OpRisk manager should map each business unit's processes, risks, and the control mechanisms associated with those processes. For example, Figure 5 illustrates the equity settlement process for an equity trading firm. All major processes for the business unit are identified as the first step in managing risks.
Figure 5: Equity Settlement Process
A risk control self-assessment (RCSA) requires the documentation of risks and provides a rating system and control identification process that is used as a foundation in the OpRisk framework. Once the RCSA is created, it is commonly performed every 12 to 18 months to assess the business unit's operational risks. It is common for financial institutions to seek
expert opinions to help provide qualitative measures for the effectiveness of the RCSA framework. The experts perform an evaluation and color-rate the performance of each process as Red, Amber, or Green (RAG) to indicate the level of risk based on historical process data.
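One way to make the RAG rating reproducible is to map a quantitative control-effectiveness score to the three colors. The sketch below is illustrative only: the 0 to 100 score scale, the cutoff values, and the process names are assumptions, not figures from the reading.

    # Minimal sketch: map a quantitative control-effectiveness score to a
    # Red/Amber/Green (RAG) rating. The 0-100 score scale and cutoffs are
    # assumptions for illustration, not values from the reading.
    def rag_rating(score: float, amber_cutoff: float = 60.0, green_cutoff: float = 85.0) -> str:
        if score >= green_cutoff:
            return "Green"
        if score >= amber_cutoff:
            return "Amber"
        return "Red"

    # Hypothetical scores for steps of the equity settlement process.
    process_scores = {"Trade capture": 92.0, "Confirmation": 71.5, "Settlement": 54.0}
    for process, score in process_scores.items():
        print(f"{process:<15} {score:5.1f} -> {rag_rating(score)}")

In practice, a firm would calibrate the cutoffs to its own RCSA policy and risk appetite rather than the placeholder values used here.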
The following four steps are commonly used in designing an RCSA program:
1. Identify and assess risks associated with each business unit's activities. The manager first identifies key functions in the firm and performs risk scenarios to assess potential losses, the exposure or potential loss amount, and the correlation of the risk to other important aspects of the firm, such as financial, reputational, or performance impacts.
2. Controls are then added to the RCSA program to mitigate the risks identified for the firm. The manager also assesses any residual risk, which often remains even after controls are in place.
3. Risk metrics, such as key risk indicators or internal loss events, are used to measure the success of OpRisk initiatives and are linked to the RCSA program for review. These risk metrics would also include all available external data and risk benchmarks for operational risks.
4. Control tests are performed to assess how effectively the controls in place mitigate potential operational risks.
A major challenge for OpRisk managers is properly interpreting the output data of the aggregated RCSA framework. Outputs could give managers a false sense of security if risks are controlled within tolerances that are set too high. Alternatively, risk managers may weight some risks more heavily and take corrective actions that focus too intensively on specific key measures while devoting too little attention to other important variables.
Key risk indicators (KRIs) are identified and used to quantify the quality of the control environment with respect to specific business unit processes. KRIs are used as indicators for the OpRisk framework in the same way that other quantitative measures are used in market and credit risk models. The collection of reliable data used as KRIs is an important aspect of the self-assessment process. The data collection process may be automated to improve the accuracy of the data, but there will be costs associated with implementation. Even though KRIs may be costly to measure, they provide the best means for measuring and controlling OpRisk for the firm.
Regulators prefer the use of accurate quantitative KRIs in a control environment over more qualitative measures that only indicate whether the firm is getting better or worse based on historical losses. The more qualitative measures used in the example of the equity trading process in Figure 5 can be expanded to incorporate quantitative KRIs. Figure 6 includes examples of KRIs for the equity settlement process to help the firm self-assess the quality of the risk control environment.
The first step in creating an OpRisk model is identifying key factors that may be driving the success or failure of a business process. For example, the daily trade volume may be an important measure used to quantify how well the firm is executing the trade capture process. During the exercise of identifying KRIs, assumptions are made to determine proxies or inputs that drive the process. For example, execution errors are assumed to be greater
on high-volume days. Other examples of KRIs that are used to predict execution errors are the number of securities that were not delivered, trading desk headcount, and system downtime.
An important KRI for the process of matching trades and confirmations is the number of unsigned confirmations. KRIs are used as warning lights or red flags that highlight possible concerns for the firm. For example, when the number of unsigned confirmations older than 30 days, as a percentage of total confirmations, exceeds the target percentage, it indicates a problem area in the confirmation process. Similarly, the number of disputed collateral calls may be a good KRI for the custody and control step. Finally, the number of transactions that failed to clear or settle may be a good KRI for the settlement process.
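KRI red flags of this kind reduce to simple threshold checks. The sketch below is a minimal, hypothetical illustration; the KRI names, observed values, and target levels are assumptions rather than figures from the reading.

    # Minimal sketch: flag KRIs that breach target thresholds, such as unsigned
    # confirmations older than 30 days as a percentage of total confirmations.
    # KRI names, observed values, and targets are illustrative assumptions.
    kri_observations = {
        "unsigned_confirmations_over_30d_pct": 7.5,  # percent of total confirmations
        "disputed_collateral_calls": 3,              # count for the period
        "failed_settlements": 12,                    # count for the period
    }
    kri_targets = {
        "unsigned_confirmations_over_30d_pct": 5.0,
        "disputed_collateral_calls": 5,
        "failed_settlements": 20,
    }

    for kri, observed in kri_observations.items():
        target = kri_targets[kri]
        status = "RED FLAG" if observed > target else "within target"
        print(f"{kri:<38} observed={observed:>6} target={target:>6} -> {status}")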
Figure 6: Key Risk Indicators for an Equity Trading Firm
Collecting data at the lowest level or the cost center level allows information to be aggregated for all locations. This is very advantageous for the RCSA program because the OpRisk manager is then able to drill down or disaggregate the total data for the firm to help pinpoint where potential concerns may be originating.
Some additional examples of common internal control factors that are used to explain specific business environments are summarized in Figure 7.
Figure 7: Examples of Business Environment and Internal Control Factors (BEICFs)
Business Environment Factor    Description
Systems                        Minutes system is down or slow
Information Security           Number of malware or hacking attacks
People                         Headcount of employees, experience
Execution/Processing           Number of transactions or transaction breaks
External data such as stock market indices and market interest rate levels are also used in RCSA frameworks. For example, increased volatility in the equity market can lead to higher volume and higher operational losses for the firm. The insurance industry often relies on external databases to gather information on accidents or losses for areas or geographical regions they are less familiar with. Banks may also use external databases to gather information regarding losses for risks they have not been exposed to and therefore lack any relevant internal data.
Three common methods of gathering external data are: internal development, consortia, and vendors. Under the internal development method, the firm gathers and collates information from media such as news or magazines. This may be the least expensive method, but it may not be as accurate and has the potential to overlook large amounts of relevant data. The most popular consortium for banks is the Operational Riskdata eXchange Association (ORX), which contains large banks in the financial industry. While
this consortium has a relatively low loss reporting threshold, there are often no details on the losses and therefore this data can only be used for measurement. There are a number of vendors who provide detailed analysis on losses that can be used for scenario analysis. However, the loss threshold for vendor data is often much higher and the information may not always be accurate.
Scenario Analysis

LO 41.2: Summarize the process of collecting and reporting internal operational loss data, including the selection of thresholds, the timeframe for recoveries, and reporting expected operational losses.
The foundation of an OpRisk framework is the internally created loss database. Any event that meets a firm's definition of an operational risk event should be recorded in the loss event database and classified based on guidelines in the operational risk event policy. Many firms adopt Basel II categories at the highest level and then customize lower-level entries to match their firm's specific needs. A minimum of five years of historical data is required to satisfy Basel II regulatory guidelines. Collecting and analyzing operational risk events provides valuable insights into a firm's operational risk exposures. When loss data is not collected, regulators could perceive that operational risk management is not a priority for the firm. Usually, once a firm begins to collect loss data, the organization gains a new appreciation of its operational risks.
The collection of data is challenging because large amounts of data must be gathered over diverse geographical areas. The process of gathering data must ensure that it accurately reflects all loss information from all locations. The process should have checks and balances to ensure human errors are not present in gathering data and sending it to the central data collection point. Basel II regulations require a high degree of reliability in the loss data flow from all areas of the financial institution.
Financial institutions often create OpRisk filters to identify potential operational events used in the calculation of operational losses. These OpRisk filters are typically the most expensive part of the process. However, filters provide important added assurance for regulators regarding the accuracy of the data collection process.
Basel II requirements allow financial institutions to select a loss threshold for loss data collection. This threshold amount will have significant implications for the risk profile of business units within the firm. OpRisk managers should not set the threshold for collecting loss data too low (e.g., $0) if there are business units that have a very large number of smaller losses, because it would require a very high amount of reporting. OpRisk managers should also not just think in terms of large OpRisk threshold amounts. The following example illustrates how setting a threshold too high will bias the total losses and therefore the risk profile for a financial institution.
Suppose the OpRisk manager for Bank XYZ sets the threshold at $50,000. Bank XYZ categorized all losses by the amount of the loss into the loss brackets (or buckets) illustrated in Figure 4. The first row of Figure 4 states that there were two losses greater than $4,000,000 in the past year and that the total amount of loss from these two events was $18,242,000. These two losses accounted for 25.3% of the total losses for the year. With a loss threshold set at $50,000, the last two rows, or 28.3% of the total losses for the year, would not be reported. Therefore, if the firm did not set a loss threshold for collecting data, it would show that it actually had $72,136,148 of total losses instead of $51,724,314 (computed as $72,136,148 – $4,480,627 – $15,931,207).
Figure 4: Bank XYZ Total Annual Losses
Loss Bracket                  Events    Loss Amount    Percentage
Over $4,000,000                    2    $18,242,000         25.3%
$1,000,000 to $4,000,000           8    $17,524,400         24.3%
$500,000 to $1,000,000             9     $7,850,425         10.9%
$250,000 to $500,000               7     $1,825,763          2.5%
$100,000 to $250,000              10     $1,784,632          2.5%
$75,000 to $100,000               15     $1,948,971          2.7%
$50,000 to $75,000                18     $2,548,123          3.5%
$25,000 to $50,000                50     $4,480,627          6.2%
Less than $25,000              1,230    $15,931,207         22.1%
Total                          1,349    $72,136,148        100.0%
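The effect of a reporting threshold on the captured loss profile can be verified with a short calculation. The sketch below reproduces the Figure 4 arithmetic; the bracket lower bounds used to apply the threshold are assumptions made so the filter can be expressed in code.

    # Minimal sketch: total losses captured under a $50,000 collection threshold,
    # using the Figure 4 loss amounts. Bracket lower bounds are assumed so the
    # threshold can be applied programmatically.
    loss_brackets = [
        # (bracket lower bound, total loss amount in bracket)
        (4_000_000, 18_242_000),
        (1_000_000, 17_524_400),
        (500_000, 7_850_425),
        (250_000, 1_825_763),
        (100_000, 1_784_632),
        (75_000, 1_948_971),
        (50_000, 2_548_123),
        (25_000, 4_480_627),
        (0, 15_931_207),
    ]

    threshold = 50_000
    total_losses = sum(amount for _, amount in loss_brackets)
    reported_losses = sum(amount for lower, amount in loss_brackets if lower >= threshold)

    print(f"Total losses:     ${total_losses:,}")      # $72,136,148
    print(f"Reported losses:  ${reported_losses:,}")   # $51,724,314
    print(f"Unreported share: {1 - reported_losses / total_losses:.1%}")  # about 28.3%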
When quantifying capital requirements, Basel II does not allow recoveries of losses to be included in the calculation. Regulators require this rule because gross losses are always considered for capital calculations to provide a more realistic view of the potential of large losses that occur once every 1,000 years.
Another important issue to consider in the process of collecting loss data is the timeframe for recoveries. The financial crisis of 2007–2009 illustrated that the complexity of some loss events can lead to very long time horizons from the start of the loss event to its final closure. Complex litigation cases from this financial crisis took five to six years to resolve. Sometimes it takes lawyers and OpRisk managers several years to estimate the loss amount for an event.
While firms could create reserves for these losses, they seldom do, to avoid giving the impression that they may owe a certain amount prior to reaching a judgment. The fact that many firms do not have the in-house legal expertise to handle these complex cases adds to the cost, because outsourcing to external lawyers is often required. It is important for firms to have a policy in place for processing large, long-timeframe losses.
To help firms know what to report, the International Accounting Standards Board (IASB) prepared IAS 37, which establishes guidelines on loss provisions (i.e., the reporting of expected operational losses) after the financial crisis of 2007–2009. Three important requirements for the reporting of expected operational losses are as follows:
1. Loss provisions are not recognized for future operating losses.
2. Loss provisions are recognized for onerous contracts where the costs of fulfilling obligations exceed the expected economic benefits.
3. Loss provisions are only recognized for restructuring costs when a firm has a detailed restructuring plan in place.
The IAS37 report states that loss provisions of restructuring costs should not include provisions related to relocation of staff, marketing, equipment investments, or distribution investments. Loss provisions must be recognized on the balance sheet when the firm has a current obligation regarding a past loss event. Balance sheet reporting of loss events is required when the firm is likely to be obligated for a loss and it is possible to establish a reliable estimate of the amount of loss. Gains from the disposal of assets or expected reimbursements linked to the loss should not be used to reduce the total expected loss amount. Reimbursements can only be recognized as a separate asset.
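The balance sheet recognition conditions just described can be expressed as a simple decision rule. The sketch below is a hypothetical illustration of that logic only, not an authoritative implementation of IAS 37; the field names are assumptions introduced for the example.

    # Minimal sketch: express the balance sheet recognition conditions described
    # above as a simple decision rule. Field names are illustrative assumptions,
    # not an authoritative rendering of IAS 37.
    from dataclasses import dataclass

    @dataclass
    class LossEvent:
        current_obligation_from_past_event: bool  # present obligation from a past loss event
        outflow_probable: bool                    # firm is likely to be obligated for the loss
        reliable_estimate_available: bool         # loss amount can be reliably estimated

    def recognize_provision(event: LossEvent) -> bool:
        """Return True if a loss provision should be recognized on the balance sheet."""
        return (
            event.current_obligation_from_past_event
            and event.outflow_probable
            and event.reliable_estimate_available
        )

    print(recognize_provision(LossEvent(True, True, False)))  # False: no reliable estimate yet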
Identifying, Controlling, and Assessing Operational Risk