Ultimately, the objective of the operational risk severity estimation exercise is to calculate the 99.9th percentile loss over a one-year horizon; everything else we do with data (collecting loss information, modeling, curve fitting, and so on) revolves around this objective. If we cannot estimate the 99.9th percentile loss accurately, then not much else matters. Therefore Choice 'a' is the correct answer.
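To make the target quantity concrete, here is a minimal Monte Carlo sketch of the standard loss distribution approach, assuming an illustrative Poisson frequency and lognormal severity (these parameters are my own assumptions, not taken from the question):

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed illustrative parameters: Poisson frequency of loss events per
# year, lognormal severity for each individual loss.
lam = 25.0             # mean number of loss events per year
mu, sigma = 10.0, 2.0  # lognormal parameters of a single loss

n_years = 100_000      # number of simulated one-year horizons

# Aggregate annual loss: the sum of a random number of random losses.
counts = rng.poisson(lam, size=n_years)
annual_loss = np.array([rng.lognormal(mu, sigma, size=n).sum()
                        for n in counts])

# The regulatory quantity of interest: the 99.9th percentile
# (0.999 quantile) of the one-year aggregate loss distribution.
var_999 = np.quantile(annual_loss, 0.999)
print(f"99.9th percentile annual loss: {var_999:,.0f}")
```

Because only about 1 in 1,000 simulated years lands beyond this quantile, the estimate is driven by the extreme tail, which is why accurate severity modeling matters so much.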
Minimizing the combination of fitting and approximation errors is one of the things we do with a view to better estimating the operational loss distribution. Likewise, empirical loss data is generally range bound, because corporations do not require employees to log losses below a threshold, and high-value losses are rare. This problem is addressed by extrapolating to both large and small losses (a fit to truncated data is sketched below), which affects the performance of our model. Similarly, one of the objectives of scenario analysis is to fill data gaps by generating plausible scenarios. Yet while all of these are real issues to address, the primary problem we are trying to solve is estimating the 0.999 quantile, i.e., the 99.9th percentile.
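As a sketch of the range-bound data problem, the snippet below fits a lognormal severity to losses observed only above a reporting threshold, using a left-truncated maximum likelihood estimate. The threshold and the "true" parameters here are assumptions for illustration only:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)

# Assumed setup: true losses are lognormal, but only losses above a
# reporting threshold H ever get logged (left truncation).
H = 10_000.0
true_mu, true_sigma = 9.0, 1.5
raw = rng.lognormal(true_mu, true_sigma, size=5_000)
observed = raw[raw >= H]  # the range-bound sample we actually see

def nll(params):
    """Negative log-likelihood of a lognormal left-truncated at H:
    f(x | x >= H) = f(x) / (1 - F(H))."""
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    dist = stats.lognorm(s=sigma, scale=np.exp(mu))
    return -(np.sum(dist.logpdf(observed))
             - observed.size * dist.logsf(H))

fit = optimize.minimize(nll, x0=[np.log(np.median(observed)), 1.0],
                        method="Nelder-Mead")
mu_hat, sigma_hat = fit.x
print(f"estimated mu={mu_hat:.2f}, sigma={sigma_hat:.2f} "
      f"(true {true_mu}, {true_sigma})")
```

A naive fit to the observed sample alone would overstate the mean and understate the dispersion; accounting for the truncation is what lets the fitted curve extrapolate sensibly below the threshold and into the far tail.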