**A Risk-Factor Model Foundation for Ratings-Based Bank Capital Rules**

Michael B. Gordy*

Board of Governors of the Federal Reserve System

October 22, 2002

Abstract: When economic capital is calculated using a portfolio model of credit value-at-risk, the marginal capital requirement for an instrument depends, in general, on the properties of the portfolio in which it is held. By contrast, ratings-based capital rules, including both the current Basel Accord and its proposed revision, assign a capital charge to an instrument based only on its own characteristics. I demonstrate that ratings-based capital rules can be reconciled with the general class of credit VaR models. Contributions to VaR are portfolio-invariant only if (a) there is only a single systematic risk factor driving correlations across obligors, and (b) no exposure in a portfolio accounts for more than an arbitrarily small share of total exposure. Analysis of rates of convergence to asymptotic VaR leads to a simple and accurate portfolio-level add-on charge for undiversified idiosyncratic risk. There is no similarly simple way to address violation of the single factor assumption.

JEL Codes: G31, G38

*The views expressed herein are my own and do not necessarily reflect those of the Board of Governors or its staff. I would like to thank Paul Calem, Darrell Duffie, Paul Embrechts, Jon Faust, Erik Heitfield, David Jones, David Lando, Gennady Samorodnitsky, Dirk Tasche, and Tom Wilde for their helpful comments, and Susan Yeh for editorial suggestions. Please address correspondence to the author at Division of Research and Statistics, Mail Stop 153, Federal Reserve Board, Washington, DC 20551, USA. Phone: (202) 452-3705. Fax: (202) 452-5295. Email: michael.gordy@frb.gov.

Recent years have witnessed significant advances in the design, calibration and implementation of portfolio models of credit risk.
Large commercial banks and other financial institutions with significant credit exposure rely increasingly on models to guide credit risk management at the portfolio level. Models allow management to identify concentrations of risk and opportunities for diversification within a disciplined and objective framework, and thus offer a more sophisticated, less arbitrary alternative to traditional lending limit controls. More widespread and intensive use of models is encouraging a more active approach to portfolio management at commercial banks, which has contributed to the improved liquidity of markets for debt instruments and credit derivatives.

Stripped to its essentials, a credit risk model is a function mapping from a parsimonious set of instrument-level characteristics and market-level parameters to a distribution for portfolio credit losses over some chosen horizon. The model output of primary interest, the "economic capital" required to support the portfolio, is derived as some summary statistic of the loss distribution. The definition of economic capital in most widespread use is value-at-risk ("VaR"). Under the VaR paradigm, an institution holds capital in order to maintain a target rating for its own debt. Associated with the target rating is a probability of survival over the horizon (say, 99.9% over one year). To be consistent with its target survival probability (denoted q), the institution must hold reserves and equity capital sufficient to cover up to the qth quantile of the distribution of portfolio loss over the horizon. Directly or indirectly, model applications to active portfolio management depend on the capacity to measure how the portfolio capital requirement changes with changes in portfolio composition.

From a public policy perspective, model-based measurement of economic capital offers a potentially attractive solution to an increasingly urgent regulatory problem.
The current regulatory framework for required capital on commercial bank lending is based on the 1988 Basel Accord. Under the Accord, the capital charge on commercial loans is a uniform 8% of loan face value, regardless of the financial strength of the borrower or the quality of collateral.[1] The Accord's rules are risk-sensitive only in that lower charges are specified for certain special classes of lending, e.g., to OECD member governments, to other banks in OECD countries, or for residential mortgages. When the Accord was first introduced, the 8% charge appeared to be "about right on average" for a typical bank portfolio. Over time, however, the failure to distinguish among commercial loans of very different degrees of credit risk created the incentive to move low-risk instruments off balance sheet and retain only relatively high-risk instruments. The financial innovations which arose in response to this incentive have undermined the effectiveness of regulatory capital rules (see, e.g., Jones 2000) and thus led to current efforts towards reform. It is widely recognized that regulatory arbitrage will continue until regulatory capital charges at the instrument level are aligned more closely with underlying risk.

[1] The so-called 8% rule takes a rather broad definition of "capital." In effect, roughly half this 8% must be in equity capital, as measured on a book-value basis.

The Basel Committee on Bank Supervision (1999) undertook a detailed study of how banks' internal models might be used for setting regulatory capital. The Committee acknowledged that a carefully specified and calibrated model could deliver a more accurate measure of portfolio credit risk than any rule-based system, but found that the present state of model development could not ensure an acceptable degree of

comparability across institutions and that data constraints would prevent validation of key model parameters and assumptions.[2] It seems unlikely, therefore, that regulators will be prepared in the near- to medium-term to accept the use of internal models for setting regulatory capital. Nonetheless, regulators and industry practitioners appear to be in broad agreement that a revised Accord should permit evolution towards an internal models approach as models and data improve.

At present, it appears virtually certain that a reformed Accord will offer a ratings-based "risk-bucketing" system of one form or another. In such a system, banking book assets are grouped into "buckets," which are presumed to be homogeneous. Associated with each bucket is a fixed capital charge per dollar of exposure. In the latest version of the Basel proposal for an Internal Ratings-Based ("IRB") approach (Basel Committee on Bank Supervision 2001), the bucketing system is required to partition instruments by internal borrower rating; by loan type (e.g., sovereign vs. corporate vs. project finance); by one or more proxies for seniority/collateral type, which determines loss severity in the event of default; and by maturity. More complex systems might further partition instruments by, for example, country and industry of borrower. Regardless of the sophistication of the bucketing scheme, capital charges are portfolio-invariant, i.e., the capital charge on a given instrument depends only on its own characteristics, and not the characteristics of the portfolio in which it is held. I take portfolio-invariance to be the essential property of ratings-based capital rules. Throughout this paper, I will use the term "ratings-based" to refer broadly to portfolio-invariant capital allocation rules with bucketing along multiple dimensions, rather than to constrain the term to schemes in which capital depends only on a traditional univariate credit rating.
A regulatory regime based on ratings-based assignment of capital charges does offer significant advantages. The current Accord is itself a simple ratings-based framework. The proposed new Accord will introduce additional bucketing criteria and make better use of information in borrower ratings, yet still be viewed as a natural extension of the current regime. Because the capital charge for a portfolio is simply a weighted sum of the dollars in each bucket, ratings-based systems are relatively simple to administer and need not impose burdensome reporting requirements. Validation problems are also limited in scope. As the new Accord is currently envisioned, the most significant empirical challenge facing supervisors would likely concern the quality of default probability estimates for internal grades.

Though not often recognized in the debate on regulatory reform, in practice many (if not most) large banks apply ratings-based rules for allocation of capital at the transaction level. Even at institutions that have implemented models for portfolio management and portfolio-level capital assessment, there may be reluctance to apply the implied marginal capital requirements to assess hurdle rates for individual transactions. Computational and information systems burdens may be substantial. More important perhaps, line managers are likely to oppose any performance monitoring system in which a loan that could be booked one day at a profitable credit spread becomes unprofitable the next due only to changes in the composition of the bank's overall portfolio. The need for stability in business operations thus favors portfolio-invariant capital charges at the transaction level.

[2] In an industry practitioner response, GARP (1999) acknowledges the obstacles to immediate adoption of an internal models regulatory regime, but argues that the challenges can be met through an evolutionary, piecemeal approach to regulatory certification of model components.

Though a ratings-based scheme may be a necessary "second-best" solution under current conditions, it is nonetheless desirable that the capital charges be calibrated within a portfolio model. Consistency with a well-specified model would bring greater discipline and accuracy to the calibration process, and would provide a smoother path of evolution towards a regime based on internal models. This paper is about the challenges in models-based calibration of ratings-based capital charges. In particular, it asks what modeling assumptions must be imposed so that marginal contributions to portfolio economic capital are portfolio-invariant.

By design, portfolio models do not, in general, yield portfolio-invariant capital charges. To obtain a distribution of portfolio loss, a model must determine a joint distribution over credit losses at the instrument level. The latest generation of widely-used models gives structure to this problem by assuming that correlations across obligors in credit events arise due to common dependence on a set of systematic risk factors. Implicitly or explicitly, these factors represent the sectoral shifts and macroeconomic forces that impinge to a greater or lesser extent on all firms in an economy. A natural property of these models is that the marginal capital required for a loan depends on how it affects diversification, and thus depends on what else is in the portfolio.

If economic capital is defined within the value-at-risk paradigm, then the problem has a simple answer. I show that two conditions are necessary and (with a few regularity conditions) sufficient to guarantee portfolio-invariance: First, the portfolio must be asymptotically fine-grained, in the sense that no single exposure in the portfolio can account for more than an arbitrarily small share of total portfolio exposure. Second, there must be only a single systematic risk factor. The emphasis in this paper is on generality across portfolios and models.
The use of asymptotics to characterize model properties is not new to practitioners, but all previous analyses have been applied to homogeneous portfolios and with the objective of simplifying computation.[3] Banks vary widely in the size and composition of their portfolios and in the details of their credit risk models. For policy purposes, it is essential that our results be sufficiently general to embrace this diversity. Indeed, our results are shown to apply to quite heterogeneous portfolios and across a broad class of credit risk models.

Needless to say, the real world does not give us perfectly fine-grained portfolios. Bank portfolios have finite numbers of obligors and lumpy distributions of exposure sizes. Capital charges calibrated to the asymptotic case, which assume that idiosyncratic risk is diversified away completely, must understate required capital for any given finite portfolio. To assess the magnitude of this bias, I determine the rate of convergence of credit value-at-risk to its asymptotic limit. As an application, I propose a simple methodology for assessing a portfolio-level add-on charge to compensate for less-than-perfect diversification of idiosyncratic risk. Numerical examples suggest that the method works extremely well, so that moderate departures from asymptotic granularity need not pose a problem in practice for ratings-based capital rules.

Although it is the standard most commonly applied, value-at-risk is not without shortcomings as a risk-measure for defining economic capital. Because it is based on a single quantile of the loss distribution, VaR

[3] Large-sample approximations have been applied to homogeneous portfolios under single risk factor versions of the RiskMetrics Group's CreditMetrics (Finger 1999) and KMV Portfolio Manager (Vasicek 1997) in order to obtain computational shortcuts. Bürgisser, Kurth and Wagner (2001) characterize the asymptotic behavior of a generalized CreditRisk+ model on a sequence of portfolios with n statistically identical copies of a fixed heterogeneous portfolio.

provides no information on the magnitude of loss incurred in the event that capital is exhausted. A more robust risk-measure is expected shortfall ("ES"), which is (loosely speaking) the expected loss conditional on being in the tail. From the perspective of an insurer of deposits (e.g., the FDIC in the US), an even more relevant risk-measure is expected excess loss ("EEL"). Under the EEL paradigm, an institution must hold enough capital so that the expected credit loss in excess of capital is less than or equal to a target loss rate. I consider whether ES and EEL deliver portfolio-invariant capital charges for an asymptotic portfolio in a single-factor setting. Expected shortfall does, but EEL does not, and thus is unsuitable as a soundness standard for deriving risk-bucket capital charges.

Section 1 sets out a general framework for the class of risk-factor models in current use under a book-value definition of credit loss. Section 2 presents the key results for VaR for this class of models. In Section 3, these results are shown to apply equally to the case of "multi-state" models in which loss is measured on a market-value basis. A capital adjustment for undiversified idiosyncratic risk is developed in Section 4. In Section 5, I examine the asymptotic behavior of expected shortfall and expected excess loss as alternatives to VaR. Concluding remarks focus on the assumption of a single systematic risk factor, which is empirically untenable and yet an unavoidable precondition for portfolio-invariant capital charges. While this assumption ought to be acceptable in the pursuit of achievable and substantive near- to medium-term regulatory reform, it may limit the long-term viability of ratings-based risk-bucket rules for regulatory capital.

1 A general model framework under book-value accounting

Under a book-value (or actuarial) definition of loss, credit loss arises only in the event of obligor default.
Change in market value due to rating downgrade or upgrade is ignored. This is the simplest framework for our purposes, because we need only be concerned with default risk and with uncertainty in the recovery value of an asset in the event of obligor default.

An essential concept in any risk-factor model is the distinction between unconditional and conditional event probabilities. An obligor's unconditional default probability, also known as its PD or expected default frequency, is the probability of default before some horizon given all information currently observable. The conditional default probability is the PD we would assign the obligor if we also knew what the realized value of the systematic risk factors at the horizon would be. The unconditional PD is the average value of the conditional default probability across all possible realizations of the systematic risk factors.

To take an example, consider a simple credit cycle in which the systematic risk factor takes only three values. The "bad state" corresponds to a recession at the risk horizon, the "good state" to an expansion, and the "neutral state" to ordinary times. Say that the three states occur with probabilities of 1/4, 1/2 and 1/4 (respectively) at the risk horizon. Consider an obligor which defaults with probability 2% in the event of a bad state, probability 1% in the neutral state, and probability 0.4% in the event of a good state. The "conditional default probability" is then 0.4%, 1%, or 2%, depending on which horizon state we condition upon. The PD is the probability-weighted average default rate, or 1.1%.

Let X denote the systematic risk factors (possibly multivariate), which are drawn from a known joint distribution. These risk factors may be identified in some models with specific observable quantities, such

as macroeconomic variables or industrial sector performance indicators, or may be left abstract. Regardless of their identity, it is assumed that all correlations in credit events are due to common sensitivity to these factors. Conditional on X, the portfolio's remaining credit risk is idiosyncratic to the individual obligors in the portfolio. Let p_i(x) denote the probability of default for obligor i conditional on realization x of X.

This general framework for modeling default is compatible with all of the best-known industry models of portfolio credit risk, including the RiskMetrics Group's CreditMetrics, Credit Suisse Financial Product's CreditRisk+, McKinsey's CreditPortfolioView, and KMV's Portfolio Manager. The similarity to CreditRisk+ is easiest to see because that model is written in the language of conditional default probabilities. To obtain CreditRisk+ within our framework, assume that the risk factors X_1, ..., X_K are independent gamma-distributed random variables with mean one and variances \sigma_1^2, ..., \sigma_K^2. Let \bar{p}_i denote the PD of obligor i, and specify p_i(x) as:

p_i(x) = \bar{p}_i \left( 1 + \sum_{k=1}^{K} w_{ik} (x_k - 1) \right)    (1)

where w_i is a vector of factor loadings with sum in [0,1].[4]

CreditMetrics, which is based on a simplified Merton model of default, also can be cast within a conditional probability framework. It is assumed that the vector of risk factors X is jointly distributed N(0,\Omega). Associated with each obligor is a latent variable R_i which represents the return on the firm's assets. R_i is given by

R_i = \psi_i \epsilon_i - X w_i,    (2)

where the \epsilon_i are iid N(0,1) white noise (representing obligor-specific risk) and w_i is a vector of factor loadings.[5] Without loss of generality, the weights w_i and \psi_i are scaled so that R_i is mean zero, variance one.[6] A borrower defaults if and only if its asset return falls below a threshold value \gamma_i. To obtain the conditional default probability function p_i(x), observe that default occurs if and only if \epsilon_i \leq (\gamma_i + X w_i)/\psi_i.
Therefore, conditional on X = x, default by i is an independent Bernoulli event with probability

p_i(x) = \Pr(\epsilon_i \leq (\gamma_i + x w_i)/\psi_i) = \Phi((\gamma_i + x w_i)/\psi_i)    (3)

where \Phi is the standard normal cdf. To calibrate the parameter \gamma_i, note that the unconditional probability of default is \Phi(\gamma_i), so \gamma_i = \Phi^{-1}(\bar{p}_i), where \bar{p}_i is the PD for obligor i.[7] See Gordy (2000) for a more detailed derivation of these two models and their representation in terms of conditional probabilities.

[4] Strictly speaking, this functional form is invalid because it allows conditional probabilities to exceed one. In practice, this problem is negligible for high and moderate quality portfolios and reasonable calibrations of the \sigma_k^2.

[5] The usual way this is specified has X w_i added, not subtracted. The change in sign here is convenient because it implies that the p_i(x) function will be increasing in x, but does not otherwise change the statistical properties of the model.

[6] Specifically, the weights \psi_i are given by (1 - w_i' \Omega w_i)^{1/2}.

[7] By construction, the unconditional distribution of R_i is N(0,1), so the probability that R_i \leq \gamma_i is \Phi(\gamma_i).
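The one-factor special case of equation (3) is easy to exercise numerically. The sketch below (Python; the loading w = 0.4 and PD of 1% are illustrative choices, not calibrations from the paper) computes the conditional PD and checks that averaging it over x ~ N(0,1) recovers the unconditional PD, which is exactly what the calibration \gamma_i = \Phi^{-1}(\bar{p}_i) guarantees.

```python
import numpy as np
from scipy.stats import norm

def conditional_pd(pd_bar, w, x):
    """Eq. (3) with a single factor (Omega = 1): gamma = Phi^{-1}(pd_bar),
    psi = sqrt(1 - w^2), and p(x) = Phi((gamma + x*w)/psi)."""
    gamma = norm.ppf(pd_bar)
    psi = np.sqrt(1.0 - w**2)
    return norm.cdf((gamma + x * w) / psi)

# Averaging the conditional PD over x ~ N(0,1) must recover the
# unconditional PD; here via Gauss-Hermite quadrature.
nodes, weights = np.polynomial.hermite.hermgauss(80)
x = np.sqrt(2.0) * nodes              # change of variables for a N(0,1) factor
avg_pd = np.sum(weights * conditional_pd(0.01, 0.4, x)) / np.sqrt(np.pi)
```

Note that under the paper's sign convention (footnote [5]), p(x) is increasing in x, so "bad" states correspond to high realizations of the factor.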

In some industry models, it is assumed that loss given default ("LGD") is known and non-stochastic. Of the credit VaR models in widespread use, those that do allow for stochastic LGD always take recovery risk to be purely idiosyncratic. In practice, LGD not only may be highly uncertain, but may also be subject to systematic risk. For example, the recovery value of defaulted commercial real estate loans depends on the value of the real estate collateral, which is likely to be lower (higher) when many (few) other real estate projects have failed. In recent months, some progress has been made in capturing this effect. Frye (2000) develops an extension of a one-factor CreditMetrics model in which collateral values (and thus recoveries) are correlated with the same systematic risks that drive default rates. Bürgisser et al. (2001) extend the CreditRisk+ model to include a systematic factor for recovery risk that is orthogonal to the systematic factors for default risk.

In order to accommodate systematic and idiosyncratic recovery risk, I take loss, rather than merely default status, as the primitive outcome variable. Let A_i be the exposure to obligor i; these are taken to be known and non-stochastic.[8] Let the random variable U_i denote loss per dollar exposure. In the event of survival, U_i = 0. Otherwise, U_i is the percentage LGD on instrument i. The usual assumption of conditional independence of defaults is extended to conditional independence of the U_i. I assume that

(A-1) the {U_i} are bounded in the unit interval and, conditional on X, are mutually independent.

For a portfolio of n obligors, define the portfolio loss ratio L_n as the ratio of total losses to total portfolio exposure,[9] i.e.,

L_n \equiv \frac{\sum_{i=1}^{n} U_i A_i}{\sum_{i=1}^{n} A_i}.    (4)

For a given q \in (0,1), value-at-risk is defined as the qth percentile of the distribution of loss, and is denoted VaR_q[L_n]. Let \alpha_q(Y) denote the qth percentile of the distribution of random variable Y, i.e.,

\alpha_q(Y) \equiv \inf\{ y : \Pr(Y \leq y) \geq q \}.    (5)
In terms of this more general notation, we have VaR_q[L_n] = \alpha_q(L_n).

[8] In practice, it need not be so simple. If the instrument is a coupon bond, book-value exposure is simply the face value. Much bank lending, however, is in the form of lines of credit which give the borrower some control over the exposure size. Borrowers do tend to draw down unutilized credit lines as they deteriorate towards default. If we assume that uncertainty in A is idiosyncratic conditional on the state of the obligor and is of bounded variance, then all the conclusions of this paper continue to hold. In this case, we interpret A_i as the expected dollar exposure in the event of obligor default.

[9] For simplicity, I assume that the portfolio contains only a single asset for each obligor. Under actuarial treatment of loss, multiple assets of a single obligor may be aggregated into a single asset without affecting the results.

2 Asymptotic loss distribution under book-value accounting

Imagine that the bank selects its portfolio as the first n elements of an infinite sequence of lending opportunities. To guarantee that idiosyncratic risk vanishes as more assets are added to the portfolio, the sequence of exposure sizes must neither blow up nor shrink to zero too quickly. I assume that

(A-2) the A_i are a sequence of positive constants such that (a) \sum_{i=1}^{n} A_i \uparrow \infty and (b) there exists a \zeta > 0 such that A_n / \sum_{i=1}^{n} A_i = O(n^{-(1/2+\zeta)}).[10]

The restrictions in (A-2) are sufficient to guarantee that the share of the largest single exposure in total portfolio exposure vanishes to zero as the number of exposures in the portfolio increases. As a practical matter, the restrictions are quite weak and would be satisfied by any conceivable real-world large bank portfolio. For example, they are satisfied if all the A_i are bounded from below by a positive minimum size and from above by a finite maximum size.

Our first result is that, under quite general conditions, the conditional distribution of L_n degenerates to its conditional expectation as n \to \infty. More formally, we can show that

Proposition 1 If (A-1) and (A-2) hold, then, conditional on X = x, L_n - E[L_n|x] \to 0, almost surely.

The proof, which relies mainly on a strong law of large numbers, is given in Appendix A. Note that there is no restriction on the relationship between A_i and the distribution of U_i, so there is no problem if, for example, high quality loans tend also to be the largest loans. Also, no restrictions have yet been imposed on the number of systematic factors or their joint distribution.

In intuitive terms, Proposition 1 says that as the exposure share of each asset in the portfolio goes to zero, idiosyncratic risk in portfolio loss is diversified away perfectly. In the limit, the loss ratio converges to a fixed function of the systematic factor X. We refer to this limiting portfolio as "infinitely fine-grained" or as an "asymptotic portfolio." An implication is that, in the limit, we need only know the unconditional distribution of E[L_n|X] to answer questions about the unconditional distribution of L_n. For example, if we wish to know the variance of the loss ratio, we can look to the variance of E[L_n|X]:

Proposition 2 If (A-1) and (A-2) hold, then V[L_n] - V[E[L_n|X]] \to 0.

Proof is in Appendix A.
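Proposition 1 can be seen in a small simulation: hold x fixed, grow n, and the loss ratio collapses onto its conditional mean. The sketch below uses illustrative one-factor parameters (loading, PD, and a fixed LGD chosen for the example, not taken from the paper) and exposures drawn from a bounded interval, which satisfies (A-2).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
W, PD_BAR, LGD = 0.4, 0.02, 0.45   # hypothetical loading, PD, and fixed LGD

def p_cond(x):
    """One-factor conditional PD as in eq. (3)."""
    return norm.cdf((norm.ppf(PD_BAR) + x * W) / np.sqrt(1.0 - W**2))

def loss_ratio(n, x):
    """One draw of L_n conditional on X = x: defaults are independent
    Bernoulli(p_cond(x)); exposures A_i are lumpy but bounded, so (A-2) holds."""
    a = rng.uniform(0.5, 1.5, size=n)
    u = LGD * (rng.random(n) < p_cond(x))     # U_i = LGD on default, else 0
    return np.sum(u * a) / np.sum(a)

cond_mean = LGD * p_cond(1.0)                 # E[L_n | X = 1] for this portfolio
small, large = loss_ratio(200, 1.0), loss_ratio(200_000, 1.0)
```

With n = 200 the realized loss ratio can sit well away from cond_mean; with n = 200,000 it is pinned to it, which is the sense in which the portfolio becomes "infinitely fine-grained."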
[10] For definition of the order notation O(\cdot), see Billingsley (1995, A18).

A more important result is, in essence, that for any q \in (0,1), the qth quantile of the unconditional loss distribution approaches the qth quantile of the unconditional distribution of E[L_n|X] as n \to \infty. Our desired result is to have

\alpha_q(L_n) - \alpha_q(E[L_n|X]) \to 0.    (6)

For technical reasons, however, we are limited to a slightly restricted variant on this result. Let F_n denote the cdf of L_n. We can show:

Proposition 3 If (A-1) and (A-2) hold, then for any \epsilon > 0,

F_n(\alpha_q(E[L_n|X]) + \epsilon) \to [q, 1]    (7)
F_n(\alpha_q(E[L_n|X]) - \epsilon) \to [0, q].    (8)

The proof is in Appendix B. For all practical purposes, this proposition ensures that equation (6) will hold.[11] The literal interpretation of Proposition 3 is that the qth quantile of E[L_n|X] plus an arbitrarily small "smidgeon" (i.e., \epsilon) is guaranteed, in the limit, to cover (or, at least, to come arbitrarily close to covering) q or more of the distribution of loss. Similarly, the qth quantile of E[L_n|X] less the smidgeon is guaranteed, in the limit, to fail to cover the qth quantile of the distribution of loss (or, at least, to come arbitrarily close to so failing).

The importance of Proposition 3 is that it allows us to substitute the quantiles of E[L_n|X] (which typically are relatively easy to calculate) for the corresponding quantiles of the loss ratio L_n (which are hard to calculate) as the portfolio becomes large. It should be emphasized that we have obtained this result with very minimal restrictions on the make-up of the portfolio and the nature of credit risk. The assets may be of quite varied PD, expected LGD, and exposure sizes. We have bounded the support of the U_i to the unit interval, but have otherwise not restricted the behavior of the conditional expected loss functions (i.e., the E[U_i|x]).[12] These functions may be discontinuous and non-monotonic, and can vary in form from obligor to obligor. More importantly, we have placed no restrictions on the vector of risk factors X. It may be a vector of any finite length and with any distribution (continuous or discrete).
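The quantile substitution is easy to check numerically in a one-factor setting. The sketch below (homogeneous equal-exposure portfolio, hypothetical parameters) compares a simulated quantile of the actual loss ratio L_n against the same quantile of the conditional-mean loss E[L_n|X], computed from the same factor draws.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
W, PD_BAR, LGD, Q = 0.4, 0.02, 0.45, 0.95    # hypothetical parameters

def p_cond(x):
    return norm.cdf((norm.ppf(PD_BAR) + x * W) / np.sqrt(1.0 - W**2))

n, m = 50_000, 20_000                        # obligors per portfolio, scenarios
x = rng.standard_normal(m)

# Quantile of the actual loss ratio L_n: conditional on X = x, the number of
# defaults in a homogeneous equal-exposure portfolio is Binomial(n, p(x)).
q_Ln = np.quantile(LGD * rng.binomial(n, p_cond(x)) / n, Q)

# Quantile of the conditional-mean loss E[L_n | X] = LGD * p(X), which is
# what Proposition 3 lets us use in place of the quantile of L_n.
q_EL = np.quantile(LGD * p_cond(x), Q)
```

For a portfolio this fine-grained the two quantiles agree to within simulation noise; shrinking n re-introduces the granularity gap that Section 4's add-on charge is meant to absorb.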
The quantiles of E[L_n|X] take on a particularly simple and desirable asymptotic form when we impose two additional restrictions:

(A-3) the systematic risk factor X is one-dimensional; and

(A-4) there is an open interval B containing \alpha_q(X) and a real number n_0 < \infty such that for all n > n_0, (i) E[L_n|x] is nondecreasing on B, and (ii) \inf_{x \in B} E[L_n|x] \geq \sup_{x \leq \inf B} E[L_n|x] and \sup_{x \in B} E[L_n|x] \leq \inf_{x \geq \sup B} E[L_n|x].

Intuitively, Assumption (A-3) imposes a single global business cycle as the source of all dependence across obligors. It is needed in order that \alpha_q(X) be a unique point. Assumption (A-4) is needed so that the neighborhood of the qth percentile of E[L_n|X] is associated with the neighborhood of the qth percentile of X. Without (A-4), the tail quantiles of the loss distribution would depend in complex ways on how conditional expected loss for each borrower varies with x.

[11] The difference has to do with the possibility that the unconditional distributions for the {E[L_n|X]} will permit jump points (or arbitrarily steep slope) at the quantiles \alpha_q(E[L_n|X]) as n \to \infty. This possibility is purely a theoretical matter, and would never arise in practical applications.

[12] Technically, the CreditRisk+ model allows U_i to exceed one, because it approximates the Bernoulli distribution of the default event as a Poisson distribution. To accommodate CreditRisk+, we could loosen this restriction to a requirement that the U_i have bounded variance. See the modified version of (A-1) introduced in Section 3.

A more parsimonious way to avoid this problem would have been to require that the E[U_i|x] be nondecreasing in x for all i. However, such a requirement

would exclude hedging instruments (such as credit derivatives) and obligors with counter-cyclical credit risk. Assumption (A-4) allows for some U_i to be negatively associated with X, just so long as, asymptotically and in aggregate, such instruments do not alter the monotonic dependence of losses on the systematic factor when X is near the relevant "tail event." Furthermore, (A-4) allows E[L_n|x] to be locally nonmonotonic in x when x is not in the neighborhood of \alpha_q(X), and allows for discontinuity at any x.

For notational convenience, define functions \mu_i(x) \equiv E[U_i|x] and

M_n(x) \equiv E[L_n|x] = \frac{\sum_{i=1}^{n} \mu_i(x) A_i}{\sum_{i=1}^{n} A_i}.    (9)

We now have

Proposition 4 If (A-3) and (A-4) are satisfied, then \alpha_q(E[L_n|X]) = E[L_n|\alpha_q(X)] = M_n(\alpha_q(X)) for n > n_0.

Proof: Fix n > n_0. If X \leq \alpha_q(X), then M_n(X) \leq M_n(\alpha_q(X)), so \Pr(M_n(X) \leq M_n(\alpha_q(X))) \geq \Pr(X \leq \alpha_q(X)) \geq q. If M_n(X) < M_n(\alpha_q(X)), then X < \alpha_q(X), so \Pr(M_n(X) < M_n(\alpha_q(X))) \leq \Pr(X < \alpha_q(X)) < q. Therefore, \inf\{ y : \Pr(M_n(X) \leq y) \geq q \} = M_n(\alpha_q(X)). QED

Taken together, Propositions 1, 3 and 4 imply a simple and powerful rule for determining capital requirements. For asset i, set capital per dollar book value (inclusive of expected loss) to c_i \equiv \mu_i(\alpha_q(X)) + \epsilon, for some arbitrarily small \epsilon.[13] Observe that this capital charge depends only on the characteristics of instrument i, and thus this rule is portfolio-invariant. Portfolio losses exceed capital if and only if

\sum_{i=1}^{n} U_i A_i > \sum_{i=1}^{n} c_i A_i.    (10)

Given our rule for c_i and the definition of L_n,

\Pr\left( \sum_{i=1}^{n} U_i A_i > \sum_{i=1}^{n} c_i A_i \right) = \Pr\left( L_n > \left( \sum_{i=1}^{n} A_i \right)^{-1} \sum_{i=1}^{n} (\mu_i(\alpha_q(X)) + \epsilon) A_i \right) = \Pr(L_n > E[L_n|\alpha_q(X)] + \epsilon) \to [0, 1 - q].

Thus, capital is sufficient, in the limit, so that the probability of portfolio credit losses exceeding portfolio capital is no greater than 1 - q, as desired. If additional regularity conditions are imposed in order to eliminate the possibility of discontinuities at the desired quantiles, the insolvency probability converges to 1 - q exactly for \epsilon = 0.
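The rule c_i = \mu_i(\alpha_q(X)) can be tabulated directly. In the sketch below, \mu_i(x) is the one-factor specification of equation (3) with a fixed LGD (the loading and LGD are hypothetical choices, and \epsilon is dropped); the resulting charge is a function of the instrument's own PD alone, i.e., portfolio-invariant.

```python
import numpy as np
from scipy.stats import norm

Q = 0.999            # solvency target q
W, LGD = 0.4, 0.45   # hypothetical factor loading and (non-stochastic) LGD

def capital_charge(pd_bar, w=W, lgd=LGD, q=Q):
    """Portfolio-invariant charge per dollar of exposure, gross of expected
    loss: c_i = mu_i(alpha_q(X)) = lgd * p_i(alpha_q(X)) under eq. (3)."""
    x_q = norm.ppf(q)                               # alpha_q(X) for X ~ N(0,1)
    p_stress = norm.cdf((norm.ppf(pd_bar) + x_q * w) / np.sqrt(1.0 - w**2))
    return lgd * p_stress

# Charges by rating-grade PD; no portfolio information enters the calculation.
charges = {pd: capital_charge(pd) for pd in (0.0005, 0.005, 0.02, 0.10)}
```

For these parameters each charge exceeds the expected loss LGD * p-bar and rises with PD; summing c_i A_i over any asymptotically fine-grained portfolio then delivers the target solvency probability q in the limit.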
[13] In most practitioner discussions, it is assumed that expected loss is charged against the loan loss reserve and that "capital" refers only to the amount held against unexpected loss. In this paper, "capital" refers to the gross amount set aside.

A simple way to achieve this would be to require that X be continuous and that the \mu_i(x) functions be continuous and with

bounded derivatives. However, we can be rather less restrictive, as we really need only to guarantee that the asymptotic portfolio loss cdf is smooth and has bounded derivatives in the neighborhood of its qth percentile value. The following condition is sufficient to circumvent the technical caveats of Proposition 3.

(A-5) There exists an open interval B containing \alpha_q(X) on which (i) the cdf of systematic factor X is continuous and increasing, and (ii) for all i, \mu_i(x) is continuous and differentiable on B, and (iii) there are real numbers \underline{\delta}, \bar{\delta} and n_0 < \infty such that 0 < \underline{\delta} \leq M_n'(x) \leq \bar{\delta} < \infty for all n > n_0.

This assumption allows for a non-trivial share of the portfolio to consist of hedging instruments or loans to counter-cyclical borrowers. In Appendix C, I show that

Proposition 5 If assumptions (A-1)-(A-5) hold, then \Pr(L_n \leq E[L_n|\alpha_q(X)]) \to q and |\alpha_q(L_n) - E[L_n|\alpha_q(X)]| \to 0.

Therefore, for an infinitely fine-grained portfolio, the proposed portfolio-invariant capital rule provides a solvency probability of exactly q.

The results of this section closely parallel recent developments in techniques for capital allocation in a market risk setting. Gouriéroux, Laurent and Scaillet (2000), Tasche (2000) and others show how to take partial first derivatives of VaR.[14] In terms of the notation used here, the first derivative is given by

d\alpha_q(L_n)/dA_i = E[U_i | L_n = \alpha_q(L_n)].    (11)

Under the assumptions of Proposition 5, the condition L_n = \alpha_q(L_n) is asymptotically equivalent to X = \alpha_q(X), which implies that marginal VaR is equal to \mu_i(\alpha_q(X)). Gouriéroux et al. (2000) require that the joint distribution of the losses {U_i} be continuous, as otherwise VaR need not be differentiable. This presents a problem in application to credit risk modeling, as credit risk is largely driven by discrete events (e.g., defaults).
The approach taken here in obtaining Proposition 5 allows for discrete (or mixed) Ui.[15]

Portfolio-invariance depends strongly on the asymptotic assumption and on the assumption of a single systematic risk factor. Portfolios that are not asymptotically fine-grained contain undiversified idiosyncratic risk, which implies that marginal contributions to VaR depend on what else is in the portfolio. As a practical matter, residual idiosyncratic risk is not an impediment to ratings-based capital allocation. Large internationally-active banks are typically near the asymptotic ideal. Furthermore, the techniques of Section 4 allow for a simple portfolio-level correction.

Assumption (A-3) is much less innocuous from an empirical point of view. It can be relaxed only slightly. Say that some group of obligors shared dependence on a "local" risk factor. Conditional on X, the {Ui} within the group would no longer be independent, though they would remain independent of the {Uj} outside the group. So long as the within-group exposures in aggregate account for a trivial share of

[14] This problem was solved independently by several authors. See references in Tasche (2002, §4).
[15] Tasche (2000) provides slightly less stringent conditions for differentiability. Tasche (2001) applies equation (11) to a discrete model (CreditRisk+), and discusses the technical issues that arise.

the total portfolio (i.e., they could be aggregated into a single exposure without violating assumption (A-2)), the local dependence can be ignored.

Even the largest banks have geographic and industrial concentrations at some level. If these larger-scale sectors are not perfectly comonotonic, then portfolio-invariance is lost. Say we had two risk factors, and obligors could differ in their sensitivity to each factor. The realizations (x1, x2) associated with a given quantile of the loss distribution would then depend on the particular set of obligors in the portfolio. In intuitive terms, the appropriate capital charge for a loan to a heavily-X1-sensitive borrower would depend on whether the other obligors in the portfolio were predominantly sensitive to X1 (in which case the loan would add little diversification benefit) or to X2 (in which case the diversification benefit would be larger). To take a simple example, let X1 represent the US business cycle and X2 the European business cycle. Consider the merger of a strictly domestic US, asymptotically fine-grained portfolio with another asymptotically fine-grained bank portfolio. If the second portfolio were also exclusively US, then no diversification benefit would ensue, and required capital for the merged portfolio should be the sum of the capital charges on the two portfolios. However, if the second portfolio contained European obligors, then there would be a diversification benefit (as long as X1 and X2 were not perfectly comonotonic), and the merger should result in reduced total VaR. Therefore, capital charges could not be portfolio-invariant.

Finally, observe that "bucketing" has not appeared, per se, in the derivation. Indeed, the µi functions need not even share a common form across obligors.
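The US/Europe merger intuition can be made concrete with a small simulation. In this sketch, each portfolio is treated as perfectly fine-grained, so its loss equals its conditional expected loss given its own factor; the loss function, factor correlation, and coverage level are hypothetical stand-ins, not calibrations from the text.

```python
import math
import random

random.seed(11)

q = 0.995
rho = 0.3  # hypothetical correlation between the US and European cycles

def mu(z):
    """Conditional loss rate of a fine-grained portfolio given its factor (toy)."""
    return 0.01 * math.exp(0.8 * z)

trials = 200000
loss_us, loss_eu = [], []
for _ in range(trials):
    z1 = random.gauss(0.0, 1.0)                                          # X1: US cycle
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0)  # X2: Europe
    loss_us.append(mu(z1))
    loss_eu.append(mu(z2))

def var(samples):
    return sorted(samples)[int(q * len(samples))]

# Equal-weight merger of the two fine-grained portfolios.
merged = [(a + b) / 2.0 for a, b in zip(loss_us, loss_eu)]
var_merged = var(merged)
var_sum = 0.5 * (var(loss_us) + var(loss_eu))
# var_merged < var_sum: the merger earns a diversification benefit, so no
# portfolio-invariant per-loan charge can be exactly right under two factors.
```

Setting rho = 1 collapses the two factors into one, and the diversification benefit vanishes, which is exactly the single-factor condition for portfolio-invariance.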
Sorting obligors into a finite number of statistically homogeneous buckets is helpful for purposes of calibration from data, but is not needed for portfolio-invariant capital charges to be obtained.[16]

3 Asymptotic loss distribution under mark-to-market valuation

Actuarial models are simple to calibrate and understand, and fit naturally with traditional book-value accounting applied to bank loan books. However, much of the credit risk is missed, especially for long-dated highly-rated instruments. Because losses are deemed to arise only in the event of default, no credit loss is recognized when, say, a two-year AA-rated loan downgrades after one year to grade BB. Under a mark-to-market (MTM) notion of loss, credit risk includes the risk of downward (or upward) rating migration, short of default, when the instrument's maturity extends beyond the risk horizon. Even for institutions that report on a book-value basis, it may be desirable to calculate capital charges within a MTM framework in order to capture the additional risk associated with longer instrument maturity.

"Loss" is an ambiguous construct in a mark-to-market setting. I follow one widely-used convention in defining the loss rate Ui on asset i as the difference between expected and realized value at the horizon, discounted by the risk-free rate and divided by current market value.[17] For example, ui = 0.2 represents

[16] Multi-state models such as CreditMetrics and CreditPortfolioView typically calibrate PDs to a finite set of rating grades, but the factor loadings wi may be set at the individual obligor level. In this case, each obligor would comprise its own "bucket." In the KMV model, there is a continuum of "rating grades," so buckets do not arise in any natural way.
[17] Coupon payments, if any, are assumed to be accrued to the horizon at the risk-free rate. Some convention also must be imposed on which intra-horizon cashflows are received on defaulting assets.
In practice, how coupons are handled has little effect on the loss distribution, and no qualitative effect on the asymptotics.

a 20% loss, and ui = −0.05 represents a 5% gain. Other definitions can be applied without changing the results below. I redefine "exposure" Ai as the current market value. Credit risk arises due to uncertainty in U. As before, I assume a vector of systematic risk factors X and that the Ui are conditionally independent.

The parameterization and calibration of the µi(x) ≡ E[Ui|x] functions can draw on existing industry models such as CreditMetrics. Say, for example, that we have a rating system with G non-default grades (grade G + 1 denoting default), and for each obligor i we have a set of unconditional transition probabilities p̄ig for grade g at the horizon. From these we calculate threshold values γig for obligor i's asset return Ri (see equation (2)), such that obligor i defaults if Ri ≤ γi,G, and transits to "live" grade g if γi,g < Ri ≤ γi,g−1. The variables (X, ε1, ε2, ..., εn) are iid N(0,1). Therefore, the conditional transition probabilities are given in CreditMetrics by

    pig(x) = Φ((γi,g−1 + x wi)/√(1 − wi²)) − Φ((γi,g + x wi)/√(1 − wi²)),    (12)

and the unconditional transition probabilities determine the thresholds as γi,g = Φ−1(p̄i,g+1 + ... + p̄i,G+1).

Consider a zero-coupon instrument maturing at or after the horizon. Assume the current value Ai is known, and let vi,g(x) be the value of instrument i at the horizon conditional on the obligor migrating to rating g. In standard implementations of CreditMetrics, pricing at the horizon is done by discounting future contractual cash flows, where the spreads for each grade are taken as fixed and known. In principle, however, we can allow spreads to be non-stochastic functions of X. The conditional expected mark-to-market value at the horizon is

    MTMi(x) = Σ_{g=1}^{G} vig(x) pig(x) + Āi (1 − E[LGDi|x]) pi,G+1(x),    (13)

where Āi is the size of the bank's legal claim on the obligor in the event of a default. Coupons can easily be accommodated in this pricing formula as well with some additional notation.
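The threshold construction and the conditional transition probabilities of equation (12) can be sketched in a few lines. The grade count, transition probabilities and factor loading below are hypothetical, chosen only so the mechanics can be checked.

```python
import math
from statistics import NormalDist

N = NormalDist()

# Hypothetical unconditional probabilities for live grades 1..G and default (G+1).
p_bar = [0.90, 0.06, 0.03, 0.01]
G = len(p_bar) - 1
w = 0.4  # illustrative factor loading
s = math.sqrt(1.0 - w * w)

# Thresholds: gamma_g = Phi^{-1}(pbar_{g+1} + ... + pbar_{G+1}); gamma_0 = +infinity.
gamma = [None] + [N.inv_cdf(sum(p_bar[g:])) for g in range(1, G + 1)]

def upper(g, x):
    """Phi((gamma_g + x w)/sqrt(1 - w^2)), treating gamma_0 as +infinity."""
    if g == 0:
        return 1.0
    return N.cdf((gamma[g] + x * w) / s)

def p_grade(g, x):
    """Eq. (12): probability of migrating to live grade g, conditional on X = x."""
    return upper(g - 1, x) - upper(g, x)

def p_default(x):
    """Conditional default probability Phi((gamma_G + x w)/sqrt(1 - w^2))."""
    return upper(G, x)

# Conditional probabilities sum to one at any x, and the conditional default
# probability rises in the bad (high-x) state.
total = sum(p_grade(g, 1.5) for g in range(1, G + 1)) + p_default(1.5)
```

Integrating p_default(x) against the standard normal density recovers the unconditional default probability p̄i,G+1, which is how the thresholds are pinned down.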
The conditional expected loss functions µi(x) are then given by

    µi(x) = (exp(−r Th)/Ai) (E[MTMi(X)] − MTMi(x)),    (14)

where Th is the time to horizon and r is the risk-free yield for term Th.

The results of the previous section can be adapted to a mark-to-market setting without difficulty. In contrast to the actuarial case, MTM loss is not bounded from below by zero (e.g., if the obligor's rating improves, there typically will be a gain in value). In principle, it need not be bounded from above either. To accommodate the MTM case, I modify assumption (A-1) as follows:

(A-1) Conditional on X, the {Ui} are independent. The conditional second moment of loss exists and is bounded; i.e., there exists a function Υ(x) such that E[Ui²|x] ≤ Υ(x) < ∞ for all instruments i and realizations x. Furthermore, E[Υ(X)] < ∞.

This version of the assumption is strictly weaker than the version of Section 1.

For a given portfolio of n assets, Ln, as defined in equation (4), is the discounted portfolio market-valued credit loss at the horizon as a percentage of current market value. I find that all of the Propositions of Section 2 continue to hold, as stated, under the relaxed version of assumption (A-1). Indeed, the proofs given in the appendix explicitly rely only on the relaxed version. The results in no way depend on the assumptions and conventions of CreditMetrics, which are described above for illustrative purposes.[18] By the same logic as before, the appropriate asymptotic capital charge per dollar current market value for asset i is simply µi(αq(X)).

4 Capital adjustments for undiversified idiosyncratic risk

No portfolio is ever infinitely fine-grained: real-world portfolios have finite numbers of obligors and lumpy distributions of exposure sizes. Large portfolios of consumer loans ought to come close enough to the asymptotic ideal that this issue can safely be ignored, but we ought not to presume the same for even the largest commercial loan portfolios. Unless ratings-based capital rules are to be abandoned for a full-blown internal models approach, we require a methodology for assessing a capital add-on to cover the residual idiosyncratic risk that remains undiversified in a portfolio.

Consider a homogeneous portfolio in which each instrument has the same conditional expected loss function µ(x) and the same exposure size. Under assumptions (A-3) and (A-4) and suitable regularity conditions,

    αq(Ln) = µ(αq(X)) + O(n−1).    (15)

That is, the difference between the VaR for a given finite homogeneous portfolio and its asymptotic approximation is proportional to 1/n. One way to obtain this result is through a generalized Cornish-Fisher expansion due to Hill and Davis (1968) for a sequence of distributions converging to an arbitrarily differentiable limiting distribution.
The jth term in the expansion of αq(Ln) is proportional to the difference between the jth cumulants of the distributions for Ln and L∞. Under very general conditions, the cumulants (for j ≥ 2) converge at O(n−1). The difficulty is in specifying precisely a set of regularity conditions under which the Cornish-Fisher expansion is guaranteed to be convergent.

Building on the results of Gouriéroux et al. (2000), Martin and Wilde (forthcoming) derive equation (15) more rigorously as a Taylor series expansion of VaR around its asymptotic value. Although the necessary regularity conditions remain slightly opaque, the main additional requirement is that the conditional variance V[U|x] is locally continuous and differentiable in x. Furthermore, Martin and Wilde show that the O(n−1)

[18] In the spirit of KMV Portfolio Manager, for example, one could replace equation (12) with the conditional density function for the default probability at the horizon. The summation in equation (13) would be replaced by an integral, and the vig would be obtained using risk-neutral valuation. Valuation in the default state in equation (13) also would be modified.

term is given by β/n where[19]

    β = −(1/(2h(x))) d/dx [ V[U|x] h(x) / µ′(x) ] |x=αq(X)    (16)

and where h(x) is the pdf of X. Of course, equation (15) is itself an asymptotic result. When we say that convergence is at rate 1/n, we are saying that for large enough n the gap between VaR and its asymptotic approximation shrinks by half when n is doubled. Short of running the credit VaR model, there is no way to say whether a given n is "large enough" for this relationship to hold.

To see whether our "1/n rule" works well for realistic values of n and realistic model calibrations, I examine the behavior of VaR in an extended version of CreditRisk+. The virtue of CreditRisk+ for this exercise is that it has an analytic solution. We not only can execute the model for any n very quickly, but also avoid Monte Carlo simulation noise in the results. However, the standard CreditRisk+ model assumes fixed loss given default, and so ignores a potentially important source of volatility.[20] For the special case of a homogeneous portfolio, it is not difficult to augment the model to allow for idiosyncratic recovery risk.

As in the standard CreditRisk+, assume that the systematic risk factor X is gamma-distributed with mean one and variance σ². Each obligor has the same default probability p̄ and factor loading w. Each facility in the portfolio has identical exposure size, which is normalized to one, and identical expected LGD. The functional form for the conditional expected loss function is

    µ(x) = E[LGD] · p̄ (1 + w(x − 1)).    (17)

To introduce idiosyncratic recovery risk, assume LGD for each obligor is drawn from a gamma distribution with mean λ and variance η². This specification is convenient because the sum of m independent and identical gamma random variables is gamma-distributed with mean mλ and variance mη². Let Gm denote the gamma cdf with this mean and variance.
Let πm denote the probability that there will be m defaults in the portfolio; these probabilities are calculated in the usual way in CreditRisk+. The cdf of Ln can then be obtained as

    Pr(Ln ≤ y) = Σ_{m=0}^{∞} πm Gm(ny).    (18)

Long before m approaches n, the πm become negligibly small, so numerical calculation of equation (18) presents no difficulty. A minor disadvantage of this specification is that it allows LGD to exceed one. However, so long as η is not too large, aggregate losses in the portfolio will be well-behaved, so the problem can be ignored.

[19] Equation (16) is obtained through less formal arguments in Wilde (2001).
[20] The standard model also implies a discrete loss distribution. As n increases, the "steps" in the loss distribution are re-aligned, which causes local violations of monotonicity in the relationship between n and VaR.

For this model, the asymptotic slope β is given by

    β = ((λ² + η²)/(2λ)) [ (1/σ²) (1 + (σ² − 1)/αq(X)) (αq(X) + (1 − w)/w) − 1 ].    (19)

This formula generalizes a formula derived in Wilde (2001) under the specific parameter values used in the Basel proposal.

Calibration is intended to be qualitatively faithful to available data. When CreditRisk+ is calibrated to rating agency historical performance data, as in Gordy (2000), one finds a negative relationship between p̄ and w. By contrast, when a Merton model such as CreditMetrics is calibrated to these data, there is no strong relationship between PD and factor loading. This makes sense, as there is no strong reason to expect that average asset-value correlation should vary systematically across rating grades. To make use of this stylized fact in our calibration, I choose a constant asset-value correlation of 15% in CreditMetrics, and calculate a within-grade default correlation for each grade. Shifting back to CreditRisk+, I set a conservative but reasonable value of σ = 2 for the volatility of X, and then calibrate w for each rating grade so that the within-grade default correlation matches the value from CreditMetrics.[21] The remainder of the calibration exercise is straightforward. I choose stylized values for the default probabilities, and assume that LGD has mean 0.5 and standard deviation 0.25. The chosen coverage target is q = 0.995 of the loss distribution.

Results are shown in Table 1 for five rating grades. The final column (n = ∞) provides the asymptotic capital charge, so the difference between each column and the final column represents the "true" granularity add-on. Even for portfolios of only n = 200 homogeneous obligors, granularity add-ons are small in the absolute sense (under 60 basis points). However, the add-ons can be large relative to the asymptotic capital charge for investment grade obligors. For a homogeneous portfolio of 200 A-rated loans, the granularity add-on is roughly equal to the asymptotic charge.
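Equation (19) can be cross-checked against the grade-A column of Table 1. The incomplete-gamma series and bisection below are generic utility code (my own, not from the paper); the parameters are Table 1's calibration.

```python
import math

def gamma_cdf(x, shape, scale):
    """Regularized lower incomplete gamma via the standard series expansion."""
    z = x / scale
    if z <= 0.0:
        return 0.0
    ap, term = shape, 1.0 / shape
    total = term
    for _ in range(1000):
        ap += 1.0
        term *= z / ap
        total += term
        if abs(term) < abs(total) * 1e-15:
            break
    return total * math.exp(-z + shape * math.log(z) - math.lgamma(shape))

def gamma_quantile(q, shape, scale):
    lo, hi = 0.0, 100.0 * scale
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gamma_cdf(mid, shape, scale) < q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Table 1 calibration: q = 0.995, sigma = 2, lambda = 0.5, eta = 0.25;
# grade A has pbar = 0.06% and w = 1.011.
q, sigma2, lam, eta = 0.995, 4.0, 0.5, 0.25
p_bar, w = 0.0006, 1.011

alpha = gamma_quantile(q, 1.0 / sigma2, sigma2)   # alpha_q(X), X ~ Gamma(mean 1, var sigma^2)
asym = lam * p_bar * (1.0 + w * (alpha - 1.0))    # asymptotic VaR, from eq. (17)

# Eq. (19): asymptotic slope of the granularity add-on beta/n.
beta = ((lam ** 2 + eta ** 2) / (2.0 * lam)) * (
    (1.0 / sigma2) * (1.0 + (sigma2 - 1.0) / alpha) * (alpha + (1.0 - w) / w) - 1.0
)

approx_1000 = asym + beta / 1000.0   # compare with the n = 1000 entry in Table 1
```

For grade A this reproduces the asymptotic charge of about 0.364% and slightly overshoots the n = 1000 entry, consistent with the concave departures from linearity discussed below Figure 1.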
Table 1: Convergence of Value-at-Risk*

                        VaRq[Ln] for values of n
          p̄       w    n=200   n=500   n=1000  n=2000  n=5000    n=∞
  A      0.06   1.011   0.723   0.521   0.445   0.406   0.381   0.364
  BBB    0.20   0.836   1.425   1.190   1.106   1.064   1.038   1.020
  BB     1.25   0.602   5.217   4.947   4.856   4.810   4.783   4.764
  B      6.25   0.415  17.881  17.584  17.485  17.435  17.405  17.385
  CCC   17.50   0.295  37.663  37.335  37.226  37.172  37.139  37.117

*: Default probabilities and VaR expressed in percentage points. Simulations assume q = 0.995, σ = 2, λ = 0.5 and η = 0.25.

Figure 1 demonstrates the relationship between the theoretical granularity add-on and 1/n for three homogeneous portfolios. For an extremely low quality portfolio (CCC rating), the predicted linear relationship

[21] See Gordy (2000) for more details on the choice of σ and on using within-grade default correlations for consistent calibration across the two models.

holds down to n = 200.[22] For the medium quality (BB rated) portfolio, there are visible but negligible departures from the predicted linear relationship when n < 500. For a high quality portfolio (A rated), departures from linearity are visible at n = 1000 and become significant at lower values of n. Because departures from linearity are in the concave direction, a granularity adjustment calibrated to the asymptotic slope would slightly overshoot the theoretically optimal add-on for smaller high-quality portfolios.

Figure 1: Granularity Add-on as Linear Function of 1/n
[Plot: granularity add-on (vertical axis, 0 to 0.5) against 1/n (horizontal axis, 1/5000 to 1/200) for CCC (o), BB (x) and A (+) portfolios.]
Note: The "true" granularity add-on in the extended CreditRisk+ model is plotted with symbols for three homogeneous portfolios of various sizes. The lines show the corresponding theoretical add-on for this model.

In the case of a non-homogeneous portfolio, determining an appropriate granularity add-on is only slightly more complex. The method of Wilde (2001) accommodates heterogeneity (the V[U|x] and µ(x) terms in equation (16) become V[Ln|x] and Mn(x), respectively). An alternative two-step method also appears to work quite well and may be better suited to a regulatory setting. The first step is to map the actual portfolio to a homogeneous "comparable portfolio" by matching moments of the loss distribution.

[22] The slope between each plotted point is constant to six significant digits for both B (not shown) and CCC portfolios.

The second step is to determine the granularity add-on for the comparable portfolio. The same add-on is applied to the capital charge for the actual portfolio.

Consider a heterogeneous portfolio of n lending facilities divided among B buckets. Within each bucket b, every facility has the same PD p̄b, factor loading wb, expected LGD λb and LGD volatility ηb. Exposure sizes Ai are allowed to vary across facilities in a bucket. To measure the extent to which bucket b exposure is concentrated in a small number of facilities, we require the within-bucket Herfindahl index given by[23]

    Hb ≡ (Σ_{i∈b} Ai²) / (Σ_{i∈b} Ai)².

The higher is Hb, the more concentrated is the exposure within the bucket, so the more slowly idiosyncratic risk is diversified away.

The matching methodology takes bucket-level inputs {p̄b, wb, λb, ηb, Hb} and total bucket exposure. This data structure may be especially convenient in a regulatory setting with bucket-level reporting requirements. The goal is to construct the comparable portfolio as a portfolio of n∗ equal-sized facilities with common PD p̄∗, factor loading w∗, and LGD parameters λ∗ and η∗. In principle, a wide variety of moment restrictions could be used to do the mapping, but it seems best to choose moments with intuitive interpretation. Appendix D develops a matching procedure based on five moments:[24]

• exposure-weighted expected default rate,
• expected portfolio loss rate,
• contribution of systematic risk to loss variance,
• contribution of idiosyncratic default risk to loss variance, and
• contribution of idiosyncratic recovery risk to loss variance.

Under this methodology, each of the parameters of the comparable portfolio is given by an explicit linear equation that can be interpreted as a weighted average of the characteristics of the heterogeneous portfolio. Most interestingly, the number of loans n∗ can be interpreted as an inverse measure of weighted exposure concentration.

[23] The Herfindahl index is a measure of concentration in very widespread use in anti-trust analysis, and should be familiar to many practitioners.
[24] The matching procedure specified in the proposed granularity adjustment of Basel Committee on Bank Supervision (2001, Chapter 8) is based on a different set of moments but follows similar intuition.
Finally, the asymptotic slope β∗ for the comparable homogeneous portfolio is given by equation (19). The portfolio data needed for the mapping method should pose minimal additional reporting burden for regulated institutions. Default probability, expected LGD and total bucket exposure would need to be reported by the bank to calculate the asymptotic capital charge. Factor loadings and LGD volatilities would likely be assigned as functions of the p̄b and λb; this is indeed the case in Basel Committee on Bank Supervision (2001). The only new required inputs, the within-bucket Herfindahl indices, are easily calculated from the individual exposure sizes.

Matching lower-order moments gives no guarantee that the loss distribution for the comparable portfolio will display higher-order moments very close to those of the original heterogeneous portfolio. Tail quantiles of the loss distribution are sensitive to higher-order moments, so the performance of the methodology needs to be confirmed on a range of empirically plausible portfolios. As an example, I construct a portfolio of 600 obligors divided equally across four buckets. The buckets represent high investment grade, low investment grade, high speculative grade and moderate-to-low speculative grade. Factor loadings are calibrated as in Table 1. Expected LGDs for the buckets are set to 0.3, 0.2, 0.6, and 0.5, respectively, and the LGD volatility is set to ηb = 0.5√(λb(1 − λb)). Table 2 displays the bucket-level parameters.

Table 2: Bucket-level Parameters of Stylized Portfolio*

  Bucket     1      2      3      4
  p̄         0.05   0.50   1.00   5.00
  w         1.040  0.715  0.629  0.440
  λ         0.3    0.2    0.6    0.5
  η         0.229  0.200  0.245  0.250

*: Default probabilities in percentage points.

Exposure size for facility i is set to i⁴; i.e., Ai is $1 for the first exposure, $16 for the second, $81 for the third, and so on. The exposures are assigned to buckets by turn. The first exposure is assigned to Bucket 4, the second to Bucket 3, the third to Bucket 2, the fourth to Bucket 1, the fifth to Bucket 4, and so on. Looking at the portfolio as a whole, I find that the largest 10% of exposures account for roughly 40% of total exposure, which matches the empirical rule of thumb reported by Carey (2001) for concentration of outstandings. Also, portfolio exposure is roughly split between investment and speculative grades, which appears to be typical of a commercial loan portfolio at a large bank.[25]

I first obtain parameters for the comparable homogeneous portfolio. The comparable portfolio has n∗ = 218.7 obligors, which is under 40% of the obligor count of the original portfolio.[26] Each obligor has PD of 1.64% and factor loading w∗ = 0.487.
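The stated concentration facts can be verified directly from the exposure rule Ai = i⁴. The reciprocal-Herfindahl "effective obligor count" in the sketch below is a crude stand-in for the Appendix D matching procedure, but it lands close to the reported n∗ = 218.7.

```python
# Stylized portfolio: 600 obligors with exposure A_i = i**4.
n = 600
A = [float(i ** 4) for i in range(1, n + 1)]
total = sum(A)

# Share of total exposure held by the largest 10% of exposures (~41%),
# matching Carey's rule of thumb cited in the text.
top_share = sum(sorted(A)[-n // 10:]) / total

# Portfolio-level Herfindahl index and the implied effective obligor count.
H = sum(a * a for a in A) / total ** 2
n_eff = 1.0 / H   # ~216, versus n* = 218.7 from the moment-matching procedure
```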
Loss given default has expected value 0.491 and volatility 0.247.[27] By construction, the comparable portfolio matches the original portfolio in its expected loss rate of 0.804%. For each portfolio, the standard deviation of the loss rate is 0.918%. Once the comparable portfolio is determined, the constant of proportionality β∗q is calculated for the target percentile q. Finally, I approximate VaRq for the original portfolio as its asymptotic VaR plus β∗q/n∗.

Results are shown in Table 3 for three tail values of q. Row (i) presents estimates of VaR obtained by direct simulation of the original portfolio. Row (ii) presents the asymptotic VaR for the original portfolio given by E[Ln|αq(X)]. Row (iii) shows VaR for the comparable portfolio obtained using the cdf in equation

[25] In a sample of large bank commercial loan portfolios, Treacy and Carey (1998, Chart 3) show that roughly half of aggregate internally rated outstandings are investment grade.
[26] Note that the procedure for calculating the granularity add-on does not require n∗ to be an integer.
[27] In practice, as in this exercise, LGD volatility is often assumed to be a simple function of expected LGD. Reporting requirements and computations could be simplified by setting η∗ = VLGD(λ∗). While this ignores the effect of nonlinearity in VLGD, the difference is typically small. In our example, VLGD(λ∗) would have been 0.250.

(18). The granularity add-on β∗q/n∗ is shown in row (iv). Row (v) sums the asymptotic VaR and granularity add-on to get our approximation. Tracking error between rows (v) and (i) is shown in the final row.

The procedure works well for all values of q. Despite the relatively small obligor count in the comparable portfolio, the error due to linear approximation of the "1/n rule" is minimal. At q = 99.5%, our approximated VaR overshoots its target by 2.2 basis points.

Table 3: Direct and Approximated Estimates of VaR*

                                     q:  99.0    99.5    99.9
  (i)   "True" VaR                      4.577   5.522   7.872
  (ii)  Asymptotic VaR                  4.220   5.109   7.260
  (iii) VaR for comparable portfolio    4.570   5.535   7.872
  (iv)  Granularity add-on              0.357   0.435   0.627
  (v)   Approximated VaR                4.578   5.544   7.886
  (vi)  Tracking error                  0.001   0.022   0.014

*: All quantities expressed in percentage points. "True" VaR estimated by simulation with 300,000 Monte Carlo trials.

It should be emphasized that the theoretical underpinnings for the granularity adjustment apply equally to mark-to-market models. The simple linear formulae for parameters of the comparable homogeneous portfolio depend on the linear functional forms assumed in CreditRisk+. Specifications based on more complex models, e.g., KMV Portfolio Manager or CreditMetrics, imply more complex mapping formulae whose inputs need not be reducible to bucket-level summary statistics (e.g., Herfindahl indices). However, it seems reasonable to conjecture that one can achieve tolerable accuracy using crude rules based on the CreditRisk+ formulae. What is most important is that there be a reasonably accurate measure for the "effective" obligor count (i.e., n∗) in a heterogeneous portfolio. Most bank portfolios are heavy-tailed in exposure size distribution, and thus may have an effective n∗ that is an order of magnitude smaller than the raw obligor count in the portfolio.
5 Asymptotic properties of alternative risk measures

Industry application of credit risk modeling to capital allocation appears almost invariably to equate soundness with a coverage target for value-at-risk. However, because it ignores the distribution of losses beyond the target quantile, VaR has significant theoretical and practical shortcomings. As has been emphasized in recent literature on risk measures, VaR is not sub-additive. That is, if LA and LB are losses on bank portfolios A and B, then we need not have VaRq[LA+LB] ≤ VaRq[LA] + VaRq[LB], which implies that a merger of bank A and bank B could increase VaR; see Frey and McNeil (2002, §2.3) for an example based on credit risk measurement. Sub-additivity is one of the four requirements for a "coherent" risk measure, as defined by Artzner, Delbaen, Eber and Heath (1999).

Under the assumptions needed to achieve portfolio invariance, we have VaRq[LA+LB] = VaRq[LA] + VaRq[LB], so sub-additivity is preserved in mergers of asymptotically fine-grained portfolios. Even in this

case, however, VaR can be manipulated by splitting the portfolio. Under an actuarial definition of loss, for example, a portfolio consisting of a single loan with default probability under 1 − q has VaRq = 0, but the same loan has positive contribution (assuming positive dependence of U on X) to VaR in an asymptotically fine-grained portfolio. Segregating this loan from the larger portfolio does not change the VaR contributions of the remaining loans, so VaR is unambiguously reduced.

Another problem is that a mean-preserving spread of the loss distribution can decrease VaR. This results in a counterintuitive non-monotonic relationship between within-portfolio correlation and VaR. Correlations increase with factor loadings, and, when factor loadings are low to moderate, VaR does as well. However, as factor loadings are pushed higher and higher, the loss distribution becomes increasingly long-tailed. VaR then shrinks towards the median, while the probability of a cataclysmic loss increases. In the limiting case of perfectly comonotonic losses, the default rate is either zero (with probability 1 − p̄) or one (with probability p̄). If p̄ < 1 − q, then VaRq = 0. For a survey discussion of the potential pitfalls of VaR, see Szegö (2002).

As an alternative to VaR, Acerbi and Tasche (2002) propose using generalized expected shortfall ("ES"), defined by

    ESq[Y] = (1 − q)−1 ( E[Y · 1{Y ≥ αq(Y)}] − αq(Y)(q − Pr(Y < αq(Y))) ).    (20)

The first term is often used as the definition of expected shortfall for continuous variables. It is also known as "tail conditional expectations." The second term is a correction for mass at the qth quantile of Y. Under this definition, Acerbi and Tasche (2002) show that ES is coherent and equivalent to Rockafellar and Uryasev's (2002) CVaR. Expected shortfall offers some important advantages as a soundness standard.
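Both the single-loan pathology and the atom correction in the ES definition can be worked out exactly for a two-point loss distribution. The loan's PD and loss severity below are hypothetical.

```python
# Single loan: loss 0.5 with PD 0.003, otherwise no loss. With q = 0.995,
# the PD is below 1 - q, so the default scenario lies beyond the VaR quantile.
outcomes = [(0.0, 0.997), (0.5, 0.003)]
q = 0.995

def quantile(q):
    """Smallest y with Pr(Y <= y) >= q."""
    acc = 0.0
    for y, p in sorted(outcomes):
        acc += p
        if acc >= q:
            return y

def expected_shortfall(q):
    """Generalized ES, including the correction for an atom at the qth quantile."""
    a = quantile(q)
    tail = sum(y * p for y, p in outcomes if y >= a)
    p_below = sum(p for y, p in outcomes if y < a)
    return (tail - a * (q - p_below)) / (1.0 - q)

var_q = quantile(q)            # 0.0: VaR ignores the default scenario entirely
es_q = expected_shortfall(q)   # 0.3: ES still averages over the worst 0.5% of mass
```

Because ES assigns this loan a positive charge, the portfolio-splitting manipulation that drives VaR to zero gains nothing under an ES standard.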
By Acerbi and Tasche (2002, Corollary 3.3), ESq is continuous and monotonic in q, so small increases in the "stringency" of the capital rule (as controlled by q) lead to small increases in required capital. ES is nondecreasing with any mean-preserving spread in the loss distribution, so ES increases with factor loadings. In Appendix E, I show:

Proposition 6 If assumptions (A-1)–(A-5) hold, then |ESq[Ln] − ESq[Mn(X)]| → 0.

An immediate implication is that ES-based capital charges are portfolio invariant under the same assumptions as VaR-based capital charges. The asymptotic expected shortfall, ESq[Mn(X)], can be decomposed as (Σi ci Ai)/(Σi Ai), where the capital charge per dollar of exposure to i is ci = E[Ui | X ≥ αq(X)]. Observe that ci depends only on how Ui depends on X, and so is portfolio-invariant. Under a variety of model specifications, ci has an analytical solution, so there is no operational difficulty in calibrating ratings-based capital charges to an ES soundness standard.

Another alternative to VaR is expected excess loss ("EEL"). For a random variable Y and target loss rate θ > 0, EEL is defined by

    EELθ[Y] ≡ inf{y : E[(Y − y)+] ≤ θ},    (21)

where Y+ denotes max(Y, 0). Under the EEL paradigm, an institution holds capital (plus reserves) so that the expected credit loss in excess of capital is less than or equal to the target loss rate. That is, the required total capital (plus reserves) is given by c = EELθ[Ln] per dollar of total exposure. EEL is sensitive to the tail of the loss distribution, so shares many of the advantages of ES. More importantly, the target rate θ represents the expected loss borne by the depository insurance agency (such as the FDIC in the US), so EEL-based capital has a natural policy interpretation.

Unlike ES, however, EEL cannot be reconciled with portfolio-invariant capital charges. Some intuition for this problem can be gained by writing the asymptotic EEL for homogeneous portfolios in terms of the distribution of the systematic risk factor. Assume we have loans of two types, denoted "a" and "b". Let µa(x) denote the expected loss for bucket a loans conditional on X = x. By reasoning very similar to that of Proposition 6, we can show that E[(La − y)+] → E[(µa(X) − y)+] for any y. Therefore, the asymptotic EEL capital charge ca is set so that E[(µa(X) − ca)+] equals the desired target θ. Similar analysis for bucket b gives cb.

Now say we have a mixed portfolio containing equal numbers of loans from a and b. For simplicity, the exposures are equal-sized. Asymptotic EEL capital for the mixed portfolio is given by EELθ[µm(X)]. By construction of the mixed portfolio, we have µm(X) = (µa(X) + µb(X))/2. If asymptotic EEL were portfolio-invariant, then cm ≡ (ca + cb)/2 would satisfy

    θ = E[(µm(X) − cm)+].    (22)

We now require the following triangle inequality:

Lemma 1 If Y1 and Y2 are integrable random variables on a probability space (Ω, F, P), then

    E[(Y1 + Y2)+] ≤ E[Y1+] + E[Y2+].    (23)

If P({ω : (Y1(ω) < 0 < Y2(ω)) ∨ (Y2(ω) < 0 < Y1(ω))}) > 0, then the inequality in equation (23) is strict.

Proof is given in Appendix F.
The conditions of Lemma 1 apply to Y_j ≡ (µ_j(X) − c_j)/2, which gives us

E[(µ_m(X) − c_m)^+] ≤ E[((µ_a(X) − c_a)/2)^+] + E[((µ_b(X) − c_b)/2)^+] = θ.   (24)

In general, the threshold realization of X at which µ_a(x) = c_a does not equal the corresponding threshold for portfolio b, so for some interval of x values we will have either µ_a(x) − c_a < 0 < µ_b(x) − c_b or µ_a(x) − c_a > 0 > µ_b(x) − c_b. Therefore, the inequality in equation (24) will in most situations be strict, which implies that c_m is too strict a capital requirement for the asymptotic mixed portfolio.

To provide a rough idea of how much we overshoot required capital in a mixed portfolio, I apply EEL to an asymptotic, single systematic factor version of CreditRisk+. In Appendix G, I show that asymptotic EEL

takes on a relatively simple form in this model. Table 4 presents EEL- and VaR-based capital requirements for homogeneous asymptotic portfolios of different credit ratings. Parameters for each rating grade and the volatility of X are taken from Table 1. The "EEL" and "VaR" columns in Table 4 report required capital charges (gross of reserves) for an EEL target of θ = 0.00002 (i.e., 0.2 basis points) and a VaR target of q = 99.5%, respectively. The value of θ was chosen to equate capital requirements under the two standards for an obligor at the border of investment and speculative grades (i.e., between BBB and BB). In this example, the EEL standard produces lower (higher) capital requirements than VaR for the higher (lower) grades.

Table 4: Asymptotic EEL and VaR Capital Charges*

Grade   EEL      VaR
AA       0.050    0.135
A        0.131    0.248
BBB      0.571    0.709
BB       4.135    3.397
B       19.352   12.657
CCC     45.550   27.390

*: Capital in percentage points.

I next form mixed portfolios. In each case, I assume an asymptotic portfolio of equal-sized loans, half of which are in one bucket and half in another bucket. It is straightforward to show that the conditional expected loss rate for a mixed portfolio is

µ_m(x) = (1/2)µ_a(x) + (1/2)µ_b(x) = λ p̄_m (1 − w_m + w_m x),   (25)

where p̄_m = (p̄_a + p̄_b)/2 and w_m = (p̄_a w_a + p̄_b w_b)/(2 p̄_m). The µ_m(x) take on the same form as for homogeneous portfolios, so the tools of Appendix G apply without modification. Results for four different mixed portfolios are presented in Table 5. The third column shows the EEL for the mixed portfolio, while the fourth column shows the average of the EELs for homogeneous portfolios of the two constituent buckets. The final column shows the "tracking error" as a percentage of the third column. As one would expect, the average of the homogeneous capital charges overshoots the correct mixed-portfolio capital charge by a relatively small (though non-negligible) amount when the two buckets are adjacent. For a mix of grades AA and A, we overshoot by under 3%.
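For intuition about how such numbers are produced, the σ = 1 special case of the Appendix G formula admits the closed form c = EL − w·EL·(1 + ln θ − ln(w·EL)). A sketch, using hypothetical bucket parameters rather than the paper's Table 1 calibration, verifying the closed form against a Monte Carlo estimate of E[(µ(X) − c)^+]:

```python
import math, random

def asymptotic_eel_capital(p_bar, lgd, w, theta):
    # sigma = 1 closed form from Appendix G:
    #   c = EL - w*EL*(1 + ln(theta) - ln(w*EL)),  with EL = lgd * p_bar
    # valid when theta <= w*EL, so the implied threshold mu^{-1}(c) >= 0
    el = lgd * p_bar
    return el - w * el * (1.0 + math.log(theta) - math.log(w * el))

# hypothetical bucket: PD 1%, LGD 50%, factor loading w = 0.4, target 2bp
p_bar, lgd, w, theta = 0.01, 0.5, 0.4, 0.0002
c = asymptotic_eel_capital(p_bar, lgd, w, theta)

# Monte Carlo check: with sigma = 1 the systematic factor X is unit-mean
# exponential and mu(X) = EL*(1 - w + w*X); E[(mu(X) - c)^+] should equal theta
random.seed(1)
el = lgd * p_bar
n = 400_000
excess = sum(
    max(el * (1 - w + w * random.expovariate(1.0)) - c, 0.0) for _ in range(n)
) / n
print(round(c, 6), round(excess / theta, 2))  # ratio should be near 1.0
```

The simulation check is the same identity the text exploits: the capital level returned by the closed form leaves expected excess loss equal to the target θ.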
For a mix of BBB and BB, we overshoot by 6.5%. If distant buckets are mixed, the overshoot is much larger (over 16% for the two examples in the table).

Discussion

This paper shows how risk-factor models of credit value-at-risk can be used to justify and calibrate a ratings-based system for assigning capital charges for credit risk at the instrument level. Ratings-based systems, by definition, permit capital charges to depend only on the characteristics of the instrument and its obligor, and

Table 5: Asymptotic EEL for Mixed Portfolios*

Bucket a  Bucket b   c_m      (c_a + c_b)/2   Error
AA        A           0.088     0.090         +2.7%
A         B           8.378     9.741         +16.3%
BBB       BB          2.210     2.353         +6.5%
AA        CCC        19.658    22.800         +16.0%

*: Capital charges expressed in percentage points.

not the characteristics of the remainder of the portfolio. Risk-factor models deliver this property, which I call portfolio invariance, only if two conditions are satisfied. First, the portfolio must be asymptotically fine-grained, in order that all idiosyncratic risk be diversified away. Second, there can be only a single systematic risk factor. Violation of the first condition, which occurs for every finite portfolio, does not pose a serious obstacle in practice. Analysis of rates of convergence of VaR to its asymptotic limit leads to a robust and practical method of approximating a portfolio-level adjustment for undiversified idiosyncratic risk.

The second condition presents a greater dilemma. The single risk factor assumption, in effect, imposes a single monolithic business cycle on all obligors. A revised Basel Accord must apply to the largest international banks, so the single risk factor should in principle represent the global business cycle. By assumption, all other credit risk is strictly idiosyncratic to the obligor. In reality, the global business cycle is a composite of a multiplicity of cycles tied to geography and to prices of production inputs. A single factor model cannot capture any clustering of firm defaults due to common sensitivity to these smaller-scale components of the global business cycle. Holding fixed the state of the global economy, local events in, for example, Spain are permitted to contribute nothing to the default rate of Spanish obligors. If there are indeed pockets of risk, then calibrating a single factor model to a broadly diversified international credit index may significantly understate the capital needed to support a regional or specialized lender.
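The overshoot pattern in Table 5 can be reproduced in miniature. Under the σ = 1 closed form of Appendix G, equation (25) makes the mixed bucket homogeneous in form with p̄_m = (p̄_a + p̄_b)/2 and w_m = (p̄_a w_a + p̄_b w_b)/(2 p̄_m), so c_m can be computed directly and compared with the naive average (c_a + c_b)/2. The bucket parameters below are hypothetical stand-ins, not the paper's Table 1 calibration.

```python
import math

def eel_capital(p_bar, w, theta, lgd=0.5):
    # sigma = 1 closed form from Appendix G:
    #   c = EL - w*EL*(1 + ln(theta) - ln(w*EL)),  EL = lgd * p_bar
    el = lgd * p_bar
    return el - w * el * (1.0 + math.log(theta) - math.log(w * el))

theta = 0.00002  # 0.2 basis points, as in the text
# hypothetical buckets (PD, factor loading): a roughly "high grade", b "low grade"
p_a, w_a = 0.005, 0.4
p_b, w_b = 0.05, 0.5

c_a = eel_capital(p_a, w_a, theta)
c_b = eel_capital(p_b, w_b, theta)

# mixed-bucket parameters per equation (25)
p_m = (p_a + p_b) / 2
w_m = (p_a * w_a + p_b * w_b) / (2 * p_m)
c_m = eel_capital(p_m, w_m, theta)

avg = (c_a + c_b) / 2
print(f"c_m = {c_m:.5f}, (c_a+c_b)/2 = {avg:.5f}, overshoot = {avg / c_m - 1:.1%}")
```

As in Table 5, the naive average strictly exceeds the true mixed-portfolio charge, and the gap widens as the two buckets move further apart.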
Would empirical violation of the single factor assumption necessarily render a risk-bucket capital rule unreliable and ineffective? The answer depends on the scope of application and the sophistication of debt markets. Regulators will need to use caution and judgement in applying risk-bucket capital charges to institutions that are less broadly diversified. One should note that the current Basel Accord, which is itself a risk-bucket system, is applied to an enormous range of institutions, so it seems unlikely that a reformed Accord would bring about any greater harm. More generally, the ability of banks to subvert ratings-based capital rules by exploiting the inadequacy of the single factor assumption depends on the capacity of debt markets to recognize and price different risk factors. At present, such capacity appears to be lacking. Partly because markets do not yet provide precise information on correlations of credit events across obligors, many (perhaps most) of the institutions that actively use credit VaR models effectively impose the single-factor assumption.[28] In the near- to

[28] Users of KMV Portfolio Manager and CreditMetrics often impose a uniform asset-value correlation across obligors. Users of CreditRisk+ typically assume a single factor and a factor loading of w = 1 for all obligors. In both these examples, the user is implicitly imposing both a single systematic factor and a uniform value for the factor loading.

medium-term, therefore, the implausibility of the single factor assumption need not present an obstacle to the implementation of reformed ratings-based risk-bucket capital rules. In the long run, however, the need to relax this assumption may impel adoption of a more sophisticated internal-models regulatory regime.

Appendix

A Proof of Propositions 1 and 2

The proof of Proposition 1 requires a version of the strong law of large numbers for a sequence {Y_n} of independent random variables and a sequence {a_n} of positive constants:

Lemma 2 (Petrov (1995), Theorem 6.7) If a_n ↑ ∞ and Σ_{n=1}^∞ (V[Y_n]/a_n²) < ∞, then

(1/a_n)(Σ_{i=1}^n Y_i − E[Σ_{i=1}^n Y_i]) → 0 a.s.

We also make use of the following lemma:

Lemma 3 If {b_n} is a sequence of positive real numbers such that {b_n} is O(n^−ρ) for some ρ > 1, then Σ_{n=1}^∞ b_n < ∞.

This lemma is a corollary of Theorem 3.5.2 in Knopp (1956) and the convergence of the harmonic series Σ 1/n^ρ for ρ > 1 (see Knopp 1956, Example 3.1.2.3).

We now prove Proposition 1. Let Y_n ≡ U_n A_n and a_n ≡ Σ_{i=1}^n A_i. For any realization x, conditional independence implies

Σ_{n=1}^∞ (V[Y_n|x]/a_n²) = Σ_{n=1}^∞ V[U_n|x] (A_n / Σ_{i=1}^n A_i)².

Under the actuarial definition of loss, U_n is bounded in [0,1], so we must have V[U_n|x] < 1 for any X = x. For this proposition to hold under the mark-to-market paradigm as well, assumption (A-1) provides a bound on V[U_n|x]. Therefore, under either definition of loss, there exists a finite constant V* such that

Σ_{n=1}^∞ (V[Y_n|x]/a_n²) ≤ V* Σ_{n=1}^∞ (A_n / Σ_{i=1}^n A_i)².

By part (b) of assumption (A-2), the sequence {A_n / Σ_{i=1}^n A_i} is O(n^−(1/2+ζ)) for some ζ > 0, so the sequence {(A_n / Σ_{i=1}^n A_i)²} is O(n^−(1+2ζ)). By Lemma 3, the series sum must be finite. By part (a) of

assumption (A-2), we have a_n ↑ ∞. The conditions of Lemma 2 are thus satisfied. The loss ratio L_n is equal to Σ_{i=1}^n Y_i/a_n, so Proposition 1 is proved. QED

Proposition 2 follows similar logic. We require the following lemma:

Lemma 4 Let {b_n} and {d_n} be sequences of real numbers such that a_n ≡ Σ_{i=1}^n b_i ↑ ∞ and d_n → 0. Then (1/a_n) Σ_{i=1}^n b_i d_i → 0.

This result is a special case of Petrov (1995), Lemma 6.10. If we let b_n = A_n and d_n = A_n / Σ_{i=1}^n A_i, then assumption (A-2) guarantees that a_n ↑ ∞ and d_n → 0, so apply Lemma 4 to get

(1/Σ_{i=1}^n A_i) Σ_{i=1}^n (A_i² / Σ_{j=1}^i A_j) → 0.   (26)

The standard rule for conditional variance gives

V[L_n] − V[E[L_n|X]] = E[V[L_n|X]] = (Σ_{i=1}^n A_i² E[V[U_i|X]]) / (Σ_{i=1}^n A_i)².

E[V[U_i|X]] must be less than one under the actuarial paradigm. Under the mark-to-market paradigm, we have E[V[U_i|X]] < E[Υ(X)] < ∞ by assumption (A-1). Therefore, there exists a finite constant V* such that

E[V[L_n|X]] ≤ V* Σ_{i=1}^n A_i² / (Σ_{i=1}^n A_i)² = V* (1/Σ_{i=1}^n A_i) Σ_{i=1}^n (A_i² / Σ_{j=1}^n A_j) ≤ V* (1/Σ_{i=1}^n A_i) Σ_{i=1}^n (A_i² / Σ_{j=1}^i A_j) → 0.

As E[V[L_n|X]] must be non-negative and is bounded from above by a quantity converging to zero, it too must converge to zero. QED

B Proof of Proposition 3

Almost sure convergence implies convergence in probability (see Billingsley 1995, Theorem 25.2), so for all x and ε > 0,

Pr(|L_n − E[L_n|x]| < ε | x) → 1.   (27)

If F_n is the cdf of L_n, then equation (27) implies F_n(E[L_n|x] + ε | x) − F_n(E[L_n|x] − ε | x) → 1. Because F_n is bounded in [0,1], we must have F_n(E[L_n|x] + ε | x) → 1 and F_n(E[L_n|x] − ε | x) → 0.

Let S_n^+ denote the set of realizations x of X such that E[L_n|x] is less than or equal to its qth quantile

value, i.e.,

S_n^+ ≡ {x : E[L_n|x] ≤ α_q(E[L_n|X])}.

By construction, Pr(x ∈ S_n^+) ≥ q. By the usual rules for conditional probability, we have

F_n(α_q(E[L_n|X]) + ε) = F_n(α_q(E[L_n|X]) + ε | X ∈ S_n^+) Pr(X ∈ S_n^+) + F_n(α_q(E[L_n|X]) + ε | X ∉ S_n^+) Pr(X ∉ S_n^+)
  ≥ F_n(α_q(E[L_n|X]) + ε | X ∈ S_n^+) Pr(X ∈ S_n^+)
  ≥ F_n(α_q(E[L_n|X]) + ε | X ∈ S_n^+) q.   (28)

For all x ∈ S_n^+, we have

F_n(α_q(E[L_n|X]) + ε | x) ≥ F_n(E[L_n|x] + ε | x) → 1,

so the dominated convergence theorem (Billingsley 1995, Theorem 16.4) implies that F_n(α_q(E[L_n|X]) + ε | X ∈ S_n^+) → 1, so from equation (28) we have F_n(α_q(E[L_n|X]) + ε) → [q,1] as required.

The other half of the proof follows similarly. Define S_n^− as

S_n^− ≡ {x : E[L_n|x] ≥ α_q(E[L_n|X])},

so that Pr(x ∈ S_n^−) ≥ 1 − q. Then

F_n(α_q(E[L_n|X]) − ε) = F_n(α_q(E[L_n|X]) − ε | X ∉ S_n^−) Pr(X ∉ S_n^−) + F_n(α_q(E[L_n|X]) − ε | X ∈ S_n^−) Pr(X ∈ S_n^−)
  ≤ q + F_n(α_q(E[L_n|X]) − ε | X ∈ S_n^−) Pr(X ∈ S_n^−).   (29)

For all x ∈ S_n^−, we have

F_n(α_q(E[L_n|X]) − ε | x) ≤ F_n(E[L_n|x] − ε | x) → 0

so the dominated convergence theorem implies that F_n(α_q(E[L_n|X]) − ε | X ∈ S_n^−) → 0, so from equation (29) we have F_n(α_q(E[L_n|X]) − ε) → [0,q] as required. QED

C Proof of Proposition 5

The proof of Proposition 5 requires the following lemma:

Lemma 5 Let Y_1 and Y_2 be random variables with cdfs F_1 and F_2, respectively. For all y and all ε > 0,

|F_1(y) − F_2(y)| ≤ Pr(|Y_1 − Y_2| > ε) + max{F_2(y + ε) − F_2(y), F_2(y) − F_2(y − ε)}.

Proof: Corollary of Petrov (1995, Lemma 1.8).

To apply Lemma 5, let Y_1 = L_n with cdf F_n and Y_2 = E[L_n|X] with cdf F_n*. Fix an open set B and real numbers n_0 and δ, δ̄ for which assumptions (A-4) and (A-5) are satisfied. At every point x̂ ∈ B, we have

|F_n(E[L_n|x̂]) − F_n*(E[L_n|x̂])| ≤ Pr(|L_n − E[L_n|X]| > ε) + max{F_n*(E[L_n|x̂] + ε) − F_n*(E[L_n|x̂]), F_n*(E[L_n|x̂]) − F_n*(E[L_n|x̂] − ε)}   (30)

for any ε > 0. For n > n_0, (A-5) guarantees that M_n(x) is strictly increasing on B, so for all x ∈ B, M_n(X) ≤ M_n(x) if and only if X ≤ x, so F_n*(M_n(x)) = H(x), where H is the cdf of X.

Fix ε* > 0 such that (x̂ − ε*, x̂ + ε*) ⊂ B. For any positive ε < δε*, we then have x̂ + ε/δ ∈ B, so M_n(x̂ + ε/δ) − M_n(x̂) > ε for all n > n_0. As F_n* is nondecreasing, this implies that

F_n*(E[L_n|x̂] + ε) ≤ F_n*(M_n(x̂ + ε/δ)) = H(x̂ + ε/δ).

Similarly, we have

F_n*(E[L_n|x̂] − ε) ≥ F_n*(M_n(x̂ − ε/δ)) = H(x̂ − ε/δ).

Thus, for all n > n_0,

max{F_n*(E[L_n|x̂] + ε) − F_n*(E[L_n|x̂]), F_n*(E[L_n|x̂]) − F_n*(E[L_n|x̂] − ε)} ≤ max{H(x̂ + ε/δ) − H(x̂), H(x̂) − H(x̂ − ε/δ)}.

Assumption (A-5) also provides that H is continuous and increasing on B, so for any η > 0 there exists ε > 0 such that

max{H(x̂ + ε/δ) − H(x̂), H(x̂) − H(x̂ − ε/δ)} < η.   (31)

By Proposition 1 and the dominated convergence theorem, L_n − E[L_n|X] converges to zero almost surely, which implies convergence in probability as well. Therefore, for any choice of ε > 0 and η > 0, there exists n_ε < ∞ such that

Pr(|L_n − E[L_n|X]| > ε) < η   (32)

for all n > n_ε. Combining these results, we have that for any η > 0, there exists an ε > 0 such that equations (31) and (32) are simultaneously satisfied for n > max{n_0, n_ε}. Thus,

lim_{n→∞} |F_n(M_n(x̂)) − F_n*(M_n(x̂))| = 0.   (33)

Setting x̂ = α_q(X) and observing that F_n*(M_n(α_q(X))) = H(α_q(X)) = q establishes the first result of Proposition 5.

For any positive η < δε* and n > n_0, (A-5) implies that M_n(α_q(X)) − M_n(α_q(X) − η/δ̄) ≤ η, so F_n(M_n(α_q(X)) − η) ≤ F_n(M_n(α_q(X) − η/δ̄)). Because α_q(X) − η/δ̄ ∈ B, we have by equation (33) that

|F_n(M_n(α_q(X) − η/δ̄)) − F_n*(M_n(α_q(X) − η/δ̄))| → 0.

For n > n_0, F_n*(M_n(α_q(X) − η/δ̄)) = H(α_q(X) − η/δ̄) < q, so there exists ñ_− < ∞ such that F_n(M_n(α_q(X) − η/δ̄)) < q for all n > ñ_−. Thus, for all n > ñ_−, F_n(M_n(α_q(X)) − η) < q, which implies that M_n(α_q(X)) − η < α_q(L_n). By a parallel argument, for all positive η < δε* there exists ñ_+ < ∞ such that M_n(α_q(X)) + η > α_q(L_n) for all n > ñ_+. Thus, for all n > max{ñ_−, ñ_+}, we have |α_q(L_n) − M_n(α_q(X))| < η. As η can be made arbitrarily close to zero, the second result of Proposition 5 is established. QED

D Construction of the comparable homogeneous portfolio

Moment restrictions provide a convenient and intuitive way to map the heterogeneous portfolio into a homogeneous portfolio of n* equal-sized facilities with common PD p̄*, factor loading w*, and LGD parameters

λ* and η*. Let s_b denote the share of total portfolio exposure held in bucket b, i.e.,

s_b ≡ (Σ_{i∈b} A_i) / (Σ_i A_i).

The first two restrictions equate exposure-weighted expected default rate and expected portfolio loss rate:

p̄* = Σ_{b=1}^B p̄_b s_b   and   λ* p̄* = Σ_{b=1}^B λ_b p̄_b s_b.   (34)

Thus, λ* is the expected loss rate divided by the expected default rate; i.e.,

λ* = (Σ_{b=1}^B λ_b p̄_b s_b) / (Σ_{b=1}^B p̄_b s_b).   (35)

The remaining moment restrictions equate across the actual and comparable portfolios the contribution to loss variance from different sources of risk. The contribution of systematic risk (i.e., V[E[L|X]]) takes the simple form

V[E[L_n|X]] = σ² (Σ_{b=1}^B λ_b p̄_b w_b s_b)²,   V[E[L*|X]] = σ² (λ* p̄* w*)²,

which implies

w* = (Σ_{b=1}^B λ_b p̄_b w_b s_b) / (Σ_{b=1}^B λ_b p̄_b s_b).   (36)

Note that w* is simply an expected-loss-weighted average of the w_b. The contribution of idiosyncratic risk to loss variance (i.e., E[V[L_n|X]]) works out to

E[V[L_n|X]] = Σ_{b=1}^B H_b s_b² (λ_b² (p̄_b(1 − p̄_b) − (p̄_b w_b σ)²) + p̄_b η_b²)

E[V[L*|X]] = (1/n*) (λ*² (p̄*(1 − p̄*) − (p̄* w* σ)²) + p̄* η*²).

Terms containing λ²(p̄(1 − p̄) − (p̄wσ)²) represent the contribution of idiosyncratic default risk, and terms containing p̄η² represent the contribution of idiosyncratic recovery risk. By matching these two contributions separately, I get the final two restrictions needed for identification. The number of exposures in the

comparable portfolio works out to

n* = (Σ_{b=1}^B Λ_b H_b s_b²)^−1,   (37)

where

Λ_b ≡ (λ_b² (p̄_b(1 − p̄_b) − (p̄_b w_b σ)²)) / (λ*² (p̄*(1 − p̄*) − (p̄* w* σ)²)).

Finally, the variance of LGD for the comparable portfolio is given by

η*² = (n*/p̄*) Σ_{b=1}^B η_b² p̄_b H_b s_b².   (38)

E Proof of Proposition 6

Expected shortfall is the sum of expected loss in the tail and a correction term for mass at the VaR boundary. Under the given assumptions, the latter term disappears asymptotically both in ES_q[L_n] and ES_q[M_n(X)]. For n > n_0, the variable M_n(X) is continuous in the neighborhood of M_n(α_q(X)) and, by Proposition 4, α_q(M_n(X)) = M_n(α_q(X)); this implies Pr(M_n(X) < M_n(α_q(X))) = q. By Chebyshev's inequality and assumption (A-1), we have

M_n(α_q(X))² ≤ E[M_n(X)²]/(1 − q) ≤ E[Υ(X)]/(1 − q) < ∞,

so the sequence {M_n(α_q(X))} is bounded from above. Therefore,

α_q(M_n(X)) (q − Pr(M_n(X) < α_q(M_n(X)))) = 0   for all n > n_0.

Although L_n need not be continuous, arguments parallel to those used in the proof of Proposition 5 show that Pr(L_n ≥ α_q(L_n)) → 1 − q. That proposition also provides that |α_q(L_n) − M_n(α_q(X))| → 0, so α_q(L_n) is asymptotically bounded from above. This implies

lim_{n→∞} |α_q(L_n) (q − Pr(L_n < α_q(L_n)))| = 0.

To complete the proof, we now only need show that

|E[L_n · 1{L_n ≥ α_q(L_n)}] − E[M_n(X) · 1{M_n(X) ≥ α_q(M_n(X))}]| → 0.   (39)

Let Y_n ≡ (L_n − α_q(L_n)) and let Ŷ_n ≡ (M_n(X) − M_n(α_q(X))). The terms of equation (39) can be

re-written as

E[M_n(X) · 1{M_n(X) ≥ α_q(M_n(X))}]
  = E[(M_n(X) − M_n(α_q(X))) · 1{M_n(X) − M_n(α_q(X)) ≥ 0}] + M_n(α_q(X)) E[1{M_n(X) ≥ M_n(α_q(X))}]
  = E[max{Ŷ_n, 0}] + M_n(α_q(X)) Pr(M_n(X) ≥ M_n(α_q(X)))

and similarly

E[L_n · 1{L_n ≥ α_q(L_n)}] = E[max{Y_n, 0}] + α_q(L_n) Pr(L_n ≥ α_q(L_n)).

As |α_q(L_n) − M_n(α_q(X))| → 0 and

lim_{n→∞} Pr(L_n ≥ α_q(L_n)) = lim_{n→∞} Pr(M_n(X) ≥ M_n(α_q(X))) = 1 − q,

we have

|α_q(L_n) Pr(L_n ≥ α_q(L_n)) − M_n(α_q(X)) Pr(M_n(X) ≥ M_n(α_q(X)))| → 0.

For all y, ŷ ∈ ℝ, |max(y, 0) − max(ŷ, 0)| ≤ |y − ŷ|. Therefore,

|E[max{Y_n, 0}] − E[max{Ŷ_n, 0}]| ≤ E[|Y_n − Ŷ_n|] ≤ E[|L_n − M_n(X)|] + |α_q(L_n) − M_n(α_q(X))|.

As each of these terms converges to zero, equation (39) is established. QED

F Proof of Lemma 1

Divide Ω into two subsets:

B_1 = {ω : 0 ≤ min(Y_1(ω), Y_2(ω)) ∨ max(Y_1(ω), Y_2(ω)) ≤ 0}
B_2 = {ω : (Y_1(ω) < 0 < Y_2(ω)) ∨ (Y_2(ω) < 0 < Y_1(ω))}.

Observe that B_1 ∪ B_2 = Ω and B_1 ∩ B_2 = ∅. If Y is an integrable random variable on (Ω, F, P), we can write

E[Y^+] = ∫_Ω max(Y(ω), 0) P(dω) = ∫_{B_1} max(Y(ω), 0) P(dω) + ∫_{B_2} max(Y(ω), 0) P(dω).

The set B_1 contains all ω for which Y_1 and Y_2 are either both positive or both negative. Under both these circumstances, max(Y_1(ω) + Y_2(ω), 0) equals max(Y_1(ω), 0) + max(Y_2(ω), 0), so

∫_{B_1} max(Y_1(ω) + Y_2(ω), 0) P(dω) = ∫_{B_1} max(Y_1(ω), 0) P(dω) + ∫_{B_1} max(Y_2(ω), 0) P(dω).   (40)

The set B_2 contains all ω for which Y_1 and Y_2 are of opposite sign, so

∫_{B_2} max(Y_1(ω) + Y_2(ω), 0) P(dω) ≤ ∫_{B_2} max(Y_1(ω), 0) P(dω) + ∫_{B_2} max(Y_2(ω), 0) P(dω).   (41)

Summing left and right hand sides of equations (40) and (41), we obtain

E[(Y_1 + Y_2)^+] ≤ E[Y_1^+] + E[Y_2^+].   (42)

If P(B_2) > 0, then the inequality in equation (41) is strict, and therefore the inequality in equation (42) is strict as well. QED

G Asymptotic EEL in CreditRisk+

I derive the asymptotic EEL for a homogeneous portfolio under a single systematic factor version of CreditRisk+. Let p̄ denote default probability, λ denote LGD, w denote factor loading, and σ denote the volatility of systematic factor X. The conditional expected loss rate in the CreditRisk+ specification is given by equation (17). As n → ∞, L_n converges to µ(X), so asymptotic EEL is equal to the value of c solving

θ = E[(µ(X) − c)^+] = ∫_{µ^−1(c)}^∞ (µ(x) − c) h(x) dx,   (43)

where h(·) is the gamma pdf with mean one and variance σ². Using Abramowitz and Stegun (1968, 6.5.1, 6.5.21) to solve this integral, I obtain

θ = (EL − c)(1 − H(µ^−1(c))) + (EL · w / Γ(1 + 1/σ²)) (µ^−1(c)/σ²)^{1/σ²} exp(−µ^−1(c)/σ²),   (44)

where H(·) denotes the gamma cdf, EL is expected loss (λp̄), and

µ^−1(c) = (c − (1 − w) · EL) / (w · EL).

The gamma cdf is available in nearly all numerical packages. Standard software for solving nonlinear equations quickly finds the capital ratio c which covers EEL target θ. In the special case of σ = 1, the

gamma distribution reduces to the exponential distribution, and equation (44) simplifies to

θ = w · EL · exp(−µ^−1(c)).

This yields the closed-form solution

EEL_θ[L_∞] = c = EL − w · EL · (1 + ln(θ) − ln(w · EL)).

References

Abramowitz, Milton and Irene A. Stegun, Handbook of Mathematical Functions, Applied Mathematics Series no. 55, National Bureau of Standards, 1968.

Acerbi, Carlo and Dirk Tasche, "On the coherence of expected shortfall," Journal of Banking and Finance, 2002, 26 (7), 1487–1503.

Artzner, Philippe, Freddy Delbaen, Jean-Marc Eber, and David Heath, "Coherent Measures of Risk," Mathematical Finance, 1999, 9 (3), 203–228.

Basel Committee on Bank Supervision, "Credit Risk Modelling: Current Practices and Applications," Technical Report, Bank for International Settlements, 1999.

Basel Committee on Bank Supervision, "The Internal Ratings-Based Approach: Supporting Document to the New Basel Capital Accord," Technical Report, Bank for International Settlements, 2001.

Billingsley, Patrick, Probability and Measure, third ed., New York: John Wiley & Sons, 1995.

Bürgisser, Peter, Alexandre Kurth, and Armin Wagner, "Incorporating Severity Variations into Credit Risk," Journal of Risk, 2001, 3 (4), 5–31.

Carey, Mark, "Dimensions of Credit Risk and Their Relationship to Economic Capital Requirements," in Frederic S. Mishkin, ed., Prudential Supervision: What Works and What Doesn't, University of Chicago Press, 2001.

Credit Suisse Financial Products, CreditRisk+: A Credit Risk Management Framework, London: Credit Suisse Financial Products, 1997.

Finger, Christopher C., "Conditional Approaches for CreditMetrics Portfolio Distributions," CreditMetrics Monitor, 1999, pp. 14–33.

Frey, Rüdiger and Alexander J. McNeil, "VaR and expected shortfall in portfolios of dependent credit risks: Conceptual and practical insights," Journal of Banking and Finance, 2002, 26 (7), 1317–1334.

Frye, Jon, "Collateral Damage: A Source of Systematic Credit Risk," Risk, 2000.
GARP Committee on Regulation and Supervision, "Response to Basle's 'Credit Risk Modelling: Current Practices and Applications'," Technical Report, Global Association of Risk Professionals, 1999.

Gordy, Michael B., "A Comparative Anatomy of Credit Risk Models," Journal of Banking and Finance, January 2000, 24 (1-2), 119–149.

Gouriéroux, C., J.P. Laurent, and O. Scaillet, "Sensitivity analysis of Values at Risk," Journal of Empirical Finance, 2000, 7, 225–245.

Gupton, Greg M., Christopher C. Finger, and Mickey Bhatia, CreditMetrics–Technical Document, New York: J.P. Morgan & Co. Incorporated, 1997.

Hill, G.W. and A.W. Davis, "Generalized Asymptotic Expansions of Cornish-Fisher Type," Annals of Mathematical Statistics, August 1968, 39 (4), 1264–1273.

Jones, David, "Emerging problems with the Basel Capital Accord: Regulatory capital arbitrage and related issues," Journal of Banking and Finance, January 2000, 24 (1-2), 35–58.

Knopp, Konrad, Infinite Sequences and Series, New York: Dover Publications, 1956.

Martin, Richard and Tom Wilde, "Unsystematic Credit Risk," Risk, forthcoming.

Petrov, Valentin V., Limit Theorems of Probability Theory, Oxford Studies in Probability no. 4, Oxford University Press, 1995.

Rockafellar, R. Tyrrell and Stanislav Uryasev, "Conditional value-at-risk for general loss distributions," Journal of Banking and Finance, 2002, 26 (7), 1443–1471.

Szegö, Giorgio, "Measures of Risk," Journal of Banking and Finance, 2002, 26 (7), 1253–1272.

Tasche, Dirk, "Conditional Expectation as Quantile Derivative," November 2000.

Tasche, Dirk, "Calculating Value-at-Risk contributions in CreditRisk+," November 2001.

Tasche, Dirk, "Expected shortfall and beyond," Journal of Banking and Finance, 2002, 26 (7), 1519–1533.

Treacy, William F. and Mark S. Carey, "Credit Risk Rating at Large U.S. Banks," Federal Reserve Bulletin, 1998, 84 (11), 897–921.

Vasicek, Oldrich A., "The Loan Loss Distribution," Technical Report, KMV Corporation, December 1997.

Wilde, Tom, "Probing Granularity," Risk, 2001, pp. 103–106.