1. Introduction
“Whereas a catastrophe can occur in an instant, longevity risk takes decades to unfold”
The Economist (2012)
1.1 Longevity is different from many other risks an insurer faces because the risk lies in the long-term trend taken by mortality rates. However, although longevity is typically a long-term risk, it is often necessary to pose questions over a short-term horizon, such as a year. Two useful questions in risk management and reserving are “what could happen over the coming year to change the best-estimate projection?” and “by how much could a reserve change based on new information?”. The pending Solvency II regulations for insurers and reinsurers in the EU are concerned with reserves being adequate in 99.5% of situations which might arise over the coming year.
1.2 This paper describes a framework for answering such questions, and for setting reserve requirements for longevity risk based on a one-year horizon instead of the more natural long-term approach. The paper contrasts three approaches to reserving for longevity risk: the stressed-trend method, the mortality-shock method and a new value-at-risk proposal. The framework presented here is general, and can work with any stochastic projection model which can be fitted to data and is capable of generating sample paths. Like Börger (2010), Plat (2011) and Cairns (2011), we will work with all-cause mortality rates rather than rates disaggregated by cause of death.
1.3 Börger (2010) and Cairns (2011) both work with so-called forward models of mortality, i.e. models which produce multi-year survival probabilities as their output. In this paper we will work with what are sometimes described as spot models for mortality, i.e. models which produce instantaneous rates such as the force of mortality, μx, or the rate of mortality, qx. A forward model would specify directly a distribution for the survival probability, ${}_tp_x$, whereas a spot model would require sample-path simulation in order to empirically derive a distribution for ${}_tp_x$. The motivation for a forward model comes from the desire to deal with (or avoid) nested stochastic simulations. Both Börger (2010) and Plat (2011) approach this problem by presenting a new model specifically designed for value-at-risk problems involving longevity. In writing about some specific challenges regarding dynamic hedging, Cairns (2011) evaluates some approximations to avoid nested stochastic simulations. In contrast, the framework presented in this paper avoids nested simulations by design and without approximation.
1.4 In considering the Solvency Capital Requirement (SCR) for longevity risk, Börger (2010) concluded that “the computation of the SCR for longevity risk via the VaR approach obviously requires stochastic modelling of mortality”. Similarly, Plat (2011) stated that “naturally this requires stochastic mortality rates”. This paper therefore only considers stochastic mortality as a solution to the value-at-risk question of longevity risk; accordingly, the framework presented here requires a stochastic projection model.
1.5 Cairns (2011) warns of the risks in relying on a single model by posing the oft-overlooked questions “what if the parameters […] have been miscalibrated?” and “what if the model itself is wrong?”. Cairns (2011) further writes that any solution “should be applicable to a wide range of stochastic mortality models”. The framework described in this paper works with a wide variety of models, enabling practitioners to explore the impact of model risk on capital requirements.
1.6 This paper is about the technical question of how to put longevity trend risk into a one-year, value-at-risk framework. It is not a detailed review of current industry practices, which would be a quite different study. Neither is this paper an attempt to evaluate the respective merits of the various models used — the reader is directed to Cairns et al (2009) for just such a comparison, and also to Currie (2012) for questions over the validity of model assumptions. We use a variety of all-cause mortality models to illustrate the methodology proposed in this paper, and so the question of whether to project by all-cause mortality or by cause of death is a side issue here; the interested reader could continue with Booth & Tickle (2008) or Richards (2010).
2. Data
2.1 The data used in this paper are the all-cause number of deaths aged x last birthday during each calendar year y, split by gender. Corresponding mid-year population estimates are also given. The data therefore lend themselves to modelling the force of mortality, $\mu_{x+\frac{1}{2},\,y+\frac{1}{2}}$, without further adjustment. We use data provided by the Office for National Statistics (ONS) for England & Wales for the calendar years 1961–2010 inclusive. This particular data set has death counts and estimated exposures at individual ages up to age 104. We will work here with the subset of ages 50–104, which is most relevant for insurance products sold around retirement ages. The deaths and exposures in the age group labelled “105+” were not used. More detailed discussion of this data set, particularly regarding the estimated exposures, can be found in Richards (2008).
2.2 One consequence of only having data to age 104 is having to decide how to calculate annuity factors for comparison. One option would be to create an arbitrary extension of the projected mortality rates up to (say) age 120. Another alternative is to simply look at temporary annuities to avoid artefacts arising from the arbitrary extrapolation. We use the latter approach in this paper, and we therefore calculate continuously paid temporary annuity factors as follows:
where i is the discount rate, $v^t = (1+i)^{-t}$, and ${}_tp_{x,y}$ is the probability that a life aged x at outset in year y survives for t years:
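In this notation, Equations 1 and 2 are presumably of the standard forms, with the upper limit 105 − x reflecting the temporary annuity to age 105:

$$ \bar{a}_{x,y} \;=\; \int_0^{105-x} v^t\, {}_tp_{x,y}\, dt, \qquad {}_tp_{x,y} \;=\; \exp\left(-\int_0^t \mu_{x+s,\,y+s}\, ds\right) $$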
2.3 Restricting our calculations to temporary annuities has no meaningful consequences at the main ages of interest, as shown in Table 1.
For reference, our temporary continuous annuity factors are calculated using the trapezoidal rule for approximating an integral on a uniform grid:
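On a grid with unit (annual) spacing, the trapezoidal approximation is presumably of the standard form:

$$ \bar{a}_{x,y} \;\approx\; \tfrac{1}{2} \;+\; \sum_{t=1}^{104-x} v^t\, {}_tp_{x,y} \;+\; \tfrac{1}{2}\, v^{105-x}\, {}_{105-x}p_{x,y} $$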
2.4 In this paper we will use y = 2011 as a common outset year throughout. From Equations 1 and 2 we will always need a mortality projection for at least (105−x) years to calculate the annuity factor, even if we are only looking for the one-year change in the value of the annuity factor. While we have opted for the temporary-annuity solution, it is worth noting that the models of Cairns et al (2006) and Richards et al (2006) are capable of extrapolating mortality rates to higher (and lower) ages at the same time as projecting forward in time. These models therefore deserve a special place in the actuarial toolkit, and the subject is discussed in more detail by Richards & Currie (2011) and Currie (2011).
3. Components of longevity risk
3.1 For high-level work it is often necessary to quote a single capital amount or percentage of reserve held in respect of longevity risk. However, it is a good discipline to itemise the various possible components of longevity risk, and a specimen list is given in Table 2.
3.2 Table 2 is not intended to be exhaustive and, depending on the nature of the liabilities, other longevity-related elements might appear. In a defined-benefit pension scheme, or in a portfolio of bulk-purchase annuities, there would be uncertainty over the proportion of pensioners who were married, and whose death might lead to the payment of a spouse's pension. Similarly, there would be uncertainty over the age of that spouse. Within an active pension scheme there might be risk related to early retirements, commutation options or death-in-service benefits. Such risks might be less important to a portfolio of individual annuities, but such portfolios would be exposed to additional risk in the form of anti-selection from the existence of the enhanced-annuity market.
3.3 This paper will only address the trend-risk component of Table 2, so the figures in Table 5 and elsewhere can only be minimum values for the total capital requirement for longevity risk. Other components will have to be estimated in very different ways: reserving for model risk requires a degree of subjectivity, while idiosyncratic risk can best be assessed using simulations of the actual portfolio — see Plat (2011) and also Richards & Currie (2009) for some examples for different portfolio sizes. For large portfolios the idiosyncratic risk will often be diversified away almost to zero in the presence of the other components. In contrast, trend risk and model risk will always remain, regardless of how large the portfolio is. For an overview of how capital requirements for longevity risk fit into the wider regulatory framework, see Makin (2011a, 2011b).
4. The stressed-trend approach to longevity risk
4.1 Longevity risk lies in the long-term trend taken by mortality rates. This trend unfolds over many years as an accumulation of small changes, so a natural approach to reserving is to use a long-term stress projection applying over the potential lifetime of the annuitant or pensioner. This view of longevity trend risk is sometimes called the run-off approach, and it does not correspond with the one-year view demanded by a pure value-at-risk methodology. However, the run-off or stressed-trend approach is arguably the most appropriate way to investigate longevity trend risk, and it should therefore not be viewed as invalid merely because it is not a one-year methodology. While a great many insurance risks fit naturally into a one-year value-at-risk framework, not all risks do. It would be excessively dogmatic to insist that longevity trend risk only be measured over a one-year horizon.
4.2 Mortality can be viewed as a stochastic process, as can the direction of mortality trends, so it makes sense to use a stochastic projection model to calibrate a stress trend for longevity risk. This has the benefit of being able to assign a probability to the stress; Figure 1 illustrates the approach. The central projection comes from the maximum-likelihood estimate, while the stress trend is simply the relevant confidence envelope for the central projection. The confidence envelope is derived by multiplying the projection standard error by Φ−1(p), where Φ is the cumulative distribution function of a Normal (0, 1) random variable and p is the relevant probability level (Φ−1(0.005) = −2.58 in this case). Note that the “projection standard error” here encompasses parameter risk, not process risk, and the nature of the projection standard error is discussed in more detail in Section 8. The stressed trend depicted in Figure 1 is smooth because we do not allow for annual volatility in mortality rates, a subject which is discussed in detail in Section 8.
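In the notation used later in Section 8, where $\sigma_{x,y}$ denotes the standard error of the projected $\log\hat{\mu}_{x,y}$, the stressed trend presumably takes the form:

$$ \log\mu^{\text{stress}}_{x,y} \;=\; \log\hat{\mu}_{x,y} \;+\; \Phi^{-1}(p)\,\sigma_{x,y} $$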
4.3 The stressed-trend approach to longevity risk involves calculating the capital requirement as follows:
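A sketch of the calculation, assuming the capital requirement is expressed as the proportionate increase in the annuity factor under the stressed trend:

$$ \text{Capital requirement} \;=\; \frac{\bar{a}\,(Z = -2.58)}{\bar{a}\,(Z = 0)} \;-\; 1 $$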
where the numerator with Z = −2.58 denotes the annuity factor calculated with the stressed trend as per Figure 1, and the denominator with Z = 0 denotes the annuity factor calculated with the central projection. The stressed-trend approach behind Figure 1 can be used for a wide variety of regression-type models — Table 3 shows some annuity factors calculated using central estimates and 99.5% stress reserves, together with the resulting capital requirements. Table 3 also illustrates model risk, namely the tendency for different models to produce different central and stressed estimates, and hence different capital requirements.
4.4 The stressed-trend capital requirements in Table 3 are dependent on the model, the outset age and the discount rate. Figure 2 shows how the capital requirements vary by age for four different models. Of the models shown, it is interesting to note that the Lee-Carter model yields the second-largest capital requirement up to age 70, but after age 88 it demands the lowest capital requirement. Although the Age-Period-Cohort model generates consistently low capital requirements, Currie (2012) raises concerns about the assumptions required in projecting mortality using the APC model.
4.5 There is also the question of how best to measure the impact of longevity risk. Plat (2011) carries out a whole-portfolio valuation to minimise the risks associated with representing a portfolio with model points, which makes the published results portfolio-specific. In contrast, we have chosen a single specimen age for most calculations in this paper on the grounds of simplicity, although in practice for a given portfolio it would be better to follow the approach of Plat (2011). As to the choice of specimen age, Börger (2010) chose age 65 to illustrate the impact of different interest rates, and this age would generally be appropriate for newly written annuity business. Here we have chosen age 70, as it is more likely to be representative of mature portfolios of pensions in payment (i.e. excluding deferred or active members in a pension scheme).
4.6 Another important feature is the sensitivity of the capital requirement to the rate of discount. Börger (2010) notes that capital requirements increase as the (net) discount rate decreases. Figure 3 shows the stressed-trend capital requirements under the Lee-Carter model, which show that lower discount rates lead to higher capital requirements. Interest rates are particularly low at the time of writing, so the presence of significant volumes of escalating annuities could mean an effective net discount rate close to zero for some portfolios. Indeed, interest rates are not only low but the yield curve has a very particular shape which has some consequences of its own (see Appendix A6).
4.7 One consequence of the stressed-trend approach is that, for a given p-value, it will tend to produce higher capital requirements than a one-year approach. Some practitioners try to allow for this by picking a p-value for the stressed trend which is less than the 99.5th percentile required for the one-year approach. However, this can be arbitrary, and even less subjective approaches can be inconsistent. For example, one simple approach might be to select a p-value, p, such that $0.995^n = 1 - p$. The parameter n could be chosen relative to the portfolio under consideration, say by setting it to the discounted mean term of the annuity payments. However, the drawback of this approach is that portfolios exposed to greater levels of longevity risk will receive less onerous values of p. For example, assume n = 10 for a portfolio of level annuities, thus leading to p = 0.0489. If we then assume that the annuities are in fact escalating, the value of n might increase to 12 (say), leading to p = 0.0584. This is counter-intuitive — the portfolio of escalating annuities is arguably more exposed to longevity risk, so why should it be subjected to a weaker stress scenario? Deriving a one-year longevity stress from a multi-year calculation is tricky, and so other methodologies are required.
5. The shock approach to longevity risk
5.1 A simplified standard-formula approach to longevity risk is to assume an immediate fall in current and future projected mortality rates, i.e.
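A sketch of the shock, assuming it is applied as a uniform proportionate reduction to the central projection:

$$ \mu^{\text{shock}}_{x,t} \;=\; (1 - f)\,\mu_{x,t} $$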
where μx,t is the central mortality projection (Z = 0) behind Table 3 and f is the shock reduction in mortality rates. Analogous to Equation 4, the shock capital requirements are:
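Again assuming the capital is expressed as a proportionate uplift in the annuity factor:

$$ \text{Capital requirement} \;=\; \frac{\bar{a}^{\,\text{shock}}}{\bar{a}\,(Z = 0)} \;-\; 1 $$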
where the numerator is the annuity factor calculated with the shocked mortality rates according to Equation 5 and the denominator is the annuity factor calculated using the central projection. The QIS5 rules for Solvency II specify f = 20% (European Commission, 2010), and the results of this are shown in Figure 4 for the Lee-Carter model from Table 3. It is worth noting that the shock approach is designed to cover more aspects of longevity risk than just the trend risk. The subject of the various components of longevity risk is addressed in Section 2.
5.2 The standard-formula approach conflicts with the stressed-trend approach: whereas in Figure 2 the capital requirements reduce after around age 70, the standard-formula capital requirements increase steadily with age to reach excessive levels. Furthermore, the capital requirements at younger ages look to be a little too low compared with the stressed-trend approach of Figure 2. Any given portfolio contains many pensions at different ages, of course, and the appropriateness (or otherwise) of the shock approach depends on the individual portfolio. Richards & Currie (2009) suggested that it might be reasonable for a new and growing annuity portfolio.
5.3 The stressed-trend approach produces a more intuitive set of results than the shock approach — capital requirements are greater at younger ages, where there is more scope for suffering from an adverse long-term trend. Furthermore, comparing Figures 4 and 2 suggests that the stressed-trend approach produces lower capital requirements than the shock approach. Börger (2010) describes “the structural shortcomings of the shock (approach)”, specifically the too-low capital requirements at younger ages and the too-high requirements at older ages, concluding that “an age-dependent stress with smaller reductions for old ages might be more appropriate”. In theory this might not matter, as the shock approach is intended to work at a portfolio level and excess capital at higher ages would be offset by lower capital requirements at younger ages, although whether this worked in practice would depend critically on the profile of liabilities by age. Also, the shock approach, as structured under the Solvency II standard formula, is intended to cover more than just longevity trend risk.
6. A Value-at-Risk framework
6.1 The stressed-trend approach of Section 4 deals with changes in mortality rates over many years. This does not answer the one-year value-at-risk question, so something different is needed for the likes of Solvency II. This Section describes a one-year framework for longevity risk based on the sensitivity of the central projection to new data. This approach differs from the models of Börger (2010) and Plat (2011), which seek to model the trend and its tail distribution directly. Börger (2010) and Plat (2011) also present specific models, whereas the framework described here is general and can accommodate a wide range of stochastic projection models. In contrast to Plat (2011), who modelled longevity risk and insurance risk, the framework here is intended to focus solely on longevity trend risk in pensions and annuities in payment.
6.2 At a high level we use a stochastic model to simulate the mortality experience of an extra year, and then feed this into an updated model to see how the central projection is affected. This is repeated many times to generate a probability distribution of how the central projection might change over a one-year time horizon. In more detail the framework is as follows:
6.3 First, select a data set covering ages xL to xH and running from years yL to yH. This includes the deaths at each age in each year, dx,y, and the corresponding population exposures. The population exposures can be either the initial exposed-to-risk, $E_{x,y}$, or the mid-year central exposed-to-risk, $E^c_{x,y}$. For this process we need the exposures for the start of year yH + 1, so if the basic exposures are central we will approximate the initial exposures using $E_{x,\,y_H+1} \approx E^c_{x-1,\,y_H} - d_{x-1,\,y_H}/2$.
6.4 Next, select a statistical model and fit it to the data set in ¶6.3. This gives fitted values for logμx,y, where x is the age in years and y is the calendar year. We can use the projections from this model to calculate various life expectancies and annuity factors at specimen ages if desired.
6.5 Use the statistical model in ¶6.4 to generate sample paths for $\log\mu_{x,\,y_H+1}$, i.e. for the year immediately following the last year for which we have data. These sample paths can include trend uncertainty or volatility or both (see Section 8). In practice the dominant source of uncertainty over a one-year horizon is usually volatility, so this should always be included. We can estimate $q_{x,\,y_H+1}$, the binomial probability of death in year yH + 1, by using the approximation $q \approx 1 - e^{-\mu}$.
6.6 We simulate the number of deaths in year yH + 1 at each age as a binomial random variable. The population counts are the $E_{x,\,y_H+1}$ from ¶6.3 and the binomial probabilities are those simulated in ¶6.5. This gives us simulated death counts at each age apart from xL, and we can calculate corresponding mid-year exposures as $E^c_{x,\,y_H+1} \approx E_{x,\,y_H+1} - d_{x,\,y_H+1}/2$.
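As an illustration of ¶6.5–¶6.6, a minimal R sketch of the binomial simulation of one extra year's deaths; the sample path for log mortality and the exposures below are illustrative stand-ins, not values from the paper:

# Illustrative sketch of steps 6.5-6.6: simulate one extra year's deaths.
set.seed(1)
ages   <- 51:104                                # ages x_L + 1 to x_H
log.mu <- -3.9 + 0.094 * (ages - 70)            # stand-in sample path for log mu in year y_H + 1
q.hat  <- 1 - exp(-exp(log.mu))                 # q approximately 1 - exp(-mu)
E.init <- rep(10000, length(ages))              # stand-in initial exposures E_{x, y_H + 1}
d.sim  <- rbinom(length(ages), size = E.init, prob = q.hat)   # simulated deaths
E.cent <- E.init - d.sim / 2                    # approximate central exposures E^c_{x, y_H + 1}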
6.7 We then temporarily append our simulated data from ¶6.6 to the real data in ¶6.3, creating a single simulation of the data we might have in one year's time. The missing data for age xL in year yH + 1 is treated by providing dummy values and assigning a weight of zero. We then refit the statistical model to this combined data set, reperform the projections and recalculate the life expectancies and annuity values at the specimen ages using the updated central projection.
6.8 Repeat steps ¶6.5 to ¶6.7 n times. It is implicit in this methodology that there is no migration, or that if there is migration its net effect is zero, i.e. that immigrants have similar numbers and mortality characteristics to emigrants. The choice of n is subject to a number of practical considerations, but for estimating the 99.5th percentile in Solvency II-style work a minimum value of n = 1,000 is required.
6.9 Figure 5 shows the resulting updated central projections from a handful of instances of performing steps ¶6.3–¶6.8. Note that we do not require nested simulations, as the central projection is evaluated without needing to perform any simulations. The nature of projections and their standard errors is discussed in Section 8.
6.10 After following this procedure we have a set, S, of n realised values of how life expectancies or annuity values can change based on the addition of a single year's data:
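Writing $a_i$ for the annuity factor (or life expectancy) recalculated in the i-th simulation, a sketch of the set S is:

$$ S \;=\; \left\{\, a_i \; : \; i = 1, \ldots, n \,\right\} $$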
6.11 S can then be used to set a capital requirement to cover potential changes in expectation of longevity trend risk over one year. For example, a one-year value-at-risk estimate of trend-risk capital would be:
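One plausible form, assuming the 99.5th percentile of S is compared with its mean:

$$ \text{VaR capital} \;=\; \frac{q_{99.5\%}(S)}{\mathrm{mean}(S)} \;-\; 1 $$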
6.12 Appendix 5 contains details of how percentiles are estimated from sets of data in this paper. Before we come to the results of this approach in Section 9, we must first consider which models are appropriate (Section 7) and how to generate sample paths (Section 8).
7. Model choices
7.1 Suitable models for the framework in Section 6 are those which (i) are estimated from data only, i.e. regression-type models where no subjective intervention is required post-fit, and (ii) are capable of generating sample-path projections. Most statistical projection models are therefore potentially suitable, including the Lee-Carter family (Appendix 1), the Cairns-Blake-Dowd model (Appendix 2), the Age-Period-Cohort model (Appendix 3) and the 2D P-spline models (Appendix 4). This is by no means an exhaustive list, and many other models could be used. Note that a number of models will be sensitive to the choice of time period (i.e. yL to yH) and some models will be sensitive to the choice of age range (i.e. xL to xH).
7.2 Unsuitable models are those which either (i) require parameters which are subjectively set, or which are set without reference to the basic data, or (ii) are deterministic scenarios. For example, the CMI's 2009–2011 models cannot easily be used here because they are deterministic targeting models which do not generate sample paths — see CMI (2009, 2010). Models which project mortality disaggregated by cause of death could potentially be used, provided the problems surrounding the projection of correlated time series were dealt with — see Richards (2010) for details of other issues with cause-of-death projections. Models which contain artificial limits on the total possible reduction in mortality would not be suitable, however, as the purpose of this exercise is to estimate tail risk. When modelling tail risk, it would be self-defeating to use a model which starts by limiting the tail in question. In addition to this, Oeppen & Vaupel (2004) show that models claiming to know maximum life expectancy (and thus limiting the maximum possible improvements) have a poor record.
8. Sample paths
8.1 The framework in Section 6 requires the generation of sample-path projections for the following year. There are two components to the uncertainty over projected mortality rates: (i) trend risk, and (ii) volatility. Trend risk is the uncertainty over the general direction taken by mortality rates, and is what is illustrated in Figure 1. Trend uncertainty is determined by the uncertainty over the parameters in the projection model, and is therefore a form of parameter risk. Volatility is an additional risk on top of the particular direction taken — it arises due to temporary fluctuations in mortality around the trend, such as those caused by a harsh (or mild) winter, or an outbreak of influenza. Volatility is fundamentally a short-term phenomenon and would therefore seem to be of less concern to an annuity writer than the trend risk. However, volatility plays an important role in the one-year simulations required for the framework described in Section 6.
8.2 The distinction between trend risk and volatility can be illustrated by examining a drift model, which says that the mortality index κ in year y + 1 is related to the index in the previous year by the simple relationship:
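The drift relationship is presumably of the standard random-walk-with-drift form:

$$ \kappa_{y+1} \;=\; \kappa_y \;+\; \text{drift} \;+\; \epsilon_{y+1} $$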
where κy is (say) the Lee-Carter mortality index in year y and $\{\epsilon_y\}$ is a set of independent, identically distributed noise terms with zero mean and variance $\sigma^2_\epsilon$. In fitting a drift model there are therefore three parameters to be estimated: (i) the drift constant, (ii) the standard error of the drift constant, $\sigma_{\text{drift}}$, and (iii) the standard deviation of the noise process, $\sigma_\epsilon$. For an annuity writer, trend risk lies in the uncertainty over the drift constant, i.e. in the parameter risk expressed by $\sigma_{\text{drift}}$. For an assurance-type liability the situation may be reversed and the dominant risk might be the noise variance, $\sigma^2_\epsilon$, i.e. the volatility.
8.3 For a detailed risk investigation any simulation should be capable of including or excluding each type of risk independently, i.e. (i) trend risk only, (ii) volatility only, and (iii) trend risk and volatility combined. This enables the actuary to not only reproduce a realistic-looking sample-path projection when required, but also to test the relative contribution of each of the two risks to the costs for a given liability profile.
8.4 The drift model in Equation 9 is a simple and restricted subset of a full ARIMA model for κy. The situation for simulations under ARIMA models is analogous: there is a noise process as before, but now κy+1 is related to one or more previous values as an autoregressive moving average. There are therefore more parameters to be estimated than just the drift constant, and each new parameter will also have an estimated standard error corresponding to $\sigma_{\text{drift}}$. As before, the variance of the noise process, $\sigma^2_\epsilon$, will be labelled as the volatility, while the trend risk lies in the standard error of $\log\hat{\mu}_{x,y}$, which arises from the uncertainty surrounding the various ARIMA parameter estimates.
8.5 For models projecting via a penalty function the situation is different because the forecast is always smooth and there is no volatility. However, simulation of volatility can be achieved by randomly sampling columns from the matrix of unstandardised residuals and adding them column-wise to the smooth projection. This approach is very general and requires no distributional assumption for the volatility — and therefore no estimate of $\sigma_\epsilon$ — as the empirical set of past volatility is used for sampling. This has the advantage of simulating mortality shocks in a given year while preserving the age structure. For any smooth projection model, therefore, a simulated sample path, $\mu^{\text{simulated}}_{x,y}$, can be obtained as follows:
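A sketch of the likely form of Equation 10, on the log scale used throughout:

$$ \log\mu^{\text{simulated}}_{x,y} \;=\; \log\hat{\mu}_{x,y} \;+\; Z\,\sigma_{x,y} \;+\; I\,r_{x,y} $$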
where $\hat{\mu}_{x,y}$ is the smooth projected force of mortality and $\sigma_{x,y}$ the corresponding standard error of $\log\hat{\mu}_{x,y}$. Z is a value from a N(0, 1) distribution, while I is an indicator variable taking the value zero (if excluding volatility) or one (if including volatility). rx,y is derived from a randomly chosen column j from the unstandardised residuals thus:
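One plausible definition of the resampled term, assuming the unstandardised residual is the difference between observed and fitted log mortality in the sampled year j:

$$ r_{x,y} \;=\; \log\!\left(\frac{d_{x,j}}{E^c_{x,j}}\right) \;-\; \log\hat{\mu}_{x,j} $$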
where j is a year in the data region, i.e. $j \in \{1961, 1962, \ldots, 2010\}$ for the data in this paper, and y ≥ 2011. With this method of simulating sample paths for smooth projection models, we again have the facility to consider the two components of risk in Equation 10 either separately or combined. To include trend risk only, we sample Z from a N(0, 1) distribution and set I = 0. To include volatility only, we set Z = 0 and I = 1. To include both trend risk and volatility, we sample Z from N(0, 1) and set I = 1. Note that this approach will work for any model where projections are smooth, including the fully smoothed 2D P-spline models of Richards et al (2006) and the variant of the Lee-Carter model in Richards & Currie (2009).
8.6 Although we have very different structures for our projections — drift models, ARIMA processes and penalty forecasts — we can nevertheless arrange a common framework for trend risk (represented by parameter uncertainty) and volatility (which is either simulated or resampled from the past departure from the model fit). These two separate components are individually switchable to allow investigation of their respective and combined impact. This is summarised in Table 4.
8.7 There is an irony behind Table 4 and the VaR approach to longevity risk. Longevity risk is essentially about an adverse long-term trend, the uncertainty about which is predominantly described by parameter risk. However, when simulating over a time horizon as short as a year, the volatility is likely to play the major role.
9. Results of the one-year VaR approach
9.1 The framework in Section 6 is applied to some of the models in Table 3. The results for some selected models are shown in Table 5. As a check, the average annuity value in column (a) in Table 5 should be very close to the central annuity value in column (a) in Table 3.
9.2 With the exception of the 2DAP, the capital requirements shown in Table 5 are lower than the equivalent values in Table 3. This is partly because the data behind the models in Table 3 end in 2010 and project rates forward to 2011 and beyond. In contrast, the data behind the models in Table 5 end in 2011 due to the simulated extra year's experience, so there is one year's less uncertainty in the projections. However, the primary driver in the reduced capital requirements in Table 5 is the difference in time horizon — a single year as opposed to the run-off in Table 3. As with Table 3, model risk is illustrated by the varying capital requirements.
9.3 One issue with the figures in Table 5 is that they are sample quantiles, i.e. they are based on the top few order statistics and are therefore themselves random variables with uncertainty surrounding their estimated value. One approach is to use a more sophisticated estimator of the quantile, such as that from Harrell & Davis (1982). Such estimators are more efficient and produce standard errors for the estimator without any distributional assumptions. Figure 6 illustrates the Harrell-Davis estimates of the value-at-risk capital for the smoothed Lee-Carter model, LC(S), together with a confidence envelope around those estimates.
9.4 Plat (2011) felt that there was a risk of underestimating the true risk if the so-called spot model is not responsive to a single extra year's experience. This is perhaps true of the time-series models shown in Table 5. However, this cuts both ways, as overestimation of capital requirements is also possible if the model is too responsive. This is possible for models using penalty projections, such as those of Richards et al (2006) and Richards & Currie (2009), and an illustration of what can go wrong is given in ¶10.3.
10. A test for model robustness
10.1 Besides answering the value-at-risk question, the framework outlined here can also be used to test a model for robustness to new data. When selecting a model for internal management use, it is important to know two things about its behaviour when new data become available: first, whether the model will still fit, and second, whether it will continue to produce sensible results. For example, Richards et al (2007) highlight occasions when two-dimensional P-spline models can come unstuck due to under-smoothing in the time direction. A particular model must be tested in advance to see if it is vulnerable to such behaviour, so that the behaviour can be corrected or a different model chosen.
10.2 Note that the question of robustness here — will the model fit and work? — is quite different from the question of model suitability, which might itself be addressed by back-testing or the kind of quantitative assessment carried out by Cairns et al (2009). Indeed, the one-year framework presented in this paper might usefully be added to the tests in Cairns et al (2009) when considering an internal model. To illustrate, consider the models listed in Table 5. In each case 1,000 simulations were carried out of the next year's potential mortality experience, and in each case the model was successfully refitted. The resulting capital requirements are in a similar enough range to conclude that any one of these models could be used as an internal model, safe in the knowledge that once the commitment has been made to use the model the user is unlikely to be surprised by the model's response to another year's data.
10.3 In contrast, consider an example where we remove the over-dispersion parameter from the 2DAP model, i.e. we force $\psi^2 = 1$ regardless of the over-dispersion apparent in the data (see Appendix 4 for details of the over-dispersion parameter). With 1,000 simulations of the framework in steps ¶6.3 to ¶6.8, there were five instances where the model without over-dispersion could not be fitted. Although this failure rate sounds small (0.5%), the implied capital requirement is also of the order of 30%. When the model fits are inspected, we see that a number of the simulations have led to under-smoothing in time and thus excessively volatile projections. The 2DAP model is therefore only usable as an internal model with the England & Wales data set if the over-dispersion parameter is included. Analogous situations are possible with many other models, and it is obviously important to find a model's weak points before committing to use it.
10.4 The procedure described in steps ¶6.3–¶6.8 is therefore not only useful for value-at-risk calculations, it is also a practical test of the robustness of a model. The procedure can be used to check that the model selected this year will continue to perform sensibly when new data arrive next year. Furthermore, there is no reason to force the generating stochastic model to be the same as the one being tested. The reality of model risk — as shown in Tables 3 and 5 — means that a thorough robustness test would involve testing a model's ability to handle the sample paths generated by a different model. Without this sort of test, there is a risk of investing in a given model only for it to surprise (or embarrass) when new data become available. It therefore makes sense to test a model thoroughly before committing to using it.
11. Conclusions
11.1 There are a number of components to longevity risk, of which trend risk is just one part. The longevity trend risk faced by insurers and defined-benefit pension schemes exists as a long-term accumulation of small changes, which could together add up to an adverse trend. Despite the long-term nature of longevity risk, there are reasons why insurers and others want to look at longevity through a one-year, value-at-risk prism. These reasons include the one-year horizon demanded by the ICA regime in the United Kingdom and the pending Solvency II regime for the European Union. This paper describes a framework for putting a long-term longevity trend risk into a one-year view for setting capital requirements. The results of using this framework tend to produce lower capital requirements than the stressed-trend approach to valuing longevity risk, but this is not uniformly the case.
11.2 Whatever the choice of method — stressed trend, mortality shock or value-at-risk — the actual capital requirements depend on the age and interest rate used in the calculations, and also the choice of model. However, the three approaches used in this paper suggest that, at a specimen male age of 70 and based on population mortality rates, the capital requirement in respect of longevity trend risk in level annuities should be at least $3\tfrac{1}{2}\%$ of the best-estimate reserve. For escalating annuities, or for indexed pensions in payment, the minimum capital requirement in respect of longevity trend risk will be higher.
Acknowledgements
The authors thank Stephen Makin, Matthias Börger, Rakesh Parekh, Richard Plat and David Wilkie for helpful comments. Any errors or omissions remain the sole responsibility of the authors. All models were fitted using the Projections Toolkit. Graphs were done in R (2011) and typesetting was done in pdfTeX. The R source code for the graphs and the output data in CSV format are freely available at http://www.longevitas.co.uk/var.html
Appendix 1: Lee-Carter model
A1.1 Lee & Carter (1992) proposed a single-factor model for the force of mortality defined as follows:
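In the notation defined below, the model is the familiar log-bilinear form:

$$ \log\mu_{x,y} \;=\; \alpha_x \;+\; \beta_x\,\kappa_y $$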
where μx,y is the force of mortality applying at age x in year y. αx is the age-related component of mortality, κy is a period effect for year y and βx is an age-related modulation of κy. The Lee-Carter model is fitted by estimating the parameters αx, βx and κy by the method of maximum likelihood discussed by Brouhns et al (2002). Note, however, that the model specified in Equation 12 cannot be fitted without further specification, as there are infinitely many parameterisations which yield the same fitted values. To see this, note that the following transformations yield the same fitted values of μx,y for any real value of c:
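A sketch of the invariant transformations, assuming the usual two families (with c ≠ 0 in the first):

$$ \{\alpha_x,\; \beta_x,\; \kappa_y\} \;\rightarrow\; \{\alpha_x,\; \beta_x/c,\; c\,\kappa_y\}, \qquad \{\alpha_x,\; \beta_x,\; \kappa_y\} \;\rightarrow\; \{\alpha_x - c\,\beta_x,\; \beta_x,\; \kappa_y + c\} $$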
A1.2 We therefore fix a convenient parameterisation by setting $\sum_j \kappa_j = 0$ and $\sum_j \kappa_j^2 = 1$. This has the attractive feature that αx is a measure of average log mortality for age x.
A1.3 A notable variant of the Lee-Carter model is that of Delwarde et al (2007), who used penalised splines to smooth the βx parameters. This is labelled the DDE model, and a useful feature is that it reduces the risk that mortality rates at adjacent ages will cross over in the projections. A further improvement can be obtained by smoothing both the βx and the αx parameters, which we will denote by LC(S). All three versions of the Lee-Carter model — LC, DDE and LC(S) — could project κ either as a drift model or as a full ARIMA time-series process. A further variant of the Lee-Carter model was proposed by Richards & Currie (2009), which applied spline smoothing not only to βx but also to κy. In this case, projections of κy are done by means of the penalty function, as in the 2D P-spline models in Appendix 4.
A1.4 The drift model is very simple and is described in Equation 9 for κ. In contrast, an ARIMA model is more complicated, but also more flexible and produces more realistic expanding “funnels of doubt” for the projected mortality rates. To understand an ARIMA model, we first define the lag operator, L, which operates on an element of a time series to produce the previous element. Thus, if we define a collection of time-indexed values $\{\kappa_y\}$, then $L\kappa_y = \kappa_{y-1}$. Powers of L mean the operator is repeatedly applied, i.e. $L^i\kappa_y = \kappa_{y-i}$. The lag operator is also known as the backshift operator.
A1.5 A time series, κy, is said to be integrated of order d if the differences of order d are stationary, i.e. if $(1-L)^d\kappa_y$ is stationary. A time series, κy, is said to be autoregressive of order p if it involves a linear combination of the p previous values, i.e. $\left(1 - \sum_{i=1}^{p} \mathrm{ar}_i L^i\right)\kappa_y = \epsilon_y$, where ari denotes an autoregressive parameter to be estimated. A time series, κy, is said to be a moving average of order q if the current value can be expressed as a linear combination of the past q error terms, i.e. $\kappa_y = \left(1 + \sum_{i=1}^{q} \mathrm{ma}_i L^i\right)\epsilon_y$, where mai denotes a moving-average parameter to be estimated and $\{\epsilon_y\}$ is a sequence of independent, identically distributed error terms with zero mean and common variance. A time series, κy, can be modelled combining these three elements as an autoregressive, integrated moving average (ARIMA) model as follows:
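Combining the three elements gives the standard ARIMA(p, d, q) form (a mean term can be added, as noted in ¶A1.8):

$$ \left(1 - \sum_{i=1}^{p} \mathrm{ar}_i L^i\right)(1-L)^d\,\kappa_y \;=\; \left(1 + \sum_{i=1}^{q} \mathrm{ma}_i L^i\right)\epsilon_y $$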
A1.6 A drift model is sometimes referred to as an ARIMA(0, 1, 0) model, emphasizing that the drift model is a narrowly restricted subset of available ARIMA models. Although an ARIMA(p, d, q) model is by definition more complicated than a drift model, it is also more flexible and produces more reasonable extrapolations of past trends. To illustrate this, Figure 7 shows the past mortality rates for males aged 70 in England & Wales, together with two Lee-Carter projections: a drift model and an ARIMA model.
A1.7 The restrictive and rigid nature of the drift-model projection is evident in Figure 7. The gradient of the trajectory of mortality rates shows an increasing rate of improvement, with the most recent twelve or so years of data showing the fastest rate of decline. This is broadly extrapolated by the ARIMA model. In contrast, the drift model projects with a shallower gradient which looks like a less credible extrapolation. Plat (2011) refers to the ARIMA(0, 1, q) model as a kind of “exponential smoothing”, meaning the projection of κy depends more heavily on recent values than distant values. This is useful against a backdrop of accelerating improvements, as it yields more sensible-looking extrapolations in Figure 7.
A1.8 ARIMA processes can be fitted with or without a mean; the ARIMA processes for κ in the Lee-Carter models in this paper are fitted with a mean.
Appendix 2: Cairns-Blake-Dowd (CBD) model
A2.1 Cairns et al (2006) introduced a two-factor model for the force of mortality defined as follows:
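A sketch of the model, assuming the two period factors act as intercept and slope in age (with $\bar{x}$ denoting the mean age in the data):

$$ \log\mu_{x,y} \;=\; \kappa_{0,y} \;+\; \kappa_{1,y}\,(x - \bar{x}) $$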
where μx,y is the force of mortality applying at age x in year y. Projections of future mortality are done by means of a bivariate random walk with drift for κ0 and κ1, where both the drift terms and their correlation are estimated from the data.
A2.2 The model in Equation 15 is essentially a Gompertz (1825) model fitted separately for each year y. Currie (2011) relaxed the Gompertz assumption to allow a general smooth function of age instead of a straight line on a logarithmic scale as follows:
where S() denotes a smooth function based on penalised splines (of which the Gompertzian straight line is a special case).
A2.3 In addition to projecting mortality rates in time, Richards & Currie (2011) showed how the models in Cairns et al (2006) and Currie (2011) can also be used to extrapolate mortality rates to higher (and lower) ages. The ability to extrapolate is a useful feature shared with the 2D P-spline models in Appendix 4.
Appendix 3: Age-Period-Cohort (APC) model
A3.1 The Age-Period-Cohort model for the force of mortality is defined as follows:
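In the notation described below, the model takes the standard additive form:

$$ \log\mu_{x,y} \;=\; \alpha_x \;+\; \kappa_y \;+\; \gamma_{y-x} $$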
where μx,y is the force of mortality applying at age x in year y for the birth cohort born in y−x. There are numerous issues requiring care with the APC model, including the risk of over-interpreting the parameter values and projecting parameters whose values are dictated by the choice of identifiability constraints. The three constraints used in the APC model in this paper are:
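A sketch of the likely constraints, assuming C1 centres the period terms while C2 and C3 impose weighted zero-mean and zero-trend conditions on the cohort terms (the exact weighting is an assumption here):

$$ \mathrm{C1:}\;\; \sum_y \kappa_y = 0, \qquad \mathrm{C2:}\;\; \sum_c w_c\,\gamma_c = 0, \qquad \mathrm{C3:}\;\; \sum_c w_c\, c\,\gamma_c = 0 $$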
A3.2 In C2 and C3 wc is the cohort weight, namely, the number of times cohort c occurs in the data set. If we have nx ages and ny years then we have nc = nx + ny − 1 cohorts and c runs from 1 (oldest) to nc (youngest). The first two constraints are used in Cairns et al (2009), but the third constraint is different (although Cairns et al (2009) do use C3 in their discussion of their models M6 and M7). We prefer constraint C3 to the constraint used in Cairns et al (2009) because we can check that C1, C2 and C3 do indeed specify a unique parameter solution. The non-linear nature of the third constraint in Cairns et al (2009) would appear to have two drawbacks: first, it is very difficult to check that a unique solution has in fact been specified, and second, the efficient numerical methods implemented in Currie (2012) are not available for non-linear constraints.
A3.3 Although the APC model is popular and seemingly intuitive, Currie (2012) raises concerns about its use. Some implementations of the APC model assume a zero correlation between the α, κ and γ terms. However, using the England & Wales population data, Currie (2012) found that these parameters were highly correlated, thus invalidating a core assumption of the model. Currie (2012) highlights similar worries about correlations amongst parameters in the Renshaw-Haberman model (Renshaw & Haberman, 2006). The limitations and challenges of the Age-Period-Cohort model are discussed in Clayton & Schifflers (1987).
A3.4 Note that in this paper κ in APC models is projected as a random walk with drift, while γ in APC models is projected as an ARIMA process without a mean.
Appendix 4: 2D P-spline models
A4.1 Currie et al (2004) proposed the following model:
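A sketch of the model, assuming a regression of log mortality on a two-dimensional B-spline basis in age and calendar year:

$$ \log\mu_{x,y} \;=\; \sum_{i}\sum_{j} \theta_{ij}\, B_i(x)\, B_j(y) $$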
where μx,y is the force of mortality applying at age x in year y, and B() represents a basis spline — see de Boor (2001). This model smoothed the θij with two of the penalty functions proposed by Eilers & Marx (1996) — one penalty in age and one in calendar time.
A4.2 Richards et al (2006) presented an alternative model where the smoothing took place by year of birth instead of year of observation:
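A sketch of the age-cohort version, assuming the second basis is indexed by year of birth, y − x:

$$ \log\mu_{x,y} \;=\; \sum_{i}\sum_{j} \theta_{ij}\, B_i(x)\, B_j(y - x) $$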
A4.3 As in Richards et al (2006), we use a five-year spacing between the knots by age and by time. There is, however, an important difference between the 2D P-spline models in Richards et al (2006) and the ones used in this paper. All the models fitted in that earlier paper assumed that the number of deaths follows a Poisson distribution, a key feature of which is that the variance is equal to the mean. In practice many mortality counts exhibit greater variance than the mean, a phenomenon called over-dispersion. For time-series models this does not pose a problem, but for models using penalty projections over-dispersion can lead to under-smoothing in the time direction, and thus unstable and volatile projections. To allow for this extra variance, and to restore stability to the penalty projections, we can include an over-dispersion parameter, as described by Djeundje & Currie (2011). This works as follows: if we let Y be a Poisson random variable with parameter μ, we have E(Y) = Var(Y) = μ. If we now suppose that the variance of Y is greater than the Poisson assumption by a factor of $\psi^2$, then Var(Y) = $\psi^2\mu$. $\psi^2$ is referred to as the over-dispersion parameter, and if $\psi^2 = 1$ we have the usual Poisson distribution. In practice we expect $\psi^2 \geq 1$, but in our implementation we only impose $\psi^2 > 0$, thus theoretically allowing the phenomenon of under-dispersion. The importance of the over-dispersion parameter for projections is illustrated in ¶10.3.
A4.4 Richards & Currie (2011) showed how the models in Richards et al (2006) can be used not only to project mortality forward in time, but also to extrapolate mortality rates to higher (and lower) ages. This useful property is shared with the CBD model in Appendix 2.
Appendix 5: Quantiles and Percentiles
A5.1 Quantiles are points taken at regular intervals from the cumulative distribution function of a random variable. They are generally described as q-quantiles, where q specifies the number of intervals which are separated by q − 1 points. For example, the 2-quantile is the median, i.e. the point where values of a distribution are equally likely to be above or below this point.
A5.2 A percentile is the name given to a 100-quantile. In Solvency II work we most commonly look for the 99.5th percentile, i.e. the point at which the probability that a random event exceeds this value is 0.5%. This is the boundary of the top 200-quantile. There are various ways of estimating a quantile or percentile, but the one most easily accessed by actuaries is the definition used by Microsoft Excel and the R (2011) function quantile() with the option type = 7.
A5.3 We will illustrate by using the following five lines of R code to (i) generate 1,000 pseudo-random numbers from a N(0,1) distribution, (ii) write these values to a temporary file called data.csv, and then (iii) calculate the 99.5th percentile:
set.seed(1)
temp = rnorm(1000)
write(temp, file = "C:/data.csv", ncol = 1)
sort(temp)[994:1000]
quantile(temp, 0.995, type = 7)
A5.4 After executing the above instructions, the R console should contain something like the following:
> set.seed(1)
> temp = rnorm(1000)
> write(temp, file = "C:/data.csv", ncol = 1)
> sort(temp)[994:1000]
[1] 2.401618 2.446531 2.497662 2.649167 2.675741 3.055742 3.810277
> quantile(temp, 0.995, type = 7)
99.5%
2.446787
where the greater-than symbol denotes a command executed by R and the other three lines are output. We can see the seven largest values, and we can also see that the R quantile() function has returned 2.446787. If we read the contents of the file data.csv into Excel we can use the equivalent PERCENTILE(A:A, 0.995) function. It returns the value 2.446786655, i.e. R and Excel agree to at least seven significant figures.
A5.5 Note that the answer produced by R and Excel is not one of the seven largest values in the data. This is because in both cases the software is interpolating between the fifth and sixth largest values. In general, we seek a percentile level $p \in (0, 1)$. If x[i] denotes the i th smallest value in a data set, then the value sought is x[(n − 1)p + 1]. In the example above, n = 1000 and p = 0.995, so (n − 1)p + 1 = 995.005. This latter value is not an integer, so we must interpolate between the 995th and 996th smallest values. The final answer is then:
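Using the sorted values shown in ¶A5.4, where the 995th and 996th smallest values are 2.446531 and 2.497662, the interpolation gives:

$$ 2.446531 \;+\; 0.005 \times (2.497662 - 2.446531) \;=\; 2.446787 $$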
which agrees with the Excel and R answers to at least seven significant figures. There are other methods of calculating quantiles and percentiles, however, and further information can be found in Hyndman & Fan (1996).
A5.6 An important point to note is that sample quantiles and percentiles are estimators, and are thus subject to uncertainty themselves. This is evidenced by the above example producing an estimate of the 99.5th percentile as 2.4468, whereas the true value for the N(0, 1) distribution in question is 2.5758 (Lindley & Scott, 1984). There are many ways of calculating the confidence intervals for a quantile, including bootstrapping, but for illustration here we use the methods of Harrell & Davis (1982) by running the following two R statements (in addition to the five above):
library(Hmisc)
hdquantile(temp, 0.995, se=TRUE, names=FALSE)
A5.7 After executing these two instructions immediately after the initial five, the R console should contain something like the following:
> hdquantile(temp, 0.995, se=TRUE, names=FALSE)
[1] 2.534310
attr(,"se")
[1] 0.1360113
which shows three things: (i) the Harrell-Davis estimate of the 99.5th percentile (2.5343) is noticeably closer to the true known value for the standard normal distribution (2.5758), courtesy of greater efficiency, (ii) the standard error of the estimate of the percentile is substantial with 1,000 values, and (iii) the true known value is well within the 95% confidence interval implied by this estimate and standard error.
Appendix 6: Yield curves
A6.1 A yield curve describes the pattern of redemption yields by outstanding term for a given class of bonds. At the time of writing, a complication is that the assumption of a constant discount rate is not an accurate one — Figure 8 shows the yield curve implied by principal strips of UK government gilts on 27th April 2012. The non-constant nature of the yield curve causes some complications for the shape of capital requirements, as shown in Figure 9. Above age 80 the capital requirement for the given yield curve is best approximated by a constant discount rate of 3% p.a., but below age 70 a discount rate of 4% p.a. would be a closer approximation. Indeed, the capital requirements below age 65 are in fact slightly lower than those for the 4% curve, despite the fact that the yield curve in Figure 8 peaks at 3.71%. The interaction of the survival curve and the shape of the yield curve is therefore not a trivial point. Makin (2011a) considers the finer points of interaction between longevity risk and interest rates.