
Triangle-free reserving: a non-traditional framework for estimating reserves and reserve uncertainty - Abstract of the London discussion

Published online by Cambridge University Press:  24 July 2013


Abstract

This abstract relates to the following paper:

Parodi, P. Triangle-free reserving: a non-traditional framework for estimating reserves and reserve uncertainty - Abstract of the London discussion. British Actuarial Journal, doi:10.1017/S1357321713000093

Type: Sessional meetings: papers and abstracts of discussions
Copyright © Institute and Faculty of Actuaries 2013

Mr S. Fisher, F.I.A. (Chair): I will ask Pietro Parodi to introduce his paper.

Dr P. Parodi, F.I.A. (introducing the paper): I should like to make you aware that this paper is not exactly the same as the paper presented to the General Insurance Research Organising Committee (GIRO). After GIRO, I was strongly advised to include a real-world case study and now a large part of the paper is devoted to that case study. I am very grateful to those who insisted that I should add that part because I believe that the paper has improved as a result of that inclusion.

In any case, the method outlined here was never meant for academic consumption but for the real world. This method is used regularly for such exercises as letters of credit, pricing, industrial disease reserving, etc. Normally, a simplified version of it is used rather than the full-blown version that you find in this paper, depending on the data available.

The big challenge of reserving

The problem to solve is to estimate the full statistical distribution of the outstanding liabilities across a portfolio of risks given a number of years of past claims experience.

This problem has traditionally been addressed by extending the scope of triangle development methods such as the chain ladder, which were initially developed to produce a point estimate of the projected liability. These methods were then enhanced with techniques that allowed people to assess the uncertainty around the point estimate and produce a full reserve distribution.
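As a point of reference, a minimal sketch of the basic chain ladder point estimate, with purely hypothetical figures, volume-weighted development factors only and no tail factor or judgemental adjustments:

```python
import numpy as np

# Hypothetical cumulative paid triangle: rows = accident years, columns = development years.
# np.nan marks cells not yet observed at the valuation date.
tri = np.array([
    [1000., 1800., 2100., 2200.],
    [1100., 1900., 2300., np.nan],
    [ 900., 1700., np.nan, np.nan],
    [1200., np.nan, np.nan, np.nan],
])

n = tri.shape[1]
# Volume-weighted development factors f_j = sum(col j+1) / sum(col j), over rows where both are observed.
factors = []
for j in range(n - 1):
    mask = ~np.isnan(tri[:, j]) & ~np.isnan(tri[:, j + 1])
    factors.append(tri[mask, j + 1].sum() / tri[mask, j].sum())

# Project each accident year to ultimate by applying the remaining factors to its latest observed value.
ultimates = []
for i in range(tri.shape[0]):
    j = np.where(~np.isnan(tri[i]))[0].max()   # latest observed development period
    ult = tri[i, j]
    for f in factors[j:]:
        ult *= f
    ultimates.append(ult)

print("development factors:", np.round(factors, 3))
print("projected ultimates:", np.round(ultimates, 0))
```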

My problem with these methods, and something which has been bothering many actuaries for many years, is that the method involves a great deal of information compression. Once your data is summarised into a triangle and that becomes the input to your analysis, then you lose the ability to analyse that data to the level of detail that you would have wanted.

Another problem is that, traditionally, pricing has been looked at with burning cost analysis (Fig. 1, top left), and reserving has been looked at with triangulation techniques (Fig. 1, top right). There was some consistency between these two ways of looking at risk, both using summarised data: for example, the output of claims triangle development techniques, i.e. the projected value of the total claims for each year, could be fed back into the burning cost analysis to provide an IBNR-adjusted estimate of the total losses for each year.

Figure 1 Two different valuation frameworks for the same risk

Burning cost analysis was then replaced in many contexts by the collective risk model, which allows us to produce a separate model for frequency and severity and to combine them, for example, with Monte Carlo simulation (Fig. 1, bottom left), all based on granular data. In the meantime, reserving has simply stuck to its guns and kept working on summarised data. We now have a gap in our diagram (Fig. 1, bottom right).
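As an illustration (not taken from the paper), a minimal Monte Carlo implementation of the collective risk model might look like the following sketch, with Poisson frequency, lognormal severity and all parameters assumed:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters (assumptions, not the paper's figures):
n_sims = 100_000
poisson_mean = 25.0           # expected annual claim count
sev_mu, sev_sigma = 9.0, 1.5  # lognormal severity parameters

# Collective risk model: S = X_1 + ... + X_N, with N ~ Poisson and the X_i i.i.d. and independent of N.
counts = rng.poisson(poisson_mean, size=n_sims)
agg = np.array([rng.lognormal(sev_mu, sev_sigma, size=k).sum() for k in counts])

print("mean aggregate loss:", round(agg.mean()))
print("95th percentile    :", round(np.percentile(agg, 95)))
```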

As a result of this, the modern way in which you look at risk prospectively for pricing (Fig. 1, bottom left) is not the same as the way you look at risk retrospectively for reserving (Fig. 1, top right) – however, it is the same risk so why should we change our toolkit depending on the purpose of the analysis?

Also, in capital modelling you often need to use the results of pricing and reserving and combine them in order to produce an overall model of the risks of an insurer. Again, it would make sense to have everything in the same framework.

It is one thing to complain that this situation is not satisfactory; it is another thing to produce a method that can solve these two problems.

The approach that I am proposing in this paper to solve these two problems, information compression and different valuation frameworks for the same risk, and to fill the gap in Fig. 1 (bottom right), is not very adventurous. I propose to use for reserving a pricing-like technique, compression-free and based on the collective risk model, which is, after all, one of the foundation blocks of actuarial science. I propose to look at Incurred But Not Reported (IBNR) and Incurred But Not Enough Reserved (IBNER) losses separately.

Estimating IBNR

As for IBNR, the idea is to estimate the IBNR claim count based on the reporting delay distribution rather than on a triangle. My favourite way of doing that is not to look at the IBNR claim count year by year but to look at it as a whole, for there is no fundamental reason for separating the IBNR contributions of different years: the separation derives from our habit of using triangles, since, after all, you can only make triangle projections if you have put your claims into rows and columns.

If, on the other hand, you use the reporting delay distribution you can project each year separately. But, you do not need to: if you need to split the outstanding liabilities between different years for accounting purposes, you can do it more efficiently later on. If you look at IBNR as a whole, your projection will probably be much more accurate.
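As a rough sketch, under strong simplifying assumptions (claims occurring uniformly over the exposure period, and an exponential delay distribution fitted naively, ignoring the right-truncation of observed delays that the paper treats properly), the grossing-up of the reported count might look like this:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative inputs (assumptions, not the paper's data):
T = 10.0                                  # years of exposure, valuation at time T
occ = rng.uniform(0, T, size=2000)        # occurrence times of all claims (unknown in practice)
delay = rng.exponential(2.0, size=2000)   # true reporting delays

reported = occ + delay <= T               # only these claims are visible at the valuation date
obs_delay = delay[reported]
n_rep = reported.sum()

# Fit a delay distribution to the observed delays.
# NOTE: observed delays are right-truncated, which biases a naive fit (and will understate IBNR);
# a proper analysis corrects for this, but the sketch ignores it for simplicity.
scale = obs_delay.mean()                  # exponential MLE ignoring truncation
F = stats.expon(scale=scale).cdf

# Probability that a claim occurring uniformly in [0, T] has been reported by T:
# p = (1/T) * integral_0^T F(T - t) dt, approximated on a grid.
t_grid = np.linspace(0, T, 10_001)
p_reported = F(T - t_grid).mean()

ibnr_count = n_rep * (1 - p_reported) / p_reported
print(f"reported: {n_rep}, estimated IBNR count: {ibnr_count:.0f}, true IBNR: {(~reported).sum()}")
```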

As for the severity, you will need to revalue the claims to some point in time. You would carry out some IBNER analysis as necessary, and then you would produce a kernel severity model. This kernel severity model will be your template severity which you can then adapt to the different years of occurrence by applying suitable inflation factors. You may treat the existence of such a scaling factor as an assumption, but there is quite a bit of empirical evidence that this is usually borne out in practice.
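A hedged sketch of what such a kernel severity model might look like, with an assumed constant inflation rate acting as the year-on-year scale factor (all figures illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Illustrative claim data (assumptions): occurrence year and nominal loss amount.
years = rng.integers(2005, 2013, size=500)
base_losses = rng.lognormal(9.0, 1.4, size=500)
inflation = 0.05
losses = base_losses * (1 + inflation) ** (years - 2005)   # nominal amounts

# Revalue every claim to a common point in time (here 2012) before fitting.
revalued = losses * (1 + inflation) ** (2012 - years)

# Kernel severity model: a single severity distribution fitted to the revalued losses.
shape, loc, scale = stats.lognorm.fit(revalued, floc=0)

def severity_for_year(year, size, rng):
    # Adapt the kernel to a given occurrence year through a scale factor only
    # (this is the key assumption mentioned above).
    factor = (1 + inflation) ** (year - 2012)
    return stats.lognorm.rvs(shape, loc=0, scale=scale * factor, size=size, random_state=rng)

print("sample 2010 severities:", np.round(severity_for_year(2010, 3, rng), 0))
```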

You then combine the frequency and severity models in the usual way to produce an aggregate loss distribution, either for the whole period, or year by year, for each year in which there is IBNR.

Estimating IBNER

As for IBNER, there is nothing original in the paper about it. In the case study, I have used the simple and well-known Murphy-McLennan method (Murphy & McLennan, 2006), which allows you to develop each open claim to a random simulated ultimate value; you do that for all claims, repeat it 10,000 times, and so produce the distribution of IBNER claims.
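A simplified sketch in the spirit of that simulation approach (not the Murphy-McLennan method itself): each open claim is developed to ultimate by drawing a single incurred-to-ultimate factor from an empirical pool for its development period, and the exercise is repeated many times (all figures assumed):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative inputs (assumptions): current incurred amount and development period of each open
# claim, plus pools of historical incurred-to-ultimate factors observed on closed claims.
open_claims = [(50_000, 2), (120_000, 1), (30_000, 3)]       # (incurred, development period)
hist_factors = {1: [0.9, 1.1, 1.3, 1.0, 1.4],
                2: [0.95, 1.05, 1.2, 1.0],
                3: [1.0, 1.02, 0.98]}

n_sims = 10_000
totals = np.zeros(n_sims)
for s in range(n_sims):
    total = 0.0
    for incurred, dev in open_claims:
        # Develop each open claim to a random simulated ultimate by drawing a factor
        # from the empirical distribution for its development period.
        factor = rng.choice(hist_factors[dev])
        total += incurred * factor
    totals[s] = total

ibner = totals - sum(c for c, _ in open_claims)   # movement beyond the current incurred position
print("mean IBNER:", round(ibner.mean()), " 95th percentile:", round(np.percentile(ibner, 95)))
```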

Combining IBNR and IBNER

You can then combine the IBNER claims with the IBNR distribution, and under the assumption that these are independent, you will get the overall reserve distribution.

Every time you say something is independent, of course, it is never completely true. The paper contains a discussion of the conditions under which I believe this is true, and when this assumption breaks down.
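Under the independence assumption, the combination step itself is straightforward; a minimal sketch with stand-in component distributions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-ins for simulated samples from the two component models
# (in practice these come from the IBNR and IBNER steps above).
ibnr_sims  = rng.gamma(shape=20, scale=50_000, size=100_000)
ibner_sims = rng.normal(200_000, 60_000, size=100_000)

# Under independence, the overall reserve distribution is obtained by adding independently
# drawn samples (equivalently, shuffling one set of simulations before adding).
reserve = ibnr_sims + rng.permutation(ibner_sims)

print("mean reserve:", round(reserve.mean()))
print("75th / 95th / 99.5th percentiles:", np.round(np.percentile(reserve, [75, 95, 99.5])))
```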

The paper also contains a comparison, based on artificial data, between one of these triangle-based methods (the chain ladder with a lognormal reserve distribution) and triangle-free reserving. I have compared not only the means but also the actual distance between the two distributions, using the Kolmogorov-Smirnov distance.
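A minimal sketch of how such a distance can be computed, using stand-in samples in place of the true and method-implied reserve distributions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Stand-ins: the "true" reserve distribution (known when data are artificial) and the
# distribution produced by some reserving method.
true_sample   = rng.gamma(shape=30, scale=40_000, size=50_000)
method_sample = rng.lognormal(mean=14.0, sigma=0.3, size=50_000)

# Kolmogorov-Smirnov distance: the maximum vertical gap between the two empirical CDFs.
res = stats.ks_2samp(true_sample, method_sample)
print(f"KS distance: {res.statistic:.3f}")
print(f"difference in means: {method_sample.mean() - true_sample.mean():.0f}")
```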

Triangle-free reserving reproduces the results quite well, since the artificial data have been produced based on a collective risk model. In a way it is “collective risk model in, collective risk model out”: the fact that you get back what you put in is merely evidence that your method does not mess things up too badly.

What I have found quite surprising is that the particular triangle-based method which I have tested fared so badly. There was no resemblance whatsoever between the distribution produced with a chain ladder with lognormal and the true distribution. One of the good things about using artificial data is, of course, that you know the true answer and you can compare it with the output of your model. There is therefore no particular reason to think that the lognormal-enhanced chain ladder method is helping us understand the reserve distribution.

Cost/benefit analysis

The main disadvantage of triangle-free reserving is that it is more complex than the chain ladder and similar triangle-based approaches: it is roughly as complex as a pricing exercise. Furthermore, you might need to use different tools, you might need more data, you might need to retrain people, etc.

So, whether you want to adopt it or not boils down to doing a cost-benefit analysis. The costs are quite clear and all revolve around the added complexity.

Quantifying the benefits for your company is more difficult. One benefit is that you have a more accurate view of the distribution of outstanding liabilities: that gives you an obvious quantitative advantage but only if that gets you to a lower capital requirement.

Then there are some soft benefits which revolve around a better and more articulate understanding of risk, because you have looked at it in finer detail, whether or not the capital you have to hold is smaller.

Then there is the consistency between pricing, reserving and capital modelling.

We started by looking at the big challenge of reserving, which was that of estimating the distribution of your outstanding liabilities. I would now like to propose a version of that challenge, and I hope that someone will want to take it on. It goes as follows.

Assuming that:

  (i) The collective risk model is valid (loss amounts are i.i.d., frequency and severity are independent). (This is the only essential assumption: the other two are here just to make our life a bit easier);

  (ii) The reporting delay distribution is constant over the years; and

  (iii) The severity distribution depends on the occurrence year and on the reporting year only through a scale factor,

which of the reserving methods and stochastic reserving methods that we know (chain ladder with lognormal; chain ladder with bootstrapping; generalised linear models; Kalman filter; etc.) produces an outstanding liabilities distribution that converges to the true distribution when you have enough claims data points while keeping the triangle the same size?

A simple way of testing for convergence would be running experiments with artificial data. We tend to be a bit squeamish about using artificial data, and, of course, real-world data are the ultimate test bed, but sometimes artificial data allow you to answer questions that you would not otherwise be able to address. If you have no controlled way to test your methods, how will you ever know if they will work? How many different real-world reserving exercises do you need to run to convince yourself that the way you calculated the 95th percentile of the reserve distribution is correct?
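For illustration only, a sketch of the kind of experiment one might run, with all parameters assumed. Here a compression-free estimator (frequency and severity fitted directly to the artificial individual claims) is tested, for which convergence is expected; the triangle-based method under investigation would be substituted in its place:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

LAM, MU, SIGMA = 20.0, 9.0, 1.5   # the "true" collective risk model (illustrative parameters)

def aggregate_sample(lam, mu, sigma, n_sims):
    """Simulate the aggregate loss distribution of a Poisson/lognormal collective risk model."""
    counts = rng.poisson(lam, size=n_sims)
    return np.array([rng.lognormal(mu, sigma, size=k).sum() for k in counts])

true_dist = aggregate_sample(LAM, MU, SIGMA, 20_000)

# Convergence check: grow the volume of artificial claims data, re-estimate the model,
# and see whether the KS distance to the true distribution shrinks towards zero.
for n_years in (5, 50, 500):
    counts = rng.poisson(LAM, size=n_years)                # artificial annual claim counts
    losses = rng.lognormal(MU, SIGMA, size=counts.sum())   # artificial individual losses
    lam_hat = counts.mean()
    mu_hat, sigma_hat = np.log(losses).mean(), np.log(losses).std()
    est_dist = aggregate_sample(lam_hat, mu_hat, sigma_hat, 20_000)
    ks = stats.ks_2samp(true_dist, est_dist).statistic
    print(f"{n_years:4d} years of data -> KS distance {ks:.3f}")
```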

Also note that although you may not be convinced by a proof of convergence based on artificial data, to prove lack of convergence, artificial data are certainly sufficient. If you cannot even achieve convergence on well-behaved artificial data, why bother trying your method out on real-world data?

If you find that a given method does not converge to the real answer, then this is telling you that the distribution that you are obtaining from this method does not fairly reflect the true outstanding liability distribution, and you should either abandon this method or reject the assumption that the collective risk model is valid. The latter would be a serious problem: the collective risk model is not perfect, but in many contexts it seems to be a good approximation of reality, and it is the foundation stone of much of actuarial loss modelling. Its validity can also be tested independently when there is enough data.

My favourite option would be to discard the triangle-based methodologies that fail the convergence test and adopt a methodology like triangle-free reserving or any other method which guarantees convergence.

This also leads me to my last point in advocacy of the triangle-free reserving methodology outlined in this paper: it provides, almost by construction, a method which converges to the true distribution of outstanding liabilities if the assumptions of the collective risk model hold true. If you have a frequency distribution as an input, and a severity distribution as another input, and these are independent, and if you have enough data, of course you will get back your frequency distribution as it is and your severity distribution as it is, and convergence to the true distribution is trivial.

I hope someone will warm up to this challenge and look in more depth at the convergence issues of the various reserving methods in the literature, perhaps as a research topic for an SA0 dissertation, or a piece of research sponsored by the profession.

Mr D. F. B. Newton, F.I.A. (opening the discussion): It has been suggested that many of the standard reserving methods in common use, such as the chain ladder, were overly simplistic, and that the development and widespread use of more sophisticated mathematical and statistical methods should be a priority for the profession. Those who have read the General Insurance Reserving Issues Taskforce (GRIT) report may recall that it rejected this suggestion, highlighting other aspects of reserving as being of higher priority. There was also a session at GIRO in 2009 in which the relative merits of deterministic and stochastic methods were debated, with a closing vote that resoundingly concluded that the continuing use of deterministic – indeed, triangle-based – reserving methods was appropriate. So why are we here, willingly talking about an explicitly triangle-free reserving method?

In my view, that of the Reserving Oversight Committee (ROC) and that of the General Insurance Practice Executive Committee, the results of the vote, the preceding discussion, and the earlier GRIT report did not reflect an outright rejection of the so-called sophisticated methods. Rather they reflected a reluctance on the part of reserving actuaries to embrace them. We identified four main reasons behind the wariness of reserving practitioners in adopting these new-fangled methods:

  1. They don't understand them. Many papers supporting sophisticated techniques are written as academic papers. I now struggle to read and absorb any paper that comprises a significant amount of formulae and mathematical proofs. Based on conversations with others, I believe that I am not unique in that regard.

  2. They do not know how to use them. Back-testing, peer reviewing, allowing for past actual or future expected changes in patterns, or for rogue claims that distort patterns – all this can be done simply and transparently within triangle-based methods. Key assumptions and drivers of the emerging results can be seen clearly. The more sophisticated methods can appear to be like black boxes, where data is fed in one end and results come out the other. One of GRIT's recommendations was to provide more transparency to actuarial reserving methods and help stakeholders have more insight into the key reserving assumptions and decisions. Few practitioners have developed an understanding of how to interpret results emerging from these sophisticated methods and, until they do, they will be reluctant to use them in anything other than a peripheral manner.

  3. They do not know how to explain them. This is linked to their understanding of how to use them. Moreover, data triangles have been common currency in the insurance industry since the year dot. Most stakeholders are familiar with them and have an intuitive understanding of reserving techniques based upon triangles. Therefore, reserving actuaries are comfortable explaining the results of chain ladder projections. They are far less comfortable explaining results derived using other methods.

  4. They do not believe that they work with real data. Claims data, even in well-managed insurers, is often badly behaved. Too many papers outlining sophisticated methods start with the premise that the data is perfect.

So we on ROC and the Practice Executive Committee are keen on the development and the wider use of methods more sophisticated than the chain ladder, both new methods and those already published. But we need the theory behind them to be more readily understood by the majority of reserving practitioners. We need those practitioners to be equipped with the skills necessary to use the methods, to interpret the results and to explain the main assumptions and drivers to stakeholders, and we need those methods to be robust in the real world environment of claims data of variable quality. We think that we have a long way to go before we have achieved that end.

Turning now to this paper, the idea of harmonising approaches to insurance pricing and reserving makes a lot of sense. I have long argued that risk pricing and reserving are opposite sides of the same coin. But I am also aware that insurers (some, at least) regard risk pricing as the more important. Risk pricing largely determines the future profitability of the insurer: reserving can be considered to act as a tap, determining the rate at which those profits or losses subsequently emerge. However, the linkages between pricing and reserving are clear and it should be in the interest of insurers to achieve greater accord between the methodologies underlying the two disciplines.

Similarly, making optimal use of the available data also appears to make sense. For pricing purposes it is important to understand the relative risk costs at a fine level. However, reserves are often at a more aggregated level. Where does “optimal” lie?

I would be interested in hearing the views of any senior insurance managers who would be stakeholders for both pricing and reserving.

The author has demonstrated in his paper that the triangle-free framework is a good predictor, in comparison to the chain-ladder approach. I would like to understand a bit more about this area. In what circumstances, or with what claim or policy types, is this approach particularly effective? Are there circumstances where it is less effective? Moreover, how has he allowed in his testing for actuarial intervention?

A few years ago, a working party led by Mr Fisher considered the relative effectiveness of various actuarial reserving techniques by asking volunteers to apply different methods to various data sets. One of the findings was that the differing skills levels of the various practitioners generated a greater diversity of results than did the methods themselves. This finding resonated with two of GRIT's conclusions, that, for improved reserving, actuaries needed to understand better the business that they were reserving, and that there should be greater consistency among practitioners in the application of standard actuarial reserving methods.

So I would like to understand whether the triangle-free approach is a better predictor purely on an automated, hands-off basis, or whether it would remain a better predictor when compared with one of the standard methods used by a skilled, experienced actuary.

We then come to the question of how this approach works in practice. My initial reaction on reading the paper was that, by specifying multiple component models that need to be verified not only in isolation but also in how they all combine to derive estimates, the author might be putting forward a framework that is simply too complex to have practical applicability.

I also noted that it had been demonstrated using artificially-generated data. But then again, I am aware that some firms, not least the author's own, use this approach on a day-to-day basis and apply it to real data. So I would be interested to hear how this is used in practice: is it used as the main projection methodology or to supplement other approaches? How is the approach varied for different business conditions and how are judgemental elements introduced? Has it added extra complexity, cost and delays to the previous process? What data issues have needed to be addressed to make this fully effective? Indeed, is this approach better or worse than the various deterministic methods at coping with flawed data?

Finally, I asked various contacts their views of the paper. I heard some very complimentary remarks but also some less positive comments from people who appeared underwhelmed by what they read. It seems that they had been expecting something new and different, but were disappointed that much of the material appeared similar to modelling frameworks with which they were already familiar from previous papers (the collective risk model was mentioned several times). Interestingly, these people all cited different sources for the original work.

I think that their view rather misses the point. There is really nothing new under the sun, or any truly unique new ideas. Most new developments are really combinations or extensions of existing ideas. Papers do not have to be completely innovative to be worthwhile – those that build upon existing actuarial thinking can be valuable, as can papers that establish practical uses for existing ideas, or that express them in easy-to-understand ways, or that disinter ideas that were previously considered impractical but whose time has now come with the improvement in computing power.

Regardless of the originality of the ideas contained therein, this is a well-constructed paper that introduces ideas in an accessible way. As such, it is at the very least, a useful addition to the actuarial canon. The extent to which it becomes a valuable paper for the profession depends on what happens from now on. How will these ideas be developed and promulgated within the profession? The author has made some suggestions for further work and I would be interested in what further suggestions are made during the course of this discussion.

Mr T. A. G. Marcuson, F.I.A.: I want to make a few observations about the way in which we think about reserving, particularly the techniques used in the modelling of uncertainty in reserves.

It is important that we remember that there is a reason why established techniques such as the chain-ladder and Bornhuetter-Ferguson (B-F) are so well-entrenched in actuarial reserving, for all of their many, widely-discussed statistical flaws. It is because they are a robust, common-sense approach to a problem (certainly the B-F, but with suitable care the chain-ladder as well). They apply to aggregate data, which means we can overcome some data deficiencies, and, most importantly, they are relatively easy to communicate to non-actuaries.

That said, we always need to be clear which problem they were designed to address: it is to form an estimate of the mean, or something in that region of the distribution, for reasonably well-behaved portfolios of claims. In practice, the law of large numbers means that quite a lot of techniques will work well for stable, homogeneous collections of claims.

Their longevity and appeal arise because, amongst other things, they seem to provide a reliable way of getting to quite good estimates even when there is not that much data, possibly where we bring in external benchmarks or other pricing data, and even where their underlying assumptions do not hold. But when we do so, we need to use them carefully and often need to make a number of pragmatic adjustments or professional judgements.

And it is at this point, when we move to thinking about the uncertainty in reserves that we get tripped up, logically speaking, as follows:

  • We find our trusted techniques can be used to good effect even when their assumptions do not really hold.

  • Next, we develop stochastic triangle-based reserving techniques that hold when the assumptions underpinning our chain-ladder and B-F techniques apply.

  • Then, we assume that the core assumptions must hold sufficiently well because the best estimates produced seem reasonable.

  • Finally, we conclude that the results from our stochastic triangle-based techniques must be a reasonable base for further analysis.

The reality is perhaps the opposite: that using the small sample of data points within a triangle may, in fact, be a very poor basis for further analysis of the uncertainty in reserves.

However, we have a number of flawed tools that are nevertheless widely used, and often requested, or even expected, to be used by many of us in review roles.

Not only this, but it is considerably less work, cost, effort and yes, risk of failure, for us to apply a Mack, or what is colloquially known as a Bootstrap, to a single aggregated data set.

We need to use this paper as a call to arms and develop tools that enable us to perform the task we have been asked to address in a better way. So, I would like to propose that in doing so we think about three aspects, or dimensions, of the problem.

First, we must look at claims reserves in a much more granular fashion: it is here where Dr Parodi's paper takes us forward significantly. That said, I would like to see more attention in a reserving framework to go beyond the retrospective loss data and take into account information regarding the potential for individual claims to vary, either through considering policy terms or loss scenarios for key claims or portfolio segments. Certainly this framework of looking at individual loss frequency and severity distributions makes this possible.

Next we should continue with our efforts to get under the skin of the risks or perils to which our reserves are exposed. This is continuing the journey embarked upon by the GRIT paper, but, if anything, this needs to be broken down between understanding the underlying perils, or drivers of loss such as inflation, litigation, demographic, and other trends, and understanding how each of these affect the classes of business within the reserves. That discussion is not the goal of the paper, but this understanding is critical if we are to enhance our approach to modelling dependency structures across portfolios.

Finally, we really need to have a better way of thinking about how loss reserve information emerges, and the associated risks are expected to manifest themselves, over time. This is probably the hardest and least comfortable area at the moment. We throw IBNR and IBNER factors at our B-F models, and Proportional Proxy techniques at our risk margins and SCRs. But we should not delude ourselves about their universal suitability for the tasks given to them, particularly where we move from mean best estimates to extreme values.

I feel we are still very much on a journey and not at the end. We are grappling with the uncertainty in reserves, the understanding and communication of which is both good business practice and an important part of our professional obligations, and we need to do more to enhance the tools at our disposal and to establish their widespread use.

Mr H. T. Medlam, F.I.A.: Something which struck me as you were talking was around the distribution from your triangle-free technique versus the chain ladder. One thing that was very clear was that the standard deviation coming from the chain ladder was significantly greater than the standard deviation coming from the triangle-free technique. Many firms would jump on that. They would love to have a similar standard deviation and to lower their reserve risk. I do not believe that there is a correct way that you can measure the reserve risk. All ways are proxies for what can come through. What is important is whether the risk you obtain out of it is proportionate to the actual underlying risk. If the author had increased the variation of the artificial data, how would the two have varied? Would the variation of the chain ladder technique have gone up proportionately as well as the variability of the triangle-free technique?

When it comes down to it in the end, it is when you do the aggregation assumptions and you aggregate your risks together that you reach your final answer. It is the final answer which is important, not the individual path underneath it.

Mr P. D. Smith, F.I.A.: I really appreciate the practical outlook in parts of this paper, particularly things like the end of paragraph 2.5 where the author says the main risk is to become too clever and to quibble about the exact behaviour of the tail when we do not have enough data to support anything other than a simple approach.

My second point is about the reductions in standard error. You rightly show very big reductions moving from the chain ladder to triangle-free, but I am not sure that that is the right comparison. The chain ladder has a big standard error mainly because it grosses up the actual claims experience of the latest year. For that reason, we tend to use the B-F for the latest years rather than the chain ladder. The B-F IBNR for the latest years uses an a priori loss ratio based on the average or total experience of prior years, which is exactly what the triangle-free method does.

I would, therefore, like to see the change in standard error broken down into two parts: the impact of moving from chain ladder to B-F; and that of moving from B-F to triangle-free.

Having said that, however, I am clear that a method which uses individual data, and does not throw away information by summarising it into an aggregate form, should give a better answer.
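As a rough illustration of the contrast Mr Smith draws between the chain ladder and the B-F for an immature year, with purely hypothetical figures:

```python
# Bornhuetter-Ferguson for a single (latest) accident year, illustrative figures only.
premium = 1_000_000
apriori_loss_ratio = 0.65        # a priori loss ratio from prior-year or pricing experience
cdf_to_ultimate = 4.0            # cumulative development factor to ultimate for the latest year
reported = 180_000

expected_ultimate = premium * apriori_loss_ratio
bf_ibnr = expected_ultimate * (1 - 1 / cdf_to_ultimate)   # unreported share of the a priori estimate
bf_ultimate = reported + bf_ibnr

# Contrast with the chain-ladder ultimate, which grosses up the (volatile) reported amount.
cl_ultimate = reported * cdf_to_ultimate
print(f"B-F ultimate: {bf_ultimate:,.0f}  vs chain ladder: {cl_ultimate:,.0f}")
```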

Mrs K. A. Morgan, F.I.A.: If the Actuarial Profession were moving into general insurance now, triangles are probably not where we would start. We would start with a method that used underlying data. Thinking back to Mr Newton's comments about the debate which I chaired at GIRO, where we talked about stochastic versus deterministic methods, one of the main points that came out was that stochastic methods are less popular because they are really hard to understand and visualise.

I do think that this triangle-free framework that Dr Parodi has outlined, having less compression of data, unpacks the methodology, but also makes it clearer what is going on. It is easy to understand and you can see the different parts of the reserves as they emerge.

In the work that I have done over the last few years looking at internal models, the way that I have thought about them is that I like to use the maths to try to feel how a model works. Once you move beyond the maths, you can feel that if this input goes in, that means the outputs do this. You can get a rough idea or picture of what is going on. The maths is the medium for getting that feeling for the way models work.

I believe that this methodology moves us into a stochastic reserving approach that can be felt by people and can also allow for judgement, because it does use detailed data and you can see the different assumptions working together.

The Chairman: I should like to put a question to Dr Parodi. It seems to my mind, and a couple of speakers have already alluded to this, that there is a tension when we talk about reserving methods between, on the one hand, the desire for a method that is sophisticated and able to take into account as much information as possible, and, on the other hand, the desire for a simpler framework that facilitates the actuary in applying his or her expert judgement to the situation.

With the more sophisticated methods, perhaps, it is often more difficult for the experienced or skilled actuary to be able to bring to bear his or her experience and judgement.

I wonder whether there is any resolution of that tension that you are able to offer through this methodology.

Dr Parodi: I must say I do not have much sympathy for complex methods myself. There is an obvious risk of over-modelling. Certainly, we should always attempt to keep complexity to a minimum. For example, if you model severity distributions, use only a very small number of reliable ones and model the tail according to a consolidated theory such as extreme value theory. Then use only two or three very basic models for frequency.

Triangle-free reserving allows you, and indeed forces you, to make judgement calls at key stages of the process, bringing in your experience.

Here are a couple of examples:

  (i) When you look at IBNR, you have to decide the frequency rate per unit of exposure over the last, say, 10 years. You have to make a judgement call and decide if there are trends you have to take into account.

  (ii) In modelling severity we might be called upon to make a judgement call as to what is the largest possible loss, and whether you use a market severity curve beyond a certain threshold.

Apart from a few judgement calls like those above, everything else is automatic. These judgement calls are inevitable, but it is also important to recognise exactly at which point in the reserving process you are going to make them, and to be completely transparent about them.

Mr J. C. T. Leigh, F.I.A.: I wonder whether you have had any pushback from some of your clients when you are trying to apply this method, along the lines of, “Well, we do not have to supply all this detailed data to anybody else, so why do you need it when apparently the rest of the market manages without?”

Secondly, in recent years we have also seen a distinct movement towards making sure that it is the directors of the company rather than the actuaries or the management who are setting reserves, and directors have taken their responsibilities in that area very seriously. It has been mentioned several times that it is really quite easy to convey how the straightforward methods work to people who are not mathematical specialists in any way. Have you found, while trying to persuade boards of directors of the merits of this type of approach, that they have accepted it, or have they said, “This is beyond us. Please can we have something simpler?”

Dr Parodi: The short answer to your first question is “Yes”. For this method to work, what you need is a snapshot of the full loss run, with individual claims information, for each of a number of years, rather than just a traditional triangle. It is not easy to obtain this information. Sometimes we only have it because we have had a client for a number of years and we can combine the different loss runs ourselves.

I must say that, most of the time, triangle-free reserving is currently applied under less than ideal circumstances. Sometimes we have to make a hybrid use of triangles and loss runs in order to get a better idea of IBNER. That is a problem we would not have if we were in a company. We would indeed be masters of our own destiny.

I am not sure I can comment usefully on the second point. In the circumstances in which we have used it, we did not have pushback of the kind suggested.

We mostly deal with corporate clients, and they do not necessarily have very strong views on the way you do reserving, so long as you can take them by the hand and explain to them where you are coming from when you show them your results.

I must say that sometimes you lose risk managers when you get into the technicalities of stochastic modelling. But when you discuss, e.g., the frequency of their losses with appropriate charts, it is quite easy for them to follow. It is something quite concrete, perhaps more concrete than a development triangle. I would say that most of the intermediate steps of this method – reporting delay distribution, frequency distribution, severity distribution – are quite concrete, and with some effort can be communicated in an effective way.

Mrs H. F. Cooper, F.I.A.: In your paper you recommend this approach as a means to establish the uncertainty in reserving. It strikes me that it is, in fact, also really important for best estimates, for example where companies are changing claims development under claims transformation programmes.

Another example would be where the reporting delay on claims is changing. For instance, if we take motor, we have telematics data which is going to give us much more information more quickly. So I wanted to explore whether it was the data limitations which lead you to suggest that the approach was not as appropriate for a best estimate, or whether there was something else in that thought process.

Dr Parodi: The reason why I have been emphasising the use of triangle-free reserving for calculating the distribution of outstanding liabilities, rather than for a best estimate, is simply that I thought it would be harder to convince people to leave something as simple and cheap as the chain ladder and embrace something like triangle-free reserving, which requires more in-depth analysis of the frequency and severity components of risk, just in order to get a better mean. This was my thought process.

However, you raise a second point, which is fascinating because it clearly identifies a situation where triangle-based methods such as the chain ladder will struggle even to obtain a simple best estimate: the case when something changes in the reporting pattern. With triangle-based methods it will take a few “diagonals” (calendar years/periods) before the change will become apparent in your development triangle, and even then statistical estimates of the development factors will be wobbly because they will be based on a small sample.

Triangle-free reserving is especially suited to dealing with a situation like this, since there are simple statistical methods to decide whether the reporting delay distribution is changing with time, and even to identify the optimal point at which a new reporting delay distribution should be used. And it takes far less time to converge to a decent approximation of the new reporting delay distribution.
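One simple illustration of such a test: a two-sample comparison of delays observed in an earlier and a later reporting period (the figures are assumed, and the differing truncation of the two samples is ignored in this sketch):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Illustrative delays (assumptions): reporting delays from an earlier and a later period;
# in the later period claims are assumed to be reported faster.
delays_early = rng.exponential(2.0, size=400)
delays_late  = rng.exponential(1.2, size=300)

# A simple two-sample test of whether the reporting delay distribution has changed.
res = stats.ks_2samp(delays_early, delays_late)
print(f"KS statistic {res.statistic:.3f}, p-value {res.pvalue:.4f}")
if res.pvalue < 0.05:
    print("evidence of a change -> fit a new reporting delay distribution from the change point")
```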

Mr J. B. Orr, F.F.A. (closing the discussion): I have been in discussions about more sophisticated methods and whether we should be looking to embrace those: whether we should be looking to challenge ourselves. I think that we have come up with the same conclusions a number of times before, where we have not quite engaged with or looked seriously at adopting new more detailed methods and models.

Mr Newton has rehearsed some of the reasons for that. I also observed a lack of eagerness to come forward with views on this paper. I wonder whether, if people are feeling honest, they would say they felt a bit intimidated by the fact that it did have the ‘Greek symbols’ to which Mr Newton referred.

If this had been a talk about general risk management and governance, I wonder whether we would have had the same level of reticence.

I have a bias in favour of what Dr Parodi has done. I use the collective risk model to demonstrate how claims development occurs, and how that translates to the development patterns that you see in the chain ladder and B-F methods, showing how a claims portfolio evolves, and also as a way of exposing the limitations of the collective risk model and these established reserving methods.

I am using collective risk modelling as a teaching aid, as I think that the students who benefit from it obtain a better insight into what is going on within a claims portfolio.

In 2007, I put together a much more modest paper on a simple multistate reserving method which took a much simpler approach. It focused on the claim numbers process, with suggested extensions to claims amounts. The paper also demonstrated how, with some very simple assumptions, including exponential delay elements and the Markov assumption, we could end up with closed-form results for the development of claims over time.

Having read the paper, I was not disappointed. I felt that it took a significant step forward. I thought it was credible, and the addition of the real world examples did, for me, show that this was not just a theoretical set of results, but something that could be used by actuaries in their day-to-day work.

I think that by seeking to embrace the elements of the real world, whether those are about the way in which claims are managed, IBNER and the rest, the collective risk model, as Dr Parodi has presented it, is moving in the right direction. He has gauged the right side of ‘too simple’ well, in my view.

It is by keeping at this quite difficult set of problems that we will progress in terms of what we do. I liked the contribution which talked about ‘claims transformation’ projects, because when I look at the state of the world at the moment, whether it is in terms of globalisation, new markets, social change, technological change, or even environmental change, from now on all our assumptions are going to be challenged. We need to have a framework that does not just say, “That portfolio behaved in a certain way in the past, and we can rely upon it behaving in the same way in the future.”

We are going to have a lot of drivers of change that we are going to have to take into account in our work. The one that I am particularly interested in is inflation. I like it that the framework, as presented, did look at inflation, and at the question of whether, where you have a calendar year effect, that effect can be properly captured in your reserving and risk modelling processes.

I should like to suggest that detailed work, using the collective risk model, can be used selectively to validate the general use of those more approximate methods that we know and love, and which we believe are robust and usable in the circumstances that we see.

We should be thinking seriously about this modelling framework or elements of it: we should be priming a new generation of general insurance actuaries to understand this type of framework and to use it in our work.

Dr Parodi: Thank you all for this feedback. I should perhaps clarify that I am not against deterministic methods! I am not against triangle-based methods either. What I am against is the illusion of using them for producing the whole distribution of outstanding liabilities.

I think that if you are just interested in the mean estimate, triangle-based methods do a decent job and one should always think about whether the added costs of triangle-free reserving are justified by the increased accuracy which is likely to be obtained. Mrs Cooper's example of a rapidly changing reporting pattern is one case where it might be easier to convince practitioners to use triangle-free reserving even for producing a point estimate.

In what circumstances does the method work well?

Paradoxically, although I started this discussion with an image compression analogy, and how we should not be compressing thousands of claims into a small triangle, from my experience the relative gains of triangle-free reserving are even more evident when you have less data. If you have only 20 claims you will not be able to put together a decent triangle to analyse, but you will still be able to discern a pattern in the reporting delay distribution if you use individual claims data.

There are some fields, like reinsurance, where this could be particularly useful, because of the longer reporting delays involved. I have always had my reinsurance experience in mind when developing triangle-free reserving.

I have also used triangle-free reserving for industrial disease liabilities where the delays are so extreme that triangle-based techniques are completely inadequate. You can, however, study the severity distribution of asbestos-related claims and produce a severity model. For frequency purposes you can then use the studies that the profession has done as to what the projected number of future asbestos-related claims will be (see, e.g., Lowe et al. (2004), “UK Asbestos: The definitive guide”). You can then combine the two models, using Monte Carlo simulation, in a simple application of the collective risk model.

These are just a few examples of circumstances where I would tend to use a method like this.

Is this method too complex to be used in the real world?

Since this methodology is, at the core, simply a pricing technique applied to reserving, there is no question that it can be used in practice, exactly as pricing methods that are based on the same methodology are used in practice for pricing purposes.

The role of judgment

Several questions have been raised on the role of judgment in this method, and on whether this method is better than triangle-based methods only when fully automated, or also when considering actuarial intervention.

It is difficult to say something objective about this last point – perhaps we need another survey! All I can say at this stage is that the method gives the experienced practitioner more ammunition, allowing her/him to disentangle the effects of IBNR frequency, IBNR severity, IBNER, and see where the trends come from.

In general, there is a role for judgment in the method and I think that one good feature of the method is that it allows you to see, very transparently, where you are allowed to incorporate judgment, e.g. in the estimate of the frequency per unit of exposure or in the choice of the threshold for the severity tail.

Judgment opens up the possibility of conscious, or unconscious, tampering, and to avoid this you need to make sure you do not over-complicate your model. Use a very small set of models to model the frequency and the severity so that you do not introduce anything spurious just for the sake of getting the result that you want.

Ease of communication

Mr Marcuson has mentioned that triangle-based methods are easier to communicate. I agree. I am a big fan of triangles as a visualisation tool – they give a snapshot view of a complex reserving problem. I would like to point out that you do not need to give that up if you use triangle-free reserving – triangles are an easily produced by-product of triangle-free reserving. You can produce them given your loss dataset, and you can show them to people to illustrate how claims are developing.
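As a small illustration of producing a triangle as a by-product of a granular loss run (hypothetical data; the column names are assumptions):

```python
import pandas as pd

# Illustrative loss run: one row per claim payment, with its accident and report year.
lossrun = pd.DataFrame({
    "accident_year": [2009, 2009, 2010, 2010, 2011, 2011, 2012],
    "report_year":   [2009, 2010, 2010, 2011, 2011, 2012, 2012],
    "paid":          [10_000, 4_000, 7_000, 3_000, 13_000, 2_000, 6_000],
})
lossrun["dev"] = lossrun["report_year"] - lossrun["accident_year"]

# Incremental triangle from the granular data, then cumulate across development periods.
incr = lossrun.pivot_table(index="accident_year", columns="dev", values="paid", aggfunc="sum")
cum = incr.fillna(0).cumsum(axis=1)

# Blank out cells that are not yet observable at the latest calendar year on record.
latest = int(lossrun["report_year"].max())
observable = (cum.index.values[:, None] + cum.columns.values[None, :]) <= latest
print(cum.where(observable))
```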

However, triangle-free reserving also offers you other ways of communicating the results of the analysis to laymen effectively, e.g. the graph showing the number of claims over the years and illustrating how much you need to add for IBNR.

Other “triangle-free” papers

When I presented this paper to GIRO, I was ignorant of the existence of other papers advocating a more granular approach, but after my talk I was pointed to other literature, which I have now added to the list of references. The first mention of a granular method, similar to triangle-free reserving, that I was able to track, was a paper by a Danish author, Norberg, in 1993. Norberg's line of investigation has been picked up again by others more recently, especially by Antonio & Plat (2012).

Lower uncertainty?

Mr Medlam and Mr Smith have noted that the standard error or the standard deviation for triangle-free is lower. I have not investigated that yet and I cannot guarantee that this is regularly the case. However, the argument seems sensible and I think it deserves further research. This, of course, has an impact because people would like to have a lower uncertainty on the reserves.

Conclusion

I hope that someone will consider this topic for research. One thing that has always been difficult in reserving, because of the stochastic nature of the underlying phenomenon and the delay between prediction and realisation, is getting rid of methods that are not fit for the purpose of calculating reserve uncertainty: a simple experiment based on the collective risk model may be a first step towards that “pruning” effort.

The Chairman: It just remains for me to express my own thanks, and I am sure the thanks of all of us, to Dr Parodi, Mr Newton, Mr Orr and everyone who has contributed to this evening's discussion.

References

Murphy, K. & McLennan, A. (2006). A Method For Projecting Individual Large Claims. Casualty Actuarial Society Forum, Fall 2006.