
Solvency II Technical Provisions – what actuaries will be doing differently

Introductory paper prepared by:

Published online by Cambridge University Press:  13 June 2013


Type: Sessional meetings: papers and abstracts of discussions

Copyright © Institute and Faculty of Actuaries 2013

Here are four areas that we would like to discuss, in the order in which we intend to address them. In places we have been deliberately controversial to stimulate discussion.

Premium provision

One approach to calculating the gross premium provision is to use a loss ratio multiplied by the unearned premium to derive an undiscounted ultimate; apply a claims payment pattern; add expense cashflows; deduct future premium cashflows and discount everything back to the valuation date. There are two key assumptions here: the loss ratio and the cashflow pattern. Other assumptions include future expense amounts (potentially expense ratio and accrual pattern), assessment of premium received to date relative to earned premium (including consideration of agent balances and bad debts, etc.) and premium amounts to be received in the future (and pattern).
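By way of illustration, here is a minimal sketch of that calculation in Python. Every figure, the payment pattern and the flat discount rate are invented for the example; in practice the discount rates would come from the EIOPA curves.

    # Illustrative premium provision calculation - all figures are hypothetical.
    unearned_premium = 1000.0
    loss_ratio = 0.65                             # selected for the obligated unearned business
    ultimate_claims = loss_ratio * unearned_premium

    # Assumed incremental payment pattern over future years (sums to 1).
    payment_pattern   = [0.40, 0.30, 0.20, 0.10]
    expense_cashflows = [30.0, 15.0,  5.0,  0.0]  # assumed future expense cashflows
    future_premiums   = [50.0,  0.0,  0.0,  0.0]  # assumed premium still to be received

    discount_rate = 0.02    # flat rate for illustration only; EIOPA publishes a full curve

    # Mid-year cashflow timing (t + 0.5) is itself an assumption.
    premium_provision = sum(
        (ultimate_claims * p + e - fp) / (1 + discount_rate) ** (t + 0.5)
        for t, (p, e, fp) in enumerate(zip(payment_pattern, expense_cashflows, future_premiums))
    )
    print(round(premium_provision, 1))

The structure, not the numbers, is the point: the two key assumptions highlighted above, the loss ratio and the cashflow pattern, drive almost everything.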

How should the loss ratio be derived? At face value, this may seem a simple question, but it is not enough simply to project forward last year's loss ratio. One should allow for changes in premium rates and in the risk mix – perhaps not necessary if you are working at a very granular level, but for most of us risk mix changes are a very real concern. Could one use plan loss ratios or pricing loss ratios? Perhaps, but remember that the premium provision loss ratio applies only to the obligated business; assuming that the company is continuing to write business, the plan loss ratio may be “off” because it allows for future business and future changes in risk mix. The plan loss ratio may also be prudent, or based on “stretch” targets, and may not be the entity's best estimate but influenced by external factors, such as Lloyd's SBF requirements. Pricing loss ratios are also likely to be “off” unless all of your obligated business can be said to come from the same cohort. There is also the very real question as to whether pricing should feed into reserving at all. We will, however, need to be able to justify our assumption relative to other assumptions, such as these, already used by the business.

For those entities that reserve on an accident year basis, cash flow patterns for the premium provision may not be readily available. Does this mean that we are all going to have to reserve paid claims on an underwriting year basis? Alternatively, is it sufficient to estimate future premium provision cash flows based on an average accident date from unearned balances and interpolation from an accident year (AY) pattern?
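One possible reading of that interpolation, sketched below with invented numbers: treat the unearned exposure as a cohort centred on its average accident date and read future payments off the accident year pattern shifted by that lag. A real pattern would need finer time steps than the annual points used here.

    import numpy as np

    # Assumed cumulative accident-year paid pattern at the end of development years 1..4.
    dev_years     = np.array([1, 2, 3, 4])
    ay_cumulative = np.array([0.40, 0.70, 0.90, 1.00])

    # Assumed average accident date of the unearned exposure: 0.5 years after valuation.
    average_accident_lag = 0.5

    # Cumulative paid at the end of each year after the valuation date is the AY
    # pattern evaluated at (time since valuation - average accident lag).
    future_times = np.arange(1, 6)
    cum_paid = np.interp(future_times - average_accident_lag, dev_years, ay_cumulative, left=0.0)
    incremental_pattern = np.diff(np.concatenate(([0.0], cum_paid)))
    print(incremental_pattern)   # premium provision payment pattern by future year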

There are many practical issues that crop up related to premium provision:

  1. Accounting data: Ensuring consistency between accounting data and the premium provision; overcoming granularity and allocation challenges.

  2. Credit control systems: Ensuring consistency between the premium provision and creditors and debtors; dealing with funds sitting in clearing accounts.

  3. Outstanding premiums: Splitting into earned and unearned.

  4. Seasonality influences: e.g. due to concentrations of policy start dates.

  5. Expenses: Who owns the assumptions, granularity issues and validation. Is the data sufficient for Solvency II?

  6. Reinsurance: Ensuring consistency between reinsurance creditors and the credit taken for reinsurance; lack of sufficiently granular reinsurance data.

Usually Finance departments are responsible for most of these. We have observed that, traditionally, Finance and Actuarial functions use data differently. Does the data exist in the form in which we, the actuaries, need it? To what extent are we, as actuaries, happy to rely on our Finance colleagues for assumptions and inputs into SII numbers for which we will be responsible? Are we and our Finance colleagues “on the same page” when it comes to SII?

Binary events

There appears to be a lot of uncertainty over the definition of a binary event. For the purpose of Solvency II technical provisions, we, on the working party, define a binary events loading as the balancing item between the true best estimate reserves and the best estimate as currently understood. As the “best estimate as currently understood” differs between companies, so will what is included in the binary events “bucket”. For example, some entities may include in their best estimate an allowance for future changes in the Ogden discount rate, so, provided that allowance is a probability weighted best estimate, they will not need to allow for Ogden changes in their binary events loading.

The term “binary events” has been subject to some criticism. However, it is useful in one aspect – it reminds us that we are required to consider both unusual positive as well as negative events.

Having agreed a definition, there are various approaches to deriving a binary events loading. Is a truncated distribution appropriate, or is this simply too subjective – especially as regards the selection of a cut-off point? Is a scenario based approach likely to be too pessimistic, given a tendency to focus on potential negative outcomes and ignore the possible positives? Is it possible or practical to link binary events to the treatment of emerging risks?
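To make the scenario route concrete, a probability-weighted loading might be assembled as below. The scenarios, probabilities and impacts are entirely invented, and the approach stands or falls on whether the list really covers both tails.

    # Hypothetical scenarios: (probability over the exposure period,
    # impact as a proportionate uplift on the best estimate).
    # Positive impacts are adverse; negative impacts are favourable and must be included too.
    scenarios = [
        (0.02,  0.15),   # e.g. adverse retrospective legislation
        (0.01,  0.30),   # e.g. prolonged freeze event
        (0.05, -0.05),   # e.g. favourable legal or legislative change
    ]
    binary_events_loading = sum(p * impact for p, impact in scenarios)
    print(f"loading = {binary_events_loading:.2%} of the best estimate")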

We will often need a different binary events loading for the premium provision than for the claims provision, because of the additional exposure to future events, such as catastrophes, which tend to be reported quickly. There is an argument that, unless we are very careful, we could end up double counting if we use a loss ratio to set our premium provision: it depends what is included in that loss ratio.

Finally, does the loading for binary events present an opportunity for insurers to apply a “back-door” loading to the technical provisions and reduce the probability of a negative run-off in the technical provisions? Why shouldn't we be doing this?

Validation

We are required to carry out back-testing and to validate our technical provisions. As a profession, we ought to have a view as to how much validation is enough, and who should carry it out. Is it acceptable for a reserving actuary to validate his or her own figures, or should there be an independent third party? Does an independent review of technical provisions by a third party (and the corresponding discussions that follow) satisfy the requirement? If so, what does the scope of the independent review need to be? Should validation be based on rules, set according to ranges based on methods like bootstrapping, or is a more subjective assessment of the methodology and assumptions used acceptable, or even preferable?

Here is an expanded list of questions that we have been considering:

Validation of Data

  1. Data ought to tie back to the audited figures. How close is close enough?

  2. Can we rely on a third party to validate data?

Validation of Methods and Assumptions

  1. Documentation of assumptions and the basis for assumptions: How different does this need to be from current actuarial reporting that is compliant with actuarial standards?

  2. Back-testing: Introduction of a framework to assess actual experience relative to expectations (based on distributions) on a granular basis (i.e. reserve segment). Assessment on an aggregate basis (i.e. company total), and at levels in between granular and aggregate (i.e. Solvency II segmentation, all Property, all Motor, etc.), requires correlation assumptions. How sophisticated does this need to be? Are we devoting enough resources to it? (A sketch of such a framework follows this list.)

  3. Sensitivity tests: How different should sensitivity testing be from current practice compliant with actuarial standards?

  4. Scenario tests: Do we need these for TPs?

  5. P&L attribution tests: Are these relevant for TPs?
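As a sketch of the kind of actual-versus-expected framework described in item 2, one might record where each reserve segment's actual movement falls within its assumed predictive distribution. Segment names, distributions and figures are all invented.

    import numpy as np

    rng = np.random.default_rng(1)
    # Assumed predictive distributions by reserve segment (e.g. bootstrap output),
    # paired with the actual movement observed over the period.
    segments = {
        "motor":    (rng.lognormal(mean=4.6, sigma=0.10, size=10_000), 105.0),
        "property": (rng.lognormal(mean=4.0, sigma=0.25, size=10_000),  48.0),
    }

    for name, (simulated, actual) in segments.items():
        percentile = (simulated < actual).mean()
        flag = "investigate" if percentile < 0.05 or percentile > 0.95 else "ok"
        print(f"{name}: actual at the {percentile:.0%} point of the distribution ({flag})")

Aggregating such results across segments is where the correlation assumptions mentioned above come in.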

How does one validate expert judgement or a binary events loading?

Governance considerations

  1. Responsible person, position, or department: Is validation within the same department, i.e. with the validator reporting to the person ultimately responsible for the TPs, sufficient?

  2. Requirements for documentation

  3. Requirements for peer review

  4. Frequency of validation

  5. Escalation

Reinsurance

Solvency II requires companies to estimate their gross provisions and reinsurance provisions separately. Currently, net:gross ratios are widely used to derive net from gross reserves for reporting purposes. But where a company has significant non-proportional reinsurance, is this approach acceptable for Solvency II – or, indeed, has net:gross had its day across the board?

Solvency II Technical Provisions are on a cash flow basis. We know cash flows can be more challenging than you think, but reinsurance cashflows add new dimensions of complexity. One should consider gross cashflows and then allow for settlement delays, possible disputes, performance-related commissions or adjustment premiums, and possible defaults – which could depend on the timing of payments and the size of the underlying losses, especially for large losses and binary events. There are then further considerations for items such as PPOs, where the claim may pay out for several years before the reinsurance recoveries kick in.

A common approach is to assume that the reinsurance patterns are a simple lag or stretch of the gross patterns and then to apply a simple bad debt percentage. Are these really adequate in all cases? Are they adequate in any cases?
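For concreteness, the simplification being questioned looks something like the following; the recovery pattern, lag and bad debt rate are all invented.

    # Expected reinsurance recoveries if they were paid in step with the gross claims.
    gross_timed_recoveries = [40.0, 30.0, 20.0, 10.0]

    lag_years     = 0.5    # assumed flat settlement delay on reinsurance
    bad_debt_rate = 0.02   # assumed flat counterparty default allowance
    discount_rate = 0.02   # flat rate for illustration only

    reinsurance_provision = sum(
        cf * (1 - bad_debt_rate) / (1 + discount_rate) ** (t + 0.5 + lag_years)
        for t, cf in enumerate(gross_timed_recoveries)
    )
    print(round(reinsurance_provision, 1))

Nothing in this sketch captures disputes, adjustment premiums or the dependence of default on the size and timing of the underlying losses, which is precisely why the questions above are worth asking.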

Given the non-proportionality, complexity and possible dependencies in the cashflows, does this mean that in reality stochastic methods are the only way to tackle the problem?

Consistency with the internal model for reinsurance (and reinsurance bad debt) is a particular challenge, but this seems an area where the SII requirements could easily drive real-world benefits for firms.

Discussion follows:

Mr J. B. Orr, F.F.A. (Chairman): I should like to introduce the first of our speakers. Mrs Dreksler is an associate director and non-life actuary in the actuarial insurance practice at PwC, where she has been working for the past 14 years. She has worked on a wide variety of projects in her time at PwC, including reserving work for many insurers, so she is well qualified for this topic.

Recently she has been managing the transition of Solvency II technical provisions into business-as-usual for a large insurer. She chairs the general insurance Solvency II technical provisions working party.

Mrs S. A. Dreksler, F.I.A. (introduction): I am going to start with a brief introduction to give you the background to the working party and then do an introduction to premium provision and, I hope, spark some discussion on that subject. Mr Piper is going to follow me with binary events and Mr Kirk with validation.

Figure 1 shows the different elements of technical provisions that are affected by the Solvency II requirements.

Figure 1 Elements of technical provisions affected by Solvency II.

I think it brings out just how many areas there are and how many areas we could discuss. I know many of you will be very familiar with the requirements and you will know that in some places they are very specific. But there are also other places where there is still a lot of judgement that we, as actuaries, will have to apply. We envisage that in about five years’ time there will probably be a lot of common thinking on how we deal with Solvency II technical provisions for general insurance. But we are not there yet; we are at the stage where I think a lot of individual companies are struggling with some of the aspects where they have a choice as to how they proceed.

What we would ideally like to do out of this discussion is to progress convergence towards an optimal common approach and, even if we do not have the right answer to every question, leave knowing the sort of issues that we should be considering in the environments in which we work.

The objectives when the working party was set up three years ago were focused on education, and I believe they still are. It is just that we have moved on from trying to raise awareness of the Solvency II requirements to now trying to share some of the thinking that we have done and the progress that we have made in this area.

What we are aiming to do is to help the actuarial community with some insight, suggest approaches, considerations, and some examples of the sort of things that one might consider when implementing Solvency II technical provisions.

We specifically do not intend to produce any guidance. We do not see that as our role; but we do want to provide something useful to try to get people up to speed a little bit more quickly so that one does not have to start from so many papers, which can be quite arduous to get through.

The working party has quite a few members and we obviously have some quite diverse views but we recognise that as a community, we are likely to have far more ideas, and possibly we may have missed some of the key concerns that are out there. We may not have picked up on some issues. If you have something that you think everybody would benefit from knowing, something that you have had some time to think about and you recognise may not be widely known or thought about, then please let us know.

One of the things I should say before we start is what I do not want to get into is the right and wrongs of Solvency II. I hope that we can all accept that we are where we are, whether we like Solvency II or not, and we need to get on and deal with it.

First of all, let us talk about premium provision. Premium provision for Solvency II replaces what we know in the accounting environment as the unearned premium reserve. It is a best estimate of future cash flows arising from unexpired risk to which we are obligated at the valuation date, discounted back to the present. So it involves projecting your claims indemnity costs into the future, less your premiums and your expenses relating to your unexpired risk, and then discounting those back to the present at a rate that will be provided by the European Insurance and Occupational Pensions Authority (EIOPA).

So it sounds all very simple in theory but it is not quite as simple when you actually start to think about it. One of the approaches that you can take, and it is not necessarily the only approach, to calculating your premium provision is to use a loss ratio approach. This involves picking a loss ratio and applying it to the unearned premium reserve relating to all your obligated business to give an ultimate claims estimate, to which you can then apply a payment pattern to project the claim payments in the future. Then of course you need to add your expenses, and so on.

This creates an issue in itself. How do you pick that loss ratio? Solvency II is very much about managing businesses on an enterprise-wide basis. Therefore it is very reasonable for your regulator and your board to ask how the loss ratio you picked fits with the other loss ratios that they are aware of: the loss ratios that you may use for pricing, or the loss ratios that you have in your plan, for example.

Just taking a plan loss ratio is not that straightforward. For example, plan loss ratios may be a stretch target. They may well have been set by someone other than yourself, the reserving actuary, and you may not actually agree that they are a best estimate. They may have been perhaps increased because you work for a Lloyd's syndicate, and as part of the Syndicate Business Forecast process you have been required perhaps to increase your loss ratios, and so on. So it is not a given that a plan loss ratio will be appropriate.

Of course, the other important thing is that a plan loss ratio will often apply to a longer period than you are actually looking at for your premium provision. Remember, your premium provision is only concerned with the unexpired risk relating to the business you are obligated to at the valuation date; it does not apply to business that you are going to write in the forthcoming year.

So that is the problem with loss ratios. Then we can move on to cash flows. This can be very difficult, especially if you work on an accident year reporting basis. You perhaps do not have underwriting year payment patterns and may never have calculated them historically, in which case you need to think about how the payment pattern is going to look for this unexpired risk, because it is not going to look the same as it does for your claims experience. Does this mean that we all should be moving to an underwriting year basis, at least for reserving the premium provisions, to give us a premium provision payment pattern? Or does it mean that we should be doing some sort of adjustment?

If we are going to do some sort of adjustment to the payment pattern based on an accident year basis, how do we do that?

Reinsurance is another interesting area. Solvency II requires us to project our reinsurance cash flows separately and to discount them back. If you have non-proportional reinsurance, you might quite reasonably expect your future payments to be quite lumpy, so how do you come up with the payment pattern for that? Is it adequate to assume something like a six-month delay in your reinsurance payments?

Expenses are quite an interesting area because we have to rely very much on our finance functions. This can create lots of interesting issues. Something which has occurred to me is that I do not necessarily speak the same language as all finance functions. I have sat down and I have had various discussions about line items and such things. It is not immediately obvious to me that I am looking at what I think I ought to be looking at. So there are difficulties around communication between finance and actuaries. Yet we need expense assumptions. This is not just for the premium provision; this is for the claims provision as well.

In terms of using the expense data, we are supposed to take the expenses down to the same level at which we are reserving. In theory, that could be down to peril level. Historically, it is quite likely that a lot of insurance companies have never tried to allocate their expenses down to that level. Is it absolutely necessary that we do that? Can we do something perhaps at a Solvency II line of business level instead to make that allocation easier? Can we actually do that allocation as actuaries? Do we understand enough about the expenses that our companies incur, or do we have to rely on our finance functions to do that for us? Are we taking responsibility for somebody else's work, and are we happy to do that? That is a very important question when you hit the problem that I mentioned earlier: that you do not necessarily think you speak the same language as your finance function.

Finally, investment management costs. Here is a deliberately controversial question. Should we be allowing for full investment management costs when we are only crediting risk-free rates of return?

I am going to wrap up with three common questions that are often asked about premium provisions.

First are contract boundaries, and how we deal with those.

Secondly, should we be doing something different on binary events for our premium provision from what we do for our claims provision? Clearly we need to allow for future exposure, but are there other differences?

Finally, reinsurance, which is quite complicated. It is quite difficult to get to the bottom of what we are supposed to do under the Solvency II requirements. There are several issues. For example, what premium should we be allowing for? Should we reduce the future premium to approximate the reinsurance benefit on the exposure that we are looking at, rather than take the full premium, which would also provide a benefit for policies that we have not written yet? Is that an acceptable thing to do?

Mr J. G. Spain, F.I.A.: How are rates of return in excess of risk-free actually earned and credited – as an experience surplus item?

If you are assuming 4% or whatever risk-free means, and you actually get 6% in a year, is the 2% just coming through as a surplus item?

Mrs Dreksler: Yes, it will come through as a surplus item.

Mr Spain: Fine. In that case, what I would like to see is that last bullet point changed round. How about: is it consistent only to credit risk-free rates of return when you have the potential of reward? I know that is Solvency II, but…

Mrs Dreksler: That is Solvency II!

Does anyone have a view as to the loss ratio that we should be using, and whether we should be trying to make it consistent with all these other loss ratios that we may be using in the business?

Mr M. J. Wheatley, F.I.A.: In terms of the loss ratio, I think it needs to be the loss ratio that is consistent with the internal model and with what you use to run your business. One of the challenges running through all the technical provisions is keeping them consistent with your internal model.

Mr S. Fisher, F.I.A.: I should like to come back to one of the things that Mrs Dreksler said about the loss ratio. You described it as the problem of coming up with the right loss ratio. I should prefer to characterise it differently: as an opportunity to demonstrate that we are able to come up with the right loss ratio. I think we already have the skills to be able to do this. It involves exactly the same sort of thought processes that we already go through when reserving on an underwriting year basis and coming up with an initial expected loss ratio for the most recent year of account.

These are things that we have the skills to do. If we are in a situation where our analysis contradicts what has been going into the business plan, or what is being used for a pricing loss ratio, again, I see that as an opportunity to highlight those differences and to reconcile those differences.

Some of the differences might be due to legitimate reasons; as you suggested, perhaps the business plan is being put together on a stretch basis. That is absolutely fine. We can explain that that is the case. We can even quantify how much of a stretch the business plan is. “Is the degree of stretch the same for each individual underwriter?” would be valuable information we can bring to this debate.

If there are differences with pricing loss ratios, where there is no reason for that difference to exist, this is also going to be valuable information that we can bring to the table by highlighting this. I think that this is an opportunity, something that falls well within our skill-set, and where we can actually help businesses apply a holistic view that actually fits in with the vision of what Solvency II was always supposed to be about (even if some of the implementation has deviated somewhat from that original vision).

Mr H. N. H. Peard, F.I.A.: I would like to go back to basics. I will come back at the end to the question of the investment charges. Coming back to basics around the definition of technical provisions and Solvency II, they do not actually make any distinction between non-life business, health business and life business. In all cases they say “look at the expected cash flows and discount those using a risk-free curve and then add on something which is to do effectively with cost of capital for unhedgeable risk”. Things like the loss ratio do not actually come into the definition of technical provisions.

What I think you are conceptually supposed to be doing in all the cases is coming up with projected cash flows and indeed coming up with the full distribution of cash flows at each future duration, and at each future duration discounting back using a risk-free curve, which is a market-consistent risk-free curve, and getting expected value for that. So I do not think that it is enough just to look at a loss ratio because I think you have to look at how quickly you develop to the ultimate point on that loss ratio. If you reach it very rapidly, you get a completely different value for the technical provision than if you reach it very slowly.

Talking about things like loss ratios, I would go so far as to talk about the two components of the reserve on the non-life side, the premium and the claims reserves, as being a way of trying to use existing technologies to fit within the definition which is being used or which is required under Solvency II. Things like triangulation methods do not actually work terribly well for getting what you are supposed really to be getting on a Solvency II basis. There needs to be development in the future in order to achieve that more accurately.

I think the most interesting question from what you have presented so far is that question around what is meant to happen to investment charges. I think conceptually we should be getting consistency between life insurance balance sheets and non-life balance sheets. On the life balance sheet, I think you would probably be valuing assets at market value. On the non-life balance sheet as well. I am actually not certain what people are doing, perhaps somebody here can comment on that, on the life side in terms of reserving for investment fees.

But it does seem to me that conceptually, if you are taking a risky asset position, and either internally or externally you are using investment managers who are charging you a fee to do that, your market-consistent expectation is that you will only earn a risk-free rate on the assets, but you will have incurred a charge. Therefore I think it is unavoidable that you need to reserve for that in the technical provisions.

The Chairman: On that point, on my interpretation of the wording, it seems to me that the additional cost that is being borne should be recognised. Is there a debate where people are saying, “Well, actually, there is a reason why we are paying that fee, say for expected out-performance, and we want to get some of it back” – or have they got to take it as a hit?

Mr I. J. Rogers, F.I.A.: I would say if you are expecting to incur the expense regardless of the investment return, you should reserve for it.

If the investment expense is performance related in some way I think there would be an interesting discussion as to how much of that you should bring in, and I suppose technically you should be considering some different scenarios, and whether that performance fee would be compensated by higher return, the scenarios where you get 6%.

The Chairman: I was struck by the comment about the loss ratios. We are being pragmatic when we talk about loss ratios because we are assuming that most firms are going to start with the existing approach to reserving.

Mrs Dreksler: Yes, I did not want to sound defensive, but I did say it was one approach. I think that a lot of companies are starting from what they do today, trying to find the easiest way to get to a Solvency II number.

Personally, I think people will change their approaches because they will develop over time. Given that we have got a deadline for Solvency II compliance, it is a natural thing to start with what you have got and try to adapt it rather than do “blue sky” thinking. I think that it would be quite a big leap for a lot of non-life insurers at the moment to get to do what you are suggesting.

The Chairman: We will not ask them to name the firm, but is anybody doing anything slightly more radical which is directed at cash flow modelling not being built upon a loss ratio?

Mr D. N. Roberts, F.I.A.: We are not universally taking the same approach throughout our group. We have eight or nine European jurisdictions in which we are attempting to solve these problems, and some are more sophisticated than others. Therefore it is “horses for courses” a bit.

In some places we are using our internal models to derive premium provisions, which I think goes some way back to the previous speaker's comment that it is not necessarily a loss ratio that drives claims; in fact, it is exposures that drive claims. The thought of building a stochastic model exposure by exposure to create a distribution does not fill me with a sense of (a) being able to do it in the requisite time and (b) being able to explain to management what is going on.

I think that last point is critical. Whichever way we go about doing things, we have to reconcile back to IFRS (or whatever Accounting Standard) to enable management to understand what we actuaries might be up to.

Mrs Dreksler: May I pick up a point that Mr Fisher made? He said that we already look forwards on an underwriting year basis. If you are working in the London market you may well be reserving on an underwriting basis. Obviously, not everybody does.

The next question is: will people have to do it in the future? My own experience is that, whereas companies have tended to do this for working out their additional unexpired risk reserve – and to do it properly only if they needed one – going forwards companies are perhaps going to have to do it properly to come up with that best estimate, irrespective of whether they need an additional unexpired risk reserve.

It is not going to be just a case of looking at the ones where you think they may be not profitable, but all classes of business.

Mr T. A. G. Marcuson, F.I.A.: I would like to describe how I think about a lot of these problems, namely to think about the risk appetite of the business and, in this context, how much the weaknesses in the approach used matter.

The reason for putting it like this is that over a number of years people have been hugely critical of things like triangulation techniques, the chain ladder and Bornhuetter-Ferguson, saying that they have so many limitations. But actually, in practice, what you find is that lots of people use them, and lots of people find that they are not really that bad as tools to get you to where you need to get to on reserves.

What you typically find is that when reserves are wrong it is because of something quite big or people doing something slightly crazy. Put another way, either you are doing something and you are not using the tools properly, or you did not really have much chance in advance of knowing where the problem was.

So, I feel it is important to say something in favour of these techniques. In the context of the risks that the businesses face, they are not always so bad, and if they then lead you to a loss ratio approach to calculate your premium provision, you are not necessarily getting to such a terrible place.

That does not mean that you can ignore all the things and the potential ways that these things can go wrong, and all the flaws that you set out in your note and you have described today. But it does mean that once you have thought about them and used the simpler techniques as a base-line, you tend to end up in a position to go forward.

The alternative seems to be one of doing something that is extremely complicated and extremely “stochastisized”. This may be great in terms of statistical purity, and it might address all the problems that arise when you are discounting a loss ratio, working with means, and facing the consequences of Jensen's Inequality, and all the other technical challenges to overcome. Unfortunately, the process complexity, the very short timescale and the risk of losing yourself and failing to give a sensible result can become too great. We need to remember that the thing that, as a profession, we have done really, really well, certainly in the general insurance area, is to focus on the common-sense aspect of what we do.

On top of this is the question of whether we can communicate sensibly to management what on earth is going on when there is something important to address. They need to be able to really challenge us and keep us honest in what we are doing. As we evolve into using these other techniques more, we do need to be really mindful of what can go wrong. With techniques as they are, I have a preference for keeping the method simple and knowing why it is wrong rather than keeping the method complicated and having no idea what is going on.

Mr P. H. Hinton, F.I.A.: It is important to consider the question of technical provisions in the context of Solvency II as a whole. Solvency II is not just about number crunching and about the estimation of assets and liabilities on a new and different basis. This is just part of it. In particular, it is about risk management and communication, as Mr Marcuson says.

Another relevant consideration is proportionality. A further point that we need to consider is the unfortunate fact that, despite the best efforts of all involved, insurers have been given far too little time to adjust their systems. Even now we do not know the detail: the Level II and Level III measures have not yet been finalised.

There seldom is, or has been, data of sufficient quality or quantity, or enough time or resources, to calculate technical provisions as well as one would like. That is not a new problem. A degree of compromise is therefore normally necessary. Obviously, this will be exacerbated in the early days of Solvency II because it is not practical or cost-effective to adapt systems in time to provide the new data and the new analysis which is required.

Part of the role of the actuarial function under Solvency II will be to inform the Board about the reliability and adequacy of the calculation of technical provisions. The additional uncertainty introduced by any inadequate or unreliable data will affect the firm's risk profile and so affect its capital needs. It will therefore need to be reflected in the internal model for firms relying on internal models; for firms on the standard formula it may give rise to the need for a capital add-on.

The Board, and ultimately the regulator, will need to be given information to assess these things. Consideration will also have to be given to the information that will be placed in the public domain. I would hope that we will be discussing the communication of all these things because that is at least as important as actually doing them.

Over time the quality of data and estimates will improve. An important part of the actuary's role is to suggest improvements both in the data to be collected and the way it is to be analysed, and to ensure that the Board understands the capital implications of not making improvements. Then the firm will have to decide whether or not to make those improvements.

There is to some extent a trade-off between capital needs and how accurately we calculate provisions.

The Chairman: Thank you, possibly an agenda for the actuarial function. I am now going to hand over to Mr Piper. He has worked in reinsurance marketing, consultancy, run-off, and now Solvency II as an actuary for a Lloyd's syndicate.

Mr J. M. Piper, F.I.A.: I prepared for this discussion by having a few conversations with individuals, which were very helpful, some inside the working party, some outside. So I should like to thank them first.

The things I discussed were whether there was a general buy-in to the concept of binary events at all; discussion about the available guidance; what approaches firms are using for the calculation of binary events; whether there is any risk of manipulation of this binary load, and a couple of other points.

The term “binary event”, outside of the Solvency II world, generally refers to an event with two distinct outcomes. In the insurance world we might think of a large claim which goes to litigation, where you could either win or lose the case. Here we are talking about the definition put out by the Groupe Consultatif, which refers to low probability/high cost events. There are a small number of examples.

One thing that has come out from talking to people about this is that there is a strong belief that going through the process of thinking about these events and discussing them gives genuine added value, and alerts management to the level of potential exposure, forces firms to look at the emerging risks, and that this has been very useful.

When reviewing the published guidance, there are some inconsistencies. The QIS5 guidance on technical provisions suggested that existing simple actuarial methods that are in use could be adequate for allowing for binary events, including a specific reference to the chain ladder method.

The Solvency II directive itself and EIOPA guidance make reference to the weighted average of all possible scenarios. Looking at that, I probably should have highlighted or underscored the “all”, which is the significant word there.

In the Lloyd's guidance, they pick up on this difference in language and suggest that this wording implies a wider scope than the GAAP terminology of “reasonably foreseeable” or the old GN20 wording about allowing for latent claims in a manner consistent with how they have emerged in the past.

Lloyd's guidance also gives suggested methodology. It steers towards looking at the binary loading as the difference in the means of full and truncated distributions. There has generally been a fair amount of focus on the Lloyd's guidance both inside and outside the Lloyd's market because it is available and it is a very practical guide.
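Under that reading, the loading is the shortfall of the truncated mean against the full mean. A minimal simulation sketch, assuming a lognormal severity with invented parameters and an arbitrary cut-off:

    import numpy as np

    rng = np.random.default_rng(0)
    losses = rng.lognormal(mean=0.0, sigma=0.8, size=1_000_000)   # assumed severity model

    cutoff = np.quantile(losses, 0.99)                # the subjective truncation point
    full_mean      = losses.mean()
    truncated_mean = losses[losses <= cutoff].mean()  # the mean the observed data would suggest

    loading = full_mean / truncated_mean - 1
    print(f"implied binary events uplift: {loading:.1%}")

For these particular parameters the uplift comes out at around 6%, but the answer is highly sensitive to both the assumed tail and the choice of cut-off, which is exactly the subjectivity noted earlier.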

A general comment that I received from a number of people was that there was concern over limitations of guidance generally.

As to the actual approaches that are being seen, it is clear they are still developing, different approaches are being used. There is no consensus yet as to the right way to do this. There was a common thread and that was using detailed analysis either with the probability severity approach or truncated distributions, but then expressing this as a simple percentage load which might vary by class.

As to the actual magnitude of the loads that are being seen, Lloyd's guidance refers to an indicative range of 2% to 5%. The general comment I received was that where firms had carried out detailed analysis, they were struggling to come up with numbers as high as that. Again, there are some firms applying zero uplift on the grounds that the existing methods already allow for what we are calling binary events. This is probably a significant point. Phrases were used like, “We do this already. Why is there any reason we need to change things?”

On the risk of manipulation, this was a question which I thought was going to be more interesting than it turned out to be. I thought that there was a risk that binary event loading could be used as a back door way of management putting in a loading on top of the true best estimate. There seemed to be consensus that the fact that it is a transparent load limits that risk.

Is this really a capital issue rather than a reserving issue? If the meteor does strike and we have a 1% load on our reserves, does that really help us very much? If there truly is a difference between Solvency II and IFRS or GAAP reserves, does that mean an IFRS or GAAP reserve should include binary events?

There was concern as to whether the UK was consistent on binary events with the rest of Europe. I did try asking and getting some views on what the approaches are in continental Europe and the rest of the EU without much success.

Is there going to be a consensus on methodology or approaches? Is that even desirable? Is there any sensible way that you can do validation on binary events, given their extreme nature?

The Chairman: As you said at the start, there are two definitions of binary events. There is an accepted one about a legal result still to come. The other is a high severity/low probability event. There is not actually any wording in the directive that refers to a binary event. Are people comfortable with this? Or is the view that it comes entirely down to the reference to a full range of potential cash flows, and that people have therefore taken it to mean we are to think about what is not already reflected in the data?

Mr Piper: The Groupe Consultatif were the first body to come up with this definition; there is reference to low frequency/high cost. I am not aware of anything prior to that nor am I too sure why they came up with that.

Mrs Dreksler: My interpretation is that we need an amount to cover the difference between what you have got in the data and what you would need to get to a true mean allowing for all possible future outcomes.

I think that the term “binary event” is unfortunate. It is good in that you should be considering both positive and negative outcomes but it is wrong in the fact that it detracts from what we are trying to get to, which is the true mean.

My personal preference is to abandon the Groupe Consultatif definition and to say, “What have we allowed for and what could possibly happen?”

I think that the important things to allow for are not necessarily the very extreme events, such as the meteor strike because the chances are we will not be around after such an event, or it will just be so severe an event that the insurance company will not be paying anything to anybody. The binary event allowance is to allow for things that are not perhaps so unusual, that may indeed have happened in the past but are still not in our data.

My favourite example is the long freeze over the winter. In the last 300 years or so it has been so cold for prolonged periods that the River Thames has frozen. They have been able to hold markets on it. That is a feasible future event.

This is the sort of thing that I think we need to be allowing for. It is the sort of event after which the insurance company would continue to pay out, and it could potentially be a large loss-maker for the likes of motor insurers and house insurers. That is how I interpret binary events.

The Chairman: Thank you Mrs Dreksler. Would anybody like to make the case for a zero loading here in terms of binary events? Are people intending to look at the analysis and put a zero in there?

Mr A. S. Collins, F.I.A.: You give a few examples of low probability/high cost events which could be reversed. Retrospective legislation – that could be a positive. You could have high inflation, or equally low inflation.

In my company we have very much focused on the downside, but there does not seem to be much acceptance of the potential upside. There is definitely a reasonable argument for having a 0% loading if you are reserving on a best estimate with the data you have.

Ms J. Shing (Ernst & Young): Following Mrs Dreksler's example on the long freeze, in theory if you have done your ICA properly, would you not have considered these scenarios already? Hence, you might have a capital loading already for these events.

The Chairman: I guess the question is whether we have put a full enough set of scenarios in there?

Mrs Dreksler: I think you do have to be careful about double counting; allowing for events both in your technical provisions and your capital requirement. Have you allowed for that?

Mr Wheatley: In terms of binary events and the validation point, I think that I would agree that the only way you can try to validate it is to think about the events that could happen, the long freeze for example, and compare that to the technical provisions result and then work out if there is anything missing, and if you do need a binary event. I think that the problem is that this is quite a hard/technical area and Lloyd's have told us what they believe the answer could be. People are now doing a bit of goal seeking; if their approach comes up with a number close to Lloyd's then that is what they are running with. I think that there is a lot of cynicism in the market as to what value people get from this process.

In terms of considering upside risk, I have seen two of my clients consider upside risk and downside risk, so truncating both the top and bottom of the distribution, but have still come up with a positive load overall.

Mrs K. A. Morgan, F.I.A.: Not all insurers will be using internal models. So if you have not got an internal model, you will not be able to model these binary events in it.

Mr P. D. Smith, F.I.A.: I agree with Mrs Dreksler that our estimation methods often give something below the true mean. We are therefore trying to quantify the difference between our estimate and the true mean, and I know there are enormous difficulties in doing so. I am unconvinced about simply saying that we should quantify the difference between a truncated and a full claim size distribution, because this begs the question about the error in our estimate of the tail of the distribution. In practice, there may be issues like binary events, that produce a big kick in the tail which normal curve fitting methods would not allow for.

One speaker queried whether binary events should be a capital issue rather than addressed by increasing the mean of the distribution, but this depends on your portfolio. If you do not have a diversified portfolio, then your binary event capital requirements will completely dwarf any increase in your mean to allow for those events. The increase in the mean is then a non-issue. However, if you have a highly diversified portfolio, then it is essential to increase the mean to allow for the binary events.

My final point is about the sheer practicality of trying to estimate the binary event mean by a bottom-up quantification of known events because, by definition, this ignores Donald Rumsfeld's unknown unknowns. You should quantify the known events, and then add X% for the unknowns; X is not zero, but it is a subjective judgement whether it should be 25%, 50%, etc. At least you are then addressing all the relevant issues, and the result should be compared with the curve fitting estimate as a sense check.

Mr Spain: I am a bit surprised that a small percentage uplift would be necessarily appropriate. Different Lloyd's syndicates, for example, have different portfolios. Some will be very much more subject, I would have thought, to some of those problems than others.

Mr Fisher: Whenever the subject of binary events comes up, I worry increasingly that, as a profession, we are in danger of getting ourselves trapped in a very esoteric debate that is of limited value to the stakeholders that we are seeking to advise.

I absolutely recognise that if you are trying to get to a true mean best estimate, then our traditional methods may well understate that mean best estimate and therefore legitimately an additional loading may be necessary. But at the same time we need to bear in mind the requirements for proportionality which is entirely permitted by the guidance, and also a general preference for simplicity.

Mr Marcuson, in the previous discussion, articulated very well why simplicity is important for our ability to communicate the work that we are doing. So getting into complex methods of truncating distributions, determining exactly what the shape of the tail of that distribution might be and how that might affect the binary events loading, is probably heading in the wrong direction, because it applies an overly complex approach where a relatively small adjustment is what is required.

We need to recognise the sheer unquantifiability of a perfect binary events load, and then accept that there is simply an actuarial judgement to be made; the same kind of judgement that we are making all the time when we are having to choose different assumptions that are part of our reserving basis.

We have to make sure that we understand the types of claims that we are exposed to that might fall into the binary events category, and then use our skills to make a judgement that we can then communicate to people in a very simple and straightforward way which will save us all a lot of time and effort in the long run.

Mr Roberts: I have a lot of sympathy with the previous comment. To me it is with some regret that we have to change our balance sheet at all for Solvency II, but that is water under the bridge.

I do think that there is scope for confusion among management because of this second balance sheet that we are all going to have to produce, and in my case it will definitely be a second one – maybe even a third one. That is unfortunate.

My other point was that irrespective of Solvency II, we have the Accounting Standards Board addressing these issues. This one in particular I am told came up at a recent meeting between the IASB and FASB, when they were talking about this very thing.

The issue that seemed to concern them a lot was what happens if on your balance sheet date you know there is a hurricane 200 miles off Miami? What do you then book? Fortunately, I am not exposed to that particular risk, but I think that it does beg quite a lot of questions that people may have to think about in the future, irrespective of Solvency II, just under accounting standards generally.

Mr Marcuson: I have a lot of sympathy with the comments that previous speakers have made. There is, as the name suggests, the binary event, where something happens or it doesn't. This ought to be a simple thing. If you are, as Mr Roberts said, sitting with a Miami reinsurance treaty portfolio and a hurricane sitting off the coast, you may need to decide whether you should be making an additional allowance for that in your premium provisions, because the expected losses that your tools (calibrated to normal circumstances) calculate will not have allowed for the changed shape of the loss distribution in the current situation.

That seems to me to be an entirely sensible thing to be doing. Equally, if you know that in your liability reserve book you have a major all-or-nothing claim to allow for (and even if in practice there are settled outcomes in between), you need to make sure that you have thought about the sufficiency of your reserve in light of this. It might be crazy not to do so.

Then you have the other interpretation of a binary event loading. This says that reserves are never quite enough and that we ought therefore to be adding somewhere between 2% and 5% (or whatever) to our initial estimates. Getting reserves that accurate is just a very difficult thing to do because we normally don't have all of the facts.

I think that judging what this allowance might be is actually the really hard change arising from the move to Solvency II. It is not hard because we should be adding a bit. The hard bit is that up to now, you very often go to committee meetings as the actuary providing advice or an opinion. It is, however, the audit committee or the reserve committee, or whoever, that is deciding the provisions, not the actuary.

So you have the actuary coming up and doing all the analysis and saying, “This is what I think the number is” and in the past what has happened is management then said, “Thank you very much. We are going to book a little bit higher.” They are aware of all the soft, subtle things that go wrong. They have worked with the actuary over a period of time.

Now, under Solvency II, we will be in the situation where the actuary has ownership and responsibility for a number which is going to be much more visible. That then puts a burden on the actuary to think, “How do I make sure that my answer allows for these soft factors in an appropriate manner?”

As we all know, there is a whole spectrum of approaches and philosophies that actuaries take that are quite justifiable, and that is where the challenge will arise. If your approach is to take your standard actuarial technique, take the answer that comes out of it, and accept that as the hard number – accepting the rough with the smooth, the overs and the unders – that is fine, and that is one philosophy. But if your approach is instead to use these tools as a means to an end, accept their limitations, and feel that you probably need to add a little bit of pessimism because of the skew distributions you are dealing with, that is another philosophy, and it may capture the spirit of a mean best estimate more closely.

I think that trying to work out how, as actuaries, we can centre ourselves in a sensible way, will be difficult. I do not think that there are any magic solutions to it; but I do think that we risk losing ourselves if we focus on some sort of analytical approach on the binary event load other than saying we have done some research on what the load might be, and we have also done some research about the spectrum of reasonable estimates that actuaries come up with, and, guess what, the latter is probably bigger than the former. If that is the case, then it seems that the discussion to have is about what our mean reserve represents, and how we demonstrate it meets our requirements. That would be preferable to treating our reserves as places where we put things on top of other things.

Mr D. Brown, F.I.A.: I just wanted to give my company's view on how we are going to uplift for binary events. I believe that the method is pragmatic and I would encourage people to look into it. It is based on an older working party paper called, “We Are Skewed” (Fleming, 2008). The paper investigates how a sample mean underestimates the true mean of a distribution. They based their method on a log-normal distribution, but similar approaches could be used on other distributions.

In the paper they compared the mode of the distribution of a sample mean to the true mean of the original distribution. As the number of samples goes up, the mode tends towards the mean because of the Central Limit Theorem; the higher the coefficient of variation of the distribution, the more slowly it tends towards the mean.

We are looking at using that difference between the mode and the mean, where the number of years’ history is the sample number, in order to come up with a reasonable uplift factor.

Fortunately, the results were in the 2% to 5% range that Lloyd's estimated, so we were comforted by that.
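A simulation sketch of the idea Mr Brown describes, with invented lognormal parameters: simulate the sampling distribution of the mean, locate its mode, and take the ratio of the true mean to that mode as the uplift.

    import numpy as np

    rng = np.random.default_rng(42)
    mu, sigma = 0.0, 0.8      # assumed lognormal parameters - purely illustrative
    n_years = 30              # years of history = sample size
    true_mean = np.exp(mu + sigma ** 2 / 2)

    # Simulate many sample means, each based on n_years observations.
    sample_means = rng.lognormal(mu, sigma, size=(100_000, n_years)).mean(axis=1)

    # Estimate the mode of the sampling distribution from a histogram.
    counts, edges = np.histogram(sample_means, bins=200)
    mode = 0.5 * (edges[counts.argmax()] + edges[counts.argmax() + 1])

    uplift = true_mean / mode - 1
    print(f"implied uplift: {uplift:.1%}")

With these invented parameters the uplift happens to land in the low single digits; fewer years of history or a higher coefficient of variation pushes it up quickly.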

***

Mr J. Kirk, F.I.A.: It was that paper, “We Are Skewed” that led to the idea of the truncated distribution. We did just put that out as a method three years ago, and one of the ideas was that it was meant to be simple, transparent and explicit, so at least people knew what was happening.

I know sometimes we have had to stand up and then defend it. Is it the most rigorous? Probably not. Is it subjective? Yes, it is; but it is back to the view that I do think you do have to add something because these are solvency provisions, and if you are only making allowance for items that are reasonably foreseeable, you know you are under-estimating it. So what you can do is add something on. That is really what the method does.

Mr J. P. Ryan, F.I.A.: In my experience over the years, an account that shows a lot of historic volatility has unknown unknowns in it. Those types of things are going to come in, in the future. Therefore a loading based on historical volatility – well, the theoretical explanation is such that we can understand the mathematics. In practice, it works out quite well if you look at a series of liability business and other types of business over a long period of time. So, practically, the theory works out.

Mr Peard: I think it is both a capital issue and a reserving issue. I am again going to be conceptual about this.

Thanks to Mr Hinton's comments, I will remember that we have to be proportional and we are trying to solve practical problems.

I think it is helpful to go back to what conceptually is behind this. I think it might be better, rather than calling them binary events, to call them tail events. What we are really looking at is the tail of the distribution. By way of a simple example as to why it is both a reserving and a capital issue, consider an extreme distribution where you have a one in 100 chance of a claim of 100, and a one in 10,000 chance of a claim of 10,000. And that is it. Your data might show one claim of 100 in 100 years. You might be tempted to have a technical provision of 1.

I think, arguably, it is pretty clear that that is insufficient from a statutory position, a regulatory position. Mathematically, the mean of the distribution is 2, which would give you a loading of 100%, not something between 2% and 5%. A point on that is that the 2% to 5% range is going to be entirely dependent on what the underlying distribution is; how significant the tail is in relation to the rest of your calculation.
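Spelling out the arithmetic behind that example: E[X] = (1/100) × 100 + (1/10,000) × 10,000 = 1 + 1 = 2, so a provision of 1 set from the observed data alone understates the mean by half, and the implied loading is (2 − 1)/1 = 100%.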

In terms of the capital issue, again conceptually you should be looking across the whole of your book and looking at the variation in your total reserves in a one in 200 scenario. So, if you have a book of business which consists of a single risk with that distribution, arguably you would be holding 99 capital.

Of course that comes back to the question of what your risk metric is; ideally one would probably be using something like a tail VaR metric, because its mathematical features work better for that.

If you have a whole range of those risks and they are independent and identically distributed, then you will be holding significantly more capital. I am not going to pretend to know what the right answer is. I am just going to say that it is issues like that that we need to think about and hold in our minds when we are trying, practically, to apply the methods that we have in order to come up with answers which are acceptable under Solvency II.
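Mr Peard's example can be made concrete in a few lines. The sketch below assumes the remaining probability mass (0.9899) is a claim of zero, so that the distribution sums to one.

```python
import numpy as np

# Discrete loss distribution from the example; the zero-claim
# probability is the assumed balance of the mass
outcomes = np.array([0.0, 100.0, 10_000.0])
probs = np.array([0.9899, 0.01, 0.0001])

mean = float(outcomes @ probs)   # 100 x 1/100 + 10,000 x 1/10,000 = 2

def var_tvar(alpha):
    """VaR and tail VaR at level alpha for a discrete distribution."""
    cum = np.cumsum(probs)
    i = int(np.searchsorted(cum, alpha))   # index of the alpha-quantile
    q = float(outcomes[i])                 # VaR_alpha
    # Tail VaR averages VaR_u over u in (alpha, 1): the part of the
    # quantile's own probability above alpha, plus everything beyond it
    tail = (cum[i] - alpha) * q + float(outcomes[i + 1:] @ probs[i + 1:])
    return q, tail / (1.0 - alpha)

q, t = var_tvar(0.995)
print(f"mean       = {mean}")    # 2.0
print(f"99.5% VaR  = {q}")       # 100.0
print(f"99.5% TVaR = {t:.0f}")   # 298
```

The mean of 2 against a data-driven provision of 1 is the 100% loading; the 99.5% VaR of 100 is blind to the one-in-10,000 claim, whereas the tail VaR of 298 is not, which is the mathematical feature behind the preference for tail VaR expressed above.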

The Chairman: Thank you very much for that. Certainly one of the themes that I have in mind when looking at what firms are doing, and getting into a dialogue with them, is that firms have to comply with Solvency II. They need to make those decisions, explain those decisions, and be prepared to defend those decisions. It is the quality of the thinking, and firms going beyond the initial decisions about parameters and models, asking, "What are the consequences of this? What might be the challenges?" That is where we are expecting to see that further level of thought, that further depth, in the work that people are doing. Certainly, for me, firms should be deciding and then explaining what they are doing.

I am now very pleased to introduce Mr Kirk, who is head of actuarial services within Lloyd's Market Reserving and Capital department. He is responsible for the provision of all actuarial work concerning Lloyd's market reserves, syndicate capital, the Solvency II dry run and U.K. and international regulatory requirements.

Mr J. Kirk, F.I.A.: I am going to talk about validation. The debates have been very good, especially on binary events. The reason why we picked validation as a topic is because it is something that does keep coming up and there are potential difficulties and practicalities around how you validate technical provisions under Solvency II.

I liked Mr Hinton's words referring to a degree of compromise when you are calculating technical provisions. I have never heard it expressed like that before. So maybe we will bear that in mind.

Solvency II has the requirement that you need to validate the technical provisions.

You are going to have to validate the data that underlies the technical provisions, validate the models and methods, and carry out back-testing as well.

We have talked already about the premium provisions and the points about data: that it should be complete, accurate and appropriate. These are the things to think about for the debate, including the extent of reliance on third parties, especially the auditors, and then the finance functions.

Talking different languages has been discussed. This is important: if you are trying to reconcile numbers and people are speaking different languages, it can be a fundamental flaw. The other thing about the data is clear responsibility for who is actually going to do what. This is an underlying theme and overlaps with the requirements of the actuarial function. What I want you to think about is: where does the actuarial function sit in this validation? Is it the role of the actuarial function to do a lot of this validation? Some people have already said it is the role of the actuarial function to suggest numbers, so this is a point for debate.

Then we move on to the models and methods. The validation requirements use the same sort of language as for the data: relevant, applicable and appropriate, for both methods and results.

If you then re-read the requirements of the actuarial function, they overlap a lot. In fact, these are pretty much part of them.

A versus E, actual versus expected, is definitely something you have to do regularly. There is a reason why you would do it: it is about understanding the cash flows and, again borrowing language I like, finding "possible flaws in the calculation process". It is there to validate the methods. You need to do this at least annually, and more frequently if things change.
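As a minimal illustration of an actual-versus-expected check, the sketch below compares emerging claim payments against a prior cash-flow projection; all the figures are hypothetical.

```python
import numpy as np

# Hypothetical figures: expected claim payments by development quarter
# from last year's cash-flow projection, against what actually emerged
expected = np.array([120.0, 95.0, 70.0, 40.0])
actual = np.array([131.0, 88.0, 81.0, 35.0])

print("A/E by quarter:", np.round(actual / expected, 2))
print(f"A/E in total:   {actual.sum() / expected.sum():.2f}")
# An A/E ratio persistently away from 1.00 points at a possible flaw in
# the payment pattern or loss-ratio assumptions for the next valuation
```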

Those are very small snippets of the formal requirements. By the way, we know that the Level II text is draft, but a lot of this was in the old CP 39, which became DOC 330933/09, published in 2009.

Where are we currently on validation? You should already be doing it: it is part of the actuarial standards. TAS D and TAS M both talk about validation, in different language, one on data and one on models and methods. So it is something you should already be doing.

In DOC 330933/09 there are some example methods of what you should think about doing: looking at percentiles; goodness of fit; settled versus reported; paid versus incurred; looking at patterns; and then scenario testing or experience investigations. Again, that is just standard. It even uses the phrase "in line with actuarial best practice". It is not prescribed; it is up to the person doing the validation to come up with it.
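One of those consistency checks in miniature: the sketch below flags classes where paid-based and incurred-based ultimates diverge materially. The ultimates by class and the 5% tolerance are hypothetical.

```python
import numpy as np

# Hypothetical paid-based and incurred-based ultimates by class
classes = ["property", "casualty", "motor"]
paid_ult = np.array([4100.0, 7800.0, 2950.0])
incd_ult = np.array([4150.0, 8900.0, 3010.0])

# Flag classes where the two projections diverge by more than 5%
for c, r in zip(classes, paid_ult / incd_ult):
    flag = "  <-- investigate" if abs(r - 1.0) > 0.05 else ""
    print(f"{c:<9} paid/incurred ultimate = {r:.2f}{flag}")
```

A material gap, as for the casualty class here, is a prompt to revisit case-reserve adequacy or the tail assumption rather than an answer in itself.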

These are all current methods; these are all current techniques. I guess one of the questions is: what is different? Why is validation now different? Is it because of the new requirements of Solvency II? For example, validation needs to be done for the best estimate and the risk margin separately, for the premium provision and the claims provision separately, and both gross and net. There is a prescribed split.

Are there new data issues because of the new requirements? Validating the data is more difficult at these new splits.

Or is it just documentation? Do we already do it and are just not very good at documenting it? Now you have to document it. The open question is whether we do actually do this but, as actuaries, do not necessarily document it very well.

The main questions are: Is it anything new? What is different? Why is validation more of a point than it used to be?

We have tried to pick out what people have been doing differently. We found it quite limited, to be honest. We have come up with something that you can do differently, maybe, compared with what you have currently been doing. There is the case of bootstrapping to help understand the results. This could be part of validation: you look at the range, which is meant to be representative and gives us added understanding. The question for debate is: does it really add anything to what we are normally doing, or is this a case of doing something else for the sake of it?

Also, bootstrapping is limited, and you then have to validate that the bootstrap is itself a valid method. And does it really allow for reinsurance? The final point, which is the biggest "but", is that bootstrapping commonly targets your selected mean, so how could it possibly validate the number that you are targeting?
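To make the debate concrete, here is a deliberately simplified bootstrap sketch that resamples observed link ratios; a full over-dispersed Poisson residual bootstrap would be more usual in practice, and the triangle is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cumulative paid triangle (rows are origin years,
# nan means not yet observed)
tri = np.array([
    [1000.0, 1800.0, 2100.0, 2200.0],
    [1100.0, 2000.0, 2350.0, np.nan],
    [950.0, 1700.0, np.nan, np.nan],
    [1200.0, np.nan, np.nan, np.nan],
])

def ultimates(triangle, rng=None):
    """Chain-ladder ultimates; if an rng is supplied, each development
    factor is drawn by resampling the observed individual link ratios."""
    t = triangle.copy()
    for j in range(t.shape[1] - 1):
        obs = ~np.isnan(t[:, j + 1])
        links = t[obs, j + 1] / t[obs, j]
        if rng is None:
            f = t[obs, j + 1].sum() / t[obs, j].sum()  # volume-weighted
        else:
            f = rng.choice(links)                      # resampled factor
        fill = np.isnan(t[:, j + 1])
        t[fill, j + 1] = t[fill, j] * f
    return t[:, -1]

best_estimate = ultimates(tri).sum()
boot = np.array([ultimates(tri, rng).sum() for _ in range(10_000)])

print(f"chain-ladder ultimates:  {best_estimate:.0f}")
print(f"bootstrap mean:          {boot.mean():.0f}")
print(f"5th-95th percentiles:    {np.percentile(boot, [5, 95]).round()}")
```

Because the resampling is centred on the same development factors that produced the best estimate, the exercise describes the spread around the selected mean rather than validating the mean itself, which is precisely the "biggest but" above.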

Once again it is really back to the question: Is this anything new? What is new under Solvency II for validation? I do not think that there is really anything new, if you have not figured that out already.

Here are some of the questions that we would like to ask. Does it need to be improved? Is there anything new, or is it just that there are new requirements in Solvency II so you have to do the old validation on new items? There may be data issues.

What about coming up with standard validation methods, actual methods, as opposed to just the generic "this is how you would validate it"?

Where does the actuarial function fit into it? We would be interested in that. What about independent reviews? Is that the solution? Finally, how often? You have to do it at least annually. What are you going to be doing annually, and what would be ad hoc? What would make you change and absolutely re-validate things more frequently? Those are the types of questions that we would like to open up to you.

Mr D. I. W. Reynolds, F.I.A.: I think that all of this suggests that we will validate ourselves. If validation is part of the actuarial function, then is the actuarial function also providing the model and providing the data? You cannot validate yourself; it has to be independent. I think Solvency II tells you it has to be independent.

Dr E. R. W. Tredger (student): I think that one of the challenges of validation is not so much going through the process of validation itself, but how you use the results: once you have actually run the tests, thinking of a sensible way of doing something differently. Otherwise, there is really not much point in running the validation in the first place.

If you are both the person investigating, and the person validating your own models within the actuarial function, there might be more potential for understanding why you have gone wrong and how to improve in future, although I do recognise that it is also very important to be objective and independent.

Mr P. J. Yeates, F.I.A.: My feeling, in terms of what is new with validation, is that, perhaps in keeping with a lot of Solvency II, it is the new formality. There is a need for the rest of the business to be involved. Whereas some of these validations used to happen in the back of a reserving report, or something like that, now the spirit to me feels like you need to take it to the underwriters, take it to the claims people, take it to the Board when necessary: so perhaps a bit more exposure to the business of this validation work that we have been doing anyway.

The Chairman: I was certainly hoping that it would not be the regulator who was going to be doing the validation; that they could rely upon the work that was being done by the firms, saying to the firms, “You show us how you have gained comfort that these were the appropriate calculations that were done.”

Mr Kirk: I do not know if that is different. That is how it should be and I think that is the whole point. The current requirements are that you should be able to demonstrate validation. Under whatever governance structure you have, you should be able to demonstrate why you believe results are valid.

I wonder whether it has not been seen that way, and just wonder whether we are seeing a shift towards how it should be.

The Chairman: Or is it a restatement of the ideal that we have always not been quite reaching?

Mr Kirk: Yes, I think that is what we also want people to think about, especially the role of the actuarial function and the requirements behind why it has been formed. Is this function going to be calculating the technical provisions, validating them, and then presenting them?

I think that the question then is: how would it work? You do have to think about the validation and how to get objective challenge.

Mr Roberts: On validation, I always think when I am doing a reserving exercise that the first thing I do is validate whatever I did last time. Did I choose a good method? Were the assumptions sensible or not? It strikes me that the validation is actually part of the first line piece of work that gets done.

Also, the proposal that you have to have it validated by somebody independent is, in some parts of Europe, going to put a real strain on the system. There are just not enough actuaries to go around in many emerging nations in Europe, and in some of the more developed ones, too.

So it strikes me that you self-validate, because that is part of the process, and you then add some peer review from appropriate third parties, and that is the validation done. But to start saying that we have to have a total separation between front-line people and the actuarial function is just impractical. I am sure people will devise governance structures that get rid of that conflict of interest, which I think is what the Level III text actually says.

Mr Kirk: We were asking that as a question. I do not think that is necessarily what we were saying.

Mr Hinton: Regarding data validation, reliance on third parties can be very dangerous. The data probably originates from a process that was not designed for estimating provisions. It may be perfectly suitable for that process, so it is validated: yes, all correct. But it is not sufficiently correct for our purpose. Unless the person doing the validation understands what you are going to use it for, and understands it sufficiently to say that it is good enough for your purposes, you cannot rely on what it is telling you. You do need to understand what has been done to verify it as well.

Mr Kirk: I think that this is something which has been asked about before. When we introduced annual accounting at Lloyd's some six or seven years ago, there was reliance on the auditors for the UPRs and the unearned exposures. There was a lot of debate around that. I think it has actually turned out relatively well, but it does involve communication. So long as there is communication between the parties you are relying on, I do not think that should stop anyone relying on them. Also, be clear what you have done to check the data for reasonableness where you are relying on others, and make the statement that you think it is complete, accurate and appropriate, and why you think so.

I personally would argue against saying that the actuarial function, or the reserving function, is going to be responsible for validating all the data, because that is an area that I do not think we want to enter. We do have to rely on others. So just make sure you are clear on whom you are relying, how you are relying on them, and that you understand what they have and have not done and are providing.

The Chairman: It just remains for me to say thank you to all of you who have contributed. We have been very pleased at the quality and the quantity of the contributions. I think that we have seen some key issues highlighted in terms of the extent to which we are relying on existing methods in the approach that we are going to take. That is entirely appropriate.

There have been some good contributions and some challenging contributions from life actuaries and also some non-executives saying “okay, you need to look at this afresh, you need perhaps to challenge the thinking”. I have certainly appreciated those comments. This is a complicated area, an important area.

One of the things that we were concerned about in looking at Solvency II is that the focus on capital and not the technical provisions was really a discussion about how high the sea wall should be when we have not actually decided how high the sea level is. Technical provisions are such an important part of deciding the fundamental strength of an insurance company's balance sheet that I think we need to focus on it very carefully and it should continue to be a topic of focus among actuaries. We have a key role to play within that.

References

Fleming, K.G. (2008). Yep, We're Skewed. Variance, 2(2), 179–183. Available at http://www.variancejournal.org/issues/02-02/179.pdf
Figure 1. Elements of technical provisions affected by Solvency II.