NCCI.InformationalExhibits

Reading: National Council on Compensation Insurance, Circular CIF-2018-28, 06/21/2018. Informational Exhibits 1 — 3.

Synopsis: This is the second of two articles on the NCCI Circular CIF-2018-28 reading. It covers the second part of the circular which is a series of informational exhibits that explain how the NCCI derives their aggregate excess loss factors on demand for their retrospective rating plan. The first part of the NCCI Circular reading is available at NCCI.Circular.

Forum

Study Tips

The source material is rather long and, although it provides a lot of detail about many of the calculations, it lacks the tables needed to perform them yourself, and the calculations often require numerical methods to solve. Also, the informational exhibits do not provide any examples of real-life calculations. Therefore, focus on understanding the flow of the material and how it is used rather than worrying about the minutiae of the calculations. There are other topics that are much more likely to come up on the exam and account for more points.

Estimated study time: 4 hours (not including subsequent review time)

BattleTable

This is a new reading and, because the CAS no longer publishes past exams, there are no prior exam questions available. At BattleActs we feel the main things you need to know (in rough order of importance) are:

Questions are held out from the most recent exam. (Use these to have a fresh exam to practice on later. For links to these questions see Exam Summaries.)
reference part (a) part (b) part (c) part (d)
Currently no prior exam questions

Full BattleQuiz You must be logged in or this will not work.

Forum

In Plain English!

Overview

There are three informational exhibits which detail the derivation of the aggregate excess loss factors for both the on-demand factors and those provided in the pre-tabulated countrywide tables.

In terms of testability, the first exhibit and the last part of the third exhibit are probably more realistic to draw material from. If you're reading the source as well, note there is a lot of overlap between these exhibits and the NCCI.Circular wiki article - Alice has done her best to avoid duplicating any content here so you can study efficiently.

Informational Exhibit 1

This exhibit deals with how the NCCI produces the aggregate loss distribution used to create the Aggregate Excess Loss Factors. The key idea is to start with separate distributions for the claim counts and claim severity before merging them using Panjer's Algorithm to create an aggregate loss distribution.

For a quick refresher on merging a count and severity distribution together without using Panjer's Algorithm, try the following problem.

Create Aggregate Loss Distribution from Count and Severity Distributions
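
Before diving into the NCCI machinery, here is a minimal brute-force Python sketch (not part of the source reading; the distributions are made up) showing one way to combine a discrete claim count distribution with a discrete severity distribution by direct enumeration. Panjer's Algorithm, covered below, does the same job far more efficiently.

 from itertools import product

 # Hypothetical discrete distributions (values are illustrative only).
 count_dist = {0: 0.60, 1: 0.30, 2: 0.10}    # P(N = n)
 sev_dist = {1000: 0.70, 5000: 0.30}         # P(X = x) for a single claim

 def aggregate_distribution(count_dist, sev_dist):
     """Brute-force aggregate loss distribution: condition on the claim count,
     enumerate every combination of claim severities, and mix."""
     agg = {}
     for n, p_n in count_dist.items():
         if n == 0:
             agg[0] = agg.get(0, 0.0) + p_n
             continue
         for severities in product(sev_dist, repeat=n):
             prob = p_n
             for x in severities:
                 prob *= sev_dist[x]
             total = sum(severities)
             agg[total] = agg.get(total, 0.0) + prob
     return dict(sorted(agg.items()))

 print(aggregate_distribution(count_dist, sev_dist))
 # roughly {0: 0.6, 1000: 0.21, 2000: 0.049, 5000: 0.09, 6000: 0.042, 10000: 0.009}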

Claim Count Distribution

The NCCI models claim counts using a negative binomial distribution. The distribution is specified (parameterized) by its mean, E[N], and a Variance-to-Mean function, VtM. The variance is computed by multiplying the mean by the Variance-to-Mean function evaluated at that mean.

Notation

The NCCI uses a superscript PC to mean "per-claim" and a superscript PO to mean "per-occurrence". Remember, an accident is an occurrence, and a single occurrence may result in several claims if multiple parties were harmed. You need to keep track of whether your calculations are on a "per-claim" or "per-occurrence" basis as it is necessary to carefully convert between the two at times.

Irrespective of whether you are working on a per-claim or per-occurrence basis, in general you should perform calculations at the state/hazard group level and then sum the results across all state/hazard group combinations for a risk.

Per-Claim Basis

Let [math]E[N]^{PC}[/math] be the sum of the expected number of claims for the policy over all state/hazard groups. Use this value for the mean, E[N].

The negative binomial variance is the product of [math]E[N]^{PC}[/math] and [math]VtM^{PC}[/math]. Here, [math]VtM^{PC}[/math] is the Variance-to-Mean function evaluated at [math]E[N]^{PC}[/math].

Question: What does the Variance-to-Mean function look like, what properties does it have, and how do I find it?
Solution:
The Variance-to-Mean function is defined as [math]VtM(x, A, B) = \displaystyle\begin{cases}1+m\cdot x & \mbox{if } x\leq k \\ A\cdot x^B & \mbox{if } x\gt k \end{cases}[/math].

The Variance-to-Mean function must satisfy the following:

  1. It must be continuous, and
  2. Its first derivative must also be continuous, i.e. the function is smooth.

The above conditions determine the values of m and k in the Variance-to-mean formula as follows:

The slope of the linear function, m, is expressed in terms of A, B and k via [math]m=\displaystyle\frac{A\cdot k^B -1}{k}[/math]. The transition point, k, is given by [math]k=[A(1-B)]^\frac{-1}{B}[/math]. So the Variance-to-Mean function is entirely specified by A and B.

Due to the above conditions, the transition point k is called the tangent point and denoted by [math]E[N]^{TP}[/math]. This is because the slope of the line and the slope of the power curve are equal at this point.

The NCCI determines the Variance-to-Mean function by fitting it to empirical data so in the exam you would be given the fixed values of A and B.
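
For concreteness, here is a small Python sketch (my own illustration, with made-up values of A and B) of how k and m follow from A and B, and how the negative binomial variance is then obtained:

 def vtm_parameters(A, B):
     """Tangent point k and linear slope m that make the piecewise
     Variance-to-Mean function continuous and smooth at k."""
     k = (A * (1.0 - B)) ** (-1.0 / B)
     m = (A * k ** B - 1.0) / k
     return k, m

 def vtm(x, A, B):
     """Piecewise Variance-to-Mean function: linear up to k, power curve beyond."""
     k, m = vtm_parameters(A, B)
     return 1.0 + m * x if x <= k else A * x ** B

 A, B = 5.0, 0.6                  # illustrative values only
 expected_claims = 40.0           # E[N]^PC summed over state/hazard groups
 variance = expected_claims * vtm(expected_claims, A, B)   # Var(N) = E[N] * VtM(E[N])
 print(vtm_parameters(A, B), variance)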

Per-Occurrence Basis

When working on a per-occurrence basis the claim count distribution is still parameterized using the mean [math]E[N]^{PO}[/math] and Variance-to-Mean function [math]VtM^{PO}[/math]. However, it's necessary to carefully convert from the per-claim basis to the per-occurrence basis by dividing [math]E[N]^{PC}[/math] by the per-occurrence constant to get [math]E[N]^{PO}[/math]. The per-occurrence constant, [math]\alpha[/math], is determined empirically by dividing the number of claims by the number of occurrences.

Similarly, an adjustment to [math]VtM^{PC}[/math] is needed to calculate [math]VtM^{PO}[/math]. It is trickier because you need to form a negative binomial per-occurrence distribution whose probability of zero occurrences is the same as the probability of zero claims under the per-claim negative binomial distribution. (Can't have a claim without an occurrence!)

The general approach is to express the mean and variance of the negative binomial distributions using [math]r_i[/math] and [math]\beta_i[/math], where the subscript i indicates the basis (1 = "per-claim", 2 = "per-occurrence"). To meet the zero claims/zero occurrences probability requirement the following must hold:

  1. [math]r_2\beta_2=\frac{r_1\beta_1}{\alpha}[/math], and
  2. [math]\left(1+\beta_1\right)^{-r_1} = \left(1+\beta_2\right)^{-r_2}[/math].

The first condition adjusts the mean, while the second ensures the probability of zero occurrences under the per-occurrence distribution equals the probability of zero claims under the per-claim distribution. With these adjustments, [math]VtM^{PO}[/math] is calculated by numerically solving [math]\displaystyle\frac{\ln(VtM^{PC})}{\ln(VtM^{PO})}=\frac{VtM^{PC}-1}{\alpha\left(VtM^{PO}-1\right)}[/math].
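
A rough Python sketch (my own, with made-up inputs) of how [math]VtM^{PO}[/math] could be backed out numerically, here with simple bisection; any root-finding method would do:

 import math

 def solve_vtm_po(vtm_pc, alpha, lo=1.0 + 1e-9, hi=1e6):
     """Solve ln(VtM_PC)/ln(VtM_PO) = (VtM_PC - 1)/(alpha*(VtM_PO - 1)) for VtM_PO."""
     def f(v):
         return math.log(vtm_pc) / math.log(v) - (vtm_pc - 1.0) / (alpha * (v - 1.0))
     f_lo = f(lo)
     for _ in range(200):                       # bisection on [lo, hi]
         mid = 0.5 * (lo + hi)
         if (f(mid) > 0) == (f_lo > 0):
             lo, f_lo = mid, f(mid)
         else:
             hi = mid
     return 0.5 * (lo + hi)

 # Illustrative inputs: per-claim VtM of 9 and alpha = 1.2 claims per occurrence.
 print(solve_vtm_po(9.0, 1.2))    # roughly 6.8, so VtM_PO sits below VtM_PC here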

mini BattleQuiz 1 You must be logged in or this will not work.

Severity Distribution

The severity distribution used varies by state and hazard group and is determined by formulas and parameters not included in the study kit (the ELF Parameters and Tables document). The formulas and parameters come from fitting excess ratio curves using a mixture of two lognormal curves spliced with a generalized Pareto tail. The NCCI lets [math]XS_{CG}(L)[/math] be the function for calculating the unlimited claim group (CG) per-claim excess ratio at loss limit L.

A requirement of Panjer's Algorithm is that the severity distribution be given as a uniformly discrete distribution. So the next step converts the continuous severity distribution into a discrete distribution with equally spaced evaluation points.

The continuous severity distribution is evaluated at equally spaced points going from 0 to the minimum of the loss limit L and [math]10\cdot AGG_L[/math], where [math]AGG_L[/math] is the aggregate expected limited loss for the policy. The calculation of [math]AGG_L[/math] depends on the type of loss limitation.

Alice: "You should skim what follows - the material involved is likely too complicated to calculate in a timed closed book exam. Focus on the concepts and resume focusing in detail on the calculations when you reach assigning probabilities to the evaluation points."

Per-Claim Basis:

[math]AGG_L=E[N]^{PC}\cdot \mathrm{AvgSev}_L^{PC}[/math], where [math]\mathrm{AvgSev}_L^{PC}[/math] is the weighted average of the per-claim limited severities across all state/hazard groups using the expected claim counts as the weights.

The per-claim limited severity for a state/hazard group is calculated as the average unlimited severity, [math]AS_{CG}[/math], multiplied by [math]1-XS_{CG}(L)[/math]. The NCCI uses a $50 million limit to define "unlimited" severity; severities over this are excluded and treated as catastrophes. The average unlimited severity is defined implicitly by [math]ACC_{CG}=AS_{CG}\cdot\left(1-XS_{CG}(\$50\,\mathrm{million})\right)[/math], where [math]ACC_{CG}[/math] is the average cost per case. It is necessary to solve this equation using numerical methods because [math]AS_{CG}[/math] is embedded within [math]XS_{CG}(L)[/math].
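
To see the circularity, here is a toy Python sketch (entirely my own; the excess ratio curve below is a simple Pareto-style stand-in, not the NCCI's lognormal/Pareto splice) of backing out [math]AS_{CG}[/math] from the average cost per case by fixed-point iteration:

 def solve_unlimited_severity(acc_cg, excess_ratio, limit=50_000_000, tol=1e-6):
     """Solve ACC_CG = AS_CG * (1 - XS_CG(limit)) for AS_CG by fixed-point iteration.
     `excess_ratio(limit, as_cg)` is a placeholder for the NCCI excess ratio curve."""
     as_cg = acc_cg                     # starting guess: ignore the excess portion
     for _ in range(1000):
         new_as = acc_cg / (1.0 - excess_ratio(limit, as_cg))
         if abs(new_as - as_cg) < tol:
             return new_as
         as_cg = new_as
     return as_cg

 # Toy stand-in for XS_CG(L): the excess ratio of a Pareto curve scaled by AS_CG.
 def toy_excess_ratio(limit, as_cg):
     return as_cg / (as_cg + limit)

 print(solve_unlimited_severity(60_000.0, toy_excess_ratio))   # slightly above 60,000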

Per-Occurrence Basis:

Similarly, on a per-occurrence basis [math]AGG_L=E[N]^{PO}\cdot\mathrm{AvgSev}_L^{PO}[/math] where [math]\mathrm{AvgSev}_L^{PO}[/math] is the average limited severity on a per-occurrence basis. Then [math]\mathrm{AvgSev}_L^{PO}=\mathrm{AvgSev}_L^{PC}\cdot\alpha\cdot\left(1-XS^{PO}(L)\right)[/math].

[math]\mathrm{AvgSev}_L^{PC}[/math] is calculated the same as in the per-claim section and [math]XS^{PC}(L)=1-\frac{\mathrm{AvgSev}_L^{PC}}{\mathrm{AvgSev}^{PC}}[/math]. This is then converted to a per-occurrence basis using the per-claim to per-occurrence conversion table that isn't part of the study kit. Linear interpolation is used if needed.

Ideally the discretized severity distribution should contain 1,500 intervals up to [math]AGG_L[/math] (equivalently, 15,000 intervals up to [math]10\cdot AGG_L[/math] when that is the smaller upper bound), and there is a lower bound on the number of intervals, called the Minimum Severity Interval value (MSI value). The MSI value is chosen based on the precision required and the computing power available. The larger the MSI value, the greater the precision, but the more computing power or time is required.

The interval size is determined using the following equation: [math]\mbox{Interval size}=\displaystyle\frac{L}{\mathrm{Ceiling}\left[\frac{L}{\mathrm{Min}\left(\frac{AGG_L}{1500},\frac{L}{MSI}\right)}\right]}[/math].
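
Translating that formula into Python (a sketch with made-up inputs):

 import math

 def interval_size(loss_limit, agg_limited_loss, msi):
     """Target width is the smaller of AGG_L/1500 and L/MSI, then shrink slightly
     so the loss limit is an integer number of intervals."""
     target = min(agg_limited_loss / 1500.0, loss_limit / msi)
     return loss_limit / math.ceil(loss_limit / target)

 # Illustrative inputs: $500,000 loss limit, $1,900,000 aggregate expected limited
 # loss, and a minimum of 300 severity intervals.
 print(interval_size(500_000, 1_900_000, 300))   # about 1,265.82 (395 intervals up to L)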

Assigning Probabilities to the Evaluation Points

Let [math]x_i[/math] be an evaluation point for the uniformly discretized severity distribution. For each point compute the per-claim average severity limited at the evaluation point, [math]\mathrm{AvgSev}_{x_i}^{PC}[/math] using the method described above. If the policy has a per-occurrence limit then it's necessary to use the per-claim to per-occurrence conversion table with linear interpolation.

Alice: "It's likely you would be given the average limited severity at each evaluation point on the exam."

View [math]\mathrm{AvgSev}_{x_i}^{PC}[/math] as the per-claim limited expected value and change the notation to [math]LEV^{PC}_i[/math]. Irrespective of whether the loss limitation is on a per-claim or per-occurrence basis, the following properties must hold at each evaluation point i or adjustments are needed until they do.

  1. [math]LEV_i\leq x_i[/math]
  2. [math] LEV_i-LEV_{i-1} \geq LEV_{i+1}-LEV_i[/math]

If these conditions aren't met then set [math]LEV_0=\mathrm{Min}\left(x_0,LEV_0\right)[/math], [math]LEV_1 =\mathrm{Min}\left(x_1,LEV_1, 2\cdot LEV_0\right)[/math] and in general [math]LEV_i =\mathrm{Min}\left(x_i,LEV_i, 2\cdot LEV_{i-1} - LEV_{i-2}\right)[/math].

Once you have the LEV values, calculate the loss in layer (LIL) as [math]LIL_i=LEV_i - LEV_{i-1}[/math], where [math]LIL_0 = 0[/math].

From the LIL values calculate the cumulative distribution function (CDF) at each loss point using [math]CDF_i = 1- \displaystyle\frac{LIL_{i+1}}{x_{i+1}-x_i}[/math]. At the last evaluation point set the CDF equal to 1.

Finally, calculate the probability density function (PDF) at each point using [math]PDF_i = CDF_i - CDF_{i-1}[/math].
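
Here is a short Python sketch (my own; the evaluation points and LEVs are made up, and the zero point is left out for simplicity) of the LEV adjustment followed by the LIL, CDF and PDF steps:

 def discretize_severity(x, lev):
     """Adjust LEVs so LEV_i <= x_i and layer losses never increase, then
     convert to loss-in-layer, CDF and PDF values."""
     lev = list(lev)
     lev[0] = min(x[0], lev[0])
     if len(lev) > 1:
         lev[1] = min(x[1], lev[1], 2.0 * lev[0])
     for i in range(2, len(lev)):
         lev[i] = min(x[i], lev[i], 2.0 * lev[i - 1] - lev[i - 2])
     lil = [0.0] + [lev[i] - lev[i - 1] for i in range(1, len(lev))]
     cdf = [1.0 - lil[i + 1] / (x[i + 1] - x[i]) for i in range(len(x) - 1)] + [1.0]
     pdf = [cdf[0]] + [cdf[i] - cdf[i - 1] for i in range(1, len(cdf))]
     return pdf

 # Four equally spaced evaluation points with illustrative limited expected values.
 x = [1000.0, 2000.0, 3000.0, 4000.0]
 lev = [950.0, 1700.0, 2200.0, 2500.0]
 print(discretize_severity(x, lev))    # roughly [0.25, 0.25, 0.2, 0.3], summing to 1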

Alice: "Woah this is a lot to take in. Let's look at a small example from the NCCI manual."

Uniformly Discretize a Continuous Distribution

Aggregate Loss Distribution

Now that you have a parameterized negative binomial distribution for the claim counts and a uniformly discretized severity distribution, you can begin to apply Panjer's Recursive Algorithm.

Alice: "Panjer's Algorithm is also covered briefly in Clark's section on aggregate models. You should compare the material there against here to reinforce your learning."

A key requirement for using Panjer's Recursive Algorithm is that the severity distribution have zero probability of a loss of size 0. The work done so far may not have produced that, so the following adjustments are made.

Define the adjustment factor, AF, as [math]AF=1-PDF_0[/math], where [math]PDF_0[/math] is the probability density function at 0 for the discretized severity distribution.

Modify the claim count distribution by defining the new expected number of claims and new variance to mean function as

  • [math]E[N]_{\mathrm{Panjer}} = E[N]^{PC}\cdot AF[/math]
  • [math]VtM_{\mathrm{Panjer}} = 1+AF\cdot\left(VtM^{PC} -1\right)[/math].

Modify the severity distribution by removing the evaluation point corresponding to no loss and defining [math]PDF_i^{\mathrm{Panjer}}=\displaystyle\frac{PDF_i}{AF}[/math].

The remaining inputs for Panjer's Recursive Algorithm are:

  • [math]a=1-\frac{1}{VtM_\mathrm{Panjer}}[/math]
  • [math] r= \frac{E[N]_{\mathrm{Panjer}}}{VtM_{\mathrm{Panjer}}-1}[/math]
  • [math]b=a\cdot(r-1)[/math]
  • [math]p_0 = (1-a)^r[/math], where [math]p_0[/math] is the probability of 0 claims.
  • [math]l[/math] is the number of evaluation points in the adjusted severity distribution.
  • [math]I=\displaystyle\mathrm{Ceiling}\left(\frac{10\cdot AGG_L}{\mathrm{Interval\ Size}}\right)+1[/math].

Let [math]y_i[/math] denote an evaluation point for the aggregate loss distribution. The evaluation points go from 0 to [math]10\cdot AGG_L[/math] using the same interval size as previously used when discretizing the severity distribution.

Finally we apply Panjer's Recursive Algorithm by setting [math]PDF_0^{\mathrm{Agg}}=p_0[/math] and letting [math]PDF_i^{\mathrm{Agg}} = \displaystyle\sum_{j=1}^{\mathrm{Min}(i,l)}\left(a+\frac{b\cdot j}{i}\right)\cdot PDF_j^{\mathrm{Panjer}}\cdot PDF_{i-j}^{\mathrm{Agg}} [/math].
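
Putting the adjustment and the recursion together, here is a compact Python sketch (my own, with made-up inputs; sev_pdf[j] is the probability of a loss of j interval widths, with the zero point first):

 def panjer_aggregate(expected_claims, vtm, sev_pdf, n_agg_points):
     """Panjer's recursion for the aggregate PDF on equally spaced points,
     after stripping the probability of a zero-size loss from the severity."""
     af = 1.0 - sev_pdf[0]                        # adjustment factor
     en_panjer = expected_claims * af
     vtm_panjer = 1.0 + af * (vtm - 1.0)
     sev = [p / af for p in sev_pdf[1:]]          # re-normalized, zero point removed
     a = 1.0 - 1.0 / vtm_panjer
     r = en_panjer / (vtm_panjer - 1.0)
     b = a * (r - 1.0)
     agg = [(1.0 - a) ** r]                       # p0 = probability of zero claims
     for i in range(1, n_agg_points):
         agg.append(sum((a + b * j / i) * sev[j - 1] * agg[i - j]
                        for j in range(1, min(i, len(sev)) + 1)))
     return agg

 # Illustrative inputs: E[N]^PC = 3, VtM^PC = 2, and a short severity PDF with a
 # 10% chance of a zero-size loss.
 agg_pdf = panjer_aggregate(3.0, 2.0, [0.10, 0.50, 0.30, 0.10], 8)
 print(agg_pdf)    # first eight aggregate probabilities; the tail is truncated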

Aggregate Excess Loss Factors

The Aggregate Excess Loss Factors (AELFs) are calculated directly from the aggregate loss distribution using the following formulas:

  • [math]AELF_i = 1-\displaystyle\frac{\sum_{j=0}^i\left(y_j\cdot PDF_j^\mathrm{Agg}\right)+\left(y_i\cdot\left(1-CDF_i^\mathrm{Agg}\right)\right)}{AGG_L}[/math], and
  • [math]CDF_i^\mathrm{Agg}=\displaystyle\sum_{j=0}^i\left(PDF_j^\mathrm{Agg}\right)[/math].

The last thing that remains is to convert the evaluation points into entry ratios by assigning the entry ratio [math]r_i=\displaystyle\frac{y_i}{AGG_L}[/math] to the ith evaluation point.

Alice: "If Aggregate Minimum Loss Factors are desired just calculate them using: [math]AMLF_r = AELF_r+r-1[/math]."

Question: What is a key advantage of the approach used to develop the Aggregate Loss Factors on Demand?
Solution:
  • By directly calculating the limited aggregate loss distribution there is no need for an adjustment to remove the overlap between the loss limit and the aggregate loss limit.

mini BattleQuiz 2 You must be logged in or this will not work.

Informational Exhibit 2

This exhibit gives you seven tables of aggregate excess loss factors for the state of Alaska, one for each of the seven hazard groups. The tables assume there is no loss limit and all exposures are from a single hazard group.

Each table has rows indexed by entry ratio, running from 0.2 to 2 in increments of 0.2 and then from 2 to 10 in increments of 1. The columns correspond to the expected number of claims ([math]E[N][/math]).

Alice: "This is a really weird exhibit to have in the study kit as none of the other materials in the Circular refer to these tables. Make sure you know the rest of the NCCI circular material really well and then if they test you on these particular tables you should be able to logic your way through it."

Informational Exhibit 3

This exhibit describes the Table of Aggregate Excess Loss Factors, which is the countrywide pre-tabulated alternative to accessing the state/hazard group specific factors on demand online. The source emphasizes that, because the table is pre-computed, there are additional steps required to use it, which is why the NCCI.Circular example is so involved.

The Table of Aggregate Excess Loss Factors is a countrywide table meant for use with policies that have any of the following:

  • Exposures in any combination of states and hazard groups.
  • Loss limitations of any size.
  • Any number of expected claims.

The NCCI attempted to balance keeping the table small enough to be practical against including enough values to produce the desired level of accuracy. Consideration was also given to the number of calculations a user would have to perform in order to use it.

The Table is split into 18 sub-tables, each of which is defined by a range of policy excess ratios. As the policy loss limit is increased, the policy excess ratio decreases and this places upward pressure on the aggregate excess loss factors. This makes splitting the table into sub-tables by policy excess ratio a natural choice.

Each sub-table is split into columns where each column represents an expected claim count group. For a fixed loss limitation, as the expected number of claims increases the variation in the loss ratio decreases. This means as the expected number of claims increases (i.e. the size of the risk increases) the aggregate excess loss factors decrease for any given entry ratio.

Question: What are two advantages of using expected claim counts rather than claim dollars in the table?
Solution:
  • By using counts rather than dollars there is no need to update the mapping of the claim count ranges to the table columns on a regular basis.
  • Inflation impacts the policy excess ratio for a risk which results in mapping the risk to a new sub-table rather than having to update the table.

The remainder of the exhibit discusses how the countrywide aggregate excess loss factors are populated in the table. The details are highly involved and probably not worth devoting a lot of time to. Instead, you should know the NCCI formed a "Grid" which contained AELFs over a wide range of risk sizes determined by expected occurrences, loss limits and entry ratios. From there they followed the method for the on-demand factors but used countrywide parameters instead. A piecewise exponential parametric form was then found for the Grid, with the AELF values determined from a subset of the Grid known as the Lattice. Once the parametric form was determined, it was expanded to give a form for each of the expected claim count groups and policy excess ratios, which are then used to calculate the AELFs.

Alice: "I just don't see how they can test you on the above in any detail - there's simply too much going on to fit into the time and points allowed. What follows is much more likely to be asked in my opinion!"

The NCCI did several tests on the resulting sub-tables to ensure the following properties hold:

  1. AELFs are monotonically decreasing as the entry ratio increases for a given sub-table and expected claim count group.
  2. The rate of decrease of the AELFs should decrease monotonically as the entry ratios increase.
  3. AELFs increase monotonically as the policy loss limits increase (increase as the sub-table numbers increase).
  4. The AELFs for a fixed entry ratio should monotonically decrease as the size of the risk increases.
  5. The rate of decrease for the AELFs for a fixed entry ratio should monotonically decrease as the size of the risk increases.

Remember, the larger the size of the risk, the more claims it is expected to have, so its results should be more stable, leading to a lower AELF. The lower-numbered expected claim count groups are the ones with a greater number of expected claims.

A Note about ALAE and Loss Limitations

There is only one Table of Aggregate Excess Loss Factors that is pre-tabulated. It covers countrywide risks and can be used regardless of whether the policy includes Allocated Loss Adjustment Expenses or not. Similarly, the same table can be used regardless of whether the policy loss limitation is a per-claim or per-occurrence limitation (this explains the ambiguity in the NCCI.Circular example).

The NCCI conducted an analysis which observed that the relative change in AELFs resulting from including or excluding ALAE with the losses was immaterial. The same analysis concluded that the difference between the AELFs when the loss limitation is per-occurrence instead of per-claim is immaterial. This only applies to the countrywide pre-tabulated table. It is extremely important to note the loss limitation type and whether ALAE is included when working with the on-demand AELFs.

mini BattleQuiz 3 You must be logged in or this will not work.

Full BattleQuiz You must be logged in or this will not work.

Forum