NCCI.InformationalExhibits
Reading: National Council on Compensation Insurance, Circular CIF-2018-28, 06/21/2018. Informational Exhibits 1–3.
Synopsis: This is the second of two articles on the NCCI Circular CIF-2018-28 reading. It covers the second part of the circular, a series of informational exhibits that explain how the NCCI derives its on-demand aggregate excess loss factors for its retrospective rating plan. The first part of the NCCI Circular reading is available at NCCI.Circular.
Study Tips
To follow...
Estimated study time: 4 hours (not including subsequent review time)
BattleTable
This is a new reading, and because the CAS no longer publishes past exams, there are no prior exam questions available. At BattleActs we feel the main things you need to know (in rough order of importance) are:
Questions are held out from the most recent exam. (Use these to have a fresh exam to practice on later. For links to these questions see Exam Summaries.)
reference | part (a) | part (b) | part (c) | part (d)
Currently no prior exam questions
Full BattleQuiz
In Plain English!
Overview
There are three informational exhibits detailing the derivation of aggregate excess loss factors, covering both the on-demand factors and those provided in the pre-tabulated countrywide tables.
In terms of testability, the first exhibit and the last part of the third exhibit are probably more realistic to draw material from. If you're reading the source as well, note there is a lot of overlap between these exhibits and the NCCI.Circular wiki article - Alice has done her best to avoid duplicating any content here so you can study efficiently.
Informational Exhibit 1
This exhibit deals with how the NCCI produces the aggregate loss distribution used to create the Aggregate Excess Loss Factors. The key idea is to start with separate distributions for claim counts and claim severity and then combine them using Panjer's Algorithm into an aggregate loss distribution.
Claim Count Distribution
The NCCI models claim counts using a negative binomial distribution. The distribution is specified (parameterized) by its mean, E[N], and a Variance-to-Mean function, VtM. The variance is then the mean multiplied by the value of the Variance-to-Mean function evaluated at that mean.
Notation The NCCI uses a superscript PC to mean "per-claim" and a superscript PO to mean "per-occurrence". Remember, an accident is an occurrence and a single occurrence may result in several claims if multiple parties were harmed. You need to keep track of whether your calculations are on a "per-claim" or "per-occurrence" basis as it is necessary to carefully convert between the two at times.
Irrespective of whether you work on a per-claim or per-occurrence basis, in general you should perform calculations at the state/hazard group level and then sum the results across all state/hazard group combinations for a risk.
Per-Claim Basis
Let E[N]PC be the sum of the expected number of claims for the policy over all state/hazard groups. Use this value for the mean, E[N].
The negative binomial variance is the product of E[N]PC and VtMPC. Here, VtMPC is the Variance-to-Mean function evaluated at E[N]PC.
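To make this concrete, here's a minimal sketch in Python with made-up values for E[N]PC and VtMPC, assuming the standard mean/variance parameterization of the negative binomial:

```python
# Illustrative only: the E[N]^PC and VtM^PC values below are made up.
# Standard negative binomial parameterization assumed:
#   mean = r * beta, variance = r * beta * (1 + beta),
# so the variance-to-mean ratio equals 1 + beta.

mean_pc = 25.0    # E[N]^PC summed over all state/hazard groups
vtm_pc = 1.8      # VtM^PC evaluated at E[N]^PC

variance_pc = mean_pc * vtm_pc   # negative binomial variance
beta = vtm_pc - 1.0              # since VtM = 1 + beta
r = mean_pc / beta               # since mean = r * beta

print(variance_pc, r, beta)      # 45.0, 31.25, 0.8 (up to floating-point rounding)
```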
Question: What does the Variance-to-Mean function look like, what properties does it have, and how do I find it?
Solution: The Variance-to-Mean function is defined as [math]VtM(x, A, B) = \displaystyle\begin{cases}1+m\cdot x & \mbox{if } x\leq k \\ A\cdot x^B & \mbox{if } x\gt k \end{cases}[/math].
The Variance-to-Mean function must satisfy the following:
- It must be continuous, and
- Its first derivative must also be continuous, i.e. the function is smooth.
The above conditions determine the values of m and k in the Variance-to-mean formula as follows:
The slope of the linear function, m, is expressed in terms of A, B and k via [math]m=\displaystyle\frac{A\cdot k^B -1}{k}[/math]. The transition point, k, is given by [math]k=[A(1-B)]^\frac{-1}{B}[/math]. So the Variance-to-Mean function is entirely specified by A and B.
Due to the above conditions, the transition point k is called the tangent point and denoted by E[N]TP. This is because the slope of the line and the slope of the power curve are equal at this point.
The NCCI determines the Variance-to-Mean function by fitting it to empirical data so in the exam you would be given the fixed values of A and B.
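Here's a minimal sketch of the Variance-to-Mean function using hypothetical values of A and B (the real values are fitted to empirical data by the NCCI and would be given on the exam):

```python
# Hypothetical fitted values of A and B.
A, B = 0.5, 0.5

# Tangent point and linear slope implied by the continuity and smoothness conditions.
k = (A * (1.0 - B)) ** (-1.0 / B)   # E[N]^TP = [A(1-B)]^(-1/B), approx. 16.0 here
m = (A * k ** B - 1.0) / k          # (A*k^B - 1)/k, approx. 0.0625 here

def vtm(x):
    """Variance-to-Mean function: linear up to the tangent point, power curve above."""
    return 1.0 + m * x if x <= k else A * x ** B

# The two pieces meet and share the same slope at the tangent point.
print(k, m, vtm(k), A * k ** B)     # approx. 16.0, 0.0625, 2.0, 2.0
```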
Per-Occurrence Basis
When working on a per-occurrence basis the claim count distribution is still parameterized using E[N] = E[N]PO and Variance-to-Mean function VtMPO. However, it's necessary to carefully convert from the per-claim basis to per-occurrence basis by dividing E[N]PC by the per-occurrence constant to get E[N] = E[N]PO. The per-occurrence constant, [math]\alpha[/math], is determined empirically by dividing the number of claims by the number of occurrences.
Similarly, an adjustment to VtMPC is needed to calculate VtMPO. It is trickier because you need to form a negative binomial per-occurrence distribution whose probability of zero occurrences is the same as the probability of zero claims under the per-claim negative binomial distribution. (Can't have a claim without an occurrence!)
The general approach is to express the mean and variance of the negative binomial distributions using [math]r_i[/math] and [math]\beta_i[/math], where the subscript i indicates the basis (1 = "per-claim", 2 = "per-occurrence"). To meet the zero claims/zero occurrences probability requirement the following must hold:
- [math]r_2\beta_2=\frac{r_1\beta_1}{\alpha}[/math], and
- [math]\left(1+\beta_1\right)^{-r_1} = \left(1+\beta_2\right)^{-r_2}[/math].
The first condition preserves the mean relationship [math]E[N]^{PO}=E[N]^{PC}/\alpha[/math], while the second equates the probability of zero occurrences with the probability of zero claims. Since the variance-to-mean ratio of a negative binomial distribution is [math]1+\beta[/math], these two conditions imply that VtMPO is found by numerically solving [math]\displaystyle\frac{\ln(VtM^{PC})}{\ln(VtM^{PO})}=\frac{VtM^{PC}-1}{\alpha\left(VtM^{PO}-1\right)}[/math].
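Here's a minimal sketch of the per-occurrence conversion with hypothetical values for VtMPC and α, using plain bisection for the numerical solve (any root-finder would do):

```python
import math

def solve_vtm_po(vtm_pc, alpha, lo=1.0 + 1e-9, hi=1e6, tol=1e-10):
    """Solve ln(VtM_PC)/ln(VtM_PO) = (VtM_PC - 1) / (alpha * (VtM_PO - 1))
    for VtM_PO by bisection."""
    def f(v):
        return math.log(vtm_pc) / math.log(v) - (vtm_pc - 1.0) / (alpha * (v - 1.0))
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Hypothetical inputs: per-claim VtM of 1.8 and alpha = 1.1 claims per occurrence.
print(solve_vtm_po(1.8, 1.1))   # about 1.51 for these inputs
```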
Severity Distribution
The severity distribution varies by state and hazard group and is determined by formulas and parameters from the ELF Parameters and Tables document, which is not included in the study kit. The formulas and parameters come from fitting excess ratio curves using a mixture of two lognormal curves spliced with a generalized Pareto tail. The NCCI lets [math]XS_{CG}(L)[/math] denote the unlimited per-claim excess ratio for claim group (CG) at loss limit L.
Panjer's Algorithm requires the severity distribution to be a discrete distribution with equally spaced support, so the next step converts the continuous severity distribution into a discrete distribution evaluated at equally spaced points.
The continuous severity distribution is evaluated at equally spaced points going from 0 to the minimum of the loss limit L and [math]10\cdot AGG_L[/math] where AGGL is the aggregate expected limited loss for the policy. The calculation of AGGL depends on the type of loss limitation.
Alice: "You should skim what follows - the material involved is likely too complicated to calculate in a timed closed book exam. Focus on the concepts and resume focusing in detail on the calculations when you reach assigning probabilities to the evaluation points."
Per-Claim Basis:
[math]AGG_L=E[N]^{PC}\cdot \mathrm{AvgSev}_L^{PC}[/math], where [math]\mathrm{AvgSev}_L^{PC}[/math] is the weighted average of the per-claim limited severities across all state/hazard groups using the expected claim counts as the weights.
The per-claim limited severity for a state/hazard group is calculated as the average unlimited severity (ASCG) multiplied by [math]1-XS_{CG}(L)[/math]. The NCCI uses a $50 million limit to define "unlimited severity"; severities over this amount are excluded and treated as catastrophes. The average unlimited severity is found from the average cost per case, ACCCG, via [math]ACC_{CG}=AS_{CG}\cdot(1-XS_{CG}(\$50\,\mathrm{million}))[/math]. Because ASCG is embedded within XSCG(L), these equations must be solved using numerical methods.
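The sketch below illustrates the idea of the numerical solve only. It substitutes a single lognormal excess ratio curve for the NCCI's lognormal-mixture/Pareto splice (whose parameters aren't in the study kit), and the ACCCG value and the lognormal sigma are made up; only the $50 million catastrophe threshold comes from the text above:

```python
import math
from statistics import NormalDist

CAT_LIMIT = 50_000_000.0   # severities above $50 million are treated as catastrophes

def xs_lognormal(limit, mean_sev, sigma=2.0):
    """Hypothetical per-claim excess ratio at `limit` for a lognormal severity with
    the given (unlimited) mean; a stand-in for the NCCI's fitted XS_CG curve."""
    mu = math.log(mean_sev) - 0.5 * sigma ** 2
    z = (math.log(limit) - mu) / sigma
    N = NormalDist().cdf
    return (1.0 - N(z - sigma)) - (limit / mean_sev) * (1.0 - N(z))

def solve_as(acc, tol=1e-6):
    """Solve ACC_CG = AS_CG * (1 - XS_CG($50M)) for AS_CG by fixed-point iteration,
    since AS_CG also sits inside the excess ratio curve."""
    as_cg = acc                   # starting guess: ignore the catastrophe adjustment
    for _ in range(200):
        new_as = acc / (1.0 - xs_lognormal(CAT_LIMIT, as_cg))
        if abs(new_as - as_cg) < tol:
            return new_as
        as_cg = new_as
    return as_cg

acc_cg = 45_000.0                 # hypothetical average cost per case
as_cg = solve_as(acc_cg)
print(as_cg, as_cg * (1.0 - xs_lognormal(CAT_LIMIT, as_cg)))   # second value ~ ACC_CG
```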
Per-Occurrence Basis:
Similarly, on a per-occurrence basis [math]AGG_L=E[N]^{PO}\cdot\mathrm{AvgSev}_L^{PO}[/math] where [math]\mathrm{AvgSev}_L^{PO}[/math] is the average limited severity on a per-occurrence basis. Then [math]\mathrm{AvgSev}_L^{PO}=\mathrm{AvgSev}_L^{PC}\cdot\alpha\cdot\left(1-XS^{PO}(L)\right)[/math].
[math]\mathrm{AvgSev}_L^{PC}[/math] is calculated the same as in the per-claim section and [math]XS^{PC}(L)=1-\frac{\mathrm{AvgSev}_L^{PC}}{\mathrm{AvgSev}^{PC}}[/math]. This is then converted to a per-occurrence basis using the per-claim to per-occurrence conversion table that isn't part of the study kit. Linear interpolation is used if needed.
Ideally the discretized severity distribution should contain 1,500 intervals up to AGGL. There is also a lower bound on the number of intervals, called the Minimum Severity Interval value (MSI value), which is chosen based on the precision required and the computing power available. The larger the MSI value, the greater the precision, but the more computing power or time is required.
The interval size is determined using the following equation: [math]\mbox{Interval size}=\displaystyle\frac{L}{\mathrm{Ceiling}\left[\frac{L}{\mathrm{Min}\left(\frac{AGG_L}{1500},\frac{L}{MSI}\right)}\right]}[/math].
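Here's a quick sketch of the interval-size formula with made-up values for the loss limit, AGGL, and the MSI value:

```python
import math

def interval_size(limit, agg_l, msi):
    """Interval size = L / Ceiling[ L / Min(AGG_L / 1500, L / MSI) ]."""
    target_width = min(agg_l / 1500.0, limit / msi)
    return limit / math.ceil(limit / target_width)

# Hypothetical inputs: $1,000,000 loss limit, $600,000 expected limited aggregate
# loss (AGG_L), and an MSI value of 2,000.
print(interval_size(1_000_000, 600_000, 2_000))   # 400.0
```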
Assigning Probabilities to the Evaluation Points
Let [math]x_i[/math] be an evaluation point for the uniformly discretized severity distribution. For each point compute the per-claim average severity limited at the evaluation point, [math]\mathrm{AvgSev}_{x_i}^{PC}[/math] using the method described above. If the policy has a per-occurrence limit then it's necessary to use the per-claim to per-occurrence conversion table with linear interpolation.
Alice: "It's likely you would be given the average limited severity at each evaluation point on the exam."
View [math]\mathrm{AvgSev}_{x_i}^{PC}[/math] as the per-claim limited expected value and change the notation to [math]LEV^{PC}_i[/math]. Irrespective of whether the loss limitation is on a per-claim or per-occurrence basis, the following properties must hold at each evaluation point i or adjustments are needed until they do.
- [math]LEV_i\leq x_i[/math]
- [math] LEV_i-LEV_{i-1} \geq LEV_{i+1}-LEV_i[/math]
If these conditions aren't met then set [math]LEV_0=\mathrm{Min}\left(x_0,LEV_0\right)[/math], [math]LEV_1 =\mathrm{Min}\left(x_1,LEV_1, 2\cdot LEV_0\right)[/math] and in general [math]LEV_i =\mathrm{Min}\left(x_i,LEV_i, 2\cdot LEV_{i-1} - LEV_{i-2}\right)[/math].
Once you have the LEV values, calculate the loss in layer (LIL) as [math]LIL_i=LEV_i - LEV_{i-1}[/math], where LIL0 = 0.
From the LIL values calculate the cumulative distribution function (CDF) at each loss point using [math]CDF_i = 1- \displaystyle\frac{LIL_{i+1}}{x_{i+1}-x_i}[/math]. At the last evaluation point set the CDF equal to 1.
Finally, calculate the probability density function (PDF) at each point using [math]PDF_i = CDF_i - CDF_{i-1}[/math].
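Here's a small illustrative sketch of the LEV → LIL → CDF → PDF chain. The evaluation points and limited expected values are made up and already satisfy the two conditions, so no capping is needed:

```python
# Hypothetical evaluation points (equally spaced) and per-claim limited expected values.
xs  = [0, 10_000, 20_000, 30_000, 40_000]
lev = [0,  8_000, 14_000, 19_000, 22_000]

# Verify the two required properties; if either failed, the capping rule above applies.
assert all(l <= x for l, x in zip(lev, xs))
increments = [lev[i] - lev[i - 1] for i in range(1, len(lev))]
assert all(a >= b for a, b in zip(increments, increments[1:]))

# Loss in layer, with LIL_0 = 0.
lil = [0] + increments

# CDF_i = 1 - LIL_{i+1} / (x_{i+1} - x_i); the last evaluation point gets CDF = 1.
cdf = [1 - lil[i + 1] / (xs[i + 1] - xs[i]) for i in range(len(xs) - 1)] + [1.0]

# PDF via successive differences of the CDF (with PDF_0 = CDF_0).
pdf = [cdf[0]] + [cdf[i] - cdf[i - 1] for i in range(1, len(cdf))]

print(lil)   # [0, 8000, 6000, 5000, 3000]
print(cdf)   # approximately [0.2, 0.4, 0.5, 0.7, 1.0]
print(pdf)   # approximately [0.2, 0.2, 0.1, 0.2, 0.3]
```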
Alice: "Woah this is a lot to take in. Let's look at a small example from the NCCI manual."
Panjer's Recursive Algorithm
Alice: "Panjer's Algorithm is also covered briefly in Clark's section on aggregate models. You should compare the material there against here to reinforce your learning. This section presents the NCCI's use of Panjer's Algorithm."
Aggregate Loss Distribution
Aggregate Excess Loss Factors
Informational Exhibit 2
This exhibit gives you seven tables of aggregate excess loss factors for the state of Alaska, one for each of the seven hazard groups. The tables assume there is no loss limit and all exposures are from a single hazard group.
Each table's rows are entry ratios running from 0.2 to 2 in increments of 0.2 and then from 2 to 10 in increments of 1. The columns are the expected number of claims ([math]E[N][/math]).
Alice: "This is a really weird exhibit to have in the study kit as none of the other materials in the Circular refer to these tables. Make sure you know the rest of the NCCI circular material really well and then if they test you on these particular tables you should be able to logic your way through it."
Informational Exhibit 3
Full BattleQuiz