Fisher.TableM
Reading: Fisher, G. et al, "Individual Risk Study Note," CAS Study Note, Version 3, October 2019. Chapter 3. Section 3
Synopsis: This article goes into the details needed to build a Table M using either the vertical or horizontal slicing methods.
Study Tips
This is a fairly quick reading which should be easy points if it comes up on the exam. It makes use of a key formula from the previous reading.
Estimated study time: 1 Hour (not including subsequent review time)
BattleTable
Based on past exams, the main things you need to know (in rough order of importance) are:
- How Table M is grouped
- The vertical slicing method
- The horizontal slicing method
reference | part (a) | part (b) | part (c) | part (d)
no prior questions
In Plain English!
Table M consists of insurance charges and insurance savings listed by entry ratio and grouped by size of policy (expected aggregate loss, E), policy limit, and potentially other distinguishing policy characteristics.
The separation into groups is necessary to estimate the insurance charges and savings accurately, because different-sized policies have different expected claim counts and hence different aggregate loss distributions. For example, it doesn't make sense to pool Workers' Compensation data for metal foundry workers and office workers, as the types of injury (severity) and the number of injuries (frequency) are likely to be materially different.
Grouping risks with similar expected claim frequency means the aggregate loss distributions within a group have similar frequency-driven variance, and such groupings are less distorted by inflation than groupings based on dollar size alone.
We can use Table M to calculate the expected loss covered by the insurer. Fisher gives the following example:
Question: A loss sensitive Workers' Compensation policy has expected aggregate loss E = $200,000 and the insured is responsible for an aggregate limit of $80,000. Assume φ(0.4) = 0.72. Calculate the expected loss covered by the insurer.
- Solution:
- The entry ratio for the policy is [math]r=\frac{80,000}{200,000}=0.4[/math]. The expected loss covered by the insurer is [math]\phi(r)\cdot E=0.72\cdot 200,000 = \$144,000[/math].
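Here is a minimal Python sketch of the same lookup-and-multiply calculation. The table of charges is hypothetical apart from φ(0.4) = 0.72, which comes from the example above.
<pre>
# Hypothetical Table M column for one group: insurance charge phi(r) by entry ratio.
# Only phi(0.4) = 0.72 comes from the example above; the other values are made up.
phi = {0.2: 0.85, 0.4: 0.72, 0.6: 0.55}

E = 200_000         # expected aggregate loss
agg_limit = 80_000  # insured's aggregate limit

r = agg_limit / E            # entry ratio = 0.4
insurer_loss = phi[r] * E    # expected loss covered by the insurer
print(insurer_loss)          # 144000.0
</pre>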
Building a Table M
A Table M is usually built using empirical data but may have curves fitted to smooth out the data and allow for easy interpolation.
- Gather annual aggregate loss data for many insureds and split into groups which are expected to have similar aggregate loss distributions.
- Ideally want insureds to have similar loss frequency and claim severity distributions but these can be hard to estimate.
- Settle for grouping by similar size and type of risk. For instance, group together Workers' Compensation risks for low hazard activities with expected aggregate unlimited loss between $100,000 and $200,000.
- For each group we need the actual aggregate losses or the actual aggregate loss ratios.
- Allows us to compare the individual risk experience to the group average.
- Losses should be developed to ultimate before comparing. This is tricky to do without reducing the variance between risks. Alice: "It's not part of the syllabus, but you need to know you must do it in practice."
- There are two methods of comparison (a numerical sketch of both follows this list):
- Vertical Slicing: click on the PDF for details: Vertical Slicing Method
- Horizontal Slicing: click on the PDF for details: Horizontal Slicing Method
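As a rough illustration of the mechanics (not the tabular layout used in the study note), here is a minimal Python sketch of both slicing methods applied to a handful of hypothetical entry ratios, normalized to average 1 as Table M assumes. Both methods should produce the same charge column.
<pre>
import numpy as np

# Hypothetical entry ratios r_i = actual aggregate loss / expected aggregate loss
# for the risks in one Table M group; they average to 1, as Table M assumes.
entry_ratios = np.array([0.2, 0.6, 1.0, 1.4, 1.8])
r_values = np.unique(entry_ratios)  # evaluate the charge at each observed ratio

# Vertical slicing: for each entry ratio r, average each risk's loss above r
# (in entry-ratio units): phi(r) = mean over risks of max(r_i - r, 0).
phi_v = {r: np.mean(np.maximum(entry_ratios - r, 0.0)) for r in r_values}

# Horizontal slicing: the charge at the largest entry ratio is 0; working
# downward, each slice adds (its width) x (share of risks above its bottom).
phi_h = {r_values[-1]: 0.0}
for lower, upper in zip(r_values[-2::-1], r_values[:0:-1]):
    phi_h[lower] = phi_h[upper] + (upper - lower) * np.mean(entry_ratios > lower)

for r in r_values:
    # Savings follow from the identity psi(r) = phi(r) + r - 1 (previous reading).
    print(f"r={r:.1f}  phi_vertical={phi_v[r]:.3f}  "
          f"phi_horizontal={phi_h[r]:.3f}  psi={phi_v[r] + r - 1:.3f}")
</pre>
The horizontal method is convenient when building the full table because each charge follows directly from the one above it.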
Using a Parameterized Aggregate Loss Distribution
Reinsurance companies tend to do this because they may not have a statistically credible set of similar policies to use the empirical process on. They may model claim frequency with a Poisson distribution and severity with a gamma distribution, or use a Tweedie distribution to approximate the aggregate loss (see Goldburd.Basics for more details on modelling). When the data is thin, make sure you test the sensitivity of the aggregate loss estimates (pure premiums) under a variety of assumptions.
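As a rough sketch of the parametric approach (all parameter values below are assumptions for illustration, not from the reading), one can simulate aggregate losses with Poisson frequency and gamma severity and read charges off the simulated entry ratios:
<pre>
import numpy as np

rng = np.random.default_rng(seed=1)

# Assumed parameters, for illustration only.
lam = 5.0                              # Poisson expected claim count
sev_shape, sev_scale = 2.0, 10_000.0   # gamma severity: mean = shape * scale
n_sims = 20_000

# Simulate aggregate losses: draw a claim count, then sum that many severities.
counts = rng.poisson(lam, size=n_sims)
agg = np.array([rng.gamma(sev_shape, sev_scale, size=k).sum() for k in counts])

E = lam * sev_shape * sev_scale   # expected aggregate loss = 100,000
entry_ratios = agg / E

def charge(r):
    """Insurance charge phi(r) = E[max(R - r, 0)] from the simulated ratios."""
    return np.mean(np.maximum(entry_ratios - r, 0.0))

for r in (0.5, 1.0, 1.5):
    print(f"phi({r}) ~= {charge(r):.4f}")
</pre>
Rerunning with different frequency and severity assumptions is an easy way to perform the sensitivity testing mentioned above.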
In practice we often combine the empirical approach with parametric models when there is insufficient data to form all of the desired groupings with enough credibility. Split the data into a small number of credible groups covering the range of the data, and fit a parametric distribution to each group's empirical loss distribution. Once you have the parametric distributions, interpolate between them to build aggregate loss distributions for the smaller, more homogeneous groups.
Throughout, it's important to work with each policy's aggregate loss as a ratio to its expected loss (the entry ratio), so you don't introduce extra variation from differences in expected loss. You should also consider the impact of loss development, trend, and the heterogeneous nature of the underlying exposures.
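A sketch of what that interpolation step might look like follows. Everything here is an assumption for illustration: the sample entry ratios, the choice of a gamma model, and linear interpolation of the fitted shape parameter in log expected loss.
<pre>
import numpy as np

# Hypothetical entry ratios for two credible groups at the ends of the size
# range, each normalized to average 1; smaller risks are more volatile.
group_ratios = {
    100_000: np.array([0.1, 0.5, 0.7, 1.2, 1.5, 2.0]),
    800_000: np.array([0.7, 0.8, 0.9, 1.1, 1.2, 1.3]),
}

def gamma_shape(ratios):
    """Method-of-moments gamma fit to mean-1 entry ratios: with mean k*theta = 1,
    the variance is k*theta^2 = 1/k, so shape k = 1 / variance."""
    return 1.0 / np.var(ratios)

shapes = {E: gamma_shape(r) for E, r in group_ratios.items()}

# Interpolate the shape parameter linearly in log(E) for an in-between group.
E_known = np.array(sorted(shapes))
k_mid = np.interp(np.log(300_000), np.log(E_known), [shapes[E] for E in E_known])

# Estimate charges for the interpolated group by simulating from the fitted
# gamma, keeping the mean at 1 via scale = 1/shape.
rng = np.random.default_rng(seed=2)
R = rng.gamma(k_mid, 1.0 / k_mid, size=100_000)
for r in (0.5, 1.0, 1.5):
    print(f"phi({r}) ~= {np.mean(np.maximum(R - r, 0.0)):.4f}")
</pre>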