Bahnemann.Chapter6

From BattleActs Wiki

Reading: Bahnemann, D., "Distributions for Actuaries", CAS Monograph #2, Chapter 6.

Synopsis: To follow...


Study Tips

...your insights... To follow...

Estimated study time: x mins, or y hrs, or n1-n2 days, or 1 week,... (not including subsequent review time)

BattleTable

Based on past exams, the main things you need to know (in rough order of importance) are:

  • fact A...
  • fact B...
reference part (a) part (b) part (c) part (d)
E (2018.Spring #1)

Full BattleQuiz


In Plain English!

Premium Concepts

Most of Section 6.1 of the text should be very familiar to anyone who has worked in insurance for a while. The key points are given here but you should skim-read the source just to be safe.

The premium charged for a policy is the expected loss (including expected Allocated Loss Adjustment Expense) plus a load for general expenses, underwriting profit, and a provision for risk.

Let N be the per policy claim count random variable, m be the number of exposures, and φ be the ground-up claim frequency per exposure. Let Y be the claim size including ALAE. Then [math]E[N]=m\phi[/math] is the expected claim count and the expected loss and ALAE for a policy is given by [math]E[N]\cdot E[Y][/math]. The per policy pure premium is [math]p=\phi\cdot E[Y][/math].

Usually, the expected claim count, [math]E[N][/math], depends on the exposure base associated with the coverage. A policy may have one exposure such as in the case of a 1-year Homeowners policy, or may have multiple exposures. An example of multiple exposures on a policy would be a 6-month auto policy which covers 3 vehicles. This has an exposure of [math]0.5\cdot 3=1.5[/math] vehicle years.

Claim frequency is the expected number of claims per unit of exposure, and claim severity is the average claim size given a claim has occurred.

The risk charge (also known as the provision for risk) is extra premium collected by the insurer to cover:

  1. Process risk - the random fluctuation of losses about the expected values.
  2. Parameter risk - the uncertainty surrounding the selection of model parameters.

The rate per unit of exposure is given by [math]R=\frac{p+f}{1-v}[/math], where p is the pure premium, f is the fixed expense dollars, and v is the variable expense percentage.

The policy premium is given by [math]P=mR[/math].

If all expenses are variable then f = 0 and the quantity [math]\psi=\frac{1}{1-v}[/math] is called a loss cost multiplier (LCM). The LCM is used to load all other costs on top of the pure premium to get the final rate.
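As a quick sketch of the rate and LCM formulas (all figures below are hypothetical, not from the text):

```python
# Hypothetical inputs: pure premium p, fixed expenses f per exposure,
# and variable expense ratio v.
p = 500.0   # pure premium per exposure
f = 50.0    # fixed expense dollars per exposure
v = 0.25    # variable expense percentage

R = (p + f) / (1 - v)   # rate per unit of exposure
m = 1.5                 # e.g. a 6-month auto policy covering 3 vehicles
P = m * R               # policy premium

# With no fixed expenses (f = 0) the loss cost multiplier is:
psi = 1.0 / (1 - v)
print(round(R, 2), round(P, 2), round(psi, 4))
```

Note the LCM is just the rate formula with f = 0: the rate is the pure premium grossed up by ψ.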

mini BattleQuiz 1

Increased Limit Factors (ILFs)

You can either use empirical loss data organized by the per policy limits to calculate increased limit factors for higher levels of coverage, or you can fit a distribution to the loss data. Fitting a distribution is useful for obtaining factors for higher limits where there may be a lack of credible data.

An increased limit factor is the ratio of the policy premium at limit L to the policy premium at the basic limit, b. Letting Y be the random variable for the ground-up loss distribution, then mathematically we have [math]I(L)=\frac{P_L}{P_b}=\frac{E[Y;L]}{E[Y;b]}[/math], under the following assumptions:

  • The loss cost multipliers are identical for each limit,
  • Frequency and severity are independent,
  • Frequency is the same across the layers (doesn't change by policy limit).
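A minimal sketch of computing ILFs from a fitted severity curve, assuming an exponential claim-size distribution (chosen only because its limited expected value has a closed form; the mean and limits are hypothetical):

```python
import math

def lev_exp(L, theta):
    """Limited expected value E[X;L] for an exponential severity with mean theta."""
    return theta * (1.0 - math.exp(-L / theta))

theta = 40_000.0    # assumed severity mean
b = 100_000.0       # basic limit

for L in (100_000.0, 250_000.0, 500_000.0):
    ilf = lev_exp(L, theta) / lev_exp(b, theta)
    print(int(L), round(ilf, 4))
```

The ILF at the basic limit is 1 by construction, and the factors increase with the limit but at a decreasing rate.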

Expense Loading and ILFs:

There are several ways that (allocated) loss adjustment expenses can be incorporated into increased limit factors.
  1. The policy limit applies to the total claim amount, i.e. indemnity loss plus allocated loss adjustment expense.
    • In this case, the ILF is the ratio of policy severities at limits L and b.
  2. The policy limit applies only to the indemnity portion of the claim (usual situation). Letting X be the indemnity component of the claim and ε be the average per claim allocated loss adjustment expense, set [math]E[Y;L]=E[X;L]+\epsilon[/math], then the ILF is defined as usual.
  3. Instead of 2. assume the allocated loss adjustment expenses have a relationship with claim size. Usually bigger claims have more loss adjustment expenses, so assume loss adjustment expenses are a fixed multiple u of the indemnity amount. Then [math]E[Y;L]=E[X;L]+u\cdot E[X;L][/math] and define the ILF as usual.
  4. Approaches 2 and 3 may be combined to get a general formula: [math]E[Y;L]=\left(E[X;L]+\epsilon\right)\cdot\left(1+u\right)[/math].
    • This approach is used by Insurance Services Office, Inc. (ISO) to load ε for ALAE and then apply u for ULAE (unallocated loss adjustment expenses) on top.
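The expense-loading approaches above can be compared side by side. The figures are hypothetical, with lev_L standing in for E[X;L] and eps for the per-claim ALAE add-on:

```python
# Hypothetical inputs.
lev_L = 36_717.0   # assumed limited expected indemnity loss E[X;L]
eps = 2_000.0      # assumed average per-claim ALAE
u = 0.05           # assumed multiplicative (U)LAE load

ey_add = lev_L + eps                 # approach 2: flat ALAE add-on
ey_mult = lev_L * (1 + u)            # approach 3: ALAE proportional to indemnity
ey_both = (lev_L + eps) * (1 + u)    # approach 4 (ISO): add eps, then load u
print(ey_add, ey_mult, ey_both)
```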

To see this in action, let's look at Example 6.3 from the text. Insert BahnemannEx6-3 PDF

Excess Layer Pricing

An excess layer of coverage is defined by a policy limit L and an attachment point, A. The pure premium for the layer [math](A, A+L][/math] is [math]p_{A,L}=\phi\cdot\left(1-F_{X_t}(A)\right)E_{A,L}[Y][/math], where [math]\phi[/math] is the ground-up claim frequency, [math]E_{A,L}[Y][/math] is the severity for the layer [math](A, A+L][/math] and X_t is the total ground-up claim amount.

If [math]X_t[/math] is subject to the layer limits (remember, it may not be due to how loss adjustment expenses are included), then we get

[math]\begin{align} p_{A,L} &=\phi\cdot(1-F_{X_t}(A))\frac{E[X_t;A+L]-E[X_t;A]}{1-F_{X_t}(A)} \\ &= \phi\left(E[X_t;A+L]-E[X_t;A]\right) \\ &= \phi\cdot E[X_t;b]\left(\frac{E[X_t;A+L]}{E[X_t;b]}-\frac{E[X_t;A]}{E[X_t;b]}\right)\\ &= p_b\cdot(I(A+L)-I(A)) \end{align}[/math]

Here pb is the pure premium for the basic limit, b.

Since the excess layer premium is [math]P_{A,L}=m(\psi p_{A,L})=m(\psi p_b)(I(A+L)-I(A))[/math], we get [math]P_{A,L}=P_b\cdot(I(A+L)-I(A))[/math], where Pb is the basic limit premium. This is known as the layer formula.

That is, for this particular case, the excess layer premium is the base premium multiplied by the difference between the two ground-up increased limit factors for the attachment point and attachment point plus layer.
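A minimal numeric sketch of the layer formula, with assumed ILFs and an assumed basic-limit premium:

```python
# Hypothetical basic-limit premium and ground-up ILFs.
P_b = 1_000.0
ilf = {500_000: 1.80, 1_000_000: 2.25}

A, L = 500_000, 500_000                    # attachment point and layer limit
P_layer = P_b * (ilf[A + L] - ilf[A])      # premium for the layer (A, A+L]
print(round(P_layer, 2))
```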

The simplicity of the layer formula makes it tempting to use when pricing excess layers, even when it is not appropriate. The direct excess layer premium calculation is more accurate than the layer formula built from basic-limit ILFs. An example is when ALAE is added on after the layer limits L and A are applied: the excess layer approach accounts for the ALAE, but the layer formula doesn't.

Excess Layer Premium:

[math]\begin{align} P_{A,L}&=\psi E[N](1-F_X(A))\left(\frac{E[X;A+L]-E[X;A]}{1-F_X(A)}+\epsilon\right) \\ &= \psi E[N](E[X;A+L]-E[X;A]+(1-F_X(A))\epsilon) \end{align}[/math]

Layer Formula Premium:

[math]\begin{align} P&=\psi E[N](E[X;b]+\epsilon)\left(\frac{E[X;A+L]+\epsilon}{E[X;b]+\epsilon}-\frac{E[X;A]+\epsilon}{E[X;b]+\epsilon}\right)\\ &=\psi E[N](E[X;A+L]-E[X;A]) \end{align}[/math]

This is acceptable when pricing an excess policy written over a primary policy, because the layer formula implicitly assumes the primary policy pays all of the ALAE.

If, instead of [math]\epsilon[/math] of ALAE being added on, we load ALAE via a factor [math]1+u[/math], then the layer formula and the excess layer approach are identical.

Excess Layer Premium:

[math]P_{A,L}=\psi E[N](E[X;A+L]-E[X;A])(1+u)[/math]

Layer Formula Premium:

[math]\begin{align} P&=\psi E[N]E[X;b](1+u) \left(\frac{E[X;A+L]}{E[X;b]}-\frac{E[X;A]}{E[X;b]}\right)\\ &=\psi E[N]\left(E[X;A+L]-E[X;A]\right)(1+u) \end{align}[/math]
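These results can be checked numerically. Under an assumed exponential severity and hypothetical inputs, the flat ε add-on makes the direct excess-layer premium exceed the layer formula, while the multiplicative 1+u load makes the two identical:

```python
import math

def lev(L, theta):          # E[X;L] for an exponential severity with mean theta
    return theta * (1.0 - math.exp(-L / theta))

def F(x, theta):            # exponential CDF
    return 1.0 - math.exp(-x / theta)

# All inputs below are hypothetical.
theta, psi, EN, eps, u = 40_000.0, 1.3333, 10.0, 2_000.0, 0.05
A, L = 100_000.0, 100_000.0

# ALAE added as a flat amount eps per claim:
excess_eps = psi * EN * (lev(A + L, theta) - lev(A, theta)
                         + (1.0 - F(A, theta)) * eps)
layer_eps = psi * EN * (lev(A + L, theta) - lev(A, theta))

# ALAE loaded multiplicatively by (1 + u): the two approaches coincide.
excess_u = psi * EN * (lev(A + L, theta) - lev(A, theta)) * (1 + u)
layer_u = psi * EN * (lev(A + L, theta) - lev(A, theta)) * (1 + u)

print(excess_eps > layer_eps, excess_u == layer_u)   # → True True
```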

Consistency

Premium amounts should decrease for successively higher layers of constant width as the attachment point gets larger. A set of increased limit factors for which [math]I^{\prime\prime}(x)\lt 0[/math] for all limits x is said to be consistent.

Consistency means [math]I^\prime(x)[/math] is a decreasing function of x. Consistency can be violated if the underlying assumptions of independent frequency and severity are violated, or if the assumption that frequency is the same for all layers is violated.
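A sketch of a consistency check on a table of ILFs (the values are assumed): the marginal ILF per dollar of limit should be non-increasing as limits grow.

```python
# Hypothetical ILF table: policy limit -> ILF.
ilfs = [(100_000, 1.00), (250_000, 1.40), (500_000, 1.80), (1_000_000, 2.40)]

# Slope of I over each interval between consecutive limits.
marginals = [(i2 - i1) / (l2 - l1)
             for (l1, i1), (l2, i2) in zip(ilfs, ilfs[1:])]

consistent = all(a >= b for a, b in zip(marginals, marginals[1:]))
print(consistent)   # → True
```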

To see how to check for consistency in practice, check out the following example: Insert Bahnemann.Consistency PDF

Risk Load

Increased limit factors are generally thought to be insufficient for pricing insurance policies with high limits or high attachment points because the insurer likely has sparse data. A solution is to load the increased limit factor with a charge for insurer risk (process risk) as the loss behaviour for higher limits or excess policies is more volatile/harder to predict than lower limits or primary policies. Actuaries understand process risk as a function of the variance of the stochastic claim process.

The risk load, [math]\rho(L)[/math], is usually an increasing function of the policy limit L and is added to the expected total policy severity. The increased limit factors are then formed in the usual way:

[math]I(L)=\frac{(E[X;L]+\epsilon)(1+u)+\rho(L)}{(E[X;b]+\epsilon)(1+u)+\rho(b)}[/math].

Miccolis Variance Approach:

Add process risk to the policy level expected aggregate loss via a constant multiple of the variance of the indemnity loss variable S. That is, [math]k\cdot Var(S)[/math]. This gives the following risk loaded loss cost formula:

[math]\begin{align} E[N]E[X;L]+kVar(S) &=E[N]\left(E[X;L]+k\frac{Var(S)}{E[N]}\right)\\ &=E[N]\left(E[X;L]+\rho(L)\right) \end{align}[/math]

Here, k is an arbitrary constant which is judgmentally selected to coincide with the estimated level of risk.

Note we can write [math]Var(S)=E[N]Var(X;L)+Var(N)(E[X;L])^2[/math]. Using this we get:

[math]\begin{align} \rho(L)&= k\frac{Var(S)}{E[N]}\\ &=k\left(\frac{E[N]Var(X;L)+Var(N)(E[X;L])^2}{E[N]}\right)\\ &=k\left(E[X^2;L]-(E[X;L])^2+\frac{Var(N)(E[X;L])^2}{E[N]}\right)\\ &=k\left(E[X^2;L]+\left(\frac{Var(N)}{E[N]}-1\right)(E[X;L])^2\right)\\ &=k\left(E[X^2;L]+\delta(E[X;L])^2\right) \end{align}[/math]

Observe [math]\frac{Var(N)}{E[N]}-1=\delta=0[/math] if N has a Poisson distribution. This is an important fact to remember!

When [math]\delta=0[/math] the risk load [math]\rho(L)[/math] is independent of the claim count random variable, and only depends on the claim-size variable.
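A sketch of the variance risk load under a Poisson claim count (so δ = 0), again assuming an exponential severity for its closed-form limited moments; k and the other inputs are hypothetical:

```python
import math

def lev(L, theta):      # E[X;L]
    return theta * (1.0 - math.exp(-L / theta))

def lev2(L, theta):     # E[X^2;L] for an exponential, via 2 * int_0^L x*S(x) dx
    return 2.0 * theta ** 2 - 2.0 * math.exp(-L / theta) * (theta * L + theta ** 2)

# Hypothetical inputs: severity mean, policy limit, judgmental constant k.
theta, L, k = 40_000.0, 250_000.0, 1e-5
delta = 0.0             # Var(N)/E[N] - 1 = 0 when N is Poisson

rho = k * (lev2(L, theta) + delta * lev(L, theta) ** 2)
print(round(rho, 2))
```

With δ = 0 the load reduces to k·E[X²;L], so only the claim-size distribution matters.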

Later, ISO used the standard deviation of the policy aggregate indemnity loss distribution, setting

[math]\begin{align} \rho(L)&=k\frac{\sqrt{Var(S)}}{E[N]}\\ &= \frac{k}{\sqrt{E[N]}}\sqrt{E[X^2;L]+\delta(E[X;L])^2} \end{align}[/math]

Here [math]\delta[/math] is still defined as [math]\delta=\frac{Var(N)}{E[N]}-1[/math].

Neither the variance nor the standard deviation methods of risk loading work with the layer formula. In other words, the risk load for the excess layer [math](A,A+L][/math] is not equal to [math]\rho(A+L)-\rho(A)[/math]. The layer formula approach overestimates the risk load. A better approach is to determine the layer premium via the layer formula without risk loading and then add on a risk load for the layer. This method is used by reinsurers providing excess of loss coverage.

Aggregate Limits

The per-claim limit, L, is the maximum payout for a single claim. The aggregate limit, M, is the maximum amount paid out during a policy term across all claims combined. Clearly, [math]L\leq M[/math].

Let [math]S_L[/math] be the aggregate loss random variable based on claim count variable N and claim size variable X limited at L. With no aggregate limit, the expected aggregate loss is [math]E[S_L]=E[N]E[X;L][/math]. The policy expected loss under both the per-claim limit, L, and the aggregate limit, M, is [math]E[S_L;M][/math].

It is often easier to calculate both the unlimited aggregate mean and the expected aggregate loss eliminated by the limit M, [math]E[S_L]-E[S_L;M][/math]. By subtracting the second expression from the first, we get [math]E[S_L;M][/math].
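A Monte Carlo sketch of both quantities, assuming Poisson frequency and exponential severity (all parameters hypothetical):

```python
import math
import random

random.seed(42)
lam, theta = 3.0, 40_000.0       # assumed Poisson mean and severity mean
L, M = 100_000.0, 200_000.0      # per-claim and aggregate limits

def sim_poisson(lam):
    """Knuth's method; fine for small lam."""
    n, p, target = 0, 1.0, math.exp(-lam)
    while True:
        p *= random.random()
        if p <= target:
            return n
        n += 1

trials = 50_000
tot = tot_capped = 0.0
for _ in range(trials):
    n = sim_poisson(lam)
    s_l = sum(min(random.expovariate(1.0 / theta), L) for _ in range(n))
    tot += s_l                      # contributes to E[S_L]
    tot_capped += min(s_l, M)       # contributes to E[S_L; M]

E_SL, E_SLM = tot / trials, tot_capped / trials
print(round(E_SL), round(E_SLM))
```

The simulated E[S_L] should land near the analytic value E[N]·E[X;L], and E[S_L;M] is necessarily smaller.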

To close out the section on aggregate limits we consider the case where N is the claim count variable, X is the unlimited indemnity-only claim size variable, and the allocated loss adjustment expense is loaded multiplicatively by [math]1+u[/math]. Then the increased limit factor from the basic limit, b with no aggregate limit, to a per-claim limit, L, with an aggregate limit, M, is given by [math]I(L,M)=\frac{\psi E[S_L;M](1+u)}{\psi E[N]E[X;b](1+u)}=\frac{E[S_L;M]}{E[N]E[X;b]}[/math].

Deductibles

The aim of a deductible is to eliminate small "nuisance" claims (which may cost the insurer more in expense than the loss itself), and to encourage policyholders to prevent or limit claims. A deductible lowers premiums because less coverage is provided. The policy limit is applied first and then the deductible.

Straight Deductibles

A straight deductible, d, eliminates claims for which [math]X\lt d[/math] and for claims satisfying [math]X\geq d[/math], reduces them by d.

When X is the ground-up, unmodified claim-size random variable (excluding ALAE), a straight deductible truncates the distribution from below and shifts it by d, giving the new random variable [math]X_d=X-d \mbox{ for }d\leq X\lt \infty[/math]. Such claims are viewed as excess of an underlying limit, which we covered in Bahnemann.Chapter5.

For all deductibles, assume ALAE is not included in the deductible or policy limit. We modify the basic-limit pure premium formula with a deductible credit factor, C(d), where [math]0\lt d\lt b[/math], as follows:

[math]\begin{align} p_b=\phi\left(E[X;b]+\epsilon\right)(1+u) & \mbox{ before applying deductible}\\ p_{d,b}=p_b\cdot (1-C(d)) & \mbox{ after applying deductible} \end{align}[/math]

In the text, Bahnemann derives the expression for C(d) from first principles; here we cut to the chase.

[math]C(d)=\frac{E[X;d]+F_X(d)\epsilon}{E[X;b]+\epsilon}[/math]. This is known as the loss elimination ratio.

The premium charged for a higher limit, L, is given by [math]P_L=P_b\cdot I(L)[/math]. Applying the deductible credit factor gives [math]P_{d,L}=P_L-P_b\cdot C(d) = P_b\cdot\left(I(L)-C(d)\right)[/math].

If we're given the ground-up frequency, then the deductible-adjusted frequency is [math]\phi(1-F_X(d))[/math].

Similarly, the modified severity is given by [math]\frac{E[X;b]-E[X;d]}{1-F_X(d)}(1+u)[/math].
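A sketch pulling the straight-deductible formulas together, again under an assumed exponential severity with hypothetical inputs:

```python
import math

def lev(x, theta):     # E[X;x]
    return theta * (1.0 - math.exp(-x / theta))

def F(x, theta):       # exponential CDF
    return 1.0 - math.exp(-x / theta)

# Hypothetical inputs throughout.
theta, b, d = 40_000.0, 100_000.0, 1_000.0
eps, u, phi = 2_000.0, 0.05, 0.08

C = (lev(d, theta) + F(d, theta) * eps) / (lev(b, theta) + eps)   # credit factor
freq_mod = phi * (1.0 - F(d, theta))                              # modified frequency
sev_mod = (lev(b, theta) - lev(d, theta)) / (1.0 - F(d, theta)) * (1 + u)

p_b = phi * (lev(b, theta) + eps) * (1 + u)    # basic-limit pure premium
p_db = p_b * (1.0 - C)                         # after the deductible credit
print(round(C, 4), round(freq_mod, 4), round(sev_mod, 2), round(p_db, 2))
```

As a check, p_b·(1 − C(d)) equals the pure premium built directly from the layer (d, b] plus the surviving ALAE.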

Now try the following example which is based on the source text. Insert Bahnemann.StrDed PDF

Franchise Deductibles

A franchise deductible eliminates all claims less than or equal to d and pays those over d in full; d is the deductible or franchise amount. This results in truncated but not shifted data.

The basic limit pure premium is given by [math]p_{b,d}=\phi\left(E[X;b]-E[X;d]+\left(1-F_X(d)\right)\left(d+\epsilon\right)\right)(1+u)[/math] while the deductible credit factor is [math]C(d)=\frac{E[X;d]-d\left(1-F_X(d)\right)+F_X(d)\epsilon}{E[X;b]+\epsilon}[/math].

Note that the deductible adjusted frequency is not impacted by changing from a straight deductible to a franchise deductible. However, the modified severity formula does change; it becomes [math]\left(\frac{E[X;b]-E[X;d]}{1-F_X(d)}+d\right)(1+u)[/math].

A straight deductible results in a lower premium than a franchise deductible because the straight deductible eliminates a larger part of the pure premium. Insert Bahnemann.FranchDed PDF
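The comparison can be verified numerically under the same assumed exponential severity (all inputs hypothetical): the straight deductible always produces the larger credit, hence the lower premium.

```python
import math

def lev(x, theta):     # E[X;x]
    return theta * (1.0 - math.exp(-x / theta))

def F(x, theta):       # exponential CDF
    return 1.0 - math.exp(-x / theta)

# Hypothetical inputs.
theta, b, d, eps = 40_000.0, 100_000.0, 1_000.0, 2_000.0

C_straight = (lev(d, theta) + F(d, theta) * eps) / (lev(b, theta) + eps)
C_franchise = (lev(d, theta) - d * (1.0 - F(d, theta)) + F(d, theta) * eps) \
              / (lev(b, theta) + eps)

# A larger credit means a larger premium reduction, i.e. a lower premium.
print(C_straight > C_franchise)   # → True
```

The gap between the two credits is exactly d·(1 − F(d))/(E[X;b] + ε), the per-claim amount the franchise deductible pays back.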

Diminishing Deductibles

A diminishing deductible has features of both straight and franchise deductibles. It eliminates all claims less than d and pays all claims over D ([math]D\gt d[/math]) in full; claims between d and D are paid net of a deductible which decreases linearly from d at [math]X=d[/math] to 0 at [math]X=D[/math]. Mathematically, [math]Ded(x)=\left\{\begin{align}&\frac{d}{D-d}(D-x) &&\mbox{ if }d\leq x\leq D\\ &0 && \mbox{ if } D\lt x\lt \infty\end{align}\right.[/math].

We can describe a diminishing deductible via the random variable [math]X_{d,D}=\left\{\begin{align}&\frac{D}{D-d}(X-d) && \mbox{ if } d\lt X\leq D\\ &X &&\mbox{ if } D\lt X\lt \infty\end{align}\right.[/math].

The cumulative distribution function is [math]F_{X_{d,D}}(x)=\left\{\begin{align} &0 && \mbox{ if }x\leq 0 \\ &\frac{F_X\left(\frac{D-d}{D}x+d\right)-F_X(d)}{1-F_X(d)} &&\mbox{ if } 0\leq x \leq D \\ &\frac{F_X(x)-F_X(d)}{1-F_X(d)} &&\mbox{ if }D\lt x\lt \infty\end{align}\right.[/math].

Note the credit for the diminishing deductible is between the franchise deductible and the regular deductible. This is because it pays some but not all losses at full value (i.e. 0 deductible).
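A sketch of the payout under a diminishing deductible, with hypothetical d and D; note the payment equals the loss minus the deductible at every point:

```python
# Hypothetical deductible d and disappearing point D.
d, D = 1_000.0, 5_000.0

def ded(x):
    """Deductible applied to a ground-up loss x."""
    if x < d:
        return x                        # claim eliminated entirely
    if x <= D:
        return d * (D - x) / (D - d)    # shrinks linearly from d to 0
    return 0.0                          # no deductible above D

def payment(x):
    """Amount paid: 0 below d, D*(x-d)/(D-d) on (d, D], full value above D."""
    return x - ded(x)

for x in (500.0, 3_000.0, 5_000.0, 8_000.0):
    print(x, payment(x))
```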

Deductibles & Inflation

If we apply a uniform inflationary trend, [math]\tau=1+r[/math] to the pure premium for a unlimited policy with straight deductible, d, the effective trend factor is [math]\tilde{\tau}=1+\tilde{r}=(1+r)\frac{E[X]-E[X;\frac{d}{\tau}]+(1-F_X\left(\frac{d}{\tau}\right))\epsilon}{E[X]-E[X;d]+(1-F_X(d))\epsilon}[/math].

Since [math]E[X;x][/math] and [math]F_X(x)[/math] are non-decreasing functions of x we have [math]0\lt r\leq \tilde{r}[/math] for a positive trend, or [math]\tilde{r}\leq r\lt 0[/math] in the case of a negative trend. The straight deductible has intensified the impact of the uniform inflation.
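A sketch of the leveraging effect for the unlimited policy, under an assumed exponential severity with hypothetical inputs:

```python
import math

def lev(x, theta):     # E[X;x]
    return theta * (1.0 - math.exp(-x / theta))

def F(x, theta):       # exponential CDF
    return 1.0 - math.exp(-x / theta)

# Hypothetical inputs: severity mean, deductible, per-claim ALAE, ground-up trend.
theta, d, eps, r = 40_000.0, 1_000.0, 2_000.0, 0.05
tau = 1.0 + r
EX = theta                   # E[X] for an exponential with mean theta

num = EX - lev(d / tau, theta) + (1.0 - F(d / tau, theta)) * eps
den = EX - lev(d, theta) + (1.0 - F(d, theta)) * eps
r_eff = (1.0 + r) * num / den - 1.0

print(round(r_eff, 5), r_eff >= r)   # the deductible leverages the trend upward
```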

Now suppose the policy has a claim limit, b, as well as the straight deductible, d. Then we have [math]\tilde{r}=(1+r)\frac{E[X;\frac{b}{\tau}]-E[X;\frac{d}{\tau}]+(1-F_X(\frac{d}{\tau}))\epsilon}{E[X;b]-E[X;d]+(1-F_X(d))\epsilon}[/math].

The policy limit, b, means the previous inequalities do not always hold. So sometimes the trend is made larger by a straight deductible and a policy limit, and other times it's made smaller.
