Instructor's Notes


These are based on Nicos Georgiou's notes, which are available on Canvas or on his website. I have my own notes too, which are not really meant for studying from, but they contain things like "midterm questions" and so on that you might find useful.

Week 1

Initial announcements:

  • homework, midterms and final; grades and curving; office hours. Use the common office hours, 5-6pm on Mondays. The first HW will be posted on Canvas at the end of the week.
  • announcements will be through Canvas.
  • the textbook I'm using is the second edition. Some HW problems will be from the textbook, but I'll write each problem out in full, so you ought to be able to manage with the first edition.
  • I've posted the previous instructor's notes.

Preview: Good to start with the risk redistribution example: why insurance companies are needed. Take Alice and Bob, who want to invest in two different companies. They each invest $100 and expect to receive random independent amounts $X_1$ and $X_2$ from the two investments. Suppose they each expect to get back $110$ dollars, and that the expected variability of their incomes is the same: $\Var(X_1) = \Var(X_2) = 30$. Instead they say: let's pool our resources and put $50 each into investments one and two. They agree that their future incomes will each be $(X_1 + X_2)/2$.

Let's compute the variability of their income:

$$\Var\Big(\frac{X_1 + X_2}{2}\Big) = \frac{\Var(X_1) + \Var(X_2)}{4} = \frac{60}{4} = 15.$$

Compare this with the variance of $30$ if they had just invested $100$ on their own. Thus, pooling their resources reduces their individual risk. The risk becomes shared equally, although the overall variability is unchanged. This is the general principle of insurance: individual exposure to risk is reduced.
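The pooling computation above can be checked in numbers; a minimal sketch using the example's figures:

```python
# The pooling computation above, in numbers: Var(X1) = Var(X2) = 30,
# independent, and each partner's pooled income is (X1 + X2) / 2.
var_x1 = var_x2 = 30.0
var_pooled = (var_x1 + var_x2) / 4   # Var((X1+X2)/2) = (Var X1 + Var X2) / 4
print(var_pooled)  # 15.0, half the stand-alone variance of 30
```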

Basic probability

  • Sample space, roll of dice example. Properties of probability measure.

  • Independence, conditioning. Extra information. Can it only increase the probability of an event happening? No, the extra information might mean that the probability of the event decreases.

  • Random variables, cdf, independent random variables

  • Expectations, variance, covariance. Independence and covariance.

Record values example: $X_i$ is an iid sequence (with continuous distribution, so ties have probability zero). Let $N = \min\{ n \colon X_n > X_1 \}$. It's the first time that $X_n$ is larger than $X_1$. Then $N > n$ if $X_2, \ldots, X_n$ are all smaller than $X_1$, so $\Prob(N > n)$ is the same as the probability that the first one is maximal out of $n$. What's the probability that the first one is the maximum? By symmetry it must be $1/n$. So

$$\Prob(N > n) = \frac{1}{n}.$$

Then, we can sum over $n$ and use the divergence of the harmonic series to show that

$$\E[N] = \sum_{n \geq 0} \Prob(N > n) \geq \sum_{n \geq 1} \frac{1}{n} = \infty.$$
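A Monte Carlo sketch of the symmetry argument (my own check, not part of the notes): the chance that $X_1$ is the largest of $n$ iid continuous variables should be $1/n$.

```python
import random

# Monte Carlo sketch of P(N > n) = 1/n: it is the chance that X_1 is the
# largest of n iid continuous random variables.
random.seed(0)
n, trials = 5, 200_000
hits = 0
for _ in range(trials):
    xs = [random.random() for _ in range(n)]
    hits += xs[0] == max(xs)   # X_1 is the running maximum, i.e. N > n
est = hits / trials
print(round(est, 3))  # close to 1/5 = 0.2
```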

  • Basic distributions: geometric, uniform, exponential distribution. Do simple discrete and continuous examples?

Week 2

Theory of interest

Most of this stuff is from Nicos’ notes.

  • Compound interest, rate of change. Rate of relative change. If you have more money it’s going to change faster. Rate of change can depend on time.
  • Nominal rate of interest and effective rate.
  • Conditional probability, total probability, Bayes' theorem.

Week 3

Value at Risk

Basic example: Take two variables $X$ and $Y$. Suppose they represent the returns of two different investments. We want to determine which one takes "bad values" (maybe negative). So fix a probability level $\gamma = 5\%$, and ask: how bad or negative is $X$ 5% of the time? It's a way of measuring worst-case scenarios.

At $\gamma = 0.05$, we have, of course, the quantile $q_{\gamma}$ defined by

$$\Prob(X \leq q_{\gamma}) = \gamma = 0.05.$$

The same is true for $Y$. What if you try $\gamma = 0.08$?

Assume there are 10 assets with random returns $X_i$, $i = 1,\ldots,10$. Is the VaR criterion monotone? Strictly monotone? Can you use this example to determine this? A: No, since there is no joint information about $X$ and $Y$.

Example: Normal distributions

Let $X \sim N(\mu,\sigma^2)$.

  • What is $\Phi(x)$ for a standard normal?
  • What is the quantile equation?

    For $\gamma = 0.05$, we get $q^*_{\gamma} = -1.64$ for the standard normal. So for $X \sim N(\mu, \sigma^2)$ we get $q^*_{\gamma} = \mu - 1.64\,\sigma$.
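The quantile quoted above can be checked with the standard library; a sketch (the numbers $\mu = 110$, $\sigma^2 = 30$ are illustrative only, borrowed from the Week 1 example):

```python
from statistics import NormalDist

# Sketch: the 5% quantile quoted above, via the stdlib NormalDist.
q = NormalDist().inv_cdf(0.05)
print(round(q, 2))  # -1.64

# For X ~ N(mu, sigma^2) the gamma-quantile is mu + sigma * q
# (mu = 110, sigma^2 = 30 are illustrative numbers only).
mu, sigma = 110.0, 30.0 ** 0.5
q_X = mu + sigma * q
```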

Example 4 from page 74 2nd edition, talks about the value at risk for $n$ returns.

The parameter $k$ says that you weight the variance more than the mean. If it’s pretty large, then the variance matters more.

Remark: I stopped here after Monday's class.

Nicos suggests more strategies:

  • Pick an asset at random, and put all your money there
  • Pick an asset at $i$ with probability $p_i$ and put all your money there.

These are examples to work out. Perhaps it will be on the final.

Mean-variance criterion

A popular route in finance is to fix some number $\tau$ and define the criterion to be

$$V(X) = \E[X] - \tau \Var(X).$$

If $X \geq Y$, do we have $V(X) \geq V(Y)$? For the 2nd edition, this is Example 1. Let $X = 0$ identically, fix a number $a \geq 1$, and let

$$Y = \begin{cases} a & \text{with probability } 1/a, \\ 0 & \text{otherwise.} \end{cases}$$

We have $\E[Y] = 1$ and $\Var(Y) = \E[Y^2] - \E[Y]^2 = a - 1$. Then

$$V(Y) = 1 - \tau(a - 1).$$

Since $V(X) = 0$ and $Y \geq X$ always, choosing $a$ and $\tau$ with $\tau(a-1) > 1$ gives $V(Y) < V(X)$. So the criterion is not monotone, which is a rather ridiculous feature.
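A quick numeric check of this failure of monotonicity (the particular values $\tau = 1$, $a = 3$ are my own illustrative choices):

```python
# Numeric check of the example: X = 0 identically, Y = a w.p. 1/a (else 0),
# so E[Y] = 1, Var(Y) = a - 1, and V(Z) = E[Z] - tau * Var(Z).
tau, a = 1.0, 3.0              # any choice with tau * (a - 1) > 1 works
V_X = 0.0                      # X = 0: zero mean, zero variance
V_Y = 1.0 - tau * (a - 1.0)
print(V_Y)  # -1.0: smaller than V_X even though Y >= X always
```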

Limit theorems

Law of large numbers and central limit theorem are covered well in Nicos’ notes. Do weak and strong law of large numbers.

remark: Perhaps it's not so important to emphasize almost sure convergence and the strong law. It's sort of pointless for this course.

Model of insurance with many clients. The insurance company's payments to clients are modeled by iid random variables $X_i$ with mean $\mu$; write $S_n = X_1 + \cdots + X_n$. The premium charged per client is $c = \mu + \e$. The profit is of course $nc - S_n$.

  • If $\e > 0$, show that as $n \to \infty$, you won’t incur a loss.

Week 4

I gave them HW on double integrals, and then some HW on interest rates.

  • If $\e = 0$, show that there is a reasonable chance of loss.
  • Suppose the company decides that it wants a reasonable chance of making a profit; determine $c$. The reasonable chance should be something like

    $$\Prob(nc - S_n \geq 0) \geq 0.95.$$

    Turns out (by the central limit theorem) that you will get

    $$c \approx \mu + \frac{\sigma}{\sqrt{n}}\, q_{0.95},$$

    where $q_{0.95}$ is the $0.95$ quantile of the standard normal. Does this make sense? How does it depend on $\sigma$ and $\sqrt{n}$?

St. Petersburg paradox. This is related to the martingale (doubling) strategy. Flip a coin repeatedly; the game ends when the first tails shows up. You win $X = 2^n$ if the first tails shows up on the $n^{\text{th}}$ flip. What should you charge for a ticket to play?

Find the expectation $\E[X]$. What is your chance of making a profit?

If we play the same game multiple times, then the average payoff $\frac{1}{n}\sum X_i$ goes to infinity. So the law of large numbers is telling us that whatever the price of the ticket, one should play the game. But this is ridiculous. It only makes sense if one gets to play several times!

Perhaps a better game would be to ask for a fixed ticket price each time you play. Suppose $c$ is the charge to play a game. This would give a potential profit after $n$ games of $S_n - nc$. Again the expectation runs off to infinity.

remark It’s worth asking them if using the LLN makes sense here.

The game starts to make sense if you make the ticket price depend on the number of games you want to play. Interesting! That is you should ramp up the cost of the game as you go, because now you have a very good chance of getting a good game.

Turns out (see Feller, An Introduction to Probability Theory and Its Applications) that

$$\frac{S_n}{n \log_2 n} \to 1 \quad \text{in probability.}$$

So if you made the entry price about $\log_2 n$ per play, i.e. $n \log_2 n$ for $n$ plays, then you would be playing a fair game.
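A simulation sketch of Feller's result (my own illustration; convergence is slow and only in probability, so the average per game merely has the same order as $\log_2 n$):

```python
import math, random

# Simulation sketch of Feller's result S_n / (n log2 n) -> 1 in probability.
random.seed(1)

def one_game():
    # payoff 2^n if the first tails appears on flip n
    n = 1
    while random.random() < 0.5:   # heads, keep flipping
        n += 1
    return 2 ** n

n_games = 100_000
avg = sum(one_game() for _ in range(n_games)) / n_games
# the average payoff per game should be of the same order as log2(n_games) ~ 16.6
print(round(avg, 1), round(math.log2(n_games), 1))
```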

This was a game considered by D. Bernoulli. There were two Bernoulli brothers, Johann and Jacob. Very talented mathematicians, but very competitive. Jacob died young. Johann became very jealous of his son, Daniel. Btw, Euler was Johann’s student.

There was the old problem of division by zero. One of the early works on the number zero was due to an Indian astronomer called Brahmagupta, in the 7th century CE. This was brought to the Islamic world (Baghdad) under the Caliph Al-Mansur in the 8th century, during the golden age of Islam. This was when zero was introduced into Arabic numerals, which is what we still call them today.

Since the 7th century, the problem of defining $0/0$ was open, until Calculus was invented in the 17th century. L'Hospital's rule was discovered by Johann Bernoulli, but L'Hospital sort of stole it from him. This is rather ironic, since Johann tried to steal from his own son, Daniel.

Expected Utility Maximization Criterion (EUM)

In fact, D. Bernoulli proceeded from the simple observation that the degree of satisfaction of having capital or the “utility of capital” depends on the amount of capital you have.

This is the EUM criterion.

This means that if we have two rvs, $X$ and $Y$, then we will prefer $X$ over $Y$, and say $X \succsim Y$ , if $\E[u(X)] \geq \E[u(Y)]$. This is called a preference order. If the expected utilities are the same, we say that $X \backsimeq Y$, or $X$ is equivalent to $Y$.

The investor who follows the EUM criterion is called an expected utility maximizer. This does not mean that the investor solves equations and maximizes utility; it’s the model we have for an abstract investor.

The utility $u(x)$ must be increasing: the more money we make, the happier we will be.

Property 1 says that we may replace $u$ by a positive linear transformation $u \mapsto a u + b$ with $a > 0$.

  • Convex functions. Concave functions. Convexity for differentiable functions. How to remember $x^2$ is convex. conCAVE. And conVex.

  • Utility function represents the amount of abstract satisfaction we attain by possessing an amount $x$ of money.

  • Motivate logarithmic utility. If the capital $x$ is increased by $dx$, then the increase in satisfaction $du(x)$ is proportional to $dx/x$; integrating gives $u(x) = \log x$, up to a linear transformation. Why? You sort of say that the more money you have, the less satisfaction you get from the same change in your money. It's a good model, but there are many different models of utility.

  • Do example with utility for the St. Petersburg paradox game:

    $$\E[u(X)] = \sum_{n \geq 1} \log(2^n)\, 2^{-n} = \log 2 \sum_{n \geq 1} n\, 2^{-n} = 2 \log 2.$$

    This shows that $\E[u(X)]$ is finite. Nicos claims that if the ticket price is much larger than $2 \log 2$, the expected utility, then the player will not play the game. This means that the utility should be in units of money.

    The fact that $u(x)$ is concave means that the player is risk averse. We will see what this means in the future.
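The sum $\E[u(X)] = 2\log 2$ above is easy to verify numerically; a minimal sketch:

```python
import math

# Checking the sum: E[u(X)] = sum_n log(2^n) 2^{-n} = 2 log 2 for log utility.
total = sum(n * math.log(2) * 2.0 ** (-n) for n in range(1, 200))
print(round(total, 6), round(2 * math.log(2), 6))  # both 1.386294
```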

Cautious Gambler: he likes big returns, so he can be somewhat risk loving. But when he starts making losses, he becomes risk averse. This is in the textbook.

Try it for Bernoulli random variable $X = \pm a$ with parameter $p$ (there is a typo in Nicos’ notes). Compare utility with $Y=0$ identically. What does $Y=0$ correspond to?

remark Will give the reckless gambler as homework. Maybe further analysis of the cautious gambler.

Certainty Equivalent

If we find a constant $c$ such that $c \backsimeq X$, then $c$ is called a certainty equivalent. Essentially, you want to find a constant $c$ such that

$$u(c) = \E[u(X)], \qquad \text{i.e.} \qquad c = u^{-1}\big(\E[u(X)]\big).$$

You can call this "the amount of money equivalent to gambling with $X$". That is, if I were to give you an amount of money $c$, it's equivalent to investing/gambling in the investment $X$.
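A numeric sketch of the certainty equivalent $c = u^{-1}(\E[u(X)])$ for log utility; the gamble (values 50 or 150 with equal probability) is my own illustrative choice:

```python
import math

# Sketch: c = u^{-1}(E[u(X)]) for log utility, with a toy gamble X taking
# values 50 or 150 with equal probability (illustrative numbers).
u, u_inv = math.log, math.exp
eu = 0.5 * u(50) + 0.5 * u(150)
c = u_inv(eu)                   # = sqrt(50 * 150) ~ 86.6, below E[X] = 100
print(round(c, 2))
```

Note that $c < \E[X] = 100$, anticipating the risk-aversion proposition later in these notes.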

Proposition Let $u(x) = -e^{-\beta x}$ and under EUM, $X \gtrsim Y$. Then $w + X \gtrsim w + Y$. This says that initial wealth does not affect the utility criterion. Again, this is a special kind of utility function.

Midterm will be on

  • Review problem on Bayes rule and conditional probability
  • Value at risk.
  • Expected utility maximization.

Week 5

Utility and Insurance Consider an individual with wealth $w$, facing a possible random loss $L$. Suppose he’s an EU maximizer with utility $u(x)$.

  • What premium would the individual be willing to pay to get insured? Call this $g$.
  • Suppose the company's utility is $u_c(x)$; what premium $h$ would the company need in order to insure the individual?
  • What condition must hold between $g$ and $h$ so that an agreement is met?

Answers:

  • If insured, the individual's wealth after payment of the premium is $X = w - g$. If not insured, he is left with $Y = w - L$. So he wants to compare the expected utilities of $X$ and $Y$: he accepts any $g$ with $u(w - g) \geq \E[u(w - L)]$. The maximal acceptable premium is

    $$g_{\max} = w - u^{-1}\big(\E[u(w - L)]\big).$$

  • Let $w_c$ be the company's wealth. From the point of view of the company, the minimal accepted premium $h_{\min}$ is the smallest $h$ with

    $$\E[u_c(w_c + h - L)] \geq u_c(w_c).$$

For the customer to be insured and the company to insure him, the premium $p$ must satisfy

$$h_{\min} \leq p \leq g_{\max}.$$

remark Midterm portions up to risk aversion.

Condition $Z$ Let $Z_{\e} = \pm \e$ with equal probability. Suppose $\gtrsim$ is a preference order. Then the preference order is said to satisfy Condition $Z$ if for any random variable $X$ and any $\e > 0$, with $Z_{\e}$ independent of $X$, we have

$$X \gtrsim X + Z_{\e}.$$

You should think of this as saying: if I make $X$ more variable, it becomes less preferable.

  • An individual whose preference order satisfies condition $Z$ is called a risk averter.
  • If instead $X + Z_{\e} \gtrsim X$, then the individual is a risk lover.
  • If an individual is neither a risk lover nor a risk averter, then we call the person risk neutral.

Proposition Let $\gtrsim$ be an EUM order. Let $u(x)$ be a continuous utility function. Condition $Z$ holds iff $u(x)$ is concave!

Proposition (Jensen's inequality). Let $X$ be a random variable with finite expectation. If $u$ is a convex function, then $\E[u(X)] \geq u(\E[X])$. If $u$ is a concave function, then $\E[u(X)] \leq u(\E[X])$.

If the client is risk averse, then $u(x)$ is concave, and by Jensen's inequality $\E[u(w - L)] \leq u(w - \E[L])$. Since $u$ is monotone increasing, the maximal acceptable premium $g$ (which satisfies $u(w - g) = \E[u(w - L)]$) obeys $w - g \leq w - \E[L]$, i.e. $g \geq \E[L]$. This means that the client is willing to pay more than the average loss! So the insurance company can make money.

Proposition In the case of risk aversion, the certainty equivalent satisfies $c(X) \leq \E[X]$.

The proof of this follows from the fact that if $u$ is concave and increasing, then $u^{-1}$ is convex. Indeed, with $Y = u(X)$, Jensen's inequality applied to $u^{-1}$ gives $\E[X] = \E[u^{-1}(Y)] \geq u^{-1}(\E[Y]) = c(X)$.

What does this mean? It means that if $X$ is the payoff of an investment, the sure amount of money you would accept in place of the investment is smaller than the expected payoff!

Week 6

Optimal payment from the standpoint of the insured

Consider the following problem. An individual with wealth $w$ is facing a random loss $L$ that has mean $\mu_L > 0$. To protect against the risk, he appeals to an insurer. The insurer, having many clients, proceeds from the mean value of the future payment.

The average amount the insurer must pay is denoted $\mu_P$, where $P$ stands for payment.

If a client wants full coverage, then $\mu_P = \mu_L$ and premium must be a little more than the average payment; i.e., $g = (1 + \theta)\mu_L$. Sometimes $\mu_L$ is too big and hence the premium is very large. The client might not be willing to pay that much.

So we may assume that the mean payment is $\mu_P < \mu_L$.

To have a more complex policy that allows the insurer to pay less than what the consumer loses, it must have a payout or payment function $r(x)$: the amount the insurer will pay if the insured's loss is $L = x$. The mean amount paid by the insurer is

$$\mu_P = \E[r(L)].$$

Assumptions on $r(x)$:

  • $r(0) = 0$
  • $r(x)$ is increasing.
  • Clearly we must have $r(x) \leq x$ since we don’t want to pay more than what the client loses.

Examples:

  • Proportional insurance: if $r(x) = kx$ with $k \leq 1$, then $k = \mu_P/\mu_L$.

  • Excess of loss / stop-loss insurance / insurance with a deductible $d$: $r_d(x) = \max(x - d, 0)$.

  • Insurance with limited coverage. There is an upper bound $s$ to the amount paid, $r(x) = \min(x, s)$. For lower amounts, there is full payment.
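The three payout shapes can be sketched in a few lines (the function names are mine, not the textbook's):

```python
# Sketches of the three payout functions r(x) listed above
# (names r_prop, r_deductible, r_limit are mine, not the textbook's).

def r_prop(x, k):            # proportional insurance, k <= 1
    return k * x

def r_deductible(x, d):      # excess-of-loss / stop-loss: pay only above d
    return max(x - d, 0)

def r_limit(x, s):           # limited coverage: full payment, capped at s
    return min(x, s)

print(r_deductible(250, 100), r_limit(250, 100))  # 150 100
```

All three satisfy the assumptions: $r(0) = 0$, $r$ increasing, and $r(x) \leq x$.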

Arrow’s theorem

Suppose we have $u(x)$, the utility function of a customer. Recall the wealth of the individual under insurance payment $r$.

We want to find the payout function $r^*$ that maximizes the expected utility of the insured! This would make the customer happiest. Of course, $r$ must satisfy the above constraints, and importantly it must have

$$\E[r(L)] = \mu_P,$$

where $\mu_P$ is the average payout specified earlier.

Theorem (Arrow) Suppose $u$ is a concave function (the client is a risk averter), and let $r^* = r_d$, where $r_d$ is the insurance with a deductible specified above. Let the deductible $d$ satisfy

$$\E[r_d(L)] = \int_d^{\infty} (x - d)\, dF_L(x) = \mu_P,$$

where $F_L$ is the cdf of the loss. Then for any function $r$ such that $\E[r(L)] = \mu_P$,

$$\E\big[u(w - g - L + r_d(L))\big] \geq \E\big[u(w - g - L + r(L))\big].$$

Isn’t that a spectacular theorem? This is why you see insurance plans with deductibles so frequently. They really want to maximize your happiness and their profits!

I also proved this in class; the proof is essentially from Rotar's book. It follows from integration by parts and second-order stochastic dominance.

An individual risk model for a short period of time

Distribution of loss. Let $\xi$ represent the size of the loss, given that a loss has occurred; $\xi$ is sometimes called the "severity". Then the actual loss of the insured is

$$X = 1_{\text{loss}}\, \xi.$$

Exponential distribution Suppose $\xi$ is exponential with parameter $\lambda$. Recall that

$$\Prob(\xi > x) = e^{-\lambda x}, \qquad x \geq 0.$$

The expectation and variance are

$$\E[\xi] = \frac{1}{\lambda}, \qquad \Var(\xi) = \frac{1}{\lambda^2}.$$

Exercise: If $\xi$ is $Exp(\lambda)$, then show that it's memoryless. That is,

$$\Prob(\xi > s + t \mid \xi > s) = \Prob(\xi > t).$$

The tail of the exponential variable is

$$T(x) = \Prob(\xi > x) = e^{-\lambda x}.$$

It gives the probability of something bad happening.

Some notation. Two distributions $F$ and $G$ have similar right tails, written more succinctly $F \sim G$, if

$$\lim_{x \to \infty} \frac{1 - F(x)}{1 - G(x)} = 1.$$

Some typical tails:

  1. Polynomial tails (Heavy tails). $\Prob(\xi > x) \sim C x^{-k}$.

    Let us compute the probability that something superbad happens, given that something bad happens:

    $$\Prob(\xi > 2x \mid \xi > x) = \frac{\Prob(\xi > 2x)}{\Prob(\xi > x)} \sim \frac{(2x)^{-k}}{x^{-k}} = 2^{-k},$$

    a constant that does not vanish as $x \to \infty$. This means that if we know something bad happened, it's quite likely that it's very bad.

  2. Exponential tails (Light tails). $\Prob(\xi > x) \sim C e^{-\lambda x}$.

  3. Super-exponential tails $\Prob(\xi > x) \sim C e^{-\lambda x^{\gamma}}$ with $\gamma > 1$. Can you give me an example? We will see this in age and survival distributions, especially when we discuss hazard rates.
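The "superbad given bad" contrast between tail types 1 and 2 is easy to tabulate; a sketch with illustrative parameters $k = 2$, $\lambda = 1$:

```python
import math

# Sketch contrasting tail types: P(xi > 2x | xi > x) for a polynomial tail
# x^{-k} stays at the constant 2^{-k}, while for an exponential tail
# e^{-lam x} it decays like e^{-lam x} -> 0.
k, lam = 2.0, 1.0
for x in (1.0, 5.0, 25.0):
    poly = (2 * x) ** (-k) / x ** (-k)                   # always 2^{-k} = 0.25
    expo = math.exp(-lam * 2 * x) / math.exp(-lam * x)   # = e^{-lam x}
    print(x, poly, expo)
```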

Light tails A distribution $F$ is said to be light-tailed if there are positive numbers $C, \lambda$ such that

$$1 - F(x) \leq C e^{-\lambda x} \qquad \text{for all } x.$$

Heavy tails If a distribution is not light-tailed, we call it heavy-tailed.

Examples of light tailed random variables

  • Bounded random variable
  • Gamma distribution

Examples of heavy tailed distributions

  • Pareto distribution

    It’s a bit of work to show that the Pareto distribution is heavy-tailed. This will be on your homework.

Next we want to determine when distributions are heavy-tailed or light-tailed. We will do this using moment generating functions.

Moment generating functions

The moment generating function (MGF) of $X$ is given by

$$M_X(z) = \E[e^{zX}].$$

Why is this useful? Suppose we want to find the expectation of $X$.

Clearly

$$M_X'(0) = \E[X].$$

In general,

$$M_X^{(k)}(0) = \E[X^k].$$

Examples:

  • Exponential: for $\xi \sim Exp(\lambda)$,

    $$M_{\xi}(z) = \frac{\lambda}{\lambda - z}, \qquad z < \lambda.$$

    Use this to compute the first two moments of $\xi$.

  • Gaussian random variable. (will be on homework)
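The moments-from-the-mgf recipe can be checked numerically; a sketch using finite differences of the exponential mgf $M(z) = \lambda/(\lambda - z)$ (the value $\lambda = 2$ is illustrative):

```python
# Sketch: recover the first two moments of Exp(lam) from its mgf
# M(z) = lam / (lam - z) by central finite differences at z = 0.
lam = 2.0
M = lambda z: lam / (lam - z)
h = 1e-5
m1 = (M(h) - M(-h)) / (2 * h)              # ~ E[xi]   = 1/lam = 0.5
m2 = (M(h) - 2 * M(0) + M(-h)) / h ** 2    # ~ E[xi^2] = 2/lam^2 = 0.5
print(round(m1, 4), round(m2, 4))  # 0.5 0.5
```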

Proposition: The distribution of $\xi$ is light-tailed iff $M_{\xi}(z) < \infty$ for some $z > 0$.

Proof: Suppose $M_{\xi}(z) < \infty$ for some $z > 0$; we will show that $\xi$ has a light tail. Suppose $\xi$ has a smooth density function $f$ and cdf $F$. Then

$$M_{\xi}(z) = \int_{-\infty}^{\infty} e^{zx} f(x)\, dx.$$

In fact, more generally, if we have some integrable function $g$, then $\E[g(\xi)] = \int g(x) f(x)\, dx$.

Clearly $e^{zs} \leq e^{zx}$ for $s \leq x$. Then,

$$1 - F(s) = \int_s^{\infty} f(x)\, dx \leq \int_s^{\infty} e^{z(x - s)} f(x)\, dx \leq e^{-zs} M_{\xi}(z).$$

Therefore the tail satisfies $T(s) = 1 - F(s) \leq e^{-zs} M_{\xi}(z)$.
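This Chernoff-style bound is easy to sanity-check numerically for an exponential severity (parameters $\lambda = 1$, $z = 0.5$ are illustrative):

```python
import math

# Numeric sanity check of the tail bound T(s) <= e^{-zs} M(z) from the proof,
# for xi ~ Exp(lam): T(s) = e^{-lam s} and M(z) = lam / (lam - z), z < lam.
lam, z = 1.0, 0.5
M = lam / (lam - z)
for s in (1.0, 2.0, 4.0):
    tail = math.exp(-lam * s)
    bound = math.exp(-z * s) * M
    assert tail <= bound
    print(s, round(tail, 4), round(bound, 4))
```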

For the other implication, assume that $\xi$ has a light tail. That is, for some positive constants $B, c$ we have

$$T(x) = 1 - F(x) \leq B e^{-cx}.$$

Then for any $0 < z < c$, the contribution to $\E[e^{z\xi}]$ from the negative half-line is at most $1$, so it's enough to analyze the integral over the positive real numbers. Integration by parts gives

$$\int_0^{\infty} e^{zx}\, dF(x) = -\Big[e^{zx}\, T(x)\Big]_0^{\infty} + z \int_0^{\infty} e^{zx}\, T(x)\, dx \leq 1 + zB \int_0^{\infty} e^{(z - c)x}\, dx < \infty.$$

The last step required us to show that $e^{zx} T(x) \to 0$ as $x \to \infty$. But this is easy, using $T(x) \leq B e^{-cx}$.

Prop: The moment generating function is convex, which means that

$$M\big(\lambda z_1 + (1 - \lambda) z_2\big) \leq \lambda M(z_1) + (1 - \lambda) M(z_2), \qquad \lambda \in [0, 1].$$

Proof: For each fixed $x$, the function $z \mapsto e^{zx}$ is convex, and averages of convex functions are convex. (Jensen's inequality applied to the convex function $e^x$ also gives the lower bound $M(z) \geq e^{z\E[X]}$.)

Distribution of Loss

Here we have $X = 1_{loss}\, \xi$. If the probability of loss is $q$, then

$$M_X(z) = (1 - q) + q\, M_{\xi}(z).$$

Example: $\xi \sim Exp(a)$. Then

$$M_X(z) = 1 - q + \frac{qa}{a - z}, \qquad z < a.$$

Let's compute $\E[X^k]$. This is useful for them to practice, since $X$ has an atom at $0$.

How do we compute $\E[\xi^k]$? Use the moment generating function! Should give

$$\E[\xi^k] = \frac{k!}{a^k}, \qquad \text{so} \qquad \E[X^k] = q\, \frac{k!}{a^k}.$$

It's also useful to compute the variance:

$$\Var(X) = \E[X^2] - \E[X]^2 = \frac{2q}{a^2} - \frac{q^2}{a^2} = \frac{q(2 - q)}{a^2}.$$

Distribution of payments and types of insurance

Deductible

Maximum limit

Deductible and maximum limit

Franchise deductible

Example: Let $\xi$ be exponentially distributed with parameter $a$. Let $r = r_{3,d,s}$, the deductible-and-maximum-limit payout with deductible $d$ and cap $s$. Find the moments $\E[Y^k] = \E[r^k(X)]$. Turns out that it is

Example 2. An insurance company offers two types of policies: Type Q and Type R. Type Q has no deductible but a policy limit of $3000. Type R has no limit but an ordinary deductible of $d$. (The word "ordinary" distinguishes this policy from the franchise deductible policy.) Calculate the deductible $d$ such that both policies have the same expected cost per loss. Loss follows a Pareto with $\theta = 2000$ and $\alpha = 3$. Recall that the Pareto may be defined by the tail

$$\Prob(\xi > x) = \Big(\frac{\theta}{x + \theta}\Big)^{\alpha}.$$

In the HW it was defined to be

$$\Prob(\xi > x) = \Big(1 + \frac{x}{\theta}\Big)^{-\alpha}.$$

They're both exactly the same thing.

Week 7

I stopped with Example 2 last week. Complete it by finishing the Pareto distribution calculations.

So recall, we're trying to find $\E[r(\xi)]$, written in terms of the tail as

$$\E[r(\xi)] = \int_0^{\infty} r'(x)\, T(x)\, dx,$$

using the fact that

$$T(x) = \Prob(\xi > x) = \Big(\frac{\theta}{x + \theta}\Big)^{\alpha}.$$

remark We wrote the expectation of $r(x)$ in terms of the tail since that’s what we were given about the Pareto. We could easily just use $dT(x)$ instead.

remark Read the distribution function of $r(x)$ section, for an increasing function $r(x)$.

HW I will give them an insurance calculation for next week with Example 4 from page 153 of the textbook (2nd ed)

Aggregate payment.

We will no longer specify particular details of insurance contracts such as deductible or payment limits.

$X_i$ represents the payment to individual $i$ by the insurance company. We want to analyze the distribution of $S_n = X_1 + \cdots + X_n$. This brings us to our next topic: convolutions.

Convolutions.

Introduce the basic definition, and do it in terms of densities and distribution functions for $X_1 + X_2$.

Proposition: If $X$ and $Y$ are independent with densities $f_X$ and $f_Y$, then $X + Y$ has density

$$f_{X+Y}(s) = \int_{-\infty}^{\infty} f_X(x)\, f_Y(s - x)\, dx.$$

HW: will give practice on convolutions.

  • Do discrete and continuous counterparts of convolution.
  • Example, two Bernoulli distributions.

Classical examples

  • Sums of normals.
  • Sums of Poissons. $S_n \sim \operatorname{Poisson}(\sum_i \lambda_i)$
  • Sums of Gammas. $S_n \sim \Gamma(\alpha, \sum_{i=1}^n \nu_i)$, where $X_i$ has density function

    $$f_{X_i}(x) = \frac{\alpha^{\nu_i}\, x^{\nu_i - 1}\, e^{-\alpha x}}{\Gamma(\nu_i)}, \qquad x > 0.$$

Just state the formulas for this.

HW Have them compute the mgf of the gamma distribution.

Moment generating functions This is the most important fact about convolutions of independent random variables:

$$M_{X+Y}(z) = M_X(z)\, M_Y(z).$$

State that all proofs go via mgfs, or more generally through characteristic functions.

What is the problem with mgfs? They may not exist. But the characteristic function always exists!

I proved: if densities exist for both $X$ and $Y$, we have

$$f_{X+Y}(s) = \int f_X(x)\, f_Y(s - x)\, dx.$$

What about discrete random variables $X$ and $Y$? We have, for the probability mass function,

$$p_{X+Y}(n) = \sum_k p_X(k)\, p_Y(n - k).$$

Convolution smooths out densities. Let $X_i$, $i = 1,2,3$, be three independent random variables, all uniform on $[0,1]$, and let $S_k = X_1 + \cdots + X_k$. Then $f_{S_1} = 1$ on $[0,1]$, $f_{S_2}$ is a triangle, and $f_{S_3}$ is a smoother version of this that I will have them calculate on the homework.
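The triangle density of the two-fold convolution can be sketched by simulation (a rough histogram estimate at the peak, my own illustration):

```python
import random

# Sketch: the density of S_2 = U_1 + U_2 (independent uniforms on [0,1]) is
# the triangle f(s) = s on [0,1] and 2 - s on [1,2].  Estimate it at s = 1
# with a histogram bin of width 0.1; the bin average is ~0.975, just below
# the peak value 1 because the peak is averaged over the bin.
random.seed(3)
n, width = 400_000, 0.1
count = sum(1 for _ in range(n)
            if abs(random.random() + random.random() - 1.0) < width / 2)
density_at_1 = count / (n * width)
print(round(density_at_1, 2))
```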

HW. See above. Do an example with exponentials $X_1,X_2$.

Conditional expectation as a random variable

Suppose $X, Y$ are discrete rvs with joint probability mass function $f_{X,Y}$. Define the marginals. Recall the conditional expectation. Do an example, and you want to say that $\E[Y|X]$ is to be thought of as a random variable.

Take an old example: $Y = \operatorname{Bernoulli}(1/X)$, where $X$ represents the roll of a die.

We should do the full setup of $X$ as a random variable, as a map from the sample space $\Omega$ to $\R$.

  • $\E[Y \mid X]$ can be computed if we know the value of $X$. If we don't, we should think of it as random.
  • If we knew the value of $X$, then $\E[Y \mid X]$ would be a number.
  • So we may also think of it as a function of the value of $X$.

For discrete random variables, do another example with two independent Poissons and compute $\E[X_1 \mid X_1 + X_2 = n]$. Use the fact that $X_1 + X_2 \sim \operatorname{Poisson}(\lambda_1 + \lambda_2)$.
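The answer to this example can be checked numerically: the conditional law of $X_1$ given $X_1 + X_2 = n$ is Binomial$(n, p)$ with $p = \lambda_1/(\lambda_1 + \lambda_2)$, so the conditional mean is $np$. A sketch with illustrative rates:

```python
from math import comb

# Sketch: for independent X1 ~ Poisson(lam1), X2 ~ Poisson(lam2), the
# conditional law of X1 given X1 + X2 = n is Binomial(n, p) with
# p = lam1 / (lam1 + lam2), so E[X1 | X1 + X2 = n] = n * p.
lam1, lam2, n = 2.0, 3.0, 10
p = lam1 / (lam1 + lam2)
cond_mean = sum(k * comb(n, k) * p ** k * (1 - p) ** (n - k)
                for k in range(n + 1))
print(round(cond_mean, 6))  # 4.0 = n * p
```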

Continuous distributions

Define it as

$$f_{Y|X}(y \mid x) = \frac{f_{X,Y}(x, y)}{f_X(x)}.$$

Do marginals and remind them about the conditional distributions. Write

$$\E[Y \mid X = x] = \int y\, f_{Y|X}(y \mid x)\, dy.$$

Also write it down for a random vector.

Next is an important example. Suppose $X_1$ and $X_2$ are independent. Then

$$\E[X_1 \mid X_2] = \E[X_1].$$

Prop If $S = X + Y$ with $X, Y$ independent, then

$$f_{X|S}(x \mid s) = \frac{f_X(x)\, f_Y(s - x)}{f_S(s)}.$$

Then do an example with exponentials to find the density of $S = X_1 + X_2$, and use this to find the conditional density $f_{X_1|S}$ of $X_1$ given $S$.

What do you get when you add two exponentials? You get a $\operatorname{Gamma}(\alpha,2)$. The conditional should turn out to be uniform!
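The uniform answer can be verified pointwise; a numeric sketch (the values $a = 1.5$, $s = 2$ are my own illustrative choices):

```python
import math

# Sketch: for X1, X2 iid Exp(a), the conditional density of X1 given
# S = X1 + X2 = s is f(x) f(s-x) / f_S(s), with f_S(s) = a^2 s e^{-a s}
# (the Gamma density of the sum).  It simplifies to 1/s: uniform on [0, s].
a, s = 1.5, 2.0
f = lambda x: a * math.exp(-a * x)
f_S = a ** 2 * s * math.exp(-a * s)
for x in (0.2, 1.0, 1.8):
    print(round(f(x) * f(s - x) / f_S, 6))  # 0.5 = 1/s every time
```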

Week 9

remark According to the schedule, I was supposed to be at distribution of the aggregate claim here. I’m a little behind, but it’s ok.

Mar 09 2016 Continue with the example of two independent exponentials.

Then do the uniform distribution example of the two Gaussians.

Properties of conditional expectation, in Nicos’ notes. Tower property, linearity, etc.

Let $Y = \eta + \xi X$, where $X$ is a stock index and $Y$ is the price of a particular stock.

Then do conditional variance. And write the variance in terms of the conditional variance.

Resume here.

HW Give them the circle thing for homework.

Future Risk Portfolio. Suppose we have $S_N = \sum_{i=1}^N X_i$, where $N$, the number of clients who've signed up, is random and independent of the $X_i$. Let the $X_i$ be iid with mean $\mu$ and variance $\sigma^2$. Compute $\E[S_N] = \mu\, \E[N]$. Importantly, compute $M_{S_N} = M_N \circ \log M_X$.

  • mean
  • variance
  • mgf
  • special case when $N$ is Poisson. Here, the mgf of the Poisson is

    $$M_N(z) = e^{\lambda(e^z - 1)}.$$

    Then,

    $$M_{S_N}(z) = M_N\big(\log M_X(z)\big) = e^{\lambda(M_X(z) - 1)}.$$
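The compound Poisson moment formulas $\E[S_N] = \lambda\E[X]$ and $\Var(S_N) = \lambda\E[X^2]$ can be checked by simulation; a sketch with illustrative parameters ($\lambda = 5$, claims $\sim Exp(1)$):

```python
import random

# Simulation sketch of the compound Poisson sum S_N, N ~ Poisson(lam):
# E[S_N] = lam * E[X] and Var(S_N) = lam * E[X^2].  Claims X ~ Exp(1)
# (illustrative), so E[X] = 1 and E[X^2] = 2.
random.seed(11)
lam = 5.0

def poisson(rate):
    # count rate-`rate` exponential arrivals before time 1
    n, t = 0, random.expovariate(rate)
    while t < 1.0:
        n += 1
        t += random.expovariate(rate)
    return n

m = 100_000
vals = [sum(random.expovariate(1.0) for _ in range(poisson(lam)))
        for _ in range(m)]
mean = sum(vals) / m
var = sum((v - mean) ** 2 for v in vals) / m
print(round(mean, 2), round(var, 2))  # close to 5.0 and 10.0
```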

Characteristic functions. Why don’t moment generating functions always exist? Can you give me an example of a distribution where the MGF doesn’t exist? What do you use in its place?

Some more properties of conditional variances

Poisson distribution and Poisson counting

Derive it from Binomial distribution. Binomial counts number of successes in $n$ trials. Suppose the chance of a client having an accident is rare. Then, a good model for the number of accidents is Poisson.

HW Why use Poisson? Compute the factorials and show what a mess it is.

Lots of trials, low chance of success.

HW perhaps give them a Binomial with a really low probability of success, for example $p = 1/n^2$, and see what it goes to. So the scaling has to be just right.

RESUME FROM HERE

Next is the Poisson approximation theorem. Do Example 3, page 227 (2nd edition), which follows.

An insurance company pays claims at Poisson rate of 2000 per year. Claims are divided into three categories: “minor”, “major” and “severe” with payment amounts of 1000, 5000, 10000 respectively. The proportion of “minor” claims is 50%. The total expected claim payments per year is 7000000. What is the proportion of severe claims?

Suppose $\lambda_1, \lambda_2, \lambda_3$ are the three Poisson rates for the minor, major, and severe claim categories. Clearly, we must have

$$\lambda_1 + \lambda_2 + \lambda_3 = 2000.$$

Week 11

Distribution of the aggregate claim

After doing Poisson again, return to $N \sim \Poisson(\lambda)$. Then

$$\Prob(S_N \leq x) = \sum_{n=0}^{\infty} e^{-\lambda} \frac{\lambda^n}{n!}\, F^{*n}(x),$$

where $F^{*n}$ is the $n$-fold convolution of the claim distribution $F$.

Why did we use Poisson here? Remind them about the Poisson approximation theorem. Then, compute the formula

Also compute the density by differentiating.

HW: Give them the three-categories-of-population thing again: minor, major, severe with payments 1, 4, 10 units of money respectively.

Week 12

Recall the moment generating function calculation

So we have

$$M_S(z) = \exp\Big(\sum_i \lambda_i (M_i(z) - 1)\Big) = \exp\big(\lambda(\overline{M}(z) - 1)\big), \qquad \overline{M}(z) = \sum_i \frac{\lambda_i}{\lambda} M_i(z), \quad \lambda = \sum_i \lambda_i,$$

which means that $\overline{M}(z)$ is a weighted average of the $M_i$. It follows that $F(x)$ can be replaced by the corresponding mixture of distributions!

Moral there is a single equivalent homogeneous group, with $N \sim \Poisson(\lambda)$ and claim distribution $F$ given by the weighted average of the individual distributions!

So we did Chapter 2 section 2, about the aggregate payment, convolutions and so on. Then we moved onto chapter 3, with a collective risk model for a short period, where we covered

  • The Poisson approximation theorem.
  • Compound poisson distribution.
  • How to compute mean variance and mgf of $S_N$.
  • Several homogeneous groups and average behavior.
  • Unfortunately, we’ve skipped premiums and solvency.

remark I would recommend doing a little bit about premiums and solvency the next time this is taught.

Example 2, page 226, 2nd edition. Let $l = 2$ be the number of groups, with Poisson rates $\lambda_1 = 200$ and $\lambda_2 = 300$. Assume the rvs $X_{1j}$ and $X_{2j}$ are exponentially distributed with means $2$ and $3$ respectively.

Find the distribution of the claim and its moment generating function. Also compute $\E[S]$ and $\Var(S)$ using the above. They keep referring to (1.2) and (1.4) in the text. These are the lemmas that say

$$\E[S] = \lambda\, \E[X], \qquad \Var(S) = \lambda\, \E[X^2].$$

The second equality follows from

$$\Var(S) = \E[N]\Var(X) + \Var(N)\E[X]^2 = \lambda\big(\Var(X) + \E[X]^2\big) = \lambda\, \E[X^2].$$

Example 3, page 227 An insurance company pays claims at rate $\Poisson(2000)$ per year, in three categories with payment amounts $1000, 5000, 10000$. The proportion of minor claims is $50\%$. The total expected claim payment per year is $7{,}000{,}000$. What is the proportion of severe claims?

Total expected claim:

$$1000\,\lambda_1 + 5000\,\lambda_2 + 10000\,\lambda_3 = 7{,}000{,}000.$$

Proportion of minor claims:

$$\frac{\lambda_1}{2000} = 0.5, \qquad \text{so } \lambda_1 = 1000.$$

Then a third equation is

$$\lambda_1 + \lambda_2 + \lambda_3 = 2000.$$

Three unknowns, three equations: $\lambda_2 = 800$, $\lambda_3 = 200$, so the proportion of severe claims is $200/2000 = 10\%$.
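The three equations from Example 3 can be solved by simple elimination; a sketch in code:

```python
# Solving Example 3's three equations by elimination (numbers from the text):
#   lam1 + lam2 + lam3 = 2000                        (total Poisson rate)
#   lam1 = 0.5 * 2000                                (half the claims are minor)
#   1000*lam1 + 5000*lam2 + 10000*lam3 = 7_000_000   (total expected claim)
lam1 = 0.5 * 2000
lam3 = (7_000_000 - 1000 * lam1 - 5000 * (2000 - lam1)) / (10000 - 5000)
lam2 = 2000 - lam1 - lam3
print(lam1, lam2, lam3, lam3 / 2000)  # 1000.0 800.0 200.0 0.1
```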

Normal approximation of aggregate claim

When $N$ is large, then CLT holds again.

Theorem (usual CLT): Let $X_i$ be iid with two moments, mean $\mu$ and variance $\sigma^2$, and let $S_n = X_1 + \cdots + X_n$. Then

$$\frac{S_n - n\mu}{\sigma\sqrt{n}} \Rightarrow N(0, 1).$$

Theorem (CLT for random sum) See Nicos’ notes for this.

Let $X_i$ be iid with two moments, and let $S_N = \sum_{i=1}^N X_i$, where $N$ is $\Poisson(\lambda)$. Recall that $\E[S_N] = \lambda\E[X]$ and $\Var(S_N) = \lambda\E[X^2]$. Then,

$$\frac{S_N - \lambda\E[X]}{\sqrt{\lambda\E[X^2]}} \Rightarrow N(0, 1) \qquad \text{as } \lambda \to \infty.$$

In other words, for LARGE random sums, $S_N$ is approximately normal.

Estimation of premium by normal approximation

This stuff is in Chapter 2 part 3. Here we do the Poissonized version of it, which is in Chapter 3 part 4.

Remark The number of contracts needed to maintain a given security level is a good question. Should have done Chapter 2 part 3 earlier, so they can contrast it with the Poissonized version.

Remark There is a cool theorem on page 233 that says that if $N$ is a geometric distribution instead, then $S_N$ (suitably scaled) goes to an exponential! This is a theorem due to Rényi.

HW example 2 from page 173.

Loading coefficient. Suppose the average payment is $\E[\xi]$. Then, the premium is usually

$$c = (1 + \theta)\,\E[\xi],$$

and $\theta$ includes your profit and security. Hence it's called the relative security loading coefficient.

In this setting, we have two cases.

  • Random payment $S_n$ with fixed number of clients.
  • Random payment $S_N$ with random number of claims.
  • The TOTAL amount of money $c(\lambda)$, as a function of the Poisson rate, required to cover the random payment $S_N$ with high probability.

The number $\beta$ can be anything, but is usually high and close to $1$, like $0.95$. We again have a $\theta$ here, and we get

$$c(\lambda) = (1 + \theta)\,\lambda\,\E[X], \qquad \text{chosen so that } \Prob\big(S_N \leq c(\lambda)\big) \geq \beta.$$

Again, compute the $\beta$ quantile, and then compute $\theta$ in terms of the $\beta$ quantile. This can be written in terms of the coefficient of variation $\sigma/m$.

Explain the coefficient of variation again. Do the example in Nicos’ notes with numbers.

Individual premiums may be calculated as $c(\lambda)$ divided by the number of clients. In the random-number-of-claims case, the individual premium to charge is not clear. One would have to do this precisely using $n$, the actual number of clients, and the Binomial distribution instead of the Poisson.

Nicos does Example 1 on page 234, when $X$ is log normal and $N$ is Poisson with parameter $\lambda$.

Week 13

Stochastic processes

  • How you want to model the amount of money an insurance company has.
  • Intro to counting processes: Poisson process, waiting times.

This week's lecture was mostly from Nicos' notes. Resume at waiting times. We showed that the first waiting time is exponential. Let us show that the $i^{th}$ waiting time has the same distribution as the $1^{st}$ waiting time: writing $T_{i-1}$ for the time of the $(i-1)^{st}$ claim,

$$\Prob(\tau_i > s) = \Prob\big(X_{T_{i-1} + s} - X_{T_{i-1}} = 0\big) = \Prob(X_s - X_0 = 0) = e^{-\lambda s}.$$

This shows that the $i^{th}$ waiting time is also exponentially distributed.

Similarly, the waiting times have a memoryless property that they inherit from $X_t$, where we've used the fact that $X_{s+t} - X_t$ is independent of $X_t - X_0$ (independent increments). Also, the increments of the process are invariant under time translation (stationary increments), which means that asking whether there are no customers in $[t, s+t]$ is the same as asking whether there are no customers in $[0, s]$.

Do the example from Nicos' notes. Only Q4 needs a little work. Put it on this week's HW.

HW Poisson process where the mean time between two claims is 1/2 hour. With probability $p$, a claim is made by a woman over 60 years old. What is the probability that 5 or more claims are made by such women from 2-6pm? What kind of Poisson process do the claims from women over 60 form? (It's a Poisson process with a fraction of the original rate.)
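A sketch of the intended solution, assuming the thinning result: the claims from such women form a Poisson process of rate $2p$ per hour, so the count over 4 hours is Poisson with mean $8p$ (the value $p = 0.25$ below is my own illustrative choice; the HW leaves $p$ general):

```python
import math

# Sketch of the HW: claims arrive as a Poisson process with mean interarrival
# time 1/2 hour, i.e. rate 2 per hour.  Each claim is from a woman over 60
# with probability p, so those claims form a thinned Poisson process of rate
# 2p, and the count over 4 hours (2pm-6pm) is Poisson with mean 8p.
p = 0.25                    # illustrative value only
mu = 2 * p * 4
prob_5_or_more = 1 - sum(math.exp(-mu) * mu ** k / math.factorial(k)
                         for k in range(5))
print(round(prob_5_or_more, 4))  # 0.0527 for p = 0.25
```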

His two examples about the Poisson process are useful: find the rate of the Poisson process.

The nonhomogeneous Poisson process

Assumption that the Poisson rate is not realistic. Usually the intensity $\lambda$ is a function of time.

Write $\overline{\lambda}(t)$ instead. State the proposition about the problem.

HW Maybe formulate a problem about this with waiting times.

I’m going to skip the example, and tell them where to find it. It’s from pages 254 - 256 in the 2nd edition book. There are a bunch of useful examples there.

  • Write down the compound process. It can model both claims and new customers coming in.

Chapter 7. Surplus process and ruin models.

The typical model again is making the cash flow a little more than the average claim at time $t$.

Maximizing the surplus upto time $T$ under some preference criterion.

Week 14

  • Continue with ruin models and surplus processes.
  • Want to maximize $R_t$ with respect to some criterion. $R_t$ is called the surplus process.
  • Also define $W_t = u - R_t = S_t - c_t$, the claim surplus at time $t$.

Things that you’d like to maximize

  • the “survival probability” up to time $t$.
  • expected surplus at time $t$.

A condition at infinity Want $R_t$ to grow to infinity with high probability. This is a good place to introduce Chebyshev's inequality. Show that this condition at infinity generally holds for reasonable models.

  • Define the time of ruin.
  • Define the MGF of $W_{\Delta} = W_{t+s} - W_t$; call it $M_{\Delta}$.
  • The adjustment coefficient of $M_{\Delta}$.

Theorem Crazy theorem: you get a bound on the ruin probability in terms of the adjustment coefficient $z_0$, namely $\Prob(\text{ruin}) \leq e^{-z_0 u}$, where $u$ is the initial wealth.

  • The larger the adjustment coefficient, the smaller the chance of ruin.
  • Larger the initial wealth, smaller the chance of ruin.

Example Compound Poisson with exponential claims. $R_t = u + ct - S_t$, $X \sim \Exponential(\alpha)$. Can compute the ruin probability explicitly.

Resume at the Chebyshev inequality calculation. Do the example with the standard Poisson process.

Computing adjustment coefficients.

Remind them about the claim surplus process, and correct the sign error for the adjustment coefficient.

Draw the pictures of $M_{\Delta}(z)$ for the previous proposition. Remind them that $M'(0) = \mu$.

For a homogeneous Poisson process with unit rate and claims having the standard exponential distribution, the adjustment coefficient solves $M_X(z) - 1 = cz$ with $M_X(z) = 1/(1-z)$, giving $z_0 = 1 - 1/c$ for premium rate $c > 1$.
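A numeric sketch (my own, with the illustrative premium rate $c = 1.5$): solve the standard compound Poisson adjustment-coefficient equation $M_X(z) - 1 = cz$ by bisection and compare with the closed form $z_0 = 1 - 1/c$ for unit-rate arrivals and standard exponential claims.

```python
# Solve M_X(z) - 1 = c*z by bisection for the unit-rate compound Poisson
# surplus with standard exponential claims, M_X(z) = 1/(1-z).
# Closed form for comparison: z0 = 1 - 1/c.
c = 1.5                                        # premium rate, assumed > 1
g = lambda z: 1.0 / (1.0 - z) - 1.0 - c * z    # sign change at the root z0

lo, hi = 1e-9, 1.0 - 1e-9                      # g(lo) < 0 for c > 1, g(hi) > 0
for _ in range(60):
    mid = (lo + hi) / 2
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid
z0 = (lo + hi) / 2
print(round(z0, 6), round(1 - 1 / c, 6))  # 0.333333 0.333333
```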

Prop Consider a positive solution to $M(z) = 1$, $z \geq 0$. Assume $\Prob(\xi = 0) \neq 1$, meaning $\xi$ is not identically zero. Then

  • A positive solution exists iff $\mu < 0$ and

  • If $\mu < 0$ and $M(z_0) < 1$, then $M(z) < 1$

  • If the equation has a positive solution, it is unique.

Usually we will have something of the following form.

  • Example 1 from page 352 (2nd edition). This one has $f(x) = x e^{-x}$. Turns out that $z_0 = 1$ and everything can be computed
  • Example 2 from page 352 (2nd edition). This is the normal distribution and gives

Approximating the adjustment coefficient

Then solving this we get

Then do adjustment coefficients for homogeneous compound Poisson process.

Week 15

I went quickly through survival distributions, and ended up at the actuarial present value. We covered

  • Survival function
  • Hazard rate, or force of mortality.
  • Hazard rates from independent sources add up.
  • Time until death, and “complete expectation of life”: $\E[T(x)]$.
  • Present value of fugure payment.

I didn’t cover

  • Payment to many clients,
  • Normal approximation.
  • Types of life insurance: Whole life insurance, term insurance, etc