Insurance that no two people are exactly alike

Insurance companies always have the unenviable task of calculating the premiums and benefits insured customers should pay and receive.

This is quite a tedious task in the sense that no two people are exactly alike, so the calculations should cancel out the outliers so that the companies do not end up making significant losses. Using Markov chains helps in the sense that the insurer can define n states for the insured and use them as a basis for the calculations going forward. The assumption is that the insured is in one of these states, which determines the impact and the expected future cash flows/net present value. This paper will look at the different states/Markov chain models that insurance companies adopt and their associated pitfalls.

1 Markov Chain

1.1 Basic concepts

A stochastic process is a collection of random variables that describes an evolution over time. The process is called a Markov process if it possesses the Markov property. A Markov chain is a stochastic process (a collection of random variables), but it differs from a general stochastic process in that a Markov chain must be "memoryless".

This means that the probability of future actions is not dependent on the steps that led up to the present state. This is called the Markov property. The theory of Markov chains is important mainly because many "everyday" processes fulfil the Markov property, although there are examples of stochastic processes that do not satisfy it.

Discrete-time Markov chains can jump between states only at certain moments of time t ∈ ℕ (van der Molen, 2017). Since discrete-time and continuous-time Markov chains are analogous, this paper will only explain the continuous-time Markov chain in detail.

In probability theory, the most immediate example is that of a time-homogeneous Markov chain: one in which the probability of any state transition is independent of time. Such a process can be visualized as in Figure 1, where the sum of the labels on any vertex's outgoing edges is 1.

Figure 1: visualization of states

A Markov chain is a sequence X_0, X_1, X_2, … of random variables satisfying the rule of conditional independence:

P(X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, …, X_0 = i_0) = P(X_{n+1} = j | X_n = i).

Overall, the information in the previous state is all that is needed to define the probability distribution of the current state.

1.2 Transition Matrices

The transition matrix P for a Markov chain X at time t is a matrix holding information about the probability of states transitioning between each other. In general, given an ordering of the matrix's rows and columns by the state space, the (i, j) element of the matrix P is given by

p_ij = P(X_{t+1} = j | X_t = i).

This means that each row of the matrix is a probability vector whose entries sum to 1. Transition matrices have the property that the product of subsequent ones describes a transition along the time interval spanned by the transition matrices.

This suggests that P^2 has in its (i, j) position the probability that X_{t+2} = j given that X_t = i. More generally, the (i, j) position of P^k is the probability P(X_{t+k} = j | X_t = i). The formula for the k-step transition matrix is therefore

P^(k) = P^k.

1.3 Properties

This section will look at a variety of descriptions of either the whole Markov chain or a specific state in the chain, which will allow for a better understanding of the Markov chain's behaviour.
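As a concrete sketch of these matrix facts, the snippet below builds a hypothetical two-state transition matrix (the numbers are illustrative, not taken from any real insurance table), checks that each row is a probability vector, and computes the k-step matrix as a matrix power:

```python
import numpy as np

# Hypothetical two-state chain (0 = healthy, 1 = sick); the transition
# probabilities are illustrative, not taken from any real insurance table.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Each row is a probability vector: its entries sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)

# k-step transition matrix: P^(k) = P^k, computed by repeated multiplication.
k = 3
P_k = np.linalg.matrix_power(P, k)

# The (i, j) entry of P^k is P(X_{t+k} = j | X_t = i).
print(P_k[0, 1])  # probability of being sick 3 steps after being healthy
```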

§  The state i has period d(i) = gcd{n ≥ 1 : p_ii^(n) > 0}, where p_ii^(n) is the (i, i) entry of the n-step transition matrix of Markov chain X. This means that any chain starting at and returning to state i with positive probability must take a number of steps divisible by d(i).

If d(i) = 1, the state is aperiodic. If d(i) > 1, the state is periodic. The Markov chain is known as aperiodic when all of its states are aperiodic.
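The period can be computed directly from the definition by scanning powers of the transition matrix. The deterministic two-state "flip" chain below is a toy example of my own choosing whose states have period 2:

```python
from functools import reduce
from math import gcd

import numpy as np

# Deterministic "flip" chain: state 0 always moves to 1 and vice versa,
# so a return to either state needs an even number of steps.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def period(P, i, max_n=20):
    """d(i) = gcd of all n <= max_n with (P^n)[i, i] > 0 (0 if none found)."""
    returns = [n for n in range(1, max_n + 1)
               if np.linalg.matrix_power(P, n)[i, i] > 0]
    return reduce(gcd, returns) if returns else 0

print(period(P, 0))  # 2, so state 0 is periodic
```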

§  If there exists a chain of steps between any two states that has positive probability, then the Markov chain is known as irreducible. Irreducibility is therefore the property that, regardless of the present state, we can reach any other state in finite time. §  A state in a Markov chain is known as recurrent or transient depending on whether the Markov chain will eventually return to it. If a recurrent state is expected to be returned to within a finite number of steps, it is known as positive recurrent; if not, it is known as null recurrent. §  The absorption theorem concerns an arbitrary absorbing Markov chain. Renumber the states so that the transient states come first.

If there are r absorbing states and t transient states, the transition matrix has the canonical form

P = | Q  R |
    | 0  I |

Here I is an r-by-r identity matrix, 0 is an r-by-t zero matrix, R is a nonzero t-by-r matrix, and Q is a t-by-t matrix. In an absorbing Markov chain, the probability that the process will be absorbed is 1 (Gorban, 2018). §  A state is ergodic if it is positive recurrent and aperiodic.
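For the absorbing case just described, the canonical-form blocks also give a standard way to compute absorption probabilities via the fundamental matrix N = (I − Q)^(-1). The Q and R below are hypothetical illustrative values:

```python
import numpy as np

# Canonical-form blocks for a hypothetical absorbing chain with t = 2
# transient states and r = 1 absorbing state (illustrative numbers only).
Q = np.array([[0.5, 0.3],
              [0.2, 0.4]])   # transient -> transient
R = np.array([[0.2],
              [0.4]])        # transient -> absorbing

# Fundamental matrix N = (I - Q)^(-1): N[i, k] is the expected number of
# visits to transient state k when starting from transient state i.
N = np.linalg.inv(np.eye(2) - Q)

# B = N R holds the absorption probabilities; with a single absorbing state
# every entry is 1, matching "the process is absorbed with probability 1".
B = N @ R
print(B)
```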

A Markov chain is ergodic if all of its states are ergodic.

2 Time-continuous Markov chain

A continuous-time Markov chain (Tolver, 2016) on a finite or countable set S is a family of random variables (X_t)_{t ≥ 0} on a probability space (Ω, F, P) such that

P(X_{t_{n+1}} = j | X_{t_n} = i, X_{t_{n-1}} = i_{n-1}, …, X_{t_0} = i_0) = P(X_{t_{n+1}} = j | X_{t_n} = i)

for j, i, i_{n-1}, …, i_0 ∈ S and t_{n+1} > t_n > … > t_0 ≥ 0. The distribution of the Markov chain is determined by its initial distribution together with the transition probabilities, through the identity p_ij(s, t) = P(X_t = j | X_s = i). The finite-state Markov chain has been used in the context of life insurance and provides a model for various kinds of risk to the individual's life. This could be the loss of income due to unemployment or disability. There could also be the risk of increasing mortality intensity in connection with stochastic deterioration of health, and the risk connected with one's dependants following from e.g. marriage or parenthood.
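One way to make the continuous-time chain concrete is to simulate it by the standard jump-and-hold construction: stay in a state for an exponential holding time, then jump with probability proportional to the exit intensities. The states and rates below are illustrative stand-ins for the insurance states mentioned above, not values from any real model:

```python
import random

# Illustrative intensity structure (0 = active, 1 = disabled, 2 = dead);
# the off-diagonal rates mu_ij are made-up values.
LAMBDA = {0: {1: 0.10, 2: 0.02},
          1: {0: 0.05, 2: 0.05},
          2: {}}               # no exits: an absorbing state

def simulate(start, horizon, seed=42):
    """Jump-and-hold simulation: hold an exponential time with the total
    exit rate, then jump with probability proportional to each rate."""
    rng = random.Random(seed)
    t, state = 0.0, start
    path = [(0.0, start)]
    while LAMBDA[state]:
        total = sum(LAMBDA[state].values())
        t += rng.expovariate(total)          # exponential holding time
        if t >= horizon:
            break
        targets, rates = zip(*LAMBDA[state].items())
        state = rng.choices(targets, weights=rates)[0]
        path.append((t, state))
    return path

print(simulate(0, 50.0))  # list of (jump time, new state) pairs
```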

3 Poisson Process

A Poisson process with rate λ is a Markov process with intensity matrix

Λ = | −λ   λ   0  … |
    |  0  −λ   λ  … |
    |  0   0  −λ  … |

4 The Kolmogorov forward differential equations

With the occupancy probabilities mentioned above, let t_p_x^ij denote the probability that a life in state i at age x is in state j at age x + t. For the interval from t to t + h, we differentiate by splitting the ways of being in state j at time t + h:

§  either the life is in state j at time t and keeps that state over (t, t + h);
§  or the life does not just stay in state j, which means that it leaves and returns to the state; such paths involve two or more transitions and have probability o(h).

With the probabilities t_p_x^ij known, there is a system for analysing these equations, called the Kolmogorov forward differential equations:

d/dt t_p_x^ij = Σ_{k ≠ j} t_p_x^ik μ_{x+t}^kj − t_p_x^ij Σ_{k ≠ j} μ_{x+t}^jk   (for all i and j in S)

5 Markov chain Models for Insurance

5.1 Survival Model

5.1.1 Transition Diagram

There are two states in this model, 'ALIVE' and 'DEAD'. The 'arrow' here indicates the transition.

The weight of the 'arrow' is the force of transition, so the greater the number μ_x is, the more likely the state 'ALIVE' is to pass into the state 'DEAD'. μ_x is the force pushing from alive to dead at age x.

FORCE OF MORTALITY:

μ_x = lim_{h→0+} (1/h) P(x < T ≤ x + h | T > x),

where T represents the time at which someone dies.

5.1.2 Probabilities

Firstly, there are two types of probabilities within Markov models.

Transition probability:
§  the probability of moving from one state to another state.

Occupancy probability:
§  the probability of staying in the same state.

5.1.3 Assumptions

1. Markov Assumption (Markov Property). Probabilities depend only on the current state. The strength is that all information is contained in the present state, so the past does not influence the present as it pushes on to the future.

2. The probability that a life alive at age x is dead by age x + h is μ_x h + o(h) for small values of h.

3. μ_{x+t} is a constant μ for 0 ≤ t < 1: the force of mortality is constant throughout each year of age, where μ = μ_{x+s} for all 0 ≤ s < 1.

5.1.4 Kolmogorov forward differential equation

By the Markov Assumption: (t+h)_p_x = t_p_x · h_p_{x+t}.
By Assumption 2: h_p_{x+t} = 1 − μ_{x+t} h + o(h).
Rearranging: ((t+h)_p_x − t_p_x)/h = −μ_{x+t} t_p_x + o(h)/h.
Taking h → 0: d/dt t_p_x = −μ_{x+t} t_p_x,

which has the solution t_p_x = exp(−∫_0^t μ_{x+s} ds), equal to e^(−μt) under Assumption 3.
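A minimal numerical check of this solution, assuming an illustrative constant force μ = 0.02 (not taken from any real life table): the closed form e^(−μt) should match an Euler integration of the forward equation.

```python
import math

# Assumption 3 with an illustrative constant force mu = 0.02:
# the forward equation d/dt tpx = -mu * tpx has solution tpx = exp(-mu * t).
mu, t = 0.02, 1.0
tpx_exact = math.exp(-mu * t)

# Euler integration of the forward equation as an independent check.
steps = 10_000
h, p = t / steps, 1.0
for _ in range(steps):
    p += h * (-mu * p)

print(tpx_exact, p)  # the two values agree to several decimal places
```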

5.2 The Disability/Unemployment Model

5.2.1 Transition Diagram

The disability/unemployment model is an example of a three-state model, as opposed to a two-state model such as the survival model. The difference between them is that the survival model only considers the insured to be either alive or dead, whereas the disability/unemployment model considers the fact that the insured may in the future become unemployed or unable to work because of illness. The three states in this model are therefore: state 0, "active/employed", in which the insured is working, healthy and has disposable income; state 1, "disabled/unemployed", in which the insured has become ill or unable to work and therefore has no income coming in; and state 2, "dead", which represents that the insured has died. This model is for individuals that are insured under a disability income policy.

The premiums are payable while the insured is in state 0, since the insured is then working and healthy; they pay the premiums against the moment when they are no longer healthy or employed. The benefits are payable while the insured is in state 1 (after a waiting period).

5.3 Multiple State Model

In order to refine the above states, the multiple state model was adopted. This helps to generalise to more involved and demanding problems, and forms the grounds for other insurance problems.
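The Kolmogorov forward differential equations of section 4 can be solved numerically for this three-state model. A minimal Euler sketch follows; the constant intensities are illustrative placeholders, not calibrated to any real policy:

```python
# Euler solution of the Kolmogorov forward equations for the three-state
# model (0 = active/employed, 1 = disabled/unemployed, 2 = dead).
# The constant intensities MU[(i, j)] are illustrative placeholders.
MU = {(0, 1): 0.10, (1, 0): 0.05,   # falling ill and recovering
      (0, 2): 0.02, (1, 2): 0.05}   # mortality out of each live state

STATES = (0, 1, 2)

def occupancy(start, t, steps=10_000):
    """Return p[j] = P(in state j at time t | in `start` at time 0)."""
    h = t / steps
    p = [1.0 if j == start else 0.0 for j in STATES]
    for _ in range(steps):
        # forward equation: inflow from other states minus total outflow
        dp = [sum(p[i] * MU.get((i, j), 0.0) for i in STATES if i != j)
              - p[j] * sum(MU.get((j, k), 0.0) for k in STATES if k != j)
              for j in STATES]
        p = [pj + h * dpj for pj, dpj in zip(p, dp)]
    return p

probs = occupancy(start=0, t=10.0)
print(probs)  # the three probabilities sum to 1
```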

In the two-state model, only two states of the insured are looked at: alive and dead. In the more general multiple state model there is an arbitrary but finite number of states, S = {1, 2, …, n}. For any two distinct states, i and j, the probability of making a transition from state i to state j at age x + t is governed by the transition intensity μ_{x+t}^ij.

This statement holds by making the following assumptions:

·  All intensities μ_{x+t}^ij depend only on the current age x + t and not on any other aspect of the life's past history.
·  The probability of making a transition from i to j before age x + t + dt, conditional on being in state i at age x + t, is μ_{x+t}^ij dt + o(dt).
·  The probability of making any two or more transitions in time dt is o(dt).
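The second assumption can be illustrated numerically: for a single intensity μ (an illustrative value of 0.1 below), the exact one-jump probability 1 − e^(−μ dt) differs from μ dt by a remainder that vanishes faster than dt, which is exactly what o(dt) asserts.

```python
import math

# Numerical illustration of the o(dt) statement: for intensity mu,
# 1 - exp(-mu * dt) = mu * dt + o(dt), so the error divided by dt -> 0.
mu = 0.1
ratios = []
for dt in (0.1, 0.01, 0.001):
    exact = 1.0 - math.exp(-mu * dt)     # true transition probability
    linear = mu * dt                      # first-order approximation
    ratios.append(abs(exact - linear) / dt)
print(ratios)  # decreasing toward 0: the remainder is genuinely o(dt)
```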

