QUESTION 1

We redo Example 2.2.5 on page 25 of the book Modern Actuarial Risk Theory by Rob Kaas, Marc Goovaerts and Michel Denuit (2001). The book can be found on various websites; here is one link: (https://faculty.ksu.edu.sa/sites/default/files/modern_acturial_risk_theory.pdf)

Consider an insurance policy against a liability loss S. We want to determine the expected value, the variance and the distribution function of the payment X on this policy, when there is a deductible of 100 and a maximum payment of 1000. In other words, if S ≤ 100 then X = 0, if S ≥ 1100 then X = 1000, and otherwise X = S − 100. The probability of a positive claim (S > 100) is 10% and the probability of a large loss (S ≥ 1100) is 2%. Given 100 < S < 1100, S has a uniform(100, 1100) distribution.

\[\left\{ \begin{array}{ll} s=liability\ loss \\ \ \\ x=payment\ (of\ insurance\ policy ) \end{array} \right.\]

The relationship between X and S on the different intervals:

\(if\ s\leq100,\ then\ x=0\)

\(if\ 100<s<1100,\ then\ x=s-100\ and\ 0<X<1000\)

\(if\ s \geq1100,\ then\ x=1000\)

The probabilities of S on the different intervals:

\(P(s>100)=0.1\)

\(P(s\leq100)=0.9\)

\(P(s \geq1100)=0.02\) (given: the probability of a large loss)

\(P(100 < s <1100)=P(s>100)-P(s \geq1100)=0.1-0.02=0.08\)

The distribution of S over (100, 1100):

\(S\sim U(100,1100)\)

\(f(s)=\frac{0.08}{1000}=0.00008, \ if\ 100<s<1100\)

Analysis: From the information above, X splits into three disjoint pieces: \(\{X=0\}\), \(\{0<X<1000\}\), and \(\{X=1000\}\), and X has a different type of distribution on each.

  • If \(x=0\): \(P(X=0)=P(S\leq100)=0.9\), so X is a \(\underline{discrete\ random\ variable}\) here and has a \(\underline{point\ mass}\) at 0.
  • If \(0<x<1000\): \(P(0<X<1000)=P(100<S<1100)=0.08\). S follows a uniform distribution over (100, 1100) on this event, and since X is a linear function of S there, X is a \(\underline{continuous\ random\ variable}\) with a \(\underline{uniform\ distribution}\); the area under that horizontal density segment equals \(P(100<S<1100)=0.08\).
  • If \(x=1000\): \(P(X=1000)=P(S\geq1100)=0.02\), so X is a \(\underline{discrete\ random\ variable}\) here and has a \(\underline{point\ mass}\) at 1000.

Calculating the distribution of X over \(0<X<1000\)

since \(S\sim U(100,1100)\) given \(100<S<1100\), and \(X=S-100\), we have \(100<S<1100 \leftrightarrow 0<X<1000\). For \(0\leq x<1000\), \[CDF: F_X(x)=P(X \leq x)=P(S \leq x+100)=P(S\leq100)+\int_{100}^{x+100} f(s)\,ds =0.9+\int_{100}^{x+100} 0.00008\,ds =0.9+0.00008x \] \[PDF\ (continuous\ part): f_X(x)=\frac{dF_X(x)}{dx}=0.00008\]
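As a quick sanity check before plotting (a minimal simulation sketch; the object names m, s.sim and x.sim are ours), we can confirm the point masses and this CDF:

#Simulate the mixed distribution of X
set.seed(123)
m <- 100000
u <- runif(m)
s.sim <- ifelse(u < 0.9, 100,                          #no positive claim: S <= 100, X = 0
         ifelse(u < 0.98, runif(m, 100, 1100), 1100))  #moderate loss U(100,1100); else large loss
x.sim <- pmin(pmax(s.sim - 100, 0), 1000)              #apply deductible 100 and cap 1000
mean(x.sim == 0)      #approximately 0.9
mean(x.sim == 1000)   #approximately 0.02
mean(x.sim <= 500)    #approximately F_X(500) = 0.9 + 0.00008*500 = 0.94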

Plotting

#PDF(X)

plot(NA, NA, yaxt = "n", xlim = c(-50, 1100), ylim = c(0, 0.0002), xlab = "X", ylab = "", main = "mixture pdf(x)")
arrows(0, 0, 0, 0.00019, length = 0.15, angle = 30, lty = 1, lwd = 2, col = "red") #first arrow: the point mass at x = 0 (arrow heights are schematic)
points(0, 0, pch = 19, col = "red")
segments(0, 0.00008, 1000, 0.00008) #the uniform density when 0 < x < 1000
points(0, 0.00008)
points(1000, 0.00008)
arrows(1000, 0, 1000, 0.00014, length = 0.15, angle = 30, lty = 1, lwd = 2, col = "red") #second arrow: the point mass at x = 1000
points(1000, 0, pch = 19, col = "red")
text(0, 0.0002, "P(x=0)=0.9", col = "red")
text(1000, 0.00015, "P(x=1000)=0.02", col = "red")
#label the values on the y axis
ylabel <- seq(0, 0.0002, by = 0.00001)
axis(2, at = ylabel, las = 1)

#CDF(x)
plot(NA, NA, yaxt = "n", xlim = c(-100, 1100), ylim = c(0.8, 1.1), xlab = "X", ylab = "", main = "mixture CDF(x)")
points(0, 0.9, pch = 19)      #jump to 0.9 at x = 0 (closed point)
segments(0, 0.9, 1000, 0.98)  #linear part: F(x) = 0.9 + 0.00008x
points(1000, 0.98)            #open point just below x = 1000
points(1000, 1, pch = 19)     #jump to 1 at x = 1000 (closed point)
points(0, 0.8)                #stands in for (0, 0): the axis is cropped below 0.8
text(0, 0.817, "(0,0)")
arrows(1000, 1, 1100, 1, length = 0.15, angle = 30, lty = 1, lwd = 2)  #F(x) = 1 for x > 1000
arrows(0, 0.8, -90, 0.8, length = 0.15, angle = 30, lty = 1, lwd = 2)  #F(x) = 0 for x < 0
ylabel <- seq(0.9, 1.1, by = 0.02)
axis(2, at = ylabel, las = 1)

Calculating the mean and variance

\(Contribution\ of\ x=0:\ 0*0.9=0\)

\(Contribution\ of\ 0<x<1000:\ \int_{0}^{1000} x\,f(x)\,dx=\int_{0}^{1000} 0.00008x\,dx=40\)

\(Contribution\ of\ x=1000:\ 1000*0.02=20\)

\(E(x)=(contribution\ of\ x=0)+(contribution\ of\ 0<x<1000)+(contribution\ of\ x=1000)\)

\(E(x)=0+40+20=60\)

\(E(x^2)=0^2*0.9+\int_{0}^{1000}x^2f(x)\,dx+1000^2*0.02=26666.67+20000=46666.67\)

\(V(x)=E(x^2)-E^2(x)=46666.67-60^2=43066.67\)
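As a numerical cross-check in R (a minimal sketch using base R's integrate; the object names are ours), we integrate the continuous part and add the point-mass contributions:

#Check E(X) and Var(X) numerically
EX  <- integrate(function(x) 0.00008*x,   0, 1000)$value + 1000*0.02    #continuous part + point mass at 1000
EX2 <- integrate(function(x) 0.00008*x^2, 0, 1000)$value + 1000^2*0.02
EX         #approximately 60
EX2 - EX^2 #approximately 43066.67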


QUESTION 2

 Simulate the data (n=1000) from a lognormal distribution with the parameters of your choice. Perform the following tasks:
set.seed(12345) #Set the seed
simDataFrame<-data.frame(rlnorm(1000,0,1))#Generate the data(n=1000) from the lognormal distribution
colnames(simDataFrame)=c("value")#Rename the column for convenience
head(simDataFrame )
##       value
## 1 1.7959405
## 2 2.0329054
## 3 0.8964585
## 4 0.6354021
## 5 1.8328781
## 6 0.1623573
summary(simDataFrame)
##      value         
##  Min.   : 0.06214  
##  1st Qu.: 0.55103  
##  Median : 1.04731  
##  Mean   : 1.72158  
##  3rd Qu.: 1.99086  
##  Max.   :27.95884
var(simDataFrame)
##          value
## value 4.800739

\[if\ x\sim lognorm(\mu=0,\sigma^2=1),\ E(x)=e^{\mu+\frac{\sigma^2}{2}},\ var(x)=(e^{\sigma^2}-1)e^{2\mu+\sigma^2}\]

Hence \(E(x)=1.64872\) and \(var(x)=4.67077\).
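These theoretical values can be verified directly in R by plugging \(\mu=0,\sigma^2=1\) into the formulas:

#Theoretical moments for meanlog = 0, sdlog = 1
exp(0 + 1/2)                #E(x)   = 1.648721
(exp(1) - 1)*exp(2*0 + 1)   #var(x) = 4.670774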

a) Plot the histogram of your data in R

library(ggplot2) #assuming ggplot2 is available
ggplot(simDataFrame)+
  geom_histogram(aes(value),color="#e9ecef", alpha=0.6)
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.

b) Estimate the LN-parameters using the Method of Moments. Show your derivations.

Derivations.

#Calculate the m1
n <- length(simDataFrame$value)
m1 <- mean(simDataFrame$value)
m1 
## [1] 1.721577

\[E(x)=e^{\mu+\frac{\sigma^2}{2}}=m_1=\frac{1}{n}\sum_{i=1}^{n}x_{i}=1.721577\]

\[e^{(\mu+\frac{\sigma^2}{2})}=1.721577\] \[ln(1.721577)=\mu+\frac{\sigma^2}{2}\]

#Calculate variance
m2 <- sum(simDataFrame$value^2)/n
m2
## [1] 7.759766
s2 <- m2 - m1^2
s2
## [1] 4.795938

\[E(x^2)=m^{'}_2=\frac{1}{n}\sum_{i=1}^{n}x^2_{i}=7.759766\]

\[var(x)=(e^{\sigma^2}-1)e^{2\mu+\sigma^2}=E(x^2)-[E(x)]^2=7.759766-1.721577^2=4.79593\] \[e^{\mu+\frac{\sigma^2}{2}}=1.721577\] Therefore we have two equations: \[\left\{ \begin{array}{ll} Var(x)=(e^{\sigma^2}-1)e^{2\mu+\sigma^2}=4.79593 \\ \ \\ E(x)=e^{\mu+\frac{\sigma^2}{2}}=1.721577 \end{array} \right.\]

Recall the theoretical values \(E(x)=1.64872\) and \(var(x)=4.67077\); the sample moments above are very close to them.

  • How to solve these two equations for \(\mu\) and \(\sigma^2\)

We first express \(\mu\) in terms of \(\sigma^2\) using the mean equation:

\(e^{\mu+\frac{\sigma^2}{2}}=1.721577 \iff \ln 1.721577=\mu+\frac{\sigma^2}{2} \iff \mu=\ln 1.721577-\frac{\sigma^2}{2}\)

Substituting into the variance equation, the exponent simplifies: \(2\mu+\sigma^2=2\ln 1.721577\), so

\((e^{\sigma^2}-1)e^{2\mu+\sigma^2}=(e^{\sigma^2}-1)(1.721577)^2=4.79593\)

Solving, \(e^{\sigma^2}=1+\frac{4.79593}{1.721577^2}=2.61816\), hence \(\sigma^2=\ln 2.61816=0.9625\) and \(\mu=\ln 1.721577-\frac{0.9625}{2}=0.0620\). Therefore, we can conclude that

\[\left\{ \begin{array}{ll} \hat{\mu}_{MM}=0.0620\\ \ \\ \hat{\sigma}^2_{MM}=0.9625 \end{array} \right.\]
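Equivalently, in R (reusing the m1 and s2 computed above; the .mm names are ours):

#Closed-form method-of-moments estimates for the lognormal
sigma2.mm <- log(1 + s2/m1^2)        #from Var/E^2 = exp(sigma^2) - 1
mu.mm     <- log(m1) - sigma2.mm/2
c(mu.mm, sigma2.mm)                  #approximately 0.0620 and 0.9625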

c) Estimate the LN-parameters using the Maximum Likelihood Estimation. Show your derivations.

Recall the LN density: \[f(x)=\frac{1}{x\sigma\sqrt{2\pi}}e^{-\frac{(\ln x-\mu)^2}{2\sigma^2}}\] First, the likelihood is \[L(x;\mu,\sigma^2)=\prod_{i=1}^{n}f(x_i)=\prod_{i=1}^{n}\frac{1}{x_i\sigma\sqrt{2\pi}}e^{-\frac{(\ln x_i-\mu)^2}{2\sigma^2}}=\left(\frac{1}{\sqrt{2\pi\sigma^2}}\right)^n\frac{1}{\prod_{i=1}^{n}x_i}\,e^{-\frac{1}{2\sigma^2}\sum^n_{i=1}(\ln x_i-\mu)^2}\]

then, take the log of both sides: \[\ell(x;\mu,\sigma^2)=\ln L(x;\mu,\sigma^2)=\ln\left[\left(\frac{1}{\sqrt{2\pi\sigma^2}}\right)^n\frac{1}{\prod_{i=1}^{n}x_i}\,e^{-\frac{1}{2\sigma^2}\sum^n_{i=1}(\ln x_i-\mu)^2}\right]\] Expanding \(\frac{\sum^n_{i=1}(\ln x_i-\mu)^2}{2\sigma^2}=\frac{\sum (\ln x_i)^2}{2\sigma^2}-\frac{\mu\sum \ln x_i}{\sigma^2}+\frac{n\mu^2}{2\sigma^2}\), we get \[\ell(x;\mu,\sigma^2)=-\frac{n}{2}\ln(2\pi\sigma^2)-\sum^n_{i=1}\ln x_i-\frac{\sum^n_{i=1} (\ln x_i)^2}{2\sigma^2}+\frac{\mu\sum^n_{i=1} \ln x_i}{\sigma^2}-\frac{n\mu^2}{2\sigma^2}\]

Let’s compute the MLE of \(\mu\) (\(\hat\mu\))

\[\frac{\partial\ell(x;\mu,\sigma^2)}{\partial\mu}=\frac{\partial}{\partial\mu}\left[\frac{\mu\sum^n_{i=1} \ln x_i}{\sigma^2}-\frac{n\mu^2}{2\sigma^2}\right]=0\] \[\frac{\sum \ln x_i}{\sigma^2}=\frac{n\hat\mu}{\sigma^2}\] \[n\hat\mu=\sum \ln x_i\] \[\hat\mu=\frac{\sum^n_{i=1}\ln x_i}{n}\]

Then, let’s compute the MLE of \(\sigma^2\) (\(\hat\sigma^2\))

\[\frac{\partial\ell(x;\mu,\sigma^2)}{\partial\sigma^2}=\frac{\partial}{\partial\sigma^2}\left[-\frac{n}{2}\ln(2\pi\sigma^2)-\frac{\sum^n_{i=1} (\ln x_i)^2}{2\sigma^2}+\frac{\mu\sum^n_{i=1} \ln x_i}{\sigma^2}-\frac{n\mu^2}{2\sigma^2}\right]=0\]

\[\frac{\sum(\ln x_i-\hat\mu)^2}{2(\hat\sigma^2)^2}=\frac{n}{2\hat\sigma^2}\] \[n=\frac{\sum (\ln x_i-\hat\mu)^2}{\hat\sigma^2}\] \[\hat\mu=\frac{\sum^n_{i=1}\ln x_i}{n},\qquad \hat\sigma^2=\frac{1}{n}\sum^n_{i=1}(\ln x_i-\hat\mu)^2\] \[\hat\sigma^2=\frac{1}{n}\sum^n_{i=1}\left(\ln x_i-\frac{\sum^n_{j=1}\ln x_j}{n}\right)^2\]

Eventually, we get the two MLEs \[\left\{ \begin{array}{ll} \hat\sigma^2=\frac{1}{n}\sum^n_{i=1}(\ln x_i-\hat\mu)^2 \\ \ \\ \hat\mu=\frac{\sum^n_{i=1}\ln x_i}{n} \end{array} \right.\]

#Calculating these two MLEs in R
mu <- sum(log(simDataFrame$value))/n
mu
## [1] 0.04619816
sigma2 <- sum((log(simDataFrame$value) - mu)^2)/n
sigma2
## [1] 0.9964989

From R we get \[\left\{ \begin{array}{ll} \hat{\sigma}^2_{MLE}=\frac{1}{n}\sum^n_{i=1}(\ln x_i-\hat\mu)^2=0.9964989 \\ \ \\ \hat\mu_{MLE}=\frac{\sum^n_{i=1}\ln x_i}{n}=0.04619816 \end{array} \right.\]

We can also calculate the MLEs another way, by numerically minimizing the negative log-likelihood:

logL1 <- function(pars, y){
  mu <- pars[1]
  s  <- pars[2]
  #negative log-likelihood of the lognormal sample
  -sum(dlnorm(y, mu, s, log = TRUE))
}

#Find MLEs
pars <- c(0, 1)
Q1 <- optim(par = pars, fn = logL1, method = "Nelder-Mead", y=simDataFrame$value, hessian = TRUE)
Q1   
## $par
## [1] 0.04613091 0.99831992
## 
## $value
## [1] 1463.383
## 
## $counts
## function gradient 
##       43       NA 
## 
## $convergence
## [1] 0
## 
## $message
## NULL
## 
## $hessian
##              [,1]         [,2]
## [1,] 1003.3686547    0.1351738
## [2,]    0.1351738 2006.3225285

We can see that the answer is very close to the one calculated before: optim returns \(\hat\sigma=0.99832\), i.e. \(\hat\sigma^2=0.99832^2\approx0.99664\), and \(\hat\mu=0.04613\).
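Since hessian = TRUE was requested, the inverse Hessian also gives approximate standard errors for the two parameters (an optional extra check, not required by the question):

#Approximate standard errors from the observed information matrix
sqrt(diag(solve(Q1$hessian)))  #roughly 0.0316 (meanlog) and 0.0223 (sdlog)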

d) Plot the density using the estimates in b); repeat using the estimates in c). Overlay these two densities on your original histogram in a). Discuss your findings.

From (b) \[\left\{ \begin{array}{ll} \hat{\mu}_{MM}=0.0620\\ \ \\ \hat{\sigma}^2_{MM}=0.9625 \end{array} \right.\] From (c) \[\left\{ \begin{array}{ll} \hat{\sigma}^2_{MLE}=0.9964989 \\ \ \\ \hat\mu_{MLE}=0.04619816 \end{array} \right.\]

## Density plot using estimates in b)
options(scipen=10000)
hist(simDataFrame$value, breaks=100, xlim=c(0, 15), ylim=c(0,1), prob=TRUE, ylab=NULL, xlab="value", main="Fit model (b) with data") 
curve(dlnorm(x, 0.0620, sqrt(0.9625)), col="red", lwd=2, add=TRUE) #from (b): meanlog = 0.0620, sdlog = sqrt(0.9625); dlnorm takes the SD of log(x), not the variance

## Density plot using estimates in c)
options(scipen=10000)
hist(simDataFrame$value, breaks=100, xlim=c(0, 15), ylim=c(0,1), prob=TRUE, ylab=NULL, xlab="value", main="Fit model (c) with data") 
curve(dlnorm(x, 0.04619816, sqrt(0.9964989)), col="darkblue", lwd=2, add=TRUE) #from (c): meanlog = 0.04619816, sdlog = sqrt(0.9964989)

DISCUSSION: From the plots, both fitted densities track the histogram of the simulated data closely, and the MM and MLE fits are nearly indistinguishable from each other, as expected given how similar their parameter estimates are.
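Since the question asks for the two densities overlaid on the same histogram, here is a combined version of the two plots above (same estimates):

#Overlay both fitted densities on one histogram
hist(simDataFrame$value, breaks=100, xlim=c(0, 15), ylim=c(0,1), prob=TRUE, xlab="value", main="MM (red) vs MLE (blue) fits")
curve(dlnorm(x, 0.0620, sqrt(0.9625)), col="red", lwd=2, add=TRUE)             #MM fit from (b)
curve(dlnorm(x, 0.04619816, sqrt(0.9964989)), col="darkblue", lwd=2, add=TRUE) #MLE fit from (c)
legend("topright", legend=c("MM", "MLE"), col=c("red", "darkblue"), lwd=2)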

e) Calculate the Bias and MSE for both estimators computed in b) and c). Present your summary of the results in a table format. Discuss your findings.

From (b) \[\left\{ \begin{array}{ll} \hat{\mu}_{MM}=0.0620\\ \ \\ \hat{\sigma}^2_{MM}=0.9625 \end{array} \right.\] From (c) \[\left\{ \begin{array}{ll} \hat{\sigma}^2_{MLE}=0.9964989 \\ \ \\ \hat\mu_{MLE}=0.04619816 \end{array} \right.\]

*For MM in (b)

Since the true parameters are known here (\(\mu=0,\sigma^2=1\)), we approximate the bias of each estimator by the deviation of its single-sample estimate from the truth, and the MSE by the squared deviation. (Strictly, bias and MSE are expectations over repeated samples; one sample only gives a rough estimate of them.)

\(Bias_{\hat{\mu}_{MM}}(\mu)=0.0620-0=0.0620\)

\(MSE_{\hat{\mu}_{MM}}(\mu)=(0.0620-0)^2=0.003844\)

\(Bias_{\hat{\sigma}^2_{MM}}(\sigma^2)=0.9625-1=-0.0375\)

\(MSE_{\hat{\sigma}^2_{MM}}(\sigma^2)=(0.9625-1)^2=0.001406\)

*For MLE in (c)

Applying the same single-sample approximation to the MLEs:

\(Bias_{\hat{\mu}_{MLE}}(\mu)=0.04619816-0=0.04619816\)

\(MSE_{\hat{\mu}_{MLE}}(\mu)=(0.04619816-0)^2=0.00213427\)

\(Bias_{\hat{\sigma}^2_{MLE}}(\sigma^2)=0.9964989-1=-0.0035011\)

\(MSE_{\hat{\sigma}^2_{MLE}}(\sigma^2)=(0.9964989-1)^2=0.0000122577\)

Input =("
           u_MM        u_MLE         σ2_MM       σ2_MLE
Bias       0.0620      0.04619816    -0.0375     -0.0035011
MSE        0.003844    0.00213427    0.001406    0.0000122577              
")

Matrix.1 = as.matrix(read.table(textConnection(Input),
                     header=TRUE,
                     row.names=1))

Matrix.1
##          u_MM      u_MLE     σ2_MM        σ2_MLE
## Bias 0.062000 0.04619816 -0.037500 -0.0035011000
## MSE  0.003844 0.00213427  0.001406  0.0000122577

Discussion: From the table above, both the bias and the MSE are smaller for the MLE than for the MM estimator, for both \(\mu\) and \(\sigma^2\), in this sample. Both sets of estimates land close to the true values \((\mu=0,\sigma^2=1)\); however, a single sample cannot establish unbiasedness — that would require repeating the simulation many times and averaging the estimates.

f) Calculate the efficiency of the estimator in c) relative to that in b) for both parameters of interest. Discuss your findings.

  • \(eff(\hat{\mu}_{MM},\hat{\mu}_{MLE})\)

Relative efficiency is defined for unbiased estimators as a ratio of variances. With our single-sample approximations we have \(MSE=Bias^2\) exactly, so \(Var=MSE-Bias^2\) would be identically zero; instead we compare the approximate MSEs directly:

\(eff(\hat{\mu}_{MM},\hat{\mu}_{MLE})=\frac{MSE_{\hat{\mu}_{MLE}}(\mu)}{MSE_{\hat{\mu}_{MM}}(\mu)}=\frac{0.00213427}{0.003844}=0.5552\)

\(eff(\hat{\sigma}^2_{MM},\hat{\sigma}^2_{MLE})=\frac{MSE_{\hat{\sigma}^2_{MLE}}(\sigma^2)}{MSE_{\hat{\sigma}^2_{MM}}(\sigma^2)}=\frac{0.0000122577}{0.001406}=0.0087\)

Discussion: Both ratios are below 1, so in this sample the MLE attains a smaller approximate MSE than the method of moments for both parameters; the advantage is particularly large for \(\sigma^2\).


QUESTION 3 Suppose you want to test the following hypotheses: \[H_0: \mu = 30{,}000 \quad vs. \quad H_a: \mu \neq 30{,}000\]

a): Define a rejection region for a given α-level. Assume σ=5,000. All derivations must be included in the report. [Note: the sample size is small!]

n = 15
a1 = 0.05 #first alpha level
a2 = 0.01 #second alpha level

qt(0.975, n-1)
## [1] 2.144787
qt(0.995, n-1)
## [1] 2.976843
Rejection region for \(\alpha=0.05\) with \(n=15\):

\[RR: \{t<-2.145\}\cup\{t>2.145\}\] \[RR: \{\frac{\bar{X}-\mu_0}{S/\sqrt{n}}<-2.145\}\cup\{\frac{\bar{X}-\mu_0}{S/\sqrt{n}}>2.145\}\] \[RR: \{\bar{X}<-2.145*\frac{S}{\sqrt{n}}+\mu_0\}\cup\{\bar{X}>2.145*\frac{S}{\sqrt{n}}+\mu_0\}\] \[-2.145*\frac{5000}{\sqrt{15}}+30000=27231.09\] \[2.145*\frac{5000}{\sqrt{15}}+30000=32768.91\] \[RR:\{\bar{X}<27231.09\}\cup\{\bar{X}>32768.91\}\]

Rejection region for \(\alpha=0.01\) with \(n=15\):

\[RR: \{t<-2.977\}\cup\{t>2.977\}\] \[RR: \{\frac{\bar{X}-\mu_0}{S/\sqrt{n}}<-2.977\}\cup\{\frac{\bar{X}-\mu_0}{S/\sqrt{n}}>2.977\}\] \[RR: \{\bar{X}<-2.977*\frac{S}{\sqrt{n}}+\mu_0\}\cup\{\bar{X}>2.977*\frac{S}{\sqrt{n}}+\mu_0\}\] \[-2.977*\frac{5000}{\sqrt{15}}+30000=26156.91\] \[2.977*\frac{5000}{\sqrt{15}}+30000=33843.09\] \[RR:\{\bar{X}<26156.91\}\cup\{\bar{X}>33843.09\}\]
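Both pairs of cutoffs can be computed directly in R:

#Rejection-region bounds: mu0 +/- t_(1-alpha/2, n-1) * sigma/sqrt(n)
30000 + c(-1, 1)*qt(0.975, 15 - 1)*5000/sqrt(15)  #alpha = 0.05: 27231.09, 32768.91
30000 + c(-1, 1)*qt(0.995, 15 - 1)*5000/sqrt(15)  #alpha = 0.01: 26156.91, 33843.09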

b) Construct two Power Curves for various values of β- Type II error rate. These curves must be on the same plot. [Note: you will need to consider various values for the sample mean.]

lower1 = 27231.09
upper1 = 32768.91
lower2 = 26156.91
upper2 = 33843.09 

s = 5000
n = 15

powerfun <- function(mu, n, sd, lower, upper) {
  #t-statistics of the two cutoffs under the alternative mean mu
  lower.stat <- (lower - mu)/sd*sqrt(n)
  upper.stat <- (upper - mu)/sd*sqrt(n)
  #power = P(reject) = P(fall below lower cutoff) + P(fall above upper cutoff)
  pt(lower.stat, n - 1) + (1 - pt(upper.stat, n - 1))
}

domain <- seq(20000, 40000, 1)

ggplot() +
  geom_path(aes(x=domain, y=powerfun(domain, 15, s, lower1, upper1)), size=0.8, color="royalblue") +
  geom_path(aes(x=domain, y=powerfun(domain, 15, s, lower2, upper2)), size=0.8, color="red") +
  labs(y="Power", x=expression("mu under H"["a"]), title="Power Function (red: a=0.01, blue: a=0.05)")

c) Reflect on your findings. How does the α-level impact the shape of the power curve? How does power change with α? How does power change with β? A minimum of 5 full sentences is required for full credit.

Above, we see the power curves for this t-test with 15 observations at alpha levels 0.01 (in red) and 0.05 (in blue). Note that the 0.05 curve is an upper bound for the 0.01 curve, meaning that the 0.05 level test has higher power than the 0.01 level test under the same alternative hypothesis. If we were to choose a higher significance level (say, 0.1), we would see that curve lie above the 0.05 curve.

For both significance levels, the minimum of the power curve equals the alpha level of the test (0.01 and 0.05 respectively); that is the probability of rejecting the null hypothesis when the true mean is 30,000. The curves then approach 1 as the alternative mean moves away from the center. Since power \(=1-\beta\), power and the Type II error rate move in opposite directions: a larger β means lower power, and choosing a smaller α (with everything else fixed) raises β at every alternative.
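As a concrete illustration (reusing powerfun and the cutoffs lower1, upper1, lower2, upper2 from part b)), the power at an alternative mean of 33,000:

#Power at mu = 33000 under each alpha level
powerfun(33000, 15, 5000, lower1, upper1)  #alpha = 0.05: roughly 0.57
powerfun(33000, 15, 5000, lower2, upper2)  #alpha = 0.01: roughly 0.26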