Assignment_9

Tom Detzel

3/31/2018



Problem 1

11. The price of one share of stock in the Pilsdorff Beer Company (see Exercise 8.2.12) is given by \(Y_n\) on the \(n\)th day of the year. Finn observes that the differences \(X_n = Y_{n+1} - Y_n\) appear to be independent random variables with a common distribution having mean \(\mu = 0\) and variance \(\sigma^2 = 1/4\). If \(Y_1 = 100\), estimate the probability that \(Y_{365}\) is:

A. We are not given the distribution of \(Y_{365}\) directly, so we turn to brute force before providing a theoretical answer below. The answers immediately below are based on a simulation (per Exercise 12 in Grinstead and Snell) that computes the mean, variance, and standard deviation of 20,000 samples of \(Y_{365}\) using the code below.

(a) ≥ 100

\(P(Y_{365} \ge 100) \approx 0.5\)

(b) ≥ 110

\(P(Y_{365} \ge 110) \approx 0.1459\)

(c) ≥ 120

\(P(Y_{365} \ge 120) \approx 0.0175\)

set.seed(1234)
x <- numeric(20000)                           # storage for 20,000 simulated values of Y365
for (i in 1:20000) {
  d <- rnorm(364, mean = 0, sd = sqrt(0.25))  # the 364 daily differences X_n
  x[i] <- 100 + sum(d)                        # Y365 = Y1 + sum of the differences
}
hist(x, freq=F, col='lightblue', breaks=50,
     main='Histogram of Y', xlab='Y')
lines(density(x), col='red', lwd=3)
abline(v=110, col='green', lwd=3)
abline(v=120, col='purple', lwd=3)
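
The empirical probabilities and summary statistics reported here can be read off the simulated vector x above; a minimal sketch:

mean(x)          # experimental E(Y)
var(x)           # experimental V(Y)
sd(x)            # experimental SD(Y)
mean(x >= 100)   # P(Y365 >= 100)
mean(x >= 110)   # P(Y365 >= 110)
mean(x >= 120)   # P(Y365 >= 120)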

Experimental values:

E(Y) = 99.9699
V(Y) = 91.6819
SD(Y) = 9.5751

How close are we to the theoretical answer? According to Theorem 6.9 in Grinstead and Snell, p. 260, “the variance of the sum of any number of mutually independent random variables is the sum of the individual variances.” Here the differences \(X_1, X_2, \ldots, X_n\) are mutually independent with a common distribution, so

\(V(Y_{365}) = n\sigma^2 = 364 \times 0.25 = 91, \qquad SD(Y_{365}) = \sqrt{91} \approx 9.54\)

Using the normal approximation with this mean and standard deviation in R (see the snippet after these values):

(a) ≥ 100

\(P(Y_{365} \ge 100) = 0.5\)

(b) ≥ 110

\(P(Y_{365} \ge 110) \approx 0.1473\)

(c) ≥ 120

\(P(Y_{365} \ge 120) \approx 0.0180\)
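
A minimal sketch of this normal-approximation computation in R, assuming \(Y_{365}\) is approximately normal with mean 100 and variance \(364 \times 0.25\):

mu <- 100
sigma <- sqrt(364 * 0.25)                 # ~ 9.54
1 - pnorm(100, mean = mu, sd = sigma)     # P(Y365 >= 100) = 0.5
1 - pnorm(110, mean = mu, sd = sigma)     # P(Y365 >= 110) ~ 0.147
1 - pnorm(120, mean = mu, sd = sigma)     # P(Y365 >= 120) ~ 0.018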



Problem 2

Calculate the expected value and variance of the binomial distribution using the moment generating function.

A. Following Grinstead and Snell, p. 366, we want the moment relations:

\(\mu ={ \mu }_{ 1 }\\ { \sigma }^{ 2 }={ \mu }_{ 2 }-{ \mu }_{ 1 }^{ 2 }\)

From the top, we can start with the probability mass function for the binomial distribution:

\(f(x)=\left( \begin{matrix} n \\ x \end{matrix} \right) { p }^{ x }{ (1-p) }^{ n-x }\)

Compute the moment generating function. Generally it is:

\(M(t)=E({ e }^{ tX })\)

In this instance:

\(M(t)=\sum _{ x=0 }^{ n }{ { e }^{ tx } } \left( \begin{matrix} n \\ x \end{matrix} \right) { p }^{ x }{ (1-p) }^{ n-x }\)

\(M(t)=\sum _{ x=0 }^{ n }{ \left( \begin{matrix} n \\ x \end{matrix} \right) { (p{ e }^{ t }) }^{ x }{ (1-p) }^{ n-x } }\)

By the binomial theorem, this sum collapses to

\(M(t)={ [(1-p)+p{ e }^{ t }] }^{ n }\)
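
A quick numerical spot-check of this closed form in R, using arbitrary example values of n, p, and t:

n <- 10; p <- 0.3; tval <- 0.5             # arbitrary example values
x <- 0:n
sum(exp(tval * x) * dbinom(x, n, p))       # E(e^(tX)) computed directly from the pmf
((1 - p) + p * exp(tval))^n                # closed-form MGF; the two values agree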

Then differentiate to get the mean \(\mu\) and variance \({ \sigma }^{ 2 }\):

\({ \mu }_{ 1 }=M'(0)=n{ [(1-p)+p{ e }^{ t }] }^{ n-1 }p{ e }^{ t }{ | }_{ t=0 }=np\)

\({ \mu }_{ 2 }=M''(0)=\left[ n(n-1){ [(1-p)+p{ e }^{ t }] }^{ n-2 }{ (p{ e }^{ t }) }^{ 2 }+n{ [(1-p)+p{ e }^{ t }] }^{ n-1 }p{ e }^{ t } \right] { | }_{ t=0 }=n(n-1){ p }^{ 2 }+np\)

So that finally …

\(\mu ={ \mu }_{ 1 }=np\quad \text{and} \quad { \sigma }^{ 2 }={ \mu }_{ 2 }-{ \mu }_{ 1 }^{ 2 }=n(n-1){ p }^{ 2 }+np-{ (np) }^{ 2 }=np(1-p)\)
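
As a sanity check (not part of the derivation), the binomial moments can be verified numerically in R for the same example values of n and p:

n <- 10; p <- 0.3
x <- 0:n
pmf <- dbinom(x, size = n, prob = p)
mu1 <- sum(x * pmf)                      # first moment: np = 3
mu2 <- sum(x^2 * pmf)                    # second moment: n(n-1)p^2 + np
c(mean = mu1, variance = mu2 - mu1^2)    # 3.0 and np(1-p) = 2.1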



Problem 3

Calculate the expected value and variance of the exponential distribution using the moment generating function.

A. We take the same approach as above, but begin directly with the known pdf and moment generating function of the exponential distribution.

The pdf for the exponential distribution:

\(f(x)=\frac { 1 }{ \theta } { e }^{ -x/\theta }, \quad x \ge 0\)

The moment generating function:

\(M(t)=\frac { 1 }{ 1-\theta t } ={ (1-\theta t) }^{ -1 }, \quad t < \frac { 1 }{ \theta }\)

The first step is to differentiate to get the mean:

\({ \mu }_{ 1 }=M'(0)=E(X)=-1{ (1-\theta t) }^{ -2 }(-\theta ){ | }_{ t=0 }=\theta\)

Next, compute the variance:

\({ \sigma }^{ 2 }=Var(X)=M''(0)-{ M'(0) }^{ 2 }\)

\(M'(t) = \theta { (1-\theta t) }^{ -2 }\)

\(M''(0) = \theta (-2{ (1-\theta t) }^{ -3 })(-\theta ){ | }_{ t=0 }=2{ \theta }^{ 2 }\)

\({ \sigma }^{ 2 }=2{ \theta }^{ 2 }-{ \theta }^{ 2 }={ \theta }^{ 2 }\)

Note that the exponential mean and variance can also be expressed in terms of the rate parameter \(\lambda = 1/\theta\):

\(\mu =\frac { 1 }{ \lambda } \\ { \sigma }^{ 2 }=\frac { 1 }{ { \lambda }^{ 2 } }\)

Applying the same computations to the rate-parameterized moment generating function, \(M(t)=\lambda { (\lambda -t) }^{ -1 }\) for \(t<\lambda\):

\(M'(t)=\lambda (-1){ (\lambda -t) }^{ -2 }(-1)=\lambda { (\lambda -t) }^{ -2 }\\ { \mu }_{ 1 }=M'(0)=\lambda \cdot { \lambda }^{ -2 }=\frac { 1 }{ \lambda }\)

and for the variance:

\(M''(t)=\lambda (-2){ (\lambda -t) }^{ -3 }(-1)=2\lambda { (\lambda -t) }^{ -3 }\\ { \mu }_{ 2 }=M''(0)=2\lambda \cdot { \lambda }^{ -3 }=\frac { 2 }{ { \lambda }^{ 2 } }\\ { \sigma }^{ 2 }={ \mu }_{ 2 }-{ { \mu }_{ 1 } }^{ 2 }=\frac { 2 }{ { \lambda }^{ 2 } }-{ \left( \frac { 1 }{ \lambda } \right) }^{ 2 }=\frac { 1 }{ { \lambda }^{ 2 } }\)
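
As before, a quick numerical check of these results in R, using an arbitrary example rate:

lambda <- 2                                                        # arbitrary example rate
mu1 <- integrate(function(x) x   * dexp(x, rate = lambda), 0, Inf)$value
mu2 <- integrate(function(x) x^2 * dexp(x, rate = lambda), 0, Inf)$value
c(mean = mu1, variance = mu2 - mu1^2)                              # 1/lambda = 0.5 and 1/lambda^2 = 0.25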