The purpose of this assignment is to explore the application and properties of the Central Limit Theorem and Generating Functions.
Ex 11, p. 363
The price of one share of stock in the Pilsdorff Beer Company (see Exercise 8.2.12) is given by \(Y_n\) on the nth day of the year. Finn observes that the differences \(X_n\) = \(Y_{n+1} − Y_n\) appear to be independent random variables with a common distribution having mean \(\mu\) = 0 and variance \(\sigma^2 = 1/4\). If \(Y_1 = 100\), estimate the probability that \(Y_{365}\) is (a) \(≥\) 100, (b) \(≥\) 110, and (c) \(≥\) 120.
List what we're given: \(\mu = 0\), \(\sigma^2 = 1/4\), \(Y_1 = 100\), and 364 daily differences \(X_1, \ldots, X_{364}\) between day 1 and day 365.
Take what we know for \(S_n\) and re-organize for \(Y_{365}\):
\(S_n = X_1 + X_2 + ... + X_n\)
\(S_n = (Y_2 - Y_1) + (Y_3 - Y_2) + ... + (Y_{n+1} - Y_n)\)
\(S_n = Y_{n+1} - Y_1\)
Once the telescoping terms cancel, only the first and last \(Y\) values survive. In other words, the change in the price of one share of Pilsdorff Beer Company stock from day 1 to day \(n+1\) reduces to the price on the last day minus the price on the first day, since all the in-between terms cancel out.
Substituting in what we have and re-organizing equations, we get:
\(S_{364} = Y_{365} - 100 \Rightarrow Y_{365} = S_{364} + 100\)
Next (referring to p. 360 of the text), we compute the corresponding mean, variance, and standard deviation of \(S_{364}\): \(E(S_{364}) = n\mu = 364(0) = 0\), \(Var(S_{364}) = n\sigma^2 = 364(1/4) = 91\), and \(SD(S_{364}) = \sqrt{91} \approx 9.54\).
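These values can be reproduced directly in R (this snippet just restates the arithmetic above; the variable names are my own):

n <- 364; mu <- 0; sigma2 <- 1 / 4
c(mean = n * mu, variance = n * sigma2, sd = sqrt(n * sigma2))   # 0, 91, 9.539392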
Taking all that's been defined above, we can re-arrange our probabilities and use R's built-in pnorm() function to evaluate them:
\(P(Y_{365} ≥ 100)\) = 0.5 as shown below:
\(P(Y_{365} ≥ 100)\), recall \(Y_{365} = S_{364} + 100\)
\(P(S_{364} + 100 ≥ 100)\)
\(P(S_{364} ≥ 0)\)
#standard deviation calculation
std <- sqrt(91) #sqrt of variance (noted above)
# >= 100
1 - pnorm(0, mean = 0, sd = std)
## [1] 0.5
A note on pnorm(): it returns the integral from −∞ to q of the pdf of the normal distribution (recall: z = \((x - \mu)/\sigma\)). The pnorm function can also take the argument lower.tail; when lower.tail = FALSE, pnorm returns the integral from q to ∞ of the pdf instead. Note that pnorm(q, lower.tail = FALSE) is the same as 1 - pnorm(q). [Reference](http://seankross.com/notes/dpqr/)
\(P(Y_{365} ≥ 110)\) = 0.147 as shown below:
\(P(Y_{365} ≥ 110)\), recall \(Y_{365} = S_{364} + 100\)
\(P(S_{364} + 100 ≥ 110)\)
\(P(S_{364} ≥ 10)\)
# >= 110
1 - pnorm(10, mean = 0, sd = std)
## [1] 0.1472537
# equivalently, using lower.tail
pnorm(10, mean = 0, sd = std, lower.tail = FALSE)
## [1] 0.1472537
\(P(Y_{365} ≥ 120)\) = 0.018 as shown below:
\(P(Y_{365} ≥ 120)\), recall \(Y_{365} = S_{364} + 100\)
\(P(S_{364} + 100 ≥ 120)\)
\(P(S_{364} ≥ 20)\)
# >= 120
1 - pnorm(20, mean = 0, sd = std)
## [1] 0.01801584
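As an optional sanity check (not part of the original exercise), we can also verify these three tail probabilities by simulation. The shape of the daily differences is an assumption here, uniform on \([-\sqrt{3}/2, \sqrt{3}/2]\), chosen only because it has mean 0 and variance 1/4; the CLT approximation depends only on those two moments:

set.seed(1)
n_sims <- 20000
a <- sqrt(3) / 2                                       # Var(Unif(-a, a)) = a^2 / 3 = 1/4
S_364 <- replicate(n_sims, sum(runif(364, min = -a, max = a)))
Y_365 <- 100 + S_364
mean(Y_365 >= 100)   # approximately 0.50
mean(Y_365 >= 110)   # approximately 0.147
mean(Y_365 >= 120)   # approximately 0.018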
Calculate the expected value and variance of the binomial distribution using the moment generating function.
The binomial distribution is: \({n \choose x} p^x (1-p)^{n-x}\)
With this in mind, we can rewrite our moment generating function \(M(t)\) as:
\(M(t) = \sum_{x=0}^{n} e^{tx}\, C(n,x)\, p^x (1-p)^{n-x}\)
\(M(t) = \sum_{x=0}^{n} C(n,x)\, (pe^t)^x (1-p)^{n-x}\)
Applying the binomial theorem, the sum collapses to:
\(M(t) = [(1-p) + pe^t]^n\)
We can then take the first and second derivatives, applying the Chain Rule (derivative of the "outer" multiplied by the derivative of the "inner") as follows:
Outer = \(n[u]^{n-1}\), where \(u = (1-p) + pe^t\).
Inner = \({du \over dt} = pe^t\)
The result is \(M'(t) = n(pe^t)[(1-p) + pe^t]^{n-1}\)
Solving @ 0: \(M'(0) = n(pe^0)[(1-p) + pe^0]^{n-1} = np[(1-p) + p]^{n-1} = np[1]^{n-1} = np\)
Leading to our expected value \(E(X) = np\)
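We can sanity-check \(E(X) = np\) by numerically differentiating the MGF at \(t = 0\); the values \(n = 10\) and \(p = 0.3\) below are arbitrary examples, not from the text:

n <- 10; p <- 0.3
M <- function(t) ((1 - p) + p * exp(t))^n      # MGF derived above
h <- 1e-5
(M(h) - M(-h)) / (2 * h)                       # numerical M'(0), approximately n * p = 3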
…………………………………………………………………….
Starting with our 1st derivative: \(M'(t) = n(pe^t)[(1-p) + pe^t]^{n-1}\)
We apply the Product Rule and then the Chain Rule (again):
\({d(uv) \over dt} = u{dv \over dt} + v{du \over dt}\)
Taking \(n(pe^t)\) as u and \([(1-p) + pe^t]^{n-1}\) as v, we solve in the following manner:
u = \(n(pe^t)\),
\({du \over dt}\) = \(n(pe^t)\)
v = \([(1-p) + pe^t]^{n-1}\),
\({dv \over dt}\) = \((n-1)(pe^t)[(1-p) + pe^t]^{n-2}\)
\({dv \over dt}\) was calculated using the Chain Rule.
and thus, re-substituting in via the Product Rule, we get:
\(M''(t) = n(n-1)(pe^t)^2[(1-p) + pe^t]^{n-2} + n(pe^t)[(1-p) + pe^t]^{n-1}\)
Solving @ 0:
\(M''(0) = n(n-1)(pe^0)^2[(1-p) + pe^{0}]^{n-2} + n(pe^{0})[(1-p) + pe^{0}]^{n-1}\)
\(n(n-1)(p^2(1))[(1-p) + p(1)]^{n-2} + n(p(1))[(1-p) + p(1)]^{n-1}\)
\(n(n-1)(p^2)[1]^{n-2} + np[1]^{n-1}\)
\(n(n-1)(p^2)[1] + np[1]\)
\(n(n-1)(p^2) + np\)
And thus, substituting into our equation for variance:
\(\sigma^2 = M''(0) - [M'(0)]^2\)
\(\sigma^2 = n(n-1)(p^2) + np - [np]^2\)
\((n^2-n)p^2 + np - (np)^2\)
\((np)^2-np^2 + np - (np)^2\)
\(-np^2 + np\)
\(np(1-p)\)
By distributing through, cancelling terms and re-arranging, we end up with our variance: \(\sigma^2 = np(1-p)\)
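As a quick check (again with the arbitrary example values \(n = 10\), \(p = 0.3\)), the mean and variance computed directly from the pmf agree with \(np\) and \(np(1-p)\):

n <- 10; p <- 0.3
x <- 0:n
ex  <- sum(x * dbinom(x, n, p))        # E(X)
ex2 <- sum(x^2 * dbinom(x, n, p))      # E(X^2), i.e. M''(0)
c(ex, n * p)                           # both 3
c(ex2 - ex^2, n * p * (1 - p))         # both 2.1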
Calculate the expected value and variance of the exponential distribution using the moment generating function.
The exponential distribution is: \(\lambda e^{-\lambda x}\) for \(x \ge 0\)
With this in mind, we can rewrite our moment generating function \(M(t)\) as:
\(M(t) = \int_0^{\infty} e^{tx} \lambda e^{-\lambda x}\, dx\)
\(M(t) = \int_0^{\infty} \lambda e^{tx-\lambda x}\, dx\)
\(M(t) = \int_0^{\infty} \lambda e^{-(\lambda - t)x}\, dx\)
Now we integrate (noting that the integral only converges for \(t < \lambda\)) and arrive at
\(M(t) = {- \lambda e^{-(\lambda - t)x} \over (\lambda - t)} \Big|_0^{\infty}\)
Plugging in the limits we get \({- \lambda e^{-(\lambda - t)(\infty)} \over (\lambda - t)} - {- \lambda e^{-(\lambda - t)(0)} \over (\lambda - t)}\)
As \(x \to \infty\), \(e^{-(\lambda - t)x}\) goes to 0 (since \(\lambda - t > 0\)), and \(e^{0} = 1\), leaving us with:
\(M(t) = {\lambda \over (\lambda - t)}\)
Which we can then take the derivative and second derivative of to compute the Expected Value and Variance.
\(M'(t) = {d \over dt}\left[{\lambda \over \lambda - t}\right]\)
We apply the Power Rule and Chain Rule, taking \(u = (\lambda - t)\), so \({du \over dt} = -1\):
\({d \over dt}\, u^{-1} = -u^{-2}{du \over dt} = u^{-2}\)
Substituting back in we get:
\(M'(t) = {\lambda \over (\lambda - t)^2}\)
\(M'(0) = {\lambda \over (\lambda - (0))^2} = {\lambda \over \lambda^2} = {1 \over \lambda}\)
And thus our expected value \(E[X] = {1 \over \lambda}\)
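A quick numerical check of \(E[X] = 1/\lambda\), using an arbitrary example rate \(\lambda = 2\) (not from the text):

lambda <- 2
M_exp <- function(t) lambda / (lambda - t)     # MGF derived above, valid for t < lambda
h <- 1e-5
(M_exp(h) - M_exp(-h)) / (2 * h)               # numerical M'(0), approximately 1 / lambda = 0.5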
Solving for the second derivative is done in the same manner, this time applying the Power Rule and Chain Rule to \(M'(t) = \lambda(\lambda - t)^{-2}\), again with \(u = (\lambda - t)\):
\({d \over dt}\, u^{-2} = -2u^{-3}{du \over dt} = 2u^{-3}\)
Substituting back in we get:
\(M''(t) = {2\lambda \over (\lambda - t)^3}\), the negative signs having cancelled out.
\(M''(0) = {2\lambda \over (\lambda - (0))^3} = {2\lambda \over \lambda^3} = {2 \over \lambda^2}\)
Now, plugging in to our variance equation:
\(\sigma^2 = M''(0) - [M'(0)]^2\)
\(= {2 \over \lambda^2} - \left({1 \over \lambda}\right)^2\)
\(= {2 \over \lambda^2} - {1 \over \lambda^2}\)
\(= {1 \over \lambda^2}\)
And thus our variance \(\sigma^2 = {1 \over \lambda^2}\).
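And a matching check of the variance, computing \(E[X]\) and \(E[X^2]\) directly from the density with R's integrate() (again, \(\lambda = 2\) is just an example value):

lambda <- 2
ex  <- integrate(function(x) x   * lambda * exp(-lambda * x), 0, Inf)$value
ex2 <- integrate(function(x) x^2 * lambda * exp(-lambda * x), 0, Inf)$value
c(ex, 1 / lambda)               # both 0.5
c(ex2 - ex^2, 1 / lambda^2)     # both 0.25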