The purpose of this assignment is to explore the application and properties of sums of random variables and the Law of Large Numbers.
A company buys 100 lightbulbs, each of which has an exponentially distributed lifetime with a mean of 1000 hours. What is the expected time for the first of these bulbs to burn out? (See Exercise 10.)
We define what we're given, draw on the hint (Exercise 10), and then use \(E[X_i] = 1/\lambda_i\) to solve for the expected time until the first of these bulbs burns out:
# Each bulb's mean lifetime is E[Xi] = 1/lambda_i = 1000 hours, so lambda_i = 1/1000
exp_lt <- 1000            # mean lifetime of each bulb, in hours
lambda <- 1 / exp_lt      # rate parameter lambda_i of each bulb
# If Xi ~ Exp(lambda_i), then min(Xi) ~ Exp(sum of the lambda_i):
n <- 100
lambda_min <- n * lambda  # rate of the minimum: 100 * (1/1000) = 1/10
# Lastly, E[min(Xi)] = 1/lambda_min = 1/(1/10)
E_X <- 1 / lambda_min
sprintf("The expected time is %s hours.", E_X)
## [1] "The expected time is 10 hours."
Assume that \(X_1\) and \(X_2\) are independent random variables, each having an exponential density with parameter \(\lambda\). Show that \(Z = X_1 - X_2\) has density \(f_Z(z) = \frac{1}{2}\lambda e^{-\lambda|z|}\).
Subtracting random variables amounts to adding the additive inverse: \(X_1 - X_2 = X_1 + (-X_2)\). Writing \(X = X_1\) and \(Y = X_2\), applying convolution takes the following form.
With the understanding that the density of \(-Y\) mirrors that of \(Y\) (just as \(P(-Y = -1) = P(Y = 1)\) in the discrete case), we have \(f_{-Y}(z - x) = f_Y(x - z)\). Taking \(z \le 0\), so that the integrand is nonzero for all \(x \ge 0\):
\[
\begin{aligned}
f_Z(z) &= \int_{-\infty}^{\infty} f_X(x)\, f_Y(x - z)\, dx \\
&= \int_0^{\infty} \lambda e^{-\lambda x}\, \lambda e^{-\lambda (x - z)}\, dx \\
&= \lambda e^{\lambda z} \int_0^{\infty} \lambda e^{-2\lambda x}\, dx \\
&= \lambda e^{\lambda z} \left[ -\tfrac{1}{2} e^{-2\lambda x} \right]_0^{\infty} \\
&= \tfrac{\lambda}{2}\, e^{\lambda z}
\end{aligned}
\]
Since \(X_1\) and \(X_2\) are i.i.d., \(Z\) and \(-Z\) have the same distribution, so \(f_Z(z) = f_Z(-z)\) and the \(z \le 0\) case determines the density everywhere: \(f_Z(z) = \frac{1}{2}\lambda e^{-\lambda|z|}\). This confirms the density for \(Z = X_1 - X_2\).
This problem was completed with reference to the MIT MOOC lectures (YouTube) and p. 293 of the course text.
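As a visual check on this result, one can simulate \(Z = X_1 - X_2\) and overlay the claimed density. The sketch below assumes \(\lambda = 1\) and 10,000 draws, both arbitrary choices for illustration.

set.seed(123)  # arbitrary seed for reproducibility
lam <- 1       # assumed rate parameter, for illustration only
z_sim <- rexp(10000, rate = lam) - rexp(10000, rate = lam)  # draws of Z = X1 - X2
hist(z_sim, breaks = 50, freq = FALSE,
     main = "Simulated Z with (1/2) * lam * exp(-lam * |z|) overlaid")
curve(0.5 * lam * exp(-lam * abs(x)), add = TRUE, col = "red", lwd = 2)

The histogram should hug the overlaid curve, peaking at 0 and decaying symmetrically in both tails.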
Let X be a continuous random variable with mean \(\mu = 10\) and variance \(\sigma^2 = 100/3\). Using Chebyshev's Inequality, find an upper bound for the following probabilities.
Given the definition from p. 316, Chebyshev's Inequality states that \(P(|X - \mu| \ge k\sigma) \le 1/k^2\). Writing \(\epsilon = k\sigma\), the upper bound for \(P(|X - \mu| \ge \epsilon)\) is \(1/k^2 = \sigma^2/\epsilon^2\), which we evaluate for \(\epsilon = 2, 5, 9, 20\) as follows:
#Define our givens
sigma_squared <- 100/3
sigma <- sqrt(sigma_squared)
eps <- 2 #assign our epsilon
#Calculate our k value
k <- eps / sigma
#Calculate our probability
p_2 <- 1 / k^2
sprintf("The upper bound is: %s.", p_2)
## [1] "The upper bound is: 8.33333333333334."
Since a probability can never exceed 1, we interpret this bound as 1.0.
eps <- 5 #assign our epsilon
#Calculate our new k value
k <- eps / sigma
#Calculate our probability
p_5 <- 1 / k^2
sprintf("The upper bound is: %s.", p_5)
## [1] "The upper bound is: 1.33333333333333."
Again the bound exceeds 1, so we interpret it as 1.0.
eps <- 9 #assign our epsilon
#Calculate our new k value
k <- eps / sigma
#Calculate our probability
p_9 <- 1 / k^2
sprintf("The upper bound is: %s.", p_9)
## [1] "The upper bound is: 0.411522633744856."
eps <- 20 #assign our epsilon
#Calculate our new k value
k <- eps / sigma
#Calculate our probability
p_20 <- 1 / k^2
sprintf("The upper bound is: %s.", p_20)
## [1] "The upper bound is: 0.0833333333333333."
We thus see that as \(\epsilon\) (and the corresponding \(k\) value) increases, the upper bound decreases, falling off like \(1/\epsilon^2\).
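The same pattern is visible at a glance by vectorizing the calculation and capping each bound at 1; this is just a compact restatement of the four computations above, with `eps_vals` introduced here for convenience.

eps_vals <- c(2, 5, 9, 20)  # the four epsilon values considered above
# Chebyshev bound sigma^2 / eps^2 for each epsilon, capped at 1
round(pmin(1, sigma_squared / eps_vals^2), 4)
## [1] 1.0000 1.0000 0.4115 0.0833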