Question 11 on Pg. 303 Chapter 7.2

A company buys 100 lightbulbs, each of which has an exponential lifetime of 1000 hours. What is the expected time for the first of these bulbs to burn out?

The Solution

We can treat the lifetime of each of the 100 lightbulbs as an independent random variable (\(X_1, X_2, \dots, X_{100}\)). Next, we determine the distribution of the minimum lifetime \(M = \min(X_1, \dots, X_{100})\), which corresponds to the first bulb to fail. Finally, we compute the mean of that distribution to obtain the expected time for the first bulb to burn out.

The key insight for this problem is provided by Question 10 in this same chapter, which notes that “the density for \(M\) is exponential with mean \(\mu/n\).” The rate parameter \(\lambda\) for each bulb’s lifetime is 1/1000, so the mean is \(\mu = 1/\lambda = 1000\) hours. Here we are concerned with 100 lightbulbs, so \(n = 100\). Dividing \(\mu\) (1000) by \(n\) (100) gives the mean of \(M\): an expected value of 10 hours.

We can also check this result by simulation. We define an exponential distribution with rate parameter \(\lambda\), take 100 random lifetimes, and record the minimum. Repeating this 10,000 times and averaging the minimums should give a value that aligns with our expected value of 10. We can also plot that distribution and highlight its mean.
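A minimal sketch of that simulation (using rexp() for the draws; the seed, bin count, and plotting details are my own choices, since the original simulation code is not shown here):

# Simulation of the first-failure time for 100 bulbs, repeated 10,000 times
set.seed(123)
lambda <- 1/1000    # failure rate: 1 failure per 1000 hours
n_bulbs <- 100      # bulbs per trial
n_trials <- 10000   # number of simulated trials

# For each trial, draw 100 exponential lifetimes and keep the time of the first failure
first_failures <- replicate(n_trials, min(rexp(n_bulbs, rate = lambda)))

# Plot the distribution of first-failure times and mark the simulated mean
hist(first_failures, breaks = 50, main = "Time of first bulb failure", xlab = "Hours")
abline(v = mean(first_failures), lwd = 2)

# Compare the simulated mean to the theoretical mean mu / n = 1000 / 100 = 10
mean(first_failures)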

Another way to estimate the time of the first failure is with the exponential decay formula:

\[N(t) = N_0 e ^ {-\lambda t} \]

Here \(\lambda\) is 1 failure per 1000 hours, or, stated otherwise:

\[\lambda = \frac{1}{1000}\]

Now we plug in the values and solve for \(t\), the time at which the first bulb has burned out (i.e., when \(N(t) = 99\)):

\[100 e^{-\frac{1}{1000}t} = 99 \rightarrow e^{-\frac{1}{1000}t} = \frac{99}{100} \rightarrow -\frac{1}{1000}t = \ln\bigg(\frac{99}{100}\bigg) \rightarrow t = -1000\ln\bigg(\frac{99}{100}\bigg) \approx 10.05034\]

# Define the given variables
N0 <- 100
lambda <- 1/1000
Nt <- 99

# Solve for t
t <- -log(Nt/N0) / lambda

# Print the result
print(t)
## [1] 10.05034

Answer: 10.05034 hours

Both this estimate and the mean of our simulated distribution are very close to the expected value of 10 hours!

Question 14 on Pg. 303

Assume that \(X_1\) and \(X_2\) are independent random variables, each having an exponential density with parameter \(\lambda\). Show that \(Z = X_1 - X_2\) has density \(f_Z(z)=\frac{1}{2}\lambda e^{-\lambda |z|}\).

The solution

Note: to keep the notation straight in my head, I solved \(Z = X - Y\) instead of \(Z = X_1 - X_2\). This notation is used throughout the solution.

The convolution formula for \(Z = X + Y\) is

\[f_Z(z) = \int_{-\infty}^{\infty} f_X(x) f_Y(z-x)\,dx\]

To handle \(Z = X - Y\), we write it as \(Z = X + (-Y)\). Plugging this into the convolution formula gives

\[f_Z(z) = \int_{-\infty}^{\infty} f_X(x) f_{-Y}(z-x)\,dx\]

Since \(f_{-Y}(t) = f_Y(-t)\), so that \(f_{-Y}(z-x) = f_Y(x-z)\), we can rewrite the convolution as

\[f_Z(z) = \int_{-\infty}^{\infty} f_X(x) f_Y(x-z)\,dx\]

First, consider the case \(z < 0\). The density \(f_X(x)\) is nonzero only for \(x \geq 0\) and \(f_Y(x-z)\) is nonzero only for \(x \geq z\); since \(z < 0\), the binding constraint is \(x \geq 0\), so we integrate from 0 to \(\infty\):

\[\int_{0}^{\infty}\lambda e^{-\lambda x}\,\lambda e^{-\lambda(x-z)}\,dx = \lambda e^{\lambda z}\int_{0}^{\infty}\lambda e^{-2\lambda x}\,dx = \lambda e^{\lambda z}\Big(-\tfrac{1}{2}e^{-2\lambda x}\Big|_0^{\infty}\Big) = \frac{1}{2}\lambda e^{\lambda z}\]
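As a quick numerical check of this case (my own sketch with arbitrary values \(\lambda = 1\) and \(z = -0.7\), not part of the original solution), we can evaluate the convolution integral with integrate() and compare it to the closed form:

# Numerically evaluate the convolution integral for one z < 0 and compare to (1/2) * lambda * exp(lambda * z)
lambda <- 1
z <- -0.7   # an arbitrary negative value

numeric_value <- integrate(function(x) dexp(x, rate = lambda) * dexp(x - z, rate = lambda),
                           lower = 0, upper = Inf)$value
closed_form <- 0.5 * lambda * exp(lambda * z)

c(numeric = numeric_value, closed_form = closed_form)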

Now consider the case \(z \geq 0\). Since \(X\) and \(Y\) are independent and identically distributed, \(Z = X - Y\) has the same distribution as \(-Z = Y - X\), so the distribution of \(Z\) is symmetric around zero and \(f_Z(z) = f_Z(-z)\). For \(z \geq 0\) we have \(-z \leq 0\), so the previous case gives:

\[f_Z(z) = f_Z(-z) = \frac{1}{2}\lambda e^{-\lambda z}\]

This implies that \(f_Z(z)=\frac{1}{2}\lambda e^{-\lambda |z|}\) for all \(z\).
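As with the previous problem, we can sanity-check this density by simulation. The following is a rough sketch with assumed values (\(\lambda = 2\), 100,000 samples); it was not part of the original solution:

# Simulate Z = X1 - X2 for independent exponentials and compare to the claimed density
set.seed(42)
lambda <- 2        # assumed rate parameter for the check
n <- 100000        # number of simulated pairs

z_sim <- rexp(n, rate = lambda) - rexp(n, rate = lambda)

# Claimed density: f_Z(z) = (1/2) * lambda * exp(-lambda * |z|)
f_claimed <- function(z) 0.5 * lambda * exp(-lambda * abs(z))

# Overlay the claimed density on a histogram of the simulated values
hist(z_sim, breaks = 100, freq = FALSE, main = "Z = X1 - X2", xlab = "z")
curve(f_claimed(x), from = -3, to = 3, add = TRUE)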

Question 1 Pg. 320-321

Let \(X\) be a continuous random variable with mean \(\mu = 10\) and variance \(\sigma^2 = 100/3\). Using Chebyshev’s Inequality, find an upper bound for each of the following probabilities.

  (a) \(P(|X - 10| \geq 2)\)
  (b) \(P(|X - 10| \geq 5)\)
  (c) \(P(|X - 10| \geq 9)\)
  (d) \(P(|X - 10| \geq 20)\)

The solution

Chebyshev’s Inequality is defined as follows.

\[P(|X - \mu | \geq k \sigma) \leq 1 / k^2 \]

So, for (a), \(k\sigma = 2\). Given that \(\sigma^2 = 100/3\), we have \(\sigma = \sqrt{100/3}\), so \(k = 2/\sqrt{100/3}\). Plugging this into the inequality gives

\[\frac{1}{(2 / \sqrt{100/3})^2} \approx 8.333\]

Because a probability can never exceed 1, we cap the bound at 1. So \(P(|X - 10| \geq 2) \leq 1\): the probability that \(X\) deviates from its mean by 2 or more is at most 1, which is an uninformative bound.

We can apply this same procedure to the other probabilities in R, and then wrap everything into a single function at the end.

mu <- 10
sigma <- sqrt(100/3)

(a) P(|X − 10| ≥ 2).

k <- 2 / sigma
1 / k^2
## [1] 8.333333

(b) P(|X − 10| ≥ 5).

k <- 5 / sigma
1 / k^2
## [1] 1.333333

(c) P(|X − 10| ≥ 9).

k <- 9 / sigma
1 / k^2
## [1] 0.4115226

(d) P(|X − 10| ≥ 20).

k <- 20 / sigma
1 / k^2
## [1] 0.08333333

Finally, we combine all the previous steps into a single code block.

# Given values
mu <- 10
sigma <- sqrt(100/3)

# Function to calculate upper bound using Chebyshev's Inequality
chebyshev_upper_bound <- function(k) {
  return(1 / k^2)
}

# Define scenarios
scenarios <- list(
  "P(|X - 10| ≥ 2)" = 2,
  "P(|X - 10| ≥ 5)" = 5,
  "P(|X - 10| ≥ 9)" = 9,
  "P(|X - 10| ≥ 20)" = 20
)

# Calculate and store upper bounds for each scenario
results <- data.frame(Scenario = character(), Upper_Bound = numeric())

for (scenario_name in names(scenarios)) {
  k <- scenarios[[scenario_name]] / sigma
  upper_bound <- chebyshev_upper_bound(k)
  results <- rbind(results, data.frame(Scenario = scenario_name, Upper_Bound = upper_bound))
}

# Render the accumulated results as a table
knitr::kable(results)
Scenario            Upper_Bound
P(|X - 10| ≥ 2)       8.3333333
P(|X - 10| ≥ 5)       1.3333333
P(|X - 10| ≥ 9)       0.4115226
P(|X - 10| ≥ 20)      0.0833333
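As noted for part (a), Chebyshev bounds larger than 1 are uninformative, since a probability can never exceed 1. A small follow-up (my addition, not part of the original output) caps the reported bounds accordingly:

# Cap the Chebyshev bounds at 1, since probabilities cannot exceed 1
results$Effective_Bound <- pmin(1, results$Upper_Bound)
results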