library(VGAM)
Consider a bivariate normal posterior distribution of the parameters \(\theta_1\) and \(\theta_2\).
We can use the definition of the conditional mean to derive the conditional mean of \(\theta_1\) given \(\theta_2\), using the conditional probability density function (PDF) \(f_{\theta_1|\theta_2}(\theta_1 \mid \theta_2)\).
Given that \(\theta_1\) and \(\theta_2\) follow a bivariate standard normal distribution with mean vector \(\boldsymbol{\mu} = (0, 0)^T\) and covariance matrix:
\[ \Sigma = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} \]
We can compute the conditional mean \(\mu_{\theta_1|\theta_2}\) as follows:
\[ \begin{align*} \mu_{\theta_1|\theta_2} &= E[\theta_1 \mid \theta_2] \\ &= \int_{-\infty}^{\infty} \theta_1 \cdot f_{\theta_1|\theta_2}(\theta_1 \mid \theta_2) \, d\theta_1 \end{align*} \]
We know that the conditional distribution \(f_{\theta_1|\theta_2}(\theta_1 \mid \theta_2)\) is itself a normal distribution with some mean \(\mu_{\theta_1|\theta_2}\) and variance \(\sigma^2_{\theta_1|\theta_2}\). Writing the integral with that density simply returns its mean:
\[ \begin{align*} \mu_{\theta_1|\theta_2} &= \int_{-\infty}^{\infty} \theta_1 \cdot \frac{1}{\sqrt{2\pi \sigma^2_{\theta_1|\theta_2}}} \exp\left( -\frac{(\theta_1 - \mu_{\theta_1|\theta_2})^2}{2\sigma^2_{\theta_1|\theta_2}} \right) \, d\theta_1 \\ &= \mu_{\theta_1|\theta_2} \end{align*} \]
This identity is correct but circular: it does not yet express \(\mu_{\theta_1|\theta_2}\) in terms of \(\theta_2\). For that we need the explicit form of the conditional density.
Let’s go through all the algebraic steps again to derive the conditional mean \(\mu_{\theta_1|\theta_2}\).
Given:
In general, the bivariate normal density is: \[ \phi(x_1,x_2) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}} \exp\left\{ -\frac{1}{2(1-\rho^2)} \left[ \left( \frac{x_1-\mu_1}{\sigma_1} \right)^2 - 2\rho \left( \frac{x_1-\mu_1}{\sigma_1} \right) \left( \frac{x_2-\mu_2}{\sigma_2} \right) + \left( \frac{x_2-\mu_2}{\sigma_2} \right)^2 \right] \right\} \]
Therefore, for standard normal variates (\(\mu_1 = \mu_2 = 0\), \(\sigma_1 = \sigma_2 = 1\)):
\[ f_{\theta_1,\theta_2}(\theta_1, \theta_2) = \frac{1}{2\pi \sqrt{1 - \rho^2}} \exp\left(-\frac{1}{2(1 - \rho^2)}(\theta_1^2 - 2\rho\theta_1\theta_2 + \theta_2^2)\right) \]
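To sanity-check this joint density, we can compare the formula against mvtnorm::dmvnorm() at an arbitrary point (a minimal sketch; the choice rho = 0.9 and the evaluation point (0.5, -0.3) are assumptions made only for this check):
# Numeric check of the standard bivariate normal density formula
library(mvtnorm)
rho <- 0.9
f_joint <- function(t1, t2) {
  1 / (2 * pi * sqrt(1 - rho^2)) *
    exp(-(t1^2 - 2 * rho * t1 * t2 + t2^2) / (2 * (1 - rho^2)))
}
f_joint(0.5, -0.3)
dmvnorm(c(0.5, -0.3), mean = c(0, 0), sigma = matrix(c(1, rho, rho, 1), 2, 2))  # should match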
With this joint density in hand, we can now go through the algebraic steps to derive the conditional mean \(\mu_{\theta_1|\theta_2}\).
We want to find \(\mu_{\theta_1|\theta_2} = E[\theta_1 \mid \theta_2]\).
The conditional density function \(f_{\theta_1|\theta_2}(\theta_1 | \theta_2)\) can be obtained using the joint density function and the marginal density function of \(\theta_2\), which is also a standard normal density function.
Let’s denote \(f_{\theta_1|\theta_2}(\theta_1 | \theta_2)\) as \(g(\theta_1 | \theta_2)\).
By the definition of conditional density (Bayes' theorem for densities), we have:
\[ g(\theta_1 | \theta_2) = \frac{f_{\theta_1,\theta_2}(\theta_1, \theta_2)}{f_{\theta_2}(\theta_2)} \]
where \(f_{\theta_2}(\theta_2)\) is the marginal density function of \(\theta_2\), which is also a standard normal density function.
Let’s substitute the given expressions:
\[ g(\theta_1 | \theta_2) = \frac{\frac{1}{2\pi \sqrt{1 - \rho^2}} \exp\left(-\frac{1}{2(1 - \rho^2)}(\theta_1^2 - 2\rho\theta_1\theta_2 + \theta_2^2)\right)}{\frac{1}{\sqrt{2\pi}} \exp\left(-\frac{1}{2}\theta_2^2\right)} \]
\[ = \frac{1}{\sqrt{2\pi(1 - \rho^2)}} \exp\left(-\frac{1}{2(1 - \rho^2)}(\theta_1^2 - 2\rho\theta_1\theta_2 + \theta_2^2) + \frac{1}{2}\theta_2^2\right) \]
\[ = \frac{1}{\sqrt{2\pi(1 - \rho^2)}} \exp\left(-\frac{1}{2(1 - \rho^2)}(\theta_1^2 - 2\rho\theta_1\theta_2 + \theta_2^2) + \frac{1 - \rho^2}{2(1 - \rho^2)}\theta_2^2\right) \]
\[ = \frac{1}{\sqrt{2\pi(1 - \rho^2)}} \exp\left(-\frac{1}{2(1 - \rho^2)}\left(\theta_1^2 - 2\rho\theta_1\theta_2 + \theta_2^2 - (1 - \rho^2)\theta_2^2\right)\right) \]
\[ = \frac{1}{\sqrt{2\pi(1 - \rho^2)}} \exp\left(-\frac{1}{2(1 - \rho^2)}\left(\theta_1^2 - 2\rho\theta_1\theta_2 + \theta_2^2 - \theta_2^2 + \rho^2\theta_2^2\right)\right) \]
\[ = \frac{1}{\sqrt{2\pi(1 - \rho^2)}} \exp\left(-\frac{1}{2(1 - \rho^2)}\left(\theta_1^2 - 2\rho\theta_1\theta_2 + \rho^2\theta_2^2\right)\right) \]
\[ = \frac{1}{\sqrt{2\pi(1 - \rho^2)}} \exp\left(-\frac{1}{2(1 - \rho^2)}(\theta_1 - \rho\theta_2)^2\right) \]
We have obtained the conditional density function \(g(\theta_1 | \theta_2)\) in terms of \(\theta_1\) and \(\theta_2\).
Now, to find the conditional mean \(\mu_{\theta_1|\theta_2}\), we integrate \(\theta_1\) multiplied by the conditional density function over \(\theta_1\):
\[ \mu_{\theta_1|\theta_2} = \int_{-\infty}^{\infty} \theta_1 \cdot g(\theta_1 | \theta_2) \, d\theta_1 \]
\[ = \int_{-\infty}^{\infty} \theta_1 \cdot \frac{1}{\sqrt{2\pi(1 - \rho^2)}} \exp\left(-\frac{1}{2(1 - \rho^2)}(\theta_1 - \rho\theta_2)^2\right) \, d\theta_1 \]
This integral can be evaluated using techniques such as completing the square and recognizing the standard normal distribution form.
To evaluate the integral
\[ \mu_{\theta_1|\theta_2} = \int_{-\infty}^{\infty} \theta_1 \cdot \frac{1}{\sqrt{2\pi(1 - \rho^2)}} \exp\left(-\frac{1}{2(1 - \rho^2)}(\theta_1 - \rho\theta_2)^2\right) \, d\theta_1 \]
Let’s make a change of variable by letting \(u = \theta_1 - \rho\theta_2\). Then \(du = d\theta_1\), and the integral becomes:
\[ \mu_{\theta_1|\theta_2} = \int_{-\infty}^{\infty} (u + \rho\theta_2) \cdot \frac{1}{\sqrt{2\pi(1 - \rho^2)}} \exp\left(-\frac{1}{2(1 - \rho^2)}u^2\right) \, du \]
Expanding the integrand, we have:
\[ \mu_{\theta_1|\theta_2} = \rho\theta_2 \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi(1 - \rho^2)}} \exp\left(-\frac{1}{2(1 - \rho^2)}u^2\right) \, du + \int_{-\infty}^{\infty} u \cdot \frac{1}{\sqrt{2\pi(1 - \rho^2)}} \exp\left(-\frac{1}{2(1 - \rho^2)}u^2\right) \, du \]
The first integral is the integral of a normal density with mean \(0\) and variance \(1 - \rho^2\), which integrates to 1:
\[ \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi(1 - \rho^2)}} \exp\left(-\frac{1}{2(1 - \rho^2)}u^2\right) \, du = 1 \]
Thus, the first term in the expression becomes \(\rho\theta_2\).
For the second term, we have the integral of an odd function over the entire real line, which evaluates to zero:
\[ \int_{-\infty}^{\infty} u \cdot \frac{1}{\sqrt{2\pi(1 - \rho^2)}} \exp\left(-\frac{1}{2(1 - \rho^2)}u^2\right) \, du = 0 \]
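This result can be verified numerically with base R's integrate(), writing the conditional density as a normal density with mean \(\rho\theta_2\) and standard deviation \(\sqrt{1-\rho^2}\) (a sketch; rho = 0.9 and theta_2 = 1 are arbitrary choices):
# Numerical check: E[theta_1 | theta_2] should equal rho * theta_2
rho <- 0.9; theta2 <- 1
integrand <- function(t1) t1 * dnorm(t1, mean = rho * theta2, sd = sqrt(1 - rho^2))
integrate(integrand, lower = -Inf, upper = Inf)$value  # approximately rho * theta2 = 0.9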
NB:
To evaluate the integral (written here without the normalizing constant \(\tfrac{1}{\sqrt{2\pi(1-\rho^2)}}\) of the conditional density) and show how the conditional mean simplifies to \(\rho\theta_2\), we'll proceed step by step:
Given Integral: \[ \int_{-\infty}^{\infty} \theta_1 \cdot \exp\left(-\frac{1}{2(1 - \rho^2)}(\theta_1 - \rho\theta_2)^2\right) \, d\theta_1 \]
Step a: Substitute \(u = \theta_1 - \rho\theta_2\). Let \(u = \theta_1 - \rho\theta_2\), which implies \(d\theta_1 = du\); since \(\rho\theta_2\) is a finite constant, the limits of integration remain \(-\infty\) to \(\infty\).
Step b: Rewrite the Integral
Substitute \(u = \theta_1 - \rho\theta_2\) into the integral: \[ \int_{-\infty}^{\infty} (\rho\theta_2 + u) \cdot \exp\left(-\frac{1}{2(1 - \rho^2)}u^2\right) \, du \]
Step c: Expand and Split the Integral
Expand and split the integral into two parts: \[ \int_{-\infty}^{\infty} \rho\theta_2 \cdot \exp\left(-\frac{1}{2(1 - \rho^2)}u^2\right) \, du + \int_{-\infty}^{\infty} u \cdot \exp\left(-\frac{1}{2(1 - \rho^2)}u^2\right) \, du \]
Step d: Evaluate the Integrals
The first integral is \(\rho\theta_2\) multiplied by the Gaussian integral \(\int_{-\infty}^{\infty} \exp\left(-\frac{u^2}{2(1-\rho^2)}\right) du = \sqrt{2\pi(1 - \rho^2)}\), so it equals \(\rho\theta_2 \sqrt{2\pi(1 - \rho^2)}\).
The second integral is the integral of an odd function over the entire real line, so it evaluates to 0.
Step e: Final Result
After evaluating the integrals, we obtain: \[ \rho\theta_2 \sqrt{2\pi(1 - \rho^2)} + 0 = \rho\theta_2 \sqrt{2\pi(1 - \rho^2)} \]
Therefore, the integral evaluates to \(\rho\theta_2 \sqrt{2\pi(1 - \rho^2)}\); multiplying by the normalizing constant \(\frac{1}{\sqrt{2\pi(1 - \rho^2)}}\) of the conditional density gives \(\mu_{\theta_1|\theta_2} = \rho\theta_2\).
Let’s derive the conditional variance \(\text{Var}(\theta_1 | \theta_2)\) step by step:
Given:
\(\theta_1\) and \(\theta_2\) follow a bivariate standard normal distribution with correlation \(\rho\).
The conditional mean \(\mu_{\theta_1|\theta_2} = \rho\theta_2\).
We want to find the conditional variance \(\text{Var}(\theta_1 | \theta_2)\).
The conditional variance can be expressed as:
\[ \text{Var}(\theta_1 | \theta_2) = E[(\theta_1 - \mu_{\theta_1|\theta_2})^2 \mid \theta_2] \]
Given that \(\mu_{\theta_1|\theta_2} = \rho\theta_2\), denote the residual \(X = \theta_1 - \rho\theta_2\). Because \(\theta_1\) and \(\theta_2\) are jointly normal and \(\text{Cov}(X, \theta_2) = E[\theta_1\theta_2] - \rho E[\theta_2^2] = \rho - \rho = 0\), the residual \(X\) is independent of \(\theta_2\), so its conditional second moment equals its unconditional second moment:
\[ \text{Var}(\theta_1 | \theta_2) = E[X^2] = E[(\theta_1 - \rho\theta_2)^2] \]
Expanding the square:
\[ \text{Var}(\theta_1 | \theta_2) = E[\theta_1^2 - 2\rho\theta_1\theta_2 + \rho^2\theta_2^2] \]
Now, let’s compute this expectation term by term.
\(\theta_1\) and \(\theta_2\) are standard normal variates with covariance (and hence correlation) \(\rho\), so \(E[\theta_1^2] = E[\theta_2^2] = 1\) and \(E[\theta_1\theta_2] = \rho\).
Therefore, the first term is \(E[\theta_1^2] = 1\).
The cross term is \(-2\rho E[\theta_1\theta_2] = -2\rho^2\).
Finally, the term \(E[\rho^2\theta_2^2] = \rho^2 E[\theta_2^2] = \rho^2\).
Putting it all together:
\[ \text{Var}(\theta_1 | \theta_2) = 1 - 2\rho^2 + \rho^2 = 1 - \rho^2 \]
Therefore, the conditional variance of \(\theta_1\) given \(\theta_2\) is \(1 - \rho^2\).
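A quick Monte Carlo check of this result (a sketch; the choices rho = 0.9 and 100,000 draws are arbitrary):
# Simulate bivariate standard normal pairs and check the conditional variance
set.seed(1)
rho <- 0.9
theta2 <- rnorm(1e5)
theta1 <- rho * theta2 + sqrt(1 - rho^2) * rnorm(1e5)
var(theta1 - rho * theta2)                        # should be close to 1 - rho^2 = 0.19
c(var(theta1), var(theta2), cor(theta1, theta2))  # marginal variances ~ 1, correlation ~ rho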
OR
Let’s derive the conditional variance of \(\theta_1 | \theta_2\) using another approach.
Given that \(\theta_1\) and \(\theta_2\) follow a bivariate standard normal distribution, we have their joint density function:
\[ f_{\theta_1,\theta_2}(\theta_1, \theta_2) = \frac{1}{2\pi \sqrt{1 - \rho^2}} \exp\left(-\frac{1}{2(1 - \rho^2)}(\theta_1^2 - 2\rho\theta_1\theta_2 + \theta_2^2)\right) \]
We want to find the conditional variance \(\text{Var}(\theta_1 | \theta_2)\).
The conditional variance \(\text{Var}(\theta_1 | \theta_2)\) can be derived using the conditional expectation formula:
\[ \text{Var}(\theta_1 | \theta_2) = E[(\theta_1 - \mu_{\theta_1|\theta_2})^2 | \theta_2] \]
where \(\mu_{\theta_1|\theta_2}\) is the conditional mean of \(\theta_1\) given \(\theta_2\).
To find \(\mu_{\theta_1|\theta_2}\), we already know from previous derivations that it’s \(\rho \theta_2\).
So, we need to compute:
\[ \text{Var}(\theta_1 | \theta_2) = E[(\theta_1 - \rho \theta_2)^2 | \theta_2] \]
\[ = E[\theta_1^2 - 2\rho \theta_1 \theta_2 + \rho^2 \theta_2^2 | \theta_2] \]
\[ = E[\theta_1^2 | \theta_2] - 2\rho \theta_2 E[\theta_1 | \theta_2] + \rho^2 \theta_2^2 \]
Now, let’s find \(E[\theta_1^2 | \theta_2]\):
\[ E[\theta_1^2 | \theta_2] = \int_{-\infty}^{\infty} \theta_1^2 \cdot f_{\theta_1|\theta_2}(\theta_1 | \theta_2) \, d\theta_1 \]
We can use the conditional density function \(f_{\theta_1|\theta_2}(\theta_1 | \theta_2)\) we derived earlier:
\[ f_{\theta_1|\theta_2}(\theta_1 | \theta_2) = \frac{1}{\sqrt{2\pi(1 - \rho^2)}} \exp\left(-\frac{1}{2(1 - \rho^2)}(\theta_1 - \rho\theta_2)^2\right) \]
Plugging this into the integral for \(E[\theta_1^2 | \theta_2]\):
\[ E[\theta_1^2 | \theta_2] = \int_{-\infty}^{\infty} \theta_1^2 \cdot \frac{1}{\sqrt{2\pi(1 - \rho^2)}} \exp\left(-\frac{1}{2(1 - \rho^2)}(\theta_1 - \rho\theta_2)^2\right) \, d\theta_1 \]
This is a Gaussian integral, and we can evaluate it by standardizing the variable of integration. Let
\[ z = \frac{\theta_1 - \rho\theta_2}{\sqrt{1 - \rho^2}}, \qquad \text{so that} \qquad \theta_1 = \rho\theta_2 + \sqrt{1 - \rho^2}\, z, \qquad d\theta_1 = \sqrt{1 - \rho^2}\, dz \]
Substituting, the normalizing constants combine to give a standard normal density in \(z\):
\[ E[\theta_1^2 | \theta_2] = \int_{-\infty}^{\infty} \left(\rho\theta_2 + \sqrt{1 - \rho^2}\, z\right)^2 \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{1}{2}z^2\right) \, dz \]
Expanding the square and using the standard normal moments \(E[Z] = 0\) and \(E[Z^2] = 1\):
\[ E[\theta_1^2 | \theta_2] = \rho^2\theta_2^2 + 2\rho\theta_2\sqrt{1 - \rho^2}\cdot 0 + (1 - \rho^2)\cdot 1 = \rho^2\theta_2^2 + (1 - \rho^2) \]
Now, substituting \(E[\theta_1 | \theta_2] = \rho\theta_2\) (as derived before) into the variance expression:
\[ \text{Var}(\theta_1 | \theta_2) = E[\theta_1^2 | \theta_2] - 2\rho\theta_2 E[\theta_1 | \theta_2] + \rho^2\theta_2^2 \]
\[ = \rho^2\theta_2^2 + (1 - \rho^2) - 2\rho^2\theta_2^2 + \rho^2\theta_2^2 \]
\[ = 1 - \rho^2 \]
Therefore, \(\text{Var}(\theta_1 | \theta_2) = 1 - \rho^2\).
This confirms that for the bivariate standard normal distribution, the conditional variance of \(\theta_1\) given \(\theta_2\) is indeed \(1 - \rho^2\).
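The intermediate result \(E[\theta_1^2 \mid \theta_2] = \rho^2\theta_2^2 + (1 - \rho^2)\) can also be checked numerically (a sketch; rho = 0.9 and theta_2 = 1 are arbitrary choices):
# Numerical check of the conditional second moment
rho <- 0.9; theta2 <- 1
integrand2 <- function(t1) t1^2 * dnorm(t1, mean = rho * theta2, sd = sqrt(1 - rho^2))
integrate(integrand2, lower = -Inf, upper = Inf)$value
rho^2 * theta2^2 + (1 - rho^2)  # closed form; the two values should agree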
To compute the conditional mean \(\mu_{\theta_2|\theta_1}\), we start with the integral:
\[ \mu_{\theta_2|\theta_1} = \int_{-\infty}^{\infty} \theta_2 \cdot f_{\theta_2|\theta_1}(\theta_2 | \theta_1) \, d\theta_2 \]
where \(f_{\theta_2|\theta_1}(\theta_2 | \theta_1)\) is the conditional density function. Given that \(\theta_1\) and \(\theta_2\) follow a bivariate standard normal distribution, the conditional density function is given by:
\[ f_{\theta_2|\theta_1}(\theta_2 | \theta_1) = \frac{1}{\sqrt{2\pi(1 - \rho^2)}} \exp\left(-\frac{1}{2(1 - \rho^2)}(\theta_2 - \rho\theta_1)^2\right) \]
Substituting this expression into the integral, we have:
\[ \mu_{\theta_2|\theta_1} = \int_{-\infty}^{\infty} \theta_2 \cdot \frac{1}{\sqrt{2\pi(1 - \rho^2)}} \exp\left(-\frac{1}{2(1 - \rho^2)}(\theta_2 - \rho\theta_1)^2\right) \, d\theta_2 \]
Now, let \(u = \theta_2 - \rho\theta_1\). Then, \(du = d\theta_2\), and the integral becomes:
\[ \mu_{\theta_2|\theta_1} = \int_{-\infty}^{\infty} (u + \rho\theta_1) \cdot \frac{1}{\sqrt{2\pi(1 - \rho^2)}} \exp\left(-\frac{1}{2(1 - \rho^2)}u^2\right) \, du \]
Expanding the integrand, we have:
\[ \mu_{\theta_2|\theta_1} = \rho\theta_1 \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi(1 - \rho^2)}} \exp\left(-\frac{1}{2(1 - \rho^2)}u^2\right) \, du + \int_{-\infty}^{\infty} u \cdot \frac{1}{\sqrt{2\pi(1 - \rho^2)}} \exp\left(-\frac{1}{2(1 - \rho^2)}u^2\right) \, du \]
The first integral is the integral of a normal density with mean \(0\) and variance \(1 - \rho^2\), which integrates to 1, so the first term simplifies to \(\rho\theta_1\). The second term is the integral of an odd function over the entire real line, which evaluates to zero.
Therefore, the conditional mean \(\mu_{\theta_2|\theta_1}\) simplifies to \(\rho\theta_1\).
Therefore, it follows that:
\[ \theta_1|\theta_2, y \sim N(\rho\theta_2, 1-\rho^2) \]
and \[ \theta_2|\theta_1, y \sim N(\rho\theta_1, 1-\rho^2) \]
# Set parameters
burn_in <- 1000
keep_draws <- 5000
rho <- 0.9
# Initialize lists to store theta_1 and theta_2
theta_1 <- rep(0, burn_in + keep_draws)
theta_2 <- rep(0, burn_in + keep_draws)
# Set random seed for reproducibility
set.seed(45352)
# Start Gibbs sampling loop (chain initialized at theta_1[1] = theta_2[1] = 0)
for (i in 2:(burn_in + keep_draws)) {
  # Draw theta_2 given the previous draw of theta_1
  theta_2[i] <- sqrt(1 - rho^2) * rnorm(1) + rho * theta_1[i - 1]
  # Draw theta_1 given the current draw of theta_2
  theta_1[i] <- sqrt(1 - rho^2) * rnorm(1) + rho * theta_2[i]
}
# Discard the burn-in draws
keep <- (burn_in + 1):(burn_in + keep_draws)
# Scatter plot of the retained draws
plot(theta_1[keep], theta_2[keep], col = 'green', main = "Plot of theta_1 and theta_2")
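As an optional mixing diagnostic (a sketch using base R's acf(); keep is the index of retained draws defined above), the autocorrelation of the theta_1 draws shows how strongly successive Gibbs draws are correlated when rho = 0.9:
# Autocorrelation of the retained theta_1 draws (high rho implies slow mixing)
acf(theta_1[keep], main = "ACF of theta_1 draws (rho = 0.9)")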
# Set parameters
keep_draws <- 5000
rho <- 0.5
# Initialize lists to store theta_1 and theta_2
theta_1 <- rep(0, keep_draws)
theta_2 <- rep(0, keep_draws)
# Set random seed for reproducibility
set.seed(45352)
# Start Gibbs sampling loop
for (i in 2:keep_draws) {
  # Draw theta_2 given theta_1
  theta_2[i] <- sqrt(1 - rho^2) * rnorm(1) + rho * theta_1[i - 1]
  # Draw theta_1 given theta_2
  theta_1[i] <- sqrt(1 - rho^2) * rnorm(1) + rho * theta_2[i]
}
# Scatter plot of the Gibbs draws
plot(theta_1, theta_2, main = "Plot of theta_1 and theta_2",col='red')
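As a quick sanity check on this chain (a sketch), the sample correlation of the draws should be close to rho = 0.5:
# Sample correlation of the Gibbs draws (the first element is the initial value 0)
cor(theta_1[-1], theta_2[-1])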
Sigma <- matrix(c(4, 9, 9, 25), nrow = 2, byrow = TRUE)
Sigma1 <- sqrt(Sigma[1, 1])
Sigma2 <- sqrt(Sigma[2, 2])
rho <- Sigma[1, 2] / (Sigma1 * Sigma2)
states <- matrix(nrow = 2, ncol = 1501)
states[, 1] <- c(0, 0)
for (i in 1:1500) {
  z <- rnorm(2)
  x <- states[, i]
  # Draw x1 given the previous x2
  x1 <- rho * x[2] * Sigma1 / Sigma2 + sqrt(Sigma1^2 * (1 - rho^2)) * z[1]
  # Draw x2 given the newly drawn x1 (not the old x[1])
  x2 <- rho * x1 * Sigma2 / Sigma1 + sqrt(Sigma2^2 * (1 - rho^2)) * z[2]
  states[, i + 1] <- c(x1, x2)
}
plot(states[1, 502:1501], states[2, 502:1501], xlab = "x1", ylab = "x2", main = "Scatter Plot", pch = 16,col='purple')
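As a check on the sampler (a sketch using the same retained draws), the sample covariance matrix of the chain should approximate Sigma:
# Sample covariance of the retained draws vs the target covariance matrix
cov(t(states[, 502:1501]))
Sigma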
Write R or Python code to find the long-term trend of the transition matrix below.
# Define the transition matrix
transition_matrix <- matrix(c(0.3, 0.2, 0.5,
0.0, 1.0, 0.0,
0.0, 0.0, 1.0), nrow = 3, byrow = TRUE)
print(transition_matrix)
## [,1] [,2] [,3]
## [1,] 0.3 0.2 0.5
## [2,] 0.0 1.0 0.0
## [3,] 0.0 0.0 1.0
# Find the dominant eigenvector
eigen_result <- eigen(t(transition_matrix))
dominant_index <- which.max(Re(eigen_result$values))
dominant_eigenvector <- Re(eigen_result$vectors[, dominant_index])
dominant_eigenvector <- abs(dominant_eigenvector / sum(dominant_eigenvector))
print("Long-term trend (dominant eigenvector):")
## [1] "Long-term trend (dominant eigenvector):"
print(dominant_eigenvector)
## [1] 0 0 1
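As a cross-check (a sketch in base R; the power 100 is an arbitrary choice), repeatedly multiplying the transition matrix by itself shows the long-run transition probabilities from each starting state:
# Long-run behaviour via matrix powers: P^100 approximates the limit of P^n
P_n <- transition_matrix
for (k in 1:99) {
  P_n <- P_n %*% transition_matrix
}
round(P_n, 4)  # row i gives the long-run distribution when the chain starts in state i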
library(MASS)
Using information on pages 21 & 22 of “Chapter6 Summarizing.pdf”, produce the graphic shown on page 23 and the resulting point estimates (show your R or Python code).
library(mvtnorm)
# Data
Y <- c(2.68, 1.18, -0.97, -0.98, -1.03)
n <- length(Y)
# Sample mean and variance
Y_bar <- mean(Y)
s2 <- var(Y)
# Define the grid for mu and sigma^2
mu_seq <- seq(from = mean(Y) - 3 * sqrt(s2), to = mean(Y) + 3 * sqrt(s2), length.out = 100)
sigma2_seq <- seq(from = 0.1, to = 10 * var(Y), length.out = 100)
# Posterior calculations on the grid
grid <- matrix(0, nrow = length(mu_seq), ncol = length(sigma2_seq))
for (i in seq_along(mu_seq)) {
  for (j in seq_along(sigma2_seq)) {
    prior_mu <- 1     # Flat prior for mu
    prior_sigma2 <- 1 # Flat (improper) prior for sigma^2
    # Likelihood * Prior
    likelihood <- prod(dnorm(Y, mean = mu_seq[i], sd = sqrt(sigma2_seq[j])))
    prior <- prior_mu * prior_sigma2
    posterior <- likelihood * prior
    grid[i, j] <- posterior
  }
}
# Normalize the grid so that it sums to 1
grid <- grid / sum(grid)
# Plot the contour on the (mu, sigma) scale
filled.contour(mu_seq, sqrt(sigma2_seq), grid,
               xlab = expression(mu), ylab = expression(sigma),
               main = "Bivariate Distribution of mean and sigma")
Using relevant information in “Chapter7 PPD.pdf”, produce the graphic on page 8 (show your R or Python code).
# Data
Y <- 2
n <- 10
# Calculate parameters for the Beta distribution
A <- Y + 1
B <- n - Y + 1
# Plug-in estimate of P(Ystar > 0), using theta_hat = Y / n = 0.2
prob_positive <- 1 - dbinom(0, 10, 0.2)
cat("Probability that Ystar > 0:", prob_positive, "\n")
## Probability that Ystar > 0: 0.8926258
# Approximate the PPD using MC sampling
set.seed(123) # For reproducibility
theta <- rbeta(100000, A, B)
Ystar <- rbinom(100000, 10, theta)
mean_positive <- mean(Ystar > 0)
cat("Approximate probability that Ystar > 0:", mean_positive, "\n")
## Approximate probability that Ystar > 0: 0.87693
# Calculate probabilities for each value of Ystar for the PPD
ppd_probs <- sapply(0:10, function(y_star) mean(Ystar == y_star))
# Calculate probabilities for each value of Ystar for the plug-in method
plugin_probs <- dbinom(0:10, 10, 0.2)
# Plot probabilities against Ystar
plot(0:10, ppd_probs, type = "b", pch = 20, lwd = 2, col = "blue", ylim = c(0, 0.3), xlab = "Ystar", ylab = "Probability", main = "Probability vs Ystar for PPD and Plug-in Method")
lines(0:10, plugin_probs, type = "b", pch = 20, lwd = 2, col = "red")
legend("topright", legend = c("PPD", "Plug-in"), col = c("blue", "red"), lty = 1, pch = 20, lwd = 2)