This section will be brief, as it is mostly a review of probability. However, since exchangeability and de Finetti’s theorem are especially important in Bayesian statistics, I’ll go into more depth there.
Let \(F, G,\) and \(H\) be events. A “belief” function \(\text{Be}(\cdot)\) should correspond to certain intuitions about how strongly we believe in events. Probability functions satisfy axioms that match these notions of belief:

- P1: \(0 = P(\text{not } H \mid H) \leq P(F \mid H) \leq P(H \mid H) = 1\)
- P2: \(P(F \cup G \mid H) = P(F \mid H) + P(G \mid H)\) whenever \(F \cap G = \emptyset\)
- P3: \(P(F \cap G \mid H) = P(G \mid H) \, P(F \mid G \cap H)\)
Note that the axioms of probability and theorems discussed in this section are the same whether you subscribe to a Bayesian or frequentist interpretation of probability.
Consider a set \(\mathcal{H}\), which is the “set of all possible truths.” We can partition \(\mathcal{H}\) into disjoint subsets \(\{H_1, \dots, H_k\}\), exactly one of which contains the truth.
We can assign probabilities to whether each of these sets contains the truth. First, some event in \(\mathcal{H}\) is true, so \(P(\mathcal{H}) = 1\). Let \(E\) be some observation (in this case related to the truth of one \(H_i\)). Then Bayes’ rule tells us how to update our beliefs about each \(H_i\):

\[P(H_i \mid E) = \frac{P(E \mid H_i) P(H_i)}{\sum_k P(E \mid H_k) P(H_k)}.\]
Two events \(F\) and \(G\) are independent if \(P(F \cap G) = P(F) P(G)\).
Two events \(F\) and \(G\) are conditionally independent given \(H\) if
\[\begin{align} P(F \cap G \mid H) = P(F \mid H) P(G \mid H) \end{align}\]
This implies \(P(F \mid H \cap G) = P(F \mid H)\). That is, if we know about \(H\), and \(F\) and \(G\) are conditionally independent given \(H\), then knowing \(G\) does not change our belief about \(F\). This is a key property leveraged in Bayesian networks.
A random variable \(Y\) is discrete if the set of all its possible values \(\mathcal{Y}\) is countable, i.e. they can be enumerated \(\mathcal{Y} = \{y_1, y_2, \dots \}\). Examples include the binomial and Poisson distributions. Discrete random variables have a probability mass function (PMF) \(f(y) = P(Y = y)\), which assigns a probability to every discrete point in the sample space. From this probability mass function, a cumulative distribution function (CDF) is also defined:
\[F(y) = P(Y \leq y) = \sum_{y_i \leq y} f(y_i)\]
\(Y\) is continuous if \(\mathcal{Y}\) can take any value in an interval. In that case, the probability of \(Y\) taking any single value in the sample space is 0. So instead we describe such distributions with probability density functions (PDFs) \(f(y)\), which must be integrated over an interval to obtain a probability: \(P(a \leq Y \leq b) = \int_a^b f(y) \; dy\). These variables also have CDFs:
\[\begin{align} F(y) = P(Y \leq y) = \int_{-\infty}^y f(x) \; dx \end{align}\]
Examples include the normal and beta distributions.
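As a quick check of these definitions in R, the discrete CDF is the running sum of the PMF, and the continuous CDF is the integral of the density (the binomial and normal parameters below are arbitrary):

```r
# Discrete: the binomial CDF is the running sum of the PMF
y <- 0:10
all.equal(cumsum(dbinom(y, size = 10, prob = 0.3)),
          pbinom(y, size = 10, prob = 0.3))        # TRUE

# Continuous: the normal CDF is the integral of the density
integrate(dnorm, lower = -Inf, upper = 1.5)$value  # ~0.9332
pnorm(1.5)                                         # 0.9331928
```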
In the same way that we use the mean, mode, and median to describe samples, we can use them to describe distributions. Notice that for many distributions, these quantities are not the same.
We also use variance and quantiles to measure the spread of distributions.
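For example, the mean, median, and mode of a right-skewed Gamma distribution all differ; a quick sketch in R (the shape and rate values are arbitrary):

```r
# Right-skewed Gamma(shape = 2, rate = 1): mode < median < mean
shape <- 2; rate <- 1
c(mode   = (shape - 1) / rate,        # Gamma mode formula (valid for shape >= 1)
  median = qgamma(0.5, shape, rate),  # ~1.678
  mean   = shape / rate)              # 2
```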
Let \(Y_1\) and \(Y_2\) be discrete random variables with sample spaces \(\mathcal{Y}_1\) and \(\mathcal{Y}_2\). Then the joint PMF of \(Y_1\) and \(Y_2\) is defined as:
\[ p(y_1, y_2) = P(\{Y_1 = y_1\} \cap \{Y_2 = y_2 \}) \]
and the marginal density of \(Y_1\) is obtained by summing over all possible values of \(Y_2\):
\[\begin{align} p(y_1) &= \sum_{y_2 \in \mathcal{Y}_2} p(y_1, y_2) \\ &= \sum_{y_2 \in \mathcal{Y}_2} p(y_1 \mid y_2) p(y_2). \end{align}\]
The conditional density is
\[\begin{align} p(y_2 \mid y_1) = \frac{p(y_1, y_2)}{p(y_1)} \end{align}\]
Notice that given the joint density \(p(y_1, y_2)\), we can calculate the marginal and conditional densities \(\{ p(y_1), p(y_2), p(y_1 \mid y_2), p(y_2 \mid y_1) \}\) by summing over the relevant variables. Additionally, given \(p(y_1)\) and \(p(y_2 \mid y_1)\) (or the reverse), we can reconstruct the joint distribution. However, given only the marginal densities \(p(y_1)\) and \(p(y_2)\), we can’t reconstruct the joint distribution, since we don’t know whether the variables are independent.
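To make this concrete, here is a made-up joint PMF and the marginals and conditionals computed from it in R:

```r
# A made-up joint PMF over (Y1, Y2); rows index y1, columns index y2
joint <- rbind(c(0.10, 0.20),
               c(0.30, 0.40))
dimnames(joint) <- list(y1 = c("0", "1"), y2 = c("0", "1"))

p_y1 <- rowSums(joint)            # marginal of Y1: sum over y2
p_y2 <- colSums(joint)            # marginal of Y2: sum over y1

p_y2_given_y1 <- sweep(joint, 1, p_y1, "/")   # p(y2 | y1): each row sums to 1

# Marginals alone can't recover the joint: this product form holds only under independence
outer(p_y1, p_y2)                 # differs from `joint`, so Y1 and Y2 are dependent
```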
In the continuous case, the probability density function is a function of \(y_1\) and \(y_2\) such that the CDF is
\[ F(y_1, y_2) = \int_{-\infty}^{y_1} \int_{-\infty}^{y_2} p(u, v) \; dv \; du. \]
Obtaining the marginal densities can be done by integrating out the irrelevant variable:
\[\begin{align} p(y_1) &= \int_{-\infty}^{\infty} p(y_1, y_2) \; dy_2 \\ p(y_2) &= \int_{-\infty}^{\infty} p(y_1, y_2) \; dy_1 \end{align}\]
With the marginal densities, you can compute the conditional densities \(p(y_2 \mid y_1) = p(y_1, y_2) / p(y_1)\), etc.
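A quick numerical sketch, using an assumed joint density \(p(y_1, y_2) = y_1 + y_2\) on \([0, 1]^2\):

```r
# Toy joint density p(y1, y2) = y1 + y2 on [0, 1]^2 (integrates to 1)
p_joint <- function(y1, y2) y1 + y2

# Marginalize numerically: p(y1) = integral over y2, which equals y1 + 1/2
p_y1 <- function(v) integrate(function(y2) p_joint(v, y2), 0, 1)$value
p_y1(0.2)   # 0.7

# Conditional density p(y2 | y1 = 0.2) integrates to 1
integrate(function(y2) p_joint(0.2, y2) / p_y1(0.2), 0, 1)$value
```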
Let \(Y_1, \dots, Y_n\) be random variables dependent on a common parameter \(\theta\). Then \(Y_1 \dots, Y_n\) are conditionally independent given \(\theta\) if
\[\begin{align} p(y_1, \dots, y_n \mid \theta) = p(y_1 \mid \theta) \times \dots \times p(y_n \mid \theta). \end{align}\]
Note this extends naturally from the definition of independence for two events, \(P(A \cap B) = P(A) P(B)\). Thus, once \(\theta\) is known, observing any \(Y_i\) gives no additional information about the other \(Y_j\). Lastly, the joint density of these variables can be written as
\[ p(y_1, \dots, y_n \mid \theta) = \prod_i p(y_i \mid \theta). \]
We say that \(Y_1, \dots, Y_n\) are conditionally independent and identically distributed (i.i.d.):
\[ Y_1, \dots, Y_n \mid \theta \sim \text{i.i.d.} \; p(y \mid \theta). \]
In many situations with several random variables, we would intuit that the specific order in which the variables are observed isn’t important. For example, consider a random sample of 3 participants from an infinite population, where each participant either has some property (1) or doesn’t (0). It makes sense that
\[p(0, 0, 1) = p(1, 0, 0) = p(0, 1, 0).\]
since each person has the property with the same probability \(\theta\), regardless of where they appear in the sample. This property is called exchangeability.
Exchangeability. Let \(Y_1, \dots, Y_n\) be random variables. If \(p(y_1, \dots, y_n) = p(y_{\pi_1}, \dots, y_{\pi_n})\) for all permutations \(\pi\) of \(\{1, \dots, n\}\), then \(Y_1, \dots, Y_n\) are exchangeable.
If \(Y_1, \dots, Y_n\) are conditionally i.i.d. given \(\theta\), then they are exchangeable:
\[\begin{align} p(y_1, \dots, y_n) &= \int p(y_1, \dots, y_n \mid \theta)\; d\theta & \\ &= \int \left( \prod_i p(y_i \mid \theta) \right) p(\theta) \; d\theta & \text{(i.i.d.)} \\ &= \int \left( \prod_i p(y_{\pi_i} \mid \theta) \right) p(\theta) \; d\theta & \text{(order of product doesn't matter)} \\ &= p(y_{\pi_1}, \dots, y_{\pi_n}). \end{align}\]
The classical assumption for Bernoulli variables \(X_1, X_2, \dots, X_n\) arising as outcomes of the same experiment (e.g. flips of a coin) is independence. But continuing to observe the \(X_j\) should change our opinion about the distribution of future outcomes (e.g. we gradually learn the coin’s bias), which strict independence rules out. So Bayesian statisticians should instead assume exchangeability, a weaker condition than independence.
de Finetti’s theorem. Let \(Y_1, \dots, Y_n\) be a finite subset of an infinite sequence of exchangeable, but not necessarily i.i.d., random variables: \[p(y_1, \dots, y_n) = p(y_{\pi_1}, \dots, y_{\pi_n})\] for all permutations \(\pi\). Then our model can be written as \[p(y_1, \dots, y_n) = \int \left( \prod_1^n p(y_i \mid \theta) \right) p(\theta) \; d\theta.\]
So, in general, \[Y_1, \dots, Y_n \mid \theta \text{ are i.i.d. with } \theta \sim p(\theta) \Leftrightarrow Y_1, \dots, Y_n \text{ are exchangeable for all $n$}\]
Importantly, if we sample from a sufficiently large population, then we can model the sample variables as being approximately conditionally i.i.d.
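A small simulation illustrates the distinction (the Beta(2, 2) prior is an arbitrary choice): draws are dependent marginally, but approximately independent once \(\theta\) is held fixed.

```r
set.seed(1)
# theta ~ Beta(2, 2), then Y1, Y2 | theta are i.i.d. Bernoulli(theta)
n_sims <- 1e5
theta  <- rbeta(n_sims, 2, 2)
y1     <- rbinom(n_sims, 1, theta)
y2     <- rbinom(n_sims, 1, theta)

cor(y1, y2)                         # ~0.2: marginally dependent

near_half <- abs(theta - 0.5) < 0.01
cor(y1[near_half], y2[near_half])   # ~0: independent once theta is (nearly) fixed
```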
From Heath & Sudderth (1976). De Finetti’s Theorem on Exchangeable Variables. Also see this neat StackExchange answer.
For every infinite sequence of exchangeable random variables \((X_n)\) having values in \(\{0, 1\}\) and a finite subsequence \(X_1, \dots, X_n\), there is a probability distribution \(F\) such that
\[\begin{align} P(X_1 = 1, \dots, X_k = 1, X_{k+1} = 0, \dots, X_n = 0) &= P\left(\underbrace{1, \dots, 1}_{\text{$k$ times}}, \underbrace{0, \dots, 0}_{\text{$n - k$ times}}\right) \\ &= \int_{0}^{1} \theta^k (1 - \theta)^{n-k} \, F(d \theta) \end{align}\]
where \(F\) is the prior over \(\Theta\), i.e. the values that \(\theta\) can take.
Lemma. Let \(p_{k, n} = P(X_1 = 1, \dots, X_k = 1, X_{k + 1} = 0, \dots, X_n = 0)\), the quantity on the left side of the above equation, and suppose \(X_1, \dots, X_m\) are exchangeable Boolean random variables as before.
Let \(q_r = P\left( \sum_{j = 1}^m X_j = r \right)\); that is, \(q_r\) is the probability that exactly \(r\) of the \(m\) variables are 1.
Also denote the falling factorial \((x)_k = \prod_{j = 0}^{k - 1} (x - j) = \frac{x!}{(x - k)!}\).
Then,
\[\begin{align} p_{k, n} = \sum_{r = 0}^{m} \frac{(r)_k (m - r)_{n - k}}{(m)_n} q_r \text{ for } 0 \leq k \leq n \leq m \end{align}\]
where \(\frac{(r)_k (m - r)_{n - k}}{(m)_n}\) is the probability that, when \(r\) ones and \(m - r\) zeros are arranged uniformly at random in a sequence of length \(m\), the first \(k\) positions are ones and the next \(n - k\) positions are zeros.
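We can sanity-check the lemma numerically. The sketch below picks an arbitrary \(q_r\), \(m\), \(n\), and \(k\), simulates exchangeable sequences by arranging the ones uniformly at random given their count, and compares the empirical \(p_{k,n}\) to the formula:

```r
set.seed(1)
ff <- function(x, k) if (k == 0) 1 else prod(x - 0:(k - 1))  # falling factorial (x)_k

m <- 10; n <- 4; k <- 2
q <- (0:m + 1)^2; q <- q / sum(q)    # an arbitrary distribution for R = sum(X_j)

# p_{k,n} from the lemma
p_formula <- sum(sapply(0:m, function(r)
  ff(r, k) * ff(m - r, n - k) / ff(m, n) * q[r + 1]))

# p_{k,n} by simulation: draw R ~ q, then arrange the ones uniformly at random
# (given the sum, exchangeability makes every arrangement equally likely)
hits <- replicate(1e5, {
  r <- sample(0:m, 1, prob = q)
  x <- sample(c(rep(1, r), rep(0, m - r)))
  all(x[1:k] == 1) && all(x[(k + 1):n] == 0)
})
c(formula = p_formula, simulated = mean(hits))
```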
With the substitution \(\frac{r}{m} = \theta\), we can re-package the summation over discrete \(r\) as an integral over a step function \(F_m\) with domain \([0, 1]\) that jumps by \(q_r\) at each point \(r / m\):
\[ p_{k, n} = \int_{0}^{1} \frac{(\theta m)_k ((1 - \theta)m)_{n - k}}{(m)_n} F_m(d\theta). \]
As \(m\) goes to infinity, the steps of the step function grow infinitely small, such that \(F_m\) converges to the continuous probability distribution \(F\). By Helly’s theorem (and some hand-waving in this explanation), the combinatorial terms of the integrand converge to the powers in the theorem: \(\theta^k (1 - \theta)^{n - k}\).
The marginal probability distribution of the father’s occupation is found by summing over the son’s occupations. Formally, let \(F\) be the father’s occupation and \(S\) the son’s. Then \(P(F = f) = \sum_{s} P(F = f, S = s)\).
This time sum over the father’s occupations: \(P(S = s) = \sum_{f} P(F = f, S = s)\).
In the row of the table where the father’s occupation is “farm”, normalize the values of the son’s occupation such that they sum to one. This can be done by dividing each value by the sum of the row. Formally, this is calculating
\[\begin{align} P(S = s \mid F = \text{farm}) &= \frac{P(S = s, F = \text{farm})}{P(F = \text{farm})} \\ &= \frac{P(S = s, F = \text{farm})}{\sum_{s' \in S} P(S = s', F = \text{farm})}. \end{align}\]
Normalize the column where the son’s occupation is “farm” like above.
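For illustration, the whole exercise on a made-up joint table in R (these are not the survey’s actual values):

```r
# A made-up joint distribution of father's (rows) and son's (columns) occupations
joint <- rbind(farm    = c(farm = 0.10, skilled = 0.05, other = 0.05),
               skilled = c(0.05, 0.40, 0.15),
               other   = c(0.02, 0.08, 0.10))

p_father <- rowSums(joint)               # (a) marginal of father's occupation
p_son    <- colSums(joint)               # (b) marginal of son's occupation
joint["farm", ] / p_father["farm"]       # (c) P(S = s | F = farm)
joint[, "farm"] / p_son["farm"]          # (d) P(F = f | S = farm)
```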
Since expectation is linear, \[\begin{align} \mathbb{E}(a_1 Y_1 + a_2 Y_2) &= a_1\mathbb{E}(Y_1) + a_2\mathbb{E}(Y_2) \\ &= a_1 \mu_1 + a_2 \mu_2 \end{align}\]
Since \(Y_1\) and \(Y_2\) are independent, \(\text{Cov}(Y_1, Y_2) = 0\). When adding variances, the covariance cross term must be included in general: \[\begin{align} \text{Var}(a_1 Y_1 + a_2 Y_2) &= a_1^2\text{Var}(Y_1) + a_2^2\text{Var}(Y_2) + 2a_1 a_2\text{Cov}(Y_1, Y_2) \\ &= a_1^2\sigma_1^2 + a_2^2\sigma_2^2 \end{align}\]
\[\begin{align} \mathbb{E}(a_1 Y_1 - a_2 Y_2) &= a_1\mathbb{E}(Y_1) - a_2\mathbb{E}(Y_2) \\ &= a_1 \mu_1 - a_2 \mu_2 \end{align}\]
\[\begin{align} \text{Var}(a_1 Y_1 - a_2 Y_2) &= a_1^2\text{Var}(Y_1) + a_2^2\text{Var}(Y_2) - 2a_1 a_2\text{Cov}(Y_1, Y_2) \\ &= a_1^2\sigma_1^2 + a_2^2\sigma_2^2 \\ &= \text{Var}(a_1 Y_1 + a_2 Y_2) \end{align}\]
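A quick Monte Carlo check of both formulas, with arbitrarily chosen independent normal variables:

```r
set.seed(1)
# Independent Y1 ~ N(1, 2^2) and Y2 ~ N(3, 1^2), with a1 = 2, a2 = 3 (arbitrary)
a1 <- 2; a2 <- 3
y1 <- rnorm(1e5, mean = 1, sd = 2)
y2 <- rnorm(1e5, mean = 3, sd = 1)

var(a1 * y1 + a2 * y2)   # ~ a1^2 * 4 + a2^2 * 1 = 25
var(a1 * y1 - a2 * y2)   # ~ 25 as well, since Cov(Y1, Y2) = 0
```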
Let \(X, Y, Z\) be random variables with joint density \(p(x, y, z) \propto f(x, z) g(y, z) h(z)\).
\[\begin{align} p(x \mid y, z) &= \frac{p(x, y, z)}{p(y, z)} \\ &= \frac{p(x, y, z)}{\int p(x, y, z) \; dx} \\ &\propto \frac{f(x, z) g(y, z) h(z)}{\int f(x, z) g(y, z) h(z) \; dx} \\ &\propto \frac{f(x, z) g(y, z) h(z)}{g(y, z) h(z) \int f(x, z) \; dx} \\ &\propto \frac{f(x, z)}{\int f(x, z) \; dx} \end{align}\]
\[\begin{align} p(y \mid x, z) &= \frac{p(x, y, z)}{p(x, z)} \\ &\propto \frac{f(x, z) g(y, z) h(z)}{\int p(x, y, z) \; dy} \\ &\propto \frac{f(x, z) g(y, z) h(z)}{\int f(x, z) g(y, z) h(z) \; dy} \\ &\propto \frac{f(x, z) g(y, z) h(z)}{f(x, z) h(z) \int g(y, z) \; dy} \\ &\propto \frac{g(y, z)}{\int g(y, z) \; dy} \end{align}\]
It is sufficient to show that \(p(x \mid y, z) = p(x \mid z)\):
From (a), \(p(x \mid y, z) \propto \frac{f(x, z)}{\int f(x, z) \; dx}\). Now
\[\begin{align} p(x \mid z) &= \frac{p(x, z)}{p(z)} \\ &\propto \frac{\int p(x, y, z) \; dy}{\int \int p(x, y, z) \; dy \; dx} \\ &\propto \frac{\int f(x, z) g(y, z) h(z) \; dy}{\int \int f(x, z) g(y, z) h(z) \; dy \; dx} \\ &\propto \frac{f(x, z) h(z) \int g(y, z) \; dy}{h(z) \left( \int g(y, z) \; dy \right) \left( \int f(x, z) \; dx \right)} \\ &\propto \frac{f(x, z)}{\int f(x, z) \; dx} \\ &\propto p(x \mid y, z) \end{align}\]
Now, since \(p(x \mid z) \propto p(x \mid y, z)\) and additionally \(\int p(x \mid z) \; dx = \int p(x \mid y, z) \; dx = 1\), we have \(p(x \mid z) = p(x \mid y, z)\), so \(X\) and \(Y\) are conditionally independent given \(Z\).
Use P3 and condition on an always-true event:

\[\begin{align} P(H_j \cap E) = P(H_j \cap E \mid 1 = 1) &= P(H_j \mid 1 = 1) P(E \mid H_j \cap \{1 = 1\}) = P(H_j)P(E \mid H_j) \\ P(H_j \cap E) = P(H_j \cap E \mid 1 = 1) &= P(E \mid 1 = 1) P(H_j \mid E \cap \{1 = 1\}) = P(E)P(H_j \mid E) \end{align}\]

so \(P(H_j)P(E \mid H_j) = P(E)P(H_j \mid E)\).
If we can assume that \(P(\mathcal{H}) = 1\), then

\[\begin{align} P(E) &= P(E \cap \mathcal{H}) \\ &= P(E \cap (H_1 \cup H_2 \cup \dots \cup H_k)) \\ &= P((E \cap H_1) \cup (E \cap (H_2 \cup \dots \cup H_k))) & \text{distributing} \\ &= P(E \cap H_1) + P(E \cap (H_2 \cup \dots \cup H_k)) & \text{P2; implied condition on a true event} \end{align}\]
Proceed inductively from (b) to obtain \(P(E) = \sum_k P(E \cap H_k)\).
\[\begin{align} & P(H_j)P(E \mid H_j) = P(E)P(H_j \mid E) \\ \implies& P(H_j \mid E) = \frac{P(H_j)P(E \mid H_j)}{P(E)} \\ \implies& P(H_j \mid E) = \frac{P(H_j)P(E \mid H_j)}{\sum_k P(E \cap H_k)} & \text{From c} \\ \implies& P(H_j \mid E) = \frac{P(H_j)P(E \mid H_j)}{\sum_k P(E \mid H_k) P(H_k)} & \text{P3} \\ \end{align}\]
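In R, this posterior computation over a partition is a one-liner; the priors and likelihoods below are made-up illustration values:

```r
# Bayes' rule over a partition {H_1, H_2, H_3}
prior      <- c(H1 = 0.5, H2 = 0.3, H3 = 0.2)   # P(H_k)
likelihood <- c(H1 = 0.1, H2 = 0.6, H3 = 0.9)   # P(E | H_k)

posterior <- prior * likelihood / sum(prior * likelihood)
posterior        # P(H_k | E); the normalizing constant is P(E) from part (c)
```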
```r
x = rbind(
  #  Y = 0    Y = 1
  c(.5 * .4, .5 * .6), # X = 0
  c(.5 * .6, .5 * .4)  # X = 1
)
rownames(x) = c("X = 0", "X = 1")
kable(x, col.names = c("Y = 0", "Y = 1"))
```

|       | Y = 0 | Y = 1 |
|-------|-------|-------|
| X = 0 | 0.2   | 0.3   |
| X = 1 | 0.3   | 0.2   |
\[\mathbb{E}(Y) = 0.5.\] The probability that the ball is green is \(0.5\).
\[\begin{align} \text{Var}(Y \mid X = 0) &= \mathbb{E}(Y^2 \mid X = 0) - \mathbb{E}(Y \mid X = 0)^2 \\ &= 1^2 p(Y = 1 \mid X = 0) + 0^2 p(Y = 0 \mid X = 0) - \\ & \quad (1 \cdot p(Y = 1 \mid X = 0) + 0 \cdot p(Y = 0 \mid X = 0))^2 \\ &= .6 - (.6)^2 \\ &= 0.24 \\ &= \text{Var}(Y \mid X = 1) & \text{(same calculation with $.4$ in place of $.6$)} \end{align}\]
\[\begin{align} \text{Var}(Y) &= \mathbb{E}(Y^2) - \mathbb{E}(Y)^2 \\ &= 1^2 p(Y = 1) + 0^2 p(Y = 0) - (0.5)^2 \\ &= 0.5 - (0.5)^2 = 0.25 \end{align}\]
\(\text{Var}(Y)\) is slightly larger since we are more uncertain about the value of \(Y\) when we have not yet flipped the coin. Knowing which urn we will draw from sharpens our probabilities of obtaining a green or red ball.
\[\begin{align} P(X = 0 \mid Y = 1) &= \frac{P(Y = 1 \mid X = 0) P(X = 0)}{P(Y = 1)} = \frac{.6 \times .5}{.5} = .6 \end{align}\]
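Equivalently, reading the answer off the joint table constructed above (adding column names, which the original snippet sets only inside `kable`):

```r
colnames(x) <- c("Y = 0", "Y = 1")         # label the columns of `x` from above
x["X = 0", "Y = 1"] / sum(x[, "Y = 1"])    # 0.3 / 0.5 = 0.6
```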
If \(A \perp B \mid C\) then
\[\begin{align} P(A, B \mid C) &= P(A \mid C) P(B \mid C) \\ &= (1 - P(A^c \mid C))P(B \mid C) \\ &= P(B \mid C) - P(B \mid C)P(A^c \mid C) \end{align}\]
so \(P(B \mid C)P(A^c \mid C) = P(B \mid C) - P(A, B \mid C)\).
Also notice \[\begin{align} & P(B \mid C) = P(A, B \mid C) + P(A^c, B \mid C) & \text{LTP} \\ \implies & P(A^c, B \mid C) = P(B \mid C) - P(A, B \mid C) \end{align}\]
Equating the two equations above, we have \[ P(A^c, B \mid C) = P(B \mid C) P(A^c \mid C) \]
We do this similarly for the other events.
For a case where \(A \perp B \mid C\) holds but \(A \perp B \mid C^c\) does not, consider a Bayesian network where knowing \(C\) fixes the values of \(A\) and \(B\) with certainty (e.g. \(P(A, B \mid C) = 1\), so conditional independence given \(C\) holds trivially), but absent \(C\), the values of \(A\) and \(B\) still affect each other.
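A minimal numerical instance of this construction, with probabilities chosen to match the description:

```r
# Given C, both A and B occur with certainty; given C^c, they either both
# occur or both fail (probability 0.5 each), so they are perfectly dependent
p_AB_given_C  <- 1.0; p_A_given_C  <- 1.0; p_B_given_C  <- 1.0
p_AB_given_Cc <- 0.5; p_A_given_Cc <- 0.5; p_B_given_Cc <- 0.5

p_AB_given_C  == p_A_given_C  * p_B_given_C    # TRUE:  A and B independent given C
p_AB_given_Cc == p_A_given_Cc * p_B_given_Cc   # FALSE: dependent given C^c
```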
No matter how likely the event, the maximum amount of money you can win is $1. Thus no rational person would pay more than $1 for the possibility of winning $1, as doing so would guarantee a loss.
Since exactly one of \(E\) and \(E^c\) will happen, we would want the bets of someone betting on \(E\) and someone betting on \(E^c\) to even out.
A frequentist idea: if I were to pick at random many, many \(x\) sampled from the census roll, what proportion of them would be Hindu?
A subjectivist interpretation: how strongly do I believe that a random person from this census roll is Hindu?
(Should be \(0.15\))
If I were to pick at random many, many \(x\) from the census roll, in the long run average, what proportion of them would be \(6452859\)?
How strongly do I believe that this \(x\) I am going to sample is \(6452859\)?
An interesting one
Assume person \(6452859\) is a random person sampled from Sri Lanka who is either Hindu or not. In the long run, if I observed many such persons \(6452859\), what proportion of them would be Hindu?
Person \(6452859\) is one person who may or may not be Hindu. How strongly do I believe that he/she is Hindu?
Skipping