We have two coins: one is a fair coin and the other is a coin that produces heads with probability \(\frac{3}{4}\). One of the two coins is picked at random, and this coin is tossed \(n\) times. Let \(S_n\) be the number of heads that turns up in these \(n\) tosses.
No, not prior to the actual tossing. The Law of Large Numbers, in both its strong and weak forms, requires a sequence of independent and identically distributed (iid) random variables, and before we know which coin was picked the tosses do not form such a sequence. Unconditionally each toss is identically distributed, but the tosses are not independent: every outcome carries information about which coin was selected, and therefore about the distribution of the remaining tosses.
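As an illustrative check (this sketch and its variable names are my own, not part of the original argument), a short Python simulation estimates the chance that the second toss is heads given the first was heads. Under the mixture this comes out near 0.65, while the unconditional probability of heads is \(\frac{5}{8} = 0.625\), so the tosses are indeed correlated:

```python
import random

random.seed(0)
trials = 200_000
first_heads = 0   # count of trials whose first toss is heads
second_heads = 0  # count of trials whose second toss is heads
both_heads = 0    # count of trials where both tosses are heads

for _ in range(trials):
    p = random.choice([0.5, 0.75])  # pick one of the two coins at random
    first = random.random() < p
    second = random.random() < p
    first_heads += first
    second_heads += second
    both_heads += first and second

print(f"P(2nd = H)           ~ {second_heads / trials:.3f}")     # about 5/8 = 0.625
print(f"P(2nd = H | 1st = H) ~ {both_heads / first_heads:.3f}")  # about 0.650
```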
Yes, to any arbitrary level of certainty, though never absolute certainty. Once a coin has been selected, we are dealing with a sum of iid random variables, so the average number of heads converges in probability to the mean of the selected coin. All that is needed is to toss the coin enough times to achieve the desired certainty.
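To see this convergence in miniature, here is a hedged Python sketch (illustrative only): fix a randomly selected coin and watch the running mean \(S_n/n\) settle near that coin's head probability.

```python
import random

random.seed(1)
p = random.choice([0.5, 0.75])  # the selected (to us, unknown) coin
heads = 0
for n in range(1, 10_001):
    heads += random.random() < p
    if n in (10, 100, 1_000, 10_000):
        print(f"n = {n:>6}: S_n/n = {heads / n:.4f}")
print(f"head probability of the selected coin: {p}")
```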
A sufficient number of tosses may be obtained from Chebyshev's Inequality in the form \[ P\left(\left|\frac{S_n}{n} - \mu\right|\geq \epsilon\right)\leq\frac{\sigma^2}{n\epsilon^2}, \] where \(\mu\) and \(\sigma^2\) denote the mean and variance of a single toss.
Code a head as 1 and a tail as 0. The mean of the fair coin is then 0.5 and that of the unfair coin 0.75, a gap of 0.25. So if we set \(\epsilon\) to anything less than 0.125, the intervals \(\mu \pm \epsilon\) around the two means cannot overlap, since 0.5 + 0.125 = 0.75 - 0.125 = 0.625 is their common boundary. The larger we set \(\epsilon\) (up to that limit), the fewer tosses we need. We should even be able to set \(\epsilon\) equal to 0.125 itself and toss one extra time, as these distributions have no pathological plateau where the equality comes into play. If we take the larger of the toss counts needed for the two coins, then that (or one more) will be sufficient.
For the fair coin, \(\mu = p = 0.5\) and \(\sigma^2 = p(1-p) = 0.25\). Setting \(\frac{0.25}{n\cdot 0.125^2} = 0.05\) gives \(\mathbf{n = 320}\).
For the unfair coin, \(\mu = p = 0.75\) and \(\sigma^2 = p(1-p) = 0.1875\). Setting \(\frac{0.1875}{n\cdot 0.125^2} = 0.05\) gives \(\mathbf{n = 240}\).
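Both counts come from solving \(\frac{p(1-p)}{n\epsilon^2} = 0.05\) for \(n\). A small Python sketch (variable names are my own) reproduces the two figures:

```python
from math import ceil

eps, delta = 0.125, 0.05  # separation radius and tolerated failure probability
for label, p in (("fair", 0.5), ("unfair", 0.75)):
    var = p * (1 - p)                   # variance of a single toss
    n = ceil(var / (delta * eps ** 2))  # smallest n with sigma^2/(n eps^2) <= delta
    print(f"{label} coin: sigma^2 = {var}, n = {n}")
# fair coin:   sigma^2 = 0.25,   n = 320
# unfair coin: sigma^2 = 0.1875, n = 240
```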
So if we toss the coin 320 (or, to dodge the boundary case, 321) times, we can be at least 95% certain which coin we hold. If the empirical mean is less than 0.625, we conclude it is the fair coin; if it is greater than 0.625, we conclude it is the unfair coin.
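As a final sanity check, here is a hedged simulation of the whole procedure, again purely illustrative. Because Chebyshev's inequality is loose, the observed error rate should land far below the guaranteed 5%:

```python
import random

random.seed(2)
trials, n, cutoff = 50_000, 320, 0.625
errors = 0
for _ in range(trials):
    p = random.choice([0.5, 0.75])          # coin picked at random
    mean = sum(random.random() < p for _ in range(n)) / n
    guess = 0.5 if mean < cutoff else 0.75  # decision rule at the midpoint
    errors += guess != p
print(f"empirical error rate: {errors / trials:.5f}")  # far below the 0.05 bound
```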