Hi!

I'm currently preparing for a final exam in Mathematical Statistics and, as you might have guessed, my doubts are mostly about the theoretical part of the course.

So, I have worked on three different questions but have not been able to finish any of them; here is what I've done so far:

Problem 1

  1. The sum of the values obtained in a random sample of size 5 taken from a population with a Poisson distribution will be used to test the null hypothesis

\(H_{0}\): population mean > 2

against the alternative hypothesis:

\(H_{1}\): population mean \(\leq\) 2

The null hypothesis will be rejected if and only if the sum of the observations is 5 or less.

Find the power function for the means 1.5, 2, 2.4 and 2.6, and obtain a table with the probabilities of Type I and Type II errors.

My solution:

Let X ~ Poisson(\(\theta\)), so that E[X] = \(\theta\)

Then let \(X_{i}\) with i in {1, 2, …, 5} be the random sample, so

\(Y = \sum^{5}_{i = 1} X_{i}\), and we reject \(H_{0}\) iff \(Y \leq 5\)

Then, we know that \(Y\) ~ Poisson(\(5 \theta\)) and so our power function will be given by:

\(\pi (\theta)\) = P(\(Y \leq 5\)) = \(\sum^{5}_{y = 0} e^{-5 \theta} \frac{(5 \theta)^{y}}{y!}\)

i.e. \(\pi (\theta)\) = \(e^{-5 \theta} (1 + 5 \theta + \frac{(5 \theta)^{2}}{2!} + \frac{(5 \theta)^{3}}{3!} + \frac{(5 \theta)^{4}}{4!} + \frac{(5 \theta)^{5}}{5!})\)

And from here, the first part of the problem would be to get the values of \(\pi (1.5)\), \(\pi (2)\), \(\pi (2.4)\) and \(\pi (2.6)\), right?
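Just to sanity-check the arithmetic (this is only a numerical check, assuming Python with scipy is available; the variable names are mine), I would compute those four values like this:

```python
# Numerical check of the power function pi(theta) = P(Y <= 5), where Y ~ Poisson(5*theta).
from scipy.stats import poisson

for theta in (1.5, 2.0, 2.4, 2.6):
    power = poisson.cdf(5, mu=5 * theta)  # P(Y <= 5) when E[Y] = 5*theta
    print(f"pi({theta}) = {power:.4f}")
```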

But what I don't get in this problem is how to construct the classical table with the probabilities of Type I and Type II errors.

Then, for Problem 2

  1. Consider a population with a Bernoulli distribution; we have the hypothesis test:

\(H_{0}\): p = \(p_{0}\)

\(H_{1}\): p < \(p_{0}\)

with significance level \(\alpha\).

Given a sample of size \(n\), use the appropriate theory to find the rejection region and the test statistic (for the exact test and for a large sample size), and specify the property of the region that generates the corresponding one-sided alternative hypothesis.

My solution

I followed the process of a similar problem that we saw in class, so here we start by considering \(X \sim Bernoulli(p)\) and we note that \(\hat{p} = \frac{X}{n}\).

Then our likelihood function is:

We let \(\omega_{0} = [p_{0}, 1)\) and \(\omega_{1} = (0, p_{0})\), so that:

And using the likelihood ratio we get:

Taking \(\ln(\Lambda) = h(\hat{p})\) we get:

We then analyze the behavior of \(h(\hat{p})\) with the following two limits:

And taking the derivative we get:

hence

But from here I don't know how to continue. I think the next step is to fix an arbitrary size \(\alpha\), i.e. something like \(P(X \leq \text{some bound} \mid p = p_{0}) = \alpha\), in order to find the one-sided rejection region, but I don't know how to get there.

Also, I think the wanted test statistic is \(Z = \frac{\hat{p} - p_{0}}{\sqrt{\frac{p_{0} q_{0}}{n}}}\) (with \(q_{0} = 1 - p_{0}\)), but I'm also not sure.
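To make my guess a bit more concrete, here is a small sketch of what I think the exact cutoff and the large-sample statistic would look like (the values \(n = 20\), \(p_{0} = 0.5\), \(\alpha = 0.05\) and \(\hat{p} = 0.30\) are just made up for illustration, and it assumes scipy is available):

```python
# Sketch of my guess for Problem 2: exact and large-sample one-sided tests for H1: p < p0.
# n, p0, alpha and p_hat are hypothetical illustration values, not from the actual problem.
from math import sqrt
from scipy.stats import binom, norm

n, p0, alpha = 20, 0.5, 0.05
p_hat = 0.30                                   # made-up sample proportion

# Exact test: reject H0 when the number of successes X is <= c, where c is the
# largest integer with P(X <= c | p = p0) <= alpha (so the attained size is <= alpha).
c = max(k for k in range(n + 1) if binom.cdf(k, n, p0) <= alpha)

# Large-sample test: reject H0 when Z <= -z_alpha.
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
z_crit = norm.ppf(alpha)

print(f"exact cutoff c = {c}, attained size = {binom.cdf(c, n, p0):.4f}")
print(f"Z = {z:.3f}, reject H0 if Z <= {z_crit:.3f}")
```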

And finally, Problem 3

  1. Given a random sample of size \(n\) taken from a normally distributed population with unknown mean and variance, find the rejection region and the test statistic (completely specified) to test the null hypothesis

\(H_{0}\): \(\sigma\) = \(\sigma_{0}\) against the alternative hypothesis:

\(H_{1}\): \(\sigma \neq\) \(\sigma_{0}\)

My Solution

I'm actually sort of lost on this one. The only thing I think I've figured out is that if we consider \(X_{i} \sim N(\mu, \sigma)\) independent, we can then get

and then, by the m.g.f., we know that:

And from there I don´t know how to continue.
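My (possibly wrong) hunch is that the m.g.f. step leads to something like \((n - 1) S^{2} / \sigma^{2} \sim \chi^{2}_{n-1}\); if that is right, I imagine the two-sided test would look roughly like this sketch (the values \(n = 15\), \(\sigma_{0} = 2\), \(\alpha = 0.05\) and the sample variance are made up, and it assumes scipy is available):

```python
# Sketch of my hunch for Problem 3: a chi-square test for a normal variance.
# n, sigma0, alpha and s2 are made-up illustration values, not from the actual problem.
from scipy.stats import chi2

n, sigma0, alpha = 15, 2.0, 0.05
s2 = 6.1                                   # made-up sample variance S^2

# If (n - 1) * S^2 / sigma0^2 has a chi-square distribution with n - 1 degrees of
# freedom under H0, then a two-sided test rejects H0 in either tail.
stat = (n - 1) * s2 / sigma0**2
lower = chi2.ppf(alpha / 2, n - 1)         # lower alpha/2 critical value
upper = chi2.ppf(1 - alpha / 2, n - 1)     # upper alpha/2 critical value

print(f"statistic = {stat:.3f}; reject H0 if it is < {lower:.3f} or > {upper:.3f}")
```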

So, I know it is sort of a long post, and I thank you for taking the time to read it. Any help you can provide will be great for a starting-to-panic student.