Problem 2, Section 11.3, page 442
Given this transition matrix:
\(\begin{pmatrix} 1/2 & 1/3 & 1/6 \\ 3/4 & 0 & 1/4 \\ 0 & 1 & 0 \end{pmatrix}\)
a) Show that this is a regular Markov chain:
For the Markov chain to be regular, some power of the transition matrix must have all entries strictly positive; that is, after some fixed number of steps, every state must be reachable from every starting state with positive probability. Starting in state 1 is no problem, since every entry of the first row is already positive. Starting in state 2, one step leads to either state 1 or state 3; from state 1 any state can follow, and from state 3 the chain returns to state 2, so after two steps every state is possible. Starting in state 3, the chain always moves to state 2 first, so it needs one extra step: after three steps every state is possible. Hence the third power of the matrix is strictly positive and the chain is regular.
Raising the matrix to powers, we see this:
# transition matrix P; the entries are listed column by column
v <- c(.5, .75, 0, 1/3, 0, 1, 1/6, .25, 0)
dim(v) <- c(3, 3)
v %*% v   # two-step transition matrix P^2
## [,1] [,2] [,3]
## [1,] 0.500 0.3333333 0.1666667
## [2,] 0.375 0.5000000 0.1250000
## [3,] 0.750 0.0000000 0.2500000
v %*% v %*% v   # three-step transition matrix P^3
## [,1] [,2] [,3]
## [1,] 0.5000 0.3333333 0.1666667
## [2,] 0.5625 0.2500000 0.1875000
## [3,] 0.3750 0.5000000 0.1250000
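P^2 still has a zero in the (3,2) position, but every entry of P^3 is positive, which is exactly the condition for regularity. As a quick check (a small sketch reusing the matrix v defined above):
# every entry of P^3 should be strictly positive for a regular chain;
# this returns TRUE for the matrix printed above
all(v %*% v %*% v > 0)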
b) The process is started in state 1; find the probability that it is in state 3 after two steps.
v[1, ] %*% v[, 3]   # (row 1 of P) %*% (column 3 of P) = (P^2)[1, 3]
## [,1]
## [1,] 0.1666667
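This dot product is just the (1,3) entry of P^2, so the same 1/6 can also be read directly from the two-step matrix computed above (a sketch using the same v):
# (P^2)[1, 3]: probability of moving from state 1 to state 3 in two steps
(v %*% v)[1, 3]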
c) Find the limiting probability vector w.
library(pracma)   # provides rref()
p_prime <- t(v) - diag(1, 3, 3)   # fixed-vector condition: (t(P) - I) w = 0
u <- rbind(c(1, 1, 1), p_prime)   # prepend the normalization equation w1 + w2 + w3 = 1
uu <- cbind(u, c(1, 0, 0, 0))     # augment with the right-hand side (1, 0, 0, 0)
rref(uu)                          # last column of the reduced form is w
## [,1] [,2] [,3] [,4]
## [1,] 1 0 0 0.5000000
## [2,] 0 1 0 0.3333333
## [3,] 0 0 1 0.1666667
## [4,] 0 0 0 0.0000000
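Reading off the last column, w = (1/2, 1/3, 1/6); the final row of zeros just reflects the redundant fixed-vector equation. As a sanity check (a sketch, assuming v still holds the transition matrix), w should satisfy wP = w:
# w is a fixed probability vector: w %*% P should return w itself
w <- c(1/2, 1/3, 1/6)
w %*% v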