A computer is shared by 2 users who send tasks to it remotely and work independently. At any minute, any connected user may disconnect with probability 0.5, and any disconnected user may connect with a new task with probability 0.2. Let \(X(t)\) be the number of concurrent users at time \(t\) (in minutes). This is a Markov chain with 3 states: 0, 1, and 2. The transition probability matrix can be calculated; it is
\[\begin{bmatrix} 0.64 & 0.32 & 0.04 \\ 0.40 & 0.50 & 0.10 \\ 0.25 & 0.50 & 0.25 \\ \end{bmatrix}\]
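For example, from state 0 both users are disconnected and each connects independently with probability 0.2, so the first row of the matrix is
\[p_{00} = 0.8^2 = 0.64, \qquad p_{01} = 2(0.2)(0.8) = 0.32, \qquad p_{02} = (0.2)^2 = 0.04;\]
the other two rows are obtained in the same way.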
Generate 10000 transitions of the Markov chain. What percent of times do you find your generated Markov chain in each of its 3 states?
Creating the matrix (entered row by row with byrow = TRUE), and loading the expm package, which provides the %^% operator for matrix exponentiation:
library(expm)
P = matrix(c(0.64, 0.32, 0.04, 0.40, 0.50, 0.10, 0.25, 0.50, 0.25), nrow = 3, byrow = TRUE)
Raising the matrix to the n-th power gives the n-step transition probabilities, i.e. the probabilities of moving between states over n minutes. For brevity, only the first few powers are shown.
P%^%1
## [,1] [,2] [,3]
## [1,] 0.64 0.32 0.04
## [2,] 0.40 0.50 0.10
## [3,] 0.25 0.50 0.25
P%^%2
## [,1] [,2] [,3]
## [1,] 0.5476 0.3848 0.0676
## [2,] 0.4810 0.4280 0.0910
## [3,] 0.4225 0.4550 0.1225
P%^%3
## [,1] [,2] [,3]
## [1,] 0.521284 0.401432 0.077284
## [2,] 0.501790 0.413420 0.084790
## [3,] 0.483025 0.423950 0.093025
Continuing, the three rows of P%^%n move closer and closer together. Raising the matrix to a very large power therefore does not make it blow up; because every row of P sums to 1, every row of P%^%n converges to the same limiting vector:
P%^%10000
##           [,1]      [,2]       [,3]
## [1,] 0.5102041 0.4081633 0.08163265
## [2,] 0.5102041 0.4081633 0.08163265
## [3,] 0.5102041 0.4081633 0.08163265
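To connect these powers back to the chain itself, row i + 1 of P%^%n is the distribution of \(X(n)\) given \(X(0) = i\). A minimal sketch, assuming (our choice, not part of the problem) that the computer starts with no users connected:
p0 = c(1, 0, 0)    # assumed initial distribution: X(0) = 0 with probability 1
p0 %*% (P %^% 5)   # distribution of the number of users after 5 minutes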
What we are really finding here is the steady-state distribution \(\pi = (\pi_0, \pi_1, \pi_2)\), the common row that the powers of \(P\) converge to. It satisfies the balance equation \(\pi P = \pi\) together with the normalization \(\pi_0 + \pi_1 + \pi_2 = 1\).
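Written out entry by entry, the system to solve is
\[\begin{aligned} 0.64\,\pi_0 + 0.40\,\pi_1 + 0.25\,\pi_2 &= \pi_0 \\ 0.32\,\pi_0 + 0.50\,\pi_1 + 0.50\,\pi_2 &= \pi_1 \\ 0.04\,\pi_0 + 0.10\,\pi_1 + 0.25\,\pi_2 &= \pi_2 \\ \pi_0 + \pi_1 + \pi_2 &= 1. \end{aligned}\]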
To solve this in R, we stack the transposed balance equations \((P^\top - I)\,\pi = 0\) on top of the normalization constraint and solve the resulting system with qr.solve:
A = rbind(t(P - diag(3)), c(1, 1, 1))
b = c(0, 0, 0, 1)
cbind(A, b)
## b
## [1,] -0.36 0.4 0.25 0
## [2,] 0.32 -0.5 0.50 0
## [3,] 0.04 0.1 -0.75 0
## [4,] 1.00 1.0 1.00 1
qr.solve(A, b) # our steady state distribution
## [1] 0.51020408 0.40816327 0.08163265
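As a quick check, the steady-state vector should be left unchanged by one more transition of the chain. A small sketch (the name ss is introduced here only for this check):
ss = qr.solve(A, b)   # steady-state distribution from above
ss %*% P              # one more transition reproduces ss (up to rounding)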
We can now use this to answer the question. In the long run the chain spends about 51.0% of its time in state 0 (no users connected), 40.8% in state 1, and 8.2% in state 2, as summarized in the table below:
tableF = rbind(state = 0:2, perc = round(100 * qr.solve(A, b)))
tableF
##       [,1] [,2] [,3]
## state    0    1    2
## perc    51   41    8
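Finally, as a sanity check we can do what the question literally asks and simulate 10000 transitions, counting how often each state is visited. This is only a sketch: the seed and the choice to start the chain in state 0 are assumptions, and sample() is just one of several ways to draw the next state. The empirical percentages should come out close to the steady-state values above.
set.seed(1)                          # assumed seed, for reproducibility only
n = 10000                            # number of transitions to generate
x = numeric(n + 1)
x[1] = 0                             # assume the chain starts with 0 users connected
for (t in 1:n) {
  # draw the next state from the row of P for the current state
  x[t + 1] = sample(0:2, size = 1, prob = P[x[t] + 1, ])
}
round(100 * table(x) / length(x), 1) # percent of time spent in states 0, 1, 2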