Due: 2/19/2014
Feynman Liang (feynman.liang@gmail.com)
First, calculate the expected proportion of games won by the team: this is the stationary probability of the win state, obtained from the eigenvector of t(P) with eigenvalue 1.
eigensystem = eigen(t(matrix(c(0.8, 0.2, 0.3, 0.7), nrow = 2, ncol = 2, byrow = TRUE)))
eigensystem
## $values
## [1] 1.0 0.5
##
## $vectors
## [,1] [,2]
## [1,] 0.8321 -0.7071
## [2,] 0.5547 0.7071
This shows that the steady-state probability distribution pibar is proportional to eigensystem$vectors[, 1], the eigenvector corresponding to eigenvalue 1.
Normalizing so that the entries sum to one:
pibar = eigensystem$vectors[, 1]/sum(eigensystem$vectors[, 1])
pibar
## [1] 0.6 0.4
Applying the law of total probability yields P(dinner) = pibar(win) * P(dinner | win) + pibar(loss) * P(dinner | loss)
pibar %*% c(0.7, 0.2) # P(dinner)
## [,1]
## [1,] 0.5
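As a quick sanity check (a sketch added here, not part of the original solution; the seed and nsim are arbitrary illustrative choices), simulating the win/lose chain directly should reproduce both the 0.6 win proportion and the 0.5 dinner probability.
set.seed(1)  # arbitrary seed, for reproducibility only
P <- matrix(c(0.8, 0.2, 0.3, 0.7), nrow = 2, byrow = TRUE)  # rows/cols: 1 = win, 2 = lose
p.dinner <- c(0.7, 0.2)  # P(dinner | win), P(dinner | lose)
nsim <- 1e+05
states <- numeric(nsim)
states[1] <- 1  # start in the win state
for (k in 2:nsim) {
    states[k] <- sample(1:2, 1, prob = P[states[k - 1], ])
}
mean(states == 1)  # long-run win fraction, should be close to pibar[1] = 0.6
mean(runif(nsim) < p.dinner[states])  # dinner frequency, should be close to 0.5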
n <- 100
nvertices <- 20
p <- 0.5
coinflips <- sample(c(-1, 1), n, replace = TRUE, prob = c(1 - p, p))  # +1 = ccw step (prob p), -1 = cw step
walk <- cumsum(c(0, coinflips)) %% nvertices  # wrap the position around the circle (modulo number of vertices)
theta <- seq(0, 2 * pi, 0.01)
## Commented out to reduce output:
## for (k in 1:n) {
##   plot(cos(theta), sin(theta), type = 'l', ylim = c(-1.2, 1.2),
##        asp = 1, col = 'blue', xlab = '', ylab = '', main = paste('Time step', k))
##   points(cos(walk[k] * 2 * pi/nvertices), sin(walk[k] * 2 * pi/nvertices), col = 'red', pch = 19)
##   Sys.sleep(0.05)
## }
As p -> 1, the walk is more likely to make it all the way around the circle.
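To support this claim numerically (a sketch I am adding; tour.prob, the simulation size, and the chosen p values are illustrative, not part of the original), we can estimate the probability that the unwrapped walk reaches a displacement of +/-(N-1), i.e. visits every other vertex, before returning to its start.
tour.prob <- function(N, p, nsim = 10000) {
    successes <- 0
    for (s in 1:nsim) {
        pos <- sample(c(-1, 1), 1, prob = c(1 - p, p))  # first step, unwrapped position
        # walk until returning to 0 (failure) or reaching +/-(N-1) (all other vertices visited)
        while (pos != 0 && abs(pos) < N - 1) {
            pos <- pos + sample(c(-1, 1), 1, prob = c(1 - p, p))
        }
        if (abs(pos) == N - 1) successes <- successes + 1
    }
    successes/nsim
}
sapply(c(0.5, 0.7, 0.9), function(prob) tour.prob(20, prob))  # estimates should increase with p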
Label the vertices 0, 1, ..., N-1. We will show that
pibar %*% P = pibar when pibar(i) = 1/N for every vertex i.
Fix some position i. We know that pibar(i+1) = pibar(i) = pibar(i-1) = 1/N. The only ways to arrive at i in one
step are from i-1 (a ccw step, probability p) and from i+1 (a cw step, probability 1-p). Therefore,
(pibar %*% P)(i) = p * pibar(i-1) + (1 - p) * pibar(i+1) = p (1/N) + (1 - p)(1/N) = (p + 1 - p)/N = 1/N = pibar(i)
Since i was arbitrary, this shows pibar %*% P = pibar, so the uniform distribution is stationary.
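This can also be checked numerically (a sketch; circle.P and pibar.unif are illustrative names not used in the original) by constructing the N x N transition matrix of the circular walk and multiplying the uniform distribution through it.
circle.P <- function(N, p) {
    # transition matrix of the ccw-biased walk on a cycle, vertices indexed 1..N
    P <- matrix(0, N, N)
    for (i in 1:N) {
        P[i, i %% N + 1] <- p  # ccw neighbor
        P[i, (i - 2) %% N + 1] <- 1 - p  # cw neighbor
    }
    P
}
pibar.unif <- rep(1/20, 20)
max(abs(pibar.unif %*% circle.P(20, 0.7) - pibar.unif))  # should be 0 up to rounding error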
By the mean return time formula for an irreducible Markov chain, the expected time to return to the starting vertex (state 0) is E[T] = 1/pibar(0) = N.
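A quick simulation of the first return time (a sketch; return.time and the replicate count are illustrative) should give a sample mean close to N = 20.
return.time <- function(N, p) {
    # simulate the circular walk until it first returns to its starting vertex (state 0)
    pos <- sample(c(-1, 1), 1, prob = c(1 - p, p)) %% N
    t <- 1
    while (pos != 0) {
        pos <- (pos + sample(c(-1, 1), 1, prob = c(1 - p, p))) %% N
        t <- t + 1
    }
    t
}
mean(replicate(5000, return.time(20, 0.7)))  # should be close to 20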
In the first step, the particle starting at state 0 goes ccw to state 1 with probability p or cw to state N-1 with probability 1-p.
Afterwards, to hit every point other than state 0 before returning, it must reach the vertex adjacent to state 0 on the
opposite side before coming back to state 0. Cutting the circle open at state 0, this is gambler's ruin with total fortune N-1 and starting fortune j = 1.
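For reference (a sketch added here; gamblers.ruin is an illustrative name, not from the original), the standard gambler's ruin probability of reaching fortune n before 0, starting from fortune j with per-step win probability p, is:
gamblers.ruin <- function(j, n, p) {
    if (p == 0.5) {
        return(j/n)
    }
    r <- (1 - p)/p
    (1 - r^j)/(1 - r^n)
}
gamblers.ruin(1, 19, 0.5)  # = 1/(N - 1) for N = 20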
Let A denote the event of hitting all other points before returning to state 0, and let X_1 denote the position after the first step.
For the case p = 1/2, the gambler's ruin result yields
P(A | X_1 = N-1) = P(A | X_1 = 1) = j/(N-1) = 1/(N-1)
Therefore
P(A) = P(A | X_1 = N-1) P(X_1 = N-1) + P(A | X_1 = 1) P(X_1 = 1) = 0.5/(N-1) + 0.5/(N-1) = 1/(N-1)
For the case p != 1/2, the general gambler's ruin formula gives
P(A | X_1 = 1) = ( 1 - ((1-p)/p)^j ) / ( 1 - ((1-p)/p)^(N-1) )
P(A | X_1 = N-1) = ( 1 - ((1-q)/q)^j ) / ( 1 - ((1-q)/q)^(N-1) )
where q = 1 - p and j = 1. Therefore
P(A) = P(A | X_1 = 1) P(X_1 = 1) + P(A | X_1 = N-1) P(X_1 = N-1)
     = p * ( 1 - ((1-p)/p) ) / ( 1 - ((1-p)/p)^(N-1) ) + q * ( 1 - ((1-q)/q) ) / ( 1 - ((1-q)/q)^(N-1) )
Defining this as a function:
P.VisitAllOthersBeforeReturn <- function(N, p) {
    # probability of visiting every other vertex of the N-cycle before
    # returning to the starting vertex, for ccw step probability p
    q <- 1 - p
    if (p == 0.5) {
        return(1/(N - 1))
    } else {
        return(p * (1 - ((1 - p)/p))/(1 - ((1 - p)/p)^(N - 1)) +
               q * (1 - ((1 - q)/q))/(1 - ((1 - q)/q)^(N - 1)))
    }
}
Applying this function for N = 20 and p = 0.5, 0.6, 0.9:
p = c(0.5, 0.6, 0.9)
Map(function(prob) P.VisitAllOthersBeforeReturn(20, prob), p)
## [[1]]
## [1] 0.05263
##
## [[2]]
## [1] 0.2002
##
## [[3]]
## [1] 0.8
This illustrates that the probability of going all the way around the circle before returning to state 0
increases as p -> 1. This makes sense: the more biased the walk is toward one direction, the more likely it is to
complete a tour in that direction before returning to its start, so the probability is smallest for the symmetric
walk p = 0.5 and grows as the bias increases. As p -> 1, the probability of completing a ccw tour increases; in the
limit p = 1 the random walk reduces to a deterministic ccw tour of the circle. This agrees with the observation in (2),
where as p -> 1 the probability of making it around the circle increases.