INTRODUCTION

A Markov chain (discrete-time Markov chain or DTMC), named after Andrey Markov, is a random process that undergoes transitions from one state to another on a state space. It must possess a property that is usually characterized as “memorylessness”: the probability distribution of the next state depends only on the current state and not on the sequence of events that preceded it. This specific kind of “memorylessness” is called the Markov property. Markov chains have many applications as statistical models of real-world processes.

Source: https://en.wikipedia.org/wiki/Markov_chain.
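As a quick illustration of the Markov property, here is a minimal sketch with a toy two-state chain (our own example, unrelated to the case study below): the next-state distribution is computed from the current distribution and the transition matrix alone.

#Toy two-state chain (illustrative only, not the case-study chain): the
#distribution of the next state is computed from the current distribution
#and the transition matrix alone, regardless of the earlier history
Ptoy <- matrix(c(0.7, 0.3,
                 0.4, 0.6), nrow = 2, byrow = TRUE,
               dimnames = list(c("A","B"), c("A","B")))
x0 <- c(1, 0)         #start in state A
x1 <- x0 %*% Ptoy     #distribution after one step
x2 <- x1 %*% Ptoy     #after two steps: only x1 is needed, not the full path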

CASE STUDY

We use this example of an MC: https://rpubs.com/alex-lev/42458.

RESEARCH GOAL

We want to find the optimal state transition probability, using the cumulative transition probability of the MC as the objective function. Our optimization problem is a constrained one: \(P_{min}\) is the minimal transition probability for which the cumulative transition probability of the MC exceeds a lower limit \(P_{MC}\).
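One way to state this formally (our notation, with \(CPT(P)\) denoting the cumulative transition probability of the MC as a function of the transition probability \(P\)):

\[P_{min} = \min\{\, P : CPT(P) > P_{MC} \,\}\]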

library(expm)
## Loading required package: Matrix
## 
## Attaching package: 'expm'
## The following object is masked from 'package:Matrix':
## 
##     expm
library(markovchain)
## Package:  markovchain
## Version:  0.4.3
## Date:     2015-11-27
## BugReport: http://github.com/spedygiorgio/markovchain/issues
library(diagram)
## Loading required package: shape
library(pracma)
## 
## Attaching package: 'pracma'
## The following objects are masked from 'package:expm':
## 
##     expm, logm, sqrtm
## The following objects are masked from 'package:Matrix':
## 
##     expm, lu, tril, triu

FRAMEWORK

my_net<-function(PT)
#***************************************
#Function my_net computes the cumulative transition probability for the given Markov chain
#PT - state transition probability (argument)
#Returns CPT - cumulative transition probability from S1 to S6 after 16 steps
#***************************************
{

P<-PT
Q<-1-P


S1<-c(0.00,0.99,0,0,0.01,0)         #Vector of transition probabilities from S(1) to S(i), i=1...6
S2<-c(0.00, 0.00, P, 0.00, Q, 0.00) #Vector of transition probabilities from S(2) to S(i), i=1...6
S3<-c(0, 0, 0, P, Q, 0)             #Vector of transition probabilities from S(3) to S(i), i=1...6
S4<-c(0.00, 0.00, 0.00, 0.00, Q, P) #Vector of transition probabilities from S(4) to S(i), i=1...6
S5<-c(0,0,0,0,1,0)                  #Vector of transition probabilities from S(5) to S(i), i=1...6
S6<-c(0,0,0,0,0.01, 0.99)           #Vector of transition probabilities from S(6) to S(i), i=1...6
MCN<-matrix(cbind(S1,S2,S3,S4,S5,S6),nrow=6,ncol=6,byrow=T) #Transition matrix with rows S1 - S6
MCT<-t(MCN) #Matrix transposition

stateNames <- c("S1","S2","S3", "S4", "S5", "S6") #Names for the states of the Markov chain (MC)
row.names(MCT) <- stateNames #Row names
colnames(MCT) <- stateNames  #Column names

MC21 <- as(MCT, "markovchain") #markovchain object based on the transposed matrix (used only for plotting)


plot(MC21) #Plot of the MC
MC31 <- MCN %^% 16 #16-step transition matrix (matrix power)

CPT<-round(MC31[1,6],3) #Probability of reaching S6 from S1 in 16 steps, rounded
return(CPT)

}
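
For example (illustrative call; plotting the chain is a side effect), the cumulative transition probability for P = 0.5 is obtained as:

my_net(0.5) #returns 0.110 for P = 0.5 (see row 9 of the results below)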

CALCULATIONS

#Computing the cumulative transition probability over a grid of transition probabilities
POUT<-NULL
for (i in 1:19) 
{
  P<-0.05+i*0.05      #Grid of transition probabilities: 0.10, 0.15, ..., 1.00
    
  if (P==1.) P<-0.99  #Exclude the degenerate value P = 1; cap it at 0.99
  XX<-P
 
  YY<-my_net(P)       #Cumulative transition probability for this P
 
  POUT<-rbind(POUT,c(XX,YY)) #Collect the results row by row
 
} 
colnames(POUT)<-c("PT","CPT") #Column labels used in the results below
POUT
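
The same sweep can be written without an explicit loop; a minimal sketch over the same grid of transition probabilities (with P = 1 capped at 0.99):

P_grid <- pmin(seq(0.10, 1.00, by = 0.05), 0.99) #0.10, 0.15, ..., 0.95, 0.99
POUT2  <- cbind(PT = P_grid, CPT = sapply(P_grid, my_net))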

RESULTS

##         PT   CPT
##  [1,] 0.10 0.001
##  [2,] 0.15 0.003
##  [3,] 0.20 0.007
##  [4,] 0.25 0.014
##  [5,] 0.30 0.024
##  [6,] 0.35 0.038
##  [7,] 0.40 0.056
##  [8,] 0.45 0.080
##  [9,] 0.50 0.110
## [10,] 0.55 0.146
## [11,] 0.60 0.190
## [12,] 0.65 0.241
## [13,] 0.70 0.301
## [14,] 0.75 0.370
## [15,] 0.80 0.449
## [16,] 0.85 0.539
## [17,] 0.90 0.640
## [18,] 0.95 0.752
## [19,] 0.99 0.851
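
The minimal transition probability whose cumulative transition probability exceeds 0.5 can be read off programmatically (assuming POUT carries the PT and CPT columns shown above):

POUT[which(POUT[,"CPT"] > 0.5)[1], ] #first row with CPT > 0.5: PT = 0.85, CPT = 0.539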

CONCLUSIONS

Thus the optimal (minimal) transition probability for our MC is 0.85, given that the cumulative transition probability must exceed 0.5: \[P_{min} = 0.85\] \[P_{MC} > 0.5\]