using Random
using StatsBase  # for sample, Weights, and countmap
# Define the states (e.g., weather: Sunny, Cloudy, Rainy)
@enum WeatherState Sunny Cloudy Rainy
# Define the transition probabilities (matrix)
# Rows: Current state, Columns: Next state
transition_matrix = Dict(
    Sunny  => Dict(Sunny => 0.7, Cloudy => 0.2, Rainy => 0.1),
    Cloudy => Dict(Sunny => 0.3, Cloudy => 0.4, Rainy => 0.3),
    Rainy  => Dict(Sunny => 0.2, Cloudy => 0.3, Rainy => 0.5)
)
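# Optional sanity check (a small addition, not strictly required):
# each row of a transition matrix must sum to 1.
for (state, row) in transition_matrix
    @assert isapprox(sum(values(row)), 1.0) "Probabilities for $state must sum to 1"
end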
# Function to get the next state
function next_state(current_state::WeatherState)
    probabilities = transition_matrix[current_state]
    # Weighted random choice based on the probabilities:
    # collect(keys(probabilities)) gives the candidate states and
    # collect(values(probabilities)) the matching probability values.
    # sample and Weights come from StatsBase.jl (base rand does not accept weights).
    return sample(
        Random.default_rng(),
        collect(keys(probabilities)),
        Weights(collect(values(probabilities)))
    )
end
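# Illustrative single draw (a small addition; the result varies per run):
println("One transition from Sunny: ", next_state(Sunny))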
# Function to simulate the Markov chain for a given number of steps
function simulate_markov_chain(initial_state::WeatherState, steps::Int)
    states = Vector{WeatherState}(undef, steps + 1)  # +1 to include the initial state
    states[1] = initial_state
    for i in 2:length(states)
        states[i] = next_state(states[i-1])
    end
    return states
end
# Example usage:
initial_state = Sunny
num_steps = 10
simulation_results = simulate_markov_chain(initial_state, num_steps)
println("Markov Chain Simulation:")
for (i, state) in enumerate(simulation_results)
    println("Step $(i-1): $state")  # step 0 is the initial state
end
# --- Analyzing the results (Example) ---
# Count the occurrences of each state
state_counts = countmap(simulation_results)
println("\nState Counts:")
for (state, count) in state_counts
    println("$state: $count")
end
# Calculate the probability of being in each state (empirical)
probabilities = Dict(state => count / length(simulation_results) for (state, count) in state_counts) # divide by total observations (num_steps + 1, since the initial state is counted too)
println("\nEmpirical Probabilities:")
for (state, prob) in probabilities
    println("$state: $prob")
end
# --- Long-Run Probabilities (Illustrative) ---
# The empirical frequencies from a short simulation are not the true
# long-run distribution. For that, solve for the stationary distribution
# of the transition matrix (the eigenvector of its transpose for
# eigenvalue 1), as sketched below. For heavier Markov chain analysis,
# specialized libraries or algorithms are typically used.
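# A minimal sketch of that stationary-distribution computation, using the
# standard library's LinearAlgebra. The dense matrix P below restates the
# transition probabilities above in the fixed order Sunny, Cloudy, Rainy
# (an assumption of this sketch).
using LinearAlgebra
P = [0.7 0.2 0.1;
     0.3 0.4 0.3;
     0.2 0.3 0.5]
# The stationary distribution pi satisfies pi' * P = pi' with sum(pi) = 1,
# i.e. pi is the eigenvector of P' for eigenvalue 1, normalized to sum to 1.
vals, vecs = eigen(Matrix(transpose(P)))
idx = argmin(abs.(vals .- 1.0))       # pick the eigenvalue closest to 1
pi_stationary = real.(vecs[:, idx])
pi_stationary ./= sum(pi_stationary)  # normalize to a probability vector
println("\nStationary distribution (Sunny, Cloudy, Rainy): ", pi_stationary)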
Key Improvements and Explanations:
Enums for States: Using @enum makes the code more readable and type-safe. It clearly defines the possible states.
Dictionary for Transition Matrix: A Dict makes the transition matrix intuitive to work with. You can access a probability directly with transition_matrix[current_state][next_state].
Weighted Random Choice: The sample(rng, choices, Weights(probabilities)) call from StatsBase.jl implements the probabilistic transitions (base rand does not accept weights). The Weights wrapper ensures that the random choice is weighted according to the transition probabilities.
Clearer Simulation Function: The simulate_markov_chain function initializes a vector to store the states and iterates through the steps, making the simulation logic easy to follow.
Example Analysis: The code now includes a basic analysis of the simulation results, counting the occurrences of each state and calculating empirical probabilities.
Long-Run Probabilities Note: The code explains that the empirical probabilities from a short simulation are not the same as the true long-run (stationary) distribution, and sketches how to compute the stationary distribution with linear algebra.
Random Number Generator: Passing Random.default_rng() makes the RNG explicit. It is reproducible if you seed it with Random.seed!, and on recent Julia versions it is task-local, which avoids the pitfalls of sharing one global generator across threads; see the reproducibility sketch below.
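For reproducibility, a minimal sketch that reruns the simulation with the same seed (the seed value 42 is arbitrary):
Random.seed!(42)
run_a = simulate_markov_chain(Sunny, 10)
Random.seed!(42)
run_b = simulate_markov_chain(Sunny, 10)
@assert run_a == run_b  # same seed, same trajectory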
How to Run:
Save the code to a .jl file (e.g., markov_chain.jl).
If needed, install the dependency first: using Pkg; Pkg.add("StatsBase").
Open the Julia REPL, type include("markov_chain.jl"), and press Enter.
The simulation results and analysis will be printed to the console.
You can adjust the initial_state and num_steps variables to experiment with different scenarios.