Markov chain Monte Carlo (MCMC) algorithms are essential for sampling from probability distributions when direct sampling is challenging. Let’s dive into implementing one of the most popular MCMC algorithms, the Metropolis-Hastings algorithm, in Julia.
First, ensure you have Julia installed. You can download it from the official Julia website at https://julialang.org/downloads/.
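The examples below use the Distributions and Plots packages. If you don’t have them yet, a minimal sketch of installing them with Julia’s built-in package manager:

using Pkg
Pkg.add("Distributions")  # probability distributions, pdf, logpdf
Pkg.add("Plots")          # used later to plot the histogram

We start by loading the packages we need.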
using Random         # optional here; useful for seeding the RNG (e.g., Random.seed!(42)) for reproducible runs
using Distributions  # provides Normal, pdf, and logpdf
Let’s assume we are sampling from a normal distribution with a mean of 0 and a standard deviation of 1.
target_distribution(x) = pdf(Normal(0, 1), x)
We’ll define the Metropolis-Hastings function. This function will iterate through the MCMC steps to generate a chain of samples.
function metropolis_hastings(target_distribution, initial_value, num_samples, proposal_std)
    samples = Vector{Float64}(undef, num_samples)
    current_sample = initial_value
    for i in 1:num_samples
        # Random-walk proposal: perturb the current sample with Gaussian noise.
        proposal = current_sample + rand(Normal(0, proposal_std))
        # Because the Gaussian proposal is symmetric, the Hastings correction
        # cancels and the acceptance ratio reduces to the ratio of target densities.
        acceptance_ratio = target_distribution(proposal) / target_distribution(current_sample)
        # Accept the proposal with probability min(1, acceptance_ratio);
        # otherwise keep the current sample.
        if rand() < acceptance_ratio
            current_sample = proposal
        end
        samples[i] = current_sample
    end
    return samples
end
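One practical caveat: dividing raw densities can underflow to zero when the target assigns extremely small probabilities. A common variant, sketched here under our own naming (log_target and metropolis_hastings_log are not part of the code above), works in log space and compares log(rand()) against the log acceptance ratio:

using Distributions

log_target(x) = logpdf(Normal(0, 1), x)  # log density of the standard normal target

function metropolis_hastings_log(log_target, initial_value, num_samples, proposal_std)
    samples = Vector{Float64}(undef, num_samples)
    current = initial_value
    for i in 1:num_samples
        proposal = current + rand(Normal(0, proposal_std))
        # Equivalent to rand() < exp(log_target(proposal) - log_target(current)),
        # but avoids underflow when densities are very small.
        if log(rand()) < log_target(proposal) - log_target(current)
            current = proposal
        end
        samples[i] = current
    end
    return samples
end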
Now, let’s generate samples using our Metropolis-Hastings implementation.
initial_value = 0.0   # starting point of the chain
num_samples = 10000   # length of the chain
proposal_std = 1.0    # standard deviation of the random-walk proposal
samples = metropolis_hastings(target_distribution, initial_value, num_samples, proposal_std)
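As a quick sanity check, the empirical mean and standard deviation of the chain should land close to the target’s 0 and 1, respectively (approximately, since the samples are autocorrelated):

using Statistics

println("sample mean: ", mean(samples))  # expect a value near 0
println("sample std:  ", std(samples))   # expect a value near 1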
To visualize the results, we can plot the histogram of the generated samples and compare it with the target distribution.
using Plots

# Normalized histogram of the chain, overlaid with the target density.
histogram(samples, normalize = true, label = "MCMC Samples")
x = -4:0.1:4
plot!(x, pdf.(Normal(0, 1), x), label = "Target Distribution", linewidth = 2)
You should observe that the histogram of the samples closely matches the target density, confirming that our Metropolis-Hastings implementation is working correctly.
Try experimenting with different values of the proposal standard deviation (proposal_std) and the number of samples (num_samples) to observe the effect on the results; a small diagnostic sketch follows the summary below.

The Metropolis-Hastings algorithm is a popular Markov chain Monte Carlo (MCMC) method used for generating a sequence of samples from a probability distribution when direct sampling is challenging. It approximates the target distribution by constructing a Markov chain whose equilibrium distribution is the desired distribution.
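One way to guide that experimentation is to monitor the acceptance rate. The helper below is a hypothetical sketch, not part of the code above: because the Gaussian proposal is continuous, consecutive samples repeat only when a proposal is rejected, so counting changes approximates the acceptance rate. Rates very near 0 suggest proposal_std is too large; rates very near 1 suggest it is too small.

using Statistics

# Fraction of iterations whose proposal was accepted, approximated by
# counting how often consecutive samples differ.
acceptance_rate(chain) = mean(chain[2:end] .!= chain[1:end-1])

for s in (0.1, 1.0, 10.0)
    chain = metropolis_hastings(target_distribution, 0.0, 10000, s)
    println("proposal_std = ", s, "  acceptance rate ≈ ", round(acceptance_rate(chain), digits = 2))
end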