Dilip Ganesan
Let us consider two identically distributed random variables U1 and U2. If they are independent, the variance of their average is given by

$$\operatorname{Var}\left(\frac{U_1+U_2}{2}\right) = \frac{\operatorname{Var}(U_1)+\operatorname{Var}(U_2)}{4},$$

but in reality, when U1 and U2 may be correlated,

$$\operatorname{Var}\left(\frac{U_1+U_2}{2}\right) = \frac{\operatorname{Var}(U_1)+\operatorname{Var}(U_2)+2\operatorname{Cov}(U_1,U_2)}{4}.$$

The variance of (U1+U2)/2 will therefore be smaller than in the independent case if U1 and U2 are negatively correlated.
Hence we can say that negatively correlated variables can reduce variance. If U is uniformly distributed on [0, 1], then 1 - U has the same distribution, and since U and 1 - U are negatively correlated, an estimator that averages over both has reduced variance.
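As a quick numerical sanity check (a sketch of my own, not from the book; the monotone function g(u) = exp(u) is an arbitrary choice), we can compare the variance of an average built from two independent uniforms with one built from an antithetic pair:

set.seed(1)
n <- 100000
u1 <- runif(n)
u2 <- runif(n)                        # independent of u1
indep <- (exp(u1) + exp(u2)) / 2      # average over two independent draws
anti  <- (exp(u1) + exp(1 - u1)) / 2  # average over an antithetic pair
cor(exp(u1), exp(1 - u1))             # negative correlation
var(indep)
var(anti)                             # noticeably smaller than var(indep)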
Let us take an example from the Rizzo book.
# MC.Phi estimates the standard normal CDF Phi(x) by Monte Carlo integration.
# The antithetic flag (default TRUE) controls whether the second half of the
# uniform sample is drawn independently or set to 1 - u, inducing negative correlation.
MC.Phi <- function(x, R = 10000, antithetic = TRUE)
{
  u <- runif(R/2)
  if (!antithetic)
    v <- runif(R/2)    # independent second half: plain Monte Carlo
  else
    v <- 1 - u         # antithetic second half: negatively correlated with u
  u <- c(u, v)
  cdf <- numeric(length(x))
  for (i in 1:length(x))
  {
    g <- x[i] * exp(-(u * x[i])^2 / 2)
    cdf[i] <- mean(g) / sqrt(2 * pi) + 0.5
  }
  cdf
}
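The expression inside the loop comes from the change of variables t = x*u in the integral defining Phi(x) for x >= 0; the derivation is only implicit in the code, so it is written out here:

$$\Phi(x) = \frac{1}{2} + \int_0^x \frac{1}{\sqrt{2\pi}} e^{-t^2/2}\,dt = \frac{1}{2} + \frac{1}{\sqrt{2\pi}} \int_0^1 x\, e^{-(xu)^2/2}\,du \approx \frac{1}{2} + \frac{1}{\sqrt{2\pi}\,R} \sum_{j=1}^{R} x\, e^{-(x u_j)^2/2},$$

where the u_j are the R uniforms stored in u (half of them set to 1 - u when antithetic is TRUE).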
x <- seq(.1, 2.5, length=5)
Phi <- pnorm(x)
set.seed(123)
MC1 <- MC.Phi(x, antithetic = FALSE)
set.seed(123)
MC2 <- MC.Phi(x)
print(round(rbind(x, MC1, MC2, Phi), 5))
## [,1] [,2] [,3] [,4] [,5]
## x 0.10000 0.70000 1.30000 1.90000 2.50000
## MC1 0.53983 0.75825 0.90418 0.97311 0.99594
## MC2 0.53983 0.75805 0.90325 0.97132 0.99370
## Phi 0.53983 0.75804 0.90320 0.97128 0.99379
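To see the improvement more directly, one can also compare the absolute errors of the two estimates against pnorm (a small addition of mine, not in the book; the row labels are just for readability):

err <- rbind(simple = abs(MC1 - Phi), antithetic = abs(MC2 - Phi))
round(err, 5)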
# From the range of values between 0.1 and 2.5, take x = 1.50 and repeat the
# estimation m times to compare the variability of the two estimators.
m <- 1000
MC1 <- MC2 <- numeric(m)
x <- 1.50
for (i in 1:m)
{
  MC1[i] <- MC.Phi(x, R = 1000, antithetic = FALSE)
  MC2[i] <- MC.Phi(x, R = 1000)
}
print(sd(MC1))
## [1] 0.004035471
print(sd(MC2))
## [1] 0.0007043272
print(round((var(MC1) - var(MC2)) / var(MC1) * 100, 2))
## [1] 96.95
There is a reduction of about 96.95%, almost 97%, in variance using the antithetic variables compared with simple Monte Carlo integration.