Problem Set 1

This week, we'll empirically verify the Central Limit Theorem. We'll write code to run a small simulation on some distributions and verify that the results match what we expect from the Central Limit Theorem. Please use R Markdown to capture all your experiments and code. Please submit your Rmd file with your name as the filename.
  1. First, write a function that will produce a sample of a random variable distributed as follows: f(x) = x for 0 ≤ x ≤ 1, and f(x) = 2 − x for 1 < x ≤ 2. (1) That is, when your function is called, it will return a random value between 0 and 2 distributed according to the above PDF. Please note that this is not the same as writing a function and sampling uniformly from it. In the online session this week, I'll cover sampling techniques; you will find them useful when you do this week's assignment. In addition, as usual, there are one-liners in R that will give you samples from a function. We'll cover both of these approaches in the online session.

  2. Now, write a function that will produce a sample of a random variable distributed as follows: f(x) = 1 − x for 0 ≤ x ≤ 1, and f(x) = x − 1 for 1 < x ≤ 2. (2)

  3. Draw 1000 samples from each of the above two distributions (call each function 1000 times) and plot the resulting histograms. You should have one histogram for each PDF. Verify that they match your understanding of these PDFs.

  4. Now, write a program that takes a sample set size n as the first parameter and the PDF as the second parameter, and performs 1000 iterations, each time drawing n samples from the PDF and computing their mean. It then plots a histogram of these 1000 means.

  5. Verify that when you set n to something like 10 or 20, each of the two PDFs produces a normally distributed mean of samples, empirically verifying the Central Limit Theorem. Please play around with various values of n and you will see that even for reasonably small sample sizes such as 10, the Central Limit Theorem holds.

library(ggplot2)


# PDF (1): the triangular (tent) distribution on [0, 2]
# f(x) = x for 0 <= x <= 1, f(x) = 2 - x for 1 < x <= 2, and 0 elsewhere
function1 <- function(x)
{
  if (x >= 0 && x <= 2)
  {
    if (x <= 1)
    {
      return(x)
    }
    else {
      return(2 - x)
    }
  }
  return(0)  # density is 0 outside [0, 2]
}
# Note: sampling x uniformly and evaluating f(x) does NOT give draws from f;
# the commented line below is the incorrect approach mentioned in the assignment.
#sample1 <- sapply(runif(1000, min = 0, max = 2),function1)

"Use inverse sampling method to do sampling"
## [1] "Use inverse sampling method to do sampling"
# Inverse CDF of PDF (1): F(x) = x^2 / 2 on [0, 1] and F(x) = 1 - (2 - x)^2 / 2 on (1, 2],
# so the inverse is sqrt(2y) for y < 0.5 and 2 - sqrt(2(1 - y)) otherwise.
invcdf <- function(y) {
  if (y < 0.5) {
    sqrt(2 * y)
  } else {
    2 - sqrt(2 * (1 - y))
  }
}

sampleinv <- sapply(runif(20000),invcdf)
sdf.inv <- data.frame(sampleinv)
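
As a quick visual check for question 3 (an illustrative addition, not part of the original code), a histogram of sampleinv should trace the tent shape of Eq. (1):

# Sketch: histogram of the inverse-CDF samples from PDF (1)
ggplot(sdf.inv, aes(sampleinv)) +
  geom_histogram(binwidth = 0.05) +
  ggtitle("Samples from PDF (1) via inverse-CDF sampling")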

Now the second function, which implements the PDF in (2).

# PDF (2): the V-shaped distribution on [0, 2]
# f(x) = 1 - x for 0 <= x <= 1, f(x) = x - 1 for 1 < x <= 2, and 0 elsewhere
function2 <- function(x)
{
  if (x >= 0 && x <= 2)
  {
    if (x <= 1)
    {
      return(1 - x)
    }
    else {
      return(x - 1)
    }
  }
  return(0)  # density is 0 outside [0, 2]
}
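
For symmetry with the first PDF, the inverse-transform approach works here as well. The sketch below is an illustrative addition (the names invcdf2 and sampleinv2 are not part of the original code); it assumes the CDF F(x) = x − x²/2 on [0, 1] and F(x) = 1/2 + (x − 1)²/2 on (1, 2].

# Inverse CDF of PDF (2): y < 0.5 maps into [0, 1], y >= 0.5 maps into (1, 2]
invcdf2 <- function(y) {
  if (y < 0.5) {
    1 - sqrt(1 - 2 * y)
  } else {
    1 + sqrt(2 * y - 1)
  }
}
sampleinv2 <- sapply(runif(20000), invcdf2)

A histogram of sampleinv2 can be checked against Eq. (2) in the same way as above.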


If you don't want to implement a sampling method yourself, you can use R's built-in sample() function: evaluate the PDF on a fine grid and use the values as sampling weights.
xgrid <- seq(0, 2, by = 0.01)       # discretize the support [0, 2]
fxgrid <- sapply(xgrid, function2)  # PDF values used as sampling weights
nx <- sample(xgrid, 10000, replace = TRUE, prob = fxgrid)
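
Again as an illustrative check (not part of the original code), a histogram of these draws should show the V shape of Eq. (2):

ggplot(data.frame(nx), aes(nx)) +
  geom_histogram(binwidth = 0.05) +
  ggtitle("Samples from PDF (2) via sample() on a grid")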

# par(mfrow = c(2, 1))  # not needed: this only affects base graphics, and the histograms below use ggplot2

# Question 4: given a sample size n and a PDF, repeat 1000 times: draw n samples
# from the PDF (via the grid-based sample() approach) and compute their mean.
# Finally, plot a histogram of the 1000 sample means.
meanfunc <- function(n, pdf)
{
  x <- seq(0, 2, by = 0.01)      # discretized support
  prob.pdf <- sapply(x, pdf)     # PDF values used as sampling weights

  meanval <- c()

  for (i in 1:1000)
  {
    samples <- sample(x, n, replace = TRUE, prob = prob.pdf)
    meanval <- c(meanval, mean(samples))
  }

  print("Mean of Sampling is:")
  print(mean(meanval))

  ggplot(data = data.frame(meanval), aes(meanval)) + geom_histogram(binwidth = .05) + ggtitle("Mean Distribution")
}

meanfunc(20,function1)
## [1] "Mean of Sampling is:"
## [1] 0.999304

meanfunc(30,function1)
## [1] "Mean of Sampling is:"
## [1] 0.999351

meanfunc(20,function2)
## [1] "Mean of Sampling is:"
## [1] 0.995516

meanfunc(30,function2)
## [1] "Mean of Sampling is:"
## [1] 0.9958507

meanfunc(10,function1)
## [1] "Mean of Sampling is:"
## [1] 0.997217

meanfunc(10,function2)
## [1] "Mean of Sampling is:"
## [1] 1.012949
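
To make the normality claim visually explicit, one optional variant is sketched below (illustrative only, not part of the original submission; it assumes ggplot2 ≥ 3.3 for after_stat()). It overlays the normal density predicted by the CLT: both PDFs have mean 1, PDF (1) has variance 1/6 and PDF (2) has variance 1/2, so the mean of n draws should be roughly normal with mean 1 and standard deviation sqrt(var/n).

# Same simulation as meanfunc, but with the CLT-predicted normal density overlaid
meanfunc_overlay <- function(n, pdf, mu0, sd0) {
  x <- seq(0, 2, by = 0.01)
  prob.pdf <- sapply(x, pdf)
  meanval <- replicate(1000, mean(sample(x, n, replace = TRUE, prob = prob.pdf)))
  ggplot(data.frame(meanval), aes(meanval)) +
    geom_histogram(aes(y = after_stat(density)), binwidth = 0.05) +
    stat_function(fun = dnorm, args = list(mean = mu0, sd = sd0 / sqrt(n))) +
    ggtitle("Sample means with CLT normal overlay")
}

meanfunc_overlay(10, function1, mu0 = 1, sd0 = sqrt(1/6))
meanfunc_overlay(10, function2, mu0 = 1, sd0 = sqrt(1/2))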

From the above, we see that even for reasonably small sample sizes such as n = 10, the Central Limit Theorem holds empirically: the mean of each sample set is approximately normally distributed around the mean of the original distribution, which is 1 for both PDFs and matches the printed averages close to 1.
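
A quick numeric check of the same prediction (again an illustrative addition; sd_check is a hypothetical helper name) compares the empirical standard deviation of the 1000 sample means against sqrt(var/n).

sd_check <- function(n, pdf, var0) {
  x <- seq(0, 2, by = 0.01)
  prob.pdf <- sapply(x, pdf)
  means <- replicate(1000, mean(sample(x, n, replace = TRUE, prob = prob.pdf)))
  c(empirical_sd = sd(means), clt_sd = sqrt(var0 / n))  # the two values should be close
}

sd_check(10, function1, var0 = 1/6)  # variance of PDF (1) is 1/6
sd_check(10, function2, var0 = 1/2)  # variance of PDF (2) is 1/2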