In this lab we will focus on sensitivity analysis and Monte Carlo simulations.
Sensitivity analysis is the study of how the uncertainty in the output of a mathematical model or system (numerical or otherwise) can be apportioned to different sources of uncertainty in its inputs. We will use the lpSolveAPI R-package as we did in the previous lab.
Monte Carlo simulations use repeated random sampling from a given universe or population to estimate numerical results. This type of simulation is known as a probabilistic simulation, as opposed to a deterministic simulation.
A classic example of a Monte Carlo simulation is approximating the value of pi. The simulation generates random points within a unit square and counts how many fall inside the circle inscribed in that square. The higher the number of sampled points, the closer the estimate gets to the actual value; after selecting 30,000 random points, the estimate for pi comes much closer to the true value.
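As a brief illustration (an addition to the original text), the experiment can be reproduced in a few lines of R. The circle inscribed in the unit square has center (0.5, 0.5) and radius 0.5, so the ratio of points inside the circle to all points approaches pi/4:

# Estimate pi: sample points uniformly in the unit square and count those
# falling inside the inscribed circle
n = 30000
x = runif(n)
y = runif(n)
inside = sum((x - 0.5)^2 + (y - 0.5)^2 <= 0.25)
4 * inside / n   # estimate of pi; should be close to 3.14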
In this lab, we will learn how to generate random samples with various simulations and how to run a sensitivity analysis on the marketing use case covered so far.
Remember to always set your working directory to the source file location. Go to ‘Session’, scroll down to ‘Set Working Directory’, and click ‘To Source File Location’. Read carefully the below and follow the instructions to complete the tasks and answer any questions. Submit your work to RPubs as detailed in previous notes.
For your assignment you may be using different data sets than what is included here. Always read carefully the instructions on Sakai. Tasks/questions to be completed/answered are highlighted in larger bolded fonts and numbered according to their particular placement in the task section.
In order to conduct the sensitivity analysis, we will need to install the lpSolveAPI package again, unless it is already installed in your R environment.
# require() attempts to load the package and returns FALSE if it is not installed
# dependencies = TRUE makes sure that dependencies are installed as well
if(!require("lpSolveAPI",quietly = TRUE))
install.packages("lpSolveAPI",dependencies = TRUE, repos = "https://cloud.r-project.org")
We will revisit and solve again the marketing case discussed in class (also part of the previous lab).
# We start with `0` constraints and `2` decision variables. The object name `lpmark` is arbitrary.
lpmark = make.lp(0, 2)
# Define the type of optimization as maximization and suppress the console output by assigning it to a `dummy` variable
dummy = lp.control(lpmark, sense="max")
# Set the objective function coefficients
set.objfn(lpmark, c(275.691, 48.341))
Add all constraints to the model.
add.constraint(lpmark, c(1, 1), "<=", 350000)
add.constraint(lpmark, c(1, 0), ">=", 15000)
add.constraint(lpmark, c(0, 1), ">=", 75000)
add.constraint(lpmark, c(2, -1), "=", 0)
add.constraint(lpmark, c(1, 0), ">=", 0)
add.constraint(lpmark, c(0, 1), ">=", 0)
Now, view the problem in tabular/matrix form. This is a good checkpoint to confirm that our constraints have been set up properly.
lpmark
## Model name:
## C1 C2
## Maximize 275.691 48.341
## R1 1 1 <= 350000
## R2 1 0 >= 15000
## R3 0 1 >= 75000
## R4 2 -1 = 0
## R5 1 0 >= 0
## R6 0 1 >= 0
## Kind Std Std
## Type Real Real
## Upper Inf Inf
## Lower 0 0
# solve the model; a return value of 0 indicates that an optimal solution was found
solve(lpmark)
## [1] 0
Next, we retrieve the optimal results.
# display the objective function optimum value
get.objective(lpmark)
## [1] 43443517
# display the decision variables optimum values
get.variables(lpmark)
## [1] 116666.7 233333.3
For the sensitivity analysis, we will add two new code sections to obtain the results.
# display sensitivity to coefficients of objective function.
get.sensitivity.obj(lpmark)
## $objfrom
## [1] -96.6820 -137.8455
##
## $objtill
## [1] 1e+30 1e+30
##### 1A) For this exercise we are only interested in the first part of the output, labeled `objfrom`. Explain in a concise manner what the sensitivity results represent in reference to the marketing model.
## The `objfrom` values are the lower limits on the objective-function coefficients (275.691 for Radio and 48.341 for TV). As long as each coefficient, varied one at a time, stays between its lower limit (-96.6820 for Radio, -137.8455 for TV) and infinity, the optimal values of the decision variables remain unchanged; the objective value itself changes with the coefficient, but the optimal allocation does not.
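As a quick check (added here for illustration, not part of the original tasks; the object name `lpcheck` is arbitrary), we can rebuild the model with the Radio coefficient lowered to a value still inside its range, say 100, and confirm that the optimal allocation is unchanged:

# rebuild the model with the Radio coefficient changed from 275.691 to 100
lpcheck = make.lp(0, 2)
dummy = lp.control(lpcheck, sense="max")
set.objfn(lpcheck, c(100, 48.341))   # 100 lies within (-96.6820, +Inf)
add.constraint(lpcheck, c(1, 1), "<=", 350000)
add.constraint(lpcheck, c(1, 0), ">=", 15000)
add.constraint(lpcheck, c(0, 1), ">=", 75000)
add.constraint(lpcheck, c(2, -1), "=", 0)
solve(lpcheck)
get.variables(lpcheck)   # expect the same allocation: 116666.7 233333.3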
# display sensitivity to right hand side constraints.
# There will be a total of m+n values, where m is the number of constraints and n is the number of decision variables
get.sensitivity.rhs(lpmark)
## $duals
## [1] 124.12433 0.00000 0.00000 75.78333 0.00000 0.00000 0.00000
## [8] 0.00000
##
## $dualsfrom
## [1] 1.125e+05 -1.000e+30 -1.000e+30 -3.050e+05 -1.000e+30 -1.000e+30
## [7] -1.000e+30 -1.000e+30
##
## $dualstill
## [1] 1.00e+30 1.00e+30 1.00e+30 4.75e+05 1.00e+30 1.00e+30 1.00e+30 1.00e+30
##### 1B) For this exercise we are only interested in the first part of the output, labeled `duals`. Explain in a concise manner what the two non-zero sensitivity results represent. Distinguish the binding/non-binding constraints, the surplus/slack, and the marginal values.
## 124.12433 is the shadow price of the budget constraint (R1): if the budget increases by one unit, ceteris paribus, from $350,000 to $350,001, the optimal objective value (Z) increases by about $124.12. This is the marginal value of the budget.
## 75.78333 is the shadow price of the mix constraint (R4, 2x1 - x2 = 0): relaxing its right-hand side by one unit increases the optimal objective value by about $75.78.
## The two non-zero duals belong to binding constraints (R1 and R4), which hold with equality at the optimum and therefore have no slack or surplus; their duals are the marginal values. The remaining constraints are non-binding, so they carry slack or surplus and their marginal values are zero: adding more of those resources has no effect on the optimal solution.
To gain a better understanding of the sensitivity results, and to confirm the integrity of the calculations, independent tests can be conducted.
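For example (a sketch added for illustration; the object name `lptest` is arbitrary), raising the budget by one dollar and re-solving should increase the optimal objective value by roughly the reported dual of 124.12:

# same model as above, with the budget raised by $1
lptest = make.lp(0, 2)
dummy = lp.control(lptest, sense="max")
set.objfn(lptest, c(275.691, 48.341))
add.constraint(lptest, c(1, 1), "<=", 350001)   # budget raised by $1
add.constraint(lptest, c(1, 0), ">=", 15000)
add.constraint(lptest, c(0, 1), ">=", 75000)
add.constraint(lptest, c(2, -1), "=", 0)
solve(lptest)
get.objective(lptest) - get.objective(lpmark)   # expect about 124.12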
For this task we will be running a Monte Carlo simulation to calculate the probability that the daily return from S&P will be > 5%. We will assume that the historical S&P daily return follows a normal distribution with an average daily return of 0.03 (%) and a standard deviation of 0.97 (%).
To begin we will generate 100 random samples from the normal distribution. For the generated samples we will calculate the mean, standard deviation, and probability of occurrence where the simulation result is greater than 5%.
To generate random samples from a normal distribution we will use the rnorm() function in R. In the example below we set the number of runs (or samples) to 100.
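One note, added here for reproducibility: rnorm() draws fresh random numbers on every run, so the outputs shown below (which were generated without a seed) will differ each time the code is executed. Calling set.seed() first with any fixed value pins them down:

# optional: fix the random number generator state for reproducible results
# (123 is an arbitrary example value)
set.seed(123)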
# number of simulations/samples
runs = 100
# random number generator per defined normal distribution with given mean and standard deviation
sims = rnorm(runs,mean=0.03,sd=0.97)
# Mean calculated from the random distribution of samples
average= mean(sims)
average
## [1] -0.0562616
# STD calculated from the random distribution of samples
std= sd(sims)
std
## [1] 0.8467954
# probability of occurrence on any given day: the count (sum) of samples at or above 5%, divided by the total number of samples
prob= sum(sims >=0.05)/runs
prob
## [1] 0.43
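For reference (an editorial addition), the exact probability under the assumed normal distribution can be computed directly with pnorm(); the simulated estimates should approach it as the number of runs grows:

# exact P(daily return >= 5%) under a normal with mean 0.03 and sd 0.97
1 - pnorm(0.05, mean = 0.03, sd = 0.97)   # approximately 0.492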
# Repeat the calculations with 1,000 samples
runs2 = 1000
sims2 = rnorm(runs2, mean=0.03,sd=0.97)
average2= mean (sims2)
average2
## [1] -0.02328808
std2= sd(sims2)
std2
## [1] 0.9714042
prob2 = sum(sims2 >=0.05)/runs2
prob2
## [1] 0.463
# Repeat the calculations with 10,000 samples
runs3= 10000
sims3 = rnorm(runs3, mean=0.03,sd=0.97)
average3= mean(sims3)
average3
## [1] 0.02700069
std3= sd(sims3)
std3
## [1] 0.9702365
prob3 = sum(sims3 >=0.05)/runs3
prob3
## [1] 0.487
##### 2B) How do these results compare to the simulation approximating pi that was presented in the introductory paragraph?
A <- matrix(
c(average, average2, average3,std, std2, std3, prob, prob2, prob3 ),
nrow=3,
ncol=3,
byrow = TRUE)
dimnames(A) = list(
c("Average","STD","Prob"),
c("100","1000","10000"))
A
## 100 1000 10000
## Average -0.0562616 -0.02328808 0.02700069
## STD 0.8467954 0.97140421 0.97023651
## Prob 0.4300000 0.46300000 0.48700000
The higher the number of simulations, the closer the sample average and standard deviation get to the assumed parameters of the S&P daily return (mean 0.03, sd 0.97), and the more reliable the estimated probability becomes, so the largest sample gives the most trustworthy estimate. Just as in the pi approximation from the introduction, the more random samples we draw, the closer the result gets to the actual value.

The last exercise, 2C), is optional for those interested in further enhancing their subject-matter learning and refining their skills in R. Your work will be assessed, but you will not be graded on this exercise. You can follow the instructions presented in the Excel equivalent example video at https://www.youtube.com/watch?v=wKdmEXCvo9s
#Monday
runs1 = 10000
sims1 = rnorm(runs1, mean=0.03, sd=0.97)
#Tuesday
runs2 = 10000
sims2 = rnorm(runs2, mean = 0.03, sd=0.97)
#Wednesday
runs3 = 10000
sims3 = rnorm(runs3, mean = 0.03, sd=0.97)
#Thursday
runs4 = 10000
sims4 = rnorm(runs4, mean = 0.03, sd=0.97)
#Friday
runs5 = 10000
sims5 = rnorm(runs5, mean = 0.03, sd=0.97)
prob1= sum(sims1 >=0.05)/runs1
prob2= sum(sims2 >=0.05)/runs2
prob3= sum(sims3 >=0.05)/runs3
prob4= sum(sims4 >=0.05)/runs4
prob5= sum(sims5 >=0.05)/runs5
# Cumulative probability that the return exceeds 5% on at least one of the
# days so far, using the complement rule: 1 minus the probability that it
# happens on none of the days. (The compounding formula (1+p1)(1+p2)-1 applies
# to returns, not probabilities, and would yield a value greater than 1.)
probCum2 = 1 - (1-prob1)*(1-prob2)
probCum3 = 1 - (1-probCum2)*(1-prob3)
probCum4 = 1 - (1-probCum3)*(1-prob4)
probCum5 = 1 - (1-probCum4)*(1-prob5)
probCum5
# (output varies by run; with daily probabilities near 0.49, expect about 0.97)
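As a sanity check (an addition to the original text), the same five-day probability can be computed analytically from the exact daily probability and the complement rule:

# exact daily probability of a return of at least 5%, then the complement
# rule across five independent days
p_day = 1 - pnorm(0.05, mean = 0.03, sd = 0.97)
1 - (1 - p_day)^5   # approximately 0.966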