In this lab we will focus on sensitivity analysis and Monte Carlo simulations.
Sensitivity analysis is the study of how the uncertainty in the output of a mathematical model or system (numerical or otherwise) can be apportioned to different sources of uncertainty in its inputs. We will use the lpSolveAPI R package as we did in the previous lab.
Monte Carlo Simulations utilize repeated random sampling from a given universe or population to derive certain results. This type of simulation is known as a probabilistic simulation, as opposed to a deterministic simulation.
An example of a Monte Carlo simulation is approximating the value of pi. The simulation generates random points within a unit square and counts how many fall inside the circle inscribed in that square. The more points sampled, the closer the estimate gets to the actual value. After selecting 30,000 random points, the estimate for pi comes much closer to the actual value within the displayed four decimal places of precision.
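As an illustration, here is a minimal sketch of that simulation in R (the seed value and object names such as pi_hat are illustrative, not part of the lab):
# Estimate pi by sampling points uniformly in the unit square and
# counting how many fall inside the circle inscribed in it (radius 0.5)
set.seed(123)   # illustrative seed, for reproducibility only
n = 30000       # number of random points
x = runif(n)
y = runif(n)
inside = (x - 0.5)^2 + (y - 0.5)^2 <= 0.25
# the circle-to-square area ratio is pi/4, so scale the fraction by 4
pi_hat = 4 * sum(inside) / n
pi_hat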
In this lab, we will learn how to generate random samples for various simulations and how to run a sensitivity analysis on the marketing use case covered so far.
Remember to always set your working directory to the source file location. Go to ‘Session’, scroll down to ‘Set Working Directory’, and click ‘To Source File Location’. Read carefully the below and follow the instructions to complete the tasks and answer any questions. Submit your work to RPubs as detailed in previous notes.
For your assignment you may be using different data sets than what is included here. Always read carefully the instructions on Sakai. Tasks/questions to be completed/answered are highlighted in larger bolded fonts and numbered according to their particular placement in the task section.
In order to conduct the sensitivity analysis, we will need to install the lpSolveAPI package again, unless you already have it installed in your R environment.
# require() attempts to load the package and returns FALSE if it is not installed
# dependencies = TRUE makes sure that dependencies are installed as well
if(!require("lpSolveAPI",quietly = TRUE))
install.packages("lpSolveAPI",dependencies = TRUE, repos = "https://cloud.r-project.org")
We will revisit and solve again the marketing case discussed in class (also part of previous lab).
# We start with `0` constraints and `2` decision variables. The object name `lpmark` is arbitrary.
lpmark = make.lp(0, 2)
# Define type of optimization as maximum and dump the screen output into a `dummy` variable
dummy = lp.control(lpmark, sense="max")
# Set the objective function coefficients
set.objfn(lpmark, c(275.691, 48.341))
Add all constraints to the model.
add.constraint(lpmark, c(1, 1), "<=", 350000)
add.constraint(lpmark, c(1, 0), ">=", 15000)
add.constraint(lpmark, c(0, 1), ">=", 75000)
add.constraint(lpmark, c(2, -1), "=", 0)
add.constraint(lpmark, c(1, 0), ">=", 0)
add.constraint(lpmark, c(0, 1), ">=", 0)
Now, view the problem setup in tabular/matrix form. This is a good checkpoint to confirm that our constraints have been properly set.
lpmark
## Model name:
## C1 C2
## Maximize 275.691 48.341
## R1 1 1 <= 350000
## R2 1 0 >= 15000
## R3 0 1 >= 75000
## R4 2 -1 = 0
## R5 1 0 >= 0
## R6 0 1 >= 0
## Kind Std Std
## Type Real Real
## Upper Inf Inf
## Lower 0 0
# solve
solve(lpmark)
## [1] 0
Next we get the optimum results.
# display the objective function optimum value
get.objective(lpmark)
## [1] 43443517
# display the decision variables optimum values
get.variables(lpmark)
## [1] 116666.7 233333.3
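As a quick sanity check (an optional step, not part of the original lab), we can recompute the objective value directly from the optimal decision variables:
# Recompute the objective from the optimal decision variables;
# the result should match get.objective(lpmark)
sum(c(275.691, 48.341) * get.variables(lpmark))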
For the sensitivity analysis, we will add two new code sections to obtain the sensitivity results.
# display sensitivity to coefficients of objective function.
get.sensitivity.obj(lpmark)
## $objfrom
## [1] -96.6820 -137.8455
##
## $objtill
## [1] 1e+30 1e+30
objfrom. Explain in a concise manner what the sensitivity results represent in reference to the marketing model.

The sensitivity results define the bounds on the objective coefficients within which the optimal solution does not change. As long as the coefficient values stay between the listed lower bounds (-96.682 for XRadio, -137.8455 for XTV) and positive infinity (reported as 1e+30), the optimal solution will not change.
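One way to verify this claim is a quick experiment (a sketch, not part of the original lab; the replacement coefficient 200 is an arbitrary value inside the reported range, and the model is restored afterwards so later steps are unaffected):
# Lower the first objective coefficient to 200, which is still inside
# the reported range [-96.682, +Inf); the optimal point should not move
set.objfn(lpmark, c(200, 48.341))
solve(lpmark)
get.variables(lpmark)                   # expect 116666.7 233333.3 again
set.objfn(lpmark, c(275.691, 48.341))   # restore the original coefficients
solve(lpmark)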
# display sensitivity to right hand side constraints.
# There will be a total of m+n values, where m is the number of constraints and n is the number of decision variables
get.sensitivity.rhs(lpmark)
## $duals
## [1] 124.12433 0.00000 0.00000 75.78333 0.00000 0.00000 0.00000
## [8] 0.00000
##
## $dualsfrom
## [1] 1.125e+05 -1.000e+30 -1.000e+30 -3.050e+05 -1.000e+30 -1.000e+30
## [7] -1.000e+30 -1.000e+30
##
## $dualstill
## [1] 1.00e+30 1.00e+30 1.00e+30 4.75e+05 1.00e+30 1.00e+30 1.00e+30 1.00e+30
duals. Explain in a concise manner what the two non-zero sensitivity results represent. Distinguish the binding/non-binding constraints, the surplus/slack, and marginal values.

The two non-zero sensitivity results are the dual (shadow) values of the two binding constraints, X1 + X2 <= 350000 and 2X1 - X2 = 0. The dual of the budget constraint, 124.12, is the marginal value of the budget: increasing the budget by one unit increases sales by about 124.12. The dual of the mix constraint, 75.78, is the marginal value of relaxing that constraint by one unit. These constraints are binding because they hold with zero slack/surplus at the optimum, so any change in them changes the optimal solution; the remaining constraints are non-binding and have zero marginal value.
To acquire a better understanding of the sensitivity results, and to confirm the integrity of the calculations, independent tests can be conducted. For example, we could change the right-hand side of the other binding constraint (2X1 - X2 = 0) from 0 to 1, re-solve, and check whether the resulting change in sales equals the corresponding dual value, as sketched below.
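A minimal sketch of that test, assuming the solved model object lpmark from above (the restore step keeps later runs consistent):
# Bump the RHS of the binding mix constraint (R4: 2*X1 - X2 = 0) from 0 to 1,
# re-solve, and compare objectives; the difference should be about 75.78
base = get.objective(lpmark)
set.rhs(lpmark, 1, constraints = 4)
solve(lpmark)
get.objective(lpmark) - base          # expect approximately 75.78
set.rhs(lpmark, 0, constraints = 4)   # restore the original model
solve(lpmark)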
For this task we will be running a Monte Carlo simulation to calculate the probability that the daily return from S&P will be > 5%. We will assume that the historical S&P daily return follows a normal distribution with an average daily return of 0.03 (%) and a standard deviation of 0.97 (%).
To begin we will generate 100 random samples from the normal distribution. For the generated samples we will calculate the mean, standard deviation, and probability of occurrence where the simulation result is greater than 5%.
To generate random samples from a normal distribution we will use the rnorm() function in R. In the example below we set the number of runs (or samples) to 100.
# number of simulations/samples
runs = 100
# random number generator per defined normal distribution with given mean and standard deviation
sims = rnorm(runs,mean=0.03,sd=0.97)
# Mean calculated from the random distribution of samples
average = mean(sims)
average
## [1] 0.01400681
# STD calculated from the random distribution of samples
std = sd(sims)
std
## [1] 0.9895473
# probability of occurrence on any given day based on samples: the count (or sum) of samples greater than or equal to 5% (0.05), divided by the total number of samples
prob = sum(sims >=0.05)/runs
prob
## [1] 0.49
We now repeat the simulation with 1,000 and 10,000 samples to see how the estimates behave as the number of runs grows.
# number of simulations/samples
runs2 = 1000
runs3 = 10000
# random number generator per defined normal distribution with given mean and standard deviation
sims2 = rnorm(runs2,mean=0.03,sd=0.97)
sims3 = rnorm(runs3,mean=0.03,sd=0.97)
average2 = mean(sims2)
average3 = mean(sims3)
average2
## [1] 0.02907087
average3
## [1] 0.03183512
std2 = sd(sims2)
std3 = sd(sims3)
std2
## [1] 0.9570813
std3
## [1] 0.9605983
prob2 = sum(sims2 >=0.05)/runs2
prob3 = sum(sims3 >=0.05)/runs3
prob2
## [1] 0.487
prob3
## [1] 0.489
Compare the results across the three sample sizes. How does the convergence of the estimates relate to the simulation of pi that was presented in the introductory paragraph?
runs4 = c(runs, runs2, runs3)
average4 = c(average, average2, average3)
std4 = c(std, std2, std3)
prob4 = c(prob, prob2, prob3)
out = rbind(Runs = runs4, Average = average4, Standard_Deviation = std4, Probability = prob4)
out
## [,1] [,2] [,3]
## Runs 100.00000000 1.000000e+03 1.000000e+04
## Average 0.01400681 2.907087e-02 3.183512e-02
## Standard_Deviation 0.98954731 9.570813e-01 9.605983e-01
## Probability 0.49000000 4.870000e-01 4.890000e-01
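For reference (a check not in the original lab), the exact probability under the assumed normal distribution can be computed in closed form and compared with the simulated estimates above:
# Theoretical probability that a N(0.03, 0.97) draw is at least 0.05
1 - pnorm(0.05, mean = 0.03, sd = 0.97)   # approximately 0.4918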
As the number of simulations increases, the sample statistics move closer to the theoretical values (mean 0.03, standard deviation 0.97, and a probability of about 0.4918). The third scenario with 10,000 simulations gives the best estimate of the probability because it uses the most samples. The same behavior appeared in the pi example from the introduction: as n increased, the estimate of pi moved closer to the theoretical value.
The last exercise (2C) is optional for those interested in further enhancing their subject-matter learning and refining their skills in R. Your work will be assessed, but you will not be graded on this exercise. You can follow the instructions presented in the equivalent Excel example in the video at https://www.youtube.com/watch?v=wKdmEXCvo9s.