PMI-RMP Road to Certificate

Author

Rick Zhang

Published

January 6, 2026

Identify key stakeholders and their needs, and the project's position within the organization (urgent or not)

  1. Deliver value incrementally

  2. Monitor results for business value

  3. Measure progress

  4. Subdivide tasks to find the MVP (minimum viable product = core value loop)

  5. Make data-driven decisions

Term | Focus | Audience | Scope & Detail | Example Outcome
POC (Proof of Concept) | Feasibility ("Can it be built?") | Internal (team/stakeholders) | Very limited; tests the core idea | Basic script showing a technology works
Prototype | Design & usability ("How will it look/feel?") | Internal + early testers | More visual/functional mock-up | Clickable wireframe or demo model
MVP (Minimum Viable Product) | Market validation ("Will users pay for it?"); a quick prototype put in front of real users | External (early customers) | Functional product with core features | Basic app released to test demand
MBI (Minimum Business Increment) | Business value ("Does it deliver measurable business impact?") | External (paying customers / broader market) | Smallest piece of work that can be realized quickly and delivers stand-alone value | A shipped increment that independently drives revenue, retention, cost savings, or another key business metric

Deliver Value Incrementally

Deliverables:

Kanban board: a tool that shows the stages of work required to deliver value incrementally.

Backlog: created by the product owner, a prioritized list of work that the project team draws from.

Lessons learned, PDCA, and agile practices.

Keep business value in check

Ensure the delivered value remains aligned with business goals.

  1. Financial gain - Increased sales, revenue or profit

  2. Improvement - efficiency, quality, conditions, or infrastructure

  3. New customers and opportunities - gain in market share

  4. First to market - Prestige for the organization and competitive advantage.

  5. Social - Impact a wider community or cause.

  6. Technological - Improve processes or strengthen digital infrastructure or presence.

    Project Charter

    Business Case - provides the justification for a project, program, or portfolio; it is essential for ensuring the business value of the project work.

    Release Planning - product roadmap, iterations, and sprints allow the product owner and team to decide how much needs to be developed and how long it will take to produce a releasable product, based on business goals.

Subdivide tasks to find MVP (sub-tasks)

MVP: a minimum viable product is the foundation of a prototype that allows early testing and increases the potential of a project.

Story Mapping - Kanban board, customer validation

MoSCoW method - separates user requirements into four categories.

Must have - non-negotiable attribute

Should have - important but not essential

Could have - Desirable, time and budget permitting

Won’t have - not in the budget or timeline; “nice to have” items that add no real value for now

MBI - Facebook’s “Like” Button

  • They are post-validation (after MVP/proof of demand)

  • They are independently deployable and provide end-to-end value (no “half-finished” experience).

  • They tie directly to measurable business outcomes (revenue, cost savings, retention, efficiency).

  • They are intentionally small to enable fast feedback and low risk.

Measure progress - the best tools cover both quantitative and qualitative measures

  • Define value from the customer’s perspective

  • Determine value expectations

  • Set targets and baselines

  • Determine metrics that communicate progress

  • Select one or more means of collecting metric data

  • Collect data at regular intervals

TOOLS:

  • WBS, Kanban boards

  • Burndown charts for predicting when all the work will be completed.

  • EVM (earned value management): tracks project performance against the baseline (cost, quality, time, scope, and resources); see the sketch after this list.

  • Reporting and tracking tools (PMIS, Microsoft Project, etc.; cumulative flow diagrams, velocity charts)

  • Retrospectives - held after an iteration or phase, they actively measure progress.
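
To make the EVM tracking concrete, here is a minimal R sketch with hypothetical figures (the PV/EV/AC numbers below are illustrative only, not taken from any exam scenario):

Code
# Hypothetical EVM snapshot (illustrative numbers)
PV <- 100000   # planned value: work scheduled to date, at budgeted cost
EV <- 80000    # earned value: work actually completed, at budgeted cost
AC <- 90000    # actual cost of the work completed

CV  <- EV - AC   # cost variance:     -10,000 (over budget)
SV  <- EV - PV   # schedule variance: -20,000 (behind schedule)
CPI <- EV / AC   # cost performance index:    ~0.89
SPI <- EV / PV   # schedule performance index: 0.80

BAC <- 500000    # budget at completion
EAC <- BAC / CPI # estimate at completion if current cost efficiency continues: 562,500
cat("CV:", CV, " SV:", SV, " CPI:", round(CPI, 2), " SPI:", round(SPI, 2), " EAC:", round(EAC), "\n")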

Make data-driven decisions (focus on business only)

Data from different sources (internal and external, cost and profit, quality, customer and supplier, etc.)

Tools:

  • schedule data

  • release planning

  • quality metrics

  • work performance data

  • risk register

  • requirements traceability matrix

  • Product roadmap

Practice Questions

Question 1 (Risk Analysis – Quantitative) Your project has three key risks with the following data after Monte Carlo simulation (10,000 iterations):

  • Risk A: Probability 40%, EMV = –$80,000

  • Risk B: Probability 25%, EMV = –$120,000

  • Risk C: Probability 15%, EMV = +$50,000 (upside opportunity)

The contingency reserve is currently set at $150,000. The sponsor asks you to recommend the most appropriate adjustment. What should you do?

A) Increase contingency reserve to $200,000 to cover the P50 value B) Recommend $170,000 based on aggregated expected monetary value C) Set reserve at $0 because upside opportunity offsets threats D) Perform sensitivity analysis before deciding on reserve

Answer: B

EMV (expected monetary value) = probability × impact ($)

Threats: A (-$80,000) + B (-$120,000) = -$200,000

Opportunity: C (+$50,000)

Net aggregated EMV = -$200,000 + $50,000 = -$150,000

The net EMV matches the current $150,000 reserve exactly, yet the question implies an adjustment is needed and option B recommends $170,000. In practice, the aggregated threat EMV is often carried without fully netting the upside, because the opportunity may not be realized in the same scenarios as the threats (something the Monte Carlo results would show), which supports a reserve somewhat above $150,000.
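
The same aggregation, checked quickly in R (the EMV figures are treated as already probability-weighted, as given in the question):

Code
# EMVs as given in the question (already probability-weighted)
emv_threats     <- c(A = -80000, B = -120000)
emv_opportunity <- c(C = 50000)

sum(emv_threats)                          # -200,000: aggregated threat exposure
sum(emv_threats) + sum(emv_opportunity)   # -150,000: net EMV after offsetting the upside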

Question 2 (Risk Response – Threats) During risk response planning for a high-priority technical risk, the team identifies a vendor who can deliver a proven alternative component that reduces probability from 70% to 10%, but adds $45,000 to the budget. The cost of impact if the risk occurs is estimated at $300,000. What is the most appropriate strategy?

A) Accept the risk because the cost of mitigation exceeds 15% of impact B) Mitigate by contracting the vendor (secondary risk created) C) Transfer the risk fully to the vendor via warranty clause D) Avoid the risk by redesigning the component in-house

Answer: B

EMV = probability × impact

EMV before mitigation = 0.7 × $300,000 = $210,000

EMV after mitigation = 0.1 × $300,000 = $30,000

Risk reduction = $210,000 - $30,000 = $180,000, which far exceeds the $45,000 added to the budget, so the mitigation is cost-effective.
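
The cost-benefit of the mitigation, checked in R:

Code
prob_before     <- 0.70
prob_after      <- 0.10
impact          <- 300000
mitigation_cost <- 45000

emv_before <- prob_before * impact          # 210,000
emv_after  <- prob_after  * impact          #  30,000
(emv_before - emv_after) - mitigation_cost  # 135,000 net benefit: mitigation pays for itself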

Question 3 (Monitor and Close Risks) In the monthly risk review meeting, you notice that a previously low-priority risk has triggered and is now impacting the critical path. The risk owner has not updated the risk register in two months. What is your BEST immediate action?

A) Escalate to the project sponsor for additional funding B) Update the risk register, reassess probability/impact, and trigger the response plan C) Close the risk as “realized” and document lessons learned D) Issue a change request to extend the schedule

Answer: B

Question 4 (Monte Carlo Simulation – Details and Outputs) You are performing a Monte Carlo simulation on your project’s schedule using 5,000 iterations. The simulation assumes triangular distributions for activity durations: optimistic (O), most likely (ML), and pessimistic (P). After running the simulation, the output shows a mean project duration of 120 days, with P10 at 105 days, P50 at 118 days, and P90 at 140 days. Stakeholders request a contingency reserve to achieve an 80% confidence level in meeting the 130-day target. Based on the simulation results, what is the recommended contingency reserve in days?

Formula reminder: Contingency reserve = (Target confidence level duration) - (Baseline/mean duration), adjusted from simulation percentiles.

A) 10 days (P90 - mean) B) 12 days (P80 interpolated ≈ 130 days - mean) C) 20 days (P90 - P50) D) 25 days (P90 - P10 / 2)

Answer: B

Contingency reserve ≈ P80 (≈ 130 days, interpolated between P50 = 118 and P90 = 140) - P50 (118 days) = 12 days
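
A minimal sketch of the percentile approach, using a single triangular activity for illustration only (base R, inverse-transform sampling; it will not reproduce the 105/118/140-day figures, which come from the full multi-activity model):

Code
set.seed(2026)
n <- 5000

# Inverse-transform sampling from a triangular(min = a, mode = m, max = b) distribution
rtriangular <- function(n, a, m, b) {
  u  <- runif(n)
  fc <- (m - a) / (b - a)
  ifelse(u < fc,
         a + sqrt(u * (b - a) * (m - a)),
         b - sqrt((1 - u) * (b - a) * (b - m)))
}

durations <- rtriangular(n, a = 8, m = 12, b = 22)  # one illustrative activity

p50 <- quantile(durations, 0.50)
p80 <- quantile(durations, 0.80)
p80 - p50   # reserve needed to move from a 50% to an 80% confidence duration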

Question 5 (Decision Tree Analysis – Branching and EMV) Your team is evaluating two vendor options for a critical component using decision tree analysis. Vendor A costs $100,000 upfront with a 60% chance of on-time delivery (value +$500,000) and 40% chance of delay (impact -$200,000). Vendor B costs $150,000 upfront with an 80% chance of on-time delivery (same +$500,000 value) and 20% chance of delay (same -$200,000 impact). Calculate the net EMV for each path and recommend the better option.

Formula: Net Path Value = (Probability × Outcome) - Initial Cost; Overall EMV = Sum of net path values.

A) Choose Vendor A: EMV = +$60,000 B) Choose Vendor B: EMV = +$90,000 C) Choose Vendor A: EMV = +$140,000 D) Choose Vendor B: EMV = +$210,000

Answer: D

Vendor A: EMV = 0.6 × $500,000 + 0.4 × (-$200,000) - $100,000 = $120,000

Vendor B: EMV = 0.8 × $500,000 + 0.2 × (-$200,000) - $150,000 = $210,000
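
The same roll-up for both vendors in R:

Code
# Net path value = (probability-weighted outcomes) - upfront cost
emv_A <- 0.6 * 500000 + 0.4 * (-200000) - 100000   # 120,000
emv_B <- 0.8 * 500000 + 0.2 * (-200000) - 150000   # 210,000
c(VendorA = emv_A, VendorB = emv_B)                # Vendor B has the higher net EMV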

Question 6 (Sensitivity Analysis – Tornado Diagram and Formulas) In quantitative risk analysis, you create a tornado diagram to show sensitivity of the project’s NPV to key variables. The baseline NPV is $1,200,000. Variables include: Cost overrun (range -20% to +30%, sensitivity impact ±$400,000), Revenue delay (range -10% to +15%, impact ±$250,000), and Market demand (range -15% to +20%, impact ±$150,000). Which variable should be prioritized for further mitigation, and why?

Formula: Sensitivity = (Max impact - Min impact) / Baseline, but prioritize by widest bar in tornado (absolute impact range).

A) Cost overrun: widest range (±$400,000) B) Revenue delay: moderate range but higher percentage sensitivity C) Market demand: narrowest range, least priority D) All equal; perform Monte Carlo next

Answer: A

Cost overrun sensitivity = 2 × $400,000 / $1,200,000 ≈ 67%

Revenue delay = 2 × $250,000 / $1,200,000 ≈ 42%

Market demand = 2 × $150,000 / $1,200,000 = 25%
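
The same ranking, with a rough tornado-style plot in base R (the swing is taken as the symmetric ± impact given in the question):

Code
baseline <- 1200000
impacts  <- c("Cost overrun"  = 400000,
              "Revenue delay" = 250000,
              "Market demand" = 150000)

# Sensitivity as the full low-to-high swing relative to baseline NPV: 67%, 42%, 25%
round(2 * impacts / baseline * 100)

# Horizontal bars, widest at the top, mimicking a tornado diagram
barplot(sort(impacts) / 1000, horiz = TRUE, las = 1,
        xlab = "Impact swing around baseline NPV ($k, +/-)",
        main = "Tornado view of NPV sensitivity")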

Question 7 (Monte Carlo Simulation – Distributions and Correlations) During Monte Carlo setup for cost risk analysis, you model three correlated risks: Material cost (normal distribution, mean $50,000, SD $10,000), Labor cost (lognormal, mean $80,000, SD $15,000), and Exchange rate fluctuation (uniform, $0.90-$1.10). Risks have a +0.7 correlation between material and labor. After 10,000 iterations, the simulation outputs a P75 total cost of $160,000 against a baseline of $130,000. What is the primary reason to include correlations in the model?

A) To reduce iteration count for faster computation B) To accurately reflect real-world dependencies, avoiding under/overestimation of variance C) To convert all distributions to triangular for simplicity D) To eliminate the need for sensitivity analysis

Answer: B

Code
set.seed(2026)          # for reproducibility
n <- 10000

# ─── 1. Generate correlated Material and Labor ────────────────────────

# Correlation matrix (material ↔ labor = 0.7, exchange independent)
rho <- 0.7
cor_mat <- matrix(c(1.0, rho, 0,
                    rho, 1.0, 0,
                    0,   0,   1.0), nrow = 3, byrow = TRUE)

# Cholesky decomposition: L %*% t(L) = cor_mat
L <- t(chol(cor_mat))

# Independent standard normals (n rows × 3 columns)
Z <- matrix(rnorm(n * 3), nrow = n, ncol = 3)

# Correlated standard normals: cov(U) = L %*% t(L) = cor_mat
U <- Z %*% t(L)

# ─── 2. Transform to target distributions ─────────────────────────────

# Material ~ Normal(50,000, 10,000)
material <- 50000 + 10000 * U[, 1]

# Labor — lognormal with *real-space* mean = 80,000 and sd = 15,000
m <- 80000
s <- 15000
sigma <- sqrt(log(1 + (s/m)^2))          # sd of log(labor)
mu    <- log(m) - 0.5 * sigma^2          # mean of log(labor)
labor <- exp(mu + sigma * U[, 2])

# Exchange rate — independent Uniform(0.90, 1.10)
exchange <- runif(n, 0.90, 1.10)

# ─── 3. Total cost ────────────────────────────────────────────────────
total_cost <- (material + labor) * exchange

# ─── 4. Results ───────────────────────────────────────────────────────
cat("Monte Carlo results (", n, " iterations):\n\n", sep = "")
Monte Carlo results (10000 iterations):
Code
cat("P75 total cost:", format(round(quantile(total_cost, 0.75)), big.mark = ","), "\n")
P75 total cost: 143,906 
Code
cat("Median       :", format(round(median(total_cost)),     big.mark = ","), "\n")
Median       : 128,749 
Code
cat("Mean         :", format(round(mean(total_cost)),       big.mark = ","), "\n")
Mean         : 129,782 
Code
cat("Baseline     :", format(130000, big.mark = ","), "\n")
Baseline     : 130,000 
Code
cat("P75 contingency:", format(round(quantile(total_cost, 0.75) - 130000), big.mark = ","), "\n\n")
P75 contingency: 13,906 
Code
P75 <- round(quantile(total_cost, 0.75))
# Summary statistics
summary(total_cost)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
  54302  114870  128749  129782  143906  236489 
Code
# Optional: histogram
 hist(total_cost, breaks = 80, col = "#aacdff", border = "white",
      main = "Total Cost Distribution — 10,000 iterations",
      xlab = "Total Cost ($)", las = 1)
 abline(v = quantile(total_cost, 0.75), col = "red", lwd = 2, lty = 2)
 legend("topright", legend = paste("P75 =", P75), col = "red", lty = 2)

Question 8 (Decision Tree Analysis – Multi-Stage with Formulas) In a multi-stage decision tree for a product launch, the first decision is to invest $200,000 in R&D with 70% success probability (leading to market test) or abandon (EMV $0). If successful, market test costs $100,000 with outcomes: High demand (50%, +$1,000,000), Medium (30%, +$400,000), Low (20%, -$50,000). Calculate the overall EMV and decide whether to proceed with R&D.

Formula: Roll back from end nodes: EMV_node = Sum (P × Outcome) - Cost_at_node; compare to alternatives.

A) Proceed: Overall EMV = +$210,000 B) Abandon: EMV = $0 (better) C) Proceed: Overall EMV = +$385,000 D) Proceed: Overall EMV = +$157,000

Answer: D

The failure branch contributes 0.3 × $0 = $0 at the chance node; the $200,000 R&D cost is incurred on both branches.

EMV contribution of the success branch = 0.7 × (0.5 × $1,000,000 + 0.3 × $400,000 + 0.2 × (-$50,000) - $100,000) = 0.7 × $510,000 = $357,000

Overall EMV = $357,000 - $200,000 = $157,000, so proceed with R&D. (The rdecision model below returns $164,000 because it also lets the team decline a low-demand launch, capping that branch at -$300,000 instead of -$350,000.)

Code
library(rdecision)

# Define nodes with labels
d1 <- DecisionNode$new("Invest in R&D?")
c1 <- ChanceNode$new("R&D Success?")
d2 <- DecisionNode$new("Do Market Test?")
c2 <- ChanceNode$new("Demand Level")
d_high <- DecisionNode$new("High Demand - Launch?")
d_med <- DecisionNode$new("Medium Demand - Launch?")
d_low <- DecisionNode$new("Low Demand - Launch?")

# Leaf nodes (set utility = 0 since not health-related)
t_ab_root <- LeafNode$new("Abandon (root)", utility = 0)
t_fail <- LeafNode$new("Fail (-$200k)", utility = 0)
t_ab_post <- LeafNode$new("Abandon post R&D (-$200k)", utility = 0)
t_h_no <- LeafNode$new("No launch high (-$300k)", utility = 0)
t_h_yes <- LeafNode$new("Launch high (+$700k net)", utility = 0)
t_m_no <- LeafNode$new("No launch med (-$300k)", utility = 0)
t_m_yes <- LeafNode$new("Launch med (+$100k net)", utility = 0)
t_l_no <- LeafNode$new("No launch low (-$300k)", utility = 0)
t_l_yes <- LeafNode$new("Launch low (-$350k net)", utility = 0)

# Define edges (use named arguments to avoid positional mismatches)
# For Actions (from decisions): source, target, label="", cost=0, benefit=0
# For Reactions (from chances): source, target, p=0, cost=0, benefit=0, label=""

e_abandon_root <- Action$new(d1, t_ab_root, label = "Abandon", cost = 0, benefit = 0)
e_invest <- Action$new(d1, c1, label = "Invest $200k", cost = 200000, benefit = 0)

e_fail <- Reaction$new(c1, t_fail, p = 0.3, cost = 0, benefit = 0, label = "Fail 30%")
e_success <- Reaction$new(c1, d2, p = 0.7, cost = 0, benefit = 0, label = "Success 70%")

e_abandon_post <- Action$new(d2, t_ab_post, label = "Abandon", cost = 0, benefit = 0)
e_test <- Action$new(d2, c2, label = "Test $100k", cost = 100000, benefit = 0)

e_high <- Reaction$new(c2, d_high, p = 0.5, cost = 0, benefit = 0, label = "High 50%")
e_med <- Reaction$new(c2, d_med, p = 0.3, cost = 0, benefit = 0, label = "Medium 30%")
e_low <- Reaction$new(c2, d_low, p = 0.2, cost = 0, benefit = 0, label = "Low 20%")

e_high_no <- Action$new(d_high, t_h_no, label = "No", cost = 0, benefit = 0)
e_high_yes <- Action$new(d_high, t_h_yes, label = "Yes +$1M", cost = 0, benefit = 1000000)
e_med_no <- Action$new(d_med, t_m_no, label = "No", cost = 0, benefit = 0)
e_med_yes <- Action$new(d_med, t_m_yes, label = "Yes +$400k", cost = 0, benefit = 400000)
e_low_no <- Action$new(d_low, t_l_no, label = "No (best)", cost = 0, benefit = 0)
e_low_yes <- Action$new(d_low, t_l_yes, label = "Yes -$50k", cost = 0, benefit = -50000)

# Assemble
nodes <- list(d1, c1, d2, c2, d_high, d_med, d_low,
              t_ab_root, t_fail, t_ab_post, t_h_no, t_h_yes,
              t_m_no, t_m_yes, t_l_no, t_l_yes)
edges <- list(e_abandon_root, e_invest, e_fail, e_success, e_abandon_post, e_test,
              e_high, e_med, e_low, e_high_no, e_high_yes,
              e_med_no, e_med_yes, e_low_no, e_low_yes)

# Build tree
dt <- DecisionTree$new(V = nodes, E = edges)

# Evaluate (expected values, all strategies)
res <- dt$evaluate(setvars = "expected")

# Compute EMV (Benefit - Cost)
res$EMV <- res$Benefit - res$Cost

# Print results
#print(res)

# Overall EMV and decision
optimal_emv <- max(res$EMV)
cat("\nOptimal overall EMV: $", format(optimal_emv, big.mark = ","), "\n")

Optimal overall EMV: $ 164,000 
Code
cat("Decision: Proceed with R&D (positive EMV)\n")
Decision: Proceed with R&D (positive EMV)
Code
# Draw tree (probs on chance edges, costs/benefits on actions if non-zero)
dt$draw(border = TRUE)

Question 9 (PERT vs Triangular Distributions in Monte Carlo) You are modeling activity duration risk using both PERT and triangular distributions for Monte Carlo simulation (10,000 iterations each). For one critical path activity:

  • Optimistic = 8 days, Most Likely = 12 days, Pessimistic = 22 days

  • PERT expected duration = (O + 4ML + P) / 6 = 13 days

  • Triangular expected duration = (O + ML + P) / 3 = 14 days

The simulation using PERT shows a project P80 duration of 145 days; using triangular shows P80 of 152 days. Assuming the same inputs and correlations, what is the most likely reason for the difference in P80 output?

A) Triangular distribution has higher variance due to equal weighting of extremes B) PERT assumes beta distribution with lower standard deviation C) Triangular distribution is inappropriate for schedule risk D) Monte Carlo iteration count should be increased to 50,000
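
A short sketch of why the two distributions differ for this activity, assuming the common beta-PERT parameterization (shape parameters derived with lambda = 4); exact figures depend on the tool's implementation:

Code
o <- 8; ml <- 12; p <- 22

# Triangular moments (closed form)
tri_mean <- (o + ml + p) / 3                                    # 14 days
tri_sd   <- sqrt((o^2 + ml^2 + p^2 - o*ml - o*p - ml*p) / 18)   # ~2.94 days

# Beta-PERT: a beta distribution scaled to [o, p], with lambda = 4 weighting on the mode
pert_mean <- (o + 4 * ml + p) / 6                               # 13 days
alpha <- 1 + 4 * (ml - o) / (p - o)
beta  <- 1 + 4 * (p - ml) / (p - o)
pert_sd <- (p - o) * sqrt(alpha * beta / ((alpha + beta)^2 * (alpha + beta + 1)))  # ~2.5 days

# The triangular distribution has the larger mean and spread, which pushes the
# simulated project P80 higher than the PERT-based run.
c(tri_mean = tri_mean, tri_sd = tri_sd, pert_mean = pert_mean, pert_sd = pert_sd)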

Question 10 (Decision Tree – Expected Value of Perfect Information – EVPI) A project faces a key uncertain event with two outcomes: Favorable (60% probability, project NPV +$800,000) or Unfavorable (40%, NPV –$300,000). Without information, the best decision is to proceed (EMV = 0.6×800k + 0.4×(–300k) = +$360,000). A market study can perfectly predict the outcome at a cost of $80,000. What is the Expected Value of Perfect Information (EVPI)?

Formula: EVPI = Expected value with perfect information – Expected value without information

A) $80,000 B) $120,000 C) $200,000 D) $280,000

Answer: B

EMV without information = 0.6 × $800k + 0.4 × (-$300k) = +$360,000

EV with perfect information (EVwPI) = 0.6 × $800k + 0.4 × $0 = +$480,000 (with perfect information the unfavorable outcome can be avoided by not proceeding)

EVPI = EVwPI - EMV without information = $480,000 - $360,000 = $120,000
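
The same calculation in R; the comparison with the $80,000 study cost is an extra check, not part of the question:

Code
p_fav <- 0.6; p_unf <- 0.4

emv_without <- p_fav * 800000 + p_unf * (-300000)   # 360,000: best decision without information
ev_with_pi  <- p_fav * 800000 + p_unf * 0           # 480,000: the bad outcome can be avoided
evpi <- ev_with_pi - emv_without                    # 120,000

evpi > 80000   # TRUE: the study costs less than the information is worth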

Question 11 (Monte Carlo – Latin Hypercube vs Simple Monte Carlo) Your risk analyst proposes switching from simple (random) Monte Carlo sampling to Latin Hypercube Sampling (LHS) for a cost risk model with 15 input variables. After running both with 5,000 iterations, LHS produces a tighter confidence interval for the P90 total cost estimate. What is the primary advantage of LHS in this context?

A) It reduces computation time significantly B) It provides better coverage of the input probability space with fewer iterations C) It eliminates the need to model correlations D) It automatically converts all distributions to normal

Answer: B

Latin Hypercube Sampling (LHS) is a stratified sampling technique that ensures each input distribution is sampled more evenly across its range, even with fewer iterations. This leads to:

  • Lower variance in output estimates

  • Tighter confidence intervals

  • More stable P90/P10 estimates compared with simple random (Monte Carlo) sampling
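
A toy illustration of the stratification idea, assuming the CRAN lhs package is available; the two-variable cost model below is made up for the example and is not the 15-variable model in the question:

Code
library(lhs)   # CRAN package providing randomLHS()
set.seed(2026)
n <- 500

# Simple random sampling: two independent uniforms per iteration
u_srs <- matrix(runif(n * 2), ncol = 2)

# Latin Hypercube Sampling: each column is stratified into n equal-probability bins
u_lhs <- randomLHS(n, 2)

# Toy cost model: transform the uniforms into two normally distributed cost elements
to_cost <- function(u) qnorm(u[, 1], 50000, 10000) + qnorm(u[, 2], 80000, 15000)

quantile(to_cost(u_srs), 0.90)
quantile(to_cost(u_lhs), 0.90)
# Repeating this many times shows the LHS estimate of P90 varies less run to run,
# i.e. better coverage of the input space for the same number of iterations.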

Question 12 (Sensitivity Analysis – Correlation Coefficients and Partial Rank Correlation) In a Monte Carlo simulation output, you review a sensitivity tornado diagram based on Spearman rank correlation coefficients between input variables and project NPV. The top three ranked inputs are:

  • Input X: SRC = +0.68

  • Input Y: SRC = –0.55

  • Input Z: SRC = +0.42

Later, partial rank correlation coefficients (PRCC) are calculated to control for confounding:

  • X (controlling for others): PRCC = +0.65

  • Y: PRCC = –0.12

  • Z: PRCC = +0.38

Which input should be prioritized for risk response planning, and why?

A) Input X – highest absolute SRC and stable PRCC B) Input Z – highest PRCC after controlling for others C) Input Y – large negative SRC indicates strong threat D) None – PRCC differences indicate multicollinearity issues

Answer: A

  • Spearman Rank Correlation (SRC) shows raw correlation with the output.

  • Partial Rank Correlation Coefficient (PRCC) removes the confounding effect of other variables (controls for multicollinearity).

  • Input X remains very strong even after controlling (PRCC = +0.65) → it has an independent, significant effect.

  • Input Y drops dramatically (from –0.55 to –0.12) → much of its apparent effect was due to correlation with other variables.

  • Input Z stays moderate.

Prioritize Input X: it has the strongest independent influence on NPV (option A).
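
A base-R sketch of how SRC and PRCC can be computed from simulation samples; the inputs below are synthetic, with x2 deliberately correlated with x1 so that its SRC and PRCC diverge the way Input Y's do in the question:

Code
set.seed(2026)
n <- 2000
x1 <- rnorm(n)
x2 <- 0.8 * x1 + 0.6 * rnorm(n)   # correlated with x1, mimicking a confounded input
x3 <- rnorm(n)
npv <- 2 * x1 + 0.2 * x2 + x3 + rnorm(n)

# Spearman rank correlations (raw sensitivity, as in the tornado diagram)
sapply(list(x1 = x1, x2 = x2, x3 = x3), cor, y = npv, method = "spearman")

# Partial rank correlation: correlate the rank residuals after regressing out the other inputs
prcc <- function(x, y, others) {
  ranked_others <- as.data.frame(lapply(others, rank))
  rx <- resid(lm(rank(x) ~ ., data = ranked_others))
  ry <- resid(lm(rank(y) ~ ., data = ranked_others))
  cor(rx, ry)
}
c(x1 = prcc(x1, npv, list(x2 = x2, x3 = x3)),
  x2 = prcc(x2, npv, list(x1 = x1, x3 = x3)),
  x3 = prcc(x3, npv, list(x1 = x1, x2 = x2)))
# x2's PRCC is far smaller than its Spearman coefficient: most of its apparent
# effect on npv was inherited from its correlation with x1.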