ScopeCalc - Monte Carlo Calculator
Paste your team's cycle times and get probabilistic delivery forecasts in seconds.
Enter how long your last completed items took (separated by commas or new lines).
How many items are in your upcoming project or sprint?
Results will appear here after running the simulation.
Forecast delivery timelines using the same probabilistic techniques used inside ScopeCone. Feed in your completed cycle times, set the backlog size you care about, and compare the 50%, 85%, and 95% outcomes before you promise a date.
How Monte Carlo forecasting turns delivery history into probabilities
Monte Carlo forecasting samples thousands of possible futures from your historical cycle-time distribution. Each iteration draws completed work items at random, with replacement, until the number of draws matches your backlog size, then sums their durations into one possible total. Repeating that experiment hundreds or thousands of times produces a probability distribution grounded in what your team has actually delivered, not what they wished they could deliver.
The tool defaults to 1,000 iterations, which is enough to stabilise the 50%, 85%, and 95% confidence bands for most software teams. Because the process samples with replacement, it copes well with thin datasets and does not require normally distributed data; skewed or fat-tailed cycle times are expected in complex product work.
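In code, that loop is only a few lines. Here is a minimal TypeScript sketch of the procedure, assuming work is delivered one item at a time; the function names are illustrative, not the calculator's actual internals.

```typescript
/** One simulated future: draw `backlogSize` cycle times (with replacement) and sum them. */
function simulateOneRun(cycleTimes: number[], backlogSize: number): number {
  let totalDays = 0;
  for (let i = 0; i < backlogSize; i++) {
    // Sampling with replacement: any historical item can be drawn repeatedly.
    totalDays += cycleTimes[Math.floor(Math.random() * cycleTimes.length)];
  }
  return totalDays;
}

/** Repeat the experiment and read percentiles off the sorted outcomes. */
function forecast(
  cycleTimes: number[],
  backlogSize: number,
  iterations = 1000,
): { p50: number; p85: number; p95: number } {
  const outcomes = Array.from({ length: iterations }, () =>
    simulateOneRun(cycleTimes, backlogSize),
  ).sort((a, b) => a - b);

  const percentile = (p: number) =>
    outcomes[Math.min(outcomes.length - 1, Math.floor(p * outcomes.length))];

  return { p50: percentile(0.5), p85: percentile(0.85), p95: percentile(0.95) };
}
```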
Worked example: 18 items for a platform refresh
Load the sample data to see how an 18-item platform initiative behaves. The historical throughput is a mix of quick wins (2–3 days) and slower, gnarlier tasks (6–8 days). Running 1,000 simulations shows a median (P50) completion time of roughly 74 days, while the 85% band lands around 88 days. That lets the team speak confidently about a window of roughly 11 to 13 weeks, even though individual delivery times vary wildly.
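To reproduce that shape with the sketch above, feed the `forecast` function a history in the same 2–3 / 6–8 day mix. The dataset below is hypothetical (the tool's bundled sample data may differ), and exact percentiles will shift slightly between runs.

```typescript
// Hypothetical history in the same quick-win / gnarly-task shape.
const sampleCycleTimes = [2, 3, 2, 3, 2, 3, 6, 7, 8, 2, 3, 2, 6, 3, 7, 2, 8, 6];

const { p50, p85, p95 } = forecast(sampleCycleTimes, 18);
console.log(`P50 ${p50}d, P85 ${p85}d, P95 ${p95}d`);
```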
Share the URL with stakeholders and they will see the exact same inputs, results, and histogram. That keeps follow-up conversations focused on the trade-offs—reducing scope, funding parallel work, or accepting a lower confidence band—instead of arguing about whose spreadsheet is the newest.
When to use Monte Carlo versus velocity or story points
Use Monte Carlo whenever you have at least 20–30 completed items that represent the type of work you are planning. It shines when teams operate with flow-based metrics (cycle time, throughput) or in environments where story points drift over time. If you need to compare multiple scenarios, run the simulation with different backlog sizes or apply filters to include only work from a specific class of service.
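Comparing scenarios with the earlier sketch is just a loop over backlog sizes; this reuses the hypothetical `forecast` helper and sample data from above.

```typescript
// Same history, three scope scenarios.
for (const backlogSize of [12, 18, 24]) {
  const { p50, p85 } = forecast(sampleCycleTimes, backlogSize);
  console.log(`${backlogSize} items -> P50 ${p50}d, P85 ${p85}d`);
}
```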
Velocity and story points can still help inside the team, but they should not be the only source for delivery commitments. Combining Monte Carlo outputs with qualitative signals—risk reviews, dependency mapping, tech debt spikes—produces a more honest plan for engineering leaders and stakeholders alike.
Citations
- Troy Magennis – Forecasting using Monte Carlo simulation
Explains how sampling cycle times creates probabilistic delivery dates for agile teams.
- Daniel Vacanti – Why probabilistic forecasting beats velocity
Deep dive into throughput-based forecasting with Monte Carlo simulations and cumulative-flow data.
- Scrum.org – Monte Carlo simulation for Sprint Planning
Practical guidance on using Monte Carlo to set Sprint goals with transparent confidence levels.
Related reading
- Why engineering roadmap estimation fails
How deterministic estimates break down and ways to communicate uncertainty with leadership.
- Engineering metrics: a pragmatic analysis
Evidence-backed take on which delivery metrics actually correlate with better outcomes.
- Tech debt cost calculator
Quantify maintenance drag alongside your probabilistic delivery plan to balance capacity.
Frequently Asked Questions
Questions we hear from teams evaluating this tool for their roadmap and estimation workflow.
What input data should I use?
Export completed work items from your delivery tool (Jira, Linear, Azure DevOps) and calculate cycle time from start to finish. Include only work that matches the type of initiative you are forecasting.
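As a rough illustration of the calculation, assuming your export gives start and finish timestamps (the field names here are hypothetical; adjust to whatever your delivery tool emits):

```typescript
interface WorkItem {
  startedAt: string;  // ISO date when work actually began
  finishedAt: string; // ISO date when it reached done
}

function cycleTimeDays(item: WorkItem): number {
  const ms = Date.parse(item.finishedAt) - Date.parse(item.startedAt);
  return ms / (1000 * 60 * 60 * 24); // milliseconds -> days
}

console.log(cycleTimeDays({ startedAt: "2024-03-04", finishedAt: "2024-03-11" })); // 7
```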
How many completed items do I need before the results stabilise?
Aim for at least 20–30 cycle times. The more representative history you have, the smoother the distribution becomes. Fewer data points still work, but expect wider confidence bands.
Should I remove obvious outliers before running the simulation?
Only remove an outlier if you can guarantee the underlying cause will not happen again. Monte Carlo thrives on variance—it is better to keep genuine slow items in the dataset so the forecast remains honest.
Can I run separate forecasts for different classes of work?
Yes. Segment your historical data (for example, customer features versus platform investments) and run the simulator for each slice. You can then combine the probability curves to plan portfolio-level scenarios.
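One way to combine slices in code, building on the earlier `simulateOneRun` sketch: simulate each slice against its own history and sum the per-iteration totals. This assumes the slices are delivered sequentially by one team; for parallel streams you would take the maximum of the slice totals per iteration instead.

```typescript
// Portfolio sketch: one distribution per slice, summed per iteration.
function combinedForecast(
  slices: Array<{ cycleTimes: number[]; backlogSize: number }>,
  iterations = 1000,
): number[] {
  const totals = new Array(iterations).fill(0);
  for (const slice of slices) {
    for (let i = 0; i < iterations; i++) {
      totals[i] += simulateOneRun(slice.cycleTimes, slice.backlogSize);
    }
  }
  return totals.sort((a, b) => a - b); // sorted, ready for percentile reads
}
```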
What confidence band should I communicate to stakeholders?
Most teams present both the median (P50) and a safer commitment level such as P85. The delta between them is a great conversation starter about scope trade-offs, staffing, or risk mitigation.
How does this relate to velocity-based planning?
Velocity works when teams keep point-estimation discipline and the backlog composition stays stable. Monte Carlo skips the subjective scoring and relies on actual cycle-time data, so it adapts faster when the mix of work changes.