This spreadsheet helps forecast how long a single feature or single project will take to deliver with agile (Scrum or Kanban), using Monte Carlo simulation. We have to balance keeping the spreadsheet light and easy to understand whilst capturing the most likely factors that cause feature or project blowout.

We have made our Single Feature Forecaster spreadsheet available for free for over six months, and we get a lot of feedback and questions about how it works and how to use it. This post covers how to use it; a future post will cover how it works in detail.

Get the spreadsheet here: Single Feature Forecaster Spreadsheet (latest version) 

(uses no macros – all formulas; requires Microsoft Excel or Google Sheets, though it is slow in Google Sheets)

Single Feature Agile Forecasting Inputs

Forecast a single feature spreadsheet inputs

1. Start date – seems relatively straightforward, but don’t be fooled. A poor choice of start date is a major factor in poor forecast accuracy. Often the planned start date is missed: staff aren’t moved from prior projects, or a final decision to begin doesn’t arrive in time. Be pragmatic; being too optimistic or too pessimistic will cause errors. Consider other projects that may run long, as these might impact the start date for this feature or project.

2. How Many Stories – this is the base amount of work to be performed for this feature or project. It is a range estimate, and our advice is to keep it wide. Monte Carlo works best when the actual final count is likely to fall within the estimated range, so making this range narrow increases the risk of a miss. Include any item type that is counted in your throughput samples or estimates. If your throughput includes defects, then estimate defects. Our advice is to NOT include defects, and to just include work contributing to the feature. Also, don’t include risks; these are best managed separately (in the Risk tab).

3. Story Split Rate – this range estimate wasn’t in our first release. What we encountered is a mistake (that all of our competitors make) when historical throughput data is used to forecast: items in the backlog are often split before being delivered. If the throughput count is post-split (items are split into multiple parts or grow in scope), the forecast will be out by a factor of up to three (if stories split on average into three parts, which is common in our experience). This is a significant miss, so we added it to the spreadsheet. Our advice is to use a low estimate of 1 and a high estimate of 3 until you have more accurate data.

4. Throughput – the rate at which work will be delivered. The only requirement is that it is in the same units as the story estimate, whether that be story points or count. There are two ways to give this estimate: use a low and high range guess if no historical data is available, or use historical data if it is available. If a range estimate is used, keep it wide. You are estimating the rate at which the work estimated in inputs 2 and 3 will be delivered; if that includes defects, then this estimate should as well. We find it easier to NOT include defects in either the throughput or the story estimate. Start wide. Pay particular attention to the low estimate: be realistic and use the lowest you have seen (don’t be afraid of 0 or 1 until you get more information). Don’t be too pessimistic on the high bound; go one higher than you expect or have seen. Remember to set whether your estimate is per week or per two weeks (we like to keep data per week). If you choose to use historical data, enter your samples in the Throughput Samples tab and choose “Data” from the drop-down list. The sketch after this list shows how inputs 2 to 4 combine in a single simulated run.
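To make the mechanics concrete, here is a minimal Python sketch of the kind of simulation the spreadsheet performs with its formulas. The specific numbers, the uniform sampling, and the function name are our illustrative assumptions, not the spreadsheet’s exact implementation:

```python
import random
from datetime import date, timedelta

def simulate_feature(trials=1000):
    """Monte Carlo sketch of the forecast: each trial samples the input
    ranges and burns down the remaining work week by week."""
    start = date(2024, 1, 8)            # input 1: start date (example value)
    stories_low, stories_high = 20, 40  # input 2: how many stories
    split_low, split_high = 1, 3        # input 3: story split rate
    tp_low, tp_high = 1, 5              # input 4: throughput per week

    completion_weeks = []
    for _ in range(trials):
        # Total work = sampled story count inflated by the sampled split rate.
        stories = random.randint(stories_low, stories_high)
        split = random.uniform(split_low, split_high)
        remaining = stories * split

        # Burn down the work, sampling a fresh throughput each week.
        weeks = 0
        while remaining > 0:
            remaining -= random.randint(tp_low, tp_high)
            weeks += 1
        completion_weeks.append(weeks)

    completion_weeks.sort()
    # 85th percentile: the week by which 850 of the 1,000 trials finished.
    p85 = completion_weeks[int(0.85 * trials) - 1]
    return start + timedelta(weeks=p85)

print(simulate_feature())
```

With historical data, each week’s throughput would instead be drawn from your samples (for example with random.choice(samples)) rather than from a guessed range.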

The Results Table

Forecast Single Feature Results Table

Forecast results are recalculated continuously as you change the inputs. We suggest that the 85th percentile and above is safe territory. We have found that traditional estimation methods rely on averages: although not exact, the 50th percentile often matches what people expect, and the 85th percentile, although (much) more realistic, comes as a shock. Be ready to defend the process used. To help, we have added some charts that pictorially explain how it works.
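As a rough illustration of how each row of the results table can be read off the simulation output, here is a sketch that sorts the simulated completion weeks and picks percentiles. The percentile list and the example data are assumptions for illustration:

```python
import math

def results_table(completion_weeks, percentiles=(50, 75, 85, 95)):
    """Read percentile rows off sorted Monte Carlo results."""
    runs = sorted(completion_weeks)
    n = len(runs)
    for pct in percentiles:
        # The pct-th percentile: the week by which pct% of runs had finished.
        idx = min(n - 1, math.ceil(pct / 100 * n) - 1)
        print(f"{pct}th percentile: done within {runs[idx]} weeks")

# Example with ten hypothetical simulation results:
results_table([10, 11, 11, 12, 12, 12, 13, 14, 16, 20])
```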

Forecast Result Charts

Forecast Single Feature Burndown Lightning Chart (for 50 of 1000 simulations)

The lightning burndown chart shows the first fifty simulated project burn-downs. When explaining the process to others, explain that this spreadsheet hypothetically completes the project 1,000 times given the estimates you provide. The 85th percentile is the date on or before which 850 of those simulations completed. Go on to explain that traditional methods would give a date closer to where only 500 of these simulated runs would have completed.
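The data behind the lightning chart can be sketched the same way: record the remaining work at the end of each simulated week for the first fifty runs, giving one burn-down line per run. The parameter values below are hypothetical:

```python
import random

def burndown_traces(n_traces=50, stories=30, split_rate=2.0, tp=(1, 5)):
    """Week-by-week remaining work for the first n_traces simulated runs;
    each trace is one line on the lightning burndown chart."""
    traces = []
    for _ in range(n_traces):
        remaining = stories * split_rate
        trace = [remaining]
        while remaining > 0:
            remaining = max(0, remaining - random.randint(*tp))
            trace.append(remaining)
        traces.append(trace)
    return traces

for trace in burndown_traces(n_traces=3):  # print a few traces as a check
    print(trace)
```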

Forecast Single Feature Result Histogram

The other chart displays a histogram of the results, in up to 20 segments. It conveys similar information to the results table, but we find people just want a date and find the histogram too technical to interpret. When multiple risks are entered into the Risk tab, this histogram can get very complex. We will go into much deeper detail in a future post on how to capture project risk factors.
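For completeness, a histogram like this can be built by bucketing the simulated completion weeks into at most 20 equal-width segments. The bucketing rule below is our assumption of one reasonable approach, not necessarily the spreadsheet’s formula:

```python
from collections import Counter

def histogram(completion_weeks, max_segments=20):
    """Bucket simulated completion weeks into at most max_segments bins."""
    lo, hi = min(completion_weeks), max(completion_weeks)
    width = max(1, -(-(hi - lo + 1) // max_segments))  # ceiling division
    counts = Counter((w - lo) // width for w in completion_weeks)
    for b in sorted(counts):
        first, last = lo + b * width, lo + b * width + width - 1
        print(f"weeks {first}-{last}: {'#' * counts[b]}")

histogram([10, 11, 11, 12, 12, 12, 13, 14, 16, 20])
```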

Conclusion

We use this same spreadsheet when doing complex analysis. For us it’s a great first step to getting a reliable and defensible forecast, and it allows us to play with the scope and risk factors to find a close solution. We have more complex tools, but sometimes this answers the main question asked – is it possible?

Feel free to send us ideas on improving it.

Again, get the spreadsheet here: Single Feature Forecaster Spreadsheet (latest version)