Problem in a nutshell: sometimes extra work needs to be done before delivery because something went wrong, or because building a feature taught us something that means additional innovation is required. How can these factors be captured in a forecast early and dealt with sooner? We find that asking the simple question “What could go wrong?” helps us be more right when forecasting.

Feature or project work starts with a guessed amount of work. As the feature is built, technical learning can cause delays. For example, when a feature for suggesting other products you might buy turns out to be too slow to be useful during real-time shopping, additional work may be needed to build an index server specifically to make these results return faster. From a probabilistic perspective, there is a known amount of work (the original feature) and an additional “possible” amount of work if it performs poorly. This is a risk. It has a probability of coming true (less than 100%) and an impact if (and ONLY if) it comes true.
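If it helps to see the shape of that idea, here is one way it could be written down in code – a hypothetical Python structure for illustration, not anything from the spreadsheet –

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Risk:
    """A chance of extra work, and how much extra if it comes true."""
    probability: float              # chance the risk comes true, e.g. 0.5
    extra_stories: Tuple[int, int]  # impact range if (and ONLY if) it does

# The index-server risk from the example above.
index_server = Risk(probability=0.5, extra_stories=(30, 40))
```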

If we performed a simple Monte Carlo simulation for this scenario and said there was a 50% chance performance would fail, the result would be an equal chance of an early date and a later date, with a normal distribution of uncertainty around each of these dates. The result is “Multi-Modal” – jargon for a distribution with more than one peak of highest probability. The average delivery date is early July, but that date has almost NO CHANCE of happening! Delivery will be around mid June or early September, depending mainly on whether this risk comes true.


Figure 1 – Monte Carlo of a 50% risk produced with our Single Feature Forecaster spreadsheet.

What does this mean? A few things –

  1. Estimating and quibbling over whether a story is 5 points or 8 points is pointless. At most, that changes the result in this case by a few weeks. Stop estimating stories and start brainstorming risks.
  2. If we know that risks can cause these bi-modal probability forecasts, we need to stop using AVERAGE, which would give us the nonsense July delivery date that won’t happen (see the sketch after this list).
  3. Probabilistic forecasting is necessary to make sense of this type of uncertainty. But how?
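To see why the average misleads, here is a minimal Python sketch of a two-peak outcome; the week numbers are illustrative stand-ins, not the actual Figure 1 data –

```python
import random

rng = random.Random(1)
# Hypothetical two-peak mixture: half the simulated finishes land around
# week 10 (think mid June), half around week 22 (think early September).
samples = [rng.gauss(10, 1) if rng.random() < 0.5 else rng.gauss(22, 1)
           for _ in range(10_000)]

mean = sum(samples) / len(samples)  # ~16: the "early July" answer
near_mean = sum(abs(s - mean) < 1 for s in samples) / len(samples)
print(f"average = week {mean:.1f}; outcomes within a week of it: {near_mean:.1%}")
# The average lands in the trough between the peaks, where almost no
# simulated project actually finishes.
```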

How do you forecast these risks?

It seems harder than it is. Here is how I generated the above forecast (figure 1) using the Single Feature Forecaster spreadsheet, which uses no macros or programmatic add-ins – it’s PURE formula, so it’s not that complex to follow. Monte Carlo forecasting plays out feature completion thousands of times. In the chart image shown in figure 1 above, you can see the first 50 hypothetical project outcomes in the lower chart (it looks like lightning strikes). You can see that there are two predominant ways the forecast plays out, with some variability based on our range estimates for number of stories and throughput (it could be actual throughput data – I just started with a range of 1 to 5 stories per week, but use data when you can). It’s either shorter or longer, with not a lot of chance in between.
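If you prefer code to spreadsheet formulas, here is a rough Python sketch of what the spreadsheet does. The 1-to-5 stories-per-week throughput range and the 50% / 30-40-story risk come from this article; the base story-count range is an assumed stand-in for the actual Figure 2 guesses –

```python
import random

def simulate_weeks(n_trials=10_000,
                   stories_range=(20, 30),       # assumed; use your Figure 2 guesses
                   throughput_range=(1, 5),      # 1 to 5 stories per week
                   risk_probability=0.5,         # 50% chance the risk comes true
                   risk_stories_range=(30, 40),  # extra stories if it does
                   seed=7):
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_trials):
        # Guess the base amount of work from the range estimate.
        remaining = rng.randint(*stories_range)
        # Roll the dice: does this hypothetical project hit the risk?
        if rng.random() < risk_probability:
            remaining += rng.randint(*risk_stories_range)
        # Play out delivery week by week at a randomly sampled throughput.
        weeks = 0
        while remaining > 0:
            remaining -= rng.randint(*throughput_range)
            weeks += 1
        outcomes.append(weeks)
    return outcomes

weeks = simulate_weeks()  # histogram these and the two peaks appear
```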

Here are the basic forecast guesses for this feature –


Figure 2 – The main forecast data to deliver a feature.

Once we have this data, let’s enter the risks. In this case, just one –


Figure 3 – Risk definition

The inputs in figure 3 represent a risk that has a 50% chance of occurring, and if it does, 30 to 40 more stories are needed to implement an index server. This risk (30-40 stories, picked at random) is added to the forecast 50% of the time. The results shown in figure 1 clearly show that to be predictable in forecasting the delivery date, determining which peak is more likely is critically important. If the longer date is unacceptable, reducing the probability of that risk early is beneficial. As a team or a coach, I would set the team a goal of halving the probability of needing an index server (from 50% to 25%), or determining early whether an index server is certainly needed and the later date is real.
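Continuing the Python sketch above, one way to put a number on which peak is more likely is to pick a cutoff week in the trough between the peaks and count trials on each side; the cutoff of 14 here is illustrative, read yours off your own chart –

```python
cutoff = 14  # a week somewhere in the trough between the two peaks
early = sum(w <= cutoff for w in weeks) / len(weeks)
print(f"chance of the early (mid June) peak: {early:.0%}")  # ~50% with a 50% risk
```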

For example, say a technical spike determines that an index server is less likely to be needed. The team agrees there is now a 25% chance, having ruled out 3 of the 4 reasons an index server might be needed. The only change in the spreadsheet is the risk likelihood being reduced to 25% (from the 50% shown in Figure 3). The forecast now looks like this –


Figure 4 – 25% chance of performance risk.

It’s clear to see that there is now a 75% chance of hitting June versus September. This is well worth knowing, and until we can show how the things that could go wrong cause our stress when asked to estimate a delivery date, the conversation is seen as the team being evasive rather than carefully considering what they know.
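In the Python sketch, that spike outcome is a one-parameter change, mirroring the one-cell change in the spreadsheet –

```python
weeks_after_spike = simulate_weeks(risk_probability=0.25)
early = sum(w <= cutoff for w in weeks_after_spike) / len(weeks_after_spike)
print(f"chance of the early peak: {early:.0%}")  # ~75% once the spike halves the risk
```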

This example is for a single major delivery-blocker risk. It’s common for there to be 3 to 5 risks like this in significant features or projects. The same modeling and forecasting techniques work, but rather than just two peaks, there will be more peaks and troughs. The strategy stays the same: reduce likelihoods, and prove early whether a risk is certain. Then make good decisions with a forecast that continuously shows the uncertainty.
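The same sketch extends to several risks by rolling each one independently per trial; the three risks below are hypothetical (probability, extra-stories range) pairs, not figures from a real project –

```python
import random

def simulate_with_risks(risks, n_trials=10_000, stories_range=(20, 30),
                        throughput_range=(1, 5), seed=7):
    """risks: list of (probability, (min_extra, max_extra)) pairs."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_trials):
        remaining = rng.randint(*stories_range)
        for probability, extra_range in risks:
            if rng.random() < probability:  # each risk rolled independently
                remaining += rng.randint(*extra_range)
        weeks = 0
        while remaining > 0:
            remaining -= rng.randint(*throughput_range)
            weeks += 1
        outcomes.append(weeks)
    return outcomes

# Three independent risks -> up to 2**3 = 8 combinations, hence several peaks.
weeks = simulate_with_risks([(0.5, (30, 40)), (0.25, (10, 20)), (0.10, (5, 10))])
```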

Conclusion

If you aren’t brainstorming risks and forecasting them using Monte Carlo simulation, you are likely to miss dates. Averages are not useful when forecasting the multi-modal outcomes common to IT projects. Estimating work items is the least of your worries in projects and features where technical risks abound. We find that three risks commonly cause most of the chaos, and we rarely find none.

Main point – it’s easier than you think to model risk factors, and we suggest you take a look at our spreadsheets that support this type of analysis.

Troy