Single Feature Forecast Spreadsheet


We often build tools that help forecast or teach the concepts behind our statistical methods, and turning an internal tool into something fit for public use takes some time and tuning. This spreadsheet performs a Monte Carlo simulation to generate a delivery-date forecast for a single feature. There are no macros; everything in this spreadsheet is based on formulas.

Features –

  • Given a start date, a story-count range estimate, and throughput/velocity, it produces date and probability forecasts
  • Throughput can be story count based or velocity based
  • If historical throughput/velocity data is available, it can be used instead of a simple range estimate
  • If risks are known, they can be used to forecast their impact – totally optional
  • Charts so you can see the analysis visually – just for teaching
  • NO MACROS – no security issues to worry about. Everything runs as plain Excel formulas, and no data is shared or sent externally.

Get it here –

Throughput Forecaster.xlsx

(See all of our free tools here: http://bit.ly/SimResources)

How it works –

Given the feature-count estimate range, 500 hypothetical completions of the feature are simulated. These simulated trials are used to compute how likely each candidate delivery date is relative to the others. This analysis correctly combines the uncertainty in feature size with the uncertainty in delivery rate (historical or estimated).
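To show the mechanics outside of Excel, here is a minimal Python sketch of the same idea. The input values and variable names are our own illustrative assumptions, not the spreadsheet's; only the overall technique (sample a feature size and a weekly rate, repeat 500 times, read dates off the sorted results) follows the description above.

```python
import random
from datetime import date, timedelta

# Illustrative inputs (assumed values, not taken from the spreadsheet).
start_date = date(2015, 1, 5)
story_count_range = (20, 35)   # low/high estimate of stories in the feature
throughput_range = (3, 6)      # stories completed per week, low/high
trials = 500                   # the spreadsheet also runs 500 trials

completion_weeks = []
for _ in range(trials):
    stories = random.uniform(*story_count_range)      # sample a feature size
    weeks = 0
    while stories > 0:
        stories -= random.uniform(*throughput_range)  # sample one week's rate
        weeks += 1
    completion_weeks.append(weeks)

# Rank the trials: the date by which N% of trials finished is the N% forecast.
completion_weeks.sort()
for pct in (50, 85, 95):
    weeks = completion_weeks[int(trials * pct / 100) - 1]
    print(f"{pct}% likely by {start_date + timedelta(weeks=weeks)}")
```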

We will be posting hands-on labs about its application in the future. For now, we just want to make our internal tools available for use by the industry so that we can improve on the simplistic and flawed forecasting methods that plague software development.

Troy.


SimML Reference Documentation


OK, it's about time, but we have a draft of our SimML specification. We have tried multiple formats and often struggled to keep the documentation in sync with our rapid releases. We release at least every month, and that made the docs difficult to maintain.

We want your feedback on this format:

https://github.com/FocusedObjective/FocusedObjective.Resources/blob/master/SimML%20Reference.xlsx?raw=true

The latest version is ALWAYS publicly available on our GitHub resource account, and updating it is part of our definition of done for all work. Feel free to keep us honest!

Know that this is a work in progress. The model setup section is documented, but the execution instructions are not yet. We use that section mainly internally, so it's unlikely you'll edit it by hand, but we WILL provide that documentation over the next few months anyway.

Troy


Forecasting Error: Not Accounting For Scope Increase


Initial estimates of the amount of work for a project or idea lack detail. Attempting to forecast using historical rates will be in error if:

1. The granularity of the work breakdown differs from the historical samples.

2. The project isn't completely fleshed out as to which features are required.

3. The project ignores system and operational issues: new environments, changes to current environments, and security or performance audits and requirements.

If every project needed to be completely understood in every detail, forecasting would take too long. The failure rate of waterfall project delivery showed that even attempting to completely design and understand a project doesn't improve the likelihood of delivery.

Tracking the increase in scope, and its causes, on previous projects allows projects at the idea level to be forecast with some certainty about the likely scope increase. The recommended technique is to keep clear records of the total scope of each project, categorized by work-item type. Some categories we recommend are:

1. Split (straight split of known work)

2. Discovered Scope (scope found only after delving into the detail)

3. New Requirement (nothing to do with the original idea; added features)

4. Adopted Work (work the team took on that isn't actually part of the project)

By tagging each backlog item with one of these categories, a growth rate from the original amount of work can be computed, and that adjustment can be applied when quantitatively forecasting new ideas. These metrics are also good to put targets around. None of these categories is bad by default; it's just good to know where the scope increases are coming from and to manage and consider them when forecasting proposals. The sketch below shows the arithmetic.
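As an illustration of that growth-rate calculation (the tag names follow the list above, but the counts are invented for the example):

```python
from collections import Counter

# A hypothetical completed project's backlog, one tag per item.
backlog = (
    ["original"] * 80
    + ["split"] * 12
    + ["discovered scope"] * 18
    + ["new requirement"] * 9
    + ["adopted work"] * 5
)

counts = Counter(backlog)
original = counts["original"]
added = sum(n for tag, n in counts.items() if tag != "original")

growth_factor = (original + added) / original
print(f"Scope grew {growth_factor:.2f}x over the original estimate")
# A new idea estimated at 100 items would be forecast at ~100 * growth_factor.
```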

 


Free Tools and Resources


We often build custom tools and spreadsheets during our consulting work. We offer these to the community for free under a Creative Commons Attribution Non-Commercial License. Please help us keep these resources free and updated by abiding by the conditions of this license.

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

These tools are kept under version control on GitHub here –

https://github.com/FocusedObjective/FocusedObjective.Resources

 


Agile Forecasting Error: Assuming Uniform Throughput or Velocity


When planning, either on the back of a napkin or using more statistical methods, some estimate of “how long” is often needed. The precision required differs: it might be good enough to know the calendar quarter, to confirm a new website will be in place well in advance of a promotional period, while others want more clarity because they are making trade-offs between multiple options. We often get asked to confirm or build forecasting models for these organizations, and we see some common errors that cause erroneous forecasts.

This post and subsequent posts will outline some of the common ones we see.

Assuming uniform throughput, and that team performance is the only factor for throughput

Whether forecasting using velocity or story count over time, the amount of work being completed is a measure of throughput: in plain terms, a rate of completed work over time. We often see organizations treat this rate as entirely within the team's control and use it as a measure of progress and performance. Sometimes it is, but if we plot the throughput history, notable areas of instability and step changes are evident. Knowing the sources of these, and how to adjust throughput forecasts for them in advance, is key to improving any method of delivery estimation.

Figure 1 – Throughput run charts like this show discontinuities caused by factors other than the team's rate of completing work, and these need to be considered when forecasting.

Common causes we see and adjust for –

1. Team forming stage and other phases – Teams often take on new technologies and new team members at the beginning of a piece of work. Expect the early storming phases for teams, and new investigative development, to be slower than for a long-running, stable team. We start with an adjustment of minus 50% for the first 20% of a project (see the sketch after this list for how these adjustments can be applied).

2. Calendar events – Differing by region and country, there are often whole calendar periods where throughput drops dramatically and recovers slowly. Depending on the granularity of the forecast, consider known holidays where no one will work, long weekends where people take vacation to extend a single day's holiday into a week or more, and the biggest factor of all, December. If an organization has a “use it or lose it” vacation policy, many technical staff end up taking that vacation in December, and some combine it with new vacation and extend into January. We see roughly four weeks of almost no progress in some organizations, and the impact cascades if teams have tight dependencies. Forecasting over these periods is challenging.

3. Organizational changes – Employee concern and stress during leadership changes and re-orgs are another step-function factor in throughput. Even rumors can be seen in throughput run charts. Expect a 20% decrease, recovering over one to two months depending on how well the change is accepted and communicated. For large companies, we assume there will be at least one of these in every six-month period.

4. Changes in the way work is sub-divided or described – This is an obvious one, but often overlooked. New processes, constraints, or motivations change the way work is sub-divided, and throughput or velocity captured at one granularity will not forecast work at a different granularity. We often adjust for this by taking a sample of prior work and having teams break it down using the new process to find a multiplying factor. Performing this regularly on samples of work from each quarter going back 12 months helps normalize a throughput run chart back to the “real rate of progress.” This re-plots the historical throughput at a rate similar to that used today, with the aim of isolating team process improvements rather than work-size anomalies.
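To make these adjustments concrete, here is a small Python sketch of our own (not the method of any particular tool) that layers the team-forming and re-org rules of thumb onto a sampled weekly throughput; the baseline range, project length, and re-org timing are assumptions:

```python
import random

weeks_total = 30                 # assumed project length in weeks
base_throughput = (4, 8)         # assumed stories per week, low/high
reorg_week = 18                  # assume one re-org announcement mid-project

simulated = []
for week in range(weeks_total):
    rate = random.uniform(*base_throughput)
    if week < weeks_total * 0.2:
        rate *= 0.5              # team-forming: minus 50% for the first 20%
    if reorg_week <= week < reorg_week + 8:
        recovery = (week - reorg_week) / 8
        rate *= 0.8 + 0.2 * recovery  # re-org: -20%, recovering over ~2 months
    simulated.append(rate)

print([round(r, 1) for r in simulated])
```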

These are just some of the factors that influence throughput in ways that make it a poorer predictor of the future than it could be. They apply no matter what unit of completion rate is used, be it velocity or story count.

 


Paper: The Economic Impact of Software Development Process Choice – Cycle-time Analysis and Monte Carlo Simulation Results


Troy Magennis has recently written a paper on the economic impact of software development process choice, with calculations using a variety of cycle-time analysis and Monte Carlo techniques. We would like feedback on this paper, and especially contrary views.

Download the paper here: The Economic Impact of Software Development Process Choice – Cycle-time Analysis and Monte Carlo Simulation Results

Abstract:

IT executives initiate software development process methodology change with faith that it will lower development cost, decrease time-to-market and increase quality. Anecdotes and success stories from agile practitioners and vendors provide evidence that other companies have succeeded following a newly chosen doctrine. Quantitative evidence is scarcer than these stories, and when available, often unverifiable.

 

This paper introduces a quantitative approach to assess software process methodology change. It proposes working from the perspective of impact on cycle-time performance (the time from the start of individual pieces of work until their completion), before and after a process change.

 

This paper introduces the history and theoretical basis of this analysis, and then presents a commercial case study. The case study demonstrates how the economic value of a process change initiative was quantified to understand success and payoff.

 

Cycle time is a convenient metric for comparing proposed and ongoing process improvements due to its easy capture and its applicability to all processes. Poor cycle-time analysis can lead to teams being held to erroneous service-level expectations. Properly comparing the impact of proposed process-change scenarios, modeled using historical or estimated cycle-time performance, helps isolate the bottom-line impact of process changes with quantitative rigor.
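As a rough sketch of the kind of before-and-after comparison the paper describes (the cycle-time samples here are invented, and the paper's analysis is considerably more rigorous than this bootstrap of a single percentile):

```python
import random
import statistics

# Hypothetical cycle-time samples in days, before and after a process change.
before = [12, 18, 9, 25, 14, 30, 11, 16, 22, 19]
after = [8, 11, 7, 15, 9, 13, 10, 12]

def percentile_85(samples, resamples=1000):
    """Bootstrap an 85th-percentile cycle-time estimate from a small sample."""
    estimates = []
    for _ in range(resamples):
        draw = sorted(random.choices(samples, k=len(samples)))
        estimates.append(draw[int(len(draw) * 0.85) - 1])
    return statistics.mean(estimates)

print(f"85th percentile before: {percentile_85(before):.1f} days")
print(f"85th percentile after:  {percentile_85(after):.1f} days")
```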

This paper will be presented at the HICSS Conference in January 2015.

 

 
