Dependency and Skill Capacity Planning (portfolio planning)

As features and products get larger, more than one team needs to collaborate to deliver. This is often a tough problem to plan, co-ordinate and deliver effectively. We are sharing the tools we use to understand how team load is balanced during the planning process, with three goals in mind –

  1. To visually document what work is on each team or skillset, with the currently planned features in mind
  2. To highlight when load exceeds a team's or skillset's capacity (including the impact of unplanned work)
  3. To help co-ordinate more optimal start dates for pieces of work to avoid saturating any one team or skillset

Download the spreadsheet here

Skillset and Dependency Planning Spreadsheet

The tool is a “simple” spreadsheet that helps build a cohesive portfolio plan, plus documentation about how we use it. It's pretty general; its main intent is to show the load on specific teams or skillsets over a time interval (day, week, sprint, etc). You enter specific pieces of work for specific teams and the effort involved. You define what units “effort” is measured in. With a low and high guess about how long each work item will take (again in days, weeks or months), it can visually sum up utilization in those periods for each team. A simple heatmap shows the load on each team, and when you reach a limit you define, it's clear to see.
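
The load summation the spreadsheet performs can be sketched in a few lines of Python. This is a hypothetical illustration, not the tool's actual formulas: the team names, effort units and capacity limits below are all made up.

```python
# Minimal sketch of summing per-team load over time periods, as the
# heatmap does. All names and numbers here are illustrative.
from collections import defaultdict

# (team, start_period, duration_periods, effort_per_period)
work_items = [
    ("Web",     0, 3, 0.5),
    ("Web",     1, 2, 0.75),
    ("Backend", 0, 4, 1.0),
]
capacity = {"Web": 1.0, "Backend": 1.0}

load = defaultdict(float)  # (team, period) -> summed effort
for team, start, duration, effort in work_items:
    for period in range(start, start + duration):
        load[(team, period)] += effort

# Highlight overloaded cells, as the heatmap coloring does
for (team, period), total in sorted(load.items()):
    flag = "OVER" if total > capacity[team] else "ok"
    print(f"{team} period {period}: {total:.2f} ({flag})")
```

Shifting a work item's start period spreads its load into later cells, which is exactly the lever used in step 5 of the planning process.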

Here is how we use it for monthly or quarterly planning –

  1. We enter a list of teams or special skillsets
  2. We ask for specific important dates: vacations, shows, releases, etc.
  3. We have the assembled group identify the MOST IMPORTANT feature (ideally based on cost of delay; they can use our COD spreadsheet for that)
  4. We break out what work is involved and what teams will do that work, with broad range estimates of time (normally in weeks)
  5. We set the start dates to align dependencies
  6. We look at the heat-map. Is there more capacity? If yes, repeat from step 3

Yes, we focus the group on one feature at a time. When we reach capacity, we look for ways to resolve the constraint –

  1. We shift start dates to spread the load on the constraining teams or skillsets
  2. We add more capacity to the constraining teams or skillsets

We avoid a plan that is over capacity at any point. If there is still reserve, we might look for ways those teams can assist the constraining teams; rarely do we suggest adding lower-priority features just to keep teams busy.

Download the spreadsheet here

Please give us your feedback: whether it works, and if not, why not.

Troy

 

 

Doing Team Metrics Right

It seems common knowledge that measuring teams and presenting data is an Agile dysfunction. I disagree, but I have seen, and participated in, abusive metric relationships in the past. I think we need to discuss better ways of achieving an evidence-based Agile approach without those participating feeling (or being) abused.

Here is my list of the top five traits that make metric dashboards useful –

  1. Measure competing things – it's relatively easy to game a single metric, so it's important to measure the impact of moving one metric by showing the others. Help teams target moving one metric and observe any negative impacts on the others.
  2. Make informed and smart trades – trading something the team does better than other teams in similar circumstances for something they desire to improve. Help teams identify what metric category they could trade (be less good at) to raise another metric (become better).
  3. Trends, not numbers, are important – observe unintended drifting of metric averages over time. It's about understanding that something has changed, not how good or bad it is. Help teams react earlier to the often slow-moving regression in a metric or two; the earlier a regression is detected, the less effort correction takes.
  4. Look for global or local trends – comparing trends across teams is key to spotting system-level opportunities (every team is impacted) versus single-team opportunities. Help teams target improving things they can change without fighting system-level factors they are unlikely to solve.
  5. No team will be good at everything – if a team is adversely trending on one metric, point out they are above average on another. Pick competing metrics so that no team will be great or terrible at all of them. There will always be a mix.

This list borrows heavily from the work of Larry Maccherone who correctly observed that a balanced blend of metric types gives the most information for identifying trends and improvement opportunities. His advice is to measure at least one thing from four broad areas –

  1. How much
  2. How well
  3. How responsive
  4. How repeatable or reliable

An implementation of this was recently made available in spreadsheet form. Driven from each work item's Start date, Completed date and Type, the spreadsheet builds a dashboard page in Excel. The choice of the four metrics was partly from experience, and there are plenty of alternatives that might fit your context better. The advice stands though: pick a metric from each of the four areas.
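
To make the four areas concrete, here is a hedged sketch of deriving one example metric per area from just Start date, Completed date and Type. The field names, records and specific metric choices are illustrative assumptions, not the spreadsheet's actual formulas.

```python
# One illustrative metric per area, computed from (Type, Start, Completed).
from datetime import date
from statistics import mean, pstdev

items = [
    ("Story",  date(2016, 3, 1), date(2016, 3, 4)),
    ("Story",  date(2016, 3, 2), date(2016, 3, 9)),
    ("Defect", date(2016, 3, 3), date(2016, 3, 5)),
    ("Story",  date(2016, 3, 7), date(2016, 3, 11)),
]

cycle_times = [(done - start).days for _, start, done in items]

how_much = len(items)                                             # throughput: count completed
how_well = sum(t == "Defect" for t, _, _ in items) / len(items)   # defect ratio
how_responsive = mean(cycle_times)                                # average cycle time (days)
how_repeatable = pstdev(cycle_times)                              # cycle time variability

print(how_much, how_well, how_responsive, how_repeatable)
```

Each metric pulls against the others: pushing throughput up tends to push the defect ratio or cycle time variability up too, which is what makes the blend hard to game.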

To help roll out the dashboard, we have created a single-page cheat-sheet to educate the teams on what each metric means and what to expect if that metric is overdriven. The goal is to be stable in all four, not excessively good at any one.

Download the spreadsheet by clicking here

Download the cheatsheet documentation here

As always, love feedback and stories as to how this spreadsheet has worked in the real world. We want to improve it over time.
Troy

KanbanSim and ScrumSim v2.0 Released + Simplified Licensing

We are growing up. We made it to v2.0 of our flagship product, KanbanSim and ScrumSim. We have added over 100 new features since we launched.

We have also invested heavily in improving the interactive modeling features that customers use to quickly experiment with model input impact analysis and find optimal solutions (e.g. drag the number-of-developers slider and see the cost and date impact). We have also invested heavily in the model editor, adding code completion, inline documentation and model snippets to make creating new models faster.

Our licensing has also been updated to match how we really operated anyway, and it's to your benefit –

  1. KanbanSim and ScrumSim is FREE (no catch) for individuals and companies up to 10 employees.
  2. If your company has more than 10 employees (it's the honor system), licenses are $995 per person
  3. If your company wants annual software maintenance and support, it's $4,995 for each 10 licenses per division, and then 20% a year to renew.

We simplified our licensing because we wanted no barriers to getting started, and have found that even our generous 6-12 month trial period made some customers uncomfortable to start. We also found that larger companies felt uncomfortable having to pay so little! So, we want to help them feel “at ease” knowing they get every version the moment it's released, and email and phone support if necessary.

See our Downloads page to get the latest version. And please, tell your friends.

Excel Spreadsheet Tips and Design Goals

We put a lot of effort into our free spreadsheet tools. We want our spreadsheets to be usable by anyone who needs an answer to their questions. Some good practices have become common in our tools and we want to share them with you as well.

No Macros or Add-Ins

A lot of what we do would be easier if we used Excel macros or add-ins. We resist the urge to do this. By including macros or add-ins we would be shipping code, and Excel gives all manner of warnings to the user that the spreadsheet may be a security risk. It also stops the spreadsheets being compatible with Google Sheets. We haven't needed to use macros, and compete feature for feature against many paid add-ins. We are extremely proud to be performing Monte Carlo simulation and distribution fitting algorithms using plain formulas that YOU can see. Nothing is hidden. But gee, a lot of sleepless nights went into this goal…

Title and Read me worksheet

Always have a title page worksheet that describes the spreadsheet's intention, and how to contact us with ideas for improving it. Document the main steps in getting a result. Some of our spreadsheets are four or five worksheets deep and we want to avoid people becoming immediately lost in the appropriate workflow.

Data entry is by offset reference

Initial versions of our spreadsheets had a column for user data to be populated. What we found we needed was the ability to copy, paste, drag, delete, manipulate this data in the spreadsheet directly. However, whenever a column or cell was moved, our formulas referencing those cells broke. We solved this problem by using an indirect reference to “duplicate” the user data on another worksheet using Excel’s INDEX function, for example:

INDEX(OriginalCompletedDateValues,ROW(A2))

This formula references another cell by row number, and the row number doesn't change with any clipboard, dragging or other manipulation the user performs. We write all formulas against this indirect copy of the original data, in a worksheet commonly called “Calculations”. We copy the formula down as shown, where the A2 becomes a series: A3, A4, A5 …

We standardized on a worksheet called “Your Data” which no formula ever directly references. We find ourselves dragging columns of different data into this sheet and everything just keeps working.

Column References and Capacity Growth

No matter how much input data we make the spreadsheets support, someone always emails wanting more. We now structure our formulas to limit the places where an absolute row count is needed. Here are the ways we tackle this –

  1. Use column references (A:A rather than A2:A5000) in formulas wherever possible
  2. Use named ranges (Formulas > Name Manager) to define names for ranges, and use those names in formulas where possible
  3. When we need to handle a date range, we support 250 weeks or 5,000 daily dates individually
  4. We always set the background color of a formula cell to the calculation cell style, so we can visually see where we stopped copying a formula row down. We also try and put a comment at that point saying how the user can expand it.

Top Left to Bottom Right Flow

We design our worksheets to be read and populated from the top left to the bottom right. We also try and number each input cell title, and add a tip nearby. When we get a question we try and add more documentation to these tips. We would love feedback on where people get stuck. These are pretty standard user design guidelines.

Auto Chart Axis Growth & Shrink

A very hidden feature we utilize is how the charts automatically grow and shrink the axis values to match your specific data. By default, you specify a pre-determined range for chart data, but we don't know in advance how many days or weeks your data contains. To auto-grow/shrink, we use a named cell range that starts at the first row of data for the column being charted, and stops at the last row with valid data (not blank or zero is the normal rule). We bind this named range to the chart axis, and Excel takes care of the rest. For example, for cycle time a range formula CycleTimeRange is defined as –

=OFFSET(Calculations!$L$2,0,0,COUNTIF(Calculations!$A:$A,">0"))

This decodes to the range Calculations!L2:Lx where x is the last row before any zero date (Excel counts empty dates as a zero integer value; we just know this by experimentation). In any chart data series, you reference this range like this (it has to be fully qualified) –

='Throughput and Cycle Time Calculator (5000 samples).xlsx'!CycleTimeRange

This technique allows us to handle any amount of user data and have the charts auto-resize. It's a little cumbersome to get working, but works great once you get it right.

Documenting Complex Formulas

On our more complex calculation sheets we try and add a note about each formula and how it works. Nothing is hidden in the formula bar, but some formulas aren't even clear to us after we have written and debugged them the first time.

Notes document formulas in calculation sheets.


We continually learn new things, and will post more tips over time. Please give us feedback on what we can do to make these spreadsheets easier to use.

Troy

Multiple Feature Cut Line Forecast Spreadsheet

Forecast what features will make a release deadline and experiment with scope and start order. Do this early in the planning process to make better decisions about feature scope and start order, showing what will make a target delivery date with a simple tick (yes) or cross (no) for a set of desired features. Often referred to as the cut-line, this spreadsheet shows what will make the cut and what won't. Often, seeing what won't make a release helps everyone look harder at whether the order matches business need.

Although we have kept this spreadsheet simple on the input side, it performs a Monte Carlo simulation for each feature (1,000 hypothetical deliveries) just like the Single Feature Forecaster spreadsheet. This spreadsheet uses the completion date for each feature as the start date for the next. It then shows which features will be delivered on or before a target date.
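
The chained-simulation idea can be sketched in Python. This is a simplified, illustrative model: the feature names, scope ranges and throughput samples are made up, and the spreadsheet's split rate and month-adjustment inputs are omitted here.

```python
# Sketch of a sequential Monte Carlo: each feature's simulated finish
# week becomes the start week of the next feature, per trial.
import random

random.seed(7)
throughput_samples = [3, 5, 4, 6, 2, 5, 4]   # items completed per week (historical)
features = [("Login", 8, 14), ("Search", 10, 20), ("Reports", 15, 30)]  # (name, low, high items)
TRIALS = 1000

finish_weeks = {name: [] for name, _, _ in features}
for _ in range(TRIALS):
    week = 0
    for name, low, high in features:
        remaining = random.randint(low, high)      # uncertain scope for this trial
        while remaining > 0:
            remaining -= random.choice(throughput_samples)
            week += 1
        finish_weeks[name].append(week)            # next feature starts here

for name, weeks in finish_weeks.items():
    weeks.sort()
    p85 = weeks[int(0.85 * TRIALS)]                # 85th percentile finish week
    print(f"{name}: 85% likely done by week {p85}")
```

Comparing each feature's 85th percentile finish week against the target date is what drives the tick or cross shown in the results table.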

Download this spreadsheet from here:  Multiple Feature Cut Line Forecaster: I want to forecast multiple features at once

Multiple feature forecast inputs and results. Tick = yes, Cross = No hope


 

The Forecast Inputs

1. Start date: The calendar date where work will actually begin for the first feature. This shouldn't be the “planned” start date; it should be the date when you are sure work will start. Look at other projects that might be delayed, or other reasons key staff might be unavailable to start, when deciding what this date should be. We find the start date is often overlooked as an easy estimate to make; it's not. It's one of the few that involve thinking about company-wide factors rather than just the project or feature work and the people in the room.

2. Target Date – This is the calendar date that decides whether to show a tick or cross. If the likelihood of a feature delivery date (set by 3. Likelihood) is on or before this date, that feature is said to make the deliverable and is adorned with a green tick-mark. Otherwise it will be given a red cross-mark. This should be the date that code is tested, packaged and ready to enter any final deployment pipeline. Often this date is before the product is released to market; it's a hand-off date to operations.

4. Split Rate – We find that a big cause of forecast inaccuracy is using a delivery rate (throughput or velocity) based on work that has been split into multiple parts when started. It makes the team appear to be delivering faster than it is. Our most commonly seen settings (and suggested if no actual data exists) are a low estimate of 1 (not split) and a high estimate of 3 (a backlog item is split into three parts for delivery). Check if the throughput or velocity estimate or data includes defects. If it does, bump these estimates up to model the most common defect rate range so each item embeds its allocation of defects. For example, if every piece of work gets one defect, start with a low estimate of 2 (1 for the work + 1 for a defect).

5. Throughput Estimate or Data – The throughput estimate is where you express the delivery pace of the team. This is the rate that work is delivered, in the same units estimated for each feature. Velocity in points or throughput (count of items completed over time) can be used; the only requirement is that the same units be used for item estimates. You also have the choice here of whether to use a low and high bound guess, or to use historical data (where samples are placed in the Throughput worksheet). Our advice is to start with a wide estimate (spread-out low and high values). What is important is that the actual values of throughput or velocity found through delivery fall within this estimate, not to have a narrow estimate. When real data starts to flow in from the team, switch over to using it instead of the guess. Normally about 11 samples are needed before I'm comfortable changing to use data. Try both, and see when the forecasts begin to agree, then use the data samples.

7. The Features and Estimates – This is the part of the spreadsheet where you describe each feature (by name) and give a low and high guess of the amount of work for each feature. Similar guidance here to entering the throughput estimate: start wide and narrow as real data and lessons allow. Remember to consider the split rate in input 4. Don't double count splitting or defect inputs. Choose either to account for scope creep and defects in these estimates or through the splitting estimates – be consistent and explicit! The first column, start order, is for convenience. Sometimes when some features are going to miss a release date, experimenting by changing the start order gives the “agreed” features the best chance of making the cut. It's provided here so that immediate results are shown during discussions and meetings (we used to cut and paste rows, but it was slow and error prone, so we added the ability to control order by column A).

8. Month Adjustments – We found some major holiday events, seasons or company conferences made significant differences to throughput rate. These inputs allow the throughput or velocity estimates to be adjusted depending on which month work falls in. The spreadsheet multiplies its random pick for delivery pace by the multiplier you enter for each month if work falls in that month. It's mostly fine tuning. In Europe we find summer pace slows; in the US we find less fluctuation. One key insight we see is that a company's vacation policy matters. For companies that have a “use it or lose it” policy for vacation time, the last month (often December) is a fraction of other months for throughput. Ask and look for reasons one month might fluctuate. Re-orgs, team hiring and ramp-up, conventions and conferences are some of our common reasons for using these adjustments.

The Forecast Results


The forecast results are shown to the right of each feature input. They are designed to be simple enough that your manager can understand them. We show the start date because it's important that any date we show in the table assumes this start date; miss that date, and the forecast is incorrect.

Given the start order in column A for each feature, a green tick means on or before the Target date (given the likelihood percentage confidence entered as input 3, 85% in this case). A red cross means it will miss by at least a week (or 2 weeks, depending on the time interval entered in the throughput input as part of input 5). An orange exclamation mark means within one throughput period (on the fence). We encourage our stakeholders to look at what is going to miss and offer to trade something that is going to be delivered. This experimentation is easy by changing the start order in the column A cell adjacent to each feature. The start orders must be non-duplicated, in ascending order from 1 to 10 (errors are shown if you get this wrong).
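
The tick/cross/exclamation rule reduces to a simple comparison, which can be expressed as follows. The function name and default period length are illustrative assumptions; the spreadsheet's exact cutoffs depend on the throughput interval entered.

```python
# Hedged sketch of the cut-line marking rule described above.
from datetime import date, timedelta

def cut_line_mark(forecast_85th: date, target: date,
                  throughput_period_days: int = 7) -> str:
    """Classify a forecast date against the target date."""
    if forecast_85th <= target:
        return "tick"          # makes the cut at the chosen likelihood
    if forecast_85th <= target + timedelta(days=throughput_period_days):
        return "exclamation"   # on the fence: within one throughput period
    return "cross"             # misses by at least one throughput period
```

For example, with a target of 15 June, an 85th-percentile forecast of 20 June lands within one weekly period and gets the orange exclamation mark.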

Conclusion

This tool aims to help early and vigorous discussion about the scope and start order of multiple features competing for the same delivery team. Use it to help others see likely outcomes and make hard decisions earlier. Although we have more complex forecasting tools, this is often the right tool for early exploratory discussions about what resources are needed and what set of features will make the cut given the team's historical pace of delivery.

Download this spreadsheet from here:  Multiple Feature Cut Line Forecaster: I want to forecast multiple features at once

Love to hear feedback on the type of discussions it causes and suggestions for improvements: troy.magennis@focusedobjective.com

Troy

Monte Carlo Forecasting Exercises

Monte Carlo forecasting is simpler than most people believe. To prove this we teach the basics using paper tutorials and dice. We make these tutorials publicly available for you to use to teach others within your company (or at home with your kids). We welcome all feedback on what works and what doesn't.

Basic Monte Carlo Forecasting Tutorial

Get the Basic Monte Carlo Forecasting Tutorial here.

“Discover what Monte Carlo forecasting is by performing it by hand. This exercise simulates completing a project many times and plots the outcomes. Perform 9 more trials. Each trial involves filling all rows in a column until the remaining work count reaches zero.”

This tutorial is the simplest burn-down style Monte Carlo forecast. It simulates 10 trials, starting with 20 stories being completed at a rate determined by the roll of a six-sided die. The burndowns are plotted on a chart to demonstrate that each trial takes a different route to completion.
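
The paper-and-dice exercise translates directly into code. This sketch reproduces it: ten trials, each burning down 20 stories by a die roll per week (the seed is arbitrary and only there to make the run repeatable).

```python
# The dice exercise in code: ten burndown trials from 20 stories,
# completing a six-sided die roll's worth of stories each week.
import random

random.seed(42)
trials = []
for _ in range(10):
    remaining, burndown = 20, [20]
    while remaining > 0:
        remaining = max(0, remaining - random.randint(1, 6))
        burndown.append(remaining)
    trials.append(burndown)

for i, burndown in enumerate(trials, 1):
    print(f"Trial {i}: {len(burndown) - 1} weeks, path {burndown}")
```

Plotting each trial's path, as the chart sheet asks you to do by hand, makes the spread of possible completion weeks visible at a glance.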

Basic Monte Carlo Exercise

Perform ten Monte Carlo forecast trials and then chart the results.

Basic Monte Carlo Chart Sheet

Chart the burndown results of ten Monte Carlo forecast trials

Throughput Monte Carlo Forecasting Tutorial

Get the Throughput Monte Carlo Simulation Tutorial here.

This is a more advanced tutorial. It has extensive instructions on its opening page, and a completed example as the last page. We use this in our training courses, and often play this with all staff during our forecasting engagements. It gets students to perform 3 of 10 simulation trials (7 are already performed) using historical throughput samples (given to you). It walks the students through the mathematics of computing percentiles on trial results to help people understand how to interpret an 85th percentile, for example.
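
The percentile step the tutorial walks students through can be shown in a few lines. The durations below are made-up example trial results, and the simple index rule shown is one common way to read off a percentile (the tutorial's hand method may differ slightly at the boundaries).

```python
# How the percentile step works: sort the trial durations, then read
# off the value 85% of the way through the sorted list.
durations = [7, 9, 6, 8, 10, 7, 8, 9, 11, 8]    # weeks, from 10 trials
durations.sort()                                 # [6, 7, 7, 8, 8, 8, 9, 9, 10, 11]

index = int(0.85 * len(durations))               # simple percentile index rule
p85 = durations[min(index, len(durations) - 1)]
print(f"85% of trials finished in {p85} weeks or less")
```

Reading the 85th percentile this way answers the stakeholder question "what date can we commit to with 85% confidence?" rather than quoting a single average.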

Throughput Monte Carlo Forecasting Tutorial


There are many teaching moments during this tutorial. The most common is why all this “complexity” is necessary. We answer that in the document –

“Historical throughput data for teams measures delivery rate for a wide portion of the development system (the wider the better). Team throughput per week accounts for delays; for example waiting time, impediments, staff availability, interruptions and un-recorded work. The impact of these delays is more significant to a forecast than the hands-on time alone. This is a reason developer estimates are unreliable when forecasting projects, they don’t account for delays and system dynamics. In a stable system (e.g. the team isn’t blown-up), throughput will be a good predictor of future delivery rate even with large item size variability.”

We will add more of these over time as we learn ourselves how to teach the methods we use and why they give superior results to simply dividing the average amount of work remaining by an average rate of delivery.

Troy
