Latent Defect Estimation – How many bugs remain?


Get the spreadsheet here -> Latent Defect Estimation Spreadsheet

Not all software is perfect the moment it is written by a sleep-deprived twenty-year-old developer coming off a Game of Thrones marathon weekend. Software has defects. Maybe minor, maybe not, but it is more likely than not that your software has undiscovered defects. One problem is knowing when it is safe to ship the version you have: should testing continue, or would customers be better off getting this version, which solves new problems for them, now? It's not about zero (known) defects. It's about getting value to the customer faster so their feedback can help drive future product direction. There is risk in too much testing and beta trial time.

Yes, you heard right. We want an estimate of something we haven't found yet. In actual fact, we want an estimate of "if it is there, how likely is it that we would have seen it?" A technique used by biologists for counting fish in a pond becomes a handy tool for answering this fishy question as well. How many undiscovered defects are in my code? Can (or should) we ship yet?

The Capture-Recapture Method

The method described here is a way to estimate how well the current investigation for defects is working. The basic principle is to have multiple individuals or groups analyze the same feature or code and record their findings. The ratio of overlap (found by both groups) and unique discovery (found by just one of the groups) gives an indication of how much more there might be to find.

I first encountered this approach in the work of Watts Humphrey, who is notable for the Team Software Process (TSP), working out of Carnegie Mellon University's Software Engineering Institute (SEI). He included capture-recapture as a method for estimating latent defect counts as part of the TSP. Joe Schofield has also published more recent papers on applying this technique to defect estimation, and it is his example I borrow here (see references at the end of this post).

I feel compelled to say that not coding a defect in the first place is superior to estimating how many you have to fix, so this analysis doesn't give permission to skip the practices that avoid defects (pair programming, test-driven development, code reviews, earlier feedback). It is far cheaper to avoid defects than to fix them later. This estimation process should be an "also," and that is where statistical sampling techniques work best. Sampling is a cost-effective way to build confidence that if something big is there, chances are we would have seen it.

The capture-recapture method assigns one group to find as many defects as they can in a feature, area of code, or documentation. A second (and third or fourth) group tests the same area and records all defects they find. Some defects will be found by more than one group (duplicates), and some will be uniquely discovered by just one of the groups.

This is a common technique used to answer biological population questions. Estimating how many fish are in a pond is achieved by tagging a sample of fish, returning them to the pond, and then recapturing a second sample. The ratio of tagged to untagged fish in that second sample allows the total number of fish in the pond to be estimated. Rather than fish, we treat the defects found by one group as the tagged fish and compare them against the defects found by a second group. The degree of commonality between the defects found gives an estimate of how thorough defect discovery has been.

If two independent groups find exactly the same defects, it is likely that the latent defect count is extremely low. If each group finds only unique defects, it is likely that test coverage isn't high, that a large number of defects remain to be found, and that testing should continue. Figure 1 shows this relationship.

Figure 1

The capture-recapture method uses the overlap between multiple groups to scale how many undiscovered defects still exist. It assumes both groups feel they have thoroughly tested the feature or product.

Capture-recapture overlap venn diagrams


Equation 2 shows the two-part calculation required to estimate the number of undiscovered defects. First, the total number of defects is estimated by multiplying the count of defects found by group A by the count of defects found by group B, then dividing by the count of defects found by both groups (the overlap). The second step subtracts the number of defects found so far (it doesn't matter who found them) from that estimated total. The result is the number of defects still undiscovered.

Equation 2

Capture-recapture equations


Figure 2 shows a worked example of capturing which defects each group discovered and using Equation 2 to compute the total estimated defect count and the estimated latent (undiscovered) defect count. Three defects are estimated to still be lurking. This estimate doesn't say how big they are, or whether it is worth proceeding with more testing, but it does say that roughly two-thirds of the defects have likely been found, and that the most egregious defects have probably been found by one of the two groups. Confidence building.

Figure 2

Example capture-recapture table and calculation to determine how many defects remain undiscovered.

Capture-recapture defect table analysis

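For readers who prefer code to spreadsheets, here is a minimal sketch of the same two-step calculation in plain Python (the function name and the sample numbers are hypothetical, not the values in Figure 2):

def latent_defects(found_by_a, found_by_b, found_by_both):
    # Capture-recapture (Lincoln-Petersen) estimate of undiscovered defects
    if found_by_both == 0:
        raise ValueError("no overlap - coverage is too low to estimate a total")
    estimated_total = (found_by_a * found_by_b) / found_by_both  # step 1 of Equation 2
    unique_found = found_by_a + found_by_b - found_by_both       # everything found so far
    return round(estimated_total) - unique_found                 # step 2: still undiscovered

# Hypothetical numbers: group A found 6 defects, group B found 5, and 3 were found by both.
print(latent_defects(6, 5, 3))  # estimated total 10, 8 found, so about 2 latent defects

Note that if the overlap is zero the estimate blows up, which is itself a signal: coverage is too low to say anything useful yet.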

To understand why Equation 2 works and how we got there, we take the generic fish-in-the-pond capture-recapture equation and rearrange it to solve for the total fish in the pond, which in our context is the total number of defects in our feature or code. Equation 3 shows this transition step by step (thanks to my lovely wife for the algebra help!).

Equation 3

The geeky math. You don’t need to remember this. It shows how to get from the fish in the pond equation to the total defects equation.

 

Latent defect estimation using capture-recapture algebra
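If you can't read the image, the rearrangement is essentially the fish ratio restated for defects:

tagged fish / total fish in pond ≈ tagged fish recaptured / size of the second sample
defects found by group A / total defects ≈ defects found by both / defects found by group B
total defects ≈ (defects found by A × defects found by B) / defects found by both

Subtract the unique defects already found and you are left with the latent count from Equation 2.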

Like all sampling methods, it is only as valid as the samples. The hardest part, and the one I consistently struggle with, is getting multiple groups to report everything they see. The duplicates matter, and people are so used to NOT reporting something already known that it is hard to get them to do it. I suggest going to a simple paper system. Give each group a different color of post-it note pad and collect the notes only at the conclusion of their testing. Collate them on a whiteboard, sticking them together if they are the same defect, as shown in Figure 3. It is then relatively easy to count the total from each group (yellow stickies and blue stickies) and the total found by both (the ones attached to each other). Removing the electronic tool avoids people prematurely seeing what the other group has found.

Figure 3

Tracking defects reported using post-it notes. Stick post-its together when found by both groups.

Example of capture-recapture of defects using post-it notes.


Having an intentional process for setting up a capture-recapture experiment is key. This type of analysis takes effort, but the information it yields is a valuable yardstick of how releasable a feature currently is. It is not a total measure of quality; the market may still not like the solution as developed, which is why there is risk in not deploying it, but they certainly won't like it more if it is defect ridden. Customers need a stable product to give reliable feedback about improving the solution you imagined, rather than just "this looks wrong." The two main vehicles for capture-recapture experiments are bug-bash days and customer beta test programs.

Bug-Bash Days

Some companies have bug-bash days, where all developers are given dedicated time to look for defects in certain features. These are ideal days to set multiple people the task of testing the same code area and performing this latent defect analysis. It helps to have a variety of skillsets and skill levels perform the testing; it is the different approaches and expectations in using a product that kick up the most defect dust. The only change from a traditionally run bug-bash day is that each group keeps individual records of the defects they find.

To set up the capture-recapture experiment, dedicate time for multiple groups of people to test independently, as individuals or small groups. Two or three groups work best. Working independently is key: they should record their defects without seeing what the other groups have found. Avoid having the groups use a common tool, because even though you instruct them not to look at other groups' logged defects, they might (use post-it notes as shown earlier in Figure 3). They should be told to log every defect they find, even if it is minor, and to stop only once they feel they have given the feature a thorough look and would be surprised if they missed something big.

Performing this analysis for every feature might be too expensive, so consider doing a sample of features. Choose a variety of features that might be key indicators of customer satisfaction.

Customer Beta Programs

Another way of getting this data is by delivering the product you have to real customers as part of a beta test program. Allocate members at random to two groups; they don't even have to know what group they are in, you just need to know during analysis. Capture every report from every person, even if it is a duplicate of a known issue previously reported. Analyze the data from the two groups for overlap and uniqueness using this method to get an estimate of the latent defects.

Disciplined data capture requires that you know what group each beta tester is in. A quick way is to use the first letter of the customer's last name: A-K is group A, L-Z is group B. It won't give exactly equal membership counts, but it is an easy way to get roughly two groups. Find an easy way in your defect tracking system to record which groups reported which defects. You need a total count found by group A, a total count found by group B, a count of defects found by both, and a total number of unique defects reported. If you can, add columns or tags to record "Found by A" and "Found by B" in your electronic tools and find a way of counting based on these fields. If this is difficult, set a standard for defect titles by appending an "(A)", "(B)" or "(AB)" string to the end of the title. You can then count the defects found only by A, only by B, and by both by hand (or, if clever, by search).
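To make that bookkeeping concrete, here is a rough Python sketch of the group assignment and tag counting (hypothetical helper names; the three counts it returns feed the latent_defects sketch shown earlier in this post):

from collections import Counter

def beta_group(last_name):
    # A-K reports into group A, L-Z into group B
    return "A" if last_name[:1].upper() <= "K" else "B"

def tally_tags(defect_titles):
    # Count titles ending in (A), (B) or (AB) into the three numbers the estimate needs
    counts = Counter()
    for title in defect_titles:
        if title.endswith("(AB)"):
            counts["A"] += 1
            counts["B"] += 1
            counts["both"] += 1
        elif title.endswith("(A)"):
            counts["A"] += 1
        elif title.endswith("(B)"):
            counts["B"] += 1
    return counts["A"], counts["B"], counts["both"]

print(tally_tags(["Crash on save (A)", "Slow page load (AB)", "Typo in header (B)"]))  # -> (2, 2, 1)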

There will be a point of diminishing returns in continuing the beta, so this capture-recapture process can be used as a "go" indicator that the feature is ready to go live. In this case, keep the analysis running until the latent defect count hits a lower trigger value that you treat as an indication of deployment quality. Used this way, the analysis can shorten a beta period and get a loved product into customers' hands earlier, with the revenue benefits that brings.

Summary – don’t do this by hand

We of course have a spreadsheet for this purpose. We are still getting it to shareable quality, but the equations and mathematics match this article and have been used successfully in commercial settings. Please give it a try and let us know how it works for you.

Get the spreadsheet here -> Latent Defect Estimation Spreadsheet

Capture-recapture spreadsheet.


References

http://www.ifpug.org/Conference%20Proceedings/ISMA3-2008/ISMA2008-22-Schofield-estimating-latent-defects-using-capture-recapture-lessons-from-biology.pdf

http://joejr.com/CRMQAI.pdf

Humphrey, W.; Introduction to the Team Software Process; 2000; pp. 345–350

 


Metrics don’t have to be evil – 5 Traps and tips for using metrics wisely


Problem in a nutshell: Metrics can be misused. Metrics can (and will) be gamed. This doesn't mean we should avoid using quantitative measures for team and project decision making – we just need to know why and what we are measuring, and interpret the results accordingly.

“Just like dynamite, it would appear that metrics can be used for good as well as evil. It all depends on how you use them.”

1. Don’t embarrass people

Embarrassing people is easy to do when showing metrics they feel responsible for. It causes data to be hidden, obscured, and mis-reported, which leaves you with an incomplete and inaccurate picture even though you have data. Once you embarrass someone, that is the last time they will trust any metric, and the last time you will have an accurate one.

Do

  • Focus on trends rather than single point values.
  • Leave axis values off charts where possible; focus people on trends.
  • Exclude any name information. It is OK for a team to identify themselves, but NOT for others to point out another team.

 

Figure 1 – It's the trend that matters. Leaving off team names and axis values helps people compare the trends.

2. Focus on Trends Not Individual Values

Trends are charts of the same measure over time. They help make sense of noisy data by showing the relative direction of change. Figure 1 shows a trend-line applied to cycle time data. The orange line is the team looking at its own data; the grey line is the trend of the same measure for the rest of the company. This chart shows that the team is driving down its cycle time average over time, whereas the company trend is flat.

Do

  • Capture data that helps show trend values over time
  • Add a linear trend-line to the data to help see the big picture of change
  • Help teams see how their trend tracks against "others" in a similar situation
  • "Others" means teams in SIMILAR situations; don't compare apples with oranges (e.g. sustainment teams versus production support teams)

3. Use Balanced Metrics

Tracking just one metric promotes overdriving that metric at the expense of everything else. Multiple opposing metrics should be shown together, with the emphasis on trading something you are above trend on for something that is trending worse than others. Changing one metric is easy; changing that metric without decimating another is much harder.

Larry Maccherone in his “Software Development Performance Index” uses a metric from multiple quadrants –

Responsiveness – Time in Process average (often called cycle time).

Productivity – Throughput / team size (dividing by team size normalizes the measure, making bigger-team and smaller-team trends comparable)

Predictability – Variability of the throughput / team size values. Helps teams see whether they have peaks and troughs rather than smooth flow

Quality – How ready to release is the codebase? It could be the number of open blocking P1 or P2 defects, or a score based on passing tests, un-merged feature branches and performance regressions. This is always the most difficult to define for each company. Avoid defect counts alone; find ways to make quality mean improved customer experience.
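As a rough illustration only (hypothetical function and sample data, not Larry's actual index – and quality, as noted, needs its own per-company definition), three of these quadrants can be computed directly from cycle time and weekly throughput samples:

import statistics

def balanced_metrics(cycle_times_days, weekly_throughput, team_size):
    # Sketch of three of the four quadrant measures from raw samples
    per_person = [t / team_size for t in weekly_throughput]
    responsiveness = statistics.mean(cycle_times_days)   # average time in process (cycle time)
    productivity = statistics.mean(per_person)           # throughput normalized by team size
    # coefficient of variation as a simple stand-in for "variability of throughput / size"
    predictability = statistics.stdev(per_person) / productivity
    return {"responsiveness": responsiveness,
            "productivity": productivity,
            "predictability": predictability}

# Hypothetical samples: item cycle times in days, eight weeks of throughput, five-person team
print(balanced_metrics([4, 9, 12, 3, 7], [6, 2, 9, 4, 7, 5, 8, 3], 5))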

Do –

  • Look for opposable measures. NO team should be able to be BEST at all of them, just one or two
  • Being BEST in a measure is an alarm! It may mean one measure is being overdriven at the expense of the others
  • Always show the measures together so people can see the tradeoffs they are making

Always show balanced metrics together. Avoids focus on just one.

4. Use Sampling – Track some metrics just sometimes

Some metrics are expensive to capture, and you don't need every metric all of the time. Sampling allows data to be captured for a short period to get a snapshot of how high or low the metric is compared to what you expected. For example, how much interrupt-driven work is the team fielding? Get the team to stick a post-it note on a whiteboard every time they do a "small job." Over the week you will get a good indication of the percentage and can make appropriate process changes. Repeat for one week next month and don't track during the other three. This makes the cost of getting the metric a quarter of the original cost and gives the same result! Sampling is a powerful and underused technique.

Do

  • For measures that rely on people doing extra work to capture, use sampling. For example, track one week a month.
  • It takes less data than you think: 11 samples give a representative picture of a measure, and by 30 samples you can be almost certain the result is similar to sampling everything.

5. What, So What, Now What – Help people see the point

There has to be a reason for tracking and showing a metric. Make it clear how a metric's trend leads to better decisions and improvement. If people don't know why a metric is being tracked, they will assume it is there to track them personally! Help them see it is about the work and the system, not the worker and their livelihood.

Do

  • Promote system metrics rather than personal metrics
  • Promote team metrics rather than personal metrics
  • Share how a trend of a metric has led to a better decision or improvement
  • Be vigilant about dropping metrics that are captured just because they are available – have a reason

In summary

Metrics aren't evil. Although they are often misused, they don't need to be. Make people responsible for determining actions from their own metrics. Send us ideas and stories about what you have seen work and fail.

 


Risks – Things that could make a big difference


Problem in a nutshell: Sometimes extra work needs to be done before delivery because something went wrong, or because something was learnt while building a feature that means additional work is required. How can these factors be included in a forecast and dealt with earlier? We find asking the simple question "What could go wrong?" helps us be more right when forecasting.

Feature or project work starts with a guessed amount of work. As the feature is built, technical learning can cause delays. For example, when a feature for suggesting other products you might buy turns out to be too slow to be useful during real-time shopping, additional work may be needed to build an index server specifically to make those results return faster. From a probabilistic perspective, there is a known amount of work (the original feature) and an additional "possible" amount of work if it performs poorly. This is a risk. It has a probability of being needed (less than 100%) and an impact if (and ONLY if) it comes true.

If we performed a simple Monte Carlo simulation for this scenario and said that there was a 50% chance the performance work would be needed, the result would be an equal chance of an early date and a later date, with a normal distribution of uncertainty around each of those dates. The result is "multi-modal" – jargon meaning there is more than one peak of highest probability. The average delivery date is early July, but it has almost NO CHANCE of happening! Delivery will be around mid June or early September, depending mainly on whether this risk comes true.


Figure 1 – Monte Carlo of a 50% risk, produced with our Single Feature Forecaster spreadsheet.

What does this mean? A few things –

  1. Estimating and quibbling over whether a story is a 5-point or 8-point story is pointless; that changes the result in this case by a few weeks. Stop estimating stories and start brainstorming risks.
  2. If we know that risks can cause these bi-modal probability forecasts, we need to stop using the AVERAGE, which would give us the nonsense July delivery date that won't happen.
  3. Probabilistic forecasting is necessary to make sense of this type of forecast. But how?

How do you forecast these risks?

It seems harder than it is. Here is how I generated the above forecast (Figure 1) using the Single Feature Forecaster spreadsheet, which uses no macros or programmatic add-ins – it is PURE formula, so it is not that complex to follow. Monte Carlo forecasting plays out feature completion thousands of times. In Figure 1, the lower chart shows the first 50 hypothetical project outcomes (it looks like lightning strikes). You can see there are two predominant ways the forecast plays out, with some variability based on our range estimates for the number of stories and throughput (it could be actual throughput data; I just started with a range of 1 to 5 stories per week, but use data when you can). It is either shorter or longer, with not a lot of chance in between.

Here are the basic forecast guesses for this feature –


Figure 2 – The main forecast data to deliver a feature.

Once we have this data, let's enter the risks. In this case, just one –


Figure 3 – Risks definition

The inputs in Figure 3 represent a risk that has a 50% chance of occurring and, if it does, requires 30 to 40 more stories to implement an index server. This risk (30-40 stories picked at random) is added to the forecast 50% of the time. The results shown in Figure 1 make it clear that to forecast the delivery date predictably, determining which peak is more likely is critically important. If the later date is unacceptable, reducing the probability of that risk early is beneficial. As a team or a coach, I would set the team a goal of halving the probability of needing an index server (from 50% to 25%), or of determining early whether an index server is certainly needed and the later date is real.
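If you want to see the mechanics outside the spreadsheet, here is a minimal Monte Carlo sketch in Python. The risk numbers mirror Figure 3 and the 1 to 5 stories per week throughput guess; the base scope range is a hypothetical stand-in for the Figure 2 inputs:

import random
import statistics

def simulate_weeks(trials=10000, risk_probability=0.5):
    # Play out delivery many times: base scope plus a chance of 30-40 extra stories
    results = []
    for _ in range(trials):
        stories = random.randint(40, 60)          # base scope guess - hypothetical range
        if random.random() < risk_probability:    # the index-server risk from Figure 3
            stories += random.randint(30, 40)     # 30 to 40 extra stories if it comes true
        weeks = 0
        while stories > 0:
            stories -= random.randint(1, 5)       # throughput guess: 1 to 5 stories per week
            weeks += 1
        results.append(weeks)
    return results

weeks = simulate_weeks()
print("mean:", round(statistics.mean(weeks), 1), "median:", statistics.median(weeks))

Plot a histogram of the results and the two peaks appear; the mean lands between them, on a date with almost no chance of being the actual finish. Re-running with risk_probability=0.25 shifts most of the weight to the earlier peak, which is exactly the effect of the technical spike described next.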

For example, by doing a technical spike the team determines that an index server is less likely to be needed: they rule out 3 of the 4 reasons one might be required and agree there is now a 25% chance. The only change in the spreadsheet is the risk likelihood being reduced from 50% (as shown in Figure 3) to 25%. The forecast now looks like this –


Figure 4 – 25% chance of performance risk.

It is clear to see that there is now a 75% chance of hitting June versus September. This is well worth knowing. Until we can show how things going wrong cause stress when a team is asked to estimate a delivery date, the conversation is seen as the team being evasive rather than carefully considering what they know.

This example is for a single major delivery-blocking risk. It is common for there to be 3 to 5 risks like this in significant features or projects. The same modeling and forecasting techniques work, but rather than just two peaks, there will be more peaks and troughs. The strategy stays the same: reduce likelihoods, and prove early whether a risk is certain. Then make good decisions with a forecast that constantly shows its uncertainty.

Conclusion

If you aren't brainstorming risks and forecasting them using Monte Carlo simulation, you are likely to miss dates. Averages are not useful when forecasting the multi-modal outcomes common to IT projects. Estimating work items is the least of your worries in projects and features where technical risks abound. We find three risks commonly cause most of the chaos, and we rarely find none.

Main point – it is easier than you think to model risk factors, and we suggest you take a look at our spreadsheets that support this type of analysis.

Troy

 

 


Calendar Days vs Work Days (Storing and using cycle time data)


Problem in a nutshell: Should work time in process (cycle time) and lead time be workdays or calendar days? Does it matter which we use for forecasting?

Just want the spreadsheet: Get it here: Cycle%20Time%20Adjustments.xlsx

We get this question a lot: should weekends and holidays be captured in cycle time data? Our answer is along the lines of "whatever you have." It doesn't matter from a forecasting perspective, as long as you are consistent. Here are the issues that may sway you one way or the other –

  1. If your work item estimates are time based and expressed in work days, it may be easier to use work days for your time in process (cycle time) numbers.
  2. If you are capturing item data as date started and date ended, the data will naturally include weekends, and we can remove those days to get work days.

We often have to convert from one to the other. It is easy if we have dates to work from, because Excel has helpful functions for computing the workdays between two dates (removing non-work days and a list of public holidays) – look up the NETWORKDAYS documentation. This is a lossless conversion.
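If you need the same date-based conversion outside Excel, numpy offers an equivalent (a sketch, not something from our spreadsheets):

import numpy as np

# Workdays between two dates, skipping weekends and any listed public holidays.
# Note: busday_count excludes the end date, whereas Excel's NETWORKDAYS includes both
# endpoints, so check the off-by-one behaviour against your spreadsheet results.
print(np.busday_count("2016-04-04", "2016-04-20", holidays=["2016-04-15"]))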

Trickier is when we just have a number of days. We spend some time checking how these were calculated. If it is calendar days and we cannot get the raw date data, we use a statistical approach to remove an approximate number of weekend days from each sample. Here is how our algorithm works (it is a little complicated, but it is the best we have found).

For every multiple of 7 days in the original cycle time we can remove 2 days: if we have a cycle time of 7 days, and we know that the company works 5 days (Monday to Friday), then we can remove two days. When the remaining cycle time is less than 7 days, we have to guess what day of the week the work started. For example, if a cycle time is 3 days, the valid starting days where all 3 days fit into a working week are Monday, Tuesday and Wednesday. If the work started Thursday, 1 day would be weekend; if Friday, 2 days would be weekend. If every day of the working week has an equal chance of being the start, there is a 3/5 chance the right value is 3, a 1/5 chance it is 2, and a 1/5 chance it is 1. We use these probabilities and adjust each sample based on a uniform random draw.
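Here is a small Python sketch of that adjustment (our spreadsheet does it with formulas; the function below is only an illustration and assumes a Monday to Friday work week):

import random

WORKWEEK = 5  # Monday to Friday

def calendar_to_workdays(calendar_days):
    # Approximate work days from a calendar-day cycle time
    full_weeks, remainder = divmod(calendar_days, 7)
    workdays = full_weeks * WORKWEEK                 # drop 2 weekend days per full week
    if remainder:
        start = random.randrange(WORKWEEK)           # guess the starting weekday, Mon=0 .. Fri=4
        weekend_overlap = min(2, max(0, start + remainder - WORKWEEK))
        workdays += remainder - weekend_overlap      # e.g. 3 days -> 3, 2 or 1 with 3/5, 1/5, 1/5 odds
    return workdays

print([calendar_to_workdays(10) for _ in range(5)])  # 10 calendar days -> roughly 6 to 8 work days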

Don't worry, we have of course encoded all of this logic into a spreadsheet. Our Cycle Time Adjustments.xlsx spreadsheet can convert in both directions for both date and numerical cycle time inputs. It can never be exact for numerical cycle times, but it is pretty close based on our round-trip testing (dates -> numerical -> dates).

Get it here: Cycle%20Time%20Adjustments.xlsx

You can see our logic for the time-based probabilities in the time-based setup worksheet.

Setup for the probabilities of cycle time adjustment.


 

For recommendations about data capture of cycle time and lead times, we suggest –

  1. Capture date started (when the item was committed to start delivery), date completed, and optionally date captured (often, date created).
  2. Store cycle time data as dates; don't convert to days until the last moment you need to.
  3. Be consistent with date format. We like yyyy-mmm-dd (e.g. 2016-Apr-20) as a format that is unlikely to be confusing whatever the native date format is in your country or region.

Troy


Decision Making


I recently had the opportunity to do training with Michael Tardiff, a gifted facilitator and trainer for Solutions IQ. One of Michael's specialty subjects is group decision making. We take different approaches to teaching this topic: I'm more about getting to any answer; Michael is more about knowing the method used to get to the answer, so that it has the greatest chance of surviving use over time. Michael is right, of course. The goal of decision making is to get to the right answer (for now) and to avoid future "I never agreed to that" problems. Whilst consensus isn't necessarily the key, finding agreement that persists over stress and time is the purpose and goal.

Michael says there are four basic types of decision-making process, and all others are combinations of these –

  1. King Rules (gets to live while we like their decisions, then beheaded)
    speed: fastest, risk: high if the decision is technical, long-lasting: until change of king
  2. Majority Rules (works while the minority from the last decision believes they will be the majority one day)
  3. Consent (staying silent means you agree)
  4. Consensus (hardest to achieve, but because it was so hard to agree, it tends to stick)
    speed: slowest, risk: low if the right people reach consensus, long-lasting: good

It is important to call out (when it is unsaid) how a decision is being made or has been made. Consensus takes the longest and is the hardest to achieve, but it tends to stick because people are invested in the decision. Consent offers middle ground if there is time and capability to handle objections. If your system demands King Rules, just acknowledge it. Majority rules is a muddy area: you haven't managed to sway the minority, who might believe their day will come. But if a decision is needed by a certain time, or total agreement may never be achieved, it is (often) a fair way to resolve decisions. It just may not stick for long.

Hofstede’s Cultural Dimension Theory (see here)

Decision-making styles can be culturally influenced. Even within one country there are very different styles of lively discussion from one coast to the other (in the USA, the West coast leans toward consensus-seeking introverts and the East coast toward extroverts). When working with experts across geographies, pay attention to the fact that the latitude for challenging authority varies, and you may only think you have consensus. The classic measure of this is Hofstede's Cultural Dimension Theory, which ranks countries on a set of dimensions relevant to decision making. I've found awareness of two of them important. The Power distance index (PDI) is defined as "the extent to which the less powerful members of organizations and institutions (like the family) accept and expect that power is distributed unequally." Long-term orientation vs. short-term orientation (LTO) associates the connection of the past with current and future actions and challenges; a lower degree of this index (short-term) indicates that traditions are honored and kept, while steadfastness is valued. Both are key to understanding some group dynamics. More ideas can be found in these articles and books: Wikipedia: Cross Cultural Decision Making, and the book Advances in Cross-Cultural Decision Making.

It is key that even the introverts who know why a decision is a poor or impossible choice get heard by the group, independent of salary or positional power. If the decision is more technical than a matter of opinion, weight the technical voices in the room more heavily than the opinion voices.

Reducing Thrashing

To reduce elongated analysis time, I often nudge teams in the following directions –

  1. “Good for now” – Agree how long you are going to test the decision before revisiting it for further analysis. Helping people remember that a decision isn’t set in stone, just good for now, often overcomes hesitancy to commit under uncertainty.
  2. “Close the gap” – Narrow in on a few actionable things. Even if you can’t decide on the whole solution, can you agree on first steps? Often the team realizes that most of the value of the decision is achieved.
  3. “Guard rails” – Identify which factors, if they occur, invalidate key assumptions and mean the decision needs revisiting. This helps people agree for now and feel that dooms-day scenarios are protected against.
  4. “Agree on research” – If agreement on the decision can’t be reached, identify what research inputs are needed to get to one. Document what is in the way of reaching a decision and what data would add clarity or reduce uncertainty.
  5. And Sebastian Eichner (@stdout) mentioned another important tool: “Roll a die and pick at random.” Often people find reasons why the option picked at random isn’t viable, or, if the choices really are that similar in risk and reward, it is as good a choice as any! Use it to draw out opinions.

It is good to have teams make smaller, less risky decisions to practice putting forward contrary views in a productive way. Decision making is a skill to be built in a team, and a great indicator of team maturity.

One final point is often raised: “Who is responsible for a decision if one can’t be reached?” There is an eventual moment when King Rules needs to, and should, apply. If the cost of no decision outweighs the risk of moving forward, someone has to make the best decision they can. If that is you, and you are in a position of power, you have a couple of acceptable choices –

  1. Delegate to the most informed expert. Say, “Which one? We need a choice and I think you have the most information,” and then cover them if it goes badly.
  2. Break the deadlock. If two options are equally liked by different people, make it clear that no decision is worse, and that you are going with decision A for two or three months (as long as you need to see whether it was likely right). By making it clear you are only stepping in as a tie-breaker because of the cost of no decision, you still give the team a good chance of making their own choices. If this keeps recurring, you need to make staff changes!

Troy


Does setting arbitrary goals (times or dates) work?


Problem in a nutshell: Work should be released when it reaches the quality needed with the features required. Of course small releases give the fastest feedback, and this post isn't saying you should do larger releases. This post looks at whether setting a date or time goal impacts delivery.

Runners in the New York marathon finish in higher concentrations just prior to hour, half-hour and fifteen-minute elapsed time boundaries. Why? It is speculated that there is a mental race going on in each runner's head as they try to reach the next personal goal-post. Don't they just run as fast as they can? Sure, but they also need something to pace themselves against in order to judge their ongoing pace and balance it against exhaustion (it should be noted I've never run a marathon!).


Clustering of finish times.


It is 1.4 times more likely to finish in 3:59 than 4:01.

Having a goal in mind means that constant adjustments throughout the marathon help the runner finish on goal. Whilst runners can't suddenly run half an hour faster than their personal best, they do get early feedback that they are off pace and adjust early, perhaps reaching the finish a few minutes before a boundary.

I think the same needs to happen when we set goals for software delivery teams. They need constant feedback that they are on pace so that adjustments happen early – NOT cramming at the end. Having a date in mind is the only way to compare the delivery pace of the work against the pace required to achieve that delivery without heroics. Heroics are failure. They put teams in burnout mode, and teams fail to sustain a consistent pace after a crunch, making it impossible to reliably forecast. If I see a team moving into a feature or project after having crunched in a prior delivery, I halve my throughput estimates for one to two times the crunch period. It is just NOT cost effective to have teams crunch.

My advice –

  1. If a team has crunched, reduce throughput estimates by half for two times the crunch period they endured
  2. DON'T use throughput samples from crunch mode. They are artificially high and cause crunch mode in the next plan!
  3. Set a delivery date and work out what team size and scope will fit into that period (using our spreadsheets of course :))
  4. Track delivery pace against this plan. The moment delivery falls behind, revisit the expected scope and communicate that it is at risk. Get small corrective actions taken earlier
  5. Track when teams are crunching versus sustainable. I put a C or an S in the notes of any throughput weeks I capture in our spreadsheets. Any team spending more than 10% of the year crunching is costing the company delivery pace and money. Compute the cost by estimating the salary of the team and working out what running at half pace for two times the crunch period costs (a rough worked example follows this list).
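As a hypothetical back-of-envelope example of that last point (the numbers are invented for illustration): a six-person team at a fully loaded cost of $120,000 per person is roughly $720,000 per year, or about $13,800 per week. A three-week crunch that halves throughput for the following six weeks loses the equivalent of three weeks of capacity – around $41,500 – on top of whatever the crunch period itself cost.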

Troy.

 

 


Excel Spreadsheet Tips and Design Goals


We put a lot of effort into our free spreadsheet tools. We want our spreadsheets to be usable by anyone who needs an answer to their questions. Some good practices have become common in our tools and we want to share them with you as well.

No Macros or Add-Ins

A lot of what we do would be easier if we used Excel macros or add-ins. We resist the urge. By including macros or add-ins we would be shipping code, and Excel gives the user all manner of warnings that the spreadsheet may be a security risk. It also stops the spreadsheets being compatible with Google Sheets. We haven't needed macros, and we compete feature for feature against many paid add-ins. We are extremely proud to be performing Monte Carlo simulation and distribution-fitting algorithms using plain formulas that YOU can see. Nothing is hidden. But gee, a lot of sleepless nights have gone into this goal…

Title and Read me worksheet

Always have a title page worksheet that describes the spreadsheet's intention and how to contact us with ideas for improving it. Document the main steps in getting a result. Some of our spreadsheets are four or five worksheets deep, and we want to avoid people becoming immediately lost in the workflow.

Data entry is by offset reference

Initial versions of our spreadsheets had a column for user data to be populated. What we found we needed was the ability to copy, paste, drag, delete, and otherwise manipulate this data directly in the spreadsheet. However, whenever a column or cell was moved, the formulas referencing those cells broke. We solved this problem by using an indirect reference to "duplicate" the user data on another worksheet using Excel's INDEX function, for example:

INDEX(OriginalCompletedDateValues,ROW(A2))

This formula references another cell by row number, and the row number doesn't change with any user action involving the clipboard, dragging or other manipulation. We write all formulas against this indirect copy of the original data in a worksheet commonly called "Calculations". We copy the formula down as shown, so the A2 becomes a series: A3, A4, A5 …

We standardized on a worksheet called "Your Data" which no formula ever directly references. We find ourselves dragging columns of different data into this sheet and everything just keeps working.

Column References and Capacity Growth

No matter how much input data we make the spreadsheets support, someone always emails wanting more. We now structure our formulas to limit the places where an absolute row count is needed. Here are the ways we tackle this –

  1. Use whole-column references (A:A rather than A2:A5000) in formulas wherever possible
  2. Use named ranges (Formulas -> Name Manager) to define names for ranges, and use the names in formulas where possible
  3. When we need to handle a date range, we handle 250 weeks or 5,000 daily dates individually
  4. We always set the background color of a formula cell to the calculation cell style, so we can see visually where we stopped copying a formula row down. We also try to put a comment at that point saying how the user can expand it.

Top Left to Bottom Right Flow

We design our worksheets to be read and populated from top left to bottom right. We also try to number each input cell title and add a tip nearby. When we get a question, we try to add more documentation to these tips. We would love feedback on where people get stuck. These are pretty standard user design guidelines.

Auto Chart Axis Growth & Shrink

A fairly hidden feature we utilize is having charts automatically grow and shrink their axis values to match your specific data. By default you specify a pre-determined range for chart data, but we don't know in advance how many days or weeks your data contains. To auto-grow/shrink, we use a named cell range that starts at the first row of data for the column being charted and stops at the last row with valid data (not blank or zero is the normal rule). We bind this named range to the chart axis, and Excel takes care of the rest. For example, for cycle time a range formula CycleTimeRange is defined as –

=OFFSET(Calculations!$L$2,0,0,COUNTIF(Calculations!$A:$A,">0"))

This decodes to the range Calculations!L2:Lx, where x is the last row before any zero date (Excel counts empty dates as a zero integer value – we just know this by experimentation). In any chart data series, you reference this range like this (it has to be fully qualified) –

='Throughput and Cycle Time Calculator (5000 samples).xlsx'!CycleTimeRange

This technique allows us to handle any amount of user data and have the charts auto-resize. It is a little cumbersome to get working, but works great once you get it right.

Documenting Complex Formulas

On our more complex calculation sheets we try to add a note about each formula and how it works. Nothing is hidden in the formula bar, but some formulas aren't even clear to us after we have written and debugged them the first time.

Notes document formulas in calculation sheets.


We continually learn new things, and will post more tips over time. Please give us feedback on what we can do to make these spreadsheets easier to use.

Troy
