Errors in formulas in complex spreadsheets can be hard to detect. After a few compound formulas (where one formula depends on the output of another), the permutations become too numerous to check by hand. Traditional software development uses unit tests both to guide writing code (tests fail at first, then pass as the code is written) and to confirm code continues to work after refactoring or seemingly unrelated changes. Unit testing spreadsheets seems just as valuable.

Our Multiple Feature Cut Line Forecaster spreadsheet is complex. It doesn’t use any macros (VBA code); it is all formula based. Its complexity is in the detailed formulas and the cascading of one result as an input to another formula. We needed to test that we got this right.

We decided to do the lightest testing possible and use Excel’s built-in Scenario Manager. The Scenario Manager allows a set of cell values to be changed in tandem on a worksheet. We pre-built sets of inputs and checked the calculated results against hand-verified expected values.

How it works:

1. We defined scenarios that set input fields to known states
2. We defined test rows that compare the current calculated results against an expected result we determined by hand
3. We show a pass or fail for each test row only when the matching scenario is applied

[Screenshot: Excel unit tests]
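A test row is just a formula. As a minimal sketch (the cell references are hypothetical: F10 holds a calculated result elsewhere on the sheet, and D55 holds the hand-computed expected value next to the test row):

    =IF(F10=D55, "PASS", "FAIL")

If the calculation produces floating-point results, wrapping both sides in ROUND (e.g. ROUND(F10, 6)) avoids spurious failures from tiny precision differences.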

We were happy with the result. Our tests found errors we had missed, and they still catch us when we occasionally make changes that break them. It’s not ideal, but the effort was light for the confidence we have in our tools. Worth the effort.

Excel Challenges:

1. Scenarios are hard-coded to a limit of 32 changing cells per scenario. We often had more. To work around this, some tests need multiple scenarios applied sequentially to change all the values we needed. A pain, but not a show-stopper.

2. Scenarios can’t change cells across sheets. This means our unit tests sit at the bottom of the Forecast tab. We would prefer them elsewhere, but this limitation means they just have to be out of sight (rows 50 onwards, at least off the first screen unless you scroll down).

3. No test runner. We have an aversion to macros, so scenarios have to be applied by hand via the Scenario Manager. Applying one scenario will cause its tests to pass while the tests for other scenarios fail, which looked messy. Our fix was to target each pass/fail indicator at a scenario: each scenario sets a field to an ID value (1, 2, 3, 10, etc.), and each test row indicates which scenario IDs it targets, as in the sketch below. This way, we avoid showing an alarming fail while testing another scenario.
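A sketch of that gating, again with hypothetical cells: suppose each scenario writes its ID into $B$1, column C of a test row holds the scenario ID that row targets, and D55 and F55 hold the expected and calculated values:

    =IF($B$1<>C55, "", IF(F55=D55, "PASS", "FAIL"))

The row shows nothing unless its scenario is active. If a row targets several scenarios, storing them as a comma-separated list in C55 and matching with SEARCH (e.g. ISNUMBER(SEARCH(","&$B$1&",", ","&C55&","))) works the same way.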

Troy