The Brode Report  |  October 2010

As some of you know, I struggle to answer the basic question "What do you do?"

Just last week a client explained it like this: we start with a discussion about how my business works and what I'm not sure about. Then you go away and build a model. Finally, you and I talk strategy, using the model to guide our thinking. Rinse, repeat.

Until next month, all the best,
David Brode
  Struggling with spreadsheet data overload? Crazy-hard analysis?
Multiple entity forecasts? Complex cash flow structures?
I'm always happy to discuss situations--call me at (303) 444-3300.
Fixing a Large Model So a Transaction Could Close

Here’s the story of a classic problem that turned into a very satisfying project. My client had an outside group build a model to support a structured finance transaction. In this case, my client was arranging two transactions, sitting between the bank funding the deal and the finance companies borrowing the money.

I was brought in because some of the parties didn’t understand the model results and the transactions couldn’t close until everyone believed the model was giving accurate answers. In particular, the model had given different results at different times, and reproducing or reconciling the results was extremely difficult.

Making matters worse, the model was large (14 files, 89 megabytes), incredibly poorly coded, and completely undocumented. If you took the time—a lot of time—and made a data logic flowchart, it would have looked like a Rube Goldberg machine complete with a tipping bucket of water and a snapping turtle.

Complicating matters, it wasn’t a simple deterministic model.

[Permit me a digression to explain. There are two types of models: deterministic and stochastic. Deterministic models take inputs and do math on them to produce outputs; this is what most people do with Excel. By contrast, stochastic models take random numbers as inputs and go from there. For example, given a mean, a standard deviation, and a stream of random numbers, you can create a normally distributed input variable. Stochastic models run thousands of scenarios and then present the results statistically, often as averages, sometimes with more sophistication.]

Putting this in context: the lender doesn't just want to see one rosy scenario showing how the debtor will pay back the money. A stochastic model can instead say that the probability of default is 0.73% and the expected loss of principal is 0.30%. With this information it's possible to assign a bond rating, and that was exactly the exercise: rate the debt so the bank could get favorable treatment from regulators on the reserves it had to hold against the loan.
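The digression above can be sketched in a few lines of code. Everything here is made up for illustration: the principal, the cash-flow mean and standard deviation, and the scenario count are hypothetical numbers, not figures from the actual transaction.

```python
import random
import statistics

random.seed(42)  # fixed seed so the run is reproducible

# Hypothetical deal: $100 of principal, repaid from an uncertain cash flow
# modeled as a normal distribution (mean 130, standard deviation 20).
N = 10_000                       # number of stochastic scenarios
principal = 100.0
mean_cash, sd_cash = 130.0, 20.0

losses = []
defaults = 0
for _ in range(N):
    cash = random.gauss(mean_cash, sd_cash)  # one random-draw scenario
    if cash < principal:                     # borrower can't cover principal
        defaults += 1
        losses.append(principal - cash)      # shortfall in this scenario
    else:
        losses.append(0.0)

# Present the results statistically, as a stochastic model does.
p_default = defaults / N
expected_loss = statistics.mean(losses) / principal
print(f"P(default) = {p_default:.2%}, expected loss = {expected_loss:.2%}")
```

With figures like these, a rating agency can map the default probability and expected loss onto a rating grade, which is what made the bank's regulatory treatment possible.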

In a deterministic model, excess size is cumbersome and inconvenient: it makes tracing logic and validating results difficult, and refreshes and saves seem to take forever. In a stochastic model, excess size can be a death knell. If a single recalculation takes one second, 100,000 iterations will take nearly 28 hours, and in this model refreshes took over four seconds. By digging deep into the model, I found the logic bottlenecks and made full runs more than 20 times faster. This let all parties rapidly see the results of more "what-if" questions, and with that speed we were able to put to bed all of the outstanding concerns about the model, optimize the transaction structure, and get to the closing table quickly.
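The run-time arithmetic above is worth making explicit. This is purely illustrative back-of-the-envelope math using the figures in the text, not a benchmark of any actual software:

```python
# Why per-recalculation time dominates a stochastic run.
ITERATIONS = 100_000

def run_hours(seconds_per_recalc: float) -> float:
    """Total wall-clock hours for one full stochastic run."""
    return ITERATIONS * seconds_per_recalc / 3600

print(f"{run_hours(1.0):.1f} hours at 1 s per recalc")         # ~27.8 hours
print(f"{run_hours(4.0):.1f} hours at 4 s per recalc")         # ~111.1 hours
print(f"{run_hours(4.0 / 20):.1f} hours after a 20x speedup")  # ~5.6 hours
```

The lesson: in a stochastic model, every second shaved off a single recalculation is multiplied a hundred thousand times.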

Bottom line: this was a huge, complicated mess--just the kind of mess I like.

This type of task is an exercise in model forensics. Here’s what I did:
  • Organized: numbered the pages and created a shorthand for documenting the system.
  • Simplified: centralized calculations so the same work wasn't done in multiple places, and reduced the number of workbooks from 14 to two.
  • Centralized: moved all inputs to one page and created scenario inputs (as discussed in the September 2010 newsletter) so we could keep a single version of the model rather than one copy per run, enabling real comparisons across inputs.
  • Tested: conducted extensive testing and debugging.
In retrospect, there was no silver bullet. Success came from small changes, piece by piece; the problem was solved by having the experience to know where to look for trouble and the attention to detail to chase it down. Some non-dramatic examples:
  • The random number generator gave a different sequence of numbers if it perceived the model to be different, so even an extra comment label could throw off testing results. I replaced a $3,000-per-server software package (on four servers) with a $20 shareware package that delivered consistent results.
  • Tiny input-factor differences had been missed because the inputs were spread over different pages.
  • A single cell on one calc page had been hardcoded in some copies of the model, throwing off certain results.
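The first bullet above, the random number generator problem, comes down to seeding. This is a minimal sketch of the reproducibility idea, not the commercial simulation package actually involved:

```python
import random

def scenario_draws(seed: int, n: int = 5) -> list:
    """Draw n standard-normal random inputs from an explicitly seeded generator."""
    rng = random.Random(seed)  # independent generator with a fixed seed
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

# Same seed -> identical sequence, so test runs reconcile exactly.
assert scenario_draws(123) == scenario_draws(123)

# A different seed (or a generator that quietly re-seeds itself based on
# the state of the model) produces a different sequence -- the source of
# the irreproducible results described above.
assert scenario_draws(123) != scenario_draws(456)
```

Once every run replays the same random sequence, any change in the outputs can be attributed to the model logic rather than to the dice.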
In the end, all parties to the transaction were able to get behind the model, and the result was two transactions closing, for $150 million and $100 million. The model continues to run quarterly to support the transactions, and so far real-world results are running ahead of expectations.

As we moved into maintenance mode, I created a plan to make the model really clean and retire the terrible legacy code that made simple things look difficult and confused so many people. I like nothing more than taking an ugly model and turning it into a thing of beauty, one that moves conversations forward and helps deals close quickly.

So if you have any crazy large scary models and need an outside opinion on how to make progress, I’d love to hear from you.


Connect with me | Share your thoughts | Subscribe | See past issues | www.brodegroup.com
The Brode Group Strategic Financial Consulting - Real-World Results (303) 444-3300