A good friend and repeat entrepreneur recently asked me what we look for when reviewing the financial statements of a prospective investment.
While it certainly depends on stage and business model, we do see a dramatic difference in approach from different companies, and this can be a quick filter as to whether we’re going to spend more time looking at an investment opportunity or not. Beyond the balance sheet and income statement, on which there isn’t much room for creativity, the forecast is the place where the maturity of thinking about how to build a business really comes out.
Often we’ll see a top-down “2% of the market” approach to revenues alongside excruciating detail on the expense side. While it’s great to see how an entrepreneur thinks about phasing in talent over time and about assumptions on future costs, this amounts to false precision: spending really needs to be driven by actual revenue achievement and by the leading indicators of revenue (e.g. distribution partnerships, customer validation, etc.). So the biggest early red flag we see is this lopsided approach where expenses are built bottom-up but revenues top-down. What I most look for is revenue drivers tied to assumptions that, if not already validated, can be framed as clearly defined “experiments” to be run. A structure that wires business-level leading indicators into the revenue forecast shows real maturity, and linking costs to revenues rather than keeping them entirely separate is another positive indicator.
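To make the contrast concrete, here is a minimal sketch of what “revenues built bottom-up, costs linked to revenue” can look like. Every figure and name below is an illustrative assumption of mine, not a real forecast or a prescribed template.

```python
# Hypothetical bottom-up forecast sketch: revenue is derived from leading
# indicators (qualified leads, conversion rate, deal size) rather than
# from a "% of market" guess, and part of the cost base scales with it.

def monthly_revenue(qualified_leads, lead_to_deal_rate, avg_deal_value):
    """Revenue driven by leading indicators, not a share of the market."""
    deals_won = qualified_leads * lead_to_deal_rate
    return deals_won * avg_deal_value

def variable_costs(revenue, cost_ratio):
    """Tie a slice of costs directly to revenue instead of forecasting
    them in isolation from the revenue plan."""
    return revenue * cost_ratio

# Illustrative numbers only: 200 leads, 5% close rate, $5k average deal.
revenue = monthly_revenue(qualified_leads=200,
                          lead_to_deal_rate=0.05,
                          avg_deal_value=5_000)
costs = variable_costs(revenue, cost_ratio=0.3)
print(revenue, costs)
```

The point isn’t the arithmetic, which is trivial; it’s that each input is a testable assumption an entrepreneur can design an experiment around.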
CFOs are often taught to model “scenarios”, presenting in their simplest form baseline, upside and downside budgets. While these aren’t terribly useful in themselves, since they’re usually arbitrary variations, they demand that an underlying sensitivity-analysis capability be built into the model, which can be helpful. Most finance types will model trite scenarios that are too abstract to be useful: what happens if the product is delayed a quarter, if revenue ramps at half the predicted rate, if per-salesperson productivity varies, and so on. We’ve seen that the questions that matter are generally more fundamental: what happens if the product just doesn’t resonate with the broader market the way it seems to with our limited pilot customers?
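The sensitivity-analysis capability is simpler than it sounds: re-run the same model under varied assumptions rather than hand-picking three budgets. A rough sketch, with every number invented for illustration:

```python
# Hedged sketch of a sensitivity check on runway. The model and all
# inputs are hypothetical; the idea is that changing one assumption
# (here, the revenue ramp) re-prices the whole plan automatically.

def months_of_runway(cash, monthly_burn, monthly_revenue, revenue_ramp):
    """Count months until cash runs out, given a monthly revenue
    growth multiplier (capped at 60 months for convergence)."""
    months = 0
    while cash > 0 and months < 60:
        cash -= monthly_burn - monthly_revenue
        monthly_revenue *= revenue_ramp
        months += 1
    return months

# Same plan, two assumptions about how fast revenue ramps.
base = months_of_runway(cash=1_000_000, monthly_burn=150_000,
                        monthly_revenue=20_000, revenue_ramp=1.10)
slow = months_of_runway(cash=1_000_000, monthly_burn=150_000,
                        monthly_revenue=20_000, revenue_ramp=1.05)
print(base, slow)
```

A model wired this way can just as easily answer the fundamental question by setting the ramp near zero, which is the “product doesn’t resonate” case.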
Once a business has traction, analytics become extremely useful. For instance, one of our early-stage companies quickly identified a direct selling model that worked and built a detailed model tracking salesperson productivity, cost of customer acquisition (CAC) and lifetime customer value (LTV), while refining their lead generation process. This gave us the confidence to write a cheque, and they scaled from three to thirty salespeople within 90 days. Seeing the depth of their model at small scale reduced the risk of investing. Now, that’s a high-velocity sales model, so it may not translate exactly to other businesses, but the point is that they knew enough about one business process to hire and fire quickly based on a model rather than hum and haw for months about whether a new hire was effective.
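For readers unfamiliar with the CAC and LTV metrics mentioned above, a bare-bones sketch of the arithmetic follows. The formulas are the standard textbook versions and all the inputs are made-up assumptions, not that company’s actual figures:

```python
# Illustrative unit-economics sketch. A simple subscription-style LTV
# (margin-adjusted revenue over expected customer lifetime) is assumed;
# real models are usually more granular.

def cac(sales_and_marketing_spend, new_customers):
    """Cost of customer acquisition: spend per customer won."""
    return sales_and_marketing_spend / new_customers

def ltv(monthly_revenue_per_customer, gross_margin, monthly_churn):
    """Lifetime value: monthly margin divided by monthly churn,
    i.e. margin times expected lifetime in months."""
    return monthly_revenue_per_customer * gross_margin / monthly_churn

acquisition_cost = cac(sales_and_marketing_spend=90_000, new_customers=30)
lifetime_value = ltv(monthly_revenue_per_customer=500,
                     gross_margin=0.8, monthly_churn=0.02)
# A common rule of thumb: scale spend only when LTV comfortably
# exceeds CAC (often a 3x+ ratio is cited).
print(lifetime_value / acquisition_cost)
```

With these two numbers tracked per salesperson, a hire-or-fire decision becomes a comparison against the model rather than a gut call.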
“Getting to Plan B” (John Mullins & Randy Komisar) and “The Lean Startup” (Eric Ries) are both terrific books detailing how to use disciplined, experimental-method approaches to iterate quickly through assumptions. Hit the highest-risk assumption first, design an experiment to prove or disprove it, then move on. Done properly, this kind of thoughtful approach to validating a business model comes through in the financial forecast, and that’s probably the key thing we look for evidence of.