Rating The Venture Capitalists

Editor’s note: Zach Noorani is a former VC and recent graduate of MIT Sloan. Follow him on Twitter @znoorani.

For a profession full of business strategy experts, it’s impressive how rarely and narrowly the venture capital industry is subjected to conventional competitive analysis. Ask yourself: who are a VC’s customers? Limited partners (LPs) are probably the technically correct answer, but startups would certainly be the more fashionable one. It’s a reasonable debate. Either way, one’s the supplier and one’s the customer. Hell, maybe they’re both customers and VCs are some kind of glorified broker. But the fact that there isn’t a popular consensus illustrates how impotent we’ve been at examining the industry.

This is important because if we recognized that a venture fund and a McDonald’s are actually the same thing at some level, we’d start asking some really good questions, such as “which funds rate best on customer satisfaction?” If by “customers” we mean LPs, that’s just benchmarked returns, though the jury is still out on their satisfaction levels. But what if we meant startups — “how satisfied are they with their investment vendors?” Now I don’t read a lot, but I’ve never come across any comprehensive attempt at answering that question.

What if we isolated the primary metric that startups care about vis-à-vis their investors and measured how well they’re doing against it? So much discussion focuses on the value that VCs add over and above their money — and perhaps that can be substantial — but within this framework that’s little more than competitive posturing: a free McFlurry with the purchase of any Extra Value Meal, if you will. The real metrics that startups should care about are much more coldly financial: their own cash on hand and their ability to get more of it.

Say a startup succeeds in raising a round of financing. Let’s call that having been acquired by the VC as a customer. That company’s founders have made a very long-term and personally significant bet that they can in some way “put a dent in the universe.” All they need is enough money and time to do it. If their VC won’t continue giving them money, or help them get that money, does anything else matter?

So which investors do the best job of getting their portfolio companies money or, in other words, satisfying their customers? Get out your rulers, we’re gonna measure some stuff!

VC customer satisfaction score: Methodology

With a budget of about zero for this project, I’m relying entirely on CrunchBase, which graciously offers its entire database free for download here. Obviously there are many holes and inaccuracies in CrunchBase entries, far too many for me to manually scrub (12K companies, 21K investment rounds). As such, I’m using it exactly as it is, including entries I know to be inaccurate, under the assumption that all investors are misrepresented somewhat equally.

I then defined a couple of simple parameters to derive a measure of customer satisfaction.

Any company raising a Series A in the five-year period between June 2006 and May 2011 qualifies as a customer. While I’d love to do this analysis on angel investments, the data is too sketchy to work with. Y Combinator, for example, has seeded 143 companies according to CrunchBase, where the actual number is 511 (excluding the current class of 53). Series A’s, on the other hand, are a much better-defined milestone and so are quite widely reported. Ending the sample two years in the past, as opposed to one or three, is admittedly arbitrary. But the intent is to allow enough time for a startup to burn through its Series A and raise a follow-on round. Using these constraints produces a sample of 2,285 startups that raised an average of $7.3 million ($4.5 million median) each in Series A capital.

A satisfied customer is any company in the above sample that has raised at least $20 million in total, exited (been acquired or had an IPO), or raised at least $1 million in the last two years. Again, these levels are relatively arbitrary, but they try to capture that a startup has either already raised quite a bit of money, no longer needs it, or is continuing to successfully raise meaningful amounts. In other words, the company is satisfied with the job its investment vendors have done. Of the 2,285 total customers, 371 have been acquired, 15 have had an IPO and 1,864 currently have an unknown status. Among the “status unknown” segment, 662 companies have raised >$20 million and 739 have raised >$1 million in the last two years (with a fair amount of overlap between the two groups).

The last step is just dividing the number of satisfied customers by total customers for each fund. However, there are a few important limitations in how I’ve defined a satisfied customer.
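The scoring logic above is simple enough to sketch in a few lines of Python. To be clear, this is an illustrative reconstruction, not the author's actual script: the `Startup` fields, fund names and thresholds are placeholders standing in for whatever the CrunchBase export really contains.

```python
from dataclasses import dataclass

@dataclass
class Startup:
    total_raised: float    # dollars raised across all rounds
    exited: bool           # acquired or went public
    raised_recent: float   # dollars raised in the last two years
    investors: list        # investors in its Series A

def is_satisfied(s, total_floor=20e6, recent_floor=1e6):
    """A customer counts as satisfied if it exited, has raised a lot
    in total, or is still raising meaningful amounts."""
    return (s.exited
            or s.total_raised >= total_floor
            or s.raised_recent >= recent_floor)

def satisfaction_scores(startups, min_deals=10):
    """Per-investor share of satisfied Series A customers,
    limited to investors with at least min_deals deals."""
    counts, satisfied = {}, {}
    for s in startups:
        for inv in s.investors:
            counts[inv] = counts.get(inv, 0) + 1
            satisfied[inv] = satisfied.get(inv, 0) + is_satisfied(s)
    return {inv: satisfied[inv] / n
            for inv, n in counts.items() if n >= min_deals}
```

The robustness check later in the piece amounts to calling `is_satisfied` with `total_floor=30e6, recent_floor=5e6` instead of the defaults.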

First, acquisitions are of course not always good outcomes, and some of those 371 sales (a small minority) were likely just asset sales or acqui-hires on poor terms, and are therefore false positives.

Second, the “Kickstarter problem” produces a handful of false negatives. That is to say, there are a bunch of businesses that are so damn profitable that they don’t bother raising additional capital even though they could. With about $14 million in gross-margin dollars in 2012 (up >3X year-over-year), Kickstarter could easily self-fund a doubling of its headcount (to 120) if it wanted. As a result, the company hasn’t raised money since early 2011, preventing its investors from getting credit for its satisfaction.

Lastly, CrunchBase does a poor job of tracking early-stage debt capital raises even though they’re a common way for venture-backed companies to fund operations. More false negatives arise as a result.

VC customer satisfaction score: Results

[Chart: customer satisfaction scores for high-volume investors]

In the chart, note that Foundation Capital participated in the Series A rounds of 12 companies between June 2006 and May 2011, and all of those companies have either been acquired, gone public, raised more than $20 million in total, or raised more than $1 million in the last two years. Congratulations!

The rankings are out of the 93 investors who made at least 10 investments over the five-year period. Overall, 59 percent of startups in the sample qualify as satisfied, so that’s the baseline to beat. High-volume investors appear to perform somewhat better than the norm: only 10 of the 93 scored below the 59 percent overall average, and the group as a whole has an un-weighted average score of 74 percent. In general, though, satisfaction levels don’t meaningfully skew with investment volume; rather, mediocrity is well represented throughout the distribution. For example, the 1,484 investors with just one deal have an average satisfaction score of 57 percent.

Go here to see the full list of the 93 investors and here to see the exhaustive list of investors ordered by number of investments.

To test the funding-level assumptions, I redid the analysis using >$30 million total raised and >$5 million in the last two years (vs. $20 million and $1 million, respectively). Overall satisfaction levels drop to 50 percent and some investors look a bit better or worse, but the general result is the same. Here are the high-volume investor rankings using the higher capital-level methodology.

Holy hell, look at SV Angel and First Round Capital!

More impressive than SV Angel’s and First Round Capital’s mid-80s satisfaction scores and mid-teens rankings is that they have almost twice as many customers as any other investor, with fund sizes that often prevent them from investing their full pro rata amount. So unlike Andreessen Horowitz or Bessemer, which can comfortably fund an entire $40 million round on their own, SV Angel and First Round depend heavily on other VCs to co-invest with them.

Whether they’re amazing at picking startups that end up performing well, helping their companies succeed, or just convincing other VCs to co-invest, who cares? Their customers get the money they need to build their businesses. And they do all this at a massive scale. 

Does this mean it’s better to raise money from Accel than Sequoia?

It’s fun to criticize anyone at the bottom of a ranking. But in truth, the scores for most of these high-volume investors don’t differ by much, and given the sample sizes for each fund, many of the differences might not mean a whole lot. Take Flybridge, for example. If just two more of its 13 customers become dissatisfied, its score drops to 77 percent and its ranking slips to No. 39 (from 93 percent and No. 3, respectively).
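The Flybridge arithmetic is easy to verify, and it shows just how jumpy small-sample percentages are (assuming its reported 93 percent corresponds to 12 of 13 satisfied customers):

```python
# With only 13 customers, each flip moves the score by 1/13, i.e. ~7.7 points.
per_customer_swing = 1 / 13
# Two customers flip from satisfied to dissatisfied: 12/13 becomes 10/13.
score_after_two_flips = 10 / 13
print(f"{per_customer_swing:.1%} per customer")   # 7.7% per customer
print(f"{score_after_two_flips:.0%}")             # 77%, matching the article
```

The same arithmetic explains why single-deal investors cluster at 0 or 100 percent: with one customer, the score can only be one or the other.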

That said, what about Accel vs. Sequoia? Accel comes in at No. 11 in the rankings with a score of 88 percent, whereas Sequoia’s at No. 65 with a score of 68 percent. If eight of Sequoia’s 40 customers flipped to satisfied, it would tie Accel. That delta could mean any number of things. Maybe it’s margin of error in a very volatile industry, maybe Sequoia makes more speculative bets because it prefers that return profile, or maybe it really is quicker to cut loose companies that are going sideways. Said differently, some of those explanations would be quite unfriendly to founders, others irrelevant.

At the very least, isn’t this a track record you as a prospective customer want to understand? How have they treated the nine out of 10 customers they won’t make real money on? In picking an investor, how much brand prestige would you be willing to trade for an extreme commitment to getting you what you care most about?

Enough with the McFlurry promotions and misdirection. Make VCs compete on their equivalents of pricing, food quality and customer service.