There's been a lot of debate recently about the quality of startups that go through multiple accelerators. It all kicked off with Sam Altman's post Getting Into YCombinator, which discouraged the practice. Responses included Dave McClure's Plays Well with Others and Jason Calacanis's Incubator Hopping.
"We now have enough data to know that the track record of companies that go through multiple accelerators is much worse than companies that just do YC." - sama
Unfortunately, apart from the statement above (with undisclosed data), much of this debate was driven by personal opinion rather than firm evidence. I've done a fair amount of work analyzing startup data, both for fun and profit, so I naturally wanted a better answer. I started by pulling together two data sets on seed accelerators: one a personal set I've been building for a while, the other Jed's excellent Seed-DB. Then I started crunching them.
After filtering out edge cases, such as startups going through multiple cohorts of the same accelerator and startups that have gone through three or more accelerators, I was left with a data set of 132 startups that have been through two accelerators (16 of which went through YC after another accelerator).
Out of those 132 there are two clear successes (>$100m valuation): Sphero (which has raised around $100m in investment) and GrabCAD (acquired for ~$100m). PagerDuty (~$40m raised) likely falls into this group as well. Roughly 1-2% of all accelerator alumni end up at >$100m, so from this initial analysis going through multiple accelerators doesn't appear to be a strong negative signal.
However, the problem with this approach is that it doesn't adjust for cohort age (older companies naturally tend to be larger) or, more importantly, for the quality of the accelerator: we'd expect the alumni of top-tier accelerators to do significantly better than those of an average accelerator in any case. Another issue is that we're dealing with a tiny handful of companies, so the numbers can very easily be skewed.
I got around these problems by creating a comparison dataset that reasonably represents a group similar to our "double accelerator" group. For each startup in the group I randomly picked another startup from the same cohort of the same accelerator, to control for that effect, and added it to the comparison group. I then generated several hundred such comparison groups so I could measure the variance we'd expect from random luck alone. Finally, I segmented the groups into bands by how much money they had raised (using that as a proxy for progress) and measured the percentage of startups that fell into each band.
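The resampling procedure above can be sketched as follows. This is a minimal illustration, not my actual modelling code: the cohort data, fundraising figures, and band boundaries here are made up for the example.

```python
import random
from collections import Counter

# Hypothetical data: total raised (USD) for each startup, keyed by
# (accelerator, cohort). Figures are illustrative only.
cohorts = {
    ("AccelA", "2013S"): [1.2e6, 0.0, 5.0e6, 25e6, 0.5e6],
    ("AccelB", "2014W"): [0.0, 2.0e6, 40e6, 0.3e6],
}

# The cohort each "double accelerator" startup belongs to, so its random
# peer is drawn from the same cohort of the same accelerator.
double_group = [("AccelA", "2013S"), ("AccelB", "2014W")]

# Fundraising bands used as a proxy for progress (boundaries are assumptions).
BANDS = [(0, 1e6), (1e6, 5e6), (5e6, 20e6), (20e6, float("inf"))]

def band_fractions(raises):
    """Fraction of startups whose total raised falls into each band."""
    counts = Counter()
    for r in raises:
        for i, (lo, hi) in enumerate(BANDS):
            if lo <= r < hi:
                counts[i] += 1
                break
    return [counts[i] / len(raises) for i in range(len(BANDS))]

def sample_comparison_group(rng):
    """Draw one random same-cohort peer per double-accelerator startup."""
    return [rng.choice(cohorts[key]) for key in double_group]

# Several hundred resampled groups give the spread expected by chance,
# against which the real double-accelerator group can be compared.
rng = random.Random(42)
comparison = [band_fractions(sample_comparison_group(rng)) for _ in range(500)]
```

The per-band distribution across the 500 resampled groups is what the boxplots summarize; the real group's band fractions are then overlaid on top of that spread.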
The green diamonds show how the double accelerator group performed against the boxplots for startups going through their first accelerator. Not only is a second accelerator not a negative signal: overall, companies that go through multiple accelerators tend to do meaningfully better than average at raising under $20m, and the same as average above that (as these cohorts mature I wouldn't be surprised to see the >$20m raises also move above average).
While startups going through a second accelerator might do better in general, the same might not apply to YC; after all, YC is unusual among accelerators. So I reran my analysis using only the 16 YC companies, comparing them against their own cohorts.
While there are some differences, we again see that startups for whom YC is their second accelerator tend to be more likely to raise at each band than the average YC company.
In practice almost all of an accelerator's returns come from a handful of unicorns, and we can't know at this stage whether any of these companies will become one, but based on the early data there's no evidence that unicorns won't come out of this group.
This comes with the proviso that YC almost certainly has better data on its portfolio than I do (I'm likely missing a number of YC companies that have been through other accelerators), but I'd encourage YC, and any other accelerator looking at this, to make sure they're analyzing their data in a reasonable way. I'm more than happy to share my modelling code with any accelerator that would find it useful.
For reference, the 16 companies were Bagaveev Corporation, Final, CribSpot, Labdoor, Chariot, Seva Coffee, Nomiku, Valor Water Analytics, Leada, Plate Joy, uBiome, FlightCar, Vayable, chute, MarketBrief and PagerDuty.