product
March 20

Where the Successful-Startups-Move-Fast Mindset Comes From

In one of the teams I worked with some time ago, I had a couple of disputes about hypotheses, analytics, research, and how all of these shape better experiment-based decisions. My point was “proven” statistically: form a hypothesis, outline its metrics, develop, launch, and if p < α (success), roll it out. The problem is that I had not considered that this team was working in an early-stage startup. A few months later, I started to see the flaw in my arguments, hiding in the difference in how startups and big businesses answer the question “Why do an experiment?”. And it all comes down to how much everybody can lose.
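That “p < α, roll it out” rule can be sketched in a few lines. A minimal, hypothetical example using a classic two-proportion z-test (all the numbers here are made up for illustration):

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z-score and two-sided p-value for the difference
    between two conversion rates (pooled standard error)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value via the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# invented experiment: 5.0% vs 5.6% conversion, 10k users per arm
z, p = two_proportion_z_test(500, 10_000, 560, 10_000)
alpha = 0.05
print(f"z = {z:.2f}, p = {p:.4f}, roll out: {p < alpha}")
```

Note that even a visible-looking lift can fail the p < α bar, which is exactly why this machinery needs large samples to say anything at all.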

Let’s start with the big businesses' mindset, as it is what everybody else talks about when the conversation turns to this subject.

Actually, that’s exactly why I’ve written this text. There is not much material about the reasoning behind “why to move fast”, and too much about how to conduct A/B tests, how to calculate statistics, and how that helps grow your business.

How It Works for Big Guys

Big companies have resources: money, people, time. And they have things like PMF, a big user base to test on, and working product(s) to play with. What can they do with these boons (or burdens, depending on the perspective)? Two options:

  1. start something new, hoping to catch a bigger fish
  2. hone what they already have, gradually increasing profit and/or decreasing costs.

The first option is best done concurrently with the second one. Otherwise, it would be too shortsighted to throw away a working product chasing something else (most of the time, hallucinations). Anyway, 99% of this work stays in labs (or whatever they call them).

Going with the second option of honing what you have is way safer for corporations, as they have too much to lose. The trick here is that all the experiments are predictable in terms of methodology (described in the first paragraph). Yes, it’s still plenty of work using and combining different statistical methods (bootstrapping, CUPED, switchback experiments, and who knows what custom-made solutions big research teams use). The point is that no matter the direction you choose, you will have precise predictions you can calculate and model.
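For a taste of that toolbox, here is one of those methods, a percentile bootstrap, in miniature (the revenue numbers below are invented):

```python
import random
from statistics import mean

def bootstrap_ci(sample, n_resamples=2000, alpha=0.05, seed=42):
    """Percentile bootstrap: resample with replacement, compute the
    statistic each time, and take the alpha/2 tails as the CI."""
    rng = random.Random(seed)
    stats = sorted(
        mean(rng.choices(sample, k=len(sample)))
        for _ in range(n_resamples)
    )
    return (stats[int(n_resamples * alpha / 2)],
            stats[int(n_resamples * (1 - alpha / 2)) - 1])

# invented revenue-per-user sample with a few heavy spenders
revenue = [5, 7, 4, 6, 5, 8, 120, 6, 5, 7, 9, 4, 95, 6, 5, 7]
lo, hi = bootstrap_ci(revenue)
print(f"mean = {mean(revenue):.1f}, 95% CI = [{lo:.1f}, {hi:.1f}]")
```

The appeal for big companies is exactly this predictability: you get an interval you can plan around, even for ugly, skewed metrics like revenue per user.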

How It Goes for Startups

It’s an absolutely different story with small startups, which carry burdens like these:

  • restricted resources (money, people, time)
  • a poorly working product
  • no user base.

Newly founded startups roam in darkness, attempting to find their PMF or to fix a shitty onboarding that scares users away. Face the truth: there is almost nothing to hone gradually. But this is the place to find something new that works. Moreover, if your biggest startup problem is raising the landing-page conversion by 0.5%, you are a damn genius (or a fool). More realistically, you have a 1% Sign-Up → Payment conversion, but you seek 35%. That’s a lot of work, and it sounds like an entirely different product (or part of one) to come up with. And coming up with a new product, achieving PMF, or launching a startup at all is not nearly as predictable as the A/B tests conducted by established businesses. You can’t model it in advance.

Math Base (No Formulas, I Promise)

Without going deep into the math of this difference, let’s introduce just one concept: probability distributions. Yes, we all know there are normal, Bernoulli, and exponential ones. Many of them are used in experiments, and many statistical tests (like the famous t-test) were created for those distributions. They are pretty useful for calculating (predicting) how your experiment could go and end. All the math behind A/B tests and questionnaires is based on those distributions.

But unfortunately, not many people know and use the so-called Cauchy distribution. You know why? Because it’s unpredictable, or “pathological” in math terms. This distribution has wider “tails”, which means more unexpected events can occur (way more than under a normal distribution). Therefore, if we assume that “coming up with a new product, achieving PMF, or launching a startup” follows a Cauchy distribution, the more big experiments you do, the faster you will succeed (a success is equal to landing in a “tail”). Much faster than by conducting typical A/B tests on your landing page, attempting to improve what wasn’t working in the first place.
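A quick simulation makes the difference in “tails” concrete. Drawing 100,000 values from a standard normal and a standard Cauchy (sampled via its inverse CDF) and counting how often they land more than 3 scale units from the center:

```python
import math
import random

def tail_rate(draw, threshold=3.0, n=100_000, seed=7):
    """Share of draws that land further than `threshold` from zero."""
    rng = random.Random(seed)
    return sum(abs(draw(rng)) > threshold for _ in range(n)) / n

# standard normal: mean 0, standard deviation 1
normal_rate = tail_rate(lambda r: r.gauss(0, 1))
# standard Cauchy via the inverse CDF: tan(pi * (U - 1/2)), U ~ Uniform(0, 1)
cauchy_rate = tail_rate(lambda r: math.tan(math.pi * (r.random() - 0.5)))

print(f"normal: {normal_rate:.4f}, Cauchy: {cauchy_rate:.4f}")
```

For the normal, roughly 0.3% of draws fall beyond 3 sigma; for the Cauchy, about 20% fall beyond 3 scale units. That gap is the whole “fat tails” argument in one number.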

Of course, you can use more of the big guys’ statistical methods to run experiments in a startup and be more “data-driven”. But it slows you down a lot. The only way to succeed is to release as fast as possible to find a working solution with a big impact. And “big” here is the key to making sure you are doing it right. If you see no radical changes (assuming you need them in a startup), you are doing it wrong. Because 0.5%–1% won’t move the needle. Make it +20%. How? Launch 10 versions, and one of them will shine almost immediately (with no “we still have 23 days until the experiment ends”). Do not polish to no avail: if it’s not working, do it another way. It’s faster.
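The “launch 10 versions” bet is just independent trials. Assuming, purely for illustration, that each bold version has some fixed chance of a big win, the probability that at least one of them hits grows quickly:

```python
def p_at_least_one_hit(p_single, n_launches):
    """P(at least one success in n independent tries) = 1 - (1 - p)^n."""
    return 1 - (1 - p_single) ** n_launches

# assumed 20% chance of a big win per version (a made-up number)
for n in (1, 5, 10):
    print(f"{n:>2} launches -> {p_at_least_one_hit(0.2, n):.0%} chance of a hit")
```

With ten launches the odds of a hit are nearly 90%, versus 20% for a single carefully polished one, which is the whole case for shipping many bold versions instead of one.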

Speaking of why big companies do not do it the startup way, I see a few reasons:

  • investors’ expectations are different: they will give you no money if you risk too much
  • small increases and optimizations can bring millions, while big failures can take everything you’ve earned
  • they do run such experiments, but for early-stage ideas, new products to enter other markets, and new experimental features via MVPs.

Conclusion

I don’t push you to go study Wikipedia for more in-depth learning (maybe I only nudge a little). You just need to see why the difference exists. And that’s exactly why only fast-moving startups survive.