You're here when: You have some data pointing in a direction, but the sample is small, the signal is noisy, and you're not sure if acting now is confidence or recklessness.
The Heuristic
The question isn't "is my data statistically significant?" It's "what happens if I'm wrong?"
- Reversible decisions don't need much data. If you can undo it in a day, decide now. The cost of waiting almost always exceeds the cost of being wrong.
- Irreversible decisions earn their analysis. Signing a 12-month vendor contract, killing a product line, or making a key hire: these deserve the extra week of data collection.
- The cost of delay is real but invisible. Every week you spend gathering more data is a week you're not acting. That has a price too, even if nobody puts it on a dashboard.
- Directional confidence beats statistical significance. If 7 out of 10 users in your sample hit the same wall, you don't need 10,000 users to confirm it. Patterns at small scale are often real.
Decision Tree
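The tree can be sketched as a small function. The thresholds, names, and return strings here are illustrative assumptions, not part of the framework itself:

```python
# A sketch of the heuristic above. The 24-hour cutoff and the wording of
# the recommendations are invented for illustration.

def should_act_now(reversible: bool, hours_to_undo: float,
                   directional_signal: bool) -> str:
    """Decide whether to act now or gather more data."""
    if reversible and hours_to_undo <= 24:
        # Two-way door: the cost of waiting exceeds the cost of being wrong.
        return "act now"
    if directional_signal:
        # One-way door, but the pattern is clear: bound the analysis.
        return "spend a bounded week validating, then decide"
    # One-way door and no clear signal: more data earns its keep here.
    return "gather more data"

# The onboarding fix from the Quick Example: a two-hour, reversible
# change with a clear directional signal.
print(should_act_now(reversible=True, hours_to_undo=2,
                     directional_signal=True))
```

The point of encoding it at all: reversibility is checked first, signal strength second, and "gather more data" is the fallback, not the default.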
Quick Example
A B2B SaaS team noticed that 4 out of 12 trial users abandoned during the onboarding flow at the same step. The PM wanted to wait for 100 trials before acting. The engineer pointed out that fixing the step was a two-hour change, fully reversible. They shipped the fix that afternoon. Trial completion jumped from 60% to 78% over the next month. The "statistically insignificant" signal was real, and waiting would have cost them weeks of lost conversions.
The Asymmetry Framework
Jeff Bezos frames this as Type 1 vs. Type 2 decisions. Type 1 decisions are one-way doors: irreversible, or nearly so. These deserve careful analysis. Type 2 decisions are two-way doors: you can walk back through them if things go wrong.
The risk is treating Type 2 decisions like Type 1: slowing down, forming committees, requesting more data, and running longer tests when the actual cost of being wrong on a reversible change is trivial compared to the cost of not deciding for three weeks.
Douglas Hubbard takes this further. In most business decisions, the value of additional information is lower than people assume. His research shows that for the majority of decisions, you already have enough information to act; you just don't feel like you do. The feeling of uncertainty and the actual risk of being wrong are different things, and calibrating the difference is a skill worth building.
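Hubbard's point can be made concrete with a toy expected-value-of-information calculation. All numbers below are invented for illustration; this is a sketch of the standard EVPI idea, not a figure from his research:

```python
# Toy illustration: the expected value of perfect information (EVPI) is
# often small relative to the cost of delay. Numbers are invented.

def evpi(p_success: float, payoff_if_right: float,
         loss_if_wrong: float) -> float:
    """EVPI for a binary go/no-go call.

    Without information, you pick the action with the higher expected
    value. With perfect information, you act only when it pays off.
    EVPI is the gap between the two, i.e. the most any amount of
    additional data could possibly be worth.
    """
    # Expected value of acting on what you already know:
    ev_act = p_success * payoff_if_right - (1 - p_success) * loss_if_wrong
    ev_best_blind = max(ev_act, 0.0)  # 0.0 = do nothing
    # With perfect information, you capture the upside and avoid the loss:
    ev_informed = p_success * payoff_if_right
    return ev_informed - ev_best_blind

# A reversible change: 70% chance of a $10k win, $1k to undo if wrong.
print(evpi(0.7, 10_000, 1_000))  # → 300.0
```

Here even perfect information is worth at most $300, so a week of data collection that costs more than that in delayed conversions is a losing trade. That asymmetry is why "we don't have enough data yet" is so often the wrong instinct on reversible calls.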
The Anti-Pattern
Analysis Paralysis. The team waits four weeks for statistical significance on a button color change that could be reverted in five minutes. Meanwhile, three actual problems go unaddressed because "we don't have enough data yet." The irony: the decisions that get over-analyzed are usually the least consequential ones. The big, irreversible calls get made on gut feel because there's never enough data for those anyway.
Written with ❤️ by a human (still)