🧱 argil.io

Should I trust this data or investigate further?

3 min read
Last updated March 30, 2026

You're here when: A dashboard shows something surprising. A big spike. A sudden drop. A counterintuitive trend. Someone is about to call a meeting. You need to decide whether to act on it or dig deeper first.

The Heuristic

Surprising data is either an insight or an error. Before reacting, run a quick sanity check. The more surprising the number, the more likely it's a data problem rather than a real change.

  • How surprising is the change? A 5% shift might be normal variance. A 50% spike almost never is. The magnitude of surprise should determine your investigation effort. Small moves deserve a glance. Large moves deserve 30 minutes of digging before you tell anyone.
  • Was anything deployed or changed recently? Check the deploy log, the feature flag dashboard, and the marketing calendar. The most common cause of surprising data is a code change that broke tracking, a new feature that shifted user behavior, or a marketing campaign that spiked traffic from a different segment.
  • Does the pattern appear across multiple segments? If signups doubled but only from one country, one device type, or one referral source, that's a targeted signal. If it doubled uniformly across everything, it's more likely a tracking issue or a bot attack.
  • Does it pass basic sanity checks? Is the number physically possible? Is the order of magnitude right? Does the direction make sense given what you know about the product? A dashboard showing 200% conversion rate is a bug, not an insight.
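The bullets above can be sketched as a quick plausibility filter. This is a minimal illustration, not a standard tool; the function name, thresholds, and metric values below are assumptions chosen to mirror the heuristic.

```python
# Minimal plausibility filter for a surprising metric change.
# Thresholds (5% glance, 50% dig) mirror the heuristic above and are
# illustrative, not industry standards.

def sanity_flags(old_value, new_value, is_rate=False):
    """Return a list of reasons to distrust a surprising change."""
    flags = []
    # Physically impossible values: a rate can't exceed 100%.
    if is_rate and not (0.0 <= new_value <= 1.0):
        flags.append("rate outside [0, 1] -- almost certainly a tracking bug")
    # Magnitude of surprise scales the investigation effort.
    if old_value > 0:
        change = abs(new_value - old_value) / old_value
        if change >= 0.5:
            flags.append("50%+ move -- spend 30 minutes before telling anyone")
        elif change >= 0.05:
            flags.append("5%+ move -- worth a glance")
    return flags

# A "conversion rate" of 200% fails both checks at once.
flags = sanity_flags(old_value=0.03, new_value=2.0, is_rate=True)
```

An empty list here means the number passed the cheap checks, not that it is trustworthy; it just hasn't failed yet.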

Decision Tree
(Decision tree diagram)


Quick Example

A SaaS startup saw its signup conversion rate jump from 3% to 8% overnight. The growth lead drafted a Slack message celebrating the change and crediting a landing page update from the previous week. Before sending it, an engineer checked the raw data. A deploy the previous evening had introduced a bug that fired the signup event twice for some users. The "conversion rate increase" was double-counted events. The actual rate hadn't changed. The entire celebration would have been based on a bug.
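The double-fired-event bug in this story is cheap to detect: compare total event counts to distinct users. A sketch, with hypothetical event records and field names:

```python
# Sketch: detect double-fired events by comparing total signup events
# to distinct users. The event records below are made up for illustration.
from collections import Counter

events = [
    {"user": "u1", "event": "signup"},
    {"user": "u1", "event": "signup"},  # duplicate fired by the buggy deploy
    {"user": "u2", "event": "signup"},
    {"user": "u2", "event": "signup"},  # duplicate
    {"user": "u3", "event": "signup"},
]

per_user = Counter(e["user"] for e in events if e["event"] == "signup")

# Healthy tracking keeps this ratio near 1.0; a jump after a deploy
# points at double-counting, not real growth.
events_per_user = len(events) / len(per_user)

duplicated_users = sorted(u for u, n in per_user.items() if n > 1)
```

If `events_per_user` moved from ~1.0 to ~1.7 the night of a deploy, the "conversion spike" is the tracking layer, not the users.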

The 5-Minute Sanity Check

Before reacting to any surprising number, run this routine. It takes five minutes and prevents most false alarms.

  1. Check the timeframe. Is the change visible over multiple days, or is it a single-day anomaly? Single-day spikes are usually noise or incidents.
  2. Check the source. Filter by channel, device, country, or user segment. If the change is isolated to one segment, investigate that segment specifically.
  3. Check the deploy log. Did anything ship in the 24-48 hours before the change appeared? Code changes are the most common cause of data anomalies.
  4. Check the denominator. If a rate metric changed, check both the numerator and denominator separately. A conversion rate "increase" might actually be a traffic drop (smaller denominator) rather than more conversions.
  5. Check a known-good baseline. Look at a metric you know hasn't changed (total server requests, database row counts, payment processor data). If a metric that should be stable also looks weird, the problem is in the data layer, not in user behavior.
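Step 4 is the one people skip most often, so here is a sketch of it: decompose a rate change into its numerator and denominator moves. The numbers are hypothetical.

```python
# Sketch of step 4 (check the denominator): split a rate change into
# numerator and denominator moves. All figures are hypothetical.

def decompose_rate_change(conv_before, visits_before, conv_after, visits_after):
    """Report whether a rate 'improvement' came from more conversions
    (numerator up) or from less traffic (denominator down)."""
    return {
        "rate_before": conv_before / visits_before,
        "rate_after": conv_after / visits_after,
        "numerator_change": conv_after - conv_before,
        "denominator_change": visits_after - visits_before,
    }

# Conversions flat, traffic halved: the rate "doubles" with zero new signups.
report = decompose_rate_change(conv_before=30, visits_before=1000,
                               conv_after=30, visits_after=500)
```

A rate that doubled while `numerator_change` is zero is a traffic problem wearing a growth costume.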

If it survives all five checks, it's probably real. React accordingly.

The Anti-Pattern

The Premature Insight. Seeing a spike, Slacking the team, calling an all-hands, and proposing strategy changes, only to discover two hours later that it was bot traffic, a tracking bug, or a timezone shift in the analytics tool. Every false alarm costs credibility. After three of these, the team stops trusting data entirely, and that's harder to fix than any tracking bug. The rule is simple: investigate first, announce second. Surprises in data should trigger curiosity, not press releases.


Written with ❤️ by a human (still)