How to define your North Star metric

2 min read
Last updated March 30, 2026

Use this when: You have data flowing and dashboards running, but no clear metric hierarchy. The team tracks 15 things, argues about which ones matter, and every meeting turns into a debate about what "good" looks like.

You're done when: You have one North Star metric, 3-5 supporting input metrics, targets for each, and a dashboard the entire team looks at weekly.

Value Delivery, Not Revenue

Revenue is tempting as a North Star, but it is a lagging indicator: by the time revenue drops, the underlying problem started weeks or months earlier. Revenue tells you the patient is sick. It does not tell you why, and it does not give you time to intervene.

The North Star should measure value delivery directly. For Spotify, time spent listening captures whether users are getting the value they came for. If listening time goes up, subscriptions follow. If listening time drops, cancellations follow, but with a delay. The listening metric gives them weeks of lead time that a revenue metric does not.

The same logic applies at any scale. If you run a project management tool, your North Star might be "projects with at least 3 collaborators active this week." That metric captures the core value: teams using your tool together. If it goes up, retention and revenue will follow. If it drops, you have early warning.
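To make that concrete, here is a minimal sketch of computing such a metric from a raw activity log with pandas. The table shape and column names (project_id, user_id, ts) are assumptions for illustration, not a real schema; swap in whatever your event pipeline actually produces.

```python
import pandas as pd

# Hypothetical activity log: one row per collaborator action.
# Column names are illustrative, not a real schema.
events = pd.DataFrame({
    "project_id": ["p1", "p1", "p1", "p2", "p2", "p1"],
    "user_id":    ["u1", "u2", "u3", "u1", "u1", "u1"],
    "ts": pd.to_datetime([
        "2026-03-23", "2026-03-24", "2026-03-25",
        "2026-03-24", "2026-03-26", "2026-03-26",
    ]),
})

# Restrict to one week of activity.
week_start = pd.Timestamp("2026-03-23")
week = events[(events.ts >= week_start) &
              (events.ts < week_start + pd.Timedelta(days=7))]

# Distinct active collaborators per project this week.
collaborators = week.groupby("project_id")["user_id"].nunique()

# North Star: projects with at least 3 collaborators active this week.
north_star = (collaborators >= 3).sum()
print(north_star)  # -> 1 (only p1 qualifies)
```

The point of the sketch is how little machinery the metric needs: one grouped count over one event stream, which is exactly what makes it cheap to put on a weekly dashboard.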

Revenue belongs in your investor dashboard. Your operating dashboard needs a metric your team can influence directly through the product they build every day. The question to ask: if a product change improves this metric but has no immediate revenue impact, would we still ship it? If the answer is yes, you have a good North Star.

Example

A scheduling platform for service businesses tracked MRR, total bookings, and NPS. All three went into the weekly meeting and all three sparked different conversations. They ran each through the filter: does it measure value delivery, is it leading, and is it within the team's influence? MRR fails the "leading" test (it lags). NPS fails the "within influence" test (it moves slowly and reflects everything, not just product). Total bookings passes all three: it captures value delivery, it is leading (a drop in bookings predicts churn weeks later), and it is within the team's influence (better scheduling UX, reminders, and integrations directly increase bookings).

They chose "weekly completed bookings per active account" as the North Star. Input metrics: new accounts activated, booking flow completion rate, reminder open rate, and repeat booking rate. Within one quarter, the team stopped debating which metric to focus on. Every feature proposal got evaluated against one question: does this increase weekly completed bookings per account? The scheduling UX team improved booking flow completion from 64% to 78%. The integrations team added calendar sync, which increased repeat bookings by 23%. MRR grew 40% that quarter, but nobody tracked MRR as the primary goal. They tracked value delivery, and revenue followed.
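As a sketch of what their North Star computation might look like, here is one way to derive "weekly completed bookings per active account" with pandas. The bookings table, the status values, and the reading of "active account" as any account with booking activity that week are all assumptions for illustration; the article does not specify the schema or the activity definition.

```python
import pandas as pd

# Hypothetical bookings export; column names and statuses are illustrative.
bookings = pd.DataFrame({
    "account_id": ["a1", "a1", "a2", "a2", "a2", "a3"],
    "status":     ["completed", "completed", "completed",
                   "canceled", "completed", "completed"],
    "ts": pd.to_datetime([
        "2026-03-23", "2026-03-25", "2026-03-24",
        "2026-03-24", "2026-03-26", "2026-03-27",
    ]),
})

# Restrict to one week of bookings.
week_start = pd.Timestamp("2026-03-23")
in_week = bookings[(bookings.ts >= week_start) &
                   (bookings.ts < week_start + pd.Timedelta(days=7))]

completed = in_week[in_week.status == "completed"]

# "Active account" here means any account with booking activity this week;
# this is one reasonable reading, not the platform's actual definition.
active_accounts = in_week["account_id"].nunique()

# North Star: weekly completed bookings per active account.
north_star = len(completed) / active_accounts
print(round(north_star, 2))  # 5 completed / 3 active accounts -> 1.67
```

The input metrics decompose the same data along one dimension each (completion rate from the status column, repeat rate from per-account counts), so the whole dashboard can typically be built from the same table the North Star uses.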

Written with ❤️ by a human (still)