The skill: A metrics review is a recurring meeting where the team looks at key numbers, spots what changed, and decides what to do about it. Done well, it's 30 minutes that align the whole team. Done poorly, it's a ritual that makes everyone feel busy without learning anything.
In a Nutshell
- The format: 5-5-15-5. Five minutes for context (what happened this week). Five minutes walking through the metrics. Fifteen minutes on anomalies and discussion. Five minutes for action items.
- Prep before the meeting, not during it. The owner populates the numbers, flags anomalies, and writes a one-paragraph summary before anyone walks into the room.
- 5-8 metrics is the ceiling. If you're reviewing more than that, you're skimming, not analyzing. Pick the ones that reflect your current priorities.
- Every number needs context. Show prior period, target, and trend. "Revenue was $120K" means nothing. "Revenue was $120K, up 8% from last month, 5% below target" tells a story.
- End with action items or kill the meeting. If three reviews in a row produce zero actions, the meeting isn't working. Either the metrics are wrong or the team isn't empowered to act on what they see.
- Rotate the prep owner. This builds data literacy across the team. The person who pulls the numbers learns more about the data than anyone else in the room.
- Anomalies are the whole point. The meeting isn't a victory lap through green numbers. It's a diagnostic session. What moved? Why? Is it a one-time blip or a trend?
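The "every number needs context" rule above can be sketched as a tiny helper. This is a hypothetical function, not part of any particular tool; the numbers are chosen to reproduce the revenue example from the bullet list:

```python
def metric_line(name: str, current: float, prior: float, target: float) -> str:
    """Format one metric with the context a review needs:
    change vs. the prior period and distance from target."""
    change = (current - prior) / prior * 100
    vs_target = (current - target) / target * 100
    direction = "up" if change >= 0 else "down"
    position = "above" if vs_target >= 0 else "below"
    return (f"{name}: ${current / 1000:.0f}K, "
            f"{direction} {abs(change):.0f}% from last period, "
            f"{abs(vs_target):.0f}% {position} target")

# "Revenue was $120K" becomes a sentence that tells a story:
print(metric_line("Revenue", 120_000, 111_111, 126_316))
# Revenue: $120K, up 8% from last period, 5% below target
```

The prep owner can generate lines like this before the meeting, so the room spends its fifteen discussion minutes on anomalies instead of arithmetic.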
The Three Anti-Patterns
The stare-at-dashboard meeting. Someone shares their screen, opens Metabase, and scrolls through charts for 45 minutes. Nobody prepared. Nobody knows what to look at. People nod along, check their phones, and leave having learned nothing. This happens when there's no prep owner and no agenda.
The blame meeting. A number is down. The room spends 30 minutes figuring out whose fault it is. The metric doesn't recover because the team is arguing about attribution instead of fixing the problem. This happens when metrics are tied to individual performance instead of team outcomes. Fix it by framing every anomaly as "what happened and what do we do" instead of "who caused this."
The everything-is-fine meeting. Numbers are flat or slightly up. Nobody digs deeper. The meeting ends with "looks good, same time next week." Meanwhile, a cohort retention problem is quietly compounding underneath stable topline numbers. This happens when the review only looks at aggregate metrics. Fix it by always including at least one leading indicator or segmented view that can surface problems before they hit the topline.
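The everything-is-fine failure mode is easiest to see with numbers. Below is a minimal sketch with invented illustrative data: the aggregate topline climbs every month, while the segmented view (month-1 retention per signup cohort) shows each cohort retaining worse than the last:

```python
# Hypothetical revenue by signup cohort: cohorts[i][k] is revenue from
# cohort i in its k-th month of life. Illustrative numbers, not real data.
cohorts = [
    [100, 90, 81, 73],  # Jan cohort: 90% month-1 retention
    [100, 85, 72],      # Feb cohort: 85%
    [100, 78],          # Mar cohort: 78%
    [100],              # Apr cohort
]

# Aggregate view: total revenue per calendar month (cohort i starts in month i).
topline = [
    sum(c[m - i] for i, c in enumerate(cohorts) if 0 <= m - i < len(c))
    for m in range(4)
]
print(topline)  # [100, 190, 266, 323] -- up and to the right, "looks good"

# Segmented view: month-1 retention per cohort, the leading indicator.
m1_retention = [c[1] / c[0] for c in cohorts if len(c) > 1]
print(m1_retention)  # [0.9, 0.85, 0.78] -- quietly compounding problem
```

New signups keep the topline growing, which is exactly why the review needs at least one segmented or leading metric: the aggregate can stay green for months while the underlying cohorts decay.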
Do's and Don'ts