Lester Leong
How to Choose a North Star Metric (A Framework That Actually Works)
The Problem with Most North Star Metrics
Eight out of ten companies I have worked with began our engagement with a North Star metric that was unmeasurable, uninfluenceable, or measuring the wrong thing entirely. The pattern is remarkably consistent. Someone reads a blog post about how Airbnb uses "nights booked" or how Facebook used "10 friends in 7 days," the leadership team spends an afternoon picking their version of that, and the metric ends up on a dashboard that nobody uses to make decisions.
The metric fails because choosing a North Star is a strategy decision dressed up as a measurement problem. You cannot pick the right metric without first being precise about what stage your business is in, what lever actually drives retention, and what your team can realistically influence in the next 90 days.
I have helped roughly 20 companies through this exercise across SaaS, marketplaces, fintech, and content platforms. The framework below is what I use every time. It comes from watching teams adopt a metric, try to operationalize it, and either succeed or fail based on whether the metric met five specific criteria.
The Five Criteria
A North Star metric must pass all five of these tests. Failing any single one makes it operationally useless, regardless of how strategically elegant it sounds.
1. It measures value delivered, not activity performed. Page views, logins, and sessions measure what users do inside your product. They do not measure whether users got what they came for. Your North Star must capture the moment where the user receives the value your product promises. For a fintech lending platform, that is funded loans, not applications started. For a content platform, that is content consumed to completion, not articles opened.
2. It is a leading indicator of revenue. If your metric goes up this month but revenue does not follow within one to three months, the metric is not measuring what matters. This is the test that kills most vanity metrics. Monthly active users can increase while revenue stays flat if the new users are low-intent or in a free tier with no conversion path. The North Star must have a demonstrable, quantifiable relationship with revenue. When I was at a financial social media startup, we tracked "users who shared a portfolio insight with at least one connection." That metric correlated with 90-day retention at 0.74. MAU correlated at 0.31. When we pitched acquirers, the insight-sharing metric told a retention story that MAU never could. It directly influenced the acquisition narrative.
3. The team can influence it within one sprint cycle. If your North Star is annual recurring revenue, your product team cannot run an experiment and see the result in two weeks. The metric is too lagging. A good North Star sits in the middle of the causal chain: upstream enough that teams can move it with targeted interventions, downstream enough that it actually predicts the business outcome you care about. Weekly or biweekly measurability is the target.
4. It is a single number, not a composite. I have seen teams create weighted index scores combining engagement, retention, and satisfaction into a proprietary "health score." These composites are impossible to decompose when they move. Did the score drop because engagement fell, because retention fell, or because the weights are wrong? A North Star must be a single, unambiguous count or rate. "Weekly users who complete a core action" is a metric. "Engagement health index" is a dashboard decoration.
5. It applies across the entire user base. If your metric only captures one segment (enterprise but not SMB, or buyers but not sellers on a marketplace), it will systematically bias your roadmap toward that segment. The North Star should be decomposable by segment for analysis, but the top-level metric must represent the full business.
The Decision Tree
When I run this exercise with a client, the first question is always about business model. The business model determines which category of value delivery matters most, and that narrows the candidate metrics dramatically.
SaaS (subscription revenue): The core value exchange is ongoing utility. Users pay monthly because the product continues to solve a recurring problem. The North Star should measure repeated value extraction, not initial adoption. Start with: "Users who performed [core value action] at least [frequency threshold] times in the past [measurement window]."
Marketplace (transaction revenue): The core value is successful matching. Both sides of the marketplace must receive value, or one side churns and the marketplace collapses. The North Star should measure completed transactions, not listings or signups. Start with: "Transactions completed per [time period] where both parties [signal of satisfaction]."
Fintech (financial outcome): The core value is a financial result for the user. The North Star should measure the financial outcome delivered, not the financial activity initiated. Start with: "[Financial outcomes delivered] per [time period]." This is where many fintech teams go wrong. They track applications, not approvals. Deposits, not returns. The activity metric is seductive because it is larger, but the outcome metric is what retains users.
Content/media (attention and engagement): The core value is content that is worth the user's time. The North Star should measure content consumption that indicates genuine value, not passive exposure. Start with: "Users who [completed/engaged deeply with] at least [threshold] pieces of content in [time period]."
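Each of these templates reduces to the same computation: count the users who cross an action threshold inside a time window. Here is a minimal sketch of the SaaS version, against a hypothetical event log of (user, action, date) tuples; the action name and numbers are illustrative, not from any client:

```python
from collections import Counter
from datetime import date, timedelta

# Hypothetical event log: (user_id, event_name, event_date).
EVENTS = [
    ("u1", "report_created", date(2024, 3, 1)),
    ("u1", "report_created", date(2024, 3, 4)),
    ("u1", "report_created", date(2024, 3, 6)),
    ("u2", "report_created", date(2024, 3, 2)),
    ("u2", "login",          date(2024, 3, 5)),  # activity, not value
    ("u3", "report_created", date(2024, 2, 1)),  # outside the window
]

def north_star(events, core_action, threshold, window_days, as_of):
    """Users who performed `core_action` at least `threshold` times
    in the `window_days` ending at `as_of`."""
    cutoff = as_of - timedelta(days=window_days)
    counts = Counter(
        user for user, action, day in events
        if action == core_action and cutoff < day <= as_of
    )
    return sum(1 for n in counts.values() if n >= threshold)

# u1 qualifies (3 core actions in window); u2 has only 1; u3 is stale.
print(north_star(EVENTS, "report_created", threshold=2,
                 window_days=14, as_of=date(2024, 3, 7)))  # -> 1
```

Note that logins are deliberately excluded: only the core value action counts, which is exactly the criterion 1 distinction between value delivered and activity performed.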
For AI-specific products, the North Star question gets more nuanced. I wrote separately about [how AI changes the North Star equation](/insights/north-star-metric-ai), because the variability of AI output quality makes traditional engagement proxies especially misleading.
Four Examples from the Field
B2B SaaS (project management tool, Series A). The team came in using MAU. Their MAU was growing 12% month over month but net revenue retention was flat at 101%. The metric was not predicting expansion. We dug into the data and found that the best predictor of account expansion was "number of projects with at least 3 collaborators active in the past 14 days." When that number grew within an account, the account expanded seats within 60 days 73% of the time. We made it the North Star. Within two quarters, the product team shifted from features that increased individual logins (which inflated MAU) to features that increased collaborative project activity. Net revenue retention moved to 112%.
Two-sided marketplace (freelancer platform, seed stage). The founder was tracking gross merchandise volume. GMV looked healthy at $180K per month. But repeat transaction rate was 22%, meaning the marketplace was functioning as a one-time matching engine, not a retained network. We shifted the North Star to "clients who completed a second transaction within 45 days." That metric was at 18% and gave the team a specific retention problem to solve. They redesigned the post-project experience, added structured feedback that helped freelancers improve their profiles, and moved the repeat rate to 31% over four months. GMV followed.
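A repeat-rate metric like this one is cheap to compute from a raw transaction log. A sketch, assuming a hypothetical log of (client, date) pairs with made-up values:

```python
from datetime import date

# Hypothetical transaction log: (client_id, transaction_date).
TRANSACTIONS = [
    ("c1", date(2024, 1, 5)),
    ("c1", date(2024, 2, 10)),  # second transaction 36 days later
    ("c2", date(2024, 1, 8)),   # never returned
    ("c3", date(2024, 1, 3)),
    ("c3", date(2024, 4, 1)),   # returned, but after the 45-day window
]

def repeat_within(transactions, window_days):
    """Share of clients whose second transaction came within
    `window_days` of their first."""
    by_client = {}
    for client, day in transactions:
        by_client.setdefault(client, []).append(day)
    repeaters = 0
    for days in by_client.values():
        days.sort()
        if len(days) > 1 and (days[1] - days[0]).days <= window_days:
            repeaters += 1
    return repeaters / len(by_client)

print(f"{repeat_within(TRANSACTIONS, 45):.0%}")  # c1 only -> 33%
```

The denominator is all clients, not all transactions, which is what makes this a retention metric rather than a volume metric like GMV.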
Fintech (personal finance app, Series B). The team tracked "linked bank accounts" as their North Star because it was easy to measure and grew consistently. The problem: 40% of users who linked an account never returned after day 7. The account-linking metric was measuring setup completion, not value delivery. We replaced it with "users who took at least one financial action (budget adjustment, bill setup, or savings transfer) in the past 7 days." That metric was smaller (obviously) but correlated with 6-month retention at 0.68 versus 0.19 for linked accounts.
Content platform (industry newsletter, bootstrapped). The founder was optimizing for subscriber count, which had crossed 15,000. Open rate was 38%, which looked healthy. But revenue (sponsorships priced on clicks) was stagnant because click-through rate on sponsor placements was declining. Subscribers were opening out of habit but not engaging deeply. We shifted the North Star to "subscribers who clicked at least one link per issue in the past 4 issues." That captured active, engaged readers (the ones sponsors actually pay for), and it reframed the editorial strategy from "keep the subject line interesting enough to open" to "make the content valuable enough to act on."
The Three Mistakes That Kill North Star Metrics
Mistake 1: Picking a vanity metric because it goes up. Vanity metrics increase naturally with time and scale without indicating health. Total registered users, cumulative revenue, and total sessions all qualify. They never go down (short of catastrophe), so they are useless for decision-making. Your North Star must be capable of declining. A metric that only delivers good news is not a metric.
Mistake 2: Picking a metric that is too lagging. Annual recurring revenue, [lifetime value](/insights/customer-ltv-calculation-startups), and net promoter score are all important business metrics. None of them should be your North Star. They move too slowly to guide weekly product decisions. By the time ARR declines, the retention problem that caused it happened three to six months ago. Your North Star should be responsive enough to show the impact of a product change within two to four weeks.
Mistake 3: Picking a metric the team cannot influence. I worked with a company that chose "organic search traffic" as their North Star. The product team had no control over SEO. The marketing team owned content and technical SEO. The metric sat on the product team's dashboard, nobody could move it with their own work, and within two months the team stopped looking at it. A North Star only works if the team that owns it can run experiments against it.
How to Run the Exercise
This takes about four hours spread across two sessions.
Session 1 (2 hours): Candidate identification.
1. List every metric the company currently tracks. All of them. This usually produces 15 to 40 metrics.
2. Filter through the five criteria above. Most teams eliminate 80% of candidates in this step.
3. For the surviving candidates (usually 3 to 5), pull the historical data and run a correlation analysis against the business outcome that matters most (revenue, retention, or growth, depending on stage).
4. Rank the candidates by predictive power.
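The correlation analysis in step 3 is ordinary Pearson correlation, computed per user between each candidate metric and the outcome. A toy sketch with made-up per-user data (the candidate names and values are illustrative, not from any engagement):

```python
import math

# Hypothetical per-user data: each candidate metric's value alongside
# the outcome that matters (here, 90-day retention as 0/1).
candidates = {
    "weekly_core_actions": [5, 0, 3, 8, 1, 6, 0, 4],
    "monthly_logins":      [9, 7, 8, 6, 10, 9, 7, 8],
}
retained_90d = [1, 0, 1, 1, 0, 1, 0, 1]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Step 4: rank candidates by predictive power.
ranked = sorted(candidates.items(),
                key=lambda kv: abs(pearson(kv[1], retained_90d)),
                reverse=True)
for name, values in ranked:
    print(f"{name}: r = {pearson(values, retained_90d):+.2f}")
```

In this toy data the core-action metric predicts retention strongly while logins predict it barely at all, which is the MAU-versus-value pattern from the examples above. On real data you would also want confidence intervals and a lag check (does the metric lead the outcome, or merely coincide with it?).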
Session 2 (2 hours): Validation and operationalization.
1. Take the top candidate and pressure-test it: Can every team in the company influence it? Does it apply to all segments? Can it decline?
2. Define the measurement specification precisely. What counts as the action? What is the time window? What is the minimum threshold?
3. Build the dashboard (or spec it for the data team).
4. Set the first 90-day target based on current baseline plus a realistic improvement rate (I typically use 5-10% quarterly improvement as the starting assumption for a metric the team has never explicitly optimized).
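Steps 2 and 4 fit in a few lines. A sketch with a hypothetical spec and illustrative numbers (the metric name, action, baseline, and rate are all placeholders):

```python
# Measurement spec (step 2): every term is pinned down so two analysts
# cannot compute the metric differently. All values are hypothetical.
spec = {
    "metric": "weekly users who complete a core action",
    "core_action": "report_created",  # what counts as the action
    "window_days": 7,                 # the time window
    "min_count": 1,                   # the minimum threshold
}

# 90-day target (step 4): current baseline plus an assumed quarterly
# improvement rate in the 5-10% range.
baseline = 4_200              # illustrative current weekly count
quarterly_improvement = 0.07  # assumption, not a benchmark

target_90d = round(baseline * (1 + quarterly_improvement))
print(target_90d)  # -> 4494
```

Writing the spec down as data, not prose, is the point: it removes the ambiguity that otherwise resurfaces the first time the metric moves and someone asks how it was computed.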
The output is a one-page document: the metric, the definition, the current baseline, the 90-day target, and the three to five product levers the team believes can move it. That document becomes the operating contract for the next quarter.
The North Star Is a Commitment, Not a Dashboard Widget
The hardest part of this exercise is not analytical. It is organizational. Choosing a North Star means the leadership team is committing to optimize for one thing above all else. That commitment creates tension. Feature requests that improve a secondary metric at the expense of the North Star must be deprioritized. Teams that are used to optimizing their own local metrics must realign.
Working on a GenAI product at a large finance tech company has reinforced this for me. At scale, the North Star does not just guide product decisions. It shapes hiring, resource allocation, and how teams are evaluated. A poorly chosen metric at that scale misallocates enormous resources. Getting it right at the startup stage, before the organizational inertia sets in, is one of the highest-leverage decisions a founding team can make.
Most teams treat the North Star as a reporting exercise. The teams that win treat it as the single most important strategic decision they make each quarter.
---
I help teams find the metric that actually drives their business. If your North Star feels like guesswork, let's fix it. [lester@gradientgrowth.com](mailto:lester@gradientgrowth.com)