Lester Leong
Analytics Maturity Assessment: Every Company Overestimates. Here's Where You Actually Sit.
The Overestimation Problem
Every company I have assessed overestimates their analytics maturity by at least two levels. This is not a rounding error. It is a consistent, structural pattern that leads organizations to invest in the wrong capabilities, hire for the wrong roles, and wonder why their analytics function produces volume but not impact.
The overestimation happens because organizations conflate data availability with analytical maturity. They have data warehouses, dashboards, and BI tools. They run weekly metrics reviews. They have analysts on staff. From the outside, this looks like a mature analytics operation. From the inside, the analytics function is almost entirely descriptive. It tells the organization what happened. It rarely tells the organization what to do about it.
The gap between "we have data" and "data improves our decisions" is wider than most leaders realize. Closing it requires a clear framework for understanding where you are and what the next level actually demands.
The Five Levels
A simple maturity framework: five levels, each building on the one before it.
Level 1: Data Exists. The organization has data somewhere. It lives in production databases, spreadsheets, and third-party tools. Someone can query it if asked, but there is no systematic access. Getting an answer to a data question requires knowing who to ask and waiting for them to have time. Most questions go unasked because the friction is too high.
Level 2: Dashboards and Tracking. Key metrics are defined, tracked, and visible. The organization has invested in BI tooling and the leadership team reviews a standard set of metrics on a regular cadence. Everyone can see what happened last week. This is where most companies that describe themselves as "data-driven" actually sit.
Level 3: Analysis and Interpretation. Someone interprets the data and explains what happened. When a metric moves, an analyst investigates the cause, identifies contributing factors, and presents findings. The output is an explanation, not just a number. This level requires skilled analysts who can distinguish signal from noise and communicate findings clearly. Many organizations believe they are here but are actually at Level 2 with occasional deeper dives on high-visibility issues.
Level 4: Recommendations and Decision Support. Analysis comes with a recommendation. Instead of "activation dropped 8% in the enterprise segment due to a change in traffic mix," the output becomes "activation dropped 8% in enterprise. Based on the traffic composition shift and historical patterns, we recommend adjusting the onboarding flow for that segment. Here is the expected impact and the tradeoffs." The analysis is structured for a decision, not just for understanding. A short sketch after the five levels shows what this shape looks like in practice.
Level 5: Integrated Decision Systems. Decisions are routinely informed by analytical frameworks. Experiments run continuously. The organization has feedback loops that connect decisions to outcomes and update the analytical models accordingly. Analytics is not a support function that responds to questions. It is embedded in the decision-making process itself, and its output shapes the agenda rather than reacting to it.
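To make the jump from Level 3 to Level 4 concrete, here is a minimal Python sketch of the difference in output shape. The structures, field names, and numbers are illustrative assumptions, not a standard: the point is that a Level 4 artifact carries an action, an expected impact, and explicit tradeoffs, while a Level 3 artifact stops at the explanation.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """Level 3 output: an explanation of what happened and why."""
    metric: str
    change: str
    cause: str

@dataclass
class Recommendation:
    """Level 4 output: the same finding, structured for a decision."""
    finding: Finding
    action: str           # what we propose doing
    expected_impact: str  # what we expect if we do it
    tradeoffs: list[str] = field(default_factory=list)  # costs and risks

# The activation example from the text, expressed in both shapes.
# Impact and tradeoff values are invented for illustration.
level_3 = Finding(
    metric="activation",
    change="-8% in the enterprise segment",
    cause="shift in traffic mix toward lower-intent channels",
)

level_4 = Recommendation(
    finding=level_3,
    action="adjust the onboarding flow for the enterprise segment",
    expected_impact="recover part of the drop over the next two quarters",
    tradeoffs=[
        "two sprints of product engineering time",
        "risk of regressing the shared flow for self-serve users",
    ],
)
```

The value is not the dataclass itself. It is that a recommendation forces the analyst to fill in the fields a decision actually needs, fields a dashboard never asks for.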
Why the Gap Between Level 2 and Level 4 Persists
Most organizations are stuck at Level 2. Some reach Level 3 on high-priority issues. Almost none operate consistently at Level 4 or above. The conventional diagnosis is that this is a technical problem (better tools needed) or a capacity problem (more analysts needed). Both are wrong.
The gap between Level 2 and Level 4 is not technical. The tools required to produce recommendations are the same tools required to produce dashboards. The SQL, the statistical methods, the visualization libraries are identical. An analyst who can build a retention dashboard can also build a retention recommendation. The constraint is not capability.
The gap is organizational. It has three components.
Analytics output is not structured for decisions. Most analytics teams deliver answers to questions: what happened, how much, why. They do not deliver recommendations because they have not been asked to, and because recommending a course of action requires understanding the business context well enough to evaluate tradeoffs. That understanding comes from proximity to decision-makers, which most analytics teams lack.
Decision-makers do not pull analytics into their process. In Level 2 organizations, leadership consults data after the decision is already forming. The strategic direction is set in a room. Then someone asks the analytics team to "pull the numbers" to support it. Analytics becomes a validation function rather than an input to the decision. This is not malicious. It is a habit born from years of analytics delivering descriptive outputs that were too late or too generic to influence the actual choice.
There is no feedback loop. Decisions are made, but nobody tracks whether the analytical input improved the outcome. Without that feedback loop, the organization cannot learn which types of analytical support produce value and which are performative. Every investment in analytics feels equally justified because none of them are measured against results.
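Closing that loop does not require heavy infrastructure. Here is a minimal sketch, assuming a hand-maintained log with invented field names, of what it takes to make "did analytics improve this decision?" an answerable question:

```python
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    """One entry in the feedback loop: a decision, its analytical input, its result."""
    decision: str
    had_recommendation: bool  # did analytics deliver a recommendation, not just numbers?
    followed: bool            # did the decision-makers act on it?
    expected_outcome: float   # predicted impact, in the metric's own units
    actual_outcome: float     # measured impact after the fact

def hit_rate(log: list[DecisionRecord], tolerance: float = 0.25) -> float:
    """Share of followed recommendations whose actual outcome landed
    within `tolerance` (relative) of the prediction."""
    followed = [r for r in log if r.had_recommendation and r.followed]
    if not followed:
        return 0.0
    hits = [
        r for r in followed
        if abs(r.actual_outcome - r.expected_outcome)
        <= tolerance * abs(r.expected_outcome)
    ]
    return len(hits) / len(followed)
```

Even a spreadsheet with these five columns beats the status quo, where no record connects the recommendation to the result.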
The Headcount Fallacy
When organizations recognize that their analytics function is underperforming, the default response is to hire more analysts. This is almost always the wrong intervention.
Adding analysts to a Level 2 organization produces more dashboards, more reports, and more ad hoc analyses. It does not produce better decisions because the bottleneck is not analytical capacity. The bottleneck is the connection between analytics and the people who make decisions. More analysts working in a structure that separates them from decision-making will produce more output with the same minimal impact.
The intervention that actually moves the needle is structural. Place an analyst in the room where decisions happen. Give them the context to understand the tradeoffs. Require them to deliver recommendations, not just findings. Measure them on whether their work changed a decision, not on how many queries they ran.
One analyst embedded in a decision-making process will outperform five analysts producing reports that circulate by email.
Starting the Transition
Be honest about where you sit. If your analytics function primarily produces dashboards and descriptive reports, you are at Level 2 regardless of how sophisticated your data infrastructure is. That is not a failure. It is a starting point.
Pick one recurring decision that matters. Staff retention, pricing changes, feature prioritization. Assign an analyst to own that decision's analytical support end to end. Require the output to include a recommendation with explicit tradeoffs. Track whether the recommendation influenced the decision and whether the decision produced the expected outcome.
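For that single pilot decision, the tracking can be lighter still. A hypothetical sketch, with invented sample rows: one record per decision cycle, answering the two questions that matter.

```python
# Hypothetical tracking for one recurring decision (e.g., quarterly pricing).
# Each cycle answers two questions: did the recommendation influence the
# decision, and did the decision produce the expected outcome?
pricing_log = [
    {"cycle": "2024-Q1", "influenced": True,  "expected_met": True},
    {"cycle": "2024-Q2", "influenced": True,  "expected_met": False},
    {"cycle": "2024-Q3", "influenced": False, "expected_met": True},
]

influence_rate = sum(r["influenced"] for r in pricing_log) / len(pricing_log)
accuracy = sum(r["expected_met"] for r in pricing_log if r["influenced"]) / max(
    1, sum(r["influenced"] for r in pricing_log)
)
print(f"influenced {influence_rate:.0%} of decisions; "
      f"{accuracy:.0%} of those met expectations")
```

Three columns, reviewed quarterly, is enough to tell you whether the embedded analyst is changing decisions or just attending meetings.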
The maturity gap is real, but it is not closed with better tools or bigger teams. It is closed by changing what analytics is asked to produce and holding it accountable for decisions, not for dashboards.