The Metrics That Actually Matter

I’ve inherited a lot of dashboards in my career. Every time I’ve taken over a new organization, one of the first things I’m shown is the reporting suite — the metrics the team tracks, the KPIs leadership reviews, the numbers that supposedly tell us how we’re doing.

And almost every time, the dashboard is lying. Not because the numbers are wrong — they’re usually accurate. It’s lying because it’s measuring the wrong things, or measuring the right things in the wrong way, or presenting information that makes everyone feel informed while actually obscuring the problems that matter.

Building a metrics framework that actually drives behavior — not just reporting — is one of the hardest operational challenges I’ve faced. And getting it wrong has real consequences.

The vanity metrics trap

When I took over a global support organization, the primary metric the team was evaluated on was ticket closure rate. The dashboard showed how many tickets were closed each week, each month, each quarter. The numbers were consistently strong. Leadership was satisfied.

But customers were miserable. CSAT scores were in the low 70s. Escalation volume was climbing. The same issues were being reported over and over. How could the team be closing tickets at a healthy rate while customers were increasingly unhappy?

Because ticket closure rate incentivizes the wrong behavior. When your primary metric is how quickly and how often you close tickets, people optimize for closing tickets — not for solving problems. Tickets were being closed without resolution. Complex issues were being broken into multiple tickets to inflate the count. Customers were being asked “can we close this?” before the problem was actually fixed.

The metric was accurate. The behavior it was driving was destructive.

That’s what a vanity metric is. It’s a number that looks good on a dashboard and makes everyone feel productive while the actual health of the operation deteriorates underneath it.

What makes a metric useful

After rebuilding metrics frameworks at multiple organizations, I’ve landed on a simple test for whether a metric is worth tracking: does knowing this number change what someone does tomorrow?

If the answer is no — if the metric is interesting but not actionable — it’s reporting, not management. And there’s a place for reporting. But it shouldn’t be confused with the KPIs that drive operational decisions.

A useful metric has three characteristics:

It’s connected to an outcome someone cares about. Not an internal process outcome — a real outcome. Customer satisfaction. Revenue retention. Employee engagement. Time to value. If you can’t draw a straight line from the metric to something the business actually needs, you’re measuring activity, not impact.

It’s influenceable by the people being measured. This sounds obvious, but I’ve seen teams measured on metrics they have no ability to affect. Support teams measured on product defect rates. Services teams measured on sales forecast accuracy. If the people looking at the number can’t change it through their own actions, the metric creates frustration, not improvement.

It drives behavior in the right direction. This is the hardest one, because it requires thinking through second-order effects. Measuring response time? People will respond faster — but they might respond with lower quality. Measuring resolution rate? People will close more tickets — but they might close them prematurely. Every metric creates an incentive, and you have to think carefully about what behavior that incentive actually encourages.

The framework I built

When I rebuilt the metrics framework for my organization, I structured it in layers. Not because I love complexity, but because a single number can never tell you the full story, and different audiences need different levels of detail.

Layer one: health indicators. These are the three to five numbers that tell me whether the organization is fundamentally healthy. I looked at these daily. For a support org, mine were: CSAT (are customers satisfied with the experience?), first contact resolution rate (are we solving problems efficiently?), employee NPS (is the team healthy?), and backlog trend (are we keeping up with demand?). If all four of those were moving in the right direction, we were in good shape. If any of them were trending wrong, I knew where to dig.

Layer two: operational metrics. These are the numbers my managers reviewed weekly to run their teams. Response time by tier, resolution time by complexity, escalation rates, knowledge base utilization, queue distribution. These metrics told us how the machine was performing and where the bottlenecks were. They were the dials we could turn.

Layer three: diagnostic metrics. These are the numbers we pulled when something in layer one or two looked wrong. Repeat contact rate. Transfer rate between teams. Time spent on specific issue categories. Agent utilization by shift. We didn’t review these regularly — we used them to investigate problems when they surfaced.
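
To make the layering concrete, here is a rough sketch of the framework as plain data. The metric names are the ones above; the key names and cadence labels are shorthand for illustration, not the actual tooling we used.

```python
# A rough sketch of the three layers as plain data. Metric names come from
# the framework described above; structure and labels are illustrative only.
METRICS_FRAMEWORK = {
    "health_indicators": {   # layer one: reviewed daily
        "cadence": "daily",
        "metrics": ["csat", "first_contact_resolution", "employee_nps", "backlog_trend"],
    },
    "operational": {          # layer two: reviewed weekly by managers
        "cadence": "weekly",
        "metrics": [
            "response_time_by_tier",
            "resolution_time_by_complexity",
            "escalation_rate",
            "knowledge_base_utilization",
            "queue_distribution",
        ],
    },
    "diagnostic": {           # layer three: pulled only when something looks wrong
        "cadence": "on_demand",
        "metrics": [
            "repeat_contact_rate",
            "transfer_rate_between_teams",
            "time_by_issue_category",
            "agent_utilization_by_shift",
        ],
    },
}

def daily_dashboard(framework: dict = METRICS_FRAMEWORK) -> list[str]:
    """The handful of numbers that get looked at every day, and nothing else."""
    return framework["health_indicators"]["metrics"]

print(daily_dashboard())  # four numbers, not forty
```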

The discipline is in keeping the layers separate. Leaders who try to track everything at once end up tracking nothing effectively. Your daily dashboard should have five numbers, not fifty.

The eNPS story

The metric that taught me the most about measurement was employee NPS. When I took over, our eNPS was negative 11. For context, anything below zero means more of your employees are detractors than promoters. Negative 11 meant the team was, on average, actively unhappy and would not recommend working there.
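
For anyone who hasn't worked with the number: eNPS is the standard net promoter calculation applied to your own team. People answer "how likely are you to recommend working here?" on a 0-to-10 scale, and the score is the percentage of promoters (9s and 10s) minus the percentage of detractors (0 through 6), so it runs from negative 100 to positive 100. In code, with made-up response distributions rather than our actual survey data:

```python
def enps(scores: list[int]) -> int:
    """Employee NPS: % promoters (9-10) minus % detractors (0-6), 0-10 scale."""
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Illustrative distributions only, not the real survey results:
# 27 promoters and 38 detractors out of 100 responses lands at -11;
# 62 promoters and 12 detractors lands at +50.
print(enps([9] * 27 + [8] * 35 + [5] * 38))   # -11
print(enps([10] * 62 + [7] * 26 + [4] * 12))  # 50
```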

That number got my attention faster than any customer metric could have. Because I’ve learned — repeatedly — that unhappy teams deliver unhappy customer experiences. You can’t fix the output without fixing the input.

I started tracking eNPS quarterly and, more importantly, I started sharing the results with the team. Not just the score — the verbatim comments. The things people were actually saying about what was working and what wasn’t. And I committed publicly to acting on the feedback.

Over the course of eighteen months, eNPS went from negative 11 to positive 50. That’s not a small move. That’s a fundamental shift in how the team felt about their work, their leadership, and their organization. And it correlated directly with improvements in every customer-facing metric we tracked. CSAT went up. Escalations went down. Resolution times improved. Not because we implemented some new process or tool, but because the people doing the work were more engaged, more invested, and more willing to go the extra mile.

The eNPS number didn’t fix anything by itself. But it gave me a signal I could act on, and it gave the team evidence that their feedback was being heard. That’s what a good metric does.

CSAT is necessary but insufficient

Every support and services organization tracks customer satisfaction. And they should — it’s the most direct signal of whether you’re delivering value. But CSAT alone is a dangerously incomplete picture.

I’ve seen organizations with respectable CSAT scores that were hemorrhaging customers. How? Because CSAT measures the experience of the people who interact with your support team — not the people who’ve given up and stopped calling. A customer who’s so frustrated they’ve escalated to their account manager, or started evaluating competitors, or simply stopped reporting issues because they’ve lost faith in the process — that customer never fills out a satisfaction survey. They just leave.

That’s why I always paired CSAT with retention metrics and escalation trends. A rising CSAT score alongside rising escalation volume isn’t a win — it means the easy cases are going well but the hard cases are getting worse. A stable CSAT score alongside declining retention means your support experience is fine but your product or service isn’t meeting needs. The numbers need context, and context comes from looking at metrics in combination, not isolation.
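
If you want to make that kind of combined read mechanical, it doesn't take much. A sketch of the idea, with thresholds and wording that are illustrative rather than anything I actually ran:

```python
def read_csat_in_context(csat_delta: float, escalation_delta: float, retention_delta: float) -> str:
    """Interpret a CSAT move only alongside escalations and retention.

    Each argument is the period-over-period change in that metric
    (positive means rising). Thresholds are illustrative.
    """
    if csat_delta > 0 and escalation_delta > 0:
        return "Easy cases improving, hard cases worsening: dig into the escalations."
    if abs(csat_delta) < 1 and retention_delta < 0:
        return "Support experience is fine; the product or service isn't meeting needs."
    if csat_delta > 0 and retention_delta >= 0:
        return "Satisfaction and retention moving together: likely a real win."
    return "Mixed signals: pull the diagnostic layer and investigate."

print(read_csat_in_context(csat_delta=2.0, escalation_delta=1.5, retention_delta=0.0))
```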

The reporting problem

One of the insidious things about metrics is that the act of reporting them can become its own industry. I’ve worked in organizations where people spent more time preparing reports about performance than actually improving it. Monthly business reviews with sixty-slide decks. Weekly status reports that took half a day to compile. Dashboards with so many widgets that nobody could identify the signal in the noise.

When I see that, I know two things: first, leadership doesn’t trust the operation enough to accept a simple summary, and second, the people creating those reports are burning time that should be spent on actual work.

I’ve made it a rule in my organizations: no metric should take more than fifteen minutes to pull. If it does, either the data infrastructure is broken or the metric is too complex to be useful. Invest in building systems that surface the numbers automatically — not in building a reporting culture where analysts spend their weeks assembling PowerPoints.

Metrics as conversation starters, not conclusions

The biggest mistake I see leaders make with metrics is treating them as answers instead of questions. A CSAT score of 78 doesn’t tell you anything by itself. Is that good? Compared to what? Last quarter? Your industry benchmark? Your own target? And even if it’s below target, the number doesn’t tell you why. It just tells you to go look.

I teach my leaders to use metrics as conversation starters. “Our first contact resolution rate dropped three points this month — what changed?” That’s a useful conversation. It leads to root cause analysis, process improvement, maybe a training gap, maybe a product issue. The metric didn’t solve anything. It pointed us toward the right question.

Organizations that treat metrics as conclusions — “our numbers are good, so we’re good” — are the ones that get blindsided. The dashboard was green right up until the moment the biggest customer left.

What I’d tell a leader building their first framework

If you’re starting from scratch, resist the urge to measure everything. Pick three to five metrics that represent the health of your operation from the perspectives that matter most: the customer, the team, and the business. Make sure each one passes the test — does knowing this number change what someone does tomorrow? Build the discipline to review them consistently, investigate when they move, and resist the temptation to add more metrics every time someone asks a question the current set doesn’t answer.

And above all, remember that metrics are tools for driving behavior. The question isn’t “what can we measure?” It’s “what behavior do we want, and what measurement will encourage it?” If you start there, you’ll end up with a dashboard that actually matters — not one that just looks impressive.

— Bruno