
What Are Vanity Metrics and How to Stop Using Them

The ultimate vanity metric explainer. What are vanity metrics? How do you identify them? And how do you replace them with better KPIs?

Jun 30, 2022

18 min read

John Cutler

Former Product Evangelist, Amplitude

You’re so vain (you’re so vain)
I bet you think this metric is about you
Don’t you? Don’t you?

Carly Simon

What are vanity metrics?

Vanity metrics make us feel good but don’t help us do better work or make better decisions. Vanity metrics put optics before rigor, learning, and transparency. The metric and/or an outcome is heralded as a win, but things don’t add up. Most of the time, it boils down to a lack of experience with data storytelling, selecting meaningful KPIs, and communicating outcomes. In some cases, vanity metrics are the only metrics available.

But everyone, at some point, has been lured in by good news and has let their guard down.

"Hey everyone, check out the unique user count from yesterday!"

"Hey everyone, look at registration for the event!"

It’s easy to criticize vanity metrics, but we’ve all been there.

In this post, I will describe three common problems that lead us to vanity metrics. Then I will share The Vanity Metric Test, a way to review metrics and know if you are veering into vanity metric territory. If you’re short on time, skip straight to The Vanity Metric Test section below.

Download the free Vanity Metric Test worksheet here

Vanity metric problems

In chatting with teams about vanity metrics, I’ve noticed three fundamental problems.

  • Vanity metrics lack context.
  • Vanity metrics have unclear intent.
  • Vanity metrics do not guide action and learning.

Problem 1: Vanity metrics lack context

First, we have the problem of missing context. Page Views, Daily Active Users, and Sign-Ups mean something but aren’t very helpful in isolation. The problems arise when we communicate these metrics without referencing the bigger picture. It’s not what we say, but rather what we don’t say—e.g., “compared to,” “as an input into,” “balanced by,” “an early signal of,” “part of the…,” “as a ratio of,” “with the following caveats,” etc.

Missing context impacts everyone:

  • Marketing: There are many ways to boost content views in the short term. It is much harder to create a piece of evergreen content that attracts potential buyers for weeks and years. Getting a boost of initial traffic is a positive early signal, but it needs a footnote.
  • Sales: Hitting a quarterly sales goal is a huge accomplishment. It is noteworthy for a variety of reasons. But how did the team hit the goal? Did they bend on pricing? Did they pull deals forward from future quarters? Did they rob Peter to pay Paul? More context is required (e.g., comparing pricing to prior quarters).
  • Product: Launching a new feature is a huge milestone. Early feature adoption metrics are a positive signal. But customers aren’t necessarily using the feature. They may just be trying the feature. In fact, all of the in-app pop-ups suggesting people try the feature may be increasing curiosity clicks. Trying the feature is an input into the probability of longer-term use.

Other examples of potentially missing context: Average purchases are up, but so are order returns. Conversions are up from ads that don’t speak to your value proposition. One channel is cannibalizing another channel. The app is easier for new users but harder for experienced users. Time spent in the app is up, but your goal is to save people time. People are querying the data more, but that’s because they are having trouble understanding the results. Customers are more active in the app, but they’ve shifted to wasting time instead of valuable networking.

Note how in each of these examples, context is everything. The lack of counterbalancing information makes it hard to make sense of the big picture and where the metric fits.
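
To make the counterbalancing idea concrete, here is a minimal Python sketch (with made-up order records) of reporting a headline revenue figure alongside the return metrics that give it context:

```python
# Minimal sketch: report a headline metric together with the
# counterbalancing metric that gives it context. Data is illustrative.

orders = [  # (order_id, amount, returned)
    ("o1", 120.0, False),
    ("o2", 80.0, True),
    ("o3", 200.0, False),
    ("o4", 150.0, True),
]

gross_revenue = sum(amount for _, amount, _ in orders)
returned_revenue = sum(amount for _, amount, returned in orders if returned)
return_rate = returned_revenue / gross_revenue

# "Average purchases are up" means little without the return picture.
print(f"Gross revenue:    ${gross_revenue:,.2f}")
print(f"Returned revenue: ${returned_revenue:,.2f} ({return_rate:.0%} of gross)")
print(f"Net revenue:      ${gross_revenue - returned_revenue:,.2f}")
```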

In addition to the surrounding context, we need to ensure people understand the Why.

Problem 2: Vanity metrics have unclear intent

Second, we have confusion about the intent of the metric. The definition of the metric may be explicit, but what we are trying to measure is unclear. A classic example here is Return Visits. Did I return to the product because I liked the product? Or because the product was hard to use, and I needed to take a break? Or needed customer service’s help?

Many classic web “engagement” metrics like Page Views, Time on Page, and Average Session Duration are remnants of a pre-mobile, pre-device-swapping, pre-30-browser-tab, pre-single-page-app era. They were the best proxies for engagement and value exchange available at the time, but aren’t the best measures we have available now.

The connection between what we are attempting to measure and the “proxy” we’ve chosen is extremely clear with some metrics. Or so we think! For example, I tell a friend that I was able to sleep eight hours last night. My friend interprets my intent as, “John is trying to communicate that he had a good night of sleep.”

But hours of sleep is but one of many variables. One study mentions ~23 sleep variables used when studying sleep quality, including REM latency, REM sleep, small movements in sleep, the timings of different sleep cycles, the number of cycles, etc. Another study notes that sleep duration may have a “direct association with mortality.” Yikes!

Its authors introduce the Pittsburgh Sleep Quality Index and clearly outline the intent of the metric.

The Pittsburgh Sleep Quality Index was developed with several goals: (1) to provide a reliable, valid, and standardized measure of sleep quality; (2) to discriminate between “good” and “poor” sleepers; (3) to provide an index that is easy for subjects to use and for clinicians and researchers to interpret; and (4) to provide a brief, clinically useful assessment of a variety of sleep disturbances that may affect sleep quality.

Communicating intent is critical. These authors likely faced trade-offs. Ease of use for subjects may not immediately equal depth of use for researchers. Standardization is helpful for comparability but often involves reducing contextual factors. The assessment is “brief”, which involves a trade-off between assessment completion rates and the depth of the assessment.

A great statement of intent covers the fundamental tradeoffs and goals.

What does effectively stating metric intent look like?

Relaying the facts. Seeking theories/insights:

Here is the number of outages we had in the last 30 days and how that compares to past periods. Note the increase. What’s going on here, do you think? What are we seeing?

As a proxy for something not directly measurable:

Our North Star Metric is “Loyal DIYers,” defined as the number of users who performed high-value DIY project actions combined with their community involvement. It is a proxy for a combination of loyalty, satisfaction, and using our product in ways congruent with our community-oriented strategy. The data suggests—but does not prove (yet)—that this is a leading indicator of higher customer lifetime value and viral acquisition.

We want to find an actionable metric that 1) a team can move and 2) will contribute to the mid-term success of the business:

The Hex Pistols are going to focus on improving the effectiveness of the onboarding workflow. It is a juggling act. We know we can rush people through and not set them up for success. Or we can make it very comprehensive, reducing the probability of them seeing the product in action. To guide our work, we will focus on decreasing the 90th percentile time to project sharing. Project sharing is an early signal that users are comfortable and able to use the product.
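
To make that focus metric concrete, here is a minimal Python sketch, assuming hypothetical per-user signup and first-share timestamps, of computing the 90th-percentile time to project sharing:

```python
# Minimal sketch: p90 time from signup to first project share.
# Timestamps are hypothetical; real data would come from event tracking.
from datetime import datetime
from statistics import quantiles

events = [  # (user_id, signup_time, first_share_time)
    ("u1", datetime(2022, 6, 1, 9, 0), datetime(2022, 6, 1, 11, 30)),
    ("u2", datetime(2022, 6, 1, 10, 0), datetime(2022, 6, 3, 10, 0)),
    ("u3", datetime(2022, 6, 2, 8, 0), datetime(2022, 6, 2, 9, 15)),
    ("u4", datetime(2022, 6, 2, 12, 0), datetime(2022, 6, 6, 12, 0)),
]

hours_to_share = [
    (share - signup).total_seconds() / 3600 for _, signup, share in events
]

# quantiles(..., n=10) returns the nine deciles; the last one is the p90.
p90 = quantiles(hours_to_share, n=10)[-1]
print(f"p90 time to first project share: {p90:.1f} hours")
```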

Intent matters!

Problem 3: Vanity metrics do not guide action and learning

I recently asked Twitter and LinkedIn:

  • What is your test for when something is a vanity metric? (Twitter)
  • How do you know when a metric is a vanity metric? (LinkedIn)

One of the highest-ranking “tests” was whether the metric guided actions and decisions.

When no one can act in a meaningful way upon what it shows us. When no possible value for the metric will prompt us to actually improve anything.

Ola Berg

The result is not actionable. Regardless [of whether] the metric goes up or down, we don’t change what we do.

Chris Lukassen

When nobody gets worried if it stops rising/plateaus/or declines. ex: “Our NPS score is 90!” one month followed by “Our NPS score is 50!” next month.

Heidi Atkinson

Action, decisions, and learning are a big deal.

If a number keeps going up, and the only action it inspires is a furrowed brow in an all-hands meeting, you probably have a vanity metric on your hands. If a team carts out a metric to celebrate, but when it drops, they don’t shift their strategy or tactics, you’re probably looking at a vanity metric.

Examples of not-very-actionable metrics include:

  • Average Session Length. It goes up or down. What do you do?
  • New Users (without acquisition channel context). It goes up or down. What do you do?
  • New Followers. It goes up or down. What do you do?

There are a couple of caveats here.

A metric can be meaningful but not immediately actionable.

In our North Star Workshops, we stress that the North Star Metric should ideally be a bit out of reach. It is the output of teams influencing the various North Star Inputs. Why wouldn’t you want an actionable North Star Metric? The NSM is intended to act as a leading indicator of sustainable business performance (in the multi-year timeframe). Almost by definition, it will be a bit distant from day-to-day work. We need the inputs to serve as the “bridge” between everyday work and that distant, meaningful measure of business success.

We track our North Star Metric, and if it stalls, it will force us to reconsider our strategy, but a team doesn’t wake up each morning hoping to influence it directly.

A metric can be exploratory. We don’t know what to do with it yet.

Teams are generally aware of the “actionability” test, but almost to a fault. They will spend months and months trying to figure out a “magic metric” or set of magic metrics that do it all—actionable, predictive, explanatory, etc. Product leaders get seriously stressed when handed a metric to “own” but are unsure whether they can “control” movements in the metric.

The result? Teams use vanity metrics that are “safe” because they convey good news. They aren’t helpful, but they don’t pretend to be actionable, so they don’t ruffle any feathers. We don’t want this.

It is OK to use exploratory metrics instead. Just call them out.

A slight reduction in uncertainty may be enough to inspire action.

Product work is about making decisions under conditions of uncertainty. If you wait until you are 100% certain about something, you will be acting too late. Therefore, we shouldn’t shoot for perfect metrics that reduce all uncertainty about the actions we take.

Goodhart’s Law and the tension between good measurement and good targets

Goodhart’s Law states that:

“When a measure becomes a target, it ceases to be a good measure.”

Contrast this with my co-worker Adam Greco’s guidance about Vanity Metrics:

If someone isn’t going to be promoted or fired if a metric goes up or down, it is probably a vanity metric.

Adam Greco

Here we have a tension/paradox. Once a metric becomes a target and a signal of doing a good or bad job, you risk it becoming a vanity metric because people will make sure it goes up. And yet we want our metrics to mean something—to be relevant, to be good proxies, and to inform relevant decisions.

Examples of Goodhart’s Law:

  • If a team has a target of predictably shipping features, they will be less likely to process disconfirming new feedback that might appear “unpredictable.”
  • If a team has a target of increasing average order size, they will be more likely to increase average order size at the expense of future outcomes, brand loyalty, etc.
  • If a manager has a target of hiring a certain number of people in a quarter, they will be more likely to hire someone who isn’t the best candidate.

So what can this tell us about using more effective metrics and fewer vanity metrics? First, we are responsible for selecting meaningful goals and targets and for defining effective “guardrails” to surface any adverse second- or third-order effects. We can’t defeat Goodhart’s Law completely—you have to assume that people will play the game you insist on them playing—but we can strive to establish checks and balances.

Using Adam’s tip, you can also ask yourself, “What do we want to reward here?” Being accountable for business results makes sense. But you don’t want to promote people for hitting arbitrary metrics or for success theater. I’m a big believer in Bill Walsh’s idea of The Score Takes Care of Itself. Targets should encourage positive habits and routines.

Recap

We described three common problems associated with vanity metrics:

  • Vanity metrics lack context
  • Vanity metrics have unclear intent
  • Vanity metrics do not guide action and learning

The effective use of metrics includes providing context, stating your intent, and picking metrics that guide action and learning. Pointing to a metric and saying “that is a vanity metric” is equivalent to saying “you are using that metric as a vanity metric.”

The Vanity Metric Test

We’ve discussed the problems that lead us to use vanity metrics and the problems those metrics cause. Now it is time to put your metrics to the test.

In this section, we present ten statements that describe the healthy and effective use of metrics. You’ll notice the themes we explored earlier in this post: context, intent, responsible action, and learning.

For each statement, we suggest you:

  1. Discuss the prompt with your team
  2. Seek diverse perspectives
  3. Flag items that need attention

Want to download the worksheet and use it with your team? Download here.


S1: The team understands the underlying rationale for tracking the metric.

Tip: Include metrics orientation in your employee onboarding plan. Amplitude customers frequently use our Notebooks feature to provide context around key metrics.


S2: We present the metric alongside related metrics that add necessary context. When presented in isolation, we add required footnotes and references.

Tip: Normalize displaying guardrail and related metrics in presentations.


S3: The hypotheses (and assumptions) connecting the metric to meaningful outcomes and impact are clearly articulated, available, and open to challenge/discussion.

Tip: Use tree diagrams (driver trees, North Star Framework, assumption trees, etc.) and causal relationship diagrams to communicate hypothesized causal relationships. Consider playing the “Random Jira Ticket” game. Can you randomly pick a Jira ticket and “walk the tree” up from that item to something that will matter in the long term?
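
As a sketch of how “walking the tree” can work, here is a minimal Python example; the tree structure, ticket names, and metric names are all hypothetical:

```python
# Minimal sketch of the "Random Jira Ticket" game: model the hypothesized
# driver tree as a child -> parent map and walk any work item up to the
# North Star Metric. All node and ticket names are hypothetical.

parents = {
    "JIRA-1042: simplify signup form": "activation rate",
    "JIRA-0977: faster share dialog": "projects shared per user",
    "activation rate": "weekly active teams",
    "projects shared per user": "weekly active teams",
    "weekly active teams": "North Star: Loyal DIYers",
}

def walk_tree(node):
    """Yield the chain from a work item up to the metric it should move."""
    while node in parents:
        yield node
        node = parents[node]
    yield node  # the root (the North Star)

print(" -> ".join(walk_tree("JIRA-1042: simplify signup form")))
```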


S4: The metric calculation/definition is inspectable, checkable, and decomposable. Its various components, clauses, features, etc., can be separated. Someone with good domain knowledge can understand how it works.

Tip: Whenever possible, share the metric so that someone can “click in” to how it is calculated. For example, if the metric involves a filter like “shared with more than 7 users in the last 7 days,” it should be possible to adjust that clause and see how that number compares to the total number of users. Build trust by enabling people to recreate the metric.
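
Here is a minimal Python sketch of what a decomposable definition can look like; the data and the 7-user clause are illustrative, with the threshold exposed as a parameter anyone can adjust and re-run:

```python
# Minimal sketch: a metric whose clauses are explicit parameters, so anyone
# can adjust the threshold and see how the count moves. Data is illustrative.

share_counts_7d = {"u1": 12, "u2": 3, "u3": 9, "u4": 0, "u5": 8}

def power_sharers(share_counts, min_shares=7):
    """Users who shared with more than `min_shares` users in the window."""
    return [u for u, n in share_counts.items() if n > min_shares]

total = len(share_counts_7d)
for threshold in (3, 5, 7, 10):
    qualifying = power_sharers(share_counts_7d, min_shares=threshold)
    print(f"more than {threshold} shares: {len(qualifying)}/{total} users {qualifying}")
```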


S5: The metric is part of a regularly reviewed and discussed dashboard, scorecard, or report. It has survived healthy scrutiny. If the metric is more exploratory and untested (or an “I was curious whether….”), that context is clear from the outset.

Tip: Scrutiny is a good thing. The more eyes you can get on a metric, the better. Invite criticism. Record questions as they come up. Make each “showing” of the metric (e.g., at all-hands or product review) successively better.


S6: The team has a working theory about what changes in the metric indicate.

Tip: Here’s a basic prompt to get you thinking: “An increase in this metric is a signal that _______ , and a decrease in this metric is a signal that _______.”


S7: Over time, the metric provides increasing value and confidence. We can point to specific decisions and actions resulting from using the metric (and those actions are reviewable). The company would invest in continuing tracking it and communicating it.

Tip: Indicate confidence levels when displaying metrics, and keep a decision/action log. Try to normalize not being 100% sure at first, and balance established, high-confidence metrics with new candidate metrics that carry lower confidence.


S8: The team establishes clear thresholds of action (e.g., “if it exceeds X, then we may consider Y”). The metric can go down. And if it goes down, it will likely inspire inspection/action.

Tip: Conduct a scenario planning workshop to understand better how movements in the metric will dictate future behavior. Set monitors in your analytics tool to warn you when you have reached a threshold.
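
A threshold check doesn’t need to be elaborate. Here is a minimal Python sketch of the “if it exceeds X, then we may consider Y” idea; the metric names, thresholds, and suggested actions are hypothetical:

```python
# Minimal sketch: pre-agreed thresholds of action, checked after each
# metric refresh. Names, limits, and actions are hypothetical.

THRESHOLDS = {
    # metric_name: (floor, ceiling, suggested_action)
    "weekly_active_teams": (400, None, "revisit activation experiments"),
    "p90_hours_to_share": (None, 48, "inspect the onboarding funnel"),
}

def check_thresholds(latest):
    alerts = []
    for name, value in latest.items():
        floor, ceiling, action = THRESHOLDS[name]
        if floor is not None and value < floor:
            alerts.append(f"{name}={value} fell below {floor}: {action}")
        if ceiling is not None and value > ceiling:
            alerts.append(f"{name}={value} exceeded {ceiling}: {action}")
    return alerts

for alert in check_thresholds({"weekly_active_teams": 385, "p90_hours_to_share": 52}):
    print(alert)
```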


S9: The metric is comparative (over time, vs. similar metrics, etc.). Put more broadly, if you track it over a protracted period, it is possible to make apples-to-apples comparisons between periods.

Tip: Include period over period views in your dashboards to get more eyes on comparisons.
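
For instance, here is a minimal Python sketch of a period-over-period view, using made-up weekly signup counts; holding the definition and window length constant is what keeps the comparison apples to apples:

```python
# Minimal sketch: the same metric compared across equal-length periods.
# Weekly counts are made up for illustration.

weekly_signups = {
    "2022-W22": 1180,
    "2022-W23": 1245,
    "2022-W24": 1310,
    "2022-W25": 1290,
}

weeks = sorted(weekly_signups)
for prev, curr in zip(weeks, weeks[1:]):
    change = (weekly_signups[curr] - weekly_signups[prev]) / weekly_signups[prev]
    print(f"{curr}: {weekly_signups[curr]:>5} signups ({change:+.1%} vs {prev})")
```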


S10: The team uses the metric to communicate challenges AND wins. Not just wins.

Tip: Leaders set the tone here. Discuss situations that didn’t work out as you expected and how you used data to figure that out.


Summary

Vanity metrics are metrics that make us feel good, but don’t help us do better work or make better decisions. No one is immune to using vanity metrics! The key is ensuring you provide context, state the intent of the metrics you use, and clarify the actions and decisions that the metric (or metrics) will drive.

To define meaningful metrics, check out the North Star Playbook. Establishing a North Star Metric and constellation of actionable inputs is a powerful way to avoid using vanity metrics.

About the author

John Cutler

Former Product Evangelist, Amplitude

John Cutler is a former product evangelist and coach at Amplitude.
