
Product health tracking from scratch 

within 3 months, mid-merger

TL;DR

  • Context: I joined mid-merger & worked for 3 months. Leadership wanted “UX benchmarking” across the full user needs pyramid, but tools, tracking, and engineering capacity were limited.

  • Goal: Create a practical way to track product experience health over time — and get something tangible fast (not just a concept).

  • My role: Lead quant-focused researcher: measurement framing, survey system design, pilot setup, iteration plan, stakeholder alignment.

  • Method: Benchmarking audit → metric strategy covering both high- and low-level layers → in-product survey strategy + first intercepts (exit intent + post-completion).

  • Data: First intercept signals + response behavior learnings, plus a rollout path for adding new items into the existing CSAT/NPS surveys.

  • Outputs: A clear two-layer measurement system + a shippable intercept setup + a realistic “what we can do now” plan within 3 months.

  • Impact: Got leadership buy-in early, turned a huge vague ambition into a system we could actually start running despite merger chaos.

It felt like building a lighthouse in the fog — creating a signal everyone can trust, even while everything around it is changing.

 

The team had been struggling with benchmarking for months and had tried paths that weren’t scalable. In the middle of a merger, I started by going wide: one plan that laid out every viable route — trade-offs included — grounded in leadership’s wishlist and real constraints. Once leadership was on board, I switched into action mode, split the work into two lanes, and shipped the first real step: product health tracking via in-product intercepts.


Project steps


My Workflow
 

I used a diverge–converge approach:

I went wide to map the full option space (Discover), then narrowed it into a clear measurement strategy and first-step plan (Define), built the pilot setup in real tooling constraints (Develop), and finally shipped the intercepts, shared the learnings, and iterated toward rollout (Deliver).

1. Reality check & mapping the options

What happened?
 

Benchmarking started as an “everything at once” request. Leadership wanted it to cover the full user needs pyramid — from basic functionality and task success all the way up to top-level loyalty signals — plus competitor tracking. That made the scope very wide, and the team had been spinning for months, including trying non-scalable paths like constant manual UX tests across all 8 markets.

Discover: Before pushing a single solution,
I went wide and created one plan that captured:

  • the full option space (what benchmarking could mean)

  • what each option would actually require
    (tools, tracking, people, time)

  • what was feasible now vs. what was blocked


The risk wasn’t “lack of ideas.” The risk was picking a direction that sounds good but would not run reliably, especially in a merger environment with limited engineering support.


Define: I created a clear map with trade-offs that helped leadership and the team align on practical first steps:

  • start with internal benchmarking first (what we can control and learn from quickly) + reframe it as Product Health Tracking

  • focus on the customer side of the marketplace & the core conversion funnel

  • only then expand outward (once the foundations exist)

Once the option space was clear, we needed a strategy for what to measure — and how to evolve from what already exists.
 

There were legacy surveys (CSAT/NPS), but they weren’t enough on their own for what product teams needed day-to-day:

  • Where is friction happening?

  • Is it getting better or worse?

  • What should we fix first?


Challenge: These conversations can easily get stuck in loops: CSAT vs NPS, one metric vs many, survey fatigue, tracking limitations, etc., especially when teams are already overloaded.

 

Solution: I proposed a simple two-layer system:
 

  • Tier 1 (North Star direction): A small set of long-term experience outcome items we can gradually layer into existing programs and build as new North Star Metrics once validated.

  • Tier 2 (Product health tracking): Short in-product intercept surveys that are repeatable, flow-specific, and diagnostic.


The key idea: don’t rip and replace. Instead, start by building the intercepts (Tier 2) to learn fast, and meanwhile layer the Tier 1 (North Star) items carefully into the existing CSAT/NPS surveys.

On to the practical step: making the system real.

To avoid a “happy path only” view, I designed two intercept moments on the same core conversion funnel (Homeowner job posting):

  1. Exit intent → catch the “unhappy path”

  2. Post-completion → capture the “happy path”


Considerations:

  • Only surveying after success misses the people who struggled and left

  • Only surveying exiters misses what “good” looks like

  • In-product intercept => max. 3-4 questions

  • Basic questionnaire structure should be unified across the product, following the Tier 2 structure: goal, ease/usefulness, qual themes

  • Learning from the past: forcing open-text feedback collects noisy data
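Taken together, these considerations pin down a compact intercept template. Here is a sketch of what the unified Tier 2 structure (goal, ease/usefulness, optional qual) could look like; the question wording, keys, and display-logic rule are assumptions for illustration, not the shipped questionnaire.

```python
# Sketch: a unified Tier 2 intercept template.
# Wording, keys, and the show_if rule are hypothetical.

INTERCEPT_TEMPLATE = [
    {"key": "goal", "type": "single_choice", "required": True,
     "text": "What did you come here to do today?"},
    {"key": "ease", "type": "scale_1_5", "required": True,
     "text": "How easy was it to get that done?"},
    {"key": "why", "type": "open_text", "required": False,  # never forced
     "text": "What made it easy or hard? (optional)",
     "show_if": {"ease": [1, 2]}},  # display logic: only ask strugglers
]

# Guardrails the template must satisfy:
assert len(INTERCEPT_TEMPLATE) <= 4                       # max. 3-4 questions
assert all(not q["required"] for q in INTERCEPT_TEMPLATE
           if q["type"] == "open_text")                   # open text optional
```

Keeping the template as data (rather than hard-coded questions) is what makes the structure reusable across flows while staying flow-specific in wording.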


I built the first intercepts around strict rules:

  • Short (low effort using optional + display logic)

  • Structured first (to trend + segment)

  • Optional text only 

 

I designed the first intercepts for iteration, so we could review response rate, drop-offs, answer distribution, and answer quality. I piloted in a high-volume market (Germany) to collect a lot of data quickly, iterate fast, and learn.
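The iteration loop boils down to a few numbers per intercept. A minimal sketch of how those review metrics could be computed from raw response logs (field names like `completed` and `ease` are assumptions):

```python
# Sketch: pilot review metrics from raw intercept logs.
# `shown` = intercepts displayed; `responses` = started surveys.
from collections import Counter

def pilot_metrics(shown: int, responses: list[dict]) -> dict:
    started = len(responses)
    completed = sum(1 for r in responses if r.get("completed"))
    return {
        # share of displayed intercepts that got any response
        "response_rate": started / shown if shown else 0.0,
        # share of started surveys abandoned before the end
        "drop_off_rate": 1 - completed / started if started else 0.0,
        # answer distribution for the ease item (to spot ceiling effects)
        "ease_distribution": Counter(r["ease"] for r in responses if "ease" in r),
    }
```

Reviewing these three numbers after each iteration is what turns the pilot into a learning loop rather than a one-off launch.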

Measurement only matters if people use it.
So I made the outputs simple, repeatable, and easy to act on.


What I delivered

  • A clear picture of the measurement system (Tier 1 + Tier 2)

  • A realistic plan for what we can do now vs later

  • A shippable intercept foundation that starts generating product health signals (Tier 2)

  • A transition plan for the North Star metrics (Tier 1): keep CSAT/NPS stable while layering in better items over time + validating them by analysing the data. 
     

Impact

  • Leadership aligned early, which unlocked momentum

  • We stopped debating and started running something real

  • We created a system that can grow, even in a messy environment

Happy End?

... not the fairytale kind (yet).
More the realistic kind:

 

We didn’t magically fix measurement in three months during a merger.

But we did build a lighthouse - and we turned it on.


© 2024 Rebekka Hoffmann. Powered and secured by Wix
