Why are response volumes different between Measure and Identify sections?
Response volumes for a selected time frame are calculated differently because the two sections serve different purposes in your data analysis.
TL;DR
- Measure counts by when surveys were sent (operational coverage & response rate).
- Identify counts by when responses were received (experience trend over time).
Because end-users may respond days or weeks after the survey is sent, the same time frame can legitimately show different response volumes in Measure vs. Identify.
What each section is for
- Measure: "What are we sending to and receiving from end-users?" Keyed to the survey sent date from your ITSM tool. Use this to manage measurement activity: coverage, response rate, and unique respondents.
- Identify: "How is Experience developing over time?" Keyed to the response received date. Use this to analyze trends and changes in experience over time, across channels or segments.
Why do the numbers differ even with the same time frame and filters?
When you select the same period (e.g., Last 6 months) in both sections:
- Measure includes responses tied to surveys sent during that period, even if they arrive later.
- Identify includes all responses received during that period, even if the surveys were sent earlier.
Since responses can arrive with a delay after the survey is sent, the two views are expected to show different totals for the same date range.
Example from demo data (illustrative):
Measure shows 12,104 responses vs. Identify 11,605 responses for the same filters.
The gap reflects differing date anchors (sent-date vs. received-date) and response lag.
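The two anchoring rules can be sketched in a few lines of Python. The records and field names below are illustrative assumptions, not the product's actual data model:

```python
from datetime import date

# Illustrative records: when each survey was sent and when its
# response (if any) arrived. Field names are assumptions.
surveys = [
    {"sent": date(2025, 1, 10), "received": date(2025, 1, 12)},  # answered quickly
    {"sent": date(2025, 1, 28), "received": date(2025, 2, 3)},   # answered after the window
    {"sent": date(2025, 1, 30), "received": None},               # never answered
    {"sent": date(2024, 12, 20), "received": date(2025, 1, 5)},  # sent before the window
    {"sent": date(2024, 12, 29), "received": date(2025, 1, 2)},  # sent before the window
]

start, end = date(2025, 1, 1), date(2025, 1, 31)

# Measure: responses belonging to surveys SENT in the window,
# even if the response itself arrived after the window closed.
measure = sum(1 for s in surveys
              if start <= s["sent"] <= end and s["received"] is not None)

# Identify: responses RECEIVED in the window, even if the survey
# was sent before the window opened.
identify = sum(1 for s in surveys
               if s["received"] is not None and start <= s["received"] <= end)

print(measure, identify)  # 2 3 — same window, different totals
```

With this toy data, Measure counts two responses (to the two answered January surveys) while Identify counts three (the ones that arrived in January), which mirrors the gap you see in the product.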
- The Measure page shows data based on when surveys were sent from your ITSM tool, because this is where you track your measurement activity ("What are we sending to and receiving from end-users?"). Other metrics on this page, such as 'Surveys sent' and 'Response rate', follow the same sent-date logic.

- The Identify section shows data based on when responses were received, because its focus is "How is Experience developing over time?" It therefore shows all responses received during your selected time frame, even if the corresponding tickets were resolved before that time frame started.
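Because 'Response rate' follows the same sent-date logic, it can drift upward for a recent period as late responses arrive. A minimal sketch with hypothetical numbers:

```python
# Hypothetical counts for a single sent-date cohort (e.g., January).
surveys_sent = 2000        # fixed once the month closes
responses_received = 480   # can still grow as late responses arrive

response_rate = responses_received / surveys_sent
print(f"{response_rate:.1%}")  # 24.0%

# A week later, 20 late responses to January surveys arrive:
responses_received += 20
print(f"{responses_received / surveys_sent:.1%}")  # 25.0%
```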

The three numbers on the Measurement activity card
These three indicators often get compared side by side; here's how they're calculated and how they behave over time.

- Surveys sent
  - What it means: How many feedback requests were sent during the selected period (e.g., January 2025).
  - Date anchor: Sent date (Measure logic).
  - Does it change later? No. Once the period is closed, this count is fixed.
- Responses received
  - What it means: How many responses have been received to the surveys sent in that period, regardless of when the responses arrived.
  - Date anchor: Still tied to the "sent in period" cohort, but responses may come later.
  - Does it change later? Yes. It can increase as late responses to those surveys arrive.
- Response volume (rate) – Last 6 months
  - What it means: The total responses received in each of the last six months, counted by response received date (not by when the surveys were sent). The percentage shown with each bar reflects the response rate for that month in the UI.
  - Date anchor: Received date (Identify logic).
  - Does it change later? No. For a given month, the bar represents responses received within that month. If a response arrives later, it is counted only in the month it arrives; it is never moved retroactively into past months.
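The three calculations above can be sketched together. Again, the records and field names are illustrative assumptions, not the real schema:

```python
from collections import Counter
from datetime import date

surveys = [
    {"sent": date(2025, 1, 5),  "received": date(2025, 1, 7)},
    {"sent": date(2025, 1, 20), "received": date(2025, 2, 2)},   # late response
    {"sent": date(2025, 1, 25), "received": None},               # no response yet
    {"sent": date(2024, 12, 30), "received": date(2025, 1, 3)},  # sent in December
]

# 1) Surveys sent: the January cohort, fixed once the month closes.
cohort = [s for s in surveys if (s["sent"].year, s["sent"].month) == (2025, 1)]
surveys_sent = len(cohort)                                    # 3

# 2) Responses received: answers to that cohort, whenever they arrive.
#    This number can still grow after January ends.
responses_received = sum(1 for s in cohort if s["received"])  # 2

# 3) Response volume: counted by the month each response ARRIVED,
#    regardless of when its survey was sent.
volume = Counter((s["received"].year, s["received"].month)
                 for s in surveys if s["received"])
# Counter({(2025, 1): 2, (2025, 2): 1})
```

Note how the December survey is excluded from the first two numbers (wrong sent-date cohort) but its January response still counts toward January's response volume.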
Practical takeaway
- Use Surveys sent to understand your measurement coverage in a period.
- Use Responses received (for that sent cohort) to track participation catching up.
- Use Response volume (rate) to see how experience activity (responses) actually landed over time, month by month.
Which view should I use, and when?
- Capacity & coverage (operational): use Measure
  - Are we sending enough surveys?
  - What's our response rate for the period?
  - How many unique persons have responded?
- Trends & insights (experience): use Identify
  - How is experience trending month by month?
  - Are there seasonal patterns or changes after an intervention?
  - Which channels, services, or locations are improving or declining?
When do Measure and Identify numbers appear to “match”?
They can be close when response lag is minimal (most people respond shortly after the survey) and when the time window is wide enough that late responses fall inside the same comparison window. Otherwise, differences are normal and expected.
Common pitfalls to avoid
- Comparing Measure's "Responses received" to Identify's "Response volume" as if they were the same metric. They are anchored to different dates (sent vs. received).
- Expecting "Surveys sent" to change after month-end. It won't; only Responses received (for that cohort) can grow as late responses arrive.
- Reading the six-month bars as "responses to surveys sent in those months." The bars show responses received in those months, irrespective of when the related surveys were sent.