Overview
If you’re building retention in Customer.io, you’re going to QA constantly—cart recovery, browse abandon, post-purchase cross-sell, winback. The problem is your QA activity looks exactly like real customer behavior unless you intentionally separate it, which quietly breaks segmentation, skews reporting, and can trigger messages to internal testers at the worst time. If you want a second set of eyes on your tracking plan and test strategy, book a strategy call.
Test support is the operational layer that keeps “testing the app” from turning into “training your automations on fake behavior.” In most retention programs, we’ve seen the biggest issues come from identity stitching during QA (anonymous → identified), and from test events entering production journeys.
How It Works
In practice, you need two things to QA safely: (1) a reliable way to mark a person/device as a tester, and (2) a consistent rule in Customer.io that excludes that traffic from production segments and journeys. The SDK is where you control both identity and event payloads, so that’s where you enforce the discipline.
- Identity is the root of clean testing. Your app generates anonymous activity before login, then you call `identify` after signup/login. If your QA flow reuses devices/accounts, you’ll accidentally stitch test events onto real profiles unless you use dedicated test identities and a durable `is_test` marker.
- Events should carry a test flag (or route to a test workspace). The safest pattern is either: send QA data to a separate Customer.io workspace/environment, or attach a boolean like `is_test: true` on every event from test builds / internal accounts and filter it out in segments and triggers.
- Segments and journey entry rules do the containment. Once `is_test` exists on the person (and/or on events), you exclude testers everywhere that matters: cart recovery triggers, browse abandon, post-purchase flows, and your KPI segments (30/60/90-day repeat purchasers, at-risk cohorts, etc.).
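The event-tagging half of this pattern can be sketched as a thin wrapper around whatever tracking call your app uses. This is a minimal sketch, not the Customer.io SDK API: `IS_QA_BUILD` and the returned event shape are assumptions standing in for your build configuration and SDK payload.

```typescript
// Shape of event properties sent alongside a tracked event.
type EventProps = Record<string, string | number | boolean>;

// Assumption: flipped by your build configuration (staging scheme, debug flavor, etc.).
const IS_QA_BUILD: boolean = true;

function tagEvent(name: string, props: EventProps): { name: string; props: EventProps } {
  // Attach is_test to every event from QA builds so production
  // segments and journey triggers can filter it out.
  const tagged: EventProps = IS_QA_BUILD ? { ...props, is_test: true } : props;
  return { name, props: tagged };
}

// The cart event keeps its canonical name; only the flag differs.
const event = tagEvent("add_to_cart", { product_id: "sku_123", price: 42 });
```

Note the event name stays `add_to_cart` in both environments; the flag, not the name, carries the test/prod distinction.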
Real D2C scenario: Your team QAs a cart abandonment flow on iOS. A tester adds items, triggers `add_to_cart`, then closes the app. If that tester later logs into a real customer account on the same device during support testing, anonymous cart events can merge into the real profile after `identify`. Without test support, you’ll send a “You left something behind” SMS to an actual customer with a cart they never created.
Step-by-Step Setup
The goal here is simple: make test detection deterministic from the app side, then make Customer.io journeys treat test traffic as non-existent. Do this once and you’ll stop chasing phantom conversions and broken holds.
- Decide your test identity strategy (don’t skip this). Pick one:
  - Separate workspace/environment: QA builds send to a non-prod Customer.io workspace. Cleanest for analytics and safest for messaging.
  - Single workspace with flags: production workspace, but you tag internal users/events with `is_test` and exclude them everywhere.
- Mark testers at `identify` time. When your app calls the Customer.io SDK `identify`, include a stable attribute like:
  - `is_test: true` for internal accounts
  - Optionally `test_group: "qa"` or `test_build: "staging"` to debug where noise came from

  Operational note: don’t rely on “email contains +test” alone unless you control every QA identity. People forget, and you’ll miss exclusions.
- Tag events from test contexts. For key retention events (e.g., `viewed_product`, `add_to_cart`, `checkout_started`, `order_completed`), add `is_test: true` in the event properties when the app is in a QA build or the identified person is a tester.
- Validate anonymous-to-identified stitching behavior. Run this exact QA script on one device:
  - Open the app logged out → fire `viewed_product`
  - Log in as a known test user → call `identify` with `is_test: true`
  - Confirm the pre-login events are associated with the same test profile (not a real customer)
- Build a “Test Users” segment in Customer.io. The segment definition typically includes `is_test = true` (and optionally an internal email domain). This becomes your global exclusion list.
- Apply exclusions to production triggers. For every retention journey entry (cart abandon, browse abandon, replenishment, winback), add an entry condition or filter: person is NOT in segment “Test Users” and/or event property `is_test` is not true.
- Set up a dedicated QA journey. Create a parallel “QA Cart Abandon” journey that only allows `is_test = true`. This gives your team a safe place to test copy, timing, and channel routing without touching production.
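The “mark testers at identify time” step can be sketched as an attribute builder that your login flow runs before calling the SDK. This is a hedged sketch: `testerAttributes` and the commented-out `cioIdentify` are hypothetical names, since the real identify call differs per Customer.io SDK platform.

```typescript
// Person attributes sent at identify time.
type Attributes = Record<string, string | boolean>;

function testerAttributes(email: string, isInternal: boolean): Attributes {
  const attrs: Attributes = { email };
  if (isInternal) {
    attrs.is_test = true;       // durable marker; survives repeated identify calls
    attrs.test_group = "qa";    // optional: traces where the noise came from
  }
  return attrs;
}

const attrs = testerAttributes("qa+cart@example.com", true);
// cioIdentify("qa-user-01", attrs);  // substitute your platform's actual SDK identify call
```

Internal accounts get the explicit flag regardless of email address, which is what protects you when a contractor or agency account doesn’t match your domain.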
When Should You Use This Feature
If you’re shipping app changes weekly (new PDP, new checkout, new subscription flow), you’re constantly changing event timing and identity behavior. Test support becomes non-negotiable once you have multiple channels live or you’re optimizing based on event-driven conversion metrics.
- Cart recovery QA across app versions. When you change checkout steps, you need to confirm `checkout_started` and `order_completed` still fire correctly without triggering production SMS to employees.
- Browse abandon and product discovery experiments. If you’re testing recommendation modules, you’ll generate a ton of `viewed_product` noise that can distort “high intent” segments unless excluded.
- Post-purchase flows tied to SKU-level events. When you add new line items or bundles, QA events can pollute “customers who bought X” segments and send the wrong cross-sell.
- Reactivation/winback targeting. Internal testers often look “inactive” then suddenly “purchase” in QA, which breaks holdouts and misleads your winback performance read.
Operational Considerations
Test support isn’t a one-time toggle—it’s a data contract between your app and Customer.io. The operational risk is that your segmentation and orchestration assume clean identity + clean events, and QA is where both go off the rails.
- Segmentation hygiene: Maintain a single source of truth for exclusions (a “Test Users” segment). Don’t rebuild ad-hoc exclusions inside every journey—people forget one and that’s the one that pages you.
- Data flow realities: If you use both server-side and SDK events (common in D2C: the app fires `add_to_cart`, the backend confirms `order_completed`), make sure the test flag exists in both streams. Otherwise you’ll exclude app events but still trigger journeys off backend purchase events.
- Identity stitching: The highest-risk moment is when anonymous activity merges after `identify`. Use dedicated test accounts, and avoid logging into real customer accounts on QA devices used for testing unless you’re explicitly testing support flows.
- Orchestration across channels: Even if email is “safe,” push and SMS are not. One missed exclusion can send a push to an employee at 2am because your time windows and quiet hours are configured for customers, not staff.
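One way to keep the two streams from drifting is a single shared predicate that both the app event layer and the backend sender call before deciding whether traffic is test traffic. A minimal sketch, assuming simplified person/event shapes (the real Customer.io payloads have more fields):

```typescript
// Simplified shapes; real payloads carry more fields.
type Person = { attributes: Record<string, unknown> };
type AppEvent = { name: string; properties: Record<string, unknown> };

function isTestTraffic(person: Person | null, event: AppEvent): boolean {
  // Person-level flag catches identified testers; event-level flag catches
  // pre-identify activity and backend-originated events that never saw identify.
  const personFlag = person?.attributes["is_test"] === true;
  const eventFlag = event.properties["is_test"] === true;
  return personFlag || eventFlag;
}
```

Because both streams import the same function, a backend `order_completed` for a test order is excluded by the same logic that excludes the app’s `add_to_cart`, instead of by two filters that someone has to keep in sync by hand.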
Implementation Checklist
If you want this to actually stick, treat it like an instrumentation requirement, not a marketing preference. This checklist is the minimum bar before you trust retention reporting again.
- App calls `identify` with a durable `is_test` attribute for internal/test accounts
- Key retention events include `is_test` in properties (or QA builds route to a separate workspace)
- Anonymous → identified merge behavior validated on at least one device per platform
- Customer.io “Test Users” segment created and shared internally as the global exclusion source
- All production journeys exclude “Test Users” (entry filters + any conversion/goal tracking)
- Dedicated QA journeys exist for cart abandon and post-purchase (so QA doesn’t touch prod)
- Internal QA accounts documented (who owns them, when to rotate, how to access)
Expert Implementation Tips
Once the basics are in place, the wins come from making QA faster while keeping data pristine. These are the patterns that hold up when multiple teams touch the app and retention program.
- Prefer build-based routing over manual flags when possible. If your mobile app has distinct bundle IDs / environments, route staging builds to a staging Customer.io workspace. It prevents human error (someone forgetting to set `is_test`).
- Use two layers of safety: person-level + event-level. The person attribute `is_test` catches most cases; the event property `is_test` protects you when events fire before `identify` or when backend events don’t inherit person attributes cleanly.
- Create a “QA Inbox” channel policy. In your QA journeys, send to an internal Slack/webhook or a restricted email list rather than real SMS/push. That way you can validate logic without risking channel compliance issues.
- Keep a single canonical event name set. Don’t create `test_add_to_cart` events. That splits logic and guarantees drift. Same event names, different environment/flags.
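Build-based routing from the first tip can be sketched as a lookup keyed on the build environment. The site IDs here are placeholders, not real credentials, and the `Env` type is an assumption about how your build pipeline labels environments:

```typescript
// Build environments your pipeline produces; adjust to your actual flavors/schemes.
type Env = "production" | "staging";

// Placeholder workspace credentials; load real values from build config, never hardcode.
const WORKSPACES: Record<Env, { siteId: string }> = {
  production: { siteId: "PROD_SITE_ID" },
  staging: { siteId: "STAGING_SITE_ID" },
};

function workspaceFor(env: Env): { siteId: string } {
  // A staging build can only ever write to the staging workspace, so a
  // forgotten is_test flag cannot leak test events into production data.
  return WORKSPACES[env];
}
```

The design choice here is that routing is decided at compile/build time, not at runtime by a human-set flag, which removes the most common failure mode: someone forgetting to mark a tester.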
Common Mistakes to Avoid
Most teams don’t break retention because they lack tooling—they break it because QA behavior leaks into production audiences. These are the failure modes we see repeatedly.
- Relying on internal email domains only. Contractors, agencies, and shared QA accounts often don’t match your domain. Use explicit flags.
- Forgetting backend events. You exclude SDK events but your server still sends `order_completed` for test orders, triggering post-purchase flows and inflating repeat purchase rate.
- Testing on real customer accounts. This is how you end up with “ghost carts” and incorrect product affinity segments. Use dedicated test accounts and devices.
- No QA-only journeys. If QA uses production journeys, you’ll constantly pause campaigns, add temporary filters, and accidentally ship those temporary filters into production.
- Not validating merge behavior. Anonymous activity merging into identified profiles is subtle and platform-dependent. If you don’t test it, you’ll debug it later inside campaign performance.
Summary
If you’re using the SDK to drive retention triggers, test support is how you keep QA from corrupting identity, segments, and performance reads. Either route QA builds to a separate workspace or enforce a strict `is_test` contract at both the `identify` call and the event level. Once it’s in place, cart recovery and repeat purchase automations become much easier to trust.
Implement Test Support with Propel
If you’re tightening up your SDK tracking and want to make sure your Customer.io journeys aren’t learning from test behavior, we can help you pressure-test identity stitching, event contracts, and exclusion logic end-to-end in Customer.io. When you’re ready, book a strategy call and we’ll map a QA-safe instrumentation plan that won’t break your cart recovery, post-purchase, or winback flows.