Overview
If you run retention in Customer.io, the fastest way to break your segmentation is letting internal QA traffic look like real shoppers. Test support is how you validate SDK tracking (identify, events, device tokens) and still keep revenue, cart recovery, and reactivation reporting trustworthy. If you want a second set of eyes on your tracking plan before you ship, you can book a strategy call—this is usually where we catch the “one missing identify call” that causes weeks of misattribution.
In most retention programs, we’ve seen test data creep in through mobile devices (push tokens), staging builds pointing at production, and shared QA accounts. The fix isn’t “be careful”—it’s putting a clear, enforced marker in your SDK layer and making Customer.io campaigns respect it.
How It Works
Test support is basically a contract between your app and Customer.io: your SDK sends a reliable signal that a profile/device/event is “test,” and your orchestration (segments, campaigns, reporting) treats it differently. The key is doing this at the identity layer (identify + attributes) and not only at the message layer (like excluding a list of emails), because devices and anonymous sessions are where most pollution starts.
- Identity stitching starts with identify. When a user logs in (or you otherwise know who they are), your app calls the Customer.io SDK `identify` with a stable identifier (typically your internal `customer_id`). That's the anchor Customer.io uses to tie events and devices to a person.
- Tag test state as an attribute. Your app sets something like `is_test_user=true` (and ideally `environment=staging|production`) on the person profile during identify. This is what you'll filter on everywhere else.
- Events still fire, but become filterable. Your app continues sending events like `product_viewed`, `added_to_cart`, `checkout_started`, and `order_completed`. Because the person is tagged, you can exclude them from segments, conversions, and journey entry.
- Devices matter for push. If your QA device registers for push in production, it can receive real campaigns unless you also gate push sends using that same test attribute (or you keep QA devices in a separate workspace/project).
Practical example: your team QAs a cart abandonment flow on iOS. They add to cart, close the app, and wait for push + email. Without test support, those events inflate your abandoned cart volume and can even "convert" if your QA completes checkout with a discount code; now your recovery rate looks better than it is.
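As a concrete sketch of the identity-layer tagging above, here is how an app might assemble its identify payload (TypeScript; `buildIdentifyPayload` and the payload shape are illustrative helpers, not part of the Customer.io SDK):

```typescript
type Environment = "staging" | "production";

interface IdentifyPayload {
  id: string; // your stable internal customer_id
  attributes: {
    is_test_user: boolean;
    environment: Environment;
    [key: string]: unknown;
  };
}

// Build the payload your app hands to the SDK's identify call on login.
// QA/internal accounts get is_test_user=true; real users get false.
function buildIdentifyPayload(
  customerId: string,
  isTestUser: boolean,
  environment: Environment,
  extra: Record<string, unknown> = {}
): IdentifyPayload {
  return {
    id: customerId,
    attributes: { is_test_user: isTestUser, environment, ...extra },
  };
}
```

Whatever your SDK's actual identify signature looks like, the point is that the test markers travel with the identify call itself rather than arriving in a later attribute update.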
Step-by-Step Setup
The goal here is simple: every test interaction should be unmistakably marked at the moment the SDK identifies the user (and ideally even before, if you track anonymous activity). Then you wire Customer.io segments/campaign rules to ignore or isolate those profiles.
- Decide your test markers (keep them boring and consistent). Use a boolean like `is_test_user` and a string like `environment`. Avoid clever naming; operators need to filter fast.
- Implement the marker in your app's identify call. When your app calls the Customer.io SDK `identify`, include: `id` (your stable customer identifier), `is_test_user` (true for QA/internal accounts, false or absent for real users), and `environment` (`staging` for non-prod builds, `production` for App Store/Play builds). Watch the common gap: setting `is_test_user` only after login while firing "anonymous" cart events before login. If you do anonymous tracking, you'll want a parallel approach for anonymous profiles too (see tips below).
- Gate event tracking in non-prod builds (optional, but safer). If you have staging builds, either send staging builds to a separate Customer.io workspace/project, or force `environment=staging` and exclude it everywhere in production journeys.
- Create an "Exclude Test Users" segment in Customer.io. Build a segment that matches `is_test_user=true` OR `environment=staging`. This becomes your universal suppression segment.
- Add exclusion rules to your retention campaigns and journeys. For cart recovery, post-purchase, replenishment, and winback, add an entry filter or early exit condition: `is_test_user != true` and `environment = production`.
- Validate with a real QA loop. Run one test checkout and one abandoned cart from a flagged QA account. Confirm:
  - Events appear on the profile (so tracking works)
  - The profile does not enter production journeys
  - Your conversion metrics don't count the QA order as a "real" conversion (depending on how you report)
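The suppression segment and the journey entry filter in the steps above reduce to a small predicate. A minimal sketch (the `Profile` shape is an assumption for illustration; in Customer.io you'd express the same conditions as segment and entry rules):

```typescript
interface Profile {
  attributes: {
    is_test_user?: boolean;
    environment?: string;
  };
}

// Universal suppression rule: mirrors the "Exclude Test Users" segment
// (is_test_user = true OR environment = staging).
function isExcludedTestProfile(p: Profile): boolean {
  return (
    p.attributes.is_test_user === true ||
    p.attributes.environment === "staging"
  );
}

// Journey entry gate: is_test_user != true AND environment = production.
function canEnterJourney(p: Profile): boolean {
  return !isExcludedTestProfile(p) && p.attributes.environment === "production";
}
```

Note that the entry gate requires `environment = production` explicitly, so profiles with a missing environment attribute are kept out rather than let in by default.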
When Should You Use This Feature
If you’re serious about retention performance, you should treat test support as mandatory plumbing—not a “nice to have.” It’s most valuable when you’re iterating quickly and need to QA flows without second-guessing your dashboards.
- Cart recovery QA on mobile. You're testing push + email timing after `added_to_cart` and need confidence you're not inflating abandoned cart counts.
- Repeat purchase / replenishment journeys. You're tuning delays based on `order_completed` and don't want internal orders to skew "days-to-reorder" logic.
order_completedand don’t want internal orders to skew “days-to-reorder” logic. - Reactivation experiments. You’re running winback offers to “inactive 60 days” and don’t want employees with dormant accounts to enter and redeem codes.
- High-volume release cycles. Every app release includes analytics changes; test support keeps your production workspace usable while QA hammers flows.
Operational Considerations
Where teams get tripped up isn’t the SDK call—it’s how the data moves through segmentation and orchestration over time. If you don’t operationalize the rule, test users leak into at least one campaign.
- Segmentation hygiene: maintain a single canonical “Exclude Test Users” segment and reference it everywhere. Duplicated logic across campaigns drifts and eventually breaks.
- Data flow realities: make sure the attribute is set before the events you care about. If `added_to_cart` fires before the identify/attribute update, the user can enter a journey before the "test" flag lands.
- Anonymous-to-known stitching: if you track anonymous browsing/cart building, decide how you'll mark QA sessions. Common approach: QA builds always set `environment=staging` at SDK init, so even anonymous profiles are filterable.
- Orchestration across channels: push is the usual leak. Even if email is suppressed by address conventions, a device token can still receive messages unless your journey filters on `is_test_user`/`environment`.
- Reporting and conversion criteria: if your journey conversion goal is "purchase event," QA purchases can close out journeys and artificially boost conversion. Exclude test users at entry, not only from sends.
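One way to make the "flag lands after the event" race moot is to stamp the build environment onto every outgoing event at the tracking layer, so even anonymous activity is filterable before identify runs. A sketch under that assumption (`stampEvent` and the event shape are illustrative, not SDK APIs):

```typescript
type Env = "staging" | "production";

interface TrackedEvent {
  name: string;
  attributes: Record<string, unknown>;
}

// Wrap event construction so the build environment rides along on every
// event, including ones fired before identify has run.
function stampEvent(
  name: string,
  attrs: Record<string, unknown>,
  env: Env
): TrackedEvent {
  return { name, attributes: { ...attrs, environment: env } };
}
```

If all tracking funnels through a wrapper like this, a QA build compiled with `env = "staging"` cannot emit an unmarked event, no matter when login happens.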
Implementation Checklist
Before you declare this “done,” you want a checklist that covers identity, events, and journey gating. This is the stuff that keeps your retention machine from getting noisy as your team grows.
- App sets `is_test_user` (and/or `environment`) during the Customer.io SDK `identify` call
- Staging builds are isolated (separate workspace) or permanently tagged `environment=staging`
- Core retention events are tracked consistently: `product_viewed`, `added_to_cart`, `checkout_started`, `order_completed`
- A single "Exclude Test Users" segment exists and is referenced across campaigns
- All cart recovery / post-purchase / winback journeys have entry filters or early exits excluding test users
- QA run confirms: events arrive, profiles don’t enter journeys, and conversions aren’t credited
Expert Implementation Tips
Once the basics are in, a few operator moves make this much harder to break—especially when multiple engineers and marketers touch the system.
- Prefer build-based flags over email-based rules. If you rely on “@company.com” exclusions, contractors and shared Gmail QA accounts will slip through. Build flags are deterministic.
- Set the test marker as early as possible. If your SDK supports setting attributes at init (or you can identify immediately with a device-scoped ID), do it. The earlier the attribute exists, the less likely a journey entry happens before suppression.
- Create a dedicated QA journey. Mirror your cart recovery journey but invert the filter (`is_test_user=true`). That way QA can validate timing/content without touching production metrics.
- Log identity transitions. Keep an internal log when a profile moves from anonymous → known (identify). Most "why did this user get two cart emails?" issues come from duplicate identify calls or mismatched IDs.
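The identity-transition log can be very small and still catch the classic failure mode: one anonymous session identified under two different known IDs. A minimal in-memory sketch (names are illustrative; in practice you'd ship these records to your logging pipeline):

```typescript
interface IdentityTransition {
  anonymousId: string;
  knownId: string;
  at: string; // ISO timestamp
}

const transitions: IdentityTransition[] = [];

// Record every anonymous → known transition as it happens.
function logIdentify(anonymousId: string, knownId: string): IdentityTransition {
  const t = { anonymousId, knownId, at: new Date().toISOString() };
  transitions.push(t);
  return t;
}

// Flag the duplicate-identify bug: same anonymous session, multiple
// distinct known IDs.
function hasConflictingIdentify(anonymousId: string): boolean {
  const knownIds = new Set(
    transitions
      .filter((t) => t.anonymousId === anonymousId)
      .map((t) => t.knownId)
  );
  return knownIds.size > 1;
}
```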
Common Mistakes to Avoid
Most issues show up as “our abandoned cart numbers look off” or “QA devices got a real promo.” These are the patterns behind that.
- Marking test users only in Customer.io, not in the app. If the app doesn’t send the flag, you’ll never catch anonymous events or push tokens.
- Setting `is_test_user` after events fire. The user can enter a journey on `added_to_cart` before the attribute update arrives.
- Relying on a suppression list instead of journey entry filters. Suppression stops sends, but it doesn't stop journey entry or conversion attribution.
- Using shared QA accounts without stable IDs. If multiple people log into the same account across devices, you’ll get messy device graphs and confusing message histories.
- Pointing staging builds at production API keys. This is the fastest way to pollute data at scale—lock it down in CI/CD and config management.
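That last mistake is cheap to guard against in code: assert at startup that the configured API key matches the build environment. A sketch (the `prod_` key prefix is a hypothetical naming convention; substitute whatever actually distinguishes your workspace credentials):

```typescript
// Fail fast if a non-production build is configured with a production
// API key. The "prod_" prefix is an assumed convention for illustration.
function assertKeyMatchesEnvironment(
  apiKey: string,
  env: "staging" | "production"
): void {
  const looksLikeProdKey = apiKey.startsWith("prod_");
  if (env !== "production" && looksLikeProdKey) {
    throw new Error("Staging build configured with a production API key");
  }
}
```

Run it once at SDK init; a misconfigured staging build then crashes in CI or on first launch instead of silently polluting production data.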
Summary
If your SDK tracking is clean but your test traffic isn’t isolated, your retention program will lie to you—especially on cart recovery and winback. Tag test users/devices at identify time, exclude them at journey entry, and keep one canonical suppression segment. That’s the difference between confident iteration and constant second-guessing.
Implement Test Support with Propel
If you’re wiring this up and want to pressure-test the identity stitching (anonymous → known), push token handling, and journey exclusion rules inside Customer.io, it’s worth a quick working session. Bring your current SDK identify/event payloads and one retention flow you care about (cart recovery is usually the fastest). You can book a strategy call and we’ll map the cleanest way to QA without contaminating production segments or conversion reporting.