A/B Test events (Data In) in Customer.io


Overview

If you’re running retention experiments, the difference between “clean test results” and “total chaos” usually comes down to how your assignment data enters Customer.io. If you want a second set of eyes on your event design before it breaks segmentation or attribution, book a strategy call and we’ll pressure-test the data model with you.

A/B test events are simply the events you send into Customer.io that say: “this person was assigned to variant X for experiment Y.” Once that’s in reliably, you can trigger journeys, branch messaging, and measure conversion without guessing who saw what.

How It Works

Customer.io doesn’t magically know your experiment assignments—your source of truth (your app, storefront, experimentation tool, or backend) has to send them in as events. In practice, the assignment event becomes the spine of your retention test: it’s what segments reference, what triggers journeys, and what reporting uses to interpret outcomes.

  • Your system assigns a variant (A/B, control/treatment, or multi-variant) and emits an event into Customer.io.
  • Identity resolution happens at ingest: the event must be tied to the right person (email, customer_id, or your chosen identifier). If you send assignment events anonymously and never merge them, your “variant audience” will be incomplete.
  • Event properties carry the experiment metadata (experiment name/key, variant, timestamp, optional context like channel, product, or cohort). These properties are what you’ll filter on in segments and journey conditions.
  • Triggers and segmentation depend on consistency: if you change property names mid-test (e.g., variant vs variation), you’ll silently split your audience and your reporting won’t reconcile.
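To make the event contract concrete, here is a minimal sketch of an assignment event payload. The event name `ab_test_assigned` and the property names follow the conventions recommended later in this guide; the actual Track API call is omitted, since how you send it depends on your stack:

```python
import time

def build_assignment_event(customer_id: str, experiment_key: str, variant: str) -> dict:
    """Build an assignment event payload keyed to a known person.

    The property names (experiment_key, variant, assigned_at) are the
    contract that segments and journey conditions will filter on, so
    they must never vary across services.
    """
    return {
        "name": "ab_test_assigned",           # one event name for all experiments
        "id": customer_id,                    # stable identifier, not a device ID
        "data": {
            "experiment_key": experiment_key, # e.g. cart_recovery_shipping_threshold_v1
            "variant": variant,               # e.g. "control" or "treatment"
            "assigned_at": int(time.time()),  # explicit assignment timestamp
        },
    }

event = build_assignment_event(
    "cust_123", "cart_recovery_shipping_threshold_v1", "treatment"
)
```

Keeping the payload this small is deliberate: everything a segment or branch condition needs is in three properties, and nothing outcome-related is mixed in.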

Real D2C scenario: You want to test whether adding a “free shipping threshold” callout in cart recovery increases recovery rate. The moment a shopper abandons cart, your backend assigns them to control (standard emails) or treatment (emails with threshold messaging) and sends ab_test_assigned. Customer.io uses that event to route them into the right branch and ensures you can later segment “treatment” vs “control” for conversion analysis.

Step-by-Step Setup

Before you build the journey, lock the event contract. Most retention programs get burned by building flows first and “figuring out data later,” which creates brittle triggers and messy cohorts.

  1. Pick your identifier strategy (and stick to it).
    Decide whether you’ll send events keyed by customer_id, email, or another stable ID. Avoid device-only identifiers for retention tests unless you have a guaranteed merge path.
  2. Define a single assignment event name.
    Use something explicit like experiment_assigned or ab_test_assigned. Don’t create a new event name per experiment unless you have a strong governance reason.
  3. Standardize required properties.
    At minimum, send:
    • experiment_key (e.g., cart_recovery_shipping_threshold_v1)
    • variant (e.g., control, treatment)
    • assigned_at (timestamp—either as event time or explicit property)
  4. Send the event at the moment of assignment (not when the message sends).
    If you emit assignment only when the email goes out, you’ll bias your test toward deliverable users and exclude people who should have been in the cohort.
  5. Ensure “assign once” behavior in your source system.
    Your app/backend should prevent re-assignment. Re-sending the same person into a different variant mid-test is the fastest way to invalidate results and break journey branching.
  6. Validate in Customer.io Activity Logs.
    Check a few known users and confirm the event shows up with the expected properties. Then confirm you can build a segment like: “Has event ab_test_assigned where experiment_key equals X and variant equals treatment.”
  7. Only then build the journey trigger/branching.
    Trigger a journey off the relevant behavioral event (e.g., cart_abandoned), then branch based on the latest assignment event (or assignment property) to route messaging.

When Should You Use This Feature

If you’re serious about retention optimization, you need assignment data flowing into Customer.io—not just “we sent two different emails.” The assignment event is what keeps cohorts clean across channels and over time.

  • Cart recovery tests: subject line vs offer, free shipping threshold messaging, SMS timing, dynamic bundles in the reminder sequence.
  • Repeat purchase acceleration: replenishment timing windows (day 21 vs day 28), cross-sell logic (category-based vs quiz-based), loyalty prompt placement.
  • Reactivation: different winback incentives (no discount vs $10 vs 15%), different content angles (new arrivals vs bestsellers vs social proof).
  • Holdout measurement: sending an explicit holdout variant assignment so you can quantify incremental lift rather than raw conversion.

Operational Considerations

On paper, an “assignment event” is simple. In practice, segmentation accuracy and trigger reliability depend on how you handle identity, deduping, and timing across systems.

  • Identity resolution is the make-or-break point.
    If your assignment happens pre-checkout (anonymous) but your conversion happens post-checkout (known), you need a merge strategy so the assignment event follows the person. Otherwise, your “treatment” segment will undercount and your results will look artificially strong or weak.
  • Event timing affects orchestration.
    If cart_abandoned fires before ab_test_assigned, your journey can start without the assignment and route people incorrectly. Fix this by assigning earlier, or by adding a short delay + “wait until assignment exists” pattern.
  • Property naming is governance, not preference.
    Pick experiment_key and variant and never deviate. Small inconsistencies create “shadow cohorts” that don’t match segments and break branching conditions.
  • Deduplication matters.
    If your system retries events, you can end up with multiple assignment events per person. Decide whether you’ll treat “latest assignment wins” or block repeats at the source.
  • Keep assignment separate from outcome events.
    Don’t overload the assignment event with purchase outcomes. Send clean purchase/order events separately so you can measure lift without contaminating the cohort definition.

Implementation Checklist

Use this as a pre-flight before you rely on the data for routing or reporting. It’s easier to fix an event contract now than after you’ve shipped three experiments and can’t reconcile cohorts.

  • Assignment event name is standardized (one name, many experiments)
  • Required properties exist: experiment_key, variant, timestamp
  • Identifier strategy is documented (email vs customer_id) and consistent
  • Anonymous-to-known merge path exists if assignment can happen pre-identification
  • Source system enforces “assign once” (no mid-test variant flips)
  • Event arrives before (or reliably shortly after) the journey trigger
  • Segments can be built cleanly for each variant using event filters
  • Activity Logs confirm real users are receiving correct properties

Expert Implementation Tips

These are the small operator moves that keep tests clean when you’re running multiple retention experiments at once.

  • Use a stable experiment key naming convention.
    Include surface + intent + version (e.g., winback_offer_tier_v2). When you’re looking at segments three months later, you’ll thank yourself.
  • Store assignment as both an event and (optionally) a person attribute.
    Events are best for auditability; a “current_experiment_variant” attribute can simplify branching when you only care about the latest state. Just don’t overwrite it if you need historical truth.
  • Add context properties when they change the message logic.
    For D2C, channel and product context matter: channel=email, cart_value, category. Keep it tight—only add what you’ll actually use for segmentation or analysis.
  • Build a QA segment for each variant.
    A segment like “Assigned to experiment X within last 1 day” lets you quickly sanity-check counts and spot ingestion failures before revenue is impacted.
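The QA-segment idea can also run as a quick offline sanity check against exported or logged events. A heavily skewed split (say 90/10 on an intended 50/50 test) is an early signal of ingestion failure or inconsistent property names. A small sketch:

```python
from collections import Counter

def variant_counts(events: list[dict], experiment_key: str) -> Counter:
    """Count assignments per variant for one experiment.

    Events with a different (or misspelled) experiment_key are silently
    excluded, which is exactly how a mismatched segment filter behaves,
    so skewed counts here predict skewed segments in Customer.io.
    """
    return Counter(
        e["variant"] for e in events if e.get("experiment_key") == experiment_key
    )

events = [
    {"experiment_key": "x", "variant": "control"},
    {"experiment_key": "x", "variant": "treatment"},
    {"experiment_key": "y", "variant": "control"},  # different experiment, ignored
]
counts = variant_counts(events, "x")
```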

Common Mistakes to Avoid

Most A/B tests don’t fail because the creative was bad—they fail because the data didn’t enter Customer.io in a way that kept cohorts trustworthy.

  • Assigning variants inside Customer.io without a single source of truth.
    If your app also assigns variants, you’ll end up with mismatched cohorts across channels (email vs SMS vs onsite).
  • Sending assignment after the user already entered the journey.
    This creates “default branch” routing and contaminates both cohorts.
  • Using inconsistent property names across services.
    One tool sends variation, another sends variant. Your segments only catch half the users.
  • Re-randomizing on every event.
    If a user abandons cart twice and gets re-assigned, you’ve effectively turned your test into noise.
  • Ignoring anonymous traffic.
    If a large share of cart abandoners aren’t identified at assignment time, your “test audience” becomes a biased subset unless you merge identities correctly.
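One structural defense against the re-randomization mistake is deterministic assignment: derive the variant from a hash of the person and the experiment key, so repeated events can never produce a different answer. A sketch, assuming an even split across the listed variants:

```python
import hashlib

def deterministic_variant(customer_id: str, experiment_key: str,
                          variants: tuple = ("control", "treatment")) -> str:
    """Derive a stable variant from a hash of (experiment_key, customer_id).

    The same person always maps to the same variant for a given experiment,
    so a shopper who abandons cart twice cannot be re-randomized. Including
    the experiment key in the hash keeps assignments independent across
    experiments.
    """
    digest = hashlib.sha256(f"{experiment_key}:{customer_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

v1 = deterministic_variant("cust_42", "cart_recovery_shipping_threshold_v1")
v2 = deterministic_variant("cust_42", "cart_recovery_shipping_threshold_v1")
# v1 == v2: re-calling never flips the assignment
```

Note this complements, not replaces, the "assign once" event emission: you still send the assignment event exactly once so cohort entry time is well defined.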

Summary

If you want reliable retention experiments in Customer.io, treat A/B assignment as first-class data-in: consistent event naming, clean identity resolution, and predictable timing.

When the assignment event is solid, segmentation stays accurate and journeys branch correctly—so you can iterate on cart recovery, repeat purchase, and winback without second-guessing the cohort.

Implement A/B Tests with Propel

If you’re already running experiments, the fastest win usually comes from tightening the data contract and orchestration so cohorts don’t drift across channels. We’ll review how your assignment events enter Customer.io, validate identity/merge behavior, and make sure your segments and triggers won’t break mid-test—then map it into a practical rollout plan.

If that would help, book a strategy call and bring one live experiment (like cart recovery or winback). We’ll use it to sanity-check your entire A/B test data-in pipeline.

Get in touch

Our friendly team is always here to chat.

Here’s what we’ll dig into:

  • Where your lifecycle flows are underperforming and the revenue you’re missing
  • How AI-driven personalization can move the needle on retention and LTV
  • Quick wins your team can action this quarter
  • Whether Propel AI is the right fit for your brand, stage, and stack