Overview
Random Cohorts in Customer.io is a simple way to split shoppers into consistent, randomized groups inside a journey, so you can run clean tests without exporting lists or relying on manual segmentation. For D2C teams, it is most useful when you want to prove incrementality (holdouts) or compare creative and offer strategies that impact first purchase conversion, cart recovery, and repeat purchase.
A common scenario is abandoned checkout, where you want to test whether adding an SMS step actually lifts recovered revenue or just cannibalizes organic conversions. Propel helps D2C teams design these tests so you get an answer you can trust and a program you can scale. Book a strategy call.
If you are implementing this in Customer.io, treat Random Cohorts as your built-in randomizer for experiments that need stable group assignment.
How It Works
Random Cohorts in Customer.io assigns each person who reaches the block to one of several randomized paths (cohorts) based on the percentage split you set.
In practice, you drop the Random Cohorts block into a workflow, define how many cohorts you want (often 2 to 4), and set the distribution (for example, 50/50 or 80/20). Each cohort then continues down its own branch where you can change channel mix, timing, creative, or offers. Because the assignment happens at the moment a shopper hits the block, you can keep everything else identical and isolate the variable you are testing.
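Conceptually, the split behaves like weighted random assignment at the moment a shopper hits the block. A minimal Python sketch of that idea (this is an illustration of the mechanism, not Customer.io's actual implementation; the cohort names and weights are hypothetical):

```python
import random

def assign_cohort(weights):
    """Assign a shopper to a cohort based on percentage weights.

    weights: dict mapping cohort name -> percentage; values must sum to 100.
    """
    roll = random.uniform(0, 100)
    cumulative = 0
    for cohort, pct in weights.items():
        cumulative += pct
        if roll < cumulative:
            return cohort
    return cohort  # floating-point edge case: fall into the last cohort

# Example: a 90/10 treatment/holdout split over 10,000 shoppers
counts = {"treatment": 0, "holdout": 0}
for _ in range(10_000):
    counts[assign_cohort({"treatment": 90, "holdout": 10})] += 1
print(counts)  # roughly 9,000 treatment / 1,000 holdout
```

Because each assignment is independent, small daily volumes will wobble around the target percentages; the split only converges to 90/10 over enough traffic.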
Use it alongside conversion goals and reporting in Customer.io so you can compare cohorts on actual purchase behavior, not just clicks.
Step-by-Step Setup
Random Cohorts in Customer.io is easiest to set up when you already have a single journey that represents the behavior you want to influence (checkout started, product viewed, post-purchase, etc.).
- Open the journey where you want to run a test (example: “Checkout Started” abandonment flow).
- Identify the exact point where the audience is “eligible” for the experiment (example: right after the trigger event, before any messages send).
- Add a Random Cohorts block at that point in the workflow.
- Choose the number of cohorts you need. Start with 2 unless you have enough volume to support more.
- Set the percentage split (example: 50% Cohort A, 50% Cohort B). For holdouts, consider 90% treatment, 10% holdout.
- Name each branch clearly based on what changes (example: “Email only” vs “Email + SMS” or “10% off” vs “Free shipping”).
- Build the branch logic so the only difference between cohorts is the variable you are testing (keep timing, suppression rules, and eligibility consistent).
- Add a purchase-based goal or conversion criteria that matches your revenue outcome (example: “Order completed within 72 hours”).
- QA with internal test profiles to confirm cohort assignment and message timing behave as expected.
- Launch, then monitor cohort volumes daily for the first week to ensure the split is landing as intended.
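For the daily volume check in the last step, a simple deviation report is enough to catch a skewed split early. A sketch, assuming you can export or eyeball per-cohort counts (the cohort names and numbers below are hypothetical):

```python
def split_deviation(observed, target_pct):
    """Percentage-point deviation of each cohort's observed share from its target.

    observed: dict of cohort name -> entry count
    target_pct: dict of cohort name -> intended percentage
    """
    total = sum(observed.values())
    return {cohort: round(100 * n / total - target_pct[cohort], 2)
            for cohort, n in observed.items()}

# Example: first-day volumes against a 50/50 target
report = split_deviation({"Cohort A": 512, "Cohort B": 488},
                         {"Cohort A": 50, "Cohort B": 50})
print(report)  # {'Cohort A': 1.2, 'Cohort B': -1.2}
```

Deviations of a point or two on early, small volumes are normal; a persistent large skew usually means an eligibility filter is sitting after the split instead of before it.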
When Should You Use This Feature
Random Cohorts in Customer.io is most valuable when you need a trustworthy answer about what is actually driving incremental revenue, not just engagement.
Use it for:
- Cart and checkout recovery experiments: Test “email only” vs “email then SMS” vs “email then paid social retargeting sync,” while keeping the audience identical.
- Offer strategy tests: Compare free shipping vs percent-off vs no offer for second purchase conversion after a first order.
- Creative and positioning tests: “Product discovery” messaging vs “social proof” messaging for browse abandonment.
- Holdouts to prove incrementality: Keep a small cohort that receives no messages (or a lighter touch) to measure true lift and avoid over-crediting automation.
- Timing tests: Send the same message at 30 minutes vs 4 hours after abandonment to find the best revenue-per-recipient window.
Operational Considerations
Random Cohorts in Customer.io works best when your data and orchestration rules are tight; otherwise you will "test" noise.
- Eligibility rules first: Filter out shoppers who already purchased, are suppressed, or are in a conflicting promo flow before the cohort split. If you do it after, you can skew the cohort distribution.
- One variable per test: If Cohort B changes both offer and channel, you will not know what caused the lift.
- Volume and duration: If your store only has 200 abandonments per week, a 4-way split will crawl. Use fewer cohorts, or run longer.
- Cross-channel collisions: Coordinate with your SMS tool, paid retargeting, and onsite promo logic so a holdout is truly a holdout.
- Consistent attribution window: Define a fixed conversion window (example: 72 hours) and stick to it across cohorts so results are comparable.
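To put a number on the volume-and-duration point above, you can estimate the sample size needed per cohort before launching. A rough sketch using the standard two-proportion approximation at 95% confidence and 80% power (the baseline rate, lift, and weekly volume below are hypothetical):

```python
from math import ceil

def sample_size_per_cohort(baseline_rate, min_detectable_lift,
                           alpha_z=1.96, power_z=0.84):
    """Approximate shoppers needed per cohort to detect an absolute lift
    in conversion rate (two-proportion z-test, 95% confidence, 80% power)."""
    p = baseline_rate
    n = 2 * (alpha_z + power_z) ** 2 * p * (1 - p) / min_detectable_lift ** 2
    return ceil(n)

# Example: 8% baseline recovery rate, want to detect a 2-point absolute lift
n = sample_size_per_cohort(0.08, 0.02)
print(n)  # 2886 shoppers per cohort

# At 200 abandonments per week on a 50/50 split (100 per cohort per week),
# that is roughly 29 weeks -- which is why a 4-way split would crawl.
```

The takeaway: smaller lifts and lower baseline rates inflate the required sample dramatically, so pick the fewest cohorts your question allows.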
Implementation Checklist
Random Cohorts in Customer.io is straightforward to launch when you confirm these basics before turning it on.
- Trigger event and eligibility filters are finalized (example: checkout started, no purchase since event).
- Cohort split percentages match your testing plan (example: 90/10 holdout or 50/50 A/B).
- Each cohort branch is labeled with the single variable being tested.
- Suppression rules are identical across cohorts (unless suppression is the variable).
- Conversion goal is purchase-based and has a defined time window.
- Creative and offers are approved and consistent with your promo calendar.
- QA confirms cohort assignment, message sends, and exit conditions.
- Reporting plan is documented (what metric wins, when you will decide, what happens next).
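The "defined time window" item in the checklist is easy to state precisely. A small sketch of what a fixed 72-hour attribution window means when comparing cohorts (the timestamps are hypothetical):

```python
from datetime import datetime, timedelta

ATTRIBUTION_WINDOW = timedelta(hours=72)

def converted_within_window(event_time, order_times, window=ATTRIBUTION_WINDOW):
    """True if any order landed inside the attribution window after the trigger event."""
    return any(event_time <= t <= event_time + window for t in order_times)

# Example: checkout started Monday 09:00, order placed Wednesday 20:00 (59h later)
checkout = datetime(2024, 3, 4, 9, 0)
orders = [datetime(2024, 3, 6, 20, 0)]
print(converted_within_window(checkout, orders))  # True

# An order 4 days later falls outside the window and does not count
print(converted_within_window(checkout, [datetime(2024, 3, 8, 10, 0)]))  # False
```

Whatever window you choose, apply the same one to every cohort; a holdout measured on a longer window than the treatment will look artificially strong.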
Expert Implementation Tips
Random Cohorts in Customer.io gets powerful when you use it to answer the questions that actually change your program roadmap.
- In retention programs we’ve implemented for D2C brands, the highest leverage use is an always-on holdout for key flows (checkout recovery, post-purchase cross-sell). A small, persistent holdout prevents you from scaling “busy work” automations that do not create lift.
- Run your first test as “channel mix” before “creative.” If you do not know whether SMS adds incremental revenue beyond email, optimizing copy is polishing the wrong surface.
- Use uneven splits to protect revenue while you learn. For checkout recovery, 80/20 or 90/10 is often enough to detect direction without risking a big short-term dip.
- Keep a shared naming convention: Flow name + test name + cohort label (example: “Checkout Abandon v3, SMS Lift, Holdout”). It saves hours when you review results later.
Common Mistakes to Avoid
Random Cohorts in Customer.io can produce misleading results when execution details are sloppy.
- Splitting too late in the flow: If messages or filters happen before the split, you can bias who reaches each cohort.
- Changing multiple things at once: Offer plus timing plus channel changes make the test unreadable.
- Ignoring inventory and margin realities: Testing a steep discount on a low-margin hero SKU can “win” revenue but lose profit.
- Not controlling other promos: If your sitewide popup gives everyone 10% off, your “no offer” cohort is not truly no offer.
- Calling a winner too early: Weekend behavior, payday spikes, and campaign blasts can distort short windows. Set a minimum sample size or minimum run length.
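One guard against calling a winner too early is to check whether the observed gap between cohorts is larger than chance would produce. A sketch of a standard two-proportion z-test (the cohort counts below are hypothetical):

```python
from math import sqrt, erf

def two_proportion_z(conversions_a, n_a, conversions_b, n_b):
    """Z-statistic and two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Example: Cohort A recovers 120 of 1,000 carts, Cohort B recovers 95 of 1,000
z, p = two_proportion_z(120, 1000, 95, 1000)
print(round(z, 2), round(p, 3))
```

In this example the p-value comes out above 0.05: A looks ahead, but the evidence is not yet conclusive, which is exactly the situation where a premature call goes wrong. Pair the test with your pre-set minimum run length rather than stopping the moment it dips under a threshold.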
Summary
Random Cohorts is the cleanest way to run A/B tests and holdouts inside journeys, so you can prove what actually drives incremental orders. Use it when you want confident decisions on channel mix, offers, and timing in Customer.io.
Implement with Propel
If you want Random Cohorts set up with proper holdouts, reporting, and promo-safe orchestration in Customer.io, we can help. Book a strategy call.