Run A/B testing from a customer journey

You can use A/B testing to find out which of two similar email message designs is likely to be most successful with your target audience. When the test ends, the winning design is automatically sent to the remaining audience in your customer journey.

The steps involved in designing the A/B test email message were covered in a previous unit. To start the A/B test, you need to set up a customer journey and define the test details.

Within the customer journey, you add an email tile and select the email message that has the A/B test. When a message has an A/B test designed for it, the tile within the journey shows A and B icons in its corner. These start out gray because you haven't yet set up the test for this tile (they'll turn blue after you enable the test). After you add the email, you need to define the test details, including:

  • Choose A/B test: Your selected email design must have at least one test set up that hasn't been used, but it might have more. Select the name of the test that you want to run on this tile. You can run only one test at a time.

  • A/B distribution percentage: This defines how many contacts (as a percentage of the total number of contacts in the target segment) you'd like to include in the test. You can choose 10%, 20%, 30%, 40%, or 50%. For example, if you choose 10%, that means 10% of your segment will receive version A and 10% will receive version B. All test contacts, and the versions each receives, are selected randomly.

  • Winning metric: This determines how the winning design is chosen: by click-through rate (how often a recipient clicked a link in the message) or by open rate (how often a recipient opened the message). In each case, the winner is the version that produced the most clicks or opens as a proportion of the total number of times that version was sent (the sketch after this list illustrates the comparison).

  • Test duration: These settings establish how long the test should run. For best results, we recommend running each test for at least 24 hours, or longer if possible, especially if you're targeting a worldwide audience (to compensate for time zones). At the end of this time, the system analyzes the results and sends the winning design to the remaining contacts in the segment. Contacts who received the losing design won't be re-sent the winning one.
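
As a rough illustration of the proportion-based comparison described above, here's a minimal Python sketch using made-up counts. The `rate` helper and all of the numbers are hypothetical and aren't part of the product; they only show how the two versions are compared on the selected metric.

```python
def rate(events: int, sent: int) -> float:
    """Return events as a proportion of the messages sent for one version."""
    return events / sent if sent else 0.0

# Hypothetical test results: each version was sent to 100 test contacts.
sent_a, sent_b = 100, 100
clicks_a, clicks_b = 12, 18   # recipients who clicked a link in the message
opens_a, opens_b = 40, 35     # recipients who opened the message

winning_metric = "click-through rate"  # or "open rate"

if winning_metric == "click-through rate":
    rate_a, rate_b = rate(clicks_a, sent_a), rate(clicks_b, sent_b)
else:
    rate_a, rate_b = rate(opens_a, sent_a), rate(opens_b, sent_b)

winner = "A" if rate_a >= rate_b else "B"
print(f"A: {rate_a:.0%}, B: {rate_b:.0%} -> winner: version {winner}")
# With these numbers: A: 12%, B: 18% -> winner: version B
```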

Once you go live with the journey, the test begins. It starts by sending version A to one part of your segment and version B to another part. It waits for the duration you chose, analyzes the interaction results, and then chooses a winner based on your selected metric. The journey then automatically sends the winning design to the rest of the segment.

Screenshot showing an example of a saved customer journey.
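
To make the split concrete, the following minimal Python sketch (not the service's actual logic) randomly picks the test contacts from a hypothetical 500-contact segment at a 20% distribution percentage and holds the rest back for the winning design:

```python
import random

# Hypothetical segment of contact IDs and a 20% A/B distribution percentage.
segment = [f"contact-{i}" for i in range(1, 501)]   # 500 contacts
distribution_pct = 20

# Each version goes to distribution_pct% of the segment, chosen at random.
group_size = len(segment) * distribution_pct // 100          # 100 contacts
test_contacts = random.sample(segment, 2 * group_size)
version_a, version_b = test_contacts[:group_size], test_contacts[group_size:]

# Everyone else is held back until the winner is chosen at the end of the test.
held_back = set(segment) - set(test_contacts)

print(len(version_a), len(version_b), len(held_back))        # 100 100 300
```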

Important

To produce reliable test results, you should always send each version (A and B) to a minimum of 100 recipients before allowing the system to choose a winner. A typical recommended setup would use a 1,000-member segment (or larger), with a test distribution that sends version A to 10% of the segment, version B to another 10%, and then sends the winning design to the remaining 80% (the sketch after this note shows the arithmetic).

It's possible to run an A/B test with as few as one or a few recipients for each version, but doing so often results in an uneven or nonrandom distribution of versions and unreliable final results. We recommend doing this only while experimenting with the feature.
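
If you want to sanity-check a planned setup against that guideline, a quick calculation along these lines works. The `test_group_size` helper is hypothetical and only shows the arithmetic behind the recommendation:

```python
def test_group_size(segment_size: int, distribution_pct: int) -> int:
    """Contacts that each version (A or B) would be sent to."""
    return segment_size * distribution_pct // 100

# Recommended setup from the note above: 1,000 contacts at 10% per version.
print(test_group_size(1000, 10))   # 100 per version; the remaining 800 get the winner

# A smaller segment at the same percentage falls short of the 100-recipient guideline.
print(test_group_size(400, 10))    # 40 per version -- too few for reliable results
```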

For more information, see Prepare to execute your test from a customer journey.