
The Importance of A/B Testing your Marketing Emails

Far too many digital marketers overlook the importance of A/B testing their marketing emails. In some cases, they may feel overwhelmed simply by creating the email campaign in the first place. In others, their focus is on different aspects of marketing, like social media or content creation. There’s so much that goes into getting an outbound email out the door that it can be easy to cut corners when it comes to A/B testing. But A/B tests, and the insights they yield, are just too important to omit. If you commit a little time to conducting your own tests, you will deepen your understanding of what your prospects find valuable. Those observations will allow you to communicate with your prospects in a more customized and effective way.

Generally, when we talk about A/B testing marketing emails, it’s to improve one of two things: open rates or click-through rates. These tests must be conducted separately to ensure that success (or the lack thereof) is attributed to the right variable. Let’s explore how to set up your A/B tests, decide which testing variables to start with, and determine how to measure the tests’ success.

How to conduct an email A/B test

Many of the top marketing automation tools provide a built-in interface to facilitate A/B testing your emails. In most cases, you’ll be able to set up your test by following these four steps:

  1. Create and select the message versions you’d like to test.
  2. Choose what percentage of the list you’d like to use for testing.
  3. Schedule the test run.
  4. Schedule the final run.

Marketing automation software, however, is not a requirement for those who wish to improve the effectiveness of their marketing emails through testing. You can also test manually by following these steps:

  1. First, you’ll need to randomly segment part of your list into testing groups (more on sizing these groups later; a simple way to do the random split is sketched after these steps).
  2. Next, send out a message to each group, and evaluate your results.
  3. Finally, select a winning version to send to the rest of the list.
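If you’re running the test manually, the random split in step 1 can be done with a few lines of code. Here’s a minimal sketch in Python, assuming your list is a simple list of email addresses and that you’ve already chosen a test group size (see the table below); the function and the loader in the example are hypothetical names, not part of any particular tool.

```python
import random

def split_for_ab_test(emails, test_size_per_version, seed=None):
    """Randomly split an email list into version A, version B, and the remainder.

    `emails` is assumed to be a list of unique addresses;
    `test_size_per_version` is the number of recipients in each test version.
    """
    rng = random.Random(seed)   # a fixed seed makes the split reproducible
    shuffled = list(emails)     # copy so the original list is left untouched
    rng.shuffle(shuffled)

    version_a = shuffled[:test_size_per_version]
    version_b = shuffled[test_size_per_version:2 * test_size_per_version]
    remainder = shuffled[2 * test_size_per_version:]  # receives the winning version later
    return version_a, version_b, remainder

# Example: a 2,000-name list tested with 320 names per version
# emails = load_email_list("list.csv")   # hypothetical loader -- replace with your own
# version_a, version_b, remainder = split_for_ab_test(emails, 320, seed=42)
```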

What email marketing variables should you test?

When you begin designing your email, decide which variable you’d like to test first. Remember to test just one variable at a time so you can accurately attribute the email’s success.

If you’re testing to improve your email open rates, you’ll need to focus on variables that users see before they open an email. This means testing:

  • The day of the week the email is sent
  • The time of day the email is sent
  • The week of the month the email is sent
  • The sender
  • The subject line
  • The preview text
  • Whether or not there’s an attachment

The other common testing objective is to improve email click-through rates. As audiences are subjected to more marketing emails each day, getting users to open an email is only half the battle, and the job of a discerning email marketer keeps getting harder. Within the message itself, you need to establish trust with your audience, inspire curiosity, and ignite interest. An increase in email click-through rates is usually an indicator that you’ve successfully built that trust and interest. As you test, consider some of these variables:

  • Formatting: do branded or unbranded emails perform better?
  • Button color: which color button will draw the most clicks?
  • Button vs. text link vs. URL: do your readers prefer to click on a button, a text link, or even an actual URL?
  • Linked text: should you link the name of your blog post or more action-oriented text like “read more”?
  • Imagery: will adding images to your email make it more engaging or more distracting?

How to determine the size of your A/B test lists

There are a couple of ways to determine the sample size for your test lists: you can use this Sample Size Calculator tool (which is very complicated), or you can use the table we put together below. The table assumes a 5% margin of error and a 95% confidence level, meaning you can be 95% confident that the results of your winning test email will be representative of the results of your final run.

Total Email List Size | Test List Size (each A/B version) | Total Test List Size (both versions) | Each Version as % of List | Both Versions Combined as % of List
--- | --- | --- | --- | ---
1,000 | 274 | 548 | 27% | 54%
2,000 | 320 | 640 | 16% | 32%
3,000 | 339 | 678 | 11% | 22%

To use the table, determine the approximate number of emails that will be delivered from your distribution list and match that number to its corresponding row. Follow that row to determine the size of each A/B test list. For example, if you have 2,000 names on your list, version A of your test should contain 320 names, and version B should also contain 320 names.
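If you’d rather compute the numbers yourself, the table can be approximated with the standard sample size formula for a proportion, adjusted for a finite list. The Python sketch below assumes a 95% confidence level, a 5% margin of error, and the most conservative response rate of 50%; its results land within a few names of the table above, whose exact figures depend on the calculator used to build it.

```python
import math

def test_list_size(total_list_size, z=1.96, margin_of_error=0.05, p=0.5):
    """Sample size for each version of the A/B test, given a finite email list.

    Uses the standard sample size formula for estimating a proportion
    (p = 0.5 is the most conservative choice) with a finite population correction.
    """
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)   # infinite-population size
    n = n0 / (1 + (n0 - 1) / total_list_size)              # finite population correction
    return math.ceil(n)

for list_size in (1_000, 2_000, 3_000):
    per_version = test_list_size(list_size)
    print(list_size, per_version, 2 * per_version)
# Prints roughly 278/556, 323/646, and 341/682 -- within a few names of the table above.
```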

Schedule your test run and final run

Now that you have two versions of your message ready to go and your test list sizes set, it’s time to schedule the test run and the final run. Allow the test versions to run for 24 hours. This is typically enough time for recipients to receive the message and decide whether to engage with it. After the 24-hour period, you’ll have enough data to conclude which version of your message (if any) outperformed the other. Some marketing automation tools will automatically send out the winning version of the message as the final run. If you’re not using marketing automation, schedule the winning version to be sent to the remainder of your list after the test run is complete.

Interpreting the results of your A/B tests

We’ve put together a digital worksheet that can help you organize and track your A/B testing variables and outcomes. Use this tool and the accompanying Statistical Significance Calculator to track the success of your A/B tests over time. Simply input the number of emails sent in each test version, then input the number of opens or clicks, depending on your test. The calculator will compute whether or not your test is statistically significant at the 90%, 95%, and 99% confidence levels.
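If you prefer to check significance in code rather than in the worksheet, a two-proportion z-test is one common way to do it. The sketch below is a generic Python example, not necessarily the exact method the Statistical Significance Calculator uses, and the counts in the usage example are made up.

```python
import math

def ab_test_significance(sent_a, conversions_a, sent_b, conversions_b):
    """Two-sided two-proportion z-test for an email A/B test.

    `conversions` can be opens or clicks, depending on which metric you're testing.
    Returns the z statistic and the two-sided p-value.
    """
    rate_a = conversions_a / sent_a
    rate_b = conversions_b / sent_b
    # Pooled rate under the null hypothesis that both versions perform the same
    pooled = (conversions_a + conversions_b) / (sent_a + sent_b)
    std_err = math.sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (rate_a - rate_b) / std_err
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Made-up example: version A gets 40 opens out of 320 sends, version B gets 72 out of 320
z, p = ab_test_significance(320, 40, 320, 72)
for alpha in (0.10, 0.05, 0.01):   # 90%, 95%, and 99% confidence
    print(f"Significant at {1 - alpha:.0%} confidence: {p < alpha}")
```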

Be aware that not all of your tests will provide clear, actionable results. Some tests may show that users are indifferent to the variations you’re testing. While it’s great to find statistically significant results, it’s important not to lose motivation if some of your tests don’t return them right away. Keep testing, keep learning, and you’ll uncover actionable insights that help you reach your audience in new ways.
