
Achieve Better Email Marketing Goals with A/B testing

You may be reading this because you’ve tried your hand at A/B testing, and, well, nothing really happened. Email A didn’t outperform Email B in any noteworthy way; the reverse wasn’t true either.

It can get frustrating, but don’t make the mistake of giving up. A/B testing is a process that takes a while to get right, but when implemented properly it can produce an increase in your email open and click-through rates.

A/B testing is the process of sending one variation of your email to one subset of your subscribers and a different variation to another subset, then comparing the results to see which one performed better.

Results may vary from one brand or industry to another. For example, short copy might earn a better click-to-open rate in eCommerce but perform poorly for infomercial products.

Instead of relying on generic research data, test with your audience and product to draw the correct conclusions.

Three steps to A/B testing success

Step 1: Select Enough Subscribers to Yield Meaningful Results

Critical Impact’s A/B Test tool will automatically create two randomly selected test lists. But what percentage of your list should you select for the A/B Test? You should make sure that you have enough subscribers in your A and B lists to yield meaningful results.

If you prefer hard numbers to “winging it,” then use a free A/B split test significance calculator, such as this one, to determine how large your A and B pools need to be to produce statistically significant results.
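If you’re curious what a calculator like that is doing under the hood, here is a rough Python sketch of the standard sample-size formula for comparing two proportions. The 20% baseline open rate and 23% target below are illustrative numbers only, and this is a generic approximation rather than anything specific to Critical Impact’s tooling.

from math import sqrt, ceil
from statistics import NormalDist

def sample_size_per_variant(p_baseline, p_expected, alpha=0.05, power=0.80):
    """Subscribers needed in each of the A and B lists (two-sided test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # significance threshold
    z_beta = NormalDist().inv_cdf(power)            # statistical power
    p_bar = (p_baseline + p_expected) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p_baseline * (1 - p_baseline)
                                 + p_expected * (1 - p_expected))) ** 2
    return ceil(numerator / (p_baseline - p_expected) ** 2)

# Example: detecting a lift from a 20% open rate to a 23% open rate.
print(sample_size_per_variant(0.20, 0.23))  # about 2,943 subscribers per list

In other words, to trust a three-point lift in open rate in this example, you would want roughly 3,000 subscribers in each of your A and B lists.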

Step 2: Test Time

Now that you’ve selected the size for an A and B list you can work with, it’s time to find out what truly makes your subscribers tick.

From the time-of-delivery to the all-important subject line, there are several factors you can tinker with when defining your A/B split test parameters. You can try different presentation styles for your call-to-action, different button colors, and different images to feature.

You can test longer emails vs. shorter ones, different writing styles, and different headlines. Wow! Where should you begin?

If you want the best chance at intelligible, actionable results, then you can’t just randomly pick a test parameter. You should instead form a hypothesis: “My subscribers are more likely to respond favorably to my emails if…” Your hypotheses will be more on-point if you understand how your subscribers interact with your emails, so observe real subscribers interacting with your email campaigns to help you form the best possible hypotheses.

Step 3: Select a Winner!

One of the great things about the Critical Impact email marketing platform is that it can easily be configured to take instant and automatic action after running A/B tests. As soon as the A/B test data shows a preference for email A or email B, the “winning” email is used for the remainder of the campaign.

These “A/B instant impact” campaigns work by allowing the user to add a “remainder” list to their A and B lists. The A and B campaigns are dispatched first, then the “winning” campaign is identified and sent to the “remainder” list.
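Conceptually, that split looks something like the Python sketch below. The platform handles this for you automatically, so this is only an illustration; the 10% test size is an arbitrary example, not a recommendation.

import random

def split_for_ab_test(subscribers, test_fraction=0.10, seed=None):
    """Return (list_a, list_b, remainder) as a random, non-overlapping split."""
    shuffled = subscribers[:]                  # work on a copy
    random.Random(seed).shuffle(shuffled)      # randomize the order
    test_size = int(len(shuffled) * test_fraction)
    list_a = shuffled[:test_size]
    list_b = shuffled[test_size:2 * test_size]
    remainder = shuffled[2 * test_size:]       # later receives the winning email
    return list_a, list_b, remainder

# Example: a 10,000-subscriber list yields 1,000-subscriber A and B pools
# and an 8,000-subscriber remainder.
a, b, rest = split_for_ab_test([f"subscriber_{i}@example.com" for i in range(10_000)])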

You have two options for selecting the “winning” email: the one with the higher open rate or the one with the higher click rate. Choose the option based on the type of test you are performing.

Select the winner by open rate when testing the subject line or from name, since these are the variables people see before choosing to open the message. If you’re comparing two message bodies, it’s better to select the winner based on click rates. If you’re running a Send Time comparison, you could use either metric, depending on which is more important to you.
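Put as a simple sketch, the winner-selection rule looks something like this. The open and click counts are made up for illustration; the platform applies the equivalent comparison automatically based on the option you choose.

def pick_winner(sent_a, metric_a, sent_b, metric_b):
    """Return 'A' or 'B' based on whichever variant has the higher rate."""
    rate_a = metric_a / sent_a
    rate_b = metric_b / sent_b
    return "A" if rate_a >= rate_b else "B"

# Subject line or from name test: judge on opens.
print(pick_winner(sent_a=1000, metric_a=220, sent_b=1000, metric_b=245))  # B

# Message comparison: judge on clicks instead.
print(pick_winner(sent_a=1000, metric_a=48, sent_b=1000, metric_b=39))    # A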

With this knowledge in hand, you should be ready to increase your click or open rates using Critical Impact’s A/B Test technology.

Set up the parameters

Test one thing at a time

This might sound obvious, but testing too many things at once leaves you unable to determine how each variable affected your results. Start with the practices that should get you the best results, then test one variable at a time to find the best option for each. When you have a result you are satisfied with, move on to testing the next variable.

This way you should be able to hone your message and format for a successful campaign. You can run several tests at once if your list is large enough, or spread them across different campaigns; the copy and subject line will differ from campaign to campaign, but the format, tone, and direction of the content should stay similar.

Let’s look at some items to test. In this case we are going to frame this in terms of a retail sale.

(Don’t forget to test by time of day and day of week as well.)

Subject line

Email Copy

CTA

CTA text ideas

Here are a few CTA text ideas to test:

[A]

Buy Now

Check Out

Add to Cart

[B]

Let’s Shop!

I’m Ready to Go!

Let’s Check Out Now!

Common examples of additional copy include:

Conclusion

Of course, there is more to email A/B testing than we can cover here, but these points will give you a good head start. We don’t want to write a whole course outline in one article, but we hope this one helps.

Until next time, keep those emails rolling!

– Jim

Jim Gibbs, Senior Account Executive at Critical Impact

Jim Gibbs is Critical Impact’s Growth Channel team lead and has been selling and closing for a long time. Jim is known to be able to sell bottled water to fish.
