Expecting instant and spectacular results
Testing your templates, your copy, and your timing can be a great help, but most of the time it won’t hand you the Holy Grail. You have to test both major and minor changes in order to narrow your email down to the best-performing version. And if you only change something small, in most cases the difference won’t be spectacular enough to tell whether it was truly the alteration that made the difference or some other factor.
It might take a while to get statistically significant results, especially if you have a small email list. You may need to repeat your tests several times before you can make a well-founded decision. You don’t have to be a statistics pro, but an A/B split test significance calculator can be quite handy.
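You don’t need to build one yourself, but to demystify what such a calculator does under the hood, here is a minimal sketch in Python of the usual check, a two-proportion z-test on open rates. The function name and all the numbers are made up for illustration:

```python
from math import sqrt, erf

def ab_significance(opens_a, sent_a, opens_b, sent_b):
    """Two-proportion z-test: is the difference in open rates real?"""
    p_a = opens_a / sent_a
    p_b = opens_b / sent_b
    # Pooled open rate under the null hypothesis of no real difference
    p = (opens_a + opens_b) / (sent_a + sent_b)
    se = sqrt(p * (1 - p) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    # Two-tailed p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, p_value

# Hypothetical numbers: each variant sent to 1,000 subscribers
rate_a, rate_b, p = ab_significance(opens_a=220, sent_a=1000,
                                    opens_b=260, sent_b=1000)
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  p-value: {p:.3f}")
# A p-value below 0.05 is the conventional bar for declaring a winner.
```

Notice that with these sample sizes, a 22% vs. 26% open rate is just barely significant (p ≈ 0.036); with a list a tenth of that size, the same gap would be indistinguishable from noise.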
Not testing at the same time
If you have multiple versions of your email, you should test them at the exact same time. The one exception is when you are testing for the ideal time to send your mail: in that case, use the exact same template, copy, subject line, and so on, so that the one and only difference is the timing.
If you don’t test the different versions simultaneously, you risk getting inconsistent results, because recipients won’t react to your emails the same way at different times. It definitely matters whether you send your emails in the morning or the afternoon, on the weekend or in the middle of the working week. Be specific about the goal of your test, and follow the general rule of thumb: change only a single element at a time.
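In practice, testing simultaneously just means randomly splitting your list and sending every variant in the same batch. A minimal sketch, assuming a hypothetical subscriber list (the send calls at the end are placeholders, not a real API):

```python
import random

def split_test(subscribers, seed=42):
    """Randomly assign each subscriber to variant A or B."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = subscribers[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical list; in reality this comes from your email provider
subscribers = [f"user{i}@example.com" for i in range(1000)]
group_a, group_b = split_test(subscribers)

# Send both variants in the same batch so timing can't skew the result:
# send_campaign("variant_a.html", group_a)   # hypothetical send call
# send_campaign("variant_b.html", group_b)
```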
Running too many tests
It’s great that you want to improve your email campaigns, and A/B tests are definitely a good way to do that, but you shouldn’t overuse testing. Too many tests mean an overwhelming amount of data, which can be confusing and impossible to analyze in its entirety.
You absolutely can and should test important changes, but keep the number of simultaneously running tests low so you can interpret the results properly.
Defining the wrong KPIs
Before you even think about starting a test, clearly define the goal of your test and how you are going to measure the results. A small difference between templates can increase your click-through rate, yet you may still end up with a lower conversion (purchase, order, sign-up) rate. Or you may be staring at decreasing open rates while missing the fact that more people are clicking the links in your mail.
In most cases the final conversion won’t happen inside the email, so you have to track your website properly to be able to make the right decisions.
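One common way to connect email clicks to on-site conversions is to tag every link with UTM parameters, so your analytics tool can attribute each purchase to the right variant. A minimal sketch; the campaign values here are made-up examples:

```python
from urllib.parse import urlencode, urlparse, urlunparse, parse_qsl

def tag_link(url, variant):
    """Append UTM parameters so analytics can attribute conversions."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update({
        "utm_source": "newsletter",   # hypothetical example values
        "utm_medium": "email",
        "utm_campaign": "spring_sale",
        "utm_content": variant,       # this is what separates A from B
    })
    return urlunparse(parts._replace(query=urlencode(query)))

print(tag_link("https://example.com/shop", "variant_a"))
# https://example.com/shop?utm_source=newsletter&utm_medium=email&...
```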
Not testing your list right
If you make extreme changes to your email template, test them on a smaller number of subscribers first – that way the damage to your unsubscribe rate will be less drastic if it goes wrong. The same applies to offers where you want to get the most conversions in a limited time: test on only a (random) portion of your list (or segment) so you can refine your template and copy before the full send.
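Drawing that random portion is simple; here is a minimal sketch that samples 10% of a hypothetical list for the trial send:

```python
import random

def sample_portion(subscribers, fraction=0.10, seed=7):
    """Pick a random fraction of the list for a trial send."""
    rng = random.Random(seed)
    k = max(1, int(len(subscribers) * fraction))
    return rng.sample(subscribers, k)

subscribers = [f"user{i}@example.com" for i in range(5000)]  # placeholder list
trial_group = sample_portion(subscribers)
print(len(trial_group))  # 500 of 5000; the refined version goes to the rest
```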
Obviously there are tons of other tips we could list here, but maybe a definitive guide to email A/B testing would make the most sense. What do you think? Would you like to read about the depths of A/B testing?