Monday Mornings with Madison

The Power of A-B Testing – Part 1

WHAT, WHEN AND HOW TO TEST

Imagine this scenario.  The sales manager and production manager are working with marketing to create a promotional email for the company.  After deciding on a message, they discuss what time of day to deploy the eblast.  Sam thinks it should be sent at the beginning of the workday, around 9am, as usual.  Mike disagrees and thinks it should be sent at the end of the day, around 9pm.

To make his case, Mike cites a recent study by Experian Marketing Services that analyzed the best time of day to send emails.  The study found that emails sent between 8pm and 11:59pm had the highest unique open rate (21.7%), highest unique click rate (4.2%), and highest transaction rate (0.34%), all considerably higher than at any other time of day.  It was also the window when recipients received the lowest volume of emails.  Sam is unconvinced.  He cites a DEG study that found that the best window for opens ran from around 8am to about 1pm, with a small dip around 11am.  Moreover, the DEG study indicated that, statistically, the worst time for open rates was 8pm.  With such different results and opinions, what should the marketing department do?  If the marketing department is savvy, the answer is to do both.  Say hello to the power of A-B testing.

What is A-B Testing?

Here is a basic definition.  A-B testing is a simple randomized experiment with two variants, A and B, which serve as the control and treatment in the experiment.  It is also known as a randomized controlled experiment or split testing.  As the name implies, two versions (A and B) are compared that are identical except for one variation that might affect the behavior of the person (customer, client, or patient).  Version A might be the currently used version (the control), while Version B is modified in some respect (the treatment).
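To make the idea concrete, here is a minimal sketch in Python of how a recipient list might be split at random into a control group (A) and a treatment group (B).  The function name, seed value, and email addresses are hypothetical and used only for illustration; any real email platform will have its own way of doing this.

    import random

    def split_ab(recipients, seed=42):
        # Shuffle a copy of the list, then split it down the middle so each
        # recipient lands in group A (control) or group B (treatment) at random.
        rng = random.Random(seed)      # fixed seed keeps the split reproducible
        shuffled = list(recipients)
        rng.shuffle(shuffled)
        midpoint = len(shuffled) // 2
        return shuffled[:midpoint], shuffled[midpoint:]

    group_a, group_b = split_ab(
        ["ann@example.com", "bob@example.com", "carol@example.com", "dan@example.com"]
    )
    print(len(group_a), "recipients in A,", len(group_b), "in B")

The random split is the whole point: it ensures the two groups are alike in every respect except the one variation being tested.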

Why A-B Test?

While the scenario described in the introduction is fictional, the studies cited are real.  Both were done by actual organizations and produced very different results.  That raises the question: what time should companies deploy eblasts?  But it also poses a bigger and more important question.  How does a company’s leadership decide on the best course of action in the myriad situations that arise when even the empirical data is split on the best path?  The answer to both questions is that it depends on many factors and will therefore vary from company to company.  What, then, is a company to do?

French philosopher Jean Buridan is credited with the story of a donkey that stood between two luscious stacks of hay but could not decide which to eat.  Each looked appetizing for different reasons.  Torn, the donkey stood there agonizing over the decision and, in the end, starved to death.  Business leaders face the same dilemma when presented with multiple advertisements, web pages, email blasts, subject lines, or paths to conversion.  Which is better?  Can the current option be improved?  Marketers are often asked for the best course of action for a campaign, and the best they can do is offer an opinion based on experience.  With A-B testing, it is no longer a matter of opinion; it is a matter of fact.  A-B testing can eliminate the guesswork.

Chances are that most businesses have experienced disagreements about the best course of action:  option A or option B.  In those situations, the leadership at a business should not assume that ‘they know best.’  Instead, they can try different options or processes to determine the most effective approach. Rather than assume that what one individual does is what all people would do, it’s better to allow the facts to speak for themselves.

So how does A-B testing work?  Take our fictional scenario.  The exact same eblast might be deployed to half of the list (Group A) at 9am, while the other half (Group B) receives it at 9pm.  The company would then compare the open rate for Group A versus Group B to see when the email was opened and read most.  It might also look at how many people in each group responded to the email’s call-to-action.  It might find, for example, that while more people opened the eblast at 9am, they were significantly less likely to respond to the call-to-action.  On the other hand, Group B might have a lower open rate but a substantially higher response to the call-to-action.  Since the end goal is to get people to act, the company might conclude that for eblasts that are not just informative but carry a specific call-to-action, deploying at night is more effective.  Of course, many other variables might also have affected the email’s open rate, including the subject line, sender’s name, day of the week, time of year, etc.  Therefore, the more A-B testing and tracking a company does on any given question, the more likely it is to determine what is truly the best practice for that company and its specific audience.
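As a rough illustration of the tracking step, the Python sketch below tallies the open rate and call-to-action response rate for each group.  The field names and records are made up for illustration; real data would come from the email platform’s reporting.  The send-time question then reduces to comparing these two pairs of numbers.

    # Each record notes which group a recipient was in and what they did.
    # Field names here are hypothetical, for illustration only.
    results = [
        {"group": "A", "opened": True,  "clicked_cta": False},
        {"group": "A", "opened": True,  "clicked_cta": False},
        {"group": "B", "opened": False, "clicked_cta": False},
        {"group": "B", "opened": True,  "clicked_cta": True},
    ]

    def rates(records, group):
        # Open rate and call-to-action rate for one group.
        subset = [r for r in records if r["group"] == group]
        n = len(subset)
        open_rate = sum(r["opened"] for r in subset) / n
        cta_rate = sum(r["clicked_cta"] for r in subset) / n
        return open_rate, cta_rate

    for g in ("A", "B"):
        open_rate, cta_rate = rates(results, g)
        print(f"Group {g}: open rate {open_rate:.0%}, CTA rate {cta_rate:.0%}")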

When to A-B Test?

A-B testing is a good tool for answering any question where there is disagreement or uncertainty about the answer and where the issue could have a big impact on results.  For example, when it comes to email campaigns, a company can test questions such as:

  • What day of the week gets a better open rate?
  • What time of day works best for a particular promotion such as a sale or coupon?
  • What type of promotion generates the most sales?
  • What day or time works best for disseminating an informative newsletter?
  • What subject line style works best? Hard sell? Soft sell?
  • Should a subject line always include the company’s name? Or should it hint at what’s in the message?  Or should it be a teaser line?
  • Is it better to use the company’s name in the “From” line, or a person’s name as the Sender?
  • Does time of day affect overall click rate?
  • When is an offer most effective: when deployed before, during, or after a holiday?

Although it is very useful for making decisions related to email campaigns, A-B testing is not limited to email.  It can be used to test the effectiveness of digital ads, web pages, online tools, web offers, client preferences, and much more.  Given that it is such a powerful tool for assessing and optimizing sales and marketing efforts, it is a wonder that companies don’t do more of it.  Why is that?  Perhaps because A-B testing requires patience, tracking, and time for data analysis.
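That data-analysis step usually comes down to asking whether an observed difference between Group A and Group B is real or just chance.  One common approach, sketched below in Python, is a two-proportion z-test on the open rates; the figures and 1,000-recipient group sizes are purely illustrative, not from the studies cited above.

    import math

    def two_proportion_z(successes_a, n_a, successes_b, n_b):
        # z-statistic for the difference between two proportions (e.g., open rates).
        p_a, p_b = successes_a / n_a, successes_b / n_b
        pooled = (successes_a + successes_b) / (n_a + n_b)  # pooled rate if there were no real difference
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        return (p_b - p_a) / se

    # Made-up numbers: 1,000 emails per group, 190 opens at 9am vs 217 opens at 9pm.
    z = two_proportion_z(190, 1000, 217, 1000)
    print(f"z = {z:.2f}")  # |z| above ~1.96 suggests the gap is unlikely to be chance (95% level)

With these made-up numbers the result is roughly z = 1.5, which would not clear the conventional 95% bar, a reminder that small observed gaps often need larger samples before they can be trusted.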

Next week, we will look at the best ways to do A-B testing and debunk the myths and mistakes related to A-B testing.  Don’t miss it.

Quote of the Week

“Never stop testing, and your advertising will never stop improving.”
David Ogilvy


© 2014, Written by Keren Peters-Atkinson, CMO, Madison Commercial Real Estate Services. All rights reserved.
