Marketing is both an art and a science. In the era of digital marketing, creativity matters as much as the ability to analyze customer and conversion data and run data-driven campaigns. While designing any campaign element, be it a landing page, an email template, or a call-to-action button, it is natural to make choices driven by personal preference, but decisions based on assumptions are detrimental to marketing success.
The question is: how do marketers answer questions like which placement of a CTA will work best, or which heading will resonate more with the audience and lead to the highest click-through rate and conversions? This is where A/B testing tools come to the rescue.
An A/B test, or split test, is an experiment in which two variations of a digital asset, differing in a single variable, are compared. The unaltered (base) version is called the "Control" and the version with the change the "Challenger". The two versions are then shown to similarly sized audiences over a specific period to determine which one leads to more conversions. This actionable insight enables marketers to keep their content strategy and design aligned with their customers and generate more inbound leads.
HubSpot has a powerful and effective A/B testing tool that lets you set up tests for landing pages, CTAs, and emails. This post presents a checklist of key practices for planning, executing, and concluding an A/B test.
You might want to optimize your campaign by testing different elements, e.g. an emailer that links to a landing page with a content offer, but if you test both the emailer and the landing page at the same time, you will not know which of the two ultimately impacted the conversion rate. Similarly, within a single digital asset, test one variable at a time (the "independent variable") and measure its performance. Do not test the colour of a CTA and its placement at the same time.
Before you set up an A/B test, make sure you have clarity on which key metric or KPI (Key Performance Indicator) you want to measure performance against once the test is launched, whether that is traffic, click rate, open rate, or conversions. Assign a target to this metric, compare the performance of the variations against it, and optimize the variable to get as close to the target as possible.
It is important to choose the right sample size for the A/B test, and equally important to send the variations to audience groups of equal size. HubSpot automatically splits traffic so that each variation receives a random sample of visitors. HubSpot also lets you set the size of your sample group with a slider, so you can run a 50/50 A/B test on a sample of any size.
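HubSpot handles this split for you, but to see what a random 50/50 assignment involves, here is a minimal sketch. The visitor IDs, test name, and helper function are hypothetical; hashing the visitor ID is one common technique that keeps the split roughly even while showing a returning visitor the same variation every time.

```python
import hashlib

def assign_variant(visitor_id: str, test_name: str = "cta-color") -> str:
    """Deterministically assign a visitor to Control ("A") or Challenger ("B").

    Hashing the visitor ID together with the test name means a returning
    visitor always sees the same variation, while a large audience still
    splits roughly 50/50 between the two groups.
    """
    digest = hashlib.md5(f"{test_name}:{visitor_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Over many visitors, the two groups come out roughly equal in size.
counts = {"A": 0, "B": 0}
for i in range(10_000):
    counts[assign_variant(f"visitor-{i}")] += 1
print(counts)
```

The deterministic hash (rather than a coin flip per page view) matters in practice: if the same person saw the Control on one visit and the Challenger on the next, their conversions could not be attributed to either variation.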
Statistical significance is crucial for effective A/B testing but is often ignored. It is important to define a confidence threshold for your key metric: in essence, you decide how significant the results need to be to justify choosing one variation over the other. The higher your confidence level, the more sure you can be about your results.
Unless you are testing the best time to reach your target audience, do not run the two variations at separate times. Both variations should be tested simultaneously for a reliable inference. Also make sure you run the test long enough to obtain a substantial sample size. The required period depends on how much traffic, or how many visits, you receive: high traffic can produce a statistically significant sample quickly, while low traffic means the test must run longer.
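To get a feel for how large that sample needs to be before launching, a common rule of thumb (Lehr's formula, which approximates the sample size for roughly 80% power at a 5% two-sided significance level) can be sketched as follows. The baseline rate and lift used in the example are hypothetical.

```python
import math

def visitors_needed(baseline_rate: float, min_relative_lift: float) -> int:
    """Rough sample size per variation via Lehr's rule of thumb:
    n ~= 16 * p * (1 - p) / delta^2, where delta is the smallest
    absolute difference in rates you want the test to detect.
    """
    delta = baseline_rate * min_relative_lift   # absolute lift to detect
    variance = baseline_rate * (1 - baseline_rate)
    return math.ceil(16 * variance / delta ** 2)

# e.g. a 10% baseline conversion rate, hoping to detect a 20% relative lift
print(visitors_needed(0.10, 0.20))  # → 3600 visitors per variation
```

Note how the estimate grows as the lift you want to detect shrinks: halving the detectable lift roughly quadruples the visitors required, which is why low-traffic assets need much longer test periods.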
Once you have identified which variation performs best on the key metric, test whether the result is statistically significant before making any change to the asset. HubSpot offers a free A/B testing kit that includes a significance calculator and a test-tracking template for keeping records of all your A/B tests.
The calculator takes two inputs for each variation: the total number of attempts (e.g. emails sent, visitors, or impressions) and the key metric outcome (e.g. the number of clicks, forms filled, or other conversions). It returns the confidence level for the winning variation, which can be compared against the threshold you chose for statistical significance.
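The arithmetic behind such a calculator can be sketched as a two-proportion z-test; HubSpot's actual calculator may use a different method, and the traffic and click numbers below are made up for illustration.

```python
import math

def confidence_level(attempts_a: int, conversions_a: int,
                     attempts_b: int, conversions_b: int) -> float:
    """Confidence that the better-performing variation truly beats the
    other, via a two-proportion z-test on the conversion rates."""
    rate_a = conversions_a / attempts_a
    rate_b = conversions_b / attempts_b
    # Pooled conversion rate under the null hypothesis of no difference
    pooled = (conversions_a + conversions_b) / (attempts_a + attempts_b)
    std_err = math.sqrt(pooled * (1 - pooled)
                        * (1 / attempts_a + 1 / attempts_b))
    z = abs(rate_a - rate_b) / std_err
    # Standard normal CDF, computed from the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Control: 1000 emails, 90 clicks. Challenger: 1000 emails, 120 clicks.
conf = confidence_level(1000, 90, 1000, 120)
print(f"{conf:.1%}")  # → 98.6%
```

Here the Challenger's 12% click rate beats the Control's 9% with about 98.6% confidence, which clears a 95% threshold, so the Challenger would be declared the winner.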
If one variation is statistically better than the other, you have a clear winner and can close the test by disabling the losing variation. If neither variation is statistically better, mark the test as inconclusive and retain the original version, since the variable you tested did not have a measurable impact on the results.
Accurate A/B testing can bring a lot of value to your marketing efforts and significantly increase ROI. With insight into what works best, you can make data-driven decisions while developing marketing strategies, leading to more conversions and growth in both the top line and the bottom line of your business.