Using data and analytics to carry out testing is something most clients expect their agency to be doing. ‘Test and learn’ is a phrase that gets thrown around a lot, sometimes more as a soundbite than with an actual purpose. In this blog post, we look at how best to test and measure the effectiveness of paid media campaigns.
Testing, measuring and learning should be ‘always on’. But it’s essential that the tests are designed to allow us to effectively measure the right outcomes — ones that are valuable to the business and allow us to adapt our campaigns to deliver more.
What is A/B testing?
An A/B test is used to assess the impact of changing one variable in a campaign, landing page or anything you’re looking to test. A/B testing is nothing new, but it’s important to outline some key principles when planning or setting up an A/B test to ensure it returns reliable and actionable data.
Firstly, test one variable at a time. Decide on the key thing you want to determine and set that as your variable, keeping all other elements identical. For example, say you’re testing two ads served to the same audience, and each ad has different copy, imagery, headline and CTA. At the end of your test you will know whether Ad One or Ad Two is better at delivering your preferred result (e.g. clicks, engagement, conversions), but you won’t know which element of the ad drove that performance.
A more specific test serves two ads to the same audience with the same copy, headline and CTA but different images; at the end of that test you will know which image drives your performance. You can then set up a second test where both ads have the same image but different headlines, learn which of those is more effective, and so on.
Another crucial principle of A/B testing is ensuring that each person can only see one version of the ad. If someone sees an ad that prompts them to click and convert, what’s the likelihood that they’ll perform the same action if they then see a different ad for the same product? This would skew your data to suggest that the best-performing ad is simply the one that was seen first.
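The ad platforms handle this split for you, but if you’re running your own test (on a landing page, for example) the same principle can be sketched in a few lines of Python. This is a minimal illustration, not a platform API; the function name and IDs are hypothetical.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically assign a user to one variant of a test.

    Hashing the user ID together with the experiment name means the
    same person always sees the same version, while different
    experiments get independent splits.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user always lands in the same bucket:
assert assign_variant("user-123", "image-test") == assign_variant("user-123", "image-test")
```

Because the assignment is a deterministic hash rather than a coin flip on every visit, a returning visitor is always bucketed into the same variant, which is exactly the property described above.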
A/B testing tools
Meta, TikTok and Snapchat all have options to set up A/B split testing within the platform, keeping the test parameters as tight as they can be to deliver the most reliable results, so I’d recommend the built-in tools on these platforms as a good starting point.
You can also run A/B tests directly within the Google Ads platform. It’s essential to set objectives before setting up experiments so you can measure success and analyse the correct metrics. For example, clicks driven through to the website or conversion rate could each be your key performance indicator, depending on your campaign goal.
One way to run an A/B test directly within the platform is by creating a custom experiment. This allows you to duplicate an original campaign and compare how the experiment campaign (with variants changed) performs over time. The experiment shares the original campaign’s budget and traffic, helping you make more informed decisions on which tactics improve overall performance.
Alternatively, you can set up ad variations within the platform by duplicating your ad and changing the headlines or descriptions. This is a great way to test variants such as different promotions, USPs, services or calls to action. Ad variations can also include changing the final URL and mobile URL, allowing you to test different landing pages from the website.
Statistical significance
Statistical significance is something that can easily be forgotten when measuring the results of a test. It’s important to ensure that you have enough data to be confident in the results before implementing the learnings on future campaigns.
Say you set up an A/B test and serve 1,000 impressions to audience A and 1,000 impressions to audience B. Audience A generates two clicks whilst audience B generates four. On paper, audience B was more engaged, with 100% more clicks and a 2x higher CTR, so would you pause audience A and spend 100% of your remaining budget on audience B? The amount of data collected in this example is so low that the difference is not statistically significant, and so it should not be used to inform any campaign optimisations.
However, if instead you deliver 1,000,000 impressions to each audience and generate 2,000 clicks from audience A and 4,000 from audience B, your CTRs are the same as in the first scenario, but the volume of data means the difference is now statistically significant, allowing you to make an informed optimisation decision.
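To put numbers on that intuition, one standard way to check whether a CTR difference is real is a two-proportion z-test. The sketch below uses only Python’s standard library and the figures from the two scenarios above; it’s an illustration of the statistics, not a platform feature.

```python
from math import erf, sqrt

def two_proportion_z_test(clicks_a, imps_a, clicks_b, imps_b):
    """Two-sided z-test for a difference in CTR between two variants."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Scenario 1: 2 vs 4 clicks from 1,000 impressions each
_, p_small = two_proportion_z_test(2, 1000, 4, 1000)

# Scenario 2: 2,000 vs 4,000 clicks from 1,000,000 impressions each
_, p_large = two_proportion_z_test(2000, 1_000_000, 4000, 1_000_000)
```

With the small sample the p-value comes out around 0.41, far above the conventional 0.05 threshold, so the 2x CTR difference could easily be noise. With a million impressions per audience, the same CTRs produce a p-value indistinguishable from zero, and the optimisation decision is safe to make.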
Geo testing
Platforms that allow us to geo-target offer interesting opportunities to test the efficacy of a channel. One option is running a video campaign targeting one city with significant budget, then measuring brand search uplift or generic search CTR compared to similar cities that aren’t being served the video ads. You would need to ensure that the spend on the video campaign is sufficient for the size of the city and brand.
Similarly, if a client sees similar purchasing behaviours in two similar cities, it might be valuable to understand what would happen if you stopped spending on a paid social campaign in one of them and doubled the investment in the other. This would tell you what impact social ads have on your bottom line.
In both of these scenarios it’s essential to make sure that the metrics needed to measure success are:
- aligned prior to launching
- isolated from other factors that could influence the outcome, e.g. other channels, seasonality etc.
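For the first geo scenario, a simple way to quantify the effect is to use the control cities’ trend as the counterfactual for the test city. The sketch below is a hypothetical illustration with made-up brand search volumes, not real client data.

```python
def geo_uplift(test_before, test_during, control_before, control_during):
    """Estimate campaign uplift in a test city against matched control cities.

    The control cities' growth rate is used as the counterfactual for
    what the test city would have done without the video campaign.
    """
    control_growth = control_during / control_before
    expected = test_before * control_growth  # counterfactual for the test city
    return (test_during - expected) / expected

# Hypothetical monthly brand search volumes (before vs during the campaign):
# the control cities grew 5%, so the expected test-city volume is 1,050;
# the actual volume of 1,500 implies an uplift of roughly 43%.
uplift = geo_uplift(test_before=1000, test_during=1500,
                    control_before=2000, control_during=2100)
```

This only works if the control cities really are comparable, which is why the point above about isolating other influencing factors matters: if the control cities are affected by something the test city isn’t, the counterfactual breaks.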
The same applies to paid search campaigns. For example, if your business or service is located in a specific area, say Brixton, it’d be a valuable test to determine whether there’s enough search volume to justify geo targeting the local area, or whether it’s more beneficial to target the whole of the UK and widen your audience reach. When running tests like this, it’s vital to ensure that your ad messaging and keyword targeting are aligned to be as relevant as possible.
Planning a Test Checklist
It’s great to have all this information on the different types of tests you can use in paid media, but what exactly do you need to know and prepare before running them? Here’s our test checklist:
- decide on the single variable you want to test, keeping everything else identical
- agree objectives and success metrics before launch
- make sure each person can only see one version
- plan enough budget and impressions for the results to reach statistical significance
- isolate the test from other factors that could influence the outcome, e.g. other channels, seasonality etc.
Testing is an essential part of a paid media campaign. We need a framework in place to determine the variables to test, the metrics available to measure them and, ultimately, whether the results will inform how to improve our campaign results and deliver more for our clients’ business. Are you looking for a data-driven agency to Imagine Better results for your business? Get in touch with us today to start the journey.