Quick ABCs of A/B Testing
Sites frequently undergo changes to stay relevant and engaging.
People expect more and more from their online experience, but how can we keep up with the demand? How do we know which feature works best for our users?
Is it adding more messaging or simplifying the online order process?
A/B testing helps you figure that out.
By comparing a control group against a treatment group, we can isolate the key variable that affects your site activity. Below are three simple steps - what I refer to as the ABCs of A/B (split) testing - with a splash of Adobe tips and tricks.
What business goal is your new site feature or campaign trying to achieve?
The goals you define guide you to success metrics that measure your test’s performance.
Some might say "increase in sales," others "higher engagement on site." In web analytics, these goals translate into metrics like "conversion rate" and "pages per visitor."
You can then focus on these primary metrics to evaluate the test's success. For a more thorough analysis, you can also pull in supporting metrics, such as traffic volume.
Adobe Target allows you to set up your A/B test while segmenting your audience based on user behavior. It typically reports the key metrics for each experience, along with a confidence level and % lift.
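Under the hood, a confidence number like that typically comes from something along the lines of a two-proportion z-test. Here's a rough sketch in Python of how lift and confidence can be computed from raw counts (the conversion numbers below are made up for illustration, not from any real test):

```python
from math import sqrt, erf

def ab_test_summary(control_conv, control_n, treat_conv, treat_n):
    """Return (% lift, % confidence) via a two-sided two-proportion z-test."""
    p_c = control_conv / control_n
    p_t = treat_conv / treat_n
    lift = (p_t - p_c) / p_c * 100
    # Pooled standard error under the null hypothesis of "no difference"
    p_pool = (control_conv + treat_conv) / (control_n + treat_n)
    se = sqrt(p_pool * (1 - p_pool) * (1 / control_n + 1 / treat_n))
    z = (p_t - p_c) / se
    # Two-sided confidence = 1 - p-value, using the normal CDF via erf
    confidence = erf(abs(z) / sqrt(2)) * 100
    return lift, confidence

# Hypothetical counts: 500/10,000 control conversions vs 600/10,000 treatment
lift, conf = ab_test_summary(500, 10_000, 600, 10_000)
print(f"Lift: {lift:.1f}%, confidence: {conf:.1f}%")
```

With those made-up numbers, the treatment shows a 20% relative lift at well above 90% confidence - which is why raw conversion counts, not just rates, matter when reading a test report.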
Because an Adobe Target test can be set up with A4T (Analytics for Target), you can use the assigned activity code to create any segment you wish (based on the "Analytics Experience" variable) within regular Adobe Analytics.
This is where you can have tons of fun and use Adobe Analysis Workspace to view control and treatment data side by side.
Having a balanced comparison between your control and treatment groups allows you to distinguish the cause of your results.
Try to make sure your user base is as close to an apples-to-apples comparison as possible. Sites have many elements that can subtly affect your results, such as simultaneous promotions or even a peak traffic period led by an external campaign.
Try to limit the variables that are not the focus of your test. They can easily skew your results and make them unusable.
Your test won’t be perfect, but do your best to isolate the test’s focus on the changing feature or campaign.
You can always run more tests to check for consistency - just keep to one at a time if you can. Running one test at a time lets you attribute any lift in conversions to the new feature itself.
Since you can integrate Adobe Analytics segmentation while testing, you can create segments that exclude the other A/B test codes from your data set. It gets quite tricky, but it is definitely a workaround. Adobe Target can be very particular about how groups are segmented from the start, so use your best judgment to avoid any overlap between tests.
You'll need enough data to reach statistical confidence in your results - no result is ever guaranteed, but a large enough sample lets you trust the conclusion once the test ends.
Think about how long to run the test.
For sites with larger traffic, you can easily gather a good sample size within a few weeks. For sites that don’t have as much traction, it will take more time to gain enough data to make a strong conclusion.
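If you want a ballpark figure before launching, a standard power calculation estimates how many visitors each group needs. A sketch, assuming a hypothetical 5% baseline conversion rate and a 10% relative lift you'd like to detect (both numbers are illustrative, not benchmarks):

```python
from statistics import NormalDist

def sample_size_per_group(baseline_rate, min_lift_pct,
                          confidence=0.90, power=0.80):
    """Rough per-group sample size to detect a relative lift in conversion rate."""
    p1 = baseline_rate
    p2 = p1 * (1 + min_lift_pct / 100)          # treatment rate if the lift is real
    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return int(n) + 1

# Hypothetical: 5% baseline conversion, detecting a 10% relative lift
n = sample_size_per_group(0.05, 10)
print(n, "visitors per group")
```

Divide that per-group figure by your expected daily traffic per group and you have a rough test duration - which is exactly why lower-traffic sites need to run tests longer.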
You can look into automated A/B tools that can easily help you evaluate the confidence levels for your test.
As I mentioned earlier, Adobe Target already provides confidence levels for each treatment within the test.
Aim for at least 90% confidence over a two-week period.
If the results stay consistent through those two weeks and beyond, you can move forward with reasonable confidence! Just continue to watch the numbers: the average daily totals should stay level - try placing the data in a line graph. It'll be easier to read!
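If you'd rather sanity-check the numbers than eyeball a chart, a few lines of Python can flag any day that strays too far from the average (the daily counts below are invented for illustration):

```python
# Hypothetical daily (conversions, visitors) for the treatment group over a week
daily = [(52, 1000), (48, 950), (55, 1020), (50, 990),
         (47, 960), (53, 1010), (49, 980)]

rates = [c / v for c, v in daily]    # daily conversion rates
avg = sum(rates) / len(rates)        # average daily rate
# Flag any day whose rate strays more than 15% from the average
outliers = [i for i, r in enumerate(rates) if abs(r - avg) / avg > 0.15]
print(f"avg rate: {avg:.3%}, outlier days: {outliers}")
```

No outlier days means the daily totals are staying level - the same signal you'd look for in the line graph.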
Reach out if you need help to structure your A/B tests or any Adobe Analytics reporting! It’s always fun to see the results for all types of scenarios.