How to A/B test with Fresh Relevance
One of our top tips for anyone getting started with personalization is to define your metrics and then TEST, MEASURE and OPTIMIZE. Personalized marketing is all about driving results for your metrics through your content – whether that’s video, email, social posts, articles, or press releases – and then working out what really hits the mark with a given audience. The more content you test, the more effectively you can measure differences and go on to fine-tune and optimize it.
Within Fresh Relevance you can go one step further and really enhance your marketing by A/B testing your content – making sure it’s performing at its best to deliver those all-important results. This is made easier with our new Optimize Center – a single, easy-to-use area where you can create, monitor, and access reporting for all tests of individual pieces of personalized web or email content, email triggers, and full cross-channel experiences.
There are a number of variables you can test within Fresh Relevance, some more complex than others. One piece of advice we’d give anyone who’s getting started with testing is to keep it simple. Try not to test too many variations at once or else you’ll lose sight of what’s working and what isn’t.
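To illustrate why a simple two-variant split is easier to reason about, here is a minimal, generic sketch of deterministic variant assignment. This is not Fresh Relevance's implementation (the platform handles assignment for you) – the function name and hashing scheme are illustrative assumptions, showing how one test with two variants keeps each customer in a stable bucket without storing any state:

```python
import hashlib

def assign_variant(customer_id: str, test_name: str, variants=("A", "B")) -> str:
    """Deterministically bucket a customer into one variant.

    Hashing (test_name + customer_id) means the same customer always
    sees the same variant for a given test, and different tests split
    the audience independently of each other.
    """
    digest = hashlib.sha256(f"{test_name}:{customer_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same customer always lands in the same bucket for the same test:
assert assign_variant("customer-123", "cta-color") == assign_variant("customer-123", "cta-color")
```

With only two variants, roughly half the audience sees each version, so differences in results are easy to attribute; adding more variants splits the audience thinner and muddies the comparison.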
9 simple A/B tests to get you started
1) Triggered emails – testing of wait times for emails (e.g. 20mins vs. 30mins)
This will really help you find the sweet spot for your email cadences and in some cases we’ve found that a shorter wait time increases conversions.
2) Triggered emails – testing number of email stages (e.g. 1 email vs. 2 emails)
Some shoppers want to sleep on a purchase. Others try to wait for a coupon. Many are simply distracted when the first abandonment email arrives. There’s potential for sales uplifts of up to 54% by adding multiple stages.
3) Triggered emails – email content (subject lines, layout or design)
Mixing up your subject lines, layout, or putting CTAs in different positions can help draw in the customer’s attention, especially if they’ve seen the same or similar emails from you over a period of time.
4) Recommendation SmartBlock – testing of data sources (e.g. ‘people like you buy’ vs. similar products)
Different data sources might return alternative upsell opportunities or more relevant products depending on the customer.
5) Recommendation SmartBlock – testing of CTA color or text (e.g. blue vs. yellow or ‘buy now’ vs. ‘take a look’)
Contrasting colors and different language can help draw in the customer’s attention.
6) Recommendation SmartBlock – utilizing different filters on data sources
Does applying the Best Tag Value (Category) or Price Affinity Predictor vs. not applying them have an impact on uplift?
7) Data capture popover – short form vs. long form
How much data do you really need from your customers? Do they have enough brand or product affinity to share more of their data with you? The logic says a short form should work better…
8) Banner CTA SmartBlock – customer name personalization vs. no personalization
Do customers respond better to name personalization or not? This could also apply when adding name personalization to email subject lines, or to the title/header of Recommendation SmartBlocks, for example. One client saw a 37% sales uplift when A/B testing dynamic name personalization in their marketing emails.
9) Banner CTA SmartBlock – Test different images or designs
Similar to email content and design, or recommendation CTA color and language, different images or designs can help draw in customers’ attention.
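Whichever of the tests above you run, the MEASURE step comes down to the same question: is the difference between variants real or just noise? As a rough, generic sketch (Fresh Relevance reports results for you in the Optimize Center – this standalone example just shows the standard two-proportion z-test behind that kind of comparison, with made-up numbers):

```python
from math import sqrt, erf

def uplift_significance(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Compare conversion rates of two variants with a two-proportion z-test.

    Returns the relative uplift of B over A and a two-sided p-value;
    a small p-value (e.g. below 0.05) suggests the difference is
    unlikely to be chance.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return (p_b - p_a) / p_a, p_value

# Hypothetical test: 5,000 emails per variant, 200 vs. 250 conversions
uplift, p = uplift_significance(200, 5000, 250, 5000)
print(f"uplift: {uplift:.0%}, p-value: {p:.3f}")
```

The practical takeaway: run each test long enough to collect a meaningful sample per variant before declaring a winner – small audiences produce large p-values even when one variant genuinely performs better.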
We’ll be covering more top tips in future editions of FreshPro. In the meantime, book a demo to find out how Fresh Relevance could help your business generate results with personalization.