
New feature: How to optimize testing with our Multi-Armed Bandit feature


By Lucy Russell, Head of Product Management


We’re excited to introduce our new Multi-Armed Bandit, developed in collaboration with the University of Portsmouth.

Our new feature uses machine learning algorithms to automatically deploy the best-performing content, triggers or experiences against the goals you set. This helps online businesses get the maximum ROI from their content or triggers within the Fresh Relevance system.
 

Why are we adding this enhancement?

There are several problems with traditional A/B tests. 

Firstly, many of our clients lack the time and experience to run effective A/B tests, which often require some manual interpretation and deployment of the ‘winner’.  

Secondly, the time it takes for a traditional A/B test to reach significance, plus the time to review the results and deploy the winner, can result in lost revenue. This delay may even mean that a significant winner cannot be found before the campaign has finished.

Finally, the winner may change over time as influencing factors shift. An A/B test that runs for a set period and then ends will rarely catch this.

Getting to the winner quickly is key for maximum ROI. And continuing to check that the winner still outperforms the other options ensures that performance doesn’t drop off over time.

That’s exactly what our new Multi-Armed Bandit does: it quickly finds the best-performing items and automatically allocates them the most traffic. The Multi-Armed Bandit then continues to send a small amount of traffic to the underperforming items and to monitor their performance. If it detects a noticeable change in an item’s performance, e.g. due to a change in influencing factors, it will automatically shift traffic so that the now best-performing items are shown the most.
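
To illustrate the idea at a high level, here’s a minimal epsilon-greedy sketch in Python. It’s a simplified illustration of the explore/exploit technique, not the exact algorithm behind the feature, and all names in it are illustrative:

```python
import random

class Arm:
    """One variant under test (a piece of content, a trigger, an experience)."""
    def __init__(self, name):
        self.name = name
        self.shows = 0       # visitors who saw this variant
        self.successes = 0   # e.g. conversions attributed to it

    @property
    def rate(self):
        return self.successes / self.shows if self.shows else 0.0

def choose_arm(arms, epsilon=0.1):
    """Epsilon-greedy: send most traffic to the current best variant, but keep
    a small share (epsilon) flowing to the others so a change in their
    performance can still be detected."""
    if random.random() < epsilon:
        return random.choice(arms)          # explore the underperformers
    return max(arms, key=lambda a: a.rate)  # exploit the current winner

def record(arm, converted):
    """Update a variant's statistics after a visit."""
    arm.shows += 1
    arm.successes += int(converted)
```

Because every request re-evaluates the observed rates, a variant whose performance improves or degrades automatically gains or loses traffic over time, which is exactly the traffic-shifting behavior described above.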
 

Our Chief Technology Officer, David Henderson, explains: “Typically our clients are short on time and resources, meaning traditional A/B tests and manual intervention are not an option. Through our work with the University of Portsmouth, our new functionality within the Fresh Relevance system makes it easier to run optimizations and gives better results. Clients create different pieces of content, add them to our system, then let the machine learning do the hard work of making the decisions about which to show for maximum ROI.”


How to use Multi-Armed Bandit

Multi-Armed Bandit can be used to optimize three key areas of functionality:

  • SmartBlocks and Slots, such as individual image content like a hero banner: let Multi-Armed Bandit allocate the most traffic to the top-performing banners.
  • Trigger programs: optimize the best-performing email content or send times.
  • Whole experiences, such as a few different versions of a flash sale: let Multi-Armed Bandit work out the best-performing one and allocate it the highest proportion of traffic.

Once you’ve decided what you want to optimize with Multi-Armed Bandit (content, triggers or whole experiences), it's simple to set it up via our Optimize Center.

In our Marketing Rule Trees, you can drag content, triggers or experiences into the same location to let Multi-Armed Bandit determine which ones to show.  

In addition to the existing configuration options of rotation, randomization and A/B test, you can now select Multi-Armed Bandit. Once selected, all you need to do is add the goal that the optimization works towards.

Currently, we have the following goals available:

  • Increase conversion
  • Increase average order value
  • Increase identification
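
Each of these goals corresponds to a different reward signal for the algorithm to maximize. As a rough illustration (the goal and field names below are assumptions made for the sketch, not our API), the translation might look like this:

```python
def reward(goal, visit):
    """Translate a goal into the per-visit reward the bandit maximizes.
    `visit` is a hypothetical record of what happened after a variant was shown."""
    if goal == "increase_conversion":
        return 1.0 if visit["converted"] else 0.0           # binary: did they convert?
    if goal == "increase_average_order_value":
        return visit["order_value"] if visit["converted"] else 0.0
    if goal == "increase_identification":
        return 1.0 if visit["identified"] else 0.0          # e.g. email address captured
    raise ValueError(f"unknown goal: {goal}")
```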


Once saved, the optimization will appear in our Optimize Center, so it’s fast to adjust or to view the results.

Our Multi-Armed Bandit then decides which items within the optimization to show each visitor in order to achieve the specified goal. Reporting is available on multiple metrics, alongside our standard metrics, so you can clearly see what data the Multi-Armed Bandit is working from and what it is doing on an ongoing basis.
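
To give a feel for the kind of per-variant data involved, here’s a small sketch that builds a summary from logged counts (the field names are illustrative, not our reporting schema):

```python
def summarize(arm_stats):
    """Build a per-variant report from logged counts.
    `arm_stats` maps a variant name to (impressions, successes)."""
    total = sum(shows for shows, _ in arm_stats.values()) or 1
    return [
        {
            "arm": name,
            "impressions": shows,
            "successes": wins,
            "success_rate": round(wins / shows, 4) if shows else 0.0,
            "traffic_share": round(shows / total, 4),
        }
        for name, (shows, wins) in arm_stats.items()
    ]

# A healthy run: the winner receives most of the traffic, while the other
# variants still get enough impressions to stay monitored.
print(summarize({"banner_a": (9000, 450), "banner_b": (500, 15), "banner_c": (500, 12)}))
```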
 


 

What’s next?

We’re looking to expand Multi-Armed Bandit further by extending the goals that users can choose from, as well as working towards the addition of a Contextual Multi-Armed Bandit. This will also take into consideration anything that is already known about the user (such as past behavior or location) when deciding what content to show.
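
As a rough sketch of the concept (not the forthcoming implementation), the simplest contextual approach keeps separate statistics per user segment, so the best variant can differ between, say, new and returning visitors:

```python
import random
from collections import defaultdict

class ContextualBandit:
    """Simplest contextual approach: an independent epsilon-greedy bandit per
    context segment (e.g. 'new_visitor_uk', 'returning_visitor_us').
    Names and structure here are illustrative only."""

    def __init__(self, arms, epsilon=0.1):
        self.arms = arms
        self.epsilon = epsilon
        # context -> variant -> [shows, total_reward]
        self.stats = defaultdict(lambda: {a: [0, 0.0] for a in arms})

    def choose(self, context):
        if random.random() < self.epsilon:
            return random.choice(self.arms)  # keep exploring in every segment
        stats = self.stats[context]
        return max(self.arms,
                   key=lambda a: stats[a][1] / stats[a][0] if stats[a][0] else 0.0)

    def record(self, context, arm, reward):
        self.stats[context][arm][0] += 1
        self.stats[context][arm][1] += reward

# Hypothetical usage: the same visitor context is used to pick and to learn.
bandit = ContextualBandit(["sale_v1", "sale_v2", "sale_v3"])
variant = bandit.choose("returning_visitor_uk")
bandit.record("returning_visitor_uk", variant, reward=1.0)  # e.g. the visitor converted
```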

Book a demo to see our Multi-Armed Bandit feature in action and find out how it could help your business optimize testing for maximum ROI.


By Lucy Russell

Head of Product Management

As Head of Product Management at Fresh Relevance, Lucy works closely with our customers and the wider team to shape the product roadmap and oversees the roll-out of all new features.