How to optimize your listings with Manage Your Experiments
A/B testing: which marketer doesn’t use it? It’s the go-to technique for checking what works and what doesn’t. The good news is that you can now apply the same method on Amazon with Manage Your Experiments. This feature, also known as split testing, has been available in the U.S. for two years, and European sellers have recently gained access as well. Here’s how it works.
How it works
Anyone who has ever run an A/B test knows that experimenting gives you plenty of new insights. For example, you learn how to create content that better appeals to your target audience and, as a result, how to generate more sales.
The concept is straightforward. Manage Your Experiments lets you create two different versions of content for the same ASIN. Amazon splits shoppers into two groups: one group sees version A throughout the experiment, while the other sees version B. A customer in either group sees that version wherever the content appears. For example, a title you are experimenting with shows up in search results, ads, the product detail page, and the shopping cart/checkout. This keeps the test consistent for the eight to ten weeks it runs.
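Amazon handles the group assignment behind the scenes, but the idea of consistent bucketing is easy to picture. Below is a minimal, purely illustrative sketch (not Amazon’s actual implementation; the customer ID and experiment name are made up) of how hashing can keep each customer in the same group for the whole run:

```python
import hashlib

def assign_variant(customer_id: str, experiment: str) -> str:
    """Deterministically bucket a customer into version A or B.

    Illustrative only: customer_id and experiment are hypothetical inputs;
    Amazon's real assignment logic is not public.
    """
    digest = hashlib.sha256(f"{experiment}:{customer_id}".encode()).hexdigest()
    # An even split: the same customer always gets the same version,
    # so their experience stays constant for the whole test period.
    return "A" if int(digest, 16) % 2 == 0 else "B"

print(assign_variant("customer-123", "title-test"))  # always returns the same version
```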
To experiment with content, you can choose from the following options: product images, product titles, or A+ content. Within the latter category, you can customize various components:
- Titles
- Images
- Text
- The order of your A+ modules
- Adding extra modules to your existing A+ content
- Using infographics instead of product photos
How do you do it?
The great thing about Manage Your Experiments is that it not only gives you lots of new insights but is also easy to set up. Follow these steps:
- Go to Brands > Manage Your Experiments.
- Select the ASIN you want to experiment with. Some ASINs might not appear in the list; those are not eligible for testing.
- Add the details of the experiment, such as the name, hypothesis, duration, and start date.
- Select the content for your experiment. You can enter different titles and/or images. That’s version A. To create version B, click the ‘start by duplicating version A’ link, then make your changes for version B.
- Submit your experiment for Amazon’s approval. Once approved, your experiment starts automatically. Always check whether your test has actually started; if not, make the necessary changes so your experiment complies with Amazon’s rules.
Tip: make sure the content of the two versions differs clearly, so you can determine with certainty what works.
The results
At the end of the testing period, you analyze the data and determine which version performed best; you can then decide to publish the winner. In the meantime, you can keep an eye on how the experiment is going: Amazon updates the results weekly, and you can always access the data via the dashboard. You can also stop the A/B test before the testing period is over, but we don’t recommend it, because early results are not reliable indicators of the final outcome. If you base your next steps on premature conclusions, you risk choosing the wrong version.
At the end of the experiment, Amazon also estimates what the winning version could bring you in sales over the next year. Of course, this is no guarantee of actual results.
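Amazon’s exact projection formula isn’t public, but the underlying idea is simple extrapolation. A rough, purely hypothetical back-of-the-envelope version (all numbers made up) looks like this:

```python
# Hypothetical extrapolation, not Amazon's formula: if version B converts
# 5% better relative to version A and the ASIN normally sells 10,000 units
# a year, the projected uplift from showing B to everyone is roughly:
annual_units = 10_000        # made-up baseline yearly sales
relative_uplift = 0.05       # made-up relative conversion uplift of version B
projected_extra_units = annual_units * relative_uplift
print(projected_extra_units)  # 500.0 extra units, assuming everything else stays equal
```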
How to interpret the results
When analyzing the results, you first check whether your hypothesis holds. For example, suppose you were convinced that one title (version A) would score much better than another (version B): consider which factors explain the outcome. If the results confirm your hypothesis, you can test those same factors on other ASINs to verify them. In addition, you can check various KPIs in detail, such as sales, conversion rate, sales per unique visitor, clicks, and sample size.
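If you want to sanity-check the dashboard numbers yourself, a standard two-proportion z-test on the conversion rates is one way to judge whether a difference is likely real or just noise. The sketch below uses made-up visitor and order counts; the function name and figures are ours, not Amazon’s:

```python
from math import sqrt

def conversion_significance(visitors_a, orders_a, visitors_b, orders_b):
    """Two-proportion z-test on conversion rates (illustrative, made-up data)."""
    rate_a = orders_a / visitors_a
    rate_b = orders_b / visitors_b
    pooled = (orders_a + orders_b) / (visitors_a + visitors_b)
    standard_error = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / standard_error
    return rate_a, rate_b, z

# Hypothetical numbers: 4,000 visitors per version, slightly more orders for B.
rate_a, rate_b, z = conversion_significance(4000, 320, 4000, 380)
print(f"A: {rate_a:.1%}, B: {rate_b:.1%}, z = {z:.2f}")  # |z| > 1.96 ~ significant at 95%
```

A small sample size or a tiny difference in rates pushes the z-value toward zero, which is exactly why stopping a test early tends to produce unreliable conclusions.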
Are you not seeing clear or decisive results? Try to draw lessons from that, too. Perhaps the adjustment was too small to make a difference, or the product was attracting too little traffic to begin with. Another possibility is that your changes simply did not affect your customers’ choices; in that case, it is a good idea to keep testing other elements. It is also possible that both versions were just fine, in which case you can treat the existing elements as a recipe for success.
Whatever the outcome of your split tests, there is a lot to learn. Need some guidance setting up and interpreting your experiments? VNDR. is happy to help! Contact one of our experts, and we’ll go through it together.