Iterative A/B testing - how to evolve an idea you've already tested

Sam Walker
Optimisation Consultant

So, your test has concluded and you have the results; now what do you do with them? It’s rare that a test is an outright winner, or an outright loser for that matter. There will almost always be something to take away and move forward with. Sometimes a test achieves the goals stated in its hypothesis, yet unforeseen side effects mean this version of your idea never gets implemented. This is where iteration comes in.

There are several factors in deciding whether to iterate on a previously tested idea. Is the idea good enough to warrant a follow-up test? Does the data suggest it could succeed with some tweaks? Is this an area of the site worth focusing on for such a prolonged period, or are there other areas that deserve the attention? All of these should factor into your decision to iterate on a completed test.

Why should you iterate?

Learn more about your visitors

Just because a test hasn’t won doesn’t automatically mean it lost. Every new test is an opportunity to learn more about visitor behaviour on your site. For example, if you introduce a bulk-buy option to a site that offers subscription packages, you may get a lot of bulk sales, but at the cost of subscribers. The takeaway here is that, given the choice, users would rather buy multiple quantities of a product than subscribe to yet another ‘thing’. If you identify the area that has been negatively impacted (subscriptions), you can look for the best way to avoid that harm going forward. As you already have the core idea of what you wish to improve, the alterations can be as slight as a few lines of copy or a pre-selected option.

Optimise further

On the flip side, just because a test has won doesn’t mean the element is fully optimised. This often shows up as differences between device types: two options displayed side by side on desktop may end up stacked on top of each other on mobile. The optimal design there may be to combine both options into one shared content box.

Maintain momentum

Iteration also lets you progress your testing strategy without constantly brainstorming new test ideas. Rolling from one test into the next helps you maintain momentum in an area of your marketing plan where it is very easy to lose it. Without a test to roll into, you can spend too much time debating what the ‘perfect’ next test is instead of iterating further on an idea you’ve already tested.

What should be iterated?

This is where you must remain focused and retain the belief you had in your initial idea. It is easy to discard an idea after it ‘fails’, but often it is the implementation of a feature, not the feature itself, that caused the negative result.

Identify the essential feature

The best process is to identify the essential feature driving the initial idea: newsletter sign-ups, upselling products, increased subscription revenue, gift-card sales, or simply funnel progression. Even if the first attempt yielded negative results, the next iteration is your chance to fix that. The point of testing is to find the optimal solution, after all.

Take bulk buy, for instance: there are many ways to implement it on a site, and some lead to negative impacts elsewhere. A bulk-buy option can cannibalise subscription purchases, which damages long-term retained revenue. Your next step, then, could be to shift its placement so that users aren’t comparing bulk vs. subscription but bulk vs. single purchase.

Tweak timings

A common way for sites to increase newsletter sign-ups is to offer a discount incentive in a pop-up. This isn’t a bad idea, but it can be tweaked by something as simple as delaying how long the pop-up takes to appear. This gives users time to get to know the product they are looking at before they are shown the discount, so they can associate a value with the offer. You may need to go even further by adding a few lines of copy so users know exactly where they can use the discount code.
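The delayed pop-up tweak can be sketched in a few lines of TypeScript. The 15-second delay, the helper names, and the `show` callback are all illustrative assumptions, not a standard; the right delay depends on your site and should itself be tested.

```typescript
// Sketch of a delayed newsletter pop-up trigger (assumed timings and names).
const POPUP_DELAY_MS = 15_000; // assumption: wait 15s so the user sees the product first

// Pure decision helper: only show once, and only after the delay has elapsed.
function shouldShowPopup(msOnPage: number, alreadyShown: boolean): boolean {
  return !alreadyShown && msOnPage >= POPUP_DELAY_MS;
}

// Browser wiring (illustrative): schedule the pop-up after the delay.
function schedulePopup(
  show: () => void,
  delayMs: number = POPUP_DELAY_MS
): ReturnType<typeof setTimeout> {
  return setTimeout(show, delayMs);
}
```

Keeping the timing decision in a pure helper like `shouldShowPopup` makes the logic testable without a browser, so the delay itself becomes an easy variable to iterate on.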

Optimise for device type

Another easy trap to fall into is assuming that because your test idea has won, it is fine to leave it alone. Most often, the reason to tweak a winning test further is to optimise it for a specific device type. The trick is to stay flexible across screen sizes and dimensions while not breaking the more universal experience shared between devices.

As a lot of websites adapt to the scale of the screen they are on, what may be a side-by-side comparison on desktop can end up being stacked on mobile. This can lead to a scenario where one of the options isn’t visible to the user.
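One way to guard against that scenario is to make the layout decision explicit in code. A minimal sketch, assuming a 768px breakpoint and a hypothetical `options-stacked` class name:

```typescript
// Illustrative guard for the stacked-options problem (assumed breakpoint).
const STACK_BREAKPOINT_PX = 768;

// Pure helper: decide whether two options render side by side or stacked.
function layoutFor(viewportWidthPx: number): "side-by-side" | "stacked" {
  return viewportWidthPx >= STACK_BREAKPOINT_PX ? "side-by-side" : "stacked";
}

// In a browser you might react to the breakpoint like this (sketch only):
// window.matchMedia(`(max-width: ${STACK_BREAKPOINT_PX - 1}px)`)
//   .addEventListener("change", (e) => {
//     document.body.classList.toggle("options-stacked", e.matches);
//   });
```

Making the breakpoint an explicit value means a follow-up test can vary it, or swap the stacked layout for the shared content box mentioned earlier, without touching the rest of the page.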

How to iterate

Sometimes you will have to go back to the drawing board and rethink the whole approach. Other times the last piece will be staring you in the face so hard, you’ll wonder why you didn’t think of it the first time around.

Identify ways to improve

After each test concludes, you will need to complete a full analysis covering all the metrics that have been tracked, both key and behavioural, to get as well-rounded an understanding of the impact as possible. At this point you may identify ways to improve the area you have been testing without fully fleshing out the iterated idea. That is fine: you will likely need to discuss and collaborate with other members of your team before you get into planning.
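As a sketch of one piece of that analysis, here is a two-proportion z-test on conversion counts. The helper names and sample numbers are mine, not from any particular tool; the 1.96 threshold corresponds to roughly 95% confidence on a two-sided test, and a real analysis would look at behavioural metrics alongside this.

```typescript
// Minimal post-test significance check: two-proportion z-test on conversions.
function conversionRate(conversions: number, visitors: number): number {
  return conversions / visitors;
}

// z-score comparing control (A) vs. variant (B) conversion rates,
// using the pooled proportion for the standard error.
function zScore(convA: number, nA: number, convB: number, nB: number): number {
  const pA = convA / nA;
  const pB = convB / nB;
  const pooled = (convA + convB) / (nA + nB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
  return (pB - pA) / se;
}
// |z| >= 1.96 ~ significant at the 95% level (two-sided).
```

For example, 100 conversions from 1,000 control visitors against 150 from 1,000 variant visitors clears the 1.96 bar comfortably, while 100 vs. 105 does not; that difference is exactly the kind of evidence that should inform whether to iterate.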

Think like a user

Iteration is a good strategy in part because it allows for a quick turnaround; you shouldn’t need to dump hours of brainstorming into the next version of an idea. Once you begin to overthink, you can lose what inspired the original test. The best way to decide how to use the feature is to think like a user and treat the feature as something the user wants. Was there any reason a user may have been put off completing the desired action? Did they not have time to understand the value of the discount offered? Could they not fully compare two products because they weren’t side by side? Could they simply not see the feature because of the colours chosen? Start with the small, simple reasons and find the solution that would turn that user into a customer.

What not to do

Skip the full analysis

There are many traps you can fall into immediately after a test has concluded. One is skipping a full analysis of the previous test’s results before discussing next steps.

Take too long

It is also easy to take too long to iterate on your idea. The next step should be a simple improvement; iterating isn’t about big, drastic changes, it’s a small fix to an already high-potential idea. That doesn’t mean rushing straight into a rerun, though. Iterating serves many purposes, and one is maintaining momentum, which works best when you have other tests you want to run before or after you iterate on a previous design. Rolling from one test to the next provides a sense of accomplishment and progress, helps you fully understand where a previous test went wrong, and gives you time to adapt and build the next version of it. Without this cycle you could lose momentum, left in a hole for weeks trying to pick the ‘right’ next test while missing out on learning opportunities.

It is easy to feel deflated after a test has lost, especially if it was an idea that you’re proud of. But if you have your metrics set right then there is always something you can get out of a test, even if it isn’t a revenue increase. The more you test, the more you learn about your site and your customers.