Insight tests and impact tests - why you need both for successful CRO

Matt Scaysbrook
Director of Optimisation

Insight and impact - two core values of the test-and-learn approach.

The former represents long-term value to the organisation, whilst the latter appeals to short-term needs. But you need both if your CRO programme is going to be sustainable and valuable.

Impact tests are critical to demonstrating progress against predefined goals; like any investment, there is inevitably an expectation of tangible value generated. And tests focused on impact can provide that in spades.

Insight tests are equally critical, both in delivering the ah-ha moments that fundamentally change an organisation’s perspective, and in creating the conditions in which future impact becomes increasingly likely.

So whilst it is undeniable that any successful CRO programme requires both insight and impact, what do these tests look like in practice? And how do you achieve the right balance?

Insight tests

What is an insight test?

Insight tests seek to answer specific questions, and their effectiveness in achieving that aim relies on two core tenets:

  • Visual / functional changes constrained to a particular area or element - this lends itself to creating a clear link between cause and effect
  • Depth and breadth of tracking to offer genuine explanation for observed visitor behaviour - why certain behaviour changed is more important than what the business value (in £s) is

This constraint of scope is enabled by a tight hypothesis that links the impact on a specific metric to a shift in visitor behaviour driven directly by the change you have made.

The focus on answering a question about visitor behaviour or motivation does not prevent insight tests from generating an impactful outcome, but that is a secondary consideration.

Example

For example, an insight test may look to answer a question such as:

Do our visitors value the democratic social proof of our reviews over the clarity on delivery costs?

A positive answer to that question could indeed come with a tangible onsite conversion uplift, but the fact that the insight can inform other areas of the business is the more valuable element.

It could change how the business prioritises key messages in external advertising, or indeed the level of investment made in gathering more reviews or reducing the unit cost of delivery.

Whatever the result of an insight test, there is wider value to be found in the implementation of that insight in other areas.

To continue with the example above, the test could be a technically simple one: if the current delivery cost element holds a prominent position on the product page, swapping it for a product reviews element could help to provide the answer to the question.

But to do so, you would need multiple data points to confirm the insight - for example:

  • What % of visitors clicked on the tested elements?
  • What % of visitors in each experience then added to basket?
  • Was there a direct correlation between those clicking on the tested elements and the add-to-basket impact?
  • How did visitors then progress through the checkout process in each experience?
  • Were there any differences in progression based on whether visitors clicked or didn’t click on the tested elements?

Whilst this is not an exhaustive list of data points, it should serve to illustrate the importance of viewing the wider impact of discrete changes in an insight test. If you’re looking to generate insight that could shape wider business priorities, it must be based on solid evidence.

And ideally you need to have thought through the evidence you are seeking before you run the test, not after - there is nothing worse than analysing an insight test only to realise that you’re missing the granularity you need to make a definitive judgement!
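To make that concrete, here is a minimal sketch of the kind of per-experience roll-up described above, written in Python against a hypothetical flat event log. The field names, event names and experience labels are illustrative assumptions, not the schema of any particular analytics or testing tool.

```python
from collections import defaultdict

def summarise_insight_test(events):
    """Roll a raw event log up into the per-experience data points listed above.

    Each event is assumed to be a dict with "visitor_id", "experience"
    ("delivery_cost" or "reviews"), an "event" name and, for checkout events,
    a "step" number. These names are illustrative, not a real tool's schema.
    """
    visitors = defaultdict(lambda: {"experience": None, "clicked": False,
                                    "added": False, "deepest_step": 0})
    for e in events:
        v = visitors[e["visitor_id"]]
        v["experience"] = e["experience"]
        if e["event"] == "element_click":        # click on the tested element
            v["clicked"] = True
        elif e["event"] == "add_to_basket":
            v["added"] = True
        elif e["event"] == "checkout_step":      # how far they progressed
            v["deepest_step"] = max(v["deepest_step"], e["step"])

    def atb_rate(group):
        return sum(v["added"] for v in group) / len(group) if group else 0.0

    summary = {}
    for experience in {v["experience"] for v in visitors.values()}:
        group = [v for v in visitors.values() if v["experience"] == experience]
        clickers = [v for v in group if v["clicked"]]
        non_clickers = [v for v in group if not v["clicked"]]
        summary[experience] = {
            "visitors": len(group),
            "click_rate": len(clickers) / len(group),
            "add_to_basket_rate": atb_rate(group),
            # Splitting add-to-basket by click behaviour is what lets you argue
            # cause and effect rather than just reporting an overall uplift.
            "add_to_basket_rate_clickers": atb_rate(clickers),
            "add_to_basket_rate_non_clickers": atb_rate(non_clickers),
            "avg_deepest_checkout_step": sum(v["deepest_step"] for v in group) / len(group),
        }
    return summary
```

The exact shape of the data will differ from tool to tool, but the principle stands: every question in the list above maps to a metric you can only calculate if the relevant events were being captured before the test went live.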

The problem with insight tests

Insight tests have the ability to generate significant ripple effects across an organisation, but those effects can take time to implement, and therefore the immediate business value can be muted. So whilst they are an essential part of the core value of testing, focusing on them alone can invite questions about the tangible value of your programme.

Impact tests

What is an impact test?

Impact tests care little about answering a question, unless that question happens to be “Can we increase the business value generated through X?” - and given that the vast majority of organisations expect a demonstrable ROI from their investments, this is where impact tests earn their corn.

If you’re looking for a significant impact, you should consider:

  • Big, bold changes; you can’t expect a meaningful shift in commercial performance if you aren’t willing to take the risks needed to achieve it
  • A focus on the area with the greatest potential; this is likely to be the part of the sales / leads process that performs most weakly

The sweeping scope of the changes you make in an impact test often prevents the more granular cause-and-effect analysis that you would get from an insight test, but that is one of the prices paid when seeking step-changes in performance.

This does not mean that you throw detailed tracking out of the window, of course, but it will likely limit your ability to pinpoint why your new experience delivered the results it did. As the opposite of insight tests, these tests prioritise tangible organisational value over a deeper understanding of visitor behaviour.

Example

For example, an impact test could look to create an entirely new checkout experience. Turning a one-page checkout into a multi-stage checkout, or vice-versa, could be one way to do this, but it could also include changes in reassurance messaging, the display of forms and fields, error-handling, data entry order and more!

As mentioned above, whilst you would still look to track the impact of each of those changes, and wherever possible create a comparative dataset with the existing checkout, the likelihood that you will be able to distinguish meaningful cause-and-effect is very low. You could (and very likely would!) use the same level of tracking as you would in an insight test, but the sheer number of changes means that discerning which of them made the biggest difference is very challenging, if not impossible.

But the scale of the changes means that the chances of you failing to achieve a detectable impact in the metrics you are tracking are equally very low. For better or worse, you are very unlikely to have a flat result!
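As a rough illustration of what “detectable” means in practice, here is a minimal Python sketch of a two-proportion z-test on overall checkout conversion, comparing the existing checkout with the rebuilt one. The visitor and order counts are invented for the example, and the calculation is a simplified stand-in for whatever statistics your testing tool actually applies.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Return the z statistic and two-sided p-value for the difference in conversion rate."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    standard_error = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / standard_error
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical results: existing checkout vs the rebuilt multi-stage checkout.
z, p = two_proportion_z_test(conversions_a=1450, visitors_a=40000,
                             conversions_b=1610, visitors_b=40000)
print(f"z = {z:.2f}, p = {p:.4f}")  # With changes this sweeping, a flat result is rare.
```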

The problem with impact tests

Impact tests can provide results very quickly, and can deliver significant benefits to key organisational goals if successful. But if they prove unsuccessful, the challenge of unpicking the reasons why can mute their long-term value. So whilst they are required at times to satisfy short-term urges for meaningful change, a total focus on them would dent the future value of your programme - the value that only insight can provide.

Your next test - insight or impact?

Insight and impact are the yin and yang of your CRO programme. You need both if your programme is going to thrive. But too much focus on one or the other will result in lost value, whether short-term or long-term.

So when you’re planning your next test, consider which your programme really needs right now. Have you built up the capital to focus primarily on insight, or are there questions building around the hard numbers that your programme is delivering?

And therefore how can you plot the best course for your organisation’s needs right now?
