The CRO Process: Four Steps to Take Before Your Test Goes Live

Laura Fox
Senior Marketing Specialist

Steps One to Four of the CRO Process: Idea Generation, Test Planning, Test Build and QA

How does the CRO team fit into the CRO process?

Before we dive into step 1, let’s go through the CRO team roles and responsibilities so you can understand how they come together.

We have three main areas of responsibility when it comes to our CRO team. First of all, we have Consultants. They are in charge of the strategy, the planning, the project management, and also the analysis of the tests that we run.

We then have developers, who are obviously involved in building out the tests. They're also involved in feasibility checks early in the process, and in the implementation and maintenance of the test.

Finally, the third area of responsibility is QA, or quality assurance. This is the team who will check that the tests work as expected. This entails visual verification and scope checks, to make sure the test respects the scope as planned. And most importantly, they check that the conversion events for your test do track.

Note that we make sure to separate the developer role from the QA role. It is quite possible that at your end this boundary does not exist, and that's actually fine, as long as you can always ensure that whoever develops the test does not get to QA it. You need a fresh pair of eyes to bring that extra impartiality, if you like, to the checking process.

Step 1 of the CRO process: Idea Generation

What goals are you trying to help the business achieve?

You want to make sure that your tests, and your programme as a whole, bring strategic value. That means not just looking at what particular metric you want to target, but really at how it helps the business as a whole move forward.

What data sources have you got access to?

These are essential for validating the ideas that you have: not just for identifying where problem areas might be, but also for giving you the justification you're likely to need to get stakeholder approval to move forward. Whether those stakeholders are internal or external doesn't matter. The more sources you can provide that back up your idea, the easier it will be to get that sign-off.

What behavioural changes might be required to make those goals attainable?

This is about turning your thoughts to people and away from just metrics. If your tests are going to be successful, it is people's decision-making that you will have to influence to make that happen. It isn't about moving numbers; it is about changing how people think and how they interact with your site.

Showing an idea

So for us, showing an idea or a concept comes with four sections to it. The first is the hypothesis.

The hypothesis

We generate this from the known business goal, the behavioural insight about the impact we're going to have, and the planned execution. So we write this backwards: knowing the business goal is the easy part; understanding the behavioural influence is the hard part; and the execution, how we're going to have that behavioural impact, comes last. A hypothesis might read, for example: by making the See More button sticky (execution), we expect more visitors to engage with the product details (behavioural influence), supporting our conversion goal (business goal).

Background and Rationale

This is where we're looking at the data points we've got from quantitative tools, qualitative tools, or any user research that's been done. Basically, what is the evidence we can provide for this idea? Then, in the rationale, we take that evidence and explicitly link it to the concept of the idea we've come up with.

Visual mock-up

How might it look? Some people work much better with pictures than with words alone. So having a mock-up means you're covering all eventualities for the people who are going to look at this idea and need to give their approval on it.

Step 2 of the CRO process: Test planning

Once you've got your concept down and people are happy with it, the next step is going to be planning the test itself. And this is going to revolve around a single key document called the test plan.

What is the test plan?

Essentially, it's the specification for the test: the single place where you write down all the details about it.

Why do I need a test plan?

It's to keep track essentially of what is going to be tested and who you're going to test it with.

Who is the test plan for?

It's for everyone involved in the testing process: yourself, product owners, developers, quality assurance. Anybody who will have to understand what has been tested, and who might be interested in understanding that in the future as well.

Test plan format

The format for this document is entirely up to you. We've seen it in many different formats over the years: it can be a simple Word document, a Jira page, or a card on a Trello board. In the end, all that matters is that you log the specific content that's needed; the format itself lends itself to a lot of flexibility.

We have three main sections in our test plan (a rough sketch of how these fields might be captured follows the Metrics section below):

Nuts and Bolts

Background - why the test idea came about and the rationale for it

Hypothesis - we repeat it here

Success metrics - how you define the success of the test

Location - where it's supposed to run on the website

Audience - the browser and device combinations that the test is going to target

Traffic level - how many people in your audience are going to see this test? Is it everyone or is it only a portion of your audience?

QA scenarios - things that you suspect might break and that you would therefore really like the QA team to check attentively!

Changes

1) Visuals - we can't stress enough the importance of having visuals not only for the experiment, but also for the control version of the test. You need the before and the after pictured in that test plan. If you don't, remember that your website changes all the time, and when you look at this six months from now you might not be sure what the new experiment was tested against. So it's super important to take screenshots and capture everything, including how it looks now.

2) Variant transforms - describing the transformations that will be necessary for your test to run. E.g. the See More button should be sticky.

Metrics

Conversions - e.g. clicks on See More button

It is very important to understand that if you can't actually think of a way to track something, maybe you shouldn't be testing it at all. If your metrics don't come to you easily maybe you should rethink the test altogether!
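As an illustration of the kind of fields we log, here's a minimal sketch of how a test plan could be captured in a structured form. The field names and the example values (the See More test) are purely illustrative assumptions, not tied to any particular tool or format.

```typescript
// A minimal sketch of the test plan fields described above.
// Field names and example values are illustrative, not tied to any tool.
interface TestPlan {
  // Nuts and bolts
  background: string;
  hypothesis: string;
  successMetrics: string[];
  location: string[];                              // URLs or URL patterns where the test runs
  audience: { devices: string[]; browsers: string[] };
  trafficLevel: number;                            // share of eligible visitors, 0 to 1
  qaScenarios: string[];                           // things you suspect might break

  // Changes
  visuals: { control: string; variant: string };   // links to screenshots / mock-ups
  variantTransforms: string[];

  // Metrics
  conversions: string[];
}

const seeMoreTest: TestPlan = {
  background: "Research suggests visitors are missing the product details.",
  hypothesis: "Making the See More button sticky will increase engagement with product details.",
  successMetrics: ["Clicks on See More button"],
  location: ["/product/*"],
  audience: { devices: ["desktop", "mobile"], browsers: ["Chrome", "Safari", "Firefox", "Edge"] },
  trafficLevel: 0.5,
  qaScenarios: ["Sticky button overlapping other elements on small screens"],
  visuals: { control: "control-screenshot.png", variant: "variant-mockup.png" },
  variantTransforms: ["The See More button should be sticky"],
  conversions: ["Click on See More button"],
};
```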

Step 3 of the CRO process: Test Build

Review the test plan

So the first job for the developer, once the test plan is signed off, is to look at that plan itself and clarify anything they are uncertain about before they write any code. The reason for this is that fixing the details of the plan is a lot faster than fixing the code once it's been written.

We have worked with a lot of developers over the years, and one of the worst things you can hear from them, once the test has been built and you start asking questions, is: "I thought it looked wrong, but I did it anyway."

If you're working with development teams, it's important to give them the confidence that they can, and should, question what you plan. None of us are infallible; we all make mistakes, whether it's a typo or a missed section. The important thing is that their first job is to look at that test plan and clarify anything they don't understand.

Set up the test

From there, they're going to set up the test in whatever CRO platform you're using: the transform URLs, the test entry logic, the transforms themselves, and any conversion tracking you might need.

Repeat checks

And last, but again certainly not least, the developer's job is to make sure that they believe the test is robust before they pass it on to the QA team. Not skimping there means they're not just passing issues on to other people (thinking back to the speed and reliability elements here).

What does the test build look like in its different elements?

Transform URLs

What page or pages need to have visual or functional changes on them?

Test entry logic

What conditions qualify a visitor to be eligible for the test and when should they enter it?

Sometimes this is as simple as the load of a particular page or series of pages. Sometimes a visitor may need to get to a certain section within that page, or enter certain information into it, in order to qualify. Test entry logic is hugely important. If it is not done accurately, you'll end up comparing apples and pears: some people who have seen your change and others who haven't, all being counted in the same way. So from a developer's perspective, getting that part accurate and robust during the test build phase is hugely important for the solidity of the tests that you're running.
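To make this concrete, here's a hedged sketch of what entry logic could look like in plain TypeScript. The activateTest function is a stand-in for whatever your CRO platform actually provides, and the URL pattern and selector are assumptions for the example; the important pattern is that a visitor only enters the test once the page state the change depends on actually exists.

```typescript
// Illustrative sketch only: `activateTest` stands in for whatever your CRO
// platform provides to enter a visitor into a test.
declare function activateTest(testId: string): void;

// Qualify visitors on product pages, but only once the element the test
// changes actually exists, so control and variant groups stay comparable.
function maybeEnterTest(): void {
  if (!/^\/product\//.test(window.location.pathname)) return;

  if (document.querySelector(".see-more")) {
    activateTest("sticky-see-more");
    return;
  }

  // Element not rendered yet: watch the DOM and enter as soon as it appears.
  const observer = new MutationObserver(() => {
    if (document.querySelector(".see-more")) {
      observer.disconnect();
      activateTest("sticky-see-more");
    }
  });
  observer.observe(document.documentElement, { childList: true, subtree: true });
}

maybeEnterTest();
```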

Transforms

This is the bit that most of us think about when it comes to test build. It's those visual and functional changes that form the differences between the control and the experiments that you're running.
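As an example, here's a minimal sketch of the "See More button should be sticky" transform from the test plan, using plain DOM APIs. The selector, styles and marker class are assumptions for illustration; a real build would usually also handle re-rendering and edge cases.

```typescript
// Illustrative sketch of a variant transform: make the "See More" button sticky.
// Selector and style values are assumptions for the example.
function applyStickySeeMore(): void {
  const button = document.querySelector<HTMLElement>(".see-more");
  if (!button) return; // entry logic should guarantee this, but fail safely

  button.style.position = "sticky";
  button.style.bottom = "16px";
  button.style.zIndex = "1000";
  button.classList.add("exp-sticky-see-more"); // marker class for QA and tracking
}

applyStickySeeMore();
```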

Conversion tracking

A lot of tools these days will integrate with the analytics platform that you or a client are using, so sometimes you don't need to add extra tracking. But if you do, this is where it happens. Data collected within the CRO platform itself usually comes from elements or functionality you've introduced within your test that don't exist in the control at all. There is therefore no existing tracking for them in an analytics platform, because until this test is live, they don't exist.
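For a new element introduced by the variant, tracking can be as simple as listening for the interaction and forwarding it. Here's a sketch using a dataLayer push in the Google Tag Manager style; the event name and payload are our own assumptions, so swap in whatever your CRO platform or analytics setup actually expects.

```typescript
// Illustrative sketch: track clicks on the variant's "See More" button.
// The event name and the dataLayer push are assumptions for the example.
const dataLayer: Array<Record<string, unknown>> =
  ((window as any).dataLayer = (window as any).dataLayer || []);

function trackSeeMoreClicks(): void {
  const button = document.querySelector<HTMLElement>(".exp-sticky-see-more");
  if (!button) return;

  button.addEventListener("click", () => {
    dataLayer.push({
      event: "cro_conversion",
      experiment: "sticky-see-more",
      action: "see_more_click",
    });
  });
}

trackSeeMoreClicks();
```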

Step 4 of the CRO process: QA

With a QA team dedicated to just checking tests, six main areas are going to be checked:

Test specification

First of all, you're going to have the test specification, which is basically: does the test actually do what I expected it to do?

Look and feel

Then the team is going to look at the look and feel of the test. Does it look right? Does it look as per the designs that were given in the test plan?

Pagehide scope

When you release a change to be tested, you do not want people in the experiment to glimpse the original version of the page before the change is applied. You want them to be presented with the change straight away, so that they do not know they're in a test.

This needs to be set up by the devs, and it needs to be set up only on the pages where the test is actually taking place and the change is happening. So what the QA team is going to check is that this scope is respected: that hiding the control and then revealing the experiment only happens where the test actually runs, and not across the whole site.
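A common way to do this is a short "anti-flicker" snippet: hide the relevant part of the page with CSS as early as possible, then reveal it once the transform has been applied, with a timeout as a safety net. Here's a minimal sketch, assuming the test only runs on product pages and that the build dispatches a custom event when the transform is done; both are assumptions for the example. The QA check is then simply that this hiding only ever happens on the pages in scope.

```typescript
// Illustrative anti-flicker sketch, scoped to the pages where the test runs.
// The selector, timeout, URL pattern and custom event name are assumptions.
const onTestPage = /^\/product\//.test(window.location.pathname);

if (onTestPage) {
  // Hide only the element the test changes, not the whole site.
  const style = document.createElement("style");
  style.id = "exp-pagehide";
  style.textContent = ".see-more { opacity: 0 !important; }";
  document.head.appendChild(style);

  const reveal = () => document.getElementById("exp-pagehide")?.remove();

  // Reveal once the transform has run, or after 2 seconds as a safety net
  // so visitors never get stuck with a hidden element.
  document.addEventListener("exp:transform-applied", reveal, { once: true });
  setTimeout(reveal, 2000);
}
```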

Code quality

Make sure that there are no errors being thrown by the code that the devs added to the site.
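One lightweight way to make this check systematic during QA is to collect any errors thrown while exercising the test in the browser. A small sketch (naming and approach are ours, not a prescribed tool):

```typescript
// Illustrative sketch: collect errors thrown on the page during a QA pass.
// Run in the browser console or as a QA helper snippet.
const qaErrors: string[] = [];

window.addEventListener("error", (event) => {
  qaErrors.push(`${event.message} (${event.filename}:${event.lineno})`);
});

window.addEventListener("unhandledrejection", (event) => {
  qaErrors.push(`Unhandled promise rejection: ${String(event.reason)}`);
});

// After clicking through the test scenarios, inspect what was collected:
// console.table(qaErrors);
```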

Conversion tracking

You want to make sure that your metrics are being tracked correctly.

Unknown scenarios

These are the little things that may break but that you may not have thought about. A typical one is, for example, navigating away from the test page with the back button and then coming back. That's a scenario we notice may break, and therefore it comes under unknown scenarios. These are the things the QA team will find that you or the developer did not necessarily think about.
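The back-button case is worth a concrete note: when a visitor returns via the browser's back/forward cache, the experiment code may not run again, so the transform may need to be re-applied. A minimal sketch using the standard pageshow event, reusing the hypothetical applyStickySeeMore transform from the build step above:

```typescript
// Illustrative sketch: re-apply the variant changes when a visitor comes back
// to the page via the browser's back/forward cache (bfcache).
declare function applyStickySeeMore(): void; // the transform sketched earlier

window.addEventListener("pageshow", (event: PageTransitionEvent) => {
  if (event.persisted) {
    // The page was restored from bfcache rather than reloaded, so the
    // experiment code has not run again: re-apply the transform.
    applyStickySeeMore();
  }
});
```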

Checked for all device/ browser combinations

Now, all of this has to be done for all the devices and browsers that you are going to test on. This is very important, and the reason is that not all browsers are equal.

So when you do these checks, you often have a matrix of essential checks per browser and device: yes or no, is it working or not? Depending on how wide a problem is, you either have a test-wide issue, if what you were checking doesn't work across several browsers and devices, or occasionally a browser-specific issue. This happens particularly with Safari: we find things tend to look different there and tend to need to be coded slightly differently.

So it is very important that everything gets checked across all the browsers and devices that you're going to present to your users. Otherwise, you might be in for some bad surprises.

Better quality output with QA

Yes, developers can do QA, but most of them don't like it! It's not their thing, so have someone for whom this is a specialist job. In the end, if you want a job done well, you have specialists do it.

I know for our business, when we brought in a specialist QA for the first time, the quality of what we were outputting went up massively. We spotted more things before they went over to a client. It is very much overlooked, but it's a big win on the reliability side of the programme as a whole.

Related resources:

Webinar: Our CRO Process in action: 10 steps for repeatable success

The CRO process - three steps to take during the testing phase

The CRO process: three steps to take after your test is over
