Q&A session - How to build the foundations of a winning CRO programme (part 3/3)

Laura Fox
Senior Marketing Specialist

After discussing Nando’s CRO journey with George and homing in on the three key things he said were crucial in building successful foundations for your CRO programme, we opened the floor to questions from the audience. We hope you find the Q&A useful.

The full conversation is below.

Matt: So, the first question, and I'll ask this one to you, George. How do we choose a tool if we're just starting out, when our needs might change as we test more?

George: I think that builds quite nicely on what I just mentioned at the end there. In terms of tools, my personal opinion is that tools can come and go throughout your programme. Realistically, you want a tool that meets your short- and medium-term needs, one with the capability to run the tests you envisage running over the next few years, ideally with a bit of scope to go beyond that as things improve and you find a rhythm.

My instinct is to find the tool that suits you now, get the right process and way of working in place, and find that rhythm; your tools can come and go as you develop along that journey. Find a tool that suits your budget and your short- and medium-term needs.

Matt: I think the point you made there about budget is a big one as well, because tools can go from free, to a few hundred pounds a month, to tens of thousands of pounds if you want them to.

It isn't too specific to say, but for Nando's, your budget for people far exceeds your budget for tools, doesn't it, by quite some distance?

George: Yeah, 100%. And right now I wouldn't have it any other way. That's something I've grown to appreciate more over the years: it's definitely the people and the process that are the most important aspects of the programme. You could have the swankiest tool out there, but if you don't know how to use it, or you can't make the most of it, it loses much of its value. So yeah, pick the tool that fits your budget and where you are in your journey. Getting the right people and the right processes in place is 100% your priority.

Matt: I've always looked at it as: good people can plug gaps in tools in a way that good tools can't plug gaps in people. It's a crude way of looking at it, but it's certainly been my experience.

Okay, next question. This one could be fun. George, have you ever run a test where the results have completely surprised you?

George: Yes, we definitely have, and in both directions, to be honest. Testing ideas can come from all angles of the business, and we like to think we're very open to running tests if there's data or insight suggesting we should focus on a particular area. We will look to explore that, and there have definitely been tests where we were supporting other stakeholders, trying to validate their hypotheses, and the test turned out less successful than they expected, or more successful. It happens fairly regularly.

One of the tests that probably surprised us the most, and it came at a bit of a bad point because certain decisions had already been made, was when we looked at the navigation on our menu. On our order menu we have a horizontal navigation, which you can slide across and scroll through. It's something we think we'll revisit anyway. The test ran that against a more traditional e-commerce hamburger navigation, where you've got the little icon in the corner that expands into a drop-down of the different categories. It's not as common for ordering in the food industry, but obviously very common on other retail websites. There was a strong belief across the business that the horizontal navigation, what we have now, was optimal, and it just didn't return great results. It shocked quite a few people, and it's something we will definitely be revisiting, because we know there are improvements to be made there; it just wasn't the right time to change it.

Matt: Sometimes the thing that is the prettiest is not the thing that works best. I've seen that time and again over the years.

George: Also, most people look at their immediate peers for examples of how to design things on the website, and I think this is a case where we had followed suit with best practice within our industry. By looking outside of that, at styles of navigation that aren't as common in our industry, you get a different picture and more options. So, yeah, that's one example; they do pop up a lot, but that's what testing is all about.

Matt: The phrase I often use is: if we had all the answers, we wouldn't be running tests; we'd just tell you what to do. On looking outside your industry or category, something I was told a long, long time ago has stuck with me: if you only ever do what your competitors do then, at best, you will be as good as them; you will never be better than them.

Next question, I'm going to ask you again George. If you carry out your research first, quantitative and qualitative, why would you need to test if you know what your customers want?

George: There's very rarely a case where both our data sources pinpoint the solution. The quantitative and qualitative research guides us to what we think could be an issue, and off the back of that there could be a number of different hypotheses we want to test.

There might be some prevailing ones that seem more obvious, but you're basing that on trend data, funnels, or KPIs, which only alert you to an area. On the qualitative side, there might be a handful of survey responses, a sample of session recordings, or even a handful of customers you've spoken to directly in person. You might think that's enough, but when you actually put a change live in front of customers, whether it's hundreds or thousands of them over a couple of weeks, you can't replicate that in any other environment. There are bugs and fixes you'd deploy straight away because they're complete friction points and there's no alternative but to change them now. But if you're fundamentally changing the user experience, if you're trying to change the way customers behave, then in my experience no test is too small to run, and it's a worthy method for validating the hypotheses your data has given you.

Matt: Nando's has the advantage of a lot of traffic, so some of those smaller tests you talk about are easier for you than they might be for a site with substantially less traffic.
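
To put some rough numbers on Matt's point: the smaller the uplift you want to detect, the more visitors a test needs. The sketch below is a standard two-proportion power calculation with invented conversion figures; it is an illustration, not anything from Nando's programme.

```python
# Rough sketch: visitors needed per variant to detect a relative uplift
# in an A/B test. The 5% baseline conversion rate is an invented figure.
from statistics import NormalDist

def sample_size_per_variant(p_base, uplift, alpha=0.05, power=0.8):
    """Approximate visitors per variant for a two-proportion test."""
    p_var = p_base * (1 + uplift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    return (z_alpha + z_beta) ** 2 * variance / (p_var - p_base) ** 2

# A 3% relative uplift needs vastly more traffic than a 20% one,
# which is why small tests favour high-traffic sites.
for uplift in (0.03, 0.10, 0.20):
    n = sample_size_per_variant(0.05, uplift)
    print(f"{uplift:.0%} uplift: ~{n:,.0f} visitors per variant")
```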

George: Yeah, that is a massive benefit! It comes back to the risk element we talked about earlier. Everyone is within their remit to use their data sources and make a decision, but we're trying to move towards a test-and-learn mindset. From a development perspective, if we can validate some of our hypotheses before we roll out and push live to 100% of customers, we'll achieve better results as a business overall.

The other benefit is that when we have a successful test we want to hand over to the development team to put into the live environment, it's much easier if we've got two or four weeks' worth of testing data: we can say we ran this test, these are the KPIs we were monitoring, and this is the incremental effect. In our case that's mainly sales or revenue, but whatever the metric is, if you can show that over however long you ran the test it proved incremental value, it becomes a lot easier to go to the tech team and say we'd like this implemented, and this is why.
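
As an illustration of the kind of summary George describes handing over, here is a minimal sketch that turns a test's KPI data into an incremental-value figure. All of the traffic and revenue numbers below are invented.

```python
# Hypothetical two-week test read-out: conversion uplift and the
# incremental revenue observed over the test window.
control = {"visitors": 120_000, "orders": 6_000, "revenue": 150_000.00}
variant = {"visitors": 120_000, "orders": 6_420, "revenue": 162_300.00}

cr_control = control["orders"] / control["visitors"]
cr_variant = variant["orders"] / variant["visitors"]
uplift = cr_variant / cr_control - 1  # relative change in conversion rate

# Revenue per visitor difference, scaled to the variant's traffic,
# approximates the incremental revenue over the test period.
rpv_delta = (variant["revenue"] / variant["visitors"]
             - control["revenue"] / control["visitors"])
incremental = rpv_delta * variant["visitors"]

print(f"Conversion rate: {cr_control:.2%} -> {cr_variant:.2%} ({uplift:+.1%})")
print(f"Incremental revenue over the test period: £{incremental:,.2f}")
```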

Matt: It's easier to get the priority, isn't it? A client I worked with many years ago used to send his head of development a message every morning reminding him how much money it was costing not to have the winning variation deployed, which was effective! It got things done for him.

Cool. I think we've probably just about got time for one last one. So the question is, how do you work out what to test first?

George: It's a very good question, and it's not even just what to test first; it's what to test next, every day, it's never-ending. This is an area I've focused on a lot, especially over the last year. Once the programme has bigger visibility in the business, what's coming up next is really important, because that's what people in the business really want to know: what's coming up and how it's performing. The middle bit they're not so concerned about; it's the exciting ends of the programme. So making sure that when a test lands it's actually going to be impactful, that's really important.

It all starts with those data sources. For me, it has to come from the quantitative and qualitative data points, pinpointing the areas where we can improve, or where we feel our biggest opportunities to improve are. And obviously we'll be supporting the business objectives.

One thing we've adopted more recently is a prioritisation matrix, not unlike the ones technology teams use to prioritise features. Obviously the matrix is very flexible depending on how your business operates and which variables are important to you. Ours covers the expected impact on user behaviour: is it a new element? Does it sit above the fold? Is it trying to get customers to take a particular action? Is there data to support it? Does it support one of our objectives? All of those factors roll up into an overall score for any given test, which helps us rank and focus our upcoming work. It's a work in progress, and there are bits we continuously change and try to improve where possible, but it gives us more focus and makes sure that any test we come up with, or that comes in from other areas of the business, is fed through a fair prioritisation method, backed up by data and tied to the overall objectives.
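
For readers who want to try something similar, here is a minimal sketch of a weighted prioritisation matrix of the sort George describes. The factors echo the ones he lists, but the weights and scores are illustrative assumptions, not Nando's actual model.

```python
# Illustrative weights: how much each factor contributes to a test's rank.
WEIGHTS = {
    "expected_impact": 3,    # anticipated effect on user behaviour
    "above_fold": 1,         # does the change sit above the fold?
    "new_element": 1,        # is it a brand-new element?
    "data_backed": 2,        # is there quant/qual data behind it?
    "supports_objective": 2, # does it ladder up to a business objective?
}

def score(test):
    """Sum weighted factor ratings (each rated 0-5) into one rank score."""
    return sum(WEIGHTS[factor] * test.get(factor, 0) for factor in WEIGHTS)

# Hypothetical backlog entries, scored and ranked for the roadmap.
backlog = [
    {"name": "Hamburger navigation", "expected_impact": 4, "above_fold": 5,
     "new_element": 3, "data_backed": 4, "supports_objective": 5},
    {"name": "Checkout copy tweak", "expected_impact": 2, "above_fold": 1,
     "new_element": 0, "data_backed": 3, "supports_objective": 2},
]

for test in sorted(backlog, key=score, reverse=True):
    print(f"{score(test):>3}  {test['name']}")
```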

Matt: It's a difficult question!

We're just about out of time. Thanks, everyone, for joining, and I hope you've gained something from George's experience over the last four years.

If there are any questions that we haven't had time to answer or any more in-depth pieces that anyone wants to talk about off the back of this session, feel free to get in touch directly or through the website.

If you are just starting out on your CRO journey, we've created a free guide answering over 50 questions that commonly come up at this stage.

A final thank you to George for his time this morning. Have a good day, everyone!

Watch the webinar recording

You can catch up on the whole conversation here:
