In part two of the conversation with George at Nando’s, Matt delves deeper into George’s three key learnings from building their CRO programme from scratch (with examples). Hope you enjoy it. The full conversation is below, but if you want to jump to specific sections you can do so here:
- Start small - to build trust (jump to section)
- Start small - increased scope = increased risk (jump to section)
- Start small - the risk that you perceive today is far less than the risk of doing nothing (jump to section)
- Start small - areas to focus on (jump to section)
- Regularity and building momentum in the early days (jump to section)
- Objectives and KPIs - know what you’re optimising for (jump to section)
- Objectives and KPIs - Business objectives > questions > test objectives > KPIs (jump to section)
- Objectives and KPIs - your hypothesis (jump to section)
- Quantitative and qualitative data - what and why (jump to section)
- Quantitative and qualitative data - examples (jump to section)
- Summary (jump to section)
- One final point from George (jump to section)
Matt: Let's try and break these key points down. So starting small is point number one. Where we started with Nando's all those years ago, the first test we ever ran wasn't even about anything transactional, was it? It was on the newsletter signup page. So this was about demonstrating what the process looked like, not just to you guys who we worked with every day, but to give you something to go to your stakeholders with. I think the reason that was so important is that you've got to build that trust within the business, haven't you? This new thing that you're doing, which frankly can be quite disruptive if done the wrong way, needs that trust if the programme is going to grow and be successful. There's a mixture of things at play: part of it is process, and part of it is results. In the end, CRO is an investment like any other for a business. There has to be a tangible element to it, the returns that the business is getting. Between those two things, that's what builds the trust up.
I know that this next bit is something that you've seen over time: when that trust goes up, you get the scope, that increase in scope, to do more exciting things. But I think what often gets missed is the last part of this, which is the risk element. The greater that scope gets, the greater the risk gets, and for senior people in any business that's the thing they tend to consider first, isn't it? It's not what could we gain, it's what is the risk of doing this versus doing nothing. How have you been able to deal with the increasing visibility of what you do from those senior people?
George: It can be tough at times, and you need to back your own corner and know that the process you have in place, and previous activity, can justify why you've been given the authority to run activity at risk.
I guess the caveat to it all is that the beauty of CRO and A/B testing is that you’re cushioned by the fact that it's very agile. If, worst-case scenario, something wasn't quite working right, or there were factors outside of your control that needed attending to, you can turn things off or suspend things very quickly and easily! So although there might be a perceived risk with running certain tests, the management of that risk is actually really easy. You will always get stakeholders raising their concerns, but as long as you can clearly communicate the process (what we're doing is mitigating a longer-term risk by testing this), that can easily be managed in the moment. It's a hurdle I've personally had to get over.
Matt: That's a really interesting point though. I guess what you're saying is that the risk that they perceive today is far less than the longer-term risk of doing nothing. It's a message that personally I've always been a big believer in from a CRO perspective.
There is a lot of commentary in our industry about how things should be, call it the perfect CRO programme or the perfect experimentation programme, and I guess my view has always been that nobody goes from doing nothing to that. It just doesn't happen; it's unrealistic to expect that it will. It is a process of growth that you go through, and I think I'm gonna pinch that line for the future! The risk that you perceive today is far less than the risk of doing nothing in the future - that one's going in the notes!
There are a few things that we point to as areas to focus on, and I know a number of these have come up with Nando's over the years. If you're looking for something of a checklist, the first one is effecting change with speed. I think George sort of alluded to this a moment ago, about being able to spin things up and take them down where necessary. That's a critical element to getting that initial investment. Can you deliver change faster than, call it, the normal process?
Secondly, emphasising risk mitigation. It comes as part of the CRO package, but others won't necessarily see that unless you tell them it is the case.
Thirdly, areas of measurable impact - it sounds really obvious, and there are probably a number of people listening to us this morning thinking, well, yeah! But that measurability isn't just about financial benefit. It's about proving that you can effect change, and have a number that sits behind that. As I said earlier, the first test we ran with Nando's was not something that people were going to be writing in their diaries that night (!), but it was an important part in showing what the process looked like and that we could granularly measure the outputs of what we did. It also came with pretty low technical effort.
That's the fourth one - low technical effort. And there are really two reasons that the low technical effort part matters. The first is that you don't want to technically fall at the first hurdle! That is not a place you want to be. Even as a third party, as an agency, we don't want this, because we want you to be able to tell a good story to your stakeholders about why this is a good idea. So us saying, yeah, we can build something really complex in the first few weeks, whether we can or we can't, it's just not smart to try and do that. The other part is that high technical effort means a higher level of investment, and the higher the investment, the higher the perceived risk. So, wherever possible, try and avoid anything that's too complex.
And the last one is educating stakeholders on value. This is something that George has obviously worked very hard on and continues to do on a daily basis, because it doesn't change, does it? You get new stakeholders into the business, or you start to impact areas you haven't looked at before, and that brings more people in. The education of those who don't deal with this every day is massively important. One of the things I've found across a lot of different businesses we've worked with is that very few of those in more senior positions have ever been hands-on in a CRO context, because CRO is not necessarily new anymore but it's newer than a lot of other disciplines. It wasn't something they did earlier in their careers, and things that aren't well known worry people even more. We all worry about things we don't understand. Someone says, right, we're going to do this, and you're like, I have absolutely no idea what that means, or what I'm going to do with it. If you're looking for a breakdown of what George means by the importance of starting small, those are the five bullet points for that.
Around that low technical effort piece: sometimes we get the question about whether that means getting out as many tests as possible. And so I wanted to put this bit in, to explain the difference between simplicity, number of tests, and what we term regularity. If you assume a binary world with two types of test, you've got a simple test and you've got a complex test. Those simpler ones, you can get a lot more of them out. We see those cycles usually at six to eight weeks. If you do something complex, you're looking at maybe 10 to 12 weeks. But this isn't about us trying to get out as many tests as possible; it's about regularity in the early stages. If you can get tests out and concluded regularly, that brings a regular string of results, and that helps build the momentum, the sense that this is a programme that is going places.
And the phrase that I've used often is that you almost have to be seen as one of the principal catalysts of change within the business. That point you made earlier, George, about what's the risk of doing nothing, that's the point you're trying to get across, isn't it: businesses that don't change at all eventually fail. Even taking the last couple of years for Nando's, if you hadn't invested in your online platforms for delivery and collection in previous years, the pandemic would have been very different for you. That change became utterly essential!
So the idea to get across is that change isn't optional; it has to happen. The question isn't should we change, the question is how do we change in the right way, balancing those risk factors along the way. And that momentum that gets built, we see it across a lot of the clients we work with. Once you start building that momentum, people get excited about this stuff, about what could be possible and what could be done. And that helps to build up that scope, as we referred to earlier. If you take too long from the inception of your test to getting its results, people forget what you were going to do. They're no longer excited about the outcome because they can't even remember, three months back, what it was you said you were testing. So if you are at the early stages of your programme, think regularity rather than getting as many tests out as possible.
The second point George made about the things he's learned was objectives and KPIs. In the simplest sense, if you don't know what you're optimising for, you can't optimise - it's not possible. And whilst that sounds obvious, there is a bit more to it.
So the breakdown that George gave was effectively: what is the business trying to achieve? What you need to do is understand that this is an investment the business is making. It isn't about the tests that you want to run. It’s how do we make sure that we're all pulling in the same direction. For you George, just before I go through the rest of that, how easy was it to get those objectives when you started?
George: The business has gone through a lot of change over the last few years. In all honesty, we've got to a position now where we're in a multi-channel world, and we're not resting on our laurels with our restaurant eat-in business being as strong as it is; we need to make sure that we are set up as best as possible to run a multi-channel business. It's absolute basics, but making sure there was a clear set of business objectives is something that has been developed in recent times at Nando's. It's a work in progress, and year on year it's going to get better. It's been a challenge. There's getting that alignment as well: even if a business has got set objectives, there are also cross-functional objectives that need to play into the business objectives, and so there are a few pieces that need to be pulled together, and that's always being worked on. It should be simpler than it is, is what I'd say! Trying to get that is super important.
Matt: Have you found that sometimes when those objectives come down they're challengeable? Are they ones where, with the experience of the last four years of running tests and living in that stuff every day, you're able to push back up and say, well, I know you've set this, but based on what I know that's not necessarily the right thing to do?
George: Yeah definitely. I think in more recent cases it's been understanding how that can be measured, and realistically how our day-to-day activities can be measurably supporting that. And I think that's one of the things we've had conversations on internally. It's all well and good having objectives, whether they're achievable or not, but if you can't clearly measure your impact on them then it's tricky. We're lucky that we're in a space that is very measurable, so it's not as difficult, but I still need to make sure that there is a clear alignment between our objectives and the business objectives.
Matt: So getting those business objectives, as George said, is not always as straightforward as it sounds. It should be easy, but it is a challenge for a number of businesses we've worked with, and as you said George, with Nando's that has got better and better over the years. Perfect isn't a place you ever reach; it's something you strive for and try to get to.
And from those business objectives, the next step is: what are the questions that then need to be answered? From those questions, you can feed down into overall test objectives, and then into specific KPIs for those tests.
I’ve pulled together a fairly noddy example based on Nando's. If Nando's goal was to be the number one seller of chicken by volume in the UK, one of their business questions might be as simple as: how do we sell more chicken? The objective of the test then becomes: we want to sell more chicken online. And therefore a specific KPI for that test could be the conversion rate from visit to order on the Nando's site. So if you're looking for an example for your own business of what that flow might look like, hopefully that helps. And as George mentioned, you do need to be aligned with those business objectives, but in a measurable way. You want to be able to play back up that chain and say: yep, we did this, and this is the output of it, and that's how I, or my team, or my department, is supporting the business in those bigger objectives.
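That flow can be sketched as a simple structure. This is purely illustrative: the field names and figures below are hypothetical, chosen only to show the chain from business objective down to a measurable KPI:

```python
# Hypothetical sketch of the objective -> question -> test objective -> KPI
# chain from the noddy Nando's example above. All numbers are invented.

test_plan = {
    "business_objective": "Be the number one seller of chicken by volume in the UK",
    "business_question": "How do we sell more chicken?",
    "test_objective": "Sell more chicken online",
    "kpi": "visit-to-order conversion rate",
}

def conversion_rate(orders: int, visits: int) -> float:
    """Visit-to-order conversion rate, the KPI for this example test."""
    return orders / visits if visits else 0.0

# e.g. 1,200 orders from 40,000 visits (made-up figures)
rate = conversion_rate(1200, 40000)
print(f"{test_plan['kpi']}: {rate:.1%}")  # visit-to-order conversion rate: 3.0%
```

The value of writing it down like this is that every KPI can be traced back up the chain, which is exactly the "play back up" conversation described above.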
There was something else that George mentioned earlier after this point. Once you've got these, it doesn't give you a test. This is where the hypothesis comes in, and it's a very important element for that communication upwards. Anyone could pretty much do what we've done there, breaking down from business objective all the way down to a test KPI. The key bit then is: what do we do with that? How are we going to turn that into an experience? With hypotheses, we think of them in two different ways: how you would present one to somebody else within the business, and how you write it before you get there. The structure that we've used with Nando's over the years has been:
By making this change
Visitors will have this or these behavioural reactions
Thereby triggering this impact on whatever the business metric is.
But when we're writing them, we flip it around. The ‘thereby’ part, you already know: we've just been through that, and knowing the metric you would like to change is the easy part. Then the ‘visitors will’ part: what change in behaviour would, or could, result in that metric moving? And only lastly do you handle the 'by' part: what on-site change could trigger that behaviour? All too often we think, what's the change I want to make, oh, this is the test that I want to run. We tend to get anchored on that 'by' part, on the change we want to make, but by flipping it around you're leading with what the business wants out of this, out of the investment that they're making, and it helps to keep that narrative tied in with the objectives the business has.
By way of an example, something fairly simple:
By introducing a guest checkout
Visitors will no longer be frustrated by forced registration on their first visit
Thereby increasing new visitor to order sales conversion
It sounds simple when you read it, but it's a key part in turning those objectives and KPIs that you absolutely need from the start into something that is starting to sound more like a test that can be run.
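As a minimal sketch of that template, the three parts can be captured as a small structure. The class and field names here are my own invention, not anything from the conversation; the point is that you author it thereby-first but present it by-first, as described above:

```python
from dataclasses import dataclass

# Illustrative only: a minimal container for the hypothesis template above.
# Field order mirrors the writing order (thereby first), while render()
# presents it in the by / visitors-will / thereby order used with stakeholders.

@dataclass
class Hypothesis:
    thereby: str        # the business metric you want to move (decided first)
    visitors_will: str  # the behavioural change that could move it
    by: str             # the on-site change that could trigger that behaviour

    def render(self) -> str:
        """Present the hypothesis in the stakeholder-facing order."""
        return (f"By {self.by}\n"
                f"Visitors will {self.visitors_will}\n"
                f"Thereby {self.thereby}")

h = Hypothesis(
    thereby="increasing new visitor to order sales conversion",
    visitors_will="no longer be frustrated by forced registration on their first visit",
    by="introducing a guest checkout",
)
print(h.render())
```

Writing the fields in that order forces you to start from the metric the business cares about, rather than anchoring on the change you fancy making.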
The last of George's points was around the importance of having both quantitative and qualitative data. Within your hypothesis, the ‘visitors will’ element, affecting the behavioural change, is always the hardest part, because you're not trying to move numbers, you're trying to change how human beings interact with things. If you ever lose sight of the fact that that is what you're trying to do, your job becomes very difficult. You're not trying to push a bounce rate down; you're trying to encourage more people to stay on the site or move deeper into it, and those people have to be at the forefront of it.
And as George mentioned earlier, the quantitative data, using Google Analytics as an example, is effectively what has happened in the past, in aggregate numbers. The qualitative is why it might have happened, at a more individual level. Your quantitative data provides you with areas of focus at scale, whereas the qualitative tends to provide insight into what current behaviour is, but is harder to review in large quantities. How we've tried to characterise these for people in the past is: quantitative - what are our biggest challenges? Qualitative - how might we be able to solve them?
And just to illustrate George's point a bit further, what do these look like as examples?
Our conversion from the basket page has dropped by 10% over the last quarter (quantitative). If you know that, you then go to your qualitative sources: session recordings are showing people who aren't clicking on a T&Cs checkbox (qualitative). The first point says what has happened. The second now gives you an idea of what you might be able to do about it.
Homepage bounce rate is much higher on mobile than it is on desktop (quantitative). Heat maps show 50% of mobile visitors don't scroll far enough to see a CTA to tap on (qualitative). That again gives you something to act on.
Lead generation forms are converting at less than 3% (quantitative). An exit survey shows people just aren't comfortable giving us their phone number (qualitative). Again, that's something that can be actioned.
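The three examples above share one shape: a quantitative finding paired with the qualitative insight that suggests an action. A tiny sketch of that pairing, with the data taken from the examples and the structure itself purely illustrative:

```python
# Illustrative pairing of quantitative findings with qualitative insights,
# using the three examples above. The structure is an assumption for
# demonstration, not a prescribed format.

observations = [
    {
        "quantitative": "Basket page conversion dropped 10% over the last quarter",
        "qualitative": "Session recordings show people not clicking the T&Cs checkbox",
    },
    {
        "quantitative": "Homepage bounce rate much higher on mobile than desktop",
        "qualitative": "Heat maps show 50% of mobile visitors don't scroll to the CTA",
    },
    {
        "quantitative": "Lead generation forms converting at under 3%",
        "qualitative": "Exit survey shows people aren't comfortable sharing a phone number",
    },
]

for obs in observations:
    # What happened (at scale), paired with why it might have happened
    print(f"Problem: {obs['quantitative']}")
    print(f"  Insight: {obs['qualitative']}")
```

Keeping the two columns together is the discipline: a quant finding with no qual insight tells you where to look but not what to try.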
So just to summarise what George has taken us through from his experience:
Start small - it's hugely important. I can honestly say that, in my experience, I have at times failed to advise somebody strongly enough on the start small point, and sometimes that stops a programme before it even starts. You get a few months in and you're just not generating the momentum that you need to. So think about the processes that you have; the combination of those and your results will help you build the trust that you need, because that trust will bring you scope and impact. And as George mentioned earlier, use that to educate your stakeholders on what they should and can expect from what you're doing, because you need them: they're the ones holding the budget strings you require to keep your programme going.
Objectives and KPIs - it sounds very obvious, but you do have to know what you're working towards, and work backwards from whatever those business objectives are to help form your test KPIs. If you're finding that those test KPIs are not in line with the objectives you've got, this probably isn't a test you want to run or a direction you want to take. They have to be there proving that you're moving towards those top-level business objectives.
Mix of quantitative and qualitative data - the key point here is that they are symbiotic: one isn't better than the other, and one doesn't substitute for the other! Your quant is an identification of problem areas. Your qual gives you the behavioural insights needed to make the changes that will likely shift those business objectives you've got.
George, before we wrap up and move on to some of the questions that have come through is there anything else that you want to add today?
George: I think, without overwhelming listeners, these are definitely the key bits. Those three points, and taking your time, thinking things through and not rushing into things, are important. It needs to be approached in the right way. It's central to a lot of different operations in the digital space. It's fun, it's very valuable, but it does need to be done in the right way. I'd also suggest that the process you onboard into and get into a rhythm with is the most important element, aside from the tool and the tests you're running. Those elements come and go, but the process you're operating in, whether it's with a partner or internally, in terms of how you go from ideating and figuring out what you want to test through to running analysis, is really important. So yeah, take your time and don't just jump straight in.
Matt: Yeah, absolutely.
Thanks for reading!
The full Q&A session from the webinar is coming soon so keep an eye out for it.
Watch the webinar recording
You can catch up on the whole conversation here: