
What is the best way to approach split testing?

In a recent webinar, we were asked:

“When do you do bold testing and when do you just do tweaking?
Or do you always go for bold testing?”

It’s a great question, and, as so often happens, the short answer is ‘it depends’.

So we gathered the thoughts of our optimisers and other CRO professionals. Here’s what they had to say…

Johann Van Tonder, Lead Optimiser at AWA


For me there’s a difference between ‘meek tweaks’ (button colours, CTA above the fold, etc.) and sensible, simple single-variable tests based on research, analysis and evidence.

‘Meek tweaks’ as defined above are a waste of time, but single-variable tests have their place. It depends on the situation and context.

As an agency we tend to lean more towards bolder tests. After all, everyone wants big uplifts in a short space of time. The trade-off when you go for a higher magnitude of uplift (positive or negative) is that you lose precision in learning, which comes from testing single variables. These small tests don’t exclude the possibility of big uplifts in addition to knowledge. With a bold test, where you change many elements at once, you’ll get the uplift, but you won’t know exactly what influenced customers to spend more money (or less money!).

There are times when it would be the wrong approach to run a bold test, for example when you’re still learning about your visitors. Precision in learning is more important in that instance.

Dave Mullen, Lead Optimiser at AWA


There’s definitely a balance. A slight tweak, if addressing a key need, can be very powerful.

As a rule of thumb, the test needs to create a big enough difference to measure with statistical significance, so unless you’re huge (e.g. Amazon), you can rarely afford to test only meek tweaks. At the other end of the spectrum, the boldest change is a full website redesign, and I definitely wouldn’t recommend that as best practice unless you put the relevant benchmarks and practices in place to identify negative changes as they occur and protect your conversion rate.
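
To put rough numbers on Dave’s rule of thumb, here is a minimal sketch of the standard two-proportion z-test sample-size calculation. The 3% baseline conversion rate, the uplift figures and the significance/power settings are illustrative assumptions, not figures from AWA.

```python
# Approximate sample size for a two-arm split test, using the standard
# two-proportion z-test formula. The 3% baseline conversion rate and the
# uplift figures below are illustrative assumptions.
from scipy.stats import norm

def visitors_per_variation(baseline, relative_uplift, alpha=0.05, power=0.8):
    """Approximate visitors needed in EACH variation to detect the uplift."""
    p1 = baseline
    p2 = baseline * (1 + relative_uplift)
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)           # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2

# A meek tweak hoping for a 2% relative lift vs. a bold test aiming for 20%:
for uplift in (0.02, 0.20):
    n = visitors_per_variation(baseline=0.03, relative_uplift=uplift)
    print(f"{uplift:.0%} relative uplift -> ~{n:,.0f} visitors per variation")
```

On those assumptions, detecting a 2% relative uplift takes over a million visitors per variation, while a 20% uplift needs around fourteen thousand – which is why only the Amazons of this world can afford to test meek tweaks.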

Dan Croxen-John, CEO at AWA


There are five good reasons why bold tests are always worth going for.

1. In terms of split-testing statistics and test duration, bold tests either work well very quickly or bomb very quickly, so you get instant feedback. With a mild tweak, you’ll be waiting quite some time to find out whether it worked (see the sketch after this list).
2. It’s really exciting. You get to challenge the perceived wisdom about what customers are like – the way your internal team thinks they are vs. the way the research suggests they actually are.
3. Very rarely does changing button colours or moving things around on a page for its own sake deliver the kind of increases a business wants – despite the soundbite headlines.
4. You get to learn about your customers in a way that can help other channels. For example, when we tested putting an endorsement from a vet on a pet food website, we learned a lot that could travel to other channels. A button test isn’t going to help the rest of the business.
5. The bigger the risk you take, the bigger the potential reward – regardless of the size of your website, as I explain here: https://www.awa-digital.com/blog/mvt-and-split-testing/local-maxima-and-the-value-of-being-bold
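
To illustrate Dan’s first point with rough numbers, here is a sketch that converts the required sample size into test duration. It uses the same two-proportion z-test approximation as the sketch above; the 3% baseline conversion rate and the 2,000 visitors a day are illustrative assumptions.

```python
# Rough test duration vs. boldness of the change, using the standard
# two-proportion z-test approximation. Baseline conversion (3%) and
# traffic (2,000 visitors/day) are illustrative assumptions.
from scipy.stats import norm

def days_to_decision(daily_visitors, baseline, relative_uplift,
                     alpha=0.05, power=0.8, variations=2):
    p1, p2 = baseline, baseline * (1 + relative_uplift)
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    per_arm = z ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p2 - p1) ** 2
    return per_arm * variations / daily_visitors

for uplift in (0.02, 0.05, 0.20):
    days = days_to_decision(2000, 0.03, uplift)
    print(f"{uplift:.0%} relative uplift: roughly {days:,.0f} days")
```

On those assumptions a 20% swing resolves in about two weeks, a 5% change in around seven months, and a 2% tweak in over three years – instant feedback for the bold test, a very long wait for the mild one.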

John Barnes, Lead Optimiser at AWA


It often depends on context – the weight of evidence, where the hypothesis has come from, etc. One situation where bold testing is a definite advantage, though, is when you are testing a site with low traffic. Larger changes have a bigger impact (positive or negative), which in turn reduces test duration. Of course, this also applies to sites where traffic isn’t an issue, but bold testing allows even smaller sites to look for big wins.
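
John’s point can be made concrete by inverting the earlier sample-size calculation: given a fixed amount of traffic and a fixed test window, what is the smallest uplift a test can reliably detect? This sketch uses the same z-test approximation as above; the 300 visitors a day, four-week window and 3% baseline are illustrative assumptions.

```python
# Smallest relative uplift a test can reliably detect given fixed traffic,
# using the same two-proportion z-test approximation as above. Traffic,
# window and baseline figures are illustrative assumptions.
from math import sqrt
from scipy.stats import norm

def min_detectable_uplift(visitors_per_arm, baseline, alpha=0.05, power=0.8):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    # Approximate both arms' variance by the baseline's (fine for modest lifts).
    absolute_diff = z * sqrt(2 * baseline * (1 - baseline) / visitors_per_arm)
    return absolute_diff / baseline

# A four-week test on a small site: 300 visitors/day split across two arms.
per_arm = 300 / 2 * 28
print(f"Smallest detectable change: ~{min_detectable_uplift(per_arm, 0.03):.0%}")
```

On those assumptions, a small site can only detect a swing of roughly a third of its conversion rate within four weeks – meek tweaks simply aren’t measurable there, so bold tests are the only ones worth running.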

Bryan Eisenberg, author of Always Be Testing

Testing virtually random variations of elements in the hope that something gives you a little lift is common practice. It may achieve some gains, but you’ll burn out waiting for the results. This is why so many marketing optimization efforts fizzle out over time.

Instead of trying to test endless variations of minutiae, I recommend that businesses look for the big ideas that impact the customer experience and buying process, and test the variables that seem to really matter to visitors. You can always come back to the smaller variations later.

Peep Laja, founder of ConversionXL


A lot of precious time and traffic are wasted on stupid tests. Businesses should always test high-impact, data-driven hypotheses, not small tweaks like button colour. You don’t have enough traffic – nobody does – so use your traffic on high-impact stuff. Don’t waste time testing no-brainers; just implement them.

The team at Unbounce, landing page builder and split-testing tool


Psychological tactics like changing the button colour do influence behavior and improve conversions, although such wins usually have minimal impact on revenue goals. Avoid this by understanding your customers’ needs and concerns and trying to address them on your website. Such bold tests will usually give you a better revenue lift than tests of psychological tactics.

The team at Visual Website Optimizer, A/B testing software

There are four scenarios when testing a radical change makes more sense than testing small tweaks:

1. When you have a low-traffic site
2. When your current design has reached its maximum conversion potential (a local maximum)
3. When your current design has huge scope for improvement
4. When you want big wins… and you’re not ready to settle for any less

Conclusion

There is no right or wrong answer to this question; the answer lies in context. At AWA, we run whatever type of test is most appropriate for the situation. During the Insight Generation phase, we run smaller, insight-gathering tests – small tweaks to key pages of the conversion funnel – to measure impact and importance. Once we have identified where the opportunities lie, the type of test we run depends on the insight we’ve found. In some cases that means redesigning an entire page; in others, optimising key elements of the page or introducing new elements to it.

To find out more about the types of tests we run, and the types of results they achieve, read our case studies. If you’re looking to start split testing on your site but don’t know where to begin, contact us today to discuss how we can work together.

What’s your experience of this?

Have you run any meek tweaks that have brought you big rewards? What’s been your most successful bold test? How would you answer this question? Add your thoughts to this discussion using the comment form below – we’d love to hear from you.

Read our ebook below to find out how a focus on three power metrics can double your CRO success.

Is your CRO programme delivering the impact you hoped for?

Benchmark your CRO now for an immediate, free report packed with actionable insights you and your team can implement today to increase conversion.

Takes only two minutes

If your CRO programme is not delivering the highest ROI of all of your marketing spend, then we should talk.