Testing Hypotheses: Why Guessing is Good Business

In high school, we learned that a hypothesis is “an educated guess.” Then, in 2011, The Lean Startup by Eric Ries hit the startup world like a hurricane, upending old ideas and clearing space for a new kind of learning: using hypotheses to test ideas and to validate (or invalidate) explanations for customer behavior.

Lean methodology has since spread to just about everything a business does – including marketing and website optimization. After all, we’re living in an era in which almost everything can be tracked, often in real time. We can test our ideas before investing in them. We can get to know our target audience, and what they respond to, better and faster than ever before.

That means an e-commerce business doesn’t have to take on huge risk when optimizing its website. With a hypothesis, you can try out ideas and adjust intelligently.

Creating a Valuable Hypothesis

Creating a valuable hypothesis is one of the most challenging parts of any optimization program. The hypothesis is the result of a long process of gathering data, analyzing and compiling the information, and identifying flaws, problems and inefficiencies. And its purpose is to teach you something, though not always the lesson you think you’ll learn.

To create a hypothesis, we must take into account the data gleaned from the first round of research, in addition to the characteristics of the target audience, and identify desirable outcomes – along with how to achieve them. You also need to identify key performance indicators (KPIs), so you can tell whether your hypothesis is proving true or false.

What is a Hypothesis, exactly?

The scientific definition of a hypothesis is: a proposed explanation for an observed phenomenon or event. In order for the proposed explanation to qualify as a hypothesis, it must be verifiable and offer a way of predicting the phenomenon. And it has to stand up to every attempt to disprove it.

In CRO, we use a hypothesis to define an idea for improving a website. Then, we test the improvement against the existing website. If the hypothetical improvement achieves the targeted KPI, then it becomes a permanent change (at least until the next improvement).

But there are so many other ways this can go.

  • If the expected KPI is achieved, you’ve proven your hypothesis is sound. Move forward.
  • If the expected KPI isn’t achieved (but comes close), you’re still improving, but you’ve got some more thinking to do. Maybe that KPI wasn’t realistic. Maybe you missed a step. Don’t just pat yourself on the back for smallish wins and move on.
  • If the expected KPI does a somersault and lands on its head (i.e. you got it very, very wrong), it’s time to start again from scratch.

So where do you start in creating your hypothesis (and the KPIs to go with it)?

You start with a whole lot of research

You’ll need:

  • A comprehensive analysis of the current site – covering all aspects of the site and the entire on-site sales funnel – so you can pinpoint the main issues that may be inhibiting conversion.
  • An order of testing: usually the first area we test is the technical part of the site, followed by user experience (UX) and, finally, the content.

If your problem is technical, then fixing the broken bit is usually sufficient to get your conversion rates up to where they should be. For example, if a technical analysis of the site reveals a broken or misdirected link to another part of the site, just correct the problem. No further testing required.
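If you want to catch that kind of technical breakage systematically, a simple crawl of your internal links will do it. Here’s a minimal sketch, assuming Python with the requests library – the URLs below are placeholders, not real pages:

```python
# Minimal internal-link check: flag anything that doesn't come back with HTTP 200.
# Requires the third-party "requests" package; the URLs are placeholders.
import requests

urls_to_check = [
    "https://www.example-shop.com/",
    "https://www.example-shop.com/category/shoes",
    "https://www.example-shop.com/checkout",
]

for url in urls_to_check:
    try:
        response = requests.get(url, allow_redirects=True, timeout=10)
        if response.status_code != 200:
            print(f"Problem: {url} returned {response.status_code}")
    except requests.RequestException as error:
        print(f"Problem: {url} could not be reached ({error})")
```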

However, the UX analysis poses more complex challenges for the analyst/optimizer. Some of the choices made here are purely aesthetic and can’t be judged accurately on taste or intuition alone. For example, would a red banner or an orange banner yield more clicks? We can’t know until we test.

How do you know what to test? In the analysis process, you’ll find parts of the website that are underperforming for non-technical reasons (they work, but they’re not generating the desired action). You’ll need to form a hypothesis to explain the underperformance – and how to fix it.

Time to brainstorm!

Testing Hypotheses

See how the analytical data filters down to inform the ideas? Those ideas become hypotheses to test until the website is measurably improved.

Ideas on paper

Once the research phase of the optimization program is completed, the CRO team should have a number of clear ideas for what might be wrong with the site. Those ideas might range from simple, like changing the location of the CTA, to complex, like changing the way checkout works on the e-commerce site.

But this process doesn’t only work for fixing faults in a website. We use the exact same process when we look at a website that performs well now, but could perform even better with some changes.

Once we’ve identified a few ideas, we can begin testing them, one at a time, with an A/B test.

An A/B test is very simple – it compares the unchanged version of the web page (called the control) with the altered page (called the variation). For a website landing page, for example, you could compare that page as-is with a version featuring a bright orange CTA button. But you wouldn’t re-do everything on the second version – you would only change one thing at a time: the color of the button. Or the shape of the button. Or the placement of the button. But not all of them at once.

A/B Test – Hypothesis Testing
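To decide whether the variation really beats the control – rather than just getting lucky – the raw counts from the test are usually run through a significance check. Here’s a minimal sketch of a two-proportion z-test in plain Python; the visitor and conversion counts are made up for illustration:

```python
# Two-proportion z-test: is the variation's conversion rate genuinely higher
# than the control's? All counts below are invented for illustration.
from math import erfc, sqrt

def ab_test_p_value(conv_a, visitors_a, conv_b, visitors_b):
    """One-sided p-value for 'variation (B) converts better than control (A)'."""
    rate_a = conv_a / visitors_a
    rate_b = conv_b / visitors_b
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / std_err
    return 0.5 * erfc(z / sqrt(2))  # P(Z >= z) for a standard normal

# Control: 9,000 visitors, 270 conversions (3.0%).
# Variation: 9,000 visitors, 324 conversions (3.6% - a 20% relative lift).
p_value = ab_test_p_value(270, 9000, 324, 9000)
print(f"p-value: {p_value:.4f}")  # roughly 0.012 - significant at the usual 0.05 level
```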

The hypothesis might read: “We think the current blue CTA button is getting lost on the page – people can’t see it easily – which decreases conversion rates. By changing the CTA button to orange, we expect our conversion rates to increase by 20% within two months of implementation.”

The hypothesis takes our assumptions – that conversions are dropping because of the CTA button color, and that by changing the color, the conversion rates will increase – and determines our KPIs for success within a specific time frame.

That last part is important, because these tests are meant to be fast and efficient, and you can’t be fast and efficient without a deadline.

If these instructions are starting to sound familiar, it’s because they’re based on an acronym we’ve all heard: SMART.

  • Specific
  • Measurable
  • Achievable
  • Realistic
  • Time-based

When you have a properly framed hypothesis, like the one above, you’ll gain valuable insights whether or not your hypothesis succeeds. The results may open the way to a more comprehensive redesign, or point to some other solution to be tested. If it’s successful, the change will improve the company’s bottom line.

Doesn’t sound too hard, does it?

Yet, there are pitfalls even experienced optimizers and analysts fall into. Here’s what not to do.

Pitfalls awaiting the optimizer/analyst

Trying your favorite idea first.

You have a short list of ideas to test, but there’s one you really like. While many ideas sound awesome at first and people tend to get excited about them, it is well worth your time to sort the ideas according to technical requirements, time constraints, and the scope and impact of the proposed change. Prioritizing your ideas around these pragmatic factors will save you time and effort in testing.

This isn’t to say you shouldn’t test your hypotheses in order – you should. Hypotheses are usually weighted according to how much impact you expect your proposed change to have, not just on your website, but on your business.
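One simple way to enforce that kind of pragmatic ordering is to score every idea on the same few criteria before anyone gets attached to a favorite. The scoring below is a sketch of our own, not a standard framework – the ideas, the 1–5 scores, and the impact-per-effort ratio are purely illustrative:

```python
# Rank test ideas by a crude impact-per-effort ratio.
# The ideas and the 1-5 scores are invented for illustration.
ideas = [
    {"name": "Change CTA button color",   "impact": 2, "effort": 1},
    {"name": "Add PayPal at checkout",    "impact": 5, "effort": 4},
    {"name": "Rewrite product headlines", "impact": 3, "effort": 2},
]

for idea in ideas:
    idea["score"] = idea["impact"] / idea["effort"]

for idea in sorted(ideas, key=lambda i: i["score"], reverse=True):
    print(f'{idea["name"]}: {idea["score"]:.2f}')
```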

Confusing the “idea” with your hypothesis.

They’re related, but they’re not the same. An idea is only the first step toward a hypothesis. To turn an idea into a hypothesis, we need to make sure the hypothesis statement includes these features (a minimal template sketch follows the list):

  • The element that will be changed
  • Proposed variation
  • Target audience or scope of the change
  • KPIs with which to measure the performance
  • Desired objective to be achieved
  • Logic of the change: i.e., if I change (element) into (variation), then the (target audience) will react, increasing my KPI, which will result in achieving the (objective)
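One way to keep those ingredients from getting lost during a brainstorm is to write every hypothesis into the same template. Here’s a minimal sketch in Python – the field names and the example values are our own, not an industry standard:

```python
# A tiny template that forces a hypothesis to name all of its parts.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    element: str      # the element that will be changed
    variation: str    # the proposed variation
    audience: str     # target audience / scope of the change
    kpi: str          # how performance will be measured
    objective: str    # the desired outcome

    def statement(self) -> str:
        return (
            f"If we change {self.element} to {self.variation}, "
            f"then {self.audience} will react, improving {self.kpi}, "
            f"which will help us {self.objective}."
        )

# Example values are invented for illustration.
hypothesis = Hypothesis(
    element="the blue CTA button",
    variation="an orange CTA button",
    audience="all visitors to the landing page",
    kpi="the landing-page conversion rate",
    objective="increase conversions by 20% within two months",
)
print(hypothesis.statement())
```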

Example time!

Here’s how our hypothesis creation process typically works (as outlined by our very own CROs).

  • In analyzing the website, we noticed that many people give up when presented with the request for payment information.
  • After reviewing the relevant screen, we realized that the only option presented to visitors is to give their credit card information and billing address.
  • The dropout rate at this step in the conversion funnel is around 78%.
  • Average revenue per user of the website is US$85, and 750 users have put the product into the cart and proceeded to payment, with only 165 reaching the thank-you page.
Hypothesis Creation Process
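A quick sanity check on the baseline numbers above (Python used only as a calculator here):

```python
# Baseline funnel numbers taken from the example above.
visitors_at_payment = 750   # users who put the product in the cart and reached payment
completed_purchases = 165   # users who reached the thank-you page
revenue_per_user = 85       # average revenue per user, in US$

conversion_rate = completed_purchases / visitors_at_payment
dropout_rate = 1 - conversion_rate
current_revenue = completed_purchases * revenue_per_user

print(f"Checkout conversion rate: {conversion_rate:.0%}")   # 22%
print(f"Dropout rate: {dropout_rate:.0%}")                  # 78%
print(f"Revenue from this cohort: US${current_revenue:,}")  # US$14,025
```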

From this research, we can conclude that there is a trust issue with the payment method and that customers are reluctant to leave their payment information. We can formulate the following hypotheses:

1. If we add a new payment method that users trust (such as PayPal), then our conversions will increase by up to 200% (for example), bringing in roughly 330 additional customers. This would result in increasing the revenue by approximately US$30,000.

Change involved: Adding a trustworthy payment method
Scope of change: All users
KPI: Rate of conversions in the funnel
Desired objective: Increase in revenue

2. If we add more security indicators, such as Norton or McAfee badges, HTTPS, or Google Trusted Store, more customers will proceed to payment and our conversions could increase by 200%, thereby increasing the revenue by approximately US$30,000.

Change involved: Adding a trust and security indicator
Scope of change: All users
KPI: Rate of conversions in the funnel
Desired objective: Increase in revenue

Keep in mind that the values are assumed to be high to better illustrate the example. A real-life hypothesis would probably assume a lower lift.
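For reference, here is one way the roughly US$30,000 figure can be reached – a quick check assuming the 200% lift is applied to the 165 current customers:

```python
# Projected numbers for the (deliberately optimistic) 200% lift in the example.
current_customers = 165
revenue_per_user = 85   # US$
lift = 2.0              # a 200% increase

additional_customers = int(current_customers * lift)
additional_revenue = additional_customers * revenue_per_user

print(f"Additional customers: {additional_customers}")   # 330
print(f"Additional revenue: US${additional_revenue:,}")  # US$28,050 - roughly US$30,000
```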

From here, we set up the test that will compare the present variant of the site (the control) to the improved variant with PayPal added. We will also take into account that adding PayPal involves some extra footwork to set up.

The two hypotheses can be tested simultaneously in a multivariate test or sequentially in classic A/B tests. If you would like to learn more about testing and testing methods, check our post on A/B testing. If the test results show that one or both of the alternative variations bring in more conversions, then we have validated our hypotheses. If not, we go back to the drawing board, having learned that trust issues are not the only thing inhibiting conversions on our site.

What happens if the hypothesis fails

The important thing to keep in mind is that having a hypothesis fail a test is not the end of the world (or your testing program). Each time this happens, you learn something from it. A series of failed hypotheses, however, may point to one of two things – either you have hit a local maximum, the plateau that has no way up without radical changes, or your hypotheses are weak. Either case calls for a radical reconsideration of your methods.

Hey, it happens.
