Inside Growth at Wistia: The Process Behind Our A/B Tests
March 28, 2018
Andrew Capland
Topic tags: Marketing
Over the past three years here at Wistia, we’ve run over 150 A/B tests to improve our conversion rates and funnel metrics. We’ve run tests at all phases of the funnel and across most of the channels we use to communicate with our customers. We’ve tested website pages, content, product design, email flows, our sales experience, and even the way our plans are set up.
I’ll admit it — at first we had no idea what we were doing. We didn’t know anything about setting data-driven hypotheses, much less analyzing the results. Naturally, we made tons of mistakes. But by running all these tests, we actually avoided making some pretty bad decisions, increased our conversion rates, and also learned a ton along the way.
As time went on, we started to add more process to the mix. That process ultimately drove more learnings, and slowly, those learnings turned into results. As a video company, many of the tests we run involve video in some way. But, the process is exactly the same if you’re testing video, copy, images, complete page layouts, navigation updates, user onboarding flows, pricing models — you name it. Regardless of what you choose to A/B test, when it comes down to it, you need a thorough process, and that’s what I’m here to talk about in this post!
Most marketers aren’t running experiments
As a startup marketer and data nerd, I used to dream about working at a company that had a big focus on testing and optimization. I read articles from “famous” marketers, hung out on Growthhackers, watched tons of videos from growth pioneers, and even took online classes from Reforge. But I never actually ran many tests, and I started to notice that a lot of the marketers I was following only shared insights into why they ran A/B tests, but not how to actually do it.
How do you balance your priorities, choose between ideas, implement a winner, or even know when to cut your losses and move on? I was convinced it had to be me — every other marketing team on the planet had this stuff figured out. Right?
Turns out, I was wrong.
After being a bit vulnerable with like-minded, data-driven folks in the space, I learned that tons of marketers out there just don’t have the systems and tools in place to establish a culture of experimentation and A/B testing — in other words, I wasn’t alone.
As we started to learn more and refine our testing, we began sharing our results in blog posts and at conferences. It became clear pretty fast that this was something marketers really wanted to learn more about, so we thought we’d share what goes on here behind the scenes of our own growth experiments.
Three must-have documents
So you’re ready to start A/B testing and hopefully learn a bit more about what makes your customers tick. These are the three documents I recommend getting started with to set the stage for all of your current and future experiments.
1. The Master Ideas log
We use this document to keep track of every experiment or testing opportunity we’ve ever had. Each idea comes with a one-sentence description, its ICE score (we’ll cover that later), who suggested it, why they think it will be successful, and any blockers. When we’re set to run an experiment, we also include a link to the individual experiment doc.
This document should have three important tabs:
- Completed work: a record of what you’ve done and the results
- In progress: an indication of what tests are currently running
- Backlog of ideas: a living, breathing tab that folks can contribute to
We consider this spreadsheet the home base for all of our A/B testing work. It’s a great log of everything we’re working on and can easily be shared with everyone from our executive team to new hires. Need a high-level look at what growth work is being done and how successful the tests are? Check the Master Ideas log.
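If it helps to picture the structure, here’s a minimal sketch of what one row of that log might look like if you modeled it in code rather than a spreadsheet. The field names, status values, and example entry are all illustrative assumptions; they simply mirror the columns and tabs described above.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class Status(Enum):
    BACKLOG = "Backlog of ideas"
    IN_PROGRESS = "In progress"
    COMPLETED = "Completed work"


@dataclass
class IdeaLogEntry:
    """One row of the Master Ideas log (field names are illustrative)."""
    description: str                   # one-sentence description of the idea
    suggested_by: str                  # who suggested it
    rationale: str                     # why they think it will be successful
    ice_score: Optional[float] = None  # filled in after the ranking meeting
    blockers: List[str] = field(default_factory=list)
    status: Status = Status.BACKLOG
    experiment_doc_url: Optional[str] = None  # link to the experiment doc, once scheduled


# An example backlog entry (hypothetical)
idea = IdeaLogEntry(
    description="Swap the product page hero image for a short demo video",
    suggested_by="Andrew",
    rationale="Survey responses suggest visitors don't understand what the product does",
)
print(idea.status.value)  # Backlog of ideas
```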
2. The A/B Test roadmap
This document lays out our overall A/B testing roadmap. It covers the next tests we’re going to run, how long we’ll run them for, and the time set aside for design, copy, and build. We typically review it once a month and use it to drive our weekly sprints.
As you start to run more tests, one of the questions that almost always comes up is: “How can I optimize the number of tests that are running at any given time without overlap?” By setting up a roadmap, you can easily get a bird’s eye view of those potential conflicts so you can effectively communicate with your design and development teams right from the start.
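To make that overlap check concrete, here’s a rough sketch of how you might flag scheduling conflicts on a roadmap. It assumes each planned test is tagged with the surface it touches (a page, an email flow) plus a start and end date; the data shape and the example entries are hypothetical, not a description of how our actual roadmap is built.

```python
from datetime import date
from itertools import combinations

# Hypothetical roadmap entries: (test ID, surface it touches, start, end)
roadmap = [
    ("WC001", "pricing page", date(2018, 4, 2), date(2018, 4, 16)),
    ("WC002", "pricing page", date(2018, 4, 9), date(2018, 4, 23)),
    ("EMAIL003", "trial nurture emails", date(2018, 4, 2), date(2018, 4, 30)),
]

def conflicts(entries):
    """Return pairs of test IDs that run on the same surface at the same time."""
    clashes = []
    for (id_a, surf_a, start_a, end_a), (id_b, surf_b, start_b, end_b) in combinations(entries, 2):
        same_surface = surf_a == surf_b
        overlapping = start_a <= end_b and start_b <= end_a
        if same_surface and overlapping:
            clashes.append((id_a, id_b))
    return clashes

print(conflicts(roadmap))  # [('WC001', 'WC002')]
```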
3. An Experiment Test document
This is where we log all of the granular, detailed pieces of information about a specific test. The experiment doc should include your hypothesis, what you hope to achieve, the design of the test, and what you’ll learn, whether it’s a win or a loss.
As a general best practice, you should always document your tests. With so many moving pieces, and sometimes several tests running at the same time, having a record of everything you’re doing and, eventually, what you learned from it will help fuel future ideas for other areas of the business. Plus, being explicit with your documentation makes it easy for anyone else at the company to see what you’re up to without constantly asking for updates on how a test is progressing.
Putting the process to work
Identify areas in the funnel that aren’t working well
In general, we try to identify areas that we know we have a shot at improving. For example, if there’s an important page on our site, like a product page, that isn’t converting well, we’ll run a survey asking something like “What’s stopping you from creating a free account right now?” Or similarly, if it’s a page that’s driving sales conversions, we might ask “Why haven’t you signed up to chat with a member of our sales team yet?”
Surveys really help us gain insight into the temperature of folks on the page, which is valuable information we can use to drive new experiments. In other instances, however, we’ll use tools like FullStory or CrazyEgg to watch our users interact with our content. These tools let you see firsthand where your customers get stuck or distracted in your product. That text link you thought was super obvious? Think again.
At the end of the day, real users are our best source of insights. We use these insights to identify key problems. We then take those problems and come up with ways to solve them, which leads to further experimentation and even more learning.
Brainstorm ideas to solve the problems
When it comes to creative ideas for solving problems, your tendency will likely be to jump right to the perfect solution first. And if the problem is super obvious, that might actually work just fine.
But most of the time, you’ll probably start with the wrong solution. Or at least, that’s what we did here at Wistia before we developed a system to guide us. We’d come up with an obvious, quick fix to a problem only to be totally let down by the results. So after lots of failures and time spent guessing, we found that the following process helped us pick the ideas most likely to have the biggest impact.
First, we get in a room and brainstorm a bunch of different solutions to the problem at hand (this is the really fun part). We typically invite folks from departments across the business (like customer happiness, customer success, sales, marketing, and creative) to bring in a wider range of ideas.
Rank your ideas to find top contenders
We rank our ideas using the ICE framework, which was pioneered by Sean Ellis from Growthhackers. It isn’t rocket science, but we’ve found that it’s super helpful to have some way to determine which idea is actually worth your time. Here’s a quick overview of the process.
- The “I” stands for Impact. Could the idea have a big impact on your conversion rates? If it’s a little change, it will probably have a little impact. Bigger changes tend to produce bigger results.
- The “C” stands for Confidence. Do we think the idea has a high chance of succeeding, or is it just a random idea that somebody tossed out? This one is usually the hardest to be honest about, and it’s where the user research and information we collected in the previous stages become super helpful.
- The “E” stands for Effort. Is this an idea that we can execute in a couple of hours, or is it something that will take weeks or months of design and engineering time?
Together, we rank each of the ICE components on a 1–5 scale, then take the average to determine which ideas are the strongest.
The result? A prioritized list of ideas that will help us improve conversion rates. Typically, we do this on a whiteboard first to keep things fluid, and then transcribe it into the backlog tab of our Master Ideas log. That way, we always have a running list of good ideas that have already been ranked, which makes it easier to determine which projects we want to work on sooner rather than later.
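Here’s a minimal sketch of that ranking step in code. It assumes each component has already been scored 1–5 in the meeting, and that Effort is scored so that a 5 means “very little effort” (so a higher average is always better); the idea names and scores are made up.

```python
# Scores from the ranking meeting: 1-5 for Impact, Confidence, and Effort.
# Effort is scored so that 5 means very little effort (an assumption made
# here so that a higher average is always better).
ideas = {
    "Add a demo video above the fold": {"impact": 4, "confidence": 3, "effort": 2},
    "Shorten the signup form":         {"impact": 3, "confidence": 4, "effort": 5},
    "Rebuild the pricing page":        {"impact": 5, "confidence": 2, "effort": 1},
}

def ice_score(scores: dict) -> float:
    """Average of the three ICE components."""
    return (scores["impact"] + scores["confidence"] + scores["effort"]) / 3

# Print the strongest ideas first
for name, scores in sorted(ideas.items(), key=lambda item: ice_score(item[1]), reverse=True):
    print(f"{ice_score(scores):.1f}  {name}")
```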
Start your test document
Creating a test document is a critical part of this process. We write down our hypothesis and the reasoning behind it, what success looks like, and all the supporting information we have about the test. We also leave space for how long the test will need to run, the results of the experiment, and what we learned once the test is complete.
Another key component you should consider — and get ready to put your dorky hat on with this one — is naming each experiment with an alphanumeric string. For us, this usually looks something like WC001, WC002, etc. (the WC in this example refers to wistia.com), but we also use EMAIL for email tests and APP for product tests. When you’re scaling up your testing, these numbers will become more helpful than you know.
We kept having conversations that sounded a little something like this: “You know the test I’m talking about, right? The one with the funny video. Or wait, are you talking about the one from December with Lenny?” Referencing a test can be confusing without the proper descriptors in place, so do it right the first time and make a naming convention.
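If you’d rather not eyeball the last ID in the spreadsheet every time, a tiny helper like the one below can hand out sequential names. The prefixes match the examples above, but the zero-padding width and the in-memory counter are assumptions; in practice the latest number would live in the Master Ideas log.

```python
from collections import defaultdict

# Prefixes from the examples above: WC for wistia.com, EMAIL for email
# tests, APP for product tests. The in-memory counter is a stand-in for
# wherever the latest number actually lives (e.g., the Master Ideas log).
_counters = defaultdict(int)

def next_experiment_id(prefix: str) -> str:
    """Return the next sequential ID for a prefix, e.g. WC001, WC002."""
    _counters[prefix] += 1
    return f"{prefix}{_counters[prefix]:03d}"

print(next_experiment_id("WC"))     # WC001
print(next_experiment_id("WC"))     # WC002
print(next_experiment_id("EMAIL"))  # EMAIL001
```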
Run the test, document your results, and discuss
I won’t get into the nuance of actually setting up the A/B test (check out this post I wrote on doing A/B tests specifically with video if you’re interested), but I will focus on what is arguably the most important part of our growth process: documenting your results. I’ve said it before and I’ll say it again: taking the time to document your results and share them with your teammates is super important. Win or lose, these results can help impact a number of areas of the business in ways you might not even realize. You’ll make better decisions over time, which will lead to success for the business, and ultimately, that’s what testing is all about.
Get ready to grow
There you have it! While it might not seem glamorous, putting process behind your testing efforts can lead to serious business impact. If you’re just getting started with testing and experimentation, the best thing you can do is get your ducks in a row right from the start — set up your documents, establish a standard naming convention, and create a process you feel confident in. This will make the entire process run more smoothly, so you can spend less time tracking down information, and more time coming up with awesome ideas you can actually work with. Start brainstorming with your team today and get rollin' on those tests.
Have you found a process that works well for your team when it comes to A/B testing? Any resources or information you’ve found particularly helpful? Share them in the comments below!