Online Lead Generation 3 of 3: Introduction to Experimental Marketing

Experimental Marketing 101

Marketing and experimentation go together like peas and carrots. Experimental marketing (think A/B testing, multivariate testing, etc.) has been around for well over a decade, but it's still a baffling concept to an alarming number of marketers. Don't worry. We are not here to cast judgment. It's our mission to break down this seemingly complex marketing technique into actionable steps you can use to take your marketing to the next level. Please, read on...

Embrace the Scientific Method
Do you remember your 4th grade science class when you studied the Scientific Method? Yeah... we didn't think so. But as it turns out, it's an extremely practical and solid methodology for conducting marketing experiments. Basically, the Scientific Method breaks down into five steps, each of which we'll explore through a marketing lens. Put on your goggles and lab coat, it's about to get nerdy.

Step 1: Start with a question
It's as simple as that. A question. What do you want to know? A few examples could be:

  • How can I improve conversion on my website?
  • How can I improve click-through rates in my email campaigns?
  • How can I learn which message is most meaningful to my target audience?

Once you've put a label on what it is you're trying to learn, you've reached a significant milestone. You have articulated a gap in your knowledge about how to engage the audience you're trying to connect with. This is the most important step of all, as everything else in the process is predicated on it. Go ahead... write down your question.

Step 2: Do background research
Next, you need to gather data to help you better understand the performance of your existing marketing efforts. This data will be directly informed by the question you asked in Step 1:

  • Question: How can I improve conversion on my website? You'll want to measure your current website's conversion rate (see the sketch after this list if you want to run the math yourself). Don't know? Talk to us. You'll also want to look at where that traffic is coming from (organic vs. paid search vs. referrals vs. campaigns), which pages visitors convert on, and what they do after they convert.
  • Question: How can I improve click-through rates on email campaigns? You'll want to dig into your historical email campaign analytics. Are there any discernible trends around which subject matter generated a measurable increase in click-through rate? What was it? How can you expand on that content?
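
If you want to run that baseline math yourself, here's a minimal sketch in Python. The channel names and counts are hypothetical placeholders - swap in the numbers from your own analytics export:

    # Baseline conversion rate by traffic source (all figures are made up).
    traffic = {
        "organic":   {"sessions": 12400, "conversions": 310},
        "paid":      {"sessions":  5800, "conversions": 203},
        "referral":  {"sessions":  2100, "conversions":  42},
        "campaigns": {"sessions":  3600, "conversions":  90},
    }

    for channel, counts in traffic.items():
        rate = counts["conversions"] / counts["sessions"]
        print(f"{channel:>9}: {rate:.2%} conversion rate")

Knowing which channels convert best (and worst) tells you where an experiment is most likely to move the needle.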

Step 3: Construct a Hypothesis
At this point, you'll want to begin pulling your research together into a cohesive statement. For example, have you found that most people convert on your company's "Pricing" page? If so, do you have explicit paths to your pricing page, or obvious calls-to-action on your homepage or campaign landing pages?

  • Example Hypothesis: Providing more obvious paths to pricing pages will improve my site's conversion rate.

This hypothesis is a high-level, testable statement designed to produce measurable results. Our friends at MarketingSherpa have a very useful article on using the following template to help you craft your hypotheses:

Changing __________
To ____________
Will Result in _________
— MarketingSherpa

Step 4: Conduct an Experiment
Now for the fun stuff - just don't blow your eyebrows off like I did with my first chemistry set when I was 11.

The design of your experiment is critically important to ensuring you gather meaningful data once you actually run it. Every experiment design needs to meet three criteria to be considered valid (a quick sketch of a design that meets them follows the list):

  • There must be a control
    • The control ensures you have an accurate representation of your baseline performance to compare against your experiment data. The control should be your standard treatment (i.e., the original design of your landing page or email layout).
  • Each treatment must test the same variable
    • I've seen marketers put together an experiment where the control was testing clicks on the "Subscribe" button while the treatment was measuring clicks on the "Request a Quote" button. It is essential to make sure you're testing the same thing in each treatment.
  • The control and every variation must measure the same success metric
    • In reference to testing the same variable, you need to be sure you're measuring the same metric in every treatment to get an accurate picture. If you're measuring clicks on a "Subscribe" button in the control but clicks on "Request a Quote" in the variation, you're comparing two different metrics - clicks on "Subscribe" vs. clicks on "Request a Quote". That gives you useless data and torches the validity of your experiment.
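
Here's one way you might represent that design in code - a minimal sketch, and every name in it (the treatments, the CTA label) is a hypothetical example. Note that the "same variable" criterion is still a judgment call you make when you design the treatments; the check below only enforces the other two:

    # Sketch of an A/B experiment design that satisfies the criteria above.
    # Treatment names and the success metric label are hypothetical examples.
    from dataclasses import dataclass

    @dataclass
    class Treatment:
        name: str            # what the visitor actually sees
        is_control: bool     # exactly one treatment should be the control
        success_metric: str  # the one thing we count as a "win"

    experiment = [
        Treatment("original homepage", is_control=True,
                  success_metric="clicks on 'View Pricing' CTA"),
        Treatment("homepage with pricing CTA in the hero", is_control=False,
                  success_metric="clicks on 'View Pricing' CTA"),
    ]

    def looks_valid(treatments):
        """One control, and every treatment shares the same success metric."""
        one_control = sum(t.is_control for t in treatments) == 1
        same_metric = len({t.success_metric for t in treatments}) == 1
        return one_control and same_metric

    print(looks_valid(experiment))  # True - this design meets the criteria

Writing the design down this explicitly, even on paper, forces you to name your control, your variation, and your single success metric before you spend a dime on traffic.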

Once you've got your experiment design in a solid place, you're ready to start testing. There are a number of platforms and strategies you could use to test. We don't recommend any single tool over another, but here is a great place to start your research, or you could contact us.

Step 5: Analyze Data and Draw Conclusions
In the final step of experimentation, you'll look at the results of your experiment and determine whether your hypothesis was verified or whether you need to go back to the drawing board and design another experiment.

You'll want to look at a few different things when you're analyzing the data, and you don't need to be a data scientist to make sure your data is good:

  • Selection Bias
    • Selection bias occurs when non-random samples of your audience are exposed to your test treatments. For instance, if you send people from Ohio to the control and people from Michigan to the variation, the sample isn't random - it's structured as "people from Ohio vs. people from Michigan." Demographic and psychographic differences between the two groups can influence behavior, which confounds the data and makes it useless. (The sketch after this list shows one simple way to randomize assignment.)
  • Instrumentation Bias
    • Instrumentation bias occurs when you rely on a single analytics tool or calculation to verify the results of your experiment. You'll want to make sure you check your data to ensure there isn't a calculation error in your primary tool.
  • History Effect
    • History effect occurs when an external event impacts the validity of your test data. For instance, let's say you're selling Widgets, and an impending governmental regulation will impact Widgets at the end of the year. You run a campaign in the fourth quarter and see a massive influx of sales. Was it your campaign, or was it the regulatory changes that drove sales? That's the history effect.
  • Statistical Significance
    • I hate statistics. YUCK! As it turns out, though, statistics is an extremely practical discipline for making sure the data your experiments produce is valid enough to base business decisions on. Most testing tools these days have statistics calculators built in, but remember, you don't want to rely solely on whatever tool you're using for these calculations. I find this tool by MECLABS offers a very easy-to-use, practical way to double-check your test data while documenting the important aspects of your experiment, such as your question, hypothesis, and success metrics. A quick do-it-yourself check follows this list.
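
If you want a second check outside your testing tool, here's a small sketch in Python. It shows a deterministic-but-effectively-random way to bucket visitors (which sidesteps the Ohio-vs.-Michigan problem) and a standard two-proportion significance test you can run against whatever numbers your tool reports. Every identifier and figure in it is hypothetical:

    # Sketch: random bucketing plus a do-it-yourself significance check.
    # All names and numbers below are hypothetical.
    import hashlib
    import math

    def assign(visitor_id, experiment_name):
        """Deterministic, effectively random 50/50 bucketing by visitor ID."""
        digest = hashlib.sha256(f"{experiment_name}:{visitor_id}".encode()).hexdigest()
        return "control" if int(digest, 16) % 2 == 0 else "variation"

    def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
        """Two-sided p-value for the difference between two conversion rates."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        return math.erfc(abs(z) / math.sqrt(2))

    # Hypothetical results: control converted 120 of 4,000 visitors,
    # the variation converted 162 of 4,000.
    print(f"p-value: {two_proportion_p_value(120, 4000, 162, 4000):.4f}")
    # A p-value below 0.05 is commonly treated as statistically significant.

Comparing your tool's reported significance against a simple check like this is exactly the kind of guard against instrumentation bias described above.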

In Conclusion
You are about to embark on one of the most rewarding, incredible, and awesome journeys that digital marketing has to offer - follow the white rabbit down the hole of experimental marketing... you won't be sorry.

Digital Radar, LLC is a digital marketing agency founded in Dayton, OH, that specializes in building quantifiable, results-oriented digital marketing strategies for businesses of all shapes and sizes. Want to know what we can do for you? Go ahead - reach out - you'll be glad you did, and so will your boss.