Topics: A/B testing, experimentation, marketing measurement

Always-on Testing – A New Approach to Marketing Measurement

Long live measurement

Many years back, I worked as an attribution consultant for emerging brands. It was the first time in my career that I was truly wearing a client’s jersey. Neither a vendor nor an agency, I was actually on the brand side, supporting marketing teams with advanced marketing measurement.

I could viscerally feel what clients had been saying for years: “I cannot trust attribution reporting for decision-making.” We are not talking about last-click here. Multi-touch attribution reporting still had major gaps. It could not report on incrementality, it could not do mobile media, it could not deal with walled gardens, it could not handle TV, and it was not great at direct mail. It wasn’t working.

For the clients I worked with, that was 90 percent of their budgets. Classic multi-touch attribution was not going to fly. It was time to go back to the drawing board for a completely different approach.

A/B testing for incrementality on media campaigns turned up as the top candidate to try. Marketers were already using A/B testing on landing pages, creatives, and a plethora of other functions. It was already happening in bits and pieces on the media side. In re-marketing, for example, some forward-thinking marketers would split the CRM file into two cohorts, using one as the holdout and the other for activation. Then they would check the CRM back end for transactions against the cohorts.
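The mechanics of that split are easy to sketch. Here is a minimal, hypothetical Python version (the field names and the 10 percent holdout size are my assumptions, not anyone’s actual setup): each CRM record is assigned to a holdout or activation cohort by hashing its customer ID, so the same customer lands in the same cohort every time the file is refreshed and the holdout stays clean.

```python
import hashlib

HOLDOUT_SHARE = 0.10  # assumed holdout size; the real split depends on the test design


def assign_cohort(customer_id: str, salt: str = "remarketing-test-1") -> str:
    """Deterministically map a customer ID to 'holdout' or 'activation'.

    Hashing (rather than random assignment at export time) keeps assignments
    stable when the CRM file is re-exported, so the holdout stays clean for
    the whole duration of the test.
    """
    digest = hashlib.sha256(f"{salt}:{customer_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return "holdout" if bucket < HOLDOUT_SHARE else "activation"


# Example: split a CRM export and send only the activation cohort to the ad platform.
crm_file = [
    {"customer_id": "C001", "email": "a@example.com"},
    {"customer_id": "C002", "email": "b@example.com"},
    {"customer_id": "C003", "email": "c@example.com"},
]
activation_list = [r for r in crm_file if assign_cohort(r["customer_id"]) == "activation"]
holdout_list = [r for r in crm_file if assign_cohort(r["customer_id"]) == "holdout"]
```

Only the activation cohort goes to the ad platform; the holdout sees no re-marketing at all, which is what makes the back-end comparison a read on incrementality rather than correlation.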

At the end of the test, results would read something like “Re-marketing is 6 percent incremental overall on e-commerce orders, but in January it was 12 percent incremental, and in March it was 14 percent incremental.” Independent of one another, those results are not statistically significant.
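Reading those numbers out of the back end is just as easy to sketch, and the sketch also shows why a single reading can be shaky. The Python below is illustrative only (the cohort sizes and conversion counts are made up): it computes lift as the relative difference in conversion rate between the activation and holdout cohorts, and runs a standard two-proportion z-test on that difference.

```python
import math


def incrementality(conv_test: int, n_test: int, conv_hold: int, n_hold: int):
    """Relative lift of the activated cohort over the holdout, plus a
    two-proportion z-test p-value for the difference in conversion rates."""
    p_test = conv_test / n_test
    p_hold = conv_hold / n_hold
    lift = (p_test - p_hold) / p_hold  # e.g. 0.06 -> "6 percent incremental"

    p_pool = (conv_test + conv_hold) / (n_test + n_hold)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_test + 1 / n_hold))
    z = (p_test - p_hold) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail probability
    return lift, p_value


# Hypothetical monthly read: a 12 percent lift measured against a 10,000-person holdout.
lift, p = incrementality(conv_test=560, n_test=50_000, conv_hold=100, n_hold=10_000)
print(f"lift = {lift:.1%}, p-value = {p:.3f}")
```

With these made-up numbers, even a 12 percent measured lift comes back with a p-value around 0.3: each reading on its own is too noisy, which is exactly the problem that pushes you toward accumulating evidence continuously.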

It turns out, incrementality — like conversion rates — is not just one number forever. It has meaningful variations from season to season. It became apparent that marketers needed a testing methodology that was “always on.”

From my experience over the years, I knew any new approach would require:

  • A multivariate framework like Design of Experiments (DoE). Simple A/B testing was not enough for most tactics;
  • Independent DoEs for each tactic, depending on how they are activated (e.g., Facebook Prospecting, Retargeting, Catalog Housefile, Catalog Rental);
  • Standardized design for each tactic so that it could accommodate channel-specific best practices while being configurable to meet the brand’s learning objectives for that tactic (see the sketch after this list);
  • Scalable technology to automate each experiment; and
  • Results from DoEs integrated with vendors’ performance reporting to make it actionable.
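To make the “standardized but configurable” idea concrete, here is one way a per-tactic design could be expressed in code. This is a hypothetical sketch, not Measured’s actual framework: every tactic shares the same template (factors, levels, a holdout share), and a full-factorial set of treatment cells is generated from whatever factors the brand wants to learn about.

```python
from dataclasses import dataclass
from itertools import product


@dataclass
class TacticDesign:
    """A standardized experiment design for one tactic, configurable per brand."""

    tactic: str                        # e.g. "Facebook Prospecting"
    factors: dict[str, list[str]]      # factor name -> levels to test
    holdout_share: float = 0.10        # audience share withheld from media

    def cells(self) -> list[dict[str, str]]:
        """Full-factorial list of treatment cells (every combination of factor levels)."""
        names = list(self.factors)
        return [dict(zip(names, combo)) for combo in product(*self.factors.values())]


# Hypothetical configurations: one standard template, two different tactics.
fb_prospecting = TacticDesign(
    tactic="Facebook Prospecting",
    factors={"audience": ["lookalike", "interest"], "spend_level": ["base", "+50%"]},
)
catalog_housefile = TacticDesign(
    tactic="Catalog Housefile",
    factors={"circulation_depth": ["top_decile", "top_3_deciles"]},
    holdout_share=0.20,
)

for design in (fb_prospecting, catalog_housefile):
    print(design.tactic, "->", design.cells())
```

Keeping the template identical across tactics is what makes the experiments practical to automate at scale, while the factors and holdout share remain configurable per tactic and per brand.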

We are still learning every day. There are miles to go before we sleep, but today when I see brands that work with Measured scale into Facebook for prospecting using results from the always-on Facebook Prospecting DoE, I feel something I haven’t felt in a while as a measurement professional: job satisfaction!

About the author

Madan Bharadwaj

Expert in advertising measurement, attribution and analytics
