Guest Post: The Growing Culture of Marketing Experimentation

Original Publisher
ClickZ

By Nick Stoltz, COO, Measured

Marketers are turning to new methods to analyze the effectiveness of their campaigns, and they are getting more accurate results more quickly with incrementality measurement.

The role of data, measurement and analytics in marketing has been growing in influence and impact for the past two decades. More recently, experimentation has re-emerged as an important marketing measurement tool.

This re-emergence is being driven in part by privacy concerns, which have made it increasingly difficult to collect cohesive data and apply attribution algorithms at the user level.

More broadly, it is a product of marketers’ desire to actively test hypotheses in-market, using experimentation to build best-of-breed marketing programs.

Building this discipline really requires only two things: (1) recognizing that any measurement or analytics program is incomplete without active experimentation and (2) understanding how to build and deploy testing against key hypotheses.

In practice, it takes commitment not to take for granted answers from Google Analytics, vendor reporting, lift studies, media mix modeling or multi-touch attribution.

Whether those methods agree or, more importantly, disagree, organizations should treat those results as a starting point for developing key questions and hypotheses for experimentation programs.

Most marketers start with the same questions

Marketers consistently wonder about the same things:

  • Are lower funnel tactics like branded search and retargeting producing incremental conversions?
  • Are there synergies between marketing vehicles?
  • Is marketing more effectively deployed against existing customers or new customer acquisition?
  • Which prospecting tactics are truly growing my new acquisitions?
  • Which customer marketing tactics add value, and do I need all of them?

These are foundational questions for any brand, and there are effective ways to build out an experimental learning agenda against them. When first building out an experimental program, start by implementing incrementality testing at the channel level for key marketing tactics.

Based on spend, strategic priority or current performance assessments, brands can prioritize designing and implementing incrementality tests for key marketing channels such as retargeting, social, paid search or direct mail/catalog.

Limitations of lower funnel attribution

Each marketing channel requires a carefully designed approach to ensure that audiences are segmented into test and control groups. Marketing must then be withheld from the control group to serve as a baseline for measuring incrementality within the test group.
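
As a rough sketch of how such a readout works (the audience sizes and conversion counts below are hypothetical, not figures from any test described in this article), the held-out control group’s conversion rate sets the baseline, and anything the test group delivers above that baseline counts as incremental:

```python
# Minimal sketch of reading out a channel-level incrementality test.
# All figures below are hypothetical, for illustration only.

test_audience = 500_000        # users eligible to be served the channel's ads
control_audience = 100_000     # users randomly held out (no ads served)

test_conversions = 6_400
control_conversions = 1_100

test_rate = test_conversions / test_audience            # conversion rate with media
control_rate = control_conversions / control_audience   # baseline rate without media

# Conversions the test group would have produced with no media at all
expected_baseline = control_rate * test_audience

incremental_conversions = test_conversions - expected_baseline
relative_lift = test_rate / control_rate - 1

print(f"Incremental conversions: {incremental_conversions:.0f}")
print(f"Relative lift: {relative_lift:.1%}")
```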

Often these channel-level tests confirm long-held suspicions, providing the in-market data needed to reallocate budgets.

Many marketers suspect that lower funnel retargeting programs are “stealing” credit by attaching themselves to converting pathways, and incrementality testing quantifies this effect and informs optimal budget levels for these programs.

Soft Surroundings, a fashion and beauty retailer, recently deployed retargeting incrementality testing with the hypothesis that they were overspending on retargeting.

Extensive testing revealed that was indeed the case, allowing the team to immediately reduce the retargeting budget by 30% and allocate over $120,000 per month to better performing tactics without seeing any meaningful drop-off in retargeting or site conversion.

Getting a clearer picture of Facebook prospecting

Sometimes incrementality testing uncovers opportunities that other measurement approaches have not yet identified.

I recently worked with a retailer that measured Facebook advertising with a combination of Google Analytics and Facebook-reported metrics restricted to a one-day click-through and a one-day view-through window.

Incrementality testing discovered that while these methods were reasonably accurate for valuing Facebook as a whole, they significantly underreported the impact of Facebook prospecting tactics.

Active incrementality testing showed that view-throughs not tracked by Google Analytics, along with those falling outside the one-day view-through window, drove incremental and previously uncredited conversions. Based on this updated view, the company reallocated funding to prospecting.

Experimental learning doesn’t end with these channel-level experiments. Often these set the foundation by identifying opportunities for more sophisticated experimental design.

For Soft Surroundings, incrementality testing identified that Facebook prospecting was driving incremental new customers at costs well below internal acquisition targets.

Rather than increase spend across the board and wait and see, Soft Surroundings used an experiment to explore scale at the ad set level for all Facebook prospecting ad sets performing below acquisition targets.

The scale test was designed to simulate elevated spend, frequency and audience penetration for each of these ad sets against a smaller subset of their prospecting audiences.

Over a few weeks, they scaled to 1.5x, 2x, 3x and more at the ad set level to effectively draw out the diminishing returns curve for each set.
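
One plausible way to turn those scale steps into a response curve is to fit a concave function to the observed spend and conversion pairs. The sketch below uses a simple power curve and made-up numbers; it is not Soft Surroundings’ data or Measured’s actual methodology:

```python
# Hypothetical sketch: fitting a diminishing-returns curve to ad-set scale-test results.
import numpy as np
from scipy.optimize import curve_fit

spend = np.array([1.0, 1.5, 2.0, 3.0])        # relative spend levels tested
conversions = np.array([100, 138, 168, 215])  # conversions observed at each level (made up)

def power_curve(x, a, b):
    """Concave response: conversions = a * spend**b, with 0 < b < 1 implying diminishing returns."""
    return a * np.power(x, b)

(a, b), _ = curve_fit(power_curve, spend, conversions, p0=[100.0, 0.8])

# Marginal conversions per extra unit of spend at 2x, from the fitted curve's derivative
marginal_at_2x = a * b * 2.0 ** (b - 1)

print(f"Fitted curve: conversions ~= {a:.1f} * spend^{b:.2f}")
print(f"Marginal conversions per unit of spend at 2x: {marginal_at_2x:.1f}")
```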

Based on scale testing conducted in September, Soft Surroundings was able to increase Facebook prospecting budgets by 61% in October while seeing only a 22% increase in cost per acquisition, well within targets.
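
A quick back-of-the-envelope check on those two percentages shows why the result stayed within targets: if spend grows 61% while cost per acquisition grows only 22%, acquisitions grow by roughly 32%.

```python
# Implied acquisition growth from the reported scale-test outcome.
spend_growth = 1.61   # +61% Facebook prospecting budget
cpa_growth = 1.22     # +22% cost per acquisition

# CPA = spend / acquisitions, so acquisitions = spend / CPA
acquisition_growth = spend_growth / cpa_growth

print(f"Implied acquisition growth: {acquisition_growth - 1:.0%}")   # roughly +32%
```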

Without active test design the same budget increases may have taken many months to achieve while the team deployed smaller budget increases and waited to observe their impact on acquisition cost.

AARP’s incrementality experimentation

In another example, AARP used incrementality findings at the channel level to actively test whether branded paid search budgets would be better spent on paid social.

This radical shift was viewed with skepticism outside the marketing organization, and the team wanted to verify that its channel-level observations held as they shifted to paid social.

A carefully designed geo-based experiment confirmed the hypothesis but also uncovered additional findings: there was more upside on paid social, but it was best captured with mild (rather than major) cuts to paid search due to synergies between the channels.

Geos with mild cuts to paid search and large increases in paid social showed double-digit topline acquisition growth while geos with major cuts to paid search saw a 19% efficiency decrease in paid social channels.
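
A simplified sketch of how a geo-based readout like this can be computed is shown below. The geo groupings and acquisition counts are entirely hypothetical, and real matched-market designs involve more careful market selection and variance estimation:

```python
# Hypothetical sketch of reading out a geo-based budget-shift experiment.
# Acquisition counts per geo group, before and during the test period (all made up).
geo_results = {
    "control (no budget change)":           {"before": 10_000, "during": 10_200},
    "mild search cut, large social boost":  {"before": 10_100, "during": 11_500},
    "major search cut, large social boost": {"before":  9_900, "during": 10_100},
}

# Use the control geos to estimate the underlying trend over the test period
control = geo_results["control (no budget change)"]
control_trend = control["during"] / control["before"]

for group, counts in geo_results.items():
    expected = counts["before"] * control_trend   # what these geos would likely have done untouched
    lift = counts["during"] / expected - 1        # growth above the control trend
    print(f"{group}: {lift:+.1%} incremental acquisition growth")
```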

The net takeaway was that there were some cost savings to be had in paid search, but the major upside opportunity was in securing incremental budgets for paid social.

There is no copy-paste formula for building a culture of experimentation, nor is there a single learning agenda that fits every brand. A culture of experimentation requires an organizational commitment to constantly test assumptions and validate long-held beliefs against carefully designed tests.

An impactful learning agenda requires input from executives, marketers and data scientists, and it must be revisited regularly to continue driving value, but the upside is real.

Applying a test, learn, grow philosophy to marketing is the fastest way to drive change that meaningfully affects the bottom line.

 

Measured CEO: Cross-Channel Attribution Via Incrementality Measurement

Original Publisher
ClickZ

D2C companies like Peloton are increasingly turning to incrementality measurement for attribution. Measured CEO Trevor Testwuide on why that’s the case.

“While multi-touch attribution (MTA) was supposed to end the struggle to prove campaign value, all it has proven is that it’s extremely expensive, time-consuming, and nearly impossible for fast-growing brands deploying significant walled-garden media.”

When Trevor Testwuide and his co-founder Madan Bharadwaj set out to launch Measured in 2017, their mission was to “inform the true causal influence of media tactics” for DTC retailers. Specifically, they wanted to address certain gaps in other forms of attribution:

  • Advertisers want to know how their walled-garden media is driving consumer behavior.
  • Advertisers want insights reliable enough to inform high-value decisions, not guesswork.

Since then, brands including Peloton, Rosetta Stone, FabFitFun, Soft Surroundings, and Drizly have used the platform to grow their businesses. Measured helps them by quantifying media’s incremental contribution through always-on experiments that inform media investment decisions.

We spoke with Trevor Testwuide, Measured CEO and co-founder, to learn more about their company and the progress they’re making in the world of marketing attribution and measurement. He and Madan — and many on their team — have been in the cross-channel measurement and attribution space for nearly a decade. Prior to Measured, he co-founded Conversion Logic.

As of August 2019, Measured remains self-funded and has not taken outside investment. Their clients primarily consist of retail, ecommerce, and DTC brands spending more than $5 million in media per year. They integrate with more than 200 partners in ecommerce, CRM, media, and adtech.

Where does multi-touch attribution (MTA) fall short, and what advantages does incremental measurement give?

“Multi-touch attribution (MTA) was and conceptually is a very powerful, granular, tactical optimization tool,” says Trevor. “The challenge is that the majority of the media mix that we see with fast-growing brands cannot be measured with MTA. Facebook, Instagram, Pinterest, YouTube, retargeting, catalogue — MTA can’t measure those things. I can’t map Facebook to Google Ads to Pinterest to a conversion.”

With all media analytics, the end goal is to understand the marginal contribution of each media tactic. What is the marginal lift per unit of investment?
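
As a generic illustration (not Measured’s specific formulation), marginal contribution is often summarized as incremental cost per acquisition or incremental return per dollar, computed from experiment-derived incremental conversions; all figures below are hypothetical:

```python
# Generic illustration of marginal contribution per unit of investment (hypothetical figures).
incremental_conversions = 450      # conversions the tactic drove, per an incrementality test
incremental_spend = 30_000.0       # dollars invested in the tactic during the test window
average_order_value = 120.0        # assumed revenue per conversion

incremental_cpa = incremental_spend / incremental_conversions
incremental_roas = (incremental_conversions * average_order_value) / incremental_spend

print(f"Incremental CPA:  ${incremental_cpa:,.2f}")    # cost per truly incremental conversion
print(f"Incremental ROAS: {incremental_roas:.2f}x")    # incremental revenue per dollar spent
```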

But over the last five years, MTA has become increasingly difficult for marketers, particularly at D2C brands. Consumer data is increasingly hidden inside walled gardens, where MTA is unable to measure and map across platforms. MTA works well only for open, exchange-based media, which these days represents a small percentage of the overall media mix for most DTC brands.

Incrementality measurement, on the other hand, uses controlled experimentation to determine the incremental contribution of each type of media or platform to a conversion.

With Measured, says Trevor, they’ve designed separate experiments for Facebook, for catalogs, for retargeting, etc. to inform the role each of those plays in a path to conversion.

“Think of them like best-in-class A/B tests to inform the lift of that media.”

ClickZ: Tell us about your background and how you came to co-found Measured?
Trevor Testwuide: Madan and I — and many on our team — have lived in this category of cross-channel measurement since the early innings of algorithmic multi-touch attribution.

I met Madan in 2011 when I joined Visual IQ. Madan was Head of Product, and he was our brain. I was on the customer side, and we ended up working very closely together there. That was the first phase of algorithmic multi-touch attribution — the technology was so nascent back then, and we really got to know our clients.

I then left Visual IQ at the beginning of 2014 and founded a company called Conversion Logic, which had a lot of success.

Then at the beginning of 2017, I left Conversion Logic to team back up with my good friend Madan who’d been independent consulting for a couple years with a lot of D2C brands like GrubHub.

In those couple years, MTA had started getting more difficult in terms of data collection with walled gardens and sensitivity around identity. Madan had realized that moving forward, the smartest path to informing cross-channel investment decisions was really smart experimentation.

He was convinced that was the direction cross-channel investment was going. He started building out some of those experiments, and at the beginning of 2017 we teamed up to take this new methodology to market.

We incorporated the business in March of 2017. Our first client was Johnny Was, a fast-growing women’s bohemian retailer, who signed a contract that same March.

Now we’re two and a half years later, and we have a portfolio of great D2C and consumer brands.

CZ: Briefly describe Measured – what’s your elevator pitch?
TT: Measured helps inform the incrementality of paid media for acquisition marketers to drive cross-channel investment decisions.

Its approach is rooted in innovative, always-on A/B experimentation, which is proven to be the most effective and accurate methodology for determining incremental contribution.

Measured is powered by a privacy-compliant and quality-controlled marketing data platform provided as a service.

CZ: In plain English, explain what Measured does (as you would to someone not immersed in the space).
TT: Measured looks at how advertising dollars are spent across various online and offline channels and analyzes which strategies have the biggest impact on sales, so marketers can make better decisions about where to focus their budgets.

There are a lot of outlets for advertisers to prospect for new customers (Facebook, Instagram, Pinterest, podcasts, catalogs, TV, and so on), and some will get better results than others. Measured identifies which outlets are most effective.

CZ: What is the biggest problem Measured solves for customers?
TT: Measured helps its customers spend their marketing dollars most effectively to drive performance for their business.

CZ: How many competitors are in your space?
TT: It’s hard to say exactly. We’ve seen a couple competitors starting to work specifically on incrementality, but there are several more doing more traditional attribution such as media mix modeling and multi-touch attribution.

CZ: Briefly explain the technology that underpins your solution.
TT: Measured uses sophisticated experimental design to inform media’s true incremental contribution to your business. We design experiments to run always-on A/B testing on various ad campaigns to determine which ones have the most impact on sales or other metrics important to our customers.

CZ: Why do customers choose you over your competitors? What do you do that they don’t?
TT: Most of our customers have tried the more “traditional” forms of attribution: media mix modeling or multi-touch attribution. MMM is a long-range strategic planning tool but not a good tactical — daily/weekly/monthly/quarterly — decisioning tool. It’s also quite expensive. MTA has become ineffective because of data restrictions and blind spots.

CZ: Before and after: What impact would your technology or solution have if a company were to implement it tomorrow?
TT: A new client could onboard and start using our product within 3-4 weeks. Within 6-8 weeks, they would have a clear sense of the true incremental contribution of their prospecting tactics and the ability to compare them in an apples-to-apples way to inform cross-channel investment decisions.

CZ: What are some examples of real-world brand success stories?
TT: Our focus is on D2C companies, ecommerce, and retail in a first-party capacity. We work with clients including Peloton, Drizly, FabFitFun, Johnny Was, and Soft Surroundings.

If you spoke with the CEO of Johnny Was, who we’ve worked with for 2.5 years, he would tell you that we helped him double his online business in 18 months by growing his paid media in a profitable way. We helped him line up and be able to compare incremental net profit per dollar investment of catalog vs. Facebook vs. Pinterest, for example.

So as he’s acquiring new customers, he’s doing it in a profitable way against that first purchase — so the first purchase is either break-even or better. With that methodology and framework set up, he can scale up his top-performing prospecting tactics.

Soft Surroundings would tell you that we helped them right-size their retargeting investment. It’s common that we see D2C brands over-invested in retargeting. After using Measured to analyze its retargeting strategy, Soft Surroundings saved $40,000 over three months with no impact to their bottom line. By cutting that retargeting spend, they brought their overall customer acquisition cost down by 20%.

CZ: What are you focusing on for the next year?
TT: Helping our clients grow most effectively through incrementality measurement and smarter media investment decisioning.

  • Winning with and for our customers!
  • Continuing to innovate and build on our best-in-class product.
  • Adding more great brands.

CZ: What challenges do you see in the industry and what are you doing to prepare?
TT: The media tracking that drives measurement capabilities continues to evolve, and we will adapt with best-in-class experimentation as it changes.

CZ: Who do you look to for example and insight?
TT: Salesforce is our role model for best-in-class, scalable, B2B SaaS products complemented by excellent enablement services.

CZ: In the world of attribution and measurement, what should marketers be focusing on and learning about in the next 3-6 or 12 months?
TT: Marketers need to be keenly focused on understanding incrementality. That’s what we’re all trying to get to in advanced media measurement. There are tools like us out there that can help you get a keen sense for it.

Another trend we’re observing is that media mixes are getting more comprehensive. It used to be that you’d see almost all of the D2C budget concentrated on Facebook and Google. Now we’re seeing D2C portfolios that are quite a bit more comprehensive, pushing into prospecting tactics they weren’t considering before. We’re seeing more podcasts, TV, social influencers, and other social channels.

And with a more comprehensive media mix, you need even more comprehensive measurement behind it — reporting as well. Brands will be looking to get all of their media performance unified in one cross-channel reporting view.

 

 


Why Marketing Attribution Hasn’t Lived Up to the Hype—Yet

Original Publisher
ClickZ

By Trevor Testwuide, co-founder and CEO at Measured

Cross-channel marketing measurement has been through a tremendous evolution over the last 10-plus years. Today more than ever, cross-channel measurement is evolving and adapting to modern data collection realities.

In the early and mid-2000s, marketing data and analytics for digital media was just a curiosity that no one knew how to capitalize on. The first ones to figure out how to capture and measure media assumed a significant competitive advantage for a while. Over the last decade, marketing data analysis has become table stakes for any brand looking to grow or evolve.

A decade ago, digital advertising was mainly limited to display and search, and those programs were tiny compared to offline marketing.

Today, the average digital marketing mix consists of a comprehensive social strategy, online video, programmatic display and search. These are complemented by a mix of non-addressable and offline tactics such as podcast, TV and direct mail to make for a broad paid media portfolio.

While the opportunities for a brand to reach and stimulate its audience have evolved, marketing measurement has not kept pace.

The media opportunities to target a person with the right message at the right time and place on the right device have exploded. Along with this, the data sets and opportunities for analysis have become overwhelming for most.

For as much change and progress as there has been, marketers still struggle to execute data-driven decisions that drive growth in revenue and profits for their businesses. Advances in data and measurement and the evolving rules of tracking and identity management have led to new challenges. Marketers still must seek a competitive advantage, particularly as more tools and data have flooded the market.

With all of these changes, it’s clear that some of the progress we thought would change the world has fallen flat, but the progress continues with some new promising possibilities.

Multi-touch attribution has fallen short

This one hits close to home because I spent six very meaningful years of my career living in the MTA category. The people I worked with were talented and had the best intentions.

We thought attribution via terabytes of user-level data, billions of cookies and millions of converting and non-converting sequences would unlock the secrets of what made customers buy something or not. MTA was supposed to spark a revolution in marketing decision-making and show us which ads to buy to ensure a sale.

It turns out that user-level data is messy and the politics of ad tech played a major role. Instead of a clear path through a dark forest, we were trying to follow a trail of breadcrumbs obscured by dirt and fallen leaves.

We learned that it is essentially impossible to track users accurately across multiple channels and devices over a reasonable time.

Today, walled gardens such as Google and Facebook make the idealistic goal we desired an impossible proposition.

The biggest non-search recipients of digital budgets – Facebook, Instagram, YouTube and Google Display Network – represent more than 80% of spending, but their impressions can’t be tracked and mapped to one another at the user level.

All that is left is paid search, and you don’t need MTA to optimize search.

Long before Cambridge Analytica and GDPR, MTA was dead-on-arrival. Now GDPR and other privacy laws are making MTA deployments more expensive while the walled gardens shrink the impact MTA can have.

Marketing mix modeling is too much for most

Marketing Mix Modeling, when developed with a deep understanding of the business, can be a valuable strategic tool. Its strength in modeling both media and non-media data sources can make it a powerful tool for the enterprise marketer to inform annual and quarterly strategy planning.

The reality of MMM is that there is quite a bit of art to the science. The modeler is most effective when she has a keen understanding of the nuances of the business, such as competition, macroeconomic influences, momentum and seasonality.

The problem with MMM is that its success depends on several factors, including the size of non-addressable marketing budgets, the number of offline transactions and the availability of large steady-state multi-year historical data sets.

Those requirements mean any MMM benefits are minimal for marketers in rapidly changing markets.

Combine those requirements and limitations with the dependence on third-party consulting services, and you end up with a great tool that is too expensive for companies outside the Fortune 1000.

Incrementality testing is becoming a must-have tool

Every marketing team engages in some flavor of A/B testing with various messaging and tactics, but the quality of testing varies from ad hoc programs to those supported by advanced data science teams.
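
To illustrate the kind of rigor that separates ad hoc split tests from a data-science-grade program, even a basic holdout readout should come with a significance check. The sketch below is a textbook two-proportion z-test on made-up numbers, not any particular vendor’s methodology:

```python
# Textbook two-proportion z-test for a simple holdout lift readout (made-up numbers).
from math import sqrt
from statistics import NormalDist

test_users, test_conversions = 200_000, 2_600
control_users, control_conversions = 50_000, 580

p_test = test_conversions / test_users
p_control = control_conversions / control_users
p_pooled = (test_conversions + control_conversions) / (test_users + control_users)

standard_error = sqrt(p_pooled * (1 - p_pooled) * (1 / test_users + 1 / control_users))
z = (p_test - p_control) / standard_error
p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided p-value

print(f"Observed lift: {p_test / p_control - 1:+.1%}")
print(f"z = {z:.2f}, p = {p_value:.4f}")
```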

Uber and Netflix have spent millions building experimentation capabilities in-house. They have entire teams executing tests to inform the incrementality of various marketing programs.

But applying incrementality testing to cross-channel media decisioning is a cross-functional challenge fraught with complexity for any team.

Even for the best and brightest in-house teams, it would be a steep challenge to assemble the required expertise across marketing analytics, adtech, data science, data engineering and product into an orchestrated practice to solve for cross-channel incrementality measurement.

Meanwhile, marketers at smaller organizations leverage vendor-provided testing tools to compare creative messaging or seek answers on media lift. The simple truth is that powerful, advanced incrementality measurement capabilities are available, and deploying an in-house practice to seek parity is expensive, time-consuming and high-risk.

Ad hoc split-testing won’t lead to a meaningful competitive advantage or a winning customer acquisition strategy. Marketers need an always-on best-in-class design of experiments to inform their most meaningful high-value decisions for growth.

Over the last two decades, marketers have seen a major evolution in performance-driven media. In parallel, the industry has understandably been frustrated with the limitations of measurement, especially given the abundance of big data.

Incrementality measurement done right is the path forward for marketers to inform trusted high-value media investment decisions.

 
